Practical DOE-Based Approach for Predicting Impact of Stress and Deformation in SLM Overhanging Features During Product Design
Introduction
One of the major challenges with implementing additive manufacturing (AM) technologies lies within production management [1]; the processes are not well understood by either product designers or production managers, which limits their usefulness and applicability in practice. Many of these processes exist, but managers are often reluctant to adopt them for fear of unknown complications or problems. Additive technologies have much to offer the production world beyond prototyping, with new possibilities in mass customization, design freedom, simplified supply chains, and elimination of specialized tooling [2]. A first step in this direction would be to equip engineering designers and production engineers with an appropriate tool to examine and judge the value of a manufacturing process within the scope of their work, a tool that can be applied easily and consistently across working groups and that has a universal meaning to stakeholders. The present study presents such a tool: a simple and powerful design tool to examine very complex problems relating to the successful and efficient application of AM technologies using a design-of-experiments approach [3].
One of the most useful of the AM processes is the selective laser melting (SLM) process, a variation of the powder bed fusion family of AM processes, typically used to create full-density metal parts. There are many advantages to employing SLM, including fine detail resolution, good surface finish, customizability of the mechanical properties, and ability to handle materials that traditionally have low manufacturability, such as nickel and titanium alloys [2,4]. Figure 1 shows the basic mechanics of the SLM process [5].
Figure 1: SLM process schematic [5]

However, the SLM process has a serious inherent problem: it is extremely sensitive to the process control and input parameters; if the process is not very carefully controlled, the heat cycling of the fusing laser can introduce severe residual stresses into the material. These residual stresses can initiate part deformation, cracking, and delamination; they can cause significant reduction of fatigue life or, in extreme cases, part self-destruction during or immediately after printing [6,7]. The most common way to deal with this problem in practice is to attach every feature to the build plate using solid and bulky support structures and to perform stress-relieving operations on the entire part before cutting it from the plate using an electric discharge machine or bandsaw [8]. This is a practical solution for simple parts, but the problem becomes far more complex when dealing with the stresses in overhanging features. A number of fixes have been devised to deal with these residual stresses without special post-processing operations or cumbersome, wasteful support structures; some of them attempt to directly control the process parameters while others are efforts to work around the process constraints; a comprehensive discussion of these methods can be found in [3,5] and will not be reproduced here. While some of these solutions have been quite successful in particular cases, what is sorely needed is a general design-for-manufacturability (DFM) theory for SLM [9]. When designing parts that will be manufactured using any AM process (and especially so in very sensitive processes such as SLM), it is important to incorporate DFM principles whenever possible. DFM provides tools both for mitigating potential manufacturing problems arising from the inherent weaknesses of a process and for providing insight into extra design benefits that may be available when using a particular process [5,9,10,11].
Motivation and Approach
Part of the design of any experiment is the specification of a measurement and analysis technique, usually in the form of an analysis of raw data taken during experimental runs. However, SLM is extremely difficult to experiment with in situ, because the chamber must remain oxygen-free and strictly undisturbed to work correctly [4]. Even with the use of thermal cameras, it is next to impossible to capture reliable data, not subject to interpretation, on the instantaneous stresses and deformations as they form in the part during printing. It is a simple matter to study the damage and deformation after the part is complete, but that does not give the instantaneous in-process perspective needed to understand SLM from the DFM perspective. Improvements are being made in this area [12] but a practical real-time solution is not widely available. Therefore, in order to capture a snapshot of the process in action in the most practical way, without manufacturing thousands of dollars' worth of test parts, a thermo-mechanical finite element analysis (FEA) model was built to simulate the SLM process in action at a particular point in time during the printing process. The energy input for the model is the laser power, and the outputs range from thermal results to stress and deformation results. FEA is a common method for studying AM processes in-run [6,13].
Heat Transfer Model Development
The FEA model was generated using the SolidEdge ST6 FEMAP FEA tool. The required inputs for an FEA model are the loads, the initial conditions, and the boundary conditions. The loading for the present experiment was the laser input. The proper initial conditions were calculated using a simple thermal model based on the first principles of heat transfer, as seen in Figure 2. Boundary conditions were applied to the model to simulate the effects of the powder support or a support structure. Powder-only support was modeled as a pressure equal to the force of gravity, where the overhang was free to move up, but not down into the powder bed. The support material was simulated using fully fixed constraints on the bottom surface of the overhang. The study by Patterson et al. [3] uses a similar thermal model to simulate heat transfer in overhangs; the setup of this theoretical model and its equations can be found in Sections 3-4 of that work.
Model Validation
Once the FEA model was set up, an in-depth verification and validation study was completed. A series of test scenarios were created, borrowing geometry, process settings, and results from published studies [6,13,14,15,16], from which the geometries, boundary conditions, inputs, and final outcomes were known. In most cases the results from the FEA model were similar to the studies, but some adjustments to the values of the boundary conditions (such as initial temperatures) were prompted by a few of the runs. Once these adjustments were made, the results from the model and the test studies were very similar, so the FEA model was considered to be valid.
Experimental Design: Factors
Five factors or main effects were chosen for study after an extensive literature review [3,5] to understand the experimental designs used in previous studies. The stated goal of the experiment is to provide a simple and easy-to-use model for design engineers and production managers, and this outlook was essential in choosing which factors to examine. Many others could have been chosen, including layer thickness, but the five that appear to best represent the parameters set or evaluated by designers and managers were examined. The choice will be made by both engineers and management; the approach outlined in this paper facilitates this well, as it allows analysis of any important variables relative to each other and is not dependent on specific input variables. The variables and their levels for this study were (summary in Table 1):
1. Laser power: modeled as a constant heat flux input into the FEA model on a square element equal in area to the laser spot.
2. Laser spot diameter: modeled as an element on the surface of the material, with the ideal range of areas taken from the review completed for the factors (as discussed previously).
3. Laser scan speed: the heat flux is the load on the model, so the rate of input must be known.
4. Thickness of the overhanging feature: the material thickness strongly affects the thermal gradient and the mechanical behavior of the material during processing.
5. Support structure: support materials can often cause problems during design and processing and require extra resources to remove.
Experimental Design: Material Choice
It was decided to set up the experiment for two runs, each modeling the use of a different material. Run 1 was set up to use Ti-6Al-4V and run 2 was set up to use 316L stainless steel, two commonly used materials in SLM. It was decided to keep the materials separate and not include them as an extra factor in the experimental design; this allowed the final results of the experiment to be more easily validated by comparison with the theory and with single-material published studies on SLM. In SLM, the material properties cannot be assumed to follow the standard material datasheets, due to the heat cycling and strain hardening experienced by the material; for the present study, a survey of SLM-related characterization studies [6,16,17,18,19,20,21,22,23,24,25] was conducted to collect experimentally verified properties. The results are shown in Table 2; the shown properties are averages and were used as inputs into the FEA model. It should be noted that appropriate settings were pre-selected within the FEA software to account for heat of fusion and the gradients in material properties as the material is heated.
Experimental Design: Responses
Responses were chosen after group discussions about important part design parameters:
1. The in-process von Mises stress near the laser melt spot but still in a non-molten area.
2. The von Mises stress 5 mm away from the melt pool, into the recently scanned area, in order to read how the heat gradient affects the stress throughout the overhang.
3. The Z-direction deformation of the material immediately surrounding the melt pool, in order to deduce whether enough stress was produced to plastically deform the part and introduce residual stresses.
4. The Z-direction deformation at the furthest overhanging point on the feature from the melt pool, in order to best understand the influence of any support material.
Experimental Model
See Figures 2 and 3 for details of the geometry under study. The time-dependent nature of the responses is converted to non-time-dependent by using the aforementioned FEA model to measure all of the responses at the same point in time in the printing. While this does not give a simulation of the SLM process as a function of time, it does allow all of the input factors and their interactions to be studied relative to the instantaneous responses and to each other; this allows the impact of each factor and each interaction on the responses to be captured, accomplishing the goals of this study. The most important assumption for this type of model is that the heat transfer and the material physics behave approximately the same in each layer of the part regardless of geometry, with the caveat that enough solid material exists beneath the layer in progress to effectively transfer the heat like a fin; this has been experimentally determined to be a minimum of 0.5-0.9 mm [6,15], or approximately 10-15 layers.
Level of Significance
For the present study, a model adequacy level of significance of α = 0.05 was chosen to verify the ANOVA [26]. A much more subjective decision is the selection of the level of significance for the ANOVA data, as this choice is based on the circumstances and goals of the experiment. The main purpose of the present study is to develop an engineering design tool, so it is imperative that the experiment provides the widest possible view of the process to designers. Therefore, it is vital to reduce the possibility of committing Type II errors (failing to detect a significant factor); the probability of avoiding Type II errors is known as the power of the experiment. Increasing the power also increases the probability of making a Type I error (falsely declaring an insignificant factor to be significant), but this was judged acceptable for this experiment; any falsely accepted insignificant factors will remain insignificant and will not influence the outcome of the process, whereas missing a significant factor would do so in a major way. Therefore, to reduce the possibility of missing significant factors to the lowest feasible level, it was decided to use a level of significance of α = 0.25 for the ANOVA tests.
Experimental Runs
Following the basic set of assumptions and approximations for such an SLM model (as described in [3]), the numerical experiments were run. Loading the CAD model into the FEA software and applying the initial and boundary conditions were the first steps for each run. The experiment was run a total of 32 times each for the Ti-6Al-4V and 316L stainless steel materials, with the four responses collected for each run, for a total output of 256 data points. The planning, setup, execution, and data collection required approximately 80 hours to complete and were conducted over a period of three weeks. Two screenshots of the experiment during the execution phase of one of the runs are shown in Figure 4. The raw values of the responses from the experiment are shown in Figure 5.
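As a rough illustration of the experimental bookkeeping (a sketch under stated assumptions, not the authors' code), the following Python snippet builds the 32-run, two-level, five-factor full factorial design implied above; the low/high values assigned to each factor are placeholders, since the actual levels belong to Table 1, which is not reproduced here.

```python
# Illustrative sketch (not the authors' code): building the 32-run, 2-level,
# 5-factor full factorial design described in the paper. The low/high values
# below are placeholders -- the actual levels are given in the paper's Table 1.
from itertools import product

factors = {
    "laser_power_W":        (100.0, 200.0),   # placeholder levels
    "spot_diameter_um":     (50.0, 100.0),    # placeholder levels
    "scan_speed_mm_s":      (500.0, 1000.0),  # placeholder levels
    "feature_thickness_mm": (0.5, 2.0),       # placeholder levels
    "support":              (0, 1),           # 0 = powder only, 1 = solid support
}

# Cartesian product of the two levels of each factor -> 2**5 = 32 runs.
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
assert len(design) == 32

responses = ("stress_near_pool", "stress_5mm", "z_def_near_pool", "z_def_far_edge")

for run_id, settings in enumerate(design, start=1):
    # Each run would drive one FEA simulation; here we only print the settings.
    # 32 runs x 4 responses x 2 materials = 256 data points, as in the paper.
    print(run_id, settings, "-> record", responses)
```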
Results and Analysis
Once the experiment was complete, the data (Figure 5) was checked for severe skewing, outliers, and other indications that errors were made in the conduct and collection of the experiment. None were found, so the recorded data was divided into eight groups: one for each response for each material under study. Each of the eight subsets was then checked and subjected to a residual analysis using Minitab to verify the model adequacy and perform a preliminary check for significant factors and interactions. As shown, the model adequacy was immediately established in seven of the eight cases, while a Box-Cox transform was needed to show the adequacy of the model for the eighth case. A more detailed presentation of this technique and the procedure for model adequacy testing can be found in [3,26]. Figure 6 shows the results of all eight of the ANOVAs that were conducted during the present study; the factors highlighted were the ones that had a significant influence on the particular response under study.
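A minimal sketch of this kind of analysis pipeline, assuming a full factorial data set with hypothetical column names and synthetic responses (this is not the authors' Minitab workflow): it fits an ANOVA with two-factor interactions, optionally applies a Box-Cox transform when the model looks inadequate, and screens terms at the α = 0.25 level used in this study.

```python
# Hedged sketch of the analysis step: full-factorial ANOVA with an optional
# Box-Cox transform, using alpha = 0.25 for factor screening as in the paper.
# The data frame layout, column names, and data are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
levels = [-1, 1]
# Synthetic stand-in for one response subset (32 runs of a 2^5 design).
grid = pd.DataFrame(
    [(a, b, c, d, e) for a in levels for b in levels for c in levels
     for d in levels for e in levels],
    columns=["power", "spot", "speed", "thickness", "support"],
)
grid["stress"] = (50 + 20 * grid["thickness"] + 5 * grid["power"]
                  + 4 * grid["thickness"] * grid["support"]
                  + rng.normal(0, 3, len(grid)))

def screen(df, response, alpha=0.25, boxcox=False):
    y = df[response]
    if boxcox:                     # apply Box-Cox if residuals look non-normal
        y, _ = stats.boxcox(y - y.min() + 1.0)
    df = df.assign(resp=y)
    # Main effects plus all two-factor interactions.
    model = smf.ols("resp ~ (power + spot + speed + thickness + support)**2",
                    data=df).fit()
    anova = sm.stats.anova_lm(model, typ=2)
    return anova[anova["PR(>F)"] < alpha]   # terms significant at the chosen alpha

print(screen(grid, "stress"))
```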
Discussion and Conclusions
In the present study, five SLM process input factors were studied to determine their influence upon each of four responses and two material cases for a 90-degree overhanging part feature. With the exception of the feature thickness factor, it was known at the beginning of the experiment, from reviewed studies [6,15,27,28], that the chosen factors were likely to have significant influence on the generic SLM process. The feature thickness was added because it was an obvious design consideration and had not been explored in any of the many studies reviewed by the authors.
The raw output values of the stress and deformation were reasonable and similar to those found in previous studies; four of the responses had equivalent conditions and responses in other studies, including material choice, which aided in evaluation of the findings. These are shown in Figure 7. In all shown cases, the red bars indicate the range of values found in the indicated previous studies [6,27] and the blue markers show the data taken in the present work. The use of supports in the present experiment has a clear influence, which is most obvious in Runs 17-32 in the comparisons with Kruth et al. [27]. Note that the overhang geometry contains severe stress concentrations, so the stress readings in the overhang will be significantly higher than they would be in a more generic geometry. Therefore, in order to compare the data with the studies in question, the raw values of the data were transformed by dividing them by the stress concentration factor, taken to be 2.5, an average textbook value for this kind of geometry. This way, the influence of the manufacturing process can be studied separately from the geometric influences and compared fairly with experimental data. This transform explains the difference between the stress data shown in Figure 5 and that in Figure 7. It should be re-stated that the goal of this experiment is not to collect technical or material process data but to provide a new perspective for designers and production managers to judge and evaluate the value of SLM within their production system. The purpose of analyzing technical data here is to provide further validation of the model and its relationship information. In practice, the model would be used primarily to collect decision data, in the form of factors, interactions, and their p-values for comparison.
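The normalization step can be illustrated in a few lines; the stress values and literature range below are made up, and only the stress concentration factor K_t = 2.5 comes from the text.

```python
# Illustration of the stress-concentration normalization used for Figure 7.
# K_t = 2.5 is the textbook-average value quoted in the paper; the raw stress
# values and literature ranges below are made-up placeholders.
KT = 2.5

raw_overhang_stress_mpa = [310.0, 275.0, 402.0]   # hypothetical FEA readings
literature_range_mpa = (100.0, 180.0)             # hypothetical published range

for raw in raw_overhang_stress_mpa:
    norm = raw / KT
    low, high = literature_range_mpa
    status = "within" if low <= norm <= high else "outside"
    print(f"raw {raw:6.1f} MPa -> normalized {norm:6.1f} MPa ({status} literature range)")
```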
The following discussion addresses the experimental findings from this perspective and not from that of a specialized technical expert. There were five major findings from the present experiment. The first was that the feature thickness has a major influence on all of the responses, something that had not been tested previously in other studies; the dominantly significant factor for an overhanging feature was its thickness. The second and most surprising of these findings was that the laser power had such a small influence upon the responses, having an influence only on the stress responses directly around the melt pool.
The third important finding in this study was the influence of the factor interactions. One of the major advantages of using full factorial experiments to conduct this type of study is the ability to study the interactions between the factors, as well as the factors themselves. It is very clear from the findings of the present study that there are many significant interactions for each response; historically, this is not often considered in SLM studies, depriving a typical study of an extra dimension of view concerning SLM behavior. Using the ANOVA to determine significance weighed each interaction equally with each factor, demonstrating clearly that some of the interactions between factors can have much influence on the experiment and that an interaction can be significant even if its constituents are not. The determination of all the important design inputs is important to the creation of a generalized DFM methodology for SLM, and it is obvious that the interactions between factors, even insignificant factors, are worthy of much further study; in the present study, there were an average of 1.73 significant interactions per significant factor for Ti-6Al-4V and 3.8 per significant factor for 316L stainless steel.
The fourth major finding from this study was the large influence that specific material properties, particularly the yield and tensile strength, have on the stress and deformation responses of the overhanging feature. The data for the stainless steel shows a much larger difference between the melt pool stress and the central feature stress than what is seen for the titanium; referring to Table 2, this difference is inversely proportional to the differences in yield and tensile strength for the two materials. A similar effect can be observed in the unsupported runs of the deformation responses (Figure 5). This is a very important consideration when designing for SLM, as the final part properties can vary significantly between processes and machines; the best example is the material properties collected in Table 2: in this case, the titanium alloy is actually more ductile on average than the stainless steel, while stainless steel is one of the most ductile engineering materials when bulk-formed. The effect of the material choice on the likelihood of having significant factor interactions is also shown in the results of Table 3: for the titanium alloy, the probability of a main effect being significant is 0.75, while the probability of an interaction being significant is 0.25; for the stainless steel, the probabilities are 0.50 and 0.37, respectively.
It is obvious that the information gathered and presented by this model is valuable to design engineers and production managers using or considering the use of SLM; it can be used for many other problems besides the design and production of overhanging features. This information would be very difficult and costly to extract out of a large series of specialized studies and to translate into a form that is useful in a business-industrial setting. Users of this technique will be able to quickly and easily examine or predict the outcomes of the SLM process in a way that is easily communicable with non-specialists. Further development of the method is needed, including comprehensive (and very expensive) experimental verification.
Figure 4: Example (L) deformation and (R) stress responses during experiment
Figure 6: ANOVA results and interactions
Table 1: Factors and levels
Table 2: Material properties used in model
"Materials Science"
] |
Production of keV Sterile Neutrinos in Supernovae: New Constraints and Gamma Ray Observables
We study the production of sterile neutrinos in supernovae, focusing in particular on the keV--MeV mass range, which is the most interesting range if sterile neutrinos are to account for the dark matter in the Universe. Focusing on the simplest scenario in which the sterile neutrino mixes only with the muon or tau neutrino, we argue that the production of keV--MeV sterile neutrinos can be strongly enhanced by a Mikheyev--Smirnov--Wolfenstein (MSW) resonance, so that a substantial flux is expected to emerge from a supernova, even if vacuum mixing angles between active and sterile neutrinos are tiny. Using energetics arguments, this yields limits on the sterile neutrino parameter space that reach down to mixing angles of the order of $\sin^2 2\theta \lesssim 10^{-14}$ and are up to an order of magnitude stronger than those from X-ray observations. While supernova limits suffer from larger systematic uncertainties than X-ray limits, they apply also to scenarios in which sterile neutrinos are not abundantly produced in the early Universe. We also compute the flux of $\mathcal{O}(\text{MeV})$ photons expected from the decay of sterile neutrinos produced in supernovae, but find that it is beyond current observational reach even for a nearby supernova.
One of the most auspicious candidate particles for the dark matter in the Universe is the sterile neutrino: an electrically neutral fermion with a mass on the order of keV-MeV that couples to ordinary matter only through a tiny mass mixing with Standard Model (SM) neutrinos [1,2]. In the simplest sterile neutrino scenarios, it is assumed that the abundance of sterile neutrinos ν_s is zero at the end of inflation, and they are later produced through their mixing with SM (active) neutrinos ν_a [3,4] (see also [5]). Experimental constraints on the mass of keV sterile neutrinos and their mixing with SM neutrinos arise from the measured DM relic density [6], from Pauli blocking (the Tremaine-Gunn bound) [7,8], from Lyman-α forests [9], and from X-ray searches for radiative decays of sterile neutrinos ν_s → ν_a + γ [10][11][12][13]. From the combination of these constraints, one concludes that the ν_s mass should be m_s ≳ 4 keV, and its mixing angle with the SM neutrinos should be sin²2θ ≲ 10⁻⁶ in the simple two-flavor approximation.
In this letter, we add a new limit to this inventory of constraints by considering sterile neutrino production in core-collapse supernovae (SN) [14][15][16][17][18][19][20][21][22][23]. A supernova develops when a star of ≳ 9 M_⊙ runs out of nuclear fuel. The thermal pressure that normally counteracts gravity disappears, and the core of the star collapses into a neutron star. The temperature in the nascent neutron star is of order tens of MeV, so that a thermal population of (active) neutrinos is produced. These ν_a can oscillate into ν_s, which escape the exploding star unhindered and may carry away significant amounts of energy [16,20,24]. Constraints on anomalous energy loss from SN 1987A will thus allow us to constrain the sterile neutrino parameters. Since ν_a → ν_s conversion can be resonant thanks to the ultrahigh matter density ∼ 10¹⁴ g/cm³ in the SN core, these limits will be very strong. The flux of sterile neutrinos with O(MeV) energies escaping from a supernova leads to a flux of secondary gamma rays when they decay, and we study this flux as well.
Our main results are summarized in fig. 1. The solid orange exclusion region shows that, in the mass range m_s ∼ 2-80 keV, limits from energy loss in supernovae surpass previous limits by up to two orders of magnitude in sin²2θ. Note that, unlike the other limits shown in fig. 1, our bounds would still hold if sterile neutrinos are not part of the DM in the Universe. In the following, we discuss in detail how we have obtained our new limits and sensitivity estimates.
FIG. 1. Supernova bounds on the sterile neutrino mass m_s and mixing sin²2θ (orange, this work) compared to previous constraints [12,13,25] from the Tremaine-Gunn bound [7,8] (yellow), from X-ray searches in the Andromeda Galaxy M31 [10] (green), from the Fermi GBM all-sky analysis [26] (light blue), and from the galactic [25] (dark blue) and extragalactic [11][12][13] (gray) diffuse X-ray background. For the latter constraint, we show also how it is modified if sterile neutrinos account for only a fraction of the DM in the Universe. It is understood that the other X-ray limits would shift in a similar way if Ωh² < 0.11. We also show in pink the region preferred by the properties of the core of the Fornax dwarf galaxy [27], and as a yellow star the parameter point that could explain the 3.5 keV X-ray line hinted at by refs. [13,28]. Black curves illustrate the parameter regions in which the Dodelson-Widrow mechanism [3] (L = 0) or the Shi-Fuller mechanism [4] with lepton asymmetry L > 0 would yield the correct DM relic density Ωh² = 0.11.

Sterile neutrino production in supernovae. We consider a simplified two-flavor oscillation picture with mixing between a sterile neutrino ν_s and one species of active neutrinos ν_x, x = µ or τ. We do not consider mixing between ν_s and electron neutrinos to avoid complications arising from charged-current interactions between ν_e and electrons/positrons. The flavor basis Hamiltonian describing neutrino propagation in matter includes the vacuum oscillation term and a Mikheyev-Smirnov-Wolfenstein (MSW) potential V_x that describes coherent forward scattering of ν_x on the background matter via Z exchange [29][30][31]. Here, θ is the ν_s-ν_x mixing angle in vacuum and Δm² ≃ m_s² is the mass squared difference between the two mass eigenstates [32]. The MSW potential V_x depends on the Fermi constant G_F and on N_n, N_νe, and N_ν̄e, the neutron, electron neutrino, and electron antineutrino number densities; the + (−) sign in the potential corresponds to the potential experienced by neutrinos (antineutrinos). Since |N_νe − N_ν̄e| ≪ N_n/2, the potential is positive for antineutrinos and negative for neutrinos. Note that we neglect terms that would arise from differences between the ν_x and ν̄_x number densities because such differences are expected to be small in the parameter region where our limits will lie [24]. We have checked that even adding an MSW potential corresponding to a maximal asymmetry does not alter our results significantly. We also neglect the momentum and angular dependence of neutrino self-interactions [33,34] and instead restrict ourselves to the simplified formalism used in [16,17,20]. Since N_n, N_νe, and N_ν̄e are extremely large in the supernova core and gradually decrease with radius, most antineutrinos will encounter an MSW resonance on their way out of the exploding star. At the resonance, the condition of eq. (2) is fulfilled and the effective mixing angle θ_m in matter becomes maximal [35]. We consider two physically different mechanisms for sterile neutrino production in supernovae: (i) Adiabatic flavor conversion at an MSW resonance.
A ν_x of energy E streaming away from the supernova core can convert to ν_s when the matter density, and thus the MSW potential V_x, has reached the value satisfying the resonance condition, eq. (2). Each point (t, R) in time and space corresponds to a specific value of V_x; therefore, neutrinos of a specific energy E_res(t, R) are resonantly converted at this point. We take the local neutrino luminosities and spectra and the local matter densities from the simulation of an 8.8 M_⊙ supernova by the Garching group [36] (see also [37][38][39][40]).
Hard scattering processes must be rare to give neutrinos sufficient time to convert adiabatically. We therefore require the spatial width of the MSW resonance region to be smaller than the mean free path λ_mfp. The number dN^MSW_s/dE of ν_s in an energy interval [E, E + dE] produced by adiabatic flavor conversion at the MSW resonance is given in [16]. Here, the time integral runs from the time of core bounce (t = 0) until ∼ 9 sec later, and R_res(t', E) is the radius at which the resonance energy is E at time t'. The quantity n_ν(t', R_res) is the active neutrino number density at time t' and radius R_res, f_x(E) is the energy distribution of active neutrinos, Ē is the average neutrino energy, P_res(E) is the flavor conversion probability at the resonance, and the Heaviside Θ function implements the condition that neutrinos must have enough time between collisions to convert adiabatically. It is crucial that neutrinos do not encounter more than one MSW resonance on their way out of the supernova. (This would be different if we considered mixing between ν_s and ν_e instead of ν_µ,τ [18,19].) Note that we make the strongly simplifying assumption of radial symmetry, and we also neglect the depletion of active neutrinos by conversion into ν_s. Moreover, we do not need to consider ν_x streaming inwards: they would convert to ν_s at the resonance, then travel through the core, and convert back to ν_x on its far side. We parameterize f_x(E) following [41], with a normalization that defines Ē. The "pinching parameter" α describes the degree to which f_x(E) differs from a Maxwell-Boltzmann distribution.

FIG. 2. Energy spectra of the produced sterile neutrinos compared to the active neutrino spectrum, computed with the simulation data of [36]. Contributions from adiabatic flavor conversion and collisional production are shown separately. The ν_s spectrum extends to much higher energies than the ν_x spectrum because flavor conversion occurs in a region where temperatures are much higher than at the last scattering surface of ν_x.
The ν_x → ν_s conversion probability at the resonance is given by the Landau-Zener formula [16,17,35], with the oscillation length at the resonance L^res_osc ≃ 2π/(V_x sin 2θ). We find that adiabatic flavor conversion occurs mostly at radii ∼ 10-15 km, still inside the neutrino sphere at ∼ 20-30 km. In fig. 2, we compare dN^MSW_s/dE to the spectrum of active neutrinos. We see that the ν_s spectrum extends to higher energies because most of the flavor conversion happens in a high-temperature region from which ν_s can stream out freely, while ν_x are still trapped.
(ii) Collisional production. Sterile neutrinos can also be produced from a mixed ν_x-ν_s state in hard scatterings on nucleons, electrons, and positrons. We take the interaction rate Γ_x to be approximately equal to the dominant scattering rate on neutrons, and the cross section for this process is computed following [42,43]. The physical picture for collisional ν_s production is as follows: starting with an ensemble of only ν_x at t = 0, each of them soon acquires a small ν_s admixture ∝ sin²2θ_m by oscillation. A collision causes the collapse of the resulting mixed state into either a pure ν_x or a pure ν_s. Afterwards, oscillations start anew, and ν_x are quickly replenished. After many collisions, the ν_s abundance is proportional to sin²2θ_m and Γ_x.
Up to a factor of 1/2, this intuitive picture leads to the correct quantum mechanical Boltzmann equation, eq. (7) [1,3,15,44,45]. Here, dn_s/dE and dn_x/dE are the energy spectra of sterile and active neutrinos, respectively. At any given spacetime point, dn_x(t, R, E)/dE is related to the distribution function f_x(E) introduced above. The averaged oscillation probability is P_{νx→νs} = ½ sin²2θ_m [1], where the mixing angle in matter contains an extra term (Γ_x E/m_s²)² in the denominator compared to the usual expression [35]; this accounts for the suppression of ν_s production when λ_mfp is much smaller than the oscillation length, so that oscillations do not have time to develop between collisions (quantum Zeno effect).
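For orientation, the sketch below implements the standard quantum-Zeno-damped in-medium mixing angle commonly used in this kind of calculation (cf. the simplified formalism of refs. [1,16,17,20] cited above); the exact form and sign conventions of the paper's own equation may differ, and all numerical inputs are placeholders.

```python
# Hedged sketch (not the paper's exact equation): a standard in-medium mixing
# angle with collisional (quantum Zeno) damping used in sterile-neutrino
# production estimates, plus the averaged conversion probability ~ 1/2 sin^2(2theta_m).
# All numerical inputs below are illustrative placeholders in arbitrary consistent units.
import math

def sin2_2theta_matter(E, m_s2, sin2_2theta_vac, V_x, Gamma_x):
    """Effective sin^2(2theta) in matter for nu_x <-> nu_s mixing.

    E               : neutrino energy
    m_s2            : m_s^2 ~ Delta m^2
    sin2_2theta_vac : vacuum sin^2(2theta)
    V_x             : MSW potential (negative for neutrinos, positive for antineutrinos)
    Gamma_x         : collision rate of nu_x
    """
    cos2 = math.sqrt(max(0.0, 1.0 - sin2_2theta_vac))
    damping = (Gamma_x * E / m_s2) ** 2            # the "extra term" in the denominator
    detuning = (cos2 - 2.0 * E * V_x / m_s2) ** 2  # usual matter-effect term
    return sin2_2theta_vac / (sin2_2theta_vac + damping + detuning)

def averaged_conversion_probability(E, m_s2, sin2_2theta_vac, V_x, Gamma_x):
    return 0.5 * sin2_2theta_matter(E, m_s2, sin2_2theta_vac, V_x, Gamma_x)

# Placeholder call: far from resonance the detuning term dominates and the
# conversion probability is strongly suppressed.
print(averaged_conversion_probability(E=1.0, m_s2=1.0e-6, sin2_2theta_vac=1e-10,
                                      V_x=-5e-7, Gamma_x=1e-9))
```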
Integrating eq. (7) over time and radius leads to the energy spectrum of sterile neutrinos produced collisionally. We again evaluate dN_s/dE numerically using the data from [36]. The resulting ν_s spectra, shown in fig. 2, can be harder than the ones from adiabatic production because the collisional production rate depends on Γ_x, which grows proportional to E².

Constraints from supernova luminosity. We can constrain the energy output in sterile neutrinos from SN 1987A by comparing the observed energy output in active neutrinos of E_a = few × 10⁵³ ergs [46,47] to the gravitational energy released in the collapse of a stellar core at the Chandrasekhar mass, which is also on the order of E_tot = few × 10⁵³ ergs [36,48]. If a substantial fraction of E_tot was carried away by sterile neutrinos, the observed E_a could not be explained [49]. We therefore consider the ratio R(sin²2θ, m_s) ≡ E_s(sin²2θ, m_s)/E_tot. We assume that R depends only weakly on the mass and type of the progenitor star, so that the values obtained for the supernova simulated in [36] are a good proxy for SN 1987A. Our computation of ν_s production is only self-consistent for R ≪ 1 because we neglect depletion of active neutrinos. We nevertheless extrapolate it to larger values and set a limit by requiring R < 1. The justification for this is that the associated uncertainty in our results is on the same order as the uncertainty of the predicted and measured energy outputs from SN 1987A. We have verified this assumption by rescaling, in each time step of our calculation, the active neutrino number densities to account for the energy carried away by sterile neutrinos up to this time. Doing so modifies our limit by 30%.

FIG. 3. Constraints on the sterile neutrino parameter space from energy loss in supernovae, considering (a) adiabatic ν_x → ν_s conversion only or (b) collisional ν_s production only. The solid and dotted orange curves correspond to different assumptions on the maximum allowed energy loss, expressed here in terms of the ratio R of the energy output in sterile neutrinos and the total energy output. For comparison, we also show in green the parameters for which the Dodelson-Widrow mechanism [3] predicts the correct relic abundance of ν_s.
Our limits on the ν_s parameter space are shown in fig. 3 for adiabatically and collisionally produced ν_s separately, and in fig. 1 for the combination of both production mechanisms (requiring R < 1). Thanks to MSW enhancement, our constraints reach down to sin²2θ ∼ 10⁻¹⁴ at m_s ∼ 10-100 keV, surpassing all other limits in this mass range.
The shape of the exclusion regions in fig. 3 can be understood as follows. For adiabatic conversion at small m_s, the oscillation length at the MSW resonance L^res_osc is large, making flavor conversion non-adiabatic according to eq. (6). At large m_s ≳ 100 keV, the resonance condition of eq. (2) cannot be satisfied. At m_s ∼ few × 10 keV, adiabatic flavor conversion is most effective at low sin²2θ ∼ 10⁻¹⁴. For larger sin²2θ, the radial width R_width of the resonance region becomes too large, so that neutrinos scatter before having a chance to convert. Collisional production is likewise most effective when the mixing angle in matter is MSW-enhanced. At m_s ≲ 10 keV, the resonance condition of eq. (2) is fulfilled only in the outer regions of the supernova, where the scattering rate is too low for effective collisional production. At m_s ≳ 100 keV, the resonance condition is never satisfied.
Thus, if ν_s are abundantly produced in a supernova, we expect the explosion to be accompanied by a flux of energetic (O(1-100 MeV)) secondary gamma rays. Photons in this energy range are not normally expected from a supernova or supernova remnant, both of which emit X-rays only at energies ≲ 10 keV [51,52]. The arrival times of the gamma rays from ν_s decay are spread out over a time interval

Δt ≃ 3.6 hrs × (d / 1 kpc) × (m_s / 1 keV)² × (1 MeV / E_γ)²,

where d is the distance to the supernova and E_γ is the gamma ray energy (which is just half the ν_s energy, E_γ = E_ν/2). Sterile neutrinos decaying immediately after their production lead to gamma rays that reach the Earth at the same time as the active neutrino burst. Gamma rays from ν_s decaying only after travelling a distance d are delayed by Δt.
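A short numerical check of this scaling (an illustration, not code from the paper): the kinematic lag of a ν_s of mass m_s and energy E_ν = 2E_γ over a distance d is Δt ≈ (d/c)·m_s²/(2E_ν²), which reproduces the ∼3.6 hr normalization for d = 1 kpc, m_s = 1 keV, and E_γ = 1 MeV.

```python
# Illustrative numerical check of the time-delay scaling for gamma rays from
# sterile neutrino decay: a nu_s of mass m_s and energy E_nu = 2*E_gamma lags a
# photon by dt ~ (d/c) * m_s^2 / (2 E_nu^2) over a distance d (relativistic limit).
KPC_M = 3.0857e19      # meters per kiloparsec
C = 2.9979e8           # speed of light, m/s

def delay_hours(d_kpc, m_s_keV, E_gamma_MeV):
    E_nu_keV = 2.0 * E_gamma_MeV * 1.0e3           # E_nu = 2 E_gamma, in keV
    lag_fraction = 0.5 * (m_s_keV / E_nu_keV) ** 2
    return d_kpc * KPC_M / C * lag_fraction / 3600.0

# Reproduces the ~3.6 hr normalization for d = 1 kpc, m_s = 1 keV, E_gamma = 1 MeV.
print(f"{delay_hours(1.0, 1.0, 1.0):.2f} hours")
```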
The photon flux from ν_s decay, in units of cm⁻² sec⁻¹ MeV⁻¹, depends on the Lorentz boost factor γ = 2E_γ/m_s and on a factor accounting for the ν_s decay probability. Note that the sterile neutrino spectrum dN_νs(E_ν)/dE_ν is evaluated at E_ν = 2E_γ. We plot dφ_γ(E_γ)/dE_γ for two sets of benchmark parameters in fig. 4, and the total gamma ray flux as a function of sin²2θ and m_s is shown in fig. 5. In both figures, we have chosen d = 1 kpc.

FIG. 5. Total gamma-ray flux from ν_s decay as a function of sin²2θ and m_s, compared to the exclusion region from fig. 1 (orange region) and to constraints from X-ray searches [10-13, 25, 26] (green, dark blue, gray, and light blue regions).

As expected from fig. 2, and taking into account E_γ = E_ν/2, the photon spectrum peaks at E ∼ 10-100 MeV. Comparing the expected photon spectrum to the diffuse astrophysical gamma ray background measured by COMPTEL [53][54][55], EGRET [56][57][58] and Fermi-LAT [59], we find that the signal is still several orders of magnitude below the uncertainty on the background. Most likely, a direct observation of the ν_s-induced photon signal is therefore beyond the scope of even the next generation of Compton telescopes [61][62][63] and would require a factor ∼ 10⁵ improvement compared to the projected sensitivity of ComPair [62].

Summary. In conclusion, we have computed the flux of hypothetical keV-MeV sterile neutrinos ν_s from a supernova. We have constrained the ν_s parameter space using energy loss arguments (fig. 1), and we have estimated the gamma ray flux from ν_s produced in a nearby supernova. Directions for future work include a more detailed calculation of the ν_s flux from a supernova, going beyond the two-flavor approximation and with a more detailed treatment of collective effects. We moreover plan to investigate how our results depend on the mass of the progenitor star [64].
"Physics"
] |
Development of the computerized decision-making support system for the prevention and revealing of dangerous zones of flooding
This article presents the results of research carried out on high-mountain lakes of the Tashkent region. A visual inspection of the current condition of the natural dams of the high-mountain lakes of the Tashkent oblast was carried out. Based on this inspection, it can be noted that the overall technical condition of the natural dams of these lakes is safe and does not pose a danger to the territories situated downstream. Modeling of the occurrence of an emergency situation on the high-mountain lakes of the Tashkent oblast in the case of full or partial destruction of the natural dams was carried out. Using an available calculation method for flood waves, the key parameters were determined: the wave height, the propagation speed of the breakout wave at given distances, and the time for the wave to reach those distances. On the basis of this calculation method, zones of possible flooding under full and partial destruction of the dams of the high-mountain lakes were calculated and plotted on cross-sections of the Pskem and Koksu rivers. The results of the calculations were placed on a digital map created on the ArcView 3.2a platform.
Introduction
It should be noted that, at present, the coastal zones of high-mountain rivers are being intensively developed for construction and cultivation of crops, even though these zones are subject to flooding in the case of a breakout of high-mountain lakes and water reservoirs situated upstream. The present research is aimed at the development of a computerized decision-making support system for the prevention and identification of dangerous flood zones, for the purposes of safe construction in the coastal territories of high-mountain river valleys and the adoption of the most effective architectural and planning decisions for settlements. The system is built on GIS technologies, which allow the optimization task of defining safe planned building zones to be solved under various scenarios of breakout wave formation and scales of destruction of both natural and artificial dams situated on upstream water objects, and the most effective architectural and planning decisions for settlements to be chosen. The technique developed by the "Hydrology and Hydraulic Engineering" laboratory of the Institute of Water Problems makes it possible to determine flood zones on the basis of an advanced calculation model, to define zones of safe construction in the coastal zones of high-mountain rivers, and thereby to choose the most effective architectural and planning decisions for settlements. The main purpose of the research is the development of a calculation technique for breakout waves from water objects and of a computerized decision-making support system for the prevention and identification of dangerous flood zones, for the purposes of safe construction in the coastal territories of river valleys. To achieve this aim, the following tasks had to be solved:
• creation of a digital relief model on the basis of cartographic material from topographic maps;
• development and improvement of a calculation technique for the breakout wave formed at the destruction of a partitioning dam at various scales of destruction;
• determination of the current technical and safety condition of the partitioning dams of the high-mountain water objects under consideration;
• carrying out calculations to determine the key parameters of the breakout wave formed as a result of the break of the natural dams of the water objects under consideration;
• determination of the zones of flooding of the downstream territories during the passage of the breakout wave;
• estimation of the degree of danger, the zones of risk, and the possible damage from a possible break of the high-mountain water objects under consideration, proceeding from the identified threats;
• development and creation of the computerized decision-making support system for the prevention and identification of dangerous flood zones, for the purposes of safe construction in the coastal territories of river valleys.
The analysis of the literature devoted to the safety of hydrotechnical structures, in particular dams, shows that failures of these structures can lead to emergency situations over large territories. The probability of failure increases when the service life of a structure exceeds 30-40 years. The analysis of failures that have occurred as a result of dam destruction indicates that dams are, on average, rather reliable; however, for every one thousand dams there is one large failure or failure with heavy consequences (human victims, large material losses, ecological damage, etc.) [1][2][3][4][5][6][7][8][9][10]. Besides this, there exist in nature lakes formed as a result of large volumes of mountain mass blocking rivers. A natural dam such as that of the high-mountain lake Sarez was formed as a result of the displacement of a large mountain mass by a strong earthquake. High-mountain lakes of this type are, as a rule, formed as a result of natural calamities (emergency situations) of natural character: strong earthquakes, landslide processes, and mudflow phenomena. Determining the durability of the natural dams of high-mountain lakes that block river beds presents great difficulties in view of the uncertainty of the structure of the mountain mass forming the dam body. The destruction of such natural dams can cause large-scale damage and the formation of flood waves which, passing downstream, can inundate large territories and lead to human victims. Therefore, one of the current problems today is the calculation and determination of the parameters of flood waves and the definition of flood zones, with the purpose of defining zones of priority evacuation of the population from settlements located in the territories under consideration. The mountain areas of the Tashkent oblast contain about ten dangerous high-mountain lakes. The largest lakes formed by obstruction include Shavorkul, Big Ikhnach, Bottom Ikhnach, and Koksu. It should also be noted that intensive development for construction and cultivation of crops is currently observed in the coastal zones of the high-mountain rivers, which are exposed to flooding by the passage of freshet waters and also by the breaking of high-mountain lakes and water reservoirs situated upstream. The task of the present research was the determination of the parameters of the flood wave formed as a result of the breaking of the natural dams of these lakes and the definition of the flood zones created by the passage of the flood wave, with the purpose of defining zones of priority evacuation of the population from settlements located in the territories under consideration.
Purposes and research tasks
The main purpose of the research carried out is the estimation of the degree of danger, the zones of risk, and the possible damage from a possible break of the high-mountain lakes Shavorkul, Big Ikhnach, Bottom Ikhnach, and Koksu, proceeding from the threats presented in the figure.
Achieving this aim involved the following tasks. On the river Ikhnachsay, which is formed from the Big Ikhnach and Bottom Ikhnach lakes, 3 settlement ranges were planned; on the river Koksu, which is formed from the Koksu lake, 5 settlement ranges were planned. For a tentative estimation of the flood zones and the identification of dangerous flood zones, for the purposes of safe construction in the coastal territories of the river valleys, a method of approximate calculation of the key parameters of flood waves was used.
Determination of the time of arrival of water at the set distances
Using the arrival-time formula of the calculation method, the time of arrival of the flood wave at the set distances was determined. The results of the research show that the maximum breakout wave formed at the full destruction of the natural dams of lakes Shavorkul, Big Ikhnach, and Bottom Ikhnach is 7.5 meters high in the range at the entrance to the Charvak reservoir. The minimum realistically possible arrival time of the flood wave at this range on the river Pskem is 1.058 hours. On the river Koksu, the results show that the maximum breakout wave formed at the full destruction of the natural dam of lake Koksu is 16.6 meters high in the range at the entrance to the Charvak reservoir. The minimum realistically possible arrival time of the flood wave at this range on the river Koksu is 0.3125 hours. It should be noted that all the high-mountain rivers under consideration flow in deep and narrow gorges, and the heights of the flood waves formed at the break of the high-mountain lakes do not represent a particular danger to the localities along these rivers, except for the places where the rivers Pskem and Koksu enter the Charvak reservoir, where settlements are located.
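Since the arrival-time formula itself is not reproduced in this paper, the following sketch is only an illustrative stand-in: it uses the classical Ritter solution for an instantaneous dam break over a dry, frictionless bed (front celerity c = 2·sqrt(g·h0)) to estimate wave-front arrival times at a set of downstream distances. The initial depth and the checkpoint distances are hypothetical; a real assessment would use the institute's calibrated method together with the surveyed channel cross-sections.

```python
# Illustrative stand-in (not the authors' method): arrival times of a dam-break
# wave front at set downstream distances, using the classical Ritter solution
# for an instantaneous break over a dry, frictionless bed. Real calculations
# must account for channel geometry, friction, and partial dam failure.
import math

G = 9.81  # gravitational acceleration, m/s^2

def ritter_front_speed(h0_m):
    """Front celerity of the ideal dam-break wave for initial depth h0 (m)."""
    return 2.0 * math.sqrt(G * h0_m)

def arrival_times(h0_m, distances_km):
    """Arrival time (hours) of the wave front at each downstream distance."""
    c = ritter_front_speed(h0_m)
    return [(x, x * 1000.0 / c / 3600.0) for x in distances_km]

# Hypothetical example: 20 m initial depth behind a natural dam,
# checkpoints at 5, 15, and 30 km downstream.
for x_km, t_hr in arrival_times(20.0, [5.0, 15.0, 30.0]):
    print(f"{x_km:5.1f} km -> {t_hr:5.2f} h")
```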
Conclusions
• A visual inspection of the current condition of the natural dams of the high-mountain lakes of the Tashkent oblast under consideration was carried out.
• On the basis of this inspection, it can be noted that the overall technical condition of the natural dams of these high-mountain lakes is safe and does not pose a danger to the territories situated downstream.
• Modeling of the occurrence of an emergency situation on the high-mountain lakes of the Tashkent oblast in the case of full or partial destruction of the natural dams was carried out.
• On the basis of topographic material, cross-sections of the Pskem and Koksu river beds were made.
• Using the available calculation method for flood waves, the key parameters were determined: the wave height, the propagation speed of the breakout wave at given distances, and the time for the wave to reach those distances.
• On the basis of the calculation method, zones of possible flooding under full and partial destruction of the dams of the high-mountain lakes were calculated and plotted on the cross-sections of the Pskem and Koksu rivers.
"Environmental Science",
"Engineering",
"Computer Science"
] |
Comparison of the Alkali-Silica Reactivity of North Cyprus and South Cyprus aggregates; preliminary studies using RILEM method
Alkali-silica reaction (ASR) is regarded as one of the most deleterious concrete durability problems, known to cause severe deterioration in reinforced concrete structures all around the world. ASR involves the reaction of the alkaline concrete pore solution with silica minerals in the aggregates, producing a hydrous alkali-silica gel. Expansion caused by this gel upon absorbing moisture results in serious deterioration of the concrete. Although the susceptibility of South Cyprus aggregates to ASR has been studied to a very limited extent in the past, no scientific information on the ASR susceptibility of North Cyprus aggregates is available in the related literature. The Beşparmak (Pentadaktylos) Mountains (North Cyprus) and the Troodos Mountains (South Cyprus) are positioned close to each other; however, aggregates obtained from the two mountain ranges are known to differ in composition. This difference in composition has the potential to yield significantly varying ASR performances. The aim of this study was to carry out preliminary investigations on the alkali-silica reactivity performance of both North and South Cyprus aggregates under the same conditions, in a systematic and comparative manner. Aggregates obtained from both mountain ranges were tested in combination with CEM I and CEM II (with supplementary cementitious materials) under the exposure conditions of RILEM method AAR-2. Preliminary results showed that North Cyprus aggregates are potentially reactive when used with CEM II, while South Cyprus aggregates tested under the same conditions were found to have much higher levels of reactivity.
Introduction
Alkali-silica reaction is a reaction between the alkaline pore solution and silica minerals, such as cryptocrystalline quartz and opal, present in the aggregates. The pore solution of concrete contains potassium (K⁺), sodium (Na⁺), and hydroxyl (OH⁻) ions [1,2]. In the highly basic environment of the concrete microstructure, the hydroxyl ions (OH⁻) attack the silanol (Si-OH) and siloxane (Si-O-Si) groups of reactive silica and yield the reactions given in the following equations [3,4]:

Si-OH + OH⁻ + Na⁺ → Si-O-Na + H₂O

Si-O-Si + 2OH⁻ + 2Na⁺ → 2(Si-O-Na) + H₂O

Under this attack, microcrystalline quartz and other aggregate constituents form a viscous and hygroscopic gel called "alkali-silica gel" [5,6]. This gel absorbs water and expands in moist areas. The expansive pressure exerted by the silica gel causes cracking and deterioration in concrete [7]. The likelihood of ASR occurrence in concrete made with South Cyprus aggregates has been studied only to a limited extent in previous studies [8,9]. On the other hand, there is no scientific information on the level of alkali-silica reactivity of North Cyprus aggregates available in the related literature, since ASR has not been previously studied in North Cyprus. This study aims to provide experimental results on the alkali-silica resistance of both North and South Cyprus aggregates in a systematic and comparative manner. The Beşparmak (Pentadaktylos) Mountains (North Cyprus) and the Troodos Mountains (South Cyprus) are positioned close to each other; however, aggregates obtained from the two mountain ranges are known to differ in composition. This difference in composition has the potential to yield significantly varying ASR performances. The RILEM AAR-2 ultra-accelerated mortar bar method provides expansion limits based on prior experience [11], which serve to classify the level of reactivity easily and effectively. In addition, the relatively short testing period required to provide results makes AAR-2 commonly preferred as a first step in investigating the reactivity of aggregates by researchers carrying out studies in this field. The test method basically involves the preparation of mortar bars having dimensions of 25 mm × 25 mm × 285 mm, with cement : fine aggregate and water/cement ratios defined as 1 : 2.25 and 0.47, respectively. Three mortar bars were prepared for each batch, and the length change observed in these bars under the exposure conditions suggested by the method was recorded systematically throughout the testing period.
Sample Preparation and Materials
The types of aggregates that are known to be commonly used in the concrete industries of South and North Cyprus were selected for this study, in order to provide representative results for existing buildings on both sides of the island. "Crystallized dolomitic limestone" aggregates from the quarries of the Beşparmak (Pentadaktylos) Mountains in North Cyprus were obtained with the help of the North Cyprus Geology and Mines Department, and "diabase" aggregates from the Pirga quarry of the Troodos Mountains were obtained with the help of the Geological Survey Department of South Cyprus. The aggregates were sieved and prepared to have a particle size distribution between 4.75 mm and 0.15 mm, as proposed in the AAR-2 method.
The weights of the constituents needed for each batch of 3 mortar bars according to AAR-2 are given in Table 1 below:
Cement (g): 400
Fine aggregate (g): 900
Water (g): 188
Water/cement ratio: 0.47

Cyprus is known to have one of the warmest climates in Europe. As a Mediterranean island, the annual average (actual) temperatures observed in Cyprus during the day and night are 24°C and 14°C, respectively, and the apparent temperature felt during the day in the summer period can frequently exceed 40°C. The climatic conditions of Cyprus have led the concrete industries of the island to generally prefer slow-hydrating blended cements with lower rates of heat of hydration. CEM II/B-M (with 21-35% supplementary cementitious materials (SCM)), which is the most typically used cement in North Cyprus, was chosen to test the potential alkali-silica reactivity of Cyprus aggregates used in concrete structures. CEM I, containing only cement clinker, was also used in this study as a control set, in order to verify the effect of SCM presence on the reactivity. Therefore, a total of four mortar batches, made with two different aggregates (from "North" and "South" Cyprus) and two different cements (CEM I and CEM II/B-M), were prepared for investigating the likelihood of ASR occurrence in Cyprus.
Exposure Conditions and Measurements
Three mortar bars were made for each batch, as proposed in the standard AAR-2 method, using the materials in the proportions given in Table 1. The samples were placed in a moist cabinet at room temperature and 90% humidity for 24 hours. The mortar bars were then immersed in a container with distilled water, sealed carefully, and placed in a thermal cabinet at 80°C for 24 hours. After the initial length of each mortar bar (L0) was measured at the end of this first 24 hours, the bars were immersed in 1N sodium hydroxide and stored at 80°C for 14 days. During the 14-day testing period, five comparator readings (Ln), at the 3rd, 6th, 9th, 12th and 14th days of the experiment, were taken at the same time on each testing day (age). The length change in the samples was measured with a digital length comparator having 0.001 mm sensitivity. The total change in the length of each mortar bar was calculated from the difference between each reading Ln and the initial length L0, expressed as a percentage. The average of the length change observed in the three bar samples was reported for each batch at each measurement age.
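A minimal sketch of this length-change bookkeeping (not the authors' spreadsheet): it computes the percentage expansion of each bar, averages the three bars of a batch, and classifies the result against the 0.1% and 0.2% limits discussed in the Results section. The readings are invented, and the use of the initial length L0 as the gauge length is an assumption of the example.

```python
# Illustrative sketch of the AAR-2 length-change bookkeeping: percentage
# expansion per bar, batch average over three bars, and classification against
# the 0.1% / 0.2% limits used in the paper. Readings below are made up, and the
# initial length L0 is used as the gauge length (an assumption of this example).
def expansion_percent(L0_mm, Ln_mm):
    return (Ln_mm - L0_mm) / L0_mm * 100.0

def classify(avg_expansion_percent):
    if avg_expansion_percent >= 0.2:
        return "deleterious"
    if avg_expansion_percent >= 0.1:
        return "potentially reactive"
    return "innocuous"

# Hypothetical 14-day comparator readings (mm) for the three bars of one batch.
bars = [
    (285.000, 285.310),   # (L0, L14) bar 1
    (285.050, 285.395),   # bar 2
    (284.980, 285.270),   # bar 3
]

expansions = [expansion_percent(L0, L14) for L0, L14 in bars]
avg = sum(expansions) / len(expansions)
print(f"average 14-day expansion: {avg:.3f}% -> {classify(avg)}")
```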
Results and Discussions
The results obtained in this preliminary study on the determination of the alkali-silica reactivity of Cyprus aggregates are presented below in Fig. 1-4. The length change value presented for each age is the average obtained from the three bars prepared for each batch. The error bars represent one standard deviation. "North" and "South" represent the crystallized dolomitic limestone aggregates from North Cyprus and the diabase aggregates obtained from South Cyprus, respectively. The expansion limits defined by the RILEM AAR-2 method for "potentially reactive" and "deleterious" behaviors, which are 0.1% and 0.2% respectively, are shown in each graph for reference. In their work, Sims and Nixon [11] suggest the use of the 0.1% and 0.2% length change limits for classifying "potentially reactive" and "deleterious" behaviors of mortar samples, respectively. These limits were defined based on studies carried out by RILEM on aggregate combinations with previously known reactivity performances from different parts of the world [11]. Figure 1 illustrates the length change behavior of mortar bars made with "North" and "South" aggregates in combination with the CEM I type of cement, which is defined by EN197-1 to contain only cement clinker, without any supplementary cementitious materials (SCM). When used with CEM I, both "North" and "South" aggregates are observed to yield expansion values that are below the defined potentially reactive behavior limit. However, mortar bars made with "South" aggregates were observed to experience much higher expansion relative to the samples made with "North" aggregates. Nevertheless, both aggregates can be classified as "innocuous" when used in combination with this cement. Figure 2 illustrates the length change behavior of mortar bars made with "North" and "South" aggregates in combination with the CEM II/B-M type of cement, which is defined by EN197-1 to contain 21-35% SCMs. These results indicate that the SCM content of CEM II increased the likelihood of ASR occurrence for both aggregates. The expansion recorded for mortar bars made with "South" aggregates was observed to exceed 0.2%, which is the limit defined by Sims and Nixon [11] for "deleterious" alkali-silica reactivity behavior of aggregates. On the other hand, the mortar bars made with "North" aggregates were observed to experience a more gradual increase in length; the expansion recorded at the end of the 14-day testing period was observed to be slightly higher than 0.1%, which is the limit defined for "potentially reactive" behavior of aggregates. Fig. 3 and 4 provide comparative information on the ASR potential of "South" and "North" aggregates, respectively, when they are used in combination with two different cements, namely CEM I and CEM II/B-M. The latter cement is known to contain supplementary cementitious materials up to 35%. In the case of both aggregates, the mortar batch prepared with CEM II/B-M yields more than three times the expansion observed for the mortars prepared with CEM I. Some of the previous studies in the related literature [12][13][14][15] suggest that SCM incorporation in the mix is an effective way of reducing the expansion and related damage in concrete elements as a result of ASR. Besides their effect on the permeability of concrete, the formation of C-S-H as a result of pozzolanic activity is known to incorporate alkalis and therefore cause a reduced pH level, which reduces the occurrence of ASR [14,15].
However, the findings of this study indicate that CEM II/B-M could potentially increase the alkali-silica reactivity experienced by the samples, which may imply an inadequate alkali-binding effect of the SCMs incorporated in this cement. Kerkoff [16] suggests that the effectiveness of supplementary cementitious materials in ASR mitigation should first be tested with the accelerated mortar bar test and, if their performance in reducing expansion is not successful (as in this study), further investigations can optionally be carried out with the concrete prism test, which has a longer testing period, in order to obtain more realistic information on the actual performance of the cement-aggregate combinations used. In any case, Kerkoff [16] emphasized the importance of reducing concrete alkalis to the limit required to control ASR effectively if SCM-containing cements fail to mitigate ASR. Kerkoff suggests that such an inadequate ASR mitigation performance of cementitious materials can be due to a "pessimum" effect of the SCM dosage: an SCM dosage lower than the optimum is reported to potentially yield even higher ASR-related expansion than that of cements containing no SCM [16].

Table 3 summarizes the ASR performance and the material combinations used in the four different batches tested in this preliminary study.

Table 3
  Batches                              Alkali-Silica Reactivity
  "North" aggregates with CEM I        Innocuous
  "North" aggregates with CEM II       Potentially reactive
  "South" aggregates with CEM I        Innocuous
  "South" aggregates with CEM II       Deleterious

The findings presented in Table 3 have the potential to significantly influence the practices of the concrete industry on the island, mainly in North Cyprus. As mentioned before, CEM II/B-M has been the most widely used cement in North Cyprus for several decades, mainly due to its relatively slow rate of hydration, which is advantageous given the climatic conditions of the island. Higher temperatures are known to accelerate ASR [3,17]; the hot climate of the island therefore has the potential to favor an increased rate of alkali-silica reactivity. Since the combination of CEM II/B-M with "North" aggregates has been commonly used in concrete manufacture, existing structures in North Cyprus constructed decades ago with these materials might already be experiencing alkali-silica reaction.
The results obtained for the diabase aggregates used in South Cyprus are in accordance with previous findings [8,9]. The results of this study indicate the critical importance of using an appropriate cement type with aggregates that are already suspected of exhibiting alkali-silica reactivity, in order to prevent future ASR-related damage in concrete structures in Cyprus.
Conclusions and Recommendations
This study provides preliminary results on the alkali-silica reactivity of North and South Cyprus aggregates obtained with the ultra-accelerated mortar bar test method (AAR-2) proposed by RILEM. With this study, the authors' aim was to provide fundamental information on the likelihood of ASR occurrence in concretes manufactured in Cyprus, which had not been extensively studied on the island before. Two types of cement were employed: CEM II/B-M, an SCM-containing cement known to be preferred extensively by the concrete industry on the island, and CEM I, containing only clinker, which was employed as a control in order to verify the effect of SCMs on the ASR performance of the aggregates. The preliminary results obtained in this study support the following conclusions:

a) The use of CEM II/B-M yielded results indicating a risk of ASR occurrence for both North and South Cyprus aggregates. With this cement, "North" aggregates were observed to be "potentially reactive", whereas "South" aggregates exhibited "deleterious" behavior at the end of the testing period. The results confirm and contribute to the limited information in the related literature on the level of reactivity of South Cyprus aggregates; the level of reactivity of the North Cyprus aggregates was investigated for the first time in this study.

b) The majority of concrete structures in North Cyprus are made with the combination of CEM II/B-M and crystallized dolomitic limestone aggregates, referred to as "North" aggregates in this study. The "potentially reactive" behavior of this combination observed here implies the need for the concrete industry to reconsider the use of this type of cement with the locally available aggregates. The likelihood of ongoing ASR in existing buildings should also be studied to verify any potential need to take precautions against ASR-related damage that might become evident in the future.

c) The mortar batches prepared with both North and South Cyprus aggregates in combination with CEM I exhibited "innocuous" behavior for ASR. Compared with the performance of the batches prepared with CEM I, the use of SCM-containing CEM II yielded more than three times the length change in mortar bars exposed to AAR-2 conditions. This finding underlines the importance of selecting an adequate cement type for use with the locally available aggregates in order to minimize ASR-related expansion and the consequent damage in structures.

d) The inadequate performance of the SCM-containing CEM II/B-M cement in ASR mitigation should be investigated further; concrete prism test methods, which have longer testing periods and more realistic conditions, can be employed to further investigate the ASR performance of Cyprus aggregates combined with this conventional cement used on the island. Further studies should also be carried out on the exact chemical composition of the SCMs included in this cement type, in order to provide more insight into its alkali-binding characteristics in concrete.

e) Further studies on the alkali-silica reactivity of Cyprus aggregates should also be carried out with additional techniques such as SEM-EDX, in order to provide more detailed information on the actual cause of the expansion observed in the samples; detection of the alkali-silica gel formed in the samples would help confirm the reactivity of the aggregates with higher certainty.
"Engineering",
"Materials Science"
] |
Trapping Phenomenon Attenuates the Consequences of Tipping Points for Limit Cycles
with a large transport of heat to the Northern latitudes and the other one corresponding to a shut-down of the circulation, consequently ceasing the heat transport 25 . As a part of the thermohaline ocean circulation, local deep ocean convection is also vulnerable to a shut-down 26,27 . Both processes, the shut-down of the THC on a global scale and of deep-ocean convection on a local scale, would have a large impact on the climate of the Northern hemisphere, finally leading to a cooling in Northern and Western Europe. The Indian Monsoon is expected to become wetter or drier depending on which of the processes responsible for such changes, such as increasing albedo due to aerosol concentration or stronger El Niños, respectively, dominates in the future 24 . Another highly debated tipping point relates to the tendency of the Arctic sea ice to become thinner and finally to reach an ice-free state in summer 28,29 . Finally, we mention the different approaches to studying the recurrent switching between ice ages and warmer climates before the Holocene, which is attributed to several stable states and transitions between them 30 .
Different scenarios that lead to such critical transitions are discussed in the literature. On the one hand, changes in the environmental forcing, e.g. atmospheric temperatures or altered precipitation patterns, can induce such transitions by reaching a critical threshold at which one of the states loses its stability and the system switches to another state. In mathematical terms, this scenario is related to a bifurcation; the combination of two such bifurcations often comprises a hysteresis 31 , allowing for switching between two alternative states when a control parameter is varied 15 . On the other hand, when two stable states coexist, switching between them can be mediated by fluctuations, leading to noise-induced transitions 32,33 .
Due to increasing concerns about such critical transitions on our planet Earth, there is an urgent need to identify the approach of a regime shift or a tipping point before its occurrence. Such early warning signals, where applicable, can be used to anticipate the transition and to take measures to slow down the approach or, if the expected alternative state is for some reason undesired, to avoid it. Such avoidance might not always be possible, particularly not in the climate system, but early warnings could be used to take political action. During the last decade, several methods have been developed to gain more insight into how to predict abrupt changes in the system dynamics induced by its nonlinearity. One of the earliest measures identified is related to the time needed to respond to perturbations: while far away from the critical threshold such perturbations die out quite quickly, the damping becomes significantly slower in the neighborhood of a threshold 32,33 . The perturbation applied can be either a single perturbation or some noise, which is inevitable in experiments and in natural processes. In the case of a single prescribed perturbation, this measure is easy to implement in experiments and therefore widely used in quantifying the distance to the threshold value.
In the case of a noisy system, the approach of the critical threshold can be quantified by an increasing variance and autocorrelation 33 . As a second effect, noise leads to an irregular switching process between the two (or more) alternative stable states. This process is called flickering 33,34 , attractor hopping 35 or chaotic itinerancy 36,37 , depending on the context in which it is studied. Hence, a second indicator has been introduced measuring this flickering or hopping process, which occurs in a bistable (or multistable) region in parameter space where two or more stable states coexist. It is important to note that the hopping dynamics depends on the noise strength. While a large body of work is devoted to the impact of additive noise, many processes in nature, particularly in ecosystems, are affected by multiplicative noise, which has only rarely been considered. It has to be emphasized that environmental noise in ecosystem dynamics is always multiplicative and plays by far the more important role 38,39 . However, most of the previous work on indicators for critical thresholds is restricted to additive noise, describing the impact of fluctuations on physical processes in the climate system but being of limited value for ecological problems.
Many bistable systems considered in nature possess two stable equilibria, i.e. the system is stationary. For the above-mentioned example of shallow lakes, the water in the lake loses transparency by shifting abruptly from a clear to a turbid state when a threshold in the level of nutrients is reached. As a result of this eutrophication process, submerged plants dramatically disappear beyond a tipping point 15 . For the example of desertification, rainfall patterns are the essential environmental conditions determining the shift from perennial vegetation via localized vegetation patterns to a state of bare soil. Moreover, taking into account the grazing pressure by livestock in the Sahel zone again leads to a shift from perennial to annual vegetation 21 . However, in analyzing the regime shifts in the respective ecosystems, the periodic input of nutrients and precipitation due to the seasonal cycle has been neglected, although it could have an important impact too. The same argument applies to the analysis of physical processes in the climate system driven by periodic or quasi-periodic changes in the orbital parameters of the Earth, leading to variations in solar insolation with periods of about 23,000, 41,000 or 100,000 years, the well-known Milankovitch cycles 40 . Particularly the latter are assumed to be the major drivers of the development of the ice ages before the Holocene 30 . Recently, these periodic forcings affecting the albedo of the Earth have been studied to evaluate the impact of this variation on the Arctic and Antarctic ice cover 28,29 .
In this paper, we focus on tipping points and regime shifts of periodically forced systems. In this class of systems, abrupt changes of the dynamics at critical transitions do not occur between steady states (equilibria), but between oscillating states (limit cycles). Though at first sight the hysteresis curves usually drawn to discuss critical transitions look similar, the dynamics is quite different. For this scenario we show that a transient structure, a so-called channel, occurring in the system's state space beyond the tipping point, creates a short-term dynamical regime with specific properties which attenuates the criticality of the transition. The smoothing of the transition is demonstrated by computing a finite-time measure of the twisting behavior (rotation property) of the state space surrounding trajectories while inside the channel domain. This measure indicates that the trajectories passing through the channel beyond the tipping point retain residual properties of the limit cycle destroyed at that tipping point. Hence, the channel acts as a "ghost" of the destroyed limit cycle, retaining system trajectories in a very similar fashion. This is shown by a statistical analysis of the intervals of time spent by noisy trajectories in the neighborhood of the limit cycle (pre-tipping) and of the channel (post-tipping). Furthermore, we attribute to the ghost state the inconclusive diagnostics provided by variance and autocorrelation in anticipating tipping points.
Let us now indicate the differences in the dynamical approach to deal with limit cycles instead of asymptotic equilibria. Figure 1 shows the typical bifurcation diagram used to explain the appearance and disappearance of the coexistence of two alternative states. In contrast to the usual diagrams, the lines denote here one point of a limit cycle instead of stationary points. Therefore, the y-axis does not show one coordinate of the stationary state of the system, but one coordinate of the Poincaré section, a special construction which is very useful in analyzing periodic solutions of nonlinear dynamical systems (see Methods section). The Poincaré section in a periodically forced system defines a stroboscopic map in which the system is always analyzed at the same phase of the forcing, i.e. at times t, t + T, t + 2T, … . Hence, a limit cycle corresponds to a fixed point in this stroboscopic map which makes the similarity between Fig. 1 and the well-known sketches of bistability in the stationary case obvious.
There are two saddle-node or fold bifurcations of limit cycles, denoted by F1 and F2, at which two limit cycles, a stable and an unstable one, emerge or disappear. Crossing those tipping points, the dynamics changes dramatically. Increasing the parameter value p, the continuation of the limit cycle on the upper branch stops abruptly at F2 and the system switches to the periodic behavior corresponding to the lower branch, while decreasing the parameter p and continuing the lower branch results in a transition to the upper one at the critical threshold F1. Our main focus lies in the analysis of the regions around those critical transitions. Firstly, we address the question to what extent the usual criteria of critical slowing down and flickering signal the approach to the transition (blue region). Secondly, we show that the critical transitions are hidden due to particular structures in state space, so-called channels, which appear in the neighborhood of the fold bifurcations of limit cycles, preventing a clear identification of the critical transitions.
Results
To illustrate our results, we employ a paradigmatic model system, the well-known Duffing oscillator 41 , and apply a periodic forcing with amplitude A and frequency ω. In mathematical terms, this simple dynamical system is given by Eq. (1). The parameter d is the damping coefficient. The parameter σ controls the intensity of the stochastic forcing ξ(t), which represents the usual zero-mean, unit-variance white noise with ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(t′)⟩ = δ(t − t′). We use a fourth-order Runge-Kutta method to integrate Eq. (1); time is measured in units of the period of the external forcing, T = 2π/ω.
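Since Eq. (1) is not reproduced above, the following sketch only illustrates the type of integration described: a fourth-order Runge-Kutta step for the deterministic part of a periodically forced Duffing-type oscillator, with the additive noise applied in an Euler-Maruyama fashion. The particular twin-well form used, all coefficient values and the treatment of the noise term are assumptions for illustration, not the paper's actual Eq. (1) or parameters.

```python
import numpy as np

# Hedged sketch: forced Duffing-type oscillator with additive noise. The model form
#   x'' + d*x' - x + x**3 = A*sin(omega*t) + sigma*xi(t)
# and all parameter values below are assumptions, not the paper's Eq. (1).

def deriv(state, t, d, A, omega):
    x, v = state
    return np.array([v, -d * v + x - x**3 + A * np.sin(omega * t)])

def integrate(x0, v0, d=0.2, A=0.3, omega=1.0, sigma=0.05, dt=1e-2, n_periods=50, seed=0):
    rng = np.random.default_rng(seed)
    T = 2.0 * np.pi / omega                      # period of the external forcing
    n_steps = int(round(n_periods * T / dt))
    state = np.array([x0, v0], dtype=float)
    out = np.empty((n_steps, 2))
    t = 0.0
    for i in range(n_steps):
        # deterministic part: classical fourth-order Runge-Kutta step
        k1 = deriv(state, t, d, A, omega)
        k2 = deriv(state + 0.5 * dt * k1, t + 0.5 * dt, d, A, omega)
        k3 = deriv(state + 0.5 * dt * k2, t + 0.5 * dt, d, A, omega)
        k4 = deriv(state + dt * k3, t + dt, d, A, omega)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        # stochastic part: additive noise on the velocity (Euler-Maruyama style)
        state[1] += sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        out[i] = state
    return out

trajectory = integrate(x0=1.0, v0=0.0)  # columns: x(t), x'(t)
```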
In a certain parameter range, the system described by Eq. (1) exhibits a generic scenario of bistability between two different limit cycles, i.e. two stable periodic solutions exist, separated by an unstable one of saddle character. The corresponding bifurcation diagram is shown in the upper panel of Fig. 2. Though this diagram looks very similar to the general diagram depicted in Fig. 1, it shows only a Poincaré section of the stable limit cycles occurring for the system described by Eq. (1). To characterize those limit cycles in more detail, we compute the generalized winding number (GWN) for each of them along the bistable parameter range; the results are depicted in the bottom panel of Fig. 2. In this panel, the GWN is represented by w∞; this measure quantifies the asymptotic twisting of trajectories in the local neighborhood of the limit cycles. A more detailed description of this measure is given in the Methods.
The bifurcation diagram of Fig. 2 (upper panel) shows the dependence of the noise-free Duffing oscillator on the forcing amplitude A. Two stable limit cycles, S1 (blue) and S2 (yellow), coexist for a range of parameters bounded by two fold bifurcations of limit cycles at the points F1 and F2 (tipping points). Thus, the system is subject to catastrophic shifts, tipping points, as the parameter A reaches A1c or A2c. Let us now check whether the lag-1 autocorrelation coefficient and the variance of the system indicate the approach of the critical transition and can serve as early warnings. To this end, we now apply noise to the system and show the resulting behavior in Fig. 3. We focus our analysis on the parameter region close to A ~ A1c; hence, in Fig. 3, we reverse the x-axis to better investigate the critical transition at the parameter A1c. In Fig. 3 (upper panel), the black line indicates the time evolution of a noisy trajectory with the parameter A varying over the same interval as in the bifurcation diagram, also indicated in this panel. In Fig. 3 (middle panel), we show the autocorrelation at lag-1, a measure that usually increases as critical transitions of systems in equilibrium are approached. For the limit cycles considered here, we observe a sudden increase in the autocorrelation coefficient as soon as the noisy trajectory starts flickering between the stable limit cycles. Subsequently, however, it decreases as the system approaches the critical threshold and does not exhibit any significant change as the critical threshold A1c is passed. Similar behavior is observed for the standard deviation of the noisy trajectory, shown in Fig. 3 (bottom panel). Therefore, we find that the usual indicators of critical transitions between equilibria may not work for limit cycles; instead we observe a continuous decrease of the autocorrelation coefficient and the variance, which is not suitable to serve as an early warning signal. To explain this behavior, we investigate in more detail the state space structure occurring for parameters beyond the fold bifurcation at A1c.
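A minimal sketch of the two indicators discussed here, computed in a sliding window over a scalar observable (for example the Poincaré-section coordinate), is given below; the window length, the synthetic test series and the use of numpy.corrcoef are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

# Hedged sketch: sliding-window lag-1 autocorrelation and variance, the standard
# early-warning indicators discussed in the text. The window length is arbitrary.

def rolling_indicators(x, window=500):
    lag1, var = [], []
    for i in range(len(x) - window):
        seg = x[i:i + window]
        lag1.append(np.corrcoef(seg[:-1], seg[1:])[0, 1])  # lag-1 autocorrelation
        var.append(np.var(seg))                            # variance in the window
    return np.array(lag1), np.array(var)

# Example on a synthetic AR(1) series (placeholder for the measured observable):
rng = np.random.default_rng(1)
x = np.zeros(5000)
for i in range(1, len(x)):
    x[i] = 0.8 * x[i - 1] + rng.standard_normal()
ac1, v = rolling_indicators(x)
```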
Fold bifurcations of limit cycles are accompanied by the formation of channels in state space through which the trajectory has to pass after entering them. To illustrate this behavior, first described by Pomeau and Manneville 42,43 in the context of intermittency in turbulence, we show in Fig. 4 the generic principle behind the formation of such a channel. As outlined above, a limit cycle corresponds to a fixed point in the Poincaré section. In our case, one point xn in the Poincaré section is mapped onto the next point xn+1 by sampling the limit cycle stroboscopically once every period of the forcing, so the fixed points lie on the diagonal xn+1 = xn of the diagram xn+1 × xn shown in Fig. 4. In the bistable region we have three fixed points, two stable ones and one unstable point separating the former two (Fig. 4, left panel). When the fold bifurcation is reached, the stable and the unstable limit cycle merge and form an elliptic point (middle panel), while beyond the fold bifurcation a channel appears in state space through which the trajectory moves when it comes close to the region where the two limit cycles were previously located. Without noise, the trajectory would finally converge to the only stable limit cycle left in the system, denoted by S1. Due to the noise, the trajectory is kicked back into the channel from time to time and moves through it again and again. As a consequence of this behavior, even beyond the fold bifurcation the dynamics returns to the "ghost" of the limit cycle manifested as the channel. The resulting dynamics contains phases where the trajectory is close to the "ghost" and phases where it comes close to the only stable limit cycle but is kicked away again by the noise. In this way, the flickering dynamics continues even though the system is well beyond the critical transition. For the very same reason, the widely used indicators of critical transitions, such as the lag-1 autocorrelation function, the variance and flickering, cannot signal the approach to the critical transition, and the shift or tipping point happens with no warning. In our case, the characteristics of the critical slowing down indicators resemble the case of a smooth transition as analyzed in ref. 44. Additionally, we note that the dynamics before and beyond the critical transition is essentially the same, characterized by hopping between the two stable states before, and between the single stable state and the "ghost" beyond, the tipping point. This behavior is generic and will occur for all fold bifurcations of limit cycles that form a channel after the bifurcation.
Let us now discuss the post-tipping behavior in more detail. To demonstrate that the channel is indeed the most essential structure in state space shaping the noisy dynamics beyond the critical threshold, we analyze the scaling behavior of the transient time with the distance from the threshold. Therefore, we define ε as a parameter measuring the distance from the critical threshold A1c, i.e., ε = A1c − A. Then, as a function of this distance ε, we measure the transient time, τ(ε), for trajectories to reach the remaining stable limit cycle (yellow). For these trajectories, we choose a set of initial conditions in the state space region previously occupied by the basin of attraction of the limit cycle S2 (blue) destroyed at F1. In Fig. 5, we show the results for an ensemble of 300 random initial conditions for each distance ε. We find that the time trajectories take to leave the channel scales as a power law with the distance ε, with a characteristic exponent of magnitude 0.5, i.e., τ(ε) ∝ ε^(-1/2), corresponding to the value known from type-I intermittency 42,43. Hence, the characteristic time τ(ε) enables us to quantify the influence of the channel on the time evolution of trajectories as a function of the parameter distance ε.
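The scaling of τ(ε) can be checked with a simple linear fit in log-log scale, as sketched below; the data points are synthetic, generated from the expected ε^(-1/2) law with multiplicative scatter, purely to illustrate the fitting procedure.

```python
import numpy as np

# Hedged sketch: estimating the exponent of tau(eps) ~ eps**beta from (eps, tau) pairs
# by a least-squares fit in log-log scale. The "measurements" below are synthetic.

rng = np.random.default_rng(2)
eps = np.logspace(-4, -2, 12)                                      # distances from the threshold
tau = eps ** -0.5 * np.exp(0.05 * rng.standard_normal(eps.size))   # synthetic crossing times

beta, log_prefactor = np.polyfit(np.log(eps), np.log(tau), 1)
print(f"estimated exponent: {beta:.3f}  (type-I intermittency predicts -0.5)")
```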
This scaling verifies that trajectories starting in the former basin of attraction of the limit cycle destroyed in the fold bifurcation are in fact trapped in the channel associated with this bifurcation for a characteristic time τ(ε). In order to obtain the twisting behavior of trajectories only during the time they are trapped in the channel, we introduce a finite-time version of the winding number (FTWN), represented by 〈w(t, ε)〉; a complete description of this definition is given in the Methods section. In the diagram shown in Fig. 6, the color code indicates the FTWN given by 〈w(t, ε)〉; the y-axis represents the time evolution, t, in units of the period of the forcing, while the x-axis shows the distance ε from the bifurcation point. The red line represents the function τ(ε) obtained from the fit in Fig. 5 for the characteristic time for trajectories to cross the channel. We observe that, regardless of the parameter distance ε, the finite-time winding number has a well-defined value equal to 7.0 (blue in Fig. 6) for times shorter than the corresponding τ(ε). Hence, from Fig. 6, we conclude that the post-tipping trajectories, while crossing the channel, conserve the twisting behavior (rotation properties) of the stable limit cycle destroyed at the tipping point.
Figure 4. (a) Two fixed points of the node type (S1 and S2) coexist with a saddle. The black arrows indicate how initial conditions behave dynamically in the neighborhood of each fixed point. (b) As a system parameter is varied, the node S2 and the saddle collide, forming an elliptic point E (saddle-node or fold bifurcation). (c) As the parameter crosses the critical bifurcation value, the initial conditions (arrows) that used to belong to the attraction domain of the extinct node S2 converge to the node S1 through a channel formed in the mapping.

In the following, we confirm the existence of the residual twisting behavior of the destroyed limit cycle by obtaining the FTWN of sets of initial conditions crossing the channel. Firstly, we choose the parameter A such that the dynamics will take place in the channel, i.e., A1c minus a small distance ε = 0.0095; we then compute the FTWN during the time τ(0.0095) = 33.34 (T) for a grid of initial conditions. Attributing different colors to the FTWN obtained for the trajectories corresponding to each initial condition, we clearly distinguish, in the grid of Fig. 7(a), two types of dynamic behavior: (i) trajectories that cross the channel (initial conditions in blue, fast twisting) and (ii) trajectories converging directly to the remaining stable state (initial conditions in yellow, slow twisting). In order to compare the twisting properties of the channel, measured by the FTWN, with the twisting behavior around the stable states in the bistable region, we characterize the twisting around the two stable states, S1 and S2, by the asymptotic generalized winding number (GWN). Figure 7(b) shows those twisting properties of trajectories starting on the same grid as in Fig. 7(a), but computed with the asymptotic GWN for a forcing amplitude A before the tipping. We notice the similarity between the twisting of trajectories crossing the channel beyond tipping (blue in Fig. 7(a)) and around the stable state before tipping (blue in Fig. 7(b)).
To illustrate further that observations of the system's trajectories are insufficient to determine whether the system is bistable (pre-tipping) or has a dynamical channel (post-tipping), we show in Fig. 8(a) the temporal evolution of a trajectory of the noisy Duffing oscillator as the parameter A increases with time over the same parameter interval considered above. We notice that, even after the limit cycle S2 marked in blue disappears at F1, the noisy trajectory (black line) still flickers into the state space region previously occupied by the extinct limit cycle around x = −4. This becomes even more obvious when comparing a noisy trajectory with fixed forcing amplitude A in the bistable region (pre-tipping) to a trajectory with a forcing amplitude beyond the tipping point (red line in Fig. 8(a)). In a statistical sense those two trajectories are indistinguishable, indicating that the pre-tipping and the post-tipping behavior are very similar, with flickering between the two distinct state space regions of S1 and S2, or of S1 and the "ghost" of S2, respectively.
As a consequence, time series, as the main window to observations in nature, would show the flickering phenomenon before and beyond the tipping point, making the transition in the observed data appear smooth instead of abrupt. In order to verify this statement, we investigate the intervals of time, θ, that a noisy trajectory spends in the neighborhood of the stable state (before the tipping point) and in the channel (beyond the tipping point). The idea behind this study is to extend the notion of escape times to characterize the dynamics beyond the tipping point. In bistable systems one usually computes the mean escape time or mean first passage time to identify the stability of each stable state in a stochastic sense. While for systems possessing a double-well potential it is possible to compute those escape times analytically 45 , one has to rely on numerical estimation for arbitrary multistable systems 35 . Though the vast majority of nonlinear dynamical systems do not possess a potential, the scaling of the escape rates remains valid. Specifically, in Fig. 9, we obtain the distribution of time intervals spent by trajectories in the neighborhood of the stable states and in the channel. The time interval θ is also expressed in units of the period of the forcing T.
Specifically, we show in Fig. 9 the distribution of the time intervals spent by trajectories in the neighborhood of the stable limit cycle S2, which goes extinct at the tipping point (Fig. 9(a)), together with the distribution of the time intervals spent in the neighborhood of the "ghost" of S2 beyond the tipping point (Fig. 9(b)). Both are exponential distributions, so that the probability density function can be approximated by p(θ) = (1/μ) exp(−θ/μ), where μ = 〈θ〉 > 0 is the mean value of the distribution. While in the bistable parameter region the mean time spent close to the limit cycle S2 is 〈θ〉 ≈ 10 periods of the forcing, it is only slightly shorter (〈θ〉 ≈ 6 periods of the forcing) beyond the tipping point. However, the density function for the dynamics close to the channel is narrower and higher than in the bistable region, indicating that shorter intervals of time are more frequent. Hence, even for parameters A lower than A1c (beyond the tipping point), the frequency with which trajectories visit the neighborhood of the extinct state is not zero, i.e., the flickering phenomenon still occurs after the tipping point. This means that even after the extinction of the limit cycle at A1c, the state space channel keeps retaining trajectories, preventing their abrupt, definitive transition to the sole surviving stable state.
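Since the exponential law has a single parameter, the mean dwell time μ can be estimated directly from the samples; the sketch below compares two synthetic dwell-time sets whose means (10 and 6 forcing periods) mirror the values quoted above, but the samples themselves are placeholders, not the measured data.

```python
import numpy as np

# Hedged sketch: for p(theta) = (1/mu) * exp(-theta/mu), the maximum-likelihood
# estimate of mu is the sample mean. The dwell-time samples below are synthetic.

rng = np.random.default_rng(3)
theta_pre = rng.exponential(scale=10.0, size=2000)   # near the limit cycle S2 (pre-tipping)
theta_post = rng.exponential(scale=6.0, size=2000)   # near the "ghost"/channel (post-tipping)

for label, theta in (("pre-tipping", theta_pre), ("post-tipping", theta_post)):
    mu = theta.mean()
    short = (theta < 5.0).mean()                     # fraction of intervals shorter than 5 T
    print(f"{label}: mu ~ {mu:.1f} forcing periods, fraction shorter than 5T = {short:.2f}")
```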
To emphasize that the characteristics of the dynamics change smoothly rather than abruptly when crossing the tipping point, we show in Fig. 9(c) the changes in the distribution function when decreasing the forcing amplitude A from the bistable to the monostable region. In this figure, the color code indicates the probability densities for each parameter A shown on the x-axis. We observe that the distribution of time intervals changes smoothly as the tipping point is approached and passed, indicating that there is no abrupt transition when crossing the threshold. However, as the parameter A passes through the tipping point, a considerable increase in the density of time-interval values around the expected value is observed, making the distribution narrower for parameters well beyond the tipping point.
Discussion
In summary, we address tipping points of systems subjected to periodic external forcing. The asymptotic solutions of this class of systems inherently settle into oscillating stable states (limit cycles), a more complex dynamics than the stable steady states (equilibria) for which tipping points have been widely studied. In nature, the most noticeable occurrences of such oscillating attractors are found in ecology and climate science, where periodic and quasi-periodic variability arises from external factors such as seasonality and astronomical forcing. Here, for a generic periodically forced system that generates such oscillations, we consider the typical hysteretic scenario to investigate tipping points, i.e., a bistable parameter region where the tipping is represented by fold bifurcations of limit cycles rather than of steady states. As the parameters are varied, the system reaches a fold bifurcation in which a stable limit cycle is destroyed, leaving a transient structure, a so-called channel, in the state space of the system. Hence, for parameters beyond this tipping point, the channel gives rise to a short-term dynamics which possesses properties similar to those of the destroyed limit cycle and can therefore be attributed to a "ghost" of the latter. We find that a residual dynamical property of the limit cycle destroyed at the tipping point, namely its twisting behavior, occurs in the short-term dynamics for parameters in the post-tipping region. This finding indicates that the short-term behavior carries dynamical information about the destroyed oscillating stable state.
For system parameters fixed in the post-tipping region, we obtain the time evolution of the system subject to stochastic noise. With this, we show that the "ghost" retains system trajectories in a very similar fashion to the stable limit cycle destroyed at the tipping point. Additionally, by obtaining the statistics of the time intervals that noisy trajectories spend in the neighborhood of the stable limit cycle and in the neighborhood of the "ghost", we find that the PDFs of waiting times in both regions have the same exponential profile and do not differ much in their expected values. Therefore, the "ghost" dynamics plays an essential role in attenuating the critical transition, in such a way that it may be seen as a smooth transition when one tries to diagnose it from real-world data. Hence, none of the well-known methods, such as the autocorrelation function, the variance or flickering, is suitable to identify this particular transition properly. The emergence of the "ghost" masks the transition until the system is well beyond the tipping point and makes it appear smooth instead of catastrophic. Finally, we would like to note that, despite our choice of dealing with periodically forced systems due to their importance for ecological and climate systems, the "ghost" state masking the critical transition should also be observed in parameter regions succeeding fold bifurcations of limit cycles in autonomous systems.
Methods
Poincaré Section. As we consider systems whose asymptotic behavior consists of limit cycles, the final dynamics are oscillations rather than equilibria. Bifurcation analysis is performed by defining a Poincaré section, which usually is a hyper-surface arranged transversally to the limit cycle, on which the whole system dynamics is described by a discrete system. Letting f be the map that describes the successive intersections of the trajectory with the section, for trajectories in three-dimensional space we obtain (x1(n+1), x2(n+1)) = f(x1n, x2n), where x1n, x2n are the coordinates of the nth crossing. Consequently, on the surface of section, limit cycles are represented by fixed points of f. States, as shown in bifurcation diagrams such as that of Fig. 1, are then defined on the surface of section and, in case the section is chosen to be a plane, they are denoted by the (x1, x2) plane coordinates.
For the Duffing oscillator described by Eq. (1), a suitable Poincaré section is the so-called T-shift. The dynamics on this section is represented by the discrete variables (x(nT), ẋ(nT)), defined as the solution pair (x, ẋ) collected once every period, T = 2π/ω.

Fold Bifurcation of Limit Cycles and State Space Channel. In the case of limit cycles, a risky bistable configuration occurs when two stable limit cycles coexist with one unstable cycle of saddle type. The emergence of a dynamical channel in this scenario can be described on a suitable Poincaré section. We show in Fig. 10(a) that the stable limit cycles yield two fixed points of the node type, S1 and S2, on the surface of section, while the unstable limit cycle generates a fixed point of saddle type. The stable manifold of the saddle separates the initial conditions attracted by each node (blue and red in Fig. 10(a)). As the control parameters are varied, approaching one of the fold bifurcations that delimit the bistability region, such as F1 and F2 in Fig. 1, one of the node fixed points approaches the saddle. When the system is set to the parameters of the fold bifurcation point, for instance, the node S2 and the saddle collide, and they both disappear, forming an elliptic fixed point denoted by E in Fig. 10(b). For post-bifurcation parameters, the schematic of the Poincaré section is shown in Fig. 10(c): the node and the saddle no longer exist in the Poincaré section. However, trajectories starting in the state space region which used to be the basin of attraction of the destroyed node S2 converge to the remaining node, but not before being attracted by the stable manifold of the unstable elliptic point. Effectively, the system's trajectory behaves as if there existed a channel constraining the trajectory and leading it to the remaining stable state.
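A minimal sketch of the T-shift construction described above is given below: the continuous trajectory is sampled once per forcing period, so a period-T limit cycle shows up as a single fixed point of the resulting map. The step size dt and the use of the integrate() sketch from the Results section are assumptions of this illustration.

```python
import numpy as np

# Hedged sketch of the T-shift (stroboscopic) Poincare section: keep one sample of the
# trajectory per forcing period T, giving the pairs (x(nT), x'(nT)) for n = 0, 1, 2, ...

def stroboscopic_section(trajectory, dt, omega):
    T = 2.0 * np.pi / omega
    stride = int(round(T / dt))          # integration steps per forcing period
    return trajectory[::stride]

# Example usage with the integrate() sketch given earlier (parameters are placeholders):
# traj = integrate(x0=1.0, v0=0.0, omega=1.0, dt=1e-2)
# section_points = stroboscopic_section(traj, dt=1e-2, omega=1.0)
```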
The occurrence of these dynamical channels related to fold bifurcations of limit cycles was first discussed by Pomeau and Manneville and has been argued to be the mechanism responsible for the laminar phase in the type-I intermittency scenario 42,43,46 . However, in type-I intermittency, chaotic bursts re-inject the trajectory into the dynamical channel; the trajectory spends long intervals of time crossing the channel (the laminar phase), eventually escaping to the chaotic phase space region and producing another chaotic burst. In this work, there is no chaotic process to re-inject the trajectory into the channel, so we introduce a Gaussian noise which resets the trajectory to a random configuration belonging to the basin of attraction of the stable state extinguished at the fold bifurcation. This procedure successively ejects the trajectory out of the neighborhood of the surviving stable state, forcing it to cross the channel repeatedly along its time evolution. Regardless of the mechanism used to re-inject the trajectory

Figure 10. Two-dimensional representation of the bifurcation scenario. (a) Two fixed points of the node type (S1 and S2) coexist with a saddle. The black lines indicate how initial conditions behave dynamically in the neighborhood of the fixed points. The colors indicate the domain of attraction of each node. (b) As a system parameter is varied, the node S2 and the saddle collide, forming an elliptic point E (saddle-node or fold bifurcation). (c) As the parameter crosses the critical bifurcation value, the initial conditions that used to belong to the attraction domain of the extinct node S2 converge to the node S1 through a channel formed in the plane.
"Physics"
] |
Quantification of microplastics by count, size and morphology in beverage containers using Nile Red and ImageJ
Abundant evidence of microplastics (MP) found in the environment, and their toxic effects in animals, calls for human-related research. However, well-established quantitative controlled studies on the potential routes of human exposure to MP are still sparse. MP count, size and morphology in 15 polylactic acid (PLA)-lined plastic cups and 15 PLA-lined paper cups were examined using Nile Red fluorescence tagging, microscopic photography, and morphology assessment and quantification based on ImageJ. In the plastic cups, the count and area of MP fibers were found to be significantly higher compared with blanks (p < 0.05), but not those of MP particles or total MP. In the paper cups, neither count nor area was significantly different for MP particles, MP fibers or total MP. No notable trend was observed in the size distribution of MP particles or fibers. These results indicate that the selected paper cups and plastic cups could be considered safe beverage containers, but further research on the toxicological effects of MPs of different morphologies released from plastic cups on human health is needed.
INTRODUCTION
Microplastics (MPs) are water-insoluble, synthetic polymers of diverse shapes, with sizes ranging from 1 μm to 5 mm (Frias & Nash). The detection of MPs in human stools also suggests that MP consumption results in their transport through, and contamination of, the gastrointestinal system (Schwabl et al.). These findings raised concern about the impact of MPs on human health and called for research on human toxicity (Smith et al.
Study design
To meet our objectives, we included one experimental group of 15 plastic cups, one experimental group of 15 paper cups and one control group of 5 procedural blanks in our study design. Previous studies support using Nile Red fluorescence for distinguishing MP from non-polymer materials, and programs like ImageJ have been used to quantify MP.
Sample collection and processing
We examined polylactic acid (PLA)-lined paper cups and PLA-lined plastic cups, which are common beverage containers (manufactured by NatureWorks) used and sold in the New England area. Fifteen paper cups and 15 plastic cups were collected from a cafeteria at the Harvard T.H. Chan School of Public Health.
To minimize potential contamination from external sources during the experiment, such as airborne fibers and particles, the whole experiment was processed in a fume hood, and the workplace as well as all glassware were sterilized using the standardized biochemistry laboratory protocol. The pre-filtered Milli-Q water was prepared by vacuum filtration of Milli-Q water with the same glass microfiber filter used to filter out MPs in each condition.
Regular personal protection equipment was worn throughout the whole process.
To examine the contamination of MPs from the surface of the containers, 100 mL of pre-filtered Milli-Q water was added to each container, followed by a slow, 1-min clockwise revolving motion in the fume hood at laboratory temperature (23-25 °C). Afterwards, the water was transferred to a clean beaker, dyed with Nile Red solution (dissolved in acetone) at a working concentration of 10 μg/mL, and incubated for 30 min. This concentration strikes a balance between the fluorescence strength of dyed MP and the reduction of 'noise' from the Whatman filters used for filtration (Maes et al.). Each beaker of incubated solution was then vacuum filtered through a dedicated glass microfiber filter. The processed filters were dried in Petri dishes for at least 24 h in the laminar fume hood.
To account for potential contamination from ambient air, chemicals, glassware and other testing materials, we introduced five procedural blanks. For the blank samples, we used clean beakers that were washed with pre-filtered Milli-Q water and went through the same procedure (rinsing, staining, vacuum filtration and incubation) as all the samples in the experimental groups. The level of microplastic contamination found in our procedural blanks allows MP contamination from the laboratory environment to be accounted for.
Data collection
Using an Olympus BX-60 with zoom at 10 × , we examined
Data processing
Each recognized cluster of fluorescence was manually classified as either a particle or a fiber, based on the morphology of the fluorescence, in order to overcome miscounting issues that can arise when using a high threshold for MP classification in ImageJ. With ImageJ, any pixel that does not meet a set level of fluorescence shows up as white, while any pixel meeting the requirement shows up as black (Figure 1). This sometimes leads to a fibrous MP being cut into multiple parts, which causes ImageJ to classify one fibrous MP as multiple MPs. We made duplications, separations and 'blacked out' parts of a photo prior to ImageJ analysis to separate fibrous MP from particles in a photograph, which prevented automated overcounting. This process was performed for every sample and was done independently by two researchers, whose classification results were compared and finalized through discussion. Both researchers were blinded to the sample types. As a result, we obtained separate folders that contained photos with particles only or with individual fibers for each sample.
All manually separated photos were processed by ImageJ 1.52q. Image processing was based on the ImageJ
Statistical analysis
Data analysis was performed in R v3.5.1. The csv file for every image was read and summarized within samples for particles and fibers, respectively. Files that contained no results indicated that no fluorescence met our absolute threshold, even if some was identified by the naked eye. The particle, fiber and total MP area of a sample were calculated by summing the areas of the particles, the fibers, or both, respectively. For particle counts, the number of rows in the csv file, each row being a detached MP observation, was summed. Fiber count was measured by image count, because each fiber had a dedicated image modified to remove other fluorescence. Particle area and fiber area were measured by summing the area over all images within each sample. The feature used for particle inclusion was the maximum Feret diameter, the maximum possible distance between any two parallel planes restricting the particle perpendicular to that direction. We included particles with a maximum Feret diameter of at least 7 μm, which was the maximum Feret diameter of the smallest MP recognized by the human eye under the microscope across all photos. For fiber inclusion, the threshold was an area > 0 mm². T-tests were performed to find any statistically significant differences in the numbers and areas of MPs between the different sample types.
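The study's analysis was done in R v3.5.1; the Python sketch below only illustrates the kind of per-sample summarisation and between-group comparison described in this subsection. The column layout, the inclusion thresholds as coded, the use of Welch's t-test and the placeholder arrays are assumptions, not the actual data or scripts.

```python
import numpy as np
from scipy import stats

# Hedged sketch of the per-sample summary and group comparison described above.
FERET_MIN_UM = 7.0  # particle inclusion threshold: maximum Feret diameter (micrometres)

def summarise_sample(particle_feret_um, particle_area, fiber_area):
    """Return (particle count, fiber count, total MP area) for one sample."""
    keep = particle_feret_um >= FERET_MIN_UM
    particle_count = int(keep.sum())
    fiber_count = len(fiber_area)                   # one dedicated image per fiber
    total_area = particle_area[keep].sum() + np.sum(fiber_area)
    return particle_count, fiber_count, total_area

# Placeholder per-particle / per-fiber measurements for a single sample:
counts = summarise_sample(
    particle_feret_um=np.array([5.0, 9.0, 12.0]),
    particle_area=np.array([1e-4, 3e-4, 6e-4]),
    fiber_area=np.array([2e-3]),
)

# Group comparison, e.g. plastic-cup fiber counts vs. blanks (placeholder values):
plastic_fiber_counts = np.array([4, 6, 5, 7, 3])
blank_fiber_counts = np.array([1, 2, 1, 0, 1])
t, p = stats.ttest_ind(plastic_fiber_counts, blank_fiber_counts, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```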
RESULTS
All samples of 15 paper cups, 15 plastic cups and 5 blanks were successfully collected, processed and analyzed.
During the data analysis process, 6,860 images were processed in total. On average, around 30 images per sample originally contained a mix of fibers and particles, and around 6-7 images per sample needed to be reprocessed due to misrecognition of the background by ImageJ. Figure 1 displays what a typical MP fiber (Figure 1(a)) and an MP particle look like. The count results indicate that for MP particles, the numbers were quite similar between the three conditions. However, the fiber count was significantly higher in plastic cups compared with blanks.
Counts of MP particles and combined MP counts were not significantly different between the three conditions. Figure 3 displays the total MP areas across our three types of samples. The area of MP fibers from plastic cup samples was significantly higher than that of the blanks (p = 0.032). Total MP area, in terms of the sum of particles and fibers, was higher in both plastic and paper cups compared with the blanks, but not significantly so, with p-values of 0.095 and 0.320, respectively. We can therefore conclude that there was no significantly elevated level of count or area in plastic or paper cups in terms of MP particles or total MP. Nevertheless, MP fiber count and area were significantly higher when pre-filtered water was exposed to plastic cups, compared with our blanks.
We also analyzed the distribution of MP size in terms of particle maximum Feret diameter, particle area and fiber area. Density plots were made by merging the information of all recognized particles and fibers across all samples within each sample type. The probability densities of the log particle maximum Feret diameter (mm) of the three sample types fall on essentially the same curve. No significant differences were found in the distributions of particle maximum Feret diameter (a), particle area (b) or fiber area (c) on the log scale, as shown in Figure 4.
These results indicate that the differences observed previously in counts and areas between sample types were independent of the particle or fiber size distributions, and that the differences exist at all MP sizes. These results are also supported by comparing the particle median maximum Feret diameter (d), particle median area (e) and fiber median area (f), as shown in Figure 4. No significant difference was found in plastic cups or paper cups compared with blanks according to the results of the t-tests.
DISCUSSION
Partially consistent with prior studies, MP fibers were found to be released at significantly elevated levels from the surfaces of the PLA-lined plastic cups while being agitated at laboratory temperature (around 23 °C). A related study demonstrated that steeping a single plastic teabag at brewing temperature (95 °C) could release around 11.6 billion MPs and 3.1 billion nano-plastics into a single cup of the beverage (Hernandez et al.). Another study found that opening and closing a series of plastic bottles increased the number of microplastic particles (Winkler et al.). From Table 1, a tradeoff between detection limit and analysis speed can be seen. This study demonstrated that a smaller limit of detection (7 μm) is compatible with automated counting for each filter, without the need for stitching images.
However, the insignificant result of total MP in this study
The detection limit was set to 50 μm in Prata's study, by allowing detection of fluorescence of over 3 px (Prata et al.). In Mason's study, MPs were visualized as long as 1 px was recognized (Mason et al.). The reason we did not use a minimum number of recognized pixels to define our detection limit is that doing so would count pixels detached from the main particle, which could not be eliminated through the 'despeckle' function in ImageJ, as independent particles, which would greatly affect the accuracy of quantification.
Additionally As the regulatory and policy environment shifts toward
CONCLUSIONS
This study provides quantitative estimates of MPs released from the surface of PLA-lined beverage containers. The data showed that PLA-lined paper cups are not significant sources of MPs. PLA-lined plastic cups were found to be significant sources only of MP fibers, but not of MP particles or total MPs. These results indicate that the selected paper cups and plastic cups could be considered safe beverage containers. Further research on the toxicological effects of MPs of different morphologies released from plastic cups on human health is needed.
"Engineering"
] |
Normal and Abnormal Development of Motor Behavior: Lessons From Experiments in Rats
In this essay, a few relevant aspects of the neural and behavioral development of the brain in the human and in the rat are reviewed and related to the consequences of lesions in the central and peripheral nervous system at early and later ages. Movements are initially generated by local circuits in the spinal cord, without the involvement of descending projections. After birth, both in humans and in rats, the development of postural control seems to be the limiting factor for several motor behaviors to mature. Strong indications exist that the cerebellum is significantly involved in this control. Lesions in the CNS at early stages interfere with fundamental processes of neural development, such as the establishment of fiber connections and cell death patterns. Consequently, the functional effects are strongly dependent on the stage of development. The young and undisturbed CNS, on the other hand, has a much greater capacity than the adult nervous system for compensating for abnormal reinnervation in the peripheral nervous system. Animal experiments indicate that the cerebellar cortex might play an important part in this compensation. This possibility should be investigated further, as it might offer important perspectives for treatment in the human.
INTRODUCTION
The consequences of brain damage in human babies during the perinatal period differ importantly from those at later age. The behavioral disturbances as they become apparent after brain damage at early age often change with age. In addition, the abnormalities as they ultimately emerge, often are different from those after similar lesions at adult ages. In adults in contrast, the effects of brain lesions generally appear soon after the inflicting incident and the symptoms often might be fairly well predicted on the basis of the size and the site of the lesion.
The delayed emergence of handicaps after lesions at an early age may be related to the circuitries in central brain areas not yet being fully established and to fiber projections from the damaged areas not having reached their final target areas. Lesions at an adult age, however, disturb well-balanced sets of circuitries with specific functions. Another aspect is that it is often difficult to relate the severity of motor handicaps to the size of the damage which occurred at a young age. Large lesions sometimes only lead to minor symptomatology, while in other cases severe disturbances may occur which are difficult to explain from the minute abnormalities on fMRI images or PET scans. These discrepancies might be due to what is often referred to as an increased plasticity of the young nervous system. The ill-defined term 'plasticity' refers to the modifiability of neuronal interactions by changes in neural circuitry or in synaptic efficacy, and a high degree of plasticity is generally taken to be beneficial for compensating the functional effects of brain damage. However, brain regions with their fiber projections still growing might also be more vulnerable to disturbances, and aberrant synaptic connections might easily develop by virtue of this increased plasticity.
Plasticity of the brain after lesions at early and later ages is obviously related to the developmental stage, and knowledge of the early development of the nervous system and the timetables of its fundamental processes are prerequisites for understanding and interpreting the differential reactions of the young and older nervous system to damage. The aim of this essay is to review a few of the relevant aspects of the neural and behavioral development of the brain and, on that basis, to discuss the results from studies into the consequences of lesions in the central and peripheral nervous system at early and later ages. Motor development in the human is compared to that in the rat, and the rat's development, in turn, is discussed in the perspective of some aspects of its neurobiological development.
Rats are often selected for investigations into the normal and experimentally disturbed development of the nervous system, as they are born at an early stage of brain development. This makes it possible to study aspects of their early development in the postnatal period. However, this very aspect has to be taken into account when extrapolating rat data to human development: the stage of brain development in rats at 10 to 13 days after birth is analogous to that in term babies (Romijn et al., 1991). Another obvious consideration is to acknowledge the important differences between the properties of the nervous systems of primates and rats. The function and neuro-anatomy of the corticospinal tract illustrate this point. In primates, pyramidal neurons in the motor cortex are monosynaptically connected to the spinal motoneurones innervating the hand and wrist muscles, thus enabling so-called fractionated finger movements. In rats, however, pyramidal fibers predominantly terminate in the dorsal laminae of the spinal cord and their function is probably mainly to influence the processing of afferent input (Lemon et al., 1997). Studies on the effects of motor cortex lesions in rats, therefore, can only answer part of the questions which we have, e.g. in relation to Cerebral Palsy in the human.
Prenatal development of motor behavior
The first movements in human fetuses can be observed by ultrasound scanning from the onset of the 7th week of postmenstrual age (PMA; De Vries et al., 1982), which is shortly before the stage at which Hooker (1952) could first elicit neck and trunk movements by tactile stimulation in aborted fetuses. The spontaneously occurring 'just discernible movements' initially involve only minor head and neck movements, but a few days later trunk and extremity movements also participate (De Vries et al., 1982). In the weeks thereafter, a repertoire develops of arm and leg as well as trunk and head movements and specific patterns such as sucking and breathing movements (for details see De Vries et al., 1982; Prechtl, 1984; Prechtl, 2001). So-called General Movements, consisting of a complex and variable pattern of trunk and extremity movements, are the most frequently occurring movement pattern; they occur from the 9th week PMA, throughout pregnancy, and until about 5 months after birth (Prechtl, 1997; 2001).
In rats, the first movements can be elicited on embryonic day 15 (E15; Angulo y Gonzalez, 1932), and the following day (E16) movements emerge spontaneously. Movements of the trunk, head, and fore- and hindlegs occur with increasing frequency in the next days (Narayanan et al., 1971; Smotherman & Robinson, 1988). The head moves in lateral directions, and uncoordinated wriggling movements of the trunk as well as foreleg movements may be observed. The vigorousness of the movements increases gradually in the days to follow, and from E20, the first signs of coordination between the extremity segments have been observed (Bekoff & Lau, 1980). However, coordination between the individual limbs does not occur in rats prior to birth (Narayanan et al., 1971).
The motor development of the human fetus and of the rat shows some striking similarities. Both in the human and in the rat, movements from the onset are part of patterns in which several groups of muscles are involved. In both species, the first movements occur in the neck region. In the rat, the emergence of fore- and hindleg movements follows a clear-cut cephalo-caudal trend, while movements of the mouth and the tongue develop relatively late. In the human, however, such a trend does not seem to occur. In the human, the first movements emerge shortly before reactions can be elicited by tactile stimulation. In the rat, movements can be elicited half a day before the onset of spontaneous motility, and this indicates that in both species, afferents (indirectly) connect to the motoneurones at about the time when motoneuronal axons reach the muscles.
Movements also occur in anencephalic human fetuses (Preyer, 1885). Rats, too, keep moving after transection of the spinal cord or after decapitation (Hooker, 1930; Narayanan et al., 1971), and these observations indicate that the spinal cord is able to generate motility autonomously. Ultrasound scanning of movements in 7 anencephalic human fetuses with gestational ages varying from 16 to 35 weeks PMA indicated that such movements lack the fluency and variability of movements in normal fetuses (Visser et al., 1985). This investigation also showed that the absence of the pontine region is accompanied by a disturbed temporal distribution of motility. This indicates that, at least from the 16th week PMA, brain stem projections modulate motor activity in the human fetus.
Some aspects of rat neurobiological development before birth
Motoneurones and segmental afferents. The neuroblasts in the spinal cord which later differentiate into motoneurones proliferate between E11 and E13. It is noteworthy that the motoneurones which later innervate extremity muscles are generated a day earlier than those which will innervate the (phylogenetically older) trunk muscles (Altman & Bayer, 1984). After migrating to their final destination along radially oriented glial cells (Henrikson & Vaughn, 1974), the motoneurones differentiate between E12 and E14, and this process follows a rostro-caudal gradient (Altman & Bayer, 1984). Roots emanating from the ventral horn have first been demonstrated at E12, and the motoneuronal axons have reached the intercostal muscles at E14 (Dennis et al., 1981). From that age, muscle contractions can be elicited by electrical stimulation. Remarkably, this is almost 2 days before the stage at which Angulo y Gonzalez (1932) could first elicit reactions by tactile stimulation. As transmitter release is low at early stages (Dennis et al., 1981), this possibly explains the discrepancy. The release of acetylcholine (ACh) only begins to increase substantially when the multiple innervation of muscle fibers has been eliminated (Diamond & Miledi, 1962).
The proliferation of neurones in the dorsal ganglia (between E12 and E15) virtually coincides with that of the future motoneurones (Altman & Bayer, 1984). From E13, the first sensory fibers were observed to penetrate the limb bud, and afferents reach the dorsal horn of the cervical cord at E14.5 (Vaughn & Grieshaber, 1973) and of the thoracic cord at E15.5 (Smith & Hollyday, 1983).
Around that stage, tactile stimulation may elicit movements (see above). Proprioceptive reflexes, on the other hand, which are partly mediated by monosynaptic connections between afferent fibers and motoneurons, develop a few days later. Muscle spindles have been demonstrated already before birth (Milburn, 1973), and Kudo & Yamada (1985) could elicit the myotatic reflex of the triceps surae muscle in rats from E19.5.
Movements and pattern generators. Fetal movements, both in the human and in the rat, emerge at a stage when the descending projections, e.g. from the brainstem, have not yet developed. This implies that at those stages they are generated by endogenous activity in local spinal cord circuits. It is striking that from early stages onwards, these movement patterns involve several muscle groups, and this implies patterned activity in adjacent spinal cord segments connected by propriospinal interneurones. Particularly interesting are the rhythmical motor patterns, such as rhythmic mouth movements, breathing movements, and leg movements. For such rhythmical movement patterns, a specific spinal circuitry termed a Central Pattern Generator (CPG) has been postulated on the basis of experiments by Graham Brown (1914) and others since (e.g., Grillner, 1975). Already at early stages of development, and only a few days after the motoneurones have migrated to their location in the spinal cord, synchronous and rhythmical bursts of activity can be recorded in these circuits. This is even before any activity in the muscles is obvious and before sensory fibers have reached the dorsal horns of the spinal cord (Cazalets et al., 1995; for a review see Cazalets, 2001). At later stages, when brain stem projections have developed, excitatory amino acids (EAA) and serotonin (5-HT) act together in the modulation of the rhythmic CPG activity.
Descending projections. Descending projections from the brain stem reach the caudal spinal cord in the rat already a few days before birth. This holds, e.g., for the medially descending projections from the reticular formation and the vestibular nuclei.
These have reached lumbar levels from E16 (for review, Lakke, 1997), and shortly after their arrival at segmental levels, these fibers make provisional synaptic contacts. Vinay & Clarac (1999) have demonstrated that electrical stimulation of the lower brain stem induces responses in the ventral roots already on the first postnatal day, but important changes in circuitry still occur during further development.
Spinal projections of the Limbic Motor System (Kuypers, 1982; also termed the Emotional Motor System, Holstege, 1991) containing noradrenergic and serotonergic fibers have reached lumbar spinal cord levels before birth as well (Rajaofaetra et al., 1989). The medial portions of this motor system influence the excitatory state of interneurons, motoneurons, and probably CPGs via diffusely projecting monoaminergic fibers. As the serotonergic terminals initially are present widespread, but later on only in the dorsal and ventral horn, Rajaofaetra et al. (1989) hypothesized that 5-HT-containing fibers at early stages might play a role in the stabilization of terminals from other projections in the spinal cord.
In summary. Motility develops early, from the moment when motor axons reach their muscles.
These movements initially are generated by local circuits in the spinal cord and without the involvement of descending projections. Descending projections only develop later, and they initially may subserve functions which differ from those at adult age.
Motor development and postural control after birth
Human babies start breathing immediately after birth; they start sucking, and vestibular reflexes emerge, but particularly from the second month onwards the repertoire starts to expand significantly (Prechtl, 1984). Goal-directed arm movements, head movements, and smiling occur, which play a role in the social interaction with the environment. An important prerequisite for these newly emerging behaviors is adequate postural control (for review, Hadders-Algra, 2001; see also Hadders-Algra, this issue). Prechtl (1989) studied neck muscle activity in babies in prone and supine positions during external perturbations, and he and his coworkers observed that direction-specific activity in the neck muscles develops from the 8th to 10th postnatal week onwards. More recent results on EMG recordings in neck and trunk muscles of babies during reaching support this conclusion (Van der Fits & Hadders-Algra, 1998; Van der Fits et al., 1999a). Successful reaching movements in infants develop around 4 to 5 months of term age. These movements are accompanied by contractions in trunk and neck muscles (Van der Fits et al., 1999a), but it is not until after 15 months that the activity in postural muscles consistently anticipates reaching movements, which is suggestive of feedforward control of the activation of postural muscles (Van der Fits et al., 1999b).
A stepping response can be elicited in the first weeks after birth, and these movements probably are the remains of the fetal pattern of alternating leg movements. Walking movements only reappear at the onset of the development of unsupported walking, when the babies are 7 to 9 months old.
The alternating leg movements in the fetal and neonatal period and those during walking at later ages probably are both produced by a CPG in the lumbar spinal cord (Forssberg, 1985). Unsupported walking, however, only develops by virtue of advanced levels of postural control and equilibrium maintenance. Assaiante and Amblard (1995) hypothesized, on the basis of their own results and those of others, that from this age until about 6 to 8 years postural control mechanisms are still occupied with mastering the intricate balance problems, and that an ascending temporal organization of balance control prevails during this period, while only in the years thereafter does the adult type of postural control gradually develop.
In rats, locomotion in the first 2 or 3 days is effected by crawling movements with the forelegs, and head movements in the horizontal plane may also occur (Geisler et al., 1993). At the end of the first week, rats may lift their ventral body surface off the floor and make a few staggering steps (Geisler et al., 1993; Gramsbergen & Mulder, 1998). This pattern changes strikingly at P15. In the course of 1 or 2 days, the immature locomotion pattern is replaced by adult-like walking, characterized by fluent and swift movements (Altman & Sudarshan, 1975; Westerga & Gramsbergen, 1990). Rats are then able to stand on their hindpaws (rearing) for extended periods of time, and head movements may occur while walking. This transition is accompanied by changes in the EMG recorded from hindlimb and back muscles. Until P14, recordings from the gastrocnemius and the tibialis anterior muscles show irregular patterning of the EMG, with cocontractions during walking, but from P15 to P16 the EMG changes into an interference pattern with clearly delineated bursts.
Circumstantial evidence indicates that from that age, descending projections impinge (indirectly) upon motoneuronal pools. From this same age, EMG bursts in the long back muscles become closely linked to the leg movements during walking (Geisler et al., 1993; Gramsbergen, 1999). Experiments in rats involving vestibular deprivation from P5 indicated that a change in postural control mechanisms probably is instrumental in this sudden shift to the adult-like walking pattern. Plugging the horizontal semicircular canals (which prevents the endolymph from circulating) leads to a retardation in postural development. The emergence of rearing (standing on the hindlegs without support), for example, is delayed by 5 days (Geisler et al., 1996). Pertinent to our reasoning is that the adult type of walking is delayed as well (Geisler et al., 1996) and, moreover, that a 3-day delay occurs in the development of EMG patterns in the back muscles (Geisler & Gramsbergen, 1998). These results therefore indicate that the development of postural control is the limiting factor for the development of efficient and smooth movement patterns.
Neuroanatomical aspects of postural development
The question is which developmental change might explain the sudden shift in motor behavior around the end of the second week in rats. Changes in the properties of motoneuronal membranes, sudden shifts in the force production of hindleg muscles, or a relation with the opening of the eyes (in rats, around P14) could be refuted (for a discussion, see Gramsbergen, 1998; Kernell, 1998). Another possibility is that the establishment of synaptic connections between descending projections and motoneuronal pools is causally related to this transition in motor behavior. The corticospinal tract in rats starts to grow from E16, and at the day of birth the first pioneer fibers have descended as far as the cervical intumescence (Schreyer & Jones, 1982; Joosten et al., 1987). Between P7 and P10 they have descended along the entire extent of the spinal cord, but it is not known when the ultimate synaptic terminals in the spinal cord are established. In the adult rat, the tract mainly terminates in the dorsal horn, and therefore its main role probably is to modulate sensory input (Porter & Lemon, 1993). A minority of the fibers connect indirectly to motoneurones (Liang et al., 1991), and lesioning of the corticospinal tract affects fine digital flexion movements in rats (Castro, 1972), but motor cortex lesions interfere neither with the transition in motor behavior around P15 nor with postural control mechanisms (own observations).
The rubrospinal tract has descended to the caudal spinal cord segments already at E17 (Lakke & Marani, 1997). Unilateral ablation of a cerebellar hemisphere at early postnatal stages leads to serious motor handicaps only from P15 (see also below), and the delay between the descent of the rubrospinal tract and the functional consequences of cerebellar lesions would imply a prolonged waiting period between the descent and the establishment of synaptic connections at segmental levels. A more likely possibility is that developmental changes at central levels parallel this shift in postural control and motor development. In the cerebellum, important changes occur in cortical circuitry and in olivary-cerebellar interactions during the first 2 postnatal weeks, and these indeed may explain the behavioral changes (see Gramsbergen, 1984). Other evidence also points to the cerebellum being elementary for the coupling of movement performance and postural control (see Grillner, 1975, referring to animal species; Forssberg, 1985, referring to human babies). For this reason, much experimental research is now invested in elucidating the involvement of the cerebellum, the precerebellar nuclei, and the basal ganglia in the linkage of postural control and the planning and execution of movements (e.g., Houk & Wise, 1995). In the human, deficient postural control has been claimed to be a major factor in cases of CP (Aicardi & Bax, 1992). Insight into this aspect therefore is important for understanding not only normal development but also deviant motor development.
In summary. Both in humans and in rats, it seems that the development of postural control is the limiting factor for several motor behaviors to mature. In rats, the development of posture and that of movements initially proceed more or less independently. Only at later postnatal stages do these two aspects become intimately linked to effect feedforward control of movements. Strong indications exist that the cerebellum is significantly involved in this control.
Introduction
Disorders in human motor development are relatively common. It is estimated that around 2 individuals per 1000 suffer from CP, an umbrella term covering a group of non-progressive but often changing motor impairment syndromes secondary to lesions or anomalies of the brain arising in the early stages of development. In contrast with the clinical and social dimensions of the handicaps involved is the still widespread notion that brain lesions at neonatal age have less severe consequences for later functioning than similar lesions at adult age. This notion was supported by experimental research in young and adult monkeys by Margaret Kennard (Kennard, 1936; Kennard & Fulton, 1942). The advent of modern fiber tracing methods and sensitive neurophysiological techniques was of great help in studying neuroanatomical changes after brain damage, and careful behavioral observations have demonstrated that the new connections which may develop after lesions at early age are not always beneficial for functioning. Remodeling processes are not restricted to young ages, and new fiber connections termed collateral sprouts also may develop after lesions at adult age (Tsukahara, 1981; Tsukahara et al., 1983). The rearrangements after such lesions, however, seem to be much less extensive compared to those after early lesions. Recent research has indicated that repair of lesions in certain brain areas might also be effected by dormant neuroblasts induced to differentiate as a consequence of the lesion, even at later stages of development (see Kolb, this issue).
Cerebellar hemispherectomy in young and older rats: A case in point
The cerebellar hemispheres play an important role in the regulation of muscular activity in the extremities, in linking postural control and movements, and also in motor learning. In a series of investigations we studied the behavioral and neuroanatomical effects of the removal of one cerebellar hemisphere, including its deep nuclei, at neonatal and older ages in rats. Ablations were performed at P2, when the proliferation of granular cells in the cerebellum has just started; at P5 and P10, when the cerebellar circuitry is being established; and in young adult rats (at P20 and at P30). An unexpected result from these experiments was that in rats lesioned at P5 and P10, the first neurological handicaps became apparent only around P15 (Gramsbergen, 1982; also see above). We also showed that the effects on locomotor behavior in the rats operated at P20 or P30 were distinctly less severe compared to those after lesions at P5 or P10. Neuroanatomical investigations, which were started in order to explain these discrepancies, revealed that unilateral cerebellar hemispherectomy only before P10 leads to extensive neural remodeling (for review, see Gramsbergen & IJkema-Paassen, 1984). For example, Castro (1978), Gramsbergen & IJkema-Paassen (1982), and others found an aberrant projection from the remaining deep cerebellar nuclei onto the ipsilateral red nucleus after early lesioning (in normal rats, only contralaterally projecting cerebello-rubral fibers occur). Another example is the rerouting of the spinocerebellar fibers at the side of the lesion (Castro & Smith, 1979). These fibers, which normally project onto the parallel fiber system in the cerebellar cortex, now make synaptic contacts with neurones in the vestibular nuclei. The common denominator in all these experiments is that neuronal rearrangement is restricted to animals which were lesioned up to the 10th day of life.
Lesioning beyond this age does not lead to extensive remodeling.
The nature of the normal (contralaterally projecting) and aberrant (ipsilaterally projecting) cerebello-rubral projections was studied with the double-labeling technique (Gramsbergen & IJkema-Paassen, 1982). This investigation showed that the aberrant fibers were not collaterals of the normally projecting fibers but that they stem from separate parent cells in the remaining cerebellar nuclei. One of the most plausible possibilities seemed that the aberrantly projecting neurons emanate from supernumerary neurons in the deep cerebellar nuclei: neurones which normally die during early development. This possibility was investigated by charting cell death patterns in the deep cerebellar nuclei from P2 until P20 (Gramsbergen & IJkema-Paassen, 1987). In normal rats, neuronal cell death in the cerebellar nuclei stops at about the 15th day, but after a cerebellar hemispherectomy at P2, this process ends already at P8. The interpretation of these results is that neurons in the deep cerebellar nuclei escape physiological cell death by making aberrant connections in the ipsilateral red nucleus (Gramsbergen & IJkema-Paassen, 1991).
In summary. Motor handicaps after cerebellar lesions at early stages (i.e. before the 10th day) only become apparent from the 15th day. This late expression probably is due to the immaturity, until this age, of the cerebellum itself or of its descending projections. Lesions at early stages interfere with fundamental processes of neural development, such as the establishment of fiber connections and cell death patterns. The functional effects are strongly dependent on the stage of development.
Introduction
In contrast to the limited capacity for neuroanatomical reorganization after lesions in the adult CNS stands the aggressive outgrowth of peripheral nerves after transection. Outgrowing nerve fibers from the proximal nerve stump after transection may cover considerable distances, e.g. to a target muscle, but functional recovery generally is poor. This has been demonstrated after regeneration of the oculomotor nerve in a human patient (Bender & Alpert, 1937). This nerve carries axons to the levator palpebrae muscle, the inferior oblique, and the medial, superior, and inferior rectus muscles. When the patient, after regrowth of the nerve, was asked to move his eyes in a specific direction, this led to inadequate eye movements and eyelid contractions at the affected side. This result indicated that the severed motor axons randomly reinnervate the respective muscles.
A similar inability of nerve fibers to relocate their muscles of origin has been demonstrated after transection of the sciatic nerve in adult rats (e.g., Bodine-Fowler et al., 1997). This inability is in contrast with the remarkable pathfinding and recognition properties of outgrowing axons at embryonic stages of development. From this perspective, the suggestion that peripheral nerve lesions at young age have less deleterious functional consequences (Hardman & Brown, 1987) is of great interest. The specific question is whether the greater compensational capacity after peripheral nerve transection at young ages is due to still-effective pathfinding capacities of the outgrowing nerves. Another possibility is an increased capacity for functional compensation in the CNS. In order to unravel these differential effects and their neuroanatomical and neurophysiological substrates, we studied the consequences of lesions of the sciatic nerve at adult and at young age in rats. The sciatic nerve innervates a multitude of hindleg muscles which are involved in extension and flexion of the knee and ankle joints.
Peripheral nerve lesions at adult and at young age
In adult rats, a gap (12 mm) was made in the sciatic nerve of the left hindleg, proximal to the bifurcation into the tibial and common peroneal nerves, and this gap was repaired with the removed nerve segment in a reversed orientation (Meek et al., 1999). Neurological assessments over recovery periods of up to 21 weeks after the operation indicated a return of sensory functions to some degree, but walking remained severely disturbed throughout the observation period, as indicated by abnormal foot placing and dragging of the leg (Meek et al., 1999). In another group of nine rats, we studied the locations of the motoneurones which, after reinnervation, innervate a few of the hindleg muscles that are particularly important for walking. Fifteen or twenty-one weeks after transection of the sciatic nerve and after reinnervation, the gastrocnemius, soleus, and tibialis anterior muscles at the right and at the left side were injected with retrogradely transported cholera toxin subunit B (CTB). The results indicated that the motoneurons innervating the respective muscles were dispersed over much wider areas than at the unoperated side (Gramsbergen et al., 2000). In yet another group of six rats, we implanted EMG electrodes in the gastrocnemius and the tibialis anterior muscles, and the activation patterns were recorded during walking 15 and 21 weeks after the transection. Normally, the EMG recordings in the gastrocnemius muscle show tonic bursts during the stance phase, but at the affected side the EMG was markedly irregular. The bursts built up more slowly at their onset, and sometimes activity continued during the swing phase. The tibialis anterior muscle normally is only active with a brisk burst at the onset of the swing phase. At the left side, however, we often recorded irregular and badly phased bursts during the swing phase and, even more remarkably, we regularly observed activity during the stance phase, indicating that the tibialis anterior and the gastrocnemius muscles are coactivated during the stance phase.
Results described by Luff and Webb (1985) in cats support our findings. They cross-innervated the (tonic) soleus muscle with the nerve which normally innervates the (phasic) extensor digitorum longus muscle. After recovery, they recorded EMG patterns during walking, and they often observed burst activity in the soleus muscle similar to what normally is observed in the extensor digitorum muscle. Our observation of a coactivation of the tibialis anterior and the antagonistic gastrocnemius muscle, together with the histological evidence, indicates that after transection the motoneurones have reinnervated these two muscles non-selectively (and, undoubtedly, all other muscles which were severed from innervation by the transection).
In order to study whether transections at early age have less deleterious effects, in a second set of experiments we transected the sciatic nerve at the 10th postnatal day. The walking behavior of the rats was assessed at regular intervals after the operation on the basis of qualitative criteria, such as the nature of hindleg movements, foot placement, leg abduction, etc. Results indicated that after 8 to 10 weeks, a near-to-normal walking pattern had reappeared, and only incidentally did we observe abnormal foot placing or irregularities in walking. In another group of six rats which were treated similarly, we recorded the EMG during walking. Quite unexpectedly, we found that the EMG patterns during walking showed marked abnormalities in the activation of the gastrocnemius and tibialis anterior muscles. These patterns closely resembled those which we detected after transection at adult age. EMG bursts were irregular, and in both the gastrocnemius and the tibialis anterior muscle we observed coactivation during all phases of the step cycle. These abnormalities lasted at least until 21 weeks after transection. The results strongly suggest that also after a transection at the 10th day, the outgrowing axons in the sciatic nerve randomly reinnervate their muscles; this is currently the object of further investigation.
Our hypothesis to explain a near-to-normal walking pattern in conjunction with severely disturbed EMG patterns is that subtle readjustments are effected in the force recruitment in the respective muscles, despite a random reinnervation by the motoneurones. A corollary to this hypothesis obviously is that such readjustments are particularly pronounced after a sciatic nerve transection at early age. A likely site for a compensatory modulation of force recruitment is the cerebellum. It is well known that the cerebellar hemispheres are intimately involved in modulating motor output to extremity muscles on the basis of ascending information from the spinal cord, and also in motor learning. In a pilot experiment, we explored whether such compensatory mechanisms in the cerebellum might account for these findings. In four groups of three rats each, the left sciatic nerve was transected at the 10th day. After 8 weeks, their motor behavior was recorded on videotape and assessed, and this showed that walking was very similar to that in the other groups of early-transected rats (see above). Thereafter, at around the 50th day, in one group of three rats the left cerebellar hemisphere was ablated and in another group the right cerebellar hemisphere; in yet another group the left sensorimotor cortex was removed, and in the last group the right sensorimotor cortex. After another recovery period of 30 days, their behavior in the walking alley was again recorded and evaluated. The preliminary results of this experiment indicated that the rats with lesions in the sensorimotor cortex at either side did not show overt abnormalities when walking. Rats after a cerebellar hemispherectomy at the right side had an atactic gait and signs of hyperextension in the right hindleg, but those rats in which the cerebellar hemisphere was removed at the left side, in addition to signs of cerebellar ataxia, also showed a marked deterioration of step cycle characteristics at the left side, the side of the sciatic nerve transection.
These experiments need further elaboration, but the results indicate that the cerebellar hemisphere indeed plays a role in the compensation after peripheral nerve lesions at early stages. If so, the exploitation of these processes, which might be homologous to motor learning, might open new perspectives for a goal-directed treatment.
In summary. The young central nervous system in rats has a much greater capacity than the adult nervous system for compensating abnormal reinnervation in the peripheral nervous system. The cerebellar cortex might play an important part in this, via processes which might be homologous to motor learning. This possibility should be investigated further, as it might offer important perspectives for treatment.
EPILOGUE
Experimental research has elucidated many fundamental aspects of the neuroanatomical and neurophysiological consequences of lesions at early and later ages in the central nervous system of the rat, and these findings can to some extent be extrapolated to human development. Our knowledge of developmental processes in the brain and behavior of rats now offers a sound basis for testing the possibilities of implanting stem cells in damaged areas, the mobilization of silent primordial cells in the CNS in order to repair lesioned areas, or new compensational strategies based on motor learning. This ultimately may lead to new strategies for treating the effects of brain lesions in the human acquired in the perinatal period.
"Psychology",
"Biology"
] |
Diffusive lensing as a mechanism of intracellular transport and compartmentalization
While inhomogeneous diffusivity has been identified as a ubiquitous feature of the cellular interior, its implications for particle mobility and concentration at different length scales remain largely unexplored. In this work, we use agent-based simulations of diffusion to investigate how heterogeneous diffusivity affects the movement and concentration of diffusing particles. We propose that a nonequilibrium mode of membrane-less compartmentalization arising from the convergence of diffusive trajectories into low-diffusive sinks, which we call ‘diffusive lensing,’ is relevant for living systems. Our work highlights the phenomenon of diffusive lensing as a potentially key driver of mesoscale dynamics in the cytoplasm, with possible far-reaching implications for biochemical processes.
Introduction
Diffusion is a fundamental phenomenon of transport at scales ranging from atoms to galaxies. In cells, diffusion of individual components occurs in a complex, crowded milieu (Ellis, 2001; Luby-Phelps, 1999; van den Berg et al., 2017) that is known to exhibit position-dependent diffusivity (Berret, 2016; Garner et al., 2023; Huang et al., 2022; McLaughlin et al., 2020; Śmigiel et al., 2022; Xiang et al., 2020). Diffusion can occur within or between cellular compartments, where concentrated components carry out chemical reactions. This rich interaction of diffusion and compartmentalization provides the context for cellular biochemistry. Diffusivity varies inversely with viscosity, a key biophysical parameter of the cytoplasm (Bausch et al., 1999; Hu et al., 2017) that dictates translational and rotational mobility of proteins and, by extension, possibly influences their activity (Huang et al., 2022; Lippincott-Schwartz et al., 2001; Pan et al., 2009). While diffusivity has been implicated in modulating or driving a range of cellular processes (Molines et al., 2022; Persson et al., 2020; Xie et al., 2022), the role of inhomogeneous diffusivity in shaping biochemistry by regulating biomolecular concentration and dynamics remains poorly understood. Observation of diverse instances of accumulation across scales motivates our effort to uncover how space-dependent diffusivity affects cell biology. The accumulation of small molecules within the nuclear pore, for instance, has been attributed to diffusion through a viscous region (Ma et al., 2012). At the macroscale, Chladni patterns are an example of particle concentration resulting from inhomogeneous stochastic transport coefficients (Grabec, 2017). The implications of inhomogeneous diffusivity as a nonequilibrium phenomenon occurring at time scales and length scales relevant to biology remain largely unexplored. Theoretically, more information is required to specify the problem than just the diffusion constant: different mathematical interpretations of the stochastic term in diffusion equations with a spatially inhomogeneous diffusion constant result in different physical predictions (see Appendix for more information). Interestingly, diverse mesoscale outcomes are also seen in the case of active biological matter (Bechinger et al., 2016; Needleman and Dogic, 2017; Yeomans, 2017), the density-dependent concentration of active Brownian particles (Cates and Tailleur, 2015), and size-dependent condensation kinetics in the case of C. elegans colony formation (Chen and Ferrell, 2021). While these phenomena focus on motile, energy-expending tracers, here we emphasize the underlying space-dependency of a physical property characterizing diffusion. In particular, accumulation arising from inhomogeneous diffusivity may represent a novel mechanism of effective compartmentalization, a key activity for cells in regulating biochemical processes.
In this work, we employ agent-based modeling to explore how position-dependent diffusivity can affect the distribution of tracer particles. We show that under a set of assumptions that relate to the ambiguities intrinsic to modeling inhomogeneous diffusivity (see Appendix), transport due to a diffusivity gradient leads to particle trajectories being biased toward areas of lower diffusivity, leading to effective compartmentalization and the growth of concentration gradients; we call this effect 'diffusive lensing,' in non-quantitative analogy to the effects on light rays of media with an inhomogeneous refractive index, including refraction and the formation of caustics. Analyzing particle trajectories, we show that diffusive lensing manifests differently from homogeneous diffusion at the emergent scale. We conclude that inhomogeneous diffusivity may have diverse implications for intracellular transport, from sequestering particles to modulating where and when higher-order processes such as clustering happen, in a way that is not predictable from equivalent homogeneous-diffusivity models and could affect biochemical reactions.
Inhomogeneous diffusivity drives particle accumulation
We probed the effect of inhomogeneous diffusivity on particle concentration using agent-based modeling of particle dynamics (Figure 1-figure supplement 1A; see Methods). In our modeling, the expected macroscale behavior is dictated by the Itô interpretation of heterogeneous diffusion (see Appendix) (Volpe and Wehr, 2016). Our model was non-anticipatory in that, for a modeled particle traversing an inhomogeneous diffusivity, the step size distribution was defined by the diffusivity at its present position. Other equally consistent interpretations (such as the entirely anticipatory 'isothermal' interpretation) produce different macroscale behaviors (Figure 1-figure supplement 1B). The range of physically incompatible possibilities resulting from different interpretations is known as the Itô-Stratonovich dilemma (Lau and Lubensky, 2007; Sokolov, 2010; Tupper and Yang, 2012; Van Kampen, 1988; Volpe and Wehr, 2016). For systems at thermal equilibrium, the isothermal convention best describes transport; however, the non-equilibrium nature of the cellular interior motivates the consideration of non-isothermal conventions; the physically appropriate convention to use depends upon microscopic parameters and timescale hierarchies not captured in a coarse-grained model of diffusion. Note that while the Itô interpretation is deployed here, it is possible to convert from one interpretation to another (Volpe and Wehr, 2016), resulting in different interpretations converging on the same physical outcome (see Appendix). The equation used here is distinguished from the conventional 1D diffusion equation, which arises from Fick's laws and is only unambiguously true for homogeneous diffusion (characterized by constant diffusivity).
Over the course of the simulation, particles accumulated in the low-diffusivity zone (Figure 1A and C), consistent with the steady-state closed-form Itô-convention solutions (Tupper and Yang, 2012). This accumulation entailed the transient depletion of particles on the high-diffusivity side of the interface.
A similar accumulation was observed in a smooth diffusivity gradient (Figure 1B and D). In both cases, the results from agent-based modeling were corroborated by predictions of the steady-state analytical forms derived from theory. Thus, agent-based simulations demonstrate that under the Itô convention, areas of decreased diffusivity lead to increases in the concentration of diffusing particles. We term this phenomenon 'diffusive lensing'.
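For reference, the closed-form steady-state predictions referred to here can be stated compactly (this restates, in LaTeX notation, the relation quoted in Numerical methods for the diffusion equations, following Tupper and Yang, 2012):

\[
c_{\mathrm{ss}}(x)\,D(x)^{1-\alpha} = \text{constant}, \qquad \alpha = 0\ (\text{Itô}),\quad \tfrac{1}{2}\ (\text{Stratonovich}),\quad 1\ (\text{isothermal}),
\]

so that under the Itô convention c_ss(x) ∝ 1/D(x), i.e. particles accumulate where diffusivity is low, whereas the isothermal convention predicts a uniform steady-state concentration.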
Interaction-mediated clustering is affected by heterogeneous diffusivity
Diffusive lensing is an interaction-free mode of concentrating particles that stands in contrast to a more typical paradigm of particle accumulation: interaction-driven formation of higher-order structures like protein complexes, gels, crystals, and phase-separated condensates (Banani et al., 2017; Vekilov, 2010; Wu et al., 2023). How might interaction-induced clustering be modulated by inhomogeneous diffusion in a cellular context? To address this question, we heuristically modeled interparticle interactions via a neighbor-sensing scheme in high and low interaction-strength regimes. The scheme involved using a step size for the modeled particle which decreases as the number of particles in the vicinity increases (see Methods). At low interaction strength, clustering occurred only at the low-diffusivity end of a gradient (Figure 2A), while the same interaction strength was insufficient to produce clusters in a uniform diffusivity distribution (Figure 2-figure supplement 1A and C). In contrast, a high interaction strength resulted in robust clustering manifesting before particle gradient formation reached the steady state, leading to clustering towards the high-diffusivity side of the simulation region as well (Figure 2B). At this high interaction strength, the clustering rate remained the same throughout the region in the absence of a gradient. Taken together, the results reveal that diffusive lensing can modulate clustering and, under certain circumstances, cause diffusivity-dependent localized cluster formation; furthermore, the relative strengths and timescales of each phenomenon quantitatively dictate whether increased clustering will preferentially occur in low-diffusivity zones. Similar density-dependent clustering is observed in the case of active Brownian particles during motility-induced phase separation (Cates and Tailleur, 2015). Effects of diffusive lensing on particle concentration may additionally regulate reaction rates and drive stochastic clustering of enzymes (Jilkine et al., 2011).
Heterogeneous diffusion alters bulk particle motion as measured by in silico microrheology
The diffusion coefficient is a fundamental biophysical parameter that affects numerous other phenomena, including biochemical reaction rates. To elucidate particle diffusion at the microscale in the context of diffusive lensing, we used an in silico implementation of microrheology to analyze particle trajectories (see Methods; Figure 3-figure supplement 1A). We computed the mean squared displacements (MSDs) for uniform-diffusivity simulations (in the cases of unencumbered and confined diffusion) and used these to understand how the MSD is affected by heterogeneous diffusivity in two cases: a continuous diffusivity gradient and a discrete step in diffusivity.
Particle diffusion was unencumbered in the case of large bounds (relative to step size) (Figure 3A) and confined in the case of small bounds (Figure 3B), all in agreement with earlier results (Dix and Verkman, 2008; Saxton, 2007). The MSD at saturation in homogeneously diffusive systems was found to be agnostic to the underlying uniform diffusivity of the system, indicating that it is exclusively determined by the simulation region size. In contrast, particles in a diffusivity gradient exhibited dynamics intermediate to those of the homogeneous high- and low-diffusivity cases, both in the diffusion coefficient and in the saturation MSD (Figure 3C, inset). The lowering of the saturation MSD reflects particle diffusion occurring within apparent simulation region bounds that confine more than the actual simulation region size. We note that such effective modifications of geometry are also a general feature of optical lensing. Apparent bounds were also found to occur in the two-zone diffusivity case (as in Figure 1A) where, at steady state, particles populated the simulation region non-uniformly (Figure 3-figure supplement 1B). For most of the diffusivity ratio parameter space, irrespective of whether the smaller zones were more or less diffusive relative to the bulk, a reduction in MSD was seen, indicating effectively lower diffusion bounds (Figure 3D). The magnitude of reduction depended on whether most particles resided in the larger or the smaller of the two zones. In one observed case (µi/µo = 0.25), however, the saturation MSD was higher than that seen in the homogeneous diffusion scenario, possibly due to particles robustly populating the bulk milieu followed by directed motion into the low-diffusivity zone. The saturation MSD was also found to depend on the location of the low-diffusivity zone: a more-centered zone resulted in a lowered saturation value, possibly due to weaker ratchet effects (Figure 3-figure supplement 1C and D). These results point to the insufficiency of using the diffusion coefficient alone to describe diffusion in heterogeneous milieux. They also indicate a potentially rich interplay between heterogeneous diffusivity and anomalous diffusion that requires further investigation.
In silico FRAP in heterogeneously diffusive environments reveals drivers of mesoscale dynamics
The in silico microrheology analysis we performed provided insights into dynamics at the single-particle level (i.e. the microscale). To explore collective, emergent behaviors at the mesoscale while continuing to make contact with feasible experiments, we employed an in silico version of fluorescence recovery after photobleaching (in silico FRAP) (Figure 4-figure supplement 1A and B) in more cell-like inhomogeneous environments. In particular, we modeled low-diffusivity patches/granules in a cell using a three-parameter disc-packing setup comprising granule radius (r), packing density (ϕ), and the ratio of granule diffusivity to bulk diffusivity (µi/µo) (see Methods). We investigated the effect on dynamics of varying these parameters individually, with the goal of gaining an understanding of the effects of varying the amount, nature, and distribution of viscogens in cells. In all cases, the in silico 'photobleaching' event was conducted after the steady state was attained (Figure 4-figure supplement 1C, D and E). To explain observed changes in the recovery time that would be measured in a FRAP-type experiment, we probed how the mean dwell time of particles in low-diffusivity granules varies as a function of these parameters. A decrease in the diffusivity ratio (µi/µo) at fixed ϕ and r resulted in a decline in measured particle mobility, as characterized by an increase in the simulated FRAP t1/2 values (Figure 4A). Decreasing µi/µo from 1 to 0.1 caused an approximate doubling of t1/2 (or halving of effective diffusivity). A similar reduction in mobility was observed upon variation of ϕ or r separately, keeping the diffusivity ratio constant (Figure 4B and C). The decrease in average mobility in all three cases arose from changes in flux between the low-diffusivity and bulk zones, as reflected by an increase in mean dwell times of particles within low-diffusivity granules (Figure 4-figure supplement 1F, G and H). Furthermore, such reductions in mobility were emergent in that they arose from the interplay between granular diffusivity and bulk-granule fluxes, as the regions of interest in the simulated photobleaching events comprised granules and the surrounding bulk environment. To investigate whether particle dynamics is affected by the underlying topography realizing the system's diffusivity, we averaged the granular and bulk diffusivity values to produce weighted-average diffusivity values, and compared in silico recovery in these simulations to that of the equivalent granule-comprising simulations. Such an averaging of the diffusivity, producing an effective uniform mobility for all resident particles, resulted in slower dynamics than that of the equivalent granule-comprising simulations (Figure 4D). We conclude that inhomogeneity in diffusivity drives rapid effective dynamics via fluxes between the granular (interior) and bulk (exterior) environments, creating 'diffusive highways' for particles to move rapidly between low-diffusivity regions. The diffusive lensing of particles into low-diffusivity zones, and their consequent dwelling in these regions, can be tuned by modulating the underlying diffusivity distribution in myriad ways.
Discussion
The complex milieu of the cellular interior has recently been shown to feature heterogeneous diffusivity (Garner et al., 2023; Huang et al., 2022; McLaughlin et al., 2020; Parry et al., 2014; Śmigiel et al., 2022; Xiang et al., 2020), yet the consequences of such inhomogeneity for compartmentalization and mesoscale molecular dynamics have remained unclear. Through agent-based modeling of diffusion using the Itô integration convention, we show that heterogeneous diffusivity can lead to simulated particle trajectories converging into low-diffusivity hotspots, causing the accumulation of diffusing particles into membrane-less compartments defined by the lower-diffusivity zones. We term this mode of transport 'diffusive lensing.' The underlying conclusions from our 2D simulations extend directly to 3D (see Methods). Diffusive lensing has wide-ranging effects on particle distribution and dynamics and, furthermore, it can occur across a wide parameter space. We therefore speculate that diffusive lensing is a ubiquitous phenomenon in living systems. We found that inhomogeneous diffusivity allows particle mobility at the microscale and mesoscale to differ from that expected in the presence of homogeneous diffusion. Such an expectation is in line with predicted and observed deviations from normal diffusion in cells (Bancaud et al., 2012; Baum et al., 2014). The relative strengths of diffusive lensing and inter-particle interactions (if any) determined the extent to which clustering was modulated by diffusive lensing: this interplay may be important for determining the effects of inhomogeneous diffusivity on biochemical reaction rates. In these simulations of clustering, particle concentration did not affect diffusivity. In the case that particle concentration decreases diffusivity (for example with branched polysaccharides like glycogen), diffusive lensing may create a positive feedback loop that drives particles into areas where low diffusivity has been nucleated. The effect of diffusive lensing on runaway pathological processes like protein aggregation is a potential direction for future work.
Spatially averaged effective diffusion timescales were found to depend on the microscopic diffusivity distribution: the same average diffusivity can give rise to slower or faster dynamics depending on whether it is realized via homogeneous or heterogeneous diffusivity distributions. In the latter case, the bulk region interspersed between the low-diffusivity hotspots provides 'diffusive highways' that contribute to large fluxes at the diffusivity interface, thereby accounting for the faster dynamics. Such expressways and their associated fluxes may impact reaction kinetics by altering substrate turnover rates, congruent with the model of unusual transport processes potentially modifying reaction kinetics (Bénichou et al., 2010). In the context of subcellular low-diffusivity regions (Garner et al., 2023), cells may compensate for geometry-imposed constraints on the packing density and size of these regions by altering the diffusivity ratio (against the bulk milieu) instead. To map the detailed effects of inhomogeneous diffusivity on reaction rates, however, our work suggests that a key prerequisite is to chart a suitable set of meta-parameters that provide an adequate description of inhomogeneous diffusion (Jin and Verkman, 2007), as a one-parameter description relying exclusively on the average diffusion coefficient is insufficient to fully specify the dynamics.
Changes in viscosity have been shown to occur in the context of cellular processes including cell death (Kuimova et al., 2008), stress adaptation (Persson et al., 2020), and protein aggregation (Thompson et al., 2015). At any given time point, intracellular transport dynamics arise emergently from contributions across length scales, ranging from crowding in the bulk milieu due to proteins (Wang et al., 2010) and large biomolecules (Delarue et al., 2018) to the cytoskeleton (Carlini et al., 2020; Chaubet et al., 2020) and active flows in the cytoplasm (Arcizet et al., 2008), all leading to unusual anomalous diffusive behaviors at the mesoscale (Banks and Fradin, 2005; Bressloff, 2014; Dix and Verkman, 2008; Höfling and Franosch, 2013; Kuznetsova et al., 2015; Swaminathan et al., 1997; Zhou et al., 2008). These diffusive behaviors cannot be decoupled from the intrinsic heterogeneity in biomolecular properties themselves (Heald and Cohen-Fix, 2014; Milo and Phillips, 2015). The effects of all of these subcellular determinants and energy-dependent processes on how position-dependent diffusivity is maintained in a cell remain unclear.
Not all cases of heterogeneous diffusivity will lead to diffusive lensing. This ambiguity is captured by the so-called Itô-Stratonovich dilemma (Lau and Lubensky, 2007; Sokolov, 2010; Tupper and Yang, 2012; Van Kampen, 1988; Volpe and Wehr, 2016). Any mathematical conceptualization of diffusion in the presence of position-dependent diffusivity must confront this dilemma, according to which the steady-state concentration distribution of a diffusing tracer depends not only on the localized diffusivity distribution but also on conventions based on microscopic parameters not captured in a coarse-grained model of diffusion; these parameters might, for example, include correlation lengths and times of viscogens or physical characteristics of polymers (Bo et al., 2021; Kupferman et al., 2004; Lau and Lubensky, 2007; Sokolov, 2010; Tupper and Yang, 2012; Van Kampen, 1988; Vishen et al., 2019). We speculate that any source of heterogeneity in diffusivity (including, but not limited to: mesh size experienced by the diffusing tracer, temperature changes, and viscogen identity and concentration) can, in turn, modulate diffusive lensing by means of altering either the particle- or the environment-induced noise relaxation time. While the Itô convention is deployed here to model the nonequilibrium cellular interior (Gnesotto et al., 2018; Phillips et al., 2012), in some cases the isothermal convention may be better suited for modeling transport. The choice of convention (and the effect of the dilemma, by extension) may also be subverted by recasting the dynamics into an alternate convention by taking suitable drift terms into consideration (see Appendix). Indeed, diverse conventions have been used to model experimentally observed accumulation arising from varied sources of such position-dependent noise (Bringuier, 2011; Pesce et al., 2013; Volpe and Wehr, 2016).
Our work underscores the need not only to examine diffusivity distributions in vivo as a function of local composition and environment, but also to study their time evolution in response to external stimuli. More speculatively, we suggest that diffusive lensing is a potential candidate for a rudimentary mode of pre-biotic compartmentalization. Lensing-driven accumulation of diverse biomolecules may have served to produce chemically enriched spaces, acting as an antecedent of more sophisticated membrane-bound and membrane-less organizational modalities; such a protocell organization is orthogonal to currently studied models (Monnard and Walde, 2015). This work demonstrates that diffusive lensing can have strong effects on transport and may be common in cellular contexts, modulating both passive and active flows. Future experimental and theoretical work will elucidate the extent of lensing inside and outside of cells and its effects on the biochemical reactions that sustain life.
Agent-based modeling (random walk simulations)
Agent-based modeling of diffusion was conducted via 2D random walk simulations. Irrespective of how the Itô-Stratonovich dilemma is interrogated, the underlying diffusion equations contain additive, separable contributions from each dimension, and this extends to 3D as well. Calculations were therefore carried out in 2D for simplicity and visualizability. Non-interacting point particles were initialized uniformly in a 2D simulation region with an aspect ratio matching that of an E. coli bacterium (Phillips et al., 2024). During each time step (also termed epoch or frame), every particle was moved along each coordinate by a step size sampled from a uniform distribution, U(−S, S), where S denotes the step size limit. Across a large number of steps, the distribution of displacements converges to the normal distribution by virtue of the central limit theorem. Sampling was not performed via the normal distribution directly using the diffusion coefficient (D) as a parameter; instead, the diffusion coefficient was arrived at as an emergent property of the trajectories comprising a simulation, in a ground-up fashion. Reflection off the walls was modeled using a mirror-image rule. To model a zone of differential diffusivity relative to the bulk, particle step sizes were sampled from zones characterized by different diffusivities, noting that the diffusion coefficient varies inversely with viscosity (Phillips et al., 2024) and that S ∝ √D. At all times, step sizes were sampled from distributions defined by the diffusivity around the present position, in accordance with the Itô interpretation of multiplicative noise (Volpe and Wehr, 2016) (for theoretical predictions of the steady-state behaviors, see Numerical methods for the diffusion equations). In all simulations, a set seed of 1 was used for the random number generator. Simulations were run in MATLAB R2020a on Sherlock (a high-performance computing cluster at Stanford).
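To make the sampling scheme concrete, the following is a minimal sketch in Python (the original simulations were written in MATLAB, so this is an illustrative re-implementation rather than the authors' code). The region size, particle count, epoch count, step-size limits, and zone boundary are assumed values chosen only for demonstration; the essential points are the uniform step distribution U(−S, S), the Itô-like use of the diffusivity at the particle's current position, and mirror-image reflection at the walls.

```python
import numpy as np

rng = np.random.default_rng(1)  # fixed seed, as in the described simulations

# Illustrative region (elongated, roughly E. coli-like aspect ratio) and parameters
LX, LY = 2.0, 0.5            # region size (arbitrary units)
N, STEPS = 10_000, 5_000     # particle count and number of epochs (illustrative)
S_HIGH, S_LOW = 0.02, 0.01   # step-size limits; S scales as sqrt(D)
X_INTERFACE = 1.0            # x < X_INTERFACE is the low-diffusivity zone (assumed layout)

def step_limit(x):
    """Step-size limit taken from the diffusivity at the *current* position (non-anticipatory)."""
    return np.where(x < X_INTERFACE, S_LOW, S_HIGH)

def reflect(p, lo, hi):
    """Mirror-image reflection off the walls."""
    p = np.where(p < lo, 2 * lo - p, p)
    p = np.where(p > hi, 2 * hi - p, p)
    return p

pos = rng.uniform([0.0, 0.0], [LX, LY], size=(N, 2))  # uniform initialization

for _ in range(STEPS):
    s = step_limit(pos[:, 0])
    # per-coordinate displacements drawn from U(-S, S)
    pos[:, 0] = reflect(pos[:, 0] + rng.uniform(-1, 1, N) * s, 0.0, LX)
    pos[:, 1] = reflect(pos[:, 1] + rng.uniform(-1, 1, N) * s, 0.0, LY)

frac_low = np.mean(pos[:, 0] < X_INTERFACE)
print(f"fraction of particles in the low-diffusivity half: {frac_low:.2f}")
```

Because S ∝ √D, the smaller step-size limit corresponds to a lower local diffusion coefficient, and over many epochs the low-diffusivity half should hold a disproportionate share of the particles, as in Figure 1A and C.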
In the simulations which included inter-particle interactions, these interactions were modeled via a neighbor-sensing approach. The step size limit was modified according to the relation S_eff = S·exp(−k·n), where k denotes the sensing strength and n denotes the number of neighbors (defined as those particles lying within a cutoff span around the particle in question). Such a rule-based approach modeled an effective attractive potential for the inter-particle interactions. Local density calculations used the same cutoff, and the data were normalized to the mean local density of particles at initialization. Considering the computational cost of neighbor-sensing, a smaller number of particles (10^3) was deployed, for a longer period of 2×10^4 epochs.
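A sketch of the neighbor-sensing rule, assuming a brute-force O(N²) neighbor count (the cutoff radius, the value of k, and the array shapes are illustrative; with 10^3 particles, as described above, this cost is manageable):

```python
import numpy as np

def effective_step_limit(positions, base_S, k=0.1, cutoff=0.05):
    """Neighbor-sensing rule: S_eff = S * exp(-k * n), where n is the number of
    neighbors lying within `cutoff` of each particle (brute-force pairwise count)."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    n_neighbors = (dist < cutoff).sum(axis=1) - 1  # exclude the particle itself
    return base_S * np.exp(-k * n_neighbors)
```

A larger sensing strength k shrinks the step size more strongly in crowded neighborhoods, which is what makes the rule behave like an effective attractive interaction.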
In the low-diffusivity granule simulations, the granules were modeled as disks with randomly initialized centers and fixed radii (r), covering the simulation region up to a desired packing density ϕ. The algorithm saturated for ϕ ≥ 0.6, in which case the disks were generated according to cubic close packing and their positions were incrementally perturbed over several steps to reduce local ordering as much as possible. The ratio of diffusivity inside the granules to diffusivity outside the granules (µi/µo) was the third parameter under consideration. No two disks were allowed to overlap, and all disks were kept confined within the boundaries of the simulation region. The default setup was as follows: r = 0.01 μm (uniform), ϕ = 0.6 (that is, 60% of the simulation region covered by granules), and µi/µo = 0.05. Titration of one of these three parameters involved keeping the other two at the specified levels.
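A sketch of one way the granule geometry could be set up, assuming simple rejection sampling for disc placement (adequate only for moderate ϕ; as noted above, ϕ ≥ 0.6 requires starting from a close-packed lattice, which is not implemented here). The function names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def place_granules(n_trials, r, lx, ly, phi_target):
    """Randomly place non-overlapping discs of radius r inside [0, lx] x [0, ly]
    until the target packing density phi_target is reached (rejection sampling)."""
    centers, area_covered = [], 0.0
    for _ in range(n_trials):
        c = rng.uniform([r, r], [lx - r, ly - r])
        if all(np.hypot(*(c - o)) >= 2 * r for o in centers):
            centers.append(c)
            area_covered += np.pi * r**2
            if area_covered / (lx * ly) >= phi_target:
                break
    return np.array(centers)

def step_limit(pos, centers, r, S_in, S_out):
    """Step-size limit per particle: S_in inside any granule, S_out in the bulk."""
    if len(centers) == 0:
        return np.full(len(pos), S_out)
    d = np.linalg.norm(pos[:, None, :] - centers[None, :, :], axis=-1)
    inside = (d < r).any(axis=1)
    return np.where(inside, S_in, S_out)
```

The `step_limit` lookup slots directly into the random walk loop sketched earlier, so the same Itô-like, non-anticipatory sampling applies inside and outside the granules.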
Numerical methods for the diffusion equations
The Fokker-Planck equations corresponding to the Itô, Stratonovich, and isothermal interpretations of inhomogeneous diffusion are as follows (Gardiner, 2004; Tupper and Yang, 2012), where c(x, t) denotes the concentration distribution and D(x) denotes the position-dependent diffusivity:
∂c/∂t = ∂²[D(x) c]/∂x² (Itô),
∂c/∂t = ∂/∂x [D(x)^(1/2) ∂(D(x)^(1/2) c)/∂x] (Stratonovich),
∂c/∂t = ∂/∂x [D(x) ∂c/∂x] (isothermal).
These equations were numerically evaluated via forward time-centered space (FTCS) schemes, with length and time increments set as 10^−3 and 5×10^−7 arbitrary units, respectively, and the number of time steps was set to 10^5. A Gaussian well profile was used for the diffusion coefficient and the initial condition for the concentration distribution was a uniform distribution (Figure 1-figure supplement 1B). For the theoretical prediction in each case, the following relation is used: c(x, t)·D(x)^(1−α) = constant in steady state, where α denotes the integration convention used (α = 0 for Itô, 1/2 for Stratonovich, and 1 for the isothermal convention; Tupper and Yang, 2012). Analysis and data visualization were performed on MATLAB R2019a.
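As a numerical illustration (Python rather than the authors' MATLAB), the sketch below integrates the generic α-form ∂c/∂t = ∂/∂x[D(x)^α ∂(D(x)^(1−α) c)/∂x] with an FTCS update; α = 0, 1/2, 1 recover the Itô, Stratonovich, and isothermal equations above, and the steady state obeys c·D^(1−α) = constant. The grid spacing, time step, and step count follow the values quoted in the text; the Gaussian-well amplitude and the zero-flux boundaries are illustrative assumptions.

```python
import numpy as np

nx, dx, dt, nsteps = 1000, 1e-3, 5e-7, 100_000
x = np.arange(nx) * dx
# Gaussian well in the diffusivity (illustrative amplitude, kept small enough that D*dt/dx^2 < 1/2 for FTCS stability)
D = 0.5 * (1.0 - 0.9 * np.exp(-((x - 0.5) ** 2) / (2 * 0.05 ** 2)))

def evolve(alpha):
    """FTCS integration of dc/dt = d/dx[ D^alpha d/dx( D^(1-alpha) c ) ] with zero-flux walls."""
    c = np.ones(nx) / (nx * dx)                      # uniform initial condition
    A, B = D ** alpha, D ** (1.0 - alpha)
    for _ in range(nsteps):
        g = B * c
        edge = A[:-1] * np.diff(g) / dx              # A * d(Bc)/dx between grid points (left-node A)
        flux = np.concatenate(([0.0], edge, [0.0]))  # zero-flux boundaries conserve total mass
        c += dt * np.diff(flux) / dx
    return c

# e.g. evolve(0.0), evolve(0.5), evolve(1.0) for the Ito, Stratonovich, and isothermal conventions
```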
In silico microrheology
Analysis of particle trajectories was carried out by quantifying the mean squared displacement (MSD). MSDs were calculated from 10^4 trajectories (each 10^5 timesteps in duration) per simulation. The timestep was set to 50 μs so that the emergent diffusion coefficient was ≈ 5 µm²/s, the order of magnitude of a small protein's mobility in the E. coli cytoplasm (Milo and Phillips, 2015).
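A minimal sketch of the ensemble MSD computation (Python; the trajectory array layout is an assumption). In 2D the early-time MSD grows as roughly 4Dt, so the emergent diffusion coefficient can be read off from the initial slope.

```python
import numpy as np

def msd(traj, dt):
    """Ensemble MSD from trajectories of shape (n_particles, n_steps, 2).
    Displacements are measured from each particle's starting position; averaging is over
    particles (the SEM across particles would give the error bars used in the figures)."""
    disp = traj - traj[:, :1, :]
    sq = (disp ** 2).sum(axis=-1)
    lags = np.arange(traj.shape[1]) * dt
    return lags, sq.mean(axis=0)        # in 2D, MSD ~ 4*D*t at early times
```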
In silico FRAP
In silico fluorescence recovery after photobleaching (FRAP) studies were performed on the diffusion simulations to quantify emergent dynamics at the mesoscale. 10^5 particles were deployed for a total duration of 0.5 s (10^4 epochs). Circular regions (radius of 0.2 μm) were chosen as the regions of interest (ROIs). In silico photobleaching was performed instantaneously by assigning the photobleached status to all particles inside the ROI. The background was chosen from a uniform-diffusivity setup to ensure that the normalization is standardized. The outward turnover of these particles and the simultaneous inward flux of unbleached particles were captured via t_1/2, the time taken for the number of unbleached particles in the ROI to recover to 50% of its steady-state level (Sprague and McNally, 2005). To dissect particles' behavior during the simulation (in terms of bias towards inhabiting the low-diffusive granules), we calculated the mean dwell time across all particles, per simulation. This involved averaging the periods (of any duration) spent by particles inside low-diffusive granules. For normalization, the total simulation duration (0.5 s) was used.
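A sketch of the t_1/2 readout, assuming the recovery curve (number of unbleached particles in the ROI per frame) has already been extracted from a simulation; using the last 10% of frames as the plateau estimate is an assumption.

```python
import numpy as np

def frap_half_time(t, unbleached_in_roi):
    """t_1/2: first time at which the unbleached-particle count in the ROI reaches 50% of its
    steady-state (late-time plateau) value after an instantaneous bleach at t = 0."""
    plateau = unbleached_in_roi[-max(1, len(unbleached_in_roi) // 10):].mean()
    idx = np.argmax(unbleached_in_roi >= 0.5 * plateau)
    return t[idx]
```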
Figure 1. Low diffusivity leads to accumulation of particles. (A) Particle distribution at various timesteps of a simulation with a step-like lower-diffusivity region. (B) Particle distribution at various timesteps for a simulation with a diffusivity gradient. (C) Steady-state particle distribution for the simulation in (A). (D) Steady-state particle distribution for the simulation in (B). The online version of this article includes the following figure supplement(s) for figure 1:
Figure 2. Interaction-driven clustering is modulated by heterogeneous diffusivity. (A) Progress of a simulation comprising particles possessing weak interactions (k = 0.04 is the interaction strength; see Methods), initialized with a uniform concentration of particles. (B) Progress of a simulation comprising particles possessing strong interactions (k = 0.1), initialized with a uniform concentration of particles. The online version of this article includes the following figure supplement(s) for figure 2:
Figure supplement 1. Particle clustering at different strengths in homogeneous versus heterogeneous diffusivity environments.
Figure 3. Heterogeneous diffusion alters bulk particle motion as measured by in silico microrheology. (A) Mean squared displacement (MSD) versus time for homogeneous diffusion of 10,000 particles in a 5 mm × 5 mm simulation region. (B) Same as (A) for homogeneous diffusion in a more tightly bounded simulation region (1 μm × 0.45 μm). (C) MSD versus time for inhomogeneous diffusion in a diffusivity gradient versus homogeneous diffusion in the extreme diffusivity cases (simulation region size: 1 μm × 0.45 μm). Inset: zoomed region showing differential saturation of the MSD. (D) MSD versus time for inhomogeneous diffusion due to a stepwise diffusivity distribution with diffusivity ratio µ_i/µ_o relative to the bulk (simulation region size: 1 μm × 0.45 μm). In all cases, n = 10,000 particles for MSD calculation (error bars denote SEM). The online version of this article includes the following figure supplement(s) for figure 3: Figure supplement 1. Magnitude and distribution of inhomogeneity in diffusivity affects diffusive lensing. | 6,199.2 | 2024-03-13T00:00:00.000 | [
"Physics",
"Biology"
] |
Research and Development of a 3D Jet Printer for High-Viscosity Molten Liquids
Micro-droplet jetting manufacture is a new 3D printing technology developed in recent years. Presently, this new technology mainly aims at ejecting a low-viscosity medium. Therefore, a device for ejecting high-viscosity molten liquid is designed by analyzing the injection principle of high-viscosity molten liquid. Initially, the cooling mechanism is designed to overcome the defect that the piezoelectric stacks cannot operate in high-temperature conditions. Thereafter, the mathematical model of the liquid velocity in the nozzle is derived, and the factors influencing injection are verified by Fluent. Subsequently, a prototype of the jet printer is fabricated, and the needle velocity is tested by the laser micrometer; the relationship between voltage difference and the needle velocity is also obtained. The experimental results matched the theoretical model well, showing that the voltage difference, needle radius, nozzle diameter, and taper angle are closely related to the injection performance of the 3D jet printer. By using a needle with a radius of 0.4 mm, a nozzle with a diameter of 50 μm, a taper angle of 90°, a supply pressure of 0.05 MPa, and a voltage difference of 98 V, a molten liquid with a viscosity of 8000 cps can be ejected with a minimum average diameter of 275 μm, and the variation of the droplet diameter is within ±3.8%.
Introduction
3D printing technology is the crystallization of three-dimensional modeling and precision machinery via computer numerical control (CNC) technology; specific materials are stacked layer by layer to create a three-dimensional entity identical to the 3D model [1]. So far, stereo lithography appearance [2,3], selective laser sintering [4,5], fused deposition modeling [6,7], powder material selective bonding technology [8,9], and micro-droplet jetting manufacture [10,11] are relatively developed technologies in the field of 3D printing. Micro-droplet jetting manufacture (MDJM) is a new 3D printing technology developed in recent years. The principle of this technology is based on discrete deposition technology, which sprays liquid through a 3D printing device, controls the trajectory of the droplet ejection via the motion platform, accurately sprays the droplet at a specified position, and gradually accumulates into a three-dimensional model. MDJM has many advantages, such as various types of ejection materials, lower maintenance costs, higher printing precision, and more fine-printed parts, compared with other technologies [12]. This technology is also widely used in biomedical manufacturing, three-dimensional micro-structure manufacturing, the microelectronics industry, micro-spacecraft, and other fields [13].
Scholars have conducted extensive research on the theory and technology of MDJM due to its considerable advantages. Dalian University of Technology built a piezoelectric pulse micropore injection system and sprayed droplets of Sn63Pb37 eutectic alloy [14]. An electric field deflection jetting device was developed by Harbin Institute of Technology and successfully sprayed pure metal tin [15]. South Korea's Taik-Min Lee et al. designed a piezoelectric molten solder printing system, wherein the diameter and volume of the solder balls were 60-150 μm and 0.14-1.8 nL, respectively, and various tube and rod metal structures were successfully printed [16,17]. However, the current injection apparatus is mainly used for spraying low-viscosity paraffin, alloy materials, and high molecular polymers, and its sprayable viscosity is generally less than 30 cps [18]. Research on the injection of a high-viscosity molten medium in the 3D printing field is limited.
By analyzing the injection principle of high-viscosity molten liquid, a device for jetting high-viscosity molten liquid, which provides a new method and technology for 3D printing, is designed. The advantages of high precision and fast response of the piezoelectric stacks are applied to jetting high-viscosity molten liquids. Initially, the cooling mechanism is designed to overcome the defect that the piezoelectric stacks cannot operate in high-temperature conditions. Thereafter, the mathematical model of the liquid velocity in the nozzle is derived, and the influencing factors on injection are verified by Fluent. Subsequently, the prototype of the jet printer is fabricated, the displacement of the needle is tested by a laser micrometer, and the influence of voltage on the needle velocity is obtained. Finally, the experimental results corroborate that the voltage difference, needle radius, nozzle diameter, and taper angle are closely related to the injection performance of the jet printer.
Structure of the Device
Figure 1 shows the structure of the jet printer, which comprises the piezoelectric stack, drive frame, lever, heat insulation, heat sink, heater, needle, and nozzle. The piezoelectric stack is sintered from many ceramic sheets. It is the power source of the device owing to its high-frequency vibration after receiving the square wave signal. The jet printer mainly uses a lever to amplify the displacement of the piezoelectric stack, and the needle moves together with the lever. The liquid chamber is heated by the heater to melt the viscous liquid, and the heat will be transferred to the piezoelectric stack, which also generates heat when working. This phenomenon will cause damage to the piezoelectric stack if the surface temperature is too high. Therefore, insulation is used to protect the piezoelectric actuator from the heat induced by the heater. Moreover, the heat sink and cooling holes are used to dissipate the heat generated by the piezoelectric stack.
Figure 2 shows that the jetting principle of the jet printer can be divided into the following three steps:
1. In the normal state, the piezoelectric stack is powered on, and the needle and the nozzle are in a close-fit state with the role of sealing, which can effectively prevent the fluid in the chamber from flowing out through the nozzle orifice ( Figure 2a).
2. The needle separates from the nozzle under the restoring force of the spring when the piezoelectric stack is powered off. The fluid in the chamber is transported to the gap between the needle and nozzle under pressure (Figure 2b).
3. The needle moves down while an electric signal is applied to the piezoelectric stack, and instantaneous high pressure can be formed between the needle and nozzle. The high pressure generated will force the fluid at the nozzle orifice to overcome the fluid force, thereby realizing the injection (Figure 2c).
Thermodynamic Analysis
Piezoelectric stacks can convert electrical energy into mechanical energy. However, a large amount of heat is generated when operating at high frequencies because of the internal friction during operation. The amount of heat generated can be expressed as [19]: where P is the heat value generated, tan δ is the dielectric loss coefficient, f is the driving frequency, C is the equivalent capacitance of the piezoelectric stack, and U_PP is the peak-to-peak value of the applied voltage. To ensure the stable working state of the piezoelectric stack, the long-term working temperature should be less than 50% of the Curie temperature [20] (i.e., the temperature of the piezoelectric stack should be less than 175 °C when working). During operation, both the heater and the heat generated by the piezoelectric stack itself can increase the temperature of the piezoelectric stack. Therefore, the stack must be properly cooled to prevent it from being damaged by overheating.
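The exact expression from Ref. [19] is not reproduced in this extraction. The sketch below therefore uses a commonly quoted approximation for piezo-stack self-heating, P ≈ (π/4)·tan δ·f·C·U_pp², only to show how the listed quantities combine; the numerical values are illustrative, not the parameters of this device.

```python
import math

def piezo_heat_w(tan_delta, f_hz, c_farad, u_pp_volt):
    """Rough self-heating estimate P ~ (pi/4) * tan(delta) * f * C * Upp^2 (a common
    approximation for piezo stacks; not necessarily the exact formula of Ref. [19])."""
    return math.pi / 4 * tan_delta * f_hz * c_farad * u_pp_volt ** 2

# e.g. tan(delta) = 0.02, 300 Hz drive, 3 uF stack capacitance, 100 V peak-to-peak:
print(piezo_heat_w(0.02, 300.0, 3e-6, 100.0))   # ~0.14 W of heat to be removed by the heat sink
```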
Thermodynamic simulation is then carried out, and a proper structure is designed to reduce the heat of the piezoelectric stack when operating at high frequency. The model of the jet printer is imported into Ansys Workbench (V14.5) (Ansys, Inc., Canonsburg, PA, USA); its main parts are composed of 45-section structural steel, the nozzle and liquid heater temperatures are set to 180 °C and 120 °C separately, the ambient temperature is set to 25 °C, the convection coefficient is set to 5 W/(m²·K), and the piezoelectric stack heating coefficient is set to 2.04 × 10^6 W/m³.
The thermodynamic simulation of the jet printer without the cooling mechanism is shown in Figure 3a. The temperature distribution of the piezoelectric stack is between 188 °C and 208 °C. When working in this condition, the piezoelectric stack will incur damage. As shown in Figure 3b, a layer of PEEK insulation is added between the drive frame and liquid chamber to reduce the heat transferred by the heater. A layer of heat sink is placed around the piezoelectric stack, and air flows through its interior. The temperature distribution of the piezoelectric stack is between 125 °C and 132 °C, and the temperature of the piezoelectric stack is reduced to a suitable value. The thermodynamic analysis of the jet printer reveals that the cooling mechanism can be designed to effectively reduce the temperature during the operation of the piezoelectric stack. Thus, even the limit condition can be satisfied.
Theoretical Analysis of Jetting
Many scholars have analyzed the principle of droplet ejection. The inertial force obtained by the droplet must overcome the viscous drag and surface tension, so the fluid velocity must exceed a critical value for the adhesive solution to achieve injection. A large fluid velocity allows the droplet to gain a large kinetic energy, making it easier for the fluid to overcome its own viscous force and spray out; this is especially important when the fluid viscosity is large.
Owing to the high viscosity of the fluid, all the fluid flow is assumed to be in the laminar regime. Figure 4 shows a geometric model of the needle and nozzle. The model is used to analyze the state of liquid movement during the collision of the needle with the nozzle. During the impact of the needle against the nozzle, a portion of the molten liquid between the needle and the nozzle will be ejected from the nozzle orifice, and the remaining liquid will flow back to the liquid storage chamber. The law of mass conservation gives the following: where Q is the reduced flow between the needle and the nozzle when the needle moves downward, Q_g is the flow rate of the returning liquid, and Q_h is the flow rate in the nozzle orifice. When the needle moves downward at velocity v_s, the reduced flow between the needle and the nozzle is given by an expression in which R is the needle radius and θ is the nozzle taper angle. When the needle nearly hits the nozzle, the gap S_g between the needle and the nozzle is infinitely close to 0 and Q_g tends to 0 as S_g → 0; therefore, the orifice flow is obtained, where d is the diameter of the nozzle orifice and V is the velocity of the droplet in the nozzle orifice. Therefore, when the needle hits the nozzle, the velocity of the droplet in the nozzle orifice can be determined by Equations (3) and (4).
Equation (5) shows that the ejection velocity of the droplet is influenced by the needle velocity, needle radius, nozzle diameter, and taper angle.
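Equations (1)-(5) themselves are not reproduced in this extraction. The deliberately simplified continuity estimate below (all of the volume swept by the needle leaves through the orifice; the backflow Q_g and the taper angle of Equation (5) are ignored) is only meant to illustrate the stated trends, and it overestimates the true orifice velocity.

```python
def orifice_velocity(v_s, R, d):
    """Upper-bound estimate from continuity: liquid displaced by the needle (area pi*R^2,
    speed v_s) all leaving through the orifice (area pi*d^2/4) gives V ~ 4*R^2*v_s/d^2.
    Backflow into the chamber and the nozzle taper angle are neglected here."""
    return 4.0 * R ** 2 * v_s / d ** 2

# With the simulation boundary conditions quoted in the following section (R = 0.75 mm, v_s = 1.5 m/s, d = 100 um):
print(orifice_velocity(1.5, 0.75e-3, 100e-6))   # ~337 m/s, an upper bound; real jets are far slower
```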
Analysis of Jetting
According to the theoretical analysis of jetting, the velocity of the liquid in the nozzle orifice is mainly related to the impact velocity of the needle, the structural parameters of the needle, and the nozzle. Fluent (V6.3) (Ansys, Inc., Canonsburg, PA, USA) is used to simulate and analyze the influence of the aforementioned parameters on the flow rate in the nozzle orifice.
The geometric model shown in Figure 4 is introduced to Fluent for simulation analysis. Polyurethane molten liquid with a temperature of 120 °C is used as the simulation object. The density is set to 1300 kg/m³, the viscosity is set to 8000 cps, the surface tension coefficient is set to 20 mN/m, and the yield stress is set to 150 Pa in the simulation software. Point P1 is set at the outlet of the nozzle orifice to monitor the velocity of the flow. The inlet and outlet pressures are set to 2 × 10^5 Pa and 1.01 × 10^5 Pa, respectively. The needle radius R and velocity V_s are set to 0.75 mm and 1.5 m/s, the nozzle diameter d and taper angle θ are set to 100 µm and 90°, and the length of the nozzle tip L is set to 0.3 mm as boundary conditions.
According to the simulation, as the needle impacts, the velocity of the molten liquid at point P1 at different times is shown in Figure 5. The figure shows that, during the downward impacting of the needle, the liquid in the gap between the nozzle and the needle flows upward, the liquid in the nozzle orifice flow downwards, and the flow velocity at point P1 gradually becomes large.
Thereafter, the flow rate at point P1 will be simulated at the instant of the needle striking the nozzle (at t4) to analyze the influence of these parameters on the injection.
Simulation of the Needle Radius
The radius of the needle is set to 0.4, 0.5, 0.6, 0.75, and 1 mm. Figure 7 depicts the relationship between the flow velocity of the molten liquid at point P1 and the needle radius obtained by simulation. The figure shows that, when the needle hits the nozzle, the velocity of the flow at the P1 point gradually increases with the needle radius. This finding is due to the large volume of liquid wrapped between the needle and the nozzle when the needle radius is increased. The pressure generated between the needle and the nozzle becomes large when the needle hits the nozzle. More liquid also flows out in unit time, and the flow velocity in the nozzle becomes large.
Simulation of the Nozzle Diameter
The diameter of the nozzle is set to 50, 60, 75, 100, 150, and 200 μm, and the relationship between the flow velocity of the molten liquid at point P1 and the nozzle diameter obtained by simulation is shown in Figure 8. The figure shows that, when the needle hits the nozzle, the flow velocity at the P1 point gradually decreases with the increase in the nozzle diameter. This finding can be attributed to the increase in the nozzle diameter, which decreases the resistance of the molten liquid, resulting in small pressure generated by the needle impact; accordingly, the velocity of the flow in the nozzle is reduced.
Simulation of the Nozzle Taper Angle
The taper angle of the nozzle is set to 60°, 75°, 90°, 120°, and 150°. Figure 9 shows the relationship between the flow velocity of the molten liquid at point P1 and the nozzle taper angle obtained by simulation. The figure shows that, when the needle hits the nozzle, the velocity of the flow at the P1 point gradually decreases with the increase in the nozzle taper angle. This finding is due to the small pressure generated by the impact of the needle when the taper angle of the nozzle is large. Moreover, the molten liquid flowing out from the nozzle orifice and the gap is reduced in unit time, and the velocity of the flow in the nozzle is reduced accordingly.
Fluent is used to demonstrate the effect of the needle velocity, the needle radius, and the nozzle geometry on the injection of the molten liquid in jetting. The injection velocity of the molten droplet is positively correlated with the velocity and the radius of the needle and negatively correlated with the diameter and taper angle of the nozzle.
Experiment on the Needle Velocity
As indicated above, the needle velocity has an important influence on the injection of molten liquid. However, the velocity of the needle in the jet printer is controlled by the driving power of the piezoelectric stack. A test bench for needle movement is built to obtain the relationship between the needle velocity and the voltage difference generated by the driving power. Figure 10 exhibits that the test bench is composed of a driving power, a 3D jet printer, a laser micrometer (Keyence, LK-H020, Keyence Corporation, Osaka, Japan), and a computer. The piezoelectric stack vibrates under the driving power and drives the needle to move through the lever amplification system.
As indicated above, the needle velocity has an important influence on the injection of molten liquid. However, the velocity of the needle in the jet printer is controlled by the driving power of the piezoelectric stack. A test bench for needle movement is built to obtain the relationship between the needle velocity and the voltage difference generated by the driving power. Figure 10 exhibits that the test bench is composed of a driving power, a 3D jet printer, a laser micrometer (Keyence, LK-H020, Keyence Corporation, Osaka, Japan), and a computer. The piezoelectric stack vibrates under the driving power and drives the needle to move through the lever amplification system. Figure 11 shows the motion characteristic curve of the needle when the voltage difference is 100 V and the high and low voltage times are both 1.5 ms. The ordinate represents the relative displacement of the needle and the abscissa represents the points collected by the laser micrometer (which can be converted to time). The rising edge represents the upward movement of the needle, and its movement time can be calculated by the chart. Cursor A is at a high level and cursor B is at a low level. The difference between the ordinates of the cursor A and B is the absolute displacement of the needle. The needle velocity can be obtained via absolute displacement divided by the time of the rising edge. Then, the experiment of the needle velocity will be carried out in different voltage differences. Figure 12 depicts that the velocity curve of the needle at different voltage difference is obtained after several measurements. The figure also shows that, as the voltage difference increases, the needle velocity increases accordingly. Figure 11 shows the motion characteristic curve of the needle when the voltage difference is 100 V and the high and low voltage times are both 1.5 ms. The ordinate represents the relative displacement of the needle and the abscissa represents the points collected by the laser micrometer (which can be converted to time). The rising edge represents the upward movement of the needle, and its movement time can be calculated by the chart. Cursor A is at a high level and cursor B is at a low level. The difference between the ordinates of the cursor A and B is the absolute displacement of the needle. The needle velocity can be obtained via absolute displacement divided by the time of the rising edge. Figure 11 shows the motion characteristic curve of the needle when the voltage difference is 100 V and the high and low voltage times are both 1.5 ms. The ordinate represents the relative displacement of the needle and the abscissa represents the points collected by the laser micrometer (which can be converted to time). The rising edge represents the upward movement of the needle, and its movement time can be calculated by the chart. Cursor A is at a high level and cursor B is at a low level. The difference between the ordinates of the cursor A and B is the absolute displacement of the needle. The needle velocity can be obtained via absolute displacement divided by the time of the rising edge. Then, the experiment of the needle velocity will be carried out in different voltage differences. Figure 12 depicts that the velocity curve of the needle at different voltage difference is obtained after several measurements. The figure also shows that, as the voltage difference increases, the needle velocity increases accordingly. 
Then, the needle velocity experiment is carried out at different voltage differences. Figure 12 depicts the velocity curve of the needle at different voltage differences, obtained after several measurements. The figure also shows that, as the voltage difference increases, the needle velocity increases accordingly. Given that the needle velocity is positively correlated with the voltage difference, the following experiments use the voltage difference instead of the needle velocity as a variable and always set the high and low voltage time to 1.5 ms.
Experiment on Influencing Factors of Injection
Through the previous theoretical analysis and the testing of the needle velocity, spraying molten droplets of high viscosity is possible by setting different needle radii, nozzle diameters, and taper angles and by adjusting the driving voltage. Therefore, the experiments will be separately performed for each parameter, and its effect on spraying high-viscosity molten liquid will be verified.
The medium used in this test is polyurethane (Loctite, Düsseldorf, Germany, model: 3542), which is solid at normal temperature and has a viscosity of 8000 cps after heating to 100 °C. Figure 13 shows that the experimental system includes a 3D jet printer, piezoelectric driving power, motion platform, high-precision electronic scale (Sartorius, BT125D, Goettingen, Germany, range: 120 g, resolution: 0.01 mg), image measuring instrument (Wanhao, VMS-1510F, Odessa, FL, USA, precision: 2 μm), and a pressure supply system.
The motion platform drives the 3D jet printer according to the set program. When the specified position is reached, the motion platform sends a signal to the piezoelectric driving power. Thereafter, the driving power sends a pulse signal and drives the piezoelectric stack, and the droplets are ejected from the printer onto the substrate.
In the following experiments, the liquid and nozzle heater are set to 100 °C and 120 °C, respectively, and the air pressure supply is set to 0.2 MPa in the whole process. The weight of the droplets in this paper is based on the injection of 1000 droplets; the total weight is measured by a precision electronic scale, and the average weight of a single droplet is obtained by calculation. The diameter of a droplet that is cooled and solidified on the substrate is measured by an image measuring instrument.
Experiment on the Voltage Difference
The needle velocity can be controlled by adjusting the different voltage parameters. The selected nozzle and needle have a diameter of 100 µm, a taper angle of 90°, and a radius of 0.75 mm, respectively. Different voltage differences are set for jetting experiments, and the droplet weight is measured and recorded. The relationship between the droplet weight and the voltage difference is plotted (Figure 14).
Figure 14 shows that, when 75 V < voltage difference < 120 V, the weight of the molten droplet is positively correlated with the voltage difference. When the voltage difference is large, the moving velocity of the needle and the inertia force obtained by the droplet will also be large, and the weight of the ejected droplets will increase accordingly. When the voltage difference is less than 75 V, the molten droplets cannot be detached from the nozzle to achieve jetting but are deposited at the nozzle orifice. The ejection cannot be formed because the inertial force of the droplet cannot overcome its own viscous resistance when the voltage difference is reduced to a certain value.
Experiment on the Needle Radius
The nozzle with a diameter of 100 μm and a taper angle of 90° is selected, and the voltage difference is set to 100 V. The needles with radii of 0.4, 0.5, 0.75, and 1 mm are selected for the jetting experiment. The weight of the sprayed droplets is then measured.
Then, the other parameters are kept unchanged for another experiment, the minimum voltage difference that can form a stable ejection under each needle is adjusted and recorded. The relationship between the droplet weight, the minimum voltage difference that can form a stable ejection, and the needle radius is plotted (Figure 15).
The figure shows that, when the other parameters are the same, a large needle radius results in sprayed droplets with large weight and requires a small voltage difference to form a stable ejection, that is, the ability to jet is strong. The experimental conclusions are consistent with those obtained from the previous analysis.
Experiment on the Nozzle Diameter
The needle with the radius of 0.75 mm is selected, and the voltage difference is set to 100 V. The nozzles with the diameter of 50, 75, 100, and 150 μm and taper angle of 90° are selected for jetting experiment. The weight of the sprayed droplets is then measured.
Then, the other parameters are kept unchanged for another experiment, the minimum voltage difference that can form a stable ejection under each nozzle is adjusted and recorded. The relationship between the droplet weight, the minimum voltage difference that can form a stable ejection, and the nozzle diameter is plotted (Figure 16).
The figure shows that, when the other parameters are the same, a large nozzle diameter results in sprayed droplets with large weight and requires a large voltage difference to form a stable ejection, that is, the ability to jet is weak. The experimental conclusions are consistent with those obtained from the previous analysis.
Experiment on the Nozzle Taper Angle
The needle with the radius of 0.75 mm is selected, and the voltage difference is set to 100 V. The nozzles with taper angles of 60°, 90°, and 120° with a diameter of 75 µm are set for the jetting experiment. The weight of the sprayed droplets is measured and recorded.
Then, the other parameters are kept unchanged for another experiment, the minimum voltage difference that can form a stable ejection under each nozzle is adjusted and recorded. The relationship between the droplet weight, the minimum voltage difference that can form a stable ejection, and the nozzle taper angle is plotted (Figure 17).
The figure shows that, when the other parameters are the same, a large nozzle taper angle results in sprayed droplets with small weight and requires a large voltage difference to form a stable ejection, that is, the ability to jet is weak. The experimental conclusions are consistent with those obtained from the previous analysis.
Minimum Droplet and Consistency Analysis
The minimum molten droplet determines the injection resolution of the jet printer. According to the previous analysis and experiments, the dimension of the droplet is affected by the voltage difference, needle radius, nozzle diameter, and taper angle. These factors also affect the injection capability of the jet printer. Table 1 shows the specific relationship ("↑" represents positive correlation, and "↓" represents negative correlation).
The minimum droplet can only be obtained by selecting the optimum configuration and adjusting the appropriate parameters. In the next experiment, 10 droplets are ejected under each set of configurations and parameters in Table 2; every droplet is measured, and the droplet diameter ultimately reported is the calculated average value. Table 2 shows that the needle with a radius of 0.4 mm, the nozzle with a taper angle of 90° and a diameter of 50 μm, a minimum voltage difference of 98 V, and a supply pressure of 0.05 MPa can obtain the minimum molten droplet, with a minimum average diameter of 275 μm.
Then, the consistency of the droplets is analyzed. 100 droplets were deposited on the substrate with the configuration that produced the minimum droplet. Figure 18 shows a 10 × 10 droplet array with an average diameter of 275 μm, and the variation of the droplet diameter was within ±3.8%. The simple patterns shown in Figure 19 are all printed on the basis of the minimum droplets by the 3D jet printer described herein.
Conclusions
A piezoelectric 3D jet printer for high-viscosity molten liquid was newly devised in this study. Through theoretical analysis, simulation analysis, and experimental research, the influence of the needle velocity, needle radius, nozzle diameter, and taper angle on the jetting performance of high-viscosity molten liquid was verified. This study found the following:
1. The defect that the piezoelectric stacks cannot operate in high-temperature conditions can be solved by the cooling mechanism designed in this paper.
2. The experiments verified that the velocity of the needle is positively correlated with the voltage difference of the piezoelectric stacks.
3. Through simulation analysis and experimental research, the ejection capacity of the jet printer is positively correlated with the velocity and the radius of the needle and negatively correlated with the diameter and taper angle of the nozzle.
4. Through experimental comparison, by using a needle with a radius of 0.4 mm, a nozzle with a diameter of 50 µm, a taper angle of 90°, a supply pressure of 0.05 MPa, and a voltage difference of 98 V, a molten liquid with a viscosity of 8000 cps can be sprayed with a minimum average droplet diameter of 275 µm, and the variation of the droplet diameter was within ±3.8%.
In this study, the experimental medium used is a type of polyurethane. For the next step, the focus should be on the effect of other high-viscosity molten liquids that have not been used for jetting in the 3D printing field before. | 11,565.4 | 2018-10-28T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
An Authenticated Key Agreement Scheme Based on Cyclic Automorphism Subgroups of Random Orders
Group-based cryptography is viewed as a modern cryptographic candidate solution to blocking quantum computer attacks, and key exchange protocols on the Internet are one of the primitives to ensure the security of communication. In 2016, Habeeb et al. proposed a "textbook" key exchange protocol based on the semidirect product of two groups, which is insecure for use in real-world applications. In this paper, after discarding the unnecessary disguising notion of semidirect product in the protocol, we establish a simplified yet enhanced authenticated key agreement scheme based on cyclic automorphism subgroups of random orders by making hybrid use of certificates and symmetric-key encryption as challenge-and-responses in the public-key setting. Its passive security, which is relative to the cryptographic hardness assumption of a computational number-theoretic problem, is formally analyzed. Cryptanalysis of this scheme shows that it is secure against the intruder-in-the-middle attack even in the worst case of compromising the signatures, and provides explicit key confirmation to both parties.
Introduction
Cryptographic techniques are an essential tool to guarantee the security of communication in modern society. Today, the security of nearly all of the cryptographic schemes used in practice is based on number-theoretic problems [1,2,3]. However, schemes like these will become insecure as soon as large enough quantum computers arrive. The reason for this is Shor's algorithm [4], which solves number-theoretic problems like integer factorization and discrete logarithms in polynomial time on a quantum computer. Therefore, one needs alternatives to the classical public-key schemes, namely schemes based on mathematical problems that are not affected by quantum computer attacks.
Group-based cryptography is one of the main candidates for this [3,5,6] .
In 2016, using the semidirect product of two groups, Habeeb et al. [7] proposed a key exchange protocol (the HKKS protocol) based on the work of [8, 9]. Unlike the operating principles of the existing Diffie-Hellman-like protocols, its basic passive security rests on a stronger computational group-theoretic assumption than the current hardness assumptions for discrete logarithm problems.
However, the HKKS protocol is still a "textbook" key exchange protocol, which is not actually suitable for use in real-world applications due to its lack of any oracle interaction among users (public key owners) and an attacker [10]. In this paper, having discarded the unnecessary disguising notion of semidirect product in the HKKS protocol, we will establish a simplified yet enhanced authenticated key agreement scheme (denoted the HYZ scheme) based on cyclic automorphism subgroups of random orders, which includes mutual identification of Alice and Bob. Our contributions are: 1) The passive security of the HYZ scheme, which is relative to the cryptographic hardness assumption of a computational number-theoretic problem, is analyzed in terms of formal security terminology. Additionally, the selection of protocol parameters for passive security is analyzed.
2) To guarantee its active security, we utilize the "encryption-then-signature" mode to provide twofold protection for the protocol messages. We show that the HYZ scheme is secure against the intruder-in-the-middle attack even in the worst case of compromising the signatures, and attains the highest level of assurance regarding key agreement, i.e., explicit key confirmation to both parties [11]: A is assured that B has computed the shared symmetric key , and no one other than B can compute .
The organization of the paper is as follows. In Section 2, the HYZ scheme is proposed. In Section 3, the shared key formula is proven and its passive and active security properties are discussed. Conclusions are given in Section 4.
In this section, we describe the HYZ scheme, utilizing a kind of public-key infrastructure (PKI) [11,12]. Our strategy is to make hybrid use of certificates, which are signed by a TA (Trusted Authority), and symmetric-key encryption as challenge-and-responses in the public-key setting. Each user U has a digital signature function with verification algorithm . The TA also has a signature scheme with a public verification algorithm . The verification algorithms are compiled and made public by the TA, who certifies that is actually the verification algorithm for U and not for any malicious attacker Mallory. Each user U has a certificate where ID(U) is certain identification information for U. The public domain parameters consist of a group ( ) ⋅ , a given element ∈ with order > and a given element . Suppose that Alice and Bob want to establish a symmetric key to use in an encryption function .
∈ means [10] that the element is sampled uniformly at random from the set .
5) Submitting ( ) to TA, Alice asks TA to verify that is Bob's verification algorithm.
6) Alice uses to verify Bob's signature in Step 4). If the signature is not valid, then she "rejects" and quits. Otherwise, she "accepts", computes and sends the following to Bob:
8) Submitting ( ) to TA, Bob asks TA to verify that is Alice's verification algorithm. Finally, Bob uses to verify Alice's signature in Step 7). If the signature is not valid, then he "rejects" and quits. Otherwise, he "accepts".
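To make the message flow above concrete despite the symbols lost from the extracted text, the following is a minimal, hypothetical Python sketch of an authenticated Diffie-Hellman-style exchange of the kind the HYZ scheme generalizes (Section 3 notes that standard Diffie-Hellman is a special case). The toy prime, the generator, the dictionary standing in for TA-issued certificates, and the HMAC-based stand-in for real digital signatures are illustrative assumptions, not the actual HYZ construction with cyclic automorphism subgroups.

```python
# A minimal, hypothetical sketch (not the HYZ construction itself): an
# authenticated Diffie-Hellman-style exchange in which each ephemeral value is
# checked against material registered with a trusted authority (TA).
import hashlib
import hmac
import secrets

P = 2**127 - 1   # toy Mersenne prime standing in for the public group modulus (NOT secure)
G = 3            # fixed public base element

def sign(key: bytes, message: bytes) -> bytes:
    """Stand-in for a real signature scheme: a keyed MAC over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(key, message), tag)

# "TA-certified" verification material for each user (a real PKI would use
# certificates signed by the TA rather than a shared lookup table).
ta_registry = {"Alice": secrets.token_bytes(32), "Bob": secrets.token_bytes(32)}

# Alice picks a random exponent and sends her signed ephemeral value.
a = secrets.randbelow(P - 2) + 1
alice_pub = pow(G, a, P)
alice_msg = alice_pub.to_bytes(16, "big")
alice_sig = sign(ta_registry["Alice"], alice_msg)

# Bob checks the signature via the TA's record, then replies in kind.
assert verify(ta_registry["Alice"], alice_msg, alice_sig), "reject and quit"
b = secrets.randbelow(P - 2) + 1
bob_pub = pow(G, b, P)
bob_msg = bob_pub.to_bytes(16, "big")
bob_sig = sign(ta_registry["Bob"], bob_msg)

# Alice verifies Bob's signature; both parties then derive the key.
assert verify(ta_registry["Bob"], bob_msg, bob_sig), "reject and quit"
key_alice = pow(bob_pub, a, P)
key_bob = pow(alice_pub, b, P)
assert key_alice == key_bob   # analogue of Theorem 1: both hold the same key
```

In the HYZ scheme itself, the exponentiation is replaced by the action of cyclic automorphism subgroups of random orders, and the protocol messages are additionally encrypted before being signed (the "encryption-then-signature" mode), which is what yields the explicit key confirmation discussed below.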
Proof of the Shared Key
Theorem 1: Alice and Bob share the same symmetric key , i.e., by the definition in [13], one has = . This completes the proof.
The Passive Security Properties of the HYZ Scheme
Taking ( ) φ = , we see that the standard Diffie-Hellman protocol is a special case of the HYZ scheme, and so it provides heuristic evidence of the basic security of the HYZ scheme.
From the random numbers ∏ would be limited (we call this phenomenon "falsely big data").
The basic security of the HYZ scheme is based on the cryptographic hardness assumption of the following computational number-theoretic problem: Given φ , and , it is computationally infeasible to compute . More specifically, consider the following experiment for a pair of group-generation algorithms ( ) provides for mutual identification of A and B. This in turn thwarts the intruder-in-the-middle attack. Next, we shall show that the HYZ scheme provides explicit key confirmation [11].
The HYZ scheme is established involving mutual identification in the public-key setting.So, if an adversary is active, he will be detected by the honest participants in the session.
Using ) . Now, assuming that B executed the scheme according to its specifications, and the signature is valid in step 6), A can infer that B has computed the value of . The analysis from the point of view of B is similar. Summarizing the discussion above, we have established the following theorem.
Theorem 2: The HYZ scheme is an authenticated key agreement scheme that provides explicit key confirmation to both parties, assuming that the problem is intractable.
Conclusion
Based on the HKKS protocol and cyclic automorphism subgroups of random orders, we proposed the HYZ scheme and proved its shared key formula. Two necessary conditions for its passive security are analyzed: the order of the automorphism ( ) φ ∈ and the period of φ with respect to should be chosen large enough to thwart exhaustive key search and avoid the phenomenon of "falsely big data", respectively. Furthermore, we conducted the ( ) A G experiment, defined the problem and stated the assumption in terms of formal security terminology. To guarantee its active security, we utilized twofold protection for the protocol messages. It was shown that the HYZ scheme is secure against the intruder-in-the-middle attack even in the worst case of compromising the signatures, and is an authenticated key agreement scheme that provides explicit key confirmation to both parties, assuming that the problem is intractable.
Compared with the HKKS protocol, the security of the HYZ scheme has been improved in a number of aspects. Future work includes cryptanalysis of the HYZ scheme's resistance to the "linear algebra attack" mounted by Roman'kov [14].
1) Alice chooses a random number ∈ { } − . Then she computes and sends the following to Bob:
2) Bob chooses a random number ∈ { } − . Then he computes and sends the following to Alice:
One of the necessary conditions for passive security of the HYZ scheme is that the order should be chosen large enough to thwart exhaustive key search. The two cyclic automorphism subgroups of ( ) of random orders involved in the HYZ scheme are φ element of group ). Another necessary condition for passive security of the HYZ scheme is that the period should be chosen large enough; otherwise, whenever ≥ , we have ( ) | 1,925 | 2017-01-01T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Internet of things issues related to psychiatry
Background Internet of Things (IoT) devices for remote monitoring, diagnosis, and treatment are widely viewed as an important future direction for medicine, including for bipolar disorder and other mental illness. The number of smart, connected devices is expanding rapidly. IoT devices are being introduced in all aspects of everyday life, including devices in the home and wearables on the body. IoT devices are increasingly used in psychiatric research, and in the future may help to detect emotional reactions, mood states, stress, and cognitive abilities. This narrative review discusses some of the important fundamental issues related to the rapid growth of IoT devices. Main body Articles were searched between December 2019 and February 2020. Topics discussed include background on the growth of IoT, the security, safety and privacy issues related to IoT devices, and the new roles in the IoT economy for manufacturers, patients, and healthcare organizations. Conclusions The use of IoT devices will increase throughout psychiatry. The scale, complexity and passive nature of data collection with IoT devices presents unique challenges related to security, privacy and personal safety. While the IoT offers many potential benefits, there are risks associated with IoT devices, and from the connectivity between patients, healthcare providers, and device makers. Security, privacy and personal safety issues related to IoT devices are changing the roles of manufacturers, patients, physicians and healthcare IT organizations. Effective and safe use of IoT devices in psychiatry requires an understanding of these changes.
Background
The era of the Internet of Things (IoT) has arrived, where smart, connected technologies are being embedded in everyday objects such as cars, toothbrushes, washing machines, and physical infrastructure on a massive scale. The use of IoT devices for remote monitoring, diagnosis and treatment, is viewed as an important way to improve and expand individualized medical care and assist with lowering costs, including for bipolar disorder and other mental illness (Deloitte 2018;de la Torre Díez et al. 2018). While there is no standard definition, the IoT describes "the extension of network connectivity and computing capability to objects, devices, sensors and items not ordinarily considered to be computers" (Internet Society 2015). IoT devices can be thought of as physical devices with embedded technology that can sense, generate, store, and send data, and sometimes respond to commands via actuators that can modify the physical world. Increasingly, IoT devices will be installed in the home for medical purposes as selected by patients or recommended by physicians.
Today, a diverse range of IoT devices are found in homes, retail businesses, public spaces, hospitals and healthcare facilities, vehicles, utility infrastructure, and are directly worn by consumers. Virtually every consumer electronics device is now sold as a connected IoT device (NIST 2019). The scale of the IoT is unprecedented, with estimates of 30 billion connected devices by 2020 (Nordrum 2016), and that half the total global Internet traffic will be machine-to-machine connections by 2022 (Cisco 2019). About 71% of homes in North America, and 57% in Western Europe have at least one IoT device (Kumat et al. 2019). The scale, complexity and passive nature of data collection creates many new and unique challenges for the use of IoT devices in psychiatry, with functions for detection of emotion, mood state, stress, activity patterns and cognitive skills (Glenn and Monteith 2014;Abdullah and Choudhury 2018;APA 2019). This paper will discuss IoT issues related to psychiatry and general medicine with examples for bipolar disorder, including the major challenges to security, safety and privacy, and the complex impacts on manufacturers of IoT devices, patient users, and healthcare organizations.
IoT background
A confluence of factors led to the rapid increase in IoT devices (Internet Society 2015;GAO 2017). The expansion and decreasing costs of multiple types of networks (e.g. broadband, cellular, and short range wireless networks including Wi-Fi, Bluetooth, Zigbee) led to near ubiquitous connectivity. Inexpensive miniaturization of electronics enabled the development of parts, such as sensors, that fit in very small objects, including biosensors for healthcare monitoring (Kim et al. 2019). Cloud computing allowed distributed IoT devices to interact with back-end processing centers for data management and storage. New data analytic techniques allowed aggregation and analysis of the large volumes of data created by IoT devices. The fundamental Internet Protocol (IPv6) was updated to vastly increase the number of available network addresses. Finally, new business models were developed for the IoT, based on data collection.
A typical home network consists of a wireless router connected to the Internet. IoT devices are connected to the wireless router either directly or indirectly through a hub device. Although a smartphone or tablet app may be used to initially configure the IoT device, the data collected by the IoT device are sent using the wireless router to a server at the manufacturer or IoT service provider. An IoT device for home use contains electronics for data collection, often involving sensors, cameras, and microphones. Some IoT devices can subsequently be managed by a smartphone app or website. Examples of the variety of IoT devices available for home, consumer health and fitness, and approved medical IoT devices are shown in Table 1.
In psychiatric research, IoT devices are often wearables, such as wristwear, clothing, belts and body patches, containing sensors to measure physical activity and heart rate variability. The sensor data from the wearables may be combined with other data sources, and used to classify emotional reactions, mood states and stress in various psychiatric disorders (Reinersten and Clifford 2018; Zhu et al. 2019). Examples of research involving IoT devices and bipolar disorder are shown in Table 2, with studies using activity patterns to distinguish bipolar disorder from other diagnoses, and heart rate variability to predict mood state.
Security challenges
There are security challenges with IoT devices that differ from those involving traditional computers. Many IoT devices are battery powered, and have severe constraints on power, memory, and processing resources. These devices lack the capacity to run conventional operating systems, and to support encryption or anti-virus software (IoT Cybersecurity Alliance 2017; Bacceli et al. 2013). Many IoT devices lack a software upgrade process, or have only a very cumbersome process to upgrade (GAO 2017). IoT devices that are embedded in products or systems may be inaccessible. Many IoT devices are never rebooted, have a service life much longer than for traditional computer equipment, and could contain obsolete or dangerous hardware and software (Intel 2016). A poorly secured IoT device may potentially affect the security of every interconnected device, local and remote (Internet Society 2015). This allows hackers to target nontraditional devices such as a television or refrigerator, both to exploit home networks and launch an external cyberattack (NSA 2016). Collecting data using cloud computing also presents many potential opportunities for data mismanagement and improper security controls (GAO 2017).
Many FDA-approved medical devices have a long life span and were developed before the era of interconnectivity and the need for cybersecurity. Most digital devices approved by the FDA would today be considered IoT devices. The FDA now recommends monitoring cybersecurity throughout the entire product life-cycle (FDA 2016a). If cybersecurity issues require a software or firmware update, the device manufacturer is responsible for updates to address the cybersecurity risk (FDA 2020a). Changes solely to strengthen cybersecurity typically do not need FDA review and should be performed routinely (FDA 2016a, b, 2020a), but implementation is often delayed with so many diverse stakeholders (Woods et al. 2019). However, if the software or firmware changes affect the device safety or effectiveness, FDA approval is required prerelease (FDA 2016a). The FDA has adopted a premarket submission standard to demonstrate steps taken to mitigate cybersecurity risks (UL 2018), requires a unique device identifier (FDA 2019a) and has plans to adopt other measures to improve medical device safety (FDA 2018). Cybersecurity is an international problem and starting in 2020, new European Union Medical Devices Regulation will tighten regulatory controls, increase device traceability throughout the supply chain, and require ongoing post-market surveillance (McDonough 2019).
Safety challenges
Some medical IoT devices have the potential to directly endanger the safety of the owners (GAO 2017). Safety and security concerns of IoT devices are interconnected, as poor security impacts safety and safety violations may impact security (Zalewski et al. 2019). Although the FDA has no confirmed reports of patient harm due to a cybersecurity incident involving a medical device (FDA 2020a), the FDA has released 11 safety warnings since 2013 involving insulin pumps, implanted cardiac devices, cardiac monitors, infusion pumps and central patient monitoring displays (FDA 2020a). In 2020, the FDA identified 12 cybersecurity vulnerabilities with Bluetooth Low Energy wireless technology, a communications protocol used in medical devices from several manufacturers (FDA 2020b; DHS 2020). While patients want to be told of cybersecurity risks with medical devices (FDA 2019b), impacted patients and clinicians may react conservatively. In a study of a firmware update to mitigate a cybersecurity vulnerability found in an implanted cardiac pacemaker, only about 25% of those affected chose to upgrade (Saxon et al. 2018). Other technology issues may lead to safety risks with medical devices. For example, although a continuous glucose monitor was functioning properly, a server outage at the manufacturer stopped alerts and other communications to parents and caregivers (Parmer 2019). There may also be safety risks from consumer IoT health and fitness devices. For example, the close proximity of some wearables to the body may lead to skin irritations from chemicals in the device, and chemical burns from battery leaks (CPSC 2017).
Privacy challenges
The use of IoT devices in the home, and of wearables, encroaches on spaces long considered and valued as private-the home and the body (Rosner and Kenneally 2019). IoT devices are eroding the boundaries between public and private, and create the potential for continuous monitoring of activities, speech, behavior and emotions (Internet Society 2019). People may no longer be able to keep privacy boundaries in place. However, privacy remains very important to most. In a 2019 survey, more than 80% of Americans found the potential risks outweigh the benefits when companies collect data, and felt they had very little or no control over the data collected by companies or the government (Pew Research 2019; Auxier and Rainie 2019). In a survey of consumers in five countries, 75% distrust the way that data are being shared (Internet Society 2019). Nearly constant surveillance may lead to chilling and conforming effects on behavior in the home (Rosner and Kenneally 2019; Oulasvirta et al. 2012;Kamiinski 2014). Privacy is a particularly important concern for individuals with psychiatric disorders, especially due to the stigma (Monteith and Glenn 2016;Bauer et al. 2017). Many consumers may not be aware that "surveillance capitalism" is now the business model in virtually every economic sector, including every smart product or personalized service (Zuboff 2019). Digitized human experience is now raw material for translation into behavioral predictions. Massive amounts of data from all possible digital activities (online, smartphone, financial, IoT devices at home including health tracking and monitoring, urban and commercial IoT) are collected. These data are then combined, analyzed and packaged as "prediction products" to tell business customers how people will behave now and in the future (Zuboff 2019). People with mental illness may be especially at risk of harm from errors and biases in data and algorithms associated with automated decision making (Monteith and Glenn 2016;Bauer et al. 2017).
The fundamental approach to privacy on the Internet is based on notice and choice with the user providing consent to a privacy policy. However, most IoT devices have no means for user interaction such as a screen, mouse or keyboard (Peppet 2014). IoT device privacy policies are often on a web site, and do not clarify the ownership, use and sale of all collected data (Peppet 2014). Consumers may not realize that data from health and fitness trackers may be routinely sent to third parties, or even that their IoT devices are interacting with the Internet. Some individuals may provide consent for data collection without understanding the scope, such as with an IoT enabled television that includes voice recognition (GAO 2017). A simple binary consent may not be sufficiently flexible for the online environment (International Institute of Communications 2012). Furthermore, many users routinely ignore or do not carefully read online privacy policies (Pew Research 2019; West 2019).
The use of prescribed medical digital devices creates new challenges related to consent. In addition to traditional medical consent based on discussion with a physician, the patient often has to register with the company who manufactured the device and provide consent to a user agreement (Klugman et al. 2018). Corporate user agreements are often long, written in legalese, and are non-negotiable. Yet mental illness may interfere with the capacity to provide traditional informed consent (Okai et al. 2007;Lepping et al. 2015;Morán-Sánchez et al. 2016). Other privacy issues associated with prescribed medical devices relate to data ownership, data use, and data sharing by device manufacturers. Health related privacy remains very important to patients. In a 2019 study of 4000 adults representative of the US population, only 10% want to share health data with technology companies (Rock Health and Stanford 2019). Another concern is that consumers may not understand that de-identified data are routinely vulnerable to re-identification techniques in the era of big data (Narayanan et al. 2016;Rocher et al. 2019). For example, in a dataset from 14,451 individuals with protected health data removed, 95% of adults were reidentified using aggregated physical activity data measured by accelerometers (Na et al. 2018).
New roles for manufacturers
Embedded processors are being added to everyday objects, yet most traditional manufacturers lack in-house technical expertise and are unaware of security risks and interoperability issues (Sadler 2017;Hypponen and Nyman 2017). In the highly competitive, global consumer products market, manufacturers rush to get a device to market, focus on lowering costs and gaining market share, and often release products with little testing (Sadler 2017). The primary source of recurring revenue for most IoT devices is not selling multiple devices to the same customer, but selling the data collected by the devices (Anderson 2018). Manufacturers rely on third-party support for product design, component purchase, and assembly, with hardware and software components frequently re-used in IoT products beyond what they were initially designed for (GAO 2017;Sadler 2017). The use of identical or near-identical software and firmware in many devices can magnify the impact of a successful attack when a vulnerability is found, and increases the potential for successful attacks (GAO 2017;Intel 2016). The complex global supply chain also poses diverse security risks (Kshetri and Voas 2019;Radanliev et al. 2019).
The result is that security built into IoT devices is far weaker than in traditional devices on the Internet, such that IoT devices are now a larger target for hackers than traditional web applications and servers (Boddy et al. 2018). For example, the public and private keys that are used in certificates to ensure encryption security can be compromised if random number generation is flawed. In a study of 75 million RSA certificates from the Internet, keys shared a common factor based on a random number in 1 of 172 certificates from IoT devices versus 1 in 20 million from standard websites (Kilgallin 2019). These weak keys expose users to a wide variety of potential harms. A hacker with a re-derived private key for an SSL/TLS server certificate may impersonate a server, capture login credentials, medical and financial data, decrypt stored communications, and intentionally cause a device to malfunction (Kilgallin 2019). Another example relates to the apps that accompany many IoT devices. In a study of apps that accompany 96 popular IoT devices (32 apps), 31% had no encryption, and another 19% had poor encryption (Mauro et al. 2019). IoT startups may introduce a product but quickly go out of business or abandon a device, but the device may remain in a home for many years without any potential for security upgrades (Fu et al. 2017).
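The danger of such shared factors can be illustrated with a small, self-contained sketch. The primes below are toy values chosen purely for illustration (real RSA keys use primes of 1024 bits or more): if two device certificates contain moduli that share one prime because of a flawed random number generator, a single GCD computation over the public values recovers the factor and breaks both keys.

```python
from math import gcd

# Toy illustration of the weak-key problem described above: two RSA moduli that
# share a prime factor (because both devices drew the same "random" prime) can
# be factored by anyone holding only the public certificates.
p_shared = 104729            # toy primes; real keys use much larger primes
q1, q2 = 1299709, 15485863

n1 = p_shared * q1           # public modulus in device 1's certificate
n2 = p_shared * q2           # public modulus in device 2's certificate

common = gcd(n1, n2)         # the attacker needs nothing secret, only n1 and n2
assert common == p_shared
print("recovered private factors:", common, n1 // common, n2 // common)
```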
In 2020, a new law in the UK requires manufacturers to provide unique passwords for individual IoT devices that are not resettable to universal factory settings, state the minimum length of time they will provide security updates, and provide a public contact point to report vulnerabilities (Gov.UK 2020). This is an important step towards improving IoT security and protecting consumers.
New roles for patients
For healthcare, patients will use a combination of consumer health and fitness IoT devices and prescribed medical IoT devices. Consumer IoT devices provide insufficient security information in their manuals or websites (Blythe et al. 2019), and patients often get security advice from family and friends (Redmiles et al. 2016). In a study of 1878 websites providing security advice, only 25% were written at a standard reading level (e.g., Reader's Digest) with the rest harder to understand (Redmiles et al. 2018). Patients will not only be the user but will install, configure, manage and decommission consumer IoT devices, and prescribed IoT medical devices that communicate with the provider. Patients may not realize that ongoing maintenance may be required for a medical device including software or firmware updates, battery changes, and sensor replacements (Woods et al. 2019;Klugman et al. 2018). Some routine behaviors may negate the validity of data collected from IoT devices and trigger serious privacy and security concerns. When consumers buy a new smart device, they focus on features and functions and overlook security settings (NSA 2016). In multiple surveys in the US, Canada and the UK, the majority of consumers did not change their router's default password (Powell 2018;De Leon 2019;ESET 2019). When consumers borrow, rent, gift or resell their used IoT devices without removing their association to the device, collected data may be assigned to the wrong individual (Khan et al. 2018). Patients with mental illness may have fewer digital skills than the general public (Bauer et al. 2017, 2020). Patients may lack the knowledge to follow security advice. For example, the FBI recommends that devices with private and sensitive data, such as a laptop or medical device, be kept on a separate home network from other IoT devices such as a refrigerator (FBI 2019). However, a patient's medical devices are usually located on the same wireless network as all the home IoT devices from many manufacturers (Fu et al. 2017). The result is that the safety and security vulnerabilities of home and provider systems are combined, with each becoming a potential backdoor vulnerability to the other (Fu et al. 2017). Patient medical devices that are connected to medical facilities pose a major cybersecurity threat and are often viewed as the weakest link within healthcare networks (Deloitte 2018;Sun et al. 2019;Grau 2020). In addition to many security issues in a wide range of home IoT devices, a 2019 US study found that many wireless routers for home networks lack basic security protections (De Leon 2019).
New roles for healthcare organizations
Healthcare organizations must recognize the increased risks associated with interconnected medical devices and take an aggressive role to protect patients, physicians, and staff, and medical data from cybersecurity threats. This protection must extend to the rapidly growing number of remote connections from patients at home transmitting large volumes of data from medical devices or health and fitness devices. In 2017, the US Cybersecurity Task Force rated healthcare cybersecurity in "critical" condition (HHS 2017), and for 2019, ECRI Institute found cybersecurity attacks from hackers exploiting remote access as the number one health technology hazard (ECRI 2018). Every aspect of the interconnected healthcare network, including users of all backgrounds, hardware, firmware, software and communications channels, present different levels of risk and are part of the security problem (ECRI 2018). Providing adequate security protection in healthcare is resource intensive and will require considerable investment to improve IT security skills, communicate and coordinate with device manufacturers and patients, implement ongoing, comprehensive, multi-layered security controls, and deploy measures to promptly address vulnerabilities and install updates (HHS 2018). Healthcare IT organizations should take the lead in establishing ongoing IoT related education for all physicians, staff, and connected patients, including for the busy, disinterested, compromised or financially challenged.
Limitations
There are many limitations to this paper. The specific benefits, efficacy, and risks of IoT devices used in psychiatry were not discussed, including technology concerns such as sensor accuracy, manufacturing practices such as sensor and part substitutions across the product life cycle, and the use of proprietary algorithms (Bauer et al. 2020). Proposed new approaches to validation and efficacy testing (Coravos et al. 2020), and discussion of the FDA Digital Health Software Pre-certification Program were omitted (Lee and Kesselheim 2018). The potential conflict of interest for clinicians collaborating with technology companies on the development of IoT devices was not discussed.
Proposed technical standards, government regulations, and commercial and academic approaches to improve privacy and security of the IoT were not included. Technical details related to interoperability of data from diverse devices and systems, software quality, data quality, operations, bandwidth, edge processing outside the data center, and cloud computing were omitted. Privacy challenges related to 5G cellular networks were not included (Marcos 2017). Details regarding cybersecurity and safety issues for regulated medical devices were not provided. Unique challenges of some medical devices, such as the need for quick and simple access in emergencies, were not discussed (Sametinger et al. 2015). Methods to increase physician and patient knowledge of the IoT, legal and ethical issues including provider and manufacturer responsibility for errors, and contractual issues were not included. Digital inequalities, including equitable access to IoT devices, and differences in patient skills, and the impacts of security or privacy breaches on patient trust of physicians and healthcare organizations were not discussed. The environmental issues of energy consumption and carbon footprint for the billions of IoT devices and systems used to analyze the collected data were not discussed (Bol et al. 2015;Ashrad et al. 2017).
The article search occurred between December 2019 and February 2020. Since the pandemic began, the growth rate of new IoT devices has slowed due to lower consumer and enterprise demand, manufacturing shutdowns, supply chain interruptions, and reduced project funding (GSMA 2020; ABI Research 2020). Despite this, the use of some healthcare IoT devices such as digital thermometers is growing (Leuth 2020), and recovery of the IoT marketplace is expected to start in 2021 (GSMA 2020).
Conclusions
It is inevitable that more IoT devices are coming to psychiatry. In the future, there will be a choice of IoT medical devices for psychiatrists to recommend including for bipolar disorder. Patients will increasingly use IoT medical devices to monitor general medical conditions, in addition to consumer health and fitness devices. While IoT devices offer many potential benefits for remote monitoring and treatment, there are risks associated with IoT devices, and from the connectivity between patients, healthcare providers, and device makers. Understanding these risks is necessary for optimal use of IoT devices in psychiatry. Security, safety and privacy issues are changing the roles of manufacturers, patients and healthcare IT organizations. It is important to determine how these devices can be used in real-world settings, to obtain data that are clinically valuable, and to avoid security, privacy and safety issues for the patient, physician and healthcare organization.
"Medicine",
"Computer Science"
] |
Effects of enhanced downwelling of NOx on Antarctic upper-stratospheric ozone in the 21st century
Ozone is expected to fully recover from the chlorofluorocarbon (CFC) era by the end of the 21st century. Furthermore, because of anthropogenic climate change, a cooler stratosphere decelerates ozone loss reactions and is projected to lead to a super recovery of ozone. We investigate the ozone distribution over the 21st century with four different future scenarios using simulations of the Whole Atmosphere Community Climate Model (WACCM). At the end of the 21st century, the equatorial upper stratosphere has roughly 0.5 to 1.0 ppm more ozone in the scenario with the highest greenhouse gas emissions compared to the conservative scenario. Polar ozone levels exceed those in the pre-CFC era in scenarios that have the highest greenhouse gas emissions. This is true in the Arctic stratosphere and the Antarctic lower stratosphere. The Antarctic upper stratosphere is an exception, where different scenarios all have similar levels of ozone during winter, which do not exceed pre-CFC levels. Our results show that this is due to excess nitrogen oxides (NOx) descending faster from above in the stronger scenarios of greenhouse gas emissions. NOx in the polar thermosphere and upper mesosphere is mainly produced by energetic electron precipitation (EEP) and partly by solar UV via transport from low latitudes. Our results indicate that the thermospheric/upper mesospheric NOx will be an important factor for the future Antarctic ozone evolution and could potentially prevent a super recovery of ozone in the upper stratosphere.
Introduction
Stratospheric ozone experienced a dramatic decrease from the 1960s until the 1990s due to anthropogenic chlorofluorocarbon (CFC) emissions (Cicerone, 1987;Anderson et al., 1991) and the associated increase of reactive chlorine oxides (ClO x ) in the stratosphere. Since then, the Montreal Protocol has been able to limit the use of CFCs (Velders et al., 2007), and in the beginning of the 21st century stratospheric ozone has been showing signs of recovery (Solomon et al., 2016).
Greenhouse gas emissions also alter the stratospheric ozone (Langematz, 2018). As a consequence of higher levels of carbon dioxide (CO 2 ), the stratosphere is cooling, which decreases the rate of stratospheric ozone loss (Li et al., 2009). This is projected to lead to a super recovery of ozone, i.e. higher ozone concentrations than before the 1960s, especially in the upper stratosphere (WMO, 2018, Chap. 4). In addition to the impact on net chemical ozone production, climate change is modulating ozone through changes in the atmospheric circulation. Climate models predict that the Brewer-Dobson circulation (BDC) is increasing (Garcia and Randel, 2008;Butchart, 2014), which leads to enhanced transport of ozone into the polar lower stratosphere and a reduction of ozone in the equatorial lower stratosphere (Langematz, 2018;Shepherd, 2008). This transport effect is projected to be stronger in the Northern Hemisphere, leading to a more prominent ozone super recovery in the Arctic lower stratosphere than in the Antarctic lower stratosphere (WMO, 2018, Chap. 4).
Polar stratospheric ozone is also impacted from above. Energetic electron precipitation (EEP) from the magnetosphere produces reactive nitrogen oxides (NO x ) in the thermosphere and upper mesosphere (Andersson et al., 2018). Sporadic solar proton events can also produce NO x in the polar mesosphere (Jackman et al., 2008). Solar UV absorbed in the lower thermosphere also produces NO x , which can be transported to polar latitudes (Gérard et al., 1984). The chemical lifetime of NO x in the mesosphere and lower thermosphere is enhanced during wintertime polar darkness due to the absence of photolysis. This allows NO x to be transported to the upper stratosphere with the prevailing vertical residual circulation (Solomon et al., 1982;Garcia, 1992;Randall et al., 2006;Funke et al., 2014), where it depletes ozone in a catalytic reaction (Lary, 1997). Recently, Maliniemi et al. (2020) showed in the Whole Atmosphere Community Climate Model (WACCM) that this stratospheric indirect NO x will increase substantially during the 21st century in the Southern Hemisphere in scenarios with increasing greenhouse gas emissions. Similar results have been obtained earlier with the EMAC chemistry-climate model (Baumgaertner et al., 2010). This is a consequence of stronger mesospheric descent in the future Antarctic, while no such strengthening of the mesospheric descent is predicted in the Arctic (Maliniemi et al., 2020).
In this paper, we investigate the ozone distribution over the 21st century under four different future scenarios using the WACCM chemistry-climate model. We concentrate on polar stratospheric variability during winter but also show results over the whole middle atmosphere and during all seasons. Section 2 describes the data and statistical methods. Section 3 provides results divided into three subsections: the polar winter ozone evolution from the pre-industrial era until the end of the 21st century; differences in the global ozone distribution at the end of the 21st century between the strongest and conservative future scenarios regarding their greenhouse gas emissions; and the same for polar ozone. A summary is given in Sect. 4.
Data and methods
The data used in this study are from simulations of a free-running version of WACCM6 within CESM2. The model components and parameterizations are described in detail by Marsh et al. (2013), with updates detailed by Gettelman et al. (2019). Five different simulations are analysed. Historical simulations (three ensemble members) cover the period 1850-2014 (Coupled Model Intercomparison Project phase 6 (CMIP6) DECK simulations). Four different future scenario (CMIP6 ScenarioMIP: SSP1, SSP2, SSP3 and SSP5 (Shared Socioeconomic Pathway)) simulations cover the period 2015-2100 (O'Neill et al., 2016). SSP1 and SSP3 have one ensemble member, and SSP2 and SSP5 have five ensemble members. For the historical, SSP2 and SSP5 model simulations, the results shown here are ensemble means.
Different SSPs include a wide range of future actions by society, including greenhouse gas emissions. Global average CO 2 concentrations in 2100 are 446 ppm in SSP1, 603 ppm in SSP2, 867 ppm in SSP3 and 1135 ppm in SSP5 (Meinshausen et al., 2020). The radiative forcing increase of the climate system by 2100 relative to the pre-industrial era is 5.0 W/m 2 in SSP1, 6.5 W/m 2 in SSP2, 7.2 W/m 2 in SSP3 and 8.7 W/m 2 in SSP5. The details of the different SSPs can be obtained from Riahi et al. (2017). All model runs are forced with solar activity following the recommendations of CMIP6. This provides estimates of the solar activity before the space era and a future solar forcing scenario (Matthes et al., 2017). Solar forcing consists of total and spectral solar irradiance, as well as galactic cosmic rays, solar proton events and energetic electron precipitation. All SSPs have the same future solar activity scenario (called the "reference scenario"; see details in Matthes et al., 2017).
We concentrate on monthly mean zonal mean volume mixing ratios of ozone, NO x and ClO x , as well as temperature and zonal wind. The latitudinal resolution of the model is 0.94° (192 bins) and altitude ranges from the surface up to ≈140 km (in 70 levels). In this study, we focus on altitudes from around the mesopause to the surface (0.01 to 1000 hPa). We analyse the centennial time series (1850-2100) of ozone and ClO x concentrations in the polar stratosphere, as well as the Brewer-Dobson circulation in the equatorial stratosphere. The smooth long-term variations shown in Figs. 1, 2, 3 and 7 are calculated using the LOWESS method (locally weighted scatterplot smoothing) applied with a 31-year window (Cleveland and Devlin, 1988). More details of the method can be found in Maliniemi et al. (2014). The relative change in ozone over the whole atmosphere from 1960 to 2000 in the historical simulation shown in Fig. 4 was calculated by subtracting a 5-year mean centred on 1960 from a 5-year mean centred on 2000. Significance was calculated using a Mann-Kendall test (Mann, 1945). The same analysis was also used for the relative ozone change from 2017 to 2098 in SSP5.
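As an illustration of the smoothing and trend-testing steps described above, the following Python sketch applies LOWESS with an approximately 31-year window and a simple Mann-Kendall test to a placeholder time series. The synthetic data, the choice of expressing the window as a fraction of the record length, and the no-ties normal approximation are assumptions made for the example, not the exact WACCM post-processing.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.nonparametric.smoothers_lowess import lowess

years = np.arange(1850, 2101)
rng = np.random.default_rng(0)
# Placeholder series standing in for a polar-cap ozone time series (ppm).
ozone = 2.0 + 0.002 * (years - 1850) + rng.normal(scale=0.1, size=years.size)

# LOWESS with a ~31-year window, expressed here as a fraction of the record length.
smooth = lowess(ozone, years, frac=31 / years.size, return_sorted=False)

def mann_kendall_p(x):
    """Two-sided p-value of the Mann-Kendall trend test (normal approximation, no tie correction)."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return 2.0 * (1.0 - norm.cdf(abs(z)))

# Change from 1960 to 2000 as the difference of 5-year means centred on each year.
mean_1960 = ozone[(years >= 1958) & (years <= 1962)].mean()
mean_2000 = ozone[(years >= 1998) & (years <= 2002)].mean()
relative_change = (mean_2000 - mean_1960) / mean_1960
print(f"relative change 1960->2000: {relative_change:+.1%}, "
      f"Mann-Kendall p = {mann_kendall_p(ozone):.3g}")
```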
We subtract SSP2 means from those of SSP5 to evaluate the differences in ozone (Figs. 5 and 9), NO x (Figs. 8 and 9), temperature and zonal wind (Fig. 6) during the 2090-2100 period, i.e. SSP5[ensemble mean]-SSP2[ensemble mean]. Statistical significance for the differences between SSP5 and SSP2 during 2090-2100 is calculated by applying a Monte Carlo method: we take a random 11-year time period from 2015-2100 and calculate the difference in each latitude-height bin. This is performed 1000 times and the original value (difference of 2090-2100) is compared to the distribution of these 1000 repetitions to obtain the fraction of more extreme differences (both tails of the distribution). This fraction then represents the p value in each bin with the null hypothesis that there is no difference between SSP5 and SSP2. In addition, we use a method proposed by Wilks (2016) called a false detection rate. This is done because our results for 2090-2100 differences (and relative change from 1960 to 2000 in Fig. 4) are presented over several latitudes and altitudes, and thus have a multiple hypothesis testing situation. This method adjusts the p values to take into account the spatial autocorrelation and the fact that the probability of erroneously rejecting the null hypothesis increases with the number of individual hypothesis tests. Thus, after the procedure, we obtain a global significance of 95 % of the whole presented grid, which means that the probability of erroneously rejecting the (individual) null hypothesis will be 5 %.
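A compact sketch of this Monte Carlo significance test and the subsequent false-detection-rate screening is given below. The array shapes, the assumption that the input fields cover the 2015-2100 scenario period, and the Benjamini-Hochberg-style threshold used to represent the Wilks (2016) procedure are illustrative choices rather than the exact analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)

def monte_carlo_pvalues(ssp5, ssp2, years, window=11, n_draws=1000):
    """Two-sided Monte Carlo p-values for the 2090-2100 SSP5-SSP2 difference.

    ssp5, ssp2 : arrays of annual-mean fields, shape (n_years, n_bins),
                 assumed to cover the scenario period 2015-2100.
    """
    target = (years >= 2090) & (years <= 2100)
    observed = ssp5[target].mean(axis=0) - ssp2[target].mean(axis=0)

    null = np.empty((n_draws, ssp5.shape[1]))
    for k in range(n_draws):
        start = rng.integers(0, len(years) - window + 1)   # random 11-year window
        sel = slice(start, start + window)
        null[k] = ssp5[sel].mean(axis=0) - ssp2[sel].mean(axis=0)

    # fraction of random windows giving a more extreme difference (both tails)
    return (np.abs(null) >= np.abs(observed)).mean(axis=0)

def fdr_threshold(pvals, alpha_fdr=0.05):
    """False-detection-rate control in the spirit of Wilks (2016): return the
    largest p-value that may still be declared significant over the whole grid."""
    p_sorted = np.sort(np.ravel(pvals))
    n = p_sorted.size
    below = p_sorted <= alpha_fdr * np.arange(1, n + 1) / n
    return p_sorted[below].max() if below.any() else 0.0

# Usage: significant_bins = pvals <= fdr_threshold(pvals)
```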
Results
3.1 Centennial polar winter ozone in different future scenarios
Figure 1 shows the late winter polar total column ozone time series for both hemispheres. The minimum level of ozone is reached a few years after 2000 (Solomon et al., 2016). Ozone returns to the 1980s level around 2050 in the Southern Hemisphere and a little bit earlier in the Northern Hemisphere. Columns in the different future scenarios begin to diverge from each other after 2050. Both polar regions show a super recovery in SSP3 and SSP5; i.e. the column ozone exceeds 1960 levels towards the end of the 21st century, which is more notable in the Northern Hemisphere, as explained further below. One can also see that yearly variability (thin lines in Fig. 1) is somewhat larger in SSP1 and SSP3. This is because there is just one ensemble member for those SSPs, while the SSP2 and SSP5 results are the mean of five ensemble members.
Figure 2 shows the time series of late winter ClO x in the Antarctic stratosphere. One can see that the maximum level of ClO x coincides with the minimum in ozone around the year 2000. After that, ClO x starts to decrease as a result of the Montreal Protocol (Velders et al., 2007). All different future scenarios have approximately the same evolution of stratospheric ClO x due to all SSPs having the same World Meteorological Organization (WMO) future scenario for CFCs (Meinshausen et al., 2020). We note that in the Arctic stratosphere the evolution of ClO x across the various SSPs is also very similar (not shown).
Figure 3 shows the evolution of the mean ozone volume mixing ratio for the different SSPs in the upper and the lower stratosphere in both polar regions. Lower-stratospheric ozone changes are very similar to those of total column ozone in both the Antarctic and the Arctic as would be expected since the majority of the ozone is in the lower stratosphere. Upper- and lower-stratospheric ozone in the Arctic shows a super recovery in SSP3 and SSP5, and the decrease from 1960 to 2000 is notably larger in the upper stratosphere. In the Antarctic upper stratosphere, ozone only returns to roughly the 1960 level in all SSPs; no super recovery is predicted in any of the SSPs, and the level of ozone in SSP5 is slightly less than in SSP2 and SSP3 at the end of the 21st century. Sudden decreases of ozone in yearly values (downward spikes in thin lines) can also be seen in the Antarctic upper stratosphere. These have been previously shown to be due to the large solar proton events (SPEs) during winter (Maliniemi et al., 2020). However, after the 2050s, no major SPEs occur in the CMIP6 solar reference scenario (Maliniemi et al., 2020). Should there be a series of major SPEs in the period between 2050 and 2100, then we would expect the levels of ozone in the Antarctic to be lower than in these projections, further decreasing the likelihood of a full ozone recovery in the Antarctic upper stratosphere.
Figure 4 shows the relative change of ozone from 1960 to 2000 over the whole atmosphere during August/September and February/March. During austral winter, strong ozone depletion occurs in the Antarctic lower stratosphere. In addition, there is a 15 %-30 % decrease of ozone in the polar upper stratosphere. This is approximately the altitude of peak effectiveness of the Cl/ClO catalytic cycle that occurs throughout the year in the presence of sunlight, while the ClO/ClO cycle has peak effectiveness at the lower stratosphere and requires colder temperatures and the presence of polar stratospheric clouds (Lary, 1997).
Figure 4 also shows the relative change of ozone from 2017 to 2098 over the whole atmosphere in SSP5. Ozone increase during the 21st century is most pronounced in the Antarctic lower stratosphere. Similar to Fig. 3, the upper-stratospheric ozone increase during winter is more pronounced in the Arctic than in the Antarctic. An additional feature is seen in the lower equatorial stratosphere where ozone decreases during the 21st century. These ozone changes over the 21st century are further discussed in the following chapters.
3.2 Global ozone difference between SSP5 and SSP2 at the end of the 21st century
Figure 5 presents the difference in monthly ozone between SSP5 and SSP2 during 2090-2100. There is substantially more stratospheric ozone in SSP5 relative to SSP2. SSP5 has approximately 0.5 to 1.0 ppm more equatorial stratospheric ozone above 20 hPa in all months. However, below 20 hPa, SSP5 has significantly less equatorial ozone than in SSP2 (up to −0.3 ppm). An additional feature is seen in the mesosphere where consistently lower ozone levels are predicted in SSP5 than in SSP2. However, the negative anomalies are less than −0.1 ppm in regions other than high latitudes. These global differences between SSP5 and SSP2 can be explained in terms of carbon dioxide and methane emissions (Kirner et al., 2015). The increased ozone in the upper stratosphere is caused by decreased ozone loss reactions due to a cooler future middle atmosphere. This is because of the temperature dependency of the Chapman cycle (Brasseur and Solomon, 2005). Figure 6 shows the temperature difference between SSP5 and SSP2. The temperature is between 4 and 8 K lower in SSP5 than in SSP2 in the upper stratosphere, with the largest differences in high latitudes during winter. The mesospheric ozone decrease between SSP5 and SSP2 could be partly due to additional methane emissions in SSP5 (Riahi et al., 2017). Methane oxidation produces water vapour and hydrogen oxides (HO x ) (le Texier et al., 1988), which Kirner et al. (2015) proposes to influence the evolution of ozone in the mesosphere.
Negative ozone anomalies in the equatorial lower stratosphere are mainly due to dynamical changes. Climate change has been predicted to accelerate the Brewer-Dobson circulation (Garcia and Randel, 2008;Butchart, 2014), as shown for the annual BDC in different scenarios in Fig. 7. The largest equatorial vertical residual circulation speed at the end of the 21st century occurs in SSP5, followed by SSP3, SSP2 and SSP1, respectively. One can also see that the meridional transport at 50 hPa altitude accelerates in both hemispheres in the future and more in SSP5 than in SSP2. This leads to enhanced transport of ozone from the lower equatorial stratosphere, resulting in a negative anomaly in SSP5 relative to SSP2 (Langematz, 2018). The ozone difference in the lower equatorial stratosphere between SSP5 and SSP2 could also be partly due to increased overhead ozone, which attenuates the ultraviolet radiation and decreases the photolysis of oxygen in this region (Kirner et al., 2015).
3.3 Polar ozone and NO x differences between SSP5 and SSP2 at the end of the 21st century
Figure 5 shows that the Arctic stratosphere ozone in SSP5 exceeds ozone in SSP2, reaching the highest values during winter (November to March), but this does not occur in the Antarctic stratosphere. During winter (June to October), a negative ozone anomaly (in SSP5 relative to SSP2) is obtained descending from 1 hPa to 10-20 hPa. Figure 8 shows the NO x difference between SSP5 and SSP2 averaged between 2090 and 2100. Over the whole atmosphere, there is slightly less NO x in SSP5 than in SSP2. This is in line with slightly lower N 2 O emissions in SSP5 than in SSP2 (Riahi et al., 2017) and a cooler stratosphere increasing the chemical destruction of NO x (Stolarski et al., 2015). However, one can see that there is a substantial increase of NO x in the Antarctic mesosphere and upper stratosphere from June until September. The NO x increase in the upper stratosphere is up to 10 ppb. Maliniemi et al. (2020) showed that southern polar mesospheric descent rates will accelerate in the future under higher greenhouse gas forcing, which leads to more NO x being transported from the upper mesosphere/thermosphere to the upper stratosphere. In SSP5, there is about a 10 %-20 % faster descent at the end of the 21st century than in SSP2 (Maliniemi et al., 2020).
The NO x difference in the Northern Hemisphere is less dramatic. There is an increase in the upper mesosphere from November to March (see Fig. 8) but it does not descend to lower altitudes. Figure 6 shows the difference of zonal wind between SSP5 and SSP2 during southern and northern winters. The polar vortex is weaker in the Northern Hemisphere but slightly stronger in the Southern Hemisphere in SSP5. A stronger polar vortex tends to accelerate mesospheric descent due to the filtering of westerly gravity waves and the resulting easterly gravity wave drag in the mesosphere. As a result, NO x anomalies descend further downward in the Southern Hemisphere. Figure 9 shows polar ozone and NO x differences between SSP5 and SSP2 during the winter months in both hemispheres. The altitude of the negative ozone anomaly in the Antarctic stratosphere follows the altitude of NO x increase closely and is statistically significant during September. In the Northern Hemisphere winter, no polar NO x increase occurs below 0.1 hPa, and ozone concentration in the stratosphere does not experience any dramatic variability over different winter months. One can also see that after the NO x peak has passed in the Antarctic, the ozone values around 1 hPa during October return back to higher levels in SSP5 than in SSP2 and become comparable to the ozone levels in the Arctic stratosphere at the same altitude.
Transport to the polar region at 1 hPa is primarily from above during winter (Smith et al., 2011), while in the lower polar stratosphere meridional transport from the equatorial lower stratosphere via the BDC is important. Ozone super recovery in the upper polar stratosphere is thus mainly predicted due to the decreased ozone loss reactions in colder temperatures (WMO, 2018, Chap. 4), while in the lower polar stratosphere it is because of the increased transport from the equatorial lower stratosphere (Langematz, 2018). While our simulation study is not a single forcing experiment and thus not optimal to precisely estimate different contributions, they do present self-consistent projections of the future evolution of ozone. Enhanced transport of NO x to the Antarctic upper stratosphere from above as a result of climate change could counteract enhanced net ozone production seen elsewhere in the atmosphere and potentially prevent an ozone super recovery in the Antarctic upper stratosphere (see Fig. 2).
Summary
In this paper we show that future scenarios with stronger greenhouse gas forcing lead to overall higher levels of simulated stratospheric ozone. Ozone in SSP5 relative to SSP2 is higher in the low and midlatitudinal upper stratosphere at the end of the 21st century. This is a consequence of increased greenhouse gas emissions and the resulting lower temperatures in the middle atmosphere. A cooler stratosphere will decrease ozone loss reactions, leading to an ozone increase in the upper stratosphere. SSP5 has less ozone than SSP2 in the equatorial lower stratosphere. This negative ozone anomaly is a consequence of accelerated transport to the polar lower stratosphere via a stronger Brewer-Dobson circulation.
In SSP3 and SSP5, ozone will have a super recovery in the Arctic stratosphere and Antarctic lower stratosphere towards 2100, in agreement with WMO (2018, Chap. 4). However, ozone in the Antarctic upper stratosphere reaches similar levels across the different future scenarios which are not above the pre-CFC levels at the end of the 21st century. We show that this is due to excess NO x descending to the upper stratosphere from the polar thermosphere and upper mesosphere in the stronger greenhouse gas scenarios (Maliniemi et al., 2020) and the resulting catalytic ozone loss.
Following the adoption of the Montreal Protocol, stratospheric ClO x will decrease in the future (Velders et al., 2007). As a result, the catalytic NO x cycle is more important for ozone variability in the future. Polar thermospheric and upper mesospheric NO x is mainly produced by EEP and partly by solar UV via transport from low latitudes (Gérard et al., 1984). During winter polar darkness, NO x has a long chemical lifetime and descends to the stratospheric altitudes. Since the descent rate is accelerating in the Antarctic mesosphere under higher greenhouse gas emissions, this indirect NO x will have an increasing importance for the future of ozone in the Antarctic stratosphere.
Seasonal stratospheric ozone depletion due to the descending indirect NO x has also been shown to influence stratospheric temperatures and the polar vortex (Arsenovic et al., 2016;Salminen et al., 2019;Asikainen et al., 2020). Thus, there is great potential for improving future projections and the seasonal variability of the polar stratosphere by implementing more accurate solar forcing, including EEP, in Earth system models (Matthes et al., 2017).
Author contributions. DRM provided the WACCM model outputs. VM analysed the data and wrote the manuscript. All authors contributed to the analyses of the results and modification of the manuscript.
Competing interests. The authors declare that they have no conflict of interest.
Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 5,206.6 | 2021-07-21T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Interpreting the atmospheric composition of exoplanets: sensitivity to planet formation assumptions
Constraining planet formation based on the atmospheric composition of exoplanets is a fundamental goal of the exoplanet community. Existing studies commonly try to constrain atmospheric abundances, or to analyze what abundance patterns a given description of planet formation predicts. However, there is also a pressing need to develop methodologies that investigate how to transform atmospheric compositions into planetary formation inferences. In this study we summarize the complexities and uncertainties of state-of-the-art planet formation models and how they influence planetary atmospheric compositions. We introduce a methodology that explores the effect of different formation model assumptions when interpreting atmospheric compositions. We apply this framework to the directly imaged planet HR 8799e. Based on its atmospheric composition, this planet may have migrated significantly during its formation. We show that including the chemical evolution of the protoplanetary disk leads to a reduced need for migration. Moreover, we find that pebble accretion can reproduce the planet's composition, but some of our tested setups lead to too low atmospheric metallicities, even when considering that evaporating pebbles may enrich the disk gas. We conclude that the definitive inversion from atmospheric abundances to planet formation for a given planet may be challenging, but a qualitative understanding of the effects of different formation models is possible, opening up pathways for new investigations.
INTRODUCTION
The distribution of bulk planetary properties such as mass, radius, and orbital parameters encodes critical information that constrains planet formation models (see e.g., Ida & Lin 2004;Alibert et al. 2005;Mordasini et al. 2009;Hasegawa & Pudritz 2011;Lambrechts & Johansen 2012;Bitsch et al. 2015;Nayakshin & Fletcher 2015;Cridland et al. 2016;Emsenhuber et al. 2020a;Schlecker et al. 2021). In addition, the chemical composition of planet atmospheres has long been regarded as a key to unlock the process of planet formation (e.g., Gautier & Owen 1989;Owen & Encrenaz 2003) and is the explicit goal of many atmospheric characterization studies (see, e.g., Madhusudhan 2019, for a recent review). This is because the chemical abundances of planetary atmospheres are highly complementary to bulk planetary parameters: they likely relate to the composition of the planetary building blocks in the protoplanetary disk, be it planetesimals, pebbles, or gas. The composition of the building blocks is determined by disk processes, while their relative importance and accretion location for a given planet are determined by the process of planet formation. Consequently, there have been a number of studies which investigate how planet formation may set the composition of an exoplanet, focusing on the planetary carbon-to-oxygen number ratio (C/O), nitrogen content, content in refractory material, or just overall metal content (e.g., Öberg et al. 2011;Fortney et al. 2013;Marboeuf et al. 2014;Madhusudhan et al. 2014;Cridland et al. 2016;Mordasini et al. 2016;Madhusudhan et al. 2017;Lothringer et al. 2021;Schneider & Bitsch 2021a;Khorshid et al. 2021).
Combining observatories such as the Hubble Space Telescope (HST) and the Spitzer Space Telescope allowed for a first look at atmospheric C/O values, albeit with large uncertainties (e.g., Line et al. 2014;Benneke 2015;Brewer et al. 2017), or leading to controversial findings, as in the case of WASP-12b, which was claimed to be either carbon- or oxygen-rich, with a firm water detection in transit finally pointing towards C/O < 1 (Madhusudhan et al. 2011;Crossfield et al. 2012;Swain et al. 2013;Line et al. 2014;Stevenson et al. 2014;Benneke 2015;Kreidberg et al. 2015). Studying the bulk atmospheric enrichment of exoplanets, mostly based on water detections in HST WFC3 spectra, has also been attempted, but the community would clearly benefit from data with higher signal-to-noise and larger spectral coverage to improve abundance constraints, break degeneracies with clouds, and probe additional atmospheric absorbers (e.g., Kreidberg et al. 2014;Fisher & Heng 2018;Wakeford et al. 2018;Pinhas et al. 2019;Welbanks et al. 2019). We note here, and discuss later, that the connection between atmospheric and bulk planet composition is far from trivial (see also, e.g., the recent discussion in Helled et al. 2021, and references therein).
Luckily, the quality of observational constraints on planetary composition is expected to improve rapidly. The advent of retrieval methods for medium- and high-resolution observations (e.g., Brogi et al. 2017;Brogi & Line 2019;Gibson et al. 2020) may make it possible to constrain the atmospheric volatile and refractory content (Lothringer et al. 2021) from the ground, and even to trace isotopologues (Mollière & Snellen 2019). For example, Pelletier et al. (2021) and Line et al. (2021) used high-resolution retrievals to constrain planetary C/O values, while the medium-resolution retrievals of Zhang et al. (2021) indeed revealed isotopologues for the first time. Moreover, recent observations with the GRAVITY instrument at the Very Large Telescope Interferometer (VLTI) have led to some of the most precise constraints on C/O for planetary-mass objects to date (Gravity Collaboration et al. 2020;Mollière et al. 2020). Most importantly, the recent launch of the James Webb Space Telescope (JWST) and, later in the decade, of ARIEL are expected to lead to excellent constraints on the C/O ratio, especially for transiting planets, and may probe the nitrogen and refractory content of cool planets (e.g., Greene et al. 2016;Wang et al. 2017;Danielski et al. 2018;Tinetti et al. 2018). With these next-generation telescopes the focus will likely shift from observational uncertainties to uncertainties in the models for atmospheric characterization (e.g., Line & Parmentier 2016;Feng et al. 2016;Blecic et al. 2017). The development of new characterization techniques is therefore necessary for interpreting future observations. This work has already begun (e.g., Caldas et al. 2019;Taylor et al. 2020;Feng et al. 2020;MacDonald et al. 2020;Pluriel et al. 2020;Lacy & Burrows 2020;Changeat et al. 2021;MacDonald & Lewis 2021;Nixon & Madhusudhan 2022). Now that the atmospheric abundance constraints may become more precise than ever before, it is timely to revisit the justification stated for many observational campaigns: how can a planet's formation history actually be constrained, given its atmospheric abundances, and how well? What are the actual formation quantities that are constrainable? What are the major obstacles that would need to be overcome in case this is not possible? And, lastly, if such an inversion process is challenging for a single planet, could the distribution of abundance patterns be used to constrain some aspects of planet formation?
Our study aims at addressing some of the questions stated above. Specifically, we discuss planet formation and its complexities in the context of the inversion challenge (Section 2). In Section 3, we present a methodology that may prove useful for assessing the consequences of a given formation model choice, where we use a nested sampling method to constrain formation parameters, given the atmospheric composition, for different model assumptions. We show example applications, namely how chemical disk evolution, or pebble drift, evaporation, and accretion, may affect the inferred formation and migration history of the planet HR 8799e. Our method can be used to qualitatively understand differences between planet formation implementations. In Section 4 we summarize which molecular and atomic species can and will be probed by atmospheric observations, and how these may serve to broadly inform the process of planet formation. We end with a short discussion and summary of our study in Section 5.
THE COMPLEXITY OF THE PLANET FORMATION PROBLEM
The idea of using planetary composition to constrain planet formation gained traction in the field of exoplanets with the seminal paper by Öberg et al. (2011). Here, the authors propose that the C/O value derived from a planet's composition could be used to constrain where in a protoplanetary disk it formed. The general idea is outlined in Figure 1, very similar to the original Figure 1 in Öberg et al. (2011). Assuming a smooth, static, 1-dimensional disk, the authors calculated where important volatile gases such as H 2 O, CO 2 and CO (sorted by decreasing condensation temperature) freeze out, if present. Because the temperature gradient in a protoplanetary disk satisfies dT/dr < 0, where r is the distance from the star, H 2 O freezes out first when moving outward, followed by CO 2 and CO. This directly affects the C/O values in the gas and solid phases because water, for example, removes oxygen from the gas phase when condensing (in the example setup shown in Figure 1 we assumed that 33% of all O and 38% of all C is contained in the refractory solids, which leads to a high solid-phase C/O inside the H 2 O iceline). The idea then is that if the planetary C/O and overall metallicity (here: C and O content) are known, it is possible to determine where in the disk a planet formed. This method of determining the process of planet formation has since been cited in virtually every study that aims at constraining the atmospheric composition of an exoplanet. This also has to do with the comparative ease with which the atmospheric C/O value may be constrained, as we discuss in Section 4. The general idea presented in Öberg et al. (2011) is powerful, but planet formation is undeniably more complicated than assumed in their study. In the following we give a summary of processes that may have to be taken into account, and inverted, when trying to connect planet composition to formation in practice.
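To make the step-like picture concrete, the following minimal Python sketch evaluates gas- and solid-phase C/O as a function of orbital distance. The iceline radii and the per-species carbon and oxygen budgets are illustrative assumptions, not the values adopted in this study or in Öberg et al. (2011).

```python
# Minimal sketch of the step-like C/O picture; all numbers are assumptions.
ICELINES_AU = {"H2O": 2.0, "CO2": 10.0, "CO": 35.0}                  # assumed iceline radii (au)
CARRIERS = {"H2O": (0.0, 1.0), "CO2": (1.0, 2.0), "CO": (1.0, 1.0)}  # assumed (C, O) budget per species
REFRACTORY = (0.3, 0.4)                                              # assumed (C, O) always in solids

def c_to_o(r_au):
    """Return (gas-phase C/O, solid-phase C/O) at orbital distance r_au."""
    c_gas = o_gas = 0.0
    c_sol, o_sol = REFRACTORY
    for species, (n_c, n_o) in CARRIERS.items():
        if r_au < ICELINES_AU[species]:   # inside its iceline -> in the gas
            c_gas, o_gas = c_gas + n_c, o_gas + n_o
        else:                             # outside -> frozen onto the solids
            c_sol, o_sol = c_sol + n_c, o_sol + n_o
    gas = c_gas / o_gas if o_gas > 0 else float("nan")
    return gas, c_sol / o_sol

for r in (1.0, 5.0, 20.0, 50.0):
    gas, sol = c_to_o(r)
    print(f"r = {r:4.1f} au: C/O(gas) = {gas:.2f}, C/O(solid) = {sol:.2f}")
```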
Disk elemental composition and structure
Constraining a planet's formation location based on its elemental abundance ratios (e.g., C/O) requires that the elemental composition of the protoplanetary disk is known. A good starting point may be to assume that the protoplanetary disk has an elemental composition identical to that of the host star (planet formation may, however, deplete stellar photospheres in metals with respect to the disk; see Chambers 2010;Bitsch et al. 2018a;Booth & Owen 2020). It is therefore crucial to have knowledge of the host star's abundances that is as complete as possible, beyond just [Fe/H], or to at least use existing scaling relations to approximate the stellar metal content for elements other than iron (e.g., Bitsch & Battistini 2020). The disk composition and assumed mass also set an upper limit on the amount of metals that a planet may accrete during its formation (e.g., Baraffe et al. 2008). Alternatively, the retrieved metal content of an exoplanet may be used to place a lower limit on the disk metal content, or even its total mass, analogous to the concept of the minimum mass solar nebula (e.g., Hayashi 1981). Moreover, the disk's physical and thermal structure is important because it sets the radial fractionation of elements into different molecular species, in solid and gaseous form, that can be accreted by a forming planet. The disk structure depends on the assumed (dust) opacities, and therefore on the dust evolution (Schmitt et al. 1997;Birnstiel et al. 2016;Savvidou et al. 2020). Moreover, disks evolve viscously (Lynden-Bell & Pringle 1974), as well as through photoevaporation and disk winds (see Clarke et al. 2001;Suzuki et al. 2016;Bai et al. 2016;Chambers 2019), are prone to different instabilities (e.g., Flock et al. 2017;Klahr et al. 2018), and will be affected by the presence of (especially massive) planets that may induce spiral density perturbations, lead to the formation of vortices, or open gaps (e.g., Lin & Papaloizou 1986;Crida et al. 2006;Pinilla et al. 2015;Lobo Gomes et al. 2015;Binkert et al. 2021).
Another important effect determining the disk gas composition is the evaporation of pebbles inside of icelines, which may significantly increase the local volatile content of gas in the disk (Piso et al. 2015;Booth & Ilee 2019;Schneider & Bitsch 2021a). This could lead to planets being enriched in volatiles much more than expected from pure gas accretion in the classical Öberg et al. (2011) setup. It is important to mention that the dynamics of pebbles is likewise determined by the disk structure, which sets the pebbles' growth rates, Stokes parameters (i.e., drift speeds), and may trap them in local pressure maxima (e.g., Paardekooper & Mellema 2006;Ormel & Klahr 2010;Birnstiel et al. 2012;Lambrechts et al. 2014).
Disk chemistry
The chemical composition of the protoplanetary disk (e.g., Henning & Semenov 2013) is of central importance for determining the composition of planetary building blocks, that is, the disk's gas and solid phases. In Öberg et al. (2011), and the more recent Öberg & Wordsworth (2019) study, a simplified and static disk chemical model is assumed. In practice, the disk's chemical composition will evolve because both the disk gas and the volatile ices on grain surfaces will undergo chemical processing (e.g., Eistrup et al. 2016;Molyarova et al. 2017;Eistrup et al. 2018). This means that chemical reactions between atoms and molecules in both gas and ice may alter which molecular species are the dominant carriers of elements such as C, O, and N, over time. This is an important effect, and is expected to alter the inferred formation history of a planet, as has been pointed out by Eistrup et al. (2016). Examples are the conversion of CO into CO 2 ice over time, or the conversion of N 2 gas into NH 3 ice (Molyarova et al. 2017;Eistrup et al. 2018;Semenov & Wiebe 2011). Another example of the importance of disk chemistry for planet formation is given by the processes that lead to the observed carbon depletion in the inner solar system (e.g., Mordasini et al. 2016;Cridland et al. 2019), which may be attributable to the irreversible chemical destruction of carbon grains within a disk's so-called soot line, or connected to chondrule formation (e.g., Kress et al. 2010;Gail & Trieloff 2017;van 't Hoff et al. 2020;Li et al. 2021).
The disk chemistry itself is sensitive to many processes, such as stellar evolution (Miley et al. 2021) or the cosmic-ray ionization rate (Eistrup et al. 2016;Schwarz et al. 2019). Moreover, whether the initial composition of the disk matter is molecular, that is, 'inherited' from the composition of the natal molecular cloud, or elemental (the 'reset' scenario) can strongly influence the C/O values (Eistrup et al. 2016). The disk's physical structure may also have an impact on its chemical evolution. As an example we highlight the effect of the self-shadowing of the disk, which allows compositions nominally associated with cooler regions to occur closer to the star than otherwise expected (Ohno & Ueda 2021).
Planet formation
The idea that planet formation may be constrained through planet composition, as presented in Öberg et al. (2011), conceptually boils down to comparing the planetary C/O value and total metal enrichment to the C/O of the disk's solid and gaseous phases as a function of orbital distance from the star. However, planet formation involves and connects many complex processes. This means that a forming planet cannot accrete arbitrary amounts of gas or solids at (or from) arbitrary positions in the disk.
For example, a limiting factor for planets forming via the core accretion paradigm (e.g., Mizuno 1980;Pollack et al. 1996), specifically when accreting pebbles, is the pebble isolation mass. Pebble accretion, which is the accretion of roughly cm-sized solids by the forming planet (e.g. Ormel & Klahr 2010;Lambrechts & Johansen 2012), can only dominate the solid accretion process until the growing planet reaches this isolation mass M iso (e.g., Lambrechts et al. 2014;Bitsch et al. 2018b;Ataiee et al. 2018), after which pebble accretion stops. This is because the planet induces the formation of a pressure bump in the disk, exterior to its orbit, which traps the inward-drifting pebbles. The isolation mass places an upper limit on the refractory content of a planet. A refractory content higher than allowed by this concept of M iso could point to the importance of accreting planetesimals (e.g., Mordasini et al. 2016;Brügger et al. 2020), unless the planet formed very close to its star, within the refractories' icelines (Schneider & Bitsch 2021b).
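As a rough illustration of how limiting this concept can be, the short sketch below evaluates the simple scaling M_iso ~ 20 M⊕ (h/0.05)^3 of Lambrechts et al. (2014), where h = H/r is the disk aspect ratio; the aspect-ratio profile is an assumption, and more refined fits (e.g., Bitsch et al. 2018b) include additional dependencies on viscosity and the pressure gradient.

```python
# Rough pebble isolation mass from the simple Lambrechts et al. (2014) scaling;
# the flared aspect-ratio profile below is an illustrative assumption.
def pebble_isolation_mass(r_au, h0=0.05, flaring=0.25):
    """Pebble isolation mass in Earth masses at orbital distance r_au (au)."""
    h = h0 * r_au ** flaring              # assumed aspect ratio H/r
    return 20.0 * (h / 0.05) ** 3

for r in (1.0, 5.0, 15.0, 35.0):
    print(f"r = {r:5.1f} au  ->  M_iso ~ {pebble_isolation_mass(r):6.1f} M_Earth")
```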
For planets growing in situ, a planetesimal isolation mass was established by Lissauer & Stewart (1993). It is caused by the planet depleting the local reservoir of planetesimals within the zone of its gravitational influence (the so-called 'feeding zone'). In contrast to the pebble isolation mass, the planetesimal feeding zone increases with planet mass. The different ways in which a planet's refractory content and total mass scale in planetesimal and pebble accretion models might thus allow limits to be placed on the contributions of the different solid reservoirs for a given planet. The planetesimal isolation mass can also be overcome via (giant) impacts or if a proto-planet migrates into regions of the disk still containing planetesimals (Alibert et al. 2005).
Moreover, the accretion of gas by the growing planet is a 3-dimensional process (e.g., D'Angelo et al. 2003;Ayliffe & Bate 2009;Szulágyi et al. 2014;Ormel et al. 2015;Schulik et al. 2019, 2020). This may be especially important as the gas composition and C/O value are thought to change above the mid-plane of the disk (e.g., Molyarova et al. 2017;Cridland et al. 2020a). Similar to the solid accretion processes, the amount of gas a planet can accrete during formation is limited. In contrast to the solids, however, the ultimately limiting factor is the lifetime of the protoplanetary disk. Once a planet enters runaway accretion, it will accrete gas as quickly as it can be provided by the viscously evolving disk, until the disk dissipates. More specifically, it has been shown that gas delivery to the planet can be severely limited by gap formation, which in turn is controlled by the disk's viscous resupply of the planetary feeding zone (e.g., Lubow et al. 1999;Lissauer et al. 2009;Ayliffe & Bate 2009;Schulik et al. 2020;Bergez-Casalou et al. 2020). For embedded planets below the gap-opening mass, gas accretion may be limited if radiative cooling is counteracted by hot inflowing gas from the ambient disk. In this case even gas that enters the planetary Hill sphere is not accreted (so-called 'recycling', Cimerman et al. 2017).
Another important process of planet formation is migration. The migration history of a planet is critical to its final composition, because it determines where in the disk it accretes material. Because the Öberg et al.
(2011) approach strives to ultimately constrain formation locations with respect to the disk icelines, migration may be less of a problem if a forming planet did not migrate across icelines. Interestingly, planets forming in the inner disk may actually be trapped at locations just beyond the water iceline (e.g., Bitsch & Johansen 2016;Cridland et al. 2016;Müller et al. 2021). There also exist other traps not connected to icelines, such as the disk's dead zone inner edge (Bitsch et al. 2014;Cridland et al. 2016). Qualitatively speaking, and when not currently trapped, planets are expected to migrate either by fast type-I migration or via slower type-II migration, the latter of which ensues once the planet is massive enough to open a gap in the disk (see, e.g., the review by Baruteau et al. 2016). Quantitatively, there is an ongoing debate about the actual magnitude of the torques (and therefore speed of migration), for example for type II migration (e.g., Dürmann & Kley 2015;Robert et al. 2018).
Further complicating the picture are N-body interactions between the forming planets. This process is now regularly included in models of planet formation and population syntheses, but comes at an increased numerical cost (e.g., Alibert et al. 2013;Chambers 2016;Lambrechts et al. 2019;Izidoro et al. 2021;Emsenhuber et al. 2020a). While increasing the complexity of describing planet formation, N-body interactions may represent an alternative avenue for producing hot Jupiters, which may have been scattered in by planets further out (e.g., Jurić & Tremaine 2008). N-body interactions can also alter the composition of a planet via giant impacts. The amount of solids brought into a giant planet via such impacts might be substantial or even dominant compared to the amount accreted from small bodies like planetesimals or pebbles (e.g., Emsenhuber et al. 2020b;Ginzburg & Chiang 2020;Ogihara et al. 2021). If the impactors originally formed in clearly different regions of the disk than the forming planet, this would blur the meaning of a well-defined formation location of a planet. Accounting for the effects of N-body interactions when trying to invert planet formation thus seems challenging. Interestingly, it has recently been shown that while N-body interactions tend to randomize the process of planet formation, machine learning techniques such as random forests still allow the outcome of planet formation to be predicted quite accurately (Schlecker et al. 2021). In this work the authors show that the initial parameters of the planet formation model described in Emsenhuber et al. (2020a), mainly the initial location of the planetary embryo and the dust mass of the disk, may be used to predict which class a forming planet will belong to (super-Earths, Neptune-like, giant planets, etc.). These classes also correspond to certain orbital distances and planetary compositions.
The discussion here focused mostly on planet formation via the core accretion paradigm. Other aspects are of importance if a planet forms via gravitational instability (GI, see Boss 1997). The disk structure (and thus formation environment of the planet) will be quite different in this case. This is because GI planets may form early, when the disk is still massive, in the outer parts of the disk (Boss 2021;Schib et al. 2021). We note that while GI is classically regarded as a way to produce wide-separation gas giant planets, it has also been suggested to allow for the formation of less massive, small-separation planets (Nayakshin 2010), caused by fast inward migration after formation and the associated mass loss, dubbed 'tidal downsizing'.
Planetary bulk-atmosphere coupling
When aiming to constrain planet formation based on the results of atmospheric abundance characterizations one must make assumptions about how the atmospheric composition relates to the bulk composition of a planet. It has been pointed out that planets growing via core accretion may have a layer of heavily metal-enriched gas above their solid cores, due to the evaporation of pebbles and planetesimals that are destroyed when entering the hot planet's proto-atmosphere (e.g., Mordasini et al. 2016;Helled & Stevenson 2017;Brouwers et al. 2018;Brouwers & Ormel 2020). The recent constraints by the Juno spacecraft on the interior of Jupiter are consistent with this assessment, pointing to the existence of a dilute core (Wahl et al. 2017;Debras & Chabrier 2019). We note that the exact distribution of metals in Jupiter's interior is difficult to explain, however, and may involve a formation process stretching over 2 Myr (see Helled et al. 2022, for a recent review). What is more, the planet bulk metallicity constraints for transiting planets derived in Miller & Fortney (2011) and Thorngren et al. (2016) tend to be higher than the metallicities reported for planetary atmospheres (albeit with large uncertainties, see Welbanks et al. 2019), making an enrichment of the interior with respect to the atmosphere a likely scenario, and allowing for a first estimate regarding the efficiency of planetary mixing.
The question that naturally arises from these findings is how representative the inferred atmospheric composition is of the bulk of the planet, even for giant planets, where the gas dominates the mass budget. For this it needs to be understood if and how well the metals can be mixed throughout the planetary envelope. Whether this happens at all is not clear, as a gradient in metallicity (and therefore mean molecular weight) may stabilize the planetary interior against convective motions (e.g., Ledoux 1947;Chabrier & Baraffe 2007;Leconte & Chabrier 2012). How a solid core or compositional gradients tend to mix throughout the planetary envelope for gas giant planets, often taking Jupiter as an example, is presently being investigated (Vazan et al. 2015, 2016;Moll et al. 2017;Vazan et al. 2018;Müller et al. 2020;Ormel et al. 2021). The results reported in these studies do not yet agree, predicting either fully mixed envelopes, or ones where a metal gradient (and core) persists in the planet. In addition to imperfect mixing, rainout processes also likely play an important role, potentially depleting both solar system and exoplanet atmospheres in metals (e.g., Spiegel et al. 2009;Wilson & Militzer 2010).
From the discussion above it becomes clear that the metal enrichment inferred from the atmospheric retrievals, which serves as an input for any formation analysis, is only a lower limit on the true planetary metal enrichment. As long as the metals are locked into the invisible interior of the planet homogeneously, and not selectively, this may still allow the solids' location of origin in the protoplanetary disk to be constrained, as long as the atmosphere is still enriched enough for solid accretion to be the dominating factor. In the case where the relative elemental composition in the atmosphere (except for H and He) is different from the deep interior, or when the atmosphere is depleted with respect to the deeper interior to the point where it mimics the atmospheric metallicities expected from pure gas accretion, this poses a problem. Depending on how planet formation ensued, the former case may still trace the origin of the solids that were accreted towards the end of the formation process. The latter case could be resolved by checking the relative atomic abundances in the presumed metal-poor planet. If refractory atomic species are relatively abundant, this may point to a dominant accretion of solids which are mostly locked into the planet's interior. An interesting recent discussion of how to constrain planet formation in the case where the atmospheric and planetary bulk compositions differ can be found in Helled et al. (2021), who come to similar conclusions. We note again that these avenues for analyzing the origin of the metals in a planet's atmosphere will be further complicated if volatile-rich gas from evaporated pebbles was accreted by the planet.
Atmospheric evolution
The atmospheric composition of an exoplanet may also evolve due to atmospheric evaporation, or through secular enrichment by infalling comets and asteroids. Evaporation is especially important for close-in low-mass planets (e.g., Jin & Mordasini 2018). The atmosphere may be partially or fully lost due to thermal or non-thermal processes, where thermal escape separates into the regimes of Jeans escape and hydrodynamical escape, depending on the local thermal state of the atmosphere (see, e.g., Barman 2018). For the atmosphere to become relatively enriched in metals by evaporation two criteria have to be met. Firstly, the atmosphere needs to be of low enough mass to allow for a significant amount to be lost. Secondly, the atmospheric escape process needs to preferentially retain the heavier atmospheric species, for which the atmosphere needs to be in the Jeans escape regime (e.g., Bourrier & Lecavelier des Etangs 2018). In the hydrodynamic escape regime the heavier metal species would be lost together with hydrogen. We note that this transition is gradual, however, and that there can be mass fractionation in hydrodynamic outflows as well, depending on the magnitude of the total mass flux (e.g., Hu et al. 2015). These authors also report that such outflows can selectively deplete the atmospheres of Neptune and sub-Neptune-mass planets in hydrogen over multiple Gyr, provided that the initial atmospheric mass is small enough (< 10^−3 of the planetary mass). For gas giant planets evaporation may thus be a less relevant process for changing the atmospheric composition. An extreme case that is worth mentioning is the Roche lobe overflow that may affect the closest-in gas giant planets. This process could strip away the upper gaseous envelope, potentially revealing the more metal-enriched layers below. Roche lobe overflow has been discussed as the potential origin of LTT 9779 b, a planet in the hot Neptune desert (Jenkins et al. 2020). By extension, significant atmospheric evaporation may lead to similar outcomes if a compositional gradient is present in the atmosphere.
Another possibility of atmospheric evolution is the secular contamination of the planetary atmospheres by infalling comets or asteroids. Here the frequency of cometary impacts and the persistence of the enrichment they cause in the atmosphere need to be estimated. Turrini et al. (2015) find that the additional water a comet may deposit in the visible atmosphere of the hot Jupiter HD 189733b would have to persist for 500-5000 years before being removed, assuming impacts of km-sized comets every 20-200 years; otherwise no significant enrichment is possible. A quantitative assessment of cometary enrichment requires an estimate of the persistence timescale, however. As this appears to be lacking from the literature, we present a simple first-order analysis below. We start by approximating local mixing timescales as

τ_mix = H_P^2 / K_zz ,    (1)

where H_P is the planetary pressure scale height and K_zz the atmospheric eddy diffusion coefficient. A K_zz value of 10^8 cm^2/s within the radiative zone is well within the estimates from passive tracers reported from model calculations for HD 189733b and HD 209458b (Parmentier et al. 2013a;Agúndez et al. 2014). For self-luminous planets the correct value to choose is unclear, but K_zz = 10^5 cm^2/s may at least be a useful lower limit (Ackerman & Marley 2001). In the deeper regions of the atmosphere convective overshoot may lead to higher values of K_zz, smoothly transitioning towards the value expected for fully convective atmospheres as the radiative-convective boundary (RCB) is approached (e.g., Ludwig et al. 2002;Helling et al. 2008).
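For illustration, the short sketch below evaluates Equation 1 for rough hot-Jupiter parameters; the temperature, gravity, and mean molecular weight are assumed values chosen only to show the order of magnitude.

```python
# Order-of-magnitude evaluation of Equation 1 (cgs units); the planetary
# parameters below are rough, assumed hot-Jupiter values.
k_B, m_H = 1.380649e-16, 1.6726e-24   # Boltzmann constant [erg/K], hydrogen mass [g]

T    = 1200.0   # K, assumed local temperature
g    = 2.1e3    # cm/s^2, assumed gravity
mu   = 2.3      # assumed mean molecular weight (H2/He-dominated)
K_zz = 1e8      # cm^2/s, radiative-zone eddy diffusion coefficient used in the text

H_P     = k_B * T / (mu * m_H * g)    # pressure scale height
tau_mix = H_P ** 2 / K_zz             # local mixing timescale

print(f"H_P ~ {H_P / 1e5:.0f} km, tau_mix ~ {tau_mix / 3.15e7:.2f} yr")
```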
Evaluating Equation 1 with these K_zz values and typical hot-Jupiter scale heights yields mixing timescales of less than a year, which shows that mixing potentially proceeds on faster timescales than those quoted by Turrini et al. (2015), at least locally, meaning that any material added by cometary impacts should be mixed away quickly. This does not preclude a slower, homogeneous enrichment of the radiative atmosphere by cometary impacts over time. However, the question is how quickly this enrichment will be removed into the bulk interior of the planet by entrainment into overshooting convective blobs at the radiative-convective boundary. By extension, the enrichment of the visible atmosphere may be lower if the impactors deposit their metals below the radiative-convective boundary: for the bulk of Jupiter, for example, the mixing timescale has been estimated to be at most a few years (Debras & Chabrier 2019).
Assuming diffusive mixing in a 1-d atmosphere, we derive an expression (given in Appendix A) for the increase in the mass fraction X of a given species in the planetary atmosphere due to cometary impacts. In this expression, Ṁ is the mass accretion rate of comets, g the planetary gravity, R P the planetary radius, H P the atmospheric pressure scale height, and P i the average pressure level at which the impacting comets are destroyed. P RCB is the location of the radiative-convective boundary or, alternatively, the location where deep mixing becomes fast enough to make the local mass fraction equilibrate to the average of the planetary interior (which is assumed to be well mixed). In any case it must hold that P i < P RCB for this expression. When assuming pure-water comets, and tracking the change in the mass fraction of water, a relative enrichment of ∆X/X 0 = 3 × 10^−4 to 3 × 10^−2 is found, where X 0 is the planet's bulk water mass fraction; see Appendix A for more details and for the values assumed for the various quantities. We therefore conclude that secular cometary enrichment of the planetary atmosphere is unlikely to be significant for giant planets, although better modeling of the process, for example of the lower boundary treatment in the 1-d diffusion approximation, is needed.
FROM PLANET COMPOSITION TO FORMATION OUTCOMES
From Section 2 it is obvious that a full inversion, from atmospheric composition to planet formation, is still a long way off. However, for qualitatively understanding the ramifications of different planet formation assumptions it would be useful to possess a tool that compares the inverted outcomes of such models. In this case the effect of a given process may be studied in isolation, allowing the user to get an intuition for the importance of a given assumption. This is in contrast to attempting to invert a full formation model, which may be either too numerically costly, or require too many parameters when compared to the limited number of observational characteristics.
In what follows, we will demonstrate such an analysis setup by starting with the inversion of the formation model used in Öberg et al. (2011) and Öberg & Wordsworth (2019) and applying it to the compositional constraints obtained for the directly imaged planet HR 8799e. As a second step, we will introduce the effects of either including chemical evolution of the protoplanetary disk, or of pebbles that drift and evaporate in the disk. Comparing the results of these setups for HR 8799e serves to highlight the likely importance of disk chemical evolution for its inferred migration history, and to assess whether pebble accretion is a likely scenario for this planet. We end this section by suggesting other toy model setups, testing for the influence of various formation model complexities described in Section 2.
Formation model inversion
Formation models produce synthetic planets, or populations thereof, starting from a physical model and a set of formation parameters. These can be, for example, the initial disk mass, the disk composition, and the starting position of a planetary embryo in the disk. When attempting to constrain planet formation based on measured planetary compositions, the formation models need to be inverted, because planet composition is an outcome of the formation models. More specifically, assuming that planetary compositions can be measured with a given uncertainty, we are interested in the probability distribution of the formation parameters ϑ of a formation model M, given this measurement C. Using Bayes' theorem, this can be written as

P(ϑ|C, M) = P(C|ϑ, M) P(ϑ|M) / P(C|M).    (4)

Here, P(ϑ|M) is the prior probability of ϑ before considering any data C, while P(C|ϑ, M) is the likelihood of observing C, given that ϑ is true. In practice this
may be written as, for example,

log P(C|ϑ, M) = −1/2 Σ_{i=1}^{N_species} [C_i − C_i^M(ϑ)]^2 / ∆C_i^2 + const. ,

where N_species is the number of measured atmospheric species, C_i^M(ϑ) is the formation model prediction of the planet abundance of species i, and ∆C_i are the measurement uncertainties. Here we chose a simple form of the likelihood for clarity, assuming that the measured abundances of different atmospheric constituents are independent and follow a Gaussian distribution. In general, the functional form of the log-likelihood can be arbitrarily complicated. For example, the abundance posterior of an atmospheric retrieval may be used directly, which can be approximated by, say, a Gaussian mixture model. In our application in Section 3 we chose an intermediate step, accounting for the covariance between the atmospheric oxygen and carbon content. We note that such an inversion process need not consider only elemental abundances as input measurements. Any observed property of a planet, such as its orbital parameters, could in principle be included in this analysis, as long as it is predicted by a formation model.
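A minimal sketch of this simple, independent-Gaussian log-likelihood is given below; the function names and the placeholder forward model are illustrative assumptions.

```python
import numpy as np

def log_likelihood(theta, measured, sigma, forward_model):
    """Gaussian log-likelihood (up to a constant), assuming independent
    measurements; forward_model(theta) returns the predicted abundances in
    the same order as `measured`."""
    resid = (np.asarray(measured) - forward_model(theta)) / np.asarray(sigma)
    return -0.5 * float(np.sum(resid ** 2))

# illustrative call with a trivial placeholder forward model
print(log_likelihood([1.0, 2.0], [5.0e-4, 9.0e-4], [1.0e-4, 2.0e-4],
                     lambda t: np.array([4.0e-4, 1.0e-3])))
```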
In practice, we will compute samples of the target distribution P(ϑ|C, M) by numerically integrating the numerator of the right-hand side of Equation 4 using the so-called Nested Sampling method (Skilling 2004). In short, Nested Sampling is a Monte Carlo technique to integrate functions in high-dimensional model parameter spaces; here it is the space spanned by the formation parameters. When integrating the numerator of Equation 4, whose value is the so-called model evidence, Nested Sampling automatically generates samples of our target distribution. In principle, model evidence ratios allow one to distinguish between different formation models. However, as long as we cannot invert full state-of-the-art formation models, this may be possible only when considering sets of assumptions that lead to wildly different outcomes, and where one clearly represents a better fit. We use the PyMultiNest (Buchner et al. 2014) package for inverting formation models, which is a Python wrapper of the MultiNest code (Feroz et al. 2019). A schematic illustrating our approach is shown in Figure 2.
For our first toy model we follow Öberg et al. (2011) and Öberg & Wordsworth (2019), assuming their static protoplanetary disk model as described for the young solar nebula, but with the icelines and composition adapted to the planet system of interest. For every volatile species we define a constant mass fraction in relation to the total disk mass. Inside of its iceline the volatile species is in the gas phase; outside, it is in the solid phase. This leads to the well-known step-like behavior of the C/O values in the solid and gaseous phases of the disk, as shown in Figure 1. More details on our implementation can be found in Appendix B.1.
We then assume that the planet formation process can be fully described by a set of four parameters, which ultimately map to the bulk composition of the planet:

ϑ = (M_P, M_solid, a_solid, a_gas),

where M_P is the total planet mass and M_solid is the mass of the solids (refractory species and volatile ices) accreted by the planet. It holds that M_P = M_gas + M_solid, which we use to determine the amount of gas a planet accreted. The parameters a_solid and a_gas denote the orbital distances where the solids and gas were accreted, respectively. For a given value of ϑ our setup then uses the disk model to determine the planet's accretion locations with respect to the disk icelines. Species in the gas phase are used to determine the composition of the accreted gas; the analogous is done for the solids. Together with the mass fractionation between solids and gas in the planet, this determines the planet's composition C.
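The following sketch illustrates, under strongly simplified and assumed numbers, how such a four-parameter toy forward model could map ϑ to atmospheric C/H and O/H, and how it could be handed to a nested-sampling routine. It is not the implementation used in this study: the iceline radii, solid and gas compositions, prior ranges, and the placeholder measurement values are all assumptions, and the commented call at the end assumes the pymultinest.solve interface.

```python
import numpy as np

# Assumed iceline locations (au) of the major volatile carriers.
ICELINES_AU = {"H2O": 2.0, "CO2": 10.0, "CO": 35.0}

def gas_composition(r_au):
    """Assumed C/H and O/H number ratios of the disk gas at r_au (illustrative)."""
    volatiles = {"H2O": (0.0, 3.0e-4), "CO2": (0.7e-4, 1.4e-4), "CO": (2.0e-4, 2.0e-4)}
    c_h = o_h = 0.0
    for species, (n_c, n_o) in volatiles.items():
        if r_au < ICELINES_AU[species]:      # species is gaseous inside its iceline
            c_h += n_c
            o_h += n_o
    return c_h, o_h

def solid_composition(r_au):
    """Assumed carbon and oxygen mass fractions of the solids at r_au."""
    components = {"refractory": 1.0}         # ice mass per unit refractory mass (assumed)
    if r_au > ICELINES_AU["H2O"]:
        components["H2O"] = 0.6
    if r_au > ICELINES_AU["CO2"]:
        components["CO2"] = 0.3
    if r_au > ICELINES_AU["CO"]:
        components["CO"] = 0.3
    frac = {"refractory": (0.15, 0.30), "H2O": (0.0, 16.0 / 18.0),
            "CO2": (12.0 / 44.0, 32.0 / 44.0), "CO": (12.0 / 28.0, 16.0 / 28.0)}
    m_tot = sum(components.values())
    f_c = sum(m * frac[k][0] for k, m in components.items()) / m_tot
    f_o = sum(m * frac[k][1] for k, m in components.items()) / m_tot
    return f_c, f_o

def forward_model(theta):
    """Map theta = (M_P [M_Jup], M_solid [M_Earth], a_solid [au], a_gas [au])
    to atmospheric C/H and O/H, assuming the accreted material is fully mixed."""
    M_EARTH, M_JUP = 5.972e27, 1.898e30            # g
    m_H, m_C, m_O = 1.67e-24, 2.00e-23, 2.66e-23   # atomic masses in g
    m_p, m_solid, a_solid, a_gas = theta
    m_gas = m_p * M_JUP - m_solid * M_EARTH        # mass of accreted H/He-rich gas
    c_h_gas, o_h_gas = gas_composition(a_gas)
    f_c, f_o = solid_composition(a_solid)
    n_H = 0.71 * m_gas / m_H                       # hydrogen atoms from the gas (X_H ~ 0.71 assumed)
    n_C = c_h_gas * n_H + f_c * m_solid * M_EARTH / m_C
    n_O = o_h_gas * n_H + f_o * m_solid * M_EARTH / m_O
    return np.array([n_C / n_H, n_O / n_H])

print(forward_model(np.array([10.0, 100.0, 40.0, 40.0])))

# Inversion with nested sampling; the block below assumes the pymultinest.solve
# interface and is commented out because it requires MultiNest to be installed.
# from pymultinest.solve import solve
#
# measured = np.array([5.4e-4, 9.8e-4])            # placeholder C/H and O/H constraints
# sigma    = np.array([1.5e-4, 2.5e-4])
#
# def prior_transform(cube):
#     """Map the unit cube to theta; the prior ranges are assumptions."""
#     return np.array([1.0 + 13.0 * cube[0],        # M_P:     1-14 M_Jup
#                      200.0 * cube[1],              # M_solid: 0-200 M_Earth
#                      0.5 + 69.5 * cube[2],         # a_solid: 0.5-70 au
#                      0.5 + 69.5 * cube[3]])        # a_gas:   0.5-70 au
#
# def loglike(theta):
#     resid = (measured - forward_model(theta)) / sigma
#     return -0.5 * float(np.sum(resid ** 2))
#
# result = solve(LogLikelihood=loglike, Prior=prior_transform, n_dims=4,
#                outputfiles_basename="hr8799e_toy_")
```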
Adding chemical time evolution
The Öberg et al. (2011) disk setup is convenient for conceptually studying the usefulness of C/O. As discussed in Section 2, there are many complicating factors that make a true inversion from C/O to planet formation parameters extremely challenging. In our second toy model setup we single out one of these processes, and study how chemical evolution in the protoplanetary disk changes inferences when compared to the static disk model.
As discussed before, chemical reactions may process the initial disk abundances over time, shifting carbon and oxygen atoms to different chemical species. For example, CO, which is very volatile, can be removed from the gas by surface reactions, processing it into less volatile CO 2 (Molyarova et al. 2017;Eistrup et al. 2018;Bosman et al. 2018;Schwarz et al. 2018). If a large fraction of CO gas in a given region of the disk mid-plane undergoes such processing into CO 2 ice, then an amount of elemental C and O equivalent to that initially carried in CO will be processed from the gas into the ice in this region. This is one example of how chemical processing in disks can alter the elemental partitioning of, for example, C and O, in the gas and ice. These processes are thus expected to influence inferences on the location of planet formation as first discussed in Eistrup et al. (2018).
To study the effect of disk chemical evolution in formation inversions, we replaced the static disk abundance model with a time-dependent model. For this we calculated the evolution of the disk chemical composition using the ANDES code, which describes a quasi-stationary 2-d axisymmetric protoplanetary disk (Akimkin et al. 2013;Molyarova et al. 2017). ANDES solves for the time-dependent chemical composition of the disk with a detailed description of grain-surface and gas-phase reactions; see Appendix B.2 for more details. For the initial disk setup we use the abundances given in Table 2, that is, equal to the disk abundances used for our static disk model. Although the chemical model includes other elements, we only consider H, He, C, O, and N-bearing species in the calculations. The refractories are considered to be chemically inert, and condensed at all times. An example of the resulting time evolution of the C/O ratio in the disk is shown as the yellow to dark blue lines in Figure 3. The process of planet formation is then modeled in the same way as in the static disk case, but with the difference that the compositions of the accreted gas and solids are taken from the disk's chemical evolution calculations. Formally this also allows the formation time to be added as an additional parameter to be constrained, but here we chose to initially only study and compare the inversion outcomes for different times during chemical evolution.
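In practice, using such chemical-evolution output inside the inversion amounts to looking up tabulated snapshots. The sketch below illustrates one way this could be organized, with a hypothetical table of gas-phase C/H values; the array names, grid, and numbers are placeholders rather than actual ANDES output.

```python
import numpy as np

# Hypothetical snapshot grids: radii [au], times [yr], and gas-phase C/H per
# (time, radius); analogous tables would hold O/H and the solid composition.
radii_au = np.linspace(0.5, 70.0, 200)
times_yr = np.arange(0.0, 5.0e6 + 1.0, 1.0e5)                # 1e5-yr snapshot spacing
c_h_gas = np.full((times_yr.size, radii_au.size), 2.7e-4)    # placeholder values

def gas_c_h(r_au, t_yr):
    """Gas-phase C/H at radius r_au, from the snapshot closest in time to t_yr,
    interpolated linearly in radius."""
    i_t = int(np.argmin(np.abs(times_yr - t_yr)))
    return float(np.interp(r_au, radii_au, c_h_gas[i_t]))

print(gas_c_h(15.0, 1.0e6))
```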
Studying the effect of pebble accretion
In addition to the two setups described above, we study the effect of pebbles drifting and evaporating in the protoplanetary disk, and their accretion, as a third scenario. In general, pebbles will quickly drift towards the central star in an unperturbed disk, because of the torque exerted by the head wind of the disk gas. This wind is caused by the radial pressure support of the gas, which makes it orbit the star at sub-Keplerian speeds. Therefore, unless there are local pressure enhancements that trap pebbles (caused by, e.g., a planet), pebbles will drain into the inner parts of the disk quickly, releasing copious amounts of volatiles into the gas phase when crossing their respective icelines (e.g., Booth et al. 2017;Booth & Ilee 2019;Schneider & Bitsch 2021a). Therefore, in addition to accreting pebbles directly, this process can be crucial for setting the composition of forming planets by gas accretion.
For modeling the effect of pebbles on the formation inversion of HR 8799e, we fed disk compositional structures from the chemcomp model (Schneider & Bitsch 2021a) into our inversion framework. In short, chemcomp solves for the pebble growth, drift, and evaporation in a viscously evolving disk. In addition, it includes a full planet formation model in the pebble accretion paradigm, handling planet-disk interactions such as gap opening and planet migration. Because chemcomp is too slow to be inverted directly, we use its pebble drift and evaporation prescription in an unperturbed disk to obtain a disk compositional structure as a function of time, to which we then apply our inversion framework, identically to the treatment of the disk's chemical evolution.
Application of toy model inversions to HR 8799e
As discussed in Section 1, deriving accurate and precise C/O ratios from current exoplanet observations is challenging, but this will likely change with JWST. Using high-resolution spectrographs, or the interferometric GRAVITY instrument at the VLT, the community is already starting to derive precise C/O values from the ground. This is due both to the quality of the data and to the use of state-of-the-art retrieval techniques (Brogi & Line 2019;Mollière et al. 2020;Pelletier et al. 2021;Line et al. 2021). Below we make use of the atmospheric composition derived for HR 8799e from GRAVITY observations (Mollière et al. 2020), and constrain how its derived formation history changes when disk chemical time evolution or pebbles are included.
In order to run the formation inversion for HR 8799e, a disk elemental composition needs to be assumed for HR 8799. HR 8799 is a λ-Boötis-type star; this means that the abundances measured for its iron-peak elements are subsolar, with values of [Fe/H] = −0.55 ± 0.10 (Sadakane 2006) or [Fe/H] = −0.52 ± 0.08 (Wang et al. 2020) having been inferred for iron specifically. A similar depletion is expected for Mg, Si, and other massive iron-peak elements, while the abundances of elements typically found in volatile species (C, N, O) are expected to be close to solar (e.g., Paunzen 2004). Indeed, the latest analysis of Wang et al. (2020) inferred [C/H] = 0.11 ± 0.12, [O/H] = 0.12 ± 0.14, and (C/O)/(C/O)⊙ = 0.96 ± 0.19, which are all consistent with solar, but slightly enriched in C and O. According to Wang et al. (2020), the most likely explanation for the observed composition of HR 8799 is recent accretion of volatile-rich material onto the outer layers of the star, for example from an evaporating hot Jupiter, or of volatile-rich ices scattered into the inner system by the four HR 8799 planets. We will therefore use the composition of the Solar System from Öberg & Wordsworth (2019) as our nominal abundance model, because the star's bulk elemental composition may well not be identical to its observed photospheric values. When relevant, we will also report on how our results change when taking the λ Boo abundances of HR 8799 at face value. The iceline positions in the HR 8799 disk are set to the values derived from the ANDES chemical evolution model, at t = 0.
For HR 8799e we make use of the atmospheric retrieval results reported in Mollière et al. (2020). Of relevance for the formation inversion are the derived values for the planetary mass, as well as the atmospheric metallicity and C/O ratio. As the mass is spectroscopically determined, it has large error bars. However, it still results in a constraint on the total amount of solids that the planet incorporated, which can also be estimated from multiplying the atmospheric metallicity by the planetary mass. More specifically, multiplying the planet mass with the inferred atmospheric metallicity results in a lower limit on the metal mass (in solid or gaseous state) that a planet accreted, which may be dominated by solids in cases of high metallicities. We used the actual posterior distributions on planetary gravity and radius from the spectral retrievals to construct the inversion prior for the planetary mass (effectively corresponding to a 1-σ upper limit of 14 M Jup ). We also study the effect of using a tighter mass constraint in the pebble inversion scenario.
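A small sketch of how such a mass prior can be built from posterior samples of gravity and radius via M = g R^2/G is shown below; the sample distributions are random placeholders, not the actual retrieval posteriors of Mollière et al. (2020).

```python
import numpy as np

G = 6.674e-8                        # cgs gravitational constant
M_JUP, R_JUP = 1.898e30, 7.149e9    # Jupiter mass [g] and radius [cm]

rng = np.random.default_rng(0)
# placeholder posterior samples, standing in for the actual retrieval output
logg_samples = rng.normal(4.0, 0.3, size=20000)              # log10(g [cm s^-2])
radius_samples = rng.normal(1.1, 0.1, size=20000) * R_JUP    # cm

mass_samples = 10.0 ** logg_samples * radius_samples ** 2 / G / M_JUP
lo, med, hi = np.percentile(mass_samples, [16, 50, 84])
print(f"M_P ~ {med:.1f} (-{med - lo:.1f}/+{hi - med:.1f}) M_Jup")
```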
Converting the atmospheric C/O ratio for use in the inversion method requires special care. In the spectral retrievals the metallicity is used as a free parameter to scale all elemental abundances except H and He, after which the C/O ratio is set by scaling O. In the formation model the formation location as well as relative gas-to-solid accretion fraction sets the O, C, and refractory metal content, from which a C/O ratio can be calculated. Therefore the C content is no longer strictly coupled to the refractory metal content, in contrast to the spectral retrievals which we use as input for our formation retrievals. This inconsistency has to be kept in mind when we use the C/O and metallicity of the spectral retrieval to obtain atmospheric C/H and O/H values, and compare these to the C/H and O/H predicted by the formation model. In general, independently constraining C/H and O/H in atmospheric retrievals is the better avenue for running formation inversion studies.
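The conversion itself is straightforward, as the short sketch below shows; the solar carbon abundance used as a reference is an approximate, assumed value.

```python
# Approximate solar carbon number fraction relative to hydrogen (assumed).
C_H_SUN = 2.7e-4

def retrieval_to_abundances(metallicity_dex, c_to_o):
    """Convert a retrieved metallicity [M/H] (dex) and C/O ratio into absolute
    C/H and O/H, mirroring the retrieval parameterization described above."""
    c_h = C_H_SUN * 10.0 ** metallicity_dex   # all metals scaled together
    o_h = c_h / c_to_o                        # oxygen then fixed by the retrieved C/O
    return c_h, o_h

print(retrieval_to_abundances(0.5, 0.6))
```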
We note that it is also important to account for the amount of oxygen that has been sequestered into atmospheric clouds. Because the C/O constraints from Mollière et al. (2020) include this effect, the atmospheric retrieval results can be used without modification. For C/O constraints from retrievals that constrain absorber abundances independently, corrections would need to be applied.
Moreover, sampling the C/O and metallicity posterior of the spectral retrieval leads to tightly correlated C/H and O/H values, and we take this into account by fitting the sampled values with a two-dimensional Gaussian distribution. The distribution's covariance matrix is then used to describe the uncertainties of C/H and O/H during the inversion process. If this were not done, the independent uncertainties in C/H and O/H (as obtained from the diagonal of the covariance matrix) would allow for a spread in C/O values much larger than obtained from the spectral retrieval, rendering a formation inversion meaningless.
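A sketch of this covariance treatment is given below: a two-dimensional Gaussian is fit to correlated C/H-O/H samples, and its inverse covariance matrix defines the correlated log-likelihood used during the inversion. The samples here are random placeholders standing in for the spectral-retrieval posterior.

```python
import numpy as np

rng = np.random.default_rng(1)
# placeholder correlated samples of log10(C/H) and log10(O/H)
samples = rng.multivariate_normal(mean=[-3.2, -3.0],
                                  cov=[[0.040, 0.035], [0.035, 0.040]],
                                  size=50000)

mu = samples.mean(axis=0)                 # best-fit Gaussian mean
cov = np.cov(samples, rowvar=False)       # best-fit covariance matrix
cov_inv = np.linalg.inv(cov)

def log_likelihood(model_log_ch_oh):
    """Correlated Gaussian log-likelihood (up to a constant) of a model prediction."""
    d = np.asarray(model_log_ch_oh) - mu
    return -0.5 * float(d @ cov_inv @ d)

print(log_likelihood([-3.1, -2.9]))
```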
Taking the spectral retrieval results from Mollière et al. (2020) as input, the inversion process finds a clear preference for the solids to stem from outside the CO iceline, or from within the H 2 O iceline. This is intuitively easy to understand: because the spectral retrieval resulted in a super-stellar atmospheric metallicity and a C/O ratio consistent with stellar, within the Öberg et al. (2011) framework the planet's metal content must be dominated by accreted solids, with the C/O ratio set by the accretion of solids. Accreting solids of stellar C/O is possible at orbital distances outside of the icelines of all major carbon- and oxygen-carrying species, the outermost being CO (see Figure 1). For HR 8799e's current orbital position of ∼15 au (Wang et al. 2018), this could mean that the planet underwent some orbital migration after solid accretion, as the CO iceline for a young A5 host star such as HR 8799 is expected to lie around 35 au. To illustrate this, we overplot today's orbital location of HR 8799e in the left panel of Figure 5.
Alternatively, due to the high organic carbon content of the refractories in our toy model, a roughly stellar C/O ratio is also attainable if the planet formed within the water iceline, and then migrated or was scattered outward to its current orbital position. This formation channel for distant giant planets was proposed, for example, in Marleau et al. (2019). While this is an intriguing result, we stress that it is dependent on the disk compositional model we assume, and on the formation model used in general. We also note that the high carbon content of the refractories in the inner disk may be unlikely; see our discussion in Section 2. The 1- and 2-d projections of the full posterior of the formation inversion are shown in the left panel of Figure 12 and discussed in Appendix C.
We also carried out inversions using the λ Boo composition of the star for the disk. This was done by increasing the oxygen and carbon abundance by 30 %, and decreasing the iron and silicate content of the refractory material by 70 %. The oxygen no longer bound in silicates was added to H 2 O, which is the dominant reservoir of oxygen in the protoplanetary disk. CO, the second most abundant oxygen reservoir, should not change, because the carbon content of the disk is not changed when applying the depletion of the iron-peak elements. Finally, the oxygen abundance is increased until C/O = 0.54 is reached, which is the value reported in Wang et al. (2020). We find that the most likely location of origin of the accreted solids is still outside the CO iceline. The formerly second most likely location, inside the H 2 O iceline, vanished: the solid C/O there is exclusively set by the refractory species, which now have a much higher C/O value of 2.6 because of the strong silicate depletion. The associated posterior is shown in the right panel of Figure 12, in Appendix C.
CHEMICAL DISK EVOLUTION
Next we analyze how robust the above findings are when adding chemical evolution of the disk composition. We thus ran a formation inversion of HR 8799e using the formation model that included chemical evolution. In practice this was done by inferring the planet formation parameters using the composition of the ANDES disk chemical models, as a function of time. ANDES computes the abundances as a function of altitude above the midplane. We used the resulting surface densities of the disk to determine the composition of the gas and solids. We assumed a young (1 Myr) host star at HR 8799's current mass and L = 3.58 L⊙ (Yorke & Bodenheimer 2008), with a disk that produces an accretion luminosity of 0.233 L⊙ (corresponding to 10^−8 M⊙ yr^−1), where L⊙ and M⊙ are the solar luminosity and mass. For a given inversion at time t we assumed that the disk composition is fixed at the value that the chemical evolution predicts at that time. This is an approximation, because it implicitly assumes that planet formation happens over a characteristic timescale < 10^5 yr, which is chosen as the time step between the snapshots of the disk composition. Because the goal of the present exercise is to study the zeroth-order effect that chemical evolution may have, we deem this approximation acceptable. Future applications could incorporate the time of formation as another free parameter, also assuming (or trying to infer) the duration of the planet formation process.
The results of the inversion including disk chemical evolution are shown in the upper right panel of Figure 4, indicating a good fit of the atmospheric composition. The right panel of Figure 5 shows the posteriors on the location where the planet accreted its solids (or, alternatively, where these solids originated in the disk) as a function of time. For reference, the iceline positions of the static disk model are indicated as well.
To understand the results of the inversions with chemical evolution it is useful to reconsider the underlying C/O distribution in the disk as a function of time, shown in Figure 3. For t > 0 it is seen that the disk gas C/O value outside the CO 2 iceline decays, while the C/O of the solid component increases. This is because CO is converted into CO 2 ice on the surfaces of dust grains outside the iceline of CO 2 over time. The conversion rate depends on the CO abundance in the ice, which drops rapidly inside the CO iceline. So while the reaction rate increases with temperature, the conversion is most efficient right inside the CO iceline (Bosman et al. 2018). Thus, the process occurs first for larger disk radii, and later for smaller radii, and drives the solid C/O towards the stellar value also inside the static CO iceline. We note that the ANDES model also included the formation of CH 3 OH ice. Because HR 8799e is found to have a C/O ratio similar to the stellar one, this means that the region for its most likely formation (or the region of origin of its accreted solids) expands inwards over time to include smaller disk radii. This effect is clearly visible in the right panel of Figure 5. We therefore confirm the findings presented in Mollière et al. (2020), where it was argued that processing CO gas into CO 2 ice may have a significant effect on the formation location of HR 8799e. As stated in Mollière et al. (2020), this also has consequences for how strongly HR 8799e may have migrated to reach its present-day orbit. If chemical evolution was significant in HR 8799e's natal protoplanetary disk, and if the exoplanet formed late enough, it may have migrated much less (or not at all) than in cases where it formed early.
In general, our findings emphasize the importance of disk chemical evolution for planet formation that has been reported in Eistrup et al. (2018). They also show that any analysis that tries to infer planet formation based on atmospheric compositions should compare the relevant chemical timescales to the timescales of planet formation.
Similar to the static disk case, assuming λ Boo-type elemental abundances for the chemical evolution is not expected to change the results significantly. Increasing the carbon and oxygen abundance by 30 % is within the modeling uncertainties of the disk chemistry, and the additional oxygen going into H 2 O due to the silicate depletion is irrelevant to the evolution of the CO iceline. In the inner part of the disk, within the CO 2 iceline, water ice is slowly destroyed to form CO 2 ice, which raises the solid C/O in the inner disk over time, similar to the CO condensation within the static CO iceline. The amount of available CO 2 that can be formed is independent of the H 2 O ice fraction to first order, and the timescale over which this happens is set by the cosmic ray ionization rate, so it is independent of the water concentration.
PEBBLE DRIFT AND EVAPORATION
In this section we model the effect of pebble drift and evaporation on the formation inversion of HR 8799e. This process can be crucial for setting the composition of forming planets. When neglecting pebble drift, planets whose atmospheric metal content is set by gas accretion are generally expected to have sub-stellar metallicities and super-stellar C/O values. In contrast, planets with an atmospheric metal enrichment dominated by solid accretion may have super-stellar metallicities, but sub-stellar C/O ratios (e.g., Öberg et al. 2011;Madhusudhan et al. 2014;Mordasini et al. 2016;Madhusudhan et al. 2017). In the case of pebble drift, however, evaporation of pebbles inside of the CO, CO 2 , and potentially the CH 4 icelines can lead to disk gas that is significantly enriched in these species, allowing for super-stellar metallicities and C/O ratios in the disk's gas phase, and therefore in the atmospheres of planets (e.g., Booth et al. 2017;Schneider & Bitsch 2021a,b).
Figure 7. Evolution of the CO concentration of the disk gas, normalized by the initial CO concentration, in the pebble drift and evaporation scenario. The earliest times are characterized by a spike in CO close to the CO iceline, due to pebble evaporation, followed by its viscous spreading.
gas phase, and therefore in the atmospheres of planets (e.g., Booth et al. 2017;Schneider & Bitsch 2021a,b). For setting up the chemcomp pebble disk model, we used the same initial disk surface density and temperature structure as for the disk's chemical evolution case. Likewise, the initial disk composition was fixed to the one described in Table 2. For the disk viscosity we chose an intermediate value of α = 5 × 10 −4 , where α is the usual dimensionless diffusion coefficient, in units of c s H, where c s is the local mid-plane sound speed and H the disk's pressure scale height (Shakura & Sunyaev 1973). This value is consistent with observational data on turbulence in protoplanetary disks, suggesting α of the order of 10 −3 − 10 −4 (Pinte et al. 2016;Flaherty et al. 2017Flaherty et al. , 2018. The disk viscosity is a key parameter for the pebble problem, with smaller values of α leading to larger pebbles, thus generally faster inward drift, and longer persistence time scales of the gas locally enriched by pebble evaporation. All solid material is considered to be in the form of pebbles, with an initial particle size of 1 µm, which then evolve by growth and drift (e.g., Birnstiel et al. 2012).
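As a concrete illustration of the α parameterization quoted above, the minimal Python sketch below evaluates ν = α c_s H for an assumed mid-plane temperature, stellar mass, and orbital radius; these three numbers are illustrative assumptions, not values taken from our disk model.

```python
import numpy as np

# Physical constants in cgs units
k_B = 1.380649e-16   # Boltzmann constant [erg/K]
m_H = 1.6726e-24     # proton mass [g]
G = 6.674e-8         # gravitational constant [cgs]
M_sun = 1.989e33     # solar mass [g]
au = 1.496e13        # astronomical unit [cm]

def alpha_viscosity(alpha, T, r_au, M_star=1.5 * M_sun, mu=2.3):
    """Shakura & Sunyaev (1973) viscosity nu = alpha * c_s * H.

    T      : assumed mid-plane temperature [K]
    r_au   : orbital radius [au]
    M_star : stellar mass [g] (assumed here, not a fitted value)
    mu     : mean molecular weight of H2/He gas (assumed 2.3)
    """
    c_s = np.sqrt(k_B * T / (mu * m_H))            # isothermal sound speed [cm/s]
    Omega = np.sqrt(G * M_star / (r_au * au) ** 3)  # Keplerian frequency [1/s]
    H = c_s / Omega                                 # pressure scale height [cm]
    return alpha * c_s * H                          # viscosity [cm^2/s]

# Illustrative numbers only: T = 25 K at r = 45 au around a 1.5 M_sun star
print(f"nu ~ {alpha_viscosity(5e-4, 25.0, 45.0):.2e} cm^2/s")
```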
The disk's resulting C/O values in the solid and gas phase are shown in the left panel of Figure 6. At t = 0 the disk C/O values reproduce our static setup. At larger times, however, the effect of drifting pebbles becomes noticeable very quickly. Pebbles drifting across the CO iceline will start enriching the gas phase in CO (also see Figure 7). Some of this gas diffuses outward again, condensing on the inward drifting pebbles, and increasing the pebble C/O value to unity just outside the CO iceline. The same effect is visible just outside the CO 2 and H 2 O icelines, where the solid C/O values reach 0.5 and 0, respectively. Away from the icelines the C/O of the solids remains largely unchanged, however. At the same time we note that the solid surface density will drop significantly over the simulated time due to pebble drift, by up to two orders of magnitude, while the gas surface density only drops by less than one order of magnitude. Inside the CO 2 iceline the C/O ratio of the gas immediately drops at t > 0 as CO 2 evaporates off the inward drifting pebbles. At later times the gas' C/O starts rising again as the CO gas that has evaporated off the pebbles inside the CO iceline reaches the inner disk regions, due to the disk's viscous evolution. An analogous evolution can be observed for the disk gas inside the H 2 O iceline; the gas' C/O value first drops significantly due to the water evaporating off the pebbles, but rises again at later times as gas enriched in CO 2 and CO viscously spreads inwards.
For studying the effect of pebble accretion, drift, and evaporation, we investigated four scenarios with our formation inversion setup. Case (i) is simply applying the disk compositional model, as determined by the pebble drift and evaporation framework, in the formation forward model. Case (ii): like (i), but putting an upper limit of 20 M ⊕ on the mass that can be accreted as solids, accounting for the concept of the pebble isolation mass (see Section 2.3, and Bitsch et al. 2018b). Case (iii): like (ii), but replacing the upper mass limit on the planetary mass from the spectroscopic retrieval (M P < 14 M Jup , Mollière et al. 2020) with a tighter prior from the dynamical mass estimate reported in Brandt et al. (2021), that is M P = 9.6^{+1.9}_{-1.8} M Jup . Case (iv): like (iii), but increasing the pebble isolation mass to 100 M ⊕ . The reasoning for testing these different cases will be discussed below, where we summarize the inversion results obtained for the different cases.
The compositional fit for Case (i) is depicted by the leftmost ellipse in the lower panel of Figure 4. In this scenario pebble drift, evaporation, and accretion are able to reproduce the observed abundance pattern of HR 8799e. Conceptually, Case (i) simply tests whether the results of the static disk model inversion change when introducing pebbles, but does not yet apply any prior knowledge on how pebbles are accreted, such as the concept of the pebble isolation mass. The reason for the good compositional fit of Case (i) becomes evident when studying the right panel of Figure 6, which shows the resulting posterior for the most likely accretion location of the solids for HR 8799e. Because the solid C/O values do not change significantly, except for just outside the icelines, the result that significant accretion of solids from outside the CO iceline is likely does not change when compared to the static setup of the disk composition. Just outside the CO iceline the probability goes down, however, because CO gas recondensing on the pebbles drives up the pebbles' C/O to values larger than the planetary one. What is noticeable is that the region inside the CO iceline at t > 0 is somewhat more likely when compared to t = 0. This is because the disk gas, enriched by CO from evaporating pebbles, and with C/O = 1, is of high enough metallicity to somewhat offset the C/O value of solids accreted inside the CO iceline, which is too low when compared to the planet. The enrichment of the disk gas in CO over time is shown in Figure 7. We note that the likelihood for accreting a significant amount of pebbles decreases over time, because pebbles will drain to the inner parts of the disk. We neglect this effect here. In Case (i) the upper limit on the planetary mass from the spectroscopic retrieval, together with a super-stellar atmospheric metallicity, leads to a 1-σ upper limit of 570 M ⊕ on the accreted pebble mass. Such a high value is inconsistent with the concept of the pebble isolation mass. Therefore we deem Case (ii), where we set an upper limit of 20 M ⊕ on the accreted solid mass, a more likely scenario. We note that the pebble isolation mass is a function of the disk viscosity α, and that it is very sensitive to the disk aspect ratio (M iso ∝ [H/r]^3, with H being the disk scale height). The value of 20 M ⊕ is what we derive for the HR 8799 disk model at the location of the CO iceline, using the scaling relations reported in Bitsch et al. (2018b). The compositional fit for Case (ii) is shown in the lower panel of Figure 4 (second ellipse from the left). Also in this scenario pebble drift is able to reproduce the observed abundance pattern of HR 8799e, but it leads to a generally somewhat lower planetary metal enrichment. We note that these results assume that all accreted pebbles are visible in the atmosphere, which is equivalent to full core dissolution and mixing. Moreover, in order to allow pebble enrichment to have a noticeable effect on the planet composition, the inversion constrains the planetary mass to < 3.5 M Jup . What is more, as a result of the prior limit on the accreted solid mass and the planetary mass prior, the inversion deems scenarios more likely where the composition of the accreted gas has more impact than in Case (i). The resulting probability distribution of the locations a_gas where the planet accreted its gas is shown in the left panel of Figure 8.
The most likely locations and times for the gas accretion correspond to the situation where the gas enriched by the evaporating pebbles reaches C/O values of approximately 0.6, corresponding to the planet's atmosphere (cf. left panel of Figure 6). In this scenario the most likely gas accretion location is inside of HR 8799e's current orbital position, which would require some outward migration if taken at face value.

Figure 9. Comparison of the compositional fit for Case (iii), that is M iso = 20 M ⊕ , M P = 9.6^{+1.9}_{-1.8} M Jup , at solar disk composition (left ellipse), with a setup where the λ Boo-type composition of HR 8799 was assumed for the disk instead (right ellipse). An offset was applied to these ellipses for clarity.
In Case (iii) we study the effect of enforcing an upper limit on solid accretion, due to the pebble isolation mass, and a tighter constraint on the planetary mass. The mass prior stems from a dynamical analysis based on the orbital characterization of the HR 8799 system, and accelerations from the Gaia-Hipparcos catalog (Brandt et al. 2021). The compositional fit for Case (iii) is shown in the lower panel of Figure 4 (second ellipse from the right). In this case the inversion struggles to reproduce the observed enrichment pattern of HR 8799e; while it fits the atmospheric C/O ratio well, the planetary enrichment is generally too low, but improves at later times. This is explained by the fact that the high mass prior assumed for HR 8799e, together with the low pebble isolation mass, does not allow the pebbles to play a significant role in the planet enrichment, while especially at early times the disk gas is not enriched enough by gas that has evaporated off the inward drifting pebbles. This situation is thus alleviated at later times, when the disk gas enrichment increases, but it is never enough to fully reproduce the planetary metal enrichment. The right panel of Figure 8 shows the probability distribution of a_gas. Because only gas accretion is able to affect the planetary composition noticeably in Case (iii), it is essentially a higher contrast version of the a_gas distribution of Case (ii), shown in the left panel.
Case (iv) essentially studies the case where the planet started forming very far outside the CO iceline, in the outer parts of the disk. Due to the disk flaring, H/r increases towards the outer disk, and we would find M iso = 50 M ⊕ , corresponding to H/r = 0.07, at 200 au. Because we are interested in an upper limit on what pebble accretion could contribute, we also assume that the disk viscosity is very high for the M iso calculation (α = 0.004, instead of the nominal 0.0005); this results in M iso = 100 M ⊕ . The corresponding enrichment pattern of the planet is shown in the lower panel of Figure 4, rightmost ellipse. Unsurprisingly, not only the planet's C/O but also its metal enrichment is better fit now, when compared to Case (iii), leading to a good fit overall. As expected, a_solid values outside the CO iceline are the most likely for this case, with some additional gas accretion from within the CO 2 iceline.
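To make the M iso numbers quoted above easier to trace, the short sketch below evaluates the cubic aspect-ratio scaling M iso ∝ (H/r)^3 stated earlier, normalized to the 50 M ⊕ we quote for H/r = 0.07; the α and pressure-gradient corrections of the full Bitsch et al. (2018b) relations are deliberately omitted, so this is only an order-of-magnitude check.

```python
def pebble_isolation_mass(h_over_r, m_ref=50.0, h_ref=0.07):
    """Cubic aspect-ratio scaling M_iso ~ (H/r)^3, in Earth masses.

    Normalized to the 50 M_Earth quoted for H/r = 0.07 in the text;
    corrections for alpha and the pressure gradient are ignored here.
    """
    return m_ref * (h_over_r / h_ref) ** 3

for h in (0.05, 0.07):
    print(f"H/r = {h:.2f} -> M_iso ~ {pebble_isolation_mass(h):.0f} M_Earth")
# H/r ~ 0.05 gives ~18 M_Earth, close to the 20 M_Earth adopted at the CO iceline.
```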
From our investigation it thus becomes evident that pebbles alone, for average M iso values, may not be sufficient to fully explain the observed abundance pattern of HR 8799e, even when making the assumption that all pebbles accreted onto the planet (likely onto the forming planetary core) mix into the visible atmosphere. This conclusion hinges on at least three assumptions. First, if the planetary mass was actually lower than reported in Brandt et al. (2021), enriching the planet by the accreted solid pebbles becomes easier. This is shown by our Case (ii), where the inversion, when imposing M iso = 20 M ⊕ , resulted in a good fit by constraining the planetary mass to below 3.5 M Jup . Next, if the pebble isolation mass is much higher than our baseline case (e.g., 100 instead of 20 M ⊕ ), which is possible for large disk viscosities and the planet initially forming far out in the disk, pebble enrichment becomes a likely scenario for explaining the abundance pattern of HR 8799e again. Lastly, if the composition of the disk is different from our baseline case, even M iso = 20 M ⊕ with the dynamical mass prior of HR 8799e (Case iii) becomes a likely scenario again. This is seen in Figure 9, where we show what happens when running Case (iii) again, but assuming the λ Boo-type composition of HR 8799 for the disk composition. Due to the carbon and oxygen content being ∼30 % higher in this case, the accretion of gas that is enriched by evaporated pebbles leads to a better agreement with the total atmospheric metal enrichment. We note that all of these conclusions are based on the spectroscopic retrieval result for HR 8799e, and a slightly lower retrieved metallicity would make pebble accretion more likely again. Due to the large uncertainties on the atmospheric metallicity, even the pebble scenario with the worst fit (Case iii) is only about one standard deviation away from the mean composition derived in the spectroscopic retrievals.
Lastly, it should be kept in mind that other likely important effects connected to pebbles were not studied here. For example, we neglected the effects that the outer HR 8799 planets may have had on the pebble flux that reaches the inner disk, and therefore HR 8799e's position. Outer planets may prevent pebbles from drifting inward and evaporating at the CO iceline (e.g., Bitsch et al. 2021; Schneider & Bitsch 2021a). It is unclear to what degree this effect is important here, because the giant planets may have formed late enough that some pebble drift may already have taken place in the disk before the pebble flux was shut off. In addition, if HR 8799e, the innermost planet, formed first (higher surface densities and shorter orbital periods, thus shorter accretion timescales), it may have been less affected by the formation of the outer planets.
Suggested toy models to study other formation aspects
Above we studied how inferences drawn from a simple formation model change if chemical evolution of the protoplanetary disk, or the drift, evaporation, and accretion of pebbles, is included. As discussed in Section 2, planet formation is the combination of quite a number of key processes. A concurrent formation inversion with all of these ingredients appears both numerically and conceptually unworkable at the moment. It will still be instructive, however, to add certain aspects of the planet formation problem to such inversion calculations, to study their influence in isolation, or to assess the magnitude of their importance for atmospheric compositions. In Table 1 we list the ways in which many of the aspects mentioned in Section 2 may be studied via inversion of the formation process.
To give an example, it would be straightforward to feed disk compositional models that include the disk's self-shadowing into the inversion framework. Such a process has been suggested by Ohno & Ueda (2021), where the shadowing is caused by a dust pile-up at the water iceline. Depending on the grain properties and densities, such a scenario may allow very volatile species such as CO, N 2 , and even noble gases such as Ar, to condense at distances from the star that would nominally be too warm. To study such an effect, various mid-plane disk and abundance structures, for differing dust density contrasts, could be explored.
Another instructive setup would be to further investigate the effect of incomplete mixing between the deep interior and the atmosphere of gas giant planets. As discussed in Section 2, the metal enrichment inferred from atmospheric characterization studies is only a lower limit on the true planetary metal enrichment. Where available, an upper limit could be placed based on analyses of planetary bulk metallicities, as obtained in Thorngren et al. (2016). The impact of metallicity gradients could potentially be studied by adding a parameter f mix which describes whether the metals accreted during formation fully mix (f mix = 1) into the atmosphere, or not (f mix = 0). As long as a planetary atmosphere is of super-stellar metallicity, f mix will simply be inversely correlated with the accreted solid mass (if pebble evaporation is neglected). Once a planetary atmosphere is of stellar or sub-stellar metallicity, f mix may also correlate with the formation location of a planet, depending on the disk's abundance structure. An example for this can be constructed by considering our inversion results for HR 8799e in the static disk picture. Because the atmospheric metallicity is high, and the planet has a stellar C/O value, the inferred atmospheric C/O ratio could only be reproduced by accreting solids from outside the CO iceline. If the planet's atmospheric metallicity were stellar, it could instead have formed at any location in the disk, as long as the location of gas accretion is equal to the location of solid accretion (measured with respect to the icelines). If f mix were added as a free parameter, small f mix values would again have yielded regions outside the CO iceline as the most likely region of origin for the solids accreted by the planet.
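As a minimal illustration of how such an f mix parameter could enter a toy inversion, the sketch below mixes an accreted-solid reservoir and an accreted-gas reservoir into a single observable atmospheric metallicity; the functional form and the metal fractions of the two reservoirs are our own simple assumptions, not a unique prescription.

```python
def atmospheric_metallicity(m_solid, m_gas, f_mix, z_gas=0.01, z_solid=1.0):
    """Toy atmospheric metal mass fraction for a two-reservoir planet.

    m_solid : accreted solid mass (any unit consistent with m_gas)
    m_gas   : accreted gas mass
    f_mix   : fraction of accreted solids mixed into the visible atmosphere
    z_gas   : metal mass fraction of the accreted gas (assumed)
    z_solid : metal mass fraction of the solids (assumed ~1 for pure ice/rock)
    """
    metals = f_mix * m_solid * z_solid + m_gas * z_gas
    total = f_mix * m_solid + m_gas
    return metals / total

# Fully mixed vs. poorly mixed case for the same accretion history:
print(atmospheric_metallicity(m_solid=100.0, m_gas=1000.0, f_mix=1.0))
print(atmospheric_metallicity(m_solid=100.0, m_gas=1000.0, f_mix=0.1))
```

For a super-stellar observed metallicity, the same atmospheric metal content can be matched either by a large accreted solid mass with small f mix or a small solid mass with f mix close to one, which is the anticorrelation described above.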
It is also conceivable to construct a three-component model for the formation inversion which separates the planetary mass into three reservoirs: solids accreted onto the core, solids accreted and mixed into the gaseous envelope, and the gas itself. Each of these would also be associated with a parameter that described where the corresponding material was accreted. One could then define an f mix for the core (or solids in the deep interior) which would describe the degree to which the deep core dissolves and mixes into the envelope.
Lastly, abundance constraints on the refractory content of a planet, as presented in Lothringer et al. (2021) for WASP-121b, for example, may be used to put an upper limit on the mass a planet accreted through pebbles. As long as the planet did not form very close to its star, where refractories may also enter the gas phase, refractories can only be incorporated into the planet by solid accretion. If the inferred amount of accreted refractories is higher than expected from the concept of the pebble isolation mass (see Section 2), an upper limit on the amount of accreted pebbles, as well as a lower limit on the amount of accreted planetesimals (or other impactors such as smaller planets, e.g., Ginzburg & Chiang 2020), may be constrainable. Similar constraints may be obtained from the cases where the volatile enrichment of
a planet is higher even than what pebble evaporation in a disk may provide: any additional volatile mass must then be accreted in the form of volatile ices.

Table 1. Aspects of planet formation and their potential treatment in toy formation model inversions.

Disk composition and structure
- Unknown disk elemental abundances: Scale using stellar [Fe/H], try varying composition according to scaling uncertainties.
- Available solid reservoir: Impose limit based on likely disk mass and dust-to-gas ratio.
- Disk (thermal) structure: Feed in disk structures from dedicated disk models. Explore if parameterizing 3-d effects in 1-d model is possible. Changes in disk structure will affect, e.g., iceline positions, as in Ohno & Ueda (2021).
- Planetary back-reaction on disk: Use simplified gap opening criteria to limit gas accretion, compare to disk lifetimes. Apply pebble isolation mass (limit refractory reservoir accessible to planet).
- Include pebble drift & evaporation at icelines: Increase gas metallicity inside of icelines as function of time, also see Sect. 3.2.

Disk chemistry
- Chemical evolution of disk: Run formation inversion with chemical composition as function of time. Also see Section 3.2.
- Inherited or 'reset' disk abundances: Explore impact of differing assumptions on disk abundances for the inversion process.
- Cosmic ray ionization and stellar irradiation: Use best guesses for retrievals, otherwise explore different values.
- Refractory carbon depletion in inner disk: Explore via on/off switch.

Planet formation
- Pebble and planetesimal accretion: Compare inferred solid (refractory) mass of planet with isolation masses. Constrain upper limit on accreted M pebbles , lower limit on accreted M planetesimals .
- 3-d planet accretion: Test impact of parameterizations, for example vertically averaged abundances for gas accretion.
- Planet migration: Allow to fit for multiple formation locations? Add priors on formation location: planet traps? Add priors enforcing inward migration (e.g., a_gas ≤ a_solid)?
- Leveraging full complexity of formation models: Explore use of machine learning techniques, e.g., random forest predictors as demonstrated in Schlecker et al. (2021).
- Planet formation by gravitational instability: Likely treatable, but requires changes, e.g., steady state viscous disk model → infall disk model.

Planet bulk - atmosphere coupling
- Metallicity gradient inside planet: Use multi-component model, infer mixing efficiency f mix ∈ [0, 1] to reveal correlations with other formation parameters.

Atmospheric evolution
- Evaporation: Can be important for lower mass planets or gas planets with a metallicity gradient. Use inverted evaporation models to reveal correlation with formation parameters?
- Infall of comets / asteroids: Better quantitative modeling needed. Potentially not important for gas giant planets.
FUTURE OBSERVATORIES AND A CENSUS OF ATMOSPHERIC COMPOSITIONS
In the previous sections we discussed that inverting atmospheric compositions to reveal the detailed formation history of a planet is hardly feasible at the moment: the process of planet formation is too complex, with too many unknowns, and likely too numerically costly to invert. However, the James Webb Space Telescope, the class of future ground-based Extremely Large Telescopes (ELTs), and later ARIEL will record high-quality spectra for hundreds of planets. In this section we summarize the compositional constraints that can be extracted from such atmospheric measurements and how the resulting atmospheric enrichment patterns for the planetary population may allow us to constrain planet formation in a broader sense.
C/O
The importance of the planetary C/O ratio for informing planet formation has been discussed in Sections 1 and 2. In these sections we also discuss the complications that likely make the picture more complex than suggested by the foundational study by Öberg et al. (2011). Pebble drift and evaporation may lead to super-stellar enrichments of the gas at C/O values both smaller and larger than stellar. Interestingly, however, super-stellar C/O values and enrichments are difficult to obtain without considering pebble evaporation (also see Section 3.2), so a large population of planets with such abundance characteristics may indicate a dominant role of pebbles for setting planetary abundances. Similarly, a large enough overall metal enrichment of a planet, especially if formed in the outer disk, may be difficult to explain from pebble evaporation, even for low disk viscosities, which would point more towards planetesimal accretion playing an important role.

Figure 10. Potential atmospheric visibility of various absorbers in planetary atmospheres. Every species or group of species shown here is known to be spectrally active. We searched the literature for the average atmospheric temperatures where these species are visible. Alternatively we used the equilibrium chemistry code described in Mollière et al. (2017) and checked for which temperature range the species is present in the atmosphere. Our standard assumption was solar metallicity and abundance ratios, and a pressure of 0.1 bar, whereas dissociation and ionization values were obtained from assuming pressures from 0.1 to 0.001 bar. We either assumed solar C/O (= 0.55) or C/O = 1.1. The temperatures given therefore should only serve as rough guidance, and do not necessarily correspond to a planet's effective temperature. We note that chemical transitions also depend on the metallicity, and the pressure at the planetary photosphere (therefore effectively also on the planetary gravity g). Moreover, many of these species can be affected by disequilibrium chemistry (see, e.g., Fortney et al. 2020), or be cold trapped into condensates (e.g., Spiegel et al. 2009; Parmentier et al. 2016). The chemical behavior of the species listed here is described in Section 4, and in Appendix D for the refractories.
In general, C/O is also popular because it determines the relative abundances of the spectrally active C- and O-bearing molecules in exoplanet atmospheres, such as H 2 O, CH 4 , CO, CO 2 , HCN, and C 2 H 2 . C/O therefore regulates the spectral appearance of a planet in the near- to mid-infrared (e.g., Seager et al. 2005; Fortney et al. 2005; Madhusudhan 2012; Moses et al. 2013; Mollière et al. 2015; Molaverdikhani et al. 2019a; Goyal et al. 2020; Hobbs et al. 2021b).
For reference, Figure 10 shows under which atmospheric conditions the absorbing species that trace the C/O ratio in gas-dominated planets may be visible. Also see, for example, Lodders & Fegley (2002) for a detailed description of the atmospheric chemistry. For temperatures below about 1000 K the atmosphere will be rich in H 2 O and CH 4 ; for higher temperatures these species will be converted into CO until either C or O runs out, depending on the C/O ratio. For high temperatures and C/O ≳ 1, CH 4 will thus be visible; for high temperatures and C/O ≲ 1, H 2 O will be visible. For further increasing temperatures and C/O ≳ 1, CH 4 is replaced by increasing amounts of C 2 H 2 and HCN (see, e.g., Madhusudhan 2012; Mollière et al. 2015). We note that the chemical transitions mentioned here also depend on the local atmospheric pressure (Mollière et al. 2015; Molaverdikhani et al. 2019a). CO can still be visible in cool atmospheres, especially of self-luminous brown dwarfs and planets, because atmospheric mixing may transport CO-rich gas from the deep (hotter) atmosphere to the photosphere (e.g., Zahnle & Marley 2014; Miles et al. 2020). Whether such disequilibrium abundances are expected for irradiated (often transiting) planets is less clear, because the insolation leads to more isothermal atmospheres. For planets which are still strongly cooling (with a high internal temperature) or heated by processes such as eccentricity damping, CH 4 may be strongly suppressed. As mentioned, ground-based high-contrast or high-resolution observations have started to obtain the first useful constraints on C/O. The state-of-the-art will greatly improve once JWST and, later, Ariel allow for a larger census of planetary compositions (also see our discussion in Section 1).
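The qualitative rules summarized in this paragraph can be condensed into a rough lookup, sketched below; the temperature boundaries are the approximate values quoted in the text and Figure 10, not the output of an equilibrium-chemistry calculation.

```python
def dominant_carbon_oxygen_carriers(T, c_to_o):
    """Very rough guide to the main visible C/O-tracing species.

    T      : representative atmospheric temperature [K]
    c_to_o : atmospheric carbon-to-oxygen number ratio
    The boundaries (~1000 K, ~1500 K) follow the approximate values quoted in
    the text; real transitions also depend on pressure, metallicity, and
    disequilibrium chemistry.
    """
    if T < 1000.0:
        return ["H2O", "CH4"]            # cool atmospheres: water and methane rich
    species = ["CO"]                      # hot atmospheres: CO locks up C or O
    if c_to_o < 1.0:
        species.append("H2O")             # leftover oxygen
    else:
        species.append("CH4")             # leftover carbon ...
        if T > 1500.0:
            species += ["C2H2", "HCN"]    # ... increasingly as C2H2 and HCN when hotter
    return species

print(dominant_carbon_oxygen_carriers(800.0, 0.55))
print(dominant_carbon_oxygen_carriers(2000.0, 1.1))
```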
N/O, N/C
The importance of atmospheric nitrogen-bearing species such as NH 3 or HCN for constraining exoplanet formation has been recognized recently, especially for planets forming in the outer regions of the solar and extrasolar disks. The reason for this is that nitrogen, predominantly in the form of N 2 in protoplanetary disks, is extremely volatile. Planets forming at increasingly larger distances, when dominated by solid metal enrichment, will therefore exhibit increasingly lower N/O or N/C ratios, and vice versa if dominated by gas metal enrichment. This is because several icelines of C- and O-bearing species are crossed towards larger orbital radii, while N 2 stays in the gas phase (Turrini et al. 2021). If the planet forms at wide enough orbital distances, eventually N 2 will freeze out as well, leading to an enhanced atmospheric nitrogen content, which will scale similarly with atmospheric metallicity as the abundances of C- and O-bearing species. The high nitrogen content of Jupiter has therefore led to the interpretation that Jupiter formed in the outer regions of the Solar System, beyond the location of the N 2 iceline at ∼ 30 au, which is also consistent with the planet's elevated abundance of noble gases (e.g., Owen & Encrenaz 2003; Bitsch et al. 2015; Öberg & Wordsworth 2019; Bosman et al. 2019; Cridland et al. 2020b). Similar to the discussion of C/O, the situation is likely more complicated also for the nitrogen enrichment. Both disk self-shadowing (see Ohno & Ueda 2021, and our discussion in Section 3.3) and pebble drift and evaporation (Schneider & Bitsch 2021b) are likely complicating factors. We also note that a planet that forms late within a protoplanetary disk's lifetime may be less sensitive to N 2 , as cosmic ray ionization may process N 2 into NH 3 ice over Myr timescales, such that the importance of NH 3 and its iceline increases over time, with the iceline of NH 3 being much closer to the star than that of N 2 (Semenov & Wiebe 2011).
In exoplanets the only spectrally active nitrogen-bearing species of relevance are NH 3 and HCN. N 2 , which is the dominant nitrogen carrier at higher temperatures, has negligible opacity in the near- and mid-infrared. NH 3 , on the other hand, should be detectable in exoplanet atmospheres in the mid-IR using JWST (e.g., Danielski et al. 2018). Moreover, evidence for NH 3 has been seen in high-resolution studies (Giacobbe et al. 2021; Sánchez-López et al. 2022). For C/O ≲ 1, NH 3 is only abundant up to ∼ 500 K (Lodders & Fegley 2002). HCN, on the other hand, will be visible for temperatures of 1500 K or larger, if C/O > 1 (e.g., Madhusudhan 2012; Mollière et al. 2015). We indicate these detectability ranges in Figure 10. Chemical disequilibrium may (or may not) allow for NH 3 or HCN to be visible at intermediate temperatures (500 K < T < 1500 K) in irradiated planets as well (see, e.g., MacDonald & Madhusudhan 2017; Fortney et al. 2020; Hobbs et al. 2021a, and the references therein). For self-luminous planets disequilibrium chemistry may play less of a role for N, as iso-abundance lines are parallel to atmospheric pressure-temperature profiles (Zahnle & Marley 2014). As before, all chemical transition temperatures also depend on the atmospheric pressure.
We note that the chemical behavior of the N-, C-, and O-bearing species described here mostly hinges on chemical equilibrium or simple atmospheric disequilibrium treatments, also considering the planetary atmospheres to be one-dimensional and of mostly scaled solar abundances (except for the C/O ratio). The recent and intriguing results of Giacobbe et al. (2021), who detected H 2 O, CO, HCN, C 2 H 2 , NH 3 , and CH 4 in the atmosphere of HD 209458b (with an equilibrium temperature of ∼ 1500 K), are a reminder that atmospheric chemistry may be much more complex than discussed above. An important effect is the horizontal advection of chemical abundances predicted from coupling chemical models to the output of 3-d general circulation models (e.g., Agúndez et al. 2014; Baeyens et al. 2021), which could also be connected to condensate rain-out (Sánchez-López et al. 2022). Photochemistry is also important, especially in the upper atmospheric layers (e.g., Venot et al. 2012; Kopparapu et al. 2012; Molaverdikhani et al. 2019b). Telescopes such as JWST, current high-resolution spectrographs, and ultimately instruments mounted on ELT-class telescopes will allow us to investigate these effects more thoroughly.
R/O
Measuring the refractory content of an atmosphere could provide unique insight into a planet's formation history. As has been argued recently by Lothringer et al. (2021), measuring the refractory-to-oxygen ratio R/O of a planet constrains the importance of metal enrichment by rocky accretion relative to icy or gaseous accretion. Here R stands for any element that traces the refractory content of the planet (Fe, Na, K, Si, Mg, Ti, ...), or an average of such elements. As argued further, this may allow the placement of constraints on whether the planet (or its solid building blocks) migrated significantly during formation. We argue that R/O may potentially even be useful to constrain the relative importance of pebble and planetesimal accretion, in the core accretion paradigm, also see Section 3.3 and the discussion in Schneider & Bitsch (2021b).
In Figure 10 we indicate the temperature ranges over which various refractory-tracing atmospheric absorbers are visible. We refer the reader to Appendix D for a discussion of the chemistry of the refractory-tracing absorbers. Lothringer et al. (2021) put emphasis on ultra hot Jupiters, for which various refractory elements exist as molecules (metal oxides or hydrides), atoms, or ions in the gas phase. We also note that species such as H 2 S and PH 3 may be useful refractory tracers at intermediate atmospheric temperatures (Wang et al. 2017; Öberg & Wordsworth 2019). While H 2 S and PH 3 are volatile species, the dominant carriers of P and S atoms in a protoplanetary disk appear to be refractory species (Öberg & Wordsworth 2019). Moreover, the abundances of Na and K in planetary atmospheres may be worthwhile tracers of the refractory content (Welbanks et al. 2019).
Refractory cloud species may affect planetary spectra by muting molecular features and reddening the spectral energy distribution. Silicate particles like MgSiO 3 and Mg 2 SiO 4 are especially interesting, as they may lead to visible absorption features around 10 micron (e.g., Cushing et al. 2006; Wakeford & Sing 2015). Due to the complex micro-physical problem of cloud formation (e.g., Rossow 1978; Powell et al. 2018; Woitke et al. 2020), measuring a refractory abundance from observed cloud absorption may prove difficult, however. Moreover, clouds may complicate measuring and interpreting the abundances of gaseous refractory species due to cold trapping by condensate rainout (e.g., Spiegel et al. 2009; Parmentier et al. 2016). An interesting alternative to silicate clouds could be searching for the absorption of gaseous SiO at ∼7 micron with JWST's MIRI instrument. SiO is promising because it is the most abundant Si-bearing gas species after the silicates evaporate (Visscher et al. 2010), and is more stable than H 2 O against dissociation (by about 500 K). This should allow detecting this species in ultra hot Jupiters.
Other metal oxides such as TiO, VO, AlO, and CaO have features in the optical and near-infrared (e.g., Sharp & Burrows 2007; Lothringer et al. 2020). Similarly, metal hydrides may be useful refractory tracers, at similar temperatures as the metal oxides. Species such as FeH, CaH, MgH, NaH, CrH, and TiH all have absorption features in the optical and near-infrared (Sharp & Burrows 2007). Metal atoms are visible in the atmosphere once the refractory clouds are no longer present (e.g., Mg, Fe), or once the dominant molecular species (such as SiO for Si) have been dissociated. Mg, Fe, Ca, Cr, Ni, V, Na, and maybe Co have been detected in the ultra hot atmospheres of KELT-9b and WASP-121b in the optical (Hoeijmakers et al. 2019, 2020). Finally, metal ions become visible in the hottest atmospheres as soon as the atoms have been ionized. This has led to the detection of Fe+, Ti+, Cr+, Sc+, Y+, and maybe Sr+ in the hottest known exoplanet, KELT-9b, in the optical (Hoeijmakers et al. 2019).
DISCUSSION AND SUMMARY
Inferring the formation history of a planet, based on its atmospheric composition, is one of the most cited goals of the atmospheric characterization community. In our work we take a look at what obstacles need to be overcome to make such an inversion feasible.
Summarizing the complex and interconnected processes that govern planet formation (see Section 2), we conclude that actually inverting planet formation in this way is still a long way off, if possible at all. Current formation models are likely too complex (too many free parameters), too uncertain (which processes to consider, which assumptions to make for them), and too numerically costly (N-body interactions, dust evolution, hydrodynamical evolution, disk chemical evolution, etc.). Many of these problems may be alleviated in the coming years or decades, but the degree to which such a full formation inversion will ever become possible is difficult to assess at the moment. As an interesting avenue for inverting full, state-of-the-art formation models, we want to highlight the recent work by Schlecker et al. (2021), where a random forest technique was used to predict planetary formation outcomes based on formation model input parameters. It remains to be seen to what extent this method can be used to predict planetary abundances.
Apart from this conclusion, we also introduce a method that allows one to study and compare the qualitative impact of different assumptions made in the modeling of planet formation, see Section 3. Taking measured planetary compositions as observations, we use nested sampling to invert simplified formation models, constraining their corresponding formation parameters. Due to the challenges mentioned above, such invertible formation models cannot be complex enough to yield reliable results on a given planet's formation process. However, they may allow one to study the importance of various formation aspects in isolation. As an example, we show how the deduced formation history of the directly imaged planet HR 8799e changes if the composition of the protoplanetary disk in which it forms is allowed to evolve chemically. We find that chemical evolution may significantly affect the migration history inferred for this planet; the planet may have migrated much less if chemical evolution is taken into account. What is more, we show that the drift, evaporation, and accretion of pebbles is able to reproduce the planetary C/O value, but whether it can reproduce the inferred high atmospheric metallicity depends on the assumptions made for the disk viscosity, the pebble isolation mass, and the disk composition. We end this section by suggesting a number of other formation processes that could be studied in a similar way, for example metallicity gradients and ineffective mixing of the planetary interior.
While the detailed inversion of planet formation may still be far in the future, it is clear that the atmospheric abundance constraints obtained with new and upcoming instruments will be crucial to inform planet formation models in a broader sense. In Section 4 we summarize under which atmospheric conditions various spectrally active atmospheric species that trace the C/O value (H 2 O, CO, CH 4 , CO 2 , C 2 H 2 , HCN), nitrogen content (NH 3 and HCN), and refractory content (H 2 S, PH 3 , alkalis, refractory clouds, metal oxides, hydrides, atoms, and ions) may be observable in H/He-dominated atmospheres. Instruments such as GRAVITY, CRIRES+ (or other high-resolution spectrographs), JWST, and facilities further in the future like ARIEL and the ELTs will obtain abundance constraints for many of these species. We discuss how the C/O values derived for the atmospheric composition of exoplanets may allow us to constrain the importance of pebble drift and evaporation, and how the refractory content of a planet may constrain the relative contributions of planetesimal and pebble accretion.
Making the connection between atmospheric abundances and formation a reality seems daunting, but the likely transformative nature of many upcoming observational facilities will lead to more precise atmospheric abundance constraints for exoplanets than ever before. The constraints obtained from these observations will need to be put into context, to assess what information on planet formation may be gleaned from them. With these data one may begin assessing the degree to which planet formation can indeed be informed by the atmospheric composition of exoplanets.
We would like to thank the anonymous referee for their detailed report, which greatly improved the quality of this work.

A. ATMOSPHERIC ENRICHMENT BY IMPACTS

Here we derive Equation 2, which estimates the change in mass fraction of a given species due to the enrichment of a planet's atmosphere by impacts, also see Section 2.5. We also discuss the case of pure water comets increasing the atmospheric water content of a Jovian planet.
To begin we estimate the mixing in the atmosphere by 1-d diffusion. We thus write, using the usual 1-d diffusion equation for concentrations (e.g., Parmentier et al. 2013b): where ρ is the atmospheric density, X the atmospheric water mass fraction, t the time, z the atmospheric altitude, Ṁ the mass accretion rate of pure-water comets, and R P the planetary radius. W (z, z i ) is defined as with Θ being the Heaviside step function. This means that we assume that the comets are destroyed in a narrow layer of width ∆z i , at altitude z i in the atmosphere. Using the equation of hydrostatic equilibrium (∂P/∂z = −ρg), together with the equation of state of an ideal gas (P = ρk B T /µ) and that H P = k B T /µg, one finds that one can express Equation A1 as where P is the atmospheric pressure, g the gravity, k B the Boltzmann constant, T the atmospheric temperature and µ the atmospheric mean molecular weight. For a steady state ansatz and integration over P one obtains that for P ≥ P i + ∆P i /2, and ∂X/∂P = 0 for P ≤ P i − ∆P i /2, and a linear transition between these two cases for P ∈ (P i − ∆P i /2, P i + ∆P i /2). In the following we assume that ∆P i ≪ P i . Thus it holds that the enrichment in units of mass fractions ∆X = X − X 0 at P i and at lower pressures (higher altitudes) is where K zz was assumed to be constant for simplicity. P RCB denotes the location of the radiative-convective boundary or, more generally, the altitude in the planet below which the planet is well mixed, with X(P > P RCB ) = X 0 and P RCB > P i . From Equation 2 we see that ∆X will become large for small P i and large P RCB . Pinhas et al. (2016) found that water ice comets of 1 km in size will be destroyed by 100 bar if impacting Jupiter at terminal velocity. From our condition that P RCB > P i this means that we need to consider P RCB > 100 bar. For warm Jupiters the maximum depth of the RCB may be as deep as P RCB = 200 bar (Sarkis et al. 2021). Then assuming X 0 = 10 −3 , M P = 1 M Jup , R P = 1 R Jup , H P = 200 km, K zz = 10 8 cm 2 s −1 , P RCB = 200 bar and P i = 100 bar results in a relative enrichment of ∆X/X 0 = 3 × 10 −4 , if a very high impact rate of 10 5 comets of 1 km size per year is assumed. This would correspond to 275 impacts per day (and would double the total water content of the planet in 5 × 10 6 yr). Therefore, combining reasonable estimates for the parameters describing the atmosphere with a very high cometary impact rate would not really change the water content of the planet's atmosphere (neglecting the change in X 0 over 5 × 10 6 years). An obvious caveat of our toy model is the assumption that the planet will mix any pollution away instantaneously at pressures larger than the radiative-convective boundary. The deep convective K zz , estimated from mixing length theory, may be in the range of K zz = 10 9 cm 2 s −1 (see, e.g., Equation 4 of Zahnle & Marley 2014). Extending the integration domain to 2 × 10 4 bar and setting K zz = 10 9 cm 2 s −1 for P > P RCB leads to ∆X/X 0 = 3 × 10 −2 when numerically integrating Equation A4, that is, 100 times higher, but still too low.
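The impact-rate bookkeeping in this estimate can be checked with a few lines of arithmetic, sketched below; interpreting the quoted comet "size" of 1 km as a radius and adopting an ice density of 1 g cm^-3 are our assumptions, not values stated explicitly above.

```python
import numpy as np

M_jup = 1.898e30          # Jupiter mass [g]
X0 = 1e-3                 # background water mass fraction, as in the text

rate_per_yr = 1e5         # comets per year (the very high rate used in the text)
r_comet = 1e5             # comet radius [cm] (1 km, assumed to be the radius)
rho_ice = 1.0             # ice density [g/cm^3] (assumed)

m_comet = 4.0 / 3.0 * np.pi * r_comet**3 * rho_ice   # mass per comet [g]
mdot = rate_per_yr * m_comet                          # accreted water [g/yr]

impacts_per_day = rate_per_yr / 365.25
doubling_time = X0 * M_jup / mdot                     # years to deliver X0 * M_jup of water

print(f"{impacts_per_day:.0f} impacts per day")        # ~274, i.e. the ~275/day quoted
print(f"water doubling time ~ {doubling_time:.1e} yr")  # ~5e6 yr, as quoted
```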
Finally, to demonstrate the good agreement of our analytical Equation A5, we show in Figure 11 a comparison to Equation A3, numerically integrated to very long times t to obtain the limiting case t → ∞. For this comparison 20 comets per year of 20 km size were assumed, which are destroyed at an unrealistically low P i = 1 bar (to obtain non-negligible ∆X/X 0 values).

B.1. Static disk composition

Our static disk composition setup follows Öberg & Wordsworth (2019), which assumes a static power-law for the disk temperature and density of the young solar nebula. Inside its iceline position a given volatile species is in the gas phase, outside it is in the solid phase. For every species the mass fraction compared to the total disk mass is tabulated, in addition to the mass fractions of the constituent atoms within a volatile species. We also account for refractory material, which we include in our framework by setting its iceline position to zero. The background gases H 2 and He are included by setting their iceline to 1000 au. This ensures that the refractory and background species stay condensed/gaseous within the simulation domain. Table 2 lists the mass fractions, iceline positions, and atomic composition of all considered disk species and their constituent atoms. We note that the iceline positions given here are those expected for the disk around HR 8799, which have been obtained from our ANDES disk model, see Section 3.2.
The mass fractions were obtained using the provisional proto-solar nebula composition from Öberg & Wordsworth (2019) with some modifications, see below. The abundance of a species i is given in Öberg & Wordsworth (2019) as a number fraction x i relative to the number of hydrogen atoms n H . This was converted into mass fractions m i relative to the total disk mass using m i = x i µ i /1.4 (Equation B6), with µ i being the molecular mass of species i in atomic mass units. This expression is obtained from the disk's total mass M tot and the mass M i of species i, setting M tot ≈ n H µ H + n He µ He , where it was assumed that most of the disk mass is contributed by H and He atoms. Setting µ H = 1, µ He = 4, as well as assuming that n He /n H = 0.1 (see Table 8 in Lodders 2019, for the recommended proto-solar abundances), leads to the relation given in Equation B6. The refractory composition model was likewise constructed using the information given in Öberg & Wordsworth (2019). In their model, this results in 30 % of all oxygen in the form of refractory silicates, identical to the amount of oxygen in H 2 O. We assumed that the silicates consist purely of MgSiO 3 , which also conserves the solar Mg/Si abundance ratio, which is close to unity (Asplund et al. 2009; Lodders 2019). The refractory carbon component plus volatile organics account for 50 % of all carbon atoms, with a 3:1 ratio between the two. To simplify, we added the carbon of the volatile organics by increasing the CO abundance, taking the required oxygen from the water mass reservoir. Because the fate of organic carbon, especially in the inner part of the disk, is uncertain anyway (see, e.g., Mordasini et al. 2016; Cridland et al. 2019, and the references therein), we decided to forego a more careful treatment of the organic carbon reservoir for our conceptual study here. Iron, sulphur, and phosphorus atoms were assumed to only be present in the refractory phase. The C, Fe, S, and P abundances, relative to H, were taken from Asplund et al. (2009). Our resulting C/O ratio distribution for the disk solid and gas components is shown in Figure 1.
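The number-fraction to mass-fraction conversion described above is simple enough to sketch directly; the species list and abundance values in the example below are placeholders for illustration and are not the entries of Table 2.

```python
def number_to_mass_fraction(x_i, mu_i, n_he_to_h=0.1, mu_h=1.0, mu_he=4.0):
    """Convert an abundance x_i (per H atom) into a mass fraction of the total disk mass.

    Assumes the disk mass is dominated by H and He, as described in the text,
    so that M_tot ~ n_H * (mu_H + (n_He/n_H) * mu_He) = 1.4 n_H amu.
    """
    return x_i * mu_i / (mu_h + n_he_to_h * mu_he)

# Illustrative (not Table 2) abundances per hydrogen atom: (x_i, molecular mass in amu)
example = {"H2O": (1.0e-4, 18.0), "CO": (1.5e-4, 28.0), "CO2": (3.0e-5, 44.0)}
for name, (x_i, mu_i) in example.items():
    print(f"{name}: m_i = {number_to_mass_fraction(x_i, mu_i):.2e}")
```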
B.2. Disk chemical evolution
Here we describe the ANDES chemistry model used for the formation inversion that includes the evolution of the disk's chemical composition (see Section 3.1.2). In ANDES, surface reactions are described by the Langmuir-Hinshelwood mechanism and are not limited to hydrogenation. Tunneling of H and H 2 through reaction barriers is also included. Any dynamical effects on the distribution of C and O, such as drifting grains, are omitted. The surface density profile is described by a power law with an exponent equal to 1.5. The chemical network is based on ALCHEMIC (Semenov et al. 2010; Semenov & Wiebe 2011), with updated binding energies from Cuppen et al. (2017). It incorporates the effects of XUV irradiation, cosmic rays, and radionuclides as ionization sources. The dust size distribution is described by a power law with p = −3.5 between 0.005 and 25 µm, which reflects dust growth in disks compared to the ISM. It is used to calculate the radiation field and dust temperature in the disk's upper layers. An average grain radius of 0.35 µm is used for calculating surface reaction rates. The disk abundances are initialized assuming that all volatile species are in the gas phase, using the same abundances as used for the static disk model, see Table 2.
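For reference, the 0.35 µm average grain radius quoted for surface reactions is consistent with the area-weighted mean radius ⟨a³⟩/⟨a²⟩ of the p = −3.5 size distribution between 0.005 and 25 µm; the sketch below evaluates this moment ratio, and identifying the quoted value with this particular average is our assumption.

```python
from scipy.integrate import quad

a_min, a_max = 0.005, 25.0   # grain radii [micron], as in the text
p = -3.5                     # size-distribution exponent, n(a) ~ a^p

def moment(k):
    """Integral of a^k * n(a) da for the power-law size distribution."""
    return quad(lambda a: a**k * a**p, a_min, a_max)[0]

a_surface = moment(3) / moment(2)   # area-weighted mean radius <a^3>/<a^2>
print(f"<a^3>/<a^2> = {a_surface:.2f} micron")   # ~0.35 micron
```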
C. FULL INVERSION POSTERIOR OF HR 8799E
In the left panel of Figure 12 we show the 1- and 2-d projections of the full posterior resulting from the formation inversion of HR 8799e, as discussed in Section 3.2, using the static disk composition. The posterior of the planetary mass closely follows the mass prior, which we sampled by using the spectral retrieval results for HR 8799e, namely log(g) = 4.0 ± 0.5 and R P = 1.12 ± 0.09 R Jup , as reported in Mollière et al. (2020). We neglected the error on R P , assumed that log(g) follows a Gaussian distribution, and converted to mass via M P = 10^{log(g)} R P ^2 /G, where G is the gravitational constant. A flat prior was assumed on the formation/accretion locations. The prior on the accreted planetesimal mass was taken to be flat, ranging from 0 to 1000 M ⊕ . The posterior of the accreted solid mass can be explained considering the atmospheric metallicity that was used as an input to the formation inversion ([Fe/H] = 0.48 ± 0.25), and the total mass of the planet. The increased probability of the planet having accreted solids from outside the CO iceline or inside the H 2 O iceline (discussed in Section 3.2) is visible. There also exists a less likely solution with lower total metallicity (lower solid mass) where both the solids and the gas were accreted between the H 2 O and CO icelines. This branch of solutions can be explained by studying Figure 1, showing the variation in C/O in the disk gas and solids as a function of distance: within these two icelines the solids' C/O is sub-stellar and the planetary C/O can be raised to higher values by accreting gas which has an increased C/O ratio. In our current model setup this only works if the planet has a low metallicity, otherwise the gas enrichment cannot compete with the metal enrichment from the solids. In the right panel we show the corresponding posterior in the case where λ Boo-type abundances are assumed for the disk of HR 8799. The solution inside the H 2 O iceline is no longer valid for a_solid, due to the high local solid C/O ratio.
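The conversion of the retrieved log(g) and R P into the planetary-mass prior can be sketched in a few lines; the Monte Carlo below follows the procedure described above (Gaussian log(g), fixed radius), and the printed quantiles are only indicative.

```python
import numpy as np

G = 6.674e-8                   # gravitational constant [cgs]
R_jup = 7.1492e9               # Jupiter radius [cm]
M_jup = 1.898e30               # Jupiter mass [g]

rng = np.random.default_rng(0)
logg = rng.normal(4.0, 0.5, 100_000)   # log10(g [cm/s^2]), retrieval result
R_p = 1.12 * R_jup                      # fixed radius; its error is neglected, as in the text

mass = 10.0**logg * R_p**2 / G / M_jup  # M_P = g R_P^2 / G, in Jupiter masses
print(np.percentile(mass, [16, 50, 84]))  # roughly 1.6, 5, and 16 M_Jup
```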
D. REFRACTORY CHEMISTRY
Here we give a short description of the chemical behavior of atmospheric species that trace the planetary refractory content, as shown in Figure 10. We outline their behavior as a function of temperature, assuming a pressure of 0.1 bar, whereas dissociation and ionization values were obtained from assuming pressures of 0.1 to 0.001 bar. We either assumed solar C/O (= 0.55) or C/O = 1.1. We only roughly determine the transition temperatures, as these may also depend on the atmospheric gravity and metallicity. Moreover, disequilibrium chemistry, internal luminosity, insolation flux, and cold trapping can play important roles (see, e.g., Spiegel et al. 2009; Fortney et al. 2020; Parmentier et al. 2016). The temperatures given here therefore do not necessarily translate directly into planetary effective temperatures. If no reference is given, we use the equilibrium chemistry code described in Mollière et al. (2017) to determine the chemical behavior.
H 2 S
H 2 S condenses into NH 4 SH at ∼200 K; the higher-temperature condensates MnS, ZnS, and Na 2 S are of minor importance (Lodders 2010). For C/O < 1, H 2 S dissociates at ∼ 2000 K, while it moves into species like CS for C/O > 1 at temperatures around 1500-2000 K.
PH 3
PH 3 condenses into H 3 PO 4 at 500 K. Its presence in Jupiter's atmosphere at lower temperatures indicates a deep quenching point, however, such that PH 3 may still be visible at lower temperatures (see, e.g., Baudino et al. 2017). For temperatures approaching 1000 K, PH 3 is increasingly converted into PH 2 .
NA
Na condenses into Na 2 S at ∼ 900 K. In principle, alkalis such as Na could also be sequestered into high-temperature condensates such as feldspars. However, this likely does not occur due to the rainout of silicates (e.g., Line et al. 2017), which depletes the atmosphere of the Si needed for feldspar formation. Above 900 K Na is thus in the gas phase, until it gets ionized at around 2500 K.
K
K condenses into KCl at ∼ 900 K. In analogy to Na, sequestration of K into feldspars likely does not occur due to silicate rainout. Thus, K is in the gas phase from ∼900 K to ∼2000 K, after which it is ionized. Ionization occurs at temperatures roughly 500 K cooler than for Na.
REFRACTORY CLOUDS
As mentioned above, the refractory cloud species Na 2 S and KCl likely form at temperatures below 900 K. Here we focus on the remaining cloud species forming at intermediate to hot temperatures, and concentrate on those carrying the largest mass and/or opacity, often using the data given in Wakeford et al. (2017) or our own equilibrium chemistry calculations. Refractory clouds can exist if the atmospheric temperature is below their respective evaporation temperature. Like all chemical transitions discussed here, this temperature depends on the elemental abundances and the local atmospheric pressure. Under our adopted standard conditions silicates such as MgSiO 3 and Mg 2 SiO 4 evaporate at temperatures around ∼1600 K, while iron clouds are stable up to ∼1700 K. VO and calcium titanates are stable up to 1600 and 1800 K, respectively. Aluminum-bearing condensates such as Al 2 O 3 , which are among the most stable ones, evaporate around 1900 K. Among the species listed here, potentially only the silicates, Al 2 O 3 , and KCl may actually form in the visible part of the atmosphere, as these species have low surface energies, leading to high nucleation rates (Gao et al. 2020). The cloud bases will reside deeper inside the planetary atmosphere for lower temperatures, with the cloud particles entering from above, or settling below the photosphere. For brown dwarfs this temperature-dependent removal of silicate clouds is thought to cause the L-T transition, which typically occurs at T eff = 1200 to 1400 K (e.g., Best et al. 2021, and the references therein), while for planets and low-gravity brown dwarfs this limiting temperature may be as low as approximately 1000 K (e.g., Morley et al. 2012; Marley et al. 2012; Charnay et al. 2018).
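The approximate stability limits listed in this paragraph can be collected into a small lookup, sketched below; the numbers simply restate the values quoted above for our adopted standard conditions and carry the same caveats about pressure and elemental abundances.

```python
# Approximate evaporation temperatures [K] at the adopted standard conditions,
# as quoted in the text; real values shift with pressure and composition.
CLOUD_EVAPORATION_T = {
    "Na2S": 900.0,
    "KCl": 900.0,
    "MgSiO3": 1600.0,
    "Mg2SiO4": 1600.0,
    "Fe": 1700.0,
    "VO": 1600.0,
    "Ca-titanates": 1800.0,
    "Al2O3": 1900.0,
}

def stable_clouds(T):
    """Return the cloud species that remain condensed at temperature T [K]."""
    return [name for name, t_evap in CLOUD_EVAPORATION_T.items() if T < t_evap]

print(stable_clouds(1650.0))   # only Fe, Ca-titanates, and Al2O3 survive at 1650 K
```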
SIO
SiO is an especially interesting molecule for tracing the abundance of refractory silicates in the atmosphere, such as MgSiO 3 or Mg 2 SiO 4 . As soon as the silicates evaporate (around 1600 K) their constituent atoms move into the gas phase. While atomic Mg is then the preferred gaseous form of Mg, Si will move into SiO (Visscher et al. 2010). For C/O ≳ 1, SiO enters the gas phase at around 1300 K, which is when SiC evaporates. SiO then starts to dissociate around 3500 K for C/O ≲ 1, while moving into SiS around 2000 K for C/O ≳ 1. The local evaporation temperatures given here could be lower than the observed transition as a function of planetary effective temperature, where high-pressure cloud formation could cold-trap Si into silicates.
METAL OXIDES
Similar to SiO, the other metal oxides form as soon as the refractory clouds evaporate. Possible species of interest are TiO, VO, SiO, AlO, and CaO (e.g., Sharp & Burrows 2007), with the relevant evaporation temperatures of the clouds ranging from ∼1600-1900 K at our adopted standard conditions. Again, these are then only expected to be visible in the atmosphere if not cold-trapped into condensates at lower altitudes, that is, higher pressures. Except for SiO (see the discussion in the SiO section above), most of these metal oxides are not expected to form in atmospheres with C/O ≳ 1 (Madhusudhan 2012). In general, metal oxides will be destroyed by dissociation at high enough temperatures, with TiO and VO dissociating at temperatures similar to water (around 3000 K). As stated above, SiO is a bit more stable, dissociating at temperatures higher by about 500 K.
METAL HYDRIDES
Similar to the metal oxides, metal hydrides such as FeH, CaH, MgH, NaH, CrH, and TiH may form as soon as the refractory clouds have evaporated, thus at local atmospheric temperatures of around 1600 to 1900 K (and at cooler temperatures for NaH, as Na 2 S already evaporates at ∼900 K). Of course the cold trapping statement from above holds here as well. The hydrides will be destroyed by thermal dissociation at high enough temperatures; for example, MgH and FeH dissociate around 3000 K (Lothringer et al. 2018). For these two species it should also be noted that the main gas phase carriers are atomic Mg and Fe in (ultra) hot Jupiter atmospheres, and that MgH and FeH are less abundant by about a factor of 10 4 (Visscher et al. 2010).
METAL ATOMS
Metal atoms can exist in the gas phase as soon as the sequestering refractory condensates evaporate (modulo cold trapping). As mentioned above, gaseous Fe and Mg are the main gas carriers of these elements once the silicates and iron condensates are gone. Si takes over as the main gas species only after SiO is dissociated. Fe, Mg, Ti, Ca, and Ni are all ionized between 3500 and 4000 K, with Al being ionized at somewhat lower temperatures (Lothringer et al. 2018; Hoeijmakers et al. 2019; Kitzmann et al. 2018).
METAL IONS
Metal ions become visible in the atmosphere as soon as the atoms have been ionized, see immediately above.
"Physics",
"Environmental Science"
] |
ON TWO CONSEQUENCES OF CH ESTABLISHED BY SIERPIŃSKI
We study the relations between two consequences of the Continuum Hypothesis discovered by Wacław Sierpiński, concerning uniform continuity of continuous functions and uniform convergence of sequences of real-valued functions, defined on subsets of the real line of cardinality continuum.
Introduction
In [11] we studied the following two consequences of the Continuum Hypothesis (CH) distinguished by Wacław Sierpiński in his classical treatise Hypothèse du continu [14] (the notation is taken from [14]):
C 8 : There exists a continuous function f : E → R, E ⊆ R, |E| = c, not uniformly continuous on any uncountable subset of E.
C 9 : There is a sequence of functions f n : E → R, E ⊆ R, |E| = c, converging pointwise but not converging uniformly on any uncountable subset of E.
Sierpiński [13] checked that C 8 implies C 9 . The status of the converse implication remains unclear. Let us notice, however, that in Topology I by Kuratowski [7], footnote (3) on page 533 suggests that the two statements are in fact equivalent.
In [11] we considered the following stratifications of statements C 8 and C 9 for uncountable cardinals κ ≤ λ ≤ c:

C 8 (λ, κ) There exists a set E ⊆ R of cardinality λ and a continuous function f : E → R, which is not uniformly continuous on any subset of E of cardinality κ.

C 9 (λ, κ) There exists a set E ⊆ R of cardinality λ (equivalently: for any set E ⊆ R of cardinality λ) there is a sequence of functions f n : E → R, converging on E pointwise but not converging uniformly on any subset of E of cardinality κ.

In particular, we proved in [11] that:

• C 8 (c, c) ⇔ C 9 (c, c), and each of these statements is equivalent to the assertion d = c, provided that the cardinal c is regular.
• C 8 (ℵ 1 , ℵ 1 ) ⇔ C 9 (ℵ 1 , ℵ 1 ), and each of these statements is equivalent to the assertion b = ℵ 1 . Here d and b denote, as usual, the smallest cardinality of a dominating and, respectively, an unbounded family in N N corresponding to the ordering of eventual domination ≤ * (cf. [4]).
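For the reader's convenience, the order of eventual domination and the two characteristics can be stated explicitly; the following display is only a standard restatement of these definitions, not a quotation from [4]:

f ≤* g  ⇔  f(n) ≤ g(n) for all but finitely many n ∈ N  (f, g ∈ N^N),
b = min{ |B| : B ⊆ N^N and no single g ∈ N^N satisfies f ≤* g for all f ∈ B },
d = min{ |D| : D ⊆ N^N and for every f ∈ N^N there is g ∈ D with f ≤* g }.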
An important role in our considerations was played by the notion of a K-Lusin set (cf. [1]), which we extended by declaring that an uncountable subset E of a Polish space X is a κ-K-Lusin set in X, ℵ 1 ≤ κ ≤ c, if |E ∩ K| < κ for every compact set K ⊆ X. We proved in [11] that C 9 (λ, κ) is equivalent to the statement that there is a Polish space X and a κ-K-Lusin set of cardinality λ in X.
In this note we present additional results related to the subject of [11]. Most of them were earlier announced in [11, Section 4].
In Sections 2, 3 and 4 we investigate C 8 -like phenomena in a more general setting. We are interested in two closely related problems (X and Y are fixed separable metric spaces).

Problem 1. Is the existence of a set E ⊆ X of cardinality λ and a continuous function on E with values in Y, which is not uniformly continuous on any subset of E of cardinality κ, related to either C 8 (λ, κ) or C 9 (λ, κ)?

Problem 2. Does the existence of a set E ⊆ X of cardinality λ and a continuous function on E with values in Y, which is not uniformly continuous on any subset of E of cardinality κ, imply that there also exists such a function on E with values in R?
Concerning Problem 1, we observe that the existence of a separable metric space X of cardinality λ, a metric space Y, and a continuous function on X with range in Y, which is not uniformly continuous on any subset of X of cardinality κ, already implies that there exists a κ-K-Lusin set of cardinality λ in some Polish space and, consequently, that C 9 (λ, κ) holds true (cf. Proposition 2.1).

Conversely, C 9 (λ, κ) implies that there exists a κ-K-Lusin set E of cardinality λ in P, the set of irrationals of the unit interval I = [0, 1], such that for every non-σ-compact Polish space Y there is a continuous function on E which is not uniformly continuous on any subset of E of cardinality κ (cf. Theorem 2.2).

On the other hand, if Y is compact, then the existence of a set E ⊆ I of cardinality λ and a continuous function f : E → Y, which is not uniformly continuous on any subset of E of cardinality κ, implies C 8 (λ, κ) (cf. Theorem 3.2).

Concerning Problem 2, we show that if a set E in the Hilbert cube I N is zero-dimensional and there exists a continuous function on E with range in an uncountable compact metric space Y, not uniformly continuous on any subset of E of cardinality κ, then there is also such a function with range in any uncountable compact metric space Z, and in particular in I (cf. Theorem 3.1).
On the other hand, assuming CH, we prove the existence of a set E ⊆ I N of cardinality c such that there is a continuous function f : E → I N , which is not uniformly continuous on any subset of E of cardinality c, but each continuous function g : E → R is constant on a subset of E of cardinality c. The construction of a witnessing pair E and f falls under a general scheme, described in Section 4, of constructions of C 8 -like examples based on a generalization of the notion of a κ-K-Lusin set.
Section 5 is a slight departure from the topic, but it is closely related to a reasoning of Sierpiński concerning C 9 . We shall show that a Hausdorff space X is Čech-complete and Lindelöf if and only if there is a sequence f 0 ≥ f 1 ≥ . . . of continuous functions f n : X → I converging pointwise to zero but not converging uniformly on any closed non-compact set in X (cf. Theorem 5.2). The existence of such a sequence for any Polish space X was a key step in proving that statement C 9 (λ, κ) is equivalent to the existence of a κ-K-Lusin set E of cardinality λ in P (cf. [11, Theorem 2.3]).

In this note P always denotes the set of irrationals of the unit interval I = [0, 1]. It is homeomorphic to the Baire space N N , the countable product of the set of natural numbers N = {0, 1, 2, ...} with the discrete topology (cf. [6]).
Mappings into non-compact spaces and C 9
We start with a general observation.

Proposition 2.1. If there exist a separable metric space (X, d X ) of cardinality λ, a metric space (Y, d Y ), and a continuous function on X with range in Y, which is not uniformly continuous on any subset of X of cardinality κ, then there exists a κ-K-Lusin set of cardinality λ in some Polish space and, consequently, C 9 (λ, κ) holds true.
Proof. Let f : X → Y be a continuous function which is not uniformly continuous on any subset of X of cardinality κ. The function f extends (cf. [6, Theorem 3.8]) to a continuous function defined on a G δ -set G containing X in the completion of X; in particular, G is a Polish space. For every compact set K ⊆ G the extension is uniformly continuous on K, hence f is uniformly continuous on K ∩ X. By the choice of f, we have that |K ∩ X| < κ, which shows that X is a κ-K-Lusin set of cardinality λ in G. By [11, Theorem 2.3], this proves C 9 (λ, κ).
On the other hand, statement C 9 (λ, κ) already implies (and, in view of Proposition 2.1, is equivalent to) the existence of a C 9 -like example for functions with values in arbitrary non-σ-compact Polish spaces.

Theorem 2.2. For any uncountable cardinals κ ≤ λ ≤ c the following are equivalent: (1) C 9 (λ, κ); (2) there exist a set E ⊆ R of cardinality λ, a non-σ-compact Polish space Y, and a continuous function f : E → Y which is not uniformly continuous (with respect to an arbitrary complete metric on Y) on any subset of E of cardinality κ.
Moreover, any κ-K-Lusin set E of cardinality λ in P has the property expressed in (2) with respect to any non-σ-compact Polish space Y .
Let Y be an arbitrary non-σ-compact Polish space. Let h : P → Y be a homeomorphic embedding of P onto a closed subspace h(P) of Y (cf. [6, Theorem 7.10]). We will show that E together with f = h|E have the required properties.

To that end, let us fix a set A ⊆ E with |A| = κ. Then Ā, the closure of A in I, is not contained in P, since otherwise Ā would be a compact set in P intersecting E in a set of cardinality κ. So let us pick a_k ∈ A, k ∈ N, such that lim_{k→∞} a_k = a and a ∈ I \ P. Now, if f were uniformly continuous on A with respect to a complete metric d on Y, f would take Cauchy sequences in A to Cauchy sequences in Y. In particular, the sequence (f(a_k))_{k∈N} would be Cauchy in Y, hence lim_{k→∞} f(a_k) = z for some z ∈ Y. This, however, is not the case: since the set {a_0, a_1, ...} has no accumulation point in P, the set {f(a_0), f(a_1), ...} has no accumulation point in h(P) and hence also none in Y, as h(P) is closed in Y.

The implication (2) ⇒ (1) follows immediately from Proposition 2.1.
Mappings into compact spaces and C 8
The results of the previous section show that C 8 -like statements for functions with values in non-σ-compact Polish spaces are actually equivalent to statement C 9 . The situation changes, however, when we consider functions with values in compact spaces. As the following result shows, if a set E ⊆ I N is zero-dimensional and there exists a continuous function on E with range in an uncountable compact metric space Y, not uniformly continuous on any subset of E of cardinality κ, then there is also such a function with values in R, witnessing that C 8 (λ, κ) holds true.

Theorem 3.1. Let E be a zero-dimensional subset of a compact metric space X. If there exists a continuous function on E with range in a compact metric space Y, not uniformly continuous on any subset of E of cardinality κ, then there is also such a function with range in the Cantor ternary set C in I. Consequently, for any uncountable compact metric space Z, there is also such a function with values in Z.
Proof. Let h : E → Y be a continuous function not uniformly continuous on any set of cardinality κ.

Using the compactness of Y, let us fix a sequence (K n , L n ) n∈N of pairs of disjoint compact sets in Y such that for any pair (K, L) of disjoint compact sets in Y, there is n with K ⊆ K n and L ⊆ L n .
For each n ∈ N, we let C n = h −1 (K n ) and D n = h −1 (L n ), and using the fact that E is zero-dimensional and separable, we choose a continuous function u n : E → {0, 2} taking on C n the value 0 and on D n the value 2. We shall show that the function f : E → I defined by the formula f = Σ_{n∈N} 3^{−(n+1)} u n , which takes values in the Cantor ternary set C, is not uniformly continuous on any subset of E of cardinality κ.
To that end, let us fix a set A ⊆ E with |A| = κ. We shall first make the following observation. Let a ∈ Ā, the closure of A in X. Then for any n, since u n takes on A the values 0 or 2 only, the oscillation of u n |A at a is equal to either 0 or 2. Let us note that h|A : A → Y cannot be extended to a continuous (hence, by the compactness of Ā, uniformly continuous) function on Ā, since otherwise the function h|A would itself be uniformly continuous, contrary to the fact that |A| = κ. Consequently, there must be closed disjoint sets K, L in Y such that the closures in X of (h|A) −1 (K) and (h|A) −1 (L) meet at some point a; fix n with K ⊆ K n and L ⊆ L n . It follows that u n |A has the oscillation at a equal to 2, and let us assume that n is the smallest index with this property. The oscillation of each of the functions u 0 |A, . . ., u n−1 |A is then equal to 0, hence we can find a neighbourhood V of a in X such that all these functions have constant values on A ∩ V.
Let us pick x
It follows that the oscillation of f |A at a is at least 3^{−n−1} . In effect, f |A has no continuous extension over Ā, which means that f is not uniformly continuous on A.

Finally, if (Z, d) is an arbitrary compact metric space and e : C → Z is a homeomorphic embedding of C into Z, then since e^{−1} is uniformly continuous, the function e ∘ f : E → Z has the desired properties.

As an immediate corollary we obtain the following equivalent form of statement C 8 (λ, κ).

Theorem 3.2. For any uncountable cardinals κ ≤ λ ≤ c, if the cofinality of λ is uncountable, then the following are equivalent: (1) C 8 (λ, κ); (2) there exist a set E ⊆ R of cardinality λ, a compact metric space Y and a continuous function f : E → Y, which is not uniformly continuous on any subset of E of cardinality κ. Moreover, any set E ⊆ I that witnesses C 8 (λ, κ) for some continuous function from E to I has the property expressed in (2) with respect to any uncountable compact metric space.
Proof. (1) ⇒ (2). If f : E → R is a continuous function on a set E ⊆ R of cardinality λ, which is not uniformly continuous on any subset of E of cardinality κ, then, since the cofinality of λ is uncountable, by shrinking E, if necessary, we may assume that the range of f is contained in a closed interval Y of length 1.

If additionally E ⊆ I and f : E → I, then since E contains no non-trivial interval, it is zero-dimensional, and Theorem 3.1 applies.

(2) ⇒ (1). Now let f : E → Y be a continuous function with values in a compact metric space Y, which is not uniformly continuous on any subset of E of cardinality κ. We may again assume that E is contained in a closed interval X. Then, E being zero-dimensional, it is enough to apply the final part of the assertion of Theorem 3.1 to Z = I.
By the results of [11] (cf. Section 1), Theorems 2.2 and 3.2 lead to the following corollary (Corollary 3.3 below).

Throughout this section we assume that G is an uncountable G δ -set in a compact metric space X.
Given a collection K of compact sets in X containing all singletons, we say that an uncountable subset T of G is a κ-K -Lusin set in G if |T ∩ K| < κ for every K ∈ K .

The C 8 -like examples, presented later in this section by means of K -Lusin sets, are based on the following observation, applied also in the proofs of [11, Theorems 3.8 and 3.9].

Proposition 4.1. Let ϕ : X → Y be a continuous map onto a compact metric space Y such that ϕ|G is a homeomorphism onto ϕ(G). Let K be a collection of compact sets in X such that whenever A ⊆ G and ϕ|A extends to a homeomorphism over Ā, the closure of A in X, then Ā ∈ K . Then, for every κ-K -Lusin set T of cardinality λ in G, letting E = ϕ(T ) and f = ϕ^{−1}|E : E → X, we obtain a continuous function on a set of cardinality λ, which is not uniformly continuous on any subset of E of cardinality κ.
Proof. Aiming at a contradiction, assume that f |B is uniformly continuous (with respect to any metric compatible with the topology of Y) on a set B ⊆ E of cardinality κ and let A = f (B) = ϕ^{−1}(B). Then, since ϕ|A : A → B is also uniformly continuous, the function ϕ|A extends to a homeomorphism over Ā (cf. [2, Theorem 4.3.17]). Consequently, Ā ∈ K . This, however, is impossible, since on one hand we have |T ∩ Ā| < κ, T being a κ-K -Lusin set in X, but on the other hand A ⊆ T ∩ Ā has cardinality κ.
4.1. Zero-dimensional spaces. Throughout this subsection let us additionally assume that the (compact metric) space X is zero-dimensional.

A proof of the following fact is given in [9, Lemma 4.2] (it is based on an idea similar to that in [3, proof of Lemma 5.3]).

Lemma 4.2. For any G δ -set G in X there is a continuous map ϕ : X → Y onto a compact metric space Y, such that ϕ|G is a homeomorphism onto ϕ(G), ϕ(X \ G) ∩ ϕ(G) = ∅ and the set Y \ ϕ(G) is countable.
Various C 8 -like examples can be constructed with the help of Lemma 4.2 and the following observation. By a σ-ideal on X we mean a collection I of Borel sets in X, closed under taking Borel subsets and countable unions of elements of I, and containing all singletons.

Proposition 4.3. Let I be a σ-ideal on X and let K be the collection of all compact sets from I. If G is a G δ -set in X such that G ∈ I and ϕ : X → Y is a continuous map as described in Lemma 4.2, then Ā ∈ K for any A ⊆ G such that ϕ|A extends to a homeomorphism onto Ā.
Consequently, for every κ-K -Lusin set T of cardinality λ in G, taking E = ϕ(T ) and f = ϕ^{−1}|E we obtain a continuous function on a set of cardinality λ, which is not uniformly continuous on any subset of E of cardinality κ.
Proof. Let us fix A ⊆ G such that ϕ|A extends to a homeomorphism φ between Ā and ϕ(A). Then the set Ā
The final assertion follows directly from Proposition 4.1.
As the following proposition shows, the above observation can be applied to various natural σ-ideals. Let us recall that an uncountable set T in a Polish space Y is a Lusin set in Y if |T ∩ D| < ℵ 1 for every closed nowhere dense subset D of Y.

Proposition 4.4. Let I be a σ-ideal on X, let K be the collection of all compact sets from I and let us assume that I is not generated by K (i.e., there is a set from I which is not covered by any F σ -set from I). Then there exists a G δ -set G in X such that G ∈ I but no non-empty, relatively open set in G is covered by an F σ -set from I. Consequently, every Lusin set T in G is ℵ 1 -K -Lusin in G and hence it gives rise to a C 8 -example (as described in Proposition 4.3).
Proof. Let B ∈ I be a (Borel) set not covered by any F σ -set from I. The existence of a G δ -set G in X such that G ⊆ B but G is not covered by any F σ -set from I then follows from a theorem of Solecki (see [15]). By shrinking G, if necessary, we may assume that G has the desired properties.
It follows that if K ∈ K , then G ∩ K is meager in G, and, consequently, T ∩ K is countable for any Lusin set T in G.
Remark 4.5. A typical example of the situation described in Propositions 4.3 and 4.4 is when X is a copy of the Cantor set in R of positive Lebesgue measure, I is the σ-ideal of Lebesgue measure zero Borel sets in X (then K is the family of closed Lebesgue measure zero sets in X) and G is a dense copy of the irrationals in X of Lebesgue measure zero.

Then the function f = ϕ^{−1}|H : H → R, where H = ϕ(G), is a homeomorphic embedding of H, a copy of the irrationals, into R, with the property that for every Lusin set L of cardinality c in H, the function f |L is not uniformly continuous on any uncountable subset of L. This provides an alternative proof of the theorem of Sierpiński that the existence of a Lusin set of cardinality c in P implies C 8 (cf. [14, proof of Théorème 6 on page 45]).
4.2. Infinite-dimensional spaces.
The zero-dimensionality assumption in Theorem 3.1 is essential, as demonstrated by the following result (CH in this theorem can be weakened to the assumption that no family of fewer than c meager sets covers R, cf. Remark 4.8).
Theorem 4.6. Assuming CH, there exists a set E ⊆ I N of cardinality c such that (1) there is a continuous function f : E → I N which is not uniformly continuous on any subset of E of cardinality c, and (2) each continuous function g : E → R is constant on a subset of E of cardinality c.
A key element of the proof of this theorem is a Henderson compactum, that is, a compact metrizable infinite-dimensional space all of whose finite-dimensional compact subsets are zero-dimensional, cf. [10].
More specifically, we shall need the following fact, where punctiform sets are sets without non-trivial subcontinua, cf. [3, 1.4.3].

Lemma 4.7. There exists a non-empty punctiform G δ -set M in a Henderson compactum H ⊆ I N such that for each M ′ ⊆ M with dim (M \ M ′ ) ≤ 0, every continuous function u : M ′ → R is constant on a set of positive dimension.

A justification of this lemma is rather standard, but since we did not find convenient references, we shall give a proof to this effect at the end of this subsection. For now, taking this fact for granted, we shall proceed with the proof of our main result.
Proof of Theorem 4.6. Let us adopt the notation of Lemma 4.7, and let K be the collection of all compact zero-dimensional sets in the Henderson compactum H.
Using CH, we inductively construct a c-K -Lusin set T in M such that |T ∩ L| = c for every Borel set L in M with dim L > 0. To that end, we list all elements of K as ⟨K α : α < c⟩ and all Borel sets L in M with dim L > 0 as ⟨L α : α < c⟩, repeating each such set L continuum many times. Then, we subsequently pick for ξ < c a point p ξ ∈ L ξ \ (⋃_{α≤ξ} K α ∪ {p α : α < ξ}), using the fact that, under CH, the set in brackets is zero-dimensional, by the sum theorem for dimension zero (cf. [3, 1.3.1]), while dim L ξ > 0. Finally, we let T = {p ξ : ξ < c}. In particular, T meets each Borel set in M of positive dimension in continuum many points.
Indeed, w can be extended to a continuous function w :

Let us now apply a counterpart of Lemma 4.2 for infinite-dimensional spaces to the following effect (cf. the proof of [3, Lemma 5.3.1]): there exists a continuous surjection ϕ :

Claim 2. The set E = ϕ(T ) satisfies the assertion of Theorem 4.6.
Since E is homeomorphic to T , Claim 1 shows that (2) in Theorem 4.6 is satisfied.
To check also assertion (1) in this theorem, we shall make sure that the function f = ϕ −1 |E : E → H is not uniformly continuous on any set of cardinality c.
Since T is a c-K -Lusin set in M, by Proposition 4.1 it is enough to verify that for any non-empty A ⊆ M, whenever ϕ|A is an embedding (the closure of A is in H), then Ā ∈ K , i.e., dim Ā = 0.
In the assumptions of Theorem 4.6, CH can be weakened to the assertion (usually denoted by cov(M ) = c, cf. [5]) that no family of fewer than c meager sets covers R. The only change in the proof requires checking that the inductive definition of the sequence ⟨p ξ : ξ < c⟩ is correct. More precisely, the following is true.
Then U K is open and dense in the function space C(I N , R) and, this space being perfect Polish, the assumption cov(M ) = c guarantees that the set G = ⋂_{K∈K} U K is non-empty (cf. [6, 8.32]).
Let us recall (cf. [3]) that a closed set L in a topological space Z separates the space Z between sets A 1 , A 2 ⊆ Z if Z \ L = U 0 ∪ U 1 , where U 0 , U 1 are open, disjoint and A i ⊆ U i for i = 0, 1. If A 1 and A 2 are singletons, then we say that L separates Z between the respective points.
Let F = ⋃ K . To prove that dim F = 0, it is enough to show that for any pair of disjoint compact sets A, B in I N , I N can be separated between A and B by a closed set disjoint from F. Indeed, let x ∈ F and let U be a relatively open subset of F with x ∈ U and F \ U ≠ ∅. Then U = V ∩ F for an open V in I N with B = I N \ V ≠ ∅, and separating I N between A = {x} and B by a closed set in

So let A, B be disjoint compact sets in I N . We can pick f ∈ G such that f (A) ⊆ (−∞, 0), f (B) ⊆ (0, +∞), as such functions form an open, non-empty set in C(I N , R). Then L = f^{−1}(0) is a closed set in I N separating I N between A and B with the property that L ∩ K = ∅ for all K ∈ K , cf. (1) and (2).
Proof of Lemma 4.7. Let us fix a Henderson continuum K in I N . We shall consider on I N the metric assigning to points (s 0 , s 1 , ...), (t 0 , t 1 , ...)

This theorem goes back to Mazurkiewicz, and can be justified by the reasoning in the proof of [10, Theorem 3.9.3]. We adopt the notation from Claim 1. Since K is a compact, connected set in I N , we can find open connected neighbourhoods

Let D be a countable set dense in S, and let ⟨(c n , d n ) : n ∈ N⟩ be an enumeration of all ordered pairs of distinct points from D, such that each such pair appears in the sequence infinitely many times.
Let us fix n > 0. We shall define an embedding h n : I N → U n such that the distance from h n (a) to c n and the distance from h n (b) to d n are less than 1/n. To that end, let us notice that U n is arcwise connected, being an open, connected set in I N (cf. [16, Proposition 12.25]), and hence there is a continuous f :

Let m ≥ n be large enough to ensure that for any x, y ∈ I N , whenever the first m coordinates of x and f (y) coincide, then x ∈ U n . Now, denoting by p j : I N → I the projection onto the j-th coordinate, we define the embedding h n by p i (h n (x)) = p i (f (x)) for i ≤ m and p m+i (h n (x)) = p i (x) for i = 1, 2, ...
Having defined the embeddings h n , we shall check that the set M in Claim 2 satisfies the assertion of Lemma 4.7.
Proof. In view of [11, Theorem 2.1] we only have to prove the "if" part of the above equivalence. So let (X, d) be a separable metric space. Let f n : X → I, n ∈ N, be a sequence of continuous functions with f 0 ≥ f 1 ≥ ..., which converges to zero pointwise but does not converge uniformly on any set with non-compact closure in X.
Let (X̃, d̃) be the completion of (X, d); clearly, the space X̃ is Polish. There exists a G δ -set X̂ in X̃ containing X such that each f n extends to a continuous function f̂ n : X̂ → I. Since X is dense in X̂, we have f̂ 0 ≥ f̂ 1 ≥ ..., and in particular the set G = {x ∈ X̂ : lim_{n→∞} f̂ n (x) = 0} is a G δ -subset of X̂. Since X ⊆ G, it remains to make sure that G ⊆ X, to conclude that X = G is completely metrizable as a G δ -subset of the (Polish) space X̂.
So let c ∈ G and let us pick x k ∈ X, k ∈ N, such that lim_{k→∞} x k = c. Let K = {c} ∪ {x k : k ∈ N}. Then K is compact, so the sequence ( f̂ n ) n∈N converges uniformly on K (this is a special instance of the Dini theorem [2, Lemma 3.2.18]). It follows that the sequence (f n ) n∈N converges uniformly on K ∩ X, a closed set in X which therefore, by the assumed property of (f n ) n∈N , is compact. Since the sequence (x k ) k∈N converges to c and all x k are elements of the compact (hence closed) set K ∩ X, c belongs to K ∩ X as well. In particular, c ∈ X.
With the help of Theorem 5.1 we shall establish the following more general result (for terminology see [2]).
Theorem 5.2. Let X be a Hausdorff space. Then X is a Čech-complete Lindelöf space if and only if there is a sequence f 0 ≥ f 1 ≥ . . . of continuous functions f n : X → I converging pointwise to zero but not converging uniformly on any closed non-compact set in X.

Proof. First, let X be a Čech-complete Lindelöf space. By a theorem of Frolík, it follows that there is a perfect map p : X → Y onto a Polish space Y, cf. [2, 5.5.9]. Let us recall that for a Hausdorff space X and a metrizable space Y this means, cf. [2, Theorems 3.7.2 and 3.7.18], that p is a continuous, closed mapping and the inverse image of every compact subset of Y is compact. It is straightforward to check that if functions f n : Y → I, n ∈ N, with f 0 ≥ f 1 ≥ ..., satisfy the assertion of Theorem 5.1, the functions f n ∘ p : X → I, n ∈ N, have the required properties.

Next, given a sequence f 0 ≥ f 1 ≥ . . . of functions f n : X → I, described in the theorem, let us consider the diagonal map (1) F = (f 0 , f 1 , ...) : X → I N and let Y = F (X).
Corollary 3.3. If either κ = λ = c or κ = λ = ℵ 1 , then the following are equivalent: (1) There exist a set E ⊆ R of cardinality λ, a non-σ-compact Polish space Y, and a continuous function f : E → Y which is not uniformly continuous (with respect to any complete metric on Y) on any subset of E of cardinality κ. (2) There exist a set E ⊆ R of cardinality λ, a compact metric space Y and a continuous function f : E → Y, which is not uniformly continuous on any subset of E of cardinality κ.

4. Constructing C 8 -like examples from K -Lusin sets
Remark 4.8. Assuming cov(M ) = c, the union of any family K of compact zero-dimensional sets in I N with |K | < c is zero-dimensional.

Proof. Let K be a family of compact zero-dimensional sets in I N with |K | < c. For any K ∈ K , let us consider (1)
Claim 1. There exists a punctiform G δ -set S in K and distinct points a, b ∈ S such that each relatively closed set in S separating S between a and b has dimension at least 2.

Indeed, since dim K ≥ 4, K contains a punctiform G δ -set W with dim W ≥ 3.
and H defined as the closure of M in I N × I, one obtains sets M ⊆ H satisfying the assertion of Lemma 4.7.
[12], applied to a pair of disjoint compact sets A and B in K such that for any set N in K with dim N ≤ 2 there is a continuum in K joining A and B and missing N (cf. [12, Theorem 4.2]), and to a continuous map π : K → [−1, 1] sending A to −1 and B to 1.

Now, since dim W ≥ 3, there is a ∈ W such that all sufficiently small neighbourhoods of a in W have boundaries of dimension ≥ 2. An argument in [8, proof of Theorem 8, p. 172] gives a point b ∈ K \ {a} such that for S = W ∪ {b}, every relatively closed set in S separating S between a and b has dimension ≥ 2. Since S is a punctiform G δ -set in K, it satisfies Claim 1.

Claim 2. There are embeddings h n | 7,856.8 | 2023-06-20T00:00:00.000 | [
"Mathematics"
] |
Optimization of Seed Germination, Callus Induction, and Secondary Metabolites Production in Allium Jesdianum: Elicitation by Methyl Jasmonate and Putrescine
Allium jesdianum belongs to one of the most important genera of the Liliaceae family and has various pharmacological properties. The present study was carried out to determine the best pre-germination treatment in A. jesdianum. The highest seed germination (58.33%) was observed for the chemical treatment combining α-naphthaleneacetic acid (1 mg L−1) and benzyl aminopurine (3 mg L−1) on Murashige and Skoog (MS) media. The highest callus induction (86.7%) was observed on MS/2 media supplemented with NAA (1 mg L−1) and BAP (3 mg L−1) from hypocotyl explants. Two elicitors, methyl jasmonate (MeJ) (0, 25, 50, and 100 µM) and putrescine (Pu) (0, 0.5, and 1 mM), were used to investigate their effects on lipid peroxidation, callus growth traits, total phenolics content (TPC), the contents of total flavonoids (TFD), total flavonols (TFL), anthocyanin, total chlorophyll, and total carotenoids, and on antioxidant activity through the DPPH (1,1-diphenyl-2-picrylhydrazyl) method under callus culture. The results showed the superiority of MeJ over Pu for increasing the secondary metabolites and antioxidant activity in calli of A. jesdianum, compared to the control. The highest TPC (6.02 mg GAE g−1 FW), TFD (0.52 mg QE g−1 FW), and TFL (0.39 mg QE g−1 FW) were observed under 50 µM of MeJ. Meanwhile, the highest value for anthocyanin (8.99 µmol g−1 FW) was achieved at 25 µM of MeJ. The highest DPPH activities were observed at 50 and 100 µM of MeJ. Putrescine (0.5 mM) elicitation showed superiority only for callus growth rate (0.53 mm day−1) and carotenoids (0.045 mg g−1 DW). In this study, effective seed germination and elicitation of A. jesdianum callus cultures were developed for the first time. Thus, the new protocols could be suitable for future studies on biotechnological aspects of this medicinal plant.
Introduction
Medicinal plants are traditionally considered among the main sources of secondary metabolites (SMs) and natural drugs with antioxidant properties (Yue et al. 2016). The discovery of new natural products with medicinal characteristics has recently been the subject of intense research (Yue et al. 2016). These SMs are important sources for producing different cosmetics, fragrances, flavors, food additives, and pharmaceutical and industrial products (Namdeo 2007; Erb and Kliebenstein, 2020). The technique of plant cell culture is based on a set of biotechnological methods for producing valuable plant-specific SMs on a large scale, independent of environmental conditions (Vanisree et al. 2004; Smetanska 2008). In vitro culture techniques have been used for enhancing different phytochemical compounds as a promising bio-production route for desired and/or rare natural products (Dias et al. 2016; Yue et al. 2016).
In this regard, in vitro elicitation is an efficient strategy to promote plants' defense mechanisms for scavenging reactive oxygen species (ROS) (Matkowski 2008). Under the elicitation process, the subsequent modulation and production of plant SMs occurs by regulating the expression of genes encoding key enzymes implicated in the biosynthesis of different defense-related compounds such as phytoalexins, flavones, flavonoids, phenolics, and other bioactive compounds (Namdeo 2007). To enhance the production of these components, chemical elicitors have been widely applied in recent years as environmentally benign techniques (Narayani and Srivastava 2017). In vitro production of worthy compounds using callus culture is very well recognized and is being used widely for different industrial applications (Narayani and Srivastava 2017). Allium jesdianum Boiss (A. jesdianum) is an important endemic, threatened, and underutilized plant species of Iran (known locally as "Bonsorkh" and "Yazdi onion") that grows at high altitudes (1800-2600 m) of the Zagros mountains in the west and northwest regions of Iran (Kalantari et al. 2018; Pirbalouti 2019). This species is eaten raw or as a cooked vegetable, or is used as a flavor additive to fresh or cooked foods in Iran. In field cultivation, its propagation rate is very slow and it takes years to produce a new variety.
In the folk medicine of Iran, bulbs and leaves of this plant are used for the treatment of colds, kidney problems, and rheumatic pains (Pirbalouti 2019). Also, other medicinal properties such as antibacterial (Dorosti et al. 2017) and anticancer (Dorosti et al. 2017) activity are reported for this valuable species. In this regard, many attempts have been made to isolate and exploit its active medicinal compounds. Considering the potential value of A. jesdianum cell or tissue cultures for biotechnological application, there is an urgent need to develop strategies for the protection and conservation of this genus through the development of fast seed germination and in vitro conservation protocols.
The present work aimed to study (a) the effects of physical and chemical pre-treatments on seed germination of A. jesdianum under in vitro culture, (b) the optimization of an efficient procedure for callus induction of A. jesdianum, and (c) the establishment of an efficient strategy for the enhancement of commercially vital SMs of A. jesdianum under elicitation by Pu and MeJ through callus cultures.
Seed collection and seed germination experiments
A. jesdianum seeds were gathered from the highlands of Sabzkooh (31° 27′ N, 52° 40′ E, about 2,140 m elevation) in the Chaharmahal-o-Bakhtiari region of Iran. The seeds were identified by the botanical researchers of the Department of Natural Resources, Isfahan University of Technology (IUT), Isfahan, Iran. The seeds were surface disinfected with 70% (v/v) ethanol for 90 sec and surface sterilized thoroughly with 2% (v/v) sodium hypochlorite for 15 min. The seeds were then rinsed three times with sterile water to remove the remaining disinfection liquid. The sterilized seeds were then placed onto MS medium (Murashige and Skoog, 1962; Duchefa). Other seeds were treated with physical pre-treatments such as NaCl (1 M), cold (4°C for four weeks) and mechanical scarification, and then were cultured on MS medium supplemented with different PGR combinations (Table 1). Seeds cultured on MS medium served as the control group. In order to determine the effect of physical and chemical pre-treatments on germination, the experiments were conducted under a photoperiod of 16/8 h (light/dark) at 23 ± 2 °C.
Germination related traits
To determine the optimal parameters of seed germination, the impacts of certain physical and chemical pre-treatments on germination rate and germination percentage were evaluated. Germination was scored by the emergence of the first radicle. The number of seeds germinated in each pre-treatment was recorded every two days. The germination percentage was calculated using the following equation.
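A standard form of this calculation, assumed here, is:

GP (%) = (number of germinated seeds / total number of seeds cultured) × 100,

computed separately for each pre-treatment and replication.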
Physiological characteristics
Callus relative fresh weight (RFW) was calculated according to the formula RFW = (FW 2 − FW 1 )/days, where FW 1 is the fresh weight of the callus at the initiation of the elicitation and FW 2 is the final fresh weight of the callus on the final day of the elicitation process. CGR was calculated at 15, 30, 45 and 60 days after transferring the calli to the elicitation media.
Lipid peroxidation assay
The amount of cell membrane damage was determined by measuring malondialdehyde (MDA) as the end product of peroxidation of membrane lipids. For this assay, 0.2 g of fresh calli was homogenized with 3 mL of trichloroacetic acid (TCA) (0.1% w/v) (Merck). The homogenate was centrifuged for 30 min at 4000 rpm. The supernatant (5 mL) was then collected and mixed with 4 mL of thiobarbituric acid (TBA) (0.5% w/v) and 20% TCA (w/v). The sample was then heated at 95°C for 25 min and placed in an ice bath. Its absorbance was measured at 532 and 600 nm with a spectrophotometer. The content of MDA was calculated using an extinction coefficient of 155 mM−1 cm−1 and expressed as µmol g−1 FW (Heath and Packer, 1968).
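As an illustration of this calculation, the following is a minimal Python sketch of the MDA estimate from the two absorbance readings; the example absorbances, the assumed 1 cm path length, and the reaction-volume bookkeeping are illustrative assumptions rather than values from this study.

def mda_umol_per_g_fw(a532, a600, reaction_volume_ml=7.0, fresh_weight_g=0.2,
                      epsilon_per_mM_cm=155.0, path_cm=1.0):
    # Correct for non-specific turbidity read at 600 nm.
    a_corr = a532 - a600
    # Beer-Lambert law: concentration (mM) = A / (epsilon * path length).
    conc_mM = a_corr / (epsilon_per_mM_cm * path_cm)
    conc_umol_per_ml = conc_mM              # 1 mM equals 1 µmol per mL
    total_umol = conc_umol_per_ml * reaction_volume_ml
    return total_umol / fresh_weight_g      # µmol MDA per g fresh weight

# Hypothetical absorbance readings, for illustration only.
print(round(mda_umol_per_g_fw(0.42, 0.05), 3))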
Methanolic callus extract
First, 0.5 g of completely dried callus was powdered and added to 10 mL of diethyl ether. The mixture was mixed well and stored in a refrigerator for 24 h. For complete evaporation of the diethyl ether, the supernatant was transferred to a dryer. Then 10 mL of 80% methanol was added to the purified supernatant, which was then filtered using a 0.4 µm filter.
Determination of total phenolic content
Total phenolics content (TPC) was determined based on the Folin-Ciocalteau reagent method described by Singleton et al. (1999). In brief, 0.2 g fresh weight of calli was homogenized with 3 mL CH 3 OH (Merck) and centrifuged at 4000 rpm for 25 min. Then, the methanolic extract (0.5 mL) was mixed with 2.5 mL of Folin-Ciocalteau reagent (Sigma-Aldrich), followed by the addition of 2.0 mL of 7% Na 2 CO 3 solution. The mixture was kept in the dark for 90 min at room temperature. The absorbance of the supernatant was recorded at 765 nm against the reagent blank using a spectrophotometer. The amount of TPC was then quantified by the calibration-curve method using gallic acid (GA) (Sigma-Aldrich) as the standard.
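The calibration-curve step can be illustrated with a short Python sketch; the gallic acid standard concentrations, the absorbances, and the sample values below are hypothetical and only show how an absorbance reading is converted to gallic acid equivalents per gram of fresh weight.

import numpy as np

# Hypothetical gallic acid standards: concentration (µg/mL) vs absorbance at 765 nm.
std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
std_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.79])

# Linear fit A = m*c + b of the standard curve, inverted for the sample below.
m, b = np.polyfit(std_conc, std_abs, 1)

def tpc_mg_gae_per_g_fw(sample_abs, extract_volume_ml=3.0, fresh_weight_g=0.2,
                        dilution_factor=1.0):
    conc_ug_per_ml = (sample_abs - b) / m                     # GAE in the assay mixture
    total_mg = conc_ug_per_ml * dilution_factor * extract_volume_ml / 1000.0
    return total_mg / fresh_weight_g                          # mg GAE per g fresh weight

print(round(tpc_mg_gae_per_g_fw(0.35), 2))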
Determination of total flavonoids and total flavonols content
Total flavonoid (TFD) and total flavonol (TFL) contents were determined spectrophotometrically using the method of Miliauskas et al. (2004). In brief, 0.2 g of fresh calli was ground in 3 mL of methanol and centrifuged at 4000 rpm for 25 min. The supernatant was then used to assay the TFD and TFL contents. For estimation of TFL, 0.5 mL of methanolic extract, 0.5 mL of AlCl 3 (2% w/v) solution and 1.5 mL of sodium acetate (5% w/v) were mixed, and after 90 min of incubation at room temperature the absorbance of the mixture was measured at 445 nm. For total flavonoids, 0.5 mL of methanolic extract, 125 µL of AlCl 3 solution (10% w/v) and 125 µL of CH 3 COOK (1 M) were mixed and the samples were kept at room temperature for 30 min. A spectrophotometer was used to measure the absorbance of the reaction mixture at 415 nm. The contents of total flavonoids and flavonols were expressed as mg quercetin equivalents per gram of fresh mass (mg QE g−1 FW).
Determination of anthocyanin
In the first step, 2 mL of acidified methanol (1% HCl) was used to homogenize the fresh calli (0.2 g) at room temperature (Hara et al. 2003). After one day, the total extract was centrifuged for 25 min at 4000 rpm. The content of anthocyanin was then determined by spectrophotometer at a wavelength of 511 nm based on the extinction coefficient of raphanusin (33,000 M−1 cm−1).
Determination of photosynthetic pigments
Carotenoid and chlorophyll contents were determined according to the method of Lichtenthaler and Buschmann (2001). Acetone (80%) (Merck) was used for pigment extraction from fresh calli. For carotenoids, absorption spectra of the extracts were measured at 470, 652 and 665 nm and the concentrations were calculated according to the Lichtenthaler equations.
DPPH radical scavenging activity
Radical scavenging activities of the calli were determined by the DPPH assay (Golkar and Taghizadeh, 2018). The methanolic extract (20 µL) was added to 1 mL of 50 µM DPPH (Sigma-Aldrich) solution in methanol. The prepared extracts ranged from 0 to 250 µg mL−1. The mixtures were mixed and incubated in the dark for 20 min. The reduction of DPPH absorption was measured at 515 nm. The positive control was ascorbic acid (Sigma-Aldrich). The DPPH radical scavenging activity was calculated as the inhibition percentage (IP%) = [(absorbance of control − absorbance of sample)/(absorbance of control)] × 100.
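A minimal Python sketch of this inhibition-percentage calculation is given below; the absorbance values and the linear IC50 interpolation are illustrative assumptions, not measurements from this study.

import numpy as np

def dpph_inhibition(abs_control, abs_sample):
    # IP% = (A_control - A_sample) / A_control * 100
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical dose-response of one extract: concentration (µg/mL) vs absorbance at 515 nm.
conc = np.array([0.0, 50.0, 100.0, 150.0, 250.0])
a_sample = np.array([0.80, 0.62, 0.46, 0.33, 0.18])
a_control = 0.80

ip = dpph_inhibition(a_control, a_sample)
# Rough IC50: concentration at which IP% crosses 50, by linear interpolation.
ic50 = np.interp(50.0, ip, conc)
print(ip.round(1), round(float(ic50), 1))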
Statistical analysis
The elicitation section of this study was carried out as a factorial experiment based on a completely randomized design with six replications. The germination and callus induction experiments were carried out as a completely randomized design with six replications. The analysis of variance was done using SAS software version 9.3 (SAS Institute, 2011). Mean comparisons (± standard deviations) were then carried out using Fisher's least significant difference (LSD 5% ) test.
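An equivalent analysis can be sketched outside SAS; the following minimal Python example runs a one-way ANOVA and simple pairwise comparisons for one trait, with made-up replicate values standing in for the six replications (an LSD test proper would reuse the pooled error term from the ANOVA).

import numpy as np
from scipy import stats

# Hypothetical TPC replicates (mg GAE per g FW) for three elicitor levels.
control = np.array([2.7, 2.9, 2.8, 2.9, 2.8, 3.0])
mej_25 = np.array([3.6, 3.9, 3.8, 3.7, 3.9, 3.8])
mej_50 = np.array([5.9, 6.1, 6.0, 6.1, 5.9, 6.1])

f_stat, p_value = stats.f_oneway(control, mej_25, mej_50)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")

# Simple follow-up pairwise t-tests against the control.
for name, grp in [("MeJ 25 uM", mej_25), ("MeJ 50 uM", mej_50)]:
    t, p = stats.ttest_ind(control, grp)
    print(name, round(p, 6))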
Seed germination
The optimization of germination is considered a major gap in A. jesdianum research. To the best of our knowledge, no previous study has reported the optimization of germination from seed in A. jesdianum. Some studies have optimized germination in some Allium species (Kamenetsky and Gutterman 2000). Treatments such as scarification and darkness helped break the dormancy of species such as A. suworowii, A. aflatunense, and A. altissimum at cold temperatures (Kamenetsky and Gutterman 2000). This study, for the first time, evaluated the effects of different treatments to trigger germination of A. jesdianum under in vitro culture. Different treatments, including mechanical (scarification), chemical (different combinations of plant growth regulators), and thermal conditions (cold temperature), were applied to optimize the germination ratio in A. jesdianum (Table 1). Evaluation of germination under the different treatments showed extensive variation in GP about 30-45 days after initiating the treatments.
There was a statistically significant difference (P < 0.01) in GP and GR between the different pre-germination treatments (Table S1). According to the data in Table 1, scarification and cold-water stress were found to be effective for germination of A. jesdianum seeds; however, chemical treatments had the greatest effect on breaking seed dormancy. Therefore, from the above findings, it can be inferred that dormancy of A. jesdianum seeds was probably associated with the enclosed covering around the embryo. The seeds under the control condition gave a similarly high GP (52.17%), but the seeds took a long time to germinate, similar to germination in Parkia biglobosa (Abbas et al. 2018). It seems that the ecological background of the natural habitat of the seed collection region and the harvesting time play predominant roles in the seed germination capacity of A. jesdianum. The optimized protocol could be used for rapid propagation of this plant under in vitro conditions.
Callus induction
Despite the role of A. jesdianum as an important medicinal plant, no research has focused on using in vitro cultures of this neglected species. In the present investigation, callus initiation from explants (hypocotyl and seeds) was observed within 14-18 days on all medium compositions (Fig. 1). The induced calli developed a yellow-to-greenish color with a semi-friable structure on the four different media. The growth of calli was fast after induction.
Analysis of variance showed a significant effect of treatments on CI and CGR (Table S2). Callus induction was effective with all tested PGRs and explants (Table 1). The effects of the different PGRs (2,4-D, NAA and BAP) and explants (seed and hypocotyl) on callus induction (%) and callus growth rate of A. jesdianum are presented in Table 2. This appears to be the first report of an optimized protocol for callus induction in A. jesdianum. According to Table 2, MS/2 media supplemented with 1 mg L−1 NAA and 3 mg L−1 BAP gave the highest callus induction (83.92%) from seed explants of A. jesdianum (Table 2), which showed no significant difference from MS/2 media supplemented with NAA (1 mg L−1) + BAP (3 mg L−1) for hypocotyl explants (Table 2). The lowest value for CI (42.01%) was found for MS/2 media supplemented with 2 mg L−1 BAP and 1 mg L−1 2,4-D (Table 2). No callus induction was observed on complete MS media for this species, so it can be concluded that lower concentrations of macro- and microelements were better for cell differentiation from explants in this genus. Owing to their homogeneous cell nature and the easy availability of growth substances to each cell of the homogeneous tissue, hypocotyl explants showed high potential for callus induction. The combined effects of auxins and cytokinins have been considered essential factors for callus induction in A. jesdianum. Comparison of the four different treatments implied that higher concentrations of BAP (a cytokinin) relative to 2,4-D and NAA (auxins) could promote more frequent callus initiation and growth in A. jesdianum (Table 2), but the lowest CGR (0.18 mm day−1) was observed on MS/2 media supplemented with 2 mg L−1 BAP + 1 mg L−1 2,4-D (Table 2). This implies a positive interactive effect of these concentrations of BAP and 2,4-D on the explant used for enhancing callus growth.
Callus growth traits
When studying callus growth, the responses of relative fresh weight and callus growth rate to different doses of MeJ and Pu were evaluated. Overall, a decrease in RFW was observed in response to most of the treatments compared to the control (Table 3). The lowest RFW value (0.60) was observed at the 100 µM concentration of MeJ, compared with the control (0.42) (Table 3). The stressor effects of both elicitors (especially at higher concentrations) reduced the RFW. This is probably due to the inhibitory effects of elicitors on cell growth and the capacity for cell osmotic adjustment, which increases the requirement for maintaining turgor of the growing cells, consumes energy, and decreases callus growth (Table 3). For CGR, all elicitations resulted in a significant reduction compared to the control, except for 0.5 mM Pu (Table 3). This result indicates the beneficial effect of the lower concentration of Pu (0.5 mM) on stimulating callus growth. Such a response is consistent with findings on cell growth and callus diameter in Hypericum perforatum (Abdollahipoor et al. 2017). It is well established that Pu stimulates not only callus cell extension but also cell division. Non-significant effects of MeJ on callus growth were also reported in cultures of Zanthoxylum stenophyllum Hemsl (Biondi et al. 2004). Elicitation by these elicitors changed the color of the calli to yellow-to-pale brown and pale brown after the elicitation period. The calli under Pu were more turbid. The elicitation process diminished the friable structure of the calli to some extent.
Lipid peroxidation
During the elicitation process, processes such as lipid peroxidation of membranes lead to enhanced antioxidant enzyme activity and the activation of SMs production (Das and Roychoudhury 2014) (Table 3). The MDA content showed a 2.94-fold increase (P < 0.05) in the calli exposed to 1 mM Pu compared with the control cultures. In agreement with previous studies, exposure to MeJ significantly increased the MDA content, as reported in Panax ginseng (Table 3). The highest (6.02 mg GAE g−1 FW) and the lowest (2.86 mg GAE g−1 FW) TPC contents were observed for the 50 µM MeJ and control treatments, respectively (Table 3). In callus cultures treated with 25, 50, and 100 µM of MeJ, the TPC showed 1.32-, 2.10-, and 1.01-fold increases compared to the control, respectively (Table 3). When treated with 50 µM MeJ, callus cultures reached the maximum TPC level of 6.02 mg GAE g−1 FW, while 100 µM MeJ resulted in an inhibited level of 2.89 mg GAE g−1 FW in callus culture (Table 3). For Pu elicitation, only the lower concentration (0.5 mM) showed a significant difference from the control (Table 3). This finding resembled the positive effect of MeJ on increasing the total flavonoids of Hypericum perforatum (Wang et al. 2015) (Table 3). This finding was also similar to the elicitation effects of Pu on anthocyanin production of Daucus carota under callus culture (Sudha et al. 2003). Therefore, at higher concentrations, both elicitors had an inhibitory or non-significant effect on anthocyanin content in A. jesdianum. Among these two elicitors, MeJ acted as a better inducer of anthocyanin biosynthesis in A. jesdianum callus cultures than Pu. The novel results from the present study show that callus cultures of A. jesdianum under elicitation could be considered an efficient way to produce these valuable pigments.
The highest (0.27 mg g−1 DW) and the lowest (0.002 mg g−1 DW) ChlT contents were observed for 50 µM MeJ and 1 mM Pu, respectively (Table 3). It was observed that A. jesdianum calli had a dose-dependent DPPH radical scavenging activity under MeJ elicitation (Fig. 2). This result suggests A. jesdianum is a good free radical scavenger under elicitation. Similarly, an increase in antioxidant activity of callus cells (e.g., by the DPPH method) was observed under MeJ elicitation in different cell culture systems (Ali et al. 2007; Ho et al. 2020).
Correlation between different traits
The simple correlations among the studied traits are presented in Table 4. A negative and significant correlation was found between RWC and TFD (−0.46**) and TFL (−0.35**). The lipid peroxidation content, as MDA, showed a negative and significant correlation with RWC (−0.64**) and CGR (−0.53**) (Table 5). This finding shows the importance of cell wall integrity for retaining cell water content and growth under cellular conditions. The antioxidant activity (DPPH method) showed positive and significant correlations with TFL (0.43**), TPC (0.33**), and ChlT (0.37**). The antioxidant activity through the DPPH method was found to be mainly associated with TPC (0.33**) and TFL (0.43**) in callus cultures treated with different elicitors. Furthermore, the positive correlation of TPC and TFL with DPPH suggests that the elicitation process increased the antioxidant activity of A. jesdianum through greater accumulation of TPC and TFL in callus culture.
Conclusion
Plant cell cultures have been perceived as an attractive method for producing SMs under controlled conditions. This study optimized seed germination and callus induction, and evaluated, for the first time, the effects of two different elicitors (MeJ and Pu) on the enhancement of different phenolics and callus-related traits in A. jesdianum. The findings showed a higher efficiency of MeJ than Pu for increasing different secondary metabolites and antioxidant activity in calli of A. jesdianum. The superiority of Pu elicitation lay in increasing callus growth and the contents of the two studied pigments (carotenoids and chlorophyll). This study suggests evaluating other concentrations of Pu for callus elicitation. The concentration of 50 µM MeJ was the best selective dose for enhancing the contents of phenolic compounds in A. jesdianum through callus culture. This study could lead to a new way to improve the production of beneficial phytochemical compounds from undifferentiated callus cells in the threatened medicinal plant A. jesdianum. Abbreviations: BAP: 6-benzylaminopurine; 2,4-D: 2,4-dichlorophenoxyacetic acid; FW: fresh weight; DW: dry weight; NAA: α-naphthaleneacetic acid; MS: Murashige and Skoog; MeJ: methyl jasmonate; Pu: putrescine; TPC: total phenolics content; TFD: total flavonoids; TFL: total flavonols; DPPH: 2,2-diphenyl-1-picrylhydrazyl.
"Environmental Science",
"Biology",
"Chemistry"
] |
Reproduction ratio and growth rates: Measures for an unfolding pandemic
The initial exponential growth rate of an epidemic is an important measure that follows directly from the data at hand and is commonly used to infer the basic reproduction number. As the growth rates λ(t) of tested positive COVID-19 cases have crossed the threshold in many countries, with negative numbers as a surrogate for deceleration of disease transmission, lockdown lifting is linked to the behavior of the momentary reproduction numbers r(t), often called R0. It is important to note that this concept alone can easily be misinterpreted, as it is bound to many internal assumptions of the underlying model and is significantly affected by the assumed recovery period. Here we present our experience, as part of the Basque Country Modeling Task Force (BMTF), in monitoring the development of the COVID-19 epidemic, by considering not only the behaviour of r(t) estimated for the new tested positive cases (significantly affected by the increased testing capacities), but also the momentary growth rates for hospitalizations, ICU admissions, deceased and recovered cases, in assisting the Basque Health Managers and the Basque Government during the lockdown lifting measures. Two different data sets, collected and then refined during the COVID-19 responses, are used as an exercise to estimate the momentary growth rates and reproduction numbers over time in the Basque Country, and the implications of using those concepts to make decisions about easing lockdown and relaxing social distancing measures are discussed. These results are potentially helpful for task forces around the globe which are now struggling to provide real scientific advice for health managers and governments while the lockdown measures are relaxed.
the number of secondary cases generated by a primary infected case during its infectious period, before recovery, in a completely susceptible population [6]. Both concepts can be extended to larger compartmental models and into the phase when the effects of the control measures become visible and parameters slowly change, leading to the so-called momentary growth rates λ(t) and momentary reproduction ratios r(t).
At the beginning of the COVID-19 epidemic, the process of collecting data was often not yet well organized, or not pre-organized in a way that we could immediately use to feed models and extract accurate measurements of the momentary growth rates and the momentary reproduction numbers. To mitigate and suppress COVID-19 transmission, draconian intervention measures were rapidly implemented, crippling our economies as lockdowns were put in place. As research to develop an effective vaccine is ongoing, epidemiologists and public health workers are the front line of this pandemic, focusing on the well known public health surveillance strategies of testing, isolation and contact tracing of infected COVID-19 individuals. To date, more than 4 million cases have been confirmed with about 300 thousand deaths, and these numbers are still increasing [7].
After several weeks of social distancing restrictions, lockdowns are now starting to be lifted, and modeling task forces around the globe are struggling to apply the concept of r(t), often called R 0 , to decide whether social distancing relaxation decisions are taken in the right period of time, i.e., when the outbreak is assumed to be controlled, with negative growth rates and a momentary reproduction number below 1. Although the absolute value of r(t) can vary across countries, we should rather look at the threshold behavior, as it is independent of those modeling uncertainties (when we use the growth rates primarily) and clearly indicates whether the outbreak is under control, with estimations below or above the threshold. Complementary measures of growth rates for different variables such as hospitalizations, intensive care unit (ICU) admissions and deceased cases, where data are also collected, should be evaluated when political decisions are taken.
In this report we present the growth rates and reproduction numbers for the COVID-19 epidemic in the Basque Country, an autonomous community in northern Spain with 2.2 million inhabitants. For the reproduction number calculation we use a refined stochastic SHARUCD-type model, an extension of the well known simple SIR model that is frequently used to model different disease outbreaks [8,9,10], developed within a multidisciplinary task force (the so-called Basque Modelling Task Force, BMTF) created to assist the Basque Health managers and the Basque Government during the COVID-19 responses. The model is calibrated using the empirical data provided by the Basque Health Department and the Basque Health Service (Osakidetza), continually collected with specific inclusion and exclusion criteria. Able to describe well the incidences of disease for different variables of tested positive individuals, this framework is now used to monitor disease transmission, including estimations of the momentary growth rates and reproduction numbers, while the country lockdown is gradually lifted [11].
Using two different available data sets for the Basque Country, collected from March 4 to May 9, 2020 (the data were revised as the variable definition for positive cases was changed with respect to the diagnostic test used), we present the results obtained for the momentary growth rates and reproduction ratios during the ongoing COVID-19 epidemic in the Basque Country and discuss the implications of using those concepts during an unfolding pandemic.
Materials and Methods
For the Basque Country we use the cumulative data for the following variables defined as: i) total tested positive patients (I cum ), in yellow, which are recorded in categories for ii) hospital admissions (C H ), in red, iii) intensive care unit admissions (C U ), in purple, iv) recovered (C R ), in green, and v) deceased (D), in black. At the beginning of the outbreak, only patients with severe symptoms admitted to a hospital were tested using the PCR (polymerase chain reaction) method. As testing capacities increased, including also antibody tests used mainly as a screening tool in nursing homes, less severe symptomatic cases started to be tested, contributing to an increase in the number of confirmed positive cases in the population. This data collection (named "Data set A") includes, for each category or variable, patients tested with both PCR and rapid antibody tests. "Data set A" has now been revised to include patients, in all categories, who were tested positive with the PCR method only (named "Data set B"). Using the data for all positive cases, the momentary growth rates (λ) and the momentary reproduction numbers (r(t)) are calculated for both data sets, A and B, and the results are compared.
The underlying mathematical model and empirical data
We consider the SHARUCD model versions primarily as stochastic processes, in order to compare with the available data, which are often noisy, and to include population fluctuations. With the state vector x := (x_1, ..., x_10)^tr, the dynamics of the probabilities p(x, t) are given by the master equation

d/dt p(x, t) = Σ_j [ w_j(x − Δx_j) p(x − Δx_j, t) − w_j(x) p(x, t) ]

with n = 10 different transitions w_j(x), as described by the mechanisms above, and small deviations from state x given by Δx_j := (1/N) · r_j. For the refined SHARUCD model the transitions w_j(x) and their shift vectors r_j include, explicitly,

r_4 = (0, 1, 0, −1, 0, 0, 0, 0, 0, −1)^tr,
w_5(x) = (1 − ξ)γ x_3 with r_5 = (0, 0, 1, −1, 0, 0, 0, 0, 0, 0)^tr,
w_6(x) = γ x_5 with r_6 = (0, 0, 0, −1, 1, 0, 0, 0, 0, −1)^tr.

With these w_j(x) and r_j specified we can also express the mean-field ODE system: the deterministic version of the refined model is given by a differential equation system for all classes, including the recording classes of cumulative cases C_H, C_A, C_R and C_U. Model parameters and initial conditions are shown in Table 1, where β is the infection rate and φ is the ratio describing the contribution of asymptomatic/mild infections to the force of infection. γ is the recovery rate, μ is the disease-induced death rate and ν is the ratio of hospitalized patients going to the ICU. η is the proportion of susceptible individuals who become infected, develop severe symptoms and are hospitalized, whereas 1 − η is the proportion of susceptible individuals who become infected and develop mild disease or remain asymptomatic. ξ is the ratio of mild/asymptomatic infected individuals detected via testing.
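To make the structure of such a transition-based stochastic model concrete, the following is a minimal sketch of a Gillespie-type simulation driven by pairs of transition rates w_j(x) and shift vectors r_j. The transition set shown is a deliberately simplified SIR-like placeholder with invented parameter values and a small population for a quick run; it is not the ten-compartment SHARUCD model itself.

```python
import numpy as np

def gillespie(x0, transitions, t_max, rng=None):
    """Simulate a continuous-time Markov jump process.

    x0          : initial state vector (counts per compartment)
    transitions : list of (rate_function, shift_vector) pairs, i.e. (w_j, r_j)
    t_max       : simulation end time (days)
    """
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_max:
        rates = np.array([w(x) for w, _ in transitions])
        total = rates.sum()
        if total <= 0:                                   # no further event possible
            break
        t += rng.exponential(1.0 / total)                # waiting time to next event
        j = rng.choice(len(transitions), p=rates / total)  # which event fires
        x += transitions[j][1]                           # apply shift vector r_j
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Simplified SIR-like example (placeholder rates, NOT the full SHARUCD model)
N, beta, gamma = 10_000, 0.35, 0.1
sir_transitions = [
    (lambda x: beta * x[0] * x[1] / N, np.array([-1, +1, 0])),  # infection
    (lambda x: gamma * x[1],           np.array([0, -1, +1])),  # recovery
]
times, states = gillespie([N - 10, 10, 0], sir_transitions, t_max=120)
```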
For completeness of the system, and to be able to describe the initial introductory phase of the epidemic, an import term ρ should also be included in the force of infection. For the present study we assume ρ to be much smaller than the other additive terms of the force of infection, given the strong observational uncertainties in the data collected at the beginning of the outbreak, when ρ would matter most.
Growth rate
After an introductory phase, the epidemic entered an exponential growth phase, which started in the Basque Country around March 10, 2020, and, due to the effects of the imposed control measures, gave way to slower growth around March 27, 2020 [11]. This exponential growth phase is typical of any outbreak with disease spreading in a completely susceptible population, as can already be seen in the SIR system: from the dynamics of the infected, dI/dt = (β S/N − γ) · I, with S(t) ≈ N one obtains the linear differential equation dI/dt = (β − γ) · I =: λ · I with an exponential growth factor λ. This growth factor can then be measured from disease data via λ = (1/I) · dI/dt = d/dt ln(I), giving a straight line in a semi-logarithmic plot of the data.
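A minimal sketch of how this growth factor can be read off from case data is to fit a straight line to the logarithm of the case counts over a window of the exponential phase. The daily counts below are invented placeholder values, not Basque Country data.

```python
import numpy as np

# Hypothetical daily cumulative case counts during the exponential phase
days  = np.arange(14)                       # days since the assumed start of growth
cases = np.array([12, 16, 22, 30, 41, 55, 76, 103, 140, 190, 260, 352, 480, 650])

# Straight-line fit in the semi-logarithmic plot: ln(I) = ln(I0) + lambda * t
slope, intercept = np.polyfit(days, np.log(cases), deg=1)
print(f"estimated growth factor lambda ~ {slope:.3f} per day")
print(f"doubling time ~ {np.log(2) / slope:.1f} days")
```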
Parameters, variables | Description | Value
H(t_0) | severe disease and hospitalized | 54.0
A(t_0) | mild disease and asymptomatic | 80.0
η | proportion of hospitalization | 0.4
ξ | detection ratio of mild/asymptomatic | 0.4
ρ | import parameter | −

Table 1: Model parameters and initial condition values.
For larger compartmental models we similarly obtain an exponential growth factor. For the basic SHARUCD model [11] we have the active disease classes H and A, with dynamics given by d/dt (H, A)^tr, now including the disease-induced transition to death via the mortality rate μ and the transition to ICU admission with admission rate ν. For an epidemic in its initial phase, i.e. S(t) ≈ N, the linearized dynamics can be written in terms of a matrix B for entries into the disease classes and a matrix G for transitions out of them, which are used below.
Reproduction ratio
Another measure of the spreading of the disease in its initial phase is the basic reproduction number (R_0), the number of secondary cases I_s produced by a primary case I_p during its infectiousness, before recovering, in a completely susceptible population.
In its simplest version, for SIR models, a primary case I_p(t_0) = 1 recovers via dI_p/dt = −γ I_p, hence I_p(t) = I_p(t_0) e^(−γ(t−t_0)). The number of secondary cases generated by the primary case is given by dI_s/dt = β (S/N) I_p(t) with I_s(t_0) = 0, a simple inhomogeneous linear differential equation in the case of an entirely susceptible population, S(t) = N. The solution is I_s(t) = (β/γ) · I_p(t_0) (1 − e^(−γ(t−t_0))) + I_s(t_0), and the total number of secondary cases from a primary case is given by the long-time limit I_s(t → ∞) = β/γ; hence the basic reproduction number is simply R_0 = β/γ. We thus have the relation between R_0 and the growth rate λ as R_0 = β/γ = 1 + λ/γ. Generalized, the reproduction ratio is then given by r = I_s(t → ∞)/I_p(t_0) = β/γ, the ratio of secondary cases produced by primary cases during their infectiousness.
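As a quick numerical illustration of this relation (with invented values rather than estimates from the Basque data): a measured growth rate of λ = 0.2 per day together with an assumed recovery period of γ⁻¹ = 10 days gives r = 1 + λ/γ = 1 + 0.2 · 10 = 3, whereas the same λ with γ⁻¹ = 4 days gives r = 1 + 0.2 · 4 = 1.8. The absolute value of r therefore depends strongly on the assumed generation time, but the threshold behavior does not: r > 1 exactly when λ > 0, for any positive γ.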
This concept can also be generalized to larger compartmental models, using the matrices B and G introduced above. For any primary cases H_p or A_p, the dynamics of x_p = (H_p, A_p)^tr can be solved using the matrix exponential. For secondary cases H_s and A_s, the dynamics of x_s = (H_s, A_s)^tr are given by d/dt x_s = B · x_p(t), with a solution analogous to the SIR case, or, from generation x_n to generation x_(n+1), the discrete iteration x_(n+1) = F · x_n. For the present case the next generation matrix F has, for the basic SHARUCD model, one dominant eigenvalue r_1, the other eigenvalue being zero. In the limiting case of a simple SIR-type model (with φ ≈ 1 and μ, ν ≪ γ) we again obtain r_1 = β/γ, as can easily be seen.
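The sketch below shows how such a dominant eigenvalue can be computed numerically for a two-class (H, A) model. The next-generation matrix entries and parameter values are placeholders constructed for illustration, since the paper's actual matrix F is not reproduced in the extracted text; by construction the example matrix has rank one, so its second eigenvalue is zero, as described above.

```python
import numpy as np

# Hypothetical next-generation matrix F for a two-class (H, A) model.
# Entries and parameter values are placeholders, not the paper's actual F.
beta, gamma, phi, eta = 0.35, 0.1, 0.8, 0.4
F = np.array([
    [eta * beta / gamma,       eta * phi * beta / gamma],
    [(1 - eta) * beta / gamma, (1 - eta) * phi * beta / gamma],
])

eigenvalues = np.linalg.eigvals(F)
r1 = max(eigenvalues.real)          # dominant eigenvalue = reproduction ratio
print(f"dominant eigenvalue r1 ~ {r1:.2f}")
```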
This concept of the reproduction ratio can be extended into the phase when the effects of the control measures become visible and parameters slowly change. The momentary reproduction ratios (r) can then be analyzed, as is frequently done for the COVID-19 epidemic, although they are often called the "basic reproduction number". While the momentary growth rate follows directly from the time-continuous data at hand, the momentary reproduction ratio depends on the notion of a generation time γ⁻¹.
To obtain the momentary growth rates directly from the data we use λ = d/dt ln(I), first applied to the cumulative tested positive cases I_cum(t), obtaining, via a smoothing window, the new cases after time τ and hence the growth rate. In practice we smooth the differences of the logarithmic positive cases with τ = 7 days and Δt = 5 days.
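One way such an estimate can be implemented is to take log-differences of the cumulative cases over the window τ and then average them over Δt. This is a sketch of the general idea only; the exact smoothing procedure used in the paper may differ in detail, and the case series below is an invented placeholder rather than Basque Country data.

```python
import numpy as np

def momentary_growth_rate(i_cum, tau=7, delta_t=5):
    """Rough estimate of the momentary growth rate lambda(t).

    i_cum   : array of cumulative positive cases, one entry per day
    tau     : window (days) over which log-differences are taken
    delta_t : width (days) of the moving average used for smoothing
    """
    log_i = np.log(i_cum)
    lam = (log_i[tau:] - log_i[:-tau]) / tau          # d/dt ln(I) over window tau
    kernel = np.ones(delta_t) / delta_t
    return np.convolve(lam, kernel, mode="valid")     # additional smoothing

# Invented placeholder series, NOT Basque Country data
i_cum = np.array([10, 14, 20, 28, 39, 54, 74, 100, 133, 175, 225, 283,
                  347, 415, 485, 555, 622, 684, 740, 789, 831, 866, 895])
print(np.round(momentary_growth_rate(i_cum), 3))
```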
Results and discussion
From the growth rate, the reproduction ratio is calculated using the recovery period γ⁻¹ obtained from our underlying model and from recent literature on the interaction of SARS-CoV-2 with human hosts [12,13,14,15,16,17]. Assuming the recovery period to be 10 days, we use 7-day smoothing of the differences of the logarithmic positive cases in order to absorb fluctuations in the data collection process, such as "weekend effects", when we often observe a consistently low number of reported cases that are then adjusted shortly afterwards. For this first exercise, both data sets show negative growth rates from April 1, 2020, confirming a decrease in disease transmission. Nevertheless, when looking at the longer-term results for "Data set A" (see Fig. 2 a-b)), an increase of the growth rate over time is estimated, with values crossing the threshold and becoming positive from April 23 to May 1, 2020, whereas "Data set B" remains below the threshold, indicating a continued decrease in disease transmission. The momentary reproduction numbers follow the same trends for each respective data set. The signal observed in "Data set A" would significantly impact decisions on lifting the lockdown, as the national plan for lifting the restrictions imposed during the state of alarm, called the "Plan for the Transition towards a New Normality", was announced on April 28, 2020 [18]. Taking place over four phases, with a gradual de-escalation to "a new normality", the plan is dependent on the ongoing progress of COVID-19 epidemic control across the different regions of Spain. Results obtained with "Data set B", however, would support the lockdown lifting that had already started with its "Phase Zero", initiated on May 4, 2020. When assuming a short recovery period of γ⁻¹ = 4 days (see Fig. 4), similar results are observed for the two data sets, with variation only in the absolute values.
The momentary growth rates for the various variables are also calculated to verify and support the interpretation of the estimated r(t) threshold behaviour, since, for any assumed recovery period γ⁻¹, the results obtained for the various variables are the same, changing only when the different data sets are considered. Figure 4 shows the behavior of three variables that are synchronized in the Basque Country: I_cum, C_H, and C_U. They all cross the threshold to a negative growth rate on April 1, 2020, confirming the r(t) trend obtained by looking at the data on I_cum alone. Recovered and deceased cases, shown in Fig. 4 b), follow 1-2 weeks later, due to the delay between onset of symptoms, hospitalization and, eventually, death, reaching negative growth rates on April 7 and April 11, 2020, respectively. Apart from the observed deviation of I for "Data set A", the other variables remain below the threshold, constantly negative until May 9, 2020, supporting the political decision to start lifting the lockdown measures sooner rather than later. So which measure should be used to guide political decisions? Here, the answer is simple: when the available data are consistently collected and defined, the momentary growth rates for the different variables I, H, U, R and D, measured directly from the data at hand, should also be considered as a complementary investigation.
The copyright holder for this preprint this version posted June 28, 2020. ; https://doi.org/10.1101/2020.05.18.20105528 doi: medRxiv preprint Figure 1: Ensemble of stochastic realizations of the SHARUCD-type model. a) Cumulative tested positive cases I cum (t). From April 6, 2020, we note an increase of reported positive cases as the testing capacities were increasing. In b) cumulative hospitalized cases C H (t), c) cumulative ICU admission C U (t), d) cumulative deceased cases D(t).
Figure 4: Using data on PCR + rapid tests, the momentary growth rate estimates from the data on tested positive infected cases (yellow), hospitalizations (red) and ICU admissions (purple) are plotted in a), and recovered (green) and deceased cases (black) in b). Using data on PCR tests only, the corresponding estimates are plotted in c) for tested positive infected cases (yellow), hospitalizations (red) and ICU admissions (purple), and in d) for recovered (green) and deceased cases (black).
As the concept of R_0 used alone can easily be misinterpreted, especially now that testing capacity, and consequently the number of newly notified cases, is increasing, the BMTF now monitors the development of the COVID-19 epidemic in the Basque Country by considering not only the behaviour of the momentary growth rates λ(t) and momentary reproduction numbers r(t) for the positive cases I_cum(t), but also the λ(t) for hospitalizations (C_H), ICU admissions (C_U), deceased (D) and recovered cases (C_R), assisting the Basque health managers and the Basque Government with results obtained with this model framework, based on the available data and evidence, as scientific advice. Without interfering in any political decision, we now use "Data set B", with a clearer definition of tested positive cases I_cum and of all the other variables that follow from it, C_H, C_U, C_R and D, and a recovery period of γ⁻¹ = 4 days, as shown in Figure 3 d) and Figures 4 c-d). At the moment, the reproduction ratio r is estimated to be below the threshold of r = 1, but still close to 1, meaning that, although the number of new cases reported in the Basque Country is decelerating, the outbreak is still in its linear phase, and careful monitoring of the dynamics of new cases in all categories, based on new information and data, is required to support the upcoming political decisions that will change the current life situation of millions of people.
Using the available data for the Basque Country, a small community with a short path for data collection and validation, we have developed a modeling framework able to predict the course of the epidemic, from introduction to the response to control measures, which is potentially helpful for task forces around the globe that are now struggling to provide solid scientific advice to health managers and governments while the lockdown measures are relaxed.
| 5,763.8 | 2020-05-21T00:00:00.000 | [
"Environmental Science",
"Mathematics",
"Medicine"
] |
Development of Guided Inquiry Based Learning Materials Enriched with Augmented Reality in Electrolysis Cell Material
In learning the chemistry of electrolysis cells, students still experience difficulties related to the characteristics of chemical content, which involves three levels of representation: macroscopic, submicroscopic, and symbolic. Therefore, chemistry taught through inquiry strategies must be supported by a program that is able to visualize the particulate aspects of the material in an electrolysis cell, such as Augmented Reality. The purpose of this study was to produce guided inquiry-based teaching materials assisted by Augmented Reality on the topic of electrolysis cells. The development of the teaching materials was carried out in three stages: (1) the analysis phase, (2) the design phase, and (3) the development phase. The teaching materials were validated by three expert validators. The validation results gave a percentage of 87.6% for the media assessment, 86.1% for the material assessment, and 87.4% for the readability test. This indicates that the chemistry teaching material that has been developed falls into the very feasible category, so that it can be used to support learning activities on electrolysis cell material. Keywords—Augmented Reality, Electrolysis Cell, Guided Inquiry
Introduction
One of the goals of chemistry learning is to train students in science process skills. Learning a science such as chemistry is very important because it can develop students' science process skills, or inquiry skills, for solving problems. Therefore, science process skills need to be developed. Trianto [11] states that science process skills play an important role: they help students learn to develop their minds, provide opportunities for students to make discoveries, improve memory, and assist in learning science concepts. Students' science process skills can be trained by developing inquiry-based learning strategies. Inquiry learning emphasizes the process of finding a concept, so that it can improve science process skills and learning motivation. The implementation of guided inquiry models can make the learning process more meaningful because students become more active, improve their understanding by linking findings and concepts, and improve their ability to think critically and to inquire [13].
One component supporting the implementation of inquiry-based learning strategies is the development of teaching materials. To date, many inquiry-based teaching materials have been developed. However, the implementation of these teaching materials still raises other problems. One characteristic of chemistry learning is that it requires understanding across three levels of representation. The problem stated above appears in one of the chemistry topics, namely electrolysis cells. Herawati's research results [3] revealed that chemistry learning currently focuses on only two levels of representation, namely macroscopic and symbolic representations, so students tend to memorize abstract concepts. As a result, students are not able to understand the processes that occur at the particulate level. One effective way to meet the need for teaching materials that support the submicroscopic aspect is to present three-dimensional visualization, for example with the help of Augmented Reality technology. AR technology is able to visualize abstract or particulate representations of concepts in three dimensions.
Augmented Reality can also be applied in the world of education. Karen Hamilton and Jorge Olenawa in [10] revealed that the application of Augmented Reality technology has advantages in the field of education, including: (a) it can create contextual learning when learning skills; (b) it can make the concepts learned concrete; (c) it can create better learning conditions; (d) it can be applied to different learning models; (e) learners can make discoveries in their own way; and (f) there are no consequences if an error occurs during the learning process.
Ida Wuryandari and Albarda [4] used AR technology to study hydrocarbon bonds in chemistry learning in order to improve student understanding. Kerawalla [7] revealed that AR can increase students' motivation by letting them view the real environment from a variety of new perspectives. In agreement, Chang [2] revealed that AR is able to contribute to increasing students' motivation in learning. Based on these explanations, AR has an important role in the learning process because it can increase students' motivation to learn and to find information related to the learning material. AR technology is a beneficial technology in teaching and learning activities. Through teaching materials that are equipped with AR, students are invited to turn images or abstract objects into something semi-real [14].
In this application, instructional media contain markers in the teaching materials that display objects in three dimensions (3D) when scanned. The application was made to make it easier for high school students to learn electrolysis cell material, with a good appearance, content and operation, so that it can be accepted and further applied as a learning medium.
However, AR technology has both advantages and disadvantages. Its advantages include: 1) it is more interactive, 2) it is effective in use, 3) it can be widely implemented in a variety of media, 4) object modeling is simple, because only a few objects are displayed, and 5) it is easy to operate. In addition, AR can attract students' attention in learning, create a collaborative learning atmosphere, and make students more independent [14]. The use of AR is also able to support students' cognitive processes because the process also involves the sense of sight [5], [6]. Its drawbacks are that 1) it is sensitive to changing viewpoints, 2) it cannot handle too many markers, and 3) it requires a lot of memory on the devices on which it is installed.
The purpose of this development is to produce guided inquiry-based chemistry teaching material enriched with Augmented Reality on electrolysis cell material, and to determine the validity of the developed teaching material. Validation is done by experts and users, where the expert is a chemistry lecturer and the user is a high school chemistry teacher. A readability test was conducted with 20 students of class XII of SMA Negeri 2 Pare.
Methodology
The development of the teaching materials follows the development model proposed by Lee and Owens [9], which consists of five stages: 1) assessment/analysis, which includes needs assessment and front-end analysis, 2) design, 3) development, 4) implementation, and 5) evaluation. The structure of the product development design is shown in Figure 1. This development was only carried out up to the third stage, development. The implementation and evaluation stages were not carried out because of limited time, cost and research staff. The stages in this development model are as follows.
Assessment / Analysis
The assessment/analysis phase is divided into two phases: Needs Assessment and Front-End Analysis [9].
Needs Assessment: This stage is carried out using interview and observation methods. The interviews and observations aim to identify the learning activities and learning resources used during the learning process at school.
Front-End Analysis:
The front-end analysis phase consists of several steps: Audience Analysis, Technology Analysis, Task Analysis, Critical-Incident and Issue Analysis, Objective Analysis, and Media Analysis.
Design
This design phase is a product planning stage in the form of a storyboard that will be developed into a chemical teaching material product. This stage can be done after obtaining the data or information needed for development in the previous stage. This stage includes the activities of making storyboards, designing product specifications and designing the structure of the material developed in teaching materials.
Development
This stage consists of developing the teaching materials into a real product in accordance with the designs prepared in the previous stage.
Data were collected using questionnaires prepared with a Likert scale. Two types of questionnaires were used for the assessment: the first instrument was for experts and users, while the second instrument was used for the readability test. The data obtained from the validation and readability tests were analyzed using the following formula.
P = (Σx / Σx_max) × 100%, where P is the percentage score, Σx is the total score of the answers given by the validators, and Σx_max is the maximum score if all validators give the highest rating. The score obtained, in the form of a percentage, indicates whether the teaching material is appropriate or needs to be revised. The criteria for determining the eligibility of the teaching materials can be seen in Table 1. The product of this development is teaching material in the form of an A6-sized pocket book equipped with AR. The teaching material consists of two components, namely supporting components and content components. The supporting components consist of a cover, preface, table of contents, instructions for use, basic competencies, learning objectives, concept maps, chemical info, "let's get to know", bibliography, and a profile page. The content component consists of three chapters, each comprising seven levels. Each level is a step in the guided inquiry learning model. The first chapter discusses the basic concepts of electrolysis cells, the second chapter discusses reactions in electrolysis cells, and the third chapter discusses Faraday's Law. The teaching material also ends with a final evaluation to measure students' understanding of the electrolysis cell material, with a feature that lets students see the score obtained and the answer key after completing the evaluation.
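The scoring formula can be applied directly to the questionnaire data. The sketch below uses hypothetical Likert-scale scores from three validators; the numbers are placeholders, not the study's actual questionnaire data.

```python
# Hypothetical Likert-scale scores (1-5) from three validators on five items
validator_scores = [
    [5, 4, 5, 4, 5],   # validator 1
    [4, 4, 5, 5, 4],   # validator 2
    [5, 5, 4, 4, 5],   # validator 3
]

max_per_item = 5
total = sum(sum(scores) for scores in validator_scores)
maximum = max_per_item * sum(len(scores) for scores in validator_scores)

percentage = total / maximum * 100      # P = (sum x / sum x_max) * 100%
print(f"P = {percentage:.1f}%")         # compared against the criteria in Table 1
```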
Based on the development procedure of Lee & Owens [9], which consists of several stages, the following results were obtained. In the first stage, the assessment/analysis phase is divided into two parts, the needs assessment stage and the front-end analysis stage. The needs assessment phase was carried out using interview and observation methods. The interviews and observations aimed to identify the learning activities and learning resources used during the learning process at SMA Negeri 2 Pare. The front-end analysis phase is further divided into several stages. First, the audience analysis: based on observations, it can be seen how the learning process takes place, how students respond to the teacher, and how students receive the material delivered by the teacher. This information can be used by researchers to adjust the teaching material to be developed.
Second, the technology analysis: this stage is an analysis of the availability of technology owned by students. The facilities owned by students are smartphones and laptops; in addition, a wifi network is available at school. The task analysis phase is carried out to select the material to be studied in the teaching materials; the material chosen is material that is considered difficult for students to understand. The critical-incident and issue analysis is carried out to determine which material is already understood by students and which is not yet understood, so that the researcher can determine which parts of the material must be taught more dominantly and which need less emphasis. This is done to maximize the solution of the existing problems. Electrolysis cell material is considered quite difficult because students' understanding of it is only at a sufficient level.
Next, the objective analysis stage: the teaching materials are developed based on the Basic Competency (KD) pair 3.6 and 4.6 on the sub-subject of electrolysis cells. The selected material is related to the formulation of learning objectives in accordance with the predetermined basic competencies. The media analysis phase aims to determine the type of teaching materials that are suitable for the field conditions. The design phase is the product planning stage, in the form of a storyboard that will be developed into a chemistry teaching material product. This stage consists of several activities, which include making storyboards for the teaching materials, designing product specifications, and designing the structure of the material developed in the teaching materials. The storyboard that has been made is consulted on and validated by a supervisor and is then developed into the teaching material product.
The development phase is the stage of turning the teaching materials into a real product in accordance with the designs made previously. The development of the teaching materials begins with developing the elements of the teaching materials in accordance with the design in the storyboard, ranging from text, audio, video, and images to 2D animation and 3D animation. When the developed product is finished, it is validated by the expert validators. The learning steps in the developed teaching material are game based. The learning steps are made up of many missions, so students are asked to complete missions during the learning process. Students thus feel like they are playing a game that consists of many missions and must complete each mission given, because each mission has its own challenges. The involvement of games in learning really helps students to increase their motivation to learn. Learning that can be done anywhere with the help of books and mobile phones can increase student motivation, and testing can be done easily. Many studies have shown that game-based learning can increase student motivation: games in learning make the learning process more fun and are able to increase student motivation [8]. Teaching materials created with game elements can increase interest in learning and support students' cognitive processes [1].
The results consist of quantitative and qualitative data. The quantitative analysis is done by calculating the average score given for the various aspects by the three expert validators. The qualitative data are obtained from the comments and suggestions given, which are used as considerations in revising the product. The validation results are presented as follows.
Media validation results
Based on the results of the validation by the media experts, it can be concluded that this teaching material is very suitable for use in learning electrolysis cell material, with a percentage of 87.6%.
Material validation results
Based on the validation results, the material experts concluded that these teaching materials are very appropriate for use in learning electrolysis cell material, with a percentage of 86.1%.
Analysis of student readability test results
After validation by the experts, the teaching material was subjected to a readability test with 20 grade XII students of SMAN 2 Pare. The readability test aims to determine the readability of the teaching materials. The data obtained from the readability test, in the form of quantitative and qualitative data, are presented in Table 4 below. Based on the results of the readability test, it can be concluded that this teaching material is very suitable for use in learning electrolysis cell material, with a readability percentage of 87.4%. Comments and suggestions from the readability test by the 20 grade XII students of SMAN 2 Pare are shown in Table 5.
Table 5. Comments and Suggestions Readability Test
Student | Comments and Suggestions
4 | Already keeps up with the technology; only a few mobile phones do not support the application
5 | The animation is funny and attracts interest in learning electrolysis material
8 | The material presented is good; I can understand it easily
12 | The animation is funny and attracts interest in learning electrolysis material
15 | Provide supporting applications to make it easier for users
16 | More animation should be displayed
19 | More animation should be displayed
Conclusion
The final form of the product that has been developed is guided inquiry-based teaching material enriched with Augmented Reality on electrolysis cell material, with a validity of 87.6% for media validity, 86.1% for material validity, and 87.4% for the readability test. The teaching material takes the form of a pocket book in A6 size. The book consists of three learning chapters, each consisting of seven levels of learning activities corresponding to the stages of the guided inquiry model. Learning activities are carried out online by scanning the QR codes provided in the book. The book is equipped with learning media such as learning videos, two-dimensional animation, and Augmented Reality animation. | 3,795.4 | 2020-07-31T00:00:00.000 | [
"Chemistry",
"Education",
"Computer Science"
] |
Read Games: Power, Ideology, and Repression in Atlus' Growlanser: Heritage of War
This article argues that although many people claim that literary theorists are attempting to take over the field of game studies, there has yet to be a real attempt to apply literary theory to a close reading of specific games. The article then applies Althusser's and Foucault's ideas of power and state apparatuses to Growlanser: Heritage of War in an attempt to demonstrate that literary theory can be applied to uncover the messages that games are proposing.
Certainly, there are some, although not many, articles that work with specific games, just as literary criticism works with specific texts. However, most of these articles analyze aspects of the game other than the narrative content or the theoretical postulates. After reading through several paragraphs of what seems to be promotional material for the Grand Theft Auto games, Joris Dormas (2006) proceeds to engage with the interactivity concepts of the game and only superficially mentions what the game might be telling the players in respect to identity. It almost feels like switching the title Grand Theft Auto for any other open world game would not impact the effect of the article. In this case, the game is an example for a general idea, as in most game related academia, instead of the focus of the idea being analyzed. Similarly, the reference to Counter-Strike in Talmage Wright's Creative Player Action in FPS Online Video Games (2002) could have been switched for almost any other FPS and the article would not have suffered. Likewise, most other articles dealing with specific games, except perhaps Greg Smith's Computer Games Have Words Too: Dialogue Conventions in Final Fantasy 7 (2002) and Halo and the Anatomy of the FPS (2002), don't deal with the games themselves, but with a certain idea or concept, often revolving around gaming, and use the games as examples. Even of the two articles that seem to be the exception, one seems to be nothing more than an attempt to give academic praise to the game play wonders of Halo. It doesn't explore any ideas or conventions within the game itself. This is far from the expected colonization from the fields of humanities, which focus on texts and ideas found in the texts themselves, and where changing the name of one text for another would result in an academic disaster. This begs the question: is it possible to do a close reading of a game? Is it really possible to apply literary theory to a game and explore the ideas presented in the game successfully? To answer those questions I decided to apply Foucault's and Althusser's ideas on power, ideology, and repression to the game Growlanser - specifically to the Peace Maintenance Brigade (PMB), an institution found within the game - to demonstrate that, through the exposure of the institutionalization of peace enforced at gunpoint, the inclusion of an all-seeing eye in the form of the PMB's Admonisher, and the active participation of the player in the founding of the PMB and in the furthering of its cause, this specific Growlanser game does, in fact, encourage the use of ideological and repressive state apparatuses for the founding of carceral societies.
Growlanser's Heritage of War
Growlanser: Heritage of War is the fifth installment in the Growlanser series. It was developed by Career Soft and published by Atlus. Released in September 2007 in North America, the game sported a limited edition case which included a 100-page art book, a multimedia disc including music and artwork from the game, a keychain, three pins, and two collector cards. The game went through a single printing and was then discontinued.
The game play is a mix of standard exploration and a dialogue-centered storyline, and the battle system is a mixture of real-time strategy and traditional menu-driven commands, as is usual of the Growlanser games. The most interesting aspect of the game is found within the narrative itself, as the game, besides having an intriguing plot, allows the player to speak to the non-playable characters in different ways and make choices that affect the outcome of the game.
Growlanser's plot is divided into five main chapters spanning twenty years, with more chapters being unlocked depending on the course and outcome of the game. The game is set on an island continent which is surrounded by an eerie force field which lets nothing in or out. The continent is overrun by Screapers - biologically engineered monsters - and the three main kingdoms on the continent - Cyrilthia, Grangale, and Neylern - are at war. The game's story opens with Seldous, an explorer who is trying to bring peace to the continent in a nonviolent way, and his childhood friend and lover Vanette. In Growlanser's first prologue Seldous is researching the "Paradise Legend", a story that tells that a long time ago there was a kingdom on the island that lived in total peace. During his travels he meets Isaac, an adventurer who wants to put an end to war at all cost. Together with Professor Pernagi they travel the continent in order to find something to put an end to war. Their discovery was an ancient weapon - the Admonisher, a floating eye that rains destruction wherever it aims. With this weapon Seldous and Isaac found the Peace Maintenance Brigade and force the nations into a peace at gunpoint after demonstrating its power on a number of Grangale soldiers. The PMB's politics say that if any kingdom should engage in war with another they will be obliterated.
The following three scenarios, in which the player controls Randall, a historian who makes an appearance during Seldous' scenario, General Rockbein, a Neylern general plotting rebellion against Grangale and the PMB, and Fannile, a researcher for the PMB, help to give the player background information on the state of various nations twenty years after Seldous and Isaac.
The main scenario begins roughly ten hours into the game. In this scenario the player controls the main character of the story, a young orphan called Haschen from the village of Warslee. After being operated on by the PMB, Haschen receives the ability to weaken Screapers and goes on missions for the PMB which eventually culminate in a colossal battle with the Screaper Queen.
The village of Warslee is under the protection of the PMB. To the PMB staff and all those who live in PMB-protected villages the PMB is a wonderful organization and the Admonisher is a heaven-sent gift, but to the inhabitants of Neylern and Grangale, nations disputing land in the PMB's conferences, the PMB is nothing more than a nuisance, as these nations would rather march to war than negotiate. It is here where the game demonstrates that it supports the implementation of what Foucault calls the carceral state. Even though throughout most of the story Krious, Seldous' son, argues that there should be a better way to achieve peace than at gunpoint, the majority of the characters in the game, and very often the player, begin to think that without the Admonisher peace will be broken and the land will be submerged in war again, as indeed happens in the latter stages of the game. Although the game ends on a somewhat hopeful note with Rufus, one of the commanding Grangale officers, saying "yes, we could work towards a peaceful world without some form of admonisher-like weapon", it is a doubtful hope. In the end the player is given the idealist notion that it is possible to work towards a peaceful world, but the game does not actually show any of the characters doing anything to strive for peace - there is no happily ever after. In fact, should the player unlock Sherris', Rufus', and Zeikwalt's hidden scenarios, they will witness nothing more than additional battles, and with all of the kingdoms, including the PMB, still having a large standing army and no sign of the disbanding of said forces at the end of the game, it seems doubtful that true peace will be achieved. Because of this it can be argued that Growlanser's statement is not really that "a peaceful world can be achieved", but that "for there to be a truly peaceful world someone must force it on people." Growlanser's heritage is war.
Power, Ideology, and Repression in Atlus' Growlanser: Heritage of War
According to Foucault, the ideal prison follows a model "that concentrates all coercive technologies of behavior" (2001a, pp. 1637). This means that a true prison is in itself an ideological state apparatus and a repressive state apparatus at the same time. If this is the case, then all manifestations of a culture - school, church, government, politics, etc. - are an ideal prison model, as they all use all the 'coercive technologies of behavior' available at their disposal. In all of these institutions individuals are given a certain set of rules to live by, and if those individuals break some of the rules they are punished accordingly, be it by school suspension, exile from the church, or imprisonment. Although this goes against Althusser's vision of state apparatuses, where the repressive state apparatus is composed of some parts of society, such as the army or the police force, and the ideological state apparatuses are composed of others, such as school and government, it seems more accurate to say that in a society each node of power represents a single state apparatus which is both repressive and ideological, each aiming for its own goals, and each varying in how much ideology or force to use. Even Althusser agrees that while the repressive state apparatus functions mainly by force and the ideological state apparatus functions mainly by subversive ideology, repressive state apparatuses hold themselves up with, or hide behind, ideology while ideological state apparatuses use force to some extent, sometimes in the form of expulsion or exile. In this case, it is more accurate to say that state apparatuses use varying degrees of repression and ideology depending on circumstances. These state apparatuses can be found in many, if not all, major works of literature. However, they can also be found in other means of narrative, such as film and video games.
In Atlus' game Growlanser the player is treated to a representation of a state apparatus being both repressive and ideological, as well as having the coercive qualities of a prison. Because of its narrative and the role the gamer plays in the unfolding of the narrative, the gamer is taught that both repressive and ideological state apparatuses are acceptable to some extent depending on the circumstances, thus strengthening the player's belief in a society where repression might be acceptable - to live in a carceral city is fine.
The Peace Maintenance Brigade is a nationless group founded by Isaac, an explorer who lost his wife to war and wants to put an end to it by whatever means accessible, Seldous, an explorer who wants to bring peace in a nonviolent manner, and Pernagi, a professor who knows about the planet's innate energy, in order to prevent massive war efforts between Grangale, Neylern, and Cyrilthia. The player, as the main character, takes the role of PMB founders Seldous, Isaac, and Pernagi, and later of PMB commanders Haschen and Krious and PMB lead researcher Fannile, among other characters. Through these avatars the player gets to experience the thought processes and points of view of the characters. In other words, just as the characters on screen become extensions of the player, the player becomes an extension of the characters (Gee 2003, 2005, 2007).
According to Althusser (2001), when children go to school they learn know-how, skills that are useful for different jobs. Furthermore, he argues that in school children also learn the rules of good behavior. This type of systematic brainwashing is seen in Growlanser's PMB on two levels. On the surface level, the characters in the game go to the PMB Training Grounds to receive schooling in fields best suited to further the PMB's agenda, such as archeology (in order to explore ruins and discover ancient weapons), zoology (in order to learn how to dominate animals and monsters), and military tactics. A few born with enhanced intelligence are taught biological engineering in order to experiment with the life-force of humans. As they learn these skills, they are drilled in the PMB's philosophy of "war is bad, we are the keepers of peace, we must use the Admonisher to destroy any opposition." The second level of the proposal in favor of the state apparatus is seen in the player. Players, assuming the role of PMB founders and commanders, take on the view that the PMB is a good thing. The player has become the agent for furthering the PMB's agenda of peace for the benefit of its high-ranking officers. In furthering the reach of the PMB, the player has become the capitalist. And why should the player not feel joy when furthering an agenda that is supposed to bring peace to the land? It is here where the PMB's agenda is noticed in full effect.
Although the PMB holds diplomatic meetings between the leaders of the three kingdoms in order to maintain peace, the true reason that the leaders of each kingdom don't declare war on each other is the existence of the Admonisher. The Admonisher is a giant floating machine controlled by the PMB that can obliterate armies and restructure the continent's surface in seconds by firing enormous bursts of energy at single targets. The PMB's peace at gunpoint by means of an all-seeing eye seems to do little for the citizens of the continent, who constantly have to deal with Screapers (genetically engineered beasts that feed on humans), a force field surrounding the continent which does not allow for external trade, and a rapidly eroding land that began to rot faster than usual since the founding of the PMB and the launch of the Admonisher. Furthermore, the conferences held at the PMB headquarters seem to ensure that a power shift does not occur instead of actually encouraging change. The player, in the mentality of a PMB founder or commander, accepts this as a way of life - the way it has always been. The player accepts the PMB as the world's sovereign entity and acknowledges it as good under the excuse that war is bad. However, the player disregards the well-being of those outside the PMB, and often of those in the PMB not in close relation to the player's lead characters. It is often the case that a player will rather sacrifice three or four PMB soldiers permanently than sacrifice one of the primary characters during a single event or battle and revive the character afterwards. Whether the reasons for this are strategic or for narrative purposes, the fact remains that those characters closer to the player, who holds the power, will have more benefits than temporary characters.
According to Althusser, ideological state apparatuses "present themselves to the immediate observer in the form of distinct and specialized institutions" (2001, pp. 1489). Ideological state apparatuses include religion, education, family, legal systems, political systems, trade unions, communications, and culture. In Growlanser, the PMB is such an institution. It presents itself as an institution that upholds peace. However, this peace is enforced at gunpoint.
The PMB was established in the center of the continent, and the Admonisher floats in the sky atop the highest point of the land, threatening to smite everyone with its laser, as it did Grangale when it was launched. When it was founded, Grangale had just finished an invasion of Neylern, and Cyrilthia had finished a short-scale invasion of the city-states to the west. The PMB's existence froze all the borders, causing an abuse of power from Grangale towards Neylern and from Cyrilthia towards the city-states, and not allowing the smaller areas a chance to retake their lands. This results in resentment from all parties towards the PMB. Furthermore, the PMB's politics of not interfering with negotiations ensures that the PMB remains the sovereign entity in the land. In every conference held, Neylern and the city-states appeal to the PMB for help, and the PMB responds by turning a blind eye as long as there is no war. This freezing of borders (and taxes from weaker kingdoms to support the PMB unwillingly) ensures that the state apparatus remains in power.
Althusser's further explanations of state apparatuses fit the PMB as well. He further explains that "the state apparatus defines the state as a force of repressive execution and intervention" (2001, pp. 1487). With a self-given authority to execute even kings who order wars and the self-imposed right to obliterate kingdoms, the PMB is nothing short of Althusser's definition. The PMB is indeed a fusion of army, police, courts, prison, school, and church.
Foucault suggests that truth, or at least the idea of truth, is in a way sponsored by whoever holds the power (2001b, pp. 1668). He continues by arguing that truth is centered in scientific discourse which is backed by political and economic institutions in power. This is also the case with the PMB. The PMB's idea of truth, war is bad, is indeed sponsored simply by the fact that the Admonisher can wipe out any dissension from their way of thought. The PMB is the powerful apparatus that produces (ideological state apparatus) and enforces (repressive state apparatus) their version of truth.
According to Foucault (2001b), one of the results of normalizing the 'truth' spoken by those who hold power is that the power to punish becomes legitimate. Another is continued surveillance. Both of these things are seen in the PMB. Their authority to punish, even to the point where they are allowed (by themselves) to send small ground troops into the cities of other countries, becomes legitimate after the lifting of the Admonisher. The sense of continued surveillance of neighboring countries is, once again, embodied in the floating, all-seeing eye at the center of the continent - the Admonisher.
Finally, Althusser (2001) states that the state apparatus may survive political events which affect the possession of power. This is observed by the end of the game, when the founders of the PMB die, the Admonisher is destroyed, and Krious and Haschen, now the top-ranking commanders, set off in order to subvert the leaders of other countries with their ideology of peace through the PMB and diplomacy and the threat of a large, Admonisher-less, yet powerful, army. Even though the union proposed by the end of the game is one where various sovereign states gather, the complete unity of the continent under the PMB seems inevitable, as the PMB officials are appointed as a "best among equals." Whether this peaceful union is best for the continent and the people in it remains to be seen. That tale will be dealt with, possibly, in other Growlanser titles. However, the design of the PMB in this specific Growlanser game does, in fact, seem to encourage the use of ideological and repressive state apparatuses, at least under certain circumstances.
Closing Comments
In the end, the PMB is not as nice or benign an organization as it may appear when the game begins. The true nature of the PMB comes to the surface thanks to Foucault's and Althusser's theories, and their integration into the game's plot. This shows that, thanks to the integration of literary theory into a video game, we can explore ideas present within a game with a level of depth that is not possible to reach without literary theory. The reason for this is that the main function of literary theory is to explore specific ideas presented within specific texts. By applying literary theory to texts one can discover different levels of ideas found within the text. The same is true of applying literary theory to video games. Although this exploration of ideas might not fit the more traditional game studies approach, where a certain gaming aspect, such as interactivity or design, is explored, the only real difference is what we look at - ideas or game play. Neither approach is wrong. If someone wanted to look at the notions of human behavior presented in Ar Tonelico, then that individual would have to resort to theories of human behavior and sexuality, and probably reference Freud on sexuality-related topics and Skinner on behavior-related issues; while if someone wanted to look at the Dive System in Ar Tonelico, that person would have to use a ludologic approach, as suggested by Frasca, Juul, and Aarseth, and talk about levels, accessibility, and progress in the game's design. With so many varied approaches at our disposal we should ask: is it convenient to use literary theory to analyze games? That depends. Doing a good analysis might take at least two playthroughs of the game and a lot of reading of literary theory. Is it worth the effort? Only if you want to find out what the games are really saying. | 4,764.2 | 2008-09-21T00:00:00.000 | [
"Philosophy",
"Political Science"
] |
A Review of Multiliteracies Pedagogy in Primary Classrooms
In the digital era, students are walking new literacy paths. For this reason, there is a need to explore evolving literacy practices in school pedagogy. This is often addressed by the expanding use of the concept of multiliteracies. This article reviews studies (N = 67) of multiliteracies pedagogy. The main purpose was to explore how the concept of multiliteracies has been used and understood in primary classroom research. The findings indicate that the studies often took into account both the multimodality of meaning-making and the diversity of learners. Recommendations are made for future multiliteracies studies to strengthen the pedagogical practices.
Introduction
This paper presents a systematic analysis of research articles on multiliteracies pedagogy in primary classrooms. In twenty-first-century classrooms, the concept of multiliteracies posits that the "texts" which humans use to make meaning in their everyday lives vary depending on social context and prevailing modes of meaning-making (image, sound, writing, gesture, speech, drama, etc.) (Kalantzis & Cope, 2012). Digital communication has further increased the ways in which humans interact and understand meanings. New technologies regularly transform and redefine what it means to be literate (Gee, 2003), requiring teachers to become more knowledgeable about how different texts can be used in the classroom (Anstey & Bull, 2010).
In order to understand multiliteracies and its role in teaching and learning in schools, the concept of literacy must be recognized as shifting and therefore reconceptualized in our changing society (Anstey & Bull, 2006). As society has rapidly evolved, curricula have come to include at least traces of cognitive, sociocognitive, functional and sociocultural theories (Kauppinen, 2010). Similarly, distinct theoretical backgrounds have resulted in differing pedagogical practices in writing. In a pedagogical context, Ivanič (2004) has identified discourses of writing that range from skills, genre and process discourses to discourses of creativity, social practice and sociopolitics. Traditionally, however, in both practice and research, reading and writing have commonly been seen as separate processes and texts as static genres.
Beyond the more traditional theoretical underpinnings, changing literacy practices and the New London Group (NLG) manifesto have affected not only literacy research but also education policy, with international implementation of multiliteracies in core curricula. For instance, the Ontario curriculum focuses on "students' ability to use their knowledge and skills in listening, speaking, reading, writing, viewing, and representing to understand, critically analyze, and communicate a broad range of information and ideas from and about their multicultural, multimedia environment" (Ontario Ministry of Education, 2006). In Australia, the national standards require that children in year 1 begin to create short "imaginative and informative texts" that include multimodal elements and also use texts with supporting images (Australian Curriculum, Assessment, and Reporting Authority, n.d.). In Finland, the new Finnish Core Curriculum for Basic Education (NCC, 2016) includes multiliteracy as a transversal competence. The Common Core State Standards in the US include a focus on using technology and digital media as a way to access information and communicate knowledge in ways that support students' learning across various modes (NGA & CCSO, 2010).
However, challenges remain in applying the multiliteracies approach to school practices. First of all, the theoretical concept of multiliteracies is complex, and, in fact, Palsa and Ruokamo (2015) concluded that use of the concept in research differs from the concept of multiliteracy as applied in the Finnish core curriculum. In particular, while the theoretical concept refers more to a pedagogical approach, the concept is applied in practice as a set of communication abilities. Secondly, despite the wide range of literacy studies focusing on multiliteracies, there is an overall lack of systematic analyses and reviews of multiliteracies being applied in schools, which emphasizes the need for the present analysis, beginning at the primary level. In exploring how the concept of multiliteracies has been applied to pedagogy in primary education, the analysis presented here will contribute both to discussion of multiliteracies pedagogy and to future multiliteracies research in education.
Method
To gain an overview of implementation of multiliteracies in the classroom, we conducted a systematic literature review. According to Finfgeld-Connett (2014), systematic analysis responds to questions that are difficult to answer using quantitative methods or findings from isolated qualitative investigations. As systematic reviews are also designed to identify existing gaps in the field of research and to make practical recommendations (Pittaway, 2007), this method serves the aims of the present study. The guiding research question was as follows: How has the concept of multiliteracies been used and understood in primary classroom research?
The data in this study included peer-reviewed studies that focused on multiliteracies in the primary classroom. The data were retrieved using both ERIC and the Academic Search Complete search engine; ERIC because it is an important education research database and Academic Search Complete because it allowed us to expand our search to include social sciences and the humanities. In the first phase of the study, the following search string was used, with modifications of Boolean operators for different platforms: multiliterac* OR multiple literac* OR multimodal literac*, AND elementary OR primary, AND reading (see the sketch below). We accepted multimodal literacy as a keyword because it is often used synonymously with multiliteracies. We also used the search term reading in order to link the articles to texts and literacy, but not specifically to reading comprehension or reading research. Therefore, reading carries no particular emphasis as a search term, and our study remains focused on multiliteracies in a classroom context.
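To make the search strategy concrete, the short Python sketch below assembles the reported terms into a single Boolean query string. It is illustrative only, assuming a generic keyword syntax; the exact field and operator syntax of ERIC and Academic Search Complete differs and was adapted per platform.

```python
# Illustrative only: builds one Boolean query string from the reported search terms.
term_groups = [
    ['multiliterac*', '"multiple literac*"', '"multimodal literac*"'],  # concept terms
    ["elementary", "primary"],                                          # school-level terms
    ["reading"],                                                        # linking term (no special emphasis)
]

query = " AND ".join("(" + " OR ".join(group) + ")" for group in term_groups)
print(query)
# (multiliterac* OR "multiple literac*" OR "multimodal literac*") AND (elementary OR primary) AND (reading)
```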
The search for articles extended through 2014, and the initial process yielded 338 articles. These articles were reviewed on the basis of the inclusion and exclusion criteria presented in Table 1. Based on these criteria, several articles were excluded at this point, leaving 106. To ensure reliability, we then carefully read the remaining 106 articles and discussed problematic cases for inclusion or exclusion. For instance, we discussed whether summer school programs would count as classroom contexts. As contexts and curricula are often quite different, we decided to exclude them from the list. At this point, one of the main criteria for exclusion was if the article did not include empirical data focusing on students in primary school classrooms. After this final step, we were left with a final sample of 67 articles (Appendix 1) from 1997 to 2014. Despite this careful identification of articles, we may not have located all those fulfilling our criteria. However, we believe the sample is large enough for a representative review of multiliteracies pedagogy in primary classrooms. While we also recognize that there have been many further relevant publications since 2014, we are confident that this article provides an essential archival perspective on foundational issues.
For the purposes of data analysis, we applied the principles of qualitative content analysis in systematic reviews. To begin, the coding categories were defined inductively by studying a subset of articles and drawing inferences about organizing codes (cf. Finfgeld-Connett, 2014). The coding template was then constructed as follows: article topic, definition of multiliteracies, theoretical and/or pedagogical concept connections, main research questions or aims, method and data (including country of data collection and the children's age), media of modes (e.g. claymation movie, video, book, comic, photograph) and key recommendations in relation to multiliteracies pedagogy. Based on Kalantzis and Cope's (2012) distinctions of meaning-making, the articles were also classified according to 1) multimodality, 2) diversity in global connectedness (cultural, social and linguistic diversity) or 3) both.
The research team jointly coded two articles to clarify the criteria for the categories. We selected one article as an exemplar (Ranker, 2007) and one that was more challenging for category determination. A clarifying conversation allowed the team to communicate understandings of the categories, and each of the present authors then coded a set of articles. We continued to discuss problematic cases; for example, the methodology and methods or the country of data collection were not always clearly specified.
After coding, each topic was analyzed according to the principles of qualitative content analysis. This is a flexible method of data analysis that can range from impressionistic interpretation to highly systematic analysis (Hsieh & Shannon, 2005; Finfgeld-Connett, 2014). The results were reported in the form of figures and tables and through descriptive analyses. In this paper, we concentrate on concept definition, theoretical approaches, methods and data, as well as on article topics and the key aspects of cultural, social and linguistic diversity.
Definition of Multiliteracies
Although the definition of multiliteracies was not always clearly stated, the review confirms that the work of the NLG influenced almost all of the articles, most of which either mention the NLG or refer to works by Bill Cope and Mary Kalantzis. In particular, many authors cited their edited volume Multiliteracies: Literacy Learning and the Design of Social Futures (2000), which is a further elaboration of the theory of multiliteracies. Kalantzis and Cope were also members of the NLG. An article by Callow (2003, np) provides one example of a clear definition built on the work of the NLG: "The term multiliteracies acknowledges the multiplicity of meaning-making modes (visual, textual, audio, etc.) as well as the wider social contexts of these modes, from diverse local settings to global communities." This definition follows Kalantzis and Cope (2012) in highlighting two major aspects of meaning-making: multimodality and diversity. Thirty-six of the reviewed articles focused on multimodality; six focused more specifically on cultural, social and linguistic diversity, and twenty-five addressed aspects of both multimodality and diversity.
The data also confirmed the importance of design as an aspect of multiliteracies (e.g. Ranker, 2007). This refers both to sign-making, reading and the production of texts that use different modes of communication (Kress, 2010) and to the "designing of social futures" articulated in the NLG manifesto. The manifesto itself is also a political statement, in which the future is seen as something possible to imagine and design in ways that allow all students to participate in public and community life.
Theoretical approaches. Because the reviewed articles share basic ideas developed by the NLG, we can identify some general theoretical assumptions that they explicitly or implicitly entail. In many cases, social semiotics (van Leeuwen, 2005; Kress, 2003, 2010) was seen as an important theoretical approach (e.g. Callow, 2006). Secondly, several articles referred to multimodality in their theoretical framework; among these, Hughes and Morrison (2014, p. 612) defined multimodality as "meaning-making through many representational modes (Jewitt & Kress, 2003), digital and social". According to Callow (2006, p. 7), multimodal refers to "texts which include spoken, written, visual, aural and interactive aspects".
Along with social semiotics and multimodality, another important theoretical approach relating to the definition of multiliteracies was Lev Vygotsky's sociocultural theory (Vygotsky, 1962), also associated with Jean Lave and Etienne Wenger (1991). The central idea of sociocultural theory is that learning occurs through relations with others rather than by means of purely individual cognitive processing. As one example, Hughes and Morrison (2014) took account of this social dimension when studying multiliteracies pedagogy in the development of English language learners' identities as writers.
The data also reveal the strong influence of the sociolinguistic approaches of Halliday (1993) and Gee (1996) (e.g. Doherty, 2002). Social linguistics theory (Gee, 1996, 2015) is important for multiliteracies research because it changes the understanding of situated contexts and their impact on learning. In some of the reviewed articles, researchers applied New Literacy Studies theory to social linguistics, following James Paul Gee's (1996) argument concerning social and cultural approaches to language, literacy and learning. New Literacy Studies (e.g. Gee, 1996; Mills, 2016; Lankshear, 1997; Lankshear & Knobel, 2003) emphasizes literacy practices that involve reading and writing in cultural contexts. For example, Hill (2010) explored the ways in which young children use new forms of literacy at home and in the community, as well as in educational settings.
Some authors emphasized this social diversity by using theories such as additive bilingualism (Lambert, 1974) or multilingualism (Cenoz & Genesee, 1998); third space theory (Gutiérrez, 2008); critical theory (Luke & Freebody, 1997) and theories of communicative action (Habermas, 1981) as well as situated learning theory (Lave & Wenger, 1991). Here, the emphasis on equal opportunities celebrates diversity and enhances the participation of all students in classroom activities. However, although inclusive education has embraced the notion of social justice and equal participation, the inclusive turn in education was not mentioned as a driving theory (cf. Forlin, 2010;Ainscow, Conteh, Dyson & Gallanaugh, 2010).
According to Lankshear and Knobel (2006), literacies are understood only when they are situated within social, cultural and historical contexts and connected to social identities. The articles in our dataset shared the perspective that education and literacy are socially situated as the constructions of social groups rather than of individuals and cognitive skills alone. One reviewed study provides an exemplar of these starting points, examining how multiple literacies are constructed through teacher-student interactions.
Multiliteracies pedagogy should take into account not only literacy practices in the school environment but also those at home and in various other cultural contexts and situations relating, for example, to popular culture.
Methods
The reviewed studies were typically qualitative, with an emphasis on observational data collection (e.g. Callow, 2006; Ranker, 2007). The use of multiple data sources was also common; for example, one year-long study combined school-wide observational data with interviews with parents and staff members, while another included field notes, observation schedules, student work samples, students' self-assessment journals and teacher reflections. In addition, some studies included descriptions of different activities, such as classroom activities, learning communities, rewriting situations, children's products and literacy programs. Ethnographic methods were also used (e.g. Del-Castillo, Garcia-Varela & Lacasa, 2003). For example, Mills (2007a) explored students' access to multiliteracies at a suburban state school in Queensland, Australia, by using critical ethnography drawing on observational data, continuous audio-visual recordings, field notes and self-reflective journals. Quantitative methods were used in only two of the reviewed articles: one studied students' multimodal representations of scientific knowledge, and the other used triangulation combining quantitative and qualitative data in a study of laptop use among 554 students. The most common research setting was Australia (N = 24), followed by the US (N = 20) and Canada (N = 13). Only a few of the reviewed studies were conducted in other countries.
Article topics can be divided into five main categories; these are shown in Figure 1, along with the main aspects of diversity in each category.
Topics: Writing and text production. The category of writing and text production was a common topic in the reviewed articles. Produced texts were typically multimodal and included digital videos, TV commercials, web pages (Del-Castillo, Garcia-Varela & Lacasa, 2003) and traditional stories redesigned with new technologies. For this reason, the production process often drew on digital media and communication.
Topics: ICT.
Although text production often employed information and communications technology (ICT) and topic areas tended to overlap, some articles reflected a particular focus on ICT in teaching multiliteracies, as in the work of Hersteman (2011a, 2011b, 2013). In addition, Ridgewell and Exley (2011) studied forum-based netspeak (the communication style found in online forums) in the context of scientific learning. Topics: Reading and analyzing texts. Studies of reading and analyzing texts included those that focused on students' interpretations of texts (e.g. Ajay, 2011; Callow, 2006). Interestingly, several of the studies of text analysis concerned how adults interpret students' texts. For example, Mills (2011) performed a genre analysis of students' claymation movies, while Albers (2009) and others sought to understand multimodal children's texts such as drawings or panorama boxes. In an exceptional thematic text analysis, Fisher, Albers and Frederic (2014) explored punitive literacy practices in the classroom by analyzing students' drawings of situations in which they had misbehaved. All of the texts in this category were multimodal, with an emphasis on visuals (e.g. comic books, picture books, animated videos, claymation movies, drawings, panorama boxes and PowerPoint presentations).
Topics: Classroom practices. A significant number of the reviewed studies (N = 25) were classified as relating to classroom practices (Figure 1). For example, one study examined the literacy practices and cultural models constituting the mandated balanced literacy curriculum, and another studied how children engage with new literacies and multimodal semiotic practices using available classroom materials. Perspectives in this category were wide ranging, and the topics covered the integration of science, math and technology literacies and the conceptualization of literacy and literacy practices for children with specific language impairments.
Reading pedagogy was foregrounded, as for example in Cumming-Potvin's (2007) study of progress in literacy within multiliteracies pedagogy and in work on supporting students' self-knowledge of reading. Similarly, one study explored the writer identities of English language learners, and identity among English language learners was also studied by Cummins, Bismilla, Chow, Cohen, Giampapa, Leoni, Sandhu, and Sastri (2005). Student engagement was studied especially among students with low socioeconomic status (SES) and disengaged students. Thus, besides different types of literacy practices, these articles related to instruction, support and scaffolding in classrooms, student engagement and identity, conceptualization of literacy, learning environment and curriculum.
Topics: Discourses and access to multiliteracies and other aspects of diversity.
There were six studies in the data that we interpreted mainly as perspectives on diversity, coding them as "Discourses and access to multiliteracies" (see Figure 1). They included four studies by Kathy Mills, who has studied discourse, diversity and access to multiliteracies in culturally diverse classrooms. The other two studies are Comber, Thomson and Wells (2001), exploring social action and power in a local urban renewal project, and Palmer (2010), who examined dual-language education in terms of race and equity.
Beyond these six studies, aspects of diversity were sometimes connected to multimodal meaning-making. For example, text production (Figure 1) was sometimes associated with concepts of diversity, notably in Lotherington and Chow (2008) and in work focusing on multilingual and multicultural classrooms. One study linked the construction of a website about violence to collective identity, and there was a similar community dimension in Marshal and Toohey's (2010) exploration of intergenerational storytelling. In the studies that focused more on ICT, considerations of diversity were less common, although some important connections were made between technology and diversity. For example, one study examined technological skills among urban Aboriginal students, and Hill (2010) explored different cultural capitals and geographic areas.
Diversity was less often considered in studies of reading and analyzing texts (Figure 1) than in those focusing on text production. However, Ajay (2011) focused on students' sociocultural experiences, and Albers, Frederick and Cowan (2009) looked at gender features in students' drawings, while Callow (2006) emphasized social justice issues in his study of visual metalanguage.
Among studies of classroom practices (Figure 1), diversity was often linked to multilingualism. Practices were studied in multilingual classrooms (Cummins, Bismilla, Cohen, Giampapa & Leoni, 2005, p. 41) and among English language learners, as well as ethno-racial and language minority students. Work on multiliteracies instruction in a multi-ethnic class emphasized social justice as an integral aspect of diversity. Similarly, one study involved an entire class in a collective social action project to help victims of Hurricane Katrina after it struck the southeastern United States.
Discussion
In the past 20 years, literacy has increasingly been viewed as a socially constructed phenomenon; according to Kitson, Fletscher and Kearny, "what counts as literacy is locally and situationally defined through the action of members in a social group" (2007, p. 30), and language is described in holistic terms as a set of socially and culturally situated practices for meaning-making. The concept of multiliteracies broadens our understanding of what constitutes a text and raises the question of the relationship between language, literacy and education. For this reason, we set out to study multiliteracies pedagogy in primary classrooms, focusing on how the concept has been used and understood. As multiliteracies is widely used in research, we found it relevant to explore how the new literacy environment and multiliteracies pedagogy are understood, especially among primary students. In this section, we present some of our key findings, along with directions for future research. Our analysis supports Palsa and Ruokamo's (2015) view that the NLG manifesto is seminal in that it provides the theoretical basis for multiliteracies and continues to influence current research. The reviewed articles took account of the multimodal nature of texts and placed more emphasis than traditional literacy research on visual and digital texts (cf. Kulju et al., 2017). However, beyond the integration of technology and multiple modalities, multiliteracies pedagogy should encompass the wide range of linguistic, social, racial, cultural, sexual and gendered identities of children in our classrooms (Boyd & Brock, 2015). The shared perspective confirmed here was that education and literacy are socially situated and grounded in constructions of social groups rather than in individual and cognitive skills alone.
In our data, diversity usually referred to multicultural and multilingual classes and to students for whom English was a second language. In some cases, diversity referred to students who are disadvantaged because they experience poverty. In future studies, this aspect of diversity should be extended to specific social and cultural issues and to gender diversity. For instance, we see potential in drawing on feminist or queer theories to provide important perspectives on text production or on identification as an influence on literacy practices (Blaise & Taylor, 2012; Cherland & Harper, 2007). Linguistic resources could also be more widely taken into account, in terms of the different written and spoken forms of language used in certain social groups. As pointed out by the NLG (1996), one of the most important skills that students need to learn is the ability to negotiate regional, ethnic or class-based dialects and cross-cultural discourses, and non-English-speaking countries and societies should also be included. This expanded view of diversity in multiliteracies pedagogy could also extend the possibilities of inclusivity in education, placing similar emphasis on social justice and equal participation (cf. Forlin, 2010; Ainscow, Conteh, Dyson & Gallanaugh, 2010).
Most of the studies reviewed here were qualitative case studies based on observational data. In developing existing understandings of multiliteracies pedagogies, we advocate the use of more varied methods, including expanded qualitative methods and the incorporation of quantitative and mixed methods studies. Combinations of quantitative and qualitative studies could bring new insights into research in this area. Quantitative methods could contribute to the evaluation of learning outcomes of multiliteracies pedagogy and help take account of background variables related to the social diversity of learners. Data collection might also be broadened to include, for example, video diaries, to capture students' voices more effectively (cf. Kulju et al., 2017).
Qualitative research can also provide a formative evaluation for quantitative measures (Maxwell, 2012). Researchers on multiliteracies pedagogy should thus also consider the indications of qualitative studies to determine what should be researched on a large scale. In addition, collaboration across scientific fields such as education, psychology, linguistics, digital studies, literature and social sciences may provide insight into various methodologies as well as into different aspects of multimodality and diversity in multiliteracies pedagogy (cf. Kulju et al., 2017).
The reviewed studies covered a broad range of issues related to multiliteracies pedagogy. Interestingly, and in contrast to more traditional literacy research, text production took precedence over text reception. Additionally, reading was studied mainly from the perspective of text analysis, as a consequence of which the multiliteracies approach seems to neglect the skills aspect of reading. This is probably related to the emphasis on design, referring both to sign-making and to the reading and writing of texts that use different modes, and to the "designing of social futures", as expressed in the NLG manifesto. In other words, reading and writing seem to blend, which creates challenges in planning and evaluating learning processes at school.
This also raises the question of learning within multiliteracies pedagogy: that is, how students develop their literacy skills and how we assess their learning. Future studies should strive to capture in greater detail the features of project-type reading and writing activities in schools and their relation to curriculum aims in order to support the development of school pedagogy. In saying this, we should reiterate that both this paper and multiliteracies concern pedagogy rather than skills.
A few limitations of this study should be noted, relating mainly to the data search. The definition of search terms was a complex process; we constrained the search by using the term reading in the original process, which may have limited the selection of papers. In addition, research using the term multiliteracy as a keyword is only now emerging in Finland, following recent changes in the national curriculum and the translated terms it employs. Additionally, several of the papers were less specific about their theoretical or methodological bases, which created challenges for the analyses. Despite these limitations, however, this study is the first systematic review to concentrate on multiliteracies pedagogy at the primary level, and it may contribute to future research on multiliteracies classroom practices. Moving forward, it seems important to continue the review through 2018 to identify changes and trends in recent research. However, multiliteracies pedagogy is a continuum, and while new technologies emerge and become part of everyday schooling, the grounding principles of multiliteracies remain unchanged. As an alternative to monocultural and book-centered education, multiliteracies pedagogy foregrounds meaning-making, using different modes of communication in diverse contexts.
Conclusion
Since the publication of the NLG manifesto in 1996, academic discussions about theoretical perspectives, pedagogical possibilities, criticism about literacy teaching and learning and social diversity at schools have increased enormously. Leander and Boldt (2013) noted that 'A Pedagogy of Multiliteracies' has been foundational to doctoral programs, published research, conference presentations and papers as the central manifesto of the new literacies movement.
As the present findings indicate, a body of predominantly qualitative studies conducted in primary classrooms has also been influenced by the New London Group's manifesto. These studies explore the multimodality of meaning-making in both text analysis and text production and acknowledge the diversity of learners. Future studies should continue this collaboration between students, teachers and researchers (cf. McClay, 2006) in order to understand changing literacy practices. To properly develop this "new basics" of literacy education, that collaboration should also extend to families and communities. By expanding the methodology and the aspects of diversity considered, and by taking more effective account of the social practices that are continually moving, changing and unfolding across spaces (Mills, 2016), we can hope to develop a pedagogy that bridges old and new or evolving literacy practices and environments.
"Education",
"Linguistics"
] |
Deep Learning for Fake News Detection in a Pairwise Textual Input Schema
In the past decade, the rapid spread of large volumes of online information among an increasing number of social network users has been observed. It is a phenomenon that has often been exploited by malicious users and entities, which forge, distribute, and reproduce fake news and propaganda. In this paper, we present a novel approach to the automatic detection of fake news on Twitter that involves (a) pairwise text input, (b) a novel deep neural network learning architecture that allows for flexible input fusion at various network layers, and (c) various input modes, like word embeddings and both linguistic and network account features. Furthermore, tweets are innovatively separated into news headers and news text, and an extensive experimental setup performs classification tests using both. Our main results show high overall accuracy performance in fake news detection. The proposed deep learning architecture outperforms the state-of-the-art classifiers, while using fewer features and embeddings from the tweet text.
The language used in forging fake news is deceptive, in the sense that it is intended to provoke and aggravate the users emotionally and lead them to spread the fake news [5,11,12,15-17,19,20,23] (e.g., "You thought this is on behalf of the people in Hong Kong. On the contrary, it is a rascality of putting the "false freedom" label on the will of most of Hong Kong people."). Another common indicator of deceptive language is the promotion of only one viewpoint, and thus being highly subjective [12,16,20,22] (e.g., "@feituji1994 I think we should supporting the Hong Kong Government."). Additionally, grammatical and spelling mistakes, as well as the use of the same limited set of words, are characteristic properties of deceptive language [7,11,12,16]. The recent development of natural language processing (NLP), data mining, and machine learning tools has led to a more qualitative understanding of the features of deceptive language (linguistic features), as well as of the features of malicious users and entities (network account features) [1,2,4,5,7,8,11,12,14-19,22].
Unlike previous works, our work presents the following novelty and contribution:
• While the problem of fake news detection has been tackled in the past in a number of ways, most reported approaches rely on a limited set of existing, widely accepted and validated real/fake news data. The present work builds the pathway towards developing a new Twitter data set with real/fake news regarding a particular incident, namely the Hong Kong protests of the summer of 2019. The process of exploiting the fake tweets provided by Twitter itself, as well as the process of collecting and validating real tweet news pertaining to the particular event, are described in detail and generate a best practice setting for developing fake/real news data sets with significant derived findings.
• Another novelty of the proposed work is the form of the input to the learning schema. More specifically, tweet vectors are used in a pairwise setting. One of the vectors in every pair is real and the other may be real or fake. The correct classification of the latter relies on the similarity/diversity it presents when compared to the former.
• The high performance of fake news detection in the literature relies to a large extent on the exploitation of exclusively account-based features or on the exploitation of exclusively linguistic features. Unlike related work, the present work places high emphasis on the use of multimodal input that varies from word embeddings derived automatically from unstructured text to string-based and morphological features (number of syllables, number of long sentences, etc.), and from higher-level linguistic features (like the Flesch-Kincaid level, the adverbs-adjectives rate, etc.) to network account-related features.
• The proposed deep learning architecture is designed in an innovative way and is used for the first time for fake news detection. The deep learning network exploits all aforementioned input types in various combinations. Input is fused into the network at various layers, with high flexibility, in order to achieve optimal classification accuracy.
• The input tweet may constitute the news text or the news header (defined in detail in Section 4). Previous works have used news articles' headers and text as the two inputs for pairwise settings. However, this is the first time that tweets are categorized into headers and text based on their linguistic structure. This distinction in Twitter data for fake news detection is made for the first time herein, accompanied by an extensive experimental setup that aims to compare the classification performance depending on the input type.
• Our work provides a detailed comparison of the proposed model with commonly used classification models according to related work. Additionally, experiments with these models are conducted, in order to assess and compare their performance directly with that of the proposed pairwise schema, by using the same input.
• Finally, an extensive review of the recent literature in fake news detection with machine learning is provided in the proposed work. Previous works with various types of data (news articles, tweets, etc.), different categories of features (network account, linguistic, etc.), and the most efficient network architectures and classification models are described thoroughly.
The rest of this paper is structured as follows: Section 2 discusses the recent related work regarding fake news detection from social networks, including the most common types of data and efficient machine learning techniques. Section 3 describes the creation and preprocessing of the data sets used in our experiments. Section 4 outlines the methodology regarding the feature set (Section 4.1), the embedding (Section 4.2), and the network architecture (Section 4.3). Section 5 presents the experiments' implementation, both for real header and real text input. Section 6 discusses the experiments' results, and compares them to recent related work. Section 7 discusses the findings, concludes the paper, and presents some guidelines for future work.
Related Work
The spread of fake news has caused severe issues, having a great impact on major social events. Consequently, the recent related work regarding fake news detection from social networks is vast, and several researchers have attempted to organize it and identify the most common types of data and machine learning techniques. Vishwakarma and Jain [8] listed the recent methods and data sets for fake news detection based on the content type of news they are applied to, the input data being either text or images. The review of Perera [22] offered an overview of the deep learning techniques for both manual and automatic fake news detection, identified 7 different levels of fake news based on the context, as well as on the motive for their creation and diffusion, and analyzed their processing by algorithms implemented for social media. Alam and Ravshanbekov [12] provided a definition for fake news and discussed the positive impact of combining NLP and deep learning techniques in automatic fake news detection. In a survey by Merryton and Augasta [4], baseline classifiers and deep learning techniques for fake and spam message detection were overviewed, and the most common NLP preprocessing methods and tools, as well as the most commonly used linguistic feature sets and data sets, were discussed. Han and Mehta [13] identified several fake news types and linguistic features, evaluated the performance of baseline classifiers and of deep learning techniques regarding fake news detection, and compared them on the basis of balancing accuracy and lightweight design. Shu et al. [2] collected the existing definitions of fake news in the recent related work, identified the differences among their features and the impact of fake news on social and traditional media, and discussed the recent fake news detection approaches.
Regarding ensemble learning and reinforcement learning, there are certain works achieving high performance. Agarwal and Dixit [5] used the LIAR data set for the fake news class, and a data set from Kaggle, consisting of 20,801 news reports from the USA, for the real news class, resulting in a binary classification framework. They extracted credibility scores and other linguistic features from the text, and both data sets were normalized and tokenized. Python-based tools and libraries (Scikit-Learn, pandas, numpy, Keras, NLTK) were used for data preprocessing and the experiments. They created an ensemble, consisting of a Support Vector Machine (SVM), a Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), a k-Nearest Neighbor (KNN), and a Naive Bayes classifier, that used Bag of Words, Term Frequency-Inverse Document Frequency (TF-IDF), and n-grams. Their model achieved up to 97% accuracy with the LSTM. Wang et al. [6] developed the WeFEND framework for automatic annotation of news articles, which used user reports from WeChat as a form of weak supervision for fake news detection. They extracted textual and linguistic features from the data and conducted experiments with reinforcement learning, using the Linguistic Inquiry and Word Count (LIWC) and LSTM, reaching an accuracy value of up to 82%.
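As an illustration of the kind of ensemble described above (classical classifiers voting over TF-IDF n-gram features), the following is a minimal scikit-learn sketch on placeholder texts. It is not the authors' implementation: the CNN and LSTM members of their ensemble are omitted here, and all data and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a voting ensemble over TF-IDF n-grams; placeholder data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import VotingClassifier

texts = [
    "official statement confirms new policy details",       # real (0)
    "SHOCKING secret they do not want you to know!!!",       # fake (1)
    "report released with verified figures and sources",     # real (0)
    "unbelievable miracle cure banned by the government",    # fake (1)
    "minister announces budget plan for next year",          # real (0)
    "you will not believe what happened next, share now",    # fake (1)
]
labels = [0, 1, 0, 1, 0, 1]

tfidf = TfidfVectorizer(ngram_range=(1, 2))        # word unigrams and bigrams
X = tfidf.fit_transform(texts)

ensemble = VotingClassifier(
    estimators=[
        ("svm", LinearSVC()),
        ("knn", KNeighborsClassifier(n_neighbors=3)),
        ("nb", MultinomialNB()),
    ],
    voting="hard",                                  # majority vote over the three members
)
ensemble.fit(X, labels)
print(ensemble.predict(tfidf.transform(["verified report on the new budget"])))
```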
There are several approaches that explore the significance of textual and linguistic features for fake news detection. Nikiforos et al. [1] created a novel data set, consisting of 2366 tweets in English, regarding the Hong Kong protests of August 2019. Both network account and linguistic features were extracted from the tweets, while several features were identified as determinant for fake news detection. Their approach considered binary classification, and SMOTE over-sampling was applied to address class imbalance. The feature extraction, the SMOTE over-sampling and the experiments were conducted in RapidMiner Studio. The performance of baseline classifiers, i.e., Naive Bayes and Random Forest, was evaluated, the final model achieving up to 99% accuracy. Zervopoulos et al. [18] also created a data set regarding the same events. It consisted of 3908 tweets in English, and Chinese translated into English (fake news class), and 5388 tweets in English from news agencies and journalists (real news class). They used exclusively linguistic features, translated Chinese tweets into English with Google's Translation API, and identified linguistically relevant tweets. Python, Scikit-Learn, and NLTK were used for the preprocessing and the experiments. They evaluated the performance of Naive Bayes, SVM, C4.5, and Random Forest classifiers, achieving an average 92.1% F1 score, with the best results obtained with Random Forest. Jeronimo et al. [20] used a data set consisting of 207,914 news articles from 2 major mainstream media platforms in Brazil, collected from 2014 to 2017 (domains: Politics, Sports, Economy, and Culture), as the real news class, and 95 news items from 2 fact-checking services in Brazil, collected from 2010 to 2017, as the fake news class. The features were extracted by calculating the semantic distance between the data and 5 subjectivity lexicons (argumentation, presupposition, sentiment, valuation, and modalization) with Scikit-Learn. They conducted experiments with XGBoost and Random Forest (using Bag of Words and TF-IDF modeling), obtaining higher performance for inter-domain scenarios. Mahyoob et al. [11] used 20 posts from PolitiFact as real news and 20 posts from Facebook as fake news, deriving 6 classes in total. They performed a qualitative and a quantitative data analysis with the QDA tool, comparing the posts on the basis of their linguistic features. Wang et al. [26] created LIAR, a new, publicly available data set for fake news detection. It consisted of approximately 12,800 manually labeled short statements on various topics from PolitiFact. Surface-level linguistic patterns were used for the experiments with hybrid CNNs, setting a benchmark for fake news detection on the novel data set. Shu et al. [27] presented a novel fake news data repository, FakeNewsNet. It contained 2 data sets with various features, including news content, social context, and spatiotemporal information. They also discussed the potential use of FakeNewsNet for fake news and deception detection in social media. Ruchansky et al. [28] proposed a hybrid deep learning model for fake news and deception detection, using features that included information regarding text and user behavior. They achieved up to 82.9% accuracy in experiments with a data set consisting of 992 tweets, 233,719 users, and 592,391 interactions.
Regarding deep learning, there are certain works achieving high performance. Sansonetti et al. [19] created a novel data set, consisting of 568,315 tweets that reference news indexed on PolitiFact, 62,367 news items (34,429 fake, 29,938 real) referenced by tweets, and 4022 user profiles (2013 who publish mostly fake news, 2008 who publish mostly real news). They used both network account and linguistic features, and conducted experiments for offline and online analysis with CNN, LSTM, dense layers, and baseline classifiers (SVM, kNN), achieving up to 92% accuracy. Kumar et al. [16] compared different ensembles for binary classification on 1356 news items from Twitter and 1056 real and fake news items from PolitiFact. They created a data set per topic, and then tokenized and encoded them. They used BeautifulSoup, Python, GloVe, and GPy. They conducted experiments with embeddings, CNN, and LSTM (ensemble and bidirectional). The CNN and bidirectional LSTM ensembled network with an attention mechanism achieved the highest accuracy (88.78%). Alves et al. [21] created a novel, binary-class data set, consisting of 2996 articles written in Brazilian Portuguese, collected from May to September 2018. The data set was normalized and tokenized, and Keras and TensorFlow were used. The experiments were conducted with a bidirectional and a regular LSTM and a dense layer. The 3-layer deep bidirectional LSTM with trainable word embeddings achieved accuracy of up to 80%. Victor [3] used the PHEME data set and the LIAR data set, and conducted experiments with a deep two-path CNN and a bidirectional Recurrent Neural Network (RNN) for supervised and unsupervised learning, achieving up to 83% accuracy. Koirala [10] created a novel data set of 4072 news articles from Webhose.io, regarding fake news about COVID-19. They used linguistic features and conducted experiments with baseline classifiers, LSTM and a dense layer, achieving an accuracy value between 70% and 80%.
Pairwise learning schemata are very popular in machine learning. The training data consist of lists of items that are specifically ordered within each list. Koppel et al. [29] presented a simple pairwise learning model for ranking. Experiments with the LETOR MSLR-WEB10K, MQ2007, and MQ2008 data sets were performed by using the TensorFlow library and its implementation of the Adam optimizer. Dong et al. [7] used the PHEME data set for semi-supervised, binary classification with baseline classifiers, LSTM, and a deep two-path learning model containing 3 CNNs; both labeled and unlabeled data were used to train the model. Its performance was better than that of supervised learning models in the case where the distribution between the training and test data sets differed, and it proved to be more resistant to overfitting. Agrawal et al. [14] used tweets containing multimedia content; the training set consisted of approximately 5000 real and approximately 7000 fake news items, and the test set consisted of approximately 1200 real and approximately 2500 fake news items. They fused a pairwise ranking approach and a classification system, using image-based features, Twitter user-based features, and tweet-based features. For the classification, a deep neural network, logistic regression, and SVM were used, along with n-grams and doc2vec vectors. The ranking was derived from the calculation of the distance between the features (contextual comparison) of tweets of the same topic (by hashtag). The ranking system outputs were incorporated within the classification system. They achieved accuracy of up to 89% for real news and 78% for fake news. Bahad et al. [17] used 2 unstructured news data sets from the open machine learning repository (Kaggle) for binary classification. The experiments were conducted with LSTM, RNN, and CNN, using Python and TensorFlow. The highest accuracy, up to 98%, was achieved by the bidirectional LSTM-RNN. Abdullah et al. [15] used tokenized news from 12 distinct categories, and the prediction of the category distinguishes fake from real news (12 classes). The experiments were conducted on Kaggle's cloud, with CNN, LSTM, and a dense layer, achieving up to 97.5% accuracy. In a machine learning setting, Mouratidis et al. [30] presented a general deep learning architecture for learning to classify parallel translations, using linguistic information, of 2 machine translation model outputs and 1 human (reference) translation. They showed that the learning schema achieves the best score when information from embeddings and simple features is used for small data sets. Augenstein et al. [31] used a framework that combines information from embeddings in a multi-task learning experiment. They evaluated their approach on a variety of parallel classification tasks for sentiment analysis, and showed that, when the learning framework utilizes the ranker scores, the classification system outperforms a simple classification system.
More specifically, in this work, the learning schema is inspired by the architecture proposed for machine translation evaluation by Mouratidis et al. [30], and transferred to the domain of fake news detection, as described in Section 4. We define the input for this architecture based on the data set of [1] and according to the work of Augenstein et al. [31], who used news articles' headers and text as the two inputs for pairwise settings. However, this is the first time that tweets are categorized into headers and text based on their linguistic structure, as described in Section 3. The aim of this work was to identify the best practice setting for fake news detection. The proposed model exploits different input types (e.g., word embeddings, morphological and higher-level linguistic features) in various combinations. Input is fused into the model at various layers, with high flexibility, in order to achieve optimal classification accuracy. A detailed comparison of the proposed model with commonly used classification models according to related work is also presented.
Data
The data set used in our work is that of Nikiforos et al. [1]. The tweet text is used as input to the proposed neural network (described in Section 4). To this end, the tweets were divided into 4 distinct categories, depending on the class (real/fake) and the type of the tweet text (header/text). Therefore, the resulting categories are (a) real header, (b) real text, (c) fake header, and (d) fake text. As headers (real or fake), we consider the tweets that make a single-sentence statement (e.g., "Black terror: The real threat to freedom in Hong Kong"), in a form similar to newspaper headlines. Tweets that are longer than one sentence (e.g., "People with ulterior motives attempt to make waves in Hong Kong through the "color revolution", inciting student groups and Hong Kong citizens who do not know the truth, besieging the police headquarters and intending to undermine Hong Kong's stability") are considered as text (real or fake). There are two tweet inputs for the pairwise setting per experiment, T1 and T2. For the first experiment, T1 is a real header and T2 can be either a real text, or a fake text, or a fake header. For the second experiment, T1 is a real text and T2 can be either a real header, or a fake text, or a fake header. Table 1 presents more details about the corpora. Imbalance between the two classes was observed, the fake tweet class being the minority class. Consequently, we applied the SMOTE filter to the minority class. Using SMOTE over-sampling [32], the total number of tweets increased from 2363 to 3766. Similarly to Nikiforos et al. [1], both the network account and the linguistic features are used in our experiments. Every feature is scaled by the MaxAbsScaler [33]. The network account features were collected at the same time as the corresponding tweets from Twitter [1] to provide information about the account that posted the tweet and its connections throughout Twitter, as shown in Table 2. The network account features "user display name", "user screen name" and "in reply to user id" were not included in the final feature set, due to the large number of missing values. Regarding the account feature "account creation date", the dates were converted from text to numerical form. Regarding the account feature "tweet time", the dates were converted from text to numerical form, and the times were converted from 12-h mode to 24-h mode to avoid the ambiguity between p.m. and a.m. The linguistic features were extracted from the tweet text [1] to depict the specific language traits and forms per tweet, as shown in Table 2. The final feature set contains 18 features in total.
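The following is a minimal preprocessing sketch of the steps just described (time-format conversion, MaxAbsScaler scaling, and SMOTE over-sampling of the minority class). The feature matrix, class proportions and time value are placeholders, not the actual data set or pipeline.

```python
# Minimal sketch: time conversion, MaxAbsScaler scaling, SMOTE over-sampling.
import numpy as np
from datetime import datetime
from sklearn.preprocessing import MaxAbsScaler
from imblearn.over_sampling import SMOTE

# Convert a 12-hour tweet time to 24-hour mode (removes the a.m./p.m. ambiguity).
tweet_time_24h = datetime.strptime("07:45 PM", "%I:%M %p").strftime("%H:%M")   # -> "19:45"

rng = np.random.default_rng(0)
X = rng.random((100, 18))                  # placeholder: 18 linguistic + network account features
y = np.array([0] * 80 + [1] * 20)          # placeholder imbalance: fake tweets (1) are the minority

X_scaled = MaxAbsScaler().fit_transform(X)                      # scale each feature by its max absolute value
X_res, y_res = SMOTE(random_state=0).fit_resample(X_scaled, y)  # synthesize minority-class samples

print(tweet_time_24h, X_res.shape, np.bincount(y_res))          # classes are balanced after SMOTE
```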
Embedding Layer
In order to model the textual input, an embedding layer (automatically calculated) is used for the two different tweets per input pair (T1, T2). The embedding layer used is the one provided by the Keras library [34]. The encoding function applied is the one-hot function. The embedding layer size, in number of nodes, is 18. The input dimensions of the embedding layer are in agreement with the vocabulary of each input tweet text, taking into account the most frequent words.
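A minimal Keras sketch of this encoding step is given below. The 18-node embedding size follows the text, while the vocabulary size and maximum sequence length are illustrative assumptions rather than the values actually used.

```python
# Minimal sketch: one-hot (hashing) word encoding feeding an 18-node embedding layer.
from tensorflow.keras.preprocessing.text import one_hot
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Input, Embedding

VOCAB_SIZE = 5000    # assumed: sized to cover the most frequent words of the tweet corpus
MAX_LEN = 30         # assumed maximum tweet length in tokens

tweet = "Black terror: The real threat to freedom in Hong Kong"
encoded = one_hot(tweet, VOCAB_SIZE)                   # each word mapped to an integer index
padded = pad_sequences([encoded], maxlen=MAX_LEN)      # pad/truncate to a fixed length

tweet_input = Input(shape=(MAX_LEN,))
embedded = Embedding(input_dim=VOCAB_SIZE, output_dim=18)(tweet_input)   # 18 nodes per token
```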
Network Architecture
The fake news detection task is viewed as a binary classification problem. We propose a pairwise ranking approach to detecting tweets with fake content. Two tweets (T1, T2) are provided as input. The annotation for this problem is calculated as follows: y = 0, if T1 is a real tweet and T2 is a real tweet; y = 1, if T1 is a real tweet and T2 is a fake tweet, where y is the classification class label. The vectors (T1, T2) are used as input to the model, in a pairwise setting. Based on these tweets, the embedding vectors EmbT1 and EmbT2 are created on the embedding layer (described in Section 4.2). The MaxAbsScaler is used as a preprocessing method for EmbT1 and EmbT2. EmbT1 and EmbT2 are integrated in a parallel setting; the vector (EmbT1, EmbT2) thus created becomes the input to the hidden layer. The output of the hidden layer is the input to the last layer of the model. In this layer further input fusion takes place, i.e., a matrix F[i,j] is added, which is a 2D matrix with linguistic and network account features, as described in Section 4.1. The output label is modeled as a random variable in order to minimize the discrepancy between the predicted and the true labels, using maximum likelihood estimation, while the classification problem is modeled as a Bernoulli distribution. The model of the architecture is shown in Figure 1.
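A minimal sketch of this architecture using the Keras functional API follows. The 18-dimensional embeddings, the parallel fusion of EmbT1 and EmbT2, the late fusion of the feature matrix F, and the sigmoid output follow the description above, and the layer sizes and dropout follow the experimental setup in Section 5; the vocabulary size, sequence length, sharing of the embedding layer between T1 and T2, the exact fusion point relative to the dense layers, and the optimizer are assumptions.

```python
# Minimal sketch of the pairwise model: two embedded tweet inputs fused in
# parallel, dense hidden layers, and late fusion of the feature matrix F.
from tensorflow.keras.layers import Input, Embedding, Flatten, Concatenate, Dense, Dropout
from tensorflow.keras.models import Model

VOCAB_SIZE, MAX_LEN, N_FEATURES = 5000, 30, 18   # vocabulary/length assumed; 18 features per pair

t1_in = Input(shape=(MAX_LEN,), name="T1")       # T1 is always a real tweet
t2_in = Input(shape=(MAX_LEN,), name="T2")       # T2 is a real or a fake tweet
f_in = Input(shape=(N_FEATURES,), name="F")      # linguistic + network account features

embed = Embedding(VOCAB_SIZE, 18)                # assumed shared between T1 and T2
emb_t1 = Flatten()(embed(t1_in))                 # EmbT1
emb_t2 = Flatten()(embed(t2_in))                 # EmbT2

x = Concatenate()([emb_t1, emb_t2])              # parallel fusion (EmbT1, EmbT2)
x = Dense(128, activation="relu")(x)             # Dense 1
x = Dropout(0.4)(x)
x = Dense(128, activation="relu")(x)             # Dense 2
x = Dropout(0.4)(x)
x = Concatenate()([x, f_in])                     # late fusion of the feature matrix F[i, j]
y_out = Dense(1, activation="sigmoid")(x)        # Dense 3: y = 1 when T2 is fake

model = Model(inputs=[t1_in, t2_in, f_in], outputs=y_out)
model.compile(optimizer="adam",                  # optimizer assumed, not stated in the text
              loss="binary_crossentropy",        # maximum likelihood under a Bernoulli output
              metrics=["accuracy"])
```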
Experiments
The present work investigates the modeling process that identifies real vs. fake tweets (text and headers) using the learning schema in Figure 1. For the first experiment (Experiment 1), T1 is a real header and T2 can be either a real text, a fake text, or a fake header. For the second experiment (Experiment 2), T1 is a real text and T2 can be either a real header, a fake text, or a fake header. The vector (T1, T2) is the input to the learning schema.
The model architecture for both experiments is defined as follows:
• Size of layers: Dense 1 and 2 with 128 hidden units, Dense 3 with 1 hidden unit (last layer).
• Output layer: Sigmoid activation.
• Activation function of dense layers: ReLU for Dense 1 and 2, Sigmoid for Dense 3.
• Dropout of dense layers: 0.4.
Table 3 presents additional parameters of the neural model. For all the experiments, we used 10-fold cross-validation, which is effective for small data sets. Cross-validation is a technique to evaluate predictive models by partitioning the original sample into a training set to train the model and a test set to evaluate it.
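A minimal sketch of the 10-fold cross-validation loop is shown below. It assumes a build_model() helper that returns a freshly compiled instance of the pairwise model and aligned arrays T1, T2, F (features) and y (labels); the batch size of 16 and 20 epochs follow the best-performing setting reported in the Results, and the stratified splitting is an assumption.

```python
# Minimal sketch: stratified 10-fold cross-validation of the pairwise model.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(build_model, T1, T2, F, y, n_splits=10):
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    fold_acc = []
    for train_idx, test_idx in skf.split(T1, y):
        model = build_model()                                  # fresh model per fold
        model.fit([T1[train_idx], T2[train_idx], F[train_idx]], y[train_idx],
                  batch_size=16, epochs=20, verbose=0)
        _, acc = model.evaluate([T1[test_idx], T2[test_idx], F[test_idx]],
                                y[test_idx], verbose=0)
        fold_acc.append(acc)
    return float(np.mean(fold_acc))                            # mean accuracy over the 10 folds
```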
Results
In this section, the experiment results are presented. In order to quantify and evaluate the performance of the classifier, the Positive Predictive Value (Precision) and the Sensitivity (Recall) for both output labels were used as evaluation metrics. They are objective measures, commonly used in classification tasks. The first metric shows what proportion of positive classifications is actually correct, whereas the second shows what proportion of actual positives is classified correctly.
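As a brief illustration of these metrics, the sketch below computes per-class Precision and Recall with scikit-learn on placeholder labels; the per-label reporting mirrors the way results are broken down by class in Table 4.

```python
# Minimal sketch: per-class Precision and Recall (1 = fake tweet, 0 = real tweet).
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # placeholder gold labels
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]   # placeholder predictions

for label, name in [(1, "fake"), (0, "real")]:
    p = precision_score(y_true, y_pred, pos_label=label)
    r = recall_score(y_true, y_pred, pos_label=label)
    print(f"{name}: Precision={p:.2f} Recall={r:.2f}")
```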
Accuracy Performance
Our main results are shown in Table 4. Regarding Experiment 1, prior to SMOTE over-sampling, Precision is 97% for real tweets and 100% for fake tweets and Recall is 95% for real tweets and 74% for fake tweets. After SMOTE over-sampling, Precision is 100% for real tweets and 100% for fake tweets and Recall is 100% for real tweets and 96% for fake tweets. Regarding Experiment 2, prior to SMOTE over-sampling, Precision is 99% for real tweets and 100% for fake tweets and Recall is 97% for real tweets and 93% for fake tweets. After SMOTE over-sampling, Precision is 100% for real tweets and 100% for fake tweets and Recall is 96% for real tweets and 96% for fake tweets.
It is observed that for both experiments the performance increases after SMOTE over-sampling. Another observation is that for Experiment 2, in which the real text is T1, the performance is better than that of Experiment 1 prior to SMOTE. Consequently, the Experiment 2 setting is the most efficient for fake news detection, as it does not require SMOTE over-sampling to achieve better results. This also indicates that the correlation of the real text with text in general is greater than that of the real header: the real text (as T1) is highly correlated with the data (T2, whether real header, fake header, or text), whereas the corresponding correlation of the real header (as T1) with the data (T2, whether real text, fake header, or text) in Experiment 1 is weaker and improves only after SMOTE over-sampling, leading to the conclusion that the amount of data affects the performance of the Experiment 1 framework. The proposed deep learning architecture achieves high overall accuracy performance, classifying both fake and real tweets mostly correctly, and thus shows great potential for successful fake news detection. Figure 2 shows the accuracy performance according to training speed and batch size. Increasing the batch size and the number of epochs yields no significant accuracy increase; the best performance was obtained for a batch size of 16 and 20 epochs. In Figure 3 the accuracy of the model is presented per experiment for both fake and real news, based on the network account feature user id. It is observed that the accuracy of prediction of real tweets (text and header) is not affected by this feature, while the accuracy of prediction of fake tweets (text and header) is reduced slightly when this feature is not used (1 to 2% decrease). Consequently, the network account feature user id does not affect the performance significantly.
Comparison to Related Work
In order to have a direct comparison of our experimental results with earlier work [1,18,36], additional experiments were run. Different configurations were experimented with, including Naive Bayes [1], Random Forest [18], SVM, and Logistic Regression [36], for both Experiment 1 and Experiment 2. The WEKA framework was used as the backend [37]. The evaluation metric used for the comparison is the harmonic mean of Precision and Recall, i.e., the F1 score.
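To make the comparison procedure concrete, the following is a minimal scikit-learn sketch that evaluates the four baseline classifiers with 10-fold cross-validated F1 scores on a placeholder feature matrix. The actual experiments used WEKA as the backend, so the library, hyperparameters and data here are illustrative assumptions.

```python
# Minimal sketch: 10-fold cross-validated macro F1 for the four baseline classifiers.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((200, 18))            # placeholder: 18 scaled features per tweet pair
y = rng.integers(0, 2, size=200)     # placeholder binary labels (1 = fake)

baselines = {
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, clf in baselines.items():
    f1 = cross_val_score(clf, X, y, cv=10, scoring="f1_macro").mean()
    print(f"{name}: macro F1 = {f1:.3f}")
```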
It is observed that the proposed deep learning architecture outperforms the state-of-the-art classifiers for both experiments (by up to 4% on average F1 score for Random Forest, up to 3% for Logistic Regression, up to 8% on average for SVM, and up to 15% for Naive Bayes). In addition, it is quite significant that the proposed deep learning architecture achieves a high F1 score for both fake and real tweet detection. The Random Forest classifier detected all of the real tweets successfully and the fake tweets quite well. The Naive Bayes and SVM classifiers had problems distinguishing real tweets from fake tweets (Figure 4). The accuracy metrics (Precision, Recall, F1 score) for each class (fake/real tweet) of our work are compared to those of recent related works, as shown in Table 5. More specifically, Zervopoulos et al. [18] used a larger data set (3908 fake and 5388 real tweets) concerning the same event (the Hong Kong protest movement of summer 2019), but relied exclusively on linguistic features. Their best results were obtained with Random Forest, achieving (on average) 93.6% Precision, 91.3% Recall, and 92.1% F1 score. Nikiforos et al. [1] used a data set and feature sets similar to those used in our work, though a different feature selection methodology was applied. Their best results were obtained with Naive Bayes and SMOTE over-sampling, achieving (on average) 99.8% Precision and 99% Recall. It is therefore observed that the model proposed in our work obtains better results and achieves higher performance compared to these works. The above observations (Figure 4, Table 5) lead to the conclusion that the proposed deep learning architecture, using 18 features and information (embeddings) from the tweet text, achieves the best accuracy results.
Conclusions
Unlike previous works, our work presents the following novelty and contribution. While the problem of fake news detection has been tackled in the past in a number of ways, most reported approaches rely on a limited set of existing, widely accepted, and validated real/fake news data. The present work builds a pathway towards developing a new Twitter data set with real/fake news regarding a particular incident, namely the Hong Kong protests of the summer of 2019. The process of exploiting the fake tweets provided by Twitter itself, as well as the process of collecting and validating real tweets pertaining to the particular event, are described in detail and constitute a best-practice setting for developing fake/real news data sets, with significant derived findings.
Another novelty of the proposed work is the form of the input to the learning schema. More specifically, tweet vectors are used in a pairwise setting. One of the vectors in every pair is real and the other may be real or fake. The correct classification of the latter relies on the similarity/diversity it presents when compared to the former. The high performance of fake news detection in the literature relies to a large extent on the exploitation of exclusively account-based features or exclusively linguistic features. Unlike related work, the present work places high emphasis on the use of multimodal input that varies from word embeddings derived automatically from unstructured text to string-based and morphological features (number of syllables, number of long sentences, etc.), and from higher-level linguistic features (such as the Flesch-Kincaid level, the adverbs-adjectives rate, etc.) to network account-related features.
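As an illustration of the pairwise setting, pairs can be formed by anchoring each candidate tweet to a real tweet and letting the pair inherit the candidate's label; the function and data below are hypothetical and only sketch the idea:

```python
def build_pairs(real_anchor_tweets, candidate_tweets, candidate_labels):
    """Pair every candidate tweet (real or fake) with a real anchor tweet (T1)."""
    pairs, labels = [], []
    for anchor in real_anchor_tweets:
        for tweet, label in zip(candidate_tweets, candidate_labels):
            pairs.append((anchor, tweet))   # (T1, T2)
            labels.append(label)            # label of T2: 0 = real, 1 = fake
    return pairs, labels

pairs, labels = build_pairs(
    ["a verified real tweet used as T1"],
    ["another real tweet", "a suspected fake tweet"],
    [0, 1],
)
print(len(pairs), labels)   # 2 [0, 1]
```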
The proposed deep learning architecture is designed in an innovative way and is used for the first time for fake news detection. The deep learning network exploits all the aforementioned input types in various combinations. Input is fused into the network at various layers, with high flexibility, in order to achieve optimal classification accuracy. The input tweet may constitute the news text or the news header (defined in detail in Section 4). Previous works have used news article headers and text as the two inputs for pairwise settings. However, this is the first time that tweets are categorized into headers and text based on their linguistic structure. This distinction in Twitter data for fake news detection is made for the first time herein, accompanied by an extensive experimental setup that aims to compare the classification performance depending on the input type.
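The following sketch illustrates how a network can fuse embeddings and handcrafted features at different layers using the Keras functional API; the layer types, sizes, and embedding dimension are assumptions made for illustration and do not reproduce the authors' exact architecture:

```python
from tensorflow.keras import layers, Model

emb_t1 = layers.Input(shape=(300,), name="t1_embedding")    # embedding of T1 (assumed size)
emb_t2 = layers.Input(shape=(300,), name="t2_embedding")    # embedding of T2 (assumed size)
handcrafted = layers.Input(shape=(18,), name="features")    # 18 linguistic/account features

pair = layers.concatenate([emb_t1, emb_t2])                  # early fusion of the tweet pair
pair = layers.Dense(128, activation="relu")(pair)
fused = layers.concatenate([pair, handcrafted])              # handcrafted features fused at a deeper layer
fused = layers.Dense(64, activation="relu")(fused)
out = layers.Dense(1, activation="sigmoid", name="fake_probability")(fused)

model = Model(inputs=[emb_t1, emb_t2, handcrafted], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```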
Our work provides a detailed comparison of the proposed model with commonly used classification models according to related work. Additionally, experiments with these models are conducted, in order to assess and compare directly their performance with that of the proposed pairwise schema, by using the same input. Finally, an extensive review of the recent literature in fake news detection with machine learning is provided in the proposed work. Previous works with various types of data (news articles, tweets, etc.), different categories of features (network account, linguistic, etc.), and the most efficient network architectures and classification models are described thoroughly.
More specifically, the deep learning architecture by Mouratidis et al. [30] is used as a basis for fake news detection, while the input for this architecture is based on the data set of [1] and is defined according to the work of Augenstein et al. [31], who compared news headers and text through their pairwise framework to detect fake news text.
Our main results show high overall accuracy of the proposed deep learning architecture in fake news detection. For both experiments, the performance increases after SMOTE over-sampling. For Experiment 2, where T1 is the real text, the performance is better than that of Experiment 1 prior to SMOTE. Consequently, the Experiment 2 setting is the most efficient for fake news detection, as it does not require SMOTE over-sampling to achieve better results. This also indicates that the real text (as T1) is more highly correlated with the rest of the data (T2, either real header, fake header, or text) than the real header (as T1) is with its data (T2, either real text, fake header, or text). The latter correlation is slightly improved after SMOTE over-sampling, leading to the conclusion that the amount of data affects the performance of the Experiment 1 framework. Additional experiments with Naive Bayes, Random Forest, and SVM were also run, using the WEKA framework as backend [37], in order to compare our experimental results directly with earlier work. More specifically, we achieved up to 99% accuracy with Naive Bayes [1], a 92.1% average F1 score with Random Forest [18], and up to 92% accuracy with CNN [19]. The proposed deep learning architecture outperforms the state-of-the-art classifiers, while achieving a high F1 score for both fake and real tweet detection. The Random Forest classifier detected all of the real tweets successfully and the fake tweets quite well. The Naive Bayes and SVM classifiers had problems distinguishing the real tweets from the fake ones. In conclusion, the proposed deep learning architecture, using 18 features and information (embeddings) from the tweet text, achieves the best accuracy results.
In future work, we aim to test different model configurations (e.g., different kinds of neural network layers). Apart from the pairwise classification schema used in this paper, we will test other classification schemata for identifying fake content. In addition, the proposed model will be tested on a wider range of fake content detection problems, e.g., spam. Finally, it is worth exploring further data sets and other content formats (e.g., multimedia content, photos, videos) with the proposed model.
Data Availability Statement:
The data presented in this study are openly available in [1].
Conflicts of Interest:
The authors declare no conflict of interest. | 8,037.4 | 2021-02-17T00:00:00.000 | [
"Computer Science"
] |
A Review on Non-Invasive Pregnancy Diagnosis in
Many wild cats around the world are facing the danger of extinction, due mainly to habitat destruction, poaching, illegal trade, inbreeding, diseases, and conflict with humans. As a result, conservationists have moved towards the application of ex-situ conservation programs through captive breeding to save endangered species for future reintroduction to their natural homes. However, the breeding success of these cats in captivity relies largely on the availability of non-invasive techniques using urine and fecal samples to detect pregnancy hormones in the early stages of reproductive management. Wild cats are usually prone to stress when physically restrained for blood collection to carry out pregnancy tests, and this stress jeopardizes the success of captive breeding of endangered cat species. Monitoring reproductive hormones in urine and feces provides an alternative and successful tool for the reproductive management of these species without exposing the animals to stress or workers to the danger of handling big cats. The detection of prostaglandin F2 alpha and its metabolite PGFM (13,14-dihydro-15-keto-PGF2 alpha) in urine and feces using radioimmunoassay (RIA) or enzyme immunoassay (EIA) has been extensively used for this purpose and is a breakthrough in pregnancy diagnosis in wild cats. Further studies are needed to validate the technique in other non-felid species.
Introduction
About 36 species of felines are recognized around the world, many of which are red-listed by the IUCN as vulnerable or endangered due to poaching, habitat loss, and habitat fragmentation. Human activities are driving many species of wildlife towards extinction [1]. The number of species regarded as threatened more than doubled between 2000 and 2015 [2]. Many efforts have been made to counteract the driving forces that threaten wild felines in their natural habitat by applying ex-situ conservation programs. In the early 1990s, captive breeding of threatened species became one of the more popular methods for saving endangered species [3]. However, the success of these efforts relies on a better understanding of the reproductive cycles governed by the endocrine profiles of these species to facilitate ex-situ conservation efforts [4].
To launch captive breeding programs for captive felines, some non-invasive techniques have been used to diagnose pregnancy in these species without stressing the animals. Stress-inducing activities by veterinarians and their assistants to collect blood samples can have a negative impact on animal welfare and lead to failure in conception [5]. Therefore, monitoring of reproductive hormones using urine and fecal samples provides an important alternative approach for pregnancy diagnosis in felines compared with traditional methods [6].
In mammals, uterine and placental prostaglandin F2 alpha is involved in the regulation of reproduction-related processes such as embryonic development, parturition, and resumption of ovarian activity. The hormone is rapidly metabolized in the liver to its plasma metabolite (PGFM, 13,14-dihydro-15-keto-PGF2 alpha), which is excreted in urine and feces [7]. The detection of the prostaglandin metabolite in the excreta of these species has proved to be a quick and non-stressful tool for pregnancy diagnosis in a wide collection of captive and free-ranging animal species [8].
The present mini-review attempts to demonstrate the applicability of this technique from the point of view of its inventors and to theoretically assess its success in feline reproduction.
Wild cats and their conservation status
Wild cats are wild species of the family Felidae that live naturally in different parts of the world within their normal range of distribution, except in Australia and Antarctica. There are 36 recognized species of felines, including lions (Panthera leo), leopards (Panthera pardus), jaguars (Panthera onca), tigers
(Panthera tigris), lynx (Lynx lynx), bobcats (Lynx rufus), cheetahs (Acinonyx jubatus), servals (Leptailurus serval), caracals (Caracal caracal), and other small cats. According to the IUCN Red List (2007), one species is critically endangered, four species are endangered, 13 are vulnerable, eight are near threatened, and 11 are of least concern. The status of an animal within the IUCN categories is based on its current population size and/or area of distribution. In the Arabian region, including the Arabian Peninsula, Jordan, Syria, Iraq, and the North African countries, six species have been recognized, of which two were assessed as regionally extinct, one is critically endangered, and three are of least concern [9].
Living cats belong to two subfamilies, Pantherinae and Felinae. Pantherinae includes lions, leopards, and tigers, while Felinae includes the rest of the non-panther cats. Cats are obligate carnivores and are sometimes called hyper-carnivores because of the very large proportion of animal protein they require in their diets compared to other carnivores [10]. Some wild cats, such as tigers, are persecuted in their habitats for their medicinal value in China [11]. Similarly, other African cats such as lions, leopards, servals, and cheetahs are severely poached. Cheetahs, for example, are illegally shipped from Somalia to the Gulf region and to South East Asian countries [12].
Sample collection and storage
Hodges et al. [13] reviewed the methods of urine and feces sampling for steroid hormone detection in captive animals. They mentioned that urine samples can be collected midstream, from a container placed under the drainage in the floor of the animal cage, or by aspiration of the sample from the floor using a pipette or syringe. As little as 0.2 ml of sample is sufficient to quantify the amount of steroids in the urine. Samples are preferably centrifuged before analysis to remove any debris and then frozen at -20°C. Fecal samples, on the other hand, are collected directly from the floor, homogenized, and frozen at -20°C until the time of analysis. Long-term storage in ethanol or collection of old samples may alter the concentration of fecal steroid hormones [14][15][16].
Pregnancy diagnosis techniques in cats
Captive breeding of endangered species in zoos and other animal facilities has received considerable attention as a tool for maintaining biodiversity [17]. Beginning in the early 1960s, many studies focusing on the concentration of relaxin in companion dogs and cats indicated the potential use of immunoreactive relaxin for pregnancy diagnosis in wild animals, albeit with challenges in some species. Bergfelt et al. [18] stated that relaxin detection may have limited value for pregnancy diagnosis in wild bottlenose dolphins when used alone, but that it could serve as a confirmatory tool if used in combination with an elevation of progesterone in a single-sample analysis. Some of the early techniques for detecting reproductive activity and pregnancy in wild cats measured urinary relaxin using a bench-top serum assay (Witness relaxin kit, Synbiotics Corp., San Diego, California 92127, USA) [19]. Van Drossen et al. [20] stated that detection of urinary relaxin for pregnancy diagnosis is a reliable method in domestic cats and other wild felines and canines. However, Dehnhard et al. [6] mentioned that the main disadvantage of this technique as a pregnancy diagnostic tool is its pregnancy-related time course: the hormone level is highest at mid-pregnancy and falls to baseline during the last trimester, and hence prediction of parturition appears unachievable with this approach.
Other, modified non-invasive methods for pregnancy diagnosis in wild cats have been described, based on measuring urinary and fecal prostaglandin (PGFM) metabolites by either enzyme immunoassay (EIA) or radioimmunoassay (RIA). In mammals, uterine and placental prostaglandin (PGF2 alpha) is involved in the regulation of reproductive activity, including embryonic development, initiation of parturition, and resumption of ovarian activity. The hormone is rapidly metabolized to its plasma metabolite PGFM (13,14-dihydro-15-keto-PGF2 alpha), which has been detected in urine and feces [21]. The levels of this hormone increase during the last trimester of pregnancy in seven of the eight main lineages of Felidae and therefore represent a suitable indicator for pregnancy diagnosis in these species [22,23]. Dehnhard et al. [6] suggested that monitoring PGF2 alpha metabolites in fecal samples is a reliable method for pregnancy diagnosis in cats and that a threshold of over 5 mg/g of dry feces can serve as an indication of parturition within about a week. Despite its wide applicability in many species, the technique was of limited use in some other wild species, including white rhinos (Ceratotherium simum), red-fronted lemurs (Eulemur rufifrons), and hares (Lepus europaeus) [21]. Despite success with assisted reproduction in felids and the humane non-invasive methods for pregnancy diagnosis, the results remain inconsistent [24]. Much of the inconsistency of the results seen in cats can be related to the two ovulation mechanisms observed in felid species: induced versus spontaneous. While success in assisted reproduction has been achieved in some species such as cheetahs and ocelots (Leopardus pardalis), other species such as clouded leopards (Neofelis nebulosa), tigers, and fishing cats (Prionailurus viverrinus) have shown very little success [25]. Other research findings [26,27] concluded that prostaglandin F2 alpha is further metabolized to other metabolites that can be successfully used to diagnose pregnancy in Eurasian lynx and leopard cats (Prionailurus bengalensis).
Conclusion
To conclude, in almost all felid species urinary and fecal PGFM can be used with an acceptable level of reliability to differentiate between pregnant, pseudo-pregnant, and non-pregnant individuals. A good non-invasive pregnancy diagnosis eliminates the stress imposed on these species when invasive methods (blood sampling) are used. The technique may be less sensitive in other felid species and is not applicable to other non-felid species. Further investigations are required to validate its application in other wild animal species. | 2,128 | 2017-02-28T00:00:00.000 | [
"Environmental Science",
"Biology",
"Medicine"
] |
The Application of Resonance Light Scattering Technique for the Determination of Tinidazole in Drugs
A resonance light scattering (RLS) technique to determine tinidazole in drugs was developed using tetraphenylboron sodium (TPB). Tinidazole was found to bind the B(C6H5)4− anion and form a tinidazole-TPB aggregate which displayed intense resonance scattering light. The effects of factors such as wavelength, acidity, stabilizers, and interferents on the RLS of tinidazole-TPB were investigated in detail. The RLS intensity of the tinidazole-TPB suspension was obtained in sulfuric acid solution (pH = 1.44). The resonance scattering light intensity at the maximum RLS peak of 569.5 nm was linear with the concentration of tinidazole in the range of 10.0-30.0 μg mL−1, with a detection limit of 5.0 μg mL−1. Good results were also obtained, with recoveries in the range of 95.13-106.76%. The method was applied to determine tinidazole in injections and tablets, showing high sensitivity and accuracy compared with the high performance liquid chromatography (HPLC) method according to the Chinese Pharmacopoeia.
INTRODUCTION
Tinidazole is chemically 1-(2-ethylsulfonyl-ethyl)-2-methyl-5-nitroimidazole (Figure 1). It is active against protozoa and anaerobic bacteria and is used, like metronidazole, in a range of infections [1]. The drug is reported to hydrolyze quantitatively under alkaline conditions to 2-methyl-5-nitroimidazole, and under photolytic conditions the drug yields intermediate, rearrangement, and degradation products [2].
Resonance light scattering (RLS) is an elastic scattering that occurs when the energy of an incident beam is close to an absorption band. Pasternack et al. first established the RLS technique to study biological macromolecules by means of an ordinary fluorescence spectrometer [3][4][5]. Due to its high sensitivity, selectivity, and convenience, RLS has attracted great interest among researchers [6][7][8][9][10]. RLS has emerged as a very attractive technique for monitoring molecular assemblies and characterizing the extended aggregates of chromophores. In recent years, the RLS technique has been used to determine pharmaceuticals [11,12] and various biological analytes such as nucleic acids [13,14], proteins [15,16], metal ions [17], and bacteria [18], while the study and determination of tinidazole with the RLS technique had not yet been reported.
Several analytical methods for tinidazole have been developed, such as HPLC [19], LC-MS [20], capillary electrophoresis [21], spectrophotometry [22], voltammetry [23], and other electrochemical methods [24,25]. Among these, the voltammetric method according to the Chinese Pharmacopoeia is popular and regarded as relatively reliable for the determination of tinidazole. Although it often provides very accurate results, it suffers from time cost and complexity. HPLC has also been used to determine tinidazole in drugs according to the Chinese Pharmacopoeia with good results, but it needs tedious pretreatment.
Herein, we report a robust, quick, and simple method for the determination of tinidazole in injections and tablets with NaB(C 6 H 5 ) 4 as a probe by RLS technique. The obtained results were almost in agreement with those obtained by the currently used HPLC method according to Chinese Pharmacopoeia.
Apparatus
RLS spectra were obtained by synchronous scanning in the wavelength region from 250 to 750 nm on a JASCO FP-6500 spectrofluorometer (Tokyo, Japan) using quartz cuvettes (1.0 cm). The widths of the excitation and emission slits were set at 3.0 nm. HPLC analysis was carried out on an Agilent 1100 HPLC system (USA) equipped with a G1314A isocratic pump, a thermostatted column compartment, a variable-wavelength UV detector (VWD), and Agilent ChemStation software. The pH measurements were carried out on a PHS-3C exact digital pH meter equipped with a Phoenix Ag-AgCl reference electrode (Cole-Palmer Instrument Co., Ill, USA), which was calibrated with standard pH buffer solutions.
Reagents
A working solution of tetraphenylboron sodium (10.0 mg mL −1 ) was prepared with methanol-water solution (20 : 80, v/v). A stock solution of tinidazole was prepared by dissolving tinidazole (> 99.99%, Sigma) in the doubly distilled water. The working solutions of tinidazole were obtained by diluting the stock solution prior to use. Sulfuric acid solution (0.18 mol L −1 ) was used to control the acidity, while 0.1 mol L −1 NaCl was used to adjust the ionic strength of the aqueous solutions. All other reagents and solvents were of analytical reagent grade and used without further purification unless otherwise noted. All aqueous solutions were prepared using newly double-distilled water.
Scheme
The composition of the precipitate was determined by the Job-Asmus method [26]. The molar ratio of tinidazole to TPB was found to be 1:1. It is possible that the more strongly basic secondary amine group in the tinidazole molecule was protonated to a cation, which then reacted with tetraphenylboron. The precipitation reaction can thus be regarded as the protonated tinidazole cation combining with the B(C6H5)4− anion to form the insoluble 1:1 ion pair.
Standard procedure
An appropriate aliquot of tinidazole working solution was added to a mixture of 1.0 mL of tetraphenylboron sodium solution (10.0 mg mL−1) and 1.0 mL of sulfuric acid (0.18 mol L−1) and diluted to 10 mL with water. After standing for five minutes, the solution was scanned on the fluorophotometer in the region of 250 to 750 nm with Δλ = 0 nm. The obtained RLS spectrum was recorded and its intensity was measured at 569.5 nm. The enhanced RLS intensity of the tinidazole-TPB system was represented as ΔI = I − I0, where I and I0 are the RLS intensities of the system with and without tinidazole, respectively. The operations were carried out at room temperature.
The HPLC separation was performed on a Kromasil ODS column (250 mm × 4.6 mm, 5 μm, Hanbon Science & Technology Co., Ltd) connected with a Zorbax SB-C18 guard column (20 mm × 4 mm, 5 μm). The mobile phase consisted of methanol and 0.1% acetic acid aqueous solution (20 : 80, v/v) and the flow rate was 1.0 mL/min. The volume of sample injected was 20 μL. The monitoring wavelength was 310 nm. The column temperature was set at 25 °C.
Sample
The injections of tinidazole were diluted 100- to 200-fold with pure water. The tablets of tinidazole were dissolved in 500 mL of pure water and filtered through a 0.45 μm cellulose acetate membrane. A 1.0 mL aliquot of the prepared sample solution was added to a 10 mL volumetric flask in place of the tinidazole standard solution.
Characteristics of the RLS spectra
The RLS spectrum of NaB(C6H5)4 in sulfuric acid solution (0.018 mol L−1) is shown in Figure 2b. It can be seen that the RLS intensity of NaB(C6H5)4 is quite weak over the whole scanning wavelength region. In contrast, upon addition of a trace amount of tinidazole to the NaB(C6H5)4 solution, a remarkably enhanced RLS with a maximum peak at 569.5 nm was observed under the same conditions (Figure 2, c-g). Two peaks, located at 452.0 and 569.5 nm, can be clearly observed in the RLS spectrum of the tinidazole-TPB system. The addition of increasing amounts of tinidazole to the NaB(C6H5)4 solution leads to a gradual enhancement in RLS intensity, exhibiting a concentration-dependent relationship. The production of RLS and its intensity are correlated with the formation of the aggregate and its particle dimension in solution [3].
As shown in Figures 2a and 2b, the RLS intensities of tinidazole and NaB(C6H5)4, when considered alone, are quite weak. It can thus be concluded that the B(C6H5)4− anion reacted with tinidazole and produced a newly formed compound whose RLS intensity is much higher than that of tinidazole or NaB(C6H5)4 existing separately. Moreover, the dimension of the tinidazole-TPB particles may be much less than the incident wavelength, and thus the enhanced light-scattering signal occurs under the given conditions. In this way, the resonance light scattering formula [26] is applicable to the tinidazole-TPB system.
Effects of pH values in medium
The formation of the tinidazole-TPB compound may be ascribed to the electrostatic attraction between TPB and tinidazole being stronger than that with the coexisting sodium ion. Moreover, the RLS is related to the dimension of the formed aggregated species. Hence, the pH value may exert a certain influence on the attraction strength and the dimension of the suspension particles, and thus on the RLS production and its intensity. As shown in Figure 3, the RLS intensity of the NaB(C6H5)4 solution did not change with the variation of pH in the range of 1.44-6.44, whereas that of the tinidazole-TPB system behaved differently: the RLS intensity of the tinidazole-TPB system decreased from pH 1.44 to 6.44. Acidity strongly affects the formation of the ammonium-type cation, which reacts with B(C6H5)4−. A maximum RLS intensity was obtained around pH 1.44, and this value was selected for the subsequent measurements.
Effect of ionic strength
A high concentration of sodium chloride (0.9%) exists in tinidazole samples such as injections. Do the large amounts of Na+ and Cl− affect the RLS spectra of the tinidazole-TPB system? These ions might interfere with the electrostatic attraction between TPB and tinidazole. Here, sodium chloride was used to adjust the ionic strength of the solution. Somewhat unexpectedly, the RLS intensities of both the TPB-Na and tinidazole-TPB systems hardly changed with the concentration of added NaCl (Figure 4). Therefore, the method can be applied to solutions with high ionic strength, such as injections.
Addition orders
The effect of the addition order on the RLS intensity was also examined; the addition order had no large effect on the RLS intensity. The proposed assay of tinidazole has a wide applicable pH range.
Stability
The formation of tinidazole-TPB particles includes three steps: nucleation, crystal growth, and aggregation, which directly affect the sizes of the particles. Because particle size is an important factor determining RLS intensity, a stabilizer is normally used to control the size of the particles, impede their rapid sedimentation, and improve the reproducibility of the RLS intensities of the solutions. However, the tinidazole-TPB system was found to be very stable within 20 minutes (Figure 5), and the average deviation of the RLS signal was lower than 2.28%.
Tolerance of foreign substances
Some cationic and anionic species normally found in injections and tablets were studied by the addition of foreign substances. Their concentrations relative to tinidazole and the corresponding influence on the determination are displayed in Table 2, which shows that few coexisting ions interfere with the determination of tinidazole. Common ions such as Na+, Ca2+, Ba2+, Mg2+, Zn2+, Co2+, Cu2+, Al3+, and Pb2+ can be tolerated at high concentrations because they do not combine with B(C6H5)4−. However, some ions such as K+ and NH4+ can only be tolerated at very low concentration (10 μg mL−1). Among the studied species, NH4+ and K+ interfered seriously due to the similarity of their ionic radii. However, NH4+ and K+ were nearly absent in the samples, so they would not interfere with the determination. The most abundant ion, Na+, would interfere at concentrations of up to 1000 times that of tinidazole. Because Na+ was studied by adding NaAc and Ac− has a higher molecular weight than Na+, the tolerated level of Ac− was larger. The results demonstrated that the addition of CO3^2− and PO4^3− in excess of 1000-fold concentration relative to tinidazole can induce a moderate RLS signal. This may be due to the formation of extended aggregates around the tinidazole-TPB particle cores by the relatively highly negatively charged CO3^2− and PO4^3− ions. The other studied ions have nearly no effect on the determination when their concentration is the same as or higher than that of tinidazole. Due to the good selectivity of this method, assays can be performed without removing other coexisting ions.
Detection and quantification limits
The detection limit was calculated as s_b + 3s, where s_b is the average RLS signal of ten blank solutions and s is their standard deviation; the quantification limit was calculated as s_b + 10s. With the RLS intensity at 569.5 nm, the detection limit and quantification limit were calculated to be 5.0 μg mL−1 and 10.0 μg mL−1, respectively, indicating the high sensitivity of this method for the determination of tinidazole. The sensitivity of the RLS method is markedly higher than that of turbidimetry (results not presented).
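The calculation amounts to the following sketch (the blank-solution readings shown are illustrative numbers, not measured data); the resulting signal thresholds are then converted to concentration through the calibration curve:

```python
import statistics

blank_signals = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7, 12.0, 12.1]  # ten blanks (illustrative)
s_b = statistics.mean(blank_signals)
s = statistics.stdev(blank_signals)
print(f"detection-limit signal      = s_b + 3s  = {s_b + 3 * s:.2f}")
print(f"quantification-limit signal = s_b + 10s = {s_b + 10 * s:.2f}")
```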
Detected wavelength and calibration curves
In the RLS spectra of the tinidazole-TPB system (Figure 2), two peaks are located at 452.0 and 569.5 nm, with the maximum RLS peak at 569.5 nm. Calibration curves were determined for five different concentrations of tinidazole standard solutions at these two wavelengths, with each calibration sample measured in triplicate. Following the standard procedure above, the calibration curves were obtained by plotting the RLS intensity at 452.0 and 569.5 nm against the concentration of tinidazole (Figure 6). Table 3 lists the parameters and correlation coefficients of the calibration plots at the two wavelengths. The ΔI values (y) and the tinidazole concentrations (x) were fit to a linear function. The results of the regression analysis were then used to back-calculate the concentrations from ΔI, and the back-calculated concentrations and appropriate summary statistics (mean, standard deviation (SD), and percent relative standard deviation (RSD)) were calculated and presented in tabular form. As shown in Table 3, the detection wavelength has an obvious effect on the linear relationship of this method, and the RLS intensity of the system differs between the two wavelengths, offering a wide detection range for different concentrations of tinidazole in samples. The lowest detection and quantification limits were obtained at 569.5 nm, because 569.5 nm is the maximum RLS peak.
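The regression and back-calculation step can be sketched as follows; the intensity values are invented for illustration and are not the measured calibration data:

```python
import numpy as np

conc = np.array([10.0, 15.0, 20.0, 25.0, 30.0])           # tinidazole standards, ug/mL
delta_I = np.array([105.0, 158.0, 211.0, 263.0, 315.0])   # enhanced RLS intensity at 569.5 nm (illustrative)

slope, intercept = np.polyfit(conc, delta_I, 1)            # linear fit: delta_I = slope*conc + intercept
back_calc = (delta_I - intercept) / slope                  # back-calculated concentrations
rsd = 100 * np.std(back_calc - conc, ddof=1) / np.mean(conc)
print(f"slope={slope:.2f}, intercept={intercept:.2f}, residual RSD={rsd:.2f}%")
```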
Precision
The precision study comprised repeatability and reproducibility studies, carried out on five different samples. The repeatability was established by analyzing each sample five times. The reproducibility was determined by analyzing each sample on three different days over about one month. The repeatability and reproducibility were < 2.37% and < 3.96%, respectively. These results indicate that the present method can be used for quantitative analyses of tinidazole.
Recovery
To establish the accuracy of the method, this procedure was also performed on tinidazole added to samples. Table 4 shows the recoveries of tinidazole applying this analytical method. From Table 4, good results are obtained with the recovery range of 95.13-106.76%.
Comparison of RLS and HPLC methods
As shown in Table 5, the proposed method was applied to determine the tinidazole concentration in injections and tablets, and the results were compared with those of the HPLC method. The RLS results are in agreement with those of the HPLC method according to the Chinese Pharmacopoeia. The average RSD of the RLS method is 0.79%-1.83%, slightly lower than that of the HPLC method (1.22%-2.48%), which indicates that the RLS assay of tinidazole in drugs is practical.
In this paper, we compared two methods for analyzing tinidazole in injections and tablets. The two methods, RLS and HPLC, give similar results for tinidazole content in drugs (Table 5). However, the operations of the RLS and HPLC methods differ significantly. The HPLC method suffers from complexity and time cost, whereas the RLS method described here is robust, cost-effective, and simple while retaining sufficient sensitivity. First, an HPLC analysis took more than 10 minutes, whereas an RLS analysis took only 1 minute. Second, the RLS analysis was not affected by small variations in temperature, so it could be carried out at room temperature, whereas temperature had a significant effect on the HPLC analysis and the HPLC column temperature had to be set at a fixed value. Third, the RLS analysis did not use toxic organic solvent, whereas organic solvent is used in the HPLC analysis.
Mechanism discussion
Light scattering is caused by the presence of fine particles. Because the dimension of the tinidazole-TPB particles is much less than the incident wavelength, the system should be in accordance with the resonance light scattering formula given in [27], in which R(θ) is the resonance light ratio at the scattering angle θ, equal to the ratio of the scattered intensity I(θ) at the angle θ to the intensity of the incident light I0; n1 and n0 are the refractive indices of the solute and the medium, respectively; N0 is the number of particles per unit volume; υ is the volume of each particle; and λ is the wavelength of the incident light in the medium. If c is the concentration of the tinidazole-TPB suspension and ρ is the density of each particle, then N0υ is equal to c/ρ and the formula can be rewritten in terms of c and ρ. In the experiment, θ is 90°, υ remained nearly constant because the experimental conditions such as acidity and the added volumes of stabilizer and other reagents were kept as identical as possible to obtain particles of the same size, and n1, n0, λ, and ρ were all constant. Hence the RLS intensity is proportional to the concentration of the tinidazole-TPB suspension (c), or to the number of particles per unit volume (N0). Therefore, tinidazole can be determined on the basis of this theory.
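For particles much smaller than the incident wavelength, this relationship is commonly written in the standard Rayleigh form (the prefactor convention may differ slightly from that used in [27]):

R(θ) = (9π²N0υ²/2λ⁴) [(n1² - n0²)/(n1² + 2n0²)]² (1 + cos²θ)

and, after substituting N0υ = c/ρ,

R(θ) = (9π²υc/2ρλ⁴) [(n1² - n0²)/(n1² + 2n0²)]² (1 + cos²θ),

so that, with θ, υ, n1, n0, λ, and ρ held constant, R(θ) is directly proportional to c.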
CONCLUSION
In this contribution, we proposed a resonance light scattering technique to determine tinidazole in drugs. The analytical results showed that the method is rapid, sensitive, selective, and has the potential to be put into practice. The method may also be a valuable approach for the development of tinidazole detection in serum. | 4,163.6 | 2007-06-14T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
OPPORTUNISTIC APPROACH TO EXPLOIT WIRELESS SPECTRUM BY USE OF COGNITIVE RADIO: A REVIEW
Cognitive radio (CR) technology is envisaged to solve the problems in wireless networks resulting from the limited available spectrum and the inefficiency in the spectrum usage by exploiting the existing wireless spectrum opportunistically. In this paper, intrinsic properties and research on software defined cognitive radio (SDCR) are presented. Firstly brief
INTRODUCTION
The advent of software defined radio (SDR) [1] technology offers a more sophisticated form of processing resources than prior radio technology. SDR can be defined as a mechanism in which the radio software is reconfigured dynamically to fulfill the requirements of the user; the radio can be programmed to transmit and receive on a variety of frequencies. The main objective of reconfiguration is proper utilization of the available bandwidth of the radio. Initially, SDR technology was used exclusively for military applications, but it was later used commercially. One technology that promises not only processing capabilities but also a flexible structure applicable to a wide array of operational scenarios is Cognitive Radio (CR). An example of a CR application is the case where vacant portions of the TV broadcast bands could be shared with unlicensed devices that have sufficient intelligence to detect the licensed users and avoid causing harmful interference to them. Unlicensed users seek to access licensed bandwidth by applying various strategies and methods.
This paper describes the basic definition of Cognitive Radio (CR), the functionalities it provides, and its architecture, explaining the working of its various components. It also describes how the available spectrum bandwidth can be monitored and selected by CR users for proper utilization of unused bandwidth. Finally, the paper ends with Software Defined Cognitive Radio (SDCR), and research topics on SDCR are also defined.
COGNITIVE RADIO (CR)
CR technology is a key technology that enables cognitive radio ad hoc networks to use spectrum in a dynamic manner. A CR is a radio that can change its transmitter parameters depending upon its interaction with the radio environment in which it operates. The basic characteristics of CR are as follows [5,18]:
Cognitive Capability
Cognitive capability is the capability of radio technology to sense the information from its radio environment.
Reconfigurability
It enables the radio to be dynamically programmed according to the radio environment to fulfill new requirements of the user. CR can be programmed to transmit and receive on a variety of frequencies.
The main objective of cognitive radio is maximum utilization of the available spectrum through the cognitive capability and reconfigurability described above. Since most of the bandwidth is already assigned, the main aim is to share licensed spectrum without interfering with the transmissions of licensed users; cognitive radio enables usage of temporarily unused bandwidth, which is referred to as a spectrum hole [5]. If this band is later needed by a licensed user, the CR user has to move to another spectrum hole, or alter its transmission power level or modulation scheme to avoid interference, as shown in Fig. 1.
Fig. 1: Spectrum hole concept [3]
CR ARCHITECTURE
The architecture of CR is shown in Fig. 2. On the left side of the figure, the SDR processing structure is accomplished in this architecture, and the right side of the figure shows the components involved in augmenting an SDR architecture to allow for cognitive capabilities.
On the left side of the figure, the PAAL layer is defined in terms of certain standard radio concepts. This is a key layer if one wants to allow reuse of the cognitive portion of the architecture with different conventional radio implementations. That is, different radios could use different signal processing algorithms at a very low level that have no bearing on how a cognitive radio application perceives an instance of, for example, a certain kind of waveform. It also makes it possible for a CR to act by exporting SDR primitive actions in a platform-independent format.
Fig. 2: CR architecture [8]
On the right side of the figure, the rules, ontology, and reasoning mechanism components are described. The remaining component is another layer, the Ontology, Rule & Abstraction Layer. It allows ontology and rule concepts to be represented in a platform-independent standard. This is important if one wants to allow the same radio implementation to be used with alternate ontology and rule reasoning platforms. Just as radio notions such as signal and waveform should have meaning independent of any particular radio implementation, so too should notions such as concept and rule have meaning independent of any particular implementation.
PROTOTYPE SIMULATION
We have implemented a prototype simulation environment capable of handling beacon signal conflicts. The simulation enables one or more CRs and one or more beacons to be represented in a two-dimensional space. The CRs can be mobile, meaning that they represent users who can move around in the simulation environment. As a radio is moved, and as the various components of the environment change, an environment handler and a simulation manager ensure that the necessary events are propagated to the various elements of the simulation. Fig. 3 shows an example of the current system display and CR user interface.
A beacon signal conflict situation occurs when two beacons with opposing policies for the same channel overlap in some region. In Fig. 3 the CR is positioned more or less equidistant from two such beacons. Such a conflict matters to a CR only if it causes a problem with respect to one of its goals. Suppose that the CR user has indicated a desire to use channel C and that a beacon signal conflict exists for C. How does the CR know that a beacon signal conflict exists for channel C? It knows this through a series of inferences enabled by its ontological knowledge.
In terms of the architecture in Fig. 2, the SDR component processes two incoming signals. Each signal is known to be associated with a certain logical channel, and certain logical channels are known to be reserved for beacons. Therefore, using its ontological knowledge, the CR concludes that the two signals it is receiving are two distinct beacon signals. Once a signal is known to be from a beacon, the CR is able to interpret the content of the signal based on its properties. The CR is then in a position to know that it is receiving one signal saying it is allowed to use channel C and another saying it is not allowed to use channel C. The CR also knows the strength of each signal. Formally, the kind of ontological reasoning just described relies upon the use of well-defined frameworks in which schematic definitions of this kind can be encoded.
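As an illustration, such a definition could be sketched roughly as follows; the names, data layout, and rule structure are hypothetical and do not reproduce the paper's formal framework:

```python
# A signal on a channel reserved for beacons is a beacon signal; two beacon signals giving
# opposing permissions for the same channel constitute a beacon signal conflict.
BEACON_CHANNELS = {"B1", "B2"}   # logical channels known (ontologically) to be reserved for beacons

def is_beacon_signal(signal):
    return signal["channel"] in BEACON_CHANNELS

def beacon_conflict(signals, channel):
    """True if the received beacon signals disagree about whether `channel` may be used."""
    permissions = {s["allows"] for s in signals if is_beacon_signal(s) and s["about"] == channel}
    return permissions == {True, False}

received = [
    {"channel": "B1", "about": "C", "allows": True,  "strength": -60},
    {"channel": "B2", "about": "C", "allows": False, "strength": -62},
]
print(beacon_conflict(received, "C"))   # True: the CR infers a beacon signal conflict for channel C
```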
Primary network
The primary network refers to an existing network in which the primary users (PUs) have a license to operate in a certain spectrum band. The operations of the primary users are controlled through the primary base station.
CR network
CR networks, also known as secondary networks, do not have a license to operate in the desired band. CR users are mobile, so they can communicate with each other in a multi-hop manner. Hence, additional functionality is required for CR users to share the licensed spectrum band. Usually, CR networks are assumed to function as stand-alone networks which do not have direct communication channels with the primary networks. Thus, every action in CR networks depends on their local observations.
In order to adapt to the dynamic spectrum environment, CRAHNs necessitate spectrum-aware operations which form a cognitive cycle [5]. As shown in Fig. 4b, the cognitive cycle consists of the following steps: spectrum sensing, spectrum decision, spectrum sharing, and spectrum mobility.
Spectrum sensing
A CR user can be allocated only an unused portion of the spectrum. Therefore, a CR user should monitor the available spectrum bands and detect spectrum holes (unused spectrum). This capability is required in the following cases: 1. CR users find available spectrum holes over a wide frequency range for their transmission (out-of-band sensing); 2. CR users monitor the spectrum band during transmission and detect the presence of primary networks so as to avoid interference.
Spectrum sensing provides the following functions, as shown in Fig. 6.
Fig. 6: Spectrum sensing structure [3]
PU detection: The CR user observes its local radio environment. Based upon its local observations, the CR user determines the presence of PU transmissions and accordingly identifies the current spectrum availability. Generally, PU detection techniques for CRAHNs can be classified into the following groups [3]: primary transmitter detection, primary receiver detection, and interference temperature management.
As shown in Fig. 7a, transmitter detection is based upon the detection of the weak signal from a primary transmitter through the local observations of CR users. In transmitter detection, in order to distinguish between used and unused spectrum bands, CR users should have the capability to detect the signal from a PU transmitter. The local RF observation used in PU detection is based on the following hypothesis model [3]:
H0: r(t) = n(t)
H1: r(t) = h s(t) + n(t)
where r(t) is the signal received by the CR user, s(t) is the transmitted signal of the PU, n(t) is zero-mean additive white Gaussian noise (AWGN), and h is the amplitude gain of the channel. H0 is the null hypothesis, which states that there is no licensed user signal in a certain spectrum band. H1 is the alternative hypothesis, which indicates that a PU signal is present.
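A simple way to decide between H0 and H1 is energy detection: the CR user compares the average energy of the received samples against a threshold set above the noise floor. The sketch below is illustrative; the waveform, noise power, and threshold rule are assumptions, not a prescribed detector design:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000                    # number of observed samples
noise_power = 1.0           # assumed AWGN power

def received(pu_present, h=0.5):
    s = np.sqrt(2) * np.sin(2 * np.pi * 0.05 * np.arange(N))    # surrogate PU waveform s(t)
    n = rng.normal(scale=np.sqrt(noise_power), size=N)           # n(t)
    return h * s + n if pu_present else n                        # H1 vs. H0

def decide_pu_present(r, threshold):
    test_statistic = np.mean(r ** 2)            # average received energy
    return test_statistic > threshold           # True -> decide H1 (PU present)

threshold = 1.1 * noise_power                   # simple margin above the noise floor
print("H0 case:", decide_pu_present(received(False), threshold))   # typically False
print("H1 case:", decide_pu_present(received(True), threshold))    # typically True
```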
As shown in fig. 7b, the primary receiver detection aims at finding primary users that are receiving data within communication range of CR user. Interference temperature management accounts for the cumulative RF energy from multiple transmissions and sets a maximum cap on their aggregate level that the primary receiver could tolerate, called an interference temperature limit. As long as CR users do not exceed this limit by their transmissions, they can use this spectrum band. However, the difficulty of this model lies in accurately measuring the interference temperature since CR users cannot distinguish between actual signals from the PU and noise/interference. For these reasons, most of current research on spectrum sensing in CRAHNs has mainly focused on primary transmitter detection.
Sensing control: This function enables CR users to perform their sensing operations adaptively to the dynamic radio environment. The main objective of spectrum sensing is to find more spectrum access opportunities without interfering with primary networks. To this end, the sensing operations of CR users are controlled and coordinated by a sensing controller, which considers the following main issues: 1. how long and how frequently CR users should sense the spectrum to achieve sufficient sensing accuracy in in-band sensing, and 2. how quickly CR users can find the available spectrum band in out-of-band sensing.
Cooperation: The information observed by CR users is shared with their neighbors so that the sensing accuracy can be improved.
Spectrum Decision
Spectrum decision is the process of deciding on the best spectrum band among the available bands according to the QoS requirements of the applications. Spectrum decision is closely related to the channel characteristics and the operations of PUs. It usually consists of two steps [19]: first, each spectrum band is characterized based not only on the local observations of CR users but also on statistical information about the primary networks; then, based on this characterization, the most appropriate spectrum band is chosen. Spectrum decision needs to consider the end-to-end route consisting of multiple hops. Furthermore, the available spectrum bands in CR networks differ from one hop to the next. As a result, the connectivity is spectrum-dependent, which makes it challenging to determine the best combination of routing path and spectrum.
Fig. 8: Spectrum decision structure [3]
The main functionalities required by spectrum decision are as follows.
Spectrum characterization: Based on the observations, the CR users determine not only the characteristics of each available spectrum band but also its PU activity model.
Spectrum selection:
The CR user finds the best spectrum band for each hop on the determined end-to-end route so as to satisfy end-to-end QoS requirements.
Reconfiguration:
The CR users reconfigure communication protocol as well as communication hardware and RF front-end according to the radio environment and user QoS requirements.
CR ad hoc users require spectrum decision at the beginning of a transmission. CR users characterize the available spectrum bands by considering the received signal strength, interference, and the number of users currently residing in the spectrum, which are also used for resource allocation in classical ad hoc networks. However, unlike in classical ad hoc networks, each CR user observes heterogeneous spectrum availability which varies over time and space due to PU activities. This changing nature of spectrum usage is considered in the spectrum characterization. Based on this characterization, CR users determine the best available spectrum band to satisfy their QoS requirements. Furthermore, quality degradation of the current transmission can also trigger spectrum decision in order to maintain the quality of the current session.
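As an illustration of the characterization-then-selection step, each candidate band can be scored from the observed quantities and the best-scoring band chosen for a hop; the fields and weights below are arbitrary assumptions, not a standardized metric:

```python
def band_score(band):
    # Prefer high capacity; penalize PU activity, interference, and the number of current users.
    return band["capacity"] - 2.0 * band["pu_activity"] - band["interference"] - 0.5 * band["users"]

available_bands = [
    {"name": "band A", "capacity": 8.0, "pu_activity": 0.3, "interference": 1.0, "users": 4},
    {"name": "band B", "capacity": 6.5, "pu_activity": 0.1, "interference": 0.5, "users": 2},
]
best = max(available_bands, key=band_score)
print("selected:", best["name"])
```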
Spectrum sharing
The shared nature of the wireless channel necessitates coordination of transmission attempts between CR users. Spectrum sharing thus provides the capability to maintain the QoS of CR users without causing interference to the PUs, by coordinating the multiple accesses of CR users as well as allocating communication resources adaptively to changes in the radio environment.
Spectrum sharing techniques generally focus on two types of solutions [26]: 1. spectrum sharing inside a CR network (intra-network spectrum sharing) and 2. spectrum sharing among multiple coexisting CR networks (inter-network spectrum sharing). However, since CRAHNs do not have any infrastructure to coordinate inter-network operations, they only need to consider the intra-network spectrum sharing functionality. All decisions on spectrum sharing need to be made by CR users in a distributed manner. Fig. 9 shows the functional blocks for spectrum sharing in CRAHNs.
Fig. 9: Spectrum sharing structure [19]
Spectrum sharing shares some features with spectrum sensing; its main functions are as follows.
Resource allocation: Based on QoS requirements, CR users select the proper channel (channel allocation) and adjust their transmission power so as to achieve the QoS requirements as well as resource fairness; based on the sensing results, interference constraints should not be violated.
Spectrum Access: It enables multiple CR users to share the spectrum resource by determining who will access the channel or when a user may access the channel. Once proper spectrum band is selected in spectrum decision, communication channels in that spectrum need to be assigned to a CR user while determining its transmission power to avoid interference to primary networks (resource allocation). Then CR user decides that when spectrum should be accessed to avoid collision with other users (spectrum access).
Spectrum mobility
CR users can be regarded as visitors to the spectrum. Hence, if the specific portion of the spectrum in use is required by a primary user (PU), the communication must be continued in another vacant portion of the spectrum. This notion is called spectrum mobility. With the concept of spectrum mobility, a new type of handoff arises in CR networks, the so-called spectrum handoff, in which the users transfer their connections to an unused spectrum band. In CRAHNs, spectrum handoff occurs [3]: (1) when a PU is detected, (2) when the CR user loses its connection due to the mobility of users involved in an on-going communication, or (3) when the current spectrum band cannot provide the QoS requirements. The spectrum mobility structure for cognitive radio ad hoc networks (CRAHNs) is shown in Fig. 10.
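The three handoff triggers listed above can be summarized in a small decision function; the field names and the QoS comparison are illustrative assumptions only:

```python
def needs_spectrum_handoff(state):
    if state["pu_detected"]:                                   # (1) a PU appears on the current band
        return True
    if state["link_lost"]:                                     # (2) connection lost due to user mobility
        return True
    if state["achieved_qos"] < state["required_qos"]:          # (3) current band cannot meet the QoS
        return True
    return False

print(needs_spectrum_handoff(
    {"pu_detected": False, "link_lost": False, "achieved_qos": 0.7, "required_qos": 0.9}
))   # True, triggered by the QoS condition
```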
SOFTWARE DEFINED COGNITIVE RADIO
Cognitive radio can be defined as a radio that senses its operational (radio) environment and can dynamically adjust its radio operating parameters accordingly, in collaboration with wireless and wired networks. As a result of sensing, when some vacant and available frequencies or time slots exist in a system, users temporarily utilize those frequencies and time slots; likewise, when users would like to use several communication systems and vacant, available frequencies and time slots exist across those systems, users temporarily utilize them [11]. To promote the research and development of this technology, in 2005 the Ministry of Internal Affairs and Communications (MIC), Japan, started a project on improved spectrum efficiency [11]. The project includes the four research topics shown in Fig. 11: (1) research and development on elemental technology for cognitive radio terminals, (2) research and development on cognitive radio communication technology, (3) research and development on improved technology for frequency utilization in the space domain, and (4) research and development on super-semiconductor-based filter technology.
The first two research topics are closely related to cognitive radio research. There are many possible approaches to realizing cognitive radio, but SDCR is one of the best technologies for controlling the QoS (Quality of Service) of a communication system, because the functions of the radio communication systems are defined by software, and this software is referred to as a "waveform." SDCR senses the existence of communication systems by changing waveforms, profiles the existing communication systems, and provides better communication systems to the users.
RESEARCH TOPICS OF SDCR
When cognitive radio is realized, the radio equipment must consist of a hardware platform and a software platform [11]. In order to realize the cognitive radio hardware part, the components described in Fig. 12 must be considered.
Hardware platform
The research topics are categorized into two parts: the radio frequency (RF) part and the digital signal processing part. As for the RF part, the radio equipment must recognize the presence of several radio communication systems in a particular frequency band, from the VHF and UHF bands to the microwave band, and utilize the preferred radio communication systems based on the results of the recognition. To realize multi-band transmission and reception, the items shown in Fig. 12 must be considered.
On the other hand, the following items must be studied for the digital signal processing part: a) broadband, high-resolution, and low-power-consumption AD/DA converters; b) a low-power-consumption reconfigurable digital signal processor.
In order to realize the above reconfigurable processor, four core functions are needed: (1) a multi-IP core part, (2) a fully reconfigurable part, (3) a parameter-controlled reconfigurable part, and (4) an interfaces part.
Software platform
When software defined cognitive radio equipment is developed, a software platform that loads waveforms onto the hardware platform and profiles the existing radio communication environment is needed, as shown in Fig. 13. As seen from this figure, there are many managers on the platform.
RESEARCH CHALLENGES
Distributed power allocation: CRAHN users determine their transmission power in a distributed manner without the support of a central entity, so interference may occur due to the limitation of the sensing area. Proper power control methods should therefore be adopted to maximize capacity while protecting the transmissions of PUs [3].
Switching delay management: The spectrum switching delay is closely related not only to hardware but also to algorithm development for spectrum sensing, spectrum decision, the link layer, and routing. Thus, it is desirable to design spectrum mobility in a cross-layer approach to reduce the operational overhead among the functionalities and to achieve a faster switching time.
A/D converter for sampling wideband signals: To reliably detect primary user signals through spectrum sensing, the most challenging circuit in the implementation is the A/D converter, which must sample a wideband signal with a large dynamic range [19]. Spectrum capacity estimation and differing quality-of-service requirements also necessitate new spectrum decision models.
CONCLUSION
Radio spectrum is a valuable resource in wireless communication systems, and it has been the focus of many research and development efforts over the last several decades. Cognitive radio networks provide the key features to solve current wireless network problems by introducing opportunistic usage of frequency bands and available spectrum that are not occupied by licensed users. However, they must be able to do so while minimizing the interference they cause to primary users (existing users). Hence, a cognitive radio must be able to sense its radio environment and dynamically adjust its radio parameters according to the requirements of the users. Many researchers are currently engaged in developing the communication technologies and protocols required for CR networks. | 4,684.4 | 2012-06-30T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Time-domain and spectral-domain investigation of inflection-point slow-light modes in photonic crystal coupled waveguides
We report on spectral-domain and time-domain measurements and numerical calculations of group velocities in a photonic crystal coupled waveguide, where the unique guided mode band structure has a flat band region within the photonic band gap allowing for slow light observation. The spectral dependence of the group velocity, which is measured by an interference method, indicates the existence of slow light modes around the inflection point of the unique flat band, rather than at the band edge. Time-domain observation of optical pulses propagating along two-dimensional slab photonic crystal coupled waveguides is also demonstrated by using a high speed oscilloscope. By adjusting the wavelength of the input pulses toward the flat band of the coupled defect modes, an increasing delay between the reference and output pulses is clearly observed. An extremely small group velocity of 0.017c is thus obtained. Calculated group velocities show good agreement with our measured results. ©2007 Optical Society of America OCIS codes: (230.7370) Waveguides; (250.5300) Photonic integrated circuits; (999.9999)
Introduction
Utilizing photonic band gap structures to slow down the speed of light has recently been acquiring much attention because of potential applications in devices such as optical delay lines [1,2], all-optical buffers [3,4], and optical storage [5]. One such promising structure is two-dimensional (2D) photonic crystal (PhC) slab waveguides [6]. If properly designed, these waveguides exhibit a unique dispersion relationship for the traveling light and can selectively retard the propagation speed of certain waveguide modes. Theoretical calculations have shown that extremely small group velocities can exist for the defect mode at the band edge of PhC waveguides due to a relatively flat band structure [6,7]. Experimental measurements of slow light in PhC waveguides have also been reported. Some used the frequency-domain approach, which indirectly deduces the group index by analyzing the transmission spectrum using Fabry-Perot or interference methods [6,8]. Some measured the phase delay of a modulated signal to determine the group velocities [9,10]. Results from direct time-domain measurements on wave propagation in PhC waveguides, however, are very limited [11,12] because of the narrow operation bandwidth and large group-velocity dispersion at the band edge.
Recently, Mori and Baba proposed a photonic crystal coupled waveguide (PhCCW), where a unique flat band of coupled modes is formed in the photonic band gap [13]. An S-shaped dispersion curve with a flat region (or inflection point) inside the first Brillouin zone was obtained for the waveguide mode. Their simulation results showed that the waveguide mode has a wider operation bandwidth and a lower dispersion for slow light propagation than that of a conventional single line-defect PhC waveguide. However, the experimental realization of a PhCCW and the light behavior at the unique flat band have not been reported.
In this paper, we present for the first time the measured transmission spectrum of a properly designed PhCCW to verify the existence of the unique flat band. Next, we demonstrate the deduced group velocities as a function of wavelength by using an interference approach, where the spectra of an integrated Mach-Zehnder interferometer (MZI) employing PhCCWs were measured, to further clarify the inflection-point slow light modes of this flat region. Finally, we report the time-domain observation of the retardation of propagating pulses in a 2D slab PhCCW by using a high speed oscilloscope. The group velocities of the optical pulses at different wavelengths were measured. An extremely small group velocity of 0.017c was obtained at the wavelength of λ = 1537.30 nm, which is very close to the inflection-point slow light modes in the flat band.
Device design and fabrication
A PhCCW consists of two W1 type waveguides, where a row of air holes (W1) is removed from the otherwise hexagonal PhC lattice [14]. The two W1 waveguides are separated by three rows of air holes. The radii of the holes in the center row and of those beside the waveguides, as well as the position of the holes along the W1 waveguides, are used as design parameters. We fabricated the coupled waveguide structures on a 4-inch silicon-on-insulator (SOI) wafer with a 3 μm buried oxide layer and a thin silicon guiding layer (thickness = 205 nm). The photonic crystal with hexagonal lattices having lattice constant a = 438 nm was defined by E-beam lithography. The air holes were obtained by etching through the silicon layer using an inductively coupled plasma reactive ion etcher (ICP-RIE). After etching the silicon layer, the suspended silicon membrane was formed by removing the buried oxide layer using a selective wet etch in an HF solution. The top view of a completed coupled waveguide is shown in Fig. 1(a). By adjusting the structural parameters mentioned above, we can easily obtain an S-shaped-like coupled defect band with clear inflection points. Figures 1(b) and 1(c) are two examples of theoretical band structures for the guided modes, which were calculated by the 2D plane wave expansion method with the effective index approximation. In Fig. 1(b), there are two inflection points, M and N, with a negatively sloped region in between. By optimizing the design parameters, we were able to obtain a nearly flat region between the two inflection points. In this case, with the band structure shown in Fig. 1(c), both the first derivative and the second derivative of the ω(k) curve approach zero at the inflection point A. So not only is the group velocity of the waveguide mode very small, but the dispersion of the propagating optical pulse is also minimized.
Spectral domain measurement
In order to investigate the characteristics of PhCCWs with such special defect bands in the experiments, some design rules are considered here. The input and the output of the PhCCW were integrated with two optical multimode interference (1x2 MMI) couplers for efficient coupling to the outside world [see inset of Fig. 1(a)]. These MMIs, with a 50/50 split ratio, allow us to excite and collect the coupled-defect modes at the input and the output ports of the PhCCW. The interface between the PhCCW and the access strip Si waveguides of the MMIs was properly designed to enhance the coupling efficiency in the slow light region [15]. The width of the waveguides between the PhCCW and the MMI was tapered to ensure single mode propagation in the two branches. PhCCWs with three different lengths (L = 100, 200, and 500 μm) were prepared.
A typical transmission spectrum of a fabricated 200 μm long PhCCW is shown in Fig. 1(d). In this case, the structural parameters were designed to be the same as those used in Fig. 1(c). A sharp dip (marked as point "A") can be clearly seen at around λ = 1537 nm, indicating that the propagating light suffers a severe change in velocity. Usually for PhC waveguides, the so-called "slow light" phenomenon is observed when waveguide modes propagate near the band edge [6]. But here, the dip (or slow light) is in the middle of the transmission spectrum. This is due to the special slow light region associated with the inflection point of the S-shaped-like dispersion relationship of the PhCCW.
For further clarification of the S-shaped-like band in PhCCWs, we also investigated the spectral dependence of the group velocities by utilizing an interference method. Figure 2(a) shows the schematic of the device layout. An integrated MZI structure [8], which consisted of a reference branch (strip Si waveguide) and a signal branch (PhCCW butt-connected to MMIs), was used. Two MMI splitters that split light equally between the branches were applied to realize the interference experiments. The structural parameters of the PhCCW were designed to give the band structure shown in Fig. 1(c).
A typical transmission spectrum of this MZI device is shown in Fig. 2(b). The maxima and minima of the oscillating fringes correspond to constructive and destructive interference, respectively. It is seen very clearly that the oscillation period changes rapidly between wavelengths of 1530 nm and 1540 nm. This indicates a sharp increase of the relative phase shift in the interference. The group index of the propagating light in the PhCCW can be deduced from this interference spectrum using n_g(λ) ≈ n_si(λ) + λ²/(LΔλ), where Δλ is the local period of the interference fringes, L = 200 μm is the length of the PhCCW, and n_si(λ) = 3.4 is the index of the Si waveguide in the reference branch [8]. Figure 2(c) shows the deduced group index as a function of wavelength. We can see that the group index n_g increases rapidly as the wavelength increases above 1535 nm and reaches its highest value at around 1537 nm, and then diminishes beyond 1540 nm. For comparison, we have also calculated the group index from the S-shaped-like band structure shown in Fig. 1(c) using n_g = c/(dω/dk). The result is shown by the black dashed curve in Fig. 2(c). Very good agreement was obtained between the measured and the calculated results, demonstrating the existence of small group velocities at around λ = 1537 nm, which corresponds to the inflection point, "A", shown in Fig. 1(c) and the dip shown in Fig. 1(d).
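As a rough illustration of how such a fringe-spacing analysis can be carried out numerically, the following Python sketch applies the relation above to a handful of hypothetical fringe maxima; the wavelength values are placeholders, not data from Fig. 2(b).

```python
import numpy as np

# Hypothetical wavelengths (m) of successive fringe maxima read from an MZI
# transmission spectrum; placeholder values, not data from the measurement.
maxima = np.array([1534.2e-9, 1535.1e-9, 1535.8e-9, 1536.3e-9])

L = 200e-6    # PhCCW length in the signal branch
n_si = 3.4    # index assumed for the Si reference branch

d_lambda = np.diff(maxima)                 # local fringe period
lam = 0.5 * (maxima[:-1] + maxima[1:])     # center wavelength of each fringe

# One fringe corresponds to a 2*pi change of the relative phase, giving
# n_g(lambda) ~ n_si + lambda^2 / (L * d_lambda).
n_g = n_si + lam ** 2 / (L * d_lambda)
print(n_g)   # smaller fringe spacing -> larger deduced group index
```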
Time domain measurement
Next, we explore the temporal dynamics of the propagating light in this special slow light region by performing time-resolved measurements on the PhCCW. A passively mode-locked Er-doped fiber ring laser, operated in the 1525-1565 nm range, was used as the light source. Optical pulses with a width of 24-28 ps (FWHM) and a repetition rate of 10 MHz were used for the measurement. The input signal was steered into two paths by an optical splitter. One went to the PhCCW under test and the other went into an optical delay line. The signals were then combined at the output, and the waveform was displayed and recorded by a high speed oscilloscope, which, with a fast integrated optical module, had 28 GHz of unfiltered optical bandwidth for optical signal detection.
In order to determine the group velocity of the waveguide mode, we used PhCCWs with three different lengths (L = 100, 200, and 500 μm). Since the optical delay line was the same for all three samples, the transmitted pulse from the delay line was used as a reference, and the waveforms measured from samples with different lengths could be overlapped to show the propagation delay in the PhCCWs. Figures 3(a)-3(c) show the transmitted pulses through the PhCCWs and the optical delay line recorded on the oscilloscope; (a), (b), and (c) correspond to measurements with input pulses of three different wavelengths, λ = 1541.02 nm, 1539.38 nm, and 1537.30 nm, respectively. The reference signal is the leftmost pulse in each figure, and the other pulses are the measured waveforms of the transmitted light from the three samples. By measuring the delays of the output pulses with respect to the reference signal, the group velocity was determined. Taking Fig. 3(b) as an example, the delays between the output pulses and the reference pulse were 99, 108, and 132 ps for samples with lengths L = 100, 200, and 500 μm, respectively. Plotting the delay versus the waveguide length and fitting the data with a straight line [see Fig. 3(d)], we obtained a group velocity of 0.039c. Similarly, the group velocity determined from the curves in Fig. 3(a), where the input pulse has a center wavelength of 1541.02 nm, is 0.050c.
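The delay-versus-length fit can be reproduced with a few lines of code. The sketch below uses the three delays quoted above for the 1539.38 nm pulses; it is a minimal illustration of the fitting step, not the authors' analysis script.

```python
import numpy as np

c = 2.998e8                                    # speed of light in vacuum, m/s

# Delays quoted above for the 1539.38 nm pulses (Fig. 3(b)).
lengths = np.array([100e-6, 200e-6, 500e-6])   # PhCCW lengths, m
delays = np.array([99e-12, 108e-12, 132e-12])  # delay vs. the reference pulse, s

# The slope of delay versus length is the inverse group velocity in the
# PhCCW; the intercept absorbs the fixed delay common to all three samples.
slope, intercept = np.polyfit(lengths, delays, 1)
v_g = 1.0 / slope
print(f"v_g = {v_g:.2e} m/s = {v_g / c:.3f} c")  # about 0.04c for these numbers
```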
In Fig. 3(c), where the input pulses have a center wavelength of 1537.30 nm, the pulses clearly travel more slowly than in the other two cases. The output waveform for the sample with L = 500 μm is very weak, but if we magnify the picture, a clear output waveform centered at t = 430 ps can be seen [see Fig. 3(e)]. From the delay times measured in Fig. 3(c), we determined a very slow group velocity of only 0.017c. In other words, the speed of light is slowed down by a factor of about sixty compared with its value in vacuum.
Using the same method, we have investigated the frequency dependence of the group velocity in the wavelength range from 1545.00 nm to 1537.30 nm and compared it with the theoretically calculated group velocities obtained from the slopes (dω/dk) of the band structure in Fig. 1(c). The result is shown in Fig. 4(a). The deviation between the measured points and the theoretical curve is due to fabrication disorder, which results in slight differences in the structural parameters. In spite of this discrepancy, it is clear that the group velocity goes down as the launched frequency approaches the inflection point "A". These extremely small group velocities are attributed to the relatively flat region in the band structure of the coupled waveguide modes. The slow light region is in the middle of the transmission spectrum of the waveguide mode. This is different from the previously reported slow light experiments in single line-defect PhC waveguides, where the waveguide modes propagate near the zone edge (or the edge of the transmission spectrum) of the band structure. This gives us a larger tolerance in obtaining the slow light region.
From the curve shown in Fig. 4(a), we can evaluate the group velocity dispersion (GVD), defined as -(dv_g^{-1}/dω). Figure 4(b) shows the calculated GVD (black line) and the experimentally obtained GVD (blue line), which was obtained by curve fitting the data points in Fig. 4(a). Despite the offset in frequency in Fig. 4(b), a good qualitative agreement between the two was obtained. Different from those of the W1 type PhC waveguides [11,12], the calculated dispersion curve has a positive and a negative part, and in a very narrow region in between it goes to zero. Therefore, it is possible that by reaching the center point "A" one may obtain an extremely small group velocity with zero dispersion. Due to the narrow bandwidth of this reduced-GVD region in the current design and the limitations of the measurement system, however, we did not succeed in exploring the reduced-GVD region in the experiment. We believe that by further optimizing the PhCCW structure to enlarge the bandwidth of the low dispersion region [13], or by launching a longer pulse, it should be possible to observe guided modes propagating along the PhCCW at an extremely small group velocity with minimized dispersion.
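For readers who wish to repeat the GVD evaluation on their own data, the following sketch shows one way to differentiate 1/v_g numerically; the sampled group-velocity curve is a toy model with a minimum near point "A", not the measured data of Fig. 4(a).

```python
import numpy as np

c = 2.998e8                      # m/s
a = 438e-9                       # lattice constant, m

# Toy dispersion around the inflection point "A": normalized frequencies
# u = a/lambda and group velocities in units of c (illustrative values only).
u = np.linspace(0.2840, 0.2858, 19)
vg = (0.017 + 8.0e3 * (u - 0.2849) ** 2) * c    # minimum of 0.017c near "A"

omega = 2.0 * np.pi * c * u / a                 # convert a/lambda to rad/s
inv_vg = 1.0 / vg

# GVD as defined in the text: -(d(1/v_g)/d omega); np.gradient also accepts
# non-uniformly spaced omega values.
gvd = -np.gradient(inv_vg, omega)
print(gvd)                                      # s^2/m; changes sign near "A"
```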
Conclusion
We have designed and fabricated a photonic crystal coupled waveguide, where the guided mode has a unique flat band region within its photonic band gap allowing slow light propagation. Special inflection-point slow light modes that appear in the middle of the transmission spectrum instead of at the band edge were obtained. A sharp dip in the transmission spectrum and the peaking of the group index in the Mach-Zehnder interference measurement clearly indicate the existence of such an inflection point. The group velocities of the optical pulses in this special waveguide were measured in the time domain by comparing the propagation delay of the optical pulse through the waveguide with that of a reference optical delay line. A very clear slowdown of the optical pulses was directly recorded on a fast oscilloscope. An extremely small group velocity of 0.017c has been observed at λ = 1537.30 nm, due to the inflection-point slow light modes. This study, to our knowledge, is also the first consistent and reliable observation of inflection-point slow light in both spectral-domain and time-domain analysis.
Fig. 1. (a) SEM image of a PhCCW with lattice constant a = 438 nm and hole diameter 2r_0 = 250 nm. The radii of the holes in the center row (r_c) and of those beside the waveguides (r_1, r_2) are designed as r_c = 0.44a, r_1 = 0.23a, and r_2 = 0.30a, respectively. The shift of the holes with radius r_2 toward the waveguides (s_2) is 0.15a. The inset shows the MMI devices with dimensions 3.2 μm x 5.7 μm used in the experiments. (b) and (c) Calculated band diagrams of PhCCWs with different structural parameters: (r_0, r_c, r_1, r_2, s_2) = (0.30a, 0.44a, 0.30a, 0.33a, 0.00a) in (b) and (0.28a, 0.41a, 0.26a, 0.27a, 0.22a) in (c). The black line and the S-shape-like blue (or red) line represent the odd and even modes of the coupled bands in the PhCCW, respectively. The characters A, M, and N indicate the inflection points of the bands. (d) Measured transmission spectrum of a PhCCW with length L = 200 μm. In this case, the structural parameters are designed to be the same as those used in (c).
Fig. 2. (a) Schematic of the integrated MZI structure. The gray bold line represents the strip Si waveguides. (b) Measured transmission spectrum of an MZI sample. In this case, the length and the lattice constant of the PhCCW are 200 μm and 438 nm, respectively. (c) Wavelength dependence of the group indices (blue squares) deduced from (b) by using the interference approach. The black dashed line shows the theoretical group indices calculated from the inverse of the slopes of the blue line (coupled band) in Fig. 1(c).
Fig. 3. Time-resolved measurements obtained by recording the output pulses on the oscilloscope. The black, red, and blue lines represent samples with PhCCW lengths of L = 100, 200, and 500 μm, respectively. The waveforms at the leftmost of the plots are the reference signals (Ref.). The others correspond to output signals (Out.), i.e., pulses that traveled through the PhCCW. Measurements at different central wavelengths of the launched pulses, λ = 1541.02 nm, 1539.38 nm, and 1537.30 nm, are shown in (a), (b), and (c), respectively. The magnified plot of the blue line in (c) is shown in (e); a weak output signal is clearly observed. (d) The measured delay for different launched wavelengths as a function of PhCCW length.
Fig. 4. (a) Measured group velocities (blue dots) in comparison with theoretical ones (black line), which are derived from the calculated band diagram in Fig. 1(c). The character "A" indicates the calculated inflection point of the coupled band. The lowest group velocity in the experiments is about 0.017c at ω = 0.2849 (or λ = 1537.30 nm). (b) Calculated GVD (black line) from Fig. 1(c) and experimental GVD (blue line) derived from the measured v_g in Fig. 4(a). | 4,009.6 | 2007-03-19T00:00:00.000 | [
"Physics"
] |
Saccharomyces cerevisiae transcriptional reprograming due to bacterial contamination during industrial scale bioethanol production
Background The bioethanol production system used in Brazil is based on the fermentation of sucrose from sugarcane feedstock by highly adapted strains of the yeast Saccharomyces cerevisiae. Bacterial contaminants present in the distillery environment often produce yeast-bacteria cellular co-aggregation particles that resemble yeast-yeast cell adhesion (flocculation). The formation of such particles is undesirable because it slows the fermentation kinetics and reduces the overall bioethanol yield. Results In this study, we investigated the molecular physiology of one of the main S. cerevisiae strains used in Brazilian bioethanol production, PE-2, under two contrasting conditions: typical fermentation, when most yeast cells are in suspension, and co-aggregated fermentation. The transcriptional profile of PE-2 was assessed by RNA-seq during industrial scale fed-batch fermentation. Comparative analysis between the two conditions revealed transcriptional profiles that were differentiated primarily by a deep gene repression in the co-aggregated samples. The data also indicated that Lactobacillus fermentum was likely the main bacterial species responsible for cellular co-aggregation and for the high levels of organic acids detected in the samples. Conclusions Here, we report the high-resolution gene expression profiling of strain PE-2 during industrial-scale fermentations and the transcriptional reprograming observed under co-aggregation conditions. This dataset constitutes an important resource that can provide support for further development of this key yeast biocatalyst. Electronic supplementary material The online version of this article (doi:10.1186/s12934-015-0196-6) contains supplementary material, which is available to authorized users.
Background
Brazilian bioethanol is mainly produced from the fermentation of sugarcane juice and molasses by Saccharomyces cerevisiae. Yeast cells are added at the beginning of the production season and are recycled at the end of each fed-batch fermentation cycle, every 8 to 15 hours, for approximately 210 consecutive days. Because the feedstock is not completely sterilized prior to fermentation, microbial contaminants are continuously introduced to the distillery environment, resulting in a dynamic competition between the desired inoculated strain and wild yeast strains and bacteria [1][2][3].
PE-2 and CAT-1 are the most versatile and widely adopted S. cerevisiae strains used by Brazilian distilleries [1]. Previously, we described the genome structure of the JAY270 strain, a clonal isolate derived from a commercial PE-2 stock [4]. That study provided initial insights into the genetic mechanisms that underlie the strong performance of this strain as an industrial biocatalyst. JAY270 is a heterothallic diploid strain, and its genome is characterized by a high degree of heterozygosity. This intrinsic genetic diversity is likely a key factor in the extraordinary ability of PE-2 to thrive in the harsh environment found in industrial fermentation tanks. PE-2 typically persists for the whole production season as the dominant strain in the yeast population, stemming the proliferation of wild contaminant yeast strains.
Yeast flocculation is a phenotype derived from cell-cell adhesion controlled by a well-characterized pathway (FLO gene family members and their transcriptional regulators). This pathway is activated in response to environmental cues including cell density, carbon and/or nitrogen sources, pH, temperature, oxygen, agitation, ethanol concentration, and the presence of cations (reviewed by [5,6]). Flocculation is undesirable during fed-batch bioethanol production because it impairs the centrifugation step required for cell recycling, and it also reduces the cell-substrate contact surface, thereby slowing fermentation kinetics and reducing yield [1,7].
Most strains used in sugarcane bioethanol production, including PE-2, are non-flocculant in pure culture. However, industrial-scale fermentations employing these strains occasionally exhibit flocculation-like features that cause significant productivity losses. In those cases, the flocculation-like phenotype is typically due to co-aggregation between yeast and bacterial contaminant cells [7,8], rather than conventional genetically determined yeast self-flocculation. Lactobacillus species are the main bacterial contaminants found in sugarcane bioethanol production due to their ability to tolerate ethanol stress (8-11% v/v) and the antibacterial acid wash administered to the yeast cells prior to pitching each new batch (pH 2.0-3.0) [9]. L. fermentum, L. vini, and L. plantarum have been reported to be the main agents responsible for the co-aggregation of yeast cells [10,11]. The mannose-specific adhesin (Msa) found in L. plantarum and L. fermentum has been implicated in cell-cell interactions [12][13][14]. Hirayama et al. [15] examined co-aggregation in a panel of S. cerevisiae mutants with gene deletions of twelve mannan cell wall constituents. Among them, the mnn2Δ mutant strain lost the capacity to co-aggregate with L. plantarum cells. Mnn2p is a mannosyltransferase that transfers the first α-1,2-linked mannose to the mannan core structure to form a side chain that is subsequently extended by Mnn5p [16]. In the absence of Mnn2p, the mutant identified by Hirayama et al. [15] has an unbranched mannan chain that is incapable of linking to the bacterial adhesin Msa.
Although S. cerevisiae is an acid-tolerant organism [17], exposure to high concentrations of organic acids produced by bacterial contaminants slows down the yeast metabolism and reduces fermentative fitness [18][19][20]. Narendranath et al. [21] reported that the synergism between lactic and acetic acids reduced the rates of yeast growth, glucose consumption, and ethanol production.
Although recent advances have been made in the characterization of the cellular pathways that contribute to the success of PE-2 as a bioethanol producer, such laboratory-based studies cannot accurately replicate the biotic and abiotic stresses encountered by this strain during industrial-scale fermentations [7,22]. To gain a better understanding of the molecular physiology of PE-2 under actual production conditions, we determined the gene expression profiles from cells collected directly from distilleries, and contrasted them to the transcriptional responses triggered by co-aggregation with bacterial contaminants. This dataset provides valuable information to support the genetic improvement of PE-2 and other bioethanol-producing strains, specifically, in the development of strategies to reduce or avoid co-aggregation in the presence of bacterial contaminants.
Experimental dataset
Brazilian sugarcane bioethanol fermentation is characterized by a dynamic competition between high productivity industrial yeast strains and wild yeast and bacteria that contaminate the production process [1][2][3]. Early in the 2009 sugarcane harvest season (April), a pronounced flocculation-like phenotype was observed in the yeast population at a distillery in São Paulo state. Since the PE-2 yeast strain used as the initial inoculum is non-flocculant, the cell-cell adhesion observed was likely caused by co-aggregation between yeast cells and bacterial contaminants [7]. On that occasion, we collected samples directly from the fermentation tanks at seven time points of a single fed-batch cycle from this flocculated (FL) condition. The distillery's operator initiated a combination of antibiotic and acid treatments to control the bacterial contamination and cellular co-aggregation. This treatment lasted for several weeks and was effective. By July, the culture had fully reverted to its original non-flocculated state (typical fermentation; TF), despite the presence of a similar number of bacteria (1 × 10^6 CFU/mL). We then returned to the distillery and collected samples at six time points from the beginning to the end of a single disaggregated fed-batch cycle.
We isolated random yeast colonies from the FL and TF samples and genotyped them using PE-2 specific PCR markers recently developed by our group [2]. Despite the three-month interval between the collections, 95% of the yeast colony isolates from either flocculation condition matched the unique banding profile of the original PE-2 inoculum (data not shown). This result was consistent with the high degree of adaptation and persistence in long-term fed-batch sugarcane fermentation with cell recycling that is the hallmark feature of the PE-2 strain. It also reassured us that PE-2 was indeed the major yeast strain present in the microbial population, therefore allowing us to interpret the results of the transcriptomics studies presented below as a reflection of the molecular physiology of this particular strain.
Alignment of RNA-seq reads and microbial identification
The RNA-seq libraries from the six TF and seven FL samples were sequenced using Illumina technology (see Methods). Combined, they totaled approximately 330 million 36-bp single-end reads and approximately 11.9 Gb of sequence information. On average, approximately 76% of the reads from each sample aligned to reference S. cerevisiae genes and were interpreted as being derived from PE-2 transcripts (Methods and Additional file 1).
The material used to prepare the sequencing libraries also included some non-mRNA molecules, which were also sequenced and generated reads. We took advantage of this feature of the data and mined it for sequences derived from the bacterial cells present in the fermentations. We performed rRNA identification through alignment of the RNA-seq reads to the SILVA rRNA database [23]. An average of approximately 5% of the total reads were classified as ribosomal sequences, with 0.26% being assigned to a bacterial origin (Additional file 1). The bacterial read counts per taxon were calculated for the different taxonomic levels using the SILVA rRNA database. The family-level distribution of the bacterial sequences detected in the two fermentation conditions sampled is shown in Figure 1A. Interestingly, TF and FL had a similar overall distribution of bacterial families. However, within the Lactobacillaceae family, most of the reads derived from the flocculated condition were assigned to a single species, Lactobacillus fermentum (~93%) (Figure 1B). In contrast, only 41% of the Lactobacillaceae reads belonged to this species in the typical fermentations. This observation was significant since L. fermentum has been reported to induce sedimentation in S. cerevisiae [9,10]. To evaluate in principle the ability of L. fermentum to induce co-aggregation with PE-2, we isolated colonies of this species from our FL samples and confirmed their identity by 16S rDNA PCR and Sanger sequencing. These isolates were co-cultured with PE-2 under laboratory conditions, and behavior comparable to that observed at the distillery was seen (Figure 1C and D). The PE-2 yeast cells became co-aggregated and sedimented when co-cultured with greater than 1 × 10^5 L. fermentum cells/mL. A representative scanning electron micrograph of PE-2 yeast and L. fermentum bacterial cells from these co-cultures under laboratory conditions is shown in Figure 1E.
Metabolite analysis
Analysis of the chemical composition of the collected samples revealed four significant differences in the kinetics of flocculated versus typical fermentation ( Figure 2). Compared to the TF samples, the FL samples had (i) lower final ethanol titer, (ii) lower glycerol production, (iii) higher lactic and acetic acid concentrations, and (iv) slower rate of sucrose hydrolysis.
The fermentation batches took longer to complete in FL (21 hours) than in TF (15 hours), and the final ethanol content of the FL samples (64.4 g/L) was~25% lower than that of the TF samples, although both had somewhat similar residual sugar levels (FL: 7.6 g/L; TF: 4.5 g/L).
Glycerol production by yeast cells is inversely associated with ethanol titer [1]; however, the production of a limited amount of glycerol is important to help maintain an optimal redox equilibrium and proper cellular osmoregulation [24,25]. One of the many desirable properties of PE-2 is its well-balanced production of glycerol, which is low enough to have high ethanol yield, but high enough to tolerate stress. Interestingly, TF samples had almost three times higher glycerol concentration (4.74 g/L) than FL, suggesting that the FL cells were metabolically imbalanced and thus were likely less tolerant to the stressful environment of industrial fermentation.
Saccharomyces cerevisiae strains do not produce high amounts of organic acids [17,26], therefore most organic acids detected during bioethanol production have been attributed to bacterial contaminants, primarily Lactobacillus [9,27]. Under laboratory conditions, PE-2 produces only 1.5 and 2.4 mg/L of lactic and acetic acids, respectively [18]. In our study, the final content of acetic and lactic acids was 6 and 3.5 fold higher, respectively, in FL relative to TF (Figure 2). Since the overall residual sugar contents were similar at the end of both fermentation conditions, the flow of the sugar feedstock must have been significantly altered in the FL fermentation condition, being diverted from the intended ethanol production by yeast to instead being misused by bacterial contaminants to produce organic acids.
Due to its prolonged duration and the low ethanol titer, we estimated that co-aggregated fermentation resulted in the loss of approximately 12 million liters of bioethanol during the three months that the distillery operated under this condition, underscoring the critical importance of this problem to the bioethanol industry.
Differential gene expression analysis
The number of reads from the respective RNA-seq libraries that aligned to reference genes was used in the identification of differentially expressed (DE) genes between the time courses of the two fermentation conditions, as well as within each of the two conditions (Table 1). Time points TF1 and FL1 were used as references. Gene expression comparisons between fermentations ( Table 1, C-Comparative) were performed by comparing time points from early phases of fermentation (e.g. TF1 vs. FL1) and late stages (e.g. TF6 vs. FL7). Moreover, we performed a global analysis using all TF libraries versus all FL libraries (TFs vs. FLs) to examine the cumulative effect of all time points for each fermentation condition. DE genes were obtained from the global analysis using gene expression averages from the six timepoints of TF compared with the seven time-points of FL, with a p-value cutoff of 0.01 (Table 1). The complete RNA-seq data are available at the Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov/geo) under accession number [GSE41834] (Additional file 1), and the lists of DE genes are shown in Additional files 2, 3 and 4.
Despite the difference in ethanol content between the two conditions, glycolysis-related genes were not differentially expressed between them. Curiously, the sucrose-hydrolyzing gene, SUC2, was up-regulated by a factor of 4 in the TF condition. Furthermore, the expression of SUC2 in the TF condition increased three-fold when the addition of sugarcane extract stopped (TF3) and the level of C6 sugars (glucose and fructose) was reduced from 36 g/L to 6 g/L. Figure 3 shows the transcriptional profile of SUC2, described by the RPKM metric and the sucrose concentration, as a function of fermentation time for the FL (3B) and TF (3C) conditions. This rapid activation of SUC2 expression appears to be important for the prompt stress response to nutrient limitation (i.e. C6 sugars) during fermentation [28]. However, SUC2 expression in the FL samples decreased seven-fold during the process. This pattern could be partially explained by the presence of sucrose in the FL samples during the entire fermentation process (Figure 2C), which would have provided a continuous supply of C6 sugars to the cells and might have thus caused SUC2 repression [29]. The reduced surface contact between yeast cells and the medium due to cell-cell adhesion in FL could be associated with this distinct pattern of sucrose consumption. Therefore, we can reason that one of the causes of the longer time required for the FL fermentation could be the down-regulation of SUC2, leading to a low availability of fermentable sugars.
Although S. cerevisiae is a vigorous and acid-tolerant fermentative organism [17], high concentrations of organic acids combined with a low pH and a high concentration of ethanol reduce its metabolic rate [18][19][20]. We determined the pH for the FL and TF samples and obtained comparable measurements ranging from pH 3.8 to pH 4.3. As shown in Figure 3D, the overall levels of gene expression decreased when the organic acid content reached values greater than 4 g/L in the FL samples, suggesting that organic acids produce strong gene repression in the yeast cells. In this case, fewer genes were expressed and at lower levels compared with the previous time-points. At the low pH of fermentative conditions, the organic acids produced by contaminating bacteria exist primarily in their undissociated state [30]. Such undissociated organic acids present in the substrate diffuse across the cell membrane and dissociate in the cytoplasm, generating protons that lower the intracellular pH and inhibit many metabolic functions [31]. Moreover, this dissociation produces charged anions that can in turn produce free radicals, leading to severe oxidative stress [31]. Narendranath et al. [21] reported a negative synergistic effect between lactic and acetic acids when these organic acids are present in the medium at 0.5% (w/v) and 0.04% (w/v), respectively. This combination inhibited the cellular growth rate and decreased the rates of glucose consumption and ethanol production. Genes related to flocculation (MUC1, FLO5, FLO8, FLO9, FLO10 and PHD1) were not found to be up-regulated in the FL samples (Figure 4A). This result confirmed that the observed cellular co-aggregation was not due to yeast genetic control. We observed that the main transcriptional differences between the FL and TF conditions were related to variations in the concentrations of organic acids present in the medium. The major plasma membrane H+-ATPase, encoded by PMA1 [32], was not differentially expressed between samples at the beginning of fermentation (TF1 vs. FL1). However, we verified a two-fold PMA1 induction in flocculated fermentations at the last time point. Pma1p-related genes, AST1 (targeting factor to the plasma membrane), PMP1, PMP2 and HRK1 (regulatory elements), had similar expression patterns (Figure 4B). These data show that the mechanism used to pump out protons to regulate cytoplasmic pH is up-regulated in the FL cells. This stress response, however, consumes excessive ATP and may cause an inhibitory action by energy depletion [31].
Previous studies have shown that the main transcriptional responses of S. cerevisiae in the presence of weak acids (lactic and acetic) are related to cell wall components, membrane-associated transport processes and iron homeostasis [19,33,34]. The HAA1 transcription factor and Haa1p-regulated genes have been reported to be up-regulated in response to lactic and acetic acids [34][35][36][37]. Among the Haa1p target genes, we observed the up-regulation of TPO2, YGP1, PHM8, GRE1, YPR157w, YER130c and HRK1 in the FL7 sample compared with TF6 ( Figure 4C). However, we did not observe differences in the expression of HAA1 itself between fermentations, suggesting a co-regulation of those seven genes by distinct transcription factors [36].
During FL, cell wall-related genes changed their expression dramatically compared with TF. Kawahata et al. [19] reported that depleted expression of the cell wall components SED1, DSE2, CTS1, EGT2, SCW11, SUN4 and TOS6 increased the resistance of S. cerevisiae to lactic acid. Here, the PE-2 strain used the same mechanism during FL, down-regulating these seven genes 3- to 6.8-fold in response to the organic acid concentrations at the end of the FL time course (Figure 4D).
To validate the RNA-seq data, 15 genes were assessed by RT-qPCR, for a total of 60 pairwise comparisons. The total expression trends of the time-points analyzed were 87% similar between the different techniques, with correlation values of R² = 0.7604 and R² = 0.7951 for the FL and TF samples, respectively (Additional file 5).
Gene ontology of DE genes
Gene ontology (GO) analyses were performed to identify functional signatures in gene expression using the DE genes between fermentation conditions (TF1 vs. FL1; TF6 vs. FL7; TFs vs. FLs). Two enriched GO terms were particularly meaningful in the context of industrial fermentations: cellular amino acid and vitamin metabolic processes (Additional file 6). Negative values were obtained for the TF samples, and positive values were obtained for the FL samples. General analysis (TFs vs. FLs) was performed using six time-points for the TF samples and seven time-points for the FL samples. The beginning of fermentation is denoted as TF1 and FL1, and the end of fermentation is denoted as TF6 and FL7. The software Expander6 was used for the gene clustering image drawn using the end of fermentation as a reference.
For the FL samples, several genes assigned as "cellular amino acid metabolic process" (MET2, MET3, MET4, MET14, MET16, MET17, MET28, MET32, STR3 and GSH1) are involved mainly in the methionine (MET) and glutathione (GSH) biosynthesis pathways (Figure 4E). GSH has an important role in the protection of S. cerevisiae against oxidative stress [38,39]. The first, and rate-limiting, step in the GSH biosynthetic pathway occurs when GSH1 catalyzes the conjugation of glutamate and cysteine (reviewed in [40]). Because methionine is involved in cysteine biosynthesis, the expression profile of the MET genes has a direct effect on GSH biosynthesis by supplying cysteine to the pathway [41]. Moreover, the transcription factors Met4p and Met32p, which are required for the expression of MET biosynthetic genes, are also essential for GSH1 expression through cadmium-mediated regulation [42]. The MET- and GSH-related gene expression profiles (up-regulated in FL) suggest that the yeast cells in the FL samples were under oxidative stress, most likely due to the formation of intracellular reactive oxygen species triggered by lactic [43] and acetic [44] acids.
Genes involved in vitamin-related metabolic processes were identified as prominently up-regulated in the TF samples. Interestingly, most of the genes identified (e.g., PET18, PHO3, RPI1, THI2, THI3, THI4, THI13, THI20, SNO2 and SNZ3) participate in thiamine (vitamin B1) metabolic processes (Figure 4F). The SNO/SNZ genes are required for vitamin B1 and B6 biosynthesis and also have a role in oxidative stress tolerance [45][46][47]. Moreover, under laboratory conditions, bioethanol strains carrying amplifications of these genes have been shown to be less sensitive to fluctuations in vitamin B levels when cultured in a medium with a high sugar concentration [48], and these genes have been suggested to be important for adaptive growth in an industrial process [4,48]. Our transcriptional data are consistent with those reports and underscore the importance of thiamine genes for the adaptation of the PE-2 strain to sugarcane bioethanol production.
Differential allelic expression
Sequencing analysis of the PE-2 genome revealed that this strain is highly heterozygous [4]. We took advantage of the high number of PE-2 sequences generated by the RNA-seq reads (~9 Gb) to identify differences in allelic expression during the different fermentations. A threshold of a 2-fold difference between alleles was used for a case of differential allelic expression (DAE) to be called (i.e., more than 66% of the reads aligned to a specific gene came from a single allele). When the coding region had more than one heterozygous SNP, DAE was determined by accounting for the cumulative imbalance across all the phased SNPs in the entire gene.
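A minimal sketch of this DAE call, assuming per-allele read counts have already been obtained from the phased SNPs, is given below; the gene names and counts are illustrative placeholders.

```python
# Sketch of the DAE call described above: a gene is flagged when one allele
# accounts for at least a 2-fold excess of the allele-informative reads,
# i.e. more than 2/3 (66%) of them. Gene names and counts are placeholders.
def is_dae(reads_allele1, reads_allele2, fold=2.0):
    total = reads_allele1 + reads_allele2
    if total == 0:
        return False
    return max(reads_allele1, reads_allele2) / total > fold / (fold + 1.0)

allele_counts = {"GENE_A": (120, 30), "GENE_B": (55, 48)}  # (allele 1, allele 2)
for gene, (a1, a2) in allele_counts.items():
    print(gene, "DAE" if is_dae(a1, a2) else "balanced")
```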
Our initial analysis identified 195 candidate DAE genes that were found in both TF and FL conditions (Additional file 7; TF and FL). Interestingly, 140 of those genes were located at consecutive positions on the right arm of Chr13, between FAR8 (YMR029c) and the right telomere (TEL13R), and all of them had reads that were essentially derived from only one allele ( Figure 5A). This striking pattern suggested that a Chr13 region of approximately 600 kb was homozygous in the PE-2 strain present at the time in the distillery, likely due to a mitotic recombination event leading to loss of heterozygosity (LOH). To confirm this homozygous pattern in the industrial isolates, we designed primers to partially amplify the RCE1 (YMR274c) locus, which contains a heterozygous SNP (A/G) in the JAY270/PE-2 strain at position 874 within a recognition site for the MboI restriction endonuclease. RCE1 sequences were amplified from PE-2 cells isolated from the industrial fermentations as well as from the JAY270/PE-2 heterozygous diploid and from the S288c isogenic strain FY23 (uncut allele). The MboI digestions of the PCR products confirmed that the PE-2 cells isolated from industrial tanks were indeed homozygous for RCE1, while a heterozygous pattern was observed for JAY270 ( Figure 5B).
Exclusion of the Chr13 right arm genes resulted in 55 genes with DAE identified in both fermentations. In addition, we also found 61 DAE genes exclusively in the TF samples and 33 exclusively in the FL samples (Figure 5C and Additional file 7), suggesting a fermentation condition-dependent expression pattern regulated by specific transcriptional responses. We hypothesize that the observed DAE patterns may be due to the differential effect on allele expression of heterozygous SNPs at cis-elements in the regulatory regions of the DAE genes. Gene ontology analysis of the DAE genes did not reveal a functional enrichment in this relatively limited gene set (the only statistically enriched GO term detected was "unknown function"). Although individual cases of DAE may play an important role in dictating the fermentation performance of PE-2 under typical and/or co-aggregated conditions, it is unclear at this point which genes those are and what their specific functions might be.
Conclusion
The gap in the basic biological knowledge about PE-2 and its related strains represents a significant barrier to genetically improving these strains and fully exploiting their biotechnological potential [8]. The genetic engineering of bioethanol strains should be preceded by genomic and transcriptomic studies to identify the genetic characteristics that are associated with yeast fermentative fitness [7,22]. The results presented here provide new insights into the biology of the PE-2 strain and allowed us to identify stress response mechanisms during bioethanol production. Information derived directly from industrial scale fermentations can be used to support studies aimed at developing superior fermentative fitness in the PE-2 strain. The data described here represent an important step toward those goals. Fig. 5. A-The allele with higher expression was arbitrarily designated as allele 1 (blue) and the lower expression allele was designated as allele 2 (red). The hatched box represents a 100-kb homozygous region (including the centromere) in the JAY270/PE-2 diploid where no DAE information could be assessed. The DAE plot suggests that the breakpoint of the Chr13 LOH event occurred within the homozygous region. The position of the RCE1 locus on the right arm is shown. B-The genotypes at the RCE1 locus were determined by PCR followed by restriction analysis using MboI. The predicted banding patterns for the alleles were: homozygous for allele 1 (cut), 543 bp and 256 bp; homozygous for allele 2 (uncut), 799 bp; heterozygous pattern, 799 bp, 543 bp, and 256 bp. A molecular weight marker ladder with 100 bp incremental fragment sizes was used in lane 1. The 500 bp marker band is indicated. C-Venn diagram showing the number of DAE genes identified exclusively in TF, exclusively in FL, and simultaneously in both TF and FL.
Fermentation samples collection
Biological samples from two different industrial fermentation conditions were collected directly from bioethanol fermentation tanks at the Nova América distillery (Maracaí-SP, Brazil). At the beginning of the 2009 season, the PE-2 culture used as the starting inoculum at this distillery became flocculent due to bacterial co-aggregation. On that occasion, we collected samples at seven different fermentation time points (FL samples) during one fed-batch cycle. Over the following three months, the yeast cells were treated with antibiotics and sulfuric acid before re-pitching each new batch. The yeast community eventually reverted to its typical phenotype (disaggregated), at which time samples were collected at six intervals during a typical fed-batch fermentation cycle (TF samples). Three biological replicates were collected for each of the thirteen sampled time-points. After collection, the samples were immediately transferred to a container with dry ice for subsequent chemical and transcriptomic analyses. Aliquots of each sample were also maintained on ice to preserve cell viability.
Yeast genotyping
Unfrozen aliquots from each condition (FL and TF) were diluted and plated on YPD solid medium (yeast extract 10 g/L [w/v], peptone 20 g/L [w/v], glucose 20 g/L [w/v] and agar 20 g/L [w/v]). Colonies were isolated and DNA extraction was performed following a phenol-chloroform protocol [49]. Twenty yeast colonies derived from each condition were analyzed using PE-2 specific PCR markers as described by Carvalho-Netto et al. [2].
Metabolite analysis
Aliquots of the biological replicates were centrifuged, and the supernatants were diluted in water (1:3), filtered in Millipore 0.22-μm filters, and analyzed by High Performance Liquid Chromatography (Alliance 2795, Waters, Milford, MA, USA) using a refractive index detector (HPLC-RI) and an Aminex HP-87H column (Bio-Rad Laboratories, Hercules, CA, USA). The HPLC readings for sucrose, glucose, fructose, ethanol, glycerol, acetic acid and lactic acid in the samples were fit to respective standard curves to determine their concentrations.
RNA isolation and RNA-seq library preparation
The total RNA of the samples was extracted using a phenol and chloroform protocol [50]. Illumina RNA-seq libraries were prepared following the manufacturer's recommendations. Briefly, mRNA was isolated from 1 μg of the total RNA using oligo(dt) magnetic beads, and then fragmented in the presence of divalent zinc ions. The fragmented RNA was then used for first and second strand cDNA synthesis. Double-stranded cDNA was end-repaired and 3' adenylated for the ligation of sequencing adapters. After adapter ligation, fragments of approximately 250 bp were isolated by gel electrophoresis and PCR amplified. The libraries were validated on an Experion DNA chip (Bio-Rad, Hercules, CA, USA) and quantified using a Qubit fluorometer (Invitrogen, Carlsbad, CA, USA). Each library was sequenced in one lane of an Illumina Genome Analyzer II× (GAII×) sequencer, resulting in 20-30 million 36-bp single-end reads.
Gene expression analysis and functional annotations
The complete dataset of RNA-seq reads has been deposited in SRA under accession number [SRA057038]. For each RNA-seq library, reads were aligned to a custom reference gene database consisting of S. cerevisiae S288c genes (www.yeastgenome.org) and 20 JAY291-specific genes [4] (Additional file 1). The alignment was performed using SOAPaligner version 2.20 [51], allowing up to two base mismatches and discarding repeat reads. A Perl script was then created to calculate the number of reads aligned per gene for each RNA-seq library.
The output file was analyzed using the DEGseq package [52] for the identification of differentially expressed (DE) genes. Pairwise comparisons within a fermentation condition (typical and flocculated, individually) and between fermentation conditions (typical versus flocculated) are shown in Table 1. For the comparative analysis (TFs vs. FLs), DEGseq was configured to use the time points within each fermentation (TF1-TF6 and FL1-FL7) as experimental replicates. A p-value cutoff of 0.01, with a fold change > 2 (up-regulated) or < -2 (down-regulated), was used to determine the DE genes in these comparisons. Gene expression levels were defined using the RPKM formula [53]. Clustering and visualization of the DE genes were obtained using EXPANDER [54].
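The fold-change and p-value filtering step can be illustrated with a short sketch; the column names and values below are assumptions for illustration, not the actual DEGseq output format.

```python
import numpy as np
import pandas as pd

# Illustrative table standing in for a DEGseq result; the column names
# ("gene", "log2_fold_change", "p_value") and the values are assumptions.
results = pd.DataFrame({
    "gene": ["SUC2", "PMA1", "SED1", "ACT1"],
    "log2_fold_change": [2.0, 1.0, -2.5, 0.1],
    "p_value": [0.001, 0.200, 0.004, 0.900],
})

# A p-value cutoff of 0.01 and an absolute fold change greater than 2
# (i.e. |log2 fold change| > 1) define the DE gene set.
de_genes = results[(results["p_value"] < 0.01) &
                   (results["log2_fold_change"].abs() > np.log2(2))]
print(de_genes)
```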
We also quantified the genomic background transcription (RPKM threshold) using 1787 intergenic regions larger than 500 bp. The RPKM threshold was estimated through the alignment of reads to intergenic regions using SOAPaligner [51], allowing up to two base mismatches and discarding all repeat reads. The distribution of the RPKM values from the genes and intergenic regions for each RNA-seq library was used to estimate the RPKM threshold by visual inspection. The gene expression levels with RPKM values below the RPKM threshold were not considered to be expressed genes, and these genes were discarded from the differential expression analysis when the expression levels were below the RPKM threshold in the respective libraries.
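For reference, a minimal sketch of the RPKM calculation applied to a single feature is shown below; the read counts and feature lengths are illustrative, and the threshold itself was chosen by visual inspection of the distributions as described above.

```python
# Minimal sketch of the RPKM formula used to express expression levels:
# RPKM = reads mapped to the feature /
#        (feature length in kilobases * total mapped reads in millions).
# The counts below are illustrative placeholders.
def rpkm(read_count, feature_length_bp, total_mapped_reads):
    return read_count / ((feature_length_bp / 1_000) * (total_mapped_reads / 1_000_000))

total_mapped = 25_000_000                # ~20-30 million reads per library
print(rpkm(1500, 1600, total_mapped))    # a hypothetical 1.6 kb gene: RPKM 37.5
print(rpkm(12, 800, total_mapped))       # a weakly covered intergenic region: RPKM 0.6
```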
Gene ontology (GO) terms of the DE genes were obtained from SGD (http://www.yeastgenome.org/cgi-bin/ GO/goSlimMapper.pl) using the Yeast GO-Slim Process parameters and a cutoff p-value <0.01. Functional GO enrichment terms were obtained using DE genes between fermentations (TF1 vs. FL1; TF6 vs. FL7; TFs vs. FLs).
RNA-seq validation by Real Time qPCR (RT-qPCR)
To confirm the RNA-seq results, 15 genes were analyzed by RT-qPCR in four pairwise comparisons (TF1 vs. TF4, TF1 vs. TF6, FL1 vs. FL4 and FL1 vs. FL7), for a total of 60 pairwise comparisons. A list of the genes and primers used is presented in Additional file 8. Aliquots of the samples used to construct the RNA-seq libraries were used in reverse transcriptase reactions to synthesize cDNA using the SuperScript Direct cDNA Labeling System (Invitrogen, Carlsbad, CA, USA) according to the procedures described by the manufacturer. The RT-qPCR mix consisted of 8 μL of SYBR Green Supermix (Bio-Rad Laboratories), 1 μL of each primer (0.5 μM final concentration), 5 μL of water and 1 μL of cDNA. The reaction program consisted of one hold at 95°C for 5 min, followed by 40 cycles of 15 s at 95°C and 75 s at 60°C. Fragment amplification and detection of SYBR Green (Applied Biosystems, CA, USA) were performed with the Step One Plus thermal cycler (Applied Biosystems). The relative expression ratio was calculated using the 2^-ΔΔCT method [55], using primers with amplification efficiencies between 90-100% (−3.6 ≥ slope ≥ −3.3). The ACT1 and YNL134c genes were selected as endogenous genes to normalize the expression values for the TF and FL samples, respectively, as both genes showed little variation in expression among the different RNA-seq libraries.
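A minimal sketch of the 2^-ΔΔCT calculation, with illustrative Ct values, is shown below.

```python
# Minimal sketch of the 2^-ddCT relative expression calculation used for the
# RT-qPCR validation; the Ct values below are illustrative placeholders.
def relative_expression(ct_target, ct_endogenous, ct_target_ref, ct_endogenous_ref):
    d_ct = ct_target - ct_endogenous               # normalized to ACT1 (TF) or YNL134c (FL)
    d_ct_ref = ct_target_ref - ct_endogenous_ref   # reference time point (TF1 or FL1)
    return 2.0 ** -(d_ct - d_ct_ref)

# Example: a target gene at TF6 versus TF1, normalized to ACT1.
print(relative_expression(22.1, 18.0, 24.3, 18.2))  # ~4-fold up-regulation
```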
Bacterial identification
Although conventional RNA-seq libraries are enriched for mRNA through the use of oligo(dT) magnetic beads, a small proportion of sequences corresponding to other RNA species is often detected [56], allowing the identification of the bacterial species present in the FL and TF samples. The RNA-seq reads were aligned to the SILVA rRNA database [23] using SOAPaligner, configured to allow two mismatches and discard any repeat reads. A custom Perl script was developed to parse the output file and obtain the read counts per taxon at different taxonomic levels. Bacterial families that accounted for less than 5% of the total reads from the TF or FL samples were not used in further analyses.
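A sketch of the per-taxon counting step is shown below; it assumes a simplified tab-separated mapping of read IDs to SILVA-style taxonomy strings, which differs from the actual SOAPaligner output and would need adapting.

```python
from collections import Counter

# Sketch of the per-taxon read counting (done with a custom Perl script in
# the study). Input lines are assumed to be "read_id<TAB>taxonomy" records.
def count_taxa(lines, level):
    counts = Counter()
    for line in lines:
        _read_id, taxonomy = line.rstrip("\n").split("\t")
        ranks = taxonomy.split(";")
        if len(ranks) > level:
            counts[ranks[level]] += 1
    return counts

example = [
    "read1\tBacteria;Firmicutes;Bacilli;Lactobacillales;Lactobacillaceae;Lactobacillus fermentum",
    "read2\tBacteria;Firmicutes;Bacilli;Lactobacillales;Lactobacillaceae;Lactobacillus vini",
]
print(count_taxa(example, level=4))   # family-level counts -> Lactobacillaceae: 2
print(count_taxa(example, level=5))   # species-level counts
```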
To identify the bacterial species associated with yeast co-aggregation, bacterial colonies were isolated in LB solid medium (tryptone 10 g/L [w/v], yeast extract 5 g/L [w/v], NaCl 10 g/L [w/v] and agar 20 g/L [w/v]) under anaerobic conditions. The bacterial DNA extraction protocol was adapted from Collart and Oliviero [50] using lysozyme (100 mg/mL) and proteinase K (10 mg/ mL). PCR amplification of the 16S rDNA was performed in a final volume of 50 μL. The reaction mix consisted of 4 ng of DNA, 0.5 μM each of F27 (5′ AGA GTT TGA TCM TGG CTC AG 3′) and R1378 (5′ CGG TGT GTA CAA GGC CCG GGA ACG 3′) primers [57], 0.25 mM each dNTP, 3.5 mM MgCl 2 , 1X Colorless GoTaq Flexi Buffer (Promega, Madison, WI, USA) and 1.25 U of GoTaq Flexi DNA Polymerase (Promega). The amplification program consisted of one initial hold at 94°C for 3 min, followed by 40 cycles of 30 s at 94°C, 30 s at 55°C and 60 s at 72°C. A final 5-min extension was performed at 72°C. The PCR products were purified using the NucleoSpin Extract II purification kit (Macherey-Nagel, Düren, Germany), according to the manufacturer's instructions. The PCR products (45 ng) were Sanger-sequenced using the Big Dye Terminator kit (Applied Biosystems, Foster City, CA, USA) on a 3500 Genetic Analyzer (Applied Biosystems). The reaction program consisted of one hold at 94°C for 2 min, followed by 35 cycles of 20 s at 94°C, 30 s at 55°C and 2 min at 60°C. Bacterial rDNA sequence similarity was obtained by BLASTn analysis using the GenBank nonredundant (NR) Database (http://blast.ncbi.nlm.nih.gov/).
RCE1 amplification and genotyping
For the loss of heterozygosity (LOH) analysis, a segment of the RCE1 gene was PCR-amplified in a final reaction volume of 50 μL using 1 ng of yeast genomic DNA, 0.5 μM each of RCE1_F (5′ ACC TTA TAT TGT GGA CCC GTT 3′) and RCE1_R (5′ CTC GAT AGA ATT CCA TAA TAG 3′) primers, 0.25 mM each dNTP, 3.5 mM MgCl2, 1X Colorless GoTaq Flexi Buffer and 1.25 U of GoTaq Flexi DNA Polymerase (Promega, Madison, WI, USA). The amplification program consisted of one hold at 94°C for 2 min, followed by 35 cycles of 40 s at 94°C, 40 s at 56°C and 80 s at 72°C. A final 5-min extension was performed at 72°C. The PCR products were purified and digested using 10 U of MboI (New England Biolabs, Ipswich, MA, USA). The digested fragments were resolved in 2% (w/v) agarose gels and visualized by ethidium bromide staining.
Additional files
Additional file 1: Table S1. Reads obtained by RNA-seq analysis during industrial bioethanol production. For each RNA-seq library, reads were aligned to a custom reference gene database composed of S. cerevisiae S288c genes (www.yeastgenome.org) and 20 JAY291-specific genes. In order to assign ribosomal sequences, reads were aligned against the SILVA rRNA database.
Additional file 2: Table S2. Differentially expressed genes among the fermentations. Pairwise comparisons between fermentation conditions (typical versus flocculated) were performed using the time points within fermentation (TF1-TF6 and FL1-FL7) as experimental replicates. A p-value cutoff of 0.01, with a fold change > 2 (up-regulated) or < -2 (down-regulated), was used to determine the differentially expressed (DE) genes in this comparison. Gene expression levels were defined using the RPKM formula.
"Biology",
"Engineering",
"Environmental Science"
] |
The sum of the series of reciprocals of the quadratic polynomials with double positive integer root
This contribution, which is a follow-up to the author's papers [3] and [4], deals with the series of reciprocals of quadratic polynomials with a double positive integer root. A formula for the sum of this kind of series, expressed by means of harmonic numbers, is derived and verified by several examples evaluated using the basic programming language of the computer algebra system Maple 16. A further formula using generalized harmonic numbers is also stated. This contribution can serve as an inspiration for teachers of mathematics covering the topic of infinite series, or as subject matter for work with talented students.
INTRODUCTION AND BASIC NOTIONS
Let us recall the basic terms. For any sequence of numbers $\{a_k\}_{k=1}^{\infty}$ the associated series is defined as the sum $\sum_{k=1}^{\infty} a_k$. The sequence of partial sums $(s_n)$ associated to this series is defined for each $n$ as the sum $s_n = \sum_{k=1}^{n} a_k$. The series converges to a limit $s$ if and only if the sequence $(s_n)$ converges to $s$, i.e. $\lim_{n\to\infty} s_n = s$. We say that the series has a sum $s$ and write $\sum_{k=1}^{\infty} a_k = s$.
The $n$-th harmonic number $H_n$ is the sum of the reciprocals of the first $n$ natural numbers: $H_n = \sum_{k=1}^{n} \frac{1}{k}$. The generalized harmonic number of order $n$ in power $r$ is the sum $H_n^{(r)} = \sum_{k=1}^{n} \frac{1}{k^r}$, where $H_n^{(1)} = H_n$ are the harmonic numbers. Every generalized harmonic number of order $n$ in power $r$ can be written as a function of a generalized harmonic number of order $n$ in another power by means of a formula stated in [6], whence formula (1) follows. From formula (1) we get the following Table 1.
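The following Python sketch computes harmonic and generalized harmonic numbers exactly as defined above; it is a simple numerical illustration of the kind of values tabulated in Tab. 1, not the Maple code used by the author.

```python
from fractions import Fraction

def harmonic(n, r=1):
    """Generalized harmonic number H_n^(r) = sum_{k=1}^{n} 1/k^r, in exact rational arithmetic."""
    return sum(Fraction(1, k**r) for k in range(1, n + 1))

# First few harmonic numbers H_n and generalized harmonic numbers H_n^(2)
for n in range(1, 6):
    print(n, harmonic(n), harmonic(n, 2))
```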
Tab.1: Some harmonic and generalized harmonic numbers
THE SUM OF THE SERIES OF RECIPROCALS OF THE QUADRATIC POLYNOMIALS WITH DOUBLE POSITIVE INTEGER ROOT
We deal with the problem of determining, for a positive integer root, the sum of the series of reciprocals of the quadratic polynomials with a double positive integer root. The partial sums, i.e. the generalized harmonic numbers, are also determined by a formula (see [5]); this surprising identity was derived by the contemporary French amateur mathematician Benoit Cloitre (see [1]).
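The exact series and closed form are not recoverable from the extracted text. Assuming the intended series is the sum over k ≥ 1, k ≠ a, of 1/(k − a)², whose quadratic denominator has the double positive integer root a, the sum would be π²/6 + H_{a−1}^{(2)}; under this assumption, the value 3.156731 quoted in Tab. 2 matches the case a = 8, which supports the reading, but the series form remains an inference made only for this illustration. The Python sketch below checks the assumed closed form numerically.

```python
# Numerical check under an ASSUMED series form (an inference, not the author's statement):
#   sum over k >= 1, k != a, of 1/(k - a)^2  =?=  pi^2/6 + H_{a-1}^(2)
import math

def partial_sum(a, n_terms=1_000_000):
    """Partial sum of 1/(k - a)^2 over k = 1..n_terms, skipping k = a."""
    return sum(1.0 / (k - a) ** 2 for k in range(1, n_terms + 1) if k != a)

def assumed_closed_form(a):
    h2 = sum(1.0 / k**2 for k in range(1, a))       # generalized harmonic number H_{a-1}^(2)
    return math.pi**2 / 6 + h2

for a in (1, 3, 8):
    print(a, round(partial_sum(a), 5), round(assumed_closed_form(a), 6))
# For a = 8 the closed form gives ~3.156731, the value appearing in Tab. 2.
```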
Tab. 2: Some approximate values of the sums (one tabulated value is 3.156731). Computation of 16 pairs of the sums took over 16 hours. The relative errors, i.e. the corresponding ratios, range between the values obtained for the smallest and the largest considered root.
CONCLUSIONS
We dealt with the sum of the series of reciprocals of the quadratic polynomials with a double positive integer root. We derived a formula for the sum of this series and verified this main result by computing 16 sums using the CAS Maple 16.
Two other ways to calculate the sum are to use the value of a generalized harmonic number together with an improper integral, or to use the short formula involving the value of the generalized harmonic number. The series of reciprocals of the quadratic polynomials with a double positive integer root thus belongs to the special types of infinite series, such as geometric and telescoping series, whose sums are given analytically by means of a formula that can be expressed in closed form.
i.e. to determine the sum of the series for further values of the parameter, etc. Clearly, we get a formula in which the $n$-th partial sum of the series appears, and also a second formula. The problem of determining the sum $\sum_{k=1}^{\infty} \frac{1}{k^2}$ is the so-called Basel problem. This problem was posed by Pietro Mengoli (1625-1686) in 1644. In 1689 Jacob Bernoulli (1654-1705) proved that the series converges and that its sum is less than 2. In 1737 Leonhard Euler (1707-1783) showed his famous result $\sum_{k=1}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6}$. This sum is the value $\zeta(2)$ of the Riemann zeta function $\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}$. The values of the $n$-th partial sums correspond to the generalized harmonic numbers $H_n^{(2)}$, so their first ten values are presented in the third row of Table 1. Some further values of the partial sums were computed by CAS Maple 16, whereas the series converges to the number $\pi^2/6 \doteq 1.644934$.
Thus we get the following results.
Theorem 1. The series, where the root is a positive integer, has the sum given by the derived formula.
Remark. In [2] an equality is stated which can be proved using a geometric sum-type expansion of the denominator and evaluation of the subsequent integrals by means of integration by parts and L'Hôpital's rule. Using formula (6) we get:
Theorem 2. The series, where the root is a positive integer, has the sum given by the alternative formula.
Example 1. Evaluate the sum of the series by formulas i) (9), ii) (11), and iii) (8) and compare the obtained results.
"Mathematics"
] |
3D strain-induced superconductivity in La2CuO4+δ using a simple vertically aligned nanocomposite approach
We demonstrate a new way to increase the superconducting TC of cuprates using 3-D strain control in nanocomposite thin films.
INTRODUCTION
High-temperature superconductor (HTS) cuprates are very important materials for a range of energy applications, and increasing their superconducting transition temperature (T C ) is an important goal. There are a range of cuprates with T C values above 100 K, even up to >150 K in Hg-based compounds under pressure (1), but their structures and processing are highly complex, and most cuprates have strong electric anisotropy, which means that their in-field properties are poor. Increasing the T C value of single-layer cuprates could lead to strong benefits for practical applications, in terms of both performance and cost (2).
From a phenomenological perspective, the methods for achieving high T C in the cuprates are well known (3): (i) optimize the carrier concentration in the CuO 2 planes (4); (ii) eliminate defects and disorders in the CuO 2 planes (5, 6); (iii) make the planes flat, square, and of optimal size [superexchange coupling of Cu-O-Cu in the planes correlates with the superconducting order parameter; the a parameter (a) should be optimized] (7,8); (iv) optimize the interplanar distance; and (v) have a large Cu-apical oxygen (Cu-O A ) distance to ensure that carriers are localized in the CuO 2 planes [the c parameter (c) should be maximized] (9)(10)(11).
For point (v), the close relationship between the T C and Cu-O A distance has been demonstrated experimentally by chemical substitution of cations (6,(12)(13)(14), application of hydrostatic pressure (15)(16)(17), and control of Madelung (electrostatic) potential (18,19). Figure 1A shows the strong dependence of T C on the Cu-O A distance and the charge carrier hopping range for hole-doped cuprates, including one-, two-, and three-layer systems (11,20).
In the (La,Sr) 2 CuO 4+δ system, in-depth studies of interfaces have been instrumental to control Madelung strain, electronic reconstruction, and mobile charge carrier concentration (18,19,(21)(22)(23)(24). (La,Sr) 2 CuO 4+δ is an ideal system to study the tuning of the lattice in relation to T C , because it has a relatively low variation of excess oxygen content (i.e., 0.05 < δ < 0.13) and is less likely to lose oxygen in response to strain compared to, for example, YBa 2 Cu 3 O 7-x .
The landmark work of metal/superconductor or metal/insulator bilayer La 1.55 Sr 0.45 CuO 4 /La 2 CuO 4+δ films grown on LSAO (LaSrAlO 4 ) showed a strong link between T C and Cu-O A in La 2 CuO 4+δ . a was controlled by epitaxial strain with the substrate, and c (and hence Cu-O A ) was controlled by Madelung electrostatic strain (out-of-plane Coulomb interactions) (18,19,25). T C was increased to 50 K at the bilayer interface in a single unit cell (18,19). Doping studies, in addition to Madelung strain, also showed the importance of lattice expansion on T C (14). The bilayer thin film work confirmed previous uniaxial mechanical tensing (8 GPa) along the c direction in La 1.85 Sr 0.15 CuO 4 single crystals, where T C was increased to 52 K from ~40 K (17). We note that strain effects have also been shown to be important in FeSe superconductor films, with T C increased from 8 K to up to 65 K at the film/substrate interface (26).
It is important to note, however, that while LSAO is a very standard substrate for growth of the (La,Sr) 2 CuO 4+ films, whether in heterostructures or plain films, it is not an ideal substrate from a structural matching viewpoint (19) because a = 3.755 Å, which is much smaller than the optimum value (~3.84 Å) to achieve a maximum T C (7,8,27). Hence, point (iii) in the above list of methods for HTS is not optimized for epitaxial growth of single-layer cuprates on LSAO. The phenomenological relationship between the maximum achievable T C obtained for a wide range of cuprate superconductors and a is shown in Fig. 4E.
It is also important to note that the standard epitaxial thin films of La 2 CuO 4+ have a lower T C value than the bulk value (~40 K). Despite the nonoptimized lattice structure of LSAO, the highest T C La 2 CuO 4+ films are grown on this substrate; hence, a T C value of ~35 K (5 K lower than the bulk value) is achieved (19). Only Naito et al. (28) presented a much enhanced T C of 57 K in films of La 2 CuO 4+ grown on LSAO. They postulated that this high T C was due to the epitaxial strain. However, this result was never reproduced, although the epitaxial strain effect on superconductivity has been widely explored (19,(28)(29)(30). Lower T C values are obtained on other substrates (31), and this is likely related to disorder effects in addition to non-optimized strain effects. The use of epitaxial strain to tune the strain in plain La 2 CuO 4+ films is not ideal because it is not possible to simultaneously optimize a and c, i.e., they cannot be controlled independently owing to Poisson/elastic effects (19,32). Hence, points (iii) and (v) in the above list of the methods for HTS will not be optimized simultaneously.
The question to be explored in this work is whether it is possible to increase c in the (La,Sr) 2 CuO 4+δ system without reducing a, which is the case when the standard LSAO substrate is used, and therefore whether it is possible to further increase T C . The work also explores whether it is possible to achieve more than just a single unit cell effect. As described below, the approach used is to engineer the lattice strain in a new way so that the Cu-O A distance is increased and the in-plane distance is not reduced compared to bulk. We use a vertically aligned self-assembled (i.e., naturally formed) nanocomposite (VAN) system in which the interfaces are clean, i.e., there are no second phases or chemical mixing, which can occur with artificial superlattices (33). The standard common VAN structure is shown in Fig. 1 (B and C). Our aim is to create a VAN structure consisting of vertical La 2 CuO 4+δ nanostructures, e.g., standard nanopillars, in a LaCuO 3 matrix. We chose to use pure La 2 CuO 4+δ , and not La 1.85 Sr 0.15 CuO 4 [(Sr)214], because Sr would also substitute in LaCuO 3 , which would lead to suboptimal and, more importantly, unspecified doping in La 2 CuO 4+δ . We chose LaCuO 3 as a strain-controlling phase because it will not poison La 2 CuO 4+δ and contains the same chemical constituents. In addition, because LaCuO 3 contains Cu 3+ , it can act as an oxidizing source to dope La 2 CuO 4+δ (34). Under the film growth conditions, LaCuO 3 will most likely be in the composition range LaCuO 3-δ , 0 < δ < 0.5, which will make it antiferromagnetic (35,36). In our pure La 2 CuO 4+δ system, optimal hole doping then relies on obtaining highly oxygenated La 2 CuO 4+δ .
In a VAN structure, the La 2 CuO 4+ nanostructured inclusions will be stretched along the out-of-plane c direction by vertical epitaxy with the LaCuO 3 matrix. We do not use chemical substitution for inducing strain as this simultaneously expands (or contracts) both the out-of-plane and in-plane lattice parameters. In addition, we do not use substrate control of a to induce an elastic expansion of c (and hence the Cu-O A distance) because this will require shrinkage of a, which is undesirable. We use a (001)-oriented SrTiO 3 (STO) substrate to enable La 2 CuO 4+ to keep its bulk a. This is the case because La 2 CuO 4+ does not grow coherently on STO owing to the different lattice structures (31).
The VAN approach is very different to the heterostructure Madelung strain approach because (i) the in-plane parameter should be maintained at the bulk value and is not reduced by growth on LSAO; (ii) it is relatively simple as films are grown from a composite target; (iii) it allows growth of billions of interfaces per film rather than just one or a few, giving the possibility to increase T C of the whole film if the interfaces are closely spaced enough; and (iv) the vertical strain in the La 2 CuO 4+ nanopillars is highly tunable by choosing different matrix materials.
Here, La 2 CuO 4+δ is referred to as 214 and LaCuO 3 is referred to as 113. Although detailed elastic moduli are not available for both 113 and 214, 113 is assumed to be stiffer than 214 along the c direction because the layered 214 structure should be more compliant along the c direction (20). Hence, the hypothesis is that 113 will dominate the strain state in 214 via vertical epitaxy.
[Fig. 1 caption (A to C): A larger hopping range and a weaker contribution of the d z2 orbital of Cu give more localized conduction (more 2D-like) in the CuO 2 layers (9-11, 20), and these lead to the higher T C . The standard VAN structure aimed for in this work is composed of a LaCuO 3 film matrix, with La 2 CuO 4+δ nanopillars incorporated by self-assembled growth, as shown in three-dimensional (3D) (B) and 2D views (C).]
We note that while 113 has the same basic building blocks as 214, i.e., the LaCuO 3 perovskite units contain CuO 6 octahedra, it is nonsuperconducting, likely because it contains a three-dimensional (3D) network of O-Cu-O bonds (35). Depending on its oxygen content, 113 ranges from a poor metal to an insulator to a semiconductor (36). In terms of its magnetic properties, 113 has a rich magnetic phase diagram showing antiferromagnetism (AFM), Pauli paramagnetism, or AFM with ferromagnetic canting, depending on the oxygen content (36). More than 10 epitaxial, self-assembled thin films of 214 and 113 (thickness, ~25 to 100 nm) were grown from a single mixed ceramic target by pulsed laser deposition (PLD) onto (001) STO substrates. STO was chosen as the substrate as it has a perovskite structure similar to that of 113, and so, 113 will be epitaxially stabilized on STO. 113 has a tetragonal structure (a = 3.8189 Å; c = 3.97268 Å; unit cell volume, 57.993 Å 3 ) when it is fully oxidized (34)(35)(36). Because c is most closely matched to a of STO (a = 3.905 Å), the films are expected to grow with the out-of-plane a axis, giving a axis-oriented 113 films (a-113). As already mentioned, because 214 is structurally mismatched to STO, this means that 214 will not grow coherently on the STO and so should be relaxed in plane (31,32). From the point of enhancing T C , a relaxed a for 214 (i.e., 3.79 Å) is preferable over a reduced one, as is the case when it is grown epitaxially on LSAO (i.e., 3.755 Å).
We used a composite PLD target of 214:113 with an atomic ratio (n:m) of 2:3. There is a good stoichiometry transfer from target to films using PLD, and so, the atomic ratio was assumed to be the same in the films and target. We aimed to achieve relatively fine nanostructures of c axis-oriented 214 (c-214) inclusions in the 113 matrix so that, as much as possible, 214 would be strained vertically by 113 without relaxing laterally to the bulk value. A smaller fraction of 214, and hence a lower n/m value, would give finer nanostructures (33), but a very small value means that the nanostructures would become discontinuous.
All films, except for one, labeled S5, were grown using the same growth and annealing conditions as described in Materials and Methods. S5 was cooled without post-annealing in O 2 after growth.
We show that T C can be increased to ~50 K in 214 VAN films, consistent with the observation of >10 nm highly strained regions (expanded c without a reduced a) around the 214/113 interfaces, of which there are billions in each film, i.e., in much larger volumes than in previous artificial superlattice studies. More than 10 films were grown with enhanced T C . We note that 214 films or La 1.85 Sr 0.15 CuO 4 [(Sr)214] films of similar thickness grown on STO are not superconducting [this work, (32)] and highly optimized films grown on LSAO have T C values of ~35 K (19). Our work shows that the increased T C is closely linked to increases in c, while a lies close to the bulk value. It also shows that the VAN approach eliminates the need for post-annealing under highly oxidized conditions. We propose how to increase the volume fraction of the 50 K phase in the VAN films and how to further increase T C .
RESULTS AND DISCUSSION
We focus on the properties of the five ~25-nm-thick films (films S1 to S5). Films of 50 nm thick and above did not provide reliably increased T C values. As we will show later, this is related to strain relaxation of the strain-controlling a-113 phase.
We show that simple O 2 annealing, rather than O 3 annealing, as is normally the case for plain 214 films, is sufficient to provide hole doping for superconductivity. We further show this using x-ray photoemission spectroscopy (XPS) on two nanocomposite films: one cooled without post-annealing in O 2 after growth (S5) and the other with post-annealing at a PO 2 (partial pressure of oxygen) of 500 mbar for 1 hour after growth (S3).
As shown in note S1, the valence band (VB) spectra for both samples consist of strongly hybridized states of Cu3d and O2p. S3 shows a metallic state as confirmed by the appearance of a density of states across the Fermi level, whereas S5 is insulating. The VB spectrum of S5 is in agreement with the insulating phase of 214. The metallic state of S3 is consistent with a hole-doped induced insulatorto-metal transition, i.e., excess oxygen by annealing in oxygen.
Further evidence of excess oxygen is provided by the concurrent shift of the binding energies (BEs) of La 4d and O 1s toward lower values in S3 relative to S5. This BE shift is due to a downward shift of the Fermi energy (chemical potential) by hole doping, which has been observed in many hole-doped transition metal oxides and hole-doped cuprate oxides (37). Hence, S3 was oxygenated without carrying out super-oxygenation or ozone annealing after growth, as is normally the case to make 214 superconducting (19,30,32). In addition, with 113 as an oxygen source in the film, local stretching of c in 214 may also enable easier oxygen incorporation into interstitial positions, as we will discuss later.
We now focus on the superconducting properties of the films. Figure 2A shows the temperature dependence of resistance, R (T), for film S1, with a bias current I = 100 μA. A T C onset of ~50 K is observed with a broad transition. The measuring geometry is from top to bottom, i.e., from the top electrode [platinum (Pt)] to the bottom electrode (Nb:STO substrate), as shown in the inset of Fig. 2A. It was necessary to do a top-to-bottom measurement because a lateral superconducting path would be blocked by the 113 phase. Room temperature conductance atomic force microscopy demonstrates the film structure of conductive regions embedded in insulating ones and is presented in note S2. The dependence of R (T) on the bias current (I) for film S2 is shown in note S3. One or more factors could be responsible for the broad superconductor transition: (i) The superconducting path may not be continuous from the top to the bottom of the film, leading to percolation effects that can broaden the transition. As shown in Fig. 3E, the c-214/a-113 interface is stepped, and so, the superconducting path could be tortuous. (ii) A very thin (two to three unit cells) insulating tunneling layer of a-113 is present at the interface between the c-214 film and the STO (also observed in Fig. 3E and detailed in note S3). (iii) The superconducting regions are filamentary and characterized by different T C onsets due to an inhomogeneous strain distribution at the c-214/a-113 sample interfaces (38)(39)(40).
The electrically measured T C onset of 50 K agrees with the temperature where the diamagnetic Meissner component becomes dominant in the zero-field cooled (ZFC) M (T) curve, as shown in the bottom of Fig. 2A. The Meissner signals were approximately a few × 10 −6 electromagnetic units (emu) at 10 K, indicating a ~5% superconductor volume fraction in the film.
To further explore the spatial variation of superconductivity, we undertook low-temperature (4.2 K) scanning tunneling microscopy (STM) and spectroscopy (STS) measurements on films S3 and S4 (that had a gold capping layer). The highly resistive 113 regions (at 4.2 K) prevented us from performing stable large-scale topographic images on sample S3 and achieving clear topography-spectroscopy correlations due to the many tip-sample crashes we encountered. To overcome this problem, we prepared a gold-coated (S4) sample, where correlations between topography and proximity-induced superconductivity were achieved. Let us first address the STS measurements performed on film S3. Superconducting gaps (Δ) in the differential conductance (dI/dV) versus applied voltage (V) tunneling spectra were found in localized areas. The gap values (Δ) and depths (zero-bias conductance) largely varied over the sample, the first from ~10 to ~2 meV and the second from ~0.55 to ~0.95 of the normal tunneling conductance, as demonstrated in Fig. 2B.
The quasi-particle coherence peaks observed in our tunneling spectra are smeared and much smaller compared to those typically observed in spectra measured on conventional superconductors, as well as on some of the high-temperature cuprate superconductors such as Bi 2 Sr 2 CaCu 2 O 8 (Bi2212). Our results, however, are consistent with measurements performed on optimally doped (Sr)214 single crystals (T C of 40 K) after cleaving under He gas (41). According to Kato et al. (41), in an optimally doped (Sr)214 single crystal, over 70% of the spectra have no apparent coherence peaks, and the magnitude of the observed peaks (if they exist) is much weaker than that typically observed for Bi2212. In our experiment, the coherence peaks may be further reduced because of spatial averaging in our nonhomogeneous sample and/or degraded surface conditions resulting from sample transfer from the growth chamber to the STM apparatus. In that respect, we note that STM/STS is a surfacesensitive tool (42), even to the level of identifying surface order parameter symmetry differing from that in the bulk (43), and therefore, signatures of d-wave superconductivity (which may exist in the bulk of our cuprate samples) can be reduced in the STM tunneling spectra due to surface disorder (44). However, d-wave superconductivity is still observed, although in a subtle way, by the in-gap spectral structure observed in the green and black curves in Fig. 2B (resulting in relatively high zero-bias conductance), manifesting the effect of zero-energy Andreev-bound states.
Moreover, we could fit well our STS data to spectra calculated using the theory for tunneling into a d-wave superconductor (45) over a wide range of tunnel barrier strengths, as detailed in notes S4 and S5. We also discuss in note S4 the correlation between the topographic image presented in the inset of Fig. 2B and the red and blue spectra. A maximal observed gap value of ~10 meV is close to that of optimally doped (Sr)214 single crystals (T C of 40 K) measured after cleaving under He gas (41). Because our films were not vacuumcleaved, their surfaces may have degraded, leading to a reduction in T C . Hence, the 50 K measured from the R (T) data is broadly in agreement with the maximal 40 K related measured gap by STS. As shown in Fig. 2B, much narrower gaps were also observed in different regions of film S3, indicating regions of lower T C (<40 K) material (41) or highly degraded surface.
Further evidence of a spatial gradient of superconductivity was gained by studying the Au-capped film S4. The Au capping layer enables good measurement stability to be achieved over large scan areas, which contain regions of very low conductivity within which superconducting islands are embedded, as shown in (46). In our sample, proximity-induced superconducting islands develop in the Au film in regions that are directly connected to superconducting regions in the underlying film, and their spatial extension can be monitored by STS. Figure 2C demonstrates such measurements. The red curve was measured at the middle of the line, the green and blue curves were measured on the left, and the brown and black curves were measured on the right. It is important to note that we do not know the underlying surface morphology, so we can only speculate that the red curve was measured near the 113-214 boundary (showing the widest and deepest gap), the green and blue curves on the 214 nanostructure region, and the brown and black spectra on the 113 region (where the gap is expected to decay over a shorter distance).
To understand the origin of the ~50 K T C , we undertook a detailed analysis of the crystal structure of film S3 in different regions of the film, in particular at interfacial regions. All the films are of very high epitaxial quality with a major fraction of c-214 and a-113 and some minor a axis-oriented 214 (a-214) (Fig. 3A).
The lattice parameters of the 214 films grown on STO were all in the range of a = 3.79 to 3.81 Å, c = 13.08 to 13.14 Å. a is slightly larger and c is slightly smaller than the bulk value of 214 (a = 3.794 Å), possibly because of a partial strain effect from the STO substrate. The minor a-214 is also observed in the planar transmission electron microscopy (TEM) image and exhibits an orthogonal pattern as shown in Fig. 3B.
Note S2 shows AFM topographic images of the same film. At first glance, the AFM images are similar to the planar TEM images as shown in Fig. 3B. However, at closer inspection, a different structure is seen, i.e., a much denser pattern of orthogonal grains is observed. Hence, these grains are not the minor a-214 grains observed in Fig. 3B. There is a 1:1 correspondence between the topography and current images in note S2, indicating that the film microstructure is made up of a major fraction of orthogonal conducting grains embedded in a nonconducting matrix.
Considering the XRD (x-ray diffraction), TEM, and AFM data altogether, the conducting grains correspond to c-214 and the nonconducting matrix is a-113. The structure differs from the idealized VAN nanopillar structure of Fig. 1B, which shows isolated nanopillars. Here, we have a highly faceted plate-like form of inclusions instead of nanopillars. Such a faceted structure should enable highly effective vertical strain control of c-214 by vertical epitaxy with a-113.
Further cross-sectional scanning TEM (STEM) analysis (Fig. 3C) reveals that the film is composed of c-214 and a-113, and the minor a-214 is present at some boundaries between c-214 and a-113 in the upper parts of the film. Roughly equivalent amounts of c-214 and a-113 grains are observed, consistent with the ratios of phases mixed in the target material, i.e., 214:113 = 2:3. In addition, the phases observed are the same as in the XRD plot of Fig. 3A and the planar TEM of Fig. 3B. Figure 3D shows a schematic structure of the film. The white-boxed areas (a and b) highlight regions that are probed more closely by cross-sectional STEM and spatial lattice parameter analyses to determine the origins of the 50 K phase.
We first examine Box a (Fig. 3E), which shows a nearly vertical c-214/a-113 boundary. In the region close to the substrate, c in the a-113 film is perfectly strained to the STO substrate and adopts the in-plane lattice parameter of STO, i.e., 3.905 Å. Hence, according to volume conservation, a in the a-113 film will be 57.993/(3.905)^2 = 3.803 Å. This value is slightly lower than the global a measured for a-113, which is 3.814 Å from XRD. A value of 3.814 Å is closer to the bulk value of 3.8189 Å for 113, consistent with the fact that there is some relaxation of a with film thickness.
The c-214/a-113 interface in Fig. 3E is stepped and not sharp and likely forms in this way to minimize interfacial energy in this high-energy interface from the dissimilar crystal structures. The c-214 grains are tilted by approximately 1° to 2° with respect to the horizontal plane. 113 grows a axis oriented on STO because c of 113 (a = 3.8189 Å and c = 3.97268 Å, for the fully oxidized tetragonal phase) more closely matches a of STO (3.905 Å). The phase grows coherently with the STO, as expected for these same crystal structures. c-214 grows on a very thin layer (two to three unit cells of 113 as shown in the left panel of Fig. 3E). c-214 grows more easily on a-113 than STO because 214 and 113 contain CuO 6 building blocks, but it grows incoherently due to lattice misfit of the crystal structures.
In several regions of the vertical interface in Fig. 3E, domain matching epitaxy (DME) of exactly two unit cells of c-214 with seven unit cells of a-113 is observed. A schematic of the crystal structure matching is shown in Fig. 3F, with the interfacial region indicated by horizontal arrows. By knowing the value of a in 113, c in c-214 at the interface can be estimated using DME. Taking the lower a value of 3.803 Å, c in c-214 is calculated to be ~7 × 3.803/2 = 13.310 Å, which is much higher than the bulk value of 13.165 Å. c would be even higher toward the top of the film, where a in a-113 is larger owing to a-113 lattice relaxation with thickness.
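The two back-of-the-envelope numbers used above, the volume-conservation estimate of a in the strained a-113 and the DME estimate of c in c-214 at the interface, can be reproduced with a few lines of arithmetic; the sketch below simply restates the calculation in the text.

```python
# Reproduces the two lattice-parameter estimates quoted in the text.
v_113_bulk = 57.993   # unit-cell volume of fully oxidized 113 (angstrom^3)
a_sto = 3.905         # in-plane lattice parameter imposed by the STO substrate (angstrom)

# Volume conservation: a of a-113 when its other two axes are clamped to STO
a_113_strained = v_113_bulk / a_sto**2
print(f"a(113, strained)  = {a_113_strained:.3f} A")   # ~3.803 A

# Domain matching epitaxy: two c-214 cells match seven a-113 cells along the interface
c_214_interface = 7 * a_113_strained / 2
print(f"c(214, interface) = {c_214_interface:.3f} A")   # ~13.31 A vs. bulk 13.165 A
```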
We now measure the local lattice parameters in a c-214 grain bounded by a-113 and compare these values with the above calculated value. Figure 4A shows a magnified STEM image of Box b of Fig. 3 (C and D). A horizontal line is shown from I to II in Fig. 4A, along which the lattice parameter calculations were made using an open source package, Atomap. Lines I to II represent a region in the c-214 grain that is not overlapped with the a-113 interface. Considering the nonvertical nature of the interfaces and the possible overlapping near the interfaces, points close to the a-113 interfaces on the left-and right-hand side could be influenced by the a-113 lattice. Point I was ~5 nm from the left-hand a-113 interface, whereas point II was ~2 nm from the right-hand a-113 interface. Analyses were undertaken to determine the lattice parameters spatially, as described in detail in Materials and Methods. This enabled the lattice parameters of each unit cell to be measured within ±0.01 Å.
At the region closest to the interface, i.e., in circled regions 1 to 3 in Fig. 4B, c in c-214 is very high, without a being reduced substantially in comparison with the bulk value of 3.794 Å, and this explains the ~50 K superconductivity (19). The enhancement of c by different amounts over a region of ~10 nm agrees with the broadened R (T) and the STM measurements (Fig. 2) and with the ~5% volume fraction of superconducting phase obtained from the magnetization measurements. The average c values in the three different regions of Fig. 4B are determined to be 13.28 ± 0.01 Å (region 1), 13.22 ± 0.01 Å (region 2), and 13.20 ± 0.01 Å (region 3), compared to the bulk value of c = 13.165 Å. The corresponding a values from the three different regions of Fig. 4C are 3.76 ± 0.01 Å, 3.78 ± 0.01 Å, and 3.79 ± 0.01 Å, compared to the bulk value of a = 3.794 Å.
The strong interfacial stretching of c to 13.28 ± 0.01 Å in region 1 is close to the calculated value of 13.310 Å from DME. The slightly smaller value is explained by the value not being measured right at the interface. Moving from the interface into the bulk of the c-214 grain, i.e., from II to I in Fig. 4A, c decays to a value of 13.02 ± 0.01 Å, i.e., lower than the bulk value (c = 13.1646 Å), but it rises again to the bulk value further into the c-214 grain.
a shows an inverse relation to c, as expected from elastic effects. The lattice parameters in c-214 have a spatial variation from the a-113 interface into the bulk of the c-214 grain. From very high c at the interface, there is dipping of c (and corresponding peaking of a) when moving into the c-214 grains. This can be explained by a residual stress state, compensating the strong vertical stretch near the interface. The higher-than-bulk c values will produce regions of higher T C values, and the lower-than-bulk c values will produce regions of reduced T C values. Figure 4D shows T C versus c. The c values were experimentally determined from Fig. 4B. Literature data for bulk 214 and films grown on LSAO (19,32) are also included. The a values determined from Fig. 4C are also included in Fig. 4D. The influence of T C on the out-of-plane distance for cuprates was shown earlier in Fig. 1A, and the analogous T C versus a plot is shown in Fig. 4E, together with the a values measured in Fig. 4C.
In Fig. 4D, for a given c, as one moves diagonally to the left of the diagram to the direction of the blue arrow, T C increases rapidly with a. The yellow, red, and blue zones in Fig. 4D have different a values of 3.75 ± 0.01 Å, 3.77 ± 0.01 Å, and 3.79 ± 0.01 Å, respectively. The slope of the lines has been set at the experimentally determined slope for 214 on LSAO (19). The wider gap between the red and blue zones compared to the yellow and red zones in Fig. 4D is due to the increasing steepness of the T C versus a plot (Fig. 4E) with increasing a.
Studies, so far, have considered the importance of either apical oxygen (and hence c) on T C (Fig. 1A) or a on T C . Figure 4D reveals for the first time the great sensitivity of T C to both c and a, and also indicates that a further enhancement of T C would be possible in 214 if c and a were both further stretched. Hence, if a was optimized to be closer to the optimum 3.84 Å value (Fig. 4E), then this should push T C even higher than the values we have observed in this work. Coherent growth of 214 on a structurally matched substrate with lattice parameters close to 3.84 Å is one possible approach to optimize a, as long as the VAN approach is also used to enhance c at the same time.
Last, this proof-of-concept work showing the importance of expansion of both c and a could be developed to give even larger volume fractions of 50 K superconductor than the ~5% obtained here. For example, by using faster growth rates, finer c-214 inclusions could be made (20 nm or below is desired so that c-214 is within the 10-nm strained interfacial region with the a-113 matrix), rather than the ~50-nm inclusions shown here. Achieving a finer microstructural size while maintaining high-quality epitaxy has been demonstrated previously in other VAN systems, even down to sizes as small as 2 to 5 nm (33).
CONCLUSION
With the aim of vertically strain engineering c (and hence the apical distance) in c axis-oriented La 2 CuO 4+δ (c-214), we grew self-assembled, vertically aligned nanocomposite films of La 2 CuO 4+δ + LaCuO 3 (113) on (001) STO. In a simple one-step process without any post-growth oxygenation using ozone and/or high-pressure oxygen as is normally used, 50 K superconducting material was produced in situ. The VAN approach does not reduce the in-plane lattice, as is normally the case for growth of 214 on standard LSAO substrates. The higher T C phase is consistent with the presence of ~10-nm-wide regions of expansion of the out-of-plane 214 lattice (c parameter) produced at 214/113 vertical interfaces, spaced ~50 nm apart, of which there are billions per film. It is also consistent with the additional effect of a relatively high in-plane lattice (a parameter). T C equals the highest T C of 50 K previously observed at one- to two-unit cell interfaces in ozone-annealed metal/superconductor bilayer heterostructures. Our proof-of-concept work represents an entirely new approach to 3D strain engineering of cuprates, giving the possibility to further enhance T C values in 214 and other cuprate superconductors. This could be achieved by using a different strain-controlling matrix material and/or different growth substrates, as well as larger volume fractions of higher T C superconductor, giving finer (~20 nm or below) 214 inclusions, with minimal lateral strain relaxation.
[Fig. 4 caption (D and E): (19,32). The projected influence of a on the plot [as determined from (E)] is shown schematically by the blue arrow. The yellow dashed line is a guide for 214 on LSAO (19). Yellow, red, and blue zones have the same slope as the guide for 214 on LSAO but with different a. (E) Maximum T C versus a (7,8,27) for optimized cuprates. A strong dependence of T C on a (and hence the in-plane Cu-O distance) is observed with a peak in T C at a ~ 3.84 Å (7,8). Zones A, B, and C from (D) are included on the plot to show the good fit of the experimental data of this work to the "model" plot.]
Target fabrication
We used a composite ceramic target of 214:113 with an atomic ratio (n:m) of 2:3. This ratio was chosen because we aimed to achieve relatively fine nanocolumns of c-214 so that, as much as possible, the column would be strained vertically by 113 without the column relaxing laterally to the bulk value. A smaller fraction of 214, and hence a lower n/m value, would give finer columns (47), but a very small value means that the columns would become discontinuous.
For reference, we made single-material targets of La 2 CuO 4 (214) and LaCuO 3 (113). Ceramic targets were synthesized by conventional solid-state reaction. A stoichiometric mixture of La 2 O 3 and CuO was ground and subsequently calcined at 950°C for 12 hours under oxygen flow. After cooling down the mixture to room temperature, it was ground again, and this process was repeated three times. After this, the mixture was ground and pressed into a 1-inch-diameter pellet disc. The pellet was sintered at 1050°C for 12 hours under oxygen flow.
Thin-film fabrication
More than 10 films (thickness, ~25 to 100 nm) were grown by PLD using a KrF excimer laser at a substrate temperature of 700°C with a repetition rate of 2 Hz and a fluence of 1.2 to 1.5 J cm −2 in 0.13 to 0.14 mbar of flowing O 2 . After growth, unless otherwise stated, the films were annealed in 500 mbar of O 2 at 500°C for 60 min and then slowly cooled afterward (at a rate of 10°C/min) to room temperature while maintaining 500 mbar of O 2 background pressure. The oxygenation conditions for the films are much less oxidizing than standard ozone annealing or high-pressure oxygen (>70 bar), which are normally required to make standard 214 films superconducting (19,30,32). This is expected because 113 is a highly oxidized phase with Cu in the Cu 3+ state. Hence, it has the potential to oxidize 214 in the composite. TiO 2 -terminated (001)-oriented STO and 0.5 weight % Nb-doped STO (Nb:STO) (a = 3.905 Å) single-crystal substrates (CrysTec GmbH) were used.
Characterization of structure and physical properties
Structural analysis of the films was done by XRD analysis using a high-resolution PANalytical Empyrean vertical diffractometer (Cu Kα radiation). Magnetic property measurements were performed using a superconducting quantum interference device (SQUID; Quantum Design) in a temperature range of 10 to 250 K.
Electrical transport measurements were performed on films grown on Nb:STO substrates using a four-probe configuration with two electrical contacts on Nb:STO and two contacts on a Pt electrode deposited on the thin-film surface with direct current magnetron sputtering. The inset of Fig. 2A shows a top-to-bottom measuring geometry.
XPS was performed using a SPECS PHOIBOS 150 electron energy analyzer with a monochromatic Al Kα x-ray source (hν = 1486.6 eV) and a total energy resolution of 500 meV. The BE was calibrated using a polycrystalline Au foil placed in electrical contact with the film surfaces after deposition, which simultaneously helped to avoid charging effects during XPS measurements.
Scanning transmission electron microscopy
An FEI Titan G2 80-200 STEM with a Cs probe corrector, operated at 200 kV, was used in this study. The STEM images were recorded by using a high-angle annular dark-field detector.
Lattice parameter determination
We used real-space image analysis to obtain maps of lattice parameters. The open source package Atomap (https://ascimaging.springeropen.com/articles/10.1186/s40679-017-0042-5, https://pypi.org/project/atomap/) was used to extract atomic column positions by fitting 2D Gaussian functions. Pairwise distances between neighboring columns were used to extract lattice parameters. Calibration was done using the distance between Sr columns of the STO substrate in the same field of view. The lattice parameter of STO was assumed to be 3.905 Å. We found that there was a systematic variation of the out-of-plane lattice parameter in the slow scan (out-of-plane) direction, which was likely caused by drift during image capture. To minimize this effect, profiles were taken in the horizontal (fast scan) direction without any vertical component. To cancel the systematic error in the out-of-plane lattice parameter, we calibrated the measured c using the values obtained from XRD. The averaged c value of c-214 from STEM is 13.15 Å, whereas the value from XRD is 13.14 Å for S3. We then calibrated c measured from STEM by using the difference between these two values. The systematic errors in the in-plane measurements were very small because they were measured in the fast scan direction. It is also important to consider the random errors in both the out-of-plane and in-plane directions. The random error was estimated by determining the range of lattice parameters measured in the STO substrate along the in-plane direction and was determined to be ±0.02 Å. Each data point in Fig. 4 (B and C) was an average of six measured points along the slanted lines of ~10 nm length parallel to the c-214/a-113 interface, as shown in Fig. 4A, giving an overall random error of ±0.02 Å/√6 = ±0.01 Å.
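As a rough illustration of the calibration and averaging described above (not the actual Atomap workflow, whose API is not reproduced here), the following numpy sketch converts pixel spacings between atomic columns into lattice parameters using the STO substrate as an internal calibrant and averages six points taken parallel to the interface; all column spacings in the example are placeholders.

```python
import numpy as np

# Placeholder in-plane column spacings (pixels) measured along the fast-scan direction.
sto_spacings_px = np.array([19.52, 19.55, 19.50, 19.53])   # substrate columns, known a = 3.905 A
film_spacings_px = np.array([18.93, 18.97, 18.90, 18.95, 18.92, 18.96])  # six film points

# Calibrate pixels per angstrom against the STO substrate in the same field of view.
px_per_angstrom = sto_spacings_px.mean() / 3.905

film_a = film_spacings_px / px_per_angstrom          # film lattice parameters (angstrom)
mean_a = film_a.mean()
random_error = 0.02 / np.sqrt(film_a.size)           # +/-0.02 A per point, averaged over 6 points

print(f"a(film) = {mean_a:.3f} +/- {random_error:.2f} A")
```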
Scanning tunneling microscopy
STM and STS measurements were carried out at 4.2 K using a cryogenic scanning tunneling microscope with a Pt-Ir tip operating in a clean He exchange gas environment. The differential conductance dI/dV versus the applied voltage V spectra were acquired with set current and bias-voltage values of 0.1 nA and 10 mV, respectively. A bias voltage of 100 mV, which is larger than the superconducting gap measured for the films, was used to acquire topographic images. The dI/dV spectra were fitted with the Dynes model by assuming unconventional d-wave symmetry of the superconducting order parameter. To determine the spatial variation of superconductivity, a gold capping layer was necessary to achieve good measurement stability over large scan lengths, as shown in (46).
SUPPLEMENTARY MATERIALS
Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/5/4/eaav5532/DC1
Note S1. XPS data showing that O2 annealing (rather than O3 annealing) is sufficient to oxygenate La2CuO4+δ in the nanocomposite films.
Note S2. Conductance atomic force microscopy at room temperature.
Note S3. Dependence of resistance versus temperature on bias current (100 and 500 μA).
Note S4. Correlating tunneling spectra with topography and data reproducibility.
Note S5. Fitting spectra measured in the Andreev spectroscopy regime.
Fig. S1. XPS spectra (VB, O 1s, La 4d, and Cu 2p3/2) for films S4 and S5.
Fig. S2. Atomic force microscopy images at room temperature for film S3.
Fig. S3. R (T) with bias currents of 100 and 500 μA (top plot) and corresponding ZFC M (T) (bottom plot) for film S2.
Fig. S4. Three tunneling spectra measured on sample S3.
Fig. S5. Tunneling spectrum in the Andreev spectroscopy regime and fitting for sample S3.
Reference (48)
"Physics",
"Materials Science"
] |
Image reconstruction exploiting object sparsity in boundary-enhanced X-ray phase-contrast tomography.
Propagation-based X-ray phase-contrast tomography (PCT) seeks to reconstruct information regarding the complex-valued refractive index distribution of an object. In many applications, a boundary-enhanced image is sought that reveals the locations of discontinuities in the real-valued component of the refractive index distribution. We investigate two iterative algorithms for few-view image reconstruction in boundary-enhanced PCT that exploit the fact that a boundary-enhanced PCT image, or its gradient, is often sparse. In order to exploit object sparseness, the reconstruction algorithms seek to minimize the l(1)-norm or TV-norm of the image, subject to data consistency constraints. We demonstrate that the algorithms can reconstruct accurate boundary-enhanced images from highly incomplete few-view projection data.
Introduction
X-ray phase-contrast tomography (PCT) methods [1][2][3][4][5][6] seek to reconstruct three-dimensional (3D) images that depict object features possessing little or no X-ray absorption-contrast. In boundary-enhanced PCT [2,[7][8][9], information regarding the locations of boundaries in the refractive index distribution is sought, rather than an accurate estimate of the refractive index distribution itself [10,11]. Such boundary-enhanced images can, for example, facilitate the delineation of soft tissue structures in biomedical applications or material flaws in nondestructive testing applications.
It has been demonstrated that image reconstruction in boundary-enhanced PCT can be achieved by use of the parallel-beam filtered backprojection (FBP) algorithm [2,7,8] or other analytic reconstruction algorithms [12,13]. In order to avoid image artifacts when employing these algorithms, tomographic measurements must typically be acquired at a large number of view angles [14]. This is highly undesirable because it can yield long data-acquisition times and also may damage the sample due to the large radiation exposure. For these reasons, there remains an important need to develop reconstruction algorithms that can reconstruct accurate boundary-enhanced images in PCT from knowledge of measurement data acquired at a reduced number of tomographic view angles, i.e., few-view measurement data. A natural and effective way to accomplish this is to develop iterative image reconstruction algorithms that exploit a priori information regarding commonly possessed characteristics of boundary-enhanced images.
In this work, we investigate two iterative algorithms for few-view image reconstruction in boundary-enhanced PCT. The image reconstruction algorithms are inspired by the emerging field of compressive sensing [15], and are based on the premise that a boundary-enhanced PCT image, or the gradient of the image, is often sparse. By 'sparse', we mean that the discrete representation of the image possesses a relatively small number of non-zero voxels or other elements that are used to mathematically represent the image. In order to exploit object sparseness, the reconstruction algorithms seek to minimize the ℓ 1 -norm or TV-norm of the image, subject to data consistency constraints. Both algorithms employ a thresholding procedure to promote sparse solutions. We demonstrate that the algorithms can reconstruct accurate boundary-enhanced images from highly incomplete few-view projection data. The proposed algorithms are also demonstrated to produce significantly weaker image artifacts than those produced by a conventional iterative image reconstruction algorithm.
Imaging geometry and data function
We will utilize the parallel-beam tomographic scanning geometry depicted in Fig. 1. However, the results that follow can readily be adapted to the case of spherical wave illumination in the paraxial limit [16,17], i.e., a cone-beam geometry with a small cone-angle. The z-axis of the reference coordinate system (x,y,z) defines the axis of rotation of the tomographic scanning. The rotated coordinate system (x r , y r , z) is related to the reference system by x r = x cosφ + y sinφ, y r = y cosφ − x sinφ, where the tomographic view angle φ is measured from the positive x-axis. A weak phase-amplitude object positioned at the origin is irradiated by a plane-wave U i (x r , z, φ) with wavelength λ, or equivalently wavenumber k = 2π/λ, which propagates in the direction of the positive y r -axis. The intensity of the transmitted wavefield is measured in the (x r , z) plane located at y r = d, and will be denoted by I(x r , z, φ). Unless a phase-object is considered, an additional intensity measurement I 0 (x r , z, φ) on the contact plane y r = 0 will also be acquired. A tomographic data set is obtained by measuring a collection of such intensity measurements for φ ∈ [0,π).
Let δ(x,y,z) ≡ n(x,y,z) − 1 denote the object function, where n(x,y,z) is the object's real-valued refractive index distribution. We will employ the notation δ(r 2 ; z) ≡ δ(x,y,z), where r 2 = (x,y), as a convenient description of a transverse slice (of constant z) of the 3D object function. For a sufficiently small object-to-detector distance d and a weakly absorbing object [16], the data function g(x r , z, φ) defined in Eq. (1) satisfies the imaging model of Eq. (2), where R denotes the 2D Radon transform acting on a plane of constant z of δ(r 2 ; z). We consider that both I(x r , z, φ) and I 0 (x r , z, φ) are measured, and therefore g(x r , z, φ) can be regarded as a known quantity. By use of the commutativity of the Radon and Laplace transforms [18], it can be verified that

g(x r , z, φ) = R{∇ 2 δ(r 2 ; z)},   (3)

where ∇ 2 is the 3D Laplacian operator. Equation (3) represents the imaging model for boundary-enhanced PCT in its continuous form. The image reconstruction task in boundary-enhanced PCT is to determine an estimate of ∇ 2 δ(r 2 ; z) from knowledge of g(x r , z, φ). When g(x r , z, φ) is measured at a large number of view angles φ, this can readily be accomplished, for example, by use of the 2D parallel-beam FBP algorithm [2,7,8]. However, in the case of few-view measurement data, analytic reconstruction methods are known to be ineffective and the use of iterative methods is warranted.
Note that the right-hand side of Eq. (3) corresponds to a stack of 2D Radon transforms of ∇ 2 δ (r 2 ; z) along the z-axis and the coordinate z can be interpreted as a parameter that specifies a transverse slice. This reflects that the 3D imaging model can be described by a collection of 2D ones.
Discrete form of imaging model
When a digital detector is employed, the measured intensity data correspond to an ordered collection of numbers rather than a function of a continuous variable. We will denote the discrete data function as

g[r, s, t] ≡ g(x r = rΔ d , z = sΔ d , φ = tΔ θ ),   (4)

where r and s are integer-valued detector-element indices and t is the tomographic view index. Here, Δ d denotes the detector-element dimension in a square detector array of dimension L × L, and N denotes the number of samples measured in each dimension. The quantity Δ θ denotes the angular sampling interval between the uniformly distributed view angles. The reconstruction algorithms described below can be applied in the case of non-uniformly sampled measurement data as well. Although not indicated in Eq. (4), the measured discrete data will also be degraded by the averaging effects of the sampling aperture.
Because the reconstruction problem is inherently 2D, we will consider the problem of reconstructing a transverse slice of the object function located at z = s n Δ d , which corresponds to the position of a detector row indexed by s = s n . Let the vector g ∈ ℝ M 1 denote a lexicographically ordered representation of g[r, s n , t]. The dimension M 1 is defined by the product of the number of detector row elements and the number of view angles.
Because we will be focusing on iterative reconstruction algorithms, we will also require a discrete representation of the object. An M 2 -dimensional approximate representation of ∇ 2 δ(r 2 ; z = s n Δ d ) can be described as

∇ 2 δ(r 2 ; z = s n Δ d ) ≈ Σ l=1..√M 2 Σ m=1..√M 2 b z [l,m] Ψ l,m (r 2 ),   (5)

where, without loss of generality, √M 2 is assumed to be an integer. In Eq. (5), the expansion coefficients b z [l,m] represent the discrete image values and {Ψ l,m (r 2 ) | l = 1,···,√M 2 , m = 1,···,√M 2 } represent a collection of expansion functions. As discussed in Section 3.1, in this work we will adopt conventional image pixels as the choice for Ψ l,m (r 2 ), and b z [l,m] will represent the L 2 inner product of Ψ l,m (r 2 ) and ∇ 2 δ(r 2 ; z = s n Δ d ). However, it should be noted that other sets of expansion functions [19] could be employed to form a finite-dimensional approximate object representation. The accuracy of the reconstructed tomographic image will generally depend, in a complicated and object-dependent manner, on the choice of expansion functions and reconstruction algorithm [20].
Let b ∈ ℝ M 2 denote a lexicographically ordered representation of the discrete image b z [l,m]. The system of linear algebraic equations that represents the discrete imaging model can be expressed as

g = R̂ b,   (6)

where the M 1 × M 2 system matrix R̂ is a discrete representation of the 2D Radon transform operator R [21]. In practice, R̂ can and should be modified to account for the spatial resolution characteristics of the imaging system and/or other physical factors that are well-characterized.
Few-view boundary-enhanced image reconstruction in PCT
The reconstruction task we consider is to determine an estimate of the boundary-enhanced image b from knowledge of few-view measurement data g. Because in this situation Eq. (6) is typically underdetermined, conventional reconstruction algorithms can produce significant image artifacts. To circumvent this, we will exploit the fact that b or its gradient can often be sparse. The problem of determining sparse solutions to underdetermined linear systems of equations has received much attention in recent years, in large part due to the emerging field of compressive sensing and related signal processing applications [15,[22][23][24][25][26]. Inspired by these works, we have investigated the image reconstruction approaches described below.
Object sparseness in boundary-enhanced PCT
A vector b is said to be sparse if it contains a relatively small number of non-zero elements. Sparseness has been exploited to effectively regularize the ill-conditioned or singular nature of certain classes of discrete linear inverse problems [27,28]. In our study of boundary-enhanced PCT, sparsity will be exploited in two different ways.
In many cases, the boundary-enhanced image ∇ 2 δ(r 2 ; z) will be naturally sparse; i.e., the number of non-zero terms in Eq. (5) will be much less than M 2 . For example, when δ(r 2 ; z) is piecewise constant and pixels are employed as the expansion functions, the non-zero terms in Eq. (5) correspond to locations of boundaries in δ(r 2 ; z), while the expansion functions that correspond to uniform regions will be weighted by a zero value of b z [l,m]. As demonstrated below, by reconstructing an estimate of ∇ 2 δ(r 2 ; z) that is sparse, image artifacts due to data incompleteness can be effectively mitigated. This reconstruction strategy is mathematically formulated in Section 3.2.
It is interesting to note that, in many applications of image processing, a Laplacian operator is explicitly applied to an image in order to establish a sparse image representation [29], which can facilitate the solution of an inverse problem. However, in boundary-enhanced PCT, we observe that the Laplacian operator is implicitly administered by the wave propagation physics. Accordingly, a sparse object representation is automatically embodied by the imaging model as described by Eqs. (3) or (6).
Alternatively, data incompleteness can be mitigated by exploiting the sparseness of the gradient of an object rather than the object itself [15,30,31]. If an object is sparse, its gradient image will also be sparse. However, the converse is not necessarily true. As demonstrated below, by reconstructing an estimate of ∇ 2 δ(r 2 ; z) that possesses a sparse gradient, noise in the measurement data can be effectively mitigated. This reconstruction strategy is mathematically formulated in Section 3.3.
Constrained, ℓ₀-norm minimization
A sparse estimate of b can be obtained by solving the following optimization problem:

b_{ℓ₀} = argmin_b ||b||₀  subject to  ||R̂b − g||₂ ≤ ε,   (7)

where b_{ℓ₀} represents the reconstructed image estimate, ||b||₀ denotes the ℓ₀-norm of b = (b₁, b₂, ···, b_{M₂}), i.e., the number of its non-zero components, and the data error tolerance ε is a parameter of the optimization problem that needs to be specified. The optimization problem in Eq. (7) yields the image estimate with minimum ℓ₀-norm, constrained such that its projection data are within a given ℓ₂-distance ε of the measured projection data. As minimization of the ℓ₀-norm presents a formidable computational challenge, a fruitful strategy has been to relax this norm to ℓ₁ minimization [15,22], or to relax it to the non-convex ℓ_p-norm with p < 1 [32].
For this work, we aim at solving Eq. (7) through a strategy similar to the iterative hard-thresholding (IHT) algorithm of Blumensath and Davies [33], where the sparsity of the image b is fixed to s, the number of non-zero pixels. Consider the optimization problem:

b_s = argmin_b ||R̂b − g||₂  subject to  ||b||₀ ≤ s,   (8)

which finds the s-sparse image that minimizes the data error. Clearly, the data residual ε(s) = ||R̂b_s − g||₂ will depend on s. When s is smaller than the actual number of non-zero pixels, then ε(s) ≫ 0. As s increases, ε(s) will decrease, eventually leveling off at a minimum value permitted by inherent inconsistency in the data. The desired image b_s occurs for a value of s where ε(s) levels off.
Constrained, TV-norm minimization
Solving the optimization problem in Eq. (8) for a series of s-values can effectively find sparse solutions, but it does nothing to control the noise level in the image. Our previous experience with TV-minimization methods in CT [23,30] has shown that they not only find images possessing a sparse spatial gradient but can also effectively denoise the image. In order to introduce some control over noise, one must first decide on a sparsity s* of the solution as described above. Next, we consider the following TV-minimization problem:

b_TV = argmin_b ||b||_TV  subject to  ||R̂b − g||₂ ≤ ε′ and ||b||₀ ≤ s*,   (9)

where ε′ ≥ ε(s*). Here, ||b||_TV ≡ ||∇b||₁ denotes the TV-norm of b, which is defined as the ℓ₁-norm of the discrete gradient image ∇b. The sparsity of the image is constrained to remain at or below s*, and as ε′ increases the resulting image becomes denoised. Because boundary-enhanced images are often sparse, the sparsity constraint is expected to help mitigate artifacts due to data incompleteness. Unlike s*, which depends only on the sparsity of the object, the optimal value of ε′ depends on many factors, such as data quality and object properties such as contrast and structural complexity. As a result, ε′ is an important parameter of the reconstruction algorithm that needs to be varied and optimized on a case-by-case basis, taking into account each particular imaging task.
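For reference, the TV-norm used in Eq. (9) can be computed as the ℓ₁-norm of a discrete gradient image. The sketch below uses forward differences and an isotropic gradient magnitude; the precise discretization in our ASD-POCS implementation may differ, so this is an illustrative assumption rather than a description of that code.

```python
# A minimal sketch of the TV-norm ||b||_TV = ||grad b||_1 for a 2D image,
# using forward differences; whether the gradient magnitude is taken
# isotropically (as below) or component-wise is an implementation choice.
import numpy as np

def tv_norm(img: np.ndarray) -> float:
    dx = np.diff(img, axis=1, append=img[:, -1:])   # horizontal forward differences
    dy = np.diff(img, axis=0, append=img[-1:, :])   # vertical forward differences
    return float(np.sum(np.sqrt(dx**2 + dy**2)))    # l1-norm of the gradient-magnitude image

img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
print(tv_norm(img))    # proportional to the boundary length of the square
```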
Description of image-reconstruction algorithms
The IHT algorithm solves Eq. (7) by alternating between a gradient descent step on the data residual, ||R̂b − g||², and hard thresholding, keeping the s largest elements in the image. For our implementation, the gradient descent step is replaced by projection onto convex sets (POCS). For the problem at hand, we have found empirically that POCS is more efficient than using gradient descent in the IHT algorithm. On the other hand, we offer no proof of convergence for the present variation of IHT. While standard IHT is proven to converge under certain conditions on the isometry constants, it can be demonstrated that the isometry constants of the operator R̂ lie well outside the range for which convergence has been proven. It has, however, been noted in the literature that the derived restricted isometry properties, though sufficient, are far from being necessary conditions. For the IHT-POCS algorithm, detailed below, we have performed extensive testing with computer-simulated phantoms under difficult scanning conditions where only 25 projection views are used. Under conditions of ideal, noiseless data we were able to accurately recover several test phantoms of similar sparsity to the images shown in the results section. We tested robustness of the recovery by varying the starting image, and by designing difficult test phantoms where a component n with the same sparsity, chosen to have a minimum value of ||R̂n||₂, is added to the test image. These simulation results give us confidence in using IHT-POCS for reconstructing data sets with 90 views.
We refer to the present IHT implementation as IHT-POCS. The symbol := is used to represent replacement, i.e., the variable on the left is replaced by the quantity on the right-hand side. The parameter β is the relaxation parameter for the POCS update. No relaxation schedule is used here, so β is set to one, but implementations of POCS often use a relaxation schedule for β; for example, one might use β = 1.0 × 0.99^i, where i indicates the iteration number. The size of the data g is N_d, and in the POCS update each data measurement is indexed by j. The vector (R̂)_j is a row of the system matrix R̂ corresponding to the single data measurement g_j. The operator H_s is responsible for enforcing s-sparseness; the s largest elements, in terms of absolute value, of its argument are kept, and all other elements are set to zero. The IHT-POCS algorithm was run to convergence, meaning that there was no longer any appreciable change in the image. This point generally occurred within 1000 iterations for the results shown below.
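A minimal sketch of the iteration just described is given below, assuming NumPy. It alternates a sequential POCS (ART-style) sweep over the measurements with the hard-thresholding operator H_s; the dense system matrix, the toy problem, and the variable names are illustrative assumptions and do not reproduce the actual implementation used for the results in this paper.

```python
# A minimal sketch of an IHT-POCS-style iteration: a sequential POCS sweep over
# the data followed by hard thresholding that keeps the s largest-magnitude
# pixels. A small dense system matrix is used for illustration only.
import numpy as np

def hard_threshold(b: np.ndarray, s: int) -> np.ndarray:
    """H_s: keep the s largest-magnitude entries of b, zero the rest."""
    out = np.zeros_like(b)
    keep = np.argsort(np.abs(b))[-s:]
    out[keep] = b[keep]
    return out

def iht_pocs(R_hat: np.ndarray, g: np.ndarray, s: int,
             n_iter: int = 200, beta: float = 1.0) -> np.ndarray:
    N_d, M2 = R_hat.shape
    b = np.zeros(M2)
    row_norms_sq = np.sum(R_hat**2, axis=1)
    for _ in range(n_iter):
        # POCS: sequentially project onto the hyperplane of each measurement j.
        for j in range(N_d):
            r_j = R_hat[j]
            b += beta * (g[j] - r_j @ b) / row_norms_sq[j] * r_j
        # Enforce s-sparseness with the H_s operator.
        b = hard_threshold(b, s)
    return b

# Toy usage: recover a 3-sparse vector from an underdetermined system.
rng = np.random.default_rng(0)
R_hat = rng.normal(size=(20, 50))
b_true = np.zeros(50)
b_true[[5, 17, 33]] = [1.0, -2.0, 0.5]
g = R_hat @ b_true
b_rec = iht_pocs(R_hat, g, s=3)
print(np.max(np.abs(b_rec - b_true)))   # expected to be near zero for this easy toy problem
```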
To perform the TV-minimization in Eq. (9) we employ our ASD-POCS algorithm described in Ref. [30], but we alter the POCS portion of the algorithm to incorporate image sparseness.
Our previous implementation of ASD-POCS included steps for enforcing image positivity. As this assumption does not apply here, the positivity-step is removed. Instead, we substitute, in place of the positivity-step, the H s operator to enforce s-sparseness of the image. Strictly speaking, the H s operator does not perform a projection onto a convex set and as a result it is not likely that ASD-POCS converges mathematically to the solution of Eq. (9), but we find empirically that the algorithm is effective and optimality conditions of the resulting image can be checked. Further information on constrained-TV minimization and optimality conditions can be found in Ref. [30]. Here, we refer to this version of ASD-POCS as the IHT-POCS-TV algorithm.
Guidance from simulations in designing the image reconstruction algorithm framework
At first glance, the image-reconstruction problem posed by phase-contrast imaging seems to be an ideal application for recent compressive sensing (CS) algorithms, because the object function is dominated by edges and is therefore typically sparse. The application of CS methods, however, is complicated by two main factors: (1) the inherent ill-posedness of the few-view image reconstruction problem, and (2) CS theory deals with object representation using a finite expansion set. Specifically, on the second point, the underlying object function that we wish to recover is represented in a pixel expansion set, which can only approximately represent the object. The following set of simulations illustrates how we arrived at the algorithms described in the previous section. As the justification of the algorithms is based on simulation, we have performed numerous tests, most of which cannot be presented in this article. The simulations are performed with a phantom composed of randomly placed ellipses of high eccentricity, somewhat resembling, in terms of sparsity, the foam object scanned in the next section. The first set of simulations is performed on an idealization, where the phantom is first pixelized on a 512×512 grid prior to generating the projection data. The second set of simulations generates projection data from analytical ray-ellipse intersection formulas. As will be seen, the discretization of the image array involves a significant approximation, and its impact can exceed that of noise.
Discrete, ellipse phantom
The first set of simulations involves the same scanning configuration used later with actual experimental data. The projection data were computed as a 2D discrete Radon transform acquired on a 2048-bin detector, and few-view reconstruction is studied with 90 projection views taken at equal intervals over 180°. The ellipse phantom shown in Fig. 2 is discretized on a 512×512 array. The resulting test image has a sparsity of 16,257, or approximately 2¹⁴, non-zero pixels. We first investigate the restricted isometry property (RIP) for R̂ with the above system parameters, then demonstrate the exact recovery of this test image using IHT-POCS.
The restricted isometry properties of the Radon transform-
The RIP studies presented here have two purposes: (1) to put the present image-reconstruction problem in the context of the state-of-the-art CS algorithms, and (2) to help generate difficult test phantoms with which to explore robustness of the image-reconstruction algorithms. It is clear that solving Eq. (6) may admit a large number of solutions, but the null-space of R̂ may be empty when the set of images on which R̂ operates is restricted to only those with a sparsity s. In fact, the sparsity restriction will, in general, make R̂ closer to an isometry: an operator that does not change the size of f_s, where f_s represents an image of sparsity s. One of the central properties of an operator that CS makes use of is the isometry constant δ_s, which is a number for which the following inequalities hold

(1 − δ_s) ||f_s||₂² ≤ ||R̂ f_s||₂² ≤ (1 + δ_s) ||f_s||₂²   (10)

for all images f_s [34]. A smaller δ_s leads to better recovery; a value of δ_s = 1 means that there is an s-sparse image in the null space and exact recovery is impossible. For proving convergence of algorithms, it is often important to consider isometry constants of multiples of s, such as δ_2s. This is because the difference of two s-sparse images will in general be 2s-sparse.
Two problems associated with practical use of the RIP are that isometry constants are not invariant to scalings of the system matrix R̂ and that finding isometry constants for all but certain classes of random matrices can be computationally prohibitive; one essentially has to search all s-sparse images. The search issue for the present problem can be simplified, using the fact that the Radon transform has the greatest difficulty in distinguishing neighboring pixels.
To assess isometry constants for the Radon transform, we design a search using only neighboring pixels. The scaling issue can be handled by obtaining a distribution of σ_s ≡ ||R̂f_s||/||f_s|| and multiplying R̂ by a constant that yields the minimum δ_s, where the largest and smallest scaled values straddle 1.0.
To find an estimate of δ_s, we start by assessing δ₁, which can be done simply by projecting all images with one non-zero pixel (the one-pixel images are normalized to unity so that the denominator in the definition of σ₁ is not needed), finding the image that shrinks the most under R̂,

f_min^(1) = argmin_p ||R̂ p||,   (11)

and the one that expands the most under R̂,

f_max^(1) = argmax_p ||R̂ p||,   (12)

where p ranges over all unit one-pixel images. Correspondingly, we have σ_min^(1) = ||R̂ f_min^(1)|| and σ_max^(1) = ||R̂ f_max^(1)||. With the constant scaling applied, the isometry constant for s = 1 follows from these two extreme values. The result for δ₁ is accurate because it is feasible to search all one-pixel images. The following iterative procedure, however, yields approximate estimates of δ_s that are less than or equal to the true values. Given f_min^(s−1), we find f_min^(s) by solving

f_min^(s) = argmin_{α,p} ||R̂ (f_min^(s−1) cos α + p sin α)||,   (13)

where α varies in [0, 2π) and p represents all unit one-pixel images whose non-zero pixel is both located at a zero-valued pixel of f_min^(s−1) and adjacent to a non-zero pixel of f_min^(s−1). Note that the search combination with the trigonometric functions automatically preserves normalization. The image f_max^(s) can be searched for in a similar manner, and the δ_s estimate is found from the corresponding values of σ_min^(s) and σ_max^(s). The above procedure gives a lower bound on δ_s, as the search space is restricted and the one-parameter optimization in Eq. (13) is also limited; lower norms of R̂f_s could likely be found by allowing variation of the coefficients of all non-zero pixels in the test image.
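The s = 1 step of this procedure amounts to examining the column norms of R̂. The sketch below, assuming NumPy, computes σ_min^(1) and σ_max^(1) and a δ₁ estimate; the closed-form choice of scaling constant (the one that makes the scaled squared extremes straddle 1.0 symmetrically) is an assumption consistent with the description above, not necessarily the exact rule used to produce Fig. 3.

```python
# A minimal sketch of the s = 1 isometry-constant estimate: sigma_1 for a unit
# one-pixel image is the norm of the corresponding column of R_hat. The scaling
# rule used to compute delta_1 below is an assumption consistent with the text.
import numpy as np

def delta_1_estimate(R_hat: np.ndarray) -> float:
    col_norms = np.linalg.norm(R_hat, axis=0)     # sigma_1 for every one-pixel image
    sig_min, sig_max = col_norms.min(), col_norms.max()
    c2 = 2.0 / (sig_min**2 + sig_max**2)          # squared scaling constant applied to R_hat
    return max(1.0 - c2 * sig_min**2, c2 * sig_max**2 - 1.0)

rng = np.random.default_rng(1)
R_hat = np.abs(rng.normal(size=(90 * 32, 16 * 16)))   # toy stand-in for a small Radon matrix
print(delta_1_estimate(R_hat))
```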
For the present configuration with the 512×512 image, we carried out the δ_s estimation up to s = 100. This is well short of the sparsity of the ellipse phantom, but far enough to make an important point. In Fig. 3, the estimated isometry constant is plotted as a function of sparsity. The obvious feature of this graph is that the isometry constants are large. An article presenting the CS algorithm GraDeS [35], similar to IHT, has a concise summary table of CS algorithms and of the isometry constant values for which these algorithms are proven to converge to the exact solution under noiseless conditions. The largest of these constraints, for algorithms that can be applied to the present tomographic system, is δ_2s < 1/3. This value is exceeded already at very low sparsity. On the other hand, it is known that the RIP is a sufficient condition, not a necessary one, and it appears that the gap between the two is quite large. Thus, at this point in the development of CS algorithm applications to tomography, the only course of action is to perform exhaustive simulation. A side benefit of performing the RIP analysis above is that we can use the images f_min^(s) and f_max^(s) as test phantoms, shown in Fig. 4, to increase the difficulty of reconstruction.
Accurate image reconstruction with IHT-POCS, and algorithm stress tests-One of the basic tests of any CS algorithm involves exact image recovery from ideal data.
We have performed multiple tests of this sort under many conditions and with many different phantoms. We show here some of these results. For the first set of tests the ellipse phantom is discretized on the 512×512 image array, and from this image the projection data are generated with exactly the same discrete projection operator as used in the reconstruction algorithms. The results of the IHT algorithm with μ= 2.0 and the IHT-POCS algorithm are shown in Fig. 5. For both algorithms the hard-thresholding employed the exact sparsity of the phantom, because this test attempts to demonstrate recoverability of the exact image. Although the plots show convergence numbers up to 1000 iterations, the IHT-POCS and IHT algorithms were run to 1000 and 5000 iterations, respectively, to the point where the change in error went to zero. The plots in Fig. 5 show faster convergence for IHT-POCS than that of IHT for this example. For the present POCS implementation, we employed sequential access of the data, but it may be possible to accelerate convergence further by adopting other data access strategies [36][37][38].
From Fig. 5 it is clear that image recovery by IHT-POCS is highly accurate even though, as was demonstrated above, the isometry constant for the present phantom's sparsity is quite close to unity. As we are operating IHT well outside of its proven range of isometry constant values, we do not expect it to yield an accurate reconstruction. The IHT results show recovery of the ellipse boundaries, but there is significant speckle noise overlaying these structures. The performance of IHT-POCS is surprisingly good, and we conjecture that it may be possible to prove exact convergence for a large range of isometry constants or based on some other CS principle.
To test robustness of the IHT-POCS algorithm, we varied the starting image, using uniform zero values and random values. In each case the image recovery was highly accurate, to numerical precision. We also performed reconstructions with IHT and IHT-POCS on the ellipse phantom in Fig. 4, with f_min^(s) and f_max^(s) added. The results in Fig. 6 again show a surprisingly good recovery with the IHT-POCS algorithm. Note that the blown-up region of the image, showing reconstructions of the added component at the lower left of each panel, shows remarkable recovery by IHT-POCS, while the same component of the phantom is practically invisible to IHT.
Again, we stress here that these simulations are just that. We cannot generalize such conclusions to arbitrary system matrices, but we are confident from multiple simulations that, for discrete Radon transform data on discrete sparse objects, IHT-POCS is a useful and robust algorithm. The results of this section also point to a few interesting theoretical problems: is it possible to find exact isometry constants of the discrete Radon transform, and is it possible to prove exact recovery with IHT-POCS, and under what conditions?
Continuous, ellipse phantom
This section describes some important issues in the application of CS to Radon transform inversion. The above example points out one of these issues. The highly accurate image reconstruction, above, was performed on data generated from the discretized ellipse phantom. When the ellipse phantom data are generated by applying the continuous operator R, even when no noise is introduced, the projection data are not consistent with the 512×512 image matrix. The magnitude of the resulting inconsistency can be larger than that due to signal noise. In this section, we discuss the impact of this type of inconsistency, the motivation of going to IHT-POCS-TV, and how we view the role of the image-reconstruction algorithm.
Image reconstruction on the 512×512 grid-
The results of applying IHT and IHT-POCS to data generated by the continuous Radon transform are shown in Fig. 7. Both algorithms are able to recover the basic structure of the phantom, but it is clear that there are artifacts in both images. The comparison between IHT and IHT-POCS also illustrates how each algorithm handles data corruption. The fact that the main structures are clearly visible is evidence of the robustness of each algorithm, and the lack of streak artifacts is noteworthy. The absence of streak artifacts clearly depends on the fact that we input the correct sparsity into each algorithm, so a good estimate of image sparsity is important for streak-artifact reduction and is part of the proposed algorithm.
The speckle noise present in the images, however, can obscure small structures. That each of the algorithms exhibits speckle noise is not surprising, because there is no element in either IHT or IHT-POCS that controls variations amongst neighboring pixels. An obvious way to reduce this type of data inconsistency would be to decrease the pixel size in order to better approximate continuous objects. Another motivation for increasing the number of pixels is to fully utilize the resolution of the 2048-bin projections. Implementing this modification, however, leads to fundamental difficulties that require us to change the sparsification principle.
5.2.2. Going to larger grid sizes-In order to be able to reconstruct images of dimension 2048×2048 pixels, we employ the above-mentioned IHT-POCS-TV algorithm. Using a pixel basis, the problem with exploiting pixel sparsity is that the number of non-zero pixels scales inversely with the square of the pixel width for 2D images. Sure enough, embedding the ellipse phantom into a 2048×2048 array yields 260,513 non-zero pixels, roughly a factor of 16 increase over the 512×512 embedding. This number of non-zero pixels already exceeds the number of measurement rays, 90×2048 = 184,320. Thus exact recovery is impossible using pixel sparsity. This issue with projection data from continuous objects complicates the oft-made claim in the CS literature about sub-Nyquist sampling [39].
Exploiting, instead, sparsity in the image gradient has three advantages: (1) the sparsity at each level of embedding is lower, at least for this type of piece-wise constant phantom, (2) the sparsity scaling with image dimension is linear in the pixel width instead of quadratic, and (3) algorithms for TV-minimization do penalize large variations between neighboring pixels, so when scanning conditions do not meet the conditions of exact recovery, images with reduced speckle noise can still be obtained.
Our previous algorithm for constrained TV-minimization in Ref. [30] cannot be applied directly and has to be modified. The previous application to CT image reconstruction included a positivity constraint. That constraint cannot be used here, because the object function is the Laplacian of the refractive index distribution, which can take on negative values. If we are able to determine a ball-park estimate of object sparsity, we can effectively use a sparsity constraint with the TV-minimization to control streak artifacts in the image. Although we cannot expect IHT or IHT-POCS to recover an accurate 2048×2048 image, we may be able to use these algorithms to estimate image sparsity. Fig. 8 shows the data error as a function of the sparsity parameter for IHT-POCS. For this image resolution, we know that the sparsity is slightly less than 2¹⁸, and in the plot we see that it is around such a value that there is a clear change in slope of the data error as a function of image sparsity. Not only does this plot suggest a range of sparsities s to explore, but it also yields lower bounds on ε for IHT-POCS-TV, as seen from the optimization problem, Eq. (9), which it is designed to solve approximately.
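The sparsity sweep can be organized as a small wrapper around any s-sparse reconstruction routine, such as the IHT-POCS sketch above. In the sketch below, the rule for declaring that ε(s) has leveled off (a relative-decrease threshold) is an assumption; in practice the choice was made by inspecting the plot in Fig. 8.

```python
# A minimal sketch of the sparsity sweep: run an s-sparse reconstruction for a
# range of s values, record eps(s) = ||R_hat b_s - g||_2, and locate the value
# where the residual stops decreasing appreciably.
import numpy as np

def sparsity_sweep(reconstruct, R_hat, g, s_values):
    """Record the data residual eps(s) for each candidate sparsity s."""
    residuals = []
    for s in s_values:
        b_s = reconstruct(R_hat, g, s)          # e.g. the iht_pocs sketch shown earlier
        residuals.append(np.linalg.norm(R_hat @ b_s - g))
    return np.array(residuals)

def leveling_off_point(s_values, residuals, rel_tol=0.05):
    """Return the first s beyond which eps(s) stops decreasing appreciably (assumed rule)."""
    rel_drops = -np.diff(residuals) / np.maximum(residuals[:-1], 1e-30)
    flat = np.where(rel_drops < rel_tol)[0]
    return s_values[flat[0]] if flat.size else s_values[-1]
```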
The way we view the application of IHT-POCS-TV is that s and ε are parameters of the algorithm that need to be explored to find optimal settings for a particular application. The sparsity parameter s is used to control streak artifacts, while ε controls image roughness (larger ε allows greater data error to achieve lower image TV). The plot in Fig. 8 is used as a guide to help choose appropriate values of ε and s.

6. Investigation of algorithm performance using experimental data

6.1. Experimental data
An experimental investigation of boundary-enhanced PCT was conducted at the 2-BM microtomography beamline at the Advanced Photon Source (Argonne National Laboratory). A detailed description of this imaging system has been published elsewhere [14]. A piece of foam was imaged with λ = 1.0 × 10⁻¹⁰ m, and untruncated PCT projection data were acquired at 1440 view angles. A few-view data set was obtained by keeping only 90 views that were evenly spaced over the interval [0, 180°). The detector contained 2048 × 2048 pixels with an effective pixel dimension of 6.5 microns.
Implementation of reconstruction algorithms
We numerically implemented the POCS, IHT-POCS, and IHT-POCS-TV reconstruction algorithms described above and employed them to reconstruct boundary-enhanced images from few-view measurement data. As a benchmark, we also reconstructed images from the complete data set containing 1440 view angles by use of the FBP algorithm. All images were reconstructed on a 2048 × 2048 Cartesian grid with pixel dimension of 6.5 microns.
The performances of the IHT-POCS and IHT-POCS-TV algorithms are strongly influenced by the choice of the threshold parameter s. We adopted a systematic method for determining an effective value for this parameter, as described in Section 5.2.2. Estimates of the image b were reconstructed assuming different values of s. From knowledge of the reconstructed b, the data residual was computed as ε_b = ||R̂b − g||₂ and plotted as a function of s, as shown in Fig. 9. A relatively small value of s is sought, due to our knowledge that b is generally sparse. However, if the chosen value is too small, the data residual will sharply increase, indicating that b could not be accurately represented by the number of pixels that were permitted to take on non-zero values. For the object we considered in our experimental studies, Fig. 9 suggested that s = 2¹⁸ specified a reasonable compromise between object sparsity and representation error.
Reconstructed images
Reconstructed boundary-enhanced images corresponding to a transverse slice through the object are displayed in Fig. 10. The image reconstructed from the complete data by use of the FBP algorithm is shown in Fig. 10-(a). The images reconstructed from the few-view data by use of the IHT-POCS and IHT-POCS-TV algorithms do not contain the conspicuous artifacts produced by the POCS algorithm and contain most of the object features that are visible in the image reconstructed from the complete data by use of the FBP algorithm [Fig. 10-(a)]. The images show some variation in the grey-level scalings, which is due to differences in how the discretization error is propagated through each algorithm. Although the image produced by the IHT-POCS algorithm clearly depicts the image boundaries, it does contain some perceptible numerical errors, such as isolated bright spots in the background. Such artifacts are not present in the image produced by the IHT-POCS-TV algorithm, because they would increase the value of the image TV, which is sought to be minimized by the algorithm.
Regions-of-interest within the reconstructed images corresponding to two additional transverse slices are shown in Figs. 12 and 13. The sizes of the displayed regions are 500×900 and 600×350 pixels, respectively. In each figure, the image in subfigure (a) was reconstructed by use of the FBP algorithm from the complete data, and the images in subfigures (b) and (c) were reconstructed by use of the IHT-POCS and IHT-POCS-TV algorithms from the few-view data. The boundary-enhanced images reconstructed by use of the IHT-POCS and IHT-POCS-TV algorithms from few-view data contain almost all of the object features that are visible in the image reconstructed by use of the FBP algorithm from the complete data. Moreover, as observed and explained above, streak-like and other image artifacts are very effectively mitigated by the IHT-POCS-TV algorithm.
Summary
We have proposed and investigated two iterative algorithms for few-view image reconstruction in boundary-enhanced PCT. The image-reconstruction algorithms are based on the premise that a boundary-enhanced PCT image is typically sparse or possesses a sparse gradient image. In order to exploit object sparseness, the reconstruction algorithms seek to minimize the ℓ₁-norm or TV-norm of the image, subject to data consistency constraints. Both algorithms employ a thresholding procedure to promote sparse solutions. To our knowledge, this is the first attempt to mitigate data incompleteness in PCT by use of object sparsity constraints.
By use of experimental data, we demonstrated that the algorithms can reconstruct accurate boundary-enhanced images from highly incomplete few-view projection data. The proposed algorithms were also demonstrated to produce significantly weaker image artifacts than those produced by a conventional iterative image reconstruction algorithm. The proposed reconstruction algorithms will benefit applications of boundary-enhanced PCT by permitting a significant reduction in data-acquisition times and minimizing the exposure of the sample to damaging radiation. In future studies, the reconstruction algorithms can be investigated for use in boundary-enhanced PCT employing polychromatic sources [9] and in applications that involve other types of data incompleteness.

Figure captions:
The imaging geometry of in-line phase-contrast tomography.
Phantom composed of ellipses of high eccentricity that roughly resembles, in terms of sparsity, the object in the experimental results.
Estimated values of the isometry constants δ_s for the Radon transform with 90 equally-spaced views and 2048 detector bins per projection, using 512×512 pixels to represent the scanned object. As only a restricted search is performed, these values are interpreted as a lower bound on the true isometry constants.
Ellipse phantom made more challenging by adding the test images from the RIP analysis (bottom left and top left). These images are added with a similar total energy as that of the original ellipse phantom.
Reconstructions of the ellipse phantom by the IHT-POCS (left) and IHT (middle) algorithms, and semi-log plots of the data and image error of both algorithms.
Reconstructions of the ellipse phantom, with the RIP test images added, by the IHT-POCS (left) and IHT (right) algorithms.
Image reconstructions of the ellipse phantom by the IHT-POCS (left) and IHT (right) algorithms. For these results the data are generated from the continuous Radon transform of the ellipse phantom.
A plot of data residual vs. image sparsity for reconstruction of the ellipse phantom on a 2048×2048 grid using the IHT-POCS algorithm.
A plot of data residual vs. threshold parameter employed in the IHT-POCS and IHT-POCS-TV algorithms.
550×900 pixel regions-of-interest positioned near the center of the images in Fig. 10-(a)-(d) are displayed in subfigures (a)-(d).
600×350 pixel region-of-interest in reconstructed images corresponding to a slice of constant z. The image reconstructed from 1440 tomographic views by use of the FBP algorithm is contained in subfigure (a). Images reconstructed from 90 tomographic views by use of the IHT-POCS and IHT-POCS-TV algorithms are displayed in subfigures (b) and (c).
"Computer Science",
"Mathematics"
] |
MicroRNAs Which Can Prognosticate Aggressiveness of Bladder Cancer
Bladder cancer (BC) is still characterized by a very high death rate. One of the reasons for this is the lack of adequate markers which could help determine the biological potential of the tumor to develop into its invasive stage. It has been found that some microRNAs (miRNAs) correlate with disease progression. The purpose of this study was to identify which miRNAs can accurately predict the presence of BC and can differentiate low grade (LG) tumors from high grade (HG) tumors. The study included 55 patients with diagnosed bladder cancer and 30 persons belonging to the control group. The expression of seven selected miRNAs was estimated with the real-time PCR technique, with miR-103-5p used for normalization of the results. Receiver operating characteristics (ROC) curves and the area under the curve (AUC) were used to evaluate the feasibility of using selected markers as biomarkers for detecting BC and discriminating non-muscle invasive BC (NMIBC) from muscle invasive BC (MIBC). For HG tumors, the relevant classifiers are miR-205-5p and miR-20a-5p, whereas miR-205-5p and miR-182-5p are the relevant classifiers for LG (AUC = 0.964 and AUC = 0.992, respectively). NMIBC patients with LG disease are characterized by significantly higher miR-130b-3p expression values compared to patients with HG tumors.
Introduction
Bladder cancer (BC) is characterized by a high rate of non-muscle invasive BC (NMIBC) at the moment of diagnosis (75-80%) [1,2]. Transitional cell carcinoma (TCC) constitutes the majority of the urothelial carcinoma of the bladder. Two alternative molecular pathways of BC development have been described, characterized by different genetic changes and different biological potentials. The first gives rise to papillary lesions of an invariably non-invasive character, while the other can produce either papillary or non-papillary tumors and is often invasive (into the lamina propria, stage T1, or the muscularis propria, stage T2) [3,4]. Patients suffering from muscle invasive BC (MIBC) at the moment of the initial diagnosis are treated with radical cystectomy (RC). This is not the optimal solution, as patients' quality of life after RC is low and a high rate of relapse and death has been observed within a short period of time after the operation [5,6]. As far as patients with NMIBC are concerned, it is impossible to predict which of them will have disease progression. In consequence, they undergo systematic cystoscopy examinations aimed at assessing the disease development stage. This also decreases patients' quality of life (time in hospital, stress, uncertainty connected with another examination) and generates enormous costs for the health care system [7]. MicroRNAs (miRNAs) are known to be dysregulated in bladder cancer (BC) and implicated in the pathogenesis of bladder tumors, mostly via their influence on genes involved in the two molecular pathways, specifically the gene encoding fibroblast growth factor receptor 3 (FGFR3) and the gene encoding tumor protein 53 (TP53). Numerous miRNA studies have identified histological grade and stage (pT) classification-dependent miRNA expression and have proven the existence of miRNA alterations related to the two divergent pathways found in the development of NMIBC and MIBC [8,9]. Only a few studies have analyzed miRNA as a prognostic and predictive biomarker [10][11][12]. Each miRNA can have multiple targets, and changes in their expression profile could have a magnified effect on cellular phenotype. Previously published studies emphasize the possible prognostic potential of some miRNAs to predict progression and disease-specific or overall survival in BC patients. Unfortunately, none of these miRNAs are used in routine practice. This is the result of quite a few factors: using different platforms for assessing marker expression, using various biological samples (tissue or cell lines) secured in different ways (paraffin, RNAlater, freezing), and using various normalization methods and reference genes [13]. Some analyses are based on relative expression and others are based on absolute expression. Finally, these factors also include the lack of a control group. That is why we decided to evaluate the expression of selected miRNAs in an adequately selected group of both NMIBC and MIBC patients characterized by a high rate of observed progression.
In tumors, downregulated miRNAs are considered to be tumor suppressor candidates, whereas miRNAs with increased expression may play a promotional role in cancer progression. Potential BC suppressors include miR-100, miR-99a, miR-202, and miR-30a. Some miRNAs, including miR-145-5p (locus on chromosome 5), miR-195, and miR-199a-5p, have been shown to inhibit the proliferation of or induce the apoptosis of BC cells [14]. MiR-145-5p appears to play a key role as a tumor suppressor by targeting N-cadherin and its downstream effector matrix metalloproteinase-9 (MMP9), and it is the most frequently reported downregulated miRNA in BC. MiR-205-5p (locus on chromosome 1), miR-182-5p (locus on chromosome 7), mir-130b-3p (locus on chromosome 22), miR-10a-5p and miR-21-5p (loci on chromosome 17), and miR-20a-5p (locus on chromosome 13) are mainly overexpressed in BC tissue. They promote proliferation, migration, and invasion, and they inhibit BC cell apoptosis. The potential target/regulator for miR-130b-3p and miR-205-5p is the PTEN gene (phosphatase and tensin homolog); for miR-182-5p, it is the SMAD4 gene (drosophila protein, mothers against decapentaplegic homolog 4); and for miR-10a-5p, it is the FGFR3 gene [15]. miR-21-5p overexpression is related to TP53 inactivation, invasion, and tumor progression. It has been seen to be simultaneously upregulated in the tissue, plasma, and urinary exosomes of BC patients, but its role needs further elucidation. However, there are still conflicting results in the literature regarding the function of miRNAs, so, for our analysis, we chose a panel of the best-described miRNAs for BC and the miRNAs connected with genes or chromosomes whose genetic alterations are well documented in the pathogenesis of BC [14,15].
Materials and Methods
The tested group consisted of tumor tissue samples stored in the tissue bank in the Clinical Genetics Department, the Chair of Clinical and Laboratory Genetics, Medical University of Lodz. The tumor tissues were obtained during the TURBT (transurethral resection of bladder tumor) procedure at the Urology Ward of the University Clinical Hospital Military Memorial Medical Academy in Lodz. Official permission to conduct the tests was granted by the Bioethics Advisory Commission at Lodz Medical University, No. RNN/62/15/KE/M, and the patients signed consent forms. The tumors selected for RNA isolation were submerged in an RNAlater solution (Sigma) and stored at −20 °C until isolation. The tests were carried out on a group of 55 patients with diagnosed bladder cancer. The clinical and pathological characteristics of the cohorts are summarized in Table 1 and Supplemental Table S1. All the tumors were of urothelial origin. Only samples with more than 60% tumor content were included in the study. The age range was 44-88 with an average age of 72.8. The majority of the patients were male (45/55; 81.7%). Nineteen patients (34.55%) in the group suffered [16,17]. Progressive disease was defined as progression to stage T2 or higher, the development of nodal or distant metastases, or death. The control group consisted of 30 patients admitted to the urology ward. They underwent control cystoscopy aimed at confirming or excluding tumor changes in the bladder. The examination did not reveal any tumor changes.
A MirVana™ miRNA Isolation Kit (Life Technologies, Cat No. 1560, Foster City, CA, USA) was used to isolate microRNA from the frozen tumor tissues. The whole procedure was carried out in accordance with the instructions of the producers. Briefly: 1 mL of Lysis/Binding buffer was added to each sample (1 mL per 0.1 g of tissue) and homogenized. After that, 100 µL of miRNA Homogenate Additive was added to the sample and incubated for 10 min on ice. Next, 1100 µL of acid-phenol:chloroform was mixed with the sample and centrifuged (5 min at 10,000× g). The aqueous phase was transferred to a fresh tube and vortexed with 200 µL of 100% ethanol. The lysate/ethanol mixture was pipetted onto the filter cartridge and centrifuged (15 seconds at 10,000× g). The filtrate was collected, and the step was repeated. After that, 400 µL of 100% ethanol was added to the filtrate, pipetted onto a new filter cartridge, and centrifuged under the same conditions. Two washing steps were conducted: (1) 700 µL of miRNA Wash Solution 1 was applied to the filter cartridge, and (2) 500 µL of Wash Solution 2 and 3 were applied to the filter cartridge; this was repeated twice (at each step, samples were centrifuged for 15 seconds at 10,000× g). In the last step, 70 µL of the preheated (95 °C) elution solution was applied to the filter cartridge, which was then spun for 30 seconds at 16,000× g. The collected eluate was stored at −20 °C. An additional DNase digestion step was performed. The obtained microRNA concentrations were monitored using the spectrophotometric method on the NanoDrop® ND-1000 instrument (NanoDrop Technologies, Wilmington, DE, USA). The purity of the obtained extracts was assessed using the A260/230 and A260/280 ratios. It is accepted that for good quality nucleic acids, these ratios are, respectively, 1.8-2.2 and 1.8-2.0. The measurement results of the samples selected for further analysis met the required criteria. The purity of the samples was also verified using a Qubit microRNA Assay Kit (Invitrogen, Cat No. Q32880). For reverse transcription, 10 ng of RNA was taken. MiRNAs (hsa-mir-10a, hsa-mir-20a, hsa-mir-21, hsa-mir-130b, hsa-mir-145, hsa-mir-182, hsa-mir-205, and hsa-mir-103) for 55 samples were reverse transcribed using a TaqMan MicroRNA Reverse Transcription Kit (Applied Biosystems Cat No. 4366596) and a 50 nM pool of miRNA-specific stem-loop primers (Applied Biosystems Cat No. 4427975; details and ID of assays specified in Supplemental Table S2) following the manufacturer's protocol (100 mM dNTPs 0.15 µL, MultiScribe™ Reverse Transcriptase, 50 U/µL, 1 µL, 10× Reverse Transcription Buffer 1 µL, RNase Inhibitor, 20 U/µL, 0.19 µL, and nuclease-free water 4.16 µL). The reaction mixtures were incubated at 16 °C for 30 min, at 42 °C for 30 min, and at 85 °C for 5 min (Applied Biosystems MicroAmp Optical 96-well reaction plate Cat No. N8010560, MicroAmp optical adhesive film Cat No. 4311971), and then the products of the reaction were stored at −20 °C until use. Purity and quantity were verified using a Qubit dsDNA HS assay kit (Invitrogen, Cat No. Q32851). Real-time polymerase chain reactions (rt-PCR) were performed on a CFX96 instrument (BioRad, Hercules, CA, USA), with related documentation prepared with regard to the specific items of the MIQE guidelines (Supplemental Table S3) [18]. Each sample was run in duplicate at a final volume of 18 µL containing 10 µL of TaqMan 2× Universal PCR Master mix II with no UNG (Applied Biosystems Cat No.
4440040), 7 µL of nuclease-free water, and 1 µL of TaqMan® Small RNA Assay (20×). Each PCR run included a no-template control, and all of them were negative. The reaction was heated to 90 °C for 10 min, 55 °C for 2 min, and 72 °C for 2 min, followed by 50 cycles. The mean threshold cycle value (Ct) was used for downstream analyses. miR-103-5p was chosen as an endogenous control. The ∆∆Ct method, also defined as the comparative method, was applied in order to determine the expression level of the examined microRNAs [19]. This method is based on mathematical calculations that enable us to determine the relative difference in the expression level of the tested marker between unknown samples and the reference. The first stage consists of determining the Ct (the cycle at which the fluorescence signal reaches a set threshold) in the amplification reaction of the examined microRNAs and the control microRNA for both the tested and the control groups. The calculated expression level of each patient was normalized against the endogenous control, which was miR-103a-5p [20]. After that, the difference between the tested and control microRNAs (∆Ct) was calculated for individual samples.
∆Ct (tested group) = Ct miRNA target − Ct miRNA reference
∆Ct (control group) = Ct miRNA target − Ct miRNA reference
Next, ∆∆Ct was calculated for each sample:
∆∆Ct = ∆Ct (tested sample) − ∆Ct (median of the control group)
The normalized value of the relative expression level (FC) of the tested marker in the tested sample against the control sample was then calculated as:
FC = 2^(−∆∆Ct)
The 2^(−∆∆Ct) method assumes a uniform PCR amplification efficiency of 100% across all samples. In our study, the efficiency was between 98.9% and 100%.
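A minimal sketch of this calculation, assuming pandas is available, is given below. The column names and Ct values are illustrative and are not taken from the study data.

```python
# A minimal sketch of the 2^(-ddCt) fold-change calculation described above.
# Column names ('Ct_target', 'Ct_ref') and values are illustrative only.
import pandas as pd

tested  = pd.DataFrame({'Ct_target': [27.1, 30.4, 25.8], 'Ct_ref': [22.0, 22.5, 21.8]})
control = pd.DataFrame({'Ct_target': [29.0, 29.6, 28.7], 'Ct_ref': [22.1, 22.3, 21.9]})

d_ct_tested  = tested['Ct_target']  - tested['Ct_ref']    # dCt per tested sample
d_ct_control = control['Ct_target'] - control['Ct_ref']   # dCt per control sample

dd_ct = d_ct_tested - d_ct_control.median()               # ddCt against the control-group median
fold_change = 2.0 ** (-dd_ct)                              # assumes 100% PCR efficiency

print(fold_change.round(2))
```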
Data Analysis
The statistical calculations were made using the program STATISTICA 13, Stat-Soft Inc. Differential miRNA expression between bladder cancer cases and controls was initially assessed with Student's t-test; however, because the distributions of the variables departed from the normal distribution, non-parametric tests were applied. Unrelated variables were compared with the Mann-Whitney U test. The value p < 0.05 was accepted as the threshold of statistical difference or correlation significance. Kaplan-Meier analyses with a log-rank test and Cox regression were performed for overall survival time (OS), time to recurrence, and time to progression. The discriminating capacity of miRNAs was assessed by a receiver operating characteristics (ROC) analysis.
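A minimal sketch of the group comparison and ROC analysis, assuming SciPy and scikit-learn are available, is given below. The fold-change vectors are synthetic stand-ins for one miRNA measured in 55 BC patients and 30 controls, and the multivariable model is a generic logistic regression rather than the exact backward-elimination procedure used in this study.

```python
# A minimal sketch of the non-parametric comparison and ROC/AUC evaluation.
# All data below are synthetic stand-ins, not values from the study.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
fc_bc = rng.lognormal(mean=1.0, sigma=0.5, size=55)        # stand-in for 55 BC patients
fc_control = rng.lognormal(mean=0.0, sigma=0.5, size=30)   # stand-in for 30 controls

# Mann-Whitney U test (two-sided) for differential expression of one miRNA.
u_stat, p_value = mannwhitneyu(fc_bc, fc_control, alternative='two-sided')

# ROC/AUC for a single miRNA used directly as a classifier score.
y = np.concatenate([np.ones(55), np.zeros(30)])
scores = np.concatenate([fc_bc, fc_control])
auc_single = roc_auc_score(y, scores)

# ROC/AUC for a multivariable logistic-regression classifier (several miRNAs).
X = np.column_stack([scores, rng.lognormal(size=85), rng.lognormal(size=85)])
model = LogisticRegression(max_iter=1000).fit(X, y)
auc_multi = roc_auc_score(y, model.predict_proba(X)[:, 1])

print(p_value, auc_single, auc_multi)
```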
Results
In the first stage of the analysis, the relationship between the abnormal expression of selected microRNAs and other clinical parameters was examined. The raw Ct data for individual miRNAs were recalculated to fold change (FCmiR) (Supplemental Data File SF1, Supplemental Table S4). In the case of miR-205-5p, all the patients were classified into the reduced expression (low expression-LE) group, while in the case of miR-130b-3p, miR-20a-5p and miR-10a-5p, all the patients were classified into the increased expression (high expression-HE) group. These miRNAs did not differentiate the patients according to clinicopathological parameters; therefore, only FCmiR145, FCmiR21 and FCmiR182 were selected for further analysis. Table 1 presents the probability values (p) of the relevant statistics used to make conclusions regarding the existence of relationships between individual variables. The analysis was performed, depending on the fulfilled assumptions, by the classic Chi² test, V-square test (V), or with Yates's correction (Y). No significant correlations were observed. Additionally, we did not observe any significant differences for the division of the tested group into Expression 1 (when at least one of the miRNAs indicates abnormal expression) and Expression 2 (when at least two of the analyzed miRNAs show changes). The results are presented in Table 1.
The next step was comparing differences between the level of expression in different stages (TaT1 and T2) or grades (LG and HG tumors). The next question was whether the selected miRNAs could be prognostic classifiers for patients at different stages or grades of cancer. For this purpose, the patients were divided into two groups: 0-patients with stage T2 or higher; 1-patients with stage Ta or T1. The results are presented in Table 2 and Figure 1. The parametric t-test tested the one-sided hypothesis that miRNA for TaT1 < miRNA for T2 and above. The remaining p-values were read from the Mann-Whitney U test, which compares distributions (medians). This test is less powerful than the t-test, but it is appropriate for non-normal distributions. NMIBC patients (TaT1 in our study) with LG disease were characterized by significantly higher miR-130b-3p expression values compared to patients with HG tumors. If we consider patients with LG disease, miR-205-5p, miR-182-5p and miR-20a-5p differentiated this group with BC in stage TaT1 from patients in a higher stage (p < 0.05). If we focus on the group of patients with HG tumors, it is miR-130b-3p which best differentiated patients in terms of stage. To assess the clinical relevance of all miRNAs, a Kaplan-Meier analysis with the log-rank test and Cox regression analyses were performed for overall survival, recurrence-free survival, and progression-free survival (results presented in Table 3 and Figure 2). We did not find any significant differences. Univariate Cox regression was performed to assess the factors that affect the risk of progression, recurrence or death (results presented in Table 4). It has been shown that the older patients are, the higher the risk of death (increasing each year by 30%). In addition, the risk of death for people with stage T2 of the disease is more than six times higher than for patients in stage Ta or T1. People with disease progression are nine times more likely to die. The increase in the expression of miR-205-5p, miR-145-5p and miR-21-5p makes the risk of death higher by 13%, 0.03%, and 0.009%, respectively (Table 4).
These percentages result from the interpretation of the hazard ratio (HR) of important risk parameters, for which the Cox regression analysis for OS had p < 0.05. The risk of recurrence in patients in stage Ta is over two times higher than for people in stage T1 or T2. In the group of patients with recurrence, the death rate is four times lower. The increased expression of miR-20a-5p and miR-182-5p heightens the risk of recurrence by 5% and 6%, respectively. More advanced age increases the risk of progression (by 7% each year). This risk is more than three times higher for people in the T2 stage compared to people in the Ta or T1 stages. The next step of the analysis was based on the area under the receiver-operating characteristic (ROC) curve. Based on the data from 55 patients with BC and from 30 patients of the control group, an attempt was made to find out which miRNAs, among the selected ones, are the best potential cancer classifiers. The conclusion about the significant influence of individual miRNAs on the classification of patients was formed using the multivariable logistic regression model (a logistic regression model with many explanatory variables). The results are presented in Figure 3, Table 5 and Supplemental Table S5. The Mann-Whitney U test showed that the distribution of miR-130b-3p was not significantly different for high grade (HG), Ta, and TaT1 patients. Only for the low grade (LG) group did all miRNAs have significantly different distributions compared to the control group. Figures 3 and 4 present the results. Mir-205-5p seems to be a good classifier for LG and HG patients and also for Ta and T1 stages. Logistic regression assessed with a backward elimination approach resulted in a pattern of three miRNAs (miR-205-5p, miR-20a-5p and miR-182-5p). For HG, the relevant classifiers are miR-205-5p and miR-20a-5p, which gave an AUC = 0.964, whereas for LG, miR-205-5p and miR-182-5p gave an AUC = 0.992. The model classifies HG as well as BC. The results are presented in Figure 4 and Table 6. The best classifiers are miR-205-5p, miR-20a-5p and miR-145-5p, as these could significantly discriminate BC patients from the control group with an AUC higher than 0.9 (p < 0.05).
Discussion
The progression in bladder cancer is a complex and multifactorial process [21,22]. In oncology, histopathological examination is still the most important method to determine the diagnosis and classification of tumors; however, current prognosticators such as tumor grade, stage, size, and multifocality do not accurately reflect clinical outcomes and have limited usefulness for a reliable risk-adjusted therapy decision. At present, there are not enough good markers that could be used as tools to support screening, detecting or monitoring the disease [23,24]. miRNA is an "attractive candidate" as a potential diagnostic and prognostic biomarker, not only due to its high level of stability in body tissues and fluids but also due to its ability to be quantified with relatively easy and cheap techniques such as real-time PCR [25,26]. Various miRNAs have been identified as important targets in bladder cancer development, but the large number of different expression profiling platforms, such as microarrays, miRCURY ready-to-use PCR, and TaqMan Human MicroRNA Probes, and the different reference genes used for normalization are the reason that the results are not comparable and that it is difficult to put miRNAs into clinical practice. Therefore, obtaining reliable, unbiased miRNA expression data is crucial for selecting clinically useful markers.
It is estimated that over 30% of the protein-coding genes in human cells are controlled by miRNAs. One type of miRNA can even control the expression of hundreds of target genes, and one gene can be controlled by numerous miRNAs. These molecules are regarded as the "key" ones in the gene regulatory network. MiRNAs are involved in many significant biological processes, such as apoptosis, proliferation, cell diversification, and oncogenesis. In this study, we compared the expression of selected miRNAs in non-malignant and malignant bladder tissue, and we identified three down-regulated ones (miR-205-5p, miR-182-5p, and mir-145-5p) and two up-regulated (miR-20a-5p and miR-130b-3p).
In previous studies, all of these miRNAs have been found to be differentially expressed in malignant bladder tissue (mainly the underexpression of miR-145-5p and the overexpression of the others), but in this study, the normalization of expression data was performed using miR-103-5p as a reference [25,26]. An endogenous control, in relation to which we normalized the results of other miRNAs, should, as a rule, show stability in a given tissue. In reality, this is very difficult to achieve, and different groups of researchers choose different controls and obtain different results due to these controls. Ratert et al. confirmed that using RNU6B and RNU48 could lead to seriously biased results regarding miRNA expression analysis [27]. Peltier and Latham found that some miRNAs (including miR-106a and miR-191) were the most consistently expressed across different human tissues [28]. They also observed that RNU6 and RNA5S were the least stable. Hofbauer et al. used two endogenous controls in their research, RNU48 and miR-103-5p, and they achieved satisfactory results [29]. In our research, we followed the results of others, including the possibility of the use of miR-103-5p as an endogenous control in commercial kits (Exiqon, Vedbaek, Denmark). The studies of Boisen et al. and Parvaee showed that mir-103-5p expression, assessed in RNA isolated from formalin-fixed, paraffin-embedded (FFPE) cancer tissue, was the most stable reference miRNA in colorectal (CRC), pancreatic (PC), and intestinal-type gastric cancer [30,31].
We subdivided the tumor samples into low grade and high grade disease. The comparison of the miRNAs revealed four significantly differentially expressed miRNAs (miR-205-5p, miR-130b-3p, miR-20a-5p, and miR-182-5p). Several studies have implicated miRNAs as prognostic markers for BC. As already shown in previous studies, miR-205-5p expression in normal and tumor samples seems to be coordinated with the mir-8 family. Lenherr et al. found differential expression between progressors and non-progressors for several miRNAs, including miR-205-5p and miR-20a-5p. Some of the known targets of miR-205-5p include ZEB1/2, PTEN, and VEGFA [32]. The downregulation of miR-205-5p has been linked to the epithelial-mesenchymal transition (EMT) and has been significantly associated with progression in non-muscle invasive BC. However, the results obtained by different research groups are not consistent due to factors already mentioned in the introduction (differences in the chosen methods). Contrary to that, Dip et al. observed that miR-205-5p was overexpressed in pT2-3 stages of BC [33]. In their study, miR-10a-5p overexpression was associated with shorter disease-free and disease-specific survival. Ecke et al. did not confirm the statistical significance for differences in the expression of miRNA-205-5p between non-malignant and BC samples, but they detected a statistically significant reduction in the expression of miR-130b-3p (the best discriminator, also shown in our research) [34]. miR-145-5p overexpression inhibited cell proliferation and migration in BC [35]. Moreover, mir-145-5p was found to directly target the TAGLN2 gene (whose increased expression promoted cell proliferation and migration). Li et al. also confirmed the correlation between the overexpression of miR-145-5p and poor survival [36]. Unfortunately, we did not observe such a correlation. Inamoto et al. also confirmed the deregulation of miR-145-5p expression and its association with the aggressive phenotype, but they showed its protective effect. miR-145-5p expression was significantly lower in BC samples and cell lines compared to that in normal bladder tissue [37]. Pignot et al. observed that most of the examined miRNAs were deregulated in the same way in the two types of bladder cancer, irrespective of the pathological stage [38]. In their study, miR-182-5p was downregulated and was found to be related to tumor aggressiveness (associated with both recurrence-free and overall survival in univariate analysis). In our study, the high expression of mir-182-5p and mir-20a-5p correlated with the risk of disease recurrence (Table 4; risk higher by 0.06% and 0.0002%, respectively). Urquidi et al. identified a few miRNA set classifiers for predicting the presence of bladder cancer (25 miRNAs, 20 miRNAs, 15 miRNAs, and 10 miRNAs), but none of them included the miRNAs identified in our study [39]. The authors note that these biomarkers were correlated with the presence of BC, but their association with clinical variables was much less evident. In our opinion, different sets of miRNAs can be suggested as prognostic biomarkers (three: miR-9, mir-183, and mir-200b; two: miR-143 and miR-145); however, until now, only one study has verified the examined miRNAs as independent markers [27]. Ecke et al. identified miR-199a-3p and miR-214-3p as independent prognostic biomarkers for the prediction of overall survival (OS) in MIBC patients after radical cystectomy (RC).
They used a combination of four miRNAs (miR-101, miR-125a, miR-148b, and miR-151-5p) or three miRNAs (miR-148b, miR-181b, and miR-874) as endogenous controls. The study was carried out on formalin-fixed, paraffin-embedded (FFPE) tissue specimens. These markers were not evaluated by us. Ecke et al. also analyzed the expression of miR-205-5p, but they did not confirm its usefulness. It needs to be stressed, however, that their analysis was performed on FFPE tissue, whereas our tests were carried out on fresh, frozen tumor tissue. Armstrong's results for matched tumor and bio-fluids in BC showed that there is an overlap between the expression of miRNAs in different bio-specimen sources, but overexpression in all three kinds of biological samples was observed for only two tested miRNAs (miR-4454 and miR-21) [40]. No correlation was observed between expression in tumors and plasma exosomes (using the NanoString nCounter microRNA assay technique). In their review, Lee et al. showed a correlation in the changes of expression of miRNAs isolated from bladder cancer tissues and urine (in multiple results) for only 14 miRNAs, including miR-145, miR-182, and miR-205 [15]. On the other hand, Baumgart et al. observed that nine miRNAs were consistently differentially expressed in both invasive cells and their secreted exosomes, whereas the remaining six miRNAs were dysregulated only in exosomes [41]. The NanoString technique has its advantages, as it does not require amplification of the nucleic acids. However, it is expensive and not widely available. Real-time PCR is widely available, but any obtained result is affected by many factors, such as the kind of tissue, the method of normalization, and the method of analysis.
Receiver operating characteristic analyses showed a good ability to discriminate between non-malignant and malignant tissues for the investigated miRNAs. Based on binary logistic regression using the backward elimination approach, the optimal combination for discriminating healthy people from BC patients is miR-205-5p, miR-20a-5p, and miR-182-5p (AUC > 0.9; p < 0.05). Lv et al., Egawa et al., and Liu et al. also confirmed that miR-130b-3p could play a critical role in the development and progression of bladder cancer [42][43][44]. Fang et al. found a miR-205-5p area under the receiver operating characteristic curve (AUC) value of 0.950 for discriminating BC patients from healthy people and a value of 0.668 for discriminating MIBC from NMIBC [10]. The log-rank test and univariate and multivariate Cox regression analyses did not indicate that high miR-205-5p expression in NMIBC patients was associated with cancer-specific survival.
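As a rough illustration of how a marker combination can be selected by backward elimination in binary logistic regression and summarized by an AUC, the sketch below uses statsmodels and scikit-learn on simulated data. The feature names, the p-value threshold, and the data are illustrative assumptions, not the authors' actual procedure or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def backward_elimination(X: pd.DataFrame, y: pd.Series, alpha: float = 0.05):
    """Drop the least significant predictor until all remaining p-values < alpha."""
    features = list(X.columns)
    while features:
        model = sm.Logit(y, sm.add_constant(X[features])).fit(disp=0)
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return model, features
        features.remove(worst)
    return None, []

# Illustrative data only: simulated expression values (rows = subjects).
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(60, 4)),
                 columns=["miR-205-5p", "miR-20a-5p", "miR-182-5p", "miR-130b-3p"])
y = pd.Series((X["miR-205-5p"] + 0.5 * X["miR-20a-5p"] + rng.normal(size=60)) > 0).astype(int)

model, selected = backward_elimination(X, y)
if model is not None:
    auc = roc_auc_score(y, model.predict(sm.add_constant(X[selected])))
    print(selected, round(auc, 3))
```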
We faced some limitations in our study, one of which was a relatively small group of patients. We used only samples characterized by adequate amounts of tumor cells. It is not easy to obtain a large group of patients with bladder cancer progression who can provide biological material for tests, as such cases constitute a minority in this disease. The applied technique is relatively cheap and easy, so it could be used for examining selected markers in routine practice.
Conclusions
This study follows a "from top to bottom" strategy, which means choosing the phenotype of patients (histopathological characteristics and survival) and evaluating the molecular markers of that phenotype. The goal for the future is the opposite course of analysis, "from genotype to phenotype." Based on the detection of diagnostically and prognostically significant differences between normal and cancer samples, we could assess the biological potential of the tumor and its aggressiveness. As a result, we could enable the choice of appropriate therapeutic measures tailored to individual patients; this is personalized medicine. Finally, we could lengthen a patient's life and improve its quality without offering radical treatment if it is not necessary. The implementation of miR-205-5p, miR-20a-5p, and miR-130b-3p into routine practice could be an alternative for screening or the follow-up of treatment effects. Such analyses can help in the search for non-invasive markers, especially since they can also be evaluated in urine or plasma. Our findings could be of clinical importance, but the results should be validated in a larger group.
Supplementary Materials: The following are available online at http://www.mdpi.com/2072-6694/11/10/1551/s1, Table S1: Characteristics of the patient group. Table S2: Details of TaqMan MicroRNA assays. Table S3: Experimental details of the real-time PCR analyses according to the checklist of the MIQE Guidelines (minimum information for the publication of real-time quantitative PCR experiments). Table S4: Summary statistics of the normalized expression data of miRNAs in the three clinical sample groups (BC, NMIBC, MIBC). Table S5: ROC characteristics for (A) selected miRNAs and (B) comparisons between the AUCs of different miRNAs. Data File SF1: Raw data expression calculation. | 7,172.2 | 2019-10-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Transforming education: Perceptions and challenges of technology-enabled teacher education programmes
.
Introduction
Due to changes in pedagogical methods and technological breakthroughs, the subject of education is continually changing. Technology has become an essential component of education in our digital age, changing static instruction into more engaging, interactive lessons. Technology-enabled teacher education programs, which aim to equip future educators with the abilities and knowledge to successfully incorporate technology into their classrooms, are one notable breakthrough. In order to shed light on their advantages, difficulties, and the contribution of technology to the future of education, this study investigates how learners, in particular aspiring teachers, perceive technology-enabled teacher education programs. Both historically and now, information and communication technologies (ICTs) have had a considerable impact on the teaching and learning process. Richards acknowledged this influence in 2005 and underlined the importance of well-designed tasks and digital resources for enticing young learners to use digital platforms. In order to equip students with the skills they will need for the 21st century, educational institutions are currently under pressure to integrate ICTs into their curricula. There are many advantages to utilizing ICTs in schooling. In addition to improving motivation, self-confidence, and academic performance, they also make it easier for students to get accurate information, understand abstract topics, and create collaborative online activities for teachers. While some ICT gadgets, such as iPads and tablets, are still in the early stages of adoption, they have a lot of potential, especially for children with special needs, to improve fundamental skills and boost participation in the classroom. The adoption and integration of ICTs into the teaching-learning process is mostly the responsibility of teachers. However, a number of elements, such as historical events, organizational characteristics, personal attitudes toward technology, and other contextual factors, influence this integration.
Review of the literature
It is crucial to comprehend how teacher educators feel about technology-enhanced learning (TEL) and what their attitudes are. These teachers are crucial to successfully utilizing technology as a teaching tool and assisting students in gaining networking and TEL skills (Kirkwood & Price, 2011). These educators' support and attitudes are crucial to the adoption of TEL and the subsequent modification of educational practices (Kusano et al., 2013; Rana, 2012). It is well acknowledged that teachers' positive attitudes are a key indicator of effective teaching strategies. Technology can be easily incorporated into the teaching-learning process by educators when they do it with enthusiasm (Birkollu et al., 2017; Buabeng-Andoh, 2012). The emergence of the TEL environment, primarily driven by the COVID-19 pandemic, disrupted the traditional face-to-face (F2F) mode of education, which may return to its original format once the pandemic's adverse effects subside. Therefore, assessing the opinions and attitudes of teacher educators toward TEL could significantly influence and determine their actual adoption of technology in teaching and learning post-pandemic. Studies in this area (Chandwani et al.; Mukherjee & Maity, 2019) have yielded mixed results. While most studies concluded that teacher educators held favorable attitudes toward TEL, they often failed to exhibit an equally positive attitude toward integrating technology into the traditional curriculum. Constraints included limited exposure and training in TEL, inadequate technical infrastructure, and insufficient institutional support in terms of budget allocation for advanced technologies, as well as a lack of clarity and direction. TEL is essential in the context of distance education because it helps students get beyond the obstacles posed by distance. It encourages high levels of learner interaction, facilitates interpersonal communication between teachers and students, and makes it possible to use a comprehensive learning management system that includes collaboration tools, virtual classrooms, content authoring and capturing tools, and digital repository systems. Additionally, TEL has proven to be more effective than distance learning programs that rely solely on printed materials (Addah et al., 2012; Almarashdeh & Alsmadi, 2016). Faculty attitudes are crucial in facilitating the shift from conventional remote education delivery to technology-enabled teaching and training within distance learning Teacher Education Programs (TEPs) (Panda & Mishra, 2007). Research on teachers' beliefs and attitudes, and on how these convictions show in their teaching practices, is still scanty despite the abundance of studies on teachers' use of technology (Kirkwood & Price, 2016). There are many studies on how teachers use technology, but little research has been done on their attitudes and beliefs, or on how these attitudes affect their teaching practices. This gap highlights the need for further research in this area to better understand and support teacher educators in harnessing the potential of TEL.
The Significance of Technology-Enabled Teacher Education Programs
Understanding the significance of technology-enabled teacher education programs in the current educational landscape is crucial before delving into learners' opinions. These programs are indispensable because of the numerous benefits they provide.
Improved Pedagogy
Technology makes it possible for a variety of interesting teaching techniques, including multimedia presentations, computer simulations, and interactive learning environments. These tools are included in teacher education programs to help educators improve their ability to impart knowledge and promote learning. Enhanced pedagogy in technology-enabled teacher education programs represents a fundamental change in how educators are being prepared for the twenty-first century. These ground-breaking initiatives make use of the potential of online learning environments and digital tools to build engaging learning environments that prioritize student participation, individualized education, and the acquisition of vital digital literacy skills. Teacher education is made more accessible and efficient by incorporating technologies like virtual reality, interactive simulations, and data analytics. This enables aspiring teachers to gain practical experience, collaborate with colleagues around the world, and adapt to the constantly changing educational landscape. Additionally, these programs enable instructors to develop into tech-savvy facilitators who can use digital resources to improve student learning outcomes and build inclusive, student-centered classrooms where innovation, critical thinking, and problem-solving thrive. Essentially, improved pedagogy in technology-enabled teacher education programs holds the promise of growing a new generation of educators outfitted with the abilities and mentality required to shape the future of education in a world that is becoming more and more digital.
Adaptive Learning
With the use of technology, learning experiences may be tailored to the needs and learning preferences of each individual student. Future educators who complete technology-enhanced programs will be better prepared to modify their lessons for a variety of classes. A game-changing paradigm shift in the field of pedagogical training is represented by adaptive learning in technology-enabled teacher education programs. These programs optimize teacher candidates' professional development journeys by utilizing the power of cutting-edge technology like artificial intelligence and customizable algorithms to create learning experiences that are tailored to each candidate's needs. Adaptive learning platforms detect strengths and weaknesses through continuous assessment and real-time feedback, enabling educators-in-training to concentrate on areas needing improvement and encouraging self-directed learning. This innovative strategy improves the efficacy of teacher preparation while also empowering educators to address the wide range of requirements of their future students, ultimately resulting in more effective and responsive teaching strategies in the quickly changing educational environment of today.
Global Connectivity
Online teacher education programs connect students and experts from all over the world by removing geographic obstacles. The educational experience is enhanced by this exposure to various viewpoints and pedagogies. The way educators are prepared for the challenges of the twenty-first century is revolutionized by global connectedness through programs for teacher education that are supported by technology. These initiatives cross international borders by seamlessly integrating digital tools and online platforms, encouraging cooperation and knowledge sharing between educators and institutions everywhere. This interconnection not only improves access to other viewpoints and information, but it also fosters cultural sensitivity and the sharing of cutting-edge instructional techniques. Additionally, it enables educators to keep up with the most recent developments in education and research, ensuring they are better prepared to address the changing demands of their students in a world that is becoming more interconnected. Global connectedness serves as a crucial pillar in the landscape of teacher education as technology continues to change it. It enables a dynamic and inclusive approach to pedagogy and professional development.
Efficiency and Accessibility
Technology-enabled programs frequently offer flexible timetables, increasing access to education for people with busy schedules or geographic limitations. The ability to balance their academics with work or family obligations might be extremely helpful for future teachers. Modern teacher education programs that use technology prioritize efficiency and accessibility. These programs expedite the learning process by utilizing digital tools and platforms, giving aspiring teachers unmatched access to top-notch resources, peer collaboration, and hands-on learning opportunities. Teacher education is made more effective by utilizing online courses, virtual classrooms, and interactive learning materials, which may accommodate different schedules and learning preferences. Technology also offers accessibility for people from different backgrounds, removing geographic obstacles and encouraging diversity in teacher preparation. The incorporation of effectiveness and usability in technology-driven teacher education programs ultimately equips the next generation of educators to flourish in their positions, which is advantageous to both students and the educational system as a whole.
Data-driven decision-making
Tools for gathering and analyzing data on student performance are made available by technology. Teachers who have completed these programs can use data to personalize their lessons and pinpoint areas where students might require more assistance. With the help of data analytics and technology integration, data-driven decision-making in technology-enabled teacher education programs can significantly improve the efficacy and efficiency of teacher preparation. Educators and administrators can make well-informed decisions that result in more customized and responsive teaching tactics by methodically gathering and analyzing data on student performance, program outcomes, and instructional methods. This strategy not only enables educators to pinpoint areas for growth but also allows for the personalization of learning opportunities to cater to the needs of each individual student, thus generating a more flexible and effective ecosystem for teacher education. Data-driven decision-making is emerging as a key tool in ensuring the continuous evolution and optimization of teacher preparation programs, providing educators with the skills and insights required to excel in today's dynamic classroom environments. This is especially true in an era where technology is playing an increasingly important role in education.
Positive Attitudes
Many students enrolled in teacher education programs see technology as an important tool for their personal and professional growth. They understand that technology may enhance their instructional tactics, classroom management, and teaching abilities. For modern pedagogy to advance, positive views toward technology-enhanced teacher education programs are crucial. These initiatives equip teachers with the knowledge and skills necessary to successfully navigate the digital world, encouraging flexibility and innovation in the classroom. Adopting technology improves the effectiveness of teacher preparation while also providing aspiring teachers with the digital literacy abilities necessary for the 21st-century classroom. Additionally, it encourages collaborative learning environments, enabling teachers to connect and exchange information globally, enhancing their instructional strategies. A more dynamic, learner-centric, and forward-thinking educational environment will result from incorporating a positive attitude toward technology into educational programs, which will eventually benefit students and society as a whole.
Enhanced Engagement
Learners frequently express greater involvement in programs that use technology. The learning process becomes more engaging and pleasant with the use of interactive multimedia tools, online debates, and team projects. Enhanced engagement, as reflected in learners' perceptions of technology-enabled teacher education programs, denotes a significant transformation in the field of education, where technology serves as a catalyst for changing teacher education. Such innovative programs use cutting-edge technological resources and innovative pedagogical strategies to develop an immersive learning environment. In addition to providing future teachers with the digital literacy needed for today's classrooms, the seamless integration of technology into teacher education programs encourages a deeper and more meaningful engagement with the educational process. With the ability to adapt to changing educational environments and effectively inspire and educate the next generation of learners, this paradigm shift has the potential to transform how instructors are educated.
Flexibility and Convenience
Learners value the flexibility that blended or online teacher preparation programs provide. They may work at their own pace, which is crucial for individuals trying to balance coursework with other obligations. The attitudes of students in technology-enabled teacher preparation programs are significantly influenced by flexibility and convenience. These programs give aspiring instructors the flexibility to adapt their learning experiences to their own needs and schedules in the dynamic educational environment of today, where traditional limits are continuously changing. Utilizing technology's capabilities makes teacher education more accessible than ever, enabling students to easily fit coursework into their daily lives. This adaptability encourages a greater level of engagement and retention in addition to fostering a sense of ownership over one's learning experience. Fundamentally, convenience and flexibility act as the pillars of contemporary teacher education, ushering in a time when teachers are prepared to fulfill the varied requirements of their pupils while juggling the demands of their own professional development.
Preparation for the Digital Classroom
Future educators are aware of the value of being technologically proficient in today's increasingly digital society. They feel that programs supported by technology effectively equip them for the difficulties of teaching in the twenty-first century. Through the eyes of the students, this study examines the vital nexus between technology and teacher preparation. In order to shed light on their experiences, difficulties, and opportunities in the context of the digital classroom, this study explores the perceptions of prospective educators as they interact with technology-enabled teacher education programs. This research aims to inform the ongoing evolution of teacher preparation programs by gaining an understanding of how learners perceive these cutting-edge pedagogical approaches. By doing so, it will be possible to make sure that educators are adequately prepared for success in the rapidly changing world of digital education.
Concerns about Overreliance
Some students express worries about an excessive reliance on technology, concerned that it could supplant crucial interpersonal relationships and creative expression in the classroom. How to balance the use of technology with conventional teaching techniques is still a matter of debate. Concerns about overreliance in learners' perceptions of technology-enabled teacher education programs center on the potential negative effects of giving digital tools and platforms excessive weight in comparison to conventional pedagogical approaches. Although technology unquestionably improves teacher preparation, a focus on technology alone may unintentionally overlook other elements of good teaching, such as interpersonal abilities, classroom management, and subtle instructional strategies. This over-dependence on technology runs the risk of generating teachers who are adept at using it but lacking in the crucial human-centric traits that promote meaningful student-teacher connections and engaging learning environments. To ensure that teacher education programs remain comprehensive and responsive to the complex requirements of contemporary education, it is crucial to strike a balance between utilizing the advantages of technology and maintaining the fundamentals of teaching.
Challenges and Considerations
While technology-enabled teacher education programs offer numerous benefits, they are not without challenges:
Access to Technology
Unequal access to technology and dependable internet connections, especially in underprivileged areas, is a major barrier. This digital divide can make it very difficult for students to participate in online courses, resulting in unequal access to resources and educational opportunities. To guarantee that prospective teachers from all backgrounds can benefit from technology-enhanced teacher training programs, this gap must be closed through infrastructure development and measures for fair access.
Technological Literacy
Making sure educators and students are technologically literate is one of the main problems in teacher education programs that use technology. While these programs have many benefits, such as improved accessibility, flexibility, and cutting-edge learning techniques, they also necessitate a certain amount of digital competency from participants. To effectively teach their students, instructors must be skilled in using a variety of educational technologies, platforms, and tools, and students themselves must have the essential digital abilities to traverse these digital environments. To ensure that educators and students can fully take advantage of the potential benefits of technology in education, it is essential to bridge the digital gap and provide enough training and assistance to improve technological literacy.
Quality of Online Instruction
Technology-enhanced programs' efficacy is greatly influenced by the caliber of instruction and course structure. It's crucial to maintain good standards for online education. The caliber of online training is one of the most important factors. It can be difficult to guarantee that instructors receive thorough and efficient training in a virtual setting. In an online environment, it may be more difficult to maintain engagement, encourage interactive learning, and monitor student progress. In addition, certain aspiring teachers may be disproportionately affected by the digital divide and lack of access to technology, thus keeping them out of these programs. To fully realize the potential of technology-enabled teacher education, it is essential to develop and implement high-quality online coursework that addresses these issues.
Balancing Theory and Practice
Theoretical knowledge and useful classroom abilities need to be balanced in teacher preparation programs. It might be difficult to integrate technology effortlessly into the curriculum. The delicate balance between theory and practice is one of the main factors to be taken into account. Technology integration in teacher education programs shouldn't take precedence over the value of practical classroom practice. To guarantee that future educators not only gain pedagogical information but also develop the practical skills required to successfully traverse various classroom contexts, it is crucial to strike the correct balance. In order to successfully bridge the gap between theory and practice, tackling this difficulty also calls for creative curriculum design, mentorship programs, and chances for student instructors to participate in authentic teaching experiences.
Conclusion
In its conclusion, this paper highlights the transformative effects of technology-enhanced teacher preparation programs on the constantly developing field of education. It highlights how crucial it is to comprehend how both students and teacher educators view these initiatives. Technology-enabled teacher education programs are significant because they can improve pedagogy, offer adaptive learning experiences, create global connectivity, provide efficiency and accessibility, and encourage data-driven decision-making. These programs empower educators with the skills necessary to function in the digital era, enabling them to fulfill the varied requirements of students and quickly adjust to the sector of education that is undergoing rapid change. The majority of learners have positive opinions of these programs, and they particularly value the increased engagement, adaptability, and preparation for the digital classroom that they provide. However, worries about an overreliance on technology serve as a reminder of the value of maintaining a balance between technological resources and conventional teaching strategies. Although these programs have many advantages, they also have problems with access to technology, technological literacy, the caliber of online training, and the need to balance theory with practice. In order for technology-enabled teacher education programs to develop and successfully educate the upcoming generation of educators for the demands of the 21st-century classroom, it will be imperative to address these problems. Programs for teacher preparation that use technology present a possible way to influence the direction of education. They offer learners innovative and interesting educational experiences while also equipping instructors with the knowledge and mindset necessary to succeed in a digital environment. Addressing the issues and concerns outlined in this assessment will help to create a teacher education environment that is more inclusive, flexible, and successful, allowing us to fully realize their potential. The perceptions of students will be crucial in guiding the growth of teacher preparation programs as technology continues to alter the future of education. | 4,299.4 | 2023-10-30T00:00:00.000 | [
"Education",
"Computer Science"
] |
Metamorphic Conditions and Raman Spectroscopic Carbonaceous Material Thermometry of Host Schists of Olympiada and Eldorado Gold Deposits (Yenisey Ridge, Russia)
Metamorphic processes play a key role in forming orogenic gold deposits. In this paper, we present new evidence that the host schists of the two largest gold deposits of the Yenisey ridge (Russia), Olympiada and Eldorado, underwent a single stage of metamorphism, in contrast to the surrounding blocks. This metamorphism is of moderate thermal gradient and belongs to the Barrovian type, which is typical of the collisional event in the time range 800–850 Ma. The new Ar/Ar age data presented in this paper and the review of magmatic and metamorphic events and ore-forming processes indicate that the most productive stage (gold-sulfide-quartz) correlates well in time with the regional metamorphism of the Barrovian type. This indicates that metamorphic processes can have a crucial role in forming gold deposits of the Yenisey ridge. Carbonaceous material thermometry indicates a wide range of obtained temperatures, about 90–150 °C around the mean temperature for each sample. The highest temperatures are close to the peak metamorphic temperatures estimated by garnet-biotite thermometry.
Introduction
Metamorphic processes play a key role in forming orogenic gold deposits [1]. The genesis of many gold deposits with a polygenic history, which included both metamorphic and magmatic processes, often remains debated. The controversy is due to difficulties in the interpretation of geochronological data for overlapping magmatic and metamorphic processes of different ages. The Yenisey ridge (Russia) is one such region with a polygenic history. It is a Neoproterozoic accretionary-collisional orogen, extending for 700 km along the western margin of the Siberian craton and situated between the craton to the east and the West Siberian basin to the west. Most gold deposits occur in the lower part of the Sukhoi Pit Group, represented by clastic, carbonaceous, and calcareous clastic rocks of the Korda, Gorbilok, and Uderei Formations [2].
We present in this paper new data on the pressure-temperature parameters of metamorphism of the host schists of the two largest gold deposits of the Yenisey ridge, Olympiada and Eldorado, and their tectonic and temporal correlation with metamorphic events across the whole orogen of the Yenisey ridge. New Ar-Ar data on the ore formation of the Eldorado gold deposit allow correlating the ore-forming processes with Barrovian-type collisional metamorphism.
Geology of Deposits
The Olympiada deposit occurs closer to the Tatarka fault. The deposit is hosted in metamorphosed rocks of the Korda Formation. From top to base, these are carbonaceous quartz-mica schist, foliated marble, quartz-mica-carbonate schist, and biotite-muscovite-quartz schist (Figure 2a). The chemical composition of the rocks ranges from aluminous to carbonate-rich, with minor variations in ferromagnesian content. All rocks are hydrothermally altered to different degrees and contain sulfide mineralization. The principal host rocks are quartz-mica-carbonate (two mica-quartz-calcite) schist and marble. In the underlying quartz-mica (quartz-garnet-mica) and overlying quartz-mica-carbonaceous (carbonaceous silicate) schists, mineralization occurs only near the contacts with carbonate-bearing and carbonate rocks.
Sedimentary rocks are regionally metamorphosed at low amphibolite facies conditions [25,26]. Mineral assemblages are garnet-muscovite-biotite, developed after siltstone. The area of the Olympiada deposit has a local zone of dynamothermal metamorphism and alteration, characterized by distinct mineral assemblages, and spatially restricted to the long-lived tectonic zone along the mineralized horizon. The highest-temperature associations of metapelites in the tectonic sliver correspond to the garnet zone (garnet + quartz + biotite ± plagioclase), with sporadic fibrolite-sillimanite and kyanite. Higher-pressure rocks correspond to the Barrovian type of metamorphism, with mineral assemblages of chloritoid + biotite ± garnet (in metapelite) and margarite + quartz (in metamarl) [11,14,24].
The Tyrada and Chirimba massifs of the Ayakhta complex (761 ± 8 Ma [27]) are the nearest granitic intrusions to the Olympiada deposit (Figure 2a). The gravity data suggest that all of them are parts of a single batholith [28]. Both massifs are similar in composition, mostly consisting of granodiorite (phase 1) and coarse-grained porphyritic biotite and amphibole-biotite granite of normal alkalinity (main phase 2). These massifs contain numerous internal small stocks, dikes, leucogranite veins, fine-grained granite, and aplite (phase 3).
More than 50 estimates for the age of mineralization exist for the Olympiada deposit based on the K-Ar, 40Ar/39Ar, Rb-Sr, Sm-Nd, and Re-Os techniques. On the basis of these data, a number of main ore-forming stages have been distinguished. The age of the non-auriferous quartz-mica-sulfide mineral association is estimated as 817.1 ± 6.3 to 808.4 ± 7.7 Ma. The age of the main productive quartz-gold-arsenopyrite-pyrrhotite association is 803 ± 6.1 to 758 ± 6.0 Ma, with a prevalence of 795 to 784 Ma. The age estimates of the late quartz-gold-antimony association range from 795.2 ± 5.8 to 660 ± 19 Ma [28].
The Eldorado deposit is much less studied than the Olympiada deposit. It is located near the Ishimba fault (Figure 1). The deposit is situated in the Gorbilok Formation of the Sukhoi Pit Group. The host rocks are garnet-biotite-muscovite schists metamorphosed at low-amphibolite facies conditions. The nearest magmatic rocks are about 15 km away from the deposit and belong to the Kalama granitoid massif with an age of about 860-880 Ma [13,29]. Twenty-two ore bodies are found in the deposit area. They form 4 groups (Figure 2b) and are located in subparallel quartz veins with arsenopyrite. Age estimates for this deposit are very restricted. The age of the quartz-gold-arsenopyrite stage is estimated as 795 Ma and of the gold-polysulfide stage as 780 Ma [7]. An age of around 790 Ma was obtained by Gibsher et al. [30] for the host metamorphic schists. This was interpreted as the age of the local dynamometamorphism.
Materials and Methods
Samples of garnet-biotite schists from the Eldorado deposit and garnet-biotite, garnet-staurolite, and biotite-muscovite schists from the Olympiada deposit were taken. Samples from the Olympiada deposit were taken from a well core which intersects ore bodies. Samples from the Eldorado deposit were taken from the quarry. All schists are enriched in carbonaceous material, which forms thin lumpy inclusions in rock-forming minerals: garnet, biotite, muscovite, quartz, chlorite, and chloritoid (Figure 3). Organic maturity was determined in selected samples on polished sections about 0.2 mm thick, cut perpendicular to the foliation. For the acquisition of the Raman spectra, a Horiba Jobin Yvon LabRam HR confocal Raman spectrometer equipped with a frequency-doubled Nd-YAG laser (100 mW, 532.2 nm), diffraction gratings of 1200 and 1800 grooves/mm, a Peltier-cooled, slow-scan CCD matrix detector, and an Olympus BX 40 microscope was used at the Institute of Geology and Mineralogy SB RAS. To obtain a better signal-to-noise ratio, five scans with an acquisition time of 30 s were summed in the 700-2000 cm−1 (first-order) and 2200-3200 cm−1 (second-order) regions. From 40 to 80 spectra were recorded for each sample. The measured first-order bands of the Raman spectra were the D1 [3] or D band [31] at ~1350 cm−1, the G [3] or O band [31] at ca. 1580 cm−1, the D2 band at ~1610 cm−1, and the D3 band at ~1500 cm−1. The second-order bands were recorded at ~2450 cm−1, ~2700 cm−1 (S1 band), and ~2900 cm−1 (S2 band) [3]. The peak position, area, and peak width (full width at half maximum, FWHM) of the bands were determined using the computer program Fityk 1.3.1. The R1 ratio is calculated as the D1/G peak intensity ratio and the R2 ratio is given as the D1/(G + D1 + D2) peak area ratio [3].
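To make the ratio definitions concrete, the following minimal sketch computes R1 and R2 from fitted band heights and areas, such as those exported from a peak-fitting program like Fityk. The Band class and the numerical values are hypothetical illustrations, not measurements from this study.

```python
from dataclasses import dataclass

@dataclass
class Band:
    """Fitted Raman band parameters: peak height (intensity) and integrated area."""
    height: float
    area: float

def r1_r2(d1: Band, g: Band, d2: Band) -> tuple[float, float]:
    """R1 = D1/G peak intensity ratio; R2 = D1/(G + D1 + D2) peak area ratio."""
    r1 = d1.height / g.height
    r2 = d1.area / (g.area + d1.area + d2.area)
    return r1, r2

# Illustrative band parameters only (arbitrary units).
d1 = Band(height=950.0, area=61000.0)
g  = Band(height=1000.0, area=38000.0)
d2 = Band(height=180.0, area=9000.0)

print(r1_r2(d1, g, d2))
```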
Minerals for 40Ar/39Ar isotopic-geochronologic studies were separated from rocks using conventional techniques of magnetic and density separation. Samples of monomineralic fractions and biotite MCA-11 (OSO no. 129-88, which was used as the mineral monitor) were wrapped in Al foil and placed into a quartz ampoule, which was then pumped out and welded. Biotite MCA-11 (prepared at the Fedorovskii All-Russia Institute of Mineral Resources in 1988 as a K/Ar standard for K/Ar dating) was certified as a 40Ar/39Ar monitor with the use of the muscovite Bern 4m and biotite LP-6 internationally certified standards [32]. The integral age of biotite MCA-11 was assumed to correspond to the calibration results: 311.0 ± 1.5 Ma. The quartz ampoules with samples were irradiated in the Cd-coated channel of a reactor (BBP-K type) at the Tomsk Polytechnic Institute. The gradient of the neutron flux did not exceed 0.5% of the sample size. The experiments on stepwise heating were carried out in a quartz reactor with an external heater. The blank for 40Ar (10 min at 1200 °C) was no higher than 5 × 10−10 ncm3. Ar was purified using Ti and ZrAl SAES getters. The isotopic composition of Ar was measured on a Noble Gas 5400 (Micromass, United Kingdom) mass spectrometer. The errors quoted below in the text, table, and figures correspond to ±1σ.
Petrography and Mineralogy
Three samples from the well core of the Olympiada deposit were taken for detailed study: two samples of garnet-biotite schist (oc-40 and oc-99) and one of garnet-staurolite-chloritoid schist (oc-24).
The mineral assemblage of the garnet-biotite schists is Grt + Bt + Chl + Pl + Qz. The sample oc-40 has a distinct laminated structure with alternating laminae of a dominantly quartz-feldspar-mica composition containing a small amount of carbonaceous matter and layers enriched with carbonaceous matter. Garnet is concentrated in certain layers, where it forms idiomorphic porphyroblasts with a sectorial internal structure. The folded structure and the development of schistosity at an angle to the bedding are characteristic of the rock (Figure 3a, Table 1). The content of almandine, pyrope, and grossular increases towards the edges of grains, while the content of spessartine decreases (Figure 4). Biotite contains 1.88 wt.% TiO2 and xFe (Fe/(Fe + Mg)) = 0.63 (Table 2). Plagioclase contains xAn = 0.25-0.37 and shows a decrease in the content of the anorthite component.
Garnet-staurolite-chloritoid schist (oc-24) has a schistose structure. The rock is significantly enriched in carbonaceous matter, which occurs in the form of inclusions in garnet, staurolite, chloritoid, chlorite, and micas (Figure 3c,d). In garnet, mainly the edge parts of the grains are enriched with carbonaceous matter. In staurolite and chloritoid, it is concentrated in the central parts. A sectorial structure is often noted in garnet. Chloritoid, staurolite, and biotite are partially replaced by secondary chlorite (Table 3).
The host rocks of the Eldorado deposit are rather homogenous in composition. Generally, they are garnet-biotite schists with the mineral assemblage Grt + Bt + Ms + Chl + Pl + Qz. They have a lepidoblastic texture with garnet porphyroblasts about 1.2 mm in diameter. Two samples of these schists were studied (19-el-2 and 19-el-5).
P-T Parameters of Metamorphism
Temperature estimates have been made with a garnet-biotite geothermometer [33] and a Ti-in-Bt geothermometer [34,35]; pressure has been estimated with a garnet-biotite-plagioclase geobarometer [36]. For the P-T calculations, the compositions of garnet and plagioclase rims and of biotite from the rock matrix were taken. These minerals do not form reaction textures with each other and do not show other evidence of disequilibrium.
Ten other samples of garnet-biotite schists were taken from the same well core. They have mineral compositions and compositions of rock-forming minerals similar to those of the schists described above. We do not present their detailed descriptions here; the obtained P-T parameters fall into the following ranges: Grt-Bt [33], 540-570 °C; Ti-in-Bt [34], 560-590 °C; pressure, 6.3-7.4 kbar (Table 4). Table 4. P-T parameters of metamorphism of garnet-biotite schists of the Olympiada deposit.
Chlorite Thermometry
Mineral microtextures indicate that chlorite in samples from the Olympiada deposit replaces minerals of the peak metamorphic assemblage: biotite, staurolite, and chloritoid (Figure 3). For the temperature estimates, chlorite thermometry has been applied. Both empirical and semi-empirical approaches were used for chlorite geothermometry, and the results obtained are summarized in Table 3. Among the empirical methods, four different geothermometers of Kranidiotis and MacLean [37] (KM), Jowett [38] (J), McDowell and Elders [39] (ME), and Cathelineau and Nieva [40] (CN) were used. The highest temperatures (390-410 °C) were indicated by the thermometers of Jowett [38] and McDowell and Elders [39] for samples oc-24, oc-99, and 19-el-2, and slightly lower temperatures (350-370 °C) for sample oc-40. The CN and KM calibrations indicate lower temperatures, in the range 310-370 °C.
The semi-empirical approach of Bourdelle et al. [41] was also used, as it is best suited for low-temperature chlorites (<350 °C). Individual T-R2+-Si diagrams (where T is temperature, R2+ is the sum of divalent cations, and Si is in apfu) were plotted and temperatures were graphically estimated (Figure 5). Most of the chlorites plot above the 350 °C isotherm in the T-R2+-Si diagram. Since the optimal reliability of this thermometer is only up to 350 °C, this method could not constrain the upper temperature limit of the chlorites, but it clearly indicates that for all chlorites the temperature is higher than 350 °C.
Figure 5. Chlorite compositions plotted on T-R2+-Si diagrams [42]; isotherms (in 50 °C steps) calculated with the Bourdelle et al. [41] geothermometer.
Raman Spectroscopy
Carbonaceous matter (CM) inclusions in garnet, quartz, chloritoid, and micas (Figure 6) were analyzed. From 44 to 79 measurements were made for each sample (Table 5). CM particles were analyzed in situ, enclosed within transparent grains, in sections oriented perpendicular to the foliation. Raman spectra acquired from the samples display sharp and intense G peaks at ~1578 cm−1, D1 peaks at ~1347 cm−1, and weakly resolved, broad shoulders of the D1 bands at ~1615 cm−1 indicating the D2 peak (Figure 7). The second-order spectra are characterized by bands at ~2693 cm−1 (Table 5). D1 bands at ~1353 cm−1 and G peaks at ca. 1576 cm−1 were measured on the first-order Raman spectra of samples. In the geothermometer of Beyssac et al. [4], the metamorphic temperature is calculated as a linear function of the R2 parameter (R2 = D1/(G + D1 + D2) area ratio), and the thermometer is calibrated in the range of 330-650 °C. The calibration of the geothermometer by Rahl et al. [43] is based on both the R1 (R1 = D1/G intensity ratio) and R2 parameters and is applicable to temperatures as low as 100 °C. Both of these calibrations were used for the temperature estimates, and they gave very similar results (Table 5). Temperatures obtained for each sample fall into a wide range, about 100-150 °C around the mean temperature (Table 5, Figure 8). CM inclusions in different minerals (garnet, quartz, muscovite, and chlorite) indicate a large scatter of crystallinity in each mineral phase (Figures S1 and S2).
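As an illustration of how the R2 parameter translates into temperature, the sketch below applies the widely quoted linear form of the Beyssac et al. calibration (T ≈ −445·R2 + 641 °C, valid at roughly 330–650 °C) and computes a per-sample mean with a standard error, as described in the notes to Table 5. The R2 values are hypothetical, and the coefficients should be checked against the cited calibration papers before use.

```python
import numpy as np

def rscm_temperature(r2: np.ndarray) -> np.ndarray:
    """RSCM temperature from the R2 area ratio, using the commonly quoted
    linear calibration of Beyssac et al.: T(degC) = -445*R2 + 641
    (absolute uncertainty about +/-50 degC). Verify coefficients before use."""
    return -445.0 * np.asarray(r2) + 641.0

# Illustrative R2 values for one sample (n spectra); not measured values from this study.
r2_values = np.array([0.18, 0.22, 0.25, 0.30, 0.21, 0.27])
temps = rscm_temperature(r2_values)

mean_t = temps.mean()
se_t = temps.std(ddof=1) / np.sqrt(temps.size)   # standard error of the mean
print(f"T = {mean_t:.0f} +/- {se_t:.0f} degC (SE), range {temps.min():.0f}-{temps.max():.0f} degC")
```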
Ar-Ar Dating
More than 50 estimates for the age of mineralization exist for the Olympiada deposit based on various techniques, whereas the age data on the Eldorado deposit are very restricted.
Gold at the Eldorado deposit mostly resides in quartz rather than in sulfides and is of two generations, associated with carbon-saturated quartz and early sulfides (arsenopyrite and pyrite) or with late complex ore mineralization (sphalerite, chalcopyrite, and galena). Gold of the first generation is invisible, with small, more strongly deformed, and uniformly distributed particles, and makes up the greatest part of the gold reserves, while the particles of second-generation gold are coarser. Table 5. Raman spectra positions, FWHM, and estimated temperatures. Notes: R1 and R2 reported here are averages of the Raman spectra collected per sample. Uncertainties are represented by standard errors (SE), which were determined by dividing the standard deviation of the measurements by the square root of the number of measurements. 40Ar/39Ar dating was performed for clarification of the ore processes of the Eldorado deposit. For dating, a sample of a quartz vein with fragments of the host garnet-biotite schists was taken. Quartz veins in the garnet-biotite schists do not cross the schistosity of the host rocks and contain lens-shaped fragments of them; meanwhile, they are deformed conformably with the host schists and often form lenses in them (Figure 9a). No retrograde changes, such as secondary chlorite, were noticed (Figure 9b). Quartz contains inclusions of large arsenopyrite crystals (up to 1 cm) and is cut by a light-green sericite vein, which contains arsenopyrite grains as well (Figure 9c). The concentration of gold in the sample is about 5.3 ppm. For dating the younger age limit of the ore mineralization, the fine-grained sericite was taken. The obtained plateau includes 58.4% of the total argon and indicates an age of 798.6 ± 5.3 Ma (Figure 9d, Table 6). Table 6. Ar-Ar sericite data.
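For readers unfamiliar with plateau ages, a common way to compute one is as the inverse-variance weighted mean of the ages of contiguous, concordant heating steps, together with the fraction of 39Ar they carry. The sketch below illustrates this approach with hypothetical step data; it is not the measured data behind the 798.6 ± 5.3 Ma result, nor necessarily the exact reduction procedure used.

```python
import numpy as np

def plateau_age(step_ages, step_errors_1s, ar39_fractions, plateau_mask):
    """Inverse-variance weighted mean age over the plateau steps (errors at 1 sigma)."""
    ages = np.asarray(step_ages)[plateau_mask]
    errs = np.asarray(step_errors_1s)[plateau_mask]
    w = 1.0 / errs**2
    age = np.sum(w * ages) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    ar39 = np.asarray(ar39_fractions)[plateau_mask].sum()   # fraction of 39Ar in the plateau
    return age, err, ar39

# Illustrative step data only (Ma), not the values of Table 6.
ages = [805.0, 799.5, 798.0, 799.2, 790.0]
errs = [6.0, 4.5, 4.0, 5.0, 7.0]
f39  = [0.10, 0.20, 0.25, 0.15, 0.30]
mask = np.array([False, True, True, True, False])  # contiguous, concordant steps

print(plateau_age(ages, errs, f39, mask))
```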
P-T Conditions of Metamorphism
Conventional thermometry data indicate that the metamorphism of the garnet-biotite and garnet-staurolite schists occurred at temperatures of about 530-570 °C and pressures mainly around 6-7 kbar, and that conditions were very similar for both deposits. Similar P-T parameters were obtained in earlier works (see the review of Sazonov et al. [2]), and they correspond to the P-T parameters of Barrovian metamorphism estimated in the Mayakon area [20] (Figure 10).
Late chlorite is typical of the schists from the Olympiada deposit; the relatively high temperatures obtained by chlorite thermometry (more than 350 °C) indicate that the chlorite was probably formed during low-temperature metamorphism rather than in a hydrothermal environment.
Typical prograde metamorphic zoning in garnets, with decreasing spessartine and increasing almandine from core to rim and an almost homogeneous grossular content, indicates a single stage of prograde metamorphism. The rocks show no evidence of the early HT/LP metamorphism reported in the nearby area [20], or the grade of the early metamorphism was not higher than greenschist facies. This shows that the early stage of metamorphism described over the whole YR [44] probably has a patchy distribution.
Detailed studies of the metamorphic processes of the Transangarian part of the YR were performed along the Tatarka thrust (Figure 1; review [44] and references therein). Fe- and Al-rich metapelitic schists of the Korda Formation show petrological evidence of two superimposed metamorphic events. An early middle Proterozoic event produced low-pressure, andalusite-bearing assemblages at c. 3.5-4 kbar and 540-560 °C. During a subsequent late Proterozoic event, a moderate-pressure (Barrovian-type), regional metamorphic overprint produced kyanite-bearing mineral assemblages that replaced minerals formed in the low-pressure event [21]. Due to these two metamorphic events, a specific chemical zoning with a sharp increase in the grossular component from core to rim developed in garnet grains [20,21].
Figure 10. Pressure-temperature diagram with parameters of metamorphism of the Mayakon area schists (rectangles) [20] and the Olympiada and Eldorado schists (crosses). The grey arrow shows the P-T path of the Mayakon area schists from the early HT/LP metamorphism to the late Barrovian-type metamorphism [20].
The early stage is considered to have occurred as a result of the orogeny during the late Mesoproterozoic-early Neoproterozoic. This is supported by earlier U-Th-Pb, Rb-Sr, and K-Ar data on granite-gneiss domes (1100-950 Ma) and the more recent single-zircon (U-Pb SHRIMP II) and 40Ar/39Ar dating of metapelites, metabasites, and rapakivi granites (1140-870 Ma) [13,22,45]. Age estimates of around c. 1050 Ma were obtained by U-Th-Pb dating of xenotime inclusions in the cores of zoned garnet grains [21]. The age of the second (Barrovian-type) metamorphic episode is estimated in the range from 850 to 800 Ma. Age estimates are based on CHIME monazite dating [21] and abundant Ar-Ar data on biotite from metamorphic schists around the whole YR [13,20-22,30].
This review indicates a strong correlation of Barrovian-type metamorphism with the time interval 800-850 Ma in the YK collisional orogen. Mineral assemblages and compositional zoning in garnets from both the Eldorado and Olympiada deposits indicate that the rocks underwent only one episode of metamorphism, with P-T parameters corresponding to the thermal gradient of Barrovian-type metamorphism, and do not show evidence of the early stage. In the light of regional studies, we suggest that the age of around 840 Ma obtained for the host schists of the Eldorado deposit should be considered the age of a single metamorphic event, not of a local dynamometamorphism as was suggested by Gibsher et al. [30], and that there is no evidence of early metamorphism with an age of around 1050 Ma.
Carbonaceous Material Thermometry
Previous studies [46] have shown that a large range of crystallinity is characteristic of carbonaceous matter of organic origin, in contrast to that precipitated from the fluid phase. The temperature obtained on the basis of CM Raman spectra falls in a wide range of about 100-150 °C around the mean temperature for each sample, which suggests varying degrees of graphitization of the CM particles in the sample. The large scatter of CM crystallinity inside a single mineral phase indicates that it does not depend on the structural position in the rock. This range exceeds the ±50 °C dispersion of the calibration points of the CM thermometer suggested by Beyssac et al. [3]. As no different morphological types of CM were found in the samples, this heterogeneity may be due to heterogeneity of the carbonaceous matter or to different orientations of microdomains inside a single CM particle. The influence of the mineral matrix or of the composition of the metamorphic fluids is not excluded [47].
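For orientation, the calibration of the CM Raman thermometer proposed by Beyssac et al., as it is commonly cited in the literature (quoted here from general knowledge, not from this paper), relates peak metamorphic temperature to the R2 peak-area ratio of the defect and graphite bands:

```latex
% Commonly cited Beyssac et al. calibration of the CM Raman thermometer
% R2 is the peak-area ratio of the defect (D1) and graphite (G, D2) bands
R_2 = \frac{A_{D1}}{A_{G} + A_{D1} + A_{D2}}, \qquad
T(^{\circ}\mathrm{C}) \approx -445\,R_2 + 641 \quad (\pm 50\ ^{\circ}\mathrm{C};\ \text{nominal range } \sim 330\text{--}650\ ^{\circ}\mathrm{C})
```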
Comparison of data obtained by conventional and CM thermometry shows that the Grt-Bt temperature estimates are generally close to the highest temperatures obtained by CM thermometry (Figure 8) and most CM temperature estimates are much lower than data of Grt-Bt thermometry. This is most pronounced for samples from the Olympiada deposit (Figure 8a-c).
These data indicate that thermometry using Raman spectroscopy of carbonaceous material is a reliable tool for temperature estimates in the temperature range 500-600 °C, but it can only be used if there is a sufficiently large number of analyses.
Ar-Ar Dating
The sericite vein that was used for Ar-Ar dating cuts the host gray quartz vein and trapped fragments of biotite schist. Arsenopyrite grains are located in both the quartz and sericite veins (Figure 9), which implies sericite growth simultaneously with or later than arsenopyrite. This is consistent with the Ar-Ar date of 798.6 ± 5.3 Ma as the age of the quartz-gold-arsenopyrite mineralization or its younger age limit. These data are similar to the ages of the quartz-gold-arsenopyrite stage (795 Ma) and the gold-polysulfide stage (780 Ma) obtained earlier by Sazonov et al. [7].
Correlation of the Mineralization Periods with the Main Tectonomagmatic Events
Correlation of the mineralization periods with the main tectonomagmatic events [13,14,22,27,28,30] in the region shows the following. In the early Neoproterozoic (1050-950 Ma), the terrigenous strata of the Sukhoi Pit Group were deformed and metamorphosed as a result of orogeny. These processes were most intense in the Tatarka-Ishimba fault system. In the earlier, syncollisional, period of this stage, the formation of Teya-type granite-gneiss domes was accompanied by extensive pegmatization [13]. The late collisional period (880-860 Ma) was marked by the formation of K-Na granitoid plutons of the Kalama type (Teya-Eruda complex) [13,27]. Premineral metasomatites formed in a large contact aureole, in the zone where these plutons influenced enclosing black shale-terrigenous strata. They host Au fields and deposits, which formed later.
Barrovian-type metamorphism took place at ~860-800 Ma [21,23]. It is genetically related to thrusting and the final stage in the evolution of a collisional orogen. The following two stages of metamorphism were distinguished [23]. The early stage of collisional metamorphism, with a peak around 860-850 Ma, was related to mostly westward thrusting of blocks of the Siberian craton; the late collisional stage, around 800 Ma, was related to eastward thrusting of the Central Angara terrane onto the Siberian craton [21].
The formation of quartz-vein zones (Sovetskoe, Eldorado, and other deposits) (830-820 Ma) correlates well with that of thrust nappes. The paragenetic relationship between quartz veining and thrust nappes is confirmed by the fact that quite complex synfolding veins have obvious features of metamorphic bodies [48,49]. The late collisional metamorphism (802-798 Ma), caused by eastward movements of blocks in the zone of high-order splays, developed at this stage [23]. The formation of a bimodal dike swarm dated at ~790 Ma is related to Neoproterozoic extension along the western margin of the Siberian Craton and marks the upper age limit of collisional metamorphism. In the Late Neoproterozoic, rift and intraplate magmatism was most intense and frequent in the Tatarka-Ishimba fault system. Here, we distinguish four stages in the formation of rift structures, accompanied by intraplate magmatism at 780, 750, 700, and 670-650 Ma. The formation of gold-sulfide ores (720-711 Ma) parallels the initiation and evolution of rift structures (Uvolga, Indola, Talovka grabens, Teya-Chapa trough) (720-700 Ma) as well as intraplate subalkaline granitoid and alkaline magmatism (Kutukas and Gurakhta granitoid complexes, 690-700 Ma; Zakhrebetnaya and Middle Tatarka complexes, 700-710 Ma). The time of Au-Sb mineralization correlates with the pre-Vendian orogeny, the initiation of fault grabens infilled with the Chapa Group sediments, and the emplacement of alkalic-ultramafic intrusions (670-650 Ma). The latest postmineral alterations of mineral matter may be related to the tectonomagmatic processes manifested in syenite-alkali syenite (Middle Vorogovka complex) and subalkalic granitoid (Tatarka massif) intrusions with an age of 630-620 Ma.
This review indicates that the most productive quartz-gold-arsenopyrite stage correlates well in time with the collisional stage (800-850 Ma) and the formation of Barrovian-type metamorphic complexes over the whole YK (Figure 11), whereas the closest magmatic events occurred much earlier (Teya-Eruda complex) or much later (Ayahta complex). This implies that metamorphic processes can play a key role in forming YK gold deposits.
Figure 11. The time correlation scheme of main magmatic and metamorphic events with mineralization stages of gold deposits of the Yenisey Ridge.
Conclusions
(1) The host rocks of both the Olympiada and Eldorado gold deposits were metamorphosed during a single episode of Barrovian-type metamorphism at similar temperatures of 530-570 °C. The rocks do not show any traces of the early HT/LP metamorphism observed in other areas of the YK. The growth of the late chlorite after the peak metamorphic minerals occurred at temperatures higher than 350 °C, probably during retrograde metamorphism.
(2) Carbonaceous material thermometry gives a wide range of temperatures, around 90-150 °C, for each sample. The highest temperatures are close to the peak metamorphic temperatures estimated by garnet-biotite thermometry. We suggest that thermometry using Raman spectroscopy of carbonaceous material is a reliable tool for temperature estimates, but it can only be used if there is a sufficiently large number of analyses.
(3) The new 40/39Ar age data on the ore-forming processes of the Eldorado deposit, around 796 Ma, are very close to the age of the Barrovian-type metamorphism estimated in the area under study and around the whole Yenisey ridge. This may indicate their genetic affinity.
(4) The review of magmatic and metamorphic events and ore-forming processes of the Yenisey ridge indicates that the most productive ore stage (gold-sulfide-quartz) correlates well in time with the regional metamorphism of Barrovian type. This indicates that metamorphic processes can play a key role in forming the gold deposits of the Yenisey ridge.
Conflicts of Interest:
The authors declare no conflict of interest. | 7,952 | 2021-11-02T00:00:00.000 | [
"Geology"
] |
Application of Artificial Intelligence Models for Evapotranspiration Prediction along the Southern Coast of Turkey
Evapotranspiration is one of the most significant factors in the hydrological cycle; it has a great influence on water resource planning and management, agriculture and irrigation management, and other processes in the hydrological sector. In this study, an efficient and local predictive model was established to forecast the monthly mean ETo over Turkey based on the data collected from 35 locations. For this purpose, twenty input combinations including hydrological and geographical parameters were introduced to three different approaches called multiple linear regression (MLR), random forest (RF), and extreme learning machine (ELM). Moreover, in this study, a large investigation was carried out, involving the establishment of 60 models and their assessment using ten statistical measures. The outcome of this study revealed that the ELM approach achieved highly accurate estimations in accordance with the Penman–Monteith formula as compared to other models such as MLR and RF. Moreover, among the 10 statistical measures, the uncertainty at 95% (U95) indicator showed an excellent ability to select the best and most efficient forecast model. The superiority of ELM in the prediction of mean monthly ETo over the MLR and RF approaches is illustrated in the reduction of the U95 parameter by 49.02% and 34.07% relative to the RF and MLR models, respectively. Furthermore, it is possible to develop a local predictive model with the help of a computer to estimate ETo using the simplest and cheapest meteorological and geographical variables with acceptable accuracy.
Introduction
1.1. Background. Global warming has become a great concern of researchers and world leaders. It is well known that the Earth's surface temperature has increased significantly during the last decades [1,2]. Water storage, hydrological cycles, and, consequently, water availability are directly affected by global warming [3][4][5]. Thus, one of the most essential indicators of climate change is the reference evapotranspiration (ETo), which is considered the most complicated element in the hydrological cycle [6][7][8].
ETo mainly occurs due to two complicated processes. The first is when water evaporates from the surface of the soil, lakes, rivers, etc., and this process is called physical evaporation. The second process is the transpiration phenomenon in crops and plants, which is called biological transpiration [9,10]. Undeniably, the evaporation process requires energy to convert water from the liquid phase to the vapor phase. Therefore, the main parameters that affect the ETo process are solar radiation, wind speed and direction, air temperature, and humidity [9][10][11]. In addition, ETo also represents the link between the surface energy and carbon cycles [12,13]. Based on the stated literature, precise measurement and prediction of ETo are essential for quantifying surface energy and water reserves worldwide [14][15][16].
1.2. Literature Review
Due to the significant effect of ETo on climate change, Earth temperature, crops and plants, water management, and runoff quantity, many researchers have studied ETo prediction over the last decades [9,[22][23][24]].
The Penman-Monteith (P-M) is the most widely used model, and it is considered a physical ETo model as it is an approximate linearized solution governing the energy balance, thermodynamic state, and vertical heat and water-vapor diffusion [9,23]. However, the P-M requires many meteorological data to be applied, which can be considered a drawback of this equation [25,26]. In any case, there are many models applied to estimate ETo around the world. Examples are the constant heat method including heat pulse [27,28] and the Shuttleworth-Wallace (S-W) method to estimate the transpiration from plants [29][30][31]. It is worth mentioning that the number of empirical equations for modeling evaporation has exceeded 100 due to the importance of ETo measurements and the variety of meteorological data around the world. Therefore, it is impossible to decisively compare these models [32][33][34].
Recently, the development of artificial intelligence has received significant attention from communities in the hydrological and environmental sciences, including water treatment [35,36], hydrology [37][38][39][40][41], water reservoir optimization [42][43][44], remote sensing applications [45,46], etc. Consequently, due to the highly nonlinear characteristics associated with ETo data, AI technology presents a suitable modeling approach to solve many issues with the empirical equations that had been used before [47]. Kumar et al. utilized the artificial neural network (ANN) in 2002 for predicting ETo, where different ANN architectures were implemented for evaporation simulation. The radial neural network yielded the best results for evaporation simulation, and it determines the number of layers and neurons based on a trial-and-error process [48]. Many researchers have followed in these footsteps in predicting ETo [49][50][51][52]. In addition, an adaptive network-based fuzzy inference system (ANFIS) has been used to predict ETo [53,54]. The ANFIS and ANN techniques and empirical equations were used in the evapotranspiration field, and it was found that the ANFIS and ANN methods were much better than the empirical equations [55,56]. However, it is well established that ANN models easily get stuck in a local minimum, and, therefore, recent studies have employed new models adopting other AI techniques for ETo modeling [47]. Many approaches have been utilized for this purpose, including the support vector machine (SVM) [53,57,58]. The SVM is well known to have a basic form, but one of its drawbacks is the unknown parameters [59]. Another approach in the field of simulating ETo data is genetic programming (GP).
This approach consists of measurement programming capable of obtaining input data and producing a nonlinear interaction between data to determine the outputs [55,[60][61][62]].
In order to enhance the ability of AI models, many algorithms were combined with different AI methodologies, including the use of a wavelet transform regression model [63], wavelets coupled with ANN [64,65], and a wavelet-enhanced extreme learning machine [66]. Others utilized the random forest (RF) algorithm to enhance AI techniques. Due to its success over a variety of datasets, high-precision estimation, a small range of user-defined parameters, the ability to estimate the relative importance of the variables, and its ability to preclude overfitting, the RF approach has become extremely popular in recent years [26,34,67,68]. Recently, due to its higher efficiency and much quicker calculation speed, a newly proposed machine learning technique called the extreme learning machine (ELM) has been confirmed to be a promising ETo estimation tool [69]. First, Abdullah et al. (2015) used ELM to forecast ETo at three Iraqi stations and concluded that the ELM model is highly efficient and computerized with high generalization speeds [70,71]. Ever since, the ELM has been used for ETo predictions by many studies in different climate environments [72][73][74]. To the best of the authors' knowledge, all models presented in the literature were established to simulate evapotranspiration using a single model for each location or case study. Furthermore, some researchers employed modern techniques and used different case studies, but they could not produce a general model that could take into account more than one case study. In this study, an effort is being made to build a robust modeling methodology using a variety of locations along the southern coast of Turkey to create a comprehensive general model to forecast ETo. The other objective of the study is to predict mean monthly ETo from limited data that can be easily obtained.
Motivation of the Study.
Due to the significance of ETo, numerous studies have been conducted to estimate it, using AI techniques and empirical models. In general, these approaches achieved satisfactory success. However, creating one robust model for predicting local ETo throughout a specific country based on data collected from different sites is still considered a challenging issue that needs to be addressed. Moreover, recognizing the statistical measures that can effectively assess the feasibility of a certain model is also very significant in the selection of the best predictive model. Thus, in this study, a broad investigation is performed using three different approaches, multiple linear regression (MLR), random forest (RF), and extreme learning machine (ELM), based on twenty combinations of meteorological and geographical indicators constituting 60 predictive models. Moreover, ten efficient statistical measures were employed to assess the accuracy of the performance of each model separately in accordance with the Penman-Monteith equation. Although the Penman-Monteith formula is well known for the prediction of ETo, it poses some issues regarding the measurement of some factors, which may not be available at every site, such as solar radiation and sensible heat flux into the soil. Therefore, a robust local model is established in this study, which can efficiently predict the local mean monthly ETo over the southern coast of Turkey using conveniently and inexpensively measurable parameters. Furthermore, the outcomes of the models are assessed and validated against the actual ETo values, which are calculated by the Penman-Monteith equation.
Case Study and Data Collection
In this study, the data collected from 35 meteorological stations in Turkey are used. The geographical locations of these stations cover a large area between latitudes 36° and 38°. Figure 1 shows the location of each meteorological station separately. It can be seen that the majority of these stations are located in the south of Turkey on the coast of the Mediterranean Sea. The data collected from the General Directorate of Turkish State Meteorology include several long-term monthly meteorological variables such as temperature, humidity, wind speed, and reference evapotranspiration. Furthermore, the dataset comprises the long-term mean monthly variables covering the period from 1975 to 2010. The highest temperature was recorded at the Mut station at 46.7 °C, while the lowest temperature was recorded at the Goksun station at −33.5 °C. Figure 2 shows the long-term monthly mean ETo over Turkey. It can be observed that the highest value of ETo occurred in July, followed by June, August, and May, consecutively. It is worth mentioning that Turkey primarily has a complex climate due to its location and topography. The Mediterranean climate in southern Turkey is predominant, with warm and dry summers and wet, moderate to cold winters. Continental weather is predominant in central Turkey, with warm and dry summers and cold winters. The oceanic climate in northern Turkey is seen through warm and rainy summers and cold and wet winters. This study ensures that the stations chosen to measure ETo are spread nearly uniformly across southern Turkey in order to determine spatial differences in mean monthly ETo values and their time characteristics.
Due to the lack of measured ETo data, the FAO adopted the P-M equation as the standard methodology for calculating the reference ETo. The FAO56-PM can be used on hourly or daily scales to supply the data needed for machine learning approaches. The equation used here, consistent with the daily units given below, is expressed as follows [7,75]:

ETo = [0.408 Δ (RN − G) + γ (900/(T + 273)) U2 (ea − ed)] / [Δ + γ (1 + 0.34 U2)],

where ETo is the reference evapotranspiration (mm/day), Δ represents the slope of the saturation vapor pressure function at air temperature T (kPa °C⁻¹), RN is the net solar radiation (MJ m⁻² day⁻¹), G is the soil heat flux density (MJ m⁻² day⁻¹), γ is the psychrometric constant (kPa °C⁻¹), T is the mean air temperature (°C), U2 is the average 24-hour wind speed at 2 m height above the ground surface (m s⁻¹), ea is the saturation vapor pressure (kPa), and ed is the actual vapor pressure (kPa). Finally, it is important to mention that the dataset was subjected to normalization (between 0 and 1) for all input variables and their corresponding targets. This process is very important in order to boost the predictive model performance. Later, the data are simulated using three modeling approaches, namely, RF, ELM, and MLR.
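As an illustration, the sketch below evaluates the standard FAO-56 Penman-Monteith daily form with the variable names defined above; it is a generic implementation with invented sample values, not the authors' code (the study itself was carried out in MATLAB).

```python
def et0_penman_monteith(delta, Rn, G, gamma, T, u2, e_a, e_d):
    """FAO-56 Penman-Monteith reference evapotranspiration (mm/day).

    delta : slope of the saturation vapour-pressure curve (kPa/degC)
    Rn    : net radiation (MJ m-2 day-1),  G: soil heat flux density (MJ m-2 day-1)
    gamma : psychrometric constant (kPa/degC), T: mean air temperature (degC)
    u2    : wind speed at 2 m height (m/s)
    e_a   : saturation vapour pressure (kPa), e_d: actual vapour pressure (kPa)
    """
    numerator = 0.408 * delta * (Rn - G) + gamma * (900.0 / (T + 273.0)) * u2 * (e_a - e_d)
    denominator = delta + gamma * (1.0 + 0.34 * u2)
    return numerator / denominator

# Hypothetical example values, for illustration only
print(et0_penman_monteith(delta=0.12, Rn=15.0, G=0.5, gamma=0.066,
                          T=22.0, u2=2.1, e_a=2.64, e_d=1.80))
```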
Methodology
3.1. Random Forest. Random forest (RF) is an algorithm that manages regression problems of high dimension. This method is tree-based: each tree uses a random selection of variables, and the forest is established from several regression trees clustered together [34,76]. Each tree is grown with a random subset of variables that will be used to determine the result of the prediction. In the random forest learning process, two significant parameters are identified: the first is the number of trees (n_tree) and the second is the number of variables considered in each split (m_try). After fitting the single trees into the ensemble (bagging procedure), the final decision is made by averaging the outputs. The bias of the bagged trees is equal to that of a single tree, while the variance is reduced as the correlation between the trees is reduced [77].
For regression-based RF formation, in order to obtain the numerical value of the tree predictor h(X, Θ), the process starts by growing trees on the basis of a random vector Θ. For any numerical predictor, the mean squared generalization error can be expressed as follows [78]:

PE*(tree) = E_{X,Y} (Y − h(X, Θ))².   (2)

The RF predictor is formed by averaging over the individual trees. In this respect, the following theorems apply. Theorem 1. As the number of trees in the forest increases, the generalization error converges almost surely to

PE*(forest) = E_{X,Y} (Y − E_Θ h(X, Θ))².

The right-hand side of this equation is the generalization error of the forest. Similarly, the average generalization error of a single tree can be written as PE*(tree) = E_Θ E_{X,Y} (Y − h(X, Θ))². Theorem 2. If we assume E[Y] = E_X[h(X, Θ)] for every Θ, then

PE*(forest) ≤ ρ̄ PE*(tree),

where ρ̄ represents the weighted correlation [76].
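As a hedged illustration of the two tuning parameters just mentioned, the sketch below uses scikit-learn's RandomForestRegressor, where n_estimators plays the role of n_tree and max_features the role of m_try; the data and settings are invented placeholders, not those of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 6))                                  # placeholder predictors (e.g., Tmax, Tmin, RH, wind, lat, lon)
y = X @ rng.random(6) + 0.1 * rng.standard_normal(200)    # synthetic target standing in for ETo

# n_estimators corresponds to n_tree, max_features to m_try
rf = RandomForestRegressor(n_estimators=500, max_features=2, random_state=0)
rf.fit(X, y)
print(rf.predict(X[:3]))
```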
Extreme Learning Machine (ELM).
The ELM, an advanced learning algorithm, is a machine learning approach built on a single-layer feedforward neural network (SLFN) [79]. ELM's primary strength is that the input weights are determined arbitrarily, while the output weights are computed analytically using the Moore-Penrose approach (Huang et al., 2006). The SLFN combines additive hidden nodes with an activation function and can be represented mathematically as

f(x) = Σ_{i=1..L} βi G(ai, bi, x),   (6)

where f(x) represents the ELM model output function, x represents the input variable, ai and bi represent the learning parameters of the hidden nodes, and L stands for the number of hidden nodes. The βi governs the connecting weight between the i-th hidden node and the output nodes. The output vector G(ai, bi, x) denotes the output of the i-th hidden node, where ai and bi represent the weight and bias values of each i-th node in the hidden layer. In the ELM algorithm, the input-layer weights and bias values are randomly generated. Figure 3 shows the basic structure of the ELM. For the input and output variables, the N arbitrary distinct samples are denoted as (xi, yi). Based on the above, equation (6) can be rewritten as Hβ = Y, where the hidden-layer output matrix (H) has the entries Hji = G(ai, bi, xj). In addition, G represents the activation function; in the current study, the log-sigmoid transfer function is used, which is expressed as g(y) = 1/(1 + e^(−y)), where y = a·x + b as indicated in equation (6). An SLFN with L hidden nodes can be trained to reproduce the training samples with zero learning error [80]. In addition, even with fewer hidden neurons (L) than the number of distinct samples (N), the ELM assigns random parameters to the hidden nodes, and the output weights can ultimately be calculated by means of the pseudoinverse of H, which yields a bounded error ε > 0. The hidden-node parameters (weights and biases) are given random values during the training phase [81].
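To make the training step concrete, here is a minimal generic ELM sketch (not the authors' implementation): input weights and biases are drawn at random and only the output weights β are obtained analytically with the Moore-Penrose pseudoinverse, as described above; the data, the number of hidden nodes L, and the seeds are placeholders.

```python
import numpy as np

def elm_fit(X, y, L=30, seed=0):
    """Train a minimal ELM: random hidden weights, analytic output weights."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(-1.0, 1.0, size=(X.shape[1], L))   # random input weights
    b = rng.uniform(-1.0, 1.0, size=L)                  # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ a + b)))              # log-sigmoid hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                         # Moore-Penrose pseudoinverse
    return a, b, beta

def elm_predict(X, a, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ a + b)))
    return H @ beta

# Hypothetical usage with synthetic data
X = np.random.default_rng(1).random((100, 5))
y = np.sin(X.sum(axis=1))
a, b, beta = elm_fit(X, y, L=40)
print(elm_predict(X[:3], a, b, beta))
```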
Multiple Linear Regression.
Using the local climatic parameters of minimum and maximum temperature, relative humidity, wind speed, and sun hours, the ETo data can be formulated with multiple linear regression (MLR) models; herein, ETo is calculated by MLR. The MLR can be employed to describe the relationship between dependent and independent variables as a multivariate statistical tool described by the following equation:

Y = α0 + α1 X1 + α2 X2 + ... + αn Xn + e,

where the response variable is represented by Y, which is the predicted mean monthly ETo. The independent variables (predictors) are represented by X1, X2, ..., Xn, and the regression coefficients are represented by α0, α1, α2, ..., αn, which can be obtained by minimizing the errors between the estimated and actual values of ETo [82], where e and yi represent the error and the actual values of ETo, respectively.
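A minimal sketch of fitting the MLR coefficients by ordinary least squares is given below; it is a generic illustration with placeholder names, not the study's code.

```python
import numpy as np

def mlr_fit(X, y):
    """Ordinary least-squares fit of y = a0 + a1*x1 + ... + an*xn."""
    A = np.column_stack([np.ones(len(X)), X])        # prepend intercept column
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)   # [a0, a1, ..., an]
    return coeffs

def mlr_predict(X, coeffs):
    return coeffs[0] + X @ coeffs[1:]

# Hypothetical usage with synthetic data
X = np.random.default_rng(3).random((50, 4))
y = 1.5 + X @ np.array([0.8, -0.2, 0.5, 1.1])
print(mlr_fit(X, y))
```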
Model Evaluation.
The selection of the best predictive model is of great importance for achieving highly accurate predictions. Therefore, in this study, ten statistical parameters have been used to assess the performance of each predictive approach [83,84]. The quantitative parameters are as follows: (i) Mean absolute error (MAE). It is expressed as the sum of the absolute errors divided by the total number of observations. This indicator is widely used in water resources and hydrology to assess predictive models because it provides significant information on how closely the simulated data points match the actual ones.
The mathematical expression of MAE is [85,86] MAE = (1/n) Σ |e_i|, where e is the forecast error. (ii) Root mean square error (RMSE). It is a statistical parameter often used to compare the forecasting errors of several models; a lower RMSE value usually indicates better predictions. The RMSE can be derived as [87,88] RMSE = sqrt((1/n) Σ (ETo(i,m) − ETo(i,p))²). (iii) Mean absolute relative error (MARE). This indicator expresses the absolute relative computed error (difference between actual and forecasted points relative to the actual value); expressed as a percentage, it gives MARE = (1/n) Σ |(ETo(i,m) − ETo(i,p)) / ETo(i,m)|. (iv) Root mean square relative error (RMSRE), expressed as RMSRE = sqrt((1/n) Σ ((ETo(i,m) − ETo(i,p)) / ETo(i,m))²). (v) Relative root mean square error (RRMSE), calculated by dividing the RMSE by the mean of the actual data points, RRMSE = 100 × RMSE / mean(ETo,m). This parameter is very important in assessing the accuracy of a model. According to [89], a model is considered excellent if RRMSE < 10%, good if 10% < RRMSE < 20%, fair if 20% < RRMSE < 30%, and poor if RRMSE > 30%. (vi) Mean bias error (MBE). It discloses the tendency of a model and explains whether it overestimates or underestimates the data; it is expressed as MBE = (1/n) Σ (ETo(i,p) − ETo(i,m)). (vii) Coefficient of correlation (CC). The CC is a significant factor that can be used to efficiently quantify the strength of the relationship between predicted and actual data points [82,90]. (viii) Maximum absolute relative error (erMAX), expressed as erMAX = max |(ETo(i,m) − ETo(i,p)) / ETo(i,m)|. (ix) t-statistic (t-stat). This statistical test is beneficial for validating and testing broadband models [91]; as the t-stat indicator approaches zero, the desired model is achieved. It is expressed as t-stat = sqrt((n − 1) MBE² / (RMSE² − MBE²)), where n is the total number of observations and ETo(i,m) and ETo(i,p) are the i-th actual and estimated mean monthly reference evapotranspiration, respectively. (x) Uncertainty at 95% (U95). This quantitative criterion is very efficient for selecting the best predictive model among several models; the U95 indicator provides useful information on the deviation of a certain model [91]. The U95 is calculated as U95 = 1.96 × sqrt(SD² + RMSE²), where SD represents the standard deviation of the difference between the true and simulated data points, and the value 1.96 is the coverage factor corresponding to a 95% confidence level. Last, for visual evaluation, boxplots and scatter plots are presented in the Results and Discussion section. The coefficient of determination (R²) is calculated as R² = 1 − Σ (ETo(i,m) − ETo(i,p))² / Σ (ETo(i,m) − mean(ETo,m))².
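The indicators above can be computed directly from the observed and predicted series. The sketch below follows the common textbook forms (and the U95 and t-stat forms used in the broadband-model literature); it is an illustrative assumption rather than a verbatim transcription of the paper's equations.

```python
import numpy as np

def evaluate(obs, pred):
    """Compute the main accuracy indicators for observed vs. predicted ETo series."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    n = obs.size
    err = pred - obs
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mbe = np.mean(err)                                           # positive = overestimation
    rrmse = 100.0 * rmse / np.mean(obs)                          # relative RMSE (%)
    cc = np.corrcoef(obs, pred)[0, 1]                            # correlation coefficient
    t_stat = np.sqrt((n - 1) * mbe ** 2 / (rmse ** 2 - mbe ** 2))
    u95 = 1.96 * np.sqrt(np.std(err, ddof=1) ** 2 + rmse ** 2)   # uncertainty at 95%
    return dict(MAE=mae, RMSE=rmse, MBE=mbe, RRMSE=rrmse, CC=cc, t_stat=t_stat, U95=u95)

# Hypothetical usage
print(evaluate(obs=[3.1, 4.2, 5.0, 6.4], pred=[3.0, 4.5, 4.8, 6.1]))
```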
Model Development.
To accurately predict the monthly mean ETo, three different approaches were applied, namely MLR, RF, and ELM. Given the nature of the dataset, which was collected from different locations and sites with different characteristics, creating a reliable predictive model is a significant and difficult task. Therefore, the current work is carried out by establishing 20 different input groups for each predictive approach, as shown in Table 1.
The dataset is randomly divided into two phases called the training set and the testing set. The training set is used for the calibration process and model construction, while the testing set is used for examining the performance accuracy of each candidate model. Table 2 shows the statistical description of the variables used in this study. In addition, Table 3 gives information about the Pearson correlation coefficient between all variables and ETo used in the suggested case study. As shown in the table, the maximum temperature variable has the highest correlation with ETo for both the training and testing datasets. The table shows that the longitude variable has the lowest correlation coefficient with ETo, of 0.09, while the maximum temperature is significantly correlated with ETo, with an R of 0.841. The process of selecting the most accurate model is relatively tough with ten statistical matrices and three different approaches including several input combinations. Thus, the assessment is carried out in two stages. First, during the training set, the best three models from each approach (nine models in total) are selected. In the second step, it is crucial to monitor the performance of the models selected in the training phase during the testing phase and to select the three most efficient models for each approach. Finally, this process provides much information about each adopted approach, apart from providing a clear and realistic impression of the performance of each predictive model separately.
Furthermore, a robust and effective performance measure is used to assess the capability of each model. Among these statistical metrics, the uncertainty at 95% (U95) has been used to assess the performance accuracy of each model and subsequently recognize the best predictive model. Figure 4 describes the prediction process of ETo along the southern coast of Turkey. Last, it is imperative to emphasize that all input variables and their corresponding targets are normalized between 0 and 1. This process is very important to enhance the effectiveness of the predictive models [92,93]. All models are developed using MATLAB 2017a.
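A sketch of the pre-processing described above (0-1 normalization of inputs and targets followed by a random 75/25 train-test split) is given below; the array shapes and column meanings are placeholders, and scikit-learn is used here only for illustration (the study itself used MATLAB).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# X holds one input combination (e.g., Tmax, Tmin, RHmax, RHmin, Ws, lat, lon), y is ETo
X = np.random.default_rng(2).random((420, 7))   # placeholder data
y = X[:, 0] * 3.0 + 1.0                          # placeholder target

x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()
Xn = x_scaler.fit_transform(X)                                   # scale inputs to [0, 1]
yn = y_scaler.fit_transform(y.reshape(-1, 1)).ravel()            # scale target to [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(Xn, yn, test_size=0.25, random_state=0)
```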
Results and Discussion
This section of the study is dedicated to illustrating the forecast results obtained for mean monthly ETo over Turkey via three different predictive models, namely, MLR, RF, and ELM. Twenty scenarios with different input variables have been introduced to the mentioned models. The resulting 60 predictive models are assessed and validated against the Penman-Monteith equation using ten efficient statistical indicators and graphical presentations. A qualified model is one that meets the requirements of most of the mentioned statistical parameters. The dataset collected from 35 stations is divided randomly into two sets: the training phase (75%) is used to calibrate the models, and the rest of the data is used for validation purposes.
In terms of quantitative assessment, Table 4 gives further information about the performance of the proposed 60 models based on different input variables during the training phase. The three predictive modeling approaches achieved different accuracies according to the statistical measures. It is difficult to rank the models by the achieved accuracies, but the RF approach showed relatively the best prediction accuracy. However, to conduct a fair comparative analysis of each adopted approach, the best three models with different input combinations were selected from each modeling technique for further comparison.
Thus, among the 60 predictive models, only the best nine models are selected for the quantitative analysis. Reducing the number of models has many advantages; for instance, it allows a powerful and focused comparison and, thereby, an optimal choice of the best statistical matrices. Table 5 shows the performance accuracy of the three different approaches based on several input variables.
In general, it can be noted from Table 5 that the most frequent combinations are C1, C4, and C7, consecutively. This means that these combinations include all useful parameters that have an effective impact on mean monthly ETo. At a glance, it can be said that the RF models provided more accurate estimations of mean monthly ETo than the MLR and ELM approaches. Here, it is essential to mention that the uncertainty at 95% (U95) indicator is the most efficient factor, playing a major role in the evaluation process of the models. Essentially, it should be taken into consideration that these promising estimations were obtained during the training phase, which is primarily employed to calibrate the models based on the known input variables and targets. However, the testing step is vital in assessing the performance of a model since it examines the model's accuracy on unseen data. After demonstrating the performance of the models during the calibration (training) phase, it is very important to see the accuracy of the adopted models during the testing phase. Table 6 demonstrates the performance skill of each predictive model using different input parameters.
It is also important to carefully follow up the performance of the nine efficient models that were chosen in the training set (MLR-C1, MLR-C4, MLR-C7, RF-C7, RF-C4, RF-C2, ELM-C1, ELM-C4, and ELM-C14). Besides the mentioned models, three additional models (RF-C5, RF-C6, and ELM-C2) provided satisfactory estimations. The heat-map diagram presented in Figure 5 provides significant information about the best modeling performance based on the ten statistical parameters.
Although the RF models generated acceptable precision during the training set, they exhibited the worst accuracies compared to the ELM and MLR techniques in the testing set. Moreover, these models produced high uncertainty, with U95 values of 19.59, 27.1, 22.74, 22.35, and 26.72% for RF-C7, RF-C4, RF-C5, RF-C6, and RF-C2, respectively. Additionally, the other statistical parameters such as RMSE and RRMSE also gave further information about the weaknesses of the RF models. It is undeniable that the RF approach suffers from an overfitting issue. On the other hand, the MLR models showed much better performance capacity than the RF models. Finally, the ELM models achieved high precision in the prediction of mean monthly ETo in accordance with the P-M equation. Moreover, ELM-C1 is considered the best predictive model, recording the highest CC (0.957) and the lowest RMSE (1.155), MAE (0.946 mm/month), t-stat (10.37), RRMSE (16.54%), and U95 (9.989%), respectively. Moreover, among the ten statistical parameters, the most efficient parameters that can easily recognize the best predictive model are U95, RMSE, and RRMSE. The superiority of the ELM approach was evaluated in accordance with its ability to reduce the most significant statistical measures (U95, RMSE, and RRMSE) throughout the testing phase. The results shown in Figure 6 illustrate the superiority of ELM-C1 over the other predictive models in reducing the values of the mentioned three statistical metrics. The effectiveness of the ELM-C1 model over MLR-C1 appeared strongly in reducing the RMSE and U95 parameters by 10.05% and 34.07%, respectively. Moreover, the prediction accuracy of ELM-C1 accomplished a better outcome when compared with the RF-C7 model, where there was a significant improvement in the reduction of the RMSE and U95 parameters, which reached 16.36% and 49.02%, respectively. The box plot diagram shown in Figure 7 presents the best candidate models, which were employed to predict the average monthly ETo. It can be noted that the RF models could not perform as well as the other approaches such as MLR and ELM. The MLR models, in turn, had modest performance and poorer accuracy compared to the ELM approaches. It can be said that the best-performing approach in the prediction of average monthly ETo is ELM, followed by the MLR and RF techniques, consecutively. ELM-C1 achieved the best estimation accuracy, where the median and interquartile range (IQR) were found to be very close to the actual median and IQR.
Line graphs and scatterplots of the predicted and actual average monthly ETo during the testing phase are provided in Figures 8(a) and 8(b). The scatter plot effectively presents useful visual information on the deviation between observed and predicted values, and the coefficient of determination (R²) shows the agreement between them. Based on Figures 8(a) and 8(b), the proposed ELM-C1 has better prediction performance than the other comparative models in terms of providing a higher value of R² (0.915).
Considering the best ELM models, it can be noted that this approach often requires a relatively higher number of input parameters in comparison with the RF and MLR approaches. The logical explanation of this phenomenon is that the dataset includes 35 locations with different characteristics. Besides, the geographical factors are very important when developing local and robust models based on a dataset collected from several locations.
For further assessment, it is vital to examine the ability of the suggested model (ELM-C1) against several predictive models proposed in previous studies to estimate ETo. In the current study, a single model (ELM-C1) successfully managed to simulate ETo based on the data collected from dozens of stations. In addition to the simulation of 35 stations in one single model, the other interesting feature is that the predictability of the proposed model was high (R² > 0.91).
Conclusion
Evapotranspiration is considered one of the most significant factors in the hydrological cycle. Although the well-known Penman-Monteith equation for computing evapotranspiration exists, there are difficulties in accurately calculating some of its parameters, such as solar radiation and sensible heat flux into the soil. Therefore, in this study, three different approaches were employed, namely ELM, RF, and MLR, based on geographical and meteorological parameters for the prediction of the mean monthly evapotranspiration over the southern Mediterranean coast of Turkey. Besides, twenty different input combinations assigned according to data collected from 35 meteorological stations were established. Ten efficient statistical parameters have been used to assess the 60 predictive models. The outcomes of this study revealed that the ELM approach outperformed the MLR and RF models. In addition, the proposed ELM produced smaller computed errors during the testing set. It is worth mentioning that the U95 measure played a vital role in the selection of the best model among the 60 models established in this study. The other essential observation that can be drawn is that the RF approach provided nearly perfect accuracy during the training phase, but when its performance was examined during the testing phase, the prediction accuracies were very poor and disappointing. Finally, this study suggests the use of ELM for building local models based on different stations and weather conditions. In addition, a study that covers the whole area of the Mediterranean Sea via one robust model is needed. For further investigation, feature selection approaches could be integrated prior to the predictive learning process to extract the essential input variables for the prediction matrix [98,99].
Abbreviations
ETo: Reference evapotranspiration
Tmax: Maximum monthly temperature
Tmin: Minimum monthly temperature
Tmean: Average monthly temperature
Std: Standard deviation
Ws: Wind speed
RHmax: Maximum relative humidity (%)
RHmin: Minimum relative humidity (%)
MAE: Mean absolute error
e: Estimated error between actual and predicted ETo
RMSE: Root mean square error
MARE: Mean absolute relative error
RMSRE: Root mean square relative error
RRMSE: Relative root mean square error
MBE: Mean bias error
CC: Coefficient of correlation
R²: Coefficient of determination
erMAX: Maximum absolute relative error
t-stat: t-statistic
U95: Uncertainty at 95%
ELM: Extreme learning machine
RF: Random forest
MLR: Multiple linear regression
P-M: Penman-Monteith.
Data Availability
All the data are available upon request.
Conflicts of Interest
The authors have no conflicts of interest.
"Mathematics"
] |
Mechanical Analysis and Corrosion Analysis of Zinc Alloys for Bioabsorbable Implants for Osteosynthesis
Zinc alloys have recently been researched intensely for their great properties as bioabsorbable implants for osteosynthesis. Pure zinc (Zn) itself has relatively poor strength, which makes it insufficient for most clinical use. Research has already proven that the mechanical strength of zinc can be enhanced significantly by alloying it with silver. This study evaluated zinc silver alloys (ZnAg) as well as novel zinc silver titanium alloys (ZnAgTi) regarding their mechanical properties for use as bioabsorbable implants. Compared to pure zinc, the mechanical strength was enhanced significantly for all tested zinc alloys. The elastic properties were only enhanced significantly for the zinc silver alloys ZnAg6 and ZnAg9. Regarding the target values for orthopedic implants proposed in the literature, the best mechanical properties were measured for the ZnAg3Ti1 alloy, with an ultimate tensile strength of 262 MPa and an elongation at fracture of 16%. Besides the mechanical properties, the corrosion rates are important for bioabsorbable implants. This study tested the corrosion rates of zinc alloys in PBS (phosphate-buffered saline) solution with electrochemical corrosion measurement. Zinc and its alloys showed favorable corrosion behavior, especially in comparison to magnesium: zinc has a much lower degradation rate and no buildup of hydrogen gas pockets during the degradation process. Altogether, this makes zinc alloys highly favorable for use as a material for bioabsorbable implants for osteosynthesis.
Introduction
Especially in the last two decades, research on bioabsorbable implants has intensified [1][2][3][4] because they offer great advantages in comparison to conventional implants. Conventional implant materials are associated with many adverse effects like stress shielding [5][6][7], chronic inflammation [8,9], soft tissue damage, and stress on the body through metallosis [8,10,11]. Bioabsorbable implants include materials that not only degrade within the body but also offer no harm to the body during corrosion, with a particular focus on how they affect the host's metabolism [12]. In the search for bioabsorbable implants for osteosynthesis, there is still a lot of room for improvement. There is an ongoing search for implants which exhibit mechanical properties that match those of bone and offer the sufficient strength needed. Mechanical stability is critical for fracture healing, as great stability in connection with only a minimal volume change of the fracture gap and without shearing of the fragments leads to optimal conditions for bone healing [13,14]. Reviews have discussed different target values for bone implants. Targets for ultimate tensile strength have been set between 200 and 300 MPa, with an elongation at fracture of around 15% to 20% and a Young's modulus of 10 to 30 GPa [15][16][17][18][19]; Venezuela et al. [20], for instance, proposed target values in this range. During the research on bioabsorbable implants, interest has been increasingly drawn to zinc alloys, as zinc exhibits favorable mechanical properties for use as a bioabsorbable implant while also showing reasonably low corrosion rates [21][22][23]. Furthermore, zinc is not converted to hydrogen during its degradation process, as is the case with magnesium, where there have been issues with hydrogen accumulating in the surrounding tissue and, therefore, possible adverse effects on the process of bone healing [11,24,25]. With an ultimate tensile strength of 100 to 150 MPa [26,27] and a small Young's modulus, zinc exhibits properties that are close to those of bone [27]. Notably, the strength of zinc is considerably lower than the required target values of implants for osteosynthesis. There has been a significant effort put into researching the process of alloying zinc to gain higher strength values and more favorable mechanical properties for use as orthopedic implants [28,29]. For instance, Sikora-Jasinska et al. [30] showed that the ultimate tensile strength of zinc could be increased to over 200 MPa when alloyed with silver, while also keeping its beneficial elastic properties.
Regarding its use as a bioabsorbable implant, zinc is also expected to show good biocompatibility [28,31]. It is closely regulated by zinc transporters and metallothioneins and is one of the most abundant essential elements in the body [32,33]. A daily intake of about 8 to 11 mg is recommended for adults, whereas the maximum daily intake of zinc is limited to 40 mg/day [34,35]. Additionally, studies have shown positive effects of zinc on cell proliferation, differentiation, and mineralization of extracellular matrix as well as bone growth [36][37][38][39][40].
For a better suitability of zinc as a material for implants for osteosynthesis, it can be alloyed with different metals. In this study, we focused on zinc alloys with silver and titanium. Titanium is the most used metal for implants in orthopedics today. It has been used since the 1950s and it has been studied intensively [41]. Its biocompatibility, with a slow corrosion rate and bioinertness, has already been established in the literature [42]. Silver has also been used as an implant material in humans for a long time and has shown very good biocompatibility [43]. Silver is also an essential trace element and it is recommended to consume 0.4 to 27 µg of silver daily [44]. As early as 1924, Zierold et al. [45] proved that silver is excellently suited for implantation in bone. In that study, an improved regeneration of the bone occurred in the cancellous bone, while the cortical regeneration was not influenced. Furthermore, it was demonstrated that silver nanoparticles improved fracture healing in the rat model [46]. In addition, silver has a direct antibacterial effect and is used in therapy against particularly resistant bacteria [43,47,48]. In the present study, novel zinc alloys were biomechanically analyzed and evaluated for future use as bioabsorbable implants in orthopedic fixation devices. Therefore, mechanical properties were measured and compared to target values as well as to the mechanical properties of implants already available for clinical use, like MgYREZr (Syntellix AG, Hannover, Germany). Additionally, corrosion measurements were performed in order to get an idea of the corrosion characteristics of the novel zinc alloys.
Tensile Testing
This study used tensile tests to determine the tensile strength of zinc alloys in comparison to pure zinc. Dog bone-shaped test specimens were manufactured by Limedion GmbH, Mannheim, Germany, according to ISO standard 6892-1:2019 [49], with a grain size of <15 µm. Testing was carried out at room temperature with the material testing machine Z005 (ZwickRoell GmbH & Co. KG, Ulm, Germany) under position control with a nominal stress rate of 30 MPa s⁻¹ until failure. Failure was defined as an increase of strain by 500% relative to the previous measuring point or a stress drop to 2% of the maximum stress. Stress-strain curves were recorded with TestXpert3 software, Version 3.31 (ZwickRoell GmbH & Co. KG, Ulm, Germany) and used to analyze the ultimate tensile strength (UTS) and yield strength (YS). Elongation at fracture (A) was calculated by measuring the test specimens after breakage with the stereomicroscope Olympus SZ61 (Olympus K.K., Shinjuku, Japan) and the software ImageJ (FIJI modification). Tensile testing was carried out with Zn, ZnTi0.5, ZnAg6, ZnAg9, ZnAg1Ti1, and ZnAg3Ti1 alloys.
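For illustration only, the snippet below shows how the ultimate tensile strength can be read from a recorded engineering stress series and how elongation at fracture follows from gauge lengths measured on the broken specimen; the numbers are hypothetical and this is not the TestXpert/ImageJ workflow itself.

```python
def ultimate_tensile_strength(stress_mpa):
    """UTS = maximum of the recorded engineering stress (MPa)."""
    return max(stress_mpa)

def elongation_at_fracture(l0_mm, lf_mm):
    """A (%) from the initial and post-fracture gauge lengths measured on the specimen."""
    return 100.0 * (lf_mm - l0_mm) / l0_mm

print(ultimate_tensile_strength([50, 120, 240, 262, 255]))  # hypothetical stress samples (MPa)
print(elongation_at_fracture(30.0, 34.8))                    # hypothetical gauge lengths (mm)
```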
Samples
The electrochemical measurement was carried out on cylinders of the different alloys (Zn, ZnTi0.5, ZnAg3, ZnAg6, ZnAg1Ti1, and ZnAg3Ti1) with the dimensions d = 6 mm and h = 10 mm and an exposed surface area of 0.2826 cm². The samples were each fixed watertight in a specially made sample holder made of FC52 polyol and FC52 isocyanate (see Figure 1). The samples were placed in this mold with a precise fit and attached to the measuring apparatus using the mold. Each sample was measured twice with 10 min between measurements, and the mean was calculated.
Electrochemical Testing
To determine the corrosion rate of the prepared samples, potentiodynamic polarization tests were carried out in PBS solution (8 g/L NaCl, 0.2 g/L KCl, 1.44 g/L Na2HPO4, and 0.245 g/L KH2PO4) at 37 ± 2 °C and pH = 7.4 ± 0.2. The electrochemical measurement was carried out with a three-electrode measuring system, with a potentiostat/galvanostat PS2000, the corrosion measuring cell KMZ5 from Sensortechnik Meinsberg (Xylem Analytics Germany Sales GmbH & Co. KG, Waldheim, Germany), and the Ag/AgCl reference electrode SE11. A 2 cm × 2 cm Pt foil (Xylem Analytics Germany Sales GmbH & Co. KG, Waldheim, Germany) was used as the counter electrode (see Figure 2). Immediately before the measurements, all samples were polished with 1200 SiC (silicon carbide) paper.
Measurements were conducted at a scan rate of 0.25 mV/s with ±50 mV around the rest potential. The open circuit potential (OCP) was measured for 30 min before the corrosion measurement to ensure the potential stability of the studied system. The corrosion current and the corrosion potential were defined as the intersection of two straight lines. For this purpose, Tafel fittings were applied to the linear sections of the log current-potential curve of the individual measurements in Origin Pro 2020 (OriginLab, Northampton, MA, United States) and their intersection point was determined (see Figure 3). Using Faraday's law, the corrosion rate was calculated with the following equation:

CR = (i_corr × M) / (n × F × ρ),

where i_corr is the corrosion current density, M is the atomic mass, n is the number of electrons involved in the reaction, ρ is the density, and F is 96,485 C/mol. This was calculated for each alloy accordingly (see Table 1).
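To illustrate the Faraday-law conversion from a measured corrosion current density to a corrosion rate, a hedged sketch with the nominal values for pure zinc is given below; the corrosion current density used is invented, and the per-alloy constants would follow the paper's Table 1.

```python
def corrosion_rate_mm_per_year(i_corr_A_cm2, M_g_mol, n_electrons, rho_g_cm3):
    """Faraday-law corrosion rate: i*M/(n*F*rho), converted from cm/s to mm/year."""
    F = 96485.0                                                     # Faraday constant, C/mol
    cm_per_s = i_corr_A_cm2 * M_g_mol / (n_electrons * F * rho_g_cm3)
    return cm_per_s * 3.1536e7 * 10.0                               # seconds/year, cm -> mm

# Hypothetical corrosion current density for pure Zn (M = 65.38 g/mol, n = 2, rho = 7.14 g/cm3)
print(corrosion_rate_mm_per_year(5e-6, 65.38, 2, 7.14))             # ~0.07 mm/year
```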
Statistical Analysis
Statistical analysis was conducted with Microsoft Office Excel ® , Version 2013 (Microsoft Cooperation, Redmond, WA, United States) and Origin Pro 2020 (OriginLab, Northampton, MA, United States). All quantitative data were given as mean ± standard deviation. Statistical significance was determined with unpaired one-way analysis of variance (ANOVA) and the significance level was set to p < 0.05.
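A minimal sketch of the significance test described above (one-way ANOVA at p < 0.05) using scipy, with invented group data purely for illustration:

```python
from scipy.stats import f_oneway

# Hypothetical UTS measurements (MPa) for three alloy groups
zn    = [112, 118, 115]
znag6 = [205, 212, 208]
znag9 = [228, 231, 226]

f_val, p_val = f_oneway(zn, znag6, znag9)
print(f"F = {f_val:.2f}, p = {p_val:.4f}, significant = {p_val < 0.05}")
```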
Tensile Testing
The mechanical properties of zinc and its alloys are shown in Table 2. The comparison revealed an increase of the ultimate tensile strength (UTS) as well as of the elongation at fracture (A). Elongation at fracture was enhanced significantly in comparison to pure zinc only for the zinc silver alloys ZnAg6 and ZnAg9, with p = 0.009 and p = 3.09 × 10⁻⁴, respectively. The UTS was increased significantly (p < 0.05) for all alloys. As seen by comparing the stress-strain curves shown in Figure 4, the ultimate tensile strength was also enhanced with increasing silver content. The alloys ZnAg9 and ZnAg3Ti1 showed higher tensile strengths compared to ZnAg6 and ZnAg1Ti1, respectively. This increase was significant for ZnAg9 in comparison to ZnAg6 (p = 0.02) but not for ZnAg3Ti1 in comparison to ZnAg1Ti1 (p = 0.06). Ultimate tensile strength was also enhanced significantly for ZnAg3Ti1 in comparison to the zinc silver alloys (ZnAg6: p = 1.5 × 10⁻⁵ and ZnAg9: p = 0.048).
Corrosion Measurement
The corrosion measurements are used as an indication of the corrosion processes happening in the body. Table 3 shows the results of the electrochemical corrosion testing. The corrosion rates of zinc were accelerated by alloying it with titanium or silver. This was significant for ZnAg3, ZnAg6, and ZnTi0.5 in comparison to pure zinc with p < 0.05 (see Figure 5). The corrosion rates of the ZnAgTi alloys were not significantly different from the corrosion rate of pure zinc (see Figure 5). The highest corrosion rates were measured for the ZnAg alloys, with a corrosion rate of 0.11 mm/year for ZnAg6 and 0.14 mm/year for ZnAg3, respectively.
Tensile Testing
This study verified that alloying zinc with silver significantly increases the tensile strength in comparison to pure zinc. Tensile strength rose with increasing silver content for both the zinc silver alloys and the zinc titanium silver alloys. This is in accordance with previous studies.
In this study, we measured an ultimate tensile strength of pure zinc of 115 MPa with an elongation at fracture of 10%. Xiao et al. [50] reported very similar properties, with an ultimate tensile strength of 110 MPa and an elongation at fracture of 15% for pure zinc in dog bone-shaped specimens.
For the zinc silver alloys, the processing of the test specimens differed between the studies. The test specimens we used were manufactured by Limedion (Limedion GmbH, Mannheim, Germany) and were hot extruded at 325 °C with an extrusion rate of 25:1, showing a grain size of <15 µm. Sikora-Jasinska et al. [30] used specimens that were hot extruded at 250 °C with an extrusion rate of 14:1, whereas Yang et al. [51] tested specimens that were hot extruded at 260 °C with an extrusion rate of 36:1 and showed a grain size of <10 µm. Hot extrusion is known to enhance the mechanical properties of metal alloys as it refines the grain size and homogenizes the alloy [29]. Homogeneous alloys with smaller grain size showed more favorable mechanical properties [8]. This could be one explanation for the higher mechanical properties measured by Yang et al. [51] in comparison to our measurements.
The elastic properties of zinc in our tests were only significantly altered by alloying with silver for ZnAg6 and ZnAg9. There was no significantly enhanced elongation at fracture for the zinc silver titanium alloys. The measured elastic values for the tested zinc silver alloys can also be confirmed by comparison with the literature. Sikora-Jasinska et al. [30] and Yang et al. [51] reported elongations at fracture of 30% to 40% for their tested zinc silver alloys. The zinc silver alloys tested by us showed an elongation at fracture of 25 to 30% for ZnAg6 and ZnAg9, which is quite similar.
The zinc titanium alloys had a slightly lower elongation at fracture of 16 to 21% in our measurements. These values are very close to and, for ZnAg3Ti1, even meet the target values proposed by Venezuela et al. [20] for orthopedic fixation devices. Additionally, the tested zinc silver titanium alloys showed even greater mechanical strength compared to the zinc silver alloys. ZnAg1Ti1 and ZnAg3Ti1 showed ultimate tensile strengths of 241 and 262 MPa, respectively.
In comparison to the mechanical target values suggested in the literature, the ZnAg3Ti1 and ZnAg6 alloys showed excellent properties. They both met an ultimate tensile strength of 200 to 300 MPa, while also possessing good elastic properties with a reasonably low elongation at fracture. ZnAg3Ti1 also met the criterion of an elongation at fracture of 15 to 18%, with a measured elongation of 16%.
Our zinc silver titanium alloys achieved a combination of good mechanical stability and favorable elastic properties. This also makes them stand out in comparison to the zinc alloys tested in the literature, as seen in Figure 6. In comparison to the reported mechanical values of bioabsorbable alloys, and also with regard to the MgYERZr alloys already in clinical use, the ZnAg3Ti1 alloy displayed especially excellent mechanical properties.
The two alloys with the highest ultimate tensile strengths, approaching 300 MPa, were the ZnAg3Ti1 alloy tested in this study and the MgYERZr alloy used for the MAGNEZIX Pin from Syntellix (Syntellix AG, Hannover, Germany) [57,58] (see Figure 7). This makes the zinc silver titanium alloys suitable for further testing for bioabsorbable implants. Their biocompatibility as well as their corrosion rates are important for future clinical use and need to be evaluated in future testing.
Corrosion Measurements
The measurements showed that the corrosion rates of the zinc alloys can be significantly influenced by the composition of the alloys (see Figure 5). Furthermore, all measured corrosion rates proved to be significantly lower than the corrosion rates of the magnesium alloys [52,59]. Thus, the zinc alloys had a better corrosion resistance than magnesium alloys.
For example, Vojtech et al. [52] reported a significantly higher corrosion tendency for magnesium in contrast to zinc. They described a resting potential for zinc of −0.89 V in SBF at a pH of 7, whereas for magnesium they reported a resting potential of −1.64 V; the resting potential of zinc was thus higher than that of magnesium. The resting potential of zinc in our measurements was −0.97 V and, thus, of a similar magnitude.
Törne et al. [21] reported corrosion rates in PBS solution of zinc of less than 0.1 mm/year. This corresponds closely to the corrosion rate of pure zinc of about 0.04 mm/year measured during our experiment (see Figure 7). Additionally, Hehrlein et al. [60] measured the immersion corrosion rate of ZnAg3 alloy in SBF (simulated body fluid) over a period of 60 days, where they measured a corrosion rate of 0.16 mm/year. This is also very close to the corrosion rate measured by us in PBS.
The corrosion rate of pure zinc can be increased by alloying zinc with silver, as observed for the zinc silver alloys measured by us. We measured the highest corrosion rate for ZnAg3, with 0.14 mm/year, whereas for ZnAg6 the corrosion rate was slightly lower at 0.11 mm/year.
Sikora-Jasinska et al. [30] also measured the corrosion rates for zinc and zinc silver alloys in electrochemical corrosion tests in Hank's solution. For zinc, they determined a resting potential of −0.98 V and a corrosion rate of 0.133 mm/year. As the silver content increased, so did the corrosion rates. Thus, the corrosion rate for ZnAg2.5 was about 0.137 mm/year and for ZnAg7, 0.147 mm/year. Static immersion with subsequent determination of the weight also showed an increasing corrosion rate with increasing silver content. The increased corrosion rate of the zinc-silver alloys resulted, among other things, from the formation of two phases by alloying silver with zinc. This resulted in localized increased corrosion rates on the ZnAg3 phases with increased cathodic reaction rates [22].
This different dynamic of the corrosion rates with increasing silver content in comparison to our measurement can possibly be explained by the use of a different corrosion medium. The use of PBS led to less corrosion on average and the alloys had the opportunity to repassivate through the phosphate-based buffer [8]. For example, the corrosion rates in Törne et al. [22] were 0.1 mm/year for pure Zn and 1.7 mm/year for ZnAg4 in Ringer's solution (see Figure 7). This is higher than the corrosion rate of the zinc silver alloys we measured. Zaludin et al. [61] also showed that the corrosion rates changed greatly when SBF (simulated body fluid), Ringer's solution, or PBS were used. For example, the corrosion currents of magnesium increased from 11.19 µA in PBS to 236.05 µA in SBF and 676.87 µA in Ringer solution. Li et al. [59] described corrosion rates of zinc in SBF of about 0.102 mm/year and in Gamble's solution (GS) of 0.034 mm/year. Correlating to this, the surfaces of the zinc samples in SBF showed more significant corrosion damage than the zinc samples in GS.
Another reason for the different dynamics of the corrosion rates could be that the ZnAg3 alloys had a larger grain size due to the manufacturing process and, thus, an increased corrosion rate. The corrosion rates in the measurement can be influenced by the processing, microstructure, and especially the grain size of the alloy. A particularly smooth surface, a homogeneous alloy, and the smallest possible grain size led to lower corrosion rates [2]. A change in grain size from 400 µm to 10 µm can reduce the corrosion rate by up to 50% [19]. This also needs to be considered in evaluating the corrosion rates.
The dynamics of the corrosion rates of zinc silver alloys with increasing silver content are, thus, not yet conclusively resolved.
The electrochemical corrosion tests represent an important building block in the research for new materials for bioabsorbable implants. General requirements for bioabsorbable stents were summarized in a table by Bowen et al. [17]. Here they claimed that corrosion rates of 0.02 mm/year are required. In the review by Venezuela et al. [20] a corrosion rate of approx. 0.5 mm/year is required for bioabsorbable plates and screws. Generally, implanted screws must maintain mechanical integrity for 6 months to a certain degree and then dissolve completely within 1 to 2 years. The zinc alloys we investigated had corrosion rates of 0.04 mm/year to 0.14 mm/year, which is within the range of corrosion rates required in the literature for bioabsorbable implants. The electrochemically measured corrosion rates can also be approximated in comparison with in vivo tests.
Bowen et al. [28], for example, investigated the corrosion rates of zinc by implanting zinc wires into the abdominal aorta of rats. Uniform corrosion of zinc was demonstrated for the first 3 months. Localized corrosion damage, such as pitting, did not form on the zinc implants until 4.5 and 6 months. They also reported a corrosion rate of 0.02 to 0.05 mm/year. These in vivo corrosion rates correlate with the corrosion rate of 0.04 mm/year we measured for zinc in the electrochemical measurements; thus, it can be assumed that the measured corrosion rates of the zinc alloys may also be transferred to in vivo conditions and that the alloys corrode homogeneously at the beginning.
Kafri et al. [23] also measured a corrosion rate within the first 14 weeks of pure zinc implanted subcutaneously in rats of approximately 0.06 mm/year in in vivo experiments.
Xiao et al. [50] confirmed the homogeneous corrosion and biocompatibility of zinc alloys in bone. With a corrosion rate of 0.15 mm/year, the implants showed good osteointegration already after 12 weeks. Therefore, it can be assumed that the zinc alloys we tested may also achieve the desired corrosion properties while implanted in bone. This still must be confirmed in corresponding tests.
For magnesium alloys, Huehnerschulte et al. [62] already examined magnesium alloys with rare earth elements, similar to the magnesium alloy of the Syntellix products, in in vivo tests in 2011. The calculated corrosion rates of the ZEK100 and AX30 alloys were 0.07 mm/year for the first 3 months. Thus, the corrosion rates of the zinc alloys measured by us, for example 0.08 mm/year for the ZnAg3Ti1 alloy and 0.11 mm/year for the ZnAg6 alloy, are of a similar order of magnitude to those of magnesium alloys with rare earths. With that, they achieve corrosion properties similar to bioabsorbable alloys that are already in clinical use.
For a correct assessment, additional investigations should be carried out in vivo, as the corrosion properties differ significantly depending on the implantation site and the environmental milieu [50,63]. Furthermore, especially for clinical utility, an assessment of the change in mechanical properties under corrosion is needed to better evaluate the mechanical integrity of the implant. For example, Huehnerschulte et al. [62] showed that the various magnesium alloys exhibited significant differences in flexural strength after corrosion, despite the same corrosion rates. For zinc, studies here showed that aging and creep processes could lead to an impairment of the mechanical properties [29]. Kafri et al. [23] also showed that the corrosion rate weakened significantly over time. This requires further investigation after corrosion in vitro and in vivo.
Conclusions
Zinc and its alloys are promising candidates for further investigation as materials for bioabsorbable implants for osteosynthesis. ZnAg6 and ZnAg3Ti1 showed excellent mechanical values in tensile testing. With a UTS above 200 MPa, they met the criteria mentioned in the literature for osteosynthesis implants. These mechanical properties and their corrosion rates, which are significantly lower than those of common magnesium alloys, make them highly favorable for clinical use. We could show that the corrosion rates of these alloys also lie within the target values mentioned in the literature for bioabsorbable implant materials. With the analysis of these alloys, novel zinc alloys are brought into focus for further investigation for bioabsorbable implants. These findings make them promising candidates, but they still must be investigated further, especially with clinical trials.
"Materials Science"
] |
Novel equations to predict vibrational spectroscopic and electrodynamics properties of molecules
Mathematical relations linking electric and magnetic field-dependent physical properties of molecules have been unveiled. These relations are analogous to Maxwell relations in thermodynamics and are derived from mixed third-order partial derivatives of every alternative Legendre representation of the energy of molecules with respect to the electric or magnetic field and the normal coordinates. Some of these novel physical relationships have practical applications in the low computational cost calculation of parameters commonly used in vibrational spectroscopy, such as the Stark and Zeeman Tuning Rates. Furthermore, other equalities have shown connections and alternative ways of computing physical properties used in electrodynamics, such as permanent dipolar moments and polarizabilities.
Introduction
In thermodynamics [1], the relation that gives the internal energy (U) as a function of the extensive parameters like entropy (S), volume (V ), and number of particles (N) is known as the fundamental relation of a thermodynamical system (equation (1)). If the fundamental equation of a system is known, every thermodynamic attribute is completely and precisely determined. The derivative of the internal energy as a function of the extensive parameters gives rise to the intensive parameters, which are important physical properties such as the temperature (T), pressure (P) and electrochemical potential (μ). Therefore, the differential equation of the fundamental relation of the internal energy is expressed as a function of the intensive parameters and the differentials of the extensive parameters (equation (2)).
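For reference, the fundamental relation and its differential take the standard textbook forms for a single-component system (the numbering follows the text):

\[ U = U(S, V, N) \tag{1} \]
\[ \mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V + \mu\,\mathrm{d}N \tag{2} \]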
A mathematical formalism known as Legendre transformations [1] is used in thermodynamics to obtain other fundamental equations from the fundamental relation of the internal energy. In such reformulations, the intensive parameters replace extensive parameters as mathematically independent variables. All these fundamental equations are known as thermodynamic potentials. The Gibbs free energy (G), enthalpy (H), Helmholtz free energy (F) and grand canonical potential (C) are some of the most commonly used thermodynamic potentials; their corresponding differential equations are given in equation (3). A formal aspect of thermodynamics is the set of Maxwell relations [1]. These relations arise from the equality of the mixed partial second derivatives of the fundamental relation expressed in any of the various possible alternative representations. One example of a Maxwell relation is shown in equation (4).
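As a concrete illustration of the kind of equality referred to in equation (4), one standard Maxwell relation obtained from the internal-energy representation reads:

\[ \left(\frac{\partial T}{\partial V}\right)_{S,N} = -\left(\frac{\partial P}{\partial S}\right)_{V,N} \]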
The second derivatives of fundamental equations are also descriptive of material properties of direct physical interest like the coefficient of thermal expansion (α), the isothermal or adiabatic compressibility [2], and the molar heat capacity at constant volume or pressure [1]. Using Jacobian operations and Maxwell relations, the isothermal compressibility (κ T ) can be related to adiabatic compressibility (κ S ), and the heat capacity at constant volume (c V ) to the heat capacity at constant pressure (c P ) (equation (5)) [1]. An analogous mathematical approach used in thermodynamics to develop equation (5) will be used later on. In this work, all these equivalent mathematical relations will be derived for a molecular system where the fundamental equation is the potential energy of the molecule and the independent variables are the nuclear normal coordinates and the elements of an external uniform electric (or magnetic) field vector. Similar equalities are formulated from mixed third-order partial derivatives of the energy and other Legendre representations. The obtained relations carry practical uses in alternative and faster ways of computing properties used in vibrational spectroscopy and electrodynamics such as the Stark and Zeeman Tuning Rates, permanent dipolar moments and polarizabilities. In most cases, these parameters can be obtained by means of these novel relations without the application of an external electric or magnetic field during their computation, thus reducing the cost of the calculation. Furthermore, relations obtained by Jacobian operations show other connections between these electrodynamics parameters.
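The relations summarized in equation (5) are standard identities of the following type (the exact form used in the original text may differ), written here with the molar volume v and the thermal expansion coefficient α:

\[ c_P - c_V = \frac{T\,v\,\alpha^2}{\kappa_T}, \qquad \kappa_T - \kappa_S = \frac{T\,v\,\alpha^2}{c_P} \]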
The vibrational Stark effect [3-11] is the shifting of vibrational frequencies due to the presence of an external electric field. These frequencies can be measured using Infrared or Raman Spectroscopy [12,13]. The dependence of the wavenumber (\(\tilde{\nu}\)) of a particular mode m on the electric field strength (F) can be described using a Taylor expansion as

\[ \tilde{\nu}_m(F) = \tilde{\nu}_m(0) + \left(\frac{\partial \tilde{\nu}_m}{\partial F}\right)_{F=0} F + \frac{1}{2}\left(\frac{\partial^2 \tilde{\nu}_m}{\partial F^2}\right)_{F=0} F^2 + \dots \tag{6} \]

The vibrational Zeeman effect [14] is the same phenomenon caused by an external magnetic field. The Stark Tuning Rate [15-17] (STR) and the Zeeman Tuning Rate [18] (ZTR) are defined as the derivative of the frequency with respect to the electric field (F) or magnetic field (B) in the limit of zero field. These parameters are broadly used for estimating the electric or magnetic field magnitude experimentally by monitoring the shift in the frequency of molecular probes. It is common practice to calculate the STR and ZTR theoretically due to the experimental difficulties in measuring them.
In this paper, a theoretical development of these new relationships is presented alongside some particular practical applications. Subsequently, as a proof of concept, the Stark Tuning Rate is numerically computed employing one of these novel equations for different chemical compounds typically used as molecular probes. Moreover, for the purpose of comparison, the STR is also calculated in a traditional manner with the computational approach normally adopted in the literature.
Theoretical development
2.1. Mixed second and third-order partial derivative equalities
The potential energy (E) of a molecular system with fixed nuclear positions is a function of all atom coordinates. When a molecule is under the presence of an external uniform electric field (EF), the energy also depends on the vector element values of this new variable (F_x, F_y, F_z). For convenience, the normal coordinates of a molecule calculated in the absence of an electric field (Q_1, ..., Q_n, where n is the total number of normal modes) are used instead of nuclear Cartesian coordinates [12,13]. Therefore, the potential energy of a molecule under the effect of an EF can be expressed as a function of the vector elements of the EF and the normal coordinates of the molecule, as shown in equation (7):

\[ E = E(F_x, F_y, F_z, Q_1, \dots, Q_n) \tag{7} \]
The total differential of E is given by

\[ \mathrm{d}E = \sum_{i=x,y,z} \left(\frac{\partial E}{\partial F_i}\right)_{F_{j\neq i},\,Q_1,\dots,Q_n} \mathrm{d}F_i + \sum_{m=1}^{n} \left(\frac{\partial E}{\partial Q_m}\right)_{F_x,F_y,F_z,\,Q_{p\neq m}} \mathrm{d}Q_m \tag{8} \]

It can be noticed that the first derivative of the energy with respect to the EF vector elements yields the vector elements of the electric dipolar moment (\(\mu_x, \mu_y, \mu_z\)) as [13]

\[ \mu_i = -\left(\frac{\partial E}{\partial F_i}\right)_{F_{j\neq i},\,Q_1,\dots,Q_n}, \qquad i = x, y, z \]

In addition, the first derivative of the energy with respect to the normal mode coordinates produces the normal forces (f_1, ..., f_n) as [13]

\[ f_m = -\left(\frac{\partial E}{\partial Q_m}\right)_{F_x,F_y,F_z,\,Q_{p\neq m}}, \qquad 1 \leq m \leq n \]

Now, equation (8) can be summarised as

\[ \mathrm{d}E = -\sum_{i=x,y,z} \mu_i\,\mathrm{d}F_i - \sum_{m=1}^{n} f_m\,\mathrm{d}Q_m \]

The second derivatives of the energy yield other known parameters, such as the polarizability tensor elements (\(\alpha_{ij}\), where i, j = x, y, z) and the normal force constants (\(\lambda_m\), where 1 ≤ m ≤ n) [12,13]:

\[ \alpha_{ij} = -\left(\frac{\partial^2 E}{\partial F_i\,\partial F_j}\right)_{Q}, \qquad \lambda_m = \left(\frac{\partial^2 E}{\partial Q_m^2}\right)_{F,\,Q_{p\neq m}} \]

A Legendre transformation [1] can be applied to the energy to define other fundamental expressions of the energy, so-called energy potentials, depending on f_1, ..., f_n and \(\mu_x, \mu_y, \mu_z\) in addition to Q_1, ..., Q_n and F_x, F_y, F_z; for example,

\[ R = R(F_x, F_y, F_z, f_1, \dots, f_n) \tag{14} \]

The Legendre transformation implies corresponding relations between the conjugate variables (equations (15) and (16), with i, j, k = x, y, z and 1 ≤ m, p ≤ n). When the molecular geometry is optimised, the forces are zero and R equals E. When the electric field is absent, T equals E. If both conditions are satisfied, Y equals E. Thus, the same applies to the total differentials of such energy potentials.
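As a purely illustrative numerical check, not part of the original derivation, the symmetry of mixed partial derivatives that underpins these relations can be verified on a toy model energy with made-up constants:

```python
# Toy check that the order of differentiation of a smooth energy surface
# E(F, Q) = 0.5*k*Q**2 - mu0*F - a*F*Q - 0.5*alpha*F**2 is irrelevant.
import numpy as np

k, mu0, a, alpha = 2.0, 0.3, 0.1, 1.5

def energy(F, Q):
    return 0.5 * k * Q**2 - mu0 * F - a * F * Q - 0.5 * alpha * F**2

def mixed_derivative(f, x, y, h=1e-4):
    """Central-difference estimate of d2f/(dx dy) at (x, y)."""
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)

d2_FQ = mixed_derivative(energy, 0.0, 0.1)                      # d2E/(dF dQ)
d2_QF = mixed_derivative(lambda Q, F: energy(F, Q), 0.1, 0.0)   # d2E/(dQ dF)
print(np.isclose(d2_FQ, d2_QF))  # True: both equal -a for this model
```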
The differentials of the new fundamental expressions of the energy (R, T and Y) can be obtained similarly to the one for E. Based on the symmetry of partial derivatives, the equality of the mixed second-order partial derivatives of E, R, T, and Y produces the relations shown in section A.1; of all these equations, only two are used in the further derivations.
The second and third derivatives of E and T in sections A.1 and A.2 are defined as restrained, since the molecular normal coordinates are held constant during derivation, while the derivatives of R and Y are defined as unrestrained, since the nuclear coordinates are variable and the forces are constant. In the following text, an overbar will denote unrestrained parameters.
Exact equivalent equations relating the magnetic field (B), the magnetic dipole moment (m) and the magnetic polarizability, instead of their electric counterparts, are obtained for the case of vibrational Zeeman effects (equations not shown).
Vibrational Stark effect
The vibrational Stark effect provides experimentalists with the possibility of determining the local electric field of molecules in diverse environments by measuring the shift of their vibrational frequencies. As shown in equation (6), knowing in advance the lower-order derivatives of the Taylor expansion provides the necessary information for precisely calculating the magnitude of such a field. The first-order Stark Tuning Rate (STR or σ) is defined as the derivative of the wavenumber (or, equivalently, the frequency ν) of a particular mode m (\(\tilde{\nu}_m\)) with respect to the electric field at F = 0 [3]. The second derivative is known as the second-order Stark Tuning Rate.
Restrained first-order Stark Tuning Rate
The restrained Stark Tuning Rate (rSTR or σ) has been previously used by Brewer et al [5]. The rSTR is defined as the first derivative of the wavenumber with respect to the field while holding Q constant (the nuclear configuration coordinates are fixed):

\[ \sigma_m = \left(\frac{\partial \tilde{\nu}_m}{\partial F_i}\right)_{Q_1,\dots,Q_n}\Bigg|_{F=0} \]

A shortcut relation to determine the rSTR, avoiding the application of an electric field, follows from the mixed third-order derivative relation stated in equation (29). Based on that equation, the derivative of the normal force constant λ_m with respect to the electric field component F_i (right-hand term) at fixed nuclear positions (Q_p constant, ∀p) can be determined as the second derivative of the electric dipolar moment along the i axis with respect to the normal coordinate m.
Equation (32) [12] relates \(\tilde{\nu}_m\) to λ_m through the standard harmonic relation between the vibrational wavenumber and the normal force constant. To obtain the rSTR, equations (32) and (31) can be used to express σ as a function of the normal force constant; subsequently, by performing a replacement using equation (29), the desired relation (equation (34)) is derived.
Equation (34) shows how the rSTR can be computed alternatively to equation (31). Starting from an optimised molecular geometry in vacuum, a frequency analysis can be performed to obtain all normal force constants and, thus, the vibrational frequencies. For a selected frequency, the corresponding normal coordinate is obtained. A normal coordinate is a displacement vector for all atoms in Cartesian space. This vector is used to displace the coordinates of each atom by multiplying the vector by small displacement values and adding these to the relaxed structure. For each distorted geometry, an electric dipolar moment is calculated. Finally, the rSTR is obtained from the second derivative of the dipolar moment along the i direction with respect to the displacement values applied to the normal mode m. This should be done for i = x, y and z to obtain the rSTR vector and its modulus. The electric dipolar moment of a molecule can be computed, by means of quantum mechanics, by applying the electric dipolar operator to the wavefunction of the molecule instead of using equation (17). This way of computing the rSTR provides a computational alternative that does not require the application of an electric field, compared to the traditional method in which the normal force constants are numerically computed at different electric fields. The determination of the rSTR by the traditional method requires fixing the nuclear positions while applying an electric field to the structure optimised in vacuum. This implies that the molecular geometry is not in an energetically relaxed state. This is a drawback of the traditional method, since such an approach is subject to computational problems during the calculation of the frequencies: when an electric field is applied to a fixed, unrelaxed molecular structure, the molecular forces are not zero, and the frequency analysis may not converge.
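The displacement procedure described above can be sketched as follows. This is a schematic outline rather than the authors' implementation; `dipole_moment` is a hypothetical placeholder for a call to a quantum chemistry code returning the dipole vector of a given geometry, and units are left abstract.

```python
# Second finite-difference derivative of the dipole moment component mu_i with
# respect to displacement along one normal mode, the quantity entering the
# shortcut relation for the rSTR.
import numpy as np

def dipole_second_derivative(coords0, mode_vector, dipole_moment, i_axis, step=0.01):
    """d2(mu_i)/dQ_m^2 at the relaxed geometry, from displaced-geometry dipoles."""
    displacements = step * np.arange(-3, 4)            # 7 displaced geometries
    mu_i = []
    for s in displacements:
        coords = coords0 + s * mode_vector             # distort along the normal coordinate
        mu_i.append(dipole_moment(coords)[i_axis])     # dipole of the distorted geometry
    # Fit a parabola; its curvature gives the second derivative at zero displacement
    c2 = np.polyfit(displacements, mu_i, deg=2)[0]
    return 2.0 * c2
```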
Unrestrained first-order Stark Tuning Rate
The unrestrained definition (equation (35)) implies that all the normal mode forces are constant.
In the particular case when all forces are zero (f_p = 0, ∀p), the molecular geometry is at a minimum (relaxed state) or a maximum (transition state) of the energy. Combining equation (32) with equation (35) leads to equation (36).
From the mixed third-order partial derivative relations, an equality for the derivative in equation (36) can be obtained.
By using the demonstration in section A.3 (equation (83)), it can be proven that equation (13) is equivalent to equation (37).
Starting from equation (30) and replacing by the inverse of equation (37), one obtains, after rearrangement, equation (39). The combination of equations (36) and (39) then yields the final relation (equation (40)) for the alternative calculation of the unrestrained first-order Stark Tuning Rate (uSTR). It is demonstrated in section A.4 of the appendix (equation (86)) that equation (40) is equivalent to equation (41).
Connection between restrained and unrestrained first-order Stark Tuning Rates
It is now feasible to relate the restrained Stark Tuning Rate (σ) to the unrestrained Stark Tuning Rate (σ̄) using equation (98) in section A.6, where l = f or Q, indistinctly (equation (42)). Replacing equation (42) by equations (35) and (31) leads to equation (43). For the case l = f, the last derivative in equation (43) can be replaced by equation (27), and then, using equation (88), it is proven that equation (44) is valid for l = f or Q indistinctly.
Unrestrained second-order Stark Tuning Rate
There is also a relation that links the second-order Stark Tuning Rate [20] with the polarizability. The second derivative of equation (32) with respect to the electric field, under the unrestrained configuration (constant forces), provides equation (45). The second derivative of λ_m is obtained by taking the derivative of equation (39) (equation (46), with the auxiliary term defined in equation (47)). Using equation (46) in conjunction with equations (47) and (39) affords equation (48). Replacing equations (48) and (39) in equation (45) yields equation (49).
With relation (49) it is possible to calculate the unrestrained second coefficient of the Taylor expansion in equation (6) by determining the normal force constant as well as the second derivative of the permanent dipolar moment and the polarizability with respect to the normal force.
Restrained second-order Stark Tuning Rate
Another relation links the restrained second-order Stark Tuning Rate [20] with the polarizability. The second derivative of equation (32) with respect to the electric field at a restrained nuclear configuration provides equation (51). The second derivative of λ_m is obtained by taking the derivative of equation (29) (equation (52)). Using equation (51) in conjunction with equation (52) affords equation (54). With relation (54) it is possible to calculate the restrained second coefficient of the Taylor expansion of equation (6) by determining the normal force constant as well as the second derivatives of the permanent dipolar moment and the polarizability along the normal coordinate m.
Dipolar moments and polarizabilities
2.3.1. Restrained and unrestrained dipolar moments
So far, the permanent dipolar moment has been defined as the derivative of the energy with respect to the electric field at constant normal coordinates (μ; equation (55)). In quantum chemistry, this restrained dipolar moment μ is usually computed by applying the dipolar moment operator to the wavefunction of a molecule with fixed nuclear geometry. Alternatively, an unrestrained dipolar moment (μ̄) can also be defined (equation (55)). Under this definition, when the forces are constant, the nuclear configuration has no geometric restraint. At constant null forces, the system is in a relaxed state (minimum of energy) or a transition state (maximum of energy). The restrained dipolar moment implies that the nuclear configuration is fixed and the electronic configuration is unrestrained, while for the unrestrained dipolar moment both the nuclear and electronic configurations are unrestrained. The numerical calculation of μ̄ by means of equation (55) requires a geometry optimisation for each electric field value, which increases the computational cost compared to μ.
It is feasible to link both dipolar moment definitions by combining equation (55) with equation (98), derived from Jacobian properties in section A.6, where l = f or Q, indistinctly. For a relaxed molecular structure (at f_m = 0), the unrestrained and restrained dipolar moments are equal. The remaining derivative terms provide information regarding the molecular geometry distortion caused by an electric field. However, as was done for the derivation of equation (44), these derivatives can be replaced by their mixed second-order derivative counterparts (equation (27)) to yield the final expression (equation (57)). With equation (57), it is possible to compute the unrestrained dipolar moment when the molecular structure is away from a minimum energy state.
Restrained and unrestrained polarizabilities
Similarly to the permanent dipolar moment, the polarizability can be defined as restrained (α; equation (58)) and unrestrained (ᾱ; equation (59)), where, for α, the nuclear configuration is restrained and the electronic configuration is unrestrained, while for ᾱ both the nuclear and electronic configurations are unrestrained.
By renaming the term α as the electronic polarizability and ᾱ as the total polarizability, equation (63) states that the total polarizability (ᾱ) is equivalent to the sum of the electronic (α) and the nuclear (α̃) polarizabilities. The necessity of separating the total polarizability into nuclear and electronic contributions has been previously discussed [21].
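In this notation, the statement of equation (63) can be written compactly as:

\[ \bar{\alpha}_{ij} = \alpha_{ij} + \tilde{\alpha}_{ij}, \qquad i, j = x, y, z \]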
The total (unrestrained) polarizability can be computed numerically by means of its definition (equation (59)) or by computing the electronic and nuclear polarizability terms and using equation (63). The electronic polarizability is a well-known property computed by default in several quantum chemistry programs [22-24]. The nuclear polarizability can be determined either by using equation (62) or by using equations (61) and (56) in conjunction. In this way, the nuclear polarizability term at zero electric field can be determined without applying an electric field.
Vibrational Zeeman effect
The vibrational Zeeman effect [14] is the shifting of vibrational frequencies due to the presence of an external magnetic field (B). Analogously to the STR, the first and second derivatives of the wavenumber with respect to the magnetic field yield the first- and second-order Zeeman Tuning Rates [18] (ZTR), respectively. Equations exactly equivalent to those found above are obtained by relating the magnetic field (B), the magnetic dipole moment (m) and the magnetic polarizability instead of their electric counterparts, both for the restrained Zeeman Tuning Rate and for the unrestrained second-order Zeeman Tuning Rate.
Proof of concept
As a proof of principle, in this section the unrestrained Stark Tuning Rate (uSTR) was numerically computed for a set of small molecules in two ways, in order to compare the traditional method with the method proposed in this paper. The traditional way uses the defining equation (equation (35)), and the newly developed method uses the single equation (41).
Methods
Quantum calculations were performed using methodologies based on Density Functional Theory (DFT). The hybrid three-parameter exchange functional from Becke combined with the correlation functional from Lee, Yang and Parr (B3LYP) was chosen to compute the electric field effects on the wavenumber modes [25-27]. All calculations were performed with Gaussian 03 [24] using the TZV basis set. The molecules selected for this study are commonly used molecular probes: CO, NO, HCN, water, methane and 4-Chloro-Benzonitrile. For HCN and 4-Chloro-Benzonitrile, the STR was computed for the vibrational mode corresponding to the C-N vibration. For water, the vibrational mode at 1567 cm−1 was selected. For methane, the vibrational mode with the highest vibrational frequency was chosen. The electric field vector was pointed in the direction of the vibration of the selected mode. A geometry optimisation was performed for all structures in vacuum with extremely tight convergence criteria. A frequency analysis of all optimised structures in vacuum was performed to obtain the normal mode vectors for all the molecules. The unrestrained first-order Stark Tuning Rate was then computed using the traditional method and the novel method presented in this work.
Traditional method for computing uSTR
From the frequency analysis, vibrational frequencies were obtained for all molecules at electric field magnitudes of F = ±0.01, ±0.005, ±0.002, ±0.001, ±0.0005, ±0.0002 and 0.0 au. Geometries were previously tightly optimised at each field. The STR was numerically computed by fitting a 6th-degree polynomial to the frequency versus electric field magnitude tables and taking the derivative at F = 0, according to equation (35).
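The fitting step can be illustrated with a short sketch; the frequency values below are placeholders to be filled with the results of the field-dependent frequency analyses.

```python
# Fit a 6th-degree polynomial to frequency-versus-field data and take the
# derivative at F = 0, i.e. the traditional numerical estimate of the uSTR.
import numpy as np

fields = np.array([-0.01, -0.005, -0.002, -0.001, -0.0005, -0.0002, 0.0,
                    0.0002, 0.0005, 0.001, 0.002, 0.005, 0.01])  # au
frequencies = np.zeros_like(fields)  # cm^-1, placeholder values

coeffs = np.polyfit(fields, frequencies, deg=6)
ustr = np.polyval(np.polyder(coeffs), 0.0)   # d(nu)/dF at F = 0
```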
Results
The obtained uSTR values from both methods were compared for precision and computational cost. Table 1 shows that the computed uSTR values are close and well within the numerical error. The percentage differences in the uSTR between the standard and the new method appear to be larger for polar molecules (H2O, HCN and 4-Chloro-Benzonitrile) and for molecules with a greater number of atoms. One possible explanation is that, for a higher number of atoms, the numerical errors in the determination of the derivatives increase. The results from the traditional method cannot be considered exact, nor can those from the new method. Nevertheless, the standard method may be prone to higher numerical errors because, during the application of the electric field and the optimisation, the molecular geometry changes, allowing the molecule to rotate slightly. This difference could be reduced if the electric field vector were applied in the direction of the net dipolar moment, which may explain the larger differences for polar molecules, considering that the higher the dipolar moment, the greater the rotation. The frequency analysis could be another source of imprecision. However, the relations found in this paper that connect both methods are mathematically equivalent, and further analysis should be done to identify the source of inaccuracy in each method and to determine which is more precise. The new method may yield more accurate uSTR values, as it does not require the application of an electric field and requires only a single frequency analysis computation. This reduces the mentioned sources of error.
A comparison of the computation time is shown in table 2. It can be seen that for diatomic species the time spent computing the STR using the standard method is about 1.5 times longer than with the new method. For three-atom molecules the ratio almost doubles, and for molecules of more than 5 atoms it is ∼3-fold. The ratio is expected to increase further with the number of atoms. The numerical computation time for the new method can be considerably reduced by using fewer points; in this example, more than enough points (27) were used to obtain a precise uSTR value, and this number could be reduced by a factor of two or more to yield a faster calculation. The main goal of this section is to show the applicability of equation (41); it is not focused specifically on timing performance.
Discussion and conclusions
The intention of this work is to present the relations developed in sections A.1 and A.2, their use in obtaining the physically meaningful relations uncovered in sections 2.2, 2.3 and 2.4, and finally to show a simple application example with the unrestrained Stark Tuning Rate in section 3. According to the traditional way of computing the uSTR, the geometries of several structures at diverse electric field strengths ought to be optimised prior to the frequency analysis yielding the wavenumber. This procedure is computationally very expensive and is increasingly hindered by the number of atoms in the molecule. The relation obtained in equation (41) significantly reduces the computation time of the uSTR and may increase the precision of its value. Additionally, the application of an electric field is not required to compute the unrestrained and restrained STRs. Incorporation of this method for computing the restrained and unrestrained first- and second-order STR in traditional quantum chemistry packages is encouraged. With little extra computational cost, all these extra parameters can be obtained after the frequency analysis calculation. A more focused study to optimise the performance (speed and precision) of the implementation of the new STR method should be done; it should determine and compare the speed and precision of this method with the traditional one for a more diverse set of molecules. The novel mathematical relationships developed here can be used as shortcut equations to compute the restrained/unrestrained first/second-order Stark Tuning Rate (STR) and Zeeman Tuning Rate (ZTR). Furthermore, an equation that connects the uSTR and rSTR has been found, in addition to other important electric or magnetic field-dependent properties of molecules relevant for vibrational spectroscopy. For electrodynamics, novel definitions for the unrestrained dipolar moment and the nuclear polarizability were introduced, and an important relation between the electronic, nuclear and total polarizabilities was developed. These relationships are obtained from mixed third-order partial derivatives of the Legendre transformations of the energy, which depend on the normal coordinates and the external uniform electric or magnetic field vector. A direct application of these relationships is the faster and more efficient computation of the STR and ZTR. Additionally, this work can be a starting point for further theoretical developments in this field. Whether this novel framework will prove useful for other practical applications is yet to be discovered.
A.2. Mixed third-order partial derivatives of E, R, T and Y
Analogously, the equality of the mixed third-order partial derivatives [19] of E, R, T, and Y provides the useful relations employed in the derivations above.
A.3. Specific demonstration 1 (normal force constant equality)
The normal force constant is defined as the second derivative of the energy with respect to a normal coordinate while the rest of the normal coordinates and all the elements of the electric field vector are kept constant.
Considering a system with n normal vibrational modes, the equality of both normal force constants is proven based on Jacobian properties [1], as shown in section A.5. The following is the demonstration of equation (87); this demonstration can be extended to equations (88), (89) and (90).
The derivative of the left term of equation (87) can be rewritten using Jacobian's notation as shown in equation (91).
The first variable to change, in this case Q_1, should be in the second position. The first step is to multiply equation (91) by a unity Jacobian, which is constructed by using the denominator of equation (91) with the first variable to change replaced.
"Chemistry",
"Physics"
] |
High-contrast JWST-MIRI Spectroscopy of Planet-forming Disks for the JDISC Survey
The JWST Disk Infrared Spectral Chemistry Survey (JDISCS) aims to understand the evolution of the chemistry of inner protoplanetary disks using the Mid-InfraRed Instrument (MIRI) on the James Webb Space Telescope (JWST). With a growing sample of >30 disks, the survey implements a custom method to calibrate the MIRI Medium Resolution Spectrometer (MRS) to contrasts of better than 1:300 across its 4.9–28 μm spectral range. This is achieved using observations of Themis family asteroids as precise empirical reference sources. The high spectral contrast enables precise retrievals of physical parameters, searches for rare molecular species and isotopologues, and constraints on the inventories of carbon- and nitrogen-bearing species. JDISCS also offers significant improvements to the MRS wavelength and resolving power calibration. We describe the JDISCS calibrated data and demonstrate their quality using observations of the disk around the solar-mass young star FZ Tau. The FZ Tau MIRI spectrum is dominated by strong emission from warm water vapor. We show that the water and CO line emission originates from the disk surface and traces a range of gas temperatures of ∼500–1500 K. We retrieve parameters for the observed CO and H2O lines and show that they are consistent with a radial distribution represented by two temperature components. A high water abundance of n(H2O) ∼ 10−4 fills the disk surface at least out to the 350 K isotherm at 1.5 au. We search the FZ Tau environs for extended emission, detecting a large (radius of ∼300 au) ring of emission from H2 gas surrounding FZ Tau, and discuss its origin.
INTRODUCTION
Giant planets, planetesimals, the embryos of terrestrial planets, as well as comets and Kuiper belt objects, form in a rich chemical environment where primordial water and other volatiles mix with new complex chemistry to create a vast diversity of planetary systems (Öberg et al. 2023). In analogy with our own solar system, rocky planets probably have a wide range of compositions, depending on their formation history. Some may be rich in ices and volatiles, whereas others are rocky with tenuous atmospheres. While the formation and chemical evolution of planets is complex, it is widely thought that the initial chemical conditions of oxygen, carbon, and nitrogen of the planet-forming region at 1-10 AU (Mordasini et al. 2009) play a vital role in determining the final makeup of planets (Cleeves et al. 2014; Bergin et al. 2015). Indeed, many chemical signatures seen in ancient Solar System material likely have their origin in the gas-rich protoplanetary disk phase (Busemann et al. 2006; Mumma & Charnley 2011).
Tracing the chemistry of planet-forming gas with infrared spectroscopy
While millimeter-wave observatories, such as ALMA, are powerful facilities for tracing cold molecules at large disk radii (10-100 AU), many of the most abundant (bulk) molecular species in planet-forming regions, such as water, CO2, and CH4, can only be efficiently characterized at infrared wavelengths from space. Extensive spectroscopic observations obtained with the InfraRed Spectrometer (IRS) on Spitzer discovered that many protoplanetary disks surrounding low-mass young stars (up to about 1.5 M⊙) have mid-infrared spectra covered in bright emission lines from warm molecular gas (Carr & Najita 2008; Salyk et al. 2008; Pontoppidan et al. 2010; Carr & Najita 2011; Pascucci et al. 2013). It has further been shown that the gas emitting at infrared wavelengths originates at radii of 0.1-10 AU, commensurate with the primary planet-forming region (Salyk et al. 2011).
Because of the high dust optical depth of typical inner disks, the infrared emission generally traces the surface, representing 0.1-1% of the total vertical column density at 1 au (Woitke et al. 2018; Bosman et al. 2022). These regions are expected to have highly active chemistry due to the high temperatures and exposure to UV and X-ray radiation from the central star and accretion shock (Semenov & Wiebe 2011; Calahan et al. 2022). Consequently, molecular abundances may be strongly altered, by orders of magnitude, from their primordial composition (Pontoppidan et al. 2014). Indeed, a combination of gas-phase kinetic reactions and self-shielding is expected to lead to abundant water vapor in the inner disk at temperatures above ∼300 K (Bethell & Bergin 2009). Concurrently, such static models predict that cooler gas (150-300 K) is deficient in water, resulting in observable column densities below ∼10^15 cm^-2.
However, this simplified static chemistry may be strongly modified by a combination of selective freeze-out and vertical/radial drift of millimeter- to meter-sized pebbles (Meijerink et al. 2009; Kama et al. 2016; Booth & Ilee 2019; Banzatti et al. 2020; Price et al. 2021). Since water is a potentially dominant solid mass reservoir, such transport may have profound effects on the formation of planetesimals and planetary cores close to the snowline.
A massive flow of icy pebbles across the snowline, followed by vertical turbulent mixing, may enhance the abundance of water in gas at temperatures below ∼300 K by orders of magnitude above the low abundances predicted by static chemical models. Recently, JWST has begun confirming the Spitzer observations of molecular emission throughout the mid-infrared (Banzatti et al. 2023a), and initial results indeed suggest a wide diversity in relative molecular abundances (Grant et al. 2023; Tabone et al. 2023; Gasman et al. 2023a).
The need for high-contrast mid-infrared spectra
These fundamental questions motivate a strong need for obtaining high SNR mid-infrared spectra of protoplanetary disks in order to: 1) precisely measure the water vapor column density via the detection of optically thin lines, 2) detect rare molecular species, including isotopologues, and 3) place limits on potential carriers of the most abundant elements, such as NH3 for nitrogen and CH4 for carbon. Precisely measuring the abundance of H2^16O at a range of temperatures requires the detection of weak lines, because the strongest water lines are highly optically thick (Meijerink et al. 2009; Notsu et al. 2016) and trace the emitting area rather than the total column density. The optically thin lines have small intrinsic strength (A_ul ≲ 10^-3 s^-1) and may be 10-100 times weaker than the brightest water lines. However, since the mid-infrared Spectral Energy Distributions (SEDs) of protoplanetary disks are typically characterized by bright dust emission, searches for rare molecular species and optically thin lines tracing the total water column density often require very high signal-to-noise ratios and correspondingly high spectral contrast. Further, detecting rare isotopologues of water and other molecules would allow us to measure the H2^16O/H2^18O ratio as a proxy for photochemical processing that is believed to have occurred in the early solar system (e.g., Clayton et al. 1973; Lyons & Young 2005). Important nitrogen- and carbon-bearing species are expected to be present at low contrast. HCN and C2H2 are typically detected via their Q-branches near 14 µm, but accurate column density measurements require the use of the Q-branch shape in combination with the much weaker R and P branches. Other rare species are also important tracers of inner disk chemistry and evolution. An active inner disk chemistry will lead to a re-distribution of volatile elements into new molecular carriers. For instance, NH3, a major nitrogen carrier in the cold interstellar medium, is likely destroyed in the disk surface, and the nitrogen driven into N2 and HCN (Walsh et al. 2015), which could be part of the explanation for why the Earth is so depleted in nitrogen. We can use the strong ν2 rovibrational band of warm NH3 around 10 µm to detect, or put strong constraints on, the inner disk NH3 abundance, down to abundances of ∼10^-8 relative to H2 (Pontoppidan et al. 2019).
Together, all these species allow constraints on the bulk elemental abundances (C/O and C/N) of the primary planet-forming regions for direct comparisons with observations of these quantities in exoplanetary atmospheres (Öberg et al. 2011; Madhusudhan 2012).
The JDISC Survey
The JWST Disk Infrared Spectral Chemistry Survey (hereinafter JDISCS) is a collaboration involving several JWST programs to observe a well-characterized sample of protoplanetary disks with the Medium Resolution Spectrometer (MRS, Wells et al. 2015) on the JWST Mid-Infrared Instrument (MIRI, Rieke et al. 2015) at high spectral contrast, reduced using a consistent process. Currently, JDISCS includes PID1549 (PI: K. Pontoppidan), PID1584 (Co-PIs: C. Salyk, K. Pontoppidan), PID1640 (PI: A. Banzatti), and PID3034 (PI: K. Zhang). In this paper, we present the data and processing strategies developed for JDISCS and quantify the data quality. A key feature is the use of asteroids to remove the well-known MIRI detector fringes at high signal-to-noise across the full MIRI-MRS wavelength range of 4.9-28 µm. This approach is described in Sections 2.1 and 3. In Section 3.6, we demonstrate that we achieve the goal of ≥1:300 spectral contrast.
As a demonstration of common use cases for the JDISCS data, we present the MIRI MRS spectrum of the disk around FZ Tau, a low-mass young star in the nearby Taurus star-forming region. Table 1 summarizes the physical properties of FZ Tau. FZ Tau has a very compact millimeter-dust disk with a radius of ∼11.6 (+3.0/−1.2) au, yet with a relatively high mass (>1.2 × 10^-3 M⊙, Table 1) and a high accretion rate (McClure 2019). Compact millimeter-dust disks like FZ Tau are predicted to have high column densities of warm water vapor, provided that their compactness is due to efficient inward drift of icy pebbles (Banzatti et al. 2020, 2023b). Indeed, FZ Tau is known to exhibit abundant warm water (Pontoppidan et al. 2010), and it is one of the few disks around solar-mass stars with sensitive Herschel-PACS spectroscopy of cool water and with estimates of the water abundances near, and outside, the surface water snowline at 3.3 ± 0.2 au (Blevins et al. 2016). FZ Tau has a high accretion rate, which indicates efficient delivery of material, including water, to its inner disk.
We organize this paper as follows. In Section 2.1 we describe how the JDISCS observations are carried out. Section 3 describes the JDISCS custom data reduction process and quantifies its performance. In Section 4, we provide examples of analyses undertaken by the JDISCS team: we present the JDISCS dust continuum estimator applied to FZ Tau (Section 4.1); in Section 4.2, we retrieve the properties of molecular gas in the inner planet-forming regions of disks (distribution, abundance, and temperatures) for FZ Tau; and in Section 4.4, we present continuum-subtracted line images showing extended emission from rotational H2 lines. Finally, in Section 5, we interpret the retrieved FZ Tau parameters in the context of classical protoplanetary disk models.
OBSERVATIONS
2.1.High-contrast observing strategy MIRI-MRS spectra suffer from strong, high-frequency fringes due to interference within the detectors with am-plitudes of ∼10-30%.This leads to strong confusion with the densely packed molecular spectra from planetforming disks.While modeling of the fringe response function may achieve contrasts of 1-2% (Argyriou et al. 2020), it is well-known that theoretical fringe fitting methods, such as those used by Lahuis & Boogert (2003) and employed by the MIRI pipeline, are prone to removing power from intrinsic lines by acting as a low-pass filter (Pontoppidan et al. 2010).This is particularly problematic if the line contrast is low, or if the line spacing is regular or semi-periodic, as is the case for many molecular bands.Thus, correcting spectra removing them down to a level of 0.3% requires an independent, direct measurement of the Relative Spectral Response Function (RSRF).The use of empirical calibrators for efficient and stable fringe correction was developed by multiple independent groups for use with observations with the Spitzer Infrared Spectrometer (Spitzer-IRS, Carr & Najita 2008;Pontoppidan et al. 2010;Lebouteiller et al. 2015), which was affected by similar fringing.Early efforts with MIRI-MRS suggest that similar calibration with standard stars provide promising results at short wavelengths where stars are the brightest (Gasman et al. 2023b).Figure 1.Accuracy of the MIRI target acquisition measured as the difference in retrieved centroid for each dither between the asteroid calibrator and the FZ Tau observation.The largest differences, in units of spaxels, is seen for the shortest wavelength channels where the spaxels are the smallest.
The observing strategy for the JDISC Survey was designed to optimize the line contrast for high signal-to-noise point sources by obtaining observations of high signal-to-noise calibrators (Pontoppidan et al. 2021). The calibrators are required to be bright in the important MRS Channel 4, which covers the coolest water lines accessible to JWST, as well as the most unblended H2 18O and HDO lines. Given the excellent JWST pointing stability (Rigby et al. 2023), even a single standard calibrator observation is sufficient for excellent fringe removal of MRS observations. Empirical Relative Spectral Response Functions (RSRFs) are determined independently for each MRS dither position by observing the standard calibrators centered on the relevant spaxels in the MIRI MRS field. The correction is then optimized by placing the science target on the same spaxels, using the same dither pattern. This is a key difference from the JWST calibration pipeline, which does not enforce such "spaxel matching". In Figure 1, we show that the pointing difference between calibrator and science target is accurate to a fraction of a spaxel, as expected.
Asteroid calibrators
The Cycle 1 MIRI flux calibration plan uses stars for spectrophotometric calibration. These work well at short wavelengths (Gasman et al. 2023b), but have much lower SNR in the key Channels 3 and 4 of MIRI-MRS (≲ 10 in MRS Channel 4, Gordon et al. 2022b). Consequently, the Cycle 1 JWST calibration program is insufficient to reach the high signal-to-noise needed to fully characterize the molecular spectra from protoplanetary disks. Instead, JDISCS included observations of asteroids as an integral part of PID1549 to provide high-SNR fringe calibration across the full MRS range.
Asteroids are near-ideal relative empirical calibrators for MIRI MRS. They are very bright at 10-30 µm, and they generally have spectra that can be approximated by smooth greybodies, with shallow broad-band features at the level of a few % and no gas-phase lines (Harris et al. 1998). Even relatively large asteroids (up to ∼ 200 km diameter) remain point sources when observed with the MIRI MRS beam (corresponding to ≲ 0.″14 at a minimum distance to the main asteroid belt of ∼2 au). These are desirable qualities for efficient and model-independent removal of fringes and other spectral response function features in high signal-to-noise MIRI-MRS spectra. The main drawback of asteroids as photometric calibrators is that they are variable with both distance and rotation. Asteroids are therefore used to produce high-quality relative spectrophotometric calibration of both high- and low-frequency spectral response structure, but not for absolute photometry. In practice, the latter still requires the additional observation of a well-modeled standard star, such as those provided by the JWST calibration program.
For measuring the MRS RSRF, we use two Themis-family asteroids: 515 Athalia and 526 Jena (see Table 2 for their properties). The Themis family consists of primitive C-type asteroids from the outer main belt (Hirayama 1918; Ziffer et al. 2011). A sufficient number of Themis asteroids are available with sizes such that they present a point source to MIRI-MRS and 10-20 µm flux densities of 0.2-1.0 Jy. They are characterized by carbon- and volatile-rich surfaces with low silicate content, leading to nearly featureless mid-infrared spectra, with silicate features of less than a few %, as demonstrated by Spitzer spectroscopy (Licandro et al. 2012). 526 Jena and 515 Athalia are both spatially unresolved with MIRI: with diameters of 43 and 52 km, respectively, they subtend angles of 25-30 mas at a distance from L2 of ∼ 3 au, compared to the MIRI MRS resolution of 0.4-1″.
We show the asteroid spectra and models used for the calibration in Figure 2. It is seen that the asteroid spectrum is highly complementary to observations of stellar calibrators that are bright at short wavelengths. For the JDISCS RSRFs, we supplement the asteroid calibration with MRS observations of the standard calibrator star HD163466 (PID1539) in Channel 1, where the asteroid is faint.
JDISC Survey observations
The JDISC Survey uses MIRI-MRS to obtain deep spectra of a large sample of protoplanetary disks around ∼ 0.1-2 M_sun stars. We observe the JDISCS targets using all three sub-bands (SHORT/A, MEDIUM/B, LONG/C) to cover the full wavelength range between 4.9 and 28.6 µm. The exposure times have a general goal of reaching signal-to-noise ratios (SNR) of at least 300 over the full MRS wavelength range, but in particular at wavelengths longer than 20 µm. Exposures are designed to have integration ramps that are as long as possible without saturating the detector, to minimize the detector read noise. We consequently used integrations with between 14 and 26 groups, ensuring good slope fits to the ramps, even in the presence of first-frame effects and non-linearity. As presented in this paper, FZ Tau was observed by JWST on February 28, 2023 with exposure times of 14×6 and 21×4 (frames×integrations), yielding 966 and 987 seconds for the short- and long-wavelength detectors, respectively.
The 4-point "Negative" dither, optimized for point sources, samples the RSRF for four specific locations in the MRS field of view. The asteroid calibrators were observed using the same dither pattern to precisely sample the same four point-source fringe patterns. To ensure that the targets and asteroid calibrators are placed on the same spaxel for each dither, all observations were acquired using the MRS target acquisition procedure. This approach uses the neutral density filter to peak up on the brightest pixel of the target source in a region of interest (ROI) in the MIRI imager, before offsetting the source to the MRS field of view. The on-board peak-up algorithm uses the brightest pixel in the target acquisition image to re-center the target in the MRS FOV, which results in centering that is accurate to 0.″1, or 0.3-0.5 spaxels, depending on the channel. This precision is verified by our observations, as shown in Figure 1. The raw FZ Tau and asteroid JWST-MIRI data from JDISCS used in this paper are available at 10.17909/7g7e-3w15. JDISCS generally uses the latest JWST Calibration Pipeline (Bushouse et al. 2022) to process the MRS spectra to stage 2b, which produces separate three-dimensional cubes for every exposure, channel, and sub-band.
Residual fringes
Figure 2. Top panels: MIRI MRS spectrum of the Themis family calibrators 526 Jena (left) and 515 Athalia (right), extracted from the stage 2 products using the JWST pipeline spectro-photometric calibration.The spectra are well-fit by blackbody curves using wavelengths above 7 µm.Below 7 µm, the asteroid spectra become increasingly dominated by reflected light, as opposed to pure thermal emission.However, in this range (Channel 1), we use the standard star HD163466 for the RSRF.Bottom panels: The ratio between spectrum and model.The relative departures between the asteroid and pipeline calibration at long wavelengths are likely due to degradation in the Channel 4 throughputs since commissioning.
The background is removed by pairwise subtraction of the level 2b cubes obtained at two opposing dither positions. This approach is also efficient in removing many hot pixels from the nod-subtracted cubes. We create a clean, high signal-to-noise two-dimensional image for each channel and sub-band using the median of each cube in the wavelength direction. This enables the use of a Gaussian centroid to determine the exact location of the source. We then extract a one-dimensional spectrum using an aperture that expands linearly with wavelength within each sub-band. Due to defocusing by detector scattering, the MRS PSF is significantly broader than the diffraction limit across the wavelength range, spanning from a factor of 2 in Channel 1 SHORT to a factor of 1.2 in Channel 4 (Argyriou et al. 2023). Consequently, we use extraction diameters of 2.8 times 1.22 × λ/D. We use the same extraction aperture for the target and calibrator, as this negates the need to apply an aperture correction, and the absolute flux calibration precision is essentially that of the JWST pipeline. The exact choice of aperture size does not affect the absolute spectro-photometric calibration under the assumption that both are point sources.
Each of the four dithered one-dimensional spectra is individually divided by one of four empirical RSRFs from the same position within the MRS Field-Of-View (FOV). Each spectrum is compared to its RSRF to determine if any small offsets are present; if one is detected, the RSRF is translated in wavelength to match. Typically, we find small wavelength offsets of less than 1/2 resolution element, possibly due to non-reproducibility of the grating position. The set of one-dimensional dithers is then median-clipped to produce a single one-dimensional spectrum per channel/sub-band, which efficiently suppresses remaining outliers. In Figures 3 and 4, we show the result of this process for a representative set of wavelength ranges.
Refinement of the MRS wavelength calibration
Because of the large number of water lines present in mid-IR spectra of planet-forming disks, these spectra provide excellent tests of the MRS wavelength calibration. We use ≈ 200 single lines or line complexes (clusters of 2-4 nearby lines with separate peaks) from water, CO, and OH to shift the observed FZ Tau spectrum to the model spectra by cross-correlation. We find that the JWST pipeline wavelength solution in context 1100 has departures of up to 90 km s^-1 (about 1/2 resolution element at 22 µm), with a relatively worse performance for Channel 4 (Figure 5). As a result, we derive a new wavelength calibration by assuming that the molecular (water and CO) lines trace the systemic velocity of FZ Tau (heliocentric stellar RV = 15.8 ± 3.5 km s^-1; Banzatti et al. 2019). This assumption is supported by ground-based high-resolution spectroscopy of both the CO fundamental band around 4.7 µm and ro-vibrational and rotational water lines from 5 µm to 12.4 µm (Banzatti et al. 2023a). In this context, corrections of up to 90 km s^-1 are needed to achieve an accuracy of a few km s^-1 across the full spectrum. After correction, the intrinsic scatter of the wavelength solution is reduced to ∼ 3-5 km s^-1 (RMS).
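The shift measurement can be illustrated with a brute-force cross-correlation against a model spectrum. The sketch below is our own schematic, not the JDISCS implementation; the function name, grids, and velocity range are illustrative assumptions.

```python
import numpy as np

C_KMS = 299792.458

def velocity_offset(wave, flux_obs, flux_model, v_grid_kms):
    """Velocity offset of an observed line (or line complex) relative to a model,
    found by shifting the model in velocity and maximizing the cross-correlation.
    Both spectra are assumed continuum-subtracted and sampled on the grid `wave`."""
    cc = [np.nansum(flux_obs * np.interp(wave, wave * (1.0 + v / C_KMS), flux_model))
          for v in v_grid_kms]
    return v_grid_kms[int(np.argmax(cc))]

# Example: scan +/- 150 km/s in 1 km/s steps around one of the ~200 line complexes.
# v_off = velocity_offset(wave, flux_obs, flux_model, np.arange(-150.0, 150.5, 1.0))
```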
MRS resolving power
The large number of unresolved molecular lines provides an opportunity to accurately measure the MRS resolving power, in particular in Channel 4, where previous measurements have lacked suitable lines. We use relatively isolated CO and water lines in the FZ Tau spectrum and fit single Gaussians to measure the FWHM of each line. We deconvolve each MIRI line with the measured intrinsic CO line width of 36 km s^-1 (from ISHELL, Banzatti et al. 2023a). We show the resulting measurements in Figure 6. With our measurements, we increase the number of resolving power measurements by a factor of several. We detect the slope within 10 out of 12 subbands, as seen in pre-flight data (Glasse et al. 2015). Most importantly, we find that the resolving power in Channel 4 is higher than previously thought; the relation of Argyriou et al. (2023) decreases to R ∼ 1000 at the longest wavelengths, when in actuality R ∼ 2000-3000 across all of Channel 4. This has significant consequences when fitting lines in this range, and affects the JDISCS analysis.
3.4. The 6.1/12.2 µm spectral leak
A serendipitous advantage of asteroid RSRF calibrators is that they are naturally unaffected by the spectral leak of the MRS. It is known from Gasman et al. (2023b) that 2-3% of the 6.1 µm light (Channel 1 MEDIUM) leaks into Channel 3 SHORT at 12.2 µm. Because the asteroid SED is very red, its flux differs by a factor of ∼40 between 6.1 and 12.2 µm. This effectively decreases the leak to negligible levels in the calibration for spectra reduced using the JDISCS process.
Channel 3 and 4 throughput degradation
Broadly consistent with reports from the MIRI team, we detect a relative decrease in apparent response compared to the pipeline calibration in Channels 3 and 4, with the difference being largest toward the longest wavelengths. At 27 µm (Channel 4C), the signal suggests a further decrease in response from 77% to 70% relative to the calibrator between November 21, 2022 and April 15, 2023, which suggests a slowing rate of degradation. At shorter wavelengths, the effect appears to be significantly smaller. Generally, the loss of throughput does not affect the results of this study, or JDISCS in general, besides the associated loss of sensitivity. We do not speculate on the root cause of the throughput loss.
Achievable spectral contrast with the JDISCS pipeline
Figure 7 shows the computed contrast of the FZ Tau spectrum, derived as the root-mean-square (RMS) of the 4 calibrated dithers. This performance is within ∼30% of that predicted by the JWST Exposure Time Calculator, showing that the asteroid-based fringe correction achieves its goal of a contrast better than 1:300 out to 25 µm, and is not limited by the fringe correction for sources with a continuum brightness up to at least 1 Jy. The practical difference between JDISCS and the JWST pipeline product for Channel 4C is shown in Figure 8.
ANALYSIS
Dust features
The FZ Tau continuum is characterized by dust emission from crystalline silicates. To determine the continuum in the presence of many blended emission lines, we use an iterative algorithm as follows. The spectrum is median filtered using a wide box of ∼ 100 wavelength channels. This produces a smoothed spectrum, which can be compared to the previous iteration of the continuum estimator. A new continuum estimate is then constructed by retaining wavelength channels that have lower flux density (assuming emission lines) than the smoothed version of the previous iteration. The rejected wavelength channels are filled in by linear interpolation, producing a new estimate of the continuum, ready for a new iteration. After 3-5 such iterations, a final, nearly noiseless continuum remains after one final smoothing step using a 2nd-order Savitzky-Golay filter. Note that if there are particularly broad, dense areas of gas-phase emission lines, such as the HCN bands near 14 µm, these should be removed from the continuum estimator, as they can otherwise act as a pseudo-continuum. The full FZ Tau spectrum and the continuum are shown in Figure 9.
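A minimal sketch of the iterative continuum estimator described above is given below; the kernel sizes, iteration count, and function name are illustrative placeholders of ours, not the JDISCS defaults.

```python
import numpy as np
from scipy.signal import medfilt, savgol_filter

def estimate_continuum(flux, box=101, n_iter=4, final_window=51):
    """Iteratively estimate a continuum under an emission-line spectrum:
    median-smooth the current estimate, keep only channels that lie below the
    smoothed curve, interpolate across the rejected channels, and repeat."""
    x = np.arange(flux.size)
    cont = flux.copy()
    for _ in range(n_iter):
        smooth = medfilt(cont, kernel_size=box)      # wide median filter (~100 channels)
        keep = flux <= smooth                        # channels presumed line-free
        cont = np.interp(x, x[keep], flux[keep])     # fill rejected channels linearly
    return savgol_filter(cont, final_window, polyorder=2)  # final 2nd-order smoothing
```

In practice, broad pseudo-continuum regions such as the HCN band near 14 µm would first be masked out of `flux` before calling such a routine, as noted in the text.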
We compare to representative dust opacity models generated using optool (Dominik et al. 2021). As seen in Figure 9, the FZ Tau continuum spectrum is dominated by sharp features from a combination of crystalline silicates (in particular forsterite) and silica glass grains with sizes of 1-10 µm. This is consistent with the findings of Sargent et al. (2009), based on spectra obtained with the Spitzer InfraRed Spectrometer (IRS). While a detailed fit of the dust continuum is beyond the scope of this paper, the apparent dominance of strong crystalline features indicates significant heating and processing of the inner disk dust in FZ Tau, to a higher degree than observed in most protoplanetary disks (Kessler-Silacci et al. 2006; Furlan et al. 2011).
Molecular lines
The continuum-subtracted MIRI MRS spectrum of FZ Tau is dominated by bright emission from warm water vapor, as evidenced by both the ro-vibrational bending mode of water in the 5-8 µm range and pure rotational water lines at longer wavelengths. CO is present in the fundamental (∆v = 1) band, as seen in the part of the P-branch covered by the MRS range in Channel 1A, and numerous lines from warm OH are seen at longer wavelengths in Channels 3 and 4. The well-known bands from the carbon-bearing molecules C2H2 (13.7 µm) and HCN (14.0 µm) are weak, while CO2 (15.0 µm) is clearly detected (see Figure 15). While noting the presence of the organics bands, we defer their quantitative analysis to a later JDISCS paper. Sargent et al. (2014) reported the detection of gaseous H2CO in absorption near 5.8 µm in FZ Tau, but we do not confirm it (see Figure 14). It is likely that the combination of strong emission lines from the water bending mode and the low resolution of Spitzer-IRS below 10 µm mimicked absorption from other species, underlining the critical need for higher spectral resolving power for the study of mid-infrared gas-phase lines.
We extracted lines from the continuum-subtracted spectrum using superpositions of Gaussian profiles to separate line blends. Lines closer than 1σ (based on the intrinsic resolving power of MRS) to another line were excluded from the fit to avoid degenerate line strengths. The fits were stabilized by fixing the line centers to their known values from the HITRAN database (Gordon et al. 2022a) and the line widths using the MIRI-MRS resolving power curve measured by JDISCS (Figure 6). Consequently, only the line amplitudes were varied in the fit, similarly to a procedure applied in the past to de-blend Spitzer-IRS spectra (Banzatti 2013).
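Because the line centers and widths are held fixed, the de-blending reduces to a linear least-squares problem for the amplitudes. The sketch below is a schematic of that idea with our own names; it is not the JDISCS fitting code.

```python
import numpy as np

def fit_blend_amplitudes(wave, flux, centers, fwhm):
    """Fit a superposition of Gaussians with fixed centers (e.g., from HITRAN) and
    fixed widths (e.g., lambda / R from the measured resolving power); only the
    amplitudes are free, so ordinary least squares applies."""
    sigma = np.asarray(fwhm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> sigma
    design = np.exp(-0.5 * ((wave[:, None] - np.asarray(centers)[None, :]) / sigma) ** 2)
    amplitudes, *_ = np.linalg.lstsq(design, flux, rcond=None)
    return amplitudes
```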
Physical parameter retrieval
The mid-infrared molecular emission spectrum is an important tracer of processes important for planet formation, such as carbon, oxygen, and nitrogen chemistry, and gas-solid dynamics like pebble drift (e.g., Kalyaan et al. 2021). As current instrumentation generally does not spatially resolve the inner disk emission, with a few notable exceptions, we must rely on integrated line fluxes to infer information about the spatial distribution of molecular disk gas. This is possible because of the wide dynamic temperature and opacity range traced by the rich infrared molecular bands.
Figure 10. Rotation diagrams for the water and CO lines, shown in MKS units, where F is the integrated line flux, ν is the frequency of the transition in wavenumbers, A_ul is the Einstein A coefficient, and g_u is the statistical weight of the upper level. The CO lines detected with MIRI are compared to those measured with IRTF-ISHELL from the ground (Banzatti et al. 2023a). We find excellent agreement between the ISHELL and MIRI CO lines where there is overlap. The water diagram includes over 600 individual lines.
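For reference, the conventional optically thin rotation-diagram ordinate can be computed as below. This is the textbook form with our own function name; the exact normalization used for Figure 10 may differ.

```python
import numpy as np

H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s

def rotation_diagram_y(F, nu_wavenumber, A_ul, g_u, omega):
    """ln(N_u / g_u) for an optically thin line in MKS units: F is the integrated
    flux (W m^-2), nu_wavenumber the transition wavenumber (m^-1), A_ul the
    Einstein A coefficient (s^-1), g_u the upper-level degeneracy, and omega the
    emitting solid angle (sr)."""
    N_u = 4.0 * np.pi * F / (H * C * nu_wavenumber * A_ul * omega)
    return np.log(N_u / g_u)
```

Plotted against the upper-level energy, optically thin LTE emission falls on a straight line with slope -1/T.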
Plane-parallel "slab" models provide a convenient way to quantify the properties of the line emission in terms of its excitation temperature, column density, and emitting area. Given the preponderance of evidence that the mid-infrared molecular lines are formed in the inner few au of the disk (Brittain et al. 2007; Pontoppidan et al. 2008; Salyk et al. 2019; Banzatti et al. 2023a), retrieved physical parameters can be interpreted in the context of a disk surface in Keplerian rotation. We fit the water spectrum at 10-27 µm, composed of hundreds of rotational emission lines, using a model with one or more independent temperature components, each of which is defined by three parameters that describe the emission from a slab of gas with emitting area A, excitation temperature T, and column density N (e.g., Salyk 2020). The model assumes level populations in Local Thermodynamic Equilibrium (LTE), but includes effects from optical depth, including overlapping lines. The intrinsic velocity broadening is assumed to be thermal: ∆v = (k_B T_gas / µ)^(1/2), where T_gas is the kinetic gas temperature, µ is the mass of a water molecule, and k_B is the Boltzmann constant. This model is implemented in spectools-ir (Salyk 2022), and has been used to successfully approximate portions of water spectra observed at lower resolution with Spitzer (Carr & Najita 2011; Salyk et al. 2011). spectools-ir deploys emcee (Foreman-Mackey et al. 2013) to retrieve posterior distributions of the slab model parameters, given integrated line fluxes, using a Markov-Chain Monte Carlo algorithm. The ro-vibrational bending-mode band of water around 6 µm is excluded from the fit because it is known to be affected by non-LTE effects (Bosman et al. 2022; Banzatti et al. 2023a), but otherwise traces the same range in upper level energies. Indeed, the line strengths of the ro-vibrational water lines are a factor of ∼4 weaker than those predicted by fits to the rotational spectrum. This is consistent with the higher critical densities of transitions between vibrational levels. We therefore interpret this as sub-thermal excitation of the vibrational ladder in the bending mode. Since the critical densities for rotational transitions are lower, this is not an indication that the physical parameters retrieved from the rotational lines are similarly out of equilibrium. However, it is important to confirm retrievals using non-LTE water excitation models in future studies.
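The retrieval step can be skeletonized as follows. The slab flux calculation itself is left as a placeholder (in practice that role is played by spectools-ir, whose API we do not reproduce here), and the parameter bounds, walker count, and step count are illustrative assumptions only.

```python
import numpy as np
import emcee

def slab_line_fluxes(theta, line_ids):
    """Placeholder: return model integrated fluxes for the listed lines, given
    theta = (log10 N [cm^-2], T [K], log10 emitting area [au^2])."""
    raise NotImplementedError

def log_probability(theta, line_ids, flux_obs, flux_err):
    log_n, temp, log_area = theta
    if not (14.0 < log_n < 22.0 and 100.0 < temp < 1500.0 and -3.0 < log_area < 2.0):
        return -np.inf                                   # flat priors with illustrative bounds
    model = slab_line_fluxes(theta, line_ids)
    return -0.5 * np.sum(((flux_obs - model) / flux_err) ** 2)

# sampler = emcee.EnsembleSampler(32, 3, log_probability, args=(line_ids, flux_obs, flux_err))
# sampler.run_mcmc(p0, 5000, progress=True)              # p0: walkers around an initial guess
# samples = sampler.get_chain(discard=1000, flat=True)   # posterior samples for corner plots
```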
Recently, it was found that a single temperature cannot fully reproduce the rotational water lines observed in disks with MIRI (Banzatti et al. 2023b). In the case of FZ Tau, we find that at least two temperature components are required to fit the full range of lines covered by MIRI (about an order of magnitude in upper level energies). However, a single temperature is sufficient to fit the CO v=1-0 band, even when including low-J lines from ground-based observations (Banzatti et al. 2022, Figure 10). We show the rotational diagrams of CO and water lines detected in the MIRI spectrum in Figure 10, together with comparisons to slab model calculations. The corner plot for the two water temperature components is shown in Figure 11. Slab model parameters for FZ Tau are summarized in Table 4.
Both components have a column density of ≳ 10^18 cm^-2 in FZ Tau. At these column densities, many water lines are optically thick, providing well-constrained emitting areas. The two components likely reflect an underlying continuous temperature gradient in a surface disk layer.
Extended line emission
Since MRS provides spatially resolved integral field spectroscopy, we conduct a search for extended emission from the detected lines (Figure 12). The water lines are spatially unresolved, and therefore confined to an emitting region smaller than the MRS beam (0.″175-0.″45, Half-Width at Half Maximum, corresponding to disk radii of 22.5-58 au). Since the lines are generally thought to trace inner disk emission from a few au, this is consistent with expectations.
While undetected toward the central point source of the FZ Tau system, we detect a large extended ring in emission from the rotational H2 ladder. This is particularly apparent in the S(1) and S(2) lines covered by Channel 3 (Figure 12). The shape is elliptical with a semi-major axis of 3.″25 = 419 au and a center that is offset from the stellar position by 0.″5 along the minor axis. The position angle of the ellipse is consistent with PA = 90 deg, but with a large uncertainty. If interpreted as a ring or disk, the axis ratio of the ellipse corresponds to an inclination angle i ∼ 22 deg, consistent with the submillimeter disk inclination of ∼ 25 deg as measured with ALMA (Table 1).
The image further shows the presence of several compact H2 line emission clumps to the south-east of the star. The origin of these is unclear, but they appear to be significantly hotter than the ring itself, with relatively brighter emission in higher-J lines. We do not see them in Channel 1, as its field of view is too small. We also detect a feature in H2 S(5)+S(6) and the 6.63 µm [NiII] line, resembling a collimated jet oriented roughly north-south and centered on the continuum source. This type of jet is seen in other similar disks (Narang et al. 2023), and is consistent with the relatively high accretion rate of FZ Tau.
While jets are commonly detected with JWST from a range of young stars, the FZ Tau ring is unusual. We cannot uniquely identify its origin, as the line emission is not kinematically resolved with MIRI. However, we consider two different scenarios. The first is that the ring is associated with the FZ Tau disk, and traces material orbiting the star at large radii. The excitation of H2 is potentially complex, and beyond the scope of this paper. Large-scale H2 emission from disks, including debris disks, was seen with the Infrared Space Observatory (ISO) (Thi et al. 2001a,b), and perhaps such emission is now being recovered by the relatively wide field of MIRI-MRS. A separate scenario is that the ring is part of a wide-angle outflow, viewed close to face-on. This is consistent with the face-on inclination of the disk (Table 1), as well as with the common occurrence of such outflows from young sources as observed in forbidden emission, including in FZ Tau (e.g., Banzatti et al. 2019). However, it is not clear if a face-on outflow can easily produce a ring-like morphology such as that seen for FZ Tau.
DISCUSSION
The high-contrast, broad-band MIRI spectrum allows us to constrain the spatial distribution of water and other detected species in the disk surface and compare to basic model predictions. We estimate the expected molecular column densities and temperatures from simple considerations. In the inner disk, the total dust column is expected to be highly optically thick, such that only 0.1-1% of the vertical column typically contributes to the observed emission lines (e.g., Bosman et al. 2022). While the inner disk may be depleted in small dust as disks are cleared as part of their natural evolution to the transition disk stage (Zhu et al. 2012), this does not appear to have happened yet for FZ Tau (Najita et al. 2007).
We model the dust temperature in the FZ Tau disk surface using equation 7 from Dullemond et al. (2001) and the parameters in Table 1. This offers a simple prescription for optically thin reprocessing of starlight in the surface of a flared disk. There is a weak dependence on the ratio of the Planck mean opacities of the stellar irradiation and the disk self-emission, ∼ ε^(-1/4). We approximate this from the standard DIANA dust model (Woitke et al. 2016) as ε = 0.17 for disk temperatures of 400-1000 K, and compare to the extreme case of interstellar medium dust (ε = 0.058 for R_V = 5.5; Draine 2003). Interpreting the two temperature components as a coarse binning of the temperature gradient, we plot the retrieved gas excitation temperatures in Figure 13 by distributing the emitting areas in consecutive, non-overlapping annuli. The molecular distribution likely extends to larger radii than those observed, but it cannot be well constrained without measuring transitions with upper level energies lower than those covered in the MIRI range (≲ 1000 K). We then compare the measurements to the predicted dust surface temperature curve for the FZ Tau parameters (Table 1). This provides an excellent match to the measured molecular excitation temperature curve, consistent with a scenario in which the molecular gas is strongly coupled to the dust in the line-forming region, and in which the excitation of the rotational water lines is well described by populations in LTE. However, an important caveat is that superheated gas and subthermal excitation have opposite effects on the line fluxes (Meijerink et al. 2009). Thus, we cannot rule out that nature conspires in such a way as to make it appear that the lines are coupled to the dust and thermalized. However, gas-dust temperature decoupling can be a substantial effect, increasing the gas temperature by factors of 2 or more, depending on disk altitude (Jonkheid et al. 2004; Glassgold et al. 2004). Such departures may be difficult to reconcile with Figure 13. We also note that previous fits of two-dimensional radiative transfer models to lower-resolution Spitzer data arrived at the opposite conclusion, suggesting gas-dust temperature decoupling and gas-to-dust ratios ≲ 100 (Blevins et al. 2016). Resolving the question requires further analysis of larger samples of MIRI spectra, such as those offered by JDISCS.
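A hedged sketch of the surface dust temperature used for this comparison is given below. We write it in the standard optically thin radiative-balance form, T ∝ ε^(-1/4) L^(1/4) r^(-1/2), which we assume is equivalent to equation 7 of Dullemond et al. (2001) up to factors of order unity; the function name and default ε are our own choices.

```python
import numpy as np

SIGMA_SB = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.495978707e11         # m
L_SUN = 3.828e26            # W

def surface_dust_temperature(r_au, l_star_lsun, epsilon=0.17):
    """Optically thin dust temperature in the irradiated surface layer of a disk.
    epsilon is the ratio of the Planck mean opacities at the dust and stellar
    temperatures (0.17 adopted from the DIANA model, as in the text)."""
    r = np.asarray(r_au) * AU
    l_star = l_star_lsun * L_SUN
    return (l_star / (16.0 * np.pi * SIGMA_SB * r ** 2)) ** 0.25 * epsilon ** -0.25

# Example: for a 1 L_sun star, this gives ~280 K at 1 au for epsilon = 1 and
# ~430 K for epsilon = 0.17, illustrating the weak epsilon^(-1/4) dependence.
```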
Similarly, we can consider if the observed column densities are consistent with expectations for the disk surface. The gas is visible above a dust optical depth of τ_dust = 1 at the wavelength of the relevant transition:

N_X = A_X g2d / (C_ext µ m_p),   (1)

where A_X is the molecular number abundance relative to hydrogen, g2d is the mass gas-to-dust ratio, C_ext is the dust extinction coefficient, µ = 1.3 is the mean atomic mass, and m_p is the proton mass. The dust extinction is calculated using optool from the DIANA standard dust model parameters (https://diana.iwf.oeaw.ac.at/; Woitke et al. 2016), truncated to grain sizes < 10 µm to simulate surface dust, setting C_ext(17 µm) ∼ 1800 cm^2 g^-1. This choice is driven by evidence that 10 µm grains are abundant at the disk surface, from SED modeling and mid-infrared scattered light imaging (Pontoppidan et al. 2007; Duchene et al. 2023). Surface grains are unlikely to be much larger. A much smaller grain size would lead to lower opacities. Interstellar medium dust has smaller opacities by about a factor of 2, C_ext(17 µm) ∼ 791 cm^2 g^-1 (R_V = 5.5, Draine 2003), but such small grains are inconsistent with theoretical expectations, as well as with observations of disks, as noted above. For a gas-to-dust ratio of 105 (Draine 2003) and a water abundance of 10^-4 in the inner disk surface (Kamp et al. 2017), the water column density predicted by Eq. 1 is N_H2O ∼ 2.7 × 10^18 cm^-2. This value is remarkably consistent with the mid-infrared water column densities estimated in many disks (Salyk et al. 2011; Banzatti et al. 2023a). Thermo-chemical models without enrichment of water from pebble drift generally predict that the water vapor abundance decreases rapidly below ≲ 300 K, as gas-phase production of water becomes inefficient (e.g., Kamp et al. 2017). In Figure 13 we show the observed column densities along with those predicted for water abundances of 10^-4 at T > 300 K and 10^-6 at T < 300 K. It is seen that the observed lines in the FZ Tau disk surface are consistent with a high water abundance at least out to 1.5 au, corresponding to the 350 K isotherm. At larger radii, thermo-chemical models predict a significant decrease in column density. However, if efficient inward drift of icy pebbles is active, this may maintain a high water abundance between 300 K and 150 K at larger radii (Kalyaan et al. 2023; Banzatti et al. 2023b).
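As a quick sanity check of Eq. 1 with the numbers quoted above (a short sketch with rounded constants):

```python
M_P = 1.6726e-24     # proton mass, g
MU = 1.3             # mean atomic mass per hydrogen nucleus
G2D = 105.0          # gas-to-dust mass ratio
C_EXT = 1800.0       # dust extinction at 17 micron, cm^2 per gram of dust
A_H2O = 1.0e-4       # water abundance relative to hydrogen

N_H = G2D / (C_EXT * MU * M_P)   # hydrogen column above tau_dust = 1, ~2.7e22 cm^-2
N_H2O = A_H2O * N_H              # ~2.7e18 cm^-2, matching the value quoted in the text
print(f"N_H2O = {N_H2O:.2e} cm^-2")
```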
The potential enrichment by pebble drift of cold water vapor near ice sublimation at ∼ 150 K is difficult to test with JWST-MIRI alone, as water lines tracing these temperatures are mostly available at wavelengths λ > 30 µm (Zhang et al. 2013; Blevins et al. 2016; Notsu et al. 2016). Blevins et al. (2016) used observations of low-lying water lines (E_upper = 114-800 K) with Herschel-PACS to constrain the radial extent of water emission to 3.3 au, beyond the 300 K isotherm. For other disks, Banzatti et al. (2023b) used the additional wavelength range of Spitzer-IRS to detect cool (< 300 K) water near the snowline, and interpreted its enhancement as evidence for icy pebble drift.
A second potential signature of pebble drift that JWST-MIRI may address is the absolute abundance of water within the 300 K isotherm. If this abundance is significantly higher than that predicted by the canonical abundance of oxygen from Eq. 1, this could be evidence for pebble drift enrichment. This effect might be hard to measure when the water emission becomes optically thick and only a surface layer is observed, as the slab results seem to suggest. Finally, the abundance of 300 K water may correlate with other disk properties linked to pebble drift, such as the size of the dust disk. Evidence for such a connection was presented in Banzatti et al. (2023b), and the compact disk of FZ Tau (R_out = 12 au) is one where pebble drift is expected to be more efficient.
Moreover, the water lines in FZ Tau have one of the largest line-to-continuum ratios known (Najita et al. 2013), but what underlying property of this disk is causing this observable? Given the predicted water abundance profile in Figure 13, the inner disk of FZ Tau appears to have filled the region inside the 350 K isotherm with approximately the highest abundance of water possible without additional enrichment of oxygen. If other disks do not maintain a high water abundance (∼ 10^-4) throughout the inner disk, they might have a smaller line-to-continuum ratio. It is conceivable that this uniformly high water abundance in FZ Tau is regulated by active replenishment by pebble drift. Measurements of the distribution of water in other disks relative to their 300 K disk surface isotherm may reveal whether other disks are less efficient in maintaining a high abundance throughout their inner regions.
CONCLUSIONS
The calibration methods developed for the JDISC Survey produce high-contrast MIRI spectra across its full wavelength range. We demonstrate SNR ≳ 300 and shot-noise-limited performance for a test case. We find that the use of bright Themis-family asteroids to measure the RSRF allows us to avoid fringe models when calibrating MIRI spectra. We develop a new MIRI wavelength calibration that is precise to a few km s^-1 across the MIRI range. We generally reproduce the predicted MIRI resolving power, except in Channel 4, where we demonstrate higher resolving powers by a factor of ∼ 2-3. This data quality enables the detection of cool water vapor down to 300 K with MIRI, and more precise retrievals of disk chemistry and gas temperatures.
We use a MIRI-MRS observation of the FZ Tau protoplanetary disk to test the JDISCS data reduction methods. FZ Tau shows extraordinarily bright molecular emission from water, CO, and OH; it also shows CO2 emission, but has weak bands from C2H2 and HCN. Its continuum dust spectrum shows strong bands from crystalline silicates, including forsterite and silica. The molecular emission bands are consistent with a point source at the MIRI MRS resolution, centered on the central star, except for the low-J rotational H2 lines, which show highly extended emission, including a large ring. We speculate that the ring is either part of an outer thin disk of FZ Tau, possibly excited by an external UV radiation field, or part of an outflow cone viewed close to face-on.
The water spectrum of FZ Tau is consistent with an approximately constant water abundance of ∼ 10^-4 in the disk surface at temperatures between ∼350 and ∼1000 K, corresponding to radii of 0.2-1.5 au, for a gas-to-dust ratio of 100 (see Figure 13). We are not able to constrain the properties of cooler water (≲ 300 K) at larger radii, as this would require access to water lines beyond the wavelength coverage of MIRI (> 30 µm), generally requiring future spectroscopic access to the far-infrared wavelength range (Pontoppidan et al. 2019). The excitation temperatures of the rotational water and CO lines are consistent with gas coupled to the dust temperature. The high water line contrast observed in FZ Tau does not appear to be linked to exceptionally high column densities, but is more likely an effect of a large emitting area compared to other disks of the same luminosity. We emphasize that this solution is suggestive, but likely not unique. We find that the populations of the vibrationally excited levels of water are subthermal by at least a factor of ∼4. While the rotational ladder of water appears to be thermalized, we do not rule out that a non-LTE retrieval would yield different gas kinetic temperatures.
The disk water gradient as measured by MIRI-MRS is traced out to ∼ 1 au, but is ultimately limited by the lack of low-temperature water lines in the MIRI range. To fully map the water distribution in disks across the water snowline requires a new far-infrared space telescope, such as the PRobe Far-Infrared Mission for Astrophysics (PRIMA; Glenn et al. 2023; Moullet et al. 2023), which proposes to provide high-resolution spectroscopy of the entire thermal range of water vapor.
Figure 3 .
Figure 3. Representative example of the empirical RSRF calibration in Channel 2, showing efficient removal of both the MIRI detector fringes and other residual throughput structure. The top panel shows the comparison of the FZ Tau spectrum with that of 526 Jena. The middle panel shows the quality of a single calibrated dither. The bottom panel shows the final median-clipped and stitched spectrum, demonstrating the suppression of outliers.
Figure 4 .
Figure 4. Quality of the RSRF in Channel 4. The meanings of lines and colors are the same as those in Figure 3.
Figure 5 .
Figure5.Wavelength correction relative to the JWST pipeline at the time of writing (CRDS context 1100) using ≈ 200 molecular lines.In this context, corrections of up to 90 km s −1 are needed to achieve an accuracy of a few km s −1 across the full spectrum.After correction, the intrinsic scatter of the wavelength solution is reduced to ∼ 3 − 5 km s −1 (RMS).
Figure6.Resolving power measured on isolated water and CO lines in the FZ Tau spectrum.The individual measurements have been deconvolved with the intrinsic line width of 36 km s −1 (a 7% difference for R = 3000).Each MRS subband is indicated by a different color, and is fitted separately by a straight line.The relation measured by the MIRI instrument team using a planetary nebula in the Small Magellanic Cloud is also shown(Argyriou et al. 2023).For subbands 2A and 2B, we do not find sufficiently suitable lines in the FZ Tau spectrum, so we assume the previous relation and the average subband slope.
Figure 7 .
Figure7.Measured signal-to-noise (RMS) achieved with the JDISCS pipeline for the FZ Tau observation (purple curve).The dashed line indicates the expected goal of SNR=300.The orange circles show the prediction by the JWST ETC.Note that the division by the calibrator adds noise, which is included in the ETC prediction.
Figure 8 .
Figure 8.Comparison of the JDISCS reduction with the JWST pipeline for a section of Channel 4, which contains the lowest energy water lines.Different levels of contrast (SNR=33, 100, and 300) are indicated by bars showing ±1σ ranges.In the spectral range shown the JDISCS spectrum reaches SNR ∼ 300, whereas the pipeline is limited by the calibration to SNR ∼ 50.The wavelength offsets in the JWST pipeline are apparent around 26 µm.
Figure9.Continuum estimate for FZ Tau (orange curve), assuming a pure emission line spectrum, compared to the observed spectrum (grey).Shown are also reference opacities from pure silica and pure forsterite, computed for grain size distributions with amin = 2 µm, amax = 20 µm, and N (a) ∝ a −3.5 .The opacities are not fitted to the data, but indicate the locations of significant spectral features.
Figure 11 .
Figure 11. Retrieval of a two-component water slab model, showing that degeneracies are minor and parameters are statistically well constrained, supporting the presence of either multiple water temperature components or a radial temperature gradient. Note that the confidence levels do not include known systematic errors, including absolute flux calibration error, errors in the assumed intrinsic line broadening, non-LTE effects, etc.
Figure 12 .
Figure12.Continuum-subtracted line images of the rotational H2 lines.The S(1) and S(2) images show the presence of a ring of cold H2 emission, as well as a number of hot spots to the south-west of the star.The S(3) and S(4) lines are in Channel 2, which has a smaller field of view and therefore, these images do not include the full ring.Further, the Channel 1 field-of-view does not cover the hot spots.The inner working angle with radius of 1.22λ/D is masked, where D is the JWST telescope diameter, and λ is the central wavelength of the line.The location of the star is indicated with a star symbol, and an ellipse is shown outlining the H2 emission ring.
Figure 13 .
Figure 13. Retrieved parameters for the two-component fits of water in FZ Tau, compared to model predictions. Top: the excitation temperatures and disk radii, derived from the slab model emitting areas. The horizontal bars indicate the disk annulus width corresponding to the retrieved emitting areas. The curves indicate surface dust temperatures for FZ Tau using the prescription from Dullemond et al. (2001). Bottom: the retrieved water vapor column densities for the two temperature components. The purple line shows the column densities predicted for a water abundance of n(H2O) = 10^-4 at T > 300 K and n(H2O) = 10^-6 at 300 K > T > 150 K. The critical radius indicates the outer edge of the high-water-abundance region found by Blevins et al. (2016) using Herschel upper limits on cool water line fluxes.
Facilities: JWST
Software: astropy (Astropy Collaboration et al. 2022), spectools-ir (Salyk 2022)
This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program 1549.
APPENDIX
In Figures 14 and 15, we show the full wavelength range of the continuum-subtracted MIRI spectrum of FZ Tau.
Figure 14 .
Figure 14. Continuum-subtracted emission line spectrum of FZ Tau at λ < 11.85 µm. Bright H2 16O lines are marked with triangles, CO lines with circles, and OH lines with stars. The color and size of the symbols both show the upper level energy of the transition in Kelvin. The orange curve shows the model fit to the spectrum (Table 4). The ro-vibrational lines are not used in the fit, but are displayed here at 1/4 of their predicted LTE flux from the model solution in Table 4, demonstrating a good match to the rotational populations, although the vibrational levels are sub-thermally populated. Remaining emission is dominated by HI recombination lines.
Figure 15 .
Figure 15.Continuum-subtracted emission line spectrum of FZ Tau at λ ≥ 11.85 µm.The labels have the same meaning as those described in Figure 14.
Table 1 .
FZ Tau star and disk parameters
Table 2 .
JDISCS calibration asteroid parameters (obtained from JPL Horizons or measured from our data).
Table 4 .
Physical parameters of molecular emission in FZ Tau. Quoted errors are estimated 10% systematic uncertainties, based on the variation of the retrieved parameters when varying the selection of lines, using different calibrators, etc.
"Physics",
"Environmental Science"
] |
Accurate Channel Estimation and Adaptive Underwater Acoustic Communications Based on Gaussian Likelihood and Constellation Aggregation
Achieving accurate channel estimation and adaptive communications with moving transceivers is challenging due to rapid changes in the underwater acoustic channels. We achieve an accurate channel estimation of fast time-varying underwater acoustic channels by using the superimposed training scheme with a powerful channel estimation algorithm and turbo equalization, where the training sequence and the symbol sequence are linearly superimposed. To realize this, we develop a 'global' channel estimation algorithm based on Gaussian likelihood, where the channel correlation among the segments is fully exploited by using the product of the Gaussian probability-density functions of the segments, thereby realizing an ideal channel estimation of each segment. Moreover, the Gaussian-likelihood-based channel estimation is embedded in turbo equalization, where the information exchange between the equalizer and the decoder is carried out in an iterative manner to achieve an accurate channel estimation of each segment. In addition, an adaptive communication algorithm based on constellation aggregation is proposed to resist the severe fast time-varying multipath interference and environmental noise, where the encoding rate is automatically determined for reliable underwater acoustic communications according to the constellation aggregation degree of the equalization results. Field experiments with moving transceivers (the communication distance was approximately 5.5 km) were carried out in the Yellow Sea in 2021, and the experimental results verify the effectiveness of the two proposed algorithms.
Introduction
Underwater acoustic communication technology can be widely applied in many fields, such as marine pollution monitoring, underwater rescue, and autonomous underwater vehicle (AUV) positioning and navigation. However, underwater acoustic channels are characterized by time-varying multipath. In particular, when there is relative motion between the transceivers, the channel changes rapidly, resulting in fast time-varying multipath interference, which distorts the received signal waveform and leads to a reduction in, or even failure of, the decoding performance of the underwater acoustic communication system [1][2][3].
To solve the issues of time-varying underwater acoustic channels and environmental noise, an adaptive communication scheme was proposed, where the transmitter automatically selected an appropriate modulation according to the instantaneous channel state information (CSI) and signal noise ratio (SNR). The adaptive communication scheme can be mainly divided into two categories, including feedback adaptive communications and direct adaptive communications, as shown in Figure 1a,b, respectively. For feedback adaptive communications, as shown in Figure 1a, User A sends a test signal to User B. User B estimates CSI and SNR based on the test signal, and then feeds them back to User A. User A selects a modulation according to the feedback CSI and SNR, and then transmits the data information to User B by using the selected modulation [4,5]. For direct adaptive communications in Figure 1b, User A initially selects a modulation, such as the direct sequence spread spectrum (DSSS), then transmits the data information to User B by using DSSS. User B identifies the modulation (i.e., identifies DSSS), demodulates and decodes, and estimates CSI and SNR, such as a simple channel and SNR = 20 dB. According to the estimated CSI and SNR, User B selects a new modulation, such as orthogonal frequency division multiplexing (OFDM), and then feeds data information back to User A by using OFDM. Similarly, User A identifies, demodulates and decodes, and estimates CSI and SNR. Then, according to the estimated CSI and SNR, User A selects an original or new modulation, and then transmits the data information to User B by using the selected modulation [6,7]. The biggest difference in the two adaptive communications is that there is no need to send a test signal for the second scheme. Therefore, for a specified amount of data information, the second scheme saves communication time, thereby reducing or even avoiding time-variation of the channel during communications. Therefore, the second scheme is more suitable for fast time-varying channels incurred by underwater acoustic communications with moving transceivers than the first scheme. For adaptive communications, there are mainly four selected modulations and demodulations: multiple frequency shift keying (MFSK), spread spectrum, orthogonal frequency division multiplexing (OFDM) and single carrier. The transmission rate of MFSK is low; spread spectrum technology always uses high-order spread spectrum code, which has a low communication efficiency [8]; OFDM has a poor anti-frequency-offset performance [9][10][11][12][13][14][15][16]. Therefore, with a high transmission rate and good anti-frequency-offset characteristics [17][18][19], the single carrier technology is adopted in this paper. It can be used with a variety of encoding rates to realize adaptive underwater acoustic communications with moving transceivers.
Channel estimation is one of the key factors to realize reliable adaptive communications. At present, there are mainly three kinds of underwater acoustic channel estimation algorithms, such as channel estimation algorithms based on reference signal, blind estimation algorithms and semi-blind estimation algorithms [20][21][22][23][24][25][26][27][28]. Among the three kinds of algorithms, the channel estimation capability and channel tracking capability based on the reference signal are the strongest. Much research on them has been conducted by some teams, such as the team of the University of Connecticut, the team of the Massachusetts Institute of Technology, the team of Institute of Acoustics, Chinese Academy of Sciences, and the team of Harbin Engineering University. So far, all of the above channel estimation algorithms based on reference signals have adopted the traditional time-multiplexing training sequence scheme. In order to further improve the tracking capability of time-varying channels, the joint team of Qingdao University of Technology, the University of Wollongong and the University of Western Australia [29] proposed a superimposed training scheme for underwater acoustic communications, where the training sequence and the symbol sequence are linearly superimposed in order to make the channel information of the training sequence and the symbol sequence completely consistent.
As in [29,30], the superimposed training (ST) scheme and the segment strategy are used in this paper to enhance the estimation and tracking capability for fast time-varying channels. To realize the full potential of the ST scheme and the segment strategy, a channel estimation algorithm based on Gaussian likelihood (GL) is proposed. The product of the Gaussian probability-density functions of the segments is still a Gaussian probability-density function, which can be parameterized by its mean and variance, where the mean is the channel estimate and the variance is the deviation of the channel estimate. The variance of the Gaussian probability-density function after the product is less than the variance of the Gaussian probability-density function of each segment, which means that the channel estimate for the segment after the product is more accurate than the channel estimate from the segment itself. This is equivalent to estimating the channel information of the segment by using the 'whole' data block [29,30], thereby leading to an ideal channel estimation of the segment.
It is important to note that the proposed GL algorithm can achieve the same channel estimation and tracking performance as [29,30] in a 'novel' Gaussian product way, because it can be seen as a message-passing method in the Gaussian scenario [29,30]. The message-passing idea was first proposed in [30] to improve the channel estimation capability; it was then applied in underwater acoustic communications with a communication distance of approximately 1 km [29]. Different from [29,30], in this paper the same idea as message passing is realized in a 'novel' Gaussian product way; in particular, the proposed algorithm is applied in actual underwater acoustic communication machines, and the effective communication distance is extended from 1 km to 5.5 km.
In addition, an adaptive communication algorithm based on constellation aggregation (CA) is proposed. The encoding rate (such as rate-1/2, rate-1/4, rate-1/8, or rate-1/16) is automatically selected based on the aggregation degree of the constellation points after linear minimum mean square error (LMMSE) equalization. The working principles of the proposed direct adaptive communications and the traditional direct adaptive communications are different. The traditional direct adaptive communications select the modulation based on CSI and SNR. However, the proposed direct adaptive communications select the encoding rate based on the constellation aggregation. The proposed algorithm based on the constellation aggregation is more accurate in making a selection than the traditional algorithm based on CSI and SNR. In order to fully realize the potential of the GL algorithm and the CA algorithm, the single-carrier communication system and turbo equalization are adopted. The channel estimator (GL), constellation aggregation decision maker (CA), equalizer and decoder are combined together, and they are performed jointly in an iterative manner (turbo equalization) to realize an accurate estimation of fast time-varying channels and reliable communications by using the information exchange between the equalizer and the decoder (turbo equalization). Field experiments with moving transceivers (the communication distance was approximately 5.5 km) were carried out in the Yellow Sea in 2021 to verify the effectiveness of the proposed algorithms. The major contributions of this paper are summarized as follows: (1) A channel estimation algorithm, named GL, is proposed, realizing the same performance as the bidirectional channel estimation algorithm [29,30] in a novel product way of probability-density functions; (2) An adaptive communication algorithm based on constellation aggregation is proposed to improve the applicability of the system for different environments; (3) GL-based channel estimation, LMMSE equalization and decoding are iteratively performed (turbo equalization), leading to a significant performance improvement of the whole system; (4) The proposed algorithms are applied in actual underwater acoustic communication machines to verify their effectiveness.
The remainder of the paper is organized as follows. The system structure is provided in Section 2. Then, a channel estimation algorithm based on Gaussian likelihood and an adaptive communication algorithm based on constellation aggregation are shown in Section 3. Simulations, experiments and the conclusion are presented in Sections 4-6, respectively. Throughout the paper, the superscripts (·)^Tr and (·)^H represent transpose and conjugate transpose, respectively.
System Structure
The system structure of underwater acoustic communications is shown in Figure 2.
At the transmitter, the information bit sequence is encoded, interleaved and mapped to symbols using quadrature phase shift keying (QPSK). The training sequence and the symbol sequence are linearly superimposed, the resultant sequence is partitioned into multiple segments, and a cyclic prefix (CP) is appended to each segment to avoid inter-segment interference and to facilitate low-complexity equalization. In-phase quadrature (IQ) modulation is used for each CP-plus-segment. Hyperbolic frequency modulation (HFM) signals with negative and positive modulation rates are used as the head and the tail of the signal frame, respectively. Then, the resultant signals are transmitted by a transducer. The HFM signals are used to estimate and eliminate the average frequency offset and to synchronize the received signals [31]. Then, the transmitted signals are extracted, band-pass filtering and IQ demodulation are carried out, and the CPs are removed. With the resultant signals, we estimate the initial channels ĥ_nF of all segments based on the GL algorithm and the noise powers p̂_n, and obtain a 'clean' signal z_n after training elimination for data equalization. Then, LMMSE equalization, CA decision and decoding are carried out based on ĥ_nF, p̂_n and z_n, as shown in Figure 3. The quantities on both sides of the equalizer represent the same things: those on the right side are the initial values, and those on the left side are the iterative values. The iterative process proceeds until a pre-set number of iterations is reached, and hard decisions are made on each information bit in the last iteration. An illustration of the CA decision is shown in Figure 4. The return encoding rate is determined by comparing the constellation aggregation degree ξ_m with the pre-set values ξ_in and ξ_ex. When ξ_m < ξ_in, the return encoding rate is increased. When ξ_m > ξ_ex, the return encoding rate is reduced. When ξ_in ≤ ξ_m ≤ ξ_ex, the return encoding rate is kept the same. The turbo equalization is shown in Figure 3. Based on ĥ_nF, p̂_n and z_n, the LMMSE equalization and decoding for each segment are carried out, where the LMMSE equalization can be efficiently implemented with the fast Fourier transform (FFT) and where the initial a priori log-likelihood ratios (LLRs) of the interleaved encoded bits are set to zeros, i.e., L_a = 0. The soft detection outputs for multiple segments are collected to make up the extrinsic LLRs L_e, and then deinterleaving and decoding are carried out. The output of the decoder is used by both the equalizer and the channel estimator, so there are two branches from the decoder. Both branches use the latest decoding results, i.e., the LLRs of the encoded bits from the decoder, and they are updated in each iteration. In the first branch, the LLRs of the encoded bits are interleaved and input to the equalizer. In the second branch, hard decisions on the encoded bits are made, followed by interleaving and QPSK mapping to obtain the (estimated) symbol sequence. These, together with the training sequence, are used for accurate channel (re)estimation. After that, based on L_a from the first branch and ĥ_nF, p̂_n and z_n from the second branch, LMMSE equalization is performed to obtain L_e, which is input into the decoder for the next round of iteration (turbo equalization).
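A minimal sketch of the CA decision rule described above is given below. The candidate rate set follows the rates mentioned in this paper (1/2, 1/4, 1/8, 1/16); the function name, the step-by-one adjustment, and the example threshold values are our own assumptions.

```python
def select_encoding_rate(xi_m, current_rate, xi_in, xi_ex, rates=(1/16, 1/8, 1/4, 1/2)):
    """Return the encoding rate for the next transmission, based on the constellation
    aggregation degree xi_m and the preset thresholds xi_in and xi_ex."""
    rates = sorted(rates)
    i = rates.index(current_rate)
    if xi_m < xi_in:
        i = min(i + 1, len(rates) - 1)   # xi_m < xi_in: increase the encoding rate
    elif xi_m > xi_ex:
        i = max(i - 1, 0)                # xi_m > xi_ex: reduce the encoding rate
    # xi_in <= xi_m <= xi_ex: keep the current rate
    return rates[i]

# Example (illustrative thresholds): an aggregation degree below xi_in moves rate-1/8 up to rate-1/4.
print(select_encoding_rate(0.1, 1/8, xi_in=0.2, xi_ex=0.6))
```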
Accurate Channel Estimation and Adaptive Communications
A block of information bits, denoted by b = [b_1, · · · , b_{L_b}]^T, is encoded and interleaved, yielding an interleaved coded bit sequence c = [c_1, · · · , c_{L_i}]^T. Denote the periodic training sequence, with period T, as t. The training sequence and the symbol sequence are linearly superimposed with a power ratio r, yielding the transmitted signal s of length L_f, where L_f is an integer multiple of T.
Divide s into N_y segments, i.e., s = [s_1, · · · , s_{N_y}]^T, where each segment has length L_s and L_f = N_y × L_s. Taking s_n as an example, the corresponding symbol sequence is f_n and the corresponding training sequence is t_{L_s}. A CP is added to each segment, so that the channel matrix seen by each segment becomes circulant; this circulant matrix is denoted by H_n. Denote the white Gaussian noise as w.
Denote a segment of the received signal after CP removal as y_n; its length is an integer multiple of T, i.e., L_s = pT. Then, y_n can be represented as y_n = [y_{1T}, · · · , y_{pT}]^T.
The received signal y_n can be written as y_n = H_n s_n + w = H_n(r t_{L_s} + f_n) + w. Define L_c as the channel order, where T ≥ L_c; the training sequence then forms a Toeplitz matrix and, as shown in Appendix A, the channel estimate ĥ_n of a segment can be computed with the least squares (LS) algorithm.
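The LS estimate itself is derived in Appendix A; purely as a generic illustration of least-squares channel estimation from a known training block (a textbook form, not necessarily the exact expression used in the paper), consider the following sketch.

```python
import numpy as np

def ls_channel_estimate(y, t, L_c):
    """Estimate an L_c-tap channel from a received training-only observation y.

    A is the len(y) x L_c convolution (Toeplitz) matrix built from the known
    training t, so y ≈ A h + w, and the LS solution minimizes ||y - A h||^2.
    """
    A = np.column_stack([np.concatenate([np.zeros(k, complex), t[: len(t) - k]])
                         for k in range(L_c)])
    h_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    return h_hat

# toy check with a known 3-tap channel and a random training period
rng = np.random.default_rng(1)
h = np.array([1.0, 0.5 - 0.2j, 0.1j])
t = np.exp(2j * np.pi * rng.random(64))
y = np.convolve(t, h)[: len(t)]              # noiseless received training samples
print(np.allclose(ls_channel_estimate(y, t, 3), h))   # True
```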
Accurate Channel Estimation Based on Gaussian Likelihood
Channel estimates of two consecutive segments can be modeled as two independent, identically distributed Gaussian probability-density functions, denoted p_{n−1}(x) and p_n(x). Denote μ_{ĥ,n−1} and σ²_{ĥ,n−1} as the mean and variance of the channel estimate ĥ_{n−1} of the (n−1)-th segment, and μ_{ĥ,n} and σ²_{ĥ,n} as the mean and variance of the channel estimate ĥ_n of the n-th segment. Denote ĥ_nF as the channel estimate after information fusion of ĥ_{n−1} and ĥ_n, with mean μ_{ĥ,nF} and variance σ²_{ĥ,nF}. The product of the two probability-density functions is, up to a scale factor, again Gaussian. It is important to note that the variance σ²_{ĥ,nF} after the product becomes smaller than both input variances, i.e., the fused channel estimate ĥ_nF becomes more accurate and is closer to the real channel than either ĥ_{n−1} or ĥ_n. C_A is the scale factor of the Gaussian product; it is not a variable and can be normalized away. We therefore obtain the Gaussian distribution p_{nF}(x) after the product, where N(·) denotes a Gaussian distribution, and from it the fused mean and variance of Formulas (10) and (12). The message fusion Formulas (10) and (12) are equivalent to the message fusion Formulas (18) and (19) of [29], i.e., the proposed GL algorithm using a 'novel' Gaussian product achieves the same performance as the bidirectional channel estimation algorithm in [29], with the same computational complexity.
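The fused mean and variance follow the standard product-of-Gaussians rule; the sketch below illustrates this fusion numerically (variable names and numbers are illustrative only).

```python
import numpy as np

def gaussian_fuse(mu1, var1, mu2, var2):
    """Product of N(mu1, var1) and N(mu2, var2): the result is (up to a scale
    factor) Gaussian, with a variance smaller than either input variance."""
    var_f = var1 * var2 / (var1 + var2)
    mu_f = (var2 * mu1 + var1 * mu2) / (var1 + var2)   # inverse-variance weighting
    return mu_f, var_f

# element-wise fusion of two noisy estimates of the same channel taps
h_true = np.array([1.0, 0.5, 0.1])
h_prev = h_true + np.array([0.12, -0.08, 0.05])    # local estimate of segment n-1
h_curr = h_true + np.array([-0.09, 0.11, -0.04])   # local estimate of segment n
h_fused, var_fused = gaussian_fuse(h_prev, 0.01, h_curr, 0.01)
print(h_fused, var_fused)   # fused taps lie closer to h_true; the variance halves
```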
The formulas for the forward passing and the backward passing follow [33], where α_p is the channel correlation coefficient of consecutive segments, n_p is Gaussian white noise with zero mean and β is the noise power. Take the n-th segment as an example of the 'global' channel estimation flow, shown in Figure 5. For forward message passing, the local channel estimate ĥ_1 of the first segment is fused with the local channel estimate ĥ_2 of the second segment to obtain the fused estimate ĥ_2f by using (10) and (12); the message update ĥ_2a is then obtained by using (13), and the procedure is repeated until the fused channel estimate ĥ_nf is acquired. For backward message passing, the local channel estimate ĥ_{N_y} of the last segment is fused with the local channel estimate ĥ_{N_y−1} of the (N_y−1)-th segment to obtain the fused estimate ĥ_{(N_y−1)f} by using (10) and (12); the message update ĥ_{(N_y−1)b} is then obtained by using (14), and the procedure is repeated until the fused channel estimate ĥ_{(n+1)b} is acquired. Finally, ĥ_nf and ĥ_{(n+1)b} are fused to obtain the 'global' channel estimate ĥ_nF of the n-th segment. A proper number of zeros is appended to ĥ_nF to form a length-L_s vector, i.e., ĥ_nF = [ĥ_nF^T, 0, · · · , 0]^T.
Figure 5. Accurate channel estimation of the n-th segment (message fusion with forward and backward passing).
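The paper's exact forward and backward updates, (13) and (14) following [33], are not reproduced above; purely as an illustration of the sweep structure, the sketch below propagates Gaussian messages through the segments under an assumed first-order Gauss-Markov prediction step and fuses each prediction with the corresponding local estimate.

```python
import numpy as np

def sweep(h_loc, var_loc, alpha_p, beta):
    """One-directional message passing: predict to the next segment, then fuse
    the prediction with that segment's local estimate (Gaussian product)."""
    mu, var = h_loc[0].copy(), var_loc[0]
    messages = [(mu, var)]
    for h, v in zip(h_loc[1:], var_loc[1:]):
        mu_pred = alpha_p * mu                   # AR(1)-style prediction (an assumption, not the paper's (13)/(14))
        var_pred = alpha_p ** 2 * var + beta     # prediction uncertainty grows by the process noise power
        var = var_pred * v / (var_pred + v)      # Gaussian-product fusion with the local estimate
        mu = (v * mu_pred + var_pred * h) / (var_pred + v)
        messages.append((mu, var))
    return messages

h_loc = [np.array([1.00, 0.50]), np.array([1.10, 0.40]), np.array([0.90, 0.60])]
var_loc = [0.02, 0.02, 0.02]
fwd = sweep(h_loc, var_loc, alpha_p=0.95, beta=0.001)
bwd = sweep(h_loc[::-1], var_loc[::-1], alpha_p=0.95, beta=0.001)[::-1]
print(fwd[1][0], bwd[1][0])   # forward and backward messages for the middle segment
```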
Training Interference Elimination, Estimation of Noise Power and Turbo Equalization
We use F to denote the normalized discrete Fourier transform (DFT) matrix, whose (m, n)-th element is L_s^{-1/2} e^{-j2πmn/L_s} with j = √−1. Take the n-th segment as an example. The circulant matrix H_n can be diagonalized by the DFT matrix, i.e., H_n = F^H D_n F, where D_n is a diagonal matrix. After the training interference elimination, the frequency-domain received signal can be written in terms of D_n; based on the estimated channel, the diagonal elements of the estimated matrix D̂_n are obtained as the DFT of the zero-padded channel estimate ĥ_nF. As the power of the transmitted symbol sequence is set to 1, the noise power σ²_n for the n-th segment is the difference between the power P_{y_n} of the received segment and the corresponding channel energy E_{ĥ_nF}, i.e., σ̂²_n = P_{y_n} − E_{ĥ_nF}. Take the n-th segment as an example of LMMSE equalization. Following [30,33,34], the a priori mean and variance of the symbol f_i (of the symbol sequence f_n) are computed from the a priori LLRs, where the initial values of L_a^n(c_1^i) and L_a^n(c_2^i) (the initial a priori LLRs of the interleaved encoded bits) are both set to 0. The estimated interleaved bit sequence is converted to the a priori symbol sequence m_a = [m_a^1, · · · , m_a^{L_s}] by using (18). The a posteriori mean and variance of the symbol f_i are then obtained from the LMMSE filter; the a posteriori mean sequence m_p is the estimated symbol sequence after LMMSE equalization. It is noted that the computational complexity of the LMMSE equalizer is dominated by (19) and is only of the order of log(L_s) per symbol. In addition, the extrinsic mean and variance of the symbol f_i are computed and, as QPSK mapping is used, the extrinsic LLRs of the interleaved encoded bits c_1^i and c_2^i follow in closed form. The estimated symbol sequence is converted to the extrinsic LLRs (i.e., the estimated interleaved bit sequence L_e^n) by using (19)-(21). The extrinsic LLRs of all segments are collected as L_e and then input into the decoder for the next round of iteration (turbo equalization).
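For concreteness, a minimal sketch of the FFT-based one-tap LMMSE step for a single segment is given below (first iteration only, with the a priori symbol means set to zero and unit symbol power); the noise-power estimate follows the received-power-minus-channel-energy idea described above, and all parameter values are illustrative assumptions.

```python
import numpy as np

def lmmse_equalize(z, h_hat, noise_var):
    """Frequency-domain one-tap MMSE equalization of one CP-protected segment."""
    L_s = len(z)
    D = np.fft.fft(h_hat, L_s)                      # channel diagonal in the DFT basis
    W = np.conj(D) / (np.abs(D) ** 2 + noise_var)   # per-bin MMSE filter
    return np.fft.ifft(W * np.fft.fft(z))           # a posteriori symbol means

# toy loopback: circular convolution stands in for CP insertion/removal
rng = np.random.default_rng(2)
L_s = 256
h = np.array([0.9, 0.4 + 0.2j, 0.1])
f = (rng.choice([-1, 1], L_s) + 1j * rng.choice([-1, 1], L_s)) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal(L_s) + 1j * rng.standard_normal(L_s)) / np.sqrt(2)
z = np.fft.ifft(np.fft.fft(f) * np.fft.fft(h, L_s)) + noise

# noise power as received power minus channel energy (a noisy estimate on short segments)
noise_var_est = np.mean(np.abs(z) ** 2) - np.sum(np.abs(h) ** 2)
f_hat = lmmse_equalize(z, h, noise_var=max(noise_var_est, 1e-6))
print(np.mean(np.abs(f_hat - f) ** 2))              # small residual error
```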
Adaptive Underwater Acoustic Communications Based on Constellation Aggregation
We set a certain iteration number, such as one iteration, and then obtain the aggregation degree ξ_i of each symbol after LMMSE equalization according to (22), where f̂_i is the a posteriori mean of the symbol f_i and f̂_i^Re and f̂_i^Im denote its real and imaginary parts, respectively.
We compute the mean of all ξ_i over a frame of information bits, i.e., ξ_m is the average of the ξ_i of all symbols in the frame. Denote ξ_in and ξ_ex as the inner boundary and the outer boundary, respectively. As shown in Figure 4, when ξ_m < ξ_in, the encoding rate is increased automatically; when ξ_m > ξ_ex, the encoding rate is reduced automatically; when ξ_in ≤ ξ_m ≤ ξ_ex, the encoding rate is kept.
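Since the precise aggregation measure of (22) is not reproduced here, the sketch below uses, purely as an assumed stand-in, the squared distance of each equalized symbol to the nearest QPSK point; the thresholding logic mirrors the ξ_in/ξ_ex rule described above.

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def aggregation_degree(f_hat):
    """Frame-averaged aggregation degree (assumed measure: mean squared distance
    of each a posteriori symbol mean to the nearest QPSK constellation point)."""
    d2 = np.abs(f_hat[:, None] - QPSK[None, :]) ** 2
    return float(np.mean(d2.min(axis=1)))

def adapt_rate(xi_m, xi_in=0.03, xi_ex=0.2):
    if xi_m < xi_in:
        return "increase encoding rate"
    if xi_m > xi_ex:
        return "reduce encoding rate"
    return "keep encoding rate"

rng = np.random.default_rng(3)
tx = QPSK[rng.integers(0, 4, 1024)]
f_hat = tx + 0.05 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
xi_m = aggregation_degree(f_hat)
print(xi_m, adapt_rate(xi_m))   # tightly clustered constellation -> increase the rate
```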
Simulation Results
The simulation parameters are shown in Table 1. Rate-1/2, rate-1/4, rate-1/8 and rate-1/16 convolutional codes and QPSK mapping are used. A variety of power ratios between the training sequence and the symbol sequence, such as 0.15:1, 0.2:1, 0.25:1 and 0.3:1, are used. The standard block of 1024 symbols is divided into segments of various lengths, namely 128, 256, 512 and 1024 symbols; the corresponding cases are denoted S128, S256, S512 and W1024, respectively, where the prefix W means that the standard block is treated as a single segment. W1024 is used as the benchmark turbo system, and S128, S256 and S512 are used in the proposed GL turbo system. The CP is set to 128 symbols. One frame includes 100 blocks, and one block includes 1024 information bits. A 4 kHz bandwidth is assumed. For S256, with rate-1/2, rate-1/4, rate-1/8 and rate-1/16 convolutional codes, the transmission rates are 2667 bits/s, 1333 bits/s, 667 bits/s and 333 bits/s, respectively, and the corresponding bandwidth efficiencies are 0.67 bps/Hz, 0.33 bps/Hz, 0.17 bps/Hz and 0.08 bps/Hz, respectively. The SNR ranges from −4 dB to 13 dB. A static channel, as shown in Figure 6, and white Gaussian noise are used.
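The quoted transmission rates can be reproduced from the stated parameters; the short check below assumes one QPSK symbol per hertz of the 4 kHz bandwidth and counts the CP as pure overhead.

```python
bandwidth_hz = 4000           # assumed symbol rate: 1 QPSK symbol per Hz of bandwidth
bits_per_symbol = 2           # QPSK
seg, cp = 256, 128            # S256 with a 128-symbol CP
payload_fraction = seg / (seg + cp)

for k in (2, 4, 8, 16):       # rate-1/2, 1/4, 1/8 and 1/16 convolutional codes
    rate = bandwidth_hz * bits_per_symbol * payload_fraction / k
    print(f"rate-1/{k}: {rate:.0f} bits/s, {rate / bandwidth_hz:.2f} bps/Hz")
# 2667, 1333, 667, 333 bits/s and 0.67, 0.33, 0.17, 0.08 bps/Hz, as quoted above
```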
The BER performance for S256 is shown in Figure 7. It can be seen from the results that both the ST scheme and the channel estimate fusion based on Gaussian likelihood are effective: the lower the encoding rate, the better the BER performance that the system can achieve. Taking Figure 7b as an example, with SNR = 7 dB and the rate-1/4 convolutional code, all 100 blocks of information bits are correctly decoded after three iterations. From Figure 7a, with the rate-1/2 convolutional code and SNR = 13 dB, all information bits are correctly decoded after two iterations. Taking a block of 1024 information bits with the rate-1/4 convolutional code as an example, where no noise is added and the channel in Figure 6 is used, the channel estimation and equalization results are shown in Figure 8. S256 and a static channel are used; therefore, the channels of the four consecutive segments are the same and perfectly correlated, i.e., α_p = 1. When turbo equalization is not used, i.e., with 0 iterations, the corresponding channel estimate and equalization results are shown in Figure 8(a1),(b1): the estimated channel in Figure 8(a1) is obviously different from the real channel in Figure 6, and the constellation points after LMMSE equalization in Figure 8(b1) are significantly scattered. When turbo equalization is performed once, i.e., after one iteration, the corresponding equalization results are shown in Figure 8(b2); note that the estimated channels have been updated before turbo equalization, and the aggregation of the constellation points after LMMSE equalization becomes significantly better. When turbo equalization is performed twice, i.e., after two iterations, the corresponding equalization results are shown in Figure 8(b3), where the constellation points after LMMSE equalization are ideally condensed together and the corresponding estimated channel in Figure 8(a2) is exactly the same as the real channel in Figure 6, demonstrating the effectiveness of the ST scheme for enhancing the channel estimation and tracking capability and of the GL algorithm for the channel information fusion of the segments. From Figure 8b, it is clear that we can carry out adaptive communications according to a pre-set constellation aggregation degree threshold. We do not show the adaptive communication performance separately, as the results can be seen clearly from Figure 7.
Next, we test the BER performance with a variety of power ratios of the training sequence and the symbol sequence. Taking S128 with rate-1/2 convolutional code as an example, the BER performance of the system is shown in Figure 9a. The green triangle line represents the BER performance with a power ratio 0.2:1 and SNR = 13 dB; after three iterations, 100 blocks of information bits are correctly decoded. Considering the complexity and variability of underwater acoustic channels incurred by the moving transceivers, the power ratio 0.25:1 is used in the follow-up simulations and experiments. Assuming that the SNR = 13 dB and the power ratio is 0.25:1, the BER performance of the system is shown in Figure 9b. The blue star line represents the BER performance of the system with the training interference elimination; after two iterations, 100 blocks of information bits are correctly decoded, demonstrating the effectiveness of the training interference elimination. The pink square line represents the BER performance of the system without the training interference elimination, and it can be seen that, if we do not use the training interference elimination, the system simply does not work. Then, we test the BER performance of the system by using the ST scheme and the GL algorithm, where W1024 is used as the benchmark turbo system. The BER performance comparison is shown in Figure 10. From Figure 10a, if the GL algorithm is not used, the system with a variety of segment lengths does not work. From Figure 10b, if the GL algorithm is used to fuse the local channel estimates to obtain global channel estimates, it can be seen that, no matter how long the segment is, the BER performances for S128, S256, S512 and W1024 are similar. This is because, regardless of the segment length, the 'whole' standard symbol block is used to acquire the global channel estimate for each segment. This demonstrates that the proposed GL turbo system (S128, S256 and S512) can achieve a similar performance as the benchmark turbo system (W1024).
Experimental Results
Two separate underwater acoustic communication experiments with moving transceivers were carried out in the Yellow Sea in 2021, named Yellow Sea 1 and Yellow Sea 2; their deployments are shown in Figure 11a,b, respectively. We did not use a vertical array in the experiments; the two receiving hydrophones were completely independent of each other, so multiple receiver channels were not exploited in the system. The height of the sea waves was from 0.5 m to 1 m; the sea temperature was 5.6 °C; the south wind was from level 3 to level 4; and the ship carrying the transducer floated away from the ship carrying the hydrophone at a speed of approximately 0.5 m/s. The detailed experimental parameters are shown in Table 1. For both experiments, QPSK mapping and a training-to-symbol power ratio of 0.25:1 were used; the communication distance between the transceivers was approximately 5.5 km; one frame included 16 blocks, and one block included 1024 information bits; a single-carrier communication system was used; the center frequency was 12 kHz with a bandwidth of 4 kHz; and the sampling frequency was 96 kHz. The signal structure for the field experiments is shown in Figure 12.
Adaptive Underwater Acoustic Communications with SNR = 9 dB
The experimental deployment and instruments for Yellow Sea 1 are shown in Figure 11a and Figure 13, respectively. Rate-1/4, rate-1/8 and rate-1/16 convolutional codes were adopted. S256, S512 and W1024 were used, and the CP was set to 128 symbols. Taking S256 as an example, for rate-1/4, rate-1/8 and rate-1/16 convolutional codes the transmission rates were 1333 bits/s, 667 bits/s and 333 bits/s, respectively, and the corresponding bandwidth efficiencies were 0.33 bps/Hz, 0.17 bps/Hz and 0.08 bps/Hz, respectively. Both transceivers were deployed at a depth of 4 m. We first used the rate-1/16 convolutional code, and the BER performance of 16 data blocks based on the GL algorithm is shown in Figure 14. By comparing the results of S256, S512 and the benchmark turbo system (W1024), it can be seen that S256 was much more effective than S512 and the benchmark turbo system for underwater acoustic communications with moving transceivers: after only one iteration, all information bits with S256 were correctly decoded, whereas both S512 (pink square curve) and the benchmark turbo system (blue dotted curve) were completely invalid. This is because moving communications incur time-varying channels, and the average channel estimate does not effectively represent the channel information of the 512-symbol and 1024-symbol blocks. Taking the first block for S256 in Figure 14 as an example, there were four consecutive segments for the first data block, and their channels were different due to the floating transceivers, i.e., α_p ≠ 1. It is important to note that α_p is obtained automatically: it is calculated as the correlation coefficient of the estimated channels of the four segments, and this value was also used in the initial channel estimation. When turbo equalization was not used, i.e., with 0 iterations, the corresponding channel equalization results are shown in Figure 15(b1), and the constellation points after LMMSE equalization were very scattered. Then, the automatically determined α_p was recalculated using the updated channel estimates of the four segments. When turbo equalization was performed once, i.e., after one iteration, the corresponding equalization results are shown in Figure 15(b2), where the constellation points after LMMSE equalization were ideally condensed together. The corresponding estimated channels of the four segments in Figure 15a were significantly different, with α_p = 0.07 after one iteration, demonstrating the time variation of the channel and the effectiveness of the ST scheme and the GL algorithm. Then, we carried out field experiments with a variety of convolutional codes to test the effectiveness of direct adaptive communications. The adaptive threshold setting is shown in Table 2. We used the mean aggregation degree after one iteration for the threshold comparison, with the inner boundary set to ξ_in = 0.03 and the outer boundary set to ξ_ex = 0.2: when ξ_m < 0.03, the encoding (transmission) rate is increased automatically; when ξ_m > 0.2, it is reduced automatically; and when 0.03 ≤ ξ_m ≤ 0.2, it is kept the same. Table 2. Threshold setting of the mean aggregation degree ξ_m after one iteration for Yellow Sea 1 and Yellow Sea 2.
ξ_m < 0.03: improve the encoding rate
ξ_m > 0.2: reduce the encoding rate
0.03 ≤ ξ_m ≤ 0.2: keep the encoding rate
The calculation of the mean aggregation degree ξ_m is shown in Table 3. For Yellow Sea 1, assuming that the rate-1/16 convolutional code was used first, after one iteration the mean aggregation degree was ξ_m = 0.002, which was less than 0.03; therefore, the encoding rate was increased automatically, i.e., adjusted from rate-1/16 to rate-1/8. After one iteration with the rate-1/8 convolutional code, the mean aggregation degree was ξ_m = 0.0591, which fell within [0.03, 0.2]; therefore, the encoding rate was kept the same. Assuming instead that the rate-1/4 convolutional code was used first, after one iteration the mean aggregation degree was ξ_m = 0.4442, which was more than 0.2; therefore, the encoding rate was reduced automatically, i.e., adjusted from rate-1/4 to rate-1/8. As the mean aggregation degree after one iteration with rate-1/8 was ξ_m = 0.0591, within [0.03, 0.2], the encoding rate was then kept. The aggregation performance of the 16 blocks of information bits with S256 is shown in Figure 16. From Figure 16b, after one iteration with the rate-1/8 convolutional code, the constellation points of the 16 blocks of information bits were clearly clustered. The BER performance based on the ST scheme and the GL algorithm with S256 for Yellow Sea 1 is shown in Figure 17. After one iteration, decoding with the rate-1/4 convolutional code failed, while decoding with the rate-1/8 and rate-1/16 convolutional codes succeeded. With the rate-1/8 convolutional code, all information bits were correctly decoded after one iteration, and the BER performance was sufficient to meet the needs of underwater acoustic communications. Therefore, the rate-1/8 convolutional code was kept, in keeping with the result from the mean aggregation degree and demonstrating the effectiveness of the GL algorithm and the CA algorithm.
Adaptive Underwater Acoustic Communications with SNR = 13 dB
The experimental deployment in the Yellow Sea is shown in Figure 11b. An underwater acoustic communication machine (Seatrix Modem) was used, whose illustration and dimensions are shown in Figures 18 and 19, respectively; a description of the machine is given in Table 4. An SD card was plugged into the Seatrix Modem to collect data at the receiver, and the collected data were analyzed using a computer. The transmitting ship floated away from the receiving ship at a speed of approximately 0.5 m/s. For Yellow Sea 2, rate-1/2 and rate-1/4 convolutional codes were used; S256 was used, and the CP was set to 16 symbols. The transmission rates were 3765 bits/s and 1882 bits/s, respectively, and the corresponding bandwidth efficiencies were 0.94 bps/Hz (rate-1/2) and 0.47 bps/Hz (rate-1/4). The deployment depths of the transducer and the hydrophone were 4 m and 5 m, respectively. The main goal of this experiment was to demonstrate a successful implementation of the proposed algorithms on modem hardware. As the receiver in Section 5.2 had a higher SNR than the receiver in Section 5.1, higher code rates could be used in Section 5.2. We used rate-1/2 and rate-1/4 convolutional codes and S256. The BER performance based on the GL algorithm is shown in Table 5. It can be seen that S256 was very effective for underwater acoustic communications with moving transceivers: after only one iteration, all information bits were correctly decoded. Taking the fourth block with the rate-1/2 convolutional code in Table 5 as an example, there were four consecutive segments for the fourth data block, and their channels were different due to the floating transceivers, i.e., α_p ≠ 1. When turbo equalization was not used, i.e., with 0 iterations, the corresponding channel equalization results are shown in Figure 20(b1), and the constellation points after LMMSE equalization were very scattered. Then, the automatically determined α_p was updated. When turbo equalization was performed once, i.e., after one iteration, the constellation points after LMMSE equalization were still scattered, as shown in Figure 20(b2). After three iterations, as shown in Figure 20(b4), the constellation points after LMMSE equalization were ideally condensed together. The corresponding estimated channels of the four segments in Figure 20a were significantly different, with α_p = 0.09 after three iterations, demonstrating the time variation of the channel and the effectiveness of the ST scheme and the GL algorithm. Comparing Figure 15a for Yellow Sea 1 and Figure 20a for Yellow Sea 2, it can be seen that the channel lengths are almost the same, because the communication environments are basically the same. We do not show BERs for S512 and W1024, as they basically do not work.
Then, we tested the effectiveness of direct adaptive communications with real underwater acoustic communication machines. The adaptive threshold setting is shown in Table 2. We again used the mean aggregation degree after one iteration for the threshold comparison, with the inner boundary set to ξ_in = 0.03 and the outer boundary set to ξ_ex = 0.2: when ξ_m < 0.03, the encoding (transmission) rate is increased automatically; when ξ_m > 0.2, it is reduced automatically; and when 0.03 ≤ ξ_m ≤ 0.2, it is kept the same. The calculation of the mean aggregation degree ξ_m is shown in Table 3. For Yellow Sea 2, assuming that the rate-1/4 convolutional code was used first, after one iteration the mean aggregation degree was ξ_m = 0.007, which was less than 0.03; therefore, the encoding rate was increased automatically, i.e., the channel code was adjusted from rate-1/4 to rate-1/2. After one iteration with the rate-1/2 convolutional code, the mean aggregation degree was ξ_m = 0.041, which fell within [0.03, 0.2]; therefore, the encoding rate was kept the same. Assuming instead that the rate-1/2 convolutional code was used first, after one iteration the mean aggregation degree was ξ_m = 0.041, within [0.03, 0.2], so the encoding rate was kept. The aggregation performance of the 16 blocks of information bits with S256 for Yellow Sea 2 is shown in Figure 21. From Figure 21a, after one iteration with the rate-1/2 convolutional code, the constellation points of the 16 blocks of information bits became clearly clustered.
(Figure panels: (a1) rate-1/2, 0 iterations; (a2) rate-1/2, 1 iteration; (a3) rate-1/2, 2 iterations; (b1) rate-1/4, 0 iterations; (b2) rate-1/4, 1 iteration; (b3) rate-1/4, 2 iterations.)
The BER performance based on the GL algorithm with S256 is shown in Table 5. After one iteration, the decoding performance with the rate-1/2 convolutional code was sufficient to meet the needs of underwater acoustic communications. Therefore, the rate-1/2 convolutional code was kept, in keeping with the result from the mean aggregation degree. The experiment demonstrated the effectiveness and practicability of the proposed algorithms in real underwater acoustic communication machines.
Table 5. BER performance of the GL turbo system with S256 at SNR = 13 dB for Yellow Sea 2.
From the above simulation and experimental results, we can conclude that the best segment length is 2^n (n an integer); it should be close to and longer than the channel length, and shorter than a period of the training sequence. Considering the transmission rate and the time variation of the channel, S256 is better than S128, S512 and W1024. The two separate experiments show that, at SNR = 9 dB, the 1/8 code rate is effective, and, at SNR = 13 dB, the 1/2 code rate is effective. Even with α_p = 0.07 and α_p = 0.09 (α_p can be obtained by calculating the correlation coefficient of consecutive segments), i.e., when the channels of the four segments are only weakly correlated, the proposed system remains effective.
Conclusions
The GL algorithm and the CA algorithm have been proposed to achieve globally accurate channel estimation of each segment and automatic encoding rate adjustment. To improve the estimation and tracking capability for time-varying channels, the ST scheme has been used. For channel estimate fusion over the segments, S256 is the best choice for practical moving underwater acoustic communications. Even when the channel correlation coefficients of the segments are as low as 0.07 and 0.09, the proposed GL turbo system remains effective. The experimental results demonstrate that a 1/8 code rate is effective at SNR = 9 dB, and a 1/2 code rate is effective at SNR = 13 dB. In the process of iteration, direct adaptive communications based on constellation aggregation have been realized: the experimental results illustrate that the encoding rate can be adjusted automatically among the 1/2, 1/4, 1/8 and 1/16 code rates by using the mean aggregation degree decision. Simulations and experimental results have verified the effectiveness of the proposed system.
Appendix A
The received signal of a segment can be written as y_n = H_n s_n + w = H_n(r t_{L_s} + f_n) + w, where H_n is the circulant channel matrix whose first column is [h_1, h_2, · · · , h_{L_c}, 0, · · · , 0]^T. Take elements of h_n, s_n, t_{L_s}, f_n and w to build the channel estimator based on the LS algorithm. An element of the impulse response of the channel is denoted by h_n; an element of the symbol sequence is denoted by f_n; an element of the periodic training sequence, with period T, is denoted by t_n; and the superimposed symbol sequence is denoted by s_n = f_n + r × t_n. As r is a constant, the superimposed symbol sequence can be simplified as s_n = f_n + t_n. An element of the received signal sequence can then be expressed as the convolution y_n = h_n * s_n + w_n = Σ_{l=1}^{L_c} h_l s_{n−l+1} + w_n, where w_n is the white Gaussian noise and L_c is the channel length. Assuming that the mean value of the symbol sequence is 0 and that T ≥ L_c, we let L_s = pT and divide E[y_n] into p subsegments of length T.
"Computer Science"
] |
High-accuracy bulk electronic bandmapping with eliminated diffraction effects using hard X-ray photoelectron momentum microscopy
A key benefit of angle-resolved photoelectron spectroscopy (ARPES) in the X-ray range is the significant increase of the information depth, thanks to the large inelastic mean-free-path of the escaping photoelectrons. In practice, hard X-ray ARPES (HARPES) faces severe challenges from low cross sections, large photon momentum transfer, and in particular strong phonon scattering and photoelectron diffraction effects. Here, we show that these challenges can be overcome by extending ultra-efficient time-of-flight momentum microscopy into the hard X-ray regime. Phonon scattering destroys the initial momentum distribution, but subsequent diffraction at the lattice imprints a pronounced Kikuchi-type pattern on the background signal. Moreover, the pattern of the valence electrons is modulated by diffraction as well. For the examples of the medium-weight element materials Mo and layered TiTe2, we demonstrate how comprehensive valence-band and core-level photoemission data taken under identical conditions can be used to effectively remove photoelectron diffraction effects in HARPES band maps.
The increasing availability of high-brilliance soft and hard X-ray beams at synchrotron sources has launched a new generation of angle-resolved photoelectron spectroscopy (ARPES) [1-6]. Genuine bulk electronic structure information can be gained, solving the surface-sensitivity problem of low-energy ARPES. Soft X-ray ARPES at photon energies of typically less than 1000 eV, corresponding to information depths of up to 5 nm (ref. 7), is now a standard method at modern synchrotron radiation sources and has recently been extended to in-depth studies of novel topological materials [8-10], bulk impurities 11, buried interfaces 12, and device-like heterostructures 13. The information depth increases further in the hard X-ray range, reaching ∼20 nm at hν = 6 keV (ref. 7). In this truly bulk-sensitive regime, only a few ARPES experiments have been performed up to now 4,5,14,15.
Such experiments remain rare because the desired increase of the inelastic mean-free-path λ_IMFP in hard X-ray ARPES (HARPES) comes along with several effects that hamper photoemission at high energies. The approximate wavelength of the final-state wavefunction, λ_e = 2π/k_f, varies from 61 to 15.8 pm in the region between 400 eV and 6 keV. This has two important consequences. First, the photoionization cross-sections drop strongly, since the shorter wavelength leads to cancellations in the overlap integral of the transition matrix elements; between 400 eV and 6 keV most subshell cross-sections decrease by several orders of magnitude 16. Second, λ_e shorter than interatomic distances gives rise to core-like X-ray photoelectron diffraction (XPD), leading to pronounced intensity modulations in angle or momentum.
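The quoted wavelengths follow from the non-relativistic de Broglie relation; the short check below (relativistic corrections, well below one percent here, are neglected) reproduces the 61 pm and 15.8 pm values.

```python
import numpy as np

h = 6.62607015e-34       # Planck constant (J s)
m_e = 9.1093837015e-31   # electron mass (kg)
eV = 1.602176634e-19     # J per eV

for E_kin in (400.0, 6000.0):
    lam = h / np.sqrt(2 * m_e * E_kin * eV)      # non-relativistic de Broglie wavelength
    k_f = 2 * np.pi / (lam * 1e10)               # final-state wavenumber in 1/Angstrom
    print(f"E_kin = {E_kin:5.0f} eV: lambda_e = {lam * 1e12:.1f} pm, k_f = {k_f:.1f} / Angstrom")
# prints ~61.3 pm (k_f ~ 10.2 / Angstrom) at 400 eV and ~15.8 pm (k_f ~ 39.7 / Angstrom) at 6 keV
```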
Another serious effect, which can smear out all band-structure information, is electron-phonon scattering. Due to the increasing mean-free-path, the cross-section of this process increases with kinetic energy, and it becomes the dominant loss channel in high-energy photoemission. Phonon scattering can transfer large momenta, while energy transfer is typically limited to <100 meV. Hence, the photoelectron momentum distribution becomes "randomized", resulting in a strong diffuse background. The characteristic spectral distribution of this background reflects the matrix-element-weighted density of states (MEWDOS), as observed and discussed in previous work [17-21]. Similar to diffraction experiments, the relative weight of the phonon-scattering channel essentially depends on a photoemission Debye-Waller factor. Debye-Waller criteria in fact appear to rule out most materials for HARPES 22,23. In reasonable agreement with theory 24, however, well-resolved band-structure features have been measured up to 6 keV for tungsten 4, which represents a special case because it is heavy (Z = 74) and has a high Debye temperature (377 K). Due to its high Debye-Waller factor at cryogenic temperatures, tungsten is less prone to phonon smearing effects in HARPES than most other materials.
Overall, low cross-sections, pronounced photoelectron diffraction, strong phonon scattering, and also the large photon momentum transfer to the photoelectron render HARPES a challenging task. Here, we present an approach that eliminates the strong inherent distortions in the measured HARPES band maps originating from these effects. The medium-Z metal Mo (Z = 42) and the reference transition-metal dichalcogenide TiTe2 serve as examples from a comprehensive series of measurements on various materials with photon energies up to 6 keV. The experiments are carried out with a time-of-flight momentum (k) microscope 6 at beamline P22 of the storage ring PETRA III at DESY in Hamburg (Germany), which provides hard X-rays with one of the highest brilliances worldwide in an energy range from 2.4 to 15 keV 25. For comparison, data taken in the soft X-ray range are included (for details on the beamlines and the microscope, see Methods section). The simultaneous measurement of valence-band momentum maps and core-level photoelectron diffraction patterns, taken at identical settings of the microscope (kinetic energy, k-field of view), allows the quantitative correction of the diffraction modulation in the as-measured valence-band patterns.
Figure 1e: Direct transitions from Mo(110) into free-electron-like final states between hν = 400 eV and 6 keV. Momentum components k_x and k_z are quantized in multiples of the reciprocal lattice vector G_110 = 2.824 Å^-1. The final-state sphere of radius k_f (here plotted for emission from the Fermi level, E_F) runs through a large k-range between the third and 14th repeated Brillouin zone. Its center is displaced from the origin k = (0,0,0) by the photon momentum k_hν. Dashed vectors indicate the range observed by the k-microscope (field of view (FoV) indicated on top); the vector k_f denotes emission normal to the surface. The background pattern shows a k_y = 0 cut (at E_F) of the periodically repeated measured spectral density; dark denotes high spectral density. The spectral function was mapped in the yellow area with small steps of 0.1 Å^-1 along k_z via appropriate setting of many photon energies.
Results
Molybdenum (110). Figure 1a, b shows the inelastic mean-free-path and the partial cross-sections of Mo 4d and 4p, respectively, revealing the enhancement of the information depth (∼3λ_IMFP) and the strong drop in subshell cross-sections with increasing energy. Figure 1c, d illustrates the effects of photoelectron diffraction and phonon scattering, respectively, with the wavelength drawn to scale for 3 keV electrons in Mo. Figure 1e shows a quantitative transition scheme for Mo, assuming direct transitions into free-electron-like final states. In the extended zone scheme, these final states are located on a sphere of radius k_f, where m_eff and m are the effective mass and the free-electron mass, and E_B and V_0* are the binding energy and the inner potential, both referenced to the Fermi level (E_F). Photons with energies between 400 eV and 6 keV induce transitions into the third to 14th repeated Brillouin zone (BZ). The photon wave vector k_hν is transferred to the photoelectron, causing a displacement of the center of the sphere. For 6 keV, this shift (k_hν = 2πν/c = 3.178 Å^-1) is larger than the diameter of a full BZ. The periodic background pattern in Fig. 1e is a cut of the measured 3D spectral density function at E_F in the plane of photon incidence (k_z−k_x). The intersection of the final-state sphere with the spectral function reveals where band features appear for arbitrary emission directions. Note that the k-scale of Fig. 1e is aligned with the energy scale of Fig. 1a, which shows the inelastic mean-free-path λ_IMFP for Mo increasing from 0.5 to 6.8 nm in this range.
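For orientation, a commonly used free-electron final-state expression for this radius (stated here only as an assumed form consistent with the quantities just defined, not necessarily the authors' exact expression) is

\[
k_f \;=\; \frac{\sqrt{2m}}{\hbar}\,\sqrt{\frac{m_\mathrm{eff}}{m}\left(h\nu - E_B\right) + V_0^{*}}\,.
\]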
The top row of Fig. 2a-d shows the as-measured k-distributions (sections at E_F) from an Mo(110) crystal at four selected final-state energies between 3110 and 6010 eV. Up to the maximum photon energy, sharp band features are visible in the imaged k-region of ∼12 Å^-1 diameter. In particular, we recognize the diamond-shaped electron pockets at the Γ-points (labeled A), the spherical electron pockets (B) at the corners of diamond A, the elliptical contour of the hole pockets centered at the N-points (C) and the octahedron-shaped electron pocket centered at the H-points (D), as denoted in the calculated and measured Fermi surface. However, intense background patterns with characteristic shapes are superimposed on the k-distributions in the top row.
Assuming that the structured background patterns in Fig. 2a-d are the fingerprint of XPD, we took high-resolution diffraction patterns of the shallow Mo 4p core level at the same kinetic energies by slightly varying the photon energy and using identical settings of the microscope, ensuring the same k-field of view. The results are shown in Fig. 2e-h. The small-angle diffractograms exhibit rich structure, with the finest features corresponding to an angular width of only <0.1°. The diffraction features show a strong variation with photon energy and become more detailed with increasing energy (see the sequence of Fig. 2e-h).
Although it comprises a number of BZs as marked in Fig. 2b, the k-field of view (see scale bars) corresponds to a rather small polar angular range of ∼10°. This is the reason why the characteristic strong XPD signals from forward-scattering along atom rows 26,27 are missing. Nevertheless, we identify the fingerprint of the crossing region of the central Kikuchi bands 28 . The dominant orthogonal Kikuchi bands are indicated by dashed lines in Fig. 2h. The horizontal band is marked by arrows and the center of the XPD pattern by a plus sign in Fig. 2e-h. The center is the crossing region of two orthogonal Kikuchi bands. The expected two-fold symmetry of the (110)-surface is broken by the incoming p-polarized photon beam. The photon momentum is denoted by the arrow in Fig. 2d; the electric-field vector is oriented 22°off-normal. These Kikuchi bands arise from Bragg scattering on lattice planes perpendicular to the surface. Owing to the relatively large scattering factor of Mo, the Kikuchi band is not as sharp as for light elements, but sufficiently clear for an identification of the relevant reciprocal lattice vector G 001 . The width of the band is approximately 2G 001 . In addition, we recognize dark excess lines and edges of Kikuchi bands with their characteristic sharp intensity jump (Fig. 2h, bottom left). Indeed both the dark line and the edges are visible in the as-measured valence-band distribution as well (Fig. 2d, bottom left).
The shifts of the field of view result from small movements of the photon footprint on the sample when changing the photon energy. In order to retain identical settings, these shifts were not corrected by electron-beam deflection. Remarkably, the prominent features in the Mo 4p core-level diffractograms (Fig. 2e-h) appear to be identical to the background features in the valence-band patterns (Fig. 2a-d). In particular, the crossing Kikuchi bands indicated by the dashed lines in Fig. 2h appear also in Fig. 2d. Assuming that the background intensity carries the XPD signature, we subtract the diffractograms of Fig. 2e-h from the as-measured patterns, weighted by an empirical intensity factor; the results are shown in Fig. 2i-l. Surprisingly, the difference images in Fig. 2i-l still show intensity enhancements due to XPD in the valence-band patterns themselves. For example, the intense feature marked by arrows in Fig. 2j obviously falls on a high-intensity region in the corresponding diffractogram of Fig. 2f. Apparently, the valence-band electrons originating from k-conserving direct transitions into free-electron-like bands carry a diffraction signature resembling the core-level XPD patterns. Assuming that this intensity modulation is correlated with the one seen in the XPD patterns measured for the core level, we try a multiplicative correction in terms of a pixel-by-pixel division of the raw data in Fig. 2a-d by the XPD patterns in Fig. 2e-h. The result is shown in the fourth row (Fig. 2m-p). Clearly, the XPD signature of the background is better removed in Fig. 2m-p, suggesting that the background intensity has been divided by itself, yielding just a constant intensity offset. In addition, the valence-band map is further homogenized in comparison with the purely subtractive correction (Fig. 2i-l), because the multiplicative correction also eliminates detector artefacts. Figure 2q-t shows E_B-vs-k sections through the corrected arrays of Fig. 2m-p, cut along the dashed lines. These cuts clearly reveal the band dispersions, which in the uncorrected arrays would be partly masked by the imprinted XPD modulation. Note that the spectral distribution of the background following the MEWDOS [17-21] is largely eliminated at the lower energies (Fig. 2q, r). However, residues of horizontal stripes, characteristic of the MEWDOS background, are still visible at 5410 eV (Fig. 2s). The inhomogeneity of these residual stripes points to a (small) k-dependence of the MEWDOS background. At 6010 eV, a diffuse background underlies the band pattern on the right-hand side, reflecting the dramatic drop in the fraction of non-scattered photoelectrons and an increase of inelastic scattering at such high energies, as we will discuss below. Figure 2u, v shows the intensity spectra integrated over the field of view for the valence band and the Mo 4p signal, respectively. Figure 2w, x displays the calculated and measured Fermi surface of Mo, respectively.
Close inspection of Fig. 2m-p reveals that there are still various regions of significant local intensity enhancement. This resembles our earlier result found in the soft X-ray range 29 . We will come back to this point in the Discussion section.
Bandmapping of TiTe 2 . Figure 3 presents HARPES data for TiTe 2 , a layered Fermi-liquid reference compound 30 of the family of transition-metal dichalcogenides. In the as-measured valence-band map (Fig. 3a) band features are hardly discernible as the pattern is dominated by the pronounced sixfold star-like shape found in the XPD patterns at this energy. Figure 3b, c shows the XPD patterns of the Te 4p core level and the background at the high-energy end of the spectrum. This background can be viewed as the weighted average of diffraction from the Te and Ti sublattices. Since Te is much heavier (Z = 52 vs Z = 22 for Ti), the dominant contribution is given by diffraction from the Te sublattice. The corrected pattern shown in Fig. 3d has been obtained by pixel-by-pixel division of the raw data (Fig. 3a) by the Te 4d 5/2 XPD pattern (Fig. 3b). It reveals a clear valence-band map at E F ; the strong diffraction artefacts have completely disappeared. The spectral regions over which the maps in Fig. 3a-c have been integrated are marked in the spectra displayed in Fig. 3e. The bottom row (Fig. 3f-i) shows energy isosurfaces for different binding energies E B as given in the panels. The same data processing as for Fig. 3d (division by the Te 4d 5/2 XPD pattern) has been applied to the energy cuts shown in Fig. 3f-i. The band features are pronounced, distortion-free, and exhibit the threefold symmetry of the 1T structure. The band dispersion is displayed in the E B -vs-k sections shown in Fig. 3j, k. These panels show cuts through the corrected data arrays for E final = 5155 eV (along the dashed line in Fig. 3d) and 4010 eV (along the dashed line in Fig. 3f). Similarly to the high-energy results (Fig. 2s, t), we recognize residues of MEWDOS stripes and a rather large diffuse background in the E B -vs-k sections. The onset of this diffuse background at E B > 0.5 eV originates from energy-loss processes like electron-hole excitations in the valence band. The region in close vicinity of the Fermi level is free of this inelastic background contribution.
Besides the intense circular Te 5p-derived features at the Γ-points, the Fermi surface map should contain Ti 3d-derived elliptical electron pockets at the M-points, midway between two adjacent Γ-pockets. However, the Ti 3d ellipses cannot be identified in the E_final = 5155 eV patterns (Fig. 3d, j), illustrating the larger relative cross-section for initial states of s- and p-character over states with d- and f-character in HARPES 16. In addition to the low Ti 3d cross-section (Fig. 1b), these pockets are only about 200 meV deep 31, smaller than the photon bandwidth. Overall, the low cross-section (two orders of magnitude lower than all others in Fig. 1b), the low Z (Z = 22) and the small spectral range (200 meV) of the Ti 3d-derived pockets make the detection of this feature highly challenging in HARPES. We find the onset of visibility at hν = 4000 eV, as shown in Fig. 3f, k, where the regions around the pocket are displayed with increased contrast. The pockets exhibit the characteristic elongated shape (Fig. 3f) and appear stretched in energy due to the photon bandwidth (Fig. 3k).
Figure 2. Results for Mo(110) at photon energies between 3.1 and 6 keV, corresponding to the final-state energies given on top. a-d As-measured large-area momentum maps of the valence bands (VB) (sections at the Fermi level). Dashed lines in b denote the Brillouin zones (approximate because of the curved final-state sphere). A denotes the diamond-shaped electron pocket at the Γ-point, B the electron balls at the corners of the diamond, C the ellipsoidal hole pockets at the N-points, and D the octahedron-shaped electron pockets at the H-points, altogether making up the Fermi surface. e-h High-resolution 4p core-level diffraction patterns taken at the same final-state energies as a-d; arrows mark the central Kikuchi band 28. The scales of the k-fields are identical in rows one and two, but vary slightly with energy as denoted by the scale bars; intensity in counts per pixel. i-l Same distributions as the top row after subtraction of the core-level patterns from the second row, weighted by an empirical intensity factor. m-p Same distributions as the top row after pixel-by-pixel division by the core-level patterns from the second row. q-t E_B-vs-k sections through the arrays of the fourth row along the dashed lines. u, v Momentum-integrated spectra corresponding to a and e, respectively. w Brillouin zone with calculated Fermi surface (from ref. 41). x Fermi surface of Mo measured in a photon-energy interval between 400 and 900 eV using the method described in ref.
Discussion
The structured background patterns in Figs. 2a-d and 3a originate from a combined action of phonon scattering and diffraction. Phonon scattering is a stochastic process, which destroys the initial k-distribution of the electrons, leading to a (statistical) randomization of the k-vector. The change in energy (<100 meV) is smaller than the photon bandwidth, hence the scattered electrons cannot be separated energetically from the true signal. These electrons have lost their memory of the initial k-distribution. However, they are diffracted at the lattice, analogously to the well-known Kikuchi-type diffraction in scanning electron microscopy (SEM). The Kikuchi bands observed in backscattering in SEM 28 also appear due to initial scattering events at phonons and diffraction of these background electrons at the lattice. In the wave picture, the scattered electrons are described as spherical waves centered at the location of the stochastic event. An important result of the present paper is that the strong background signal in high-energy photoemission, originating from phonon scattering, also carries this Kikuchi-type diffraction signature.
Kikuchi patterns are characteristic for the diffraction of a spherical wave generated inside of the material by a scattering event. The diffraction pattern depends on the photoelectron wavelength, i.e., on E final , which can be considered as the photoelectron kinetic energy inside of the material. The primary signal can be a high-energy electron beam in an electron microscope or an electron in a high-energy Bloch wave generated by a direct transition from an itinerant band state. Even other inelastic processes in solids can generate spherical waves that undergo diffraction. This is in accordance with the fact that also the inelastic background on the high-energy side of the core-level signal displays an XPD pattern 32,33 (an example is shown in Fig. 3c). Photoemission from localized band states or core levels (point emitters) also leads to an outgoing spherical wave. Hence such photoelectrons can experience Kikuchi-type diffraction as well 26,27,34 , even without phonon scattering processes being involved (although in practice these are always present at finite temperatures). At identical final-state energies the observed momentum patterns in all these cases look alike because the underlying process, a spherical wave being diffracted at the lattice, is the same.
The equivalence of the diffraction patterns imprinted on the quasi-elastic background in valence-band k-maps and the XPD patterns of electrons from core levels at the same final-state energy can be exploited for an effective elimination of the background. Either the core-level XPD pattern is subtracted from the as-measured valence pattern using an intensity factor as empirical parameter, or the valence pattern is divided by the XPD pattern leading to a homogeneous background that can easily be subtracted. Both procedures lead to similar results (compare Fig. 2i-l with Fig. 2m-p), the division effectively also removes detector artefacts.
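As a schematic illustration of these two correction routes (not the authors' analysis code), the sketch below applies the subtractive and the multiplicative correction to synthetic, co-registered maps; the test pattern and the empirical scale factor are arbitrary assumptions.

```python
import numpy as np

def subtract_xpd(vb_map, xpd_map, scale):
    """Subtractive correction with an empirical intensity factor."""
    return vb_map - scale * xpd_map

def divide_xpd(vb_map, xpd_map, eps=1e-12):
    """Multiplicative correction: pixel-by-pixel division turns a common
    multiplicative background into a constant offset and also removes
    detector inhomogeneities present in both maps."""
    return vb_map / np.maximum(xpd_map, eps)

# synthetic example: a ring-shaped 'band' on top of a Kikuchi-like modulation
ky, kx = np.mgrid[-1:1:256j, -1:1:256j]
xpd = 1.0 + 0.4 * np.cos(6 * np.pi * kx)                    # stands in for the core-level XPD pattern
band = np.exp(-((kx ** 2 + ky ** 2 - 0.25) ** 2) / 0.001)   # stands in for a valence-band feature
vb = (band + 1.0) * xpd                                     # both band and background carry the modulation

corrected = divide_xpd(vb, xpd)
print(np.allclose(corrected, band + 1.0))   # True: modulation removed, only a constant offset remains
```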
The Debye-Waller formalism quantifies the fraction of photoemission intensity originating from direct transitions without phonon scattering. The Debye temperatures 35 of Mo and TiTe2 are 423 and 217 K, respectively. Figure 4 depicts, on a semi-logarithmic scale, the fraction of k-conserved photoelectron intensity at T = 30 K of the Mo(110) and TiTe2 valence bands that cross the Fermi level near the N-point (for Mo) and the Γ-point (for TiTe2), as a function of E_final. The fraction of k-conserved valence-band intensity at each kinetic energy is obtained as I_VB = (I_max − I_B)/I_max, where I_max and I_B denote the measured intensity at the N- (Γ-) point maximum and the averaged background intensity in close vicinity to the corresponding N- and Γ-point, respectively. The division by I_max accounts for the different accumulation times and X-ray intensities of the individual measurements. We have used processed data similar to those shown in Fig. 2m-p in order to eliminate the diffraction contribution.
One expects a decreasing intensity with increasing final-state energy, as described by the Debye-Waller factor 34 DWF = e^(−αTE_final). The fitted value α = (1.1 ± 0.1) × 10^-5 K^-1 eV^-1 agrees with the predicted value of 1.2 × 10^-5 K^-1 eV^-1 for Mo. Note that the prefactor α comprises an energy-independent factor averaged over all possible scattering paths 34. Within this model, the DWF represents the probability p that one observes a direct transition without a scattering event; consequently, the probability of a scattering event with loss of momentum information is 1 − p. Since both observables originate from the same diffraction process, the division of the original data by the XPD pattern eliminates the modulation caused by diffraction both in the direct-transition fraction and in the background intensity. Thus, one observes the valence band on a homogeneous background, allowing a quantitative evaluation of p. Our results confirm within error limits the predicted decrease of the valence-band intensity with increasing photon energy due to phonon scattering.
The temperature dependence becomes dramatic at high energies; at 6 keV the direct-transition signal drops by a factor of 4 upon a temperature increase from 20 to 40 K. For the case of TiTe2 we determine α = (1.6 ± 0.5) × 10^-5 K^-1 eV^-1, whereas from the ratio of the Debye temperatures and atomic masses one would expect α = 2.2 × 10^-5 K^-1 eV^-1.
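Using the fitted α for Mo, the quoted temperature sensitivity can be checked directly; the snippet below is a simple numerical illustration of DWF = exp(−αTE_final) with the values given above.

```python
import numpy as np

alpha_Mo = 1.1e-5      # fitted prefactor for Mo (K^-1 eV^-1)
E_final = 6000.0       # final-state energy (eV)

for T in (20.0, 30.0, 40.0):
    p = np.exp(-alpha_Mo * T * E_final)   # fraction of direct (k-conserving) transitions
    print(f"T = {T:.0f} K: p = {p:.3f}")
print(np.exp(-alpha_Mo * 20 * E_final) / np.exp(-alpha_Mo * 40 * E_final))
# the last line gives ~3.7, i.e. roughly the factor-of-4 drop between 20 K and 40 K quoted above
```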
We assume that α is a constant, depending exclusively on the Debye temperature and the mass of the nuclei. In the rigorous formulation, the exponent of the DWF comprises the factors g^2 and u^2, where g is the relevant reciprocal lattice vector and u is the averaged amplitude of lattice vibrations. Note that the first factor is in fact not a constant for diffracted electrons, where one has to consider the reciprocal lattice vector involved in the diffraction. The averaging of the vibration amplitudes is not allowed in cases of strongly anisotropic phonon distributions. In future investigations, the time-of-flight momentum microscope will allow the observation of temperature- and momentum-dependent variations of the background intensity, which may result in additional physical insights into the phonon distribution and the electron-phonon interaction.
It is evident that our normalization procedure suppresses part of the intensity modulations originating from the photoemission matrix elements, so that, e.g., dipole selection rules might be less visible. At first sight it is surprising that the momentum patterns at high energies do not show significant dichroism effects. With increasing photon energy, higher multipole contributions in the photon operator beyond the electric dipole approximation become significant. These mask the classical dipole selection rules and the corresponding dichroism effects (circular and linear dichroism in the angular distribution, CDAD and LDAD). These dichroism effects are substantial in the VUV range 36 and in the soft X-ray range 37, and are in very good agreement with theory based on the dipole approximation. In addition to these multipole contributions, the partial cross-sections drop more rapidly for higher orbital quantum numbers, see Fig. 1b. This different weighting of partial-wave matrix elements influences the dichroism and also the spin polarization 38 in the hard X-ray range.
Remarkably, both procedures for eliminating the XPD intensity modulations (Fig. 2i-l and Fig. 2m-p) result in k-maps that still exhibit inhomogeneous local intensities. These are a fingerprint of a different diffraction mechanism acting on the valence-band photoelectrons. Kikuchi patterns in the background signal arise due to coherence-destroying scattering processes or from emission from fully localized core levels, both leading to outgoing spherical waves. The spherical waves experience Kikuchi-type diffraction at the lattice, leading to highly symmetric patterns aligned with the crystal lattice. High-symmetry directions in the Kikuchi patterns coincide with the corresponding crystallographic directions. Unlike core electrons and scattered electrons, the electrons originating from direct transitions from itinerant band states are plane waves propagating in the direction k_final. Figure 1e makes clear that the full 4D valence-band emission pattern I(E_B, k) (with E_B being a continuum) is fundamentally different from the 3D emission pattern of electrons from a core level (at a given E_B). The most important difference originates from the transfer of the photon momentum to the valence photoelectron, leading to a displacement of the final-state sphere in periodic k-space (Fig. 1e). This displacement breaks the symmetry and leads to non-symmetric diffraction features. Considering the continuum of E_B, this diffraction scheme is equivalent to Laue diffraction with a continuum of X-rays. It can be described in terms of Umklapp processes on the displaced final-state sphere involving reciprocal lattice vectors. We have discussed and quantified this effect in the soft X-ray range 29. The intensity enhancement can reach a factor of 5. This momentum-transfer description can be visualized in terms of a geometric model resembling the Ewald construction, however, with the final-state sphere being displaced by the photon momentum k_hν.
In conclusion, using two medium-weight element materials as examples, Mo and TiTe 2 , we have presented a powerful method for high-accuracy bulk electronic structure mapping using momentum-resolved hard X-ray photoelectron spectroscopy. The key ingredients are high-resolution valence-band maps and core-level diffractograms taken at exactly the same kinetic energy and k-range. This allows for voxel-by-voxel processing of the 3D data arrays and disentangling of the different distortion effects due to quasi-elastic phonon scattering and photoelectron diffraction. A strongly structured background underlying the valence-band maps originates from phonon scattering and shows the same XPD pattern as core-level photoelectrons of the same final-state energy. Scattered valence electrons and core electrons show Kikuchi-type XPD patterns because both are described by a point emitter inside of the crystal.
The equivalence of the background in the valence maps and the core-level Kikuchi patterns offers two ways of eliminating the background signal. We have shown that the background can be removed either by subtraction of the corresponding core-level diffractogram, introducing an intensity factor as an empirical parameter, or by division by the core-level diffractogram, which leaves a constant intensity offset. A residual XPD modulation of the band patterns originates from Laue diffraction of the propagating plane waves. This type of valence-band photoelectron diffraction can be considered a matrix-element effect and a fingerprint of the inverse LEED state 39,40 , which up to now was hardly accessible experimentally. The proposed correction scheme eliminates the scattering and XPD contributions effectively. The results were obtained with a highly efficient momentum microscope, but the principles and procedures are independent of the photoelectron detection scheme.
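As an illustration of the voxel-by-voxel correction described above, the following sketch shows both variants (subtraction with an empirical intensity factor, and division) applied to a valence-band data array and a core-level diffractogram taken at the same kinetic energy. The array names and the factor alpha are placeholders, not quantities defined in the paper.

```python
import numpy as np

# I_vb:   3D valence-band array I(E_B, k_x, k_y), shape (nE, nkx, nky)
# D_core: 2D core-level diffractogram D(k_x, k_y) at the same final-state
#         kinetic energy, shape (nkx, nky); both are assumed to be measured.
def remove_kikuchi_background(I_vb, D_core, alpha=1.0, mode="divide"):
    """Voxel-by-voxel removal of the Kikuchi-type background.

    mode="subtract": I_corr = I_vb - alpha * D_core  (alpha is an empirical
                     intensity factor fitted to the data)
    mode="divide":   I_corr = I_vb / D_core          (leaves a constant offset)
    """
    D = D_core[np.newaxis, :, :]            # broadcast over the energy axis
    if mode == "subtract":
        return I_vb - alpha * D
    if mode == "divide":
        return I_vb / np.clip(D, np.finfo(float).eps, None)
    raise ValueError("mode must be 'subtract' or 'divide'")

# Example with random stand-in data (real data come from the momentum microscope):
I_vb = np.random.rand(60, 256, 256)
D_core = 0.5 + np.random.rand(256, 256)
I_corr = remove_kikuchi_background(I_vb, D_core, alpha=0.8, mode="subtract")
```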
Methods
Sample preparation. The Mo(110) crystal was cleaned by repeated cycles of heating in oxygen at 1500 K followed by desorption of the surface oxide at 2000 K; the base pressure during the experiment was 4 × 10^−10 mbar. Prior to the soft X-ray measurements the surface quality was verified by LEED, and at HAXPES energies the bulk electronic bands have even been observed without initial preparation. The TiTe2 sample was cleaved in UHV prior to transfer to the microscope chamber. The samples are inserted into a He-cooled (∼20 K) sample stage on a high-precision 6-axis hexapod manipulator. Besides the lens voltages and the data acquisition system, all mechanical adjustment parameters (hexapod, frame position, apertures of the microscope) are fully remote-controlled, since the HAXPES setup is located in a closed hutch.
Beamlines. The experiments were carried out at beamlines P22 and P04 of the storage ring PETRA III at DESY in Hamburg (Germany). Owing to the high energy (6.0 GeV) and large size (2.3 km circumference) of PETRA III, P22 provides hard X-rays with the highest brilliance worldwide in an energy range from 2.4 to 15 keV. Present conditions (in the commissioning phase) were 2 × 10^13 photons/s at 4-6 keV in a spot of about 10 × 10 μm^2 using a Si(111) double-crystal monochromator 25 . The bandwidth varies from 350 meV in the 3 keV range to 500 meV at >6 keV for the Si(111) crystal; the Si(311) crystal yields a five times smaller bandwidth. At 5.3 keV we measured 450 and 155 meV experimental resolution for the two crystals, respectively. Beamline P04 provides circularly polarized photons in the range from hν = 260 eV to ∼2 keV with a resolving power of up to 4 × 10^4 (<30 meV at hν = 1000 eV) in a spot of <25 μm diameter. In the soft X-ray experiments we used bandwidths around 50 meV.
Instrument. Data were taken using the momentum-microscope technique, recording data arrays at many photon energies at both beamlines, as described elsewhere 6 . For the measurements in the HAXPES range at P22 we used the same combination of full-field k-imaging with ToF detection as in previous experiments, but with new optics optimized for high initial kinetic energies and a large k-field of view. A novel type of objective lens can image large solid-angle intervals comprising many Brillouin zones in parallel. Two groups of zoom lenses shape and decelerate this large phase-space volume (high energy and large solid angle) and focus the k-image onto the detector, in this case a delay-line detector (DLD) with 80 mm active area. The ToF microscope with delay-line detector has an energy resolution of <20 meV, so the total resolution is essentially given by the photon bandwidth at both beamlines.
Data recording. 3D data arrays I(E_B, k_x, k_y) in the valence range are recorded with acquisition times of typically 20 min; the acquisition of core-level XPD patterns is faster. The two momentum coordinates (k_x, k_y) are observed by full-field k-imaging (via magnification of the reciprocal image in the back focal plane of the objective lens). By recording the time coordinate (referenced to the photon pulse) of each individual counting event in the DLD, many binding energies E_B are taken simultaneously in a single exposure. Energy intervals with typical widths of ∼6 eV are acquired (without any scanning). For larger intervals the k-resolution is diminished due to the chromatic aberration of the lens system.
The 40-bunch filling pattern of the storage ring corresponds to a pulse period of 192 ns, which perfectly matches the 5 MHz count-rate capability of the DLD. Photon pulse widths are about 50 ps rms. The high parallelization of data acquisition and the high brilliance of the beamlines compensate for the dramatic decrease of the photoemission cross-section with increasing photon energy. Count rates exceeding 10^6 counts per second in an interval of ∼15 eV have been obtained in the valence range at all photon energies used. For core-level recording, attenuation of the X-ray beam was necessary.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 7,572.6 | 2019-09-10T00:00:00.000 | [
"Physics"
] |
Semiotics of Symbolic Mode in Interpreting Mythology in English Poetry
This paper is primarily based on the assumption that we need theories of literature in order to interpret literary texts and explain literature as a unique form of communication. The traditional effort so far of treating literary study as an intuitive analysis has contributed to the harsh criticism of the study of literature as merely 'the reading and understanding of literature'. Literature teaching has put too much emphasis on the enjoyment and understanding of a literary piece; that is to say, merely by understanding the meaning of the language of a text and its cultural references, one is said to be in a position to respond critically to that text, thus there is no need for interpretation beyond that. Therefore, this paper is an attempt to interpret a literary text (mythology in English poetry) beyond its literal level by the use of the semiotics of symbolic mode approach, which allows intertextual and intratextual analysis.
subject on curriculum, which has led to the neglect of features which mark literature as a discourse and an area of study demanding different techniques of description and different pedagogical approaches.
There might be nothing wrong with such an effort except for some particular reasons. First, teachers in an English-speaking context may still endeavor to hold the usefulness principle as far as the literature program is designed carefully for children based on their needs, interests, and cognitive development. But we cannot expect too much that some literary works may still be attractive to children today who have enjoyed a more 'fascinating' presentation of experience via television or movies. Besides, we must be aware that we do not expect to treat literature in this way when we are facing adult students such as university students. It would be silly to think that university students, even in a non-English context, can learn something from Dickens' Great Expectations or Sheridan's School for Scandal. Second, as Lotman (1970) pointed out, literature is a product of minimally two overlapping systems, linguistics and literature, so that it can be considered a 'secondary modeling system'. Hence the system of literature is supralingual, wherein the recipient of the linguistic message must firstly know the linguistic code in order to interpret a text. That is to say that a literary reader must have the knowledge of the literary code besides the linguistic code used to convey the message in the text. At this point, it becomes obvious that literature cannot be used as a language communication model for language students without causing some disorders. We cannot surely use the following Shakespearean word-class conversion of the noun 'boy' to a verb, as cited by Widdowson (1975): "And I shall see some squeaking Cleopatra boy my greatness". Jakobson (1960) states the difference between the literary or poetic and the daily language as follows: "the poetic function projects the principle of equivalence from the axis of selection into the axis of combination". The consequence is clear-cut; an expression such as "alive he drive" can be accepted in literary language but not in the daily one. Third, although it is true that the majority of modern critics make a distinction between the enjoyment of literature reading and interpretation (see Newton, 1990), at the university level we must carefully separate the two and put more emphasis on interpretation. Literature teaching should provide students with the ability to recognize the quality of literary works (literariness) and try to analyze the patterns or the universal concepts of literature.
The literature study in the twentieth century has undergone a series of developments focusing on at least three aspects: (a) the autonomy of the text, such as held by Formalism and Structuralism; (b) the mimeticism of the text, such as held by Marxism; (c) the reader's role, held by the so-called receptional aesthetics. Those theories have lately developed into some versions such as the schema theory (Cook, 1994), which focuses on the three schemata of the readers: the text schemata, the world schemata, and the language schemata. Each of the theories certainly has some weaknesses, and quite possibly each one serves as a complement to the others. This article is designed to use the semiotic approach of symbolic mode, in which a text is not merely interpreted on the basis of its literal meaning but of its symbolical aspect (the message). Semiotics is chosen here for two reasons. First, ever since the publication of Eco's Theory of Semiotics (1976), the question of the truth has been neglected. Literature is obviously characterized by what Spet (in Fokkema and Kunne-Ibsch, 1994) called "the third type of truth" since, as a matter of fact, literature very frequently informs us of many "fantastic things" whose reference cannot be approached by logic, such as 'a speaking horse in Tolstoy's Kholstomer'. By rejecting the concept of 'referential fallacy', Eco (1976: 58-59) decides to include literature among the objects studied in semiotics. Second, semiotics opens the possibility of interpreting a text on the basis of intertextuality as well as intratextuality, since it covers a wide range of disciplines.
THE SYMBOLIC MODE OF MYTH IN ENGLISH POETRY
Morris wrote in his Foundations of the Theory of Signs (1938: 1) that "human civilization is dependent upon signs and systems of signs, and the human mind is inseparable from the functioning of signs-if indeed mentality is not to be identified with such functioning". This leads him to believe that the concept of sign may prove as fundamental to the sciences of man as the concept of atom has been for the physical sciences or the concept of cell for the biological sciences (1938: 42). This belief seems not to be too exaggerated if we notice what appears to be an agreement among the semioticians (defined more as those who label their works as semiotic, such as Eco, Sebeok, etc., rather than those who in fact practice semiotic studies but do not label them so) that the subject matter of semiotics covers the whole range of cultural phenomena, including the phenomena produced by animals (zoosemiotics) and flora (phytosemiotics). We can easily notice that in many walks of our daily life, be they political, economical, or socio-cultural, men still largely employ what in general we call 'sign'. The politicians speak of their programs and messages via signs, hence there we meet the emblems, coats of arms, and flags as the signs related to a political party and ideology. It may be curious enough that at the same time we can easily recognize that some of the 'signs' used by modern people are existentially the same as those used in the long past. Myth, for instance, has been used by modern people, surprisingly, as "communicating signs" just as the ancient people did. The difference only lies in the content, but the substance remains the same. Sorel, in the Reflections on Violence (1914), considers that the world labour strike is a myth, for it represents the motivation and support for the labours.
Despite the well-known acceptance of the standard definition of semiotics as the 'general theory of sign' (see Sebeok, 1986; Deely, 1982), the meaning of sign itself indicates the problems commonly faced by the semioticians. Peirce (1985) divides sign into three kinds: (a) icon: a sign that refers to the object that it denotes by virtue of characters of its own; (b) index: a sign which refers to the object that it denotes by virtue of being really affected by that object; (c) symbol: a sign which refers to its object by virtue of a law, usually an association of general ideas. Here it is obvious that Peirce has used the term 'sign' as the genus generallisimum of semiotics; symbol is said to be a subclass of sign, whereas in fact many people call symbols what others call signs, but fewer call signs what others call symbols (see Eco, 1986). In other words, concerning the couple symbol/sign, the first term is the marked one. This explains a lot of things.
First, in the Peircean sense, symbols are expressions that mean directly and univocally what they are designed to mean, such as the scientific symbols in chemistry, mathematics or physics. 'Symbols', in the Peircean sense, are then those conventional 'signs' whose meanings are not vague and are pre-established. Second, symbols are often associated with the indirect meaning or an additional intended meaning that sentences may have (Grice, 1957). The sentence "I saw your wife in a hotel with a young man" cannot be interpreted literally but must go beyond the prima facie, hence meaning "your wife was unfaithful to you". Third, Saussure defined symbols as "icons" in the Peircean dichotomy of sign, while Hjelmslev included diagrams and games in the symbolic system (1943). According to Eco (1976), Saussure and Hjelmslev spoke in fact of signs ruled by ratio difficilis, where the expression maps, according to pre-established projection rules, some features of the corresponding content. In this sense, we can call a geographical map a symbol, for once we alter, for example, the border between Indonesia and Malaysia, we can forecast what would happen in the real world. Fourth, many people associate symbols with metaphors, allegories, and other tropes. Wahab (1990) uses Cirlot's dictionary of symbols in order to help him reveal the meaning of 'universal metaphors'. Thus, a metaphor concerning darkness can be revealed by checking the meaning of the symbol 'darkness' in many cultures.
We usually can notice easily that in rhetorics and literature the flouting of the four conversational maxims (Grice, 1975) can be used to create certain effects on the addressee. Metaphors, irony, hyperboles and such tropes violate the maxim of quality since they do not tell the truth (literally). When Khairil Anwar said "Aku ini binatang jalang", he literally lied, for he could never be a wild animal. Yet, in facing such a blatant case of lying, the addressee normally faces no problem and can recognize that the poet probably intended to say something else. In fact, many metaphors can be disambiguated without vagueness. The metaphor concerning darkness is an example; we can know from the context that the meaning is "sadness" without looking into the traditional symbolism, which will even lead to a wrong path, for in Cirlot (1971: 75-76) darkness symbolizes the primigenial chaos, not gloom. On the other hand, a reader may not rely on any pre-established rules when facing such a text as the following lines of Milton's L'Allegro:

Hence loathed Melancholy
Of Cerberus and blackest Midnight born,
In Stygian cave forlorn
'Mongst horrid shapes, and shrieks, and sights unholy,
Find out some uncouth cell,
Where brooding Darkness spreads his jealous wings,

The text can be approached in two ways. First, at the surface level we can start with the figure Melancholy. Any reader who has no stock of encyclopedia concerning Melancholy, Cerberus, or the Stygian cave might still be able to grasp a sense that 'this Melancholy was born of Cerberus and blackest Midnight in a cave called Stygia where there were horrid shapes, shrieks, and sights'. But such a too literal interpretation may be made more sensible by a reader with enough encyclopedia of Melancholy, Cerberus, and the Stygian cave. Babb (in Carey and Fowler, 1968) points out that there are two opposed Renaissance attitudes toward melancholy. The first, originating in Galenic medicine, viewed it as a source of stupidity, fearfulness and illusions. The second, originating in Aristotle's Problemata XXX, stressed that all who have become eminent in philosophy, poetry, or the arts have been of melancholy temperament. The next encyclopedia that the writer must work upon is the Greek mythology of Cerberus and Styx (the Stygian cave). In Greek mythology, Cerberus was told of in the story of Hercules as a three-headed hell-hound whose throat bristled with serpents. He was the guardian of the abode of Pluto on the banks of the Stygian lake (Pluto's kingdom). Virgil once wrote that Aeneas heard shrieks which came from the souls of dead children as he passed the cave. At this stage, the reader will also be aware that the parentage of Melancholy as the child of Cerberus and the blackest midnight is merely Milton's invention. At the surface level, a reader may still be able to grasp the sense of the text.
However, the fact that we often find such a text alluding to mythology may convince us that there is something beyond the use. This brings us to the second kind of interpretation, which will be labeled the "symbolic mode" (Eco, 1986). Practically, the symbolic mode is characterized by the interpreter's pragmatic decision to interpret the text symbolically. The pragmatic decision produces at the semantic level a new sign function, by associating new content (as far as possible, undetermined and vague) with an expression already correlated to a coded content. This is especially applicable to a text such as mythology in English poetry, for the text, when the mode is not realized interpretively, remains endowed with sense at its literal or figurative level. Eco (1986) has shown that in the modern aesthetic experience the possible contents are suggested by the co-text and by the intertextual traditions. The interpreter does not intend to discover an external truth but makes the encyclopedia work at its best, unlike the mystical experience in which symbolic contents are suggested by a preceding tradition and the interpreter is convinced that they are not cultural units but referents, aspects of an extrasubjective and extracultural reality.
Thus we must firstly operate our encyclopedia (the co-text and the intertextual traditions) in a progressive manner during the symbolic interpretation of the mythology in English poetry. That is to say that the possible new contents are not pre-established (even according to rhetorical rules) but created during the progress of interpretation. In interpreting the myth of Cerberus in L'Allegro we can start with the co-text. Milton describes the dog as the parent of Melancholy together with the blackest midnight. If we relate this with the key words 'horrid shapes, shrieks, Darkness, uncouth cell, night raven', we are building an image of blackness and darkness, which are the attributes given to Melancholy. But this is not sufficient to understand the meaning of the myth of Cerberus in this poem. We must do an intertextual analysis. In the beginning of the twin poem Il Penseroso, Melancholy is described as black but in esteem, such as Prince Memnon's sister, Himera:

Hail divinest Melancholy,
Whose saintly visage is too bright
To hit the sense of human sight,
And therefore to our weaker view
O'erlaid with black staid wisdom's hue;
Black, but such as in esteem,
Prince Memnon's sister might beseem,
Or that starred Ethiop queen that strove
To set her beauty's praise above
The sea nymphs, and their powers offended.

Thus here Melancholy is described as black but beautiful, different from L'Allegro, where the blackness of Melancholy is horrible. Melancholy's blackness refers to that of Himera (Prince Memnon's sister) or the Ethiop queen (Cassiopeia), who was changed into a constellation (star) because she claimed to be more beautiful than the Nereids (sea nymphs). Then we have to look at Babb, who distinguishes two Renaissance attitudes to melancholy, which can be summarized as the damned melancholy (the source of stupidity, fearfulness and illusion) and the divine melancholy (poets are of melancholy temperament). The meaning of the twin poems clearly supports these opposed attitudes to melancholy. In L'Allegro, Milton rejects Melancholy (because of the terrible darkness) and prefers to follow Euphrosyne, the heart-easing Mirth, as the symbol of youth and jollity. If we follow Dorian (in Carey and Fowler, 1968), who suggested that the poems are the autobiographical record of Milton, that is, whether he should suppress either the lighter or the serious side of his nature, as man or as poet, we can conclude that in L'Allegro Milton believes that living in the lighter manner, as man, is better. Hence he concluded in the last line of this poem, "Mirth with thee, I mean to live", whereas in Il Penseroso, Milton preferred a life as a poet. He considered the vain deluding joys (Mirth) as the brood of folly without father bred. Therefore, he concluded in the last lines that "these pleasures Melancholy give, and I with thee will choose to live". It is now obvious, by contrasting the blackness of Melancholy in the two poems (representing the damned and the divine melancholy), that the myth of Cerberus symbolizes the evil genius that gives birth to poetic genius. This is in line with the Neoplatonic doctrine which saw the dog as the symbol of the evil genius (Cirlot, 1971: 42).
Since the symbolic mode interpretation uses intertextual and intratextual analysis, the target is then to find the symbolic meaning of myths in mythology so that finally we can construct the semiotic system of mythology in English poetry.
The procedures are started with the meaning of each myth in one individual work and then related to the meaning of the same myth in other works (even of different writers) before we finally determine the function of the myth. Let us compare the myth of Proserphine in the following three poems: Paradise Lost (Milton), Rhymes on the Road (Moore):

'Tis for the theft of Enna's flower from the earth
These urchins celebrate their dance of mirth
Round the green tree, like fays upon a hearth,
Those that are nearest linked in order bright
Cheek after cheek, like rosebuds in a wreath;
And those more distant showing from beneath
The others' wings their little eyes of light.
While, see! among the clouds, their eldest brother,
But just flown up, tells with a smile of bliss
This prank of Pluto to his charmed mother,
Who turns to greet the tidings with a kiss.

By using a method of reading similar to the myth reading done by Lévi-Strauss (in Culler, 1975: 40), where he compares different myths, we could determine the underlying structure and hence the meaning. In the case of mythology in English poetry, we may see that it is easier to proceed than in Lévi-Strauss' study, since there has been provided the cultural context which enables us to grasp the meaning which the poems convey, so that we can check our explanation of details by their relevance to the meanings. Just like in Lévi-Strauss' study, we may apply the so-called "spiral movement" in which one myth is used to elucidate another. The final result ought to be a coherent system in which each myth is studied and understood in its relation with the others. To explain an item or incident in a particular myth, we must not only consider its relation to other elements in a poem, but also try to determine how it is related to elements appearing in similar contexts in other poems. Thus we could finally gather that the myth of Proserphine in the three poems would be read as:

Of Enna, where Proserphine gathering flowers,
Herself a fairer flower, by gloomy Dis
Was gathered, which cost Ceres all that pain
To seek her through the world; nor that sweet grove
Of Daphne by Orontes, and the inspired
Castalian spring, might with this paradise | 4,436.6 | 2015-08-29T00:00:00.000 | [
"Linguistics"
] |
Container Introspection: Using External Management Containers to Monitor Containers in Cloud Computing
Cloud computing plays an important role in today's Internet environment; it meets the requirements of scalability, security and reliability by using virtualization technologies. Container technology is one of the two mainstream virtualization solutions. Its light weight and high deployment efficiency make container technology widely used in large-scale cloud computing. While container technology has created huge benefits for cloud service providers and tenants, it cannot meet the requirements of security monitoring and management from a tenant perspective. Currently, tenants can only run their security monitors in the target container, but this is not secure because an attacker is able to detect and compromise the security monitor. In this paper, a secure external monitoring approach is proposed to monitor target containers from another, management container. The management container is transparent to target containers, but it can obtain the execution information of target containers, providing a secure monitoring environment. Security monitors running inside management containers are safe for the cloud host, since the management containers are not privileged. We implement the transparent external management containers by performing one-way isolation of processes and files. For process one-way isolation, we leverage Linux namespace technology to let the management container become the parent of target containers. By mounting the file system of the target container into that of the management container, file-system one-way isolation is achieved. Compared with the existing host-based monitoring approach, our approach is more secure and suitable in the cloud environment.
providers (e.g., Amazon, Microsoft, Google) hosting the applications of lots of developers or companies. Cloud computing is based on virtualization technology, which can divide huge physical resources into many small virtual resources. There are two mainstream virtualization technologies, including virtual machine and container. Each virtual machine has its own operating system kernel, making it heavyweight and less efficient in deployment. Compared with virtual machines, containers are lightweight and more efficient in deployment, because they share the operating system kernel with the host. Therefore, it is more popular.
The widespread development of cloud computing and containers has also brought security risks. In a multi-tenant cloud computing environment, tenants need to monitor and manage their container instances. However, the current host-based or container-based monitoring methods cannot meet the requirements of security and flexibility in cloud computing. Firstly, container-based monitors running inside target containers can be detected or subverted by in-container attackers, since they are running in the same namespace. Although monitors can be executed in the host to hide themselves, this is not secure in the cloud since host applications are privileged for all containers. Secondly, redirecting execution information of target containers to security monitors of cloud tenants is not flexible, because it needs the close cooperation of the cloud host. It is not practicable for cloud service providers to cooperate with each cloud tenant.
In this paper, an external container monitoring architecture is proposed to solve these problems. In this architecture, monitoring tools of cloud tenants or CSPs are deployed in external management containers, which can intercept the execution information of target containers but are transparent to them. The external container provides a secure and flexible monitoring environment for security tools. Security tools running inside the management containers are transparent to and isolated from target containers. Since the management container is unprivileged, in-container security tools cannot affect host security, so cloud tenants can run security tools as they want. After providing a one-way transparent monitoring environment, the CSP does not need to cooperate closely with the different security tools of cloud tenants. Therefore, our architecture is more secure and flexible compared with existing host-based or container-based security monitoring approaches.
To achieve one-way transparent management containers, process and file one-way isolation approaches are proposed. For process isolation, we leverage Linux namespace technology to make the management container become the parent of target containers. Since the parent namespace has full privilege over its child namespaces, security tools running in the management container can intercept the execution information of target containers and are transparent to them. We do not leverage the container-in-container solution to achieve one-way isolation, because it needs the parent container to be privileged. For file isolation, we mount the file system of the target container into that of the management container. Therefore, security tools can access files of target containers transparently. After the implementation, we test the effectiveness and performance of our prototype. The experimental results show that our system can make security tools transparent to target containers with high performance.
In summary, the contributions of this paper are as follows.
• An external container monitoring approach is proposed to build management containers over target containers. Security tools running inside management containers are transparent to target containers, but can intercept the execution information of them. • A namespace-based process isolation approach is proposed by making the namespace of the management container to be the parent of target containers and ensuring the access security of the management container.
• For file isolation, a file-system-based file introspection method is used for the management containers to access the files of target containers outside.
The rest of this paper is organized as follows. Section 2 gives the related work. The system design is described in Section 3. The design details about process introspection and file system introspection are given in Section 4 and Section 5 respectively. Section 6 evaluates the effectiveness and performance of the prototype. Section 7 concludes this paper and discusses the future work.
Related Work
At present, container introspection technology is still a new research topic, and this concept is related with cloud security, virtual machine introspection and container security technologies [1,2].
Virtual Machine Introspection
The current research direction closest to container introspection technology is virtual machine introspection (VMI) [3], which monitors VMs from the hypervisor. The hypervisor has the highest privilege and can intercept the execution information of target VMs. Under the framework of VMI, the monitoring of the VMs generally includes three steps: execution information collection, semantic reconstruction, and behavior analysis.
There are two approaches to collect VM information: static analysis and dynamic interception. CFMT [4] obtains the contents of the VM disk from the outside and saves the checksums of all the original contents of the VM files. Then it compares the checksum of the existing contents with the original one during each poll to detect whether the file has been tampered with. VMWatcher [5] maps the file systems of a VM to the privileged virtual machine (DOM0), and uses anti-virus software to perform security analysis on these files to protect the security of the VM.
Compared with static approaches, dynamic approaches can obtain the execution information in real time. Ether [6] can capture the behavior of virtual machine system calls in real time. The system modifies the content of a specific register (the MSR register) of the target VM. When the VM executes a system call, it throws an exception, which triggers an event in the hypervisor. At this time, Ether can obtain specific information about the system call, such as the call number and parameters. Nitro [7] is similar to Ether, but Ether is designed for the Xen platform while Nitro works on the KVM platform.
The execution information collected in the VMM layer is binary low-level information, but security analysis needs high-level information. After collection, it needs to be reconstructed into high-level semantic information. Filesafe [8] reconstructs file system from disk image based on the layout of Windows FAT32 file system and then maps files and disk blocks. vMon [9] reconstructs the map between file and disk block for Linux VM. Volatility is an open-source memory analysis tool, which can reconstruct kernel objects from binary memory snapshot based on the profiles of layouts of different operating systems. Reference [10] can identify the kernel version by analyzing the VM kernel automatically.
Container Security
Container security [17] is a hot topic, so there are many works that analyze and protect container security. Reference [18] compares virtual machines and containers. Compared to virtual machine technology, containers are more lightweight and can reduce resource consumption, because containers share the operating system of the host. But this also raises security risks due to the huge attack surface of the operating system. In addition, the ecosystem of Docker containers also contains security challenges, and that paper also focuses on its security. SCONE [19] leverages Intel SGX technology to protect Docker containers from external malicious attacks and from untrusted cloud hosts. To defend against container privilege escalation attacks, Reference [20] proposes an escape defense method that checks the status of Linux namespaces, which can detect abnormal processes and prevent users from malicious escaping behaviors. Reference [21] focuses on strengthening Docker's access restrictions, and hopes to extend the dockerfile format so that the Docker image maintainer can provide SELinux security policies to enhance the security of the container. SELinux is widely used to enhance the security of certain service programs; the expansion and adjustment of the dockerfile will allow SELinux security policies to be specified for different images, improving the security of Docker. Reference [22] exploits Linux cgroups from containers and proves that cgroups technology is not enough to limit the resource access of containers in cloud computing.
Container Introspection Technology
To analyze the security of containers, Reference [23] uses the introspection tool Prometheus to capture the information including the Docker engine itself and the memory usage of the container and the host OS. This method analyzes the data difference between the Apache server running in the container during normal operation and when it is infected by malware, and concludes that introspection tools can be used as data collection and forensic analysis tools for the early warning system in the containerized system. Reference [24] proposes a malicious node identification method.
In summary, there has not been much research on the concept of container introspection technology. This paper is a preliminary exploration of this concept.
Design of Container Introspection
This section describes the design overview of our system. We first give the motivation and then describe the threat model and assumption. After that, two one-way isolation approaches for process and file system are proposed.
Motivation
With the development and wide application of container technology, more and more containers are deployed in cloud computing. Containers are facing increasing security risks since they are providing services on the Internet. Therefore, many cloud tenants and cloud service providers need to manage and monitor their containers. There are several security and flexibility requirements of container monitoring, which are as follows.
RQ1: Monitoring tools should be secure. Many containers execute processes with root privilege by default, so it is not secure for monitors to run inside them. Even when container processes are not privileged, they can still detect monitoring tools, which is not secure for the monitoring tools.
RQ2: Security tools should be isolated from the host. Processes running inside containers can be accessed from the host, but it is not secure to run security tools of cloud tenants in the host in cloud computing.
RQ3: Cloud service providers should not closely cooperate with security tools. Another monitoring solution in the context of virtual machines is that the host intercepts execution information of the target VM and then sends it to the security tools running inside another secure VM. In the context of containers, it is possible for CSP to transmit the execution information of target containers to a secure container. However, this solution needs the close cooperation of CSPs. CSPs need to provide the customized monitoring APIs or required information for different monitors. It is very complicated for CSPs to perform API authorization and access control, since there are many containers of different cloud tenants running on the host.
To meet these requirements, this paper proposes an external approach for container introspection, which builds a one-way isolated management container over target containers. The management container is a secure and flexible environment for monitoring tools.
Threat Model & Assumptions
Before describing the design of our system, we first discuss the threat model and some assumptions.
We first assume that containers cannot escape to the cloud host. Container privilege escalation is a serious attack in cloud computing, and security researchers keep fixing vulnerabilities in the host kernel. But currently there is no perfect host-based solution to defend against container privilege escalation. Therefore, we do not consider container escape attacks in this paper.
In addition, the CSPs are considered to be trusted, which is a common trust base in cloud computing. Most security monitors in cloud computing rely on the isolation provided by cloud hosts or hypervisors. The design of our system is also based on the security protection of cloud hosts.
Cloud tenants are responsible for the security of their security monitors. In this paper, we do not analyze the security of monitoring tools from cloud tenants and do not provide mechanisms for cloud tenants to intercept their security monitors.
System Overview
The core method of our system is to build a one-way isolation management container, which can intercept the execution information of target containers but is transparent to them. The architecture of our system is shown in Fig. 1.
The host provides the OS virtualization service for containers; it has the highest privilege and isolates different containers. There are two modules in the system that build the one-way isolation environment for management containers. The PID NS management module makes the PID namespace of the management container the parent of those of the target containers by leveraging the host namespace mechanism. After that, the management container can intercept the process information of the target containers without the cooperation of the host. The file system management module analyzes the structure of target containers and mounts their file systems into the management container. Security tools of cloud tenants run in the management container, so that they can access the execution information of target containers. Many target containers can be managed by only one management container.
Namespace-based Process Introspection
We leverage the Linux namespace mechanism to achieve one-way process isolation. The main steps include: 1) constructing a parent PID namespace of target containers; 2) letting the management container join the parent namespace dynamically; 3) hiding other processes of the parent PID namespace. At first, we make a brief introduction of Linux namespaces.
Linux PID Namespace
There are six different namespaces introduced by the Linux kernel after v2.6. These namespaces are used to isolate different kinds of resources for containers, including the PID namespace, UTS namespace, IPC namespace, MNT namespace, NET namespace and USER namespace. Among them, the PID namespace is used for process-domain isolation. The relationship between different PID namespaces is shown in Fig. 2. Only processes within the same PID namespace can see each other. The parent PID namespace has full privilege over its child namespaces. Processes of a child namespace are mapped into the parent namespace. For instance, Process 1 of Namespace 1 is mapped as Process 5 in Namespace 0, so all of the processes in Namespace 0 are able to see it. For a Docker container, a new namespace is automatically created when the container is created. All processes running inside one container belong to the same PID namespace, so they can see each other. Since different containers have different PID namespaces, processes of different containers cannot see each other.
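The pid-to-parent-namespace mapping described above can be inspected directly from user space on recent kernels; the following small sketch (our illustration, not part of the paper) reads the NSpid field of /proc/&lt;pid&gt;/status, which lists the PID a process has in each nested PID namespace, outermost first.

```python
# Minimal sketch: show how a PID maps across nested PID namespaces.
# Requires a Linux kernel that exposes the NSpid field (>= 4.1).
import sys

def nspid(pid: int):
    """Return the list of PIDs this process has, from the reader's namespace
    down to the process's own (innermost) namespace."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("NSpid:"):
                return [int(x) for x in line.split()[1:]]
    return []

if __name__ == "__main__":
    pid = int(sys.argv[1]) if len(sys.argv) > 1 else 1
    print(f"PID {pid} as seen in nested namespaces (outer -> inner): {nspid(pid)}")
```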
Inspired by the PID namespace, we found that if the PID namespace of the management container is the parent of that of the target container, the management container can visit the processes of the target container transparently.
Docker-in-docker Architecture
There are several docker-in-docker solutions (e.g., dind), which make Docker create a new Docker instance inside a container. A child namespace can also be the parent of other namespaces; based on this mechanism, running a Docker inside another Docker is possible. But most dockers cannot create child dockers, because there are several challenges. First, the parent docker must be privileged. However, privileged dockers are not safe and not accepted in cloud computing. Second, the file system of Docker (AUFS) should only consist of normal file systems, which means Docker cannot run on top of multiple AUFS file systems. These challenges are shortcomings of dind. So, dind must be created by Docker with the 'privileged' flag, and the file system of the child docker should be a volume of the parent docker. Even though docker-in-docker solutions can build multiple levels of namespaces and make external monitoring possible, the security risks of privileged containers are unacceptable in cloud computing. Therefore, our approach leverages the architecture of the docker-in-docker solution and tries to overcome its problems.
Joining into Parent Namespace
To address the problems of the docker-in-docker solution, our system is based on the docker-in-docker architecture but does not execute security monitors directly in the parent docker. The main approach is to let the target containers run inside a parent container, and then let the management container join the PID namespace of the parent docker. The steps are as follows.
1) Creating the parent container. When a cloud tenant creates her first container, the host first creates a parent container. This container can create child containers, so it is created with the 'privileged' flag. This step is transparent to the cloud tenants, and the tenants cannot operate the parent container. 2) Creating target containers. Target containers are created within the parent container, so the PID namespace of the parent container is the parent of those of the target containers. The parent container hosts the client that creates child containers for the cloud tenant. Only containers of the same cloud tenant can be created in one parent container. 3) Creating the management container. The management container is the execution environment of security monitors. To access the information of target containers, we let it join the PID namespace of the parent container when it is created. Since this container only shares the PID namespace of the parent container, it is not as privileged as the parent container and is therefore more secure. A command-level sketch of these three steps is given below.
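The following sketch shows one plausible way to reproduce the three steps with standard Docker flags. The container names, the images, and the use of the stock docker:dind image are our illustrative assumptions, not the exact tooling used by the authors.

```python
# Illustrative reproduction of the three-step setup using the Docker CLI.
# Names and images are placeholders; error handling, TLS setup for dind and
# waiting for the inner daemon to come up are omitted for brevity.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Privileged parent container (docker-in-docker), hidden from the tenant.
run(["docker", "run", "-d", "--privileged", "--name", "parent", "docker:dind"])

# 2) Target container created *inside* the parent, so its PID namespace is a
#    child of the parent container's PID namespace.
run(["docker", "exec", "parent",
     "docker", "run", "-d", "--name", "target", "ubuntu:14.04", "sleep", "infinity"])

# 3) Management container joins the parent's PID namespace (but is otherwise
#    unprivileged), so tools inside it can see the target's processes.
run(["docker", "run", "-d", "--pid=container:parent", "--name", "mgmt",
     "ubuntu:14.04", "sleep", "infinity"])
```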
Process Hiding
Since the management container shares the PID namespace of the parent container, the management container can obtain the process information of the client tools of the parent container, making the parent container visible. To overcome this problem, we leverage a kernel module to hide the client in the parent container.
The process hiding approach is inspired by kernel rootkits. Kernel rootkits are used to hide processes or files in operating systems. They usually hook kernel functions and inject malicious code into kernel system calls. Among them, adore-ng is a popular rootkit, which is used for process and file hiding. It injects malicious code into several system calls (e.g., getdents64). The injected code deletes the process that needs to be hidden from the result list.
Our process hiding module is also based on the idea of a kernel rootkit, hooking some key kernel functions and injecting code into several system calls (e.g., getdents). The difference is that the module only hides the process information of the client from the management container. Since the management container and the parent container share the same PID namespace, we hide the client process in the parent container from other processes within this namespace. Therefore, the most important step is to identify the processes of different namespaces.
To address this challenge, we first explore the relationship between processes and namespaces. In Linux, all task structures are linked by a doubly linked list, whose first task is labeled with the 'init_task' symbol. As shown in Fig. 3, every task structure has an nsproxy object to record its different namespaces. The pid_ns fields of different tasks in the same container point to only one PID namespace object, so we can identify all the processes of a container in the kernel by identifying the pid_ns pointer of its task structure. When a parent container is created, the corresponding PID namespace is identified. Then, the PID of the client in the parent container is obtained. After that, the namespace and PID are transferred to the kernel module via a customized system call. The kernel module reads the information and performs process hiding. If a process of the parent container wants to obtain the process list by using the ps command, the result of the system call is checked and the information of the client is cleared.
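The in-kernel identification works on the pid_ns pointer of each task structure; as a rough user-space analogue (our illustration, not the paper's kernel module), processes can be grouped by comparing the PID-namespace identity exposed under /proc/&lt;pid&gt;/ns/pid, and the client's PID can then be filtered out of any listing:

```python
# User-space analogue of grouping processes by PID namespace and filtering
# out the client process. The real system does this inside the kernel.
import os
from collections import defaultdict

def pid_namespace(pid):
    """Return an identifier of the PID namespace a process lives in."""
    try:
        return os.readlink(f"/proc/{pid}/ns/pid")   # e.g. 'pid:[4026531836]'
    except OSError:
        return None                                  # process vanished / no access

def processes_by_namespace():
    groups = defaultdict(list)
    for entry in os.listdir("/proc"):
        if entry.isdigit():
            ns = pid_namespace(int(entry))
            if ns is not None:
                groups[ns].append(int(entry))
    return groups

if __name__ == "__main__":
    client_pid = 1234                      # placeholder: PID of the dind client
    for ns, pids in processes_by_namespace().items():
        visible = [p for p in pids if p != client_pid]
        print(ns, visible)
```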
File System Introspection
The file system introspection is based on the AUFS file system, which is widely used by Docker containers. So, we first give a brief introduction to AUFS, and then describe how we leverage AUFS to perform external file system introspection. AUFS (short for advanced multi-layered unification file system) is an implementation of the union file system, which can merge file directories stored in different locations and mount them to the same directory. As shown in Fig. 4, the file system is copy-on-write. There are several layers in the file system of a docker. The lower layers are image layers, which are read-only; therefore, these layers can be used by many dockers at the same time. The upper layer is writable and records the modifications to the underlying images. When files are added, deleted, or modified in the container, a runtime copy is generated in the upper layer. All the layers are mounted to the same directory identified by a number, which is not the ID of the container. So, we need to find the correspondence between this number and the container ID. Based on the features of AUFS, we achieve one-way file system isolation by mounting the mount point of the target container in the host to a subdirectory of the management container and analyzing the mapping between container ID and mount ID automatically, as shown in Fig. 4. In addition, when a target container is going to be removed, its file system should be unmounted from the management container first, to prevent the failure of container removal.
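A minimal sketch of this mapping and bind-mount step is shown below. The /var/lib/docker paths correspond to a typical AUFS graph-driver layout and are our assumption, as is the destination path inside the management container's root file system.

```python
# Sketch: map container ID -> AUFS mount ID, then bind-mount the target's
# merged file system into a subdirectory of the management container.
# Paths assume a typical Docker AUFS layout; run as root on the host.
import subprocess
from pathlib import Path

DOCKER_ROOT = Path("/var/lib/docker")

def aufs_mount_point(container_id: str) -> Path:
    """Resolve the AUFS mount directory of a container from its ID."""
    mount_id = (DOCKER_ROOT / "image/aufs/layerdb/mounts" /
                container_id / "mount-id").read_text().strip()
    return DOCKER_ROOT / "aufs/mnt" / mount_id

def expose_to_management(target_id: str, mgmt_rootfs: Path) -> Path:
    """Bind-mount the target container's file system into the management
    container's root file system (e.g. under /targets/<id>)."""
    src = aufs_mount_point(target_id)
    dst = mgmt_rootfs / "targets" / target_id[:12]
    dst.mkdir(parents=True, exist_ok=True)
    subprocess.run(["mount", "--bind", str(src), str(dst)], check=True)
    return dst

def release(target_id: str, mgmt_rootfs: Path):
    """Unmount before the target container is removed."""
    dst = mgmt_rootfs / "targets" / target_id[:12]
    subprocess.run(["umount", str(dst)], check=True)
```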
Evaluation
This section evaluates the effectiveness and performance of our system. The testbed is a PC equipped with 3.2 GHz Intel i5 CPU and 8 GB RAM. The host operating system is Ubuntu 16.04. Dind is selected as the image of the parent container. The image of the target container is Ubuntu 14.04.
Effectiveness
Our system can obtain the process information and files of the target container from the management container, so the first step is to deploy the monitoring tools in the management container after deploying the target container. We test the effectiveness of process introspection and file system introspection respectively.
To test the effectiveness of process introspection, we first introduce a workload in the target container and then run the 'ps' command in the management container to obtain the process list of the target container. While the target container is running the ps command, the management container is able to get the corresponding process information. Then, we run the ping command in the management container and the ps command in the target container. The results show that the processes of the management container are transparent to the target container.
We obtain the file list of the target container in the management container to test the effectiveness of file introspection. As shown in Fig. 5, we first add a new file in the target container, then get the file list in the management container in Fig. 6. From the result, we can know that the management container is able to access the file system of target containers.
Performance
Since our system introduces a parent container to user containers, the parent container will consume file space. Therefore, we measure the size of the dind image, which is 533 MB. According to the results, the space cost is acceptable for cloud computing.
There is a kernel module to hide the client processes in parent containers, which could introduce overhead to the 'ps' command. So, we test the execution time 1000 times each, before and after the injection of the kernel module, and then compare the performance. The average execution time of 'ps' before module injection is about 23.05 ms, and it is 23.2 ms after injection. The experimental results show that the overhead is acceptable since the 'ps' operation is not time-sensitive.
Conclusion & Future Work
This paper proposes an external container introspection approach to monitor target containers from a management container, which is built by achieving one-way transparent process and file isolation. Based on Linux namespaces, the namespace of the management container is the parent of those of the target containers, so security tools running inside the management container can obtain the execution information of the target containers. For file system introspection, we analyze and mount the file system of the target container into the management container. After the implementation, we test the prototype. The experimental results show that our system is effective with acceptable overhead. In this paper, we do not analyze cross-host containers of one cloud tenant. Containers belonging to one cloud tenant may be deployed on different physical cloud hosts, but our system can only analyze containers on one host. To analyze cross-host containers of one cloud tenant, we need to correlate and analyze different cloud hosts. This is left for future work.
Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study. | 5,863.2 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Hydrogen Assisted Cracking in Pearlitic Steel Rods: The Role of Residual Stresses Generated by Fatigue Precracking
Stress corrosion cracking (SCC) of metals is an issue of major concern in engineering since this phenomenon causes many catastrophic failures of structural components in aggressive environments. SCC is even more harmful under cathodic conditions promoting the phenomenon known as hydrogen assisted cracking (HAC), hydrogen assisted fracture (HAF) or hydrogen embrittlement (HE). A common way to assess the susceptibility of a given material to HAC, HAF or HE is to subject a cracked rod to a constant extension rate tension (CERT) test until it fractures in this harsh environment. This paper analyzes the influence of a residual stress field generated by fatigue precracking on the sample’s posterior susceptibility to HAC. To achieve this goal, numerical simulations were carried out of hydrogen diffusion assisted by the stress field. Firstly, a mechanical simulation of the fatigue precracking was developed for revealing the residual stress field after diverse cyclic loading scenarios and posterior stress field evolution during CERT loading. Afterwards, a simulation of hydrogen diffusion assisted by stress was carried out considering the residual stresses after fatigue and the superposed rising stresses caused by CERT loading. Results reveal the key role of the residual stress field after fatigue precracking in the HAC phenomena in cracked steel rods as well as the beneficial effect of compressive residual stress.
Introduction
Stress corrosion cracking (SCC) is one of the most frequent causes of failure of structural components in aggressive environments [1]. This phenomenon proceeds in different ways depending on the electrochemical conditions of the environment. Thus, for anodic conditions (above the hydrogen discharge line of H 2 O in a Pourbaix diagram for the system steel-H 2 O), a phenomenon known as localized anodic dissolution (LAD) takes place causing the loss of material by generating ferrous oxides [2]. Conversely, in the case of cathodic conditions (beneath the hydrogen discharge line of H 2 O in a Pourbaix diagram) the damage appears in a quite different way as the so-called hydrogen assisted cracking (HAC) [3]. In this case, the environment promotes the creation of atomic hydrogen that is adsorbed at the metal surface and later absorbed into the material. Later on, hydrogen is transported by diffusion [4][5][6][7] towards prospective damage areas. It then accumulates to a critical concentration associated with hydrogen assisted damage at the microstructural level. This process is influenced by the stress state of the material.
A common way of evaluating the susceptibility of a given material to SCC processes is by testing cracked rods under a constant extension rate tensile (CERT) test in a harsh environment [3]. A crack is generated by an initial sharp notch by applying cyclic loading (fatigue precracking) with diverse loading sequences [8] exceeding the material fatigue crack propagation threshold in terms of the stress intensity factor (SIF) amplitude (∆K th ). As a result of the precracking process, residual stress field appears in the vicinity of the crack tip (CT), depending on the maximum cyclic SIF level (K max ) applied during the final stage of fatigue precracking. This residual stress state influences hydrogen diffusion and hence, the hydrogen accumulation and microstructural damage causing HAC. An adequate way of obtaining the hydrogen distribution near CT is by numerical simulation of stress-assisted hydrogen diffusion in the material [9,10].
In this paper, numerical simulations were performed to establish the role of the residual stresses generated after fatigue precracking, considering different K max -levels. The residual stress states and the stress re-distributions during CERT test in the CT vicinity were obtained by the numerical simulations of the fatigue precracking and posterior CERT loading of a cracked pearlitic steel rod by using a commercial finite element (FE) analysis code. Later, from the previously computed stress states, the hydrogen accumulation in the CT vicinity was obtained by simulating the stress-assisted diffusion of hydrogen using an in-house FE code. The results provide key information with regard to the HAC process for a better understanding of the role of fatigue precracking in HAC.
Numerical Modelling
The modelling consists of the simulation of hydrogen diffusion in CERT experiments that have been described elsewhere [11]. The analysis was divided into two sequential numerical simulations. The first one (mechanical) obtains the stress state, including the residual stress state after fatigue precracking and its evolution during the CERT test. The second one (stress-assisted hydrogen diffusion) determines the evolution with time of the hydrogen accumulation in the material, assisted by the previously computed mechanical stress states.
The material used in this study is a pearlitic steel rod of diameter d = 11.03 mm; its chemical composition is as follows: C 0.789%, Mn 0.681%, Si 0.210%, P 0.010%, S 0.008%, Al 0.003%, Cr 0.218%, V 0.061%. The principal mechanical characteristics of the steel (Young's modulus E, yield strength σY, ultimate tensile strength (UTS) σR), obtained from the corresponding experimental master curve shown in Figure 1, are summarized in Table 1. The material fracture toughness (KIC) was taken from a previous study [12] dealing with this pearlitic steel.
A common experimental technique [11,13-15] of fatigue precracking consists of applying multi-step cyclic loading with a progressively decreasing maximum load level (Kmax). In this procedure, the maximum load of the cycling decreases as the number of the precracking step increases. Once the crack is initiated by the first fatigue loading step, with the largest load level, crack growth continues under a lower fatigue loading until the effects of the plastic zone generated by the earlier step are considered to be removed. In this way, precracked samples with the same crack length a = 0.3d were obtained in the SCC experiments [11], which are simulated in this work. They had different plastic zones and residual stresses depending on the Kmax used in the last fatigue precracking step.
Figure 1. Experimental curve (equivalent stress σeq versus equivalent strain εeq) for the studied hot-rolled pearlitic steel.
In this study, the simulations of the described fatigue precracking of the CERT test specimens were limited to the final stage of the real precracking process, because this is the stage that is responsible for the residual stress fields in the specimens before the SCC tests. Different schemes of precracking by cyclic loading were considered. All of them have a sinusoidal shape varying from a null load (Kmin = 0) to a maximum load (Kmax), with a load ratio RK = Kmin/Kmax = 0 (Figure 2). The difference between the fatigue (cyclic) loading schemes is the value of Kmax, which was determined by the values used in the simulated experiments [11]. The four Kmax levels, expressed as fractions of the material's fracture toughness (KIC), are as follows: (i) the heavy fatigue precracking, Loading regime I (Kmax = 0.80KIC); two moderate fatigue precracking regimes, (ii) Loading II (Kmax = 0.60KIC) and (iii) Loading III (Kmax = 0.40KIC); and finally, (iv) the soft fatigue precracking, Loading IV (Kmax = 0.25KIC). After fatigue precracking, a monotonically rising loading up to final fracture is applied, aiming to reproduce the conditions of the modelled CERT tests performed for estimating the SCC susceptibility of the steel [11].
During the CERT tests, the specimens were loaded under controlled displacement rate conditions, and the employed displacement rate was adjusted to render a constant overall deformation rate as low as 0.01 mm·min−1, which is in the range of slow loading that provides sufficient time for hydrogen diffusion and, consequently, for HE to occur. The experimentally obtained [11] fracture loads and fracture times for the different tests are presented in Table 2. It can be deduced that, in terms of the applied load F, the modelled CERT test conditions correspond to a loading rate dF/dt = 0.002 kN/s.
Table 2. Experimental fracture loads and fracture times of CERT tests of specimens precracked by fatigue at various levels of Kmax [11].
To carry out the first stage of the simulations (mechanical analysis) adequately and obtain reliable high-resolution data about the stress-strain fields in the close proximity of the CT, two nonlinearities must be considered: the physical one, to account for the elastoplastic material behavior, and the geometrical one, to account for the large deformations and strains that arise near the CT [16]. With regard to the first, the constitutive model of an elastoplastic material with the isotropic hardening rule and the von Mises yield criterion was adopted in this research, where the employed master curve (equivalent stress versus equivalent strain), given in the plot of Figure 1, replicated the experimental stress-strain curve of the steel.
This modelling approach does not take into account the potential effects of hydrogen on the stress-strain constitutive relation of the steel, or on the stress-strain fields in hydrogenated metal, for the following reasons. Firstly, the effects of hydrogen on the stress-strain behavior of steels are uncertain: numerous reported data [17,18] have shown slight strengthening as well as softening influences of hydrogen in steels (increase and decrease of the yield strength), but the alterations of the stress-strain curve shape by hydrogen were rather minor (within a few percent of the yield strength). Accordingly, this factor is considered insignificant for the CT stress-strain fields. Secondly, the possible generation of strains, and subsequent stresses, in the metal because of crystal lattice expansion by interstitial hydrogen is regarded as insignificant in this study. Indeed, according to available evaluations [19], the introduction of hydrogen into the BCC iron lattice can generate stresses of about 2 MPa per 10 mol/m³ (1.2 ppm by mass) of hydrogen concentration in steel. So, the hydrogenation attainable near the CT in steels during SCC, which usually does not exceed ~5 ppm, cannot cause a substantial effect on the mechanical fields, whose stress levels exceed magnitudes of ~10³ MPa.
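As a rough order-of-magnitude check of this argument (a back-of-the-envelope estimate based only on the figures quoted above, not a computation reported in the study), 5 ppm corresponds to roughly (5/1.2) × 10 ≈ 42 mol/m³ of hydrogen, so the lattice-expansion stress is about

σ_H ≈ 2 MPa × (42/10) ≈ 8 MPa,

more than two orders of magnitude below the ~10³ MPa stresses acting near the CT.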
Numerical simulations of the loading schemes, including the indicated fatigue precracking regimes and the subsequent monotonic loading up to final fracture (Figure 2), were carried out as follows. Regarding the precracking loading, the Laird and Smith mechanism of fatigue crack growth by blunting-resharpening [16,20], which is considered suitable for analysing cracking phenomena in ductile materials, relies on the near-tip plastic deformations under cyclic loading with no bond breaking. Accordingly, a high-resolution large-deformation elastoplastic simulation of the crack is the right way to visualize the fatigue crack growth [21,22]. In the numerical simulation of fatigue precracking, the modelling of cyclic crack growth only considers the elastoplastic crack advance according to the mechanism of Laird and Smith, but not growth by bond breaking.
The geometry of the analysis case is given in Figure 3a: a cracked rod with a semi-elliptical crack placed in the central cross section of the rod, normal to its axis. This geometry has no symmetry of revolution and hence the axisymmetric formulation is not suitable. However, the 3D geometry can be simplified to a two-dimensional plane strain case accounting for the constraint effect that arises near the crack front in the central longitudinal section of the rod, as shown in Figure 3b. In this way, computing time can be saved in both simulations. In this scheme, a parallel-flank slot of width 2b0 with a semi-circular tip was taken as a model of the undeformed crack (Figure 3c), as has been repeatedly substantiated and widely used elsewhere [23-28]. During axial loading, the points placed on the rod axis are fixed in the radial direction, and the points at the central cross section (where the crack is situated) are fixed in the axial direction due to the symmetry. Taking these conditions into account, the geometry can be simplified to a quarter, as shown in Figure 3b. This geometry was non-uniformly meshed with 4-node quadrilateral elements. The sizes of the elements were progressively increased as the distance from the CT grew. Thus, a fine mesh was generated near the CT, whereas a coarse mesh was applied away from the CT (remote zone).
The previously described loads (Figure 2) were applied as imposed displacements at the top edge of the two-dimensional sample used in the computations (Figure 3b) [20]. The large number of FE nodes used in this model made it unsuitable for the later simulations of hydrogen diffusion assisted by stress due to excessive computing time. To reduce the computational expense, the so-called boundary layer approach [16,20], which has been commonly used under monotonic loading [25] and also employed for cyclic loading [29], was invoked, as schematised in Figure 3c. Here, K-controlled boundary conditions, given by the known singular linear-elastic solution for the near-tip displacements ux and uy, were imposed over a remote boundary of a circular domain of a radius sufficiently large in comparison with the CT plastic zone size rp.
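For reference, the singular mode I solution referred to here is the standard near-tip displacement field; in polar coordinates (r, θ) centred at the CT it reads

u_x = \frac{K_I}{2\mu}\sqrt{\frac{r}{2\pi}}\,\cos\frac{\theta}{2}\left(\kappa - 1 + 2\sin^2\frac{\theta}{2}\right), \qquad
u_y = \frac{K_I}{2\mu}\sqrt{\frac{r}{2\pi}}\,\sin\frac{\theta}{2}\left(\kappa + 1 - 2\cos^2\frac{\theta}{2}\right),

where µ is the shear modulus and κ = 3 − 4ν under plane strain conditions.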
This approach is based on the well-known similitude principle of fracture mechanics [16,22,25-27]: the stress-strain state near the CT depends only on the SIF. Therefore, the analysis can be limited to the K-dominated zone placed next to the CT, with boundary conditions according to the fundamental elastic solution given by Muskhelishvili [30]. In order to achieve a substantial saving of computer time, a simplified case was considered in terms of the K-dominated zone by using only the near-CT mesh (the shadowed zone schematised in Figure 3 and displayed in detail in Figure 4).
One of the conditions that must be fulfilled to ensure the validity of this approach is that the size of the K-dominated zone must be large enough that the perturbation of the K-dominated field by the process zone, where plastic strains are generated near the CT, can be considered negligible, and, in addition, small enough in relation to the remote zone (far from the effects of the crack) to make negligible the perturbation of the K-dominance by this non-singular far field. In addition, to ensure the K-dominance of the near-tip elastoplastic fields in the employed crack model [16], the crack tip opening displacement (CTOD) δt must exceed the value given in [16,30]. With regard to the dimensions of the K-dominated zone, an estimation of the plastic zone size (rp) is necessary; it is estimated as follows [31-33]:
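An Irwin-type estimate of this kind reads (the prefactor β, equal to 2 for plane stress and 6 for plane strain, depends on the convention adopted in [31-33] and is only indicative here)

r_p \approx \frac{1}{\beta\pi}\left[\frac{K_{max}}{(\sigma_Y + \sigma_R)/2}\right]^2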
where Kmax is the upper bound of the attainable SIF values and (σY + σR)/2 is employed instead of the habitual σY to account approximately for the strain hardening. The fatigue precracking loading (Figure 2) was applied by means of imposed displacements at the nodes (ux,i, uy,i) placed on the periphery of the optimal K-dominated zone, as shown in Figure 4. For the simulation of the monotonic loading, a fixed K rate (dK/dt) of 0.078 MPa·m^1/2/s was applied up to final fracture, which corresponds to the experimental loading rate according to the data in Table 2. The displacements imposed at each node were calculated on the basis of the adopted boundary layer approach, using the known SIF solution for edge-cracked plates [16]. Once the stress state is revealed from the results of the first simulation of the CT mechanics, the second numerical simulation, of the stress-assisted diffusion of hydrogen, can be performed to obtain the hydrogen concentrations for diverse times of exposure to the hydrogenating source. As previously discussed, the stage that governs the HAC, HAF or HE phenomena in pearlitic steels is hydrogen transport in the material by diffusion [34-36] dependent on the stress state [36-40].
In the numerical simulations, the model of hydrogen diffusion assisted by stress was implemented [37,38]. Briefly, in this model the hydrogen flux (J) can be expressed by the modified Fick law including the influence of stresses [37,38], with D being the hydrogen diffusion coefficient, C the hydrogen concentration, vH the partial molar volume of hydrogen in metal, R the universal gas constant and T the absolute temperature.
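With these symbols, and writing σ for the hydrostatic stress whose gradient acts as the second driving force for diffusion, the flux takes the standard form used in this family of models (reproduced here for completeness):

\mathbf{J} = -D\,\nabla C + \frac{D\,v_H}{R\,T}\,C\,\nabla\sigma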
Applying the mass conservation law [41], the evolution of hydrogen concentration with time can be represented by the following Equation [37,38].
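Substituting the above flux into the mass balance ∂C/∂t = −∇·J gives the transport equation referred to here:

\frac{\partial C}{\partial t} = \nabla\cdot\left(D\,\nabla C\right) - \nabla\cdot\left(\frac{D\,v_H}{R\,T}\,C\,\nabla\sigma\right)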
This differential equation cannot be solved analytically and, consequently, a numerical solution is necessary. When the metal-environment system is in equilibrium, i.e., J = 0, the equilibrium hydrogen distribution in stressed metal is given by the expression [28,37,38] for the equilibrium concentration (Ceq), where C0 is the hydrogen equilibrium concentration in the metal free of stress. This formula represents the equilibrium distribution of hydrogen in the metal-environment system, which gives the maximum attainable hydrogen concentration in stressed metal. This equilibrium hydrogen distribution coincides with the steady-state solution of the diffusion equation at time t → ∞ for a specimen immersed in the hydrogenating environment and subjected to sustained load. Assuming quick mass exchange between the metal and the environment on the hydrogen entry surface, the equilibrium concentration value given by this expression may be used as a boundary condition for hydrogen diffusion in the metal [28,37,38].
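With these definitions, setting J = 0 in the flux expression and integrating yields the familiar exponential dependence on the hydrostatic stress, which is the expression referred to above:

C_{eq} = C_0\,\exp\!\left(\frac{v_H\,\sigma}{R\,T}\right)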
An FE modelling code developed ad hoc was used for the numerical simulation of hydrogen diffusion assisted by stress. The weak form of the weighted residuals approach is employed, considering the Galerkin method for approximating the stress distribution via the FE nodal functions Ni [42]. As a result, an FE equation system is obtained in which Seq is the surface exposed to hydrogenation, ϑ is a constant representing the mass exchange rate at this surface, and n is the number of FE nodes. This equation system can be expressed in a reduced matrix form, and the algorithm stated by Zienkiewicz et al. [42] was used for solving the resulting system of linear ordinary differential equations with respect to the nodal concentration values [C], where the time increment Δt = tq − tq−1 and the constant τ are chosen in such a way that the stability of the algorithm is ensured. Within the present work, the algorithm is unconditionally stable for values τ ∈ [0.5, 1]. The described FE approach was employed for the two-dimensional case shown in Figure 4 using four-node quadrilateral elements. The same mesh was used both in the mechanical simulation and in the analysis of hydrogen diffusion assisted by stress. For all the cases of this study, the required convergence of the results was achieved in the simulations with the selected meshes.
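To make the time-marching concrete, the sketch below integrates a one-dimensional version of the stress-assisted diffusion equation with the same theta scheme (unconditionally stable for τ ∈ [0.5, 1]). It uses a finite-difference grid instead of the finite-element discretization of the in-house code, and every numerical value in it (D, vH, the σ(x) profile, C0, the grid and the time step) is an illustrative assumption rather than data from the study.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

R, T = 8.314, 293.0        # gas constant [J/(mol*K)], absolute temperature [K]
D = 6.6e-11                # apparent H diffusivity in steel [m^2/s] (assumed typical value)
VH = 2.0e-6                # partial molar volume of H in iron [m^3/mol] (assumed)
C0 = 1.0                   # equilibrium concentration in unstressed metal (relative units)
M = VH / (R * T)           # drift factor [1/Pa]

L, n = 1.0e-3, 1001        # analysed depth from the crack tip [m], number of grid nodes
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

# Assumed, deliberately gentle hydrostatic-stress profile [Pa]: a compressive dip
# below the surface and a decaying tensile tail.  Real near-tip fields are much
# steeper, which is why a fine FE mesh around the CT is used in practice.
sigma = 3.0e8 * np.exp(-x / 3.0e-4) - 2.5e8 * np.exp(-((x - 1.0e-4) / 2.0e-4) ** 2)
dsig = np.gradient(sigma, dx)

# Semi-discrete system dC/dt = A C (row 0 is replaced by the Dirichlet condition)
A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1] += D / dx**2 + D * M * dsig[i - 1] / (2.0 * dx)
    A[i, i] -= 2.0 * D / dx**2
    A[i, i + 1] += D / dx**2 - D * M * dsig[i + 1] / (2.0 * dx)
A[-1, -2] += 2.0 * D / dx**2   # crude zero-flux closure at the far end
A[-1, -1] -= 2.0 * D / dx**2

tau, dt, nsteps = 1.0, 10.0, 1700   # theta parameter, time step [s], number of steps
lhs = np.eye(n) - tau * dt * A
rhs = np.eye(n) + (1.0 - tau) * dt * A
lhs[0, :] = 0.0
lhs[0, 0] = 1.0                      # Dirichlet row: C(0, t) = Ceq at the entry surface
lu, piv = lu_factor(lhs)             # factorize once, reuse at every time step

C = np.zeros(n)
C_entry = C0 * np.exp(M * sigma[0])  # boundary value from the equilibrium expression
C[0] = C_entry
for _ in range(nsteps):
    b = rhs @ C
    b[0] = C_entry
    C = lu_solve((lu, piv), b)

Ceq = C0 * np.exp(M * sigma)         # equilibrium profile for comparison
idx = n // 5                         # a point 0.2 mm below the crack tip
print(f"C/C0 at 0.2 mm: {C[idx]:.3f}  (equilibrium value {Ceq[idx]:.3f})")
```

At long times the computed profile approaches the equilibrium distribution Ceq(x) = C0·exp(vH·σ(x)/(RT)), which serves as a simple self-check of the implementation.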
Residual Stress Distributions in Fatigue Precracked Rods
According to the previously described model of hydrogen diffusion assisted by stress [37,38], special attention must be paid to the hydrostatic stress since the gradient of this variable is one of the driving forces of hydrogen diffusion. For this reason, the distributions of hydrostatic stress for each case of study were analysed at different times during CERT loading. Figure 5 shows the hydrostatic stress fields in the CT vicinity obtained at the end of the fatigue precracking loading (residual stress state) for each case of study, which is the initial stress state at the beginning of the CERT tests.
According to these results, such a field is localized in the vicinity of the CT, reaching progressively smaller absolute values of the stress σ as the maximum fatigue precracking level Kmax decreases. Taking this into account, the analysis is focused on the distribution of this variable in the central cross section of the rod, which contains the crack, at three different instants during the CERT loading: (i) the initial residual stress state, a consequence of the cyclic precracking before the CERT test; (ii) an intermediate time during the CERT test; and finally (iii) the instant t = tR corresponding to the final fracture of the sample under HAC conditions at K = KR, known from the simulated experiments [11] (Table 2).
In the case of the residual stresses generated after precracking, Figure 6 shows the distributions of the dimensionless hydrostatic stress (σ/σY) against the dimensionless distance to the CT, x/bt, bt being the deformed CT width at the end of precracking, bt(Kmax) = b0 + δt(Kmax), which is different for each Kmax level through the corresponding CTOD.
Two effects produced by the applied precracking are revealed here: (i) the maximum value of the compressive residual stress (−σ) is higher as the precracking level is increased, and (ii) the position of the extreme value of the hydrostatic stress σ is shifted further from the CT into the metal depth. In addition, it is worth emphasizing that only slight differences in the boundary values of the hydrostatic stress are observed just at the CT for the different fatigue loading levels; these boundary values are notably higher than the material yield strength, indicating the important role of plasticity in the fatigue phenomena. With regard to the different distributions of the residual hydrostatic stress associated with the distinct fatigue precracking levels, the main differences are found at a certain distance from the CT, where the extreme of the stress is located. This fact makes the gradients of hydrostatic stress (a driving force for hydrogen diffusion according to the employed model) similar for all cases of study. This is consistent with previous studies that found a beneficial effect of these compressive residual stresses in environmental damage phenomena such as HAC [43,44].
Furthermore, the evolution of the hydrostatic stress distribution during the constant rate monotonic loading after precracking is shown in Figure 7 for three different instants of time during the CERT test: the initial one, associated with the residual stress field; the intermediate time (tR/2); and the instant of final fracture by HAC (tR), where tR is given in Table 2 [11]. According to these results, the fracture time increases with the level of fatigue precracking load applied to the sample. During the CERT tests, the applied loading generates a tensile stress state which progressively annihilates the compressive stresses generated by fatigue precracking. It is evident that this process takes more time for those precracking loads that cause higher compressive stresses; hence, the effects of the previous compressive residual stresses disappear at higher loads during the CERT test. This effect is clearly seen in Figure 7c, where the residual stress distributions generated by low precracking loads have no appreciable effect on the stress state under the posterior rising load, whereas for the heavy precracking regimes the effects still remain in the stress distributions under a rising load during the HAC test.
At the critical instant tR, which corresponds to the final fracture by HAC according to the experiments (see Table 2), the effect of the compressive residual stress is completely removed by the CERT load for all the considered precracking Kmax levels, see Figure 7d. The critical distributions of stress shown in Figure 7d are obtained for distinct times of exposure to the aggressive environment (the fracture times tR presented in Table 2), depending on the applied fatigue precracking load. Taking into account that the stress distributions at the fracture instants are quite similar for all the precracking loads, it can be concluded that this distribution is associated with a critical stress field leading to final fracture by the HAC mechanism. However, it is necessary to point out that different levels of fatigue precracking, and hence different initial compressive states, determine different values of the fracture time for each case of study.
Hydrogen Distributions in Fatigue Precracked Rods
From the stress states that appear in the simulated samples after the precracking process and their subsequent loading during CERT tests in cathodic conditions, the hydrogen accumulation was obtained by means of numerical simulation of the hydrogen diffusion assisted by stress. Figure 8 shows the distributions of the relative hydrogen concentration (C/C0) against the depth from the CT in the transverse section of the rods precracked by fatigue, obtained at two instants of the CERT test in a cathodic environment: (i) the intermediate instant (tR/2), and (ii) the final fracture instant (tR). According to the results shown in Figure 8, the samples undergoing more intense precracking loadings (higher Kmax levels), and consequently those that have higher compressive residual stresses in the near-tip area, show a lower concentration of hydrogen in the CT vicinity as a consequence of stress-driven hydrogen diffusion. In this way, the beneficial effect of compressive stresses is revealed, since they produce a delay in hydrogen accumulation near the CT and, consequently, a delay in the microstructural hydrogen-induced damage, thereby leading to longer times to fracture under cathodic environments promoting HAC. However, once the earlier-created residual stress state is overcome and, consequently, the effect of compressive stresses is completely removed, the hydrogen distributions obtained for each precracking loading are similar (Figure 8b). From this result, the critical distribution of hydrogen concentration linked to the time of fracture caused by HAC is revealed.
Finally, to reveal the effect of the compressive stresses generated by fatigue precracking, the hydrogen distributions near the CT were obtained for all the cases of study at a fixed time of exposure to the hydrogenating source. Figure 9 shows the distributions of hydrogen concentration for the four analysed cases at the time corresponding to the fracture instant associated with the lowest fatigue precracking regime of Kmax = 0.25KIC, i.e., at tR = 17,000 s (Table 2). The observed barrier effect exerted by the compressive residual stress field increases with the fatigue precracking level (Kmax), as is clearly shown by the hydrogen concentration distributions for the two higher precracking levels of 0.80KIC and 0.60KIC. In the other two cases, the compressive residual stress effect is cancelled by the CERT loading for this time of exposure and, consequently, its effect on the SCC process by HAC vanishes. As soon as the effects of the residual stresses are cancelled, i.e., sooner when the precracking level is lower, the microstructural damage caused by hydrogen begins.
Conclusions
The compressive residual stress distribution generated in a pearlitic steel rod by the fatigue precracking procedure performed before testing the hydrogen assisted cracking (HAC) susceptibility of the steel plays a key role in hydrogen accumulation and, hence, in microstructural hydrogen damage. Such compressive residual stresses are produced by strain compatibility near the CT.
According to the obtained results, as the maximum fatigue precracking level (Kmax) is increased, the compressive residual stress becomes higher. In addition, the extreme value of this residual stress distribution is located further away from the CT, so that the barrier effect against hydrogen diffusion extends over a wider zone.
The beneficial effects of the compressive residual stresses generated by fatigue precracking are cancelled when a stress state equivalent to the one applied by fatigue precracking is reached during the constant extension rate tension (CERT) test. Thus, the higher the Kmax, the longer the time needed for cancelling the barrier effect at a given loading rate.
The distributions of hydrogen concentration at the experimentally obtained fracture times (when the compressive residual stresses are fully cancelled for all the cases of study) are the same for all Kmax levels considered in the analysis. This distribution is linked to a critical hydrogen accumulation causing HAC fracture in the rod.
"Engineering",
"Materials Science"
] |
Studies of Fractal Microstructure in Nanocarbon Polymer Composites
The in situ study of fractal microstructure in nanocarbon polymers is a topical task for their application and for the improvement of their functional properties. This article presents a visualization of the bulk structural features of the composites using pulsed acoustic microscopy and synchrotron X-ray microtomography, and describes the details of fractal structure formation using carbon particles of different sizes and shapes: exfoliated graphite, carbon platelets and nanotubes. Individual structural elements of the composite, i.e., conglomerations of the particles in air capsules as well as their distribution in the composite volume, were observed at the micro- and nanoscale. We have considered the influence of particle architecture on the fractal formation and elastic properties of the composite. Acoustic and X-ray imaging results were compared to validate the carbon agglomeration.
Introduction
Polymer nanocomposites have become an object of intense interest since the late 1990s, when various types of low-dimensional carbon nanoforms were synthesized and applied in materials science [1][2][3][4]. The idea of nanocomposites is based on the embedding of different types of low-dimensional nanoparticles into the polymeric matrix volume. Carbon (graphite) 1D and 2D nanoforms (single-wall (SWCNTs) and multi-wall (MWCNTs) carbon nanotubes, graphite nanoplatelets (GNPs) or nanoflakes (GNFs)), as well as silicate (clay) nanoplatelets and platelets formed by other types of layered crystals, can be used as nanofillers.
High values of the aspect ratio l/d = 10²÷10³, where l is the lateral size and d is the nanoparticle thickness, make it possible to obtain a dense packing of nanoparticles. Even for minimal values of the nanofiller content wf ∼ (0.01÷2)%, the mean interparticle distance s can be comparable with the lateral size l of 1D and 2D nanoparticles: s ≤ l. This dense packing determines the most attractive feature of nanocomposites, namely the ability to improve their properties with small amounts of nanomaterial additives. This enhancement results from the excellent physical properties of the nanoparticles to be employed: the electrical and heat conductivity of carbon nanoforms and their outstanding elastic and strength properties [5,6]. The nanofiller is a dispersed phase in the structure of a composite material; there are various ways to incorporate its properties into the overall properties of nanocomposites. Substantial electric and heat conductivity of the composite is provided by the formation of a continuous cluster of conductive carbon nanoparticles. In this case, the polymer matrix acts as a mechanical substrate to keep a stable configuration of nanoparticles. A different allocation of functions between the components arises when it comes to the elastic properties of nanocomposites. Elastic properties have been the subject of a great number of papers and reviews [7][8][9][10][11]. Both constituents, the polymer binder and the nanocarbon reinforcement, participate in forming the elastic response of nanocomposites. The main part of the applied load is carried by the nanocarbon reinforcing elements; the key role here is played by the 1D and 2D reinforcing elements (or their parts) oriented along the direction of the applied load. The function of the polymer binder is to transfer the load from the region where it is applied to the reinforcing nanoparticles, and to redistribute the load between them. This is achieved through shear stresses at the interface between the matrix and the reinforcement, which is known as the shear lag transfer mechanism [12][13][14][15]. When the nanoparticles do not possess a preferred orientation, the elastic response does not depend on the load direction.
The nanocomposite elastic modulus Ec can be assessed by employing the mixture rule for elastic moduli in the frame of the Voigt model for uniform strain [9-18]. In this model, the Ec value is estimated as the weighted sum of the elastic moduli Em and Ef of the matrix and the nanocarbon reinforcement, respectively, where wf is the reinforcement content and α is the fraction of reinforcement oriented in the applied load direction [16]. For small reinforcement content wf << 1, the polymer nanocomposite modulus Ec equals the polymer matrix modulus Em plus the nanocarbon reinforcement contribution δEc = α·wf·Ef. This addition is the product of a high value of the reinforcement modulus Ef (Ef/Em ∼ 10³) and a small quantity, the oriented reinforcement content α·wf (wf ∼ 10⁻²÷10⁻³ and α < 1). As a result, the reinforcement addition δEc is comparable with the elastic modulus Em of the polymer matrix. The elastic properties of carbon nanocomposites have been extensively studied experimentally, with the results summarized in several publications, including [2,19-25]. The experiments show a difference in the elastic modulus between the pure polymer and the polymer with carbon nano-dispersions, which gives an enhancement in elastic properties within the range established by these estimates. However, some studies show minimal changes in elasticity, with variations in the elastic modulus within the confidence interval of experimental error. This lack of enhancement may be due to failure of the shear lag transfer. Load transfer requires tight physical contact between the reinforcement and the matrix; violations of this contact prevent it. Thus, both the content of the nanofiller and the state of the interface determine the elastic modulus of polymer nanocomposite materials.
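In Voigt form, and with the symbols defined above, the estimate can be written as (a sketch consistent with the stated definitions; the exact displayed form of the original equations may differ slightly)

E_c \approx (1 - \alpha\,w_f)\,E_m + \alpha\,w_f\,E_f, \qquad \delta E_c = \alpha\,w_f\,E_f,

so that, with Ef/Em ∼ 10³ and wf ∼ 10⁻²÷10⁻³, the addition δEc is of the same order as the matrix modulus Em.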
An important characteristic of composites is the spatial distribution of their dispersed phase [26][27][28]. The optimal version is a uniform dispersion of nanoparticles, but experimental studies suggest an inhomogeneous distribution of nanocarbon reinforcement. Data from small-angle X-ray (SAXS) and neutron scattering [29,30] in epoxy-carbon composites as well as direct observations of the microstructure on sample cuts using scanning electron microscopy (SEM) [26][27][28] show that the carbon reinforcement is distributed throughout the composite volume in the form of fractal nanoparticle aggregates.
Agglomerates involve many nanotubes, platelets or flakes; the agglomerate size L is significantly larger than the lateral size of the individual nanoparticles: L >> l. According to the SAXS data and SEM images, the value of L is in the range of 10-50 µm. Such agglomerates essentially disturb the homogeneity of the elastic properties. The local elastic modulus E(r) can be estimated from the mixture rule above using the local value of the reinforcement content wf(r). The separation of the nanocarbon reinforcement into agglomerates and its non-uniform distribution throughout the polymer matrix is not essential for large-scale deformations, such as macroscopic loads or low-frequency ultrasonic testing. However, it becomes critical for high-frequency ultrasound due to its scattering at small-scale elastic inhomogeneities. The scattering efficiency depends on the ratio between the characteristic size of the elastic inhomogeneity L and the probe ultrasound wavelength λ, as well as on the difference between the elastic properties of the perturbed area and the matrix medium.
Ultrasound is an efficient tool for non-destructive observation of the volume structure of composite materials and measurement of their integral and local elastic properties, revealing structural failures within their volume. Different ranges of ultrasonic frequencies provide various levels of spatial resolution and are employed to solve different problems in assessing the structure and mechanical properties of composites. Scanning impulse acoustic microscopy (SIAM), based on applying ultrashort probe pulses of high-frequency (50-200 MHz) focused ultrasound, provides bulk visualization with spatial resolution of 20-100 µm. These values are too large to resolve individual carbon nanoparticles (lateral sizes l ~1-15 µm) but are enough to reveal regions of increased stiffness resulting from nanoparticle aggregation. One of the most substantial advantages of ultrasonic visualization techniques is their non-destructive nature. In the case of impulse acoustic microscopy, its application makes it possible to see the internal microstructure of a specimen without cutting it.
Papers [31][32][33] describe the use of the SIAM technique for the in situ monitoring of microstructure in nanocomposites containing different types of nanocarbon fillers. Ultrasonic imaging gives unexpected results: acoustic images at different depths inside the nanocomposite specimens include many bright small spots and dots against a dark background. The positions of the spots change when the imaging layer varies its depth in the specimen volume, but the general character of the images remains unchanged. This means that the bulk microstructure of carbon-epoxy specimens includes many efficient microscopic scatterers. It is unlikely that such scatterers are the agglomerates themselves: the magnitudes of the elasticity variations in the regions of carbon nanoparticle agglomeration, as follows from the mixture-rule estimate above, are not sufficient to provide such effective scattering. The assumption is that the scatterers are air bubbles encapsulating fractal agglomerates within themselves. Such bubbles are formed from air captured by the fractal agglomerates during the nanocomposite formation and firmly held by the agglomerates thereafter. Most of the filler nanoparticles are distributed in these voids; they do not contact the polymer matrix and do not participate in the elastic response of the composite material, as has been discussed above. This conclusion is corroborated by the results of local elastic measurements performed with the ultrasonic probe beam in different regions of the nanocomposite specimen volume. Local elastic measurements give the same values of sonic velocities at different points of the nanocomposite; these values do not differ from the values in the polymer binder [33].
The formation of such a structure with numerous micro-voids can be explained by the low wettability of the carbon filler particles by the polymer binder. It has been shown that the formation of 2D and 3D fractal structures substantially increases the non-wettability of the primary graphite used for preparing such structures [34]. It can be assumed that assembling nanoparticles into fractal agglomerates and then forming and retaining air cavities around these agglomerates is energetically preferable over direct contact of carbon nanoparticles with the polymer matrix. Apparently, a natural way to eliminate the formation of air bubbles is to prepare carbon nanocomposite samples in a vacuum. However, modifying carbon nanocomposite production using vacuum technologies preserves the overall picture of their internal microstructure with multiple microscopic scatterers [33].
This publication presents experimental data obtained with 3D non-destructive imaging with a resolution sufficient to observe the fine structure of the nanocarbon agglomerates. This is carried out in order to confirm the hypothesis that carbon nanoparticles tend to assemble into loose agglomerates enriched with air, which prevents the improvement of the mechanical properties of polymer nanocomposites. The aim of this work is to establish their actual geometry, sizes and interior details. Two high-resolution bulk imaging techniques have been applied. One of them is an enhanced version of the same impulse acoustic microscopy with an essentially increased resolution of 10-20 µm, obtained by augmenting the operation frequency up to 200 MHz instead of the 50-100 MHz used in the previous experiments [32]. The alternative technique is X-ray microtomography (X-ray µ-CT) with submicron resolution [35]. The proposed visualizing techniques employ different contrast mechanisms to present the object volume structure but, unlike conventional methods of light, electron or probe microscopy, they make it possible to restore the real three-dimensional distribution of carbon nanofillers even in the case of the formation of fractal agglomerates encapsulated in air micro-cavities.
Materials
The epoxy-carbon nanocomposites consisted of epoxy resin containing a small concentration of carbon particles of diverse origins. Epicote 828 epoxy resin (Westlake Chemical Corporation, Houston, TX, USA) with the curing agent called A1 (a modified TEPA) was employed as the matrix material. Various kinds of small graphite particles were used as fillers. Exfoliated graphite (EG) has a size of 50-200 µm. Graphite nanoplatelets (GNPs) have a thickness of 7-10 nm and lateral dimensions of up to 1-10 µm. The third filler type was multi-walled carbon nanotubes (CNTs); the nanotubes are 20-40 nm in diameter and have a length of 5-10 µm. Sample names and filler concentrations are given in Table 1. The composite samples were fabricated by solidification of a graphite particle suspension in epoxy resin with the curing agent. The fabrication procedure was aimed at achieving the highest homogeneity of the specimens [33]. The procedure included (1) vacuum degassing of the liquid epoxy resin for 48 h under a few millibars; (2) preparation of the graphite nanofiller suspension in isopropyl alcohol and its sonication in an ultrasonic bath for 1.5 h; (3) by-hand mixing of the liquid resin and the graphite particle suspension in isopropyl alcohol; (4) entirely evaporating the alcohol at 150 °C and sonicating for 1.5 h to degas the new mixture; (5) adding curing agent A1 and by-hand mixing it with the graphite-epoxy suspension for 7 min; (6) curing the suspension for 24 h in air at normal conditions and for 4 h in an oven at 80 °C. Samples were cut from the prepared plates of composites. Sample dimensions were selected based on the parameters of both imaging methods; the size was approximately 0.5 × 0.5 × 5 mm, with two plane-parallel surfaces.
Methods
Two advanced visualizing techniques have been employed for high-resolution study of the internal microstructure of nanocarbon-epoxy composites.
Impulse acoustic microscopy [36,37] is based on receiving reflected or scattered echo pulses generated by the interaction of ultrashort probe pulses of focused high-frequency ultrasound with the volume structure within the focal area of the probe ultrasonic beam penetrating into a specimen. The volume microstructure is recovered by mechanically scanning the probe ultrasonic beam and time-resolving the recorded signals at each point of the scanning area. The resolution of the technique is defined by the usual Rayleigh expression, as is conventional for all wave-based imaging methods.
X-ray computed tomography (CT) is based on measuring the attenuation of the X-ray probe beam as it propagates through a specimen, followed by mathematical processing to recover the 3D distribution of the X-ray density of the material [35]. The resolution of the method is defined by the pixel size of the X-ray detectors used in the tomograph. Structural studies of the carbon nanocomposites in our experiment were performed with synchrotron radiation, which provides submicron-resolution imaging with phase-contrast enhancement. In [38], the high efficiency of the synchrotron radiation microtomography method was demonstrated for reinforced composites based on carbon fibers and epoxy resin.
The scanning impulse acoustic microscope SIAM [39], designed and produced by the Emanuel Institute of Biochemical Physics RAS, was used for monitoring the volume microstructure of the nanocomposites. Here, we applied an acoustic lens with a working frequency of 200 MHz and a half-aperture of 11°, which gives a lateral resolution of 24 µm. A short probing pulse of 15 ns duration generated by the electronic block provides a depth resolution of 20 µm inside the composites. The acoustic lens is fixed in a precision mechanical scanning system (XYZ axes) with a scanning step of 5 µm (the reproducibility of the lens position is 0.5 µm). During the scanning of the specimen with the lens, the ultrasound beam passes through the water immersion, is reflected from the sample surface and from elements of the internal structure below it, and is received by the same acoustic lens. The received signals are digitized by a 12-bit ADC at a sampling frequency of 1 GHz (2 × 500 MHz) and averaged up to 8 times at each point of observation. The received echo signal includes time-resolved echoes coming from elements or boundaries located at different depths inside the sample volume. The signals are saved together with the scanning (X-Y) coordinates. The stored database is presented as a 3D tomography mode or as layer-by-layer images of horizontal cross sections (C-scans), while an image of a vertical section (B-scan) depicts the distribution of elements in the volume of the material.
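The quoted lateral and depth resolutions can be cross-checked from the Rayleigh criterion and the pulse length; the sketch below assumes sound speeds of about 1500 m/s in the water immersion and roughly 2700 m/s in the cured epoxy, values that are not stated in the text.

```python
import math

# Assumed sound speeds (not given in the text): water immersion and cured epoxy.
C_WATER = 1500.0   # m/s, probe beam focused through water
C_EPOXY = 2700.0   # m/s, typical for cured epoxy resin

def lateral_resolution(freq_hz: float, half_aperture_deg: float, c: float = C_WATER) -> float:
    """Rayleigh lateral resolution d = 0.61 * wavelength / NA, in metres."""
    wavelength = c / freq_hz
    na = math.sin(math.radians(half_aperture_deg))
    return 0.61 * wavelength / na

def axial_resolution(pulse_s: float, c: float = C_EPOXY) -> float:
    """Depth resolution of a pulse-echo system: half the spatial pulse length."""
    return c * pulse_s / 2.0

print(f"lateral ~ {lateral_resolution(200e6, 11.0) * 1e6:.0f} um")  # ~24 um for 200 MHz, 11 deg
print(f"axial   ~ {axial_resolution(15e-9) * 1e6:.0f} um")          # ~20 um for a 15 ns pulse
```

Both estimates reproduce the 24 µm lateral and 20 µm depth resolution figures quoted above, which is consistent with the stated lens and pulse parameters.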
To investigate the internal structure of the nanocomposite, we utilized a synchrotron radiation beamline providing a high-quality monochromatic X-ray beam, which is essential for phase-contrast imaging. Our experiments were conducted at the P14 beamline (DESY PETRA III, Hamburg, Germany) [40,41]. The monochromatic X-ray photon flux reached over 10¹³ photons per second within a working region of 0.6 × 1.2 mm², with a transverse coherence of approximately 500 µm vertically and 20 µm horizontally. This beamline configuration had previously proven successful for diverse advanced X-ray studies, including X-ray diffraction analysis of microcrystals and large (>100 µm) macromolecular crystals [42,43] and µ-CT of low-density materials [44]. Samples were affixed to a needle holder and exposed to an uncorrected X-ray beam. For tomographic data collection, the holder was rotated with high accuracy (<0.001°). X-ray projection images were recorded using an optical system comprising an 8 µm thick LSO:Tb scintillator, a 20× microscope and a 2048 × 2048 sCMOS camera (pco.edge 4.2, Excelitas PCO GmbH, Kelheim, Germany). Factoring in the microscope's magnification, the effective pixel size at the object was estimated to be around 0.325 µm across a field of view of approximately 600 µm. A typical tomographic scan consisted of 30 flat-field images and 3600 projections at intervals of 0.1°. Flat-field correction was implemented by dividing each projection image by the most similar flat field, chosen based on the SSIM criterion [45]. To accommodate potential axis shifts at varying object-camera distances, Fourier-space correlation with sub-pixel interpolation was employed to correct the X-ray scan sequence images [41]. The phase-contrast effect during X-ray projection acquisition was achieved by repetitively scanning the same sample at different distances (135-145 cm) from the camera. Correspondingly, the sets of recorded X-ray projection images underwent processing using a multi-distance non-iterative holographic reconstruction method [46,47]. Following this, tomographic reconstruction was performed using the TOMOPY package [48,49], employing the Gridrec algorithm and Shepp-Logan filtering. For visualization of the reconstructed data in 3D views, we utilized the following image processing software: Fiji-ImageJ 1.53t (open-source) and CTvox v. 3.3.0r1403 (Bruker MicroCT, Kontich, Belgium).
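The reconstruction chain described above can be sketched with the TOMOPY API; the file names, the dark-field handling, and the rotation-center search below are illustrative assumptions rather than the authors' actual script, and the multi-distance holographic phase retrieval they used lies outside TOMOPY and is only indicated by a comment.

```python
import numpy as np
import tomopy

# proj: (n_angles, rows, cols) projections; flat: reference (flat-field) images.
# In the experiment, flat-field selection used the SSIM criterion and phase retrieval
# used a multi-distance holographic method before this step (not reproduced here).
proj = np.load("projections.npy")   # hypothetical file names
flat = np.load("flats.npy")
dark = np.zeros_like(flat)          # assumed: no dark-field images are mentioned in the text

theta = tomopy.angles(proj.shape[0], 0, 360)   # 3600 projections at 0.1 deg steps
proj = tomopy.normalize(proj, flat, dark)      # flat-field correction
proj = tomopy.minus_log(proj)                  # Beer-Lambert linearization

center = tomopy.find_center_vo(proj)           # rotation-axis estimate (illustrative choice)
recon = tomopy.recon(proj, theta, center=center,
                     algorithm="gridrec", filter_name="shepp")  # Gridrec + Shepp-Logan
recon = tomopy.circ_mask(recon, axis=0, ratio=0.95)
np.save("recon_volume.npy", recon)             # volume then rendered in Fiji-ImageJ / CTvox
```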
Results
The results of the visualization of the microstructure in the volume of the nanocomposites are presented as ultrasound and X-ray images (Figures 1-3). All the obtained images, both acoustic and X-ray, show a characteristic feature of the development of a fractal structure, namely the agglomeration of particles with air capture. This is evidenced by the high contrast of the images, which is determined by a large difference in the acoustic properties (impedances) of the epoxy matrix and air in the case of acoustic sensing, and by a significant difference between the absorption of radiation in the matrix material, the reinforcing particles and air for X-rays. In addition, the observed inhomogeneities in these images have a regular spherical shape and smooth boundaries.
Figure 1 shows the results of the visualization of the microstructure of a nanocomposite with carbon nanotubes (epoxy-CNT). In the image of the sample surface, the distribution of brightness corresponds to the microrelief caused by the caverns with CNT agglomerates inside (Figure 1a). This relief is a result of the specimen fabrication and the presence of pores in the nanocomposite material. The contrast of the pores was observed due to their sharp edges and the scattering of ultrasonic waves at their boundaries. We observe the same pores in the volume as the bright inclusions in the C-scan (Figure 1c) and short bright lines in the B-scan (Figure 1b). These results agree with previous results presented in [31][32][33]. The depth distribution of filler conglomerates is presented using a color scale (Figure 1d). The structure of the composite looks much more interesting in the X-ray image. Figure 1e shows the reconstructed 3D microstructure of the material. One can see multiple points or clusters of minimal X-ray absorption (minimum X-ray density) and their rather compact distribution in the volume. The ideal spherical shape of such inclusions is clearly visible in individual X-ray sections (Figure 1f).
The sizes of the visible clusters measured in the X-ray images vary in the range of 20-30 µm. In a single slice, clusters of different sizes are imaged depending on their position with regard to the slice plane. The average distances between the pores depend on the concentration of reinforcing particles. Here, in a slice of thickness 0.35 µm for the concentration of 0.25 wt.% of CNTs, the average density of the carbon clusters encapsulated with air is 120 items/mm². In Figure 2, a set of successive slices located at a distance of 10 µm from each other demonstrates the pore configuration dynamics as one moves through the depth of the specimen.
In the X-ray images, we revealed fairly large pores, and CNT agglomerates are visible inside the largest of them. Figure 3 shows an image of such an agglomerate in an air capsule at gradually increasing magnification. It is clearly seen that there is a heterogeneous structure inside the cluster. Dark areas show higher X-ray density and correspond to the carbon nanoparticles; light areas are the air interlayers. The clusters are visible against the background of the polymer matrix.
Figure 4 shows the GNP composite microstructure and the distribution of graphite particles in the volume of the nanocomposite. In general, the observed microstructure looks the same as that of the CNT composite; that is, it is an agglomeration of the carbon particles encapsulated with air into pores. The pores have different sizes in the range from 10 up to 50 µm. Their distribution is dense; the average density of the pores and carbon clusters is increased up to 200 items/mm². The heterogeneity of the pore distribution is revealed by the acoustic images (Figure 4a-d). In Figure 4c, the integral number of pores is visible in a layer of 44 µm thickness with a central depth of 40 µm. The colored image (Figure 4d) gives the distribution of pores at a certain depth; it is easy to see that the pore sizes vary at the same depth. Figure 4e,f present a 3D reconstruction of the GNP composite specimen volume and its microstructure in an individual slice of 0.35 µm. The tomographic representation makes it possible to see that the brightness of the pores varies; some of the largest pores have a GNP agglomerate structure inside, similar to that presented for the CNT composite in Figure 3. Individual GNP particles were not identified in the tomographic images.
Investigation of the EG composite samples demonstrates another microstructure formation (Figure 5). A distinctive feature of this microstructure is the presence of large inclusions. The inclusions are extended (≈150 µm) and have a complex, rounded, non-spherical shape with smooth borders, as can be seen in the tomographic images (Figure 5e,f). The core of these inclusions is EG particles, and their boundaries are formed depending on the degree of wettability of the particles with the polymer matrix (epoxy resin). The shape of the observed inclusions corresponds to the EG particle shape, and there is an enhanced contrast of the particle borders caused by the presence of air around the particles.

Acoustic images (Figure 5b-d) show only the top parts of the curved surfaces of the inclusions, those whose normal lies inside the aperture of the receiving lens. Due to this specificity of acoustic image formation for curved surfaces, we can only detect small bright spots (50-100 µm).

Cracks in the polymer matrix were observed for these types of samples; the largest one was visible on the acoustic image of the surface and expanded from the surface deep into the sample (Figure 5a). The X-ray slice demonstrates the distribution of cracks in the sample body (Figure 5f).
Discussion
The use of two effective high-resolution imaging methods makes it possible to obtain high-contrast, detailed 3D images of the volumetric microstructure of carbon-epoxy nanocomposites. In scanning pulsed acoustic microscopy, images are formed by echo pulses reflected from interfaces or backscattered by small inclusions; they represent the distribution of such interfaces or inclusions over the volume of the sample. The contrast of the images is determined by the difference in acoustic properties at the boundaries, the maximum value of which is reached at an interface with air or a vacuum. As opposed to the acoustic microscopy method, the X-ray tomographic images are obtained in phase-contrast-enhanced transmission mode. In this case, the distribution of X-ray absorption over the sample bulk is displayed; the maximum X-ray contrast occurs between areas with finite X-ray density (the normal material) and areas with zero absorption (air or a vacuum). In our experiments with both methods, we observed high contrast, so it is natural to assume the presence of air in the areas of agglomeration of the reinforcing particles. Encapsulation of the graphite particles (agglomerates) excludes the participation of the filler in the elastic response of the material to an external mechanical load.
A fairly large number of publications are devoted to the study of the elastic properties of nanocomposites [19][20][21][22][23][24][25]. The study of the dependence of the composite modulus on the concentration of the reinforcing additive is one of the main tasks of these works. In accordance with Formula (1), an increase in the content of reinforcing particles should increase the elastic modulus. In the cited references, a completely different character of this dependence was observed [19][20][21][22]. Some of them report a decrease in the modulus in a polymer with a higher concentration of carbon filler. In our previous work [31][32][33], which was performed with nanocomposite samples manufactured using the same technique as in this article, the change in elastic moduli at different filler concentrations was within the confidence interval. These data are consistent with the conclusions of this work that the reinforcing elements can be only minimally embedded in the matrix material.
The results obtained in this work indicate that the wettability of the reinforcing particles is a key factor for improving the elastic properties of the nanocomposite. Complete wettability is possible when all reinforcing components are involved in the formation of the elastic response; in the absence of wettability, when the reinforcing elements are encapsulated with air, even a decrease in the elastic modulus is possible. In the intermediate case, when there is partial embedding of the reinforcing particles into the polymer matrix, some enhancement in the elastic moduli may be observed, depending on the concentration of particles successfully merged into the polymer.
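Formula (1) itself is not reproduced in this excerpt; purely to illustrate the expected trend, the sketch below uses the simplest rule-of-mixtures estimate, in which the composite modulus grows with the filler volume fraction, and converts the weight fractions used here to volume fractions with assumed densities. All numerical constants are assumptions, not values from the paper.

```python
# Illustrative only: a simple rule-of-mixtures estimate (not the paper's Formula (1)).
E_MATRIX = 3.0e9       # Pa, typical cured epoxy modulus (assumed)
E_FILLER = 1.0e12      # Pa, idealized in-plane graphite/CNT modulus (assumed)
RHO_MATRIX = 1.2e3     # kg/m^3 (assumed)
RHO_FILLER = 2.2e3     # kg/m^3 (assumed)

def volume_fraction(wt_frac: float) -> float:
    """Convert filler weight fraction to volume fraction."""
    return (wt_frac / RHO_FILLER) / (wt_frac / RHO_FILLER + (1 - wt_frac) / RHO_MATRIX)

def rule_of_mixtures(wt_frac: float) -> float:
    """Upper-bound composite modulus assuming perfectly bonded, fully wetted filler."""
    vf = volume_fraction(wt_frac)
    return vf * E_FILLER + (1 - vf) * E_MATRIX

for wt in (0.0025, 0.005, 0.01):   # 0.25, 0.5 and 1 wt.%
    print(f"{wt*100:.2f} wt.% -> E ~ {rule_of_mixtures(wt)/1e9:.2f} GPa")
```

Encapsulation of the filler in air violates the perfect-bonding assumption behind any such estimate, which is consistent with the flat or decreasing moduli reported in the cited works.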
Conclusions
Two effective high-resolution non-destructive imaging techniques were applied for detailed imaging of the 3D bulk structure of carbon-epoxy nanocomposites.
It is shown that nanocomposites obtained by standard methods from GNPs and multi-walled CNTs without additional modification of the particle surface exhibit a microstructure filled with encapsulated fractal agglomerates of carbon particles, which can impede enhancement of the elastic properties.
Due to the high resolution of the synchrotron X-ray technique used in this work, it was possible to observe images of thin slices and to identify the structure of individual agglomerates. It can be seen that an agglomerate is a collection of carbon nanoparticles that capture a certain amount of air during material preparation and have an insufficiently active surface to ensure full adhesion of the filler to the polymer matrix. This situation leads to a lack of improvement in the elastic properties of the composite compared to the pure polymer, since the condition of continuity of mechanical stresses is violated at the local boundaries of the nanoparticle agglomerates, and the particles do not contribute to the elastic response of the material as a whole.
The high-resolution X-ray method is rather time-consuming, expensive and applicable only to specially prepared samples. The results of this study demonstrate that acoustic microscopy is a highly sensitive express technique for detecting air cavities. This makes it suitable for evaluating the effect of reinforcement particle agglomeration and air encapsulation on the reinforcement, as well as for estimating the mechanical properties of nanocomposite materials.
Figure 1. Internal microstructure of epoxy-CNT nanocomposite: (a)-acoustic image of surface, (b)-B-scan in the central part, (c)-C-scan at depth of 40 µm and thickness of 44 µm in classic gray scale gradation and (d)-a color distribution of agglomerations over the depth. Working frequency is 200 MHz, scanning field is 1 × 2 mm, scanning step 5 µm. (e,f)-Three-dimensional rendering and a slice of the sample obtained by X-ray tomography at the PETRA III synchrotron.
Figure 2. The set of serial tomographic slices spaced 10 µm from each other for the epoxy-GNP nanocomposite. Image size was 660 × 660 µm. The thickness of a slice was 0.325 µm.
Figure 3. X-ray imaging of the CNT cluster structure inside an individual pore: the bulk composite reconstruction (on the left) and high-resolution 2D images of the box (on the right).
Figure 4. Internal microstructure of epoxy-GNP nanocomposite: (a)-acoustic image of surface, (b)-B-scan in the central part, (c)-C-scan at depth of 40 µm and thickness of 44 µm in classic gray scale gradation and (d)-a color distribution of agglomerations over the depth. Working frequency is 200 MHz, scanning field is 1 × 2 mm, scanning step is 5 µm. (e,f)-Three-dimensional rendering and a slice of the sample obtained by X-ray tomography at the PETRA III synchrotron.
Figure 5. Internal microstructure of epoxy-EG nanocomposite: (a)-acoustic image of surface, (b)-B-scan in the central part, (c)-C-scan of the layer at depth of 40 µm and 80 µm thickness in classic gray scale gradation, (d)-a color distribution of agglomerations over the depth. Scanning field is 0.8 × 2 mm, scanning step 5 µm. (e,f)-Three-dimensional rendering and a slice of the sample obtained by X-ray tomography at the PETRA III synchrotron. In (f), the red box shows an enlarged defect structure.
Table 1. Legend of samples.
"Materials Science",
"Physics"
] |
Palmitate-Induced Cardiac Lipotoxicity Is Relieved by the Redox-Active Motif of SELENOT through Improving Mitochondrial Function and Regulating Metabolic State
Cardiac lipotoxicity is an important contributor to cardiovascular complications during obesity. Given the fundamental role of the endoplasmic reticulum (ER)-resident Selenoprotein T (SELENOT) for cardiomyocyte differentiation and protection and for the regulation of glucose metabolism, we took advantage of a small peptide (PSELT), derived from the SELENOT redox-active motif, to uncover the mechanisms through which PSELT could protect cardiomyocytes against lipotoxicity. To this aim, we modeled cardiac lipotoxicity by exposing H9c2 cardiomyocytes to palmitate (PA). The results showed that PSELT counteracted PA-induced cell death, lactate dehydrogenase release, and the accumulation of intracellular lipid droplets, while an inert form of the peptide (I-PSELT) lacking selenocysteine was not active against PA-induced cardiomyocyte death. Mechanistically, PSELT counteracted PA-induced cytosolic and mitochondrial oxidative stress and rescued SELENOT expression that was downregulated by PA through FAT/CD36 (cluster of differentiation 36/fatty acid translocase), the main transporter of fatty acids in the heart. Immunofluorescence analysis indicated that PSELT also relieved the PA-dependent increase in CD36 expression, while in SELENOT-deficient cardiomyocytes, PA exacerbated cell death, which was not mitigated by exogenous PSELT. On the other hand, PSELT improved mitochondrial respiration during PA treatment and regulated mitochondrial biogenesis and dynamics, preventing the PA-provoked decrease in PGC1-α and increase in DRP-1 and OPA-1. These findings were corroborated by transmission electron microscopy (TEM), revealing that PSELT improved the cardiomyocyte and mitochondrial ultrastructures and restored the ER network. Spectroscopic characterization indicated that PSELT significantly attenuated infrared spectral-related macromolecular changes (i.e., content of lipids, proteins, nucleic acids, and carbohydrates) and also prevented the decrease in membrane fluidity induced by PA. Our findings further delineate the biological significance of SELENOT in cardiomyocytes and indicate the potential of its mimetic PSELT as a protective agent for counteracting cardiac lipotoxicity.
Introduction
Obesity represents a major public health problem that seriously increases the risk of developing cardiovascular diseases (CVD) and directly contributes to incident cardiovascular risk factors [1]. In obese patients, the increased left ventricular mass and myocardial changes correlate with adiposity, which is responsible for cardiac steatosis by promoting the ectopic deposition of triglycerides in the heart [2]. Following lipid accumulation within the cells of the cardiovascular system, several complex mechanisms drive myocardial dysfunction, leading to heart failure (HF). In this process, known as cardiac lipotoxicity, saturated long chain fatty acids (FAs)-such as palmitic acid (PA)-play a major role due to their ability to alter cellular structures, triggering oxidative stress, endoplasmic reticulum (ER) stress, defective insulin signaling, mitochondrial dysfunction, inflammation, and myofibrillar dysfunction, culminating in cell death [3,4]. Despite the high metabolic flexibility of the heart in terms of substrate utilization, as well as its capacity to adjust the rate of FA uptake to match myocardial demand to obtain energy [5], the augmented circulating levels of free FAs during obesity dramatically increase the myocardial uptake of lipids that may be stored as triglycerides. However, since cardiomyocytes possess a limited storage capacity, the excess free FAs are also shunted into non-oxidative pathways, leading to myocardial lipotoxic injury [6]. Accordingly, elevated serum levels of PA, which represents the major circulating saturated FA, have been proposed as a potential factor contributing to lipotoxic cardiomyopathy development [7]. During chronic lipid overload conditions, excessive cytosolic FAs can increase mitochondrial uncoupling, which in turn may generate reactive oxygen species (ROS) and activate stress-sensitive pathways, leading to oxidative stress [6]. Additionally, ER protein misfolding secondary to the excess of lipids also promotes ER and nuclear oxidative stress [8]. Therefore, endogenous antioxidant enzymes are crucial to detoxify lipid hydroperoxides and other reactive species to maintain the normal cellular machinery, in particular during lipid overload conditions. In this context, selenoproteins are considered among the most potent antioxidant defense systems, with crucial implication in several pathophysiological contexts, including CVD [9,10]. Selenoprotein T (SELENOT) is an ER-resident thioredoxin-like enzyme and the only ER-located selenoprotein whose gene knockout in mice is lethal early during embryogenesis [11]. As a member of the selenoprotein family, SELENOT contains a selenocysteine (Sec, U) residue in its CVSU amino acid redox motif that is fundamental for its biological functions, including the regulation of ER homeostasis, N-glycosylation, and intracellular Ca²⁺ mobilization and neuroendocrine secretion. We previously showed in transgenic cell and animal models that a reduced expression of SELENOT is associated with oxidative and nitrosative stress, unfolded protein response (UPR) activation, and the depletion of Ca²⁺ stores leading to altered hormone secretion [12,13]. Considerable interest has also arisen in the recent years regarding the biological significance of SELENOT in glucose homeostasis and cardiac pathophysiology. SELENOT is strongly expressed in human and mouse pancreatic β- and δ-cells, and conditional pancreatic β-cell SELENOT-knockout mice exhibited impaired glucose tolerance with a deficit in insulin production/secretion [14].
On the other hand, SELENOT is highly expressed during the early hyperplastic growth of cardiomyocytes, suggesting its involvement in cardiac development during embryogenesis, and can also protect cardiomyocytes following myocardial ischemia/reperfusion (MI/R) insult [15,16].
The biological impact of SELENOT in cardiac and metabolic pathophysiology raised an intriguing hypothesis concerning its role in protecting cardiomyocytes exposed to a dysmetabolic condition generated by lipid overload. More precisely, considering that the
Cell Viability Assay
MTT assay was used to determine the effect of PSELT or I-PSELT on cell viability following PA exposure. H9c2 cells (5000 per well) were seeded in a 96-well plate and then treated with palmitate (PA) (from 100 to 500 µmol/L) for 24 h or vehicle (BSA) as control. PA was purchased from Sigma and dissolved to make a 10 mM stock solution with 10% fatty acid-free BSA [21]. Once the concentration for PA-induced cell death was established, H9c2 cardiomyocytes were exposed to PA (100 µmol/L) and PSELT (from 5 to 100 nmol/L) or its inactive form, indicated as inert-PSELT (I-PSELT), from 5 to 100 nmol/L, for 24 h. After treatments, 100 µL of 2 mg/mL MTT solution (Sigma Aldrich) was added to each well after removal of the culture medium, and then cells were incubated for 4 h at 37 • C, 5% CO 2 . Finally, MTT solution was replaced by DMSO, and formazan crystals were dissolved. The absorbance was measured at 570 nm using a microplate reader (Multiskan™ SkyHigh, Thermo Fisher Scientific Inc.). The means of the absorbance values of six wells in each experimental group were expressed as the percentage cell viability relative to the control cells. The experiment was repeated three independent times [16,18,19].
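A minimal sketch of this normalization (mean of six wells expressed as a percentage of the control mean) is shown below; the well readings and the optional blank subtraction are illustrative assumptions, not data from the study.

```python
import statistics

def percent_viability(treated_a570, control_a570, blank_a570=0.0):
    """Mean absorbance of treated wells as a percentage of the control mean.

    Blank subtraction is an assumption; the text only states normalization to control.
    """
    treated = statistics.mean(a - blank_a570 for a in treated_a570)
    control = statistics.mean(a - blank_a570 for a in control_a570)
    return 100.0 * treated / control

# Hypothetical A570 readings from six wells per group (n = 6, as in the text).
control_wells = [0.82, 0.80, 0.85, 0.79, 0.83, 0.81]
pa_wells = [0.51, 0.55, 0.49, 0.53, 0.50, 0.52]
print(f"PA-treated viability: {percent_viability(pa_wells, control_wells):.1f}% of control")
```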
Lactate Dehydrogenase (LDH) Assessment
The extent of damage induced by PA in H9c2 cardiomyocytes was assessed by analyzing the levels of LDH released in the culture medium following the method of McQueen (1972) [22], as previously indicated in [23]. To analyze the levels of LDH released in the culture medium, H9c2 cells (100,000 per mL) were seeded in a 24-well plate and treated with vehicle (control), PA (100 µmol/L), PA + PSELT or PSELT (5 nmol/L) for 24 h. At the end of the treatments, 100 µL per well of the culture medium was used for LDH activity determination. The enzyme activity was evaluated spectrophotometrically using a Multiskan™ SkyHigh (Thermo Fisher Scientific), following the method of McQueen (1972) [22]. The reaction velocity was determined from the decrease in absorbance at 340 nm resulting from the oxidation of NADH (indicative of LDH activity) and was expressed in IU/L [24].
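The text reports LDH activity in IU/L from the rate of NADH oxidation at 340 nm but does not give the conversion; a sketch of the standard calculation using the NADH molar absorption coefficient (about 6.22 mM⁻¹·cm⁻¹ at 340 nm) is given below, with the path length and volume factors as assumptions since the exact assay geometry is not stated.

```python
NADH_EPSILON_MM = 6.22   # mM^-1 cm^-1 at 340 nm (standard literature value)

def ldh_activity_iu_per_l(delta_a340_per_min: float,
                          total_vol_ml: float,
                          sample_vol_ml: float,
                          path_cm: float = 1.0) -> float:
    """LDH activity in IU/L (umol NADH oxidized per minute per litre of sample).

    total_vol_ml, sample_vol_ml and path_cm are assay-geometry assumptions.
    """
    # dA/min divided by (epsilon * path) gives mM/min consumed in the reaction mix.
    rate_mm_per_min = delta_a340_per_min / (NADH_EPSILON_MM * path_cm)
    # Scale to umol/min per litre of the original medium sample (IU/L).
    return rate_mm_per_min * 1000.0 * (total_vol_ml / sample_vol_ml)

# Example: dA340 of 0.030 per minute in a 1.0 mL mix containing 0.1 mL of medium.
print(f"{ldh_activity_iu_per_l(0.030, 1.0, 0.1):.0f} IU/L")   # ~48 IU/L
```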
Oil Red O Staining
Intracellular lipid accumulation was measured by Oil Red O staining. H9c2 cells were seeded in a 6-well plate, treated with vehicle, PA (100 µmol/L), PA + PSELT or PSELT (5 nmol/L) and incubated in a humidified atmosphere at 37 • C for 24 h. After the treatments, H9c2 cells were washed three times with DPBS and incubated with the Oil Red O kit according to the manufacturer's instructions (#04-220923, Bio Optica, Milan, Italy). H9c2 cardiomyocytes were incubated with reagent A for 20 min, then washed and incubated with reagent B for 30 s followed by the addition of distilled water for 3 min. Nuclei were counterstained with hematoxylin provided by the kit [25][26][27]. After staining, cells were visualized under an Olympus BX41 microscope, and the images were taken with CSV1.14 software, using a CAM XC-30 for image acquisition.
For the detection of intracellular ROS, H9c2 cells (100,000 per well) were seeded on coverslips in 6-well plates and exposed to PA and PSELT, alone or in co-treatment for 24 h. At the end of the experimental protocol, H9c2 cardiomyocytes were incubated with 10 µmol/L CM-H 2 DCFDA at 37 • C for 30 min in the dark, and then the cells were carefully rinsed with DPBS and visualized under a fluorescence microscope (Olympus; 20× objective) [28,29]. For measuring the production of total ROS by flow cytometry, H9c2 cells were treated as above, collected, washed with DPBS, and stained with H2DCFDA. ROS were measured by flow cytometry (BD Fortessa X-20) according to the producer's guidelines. The data were analyzed with FlowJo 10.8.1 software.
For mitochondrial superoxide generation, H9c2 cardiomyocytes were seeded on a coverslip in a 6-multiwell plate at a density of 100,000 cells per well and after 48 h, incubated with PA 100 (µmol/L) and PSELT (5 nmol/L) alone or in co-treatment for 24 h. At the end of the treatment, MitoSOX reagent was first dissolved in dimethylsulfoxide (5 mmol/L) and then diluted to 5 µmol/L in DMEM/F-12 phenol-free and serum-free medium [29]. A suitable amount of the solution was added to the cells followed by incubation for 10 min at 37 • C protected from light. Cells were then washed twice with DPBS. Fluorescence was detected using an Olympus fluorescence microscope (20× objective) and quantified using ImageJ 1.6 software (National Institutes of Health, Bethesda, MD, USA).
Immunofluorescence Analysis for CD36 Evaluation
H9c2 cardiomyocytes were seeded on chamber slides (50,000 cells per chamber), incubated for 48 h at 37 • C, 5% CO 2 , and then treated with vehicle, PA (100 µmol/L), PA+ PSELT or PSELT (5 nmol/L) for 24 h. At the end of the treatment, H9c2 cells were washed three times with DPBS and fixed for 10 min with ice-cold methanol. Then, fixed cardiomyocytes were rinsed with cold-DPBS two times, and the permeabilization step was performed using 0.1% Triton X-100 in DPBS for 30 min at RT. Permeabilized H9c2 cells were then washed with DPBS following by blocking with 1% BSA in DPBS for 30 min at RT [16]. For immunofluorescence staining, H9c2 cells were incubated with a primary antibody against CD36 (diluted 1:200) for 2 h at room temperature and then stained with donkey anti-rabbit secondary antibody, Alexa Fluor 555 (diluted 1:1200), for 1 h at room temperature following the manufacturer's instruction. The cells were then washed twice with DPBS and stained with DAPI for nuclei counterstaining. Images were obtained using an Olympus fluorescence BX41 microscope and acquired with CSV1.14 software, using a CAM XC-30 for acquisition. The fluorescence quantification was carried out using ImageJ 1.6 software.
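Fluorescence quantification of this kind in ImageJ is commonly reported as corrected total cell fluorescence (CTCF), the quantity cited in the figure legends later in the text; the usual formula and placeholder measurements are sketched below as an illustration, since the exact quantification routine is not detailed here.

```python
def ctcf(integrated_density: float, cell_area: float, mean_background: float) -> float:
    """Corrected total cell fluorescence = IntDen - (cell area x mean background)."""
    return integrated_density - cell_area * mean_background

# Placeholder ImageJ measurements for one cell ROI and a nearby background ROI.
print(ctcf(integrated_density=152_000.0, cell_area=1_250.0, mean_background=12.4))
```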
Short Interfering RNA (siRNA) Transfection for SELENOT Silencing
SELENOT gene silencing in H9c2 cardiomyocytes was performed as previously described by Rocca et al. (2022) [16]. Briefly, H9c2 cardiomyocytes (5000 per well) were seeded in 96-well plates and incubated for 48 h at 37 °C, 5% CO2. SELENOT siRNA (100 nmol/L) was transfected into H9c2 cardiomyocytes in serum-free medium using the Lipofectamine 2000 transfection reagent following the manufacturer's instructions (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA). Negative control siRNA (si-NC) was used to detect non-specific effects of siRNA delivery and to compare siRNA-treated samples. Both si-NC and siRNA for SELENOT were purchased from Santa Cruz Biotechnology. H9c2 cardiomyocytes were transfected in serum-free medium for 6 h, after which the medium was replaced with full medium and cells were incubated for 36 h at 37 °C, 5% CO2. H9c2 cells were treated with PA (100 µmol/L) or co-treated with PA and increasing concentrations of PSELT (from 5 to 100 nmol/L) for 24 h. At the end of the treatments, the viability of the H9c2 cells was evaluated by MTT assay. The cell viability was reported as the percentage cell survival relative to the si-NC transfected cells in six wells for each experimental group [16]. The experiment was repeated three independent times.
Assessment of Mitochondrial Respiratory Function Using the Seahorse XF Analyzer
For the evaluation of mitochondrial respiration, real-time oxygen consumption rates (OCR) were determined using the Seahorse Extracellular Flux (XFe-96) (Seahorse Bioscience, Agilent Technologies, Inc), as previously described [30]. H9c2 cardiomyocytes were seeded at a density of 10,000 cells per well in a XF96 Seahorse microplate, incubated for 24 h in complete medium at 37 • C and 5% CO2, and then exposed to vehicle, PA (100 µmol/L), and PSELT (5 nmol/L), alone and in combination for 24 h. At the end of the treatments, media were replaced with pre-warmed Seahorse XF assay medium (Agilent Technologies, Inc.) supplemented with 10 mmol/L glucose and 1 mmol/L pyruvate and adjusted to pH 7.4. Cells were maintained in 175 µL of XF assay medium per well at 37 • C in a non-CO2 incubator for 1 h. During the incubation, 10 µmol/L oligomycin, 9 µmol/L FCCP, 10 µmol/L Rotenone, and 10 µmol/L antimycin A were loaded in XF assay medium into the injection ports in the XFe-96 sensor cartridge for OCR measurement. Data were analyzed by XFe-96 software, and measurements were normalized by the protein content, which was determined by Sulphorhodamine B assay, as previously described [30].
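The injection sequence (oligomycin, FCCP, rotenone, antimycin A) corresponds to the standard mitochondrial stress test; the usual arithmetic for deriving respiration parameters from the protein-normalized OCR plateaus is sketched below with purely illustrative numbers, since the paper reports only the OCR traces.

```python
def stress_test_params(basal, post_oligo, post_fccp, post_rot_aa):
    """Classic mito stress-test arithmetic on OCR values (e.g., pmol O2/min/µg protein).

    Inputs are the OCR plateaus after each injection; all numbers here are illustrative.
    """
    non_mito = post_rot_aa                      # rotenone + antimycin A block the ETC
    return {
        "basal_respiration":   basal - non_mito,
        "atp_linked":          basal - post_oligo,       # oligomycin-sensitive OCR
        "proton_leak":         post_oligo - non_mito,
        "maximal_respiration": post_fccp - non_mito,     # FCCP-uncoupled OCR
        "spare_capacity":      post_fccp - basal,
    }

print(stress_test_params(basal=8.0, post_oligo=3.0, post_fccp=12.0, post_rot_aa=1.0))
```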
Western Blot
After the H9c2 cells were treated with vehicle, PA (100 µmol/L), PA + PSELT or PSELT (5 nmol/L) for 24 h, the cardiomyocytes were washed with DPBS and the total proteins were extracted using RIPA lysis buffer (Sigma Aldrich, St. Louis, MI, USA) supplemented with protease inhibitors [16]. Cell lysates were transferred to microcentrifuge tubes, incubated on ice for 30 min with intermittent mixing, and centrifuged at 12,000× g for 15 min at 4 °C. The supernatant was collected, and the protein concentration was determined by Bradford reagent using bovine serum albumin (BSA) as a standard. Equal amounts of protein (50 µg for all antigens and 30 µg for SELENOT) were loaded on 12% SDS-PAGE gels for N-terminus peroxisome proliferator-activated receptor-gamma coactivator-1 alpha (PGC-1α), Superoxide dismutase 2 (SOD-2) and SELENOT; on 10% SDS-PAGE gels for Catalase (CAT); and on 8% SDS-PAGE gels for dynamin-related protein 1 (DRP-1) and optic atrophy 1 (OPA1). Gels were subjected to electrophoresis and transferred to polyvinylidene difluoride membranes (GE Healthcare, Chicago, IL, USA). Membranes were blocked in 5% non-fat dried milk at room temperature for 1 h, washed three times with tris-buffered saline containing 0.1% Tween 20 (TBST), and incubated overnight at 4 °C with primary antibodies specific for the antigens mentioned above, diluted 1:1000 (for SELENOT and OPA-1) and 1:500 (for PGC-1α, SOD-2, CAT) in TBST and 5% BSA, and 1:1000 for DRP-1 in TBST and 1% non-fat dried milk. β-actin antibody was used as a loading control. Following incubation with primary antibodies, the membranes were washed three times with TBST and then incubated with peroxidase-conjugated secondary antibodies at room temperature for 1 h (anti-mouse diluted 1:2000 and anti-rabbit diluted 1:3000) (Sigma Aldrich) in TBST containing 5% non-fat dried milk. Immunodetection was carried out using a chemiluminescence kit (Santa Cruz Biotechnology, Dallas, TX, USA) or Clarity Western ECL Substrate (Bio-Rad, Hercules, CA, USA) when necessary. Densitometric analyses were performed using ImageJ 1.6 software (National Institutes of Health, Bethesda, MD, USA) as previously indicated [16,18,23].
Attenuated Total Reflectance Fourier-Transform Infrared (ATR-FTIR) Spectroscopic Measurements
For these analyses, H9c2 cells exposed to vehicle, PA (100 µmol/L), PA + PSELT or PSELT 5 nmol/L for 24 h were collected using trypsin, centrifuged (1500× g, 5 min), and then 1,000,000 cells were resuspended in 300 µL of the complete medium. The infrared spectra of live H9c2 cells were collected in the attenuated total reflectance (ATR) mode at 37 • C by using a Tensor II FTIR spectrometer (Bruker Optics, Ettlingen, Germany) equipped with a thermostated BioATR II sample holder and a mercury-cadmium-telluride detector. Then, 20 µL of the cell suspension was deposited on the ATR silicon crystal and left to equilibrate for 2 min before measurement. Spectra were recorded for 180 min (each 10 min) using the kinetic option in Opus acquisition software in order to maximize the absorption signal due to cell sedimentation on the crystal. Each spectrum was an average of 120 scans at 4 cm −1 spectral resolution. The background spectrum was recorded with the cell culture medium under the same experimental setup. At least three replicates for each cell type (grown independently) were measured. Spectral processing and analysis: the kinetic spectra of each measurement were baseline corrected with a rubber band function and averaged starting from the saturation stage using Opus 7 software. This average absorbance spectrum was further averaged with the replicas of the same cell type and normalized for the area under the amide II band (1597-1481 cm −1 ). The normalized mean spectra of each cell type were used in the calculation of the difference spectra. The statistical comparison between the normalized mean spectra of untreated and differently treated H9c2 cells was performed by using Student's t-test (two-tailed, nonparametric Wilcoxon, 99% CI) at each wavenumber in Prism 5. The spectral regions where a significant difference in the absorption occurred (p-value < 0.0001) were considered in the discussion. The lipid/protein ratio was calculated from the area under the 3050-2800 cm −1 region (lipid content) and the 1700-1600 cm −1 region (protein content). The bandwidth of the CH2 symmetric band was measured at 75% of the height of the peak maximum from the baseline-corrected spectra.
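The two derived quantities described here (the lipid/protein area ratio and the CH₂ symmetric-stretch bandwidth at 75% of peak height) can be computed from a baseline-corrected spectrum as sketched below; the wavenumber grid, the absorbance array and the band search window are placeholders, not the authors' processing script.

```python
import numpy as np

def band_area(wn, absorb, lo, hi):
    """Trapezoidal area of the spectrum between two wavenumbers (cm^-1)."""
    mask = (wn >= lo) & (wn <= hi)
    return abs(np.trapz(absorb[mask], wn[mask]))

def lipid_protein_ratio(wn, absorb):
    """Area(3050-2800 cm^-1, lipid CH stretches) / Area(1700-1600 cm^-1, amide I)."""
    return band_area(wn, absorb, 2800, 3050) / band_area(wn, absorb, 1600, 1700)

def bandwidth_at_75pct(wn, absorb, lo=2840, hi=2870):
    """Width of the CH2 symmetric stretch band measured at 75% of its maximum height.

    The 2840-2870 cm^-1 search window is an assumption for locating the band.
    """
    mask = (wn >= lo) & (wn <= hi)
    w, a = wn[mask], absorb[mask]
    level = 0.75 * a.max()
    above = w[a >= level]
    return above.max() - above.min()

# wn and absorb would be the measured wavenumber axis and the baseline-corrected,
# amide II-normalized absorbance of a given cell suspension spectrum.
```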
PSELT Mitigates PA-Induced Cytotoxicity and Lipid Accumulation in H9c2 Cardiomyocytes
To determine the concentration range at which PA induces toxic action on cardiomyocytes in term of cell viability, H9c2 cells were treated with increasing concentrations of PA (100-500 µmol/L) for 24 h. MTT assay showed that, compared to control cells, PA dose-dependently decreased cardiomyocyte viability starting from 100 µmol/L ( Figure 1A). After this first cardiotoxic dose of PA was established, H9c2 cells were exposed to PA (100 µmol/L) and co-treated with increasing concentrations of PSELT (5-100 nmol/L) for 24 h. The results indicated that PA induced a significant decrease in cell viability compared to the control cells, while PSELT was able to significantly mitigate PA-dependent cell death at each tested concentration, starting from 5 nmol/L ( Figure 1B). Thus, the first effective concentration of PSELT (5 nmol/L) was considered for the subsequent analyses. Conversely, in the concentration range of 5-100 nmol/L, the inert counterpart of PSELT (inert-PSELT) was ineffective in counteracting PA-induced cell viability decrease ( Figure 1C).
The effect of PSELT against PA-induced cytotoxicity was also evaluated by measuring, in the H9c2 culture medium, the enzymatic activity of LDH, whose release indicates damage of the cell membrane. As shown in Figure 1D, PA treatment significantly increased LDH release with respect to the control group, while H9c2 cells exposed to PA + PSELT exhibited a significant decrease in LDH activity compared to cells treated with PA alone. In the cells exposed to PSELT alone, no significant change in LDH levels was detected compared to control cells ( Figure 1D).
Oil Red O staining and relative spectrophotometric quantification were performed to measure the accumulation of intracellular lipids in H9c2 cells. This analysis revealed a significant increase in the intracellular lipid droplets in cardiomyocytes after PA exposure compared to control cells. However, in H9c2 cardiomyocytes treated with PA + PSELT, intracellular lipids were significantly reduced compared to PA alone ( Figure 1E).
PSELT Protects H9c2 Cells against PA-Induced Oxidative Stress
To evaluate whether PSELT could counteract PA-induced oxidative stress, we evaluated intracellular ROS generation by using the specific fluorescent probe CM-H 2 DCFDA, mitochondrial superoxide generation by MitoSOX-Red staining, and the expression levels of endogenous antioxidant enzymes. Figure 2A indicates that PA significantly increased ROS generation, as evidenced by the enhanced fluorescence intensity of the probe observed in PA-treated cells compared to control cells. PSELT significantly decreased this PA-induced fluorescence intensity, and PSELT alone did not generate any significant intracellular ROS compared to the control group. ROS determination by the H 2 DCFDA assay and measured by flow cytometry in H9c2 cells exposed to PA with or without PSELT reflected the same trend (Supplementary Figure S1).
Mitosox-Red staining was then conducted to detect mitochondrial superoxide generation in the H9c2 cardiomyocytes. As revealed by the increased red fluorescence-stained cells in Figure 2B, PA treatment resulted in a significant increase in O2− levels compared to those of control cells. PSELT treatment significantly reduced O2− generation in the presence of PA compared to PA alone (Figure 2B). The cell oxidative status was also evaluated by assessing the expression levels of the endogenous antioxidant enzymes SOD-2 and CAT; Western blot and densitometric analyses of the H9c2 cell extracts showed that SOD-2 and CAT expression significantly increased in PA-treated cells with respect to the control.

Figure 1. Effects of palmitate (PA) with or without PSELT on cell viability, cytotoxicity, and lipid accumulation in H9c2 cardiomyocytes. H9c2 cells were exposed to vehicle (Control) or increasing concentrations of (A) PA (100-500 µmol/L) or (B) PA 100 µmol/L + PSELT (5-100 nmol/L) or (C) PA 100 µmol/L + inert PSELT (I-PSELT) (5-100 nmol/L) for 24 h. The viability of H9c2 cells was determined using MTT assay and was expressed as the percentage of control cells. Results are reported as the mean ± SEM (n = 6 per group). Significant differences were detected by one-way ANOVA followed by Dunnett's test, p < 0.05 (*); p < 0.01 (**); and p < 0.0001 (****) vs. the Control group. (D) Lactate dehydrogenase (LDH) release in the culture medium of H9c2 cardiomyocytes treated with vehicle, PA (100 µmol/L) and PSELT (5 nmol/L) alone or in co-treatment for 24 h. The LDH activity is expressed as IU/L. Data are shown as the mean ± SEM of six separate experiments. Significant differences were detected by one-way ANOVA and Newman-Keuls multiple comparison test, p < 0.05 (*); p < 0.01 (**); and p < 0.0001 (****). (E) Representative images of Oil Red O staining for lipid droplet assessment and relative quantification. H9c2 cardiomyocytes treated with vehicle (Control), PA 100 µmol/L, PA + PSELT 5 nmol/L or PSELT for 24 h. Scale bar: 25 µm. Quantification of the stained lipid droplets was performed by measuring the absorbance at 510 nm. Values are the mean ± SEM of three different experiments. p < 0.05 (*); p < 0.01 (**); p < 0.001 (***); and p < 0.0001 (****).
PSELT Rescues the PA-Induced Reduction of Endogenous SELENOT Expression in H9c2 Cells, and Endogenous SELENOT Is Fundamental for PSELT-Induced Cell Protection against the PA Effect
To evaluate whether PA could affect the expression of the endogenous SELENOT, we performed Western blot and relative densitometric analysis of SELENOT in H9c2 cells exposed to PA for 12, 18, and 24 h. As shown in Figure 3A, PA treatment reduced SELENOT levels in a time-dependent manner compared to the control, particularly at 18 and 24 h where the reduction of SELENOT expression was statistically significant. Then, we evaluated whether PA could decrease endogenous SELENOT expression through FAT/CD36 (cluster of differentiation 36/fatty acid translocase), the main transporter of fatty acids in the heart. To this aim, H9c2 cardiomyocytes were first treated for 1 h with 1 µmol/L of sulfo-N-succinimidyl oleate (SSO) [31]-an irreversible inhibitor of CD36 able to block CD36-mediated FA uptake-followed by exposure to PA for 24 h. As shown in Figure 3B, PA significantly reduced SELENOT expression compared to the control, while in the cells exposed to SSO + PA, SELENOT expression was preserved. To confirm that SSO was effective in preventing PA-induced cytotoxicity, we performed an MTT assay showing the ability of SSO to significantly mitigate PA-induced cell death in H9c2 cardiomyocytes compared to PA alone ( Figure 3C). In order to evaluate the influence of exogenous PSELT on endogenous SELENOT expression during PA treatment, we carried out a Western blot analysis of H9c2 cells exposed to PA for 24 h in the presence or absence of PSELT. Figure 3D confirmed the ability of PA to reduce SELENOT expression compared to the control and indicated that PSELT significantly rescued the PA-induced reduction of SELENOT.
To establish the direct role of endogenous SELENOT in the cytoprotection mediated by exogenous PSELT during PA exposure, we tested the effect of SELENOT knockdown by using a SELENOT siRNA. We first observed that the transfection of the siRNA SELENOT (si-SELENOT) significantly reduced H9c2 cell viability compared to the control [i.e., cells transfected with control siRNA (si-NC)] (Figure 3E). As shown above, this analysis also showed that PA reduced cell viability in si-NC transfected cells compared to control cells (si-NC). Further, SELENOT knockdown worsened PA-induced cytotoxicity since the extent of the PA-induced cell death was higher in SELENOT-knockdown cells compared to the control cells exposed to PA [PA (si-NC) group] (Figure 3E). Additionally, none of the PSELT concentrations (5-100 nmol/L) in si-SELENOT-transfected cells exposed to PA were able to significantly mitigate PA-induced cell death compared with cells silenced for SELENOT and exposed only to PA (Figure 3E).
PSELT Reduces the PA-Dependent Upregulation of CD36 in H9c2 Cardiomyocytes
To evaluate whether PSELT could affect the expression levels of the fatty acid transporter CD36, we performed an immunofluorescence analysis on H9c2 cells exposed to PA and PSELT, alone and in co-treatment, for 24 h. As shown in Figure 4, PA significantly increased CD36 expression compared with control cells, as revealed by the enhanced fluorescence intensity in the PA group. Conversely, PSELT was able to significantly mitigate CD36 expression in H9c2 cardiomyocytes exposed to PA with respect to cells treated with PA alone, as revealed by a lower fluorescence intensity in the PA + PSELT group. Moreover, a slight but significant increase in fluorescence intensity was also detected in PSELT-treated H9c2 cells compared to control cells.

(Figure 4 legend: Nuclei were counterstained with DAPI (blue). Scale bars: 25 µm. Data are expressed as the mean ± SEM (n = 3 different experiments). Significant differences in CTCF were detected by one-way ANOVA and Newman-Keuls multiple comparison test, p < 0.05 (*) and p < 0.0001 (****).)
PSELT Mitigates the Detrimental Effects of PA on Mitochondrial Function, Biogenesis, and Dynamics
To evaluate the effects of PSELT on mitochondrial function during PA treatment, we performed metabolic flux analysis using the Seahorse XFe96. In H9c2 cardiomyocytes, a dramatic reduction in the oxygen consumption rate (OCR) was observed after treatment with PA, whereas PSELT mitigated this effect ( Figure 5A and Supplementary Figure S2).
We next investigated whether the protective action of PSELT could also be linked to mitochondrial biogenesis and dynamics; to this aim, we carried out Western blot analyses aimed at assessing specific markers involved in these molecular processes. Figure 5B shows that PA significantly reduced the levels of PGC1-α, a master regulator of mitochondrial biogenesis, compared to control cells. However, PSELT treatment significantly increased these levels in the presence of PA compared to cells exposed only to PA ( Figure 5B). Moreover, we assessed whether PSELT could influence the expression levels of DRP-1, a key regulator of mitochondrial fission, and OPA-1, a gatekeeper of stress-sensitive mitochondrial fusion. Our results indicate that PA triggered a significant increase in both DRP-1 and OPA-1 expression compared to the control group, which was significantly reversed in the presence of PSELT ( Figure 5C,D).
PSELT Mitigates PA-Dependent Ultrastructural Alterations in H9c2 Cardiomyocytes
To evaluate the protective action of PSELT against PA in H9c2 cardiomyocytes at the ultrastructural level, we carried out TEM analyses. Figure 6A,B shows that the cytoplasm of control cells was rich in ER, Golgi bodies, and mitochondria (see higher magnification images). Dilated cisternae of ER were found to be a typical component of control cells, while mitochondria possessed their typical cristae morphology and structure. After 24 h of PA treatment, the ER network reorganized into stacked and concentrically whorled membranes. The cristae became disorganized in most of the mitochondria, which also changed shape to oval or round, became swollen, and were often embedded in the ER (Figure 6C,D). Treatment with PSELT restored an apparently normal ultrastructure with no evidence of swelling or injury in the cytoplasm and organelles. Most of the mitochondria showed regularly spaced lamellar cristae; the mitochondrial matrix also possessed the typical homogeneous staining pattern of modest electron density (Figure 6E,F). PSELT treatment alone did not induce any significant effect on the structural and subcellular organization of the cells (Figure 6G,H).

Figure 7A shows a representative mean absorbance FTIR spectrum of H9c2 cells in aqueous solution in the spectral regions 3050-2800 and 1800-900 cm −1 , which contain the signature of key biomolecules. The absorption in the first region originated mainly from the CH 2 and CH 3 stretching vibrations (both symmetric and asymmetric) of the lipid acyl chain and reflects the molecular properties of cellular membranes. The second spectral region (fingerprint region) exhibited more complex spectral features, where the contributions of the different biomolecules partially overlapped. The main peaks were due to lipids (C = O stretching at 1750-1715 cm −1 and CH 2 bending at 1470 cm −1 ), proteins (amide I at 1646 cm −1 , amide II at 1546 cm −1 , and amide III at 1300 cm −1 ), nucleic acids (asymmetric and symmetric phosphate stretching at 1125 and 1085 cm −1 , respectively), and carbohydrates and glycogen (1155 and 1030 cm −1 ) [32,33]. For a more detailed description of the absorption peaks numbered in Figure 7A, see Table 1.
Typically, variations of the IR bands may include peak shifts, linewidth changes, and variation in the peak intensity/area that can be correlated with specific modifications of the functional groups of the relevant molecules within the sample [34]. In particular, peak shifts and bandwidth provide structural and dynamical information whereas band intensity is related to the concentration of the corresponding functional groups of the biomolecules (according to the Beer-Lambert law). Based on a comparison of the mean absorbance spectra of the four different cell samples analyzed, no variation of the band position was observed (Supplementary Figure S3), indicating that there were no significant structural changes. However, changes in the peak intensity were observed.
PSELT Mitigates Specific FTIR Spectral Alterations Induced by PA in H9c2 Cardiomyocytes
To better visualize the effects of the different treatments on the molecular composition of H9c2 cardiomyocytes, we calculated the difference spectra, i.e., the mean absorbance spectrum of each treated sample minus the mean spectrum of the untreated cells representing our control sample (Figure 7B). The regions with the highest statistical significance (p < 0.0001) in absorbance are highlighted with a thicker line. Positive/negative peaks in the plot indicate a higher/lower concentration of the corresponding molecular components within the cell.
When we compared H9c2 cells treated with PA with control cells (red line), significant differences were observed in the amide I region, which reflects a decrease in protein concentration. The peak positions at 1655 and 1635 cm −1 are associated with α-helix and β-sheet proteins [32]. Positive peaks were found around 1196 and 1046 cm −1 ; these regions are assigned to the vibrations of specific groups of nucleic acids and carbohydrates and suggest changes in the content of such molecules. Positive peaks were also found for the lipid component at about 2920 and 1730 cm −1 . These variations were also reflected in the lipid/protein ratio for the two samples, which increased from 0.200 ± 0.079 (control) to 0.337 ± 0.043 for the PA sample. Moreover, the bandwidth of the CH 2 symmetric band at 2852 cm −1 decreased from 16.747 ± 0.003 cm −1 (control) to 16.030 ± 0.401 cm −1 (PA). Such a reduction suggests a decrease in cell membrane fluidity under PA treatment. When the cells were co-treated with PSELT (blue line), the difference in absorption almost vanished below 1200 cm −1 and only a reduction in the components related to nucleic acids and carbohydrates was observed.
Finally, the effect of PSELT alone on the cells (green line) involved a reduction of the lipid content (see the negative peak at 2922 cm −1 ) and also of the bands related to nucleic acids and carbohydrates. Regarding the effects induced by PA in the region between 1165 and 955 cm −1 , PSELT co-treatment mitigated the PA-dependent increase (blue line).
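For readers who wish to reproduce this kind of comparison, the following minimal sketch shows how a difference spectrum, a lipid/protein ratio, and the FWHM of the CH 2 symmetric band could be estimated from absorbance arrays; the band limits, the simple trapezoidal integration, and the assumption of an ascending wavenumber axis are illustrative choices, not the exact procedure used in the study.

```python
import numpy as np

def trapezoid(y, x):
    # Explicit trapezoidal integration; assumes x is sorted in ascending order.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def difference_spectrum(mean_treated, mean_control):
    """Treated-minus-control mean absorbance; positive values indicate a
    higher concentration of the corresponding molecular component."""
    return mean_treated - mean_control

def band_area(wavenumber, absorbance, lo, hi):
    """Integrated absorbance between two wavenumbers (cm^-1)."""
    mask = (wavenumber >= lo) & (wavenumber <= hi)
    return trapezoid(absorbance[mask], wavenumber[mask])

def lipid_protein_ratio(wavenumber, absorbance):
    # Assumed band limits: CH2/CH3 stretching (lipids) vs amide I + II (proteins).
    lipid = band_area(wavenumber, absorbance, 2800, 3050)
    protein = band_area(wavenumber, absorbance, 1480, 1700)
    return lipid / protein

def fwhm(wavenumber, absorbance, lo=2830, hi=2875):
    """Crude FWHM estimate for the CH2 symmetric band near 2852 cm^-1."""
    mask = (wavenumber >= lo) & (wavenumber <= hi)
    x, y = wavenumber[mask], absorbance[mask]
    y = y - y.min()
    above = x[y >= y.max() / 2.0]
    return above.max() - above.min()
```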
Discussion
Cardiac lipotoxicity can induce cell dysfunction and cell death, thus increasing both atherosclerotic coronary heart disease and HF, which represent important contributors to cardiovascular complications among obese individuals [6]. Although diverse studies have been conducted in this field, it is essential to improve our knowledge of the molecular mechanisms that drive cardiac lipotoxicity in order to identify specific approaches to minimize obesity-related cardiac complications. In the present study, we employed H9c2 cardiomyocytes exposed to PA, a widely used in vitro model for recapitulating the harmful cardiac effects of consuming a high-fat diet, and provided novel evidence on the beneficial action of the antioxidant SELENOT mimetic peptide (PSELT) against cardiomyocyte damage induced by lipotoxicity.
PSELT Exerts Protective Effects against PA-Induced Cytotoxicity and Lipotoxicity through Its Redox Site Containing the Sec Residue
Several reports indicate that PA causes lipotoxicity and cell dysfunction in many cell types, including cardiomyocytes [36][37][38][39][40]. Therefore, we exposed H9c2 cells, which are widely used as a cell line to model cardiomyocytes in vitro due to their biochemical, morphological, and electrical/hormonal properties [41,42], to PA in order to model hyperlipidaemia in vitro and to focus on the mechanism affecting the myocardium (i.e., lipotoxicity), which is implicated in the pathogenesis of HF in obesity. In line with previous studies, our results showed first that PA caused cardiomyocyte death in a dose-dependent manner, indicating that the in vitro model of cytotoxicity was successfully established. Accordingly, the use of PA represents the most common method to induce cardiac lipotoxicity in in vitro systems [43,44].
Our previous in vivo and in vitro data demonstrated that SELENOT exerts a crucial role in preserving redox and ER homeostasis and is essential for cardiomyocyte differentiation and protection, as well as for glucose metabolism through its ability to regulate insulin production/secretion [12][13][14][15][16]. Therefore, we hypothesized that SELENOT could play a role in protecting cardiomyocytes exposed to a dysmetabolic condition generated by lipid overload. To address this issue, we took advantage of a selective SELENOT-derived small peptide (PSELT) able to mimic the activity of the full-length protein through its CVSU redox motif, as previously reported by our groups in different pathophysiological contexts [15][16][17]. In this regard, therapeutic peptides are emerging as very promising tools due to their ability to selectively target specific molecules and pathways, which may circumvent some limitations of the conventional therapeutics such as those related to the use of a full-length protein in its recombinant form [45].
Our results indicated that PSELT counteracts lipotoxic cardiomyocyte death through its redox active site, as an analogous control peptide lacking the Sec residue in the catalytic site (i.e., inert PSELT) was ineffective, indicating that the cytoprotective action of PSELT is attributable to the Sec residue in the CVSU motif. The protective effects of PSELT against PA-induced lipotoxicity were also confirmed by its ability to reduce the release of LDH in the culture medium (i.e., an important indicator of cytotoxicity and membrane damage) and to counteract intracellular lipid droplets [46].
PSELT Counteracts PA-Induced Oxidative Stress and the Reduction of Endogenous SELENOT
It is widely known that oxidative stress plays a key role in the onset and progression of several multifactorial diseases, including obesity-related cardiovascular disorders [47][48][49]. Therefore, selective pharmacological interventions aimed at inhibiting ROS overproduction could represent suitable strategies to mitigate lipotoxic cardiomyocyte death and cardiac dysfunction [50]. Here, we report that PSELT mitigated intracellular ROS production and the mitochondrial superoxide generation provoked by PA and decreased the PA-dependent activation of key endogenous antioxidant enzymes, such as SOD and catalase. It has been reported that an increase in ROS production induces lipid peroxidation, stimulating the activation of antioxidant defenses and inducing redox homeostasis imbalance [51,52]. For instance, the decrease in glutathione (GSH) reductase and glutathione peroxidase (GPX) activities in the presence of PA may activate the antioxidant enzymes SOD and CAT as the result of an adaptive response employed by the cell to counteract the lipotoxic stress condition [53]. The increase in SOD-2 levels found in our study during PA treatment could be due to the augmented production of mitochondrial superoxide, which in turn may contribute to ROS generation. The increase in CAT during PA could be linked to excessive hydrogen peroxide production due to the increase in SOD-2 levels. Therefore, our results suggest that PSELT may induce direct antioxidant activity and a consequent equilibrium of the redox status by rebalancing the levels of ROS-metabolizing enzymes. It is known that there is a close relationship between increased levels of cytosolic/mitochondrial Ca 2+ and excessive ROS generation, which promotes cardiomyocyte and endothelial dysfunction [54,55]. It is also widely accepted that PA triggers oxidative stress, leading to disrupted redox-dependent regulatory mechanisms of ER homeostasis and ER stress resulting in Ca 2+ dysregulation, which may actively participate in apoptotic cell death [56]. Although we did not investigate the cytosolic and mitochondrial Ca 2+ overload and its potential role in PA-induced cardiac lipotoxicity, literature evidence has reported Ca 2+ overload in cardiomyocytes during PA exposure, even if at higher doses compared to those used in our study [21,57]. Considering that SELENOT crucially modulates ER thiol redox balance and contributes to Ca 2+ signaling by modulating Ca 2+ flux into and from the ER lumen and by a redox mechanism involving thiol groups in calcium channels and pumps [12,58], it is possible that PSELT and/or SELENOT can be functionally involved in Ca 2+ regulation for mediating their beneficial action during lipid overload conditions.
The ability of PSELT to mitigate the alteration of cardiomyocyte redox status following PA prompted us to investigate whether PA could also affect the expression of SELENOT (i.e., another key endogenous antioxidant enzyme) and to evaluate the functional role of endogenous SELENOT in the exogenous PSELT-mediated cytoprotection. Our results showed first that PA time-dependently decreased SELENOT expression, indicating that the lipid overload-induced cardiomyocyte injury may depend, at least in part, on the decrease in a selenoprotein deeply involved in ER homeostasis and exerting a crucial protective role in cardiomyocytes, which also acts as a redox-sensing protein [12,15,16]. We then determined whether PA-dependent SELENOT downregulation could be mediated by FAT/CD36, which plays a pivotal role in the uptake of long-chain fatty acids under both physiological and pathological conditions and is responsible for more than 70% of fatty acid uptake/oxidation in the heart [59,60]. To this aim, we employed SSO, an irreversible inhibitor of FAT/CD36 blocking the uptake of fatty acids, and found that pre-treatment with this compound not only prevented SELENOT downregulation in cells exposed to PA but also inhibited lipotoxic cell death, indicating that PA can affect endogenous SELENOT expression through CD36 and identifying this selenoprotein as a novel molecular actor in CD36 signaling, for which a crucial role in lipid-overloaded hearts has been reported in several studies [[61] and references therein]. Intriguingly, PSELT rescued SELENOT protein expression after PA treatment, indicating important physiological crosstalk between the exogenous peptide and the endogenous protein. To further corroborate this hypothesis, we assessed the effect of PSELT during PA treatment in SELENOT-silenced cardiomyocytes. The results showed that H9c2 cell viability was compromised by SELENOT deficiency per se, a phenomenon that was worsened in SELENOT-silenced cells exposed to PA, confirming the essential role of this selenoprotein in cardiomyocyte survival and function [15,16]. However, PSELT addition did not mitigate lipotoxic damage in SELENOT-silenced cardiomyocytes, suggesting that SELENOT is required for PSELT action and that the combined action of the protein and the peptide is fundamental to protect cardiomyocytes from the lipotoxic insult. The ability of exogenous PSELT to prevent PA-induced SELENOT downregulation could be of particular interest, as SELENOT not only plays a crucial role in cardiomyocyte differentiation and protection but also represents an essential protein for life, as revealed by the fact that it is the only ER selenoprotein whose gene disruption leads to early embryonic lethality, it is the most highly conserved selenoprotein throughout evolution, and it represents one of the highest-priority selenoproteins [62,63].
Based on the ability of CD36 to affect SELENOT expression and to potentially use this mechanism, among others, for mediating cardiac lipotoxicity, we wondered whether PSELT could influence CD36 expression during PA exposure. Our immunofluorescence analysis revealed that the peptide mitigated the PA-induced upregulation of CD36, indicating that PSELT can protect cardiomyocytes and prevent the SELENOT decrease during PA treatment by also reducing the uptake of PA through this transporter. We cannot at present unequivocally demonstrate that PSELT influences the biological activity of CD36 through direct binding to this transporter, since a specific antibody against the portion of SELENOT encompassing the PSELT sequence does not exist. Nevertheless, our immunofluorescence analysis, which also indicated that PSELT was able to stimulate CD36 under basal conditions (without PA), suggests that regulation of CD36 expression through direct binding may occur. On the other hand, we cannot exclude that PSELT, acting intracellularly [16,17], may regulate SELENOT expression during stressful conditions induced by PA, making the PSELT/SELENOT crosstalk particularly complicated.
PSELT Improves Mitochondrial Ultrastructure and Function in Terms of Respiration, Biogenesis, and Dynamics in PA-Treated Cardiomyocytes
There is mounting evidence that maintaining mitochondrial integrity and function is crucial for cardiac cells [64,65]. Here, we determined the effect of PA with or without PSELT on mitochondrial respiration in H9c2 cardiomyocytes. Our data showing a decrease in the oxygen consumption rate in PA-treated cells are consistent with the ability of PA to act as a partial inhibitor of the electron transport chain and to induce oxidative imbalance in mitochondria [21,66]. Interestingly, PSELT improved mitochondrial respiration and mitigated PA-induced mitochondrial respiratory dysfunction.
Mitochondrial alterations can also be related to deficiencies in their biogenesis and dynamics, which play a critical role in lipotoxicity and lipid overload-induced metabolic disorders [67]. To further delineate which of these mechanisms are involved in PSELT-dependent mitochondrial protection during the deleterious effects of PA, we first analyzed the expression levels of PGC1-α, a transcriptional coactivator that plays a key regulatory role in mitochondrial biogenesis. Consistent with several studies [68,69], our results showed that PA markedly reduced PGC1-α expression, an effect that was completely reversed by PSELT, suggesting a potential action of the peptide in promoting mitochondrial biogenesis. Consolidated data have also indicated the presence of cross-regulatory circuits coordinating mitophagy, mitochondrial dynamics, and mitochondrial biogenesis aimed at maintaining the quantity and quality of the mitochondrial network [70]. Therefore, we also studied the potential action of PSELT in mitochondrial dynamics by assessing the expression levels of DRP-1, an essential regulator of mitochondrial fission, and OPA-1, a critical mediator of the fusion process. In our study, PA induced an increase in both markers, indicating a significant imbalance of fission/fusion that will inevitably lead to mitochondrial dysfunction. The increased mitochondrial fusion observed during PA exposure could represent an attempt of the cell to address and mitigate the lipotoxic stimulus, while the increased fission could be required for generating new mitochondria on the one hand, and for removing injured mitochondria through apoptosis on the other hand, to control mitochondrial quality [71]. Although different stimuli, including the excess of lipids, can differentially regulate mitochondrial dynamics, activating or inhibiting mitochondrial fission/fusion depending on the context [72], our data showed that PSELT may restore the levels of both DRP-1 and OPA-1 to normal conditions, thus counteracting PA-induced mitochondrial alterations by re-establishing the mitochondrial fission/fusion balance. Important evidence also highlighted the active participation of the ER in mitochondrial division, establishing a new model linking mitochondrial dynamics and cell death [71,73]. Moreover, saturated fatty acids have been reported to be lipotoxic in cardiomyocytes due to their generation of ER stress [3,74]. Thus, in this study we evaluated ultrastructural changes during PA and PSELT treatments, focusing on mitochondrial morphology and the ER network. Our TEM analyses showed impaired mitochondrial morphology characterized by the swelling of mitochondria and injured mitochondrial cristae as well as mitochondrial embedding within the ER and the presence of alterations in the ER network in cardiomyocytes exposed to PA. These findings are in line with several studies reporting ultrastructural changes secondary to PA exposure that are mainly linked to ROS overproduction and oxidative stress [47,[75][76][77]. Interestingly, PSELT not only mitigated PA-induced cristae remodeling but also restored the conventional mitochondrial ultrastructure and promoted the ER network, reinforcing the hypothesis that the peptide can protect cardiomyocytes against PA by ameliorating the mitochondrial structure and function and the ER network. Notably, we previously demonstrated that SELENOT localizes in the cardiac ER and plays a crucial role in the regulation of ER proteostasis, acting as a "guardian" of ER homeostasis through its redox center [12,15].
Therefore, the ability of PSELT to cross the plasma membrane and potentially localize/target ER should be regarded with particular interest in the context of key functional cooperation between exogenous PSELT and endogenous SELENOT in ER homeostasis under both physiological and lipid overload conditions.
PSELT Attenuates FTIR Spectral-Related Macromolecular Changes Induced by PA in H9c2 Cardiomyocytes
The beneficial action of PSELT against PA prompted us to deepen our understanding of the effect of the peptide on the macromolecular content by a spectroscopic characterization. FTIR spectroscopy analysis revealed positive peaks in the lipid component of PA-treated cells, confirming the ability of saturated fatty acids to induce lipid overload and cardiac lipotoxicity [78][79][80]. We also found that PA negatively affected the content of the α-helix and β-sheet structures of proteins; this effect is likely related to the oxidative burst and ER stress generated by PA, which may activate the UPR, impairing protein synthesis. This hypothesis is consistent with recent findings according to which PA causes a decline in protein synthesis in skeletal muscle by inducing ER stress [46]. We then found that PA also promoted a substantial increase in the content of nucleic acids, which could reflect a first-line response adopted by cardiomyocytes against cellular damage, aimed at reconstructing new mitochondria and microsomes. This hypothesis is in line with previous studies indicating the capability of myocardial tissue to enhance the RNA/DNA concentration for the early renewal of injured cell material following specific insults, such as myocardial infarction [81].
Intriguingly, PSELT markedly reduced PA-induced lipid accumulation, further confirming the direct action of the peptide in reducing the accumulation of lipid droplets, and mitigated the PA-dependent reduction in protein synthesis and increase in nucleic acids. These findings agree with our recent studies showing that SELENOT, as a novel subunit of the oligosaccharyl transferase (OST) complex, is crucial for the regulation of ER proteostasis, hormone N-glycosylation, folding, and secretion [11,13]. Additionally, TEM analyses showing the ability of PSELT to improve the ER ultrastructure and network, together with our recent work indicating that PSELT exerts cardioprotection by relieving ER stress in a rat model of MI/R [16], corroborate the idea that this peptide may preserve the ER structure and function in cardiomyocytes and thus counteract PA-dependent protein synthesis decline.
Another aspect of our spectroscopic findings refers to a possible decrease in membrane fluidity observed in H9c2 cells treated with PA, which could be related to a decrease in the content of cholesterol, unsaturated and saturated lipids, and proteins, as well as to the action of free radicals that can promote molecular oxidation through different mechanisms [82]. Interestingly, PSELT mitigated the loss in cardiomyocyte membrane fluidity, corroborating its ability to restore the lipid profile and to exert antioxidant defense. Indeed, consolidated evidence reported the ability of oxygen free radicals to alter the cell membrane fluidity. Particularly, different oxyradical generating systems can affect the function of cardiac membranes by decreasing the phospholipid N-methylation activity, which is known to determine the membrane fluidity [83]. This effect may represent the result of lipid peroxidation that occurs when a free radical oxidizes an unsaturated lipid chain, leading to the formation of a hydroperoxidized lipid and an alkyl radical, thus altering membrane integrity, fluidity, and function [84,85]. Together, these data further delineate the biological significance of SELENOT in controlling the homoeostasis of cardiomyocytes during oxidative stress triggered by various insults including lipotoxicity.
Conclusions
In conclusion, this study reports the ability of the SELENOT mimetic PSELT to inhibit PA-provoked detrimental effects in H9c2 cardiomyocytes by counteracting cell death, lipid accumulation, redox alteration, CD36 upregulation, and mitochondrial and ER dysfunction (Figure 8). Our results also indicate that PSELT can preserve PA-dependent alterations in the cellular macromolecular content and membrane fluidity. Interestingly, these findings suggest that exogenous PSELT requires endogenous SELENOT-a crucial protein for cardiomyocyte differentiation and protection-to exert its beneficial action during lipotoxicity, further delineating the biological significance of SELENOT in cardiomyocytes and highlighting important physiological crosstalk between the exogenous peptide and the endogenous protein (Figure 8).
Figure S2: PSELT mitigates the detrimental effect of palmitate (PA) on mitochondrial respiration. A significant reduction in basal respiration was observed in cardiomyocytes stimulated with PA, whereas this effect was mitigated by PSELT. Data shown are the mean ± SEM of 2 independent experiments performed in triplicate. * p < 0.05; *** p < 0.001 indicates significance, all relative to the control cells. Figure S3: ATR-FTIR mean spectra of control H9c2 cells (Control) and differently treated cells, specifically with palmitate (PA), sodium palmitate and peptide (PA + PSELT), or peptide only (PSELT). The authors acknowledge the financial support received in the context of the research project "The challenge of dealing with heart failure with preserved ejection fraction: multiple phenotypes with a common pathophysiological substratum? Donazione Anna Maria Ruvinetti", which was partially used for this work.
Conflicts of Interest:
The authors declare no conflict of interest. | 12,783.4 | 2023-03-29T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Wide temperature range magnetoresistance enhancement of La0.67Ca0.33MnO3: NiO nanocomposites
This paper reports the addition of NiO nanoparticles into LCMO nanocomposites through a simplified heat treatment methodology compared to the previous work. Structural analysis showed that LCMO and NiO co-existed in the samples without the formation of other phases and confirmed their composite structure. The grain size distribution of the samples was observed to become narrower and left-skewed as the concentration of NiO nanoparticles increased. This indicates that the NiO nanoparticles were distributed near the grain boundaries or on the grain surfaces. The presence of La, Ca, Mn, O, and Ni was confirmed by EDX analysis, and no additional impurities were noticeable, which revealed that the composites are chemically pure in composition. Temperature-dependent magnetisation measurements (100-300 K) revealed the occurrence of a ferromagnetic to paramagnetic transition. The Curie temperature, TC, was recorded at 260 K for all samples. However, the metal-insulator transition temperature, TMI, was found to decrease from 246 to 120 K as the NiO content increased. The magnetoresistance of the composites was observed to be enhanced over a wide temperature range at 10 kOe. The findings on the LCMO: NiO composites in this work can open a new perspective and unlock their potential for use in magnetic field sensor elements functioning over a wide range of temperatures.
Introduction
Perovskite manganites have been studied with great enthusiasm since the discovery of their renowned phenomenon, the colossal magnetoresistance (CMR) effect, in the 1990s [1,2]. The hole-doped perovskite manganites with the general formula of R 1-x A x MnO 3 , where R is a rare earth ion (R = La, Nd, Pr) and A is a divalent alkaline earth metal ion (A = Ba, Sr, Ca), have drawn much research interest in the scientific community. This is attributed to the correlation between charge, spin, orbital, and lattice degrees of freedom, which makes these materials highly desirable for application in spintronic devices [3][4][5]. The CMR effects can be classified into two types: intrinsic and extrinsic magnetoresistance (MR) [6]. The intrinsic MR is prominent at high magnetic fields, while the extrinsic MR is significantly greater at low magnetic fields (LFMR). LFMR refers to the MR exhibited by a conductor under the influence of an external magnetic field, H ≤ 2 kOe. This intriguing phenomenon has been one of the focus topics of manganite research in recent years, as it is more appealing for modern applications and it can also be enhanced over a wide temperature range [7].
The enhancement of LFMR can be achieved by the addition of a secondary oxide phase into the manganite composite. The composite samples are normally prepared by a 2-step sintering process, and the oxide phase is added to the manganite phase after the first sintering step. This procedure has been demonstrated for LCMO: NiO [8,9], LCMO: Sm 2 O 3 [7], LSMO: NiO [10], and LBCO: WO 3 [11]. The purpose of the 2-step sintering process is to facilitate the formation of the manganite phase through the first sintering step before the addition of the secondary phase, which could prevent a reaction between the two compounds in the composite. However, the 2-step sintering process lengthens the preparation time and may end up with a higher cost.
We have reported a study on the (1 − x) La 0.67 Ca 0.33 MnO 3 : x NiO nanocomposites prepared by the 2-step sintering process [9]. It was the first attempt at synthesising the LCMO: NiO composite in bulk form. In that work, different concentrations of NiO nanoparticles (x = 0.00, 0.05, 0.10, 0.15, and 0.20) were added and the findings revealed that no interfacial diffusion reaction occurred between the two materials. Nonetheless, a slight reduction of LFMR was observed in the composites compared to the pure LCMO sample. This might be caused by the thick boundary layer (excessive addition of NiO) near the grain boundaries and the impaired spin-polarised tunnelling process. Over the years, many attempts have been made to study the influence of NiO addition on manganite composites [8][9][10][12][13][14][15]. NiO is an excellent metal oxide nanoparticle owing to its high magnetic permeability, high electronic conductivity, and phase stability [16].
Therefore, this study was carried out as an extension of the previous work on the (1 − x) LCMO: x NiO nanocomposites and aims to address its shortcomings. The current work simplifies the sol-gel preparation by reducing the number of heat treatment steps. Besides that, the concentration of NiO nanoparticles in the composite has also been reduced. In this work, a systematic study of the structural, microstructural, magnetic, electrical, and magneto-transport properties of (1 − x) LCMO: x NiO, x = 0.00, 0.01, 0.03, 0.05, and 0.10 is presented.
Research methodology
(1 − x) LCMO: x NiO composite samples were synthesised by the sol-gel method using a procedure similar to that reported in our previous work [9]. However, the heat treatment process was simplified in this study. The fine powder obtained from the dried gel was calcined at 500 °C for 5 h. Different contents of NiO (Sigma Aldrich, 99.8%) nanoparticles (< 50 nm) were then added at x = 0.00, 0.01, 0.03, 0.05, and 0.10. These mixed powders were ground for 0.5 h and pressed into 13 mm pellets, then sintered at 800 °C for 6 h.
The phase identification and structural analysis were performed by an X-ray diffractometer (XRD, X'Pert Pro PW 3040) and HighScore Plus software, respectively. The surface morphologies were observed by a field emission scanning electron microscope (FESEM, FEI Nova NanoSEM 230). The compositions of samples were determined using an energy-dispersive X-ray spectrometer (EDX, Oxford Instrument Max 20). A vibrating sample magnetometer (VSM, Lakeshore 7407) was utilised in the temperature range of 100-300 K to characterise the magnetic behaviour and to determine the Curie temperature (T C ). The temperature dependence of the resistivity and magneto-transport were assessed by a standard four-point probe method using a Hall effect measurement system (HMS, Lakeshore 7604) from 80 to 300 K.
Results and discussion
The structural characterisation of the composite samples was done by XRD within the 2θ range from 20° to 80°. Figure 1 shows the XRD patterns of the (1 − x) LCMO: x NiO, x = 0.00, 0.01, 0.03, 0.05, and 0.10 samples in this work. All peaks were successfully matched and indexed by the LCMO and NiO phases, indicating that the composites were crystallised without other phase impurities. The relative peak intensities corresponding to the NiO phase were found to increase with its concentration. However, there is no sign of a NiO peak for the x ≤ 0.03 samples. This can be attributed to the NiO content in those samples being below the XRD detection limit [17,18]. The XRD data were further analysed by Rietveld refinement and the structural parameters are presented in Table 1. The low value of the goodness of fit, χ 2 (close to 1), signifies the good fit between the experimental and calculated patterns. The lattice parameters were found to have negligible changes with increasing content of NiO nanoparticles. The same observation also applied to the bond angle and length between Mn and O ions, which are important indicators of the double exchange mechanism between Mn 3+ and Mn 4+ ions in the perovskite manganites, specifically of the interaction between the delocalised e g electrons and the localized t 2g spins [19]. Besides that, the average crystallite size, D, was computed by Scherrer's equation [20],

D = 0.9 λ / (β cos θ),

and listed in Table 2, where β = β sample − β instrumental is the full width at half maximum (FWHM) in radians, λ is the X-ray wavelength (1.5406 Å) and θ is the Bragg angle of the most intense diffraction peak, which is the (121) peak in this work. The calculated crystallite sizes are found to be in the range of 29-34 nm, where the changes are considered insignificant with the increasing content of NiO in the composites. The analysed structural data reveal that there is no reaction between NiO and LCMO within the composite samples. Thus, it can be deduced that the NiO nanoparticles are segregated on the surface of LCMO grains or at the grain boundaries [7,9,21].

Table 2: The D, GS, T C , T irr , T MI , ρ peak , and ΔT for the composite samples.

Figure 2f shows the typical FESEM micrograph of the composite samples. All samples were observed to have grains with irregular shapes accompanied by distinct grain boundaries. Figure 2a-e show the grain size distribution histograms with Gaussian fits (red solid lines). The average grain size (GS) was determined by selecting 100 particles randomly from the FESEM micrograph. The grain sizes are found to gradually decrease with increasing NiO concentration, as listed in Table 2. Interesting outcomes can be observed from the distribution plots, where the distribution becomes narrower and skews to the left as the NiO content increases. These results are attributed to the segregation of NiO nanoparticles near grain surfaces or grain boundaries and agree with the XRD outcomes, as well as with the observation made in the previous work [9]. Figure 3 shows the scanning region and EDX spectrum of the x = 0.03 sample. The presence of La, Ca, Mn, O, and Ni was confirmed by the EDX analysis, even though the NiO phase was not detected by XRD in this sample due to the equipment limitation. Moreover, the atomic composition from the EDX analysis was found to be very close to the nominal ratio of the composites and no additional impurities were noticeable, revealing that the samples prepared in this work are chemically pure in composition.
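A minimal numerical sketch of the Scherrer estimate used above is given below; the shape factor of 0.9, the instrumental broadening value, and the example peak parameters are assumptions for illustration only and are not taken from the paper.

```python
import numpy as np

def scherrer_size(fwhm_sample_deg, fwhm_instrument_deg, two_theta_deg,
                  wavelength_angstrom=1.5406, shape_factor=0.9):
    """Crystallite size D (nm) from the FWHM of the most intense peak."""
    beta = np.radians(fwhm_sample_deg - fwhm_instrument_deg)  # corrected FWHM, rad
    theta = np.radians(two_theta_deg / 2.0)                   # Bragg angle
    d_angstrom = shape_factor * wavelength_angstrom / (beta * np.cos(theta))
    return d_angstrom / 10.0  # Å -> nm

# Illustrative numbers only: a 0.30° peak width (0.05° instrumental) near 2θ = 32.7°
print(f"D ≈ {scherrer_size(0.30, 0.05, 32.7):.1f} nm")
```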
Field-dependent magnetisation (M-H) measurements were carried out in the temperature range from 100 to 300 K for the x = 0.00 sample, as displayed in Fig. 4a. The sample underwent a ferromagnetic (FM) to paramagnetic (PM) transition, where the hysteresis loops were observed to become narrower and eventually changed to a positive, almost linear M-H curve (Fig. 4a). This is attributed to the influence of thermal fluctuations of the blocked moment across the anisotropy barrier [22]. Figure 4b shows the temperature-dependent zero-field-cooled (ZFC) and field-cooled (FC) magnetisation under an applied field of 200 Oe in the temperature range of 100 to 300 K. For the ZFC measurement, the sample was first cooled from room temperature (300 K) down to 100 K without the application of the magnetic field. Then, the magnetisation was measured under the magnetic field of 200 Oe in the warming cycle. For the FC measurement, the sample was cooled down to 100 K from 300 K in the presence of a magnetic field (200 Oe) and the magnetisation measurement followed under the same applied field of 200 Oe. The magnetisation is irreversible, as shown by the bifurcation between the ZFC and FC curves below the irreversibility temperature (T irr ), which is due to the movement of magnetic domain walls [23] and the presence of magnetic anisotropy [24]. It is known that the magnitude of the ZFC magnetisation depends on the anisotropy and that the difference between FC and ZFC will be larger for highly anisotropic compounds [25]. The difference between these two curves is almost similar for all samples (not shown) and the T irr values (listed in Table 2) also show insignificant changes as the concentration of NiO increases, indicating that the addition of NiO does not disturb the magnetic behaviour of the composites in this work. All samples exhibit a distinct magnetic transition from PM to FM as shown by the M-T plots in Fig. 5. The T C was determined by the inflexion point of dM/dT (inset of Fig. 5) and the values are grouped in Table 2. The T C values observed in this work do not experience any changes (260 K) with the increasing content of NiO. Hence, the magnetic findings here suggest that the LCMO perovskite lattice is free from substitution or the formation of a new phase with the NiO nanoparticles.

Figure 6 shows the temperature dependence of the electrical resistivity for the LCMO: NiO composites. The resistivity increases with the concentration of NiO in the composites, as demonstrated by its peak resistivity, ρ peak (see Table 2). Besides that, all samples showed a distinct metal-insulator transition, T MI , in the range of 120-246 K, as tabulated in Table 2. The T MI decreases with the increasing content of NiO because the NiO nanoparticles near the grain interface act as a barrier to charge transport and cause an increment in the resistivity, as well as a dilution of the DE interaction [26]. The decrease in the T MI value also indicates that the extrinsic transport (inter-grain connectivity) behaviour is highly influenced by the interface and grain boundary effects [27,28]. Other than that, the transport behaviour of the composite can be elucidated by the two-channel conduction model [29][30][31]. The charge transport in the pure LCMO sample is contributed by the direct contact between LCMO grains, whereas the presence of insulating NiO nanoparticles near the grain boundaries in the composites interrupts the charge transport.
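The sketch below illustrates, on synthetic data, how T C could be read off as the inflexion point of the M-T curve (minimum of dM/dT) and T MI as the temperature of the resistivity peak; the model curves and numbers are assumptions, not the measured data of this work.

```python
import numpy as np

def curie_temperature(T, M):
    """T_C taken as the temperature where dM/dT is most negative."""
    return T[np.argmin(np.gradient(M, T))]

def metal_insulator_transition(T, rho):
    """T_MI taken as the temperature of the resistivity maximum."""
    return T[np.argmax(rho)]

# Synthetic curves with transitions near 260 K (magnetic) and 240 K (electrical)
T = np.linspace(80, 300, 441)
M = 50.0 / (1.0 + np.exp((T - 260.0) / 5.0))                   # emu/g, illustrative
rho = np.exp(-((T - 240.0) / 60.0) ** 2) + 0.02 * T / 300.0    # arb. units

print(f"T_C  ≈ {curie_temperature(T, M):.0f} K")
print(f"T_MI ≈ {metal_insulator_transition(T, rho):.0f} K")
```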
T C and T MI are important indicators of the magnetic and electrical properties of perovskite manganites, respectively. These two values are not far from each other for bulk manganites [32]. The discrepancy between T C and T MI is presented as ΔT and summarised in Table 2. The ΔT value increases with the NiO content because the electrical behaviour (T MI ) is highly dependent on the microstructural properties, while the magnetic behaviour (T C ) is a cumulative effect of intrinsic properties in manganites [33,34]. Thus, the changes in T C are insignificant, as observed in the magnetic analysis, considering that the FM-PM transition is an intrinsic and intragrain behaviour [14].
The electrical resistivity data were fitted with theoretical models in the metallic and insulator regions. To gain insight into the conductive nature at low temperatures (T < T MI ), the electrical resistivity data were fitted with the equation below [35][36][37]:

ρ(T) = ρ 0 + ρ 2 T 2 + ρ 4.5 T 4.5 ,

where ρ 0 is the grain/domain boundary effect, ρ 2 T 2 is due to electron-electron scattering and ρ 4.5 T 4.5 represents the electron-magnon scattering process. The experimental data were fitted with this equation in the ρ-T plots as shown in Fig. 6. The fitting quality was determined by the squared linear correlation coefficient (R 2 ), and it is noteworthy that the theoretical model is in good agreement with the experimental data. The fitting parameters are given in Table 3, and it is observed that ρ 0 is the greatest among all parameters, which shows that the residual resistivity due to the grain/domain boundary is mainly responsible for the conduction process in the metallic region. This fitting result complements the insulating nature of the NiO nanoparticles that are scattered near the LCMO grain interface.

Polaron conduction is likely to be responsible for the conduction in the insulator region (T > T MI ) [38]. The experimental data were fitted by Holstein's adiabatic small polaron hopping (ASPH) model (graph not shown), arising from the strong phonon coupling due to the Jahn-Teller distortion at T > θ D /2 (θ D is Debye's temperature; θ D /2 is defined as the temperature at which the deviation from linearity of the ln (ρ/T) against 1/T plot occurs) [39], and by Mott's variable range hopping (VRH) model [40], corresponding to localized electrons in the absence of electron-electron interactions at T MI < T < θ D /2. The ASPH conduction mechanism is given by [41,42]:

ρ = α T exp (E a / k B T),

where α is the temperature-independent coefficient, k B is Boltzmann's constant and E a is the activation energy of the polaron. Figure 7 shows the graph of ln (ρ/T) against 1/T. All plots followed a linear relation in the high-temperature region and the obtained parameters are summarised in Table 3. The E a is recognised as the height of the phase boundaries [13,43], and it was found to increase from 1.520 to 1.630 meV as the NiO content increases. It can be deduced that the magnetic disorder in the composites (addition of NiO) dominates the conduction process in the high-temperature region and leads to a more resistive phase boundary. Thus, a higher E a is required for the sample with a higher NiO concentration [31,44]. The hopping process at T > θ D /2 can be explained by the ASPH model as it accords well with the theoretical conduction mechanism.

The resistivity data at T MI < T < θ D /2 were fitted with the VRH model, which is expressed by [33]:

ρ = ρ o exp [(T o /T) 1/4 ],

where ρ o is the residual resistivity and T o is the Mott characteristic temperature, which is related to the density of states at the Fermi level, N(E F ) [46], and to the inverse localization length, estimated to be 2.22 nm −1 for manganites [45]; T o can be calculated from the slope of the graph of ln (ρ) against T −1/4 (not shown), with k B being Boltzmann's constant. The parameters of the VRH fit are listed in Table 3. The obtained N(E F ) for the x = 0.00 sample is in good agreement with the reported value for La 0.67 Ca 0.33 MnO 3 manganite [47].

Table 3: Parameters obtained corresponding to the best fit of the experimental data based on the equations above.

Figure 8a illustrates the resistivity as a function of temperature (ρ-T) for the x = 0.00 sample under a magnetic field of 0 kOe, 2 kOe, and 10 kOe.
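As a sketch of the fitting procedures just described, the snippet below fits the metallic-region model ρ = ρ 0 + ρ 2 T 2 + ρ 4.5 T 4.5 with a nonlinear least-squares routine and extracts E a from the linearised ASPH relation ln(ρ/T) = ln α + E a /(k B T); the data arrays, initial guesses, and units are assumptions rather than the actual datasets of this work.

```python
import numpy as np
from scipy.optimize import curve_fit

k_B = 8.617333262e-5  # Boltzmann constant in eV/K

def metallic_model(T, rho0, rho2, rho45):
    # rho = rho0 + rho2*T**2 + rho4.5*T**4.5 (boundary, e-e, e-magnon terms)
    return rho0 + rho2 * T**2 + rho45 * T**4.5

def fit_metallic(T, rho):
    """Fit the low-temperature (T < T_MI) resistivity data."""
    popt, _ = curve_fit(metallic_model, T, rho,
                        p0=(rho[0], 1e-7, 1e-12), maxfev=10000)
    return popt  # (rho0, rho2, rho4.5)

def fit_asph(T, rho):
    """Linear fit of ln(rho/T) vs 1/T in the high-temperature region;
    the slope gives Ea/k_B and the intercept gives ln(alpha)."""
    slope, intercept = np.polyfit(1.0 / T, np.log(rho / T), 1)
    return slope * k_B, np.exp(intercept)  # Ea in eV, alpha
```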
The magnetic field causes local ordering of the magnetic spins and suppression of magnetic spin scattering, thus enhancing the electron hopping process and lowering the value of the resistivity. This phenomenon is known as magnetoresistance (MR) and its magnitude is defined by Eq. (6) [32]:

MR (%) = [(ρ O − ρ H ) / ρ O ] × 100,

where ρ O and ρ H represent the resistivities without and with an applied magnetic field, respectively. The MR (%) curves as a function of the externally applied magnetic field (0-10 kOe) from 80 to 300 K for the pure LCMO sample are depicted in Fig. 8b. The MR (%) curves display two regions as the magnetic field increases. The first region (H ≤ 2 kOe), or LFMR, exhibits a steeper slope attributed to spin-polarised tunnelling across the grain boundaries [28]. The disordered spins in the manganite align when the magnetic field is first applied, causing a drastic drop in resistivity in the low-field region due to the enhancement of electron hopping. Then, the MR (%) increases linearly with the applied magnetic field with a reduced slope in the high-field region (> 2 kOe), which can be explained by the relatively slower rotation of the grain core [39,48]. As observed from the plots in Fig. 8, LFMR is more pronounced in the low-temperature region and decreases as the temperature increases.
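A one-line helper for Eq. (6), using illustrative resistivity values rather than measured ones, might look as follows.

```python
def magnetoresistance_percent(rho_zero_field, rho_in_field):
    """MR (%) = (rho_O - rho_H) / rho_O * 100, as in Eq. (6)."""
    return 100.0 * (rho_zero_field - rho_in_field) / rho_zero_field

# Example: a 20% drop in resistivity under the applied field gives MR = 20%
print(magnetoresistance_percent(1.00, 0.80))
```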
To investigate the variation of the MR (%) of the LCMO: NiO composites with magnetic field, plots of MR (%) against T in the magnetic field range of 1-10 kOe are presented in Fig. 9a-d (temperature dependence of MR (%) for the LCMO: NiO composite samples at (a) 1 kOe, (b) 2 kOe, (c) 5 kOe, and (d) 10 kOe). A slight improvement of LFMR was observed for the composite samples above 280 K, as shown in Fig. 9b. Moreover, the MR (%) of LCMO and its composites was found to increase monotonically with decreasing temperature at low magnetic fields (1 kOe and 2 kOe). On the other hand, the MR (%) at 10 kOe was enhanced over a wide range of temperatures, as demonstrated in Fig. 9d. This enhancement is attributed to the suppression of spin fluctuations, where the spins are aligned parallel to the magnetic field and exhibit a higher MR (%) value near the T MI .

Since this study is an extension of our previous work [9], several comparisons have been made to highlight the improvement. First, the structural parameters presented here are comparable with those of our previous work, indicating that the crystal structure formation is not influenced by the shortened heat treatment process. However, the microstructural behaviour (average grain size) of the composites in this work is highly suppressed due to the short heating period. The grain sizes in the current work are almost half of the values in the previous work, attributed to the lower energy supplied during grain growth. The sample resistivity here is one order of magnitude lower when comparing the x = 0.00 samples of the two studies. This is an unusual trend because a smaller grain size should contribute to a higher resistivity, as demonstrated by previous work [32,33,49]. This might be attributed to the composites obtained in this work being denser and more compact with better grain connectivity, which favours the electron hopping process even though more grain boundaries have been formed in the samples. However, high-temperature resistivity data fitting by the ASPH model showed that the E a obtained here is almost similar to that of the previous work. Hence, we can deduce that the simplified heat treatment process of the LCMO: NiO composites has favoured the conduction mechanism in the metallic region but has almost no effect on the Jahn-Teller mechanism. Lastly, the enhancement of MR has also been observed over a wide range of temperatures in the current work.
Conclusion
The LCMO: NiO nanocomposites have been successfully synthesised with a simplified heat treatment process by the sol-gel method in this work. XRD patterns confirmed the coexistence of the LCMO and NiO phases in the composites with no phase impurities. Morphology analysis revealed that the average grain size decreases as the NiO content increases, attributed to the segregation of NiO nanoparticles near the grain interfaces or grain boundaries. The chemical homogeneity of the samples and their elements was confirmed by EDX. The T C values determined from the M-T plots indicated that the samples do not experience any change in T C (260 K) with the increasing content of NiO. The presence of insulating NiO nanoparticles near the grain boundaries in the composites acts as a charge transport barrier that suppresses the electron hopping process and causes the increment in the electrical resistivity. The MR (%) at 10 kOe has been enhanced over a wide range of temperatures, which is attributed to the suppression of spin fluctuations, where the spins are aligned parallel to the magnetic field and exhibit a higher MR (%) value near the T MI . In conclusion, this work has demonstrated an important approach to obtaining MR enhancement over a wide temperature range. It also constitutes an important step towards the development of an improved application of LCMO: NiO nanocomposites for future spintronic and memory devices.
"Materials Science",
"Physics"
] |
A conformal Yamabe problem with potential on the euclidean space
We consider, in the Euclidean setting, a conformal Yamabe-type equation related to a potential generalization of the classical constant scalar curvature problem, which naturally arises in the study of Ricci soliton structures. We prove existence and nonexistence results, focusing on the radial case, under some general hypotheses on the potential.
Introduction
In [4] the first and the third author considered "potential" generalizations of some canonical metrics on smooth complete Riemannian manifolds. In this paper we focus our attention on one of those classes, namely f -Yamabe metrics. We recall that, given a n-dimensional Riemannian manifold (M, g), where g is the metric, and a smooth function f ∈ C ∞ (M ), we say that the triple (M, g, f ) ∈ Y f if and only if it satisfies the condition

(1.1) ∇R = 2 Ric(∇f, ·) ,

where Ric and R are, respectively, the Ricci and the scalar curvature of g and ∇ denotes the Levi-Civita connection associated to g. In a local orthonormal frame {e i }, i = 1, . . . , n, (1.1) becomes

∇ j R = 2 R ij ∇ i f ,

where R ij = Ric(e i , e j ). Note that we are using the Einstein summation convention over repeated indices. This equation is a meaningful generalization of the one for constant scalar curvature metrics and naturally arises in the study of Ricci soliton structures (for a general overview see [3]). Moreover, it is clear that any Ricci flat metric satisfies (1.1), for any function f and, more generally, any product of a Ricci flat metric with a metric with constant scalar curvature solves (1.1), for any function f which depends only on the first factor. In the same spirit of the classical Yamabe problem it is natural to address the following questions:

(A) having fixed f ∈ C ∞ (M ), does there exist a metric g such that (M, g, f ) ∈ Y f ?

(B) having fixed f ∈ C ∞ (M ) and a metric g, does there exist a conformal metric g̃ in the conformal class [g] such that (M, g̃, f ) ∈ Y f ?

More generally, one could ask the question

(C) does there exist a metric g and a smooth function f such that (M, g, f ) ∈ Y f ?

Clearly the answer to (C) is positive, since it is always possible to construct a (complete) metric with constant (negative) scalar curvature ([1] and [2]). Furthermore, when f is constant, (B) boils down to the well known Yamabe problem, which is completely solved when M is compact (see e.g. [9]). We will refer to (B) as the conformal f -Yamabe problem. In this paper we consider problem (B) (when f is not constant) on the Euclidean space R n endowed with the standard flat metric g R n . In particular, in dimension four, we prove an existence result for radial solutions (see Section 5), together with some partial nonexistence results (Section 4).
ODE formulation of the conformal f -Yamabe problem
Let (M, g) be a smooth n-dimensional Riemannian manifold, n ≥ 2, and let f ∈ C ∞ (M ). It is well known (see for instance [5]) that, if g̃ = e 2w g ∈ [g] for some w ∈ C ∞ (M ), then the following formulas hold:

Ric g̃ = Ric − (n − 2) (∇ 2 w − dw ⊗ dw) − (∆w + (n − 2) |∇w| 2 ) g ,

R g̃ = e −2w (R − 2(n − 1) ∆w − (n − 1)(n − 2) |∇w| 2 ) ,

where ∇ 2 is the Hessian and ∆ = g ij ∇ 2 ij is the Laplace-Beltrami operator of g. A computation shows that (M, g̃, f ) = (M, e 2w g, f ) ∈ Y f if and only if the function w solves the system of PDEs (2.1). In particular, since (R n , g R n ) is Ricci flat, (R n , g̃, f ) = (R n , e 2w g R n , f ) ∈ Y f if and only if w solves the system of PDEs (2.2). To fully exploit the symmetries of the Euclidean space, it is reasonable to start our analysis by considering radial solutions w = w(r) of (2.2) for a given radial function f = f (r), where r denotes the distance function from the origin. In this case, in standard polar coordinates, one has

g R n = dr 2 + r 2 g S n−1 , dw = w ′ (r) dr and df = f ′ (r) dr ,

and a computation shows that system (2.2) boils down to a second order nonlinear ODE (2.3) for the function u(r) := w ′ (r), where h(r) := f ′ (r). Note that if n = 2, then the cubic term disappears in (2.3). We then impose the initial conditions

(2.4) u(0) = 0 , u ′ (0) = α ,

for some α ∈ R. If u solves (2.3)-(2.4), then u(r) = αr + O(r 2 ) as r → 0, which shows that, by combining suitably the terms in (2.3), we obtain finite limits as r → 0. The existence and uniqueness of a solution of (2.3)-(2.4) can then be proved rigorously by adapting the arguments of Proposition 1 in [11]: one needs to combine the Ascoli-Arzelà Theorem with the Schauder fixed point Theorem in order to obtain existence of a solution.
Then the solution is unique as long as it can be continued [6,Proposition 4.2]. Before stating our existence and nonexistence results, let us discuss heuristically the structure of (2.3).
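Although the explicit form of (2.3) is not reproduced in this text, the following sketch indicates how the initial value problem (2.3)-(2.4) could be explored numerically once the right-hand side is known: the function rhs_23 below is a placeholder to be replaced by the actual expression u ″ = F(r, u, u ′ ; h), and the integration starts at a small r = eps, with initial data consistent with u(r) = αr + O(r 2 ), to avoid the singularity of the coefficients at the origin.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs_23(r, y, h, n):
    """First-order system for (2.3): y = (u, u'); F is a placeholder."""
    u, up = y
    F = 0.0  # placeholder: insert the actual right-hand side of (2.3) here
    return [up, F]

def integrate_radial(alpha, h=lambda r: 0.0, n=4, eps=1e-6, r_max=50.0):
    # Initial data consistent with u(r) = alpha*r + O(r^2) near the origin
    y0 = [alpha * eps, alpha]
    return solve_ivp(rhs_23, (eps, r_max), y0, args=(h, n),
                     dense_output=True, rtol=1e-9, atol=1e-12)

sol = integrate_radial(alpha=1.0)
print(sol.t[-1], sol.y[0, -1])
```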
Heuristic preliminaries
We first notice that, if n ≥ 3, then there exist exactly two singular (negative) solutions of (2.3) of the type cr⁻¹, regardless of the explicit form of h. This fact suggests that the "interesting dynamics" for (2.3) occurs when u(r) < 0 and that global solutions of (2.3) are more likely to be prevalently negative. If n = 2, then the functions u(r) = cr⁻¹ are singular solutions of (2.3) for any c ≠ 0; in particular, there exist infinitely many positive singular solutions and the dynamics appears much more chaotic. It is quite useful to consider the two functions P and Q, defined for all (r, y) ∈ R⁺ × R, in terms of which (2.3) may be written in normal form. Depending on h ∈ C⁰[0, ∞), we define two regions. Clearly, I_h contains a right neighborhood of r = 0 and is therefore nonempty for all h. It is also straightforward that: • if r ∈ I_h, then P(r, y) = 0 if and only if y = 0; moreover, P has the same sign as y; • if r ∈ I_h, then we may write P(r, y) in factored form, and hence P(r, y) vanishes if and only if y equals one of the functions ϕ(r), ψ(r). Note that ψ(r) > ϕ(r) for all r ∈ I_h but, while ϕ(r) < 0 for all r ∈ I_h, the sign of ψ(r) may vary and it is the opposite of the sign of rh(r) + n − 1; in particular, ψ(r) < 0 in a right neighborhood of r = 0. One expects a crucial role for the existence results to be played by the signs of Q and P. However, the overall picture is not completely clear. To see this, consider the trivial case h ≡ 0, for which the solutions u_a can be written explicitly. First notice that if a < 0 (so that u_a(0) > 0), then u_a blows up as r → 1/|a|. Therefore, u_a is a global solution of (2.3) if and only if a > 0. Simple computations then show that if n = 3, 4, 5 then ∃ρ > 0 such that ϕ(r) < u_1(r) < ψ(r) ∀r > ρ, while if n ≥ 6 then ∃ρ > 0 such that u_1(r) < ϕ(r) ∀r > ρ.
In the left picture we see that the graph of u 1 (thick line) eventually lies inside Γ while in the right picture it eventually lies outside. Therefore, the function P (r, u 1 (r)) does not always have the same sign as r → ∞.
Nonexistence results
We can then prove the following (partial) nonexistence results. The proof of Theorem 4.1 is given in Section 7. As a by-product, the very same proof enables us to obtain a related statement. Concerning nonexistence of negative solutions, a weaker result holds. First of all, we put together the three static terms; this shows that the static term changes sign whenever the graph of u crosses one of the two hyperbolas. We then rewrite (2.3) accordingly. Under two additional assumptions, the resulting equation tells us that the map r ↦ r^{n−1} u′(r) is decreasing for r > R. In particular, we have that u′(r) < 0 and u(r) < h_1(r) for all r > R. Finally, this yields the existence of γ > 0 such that (1/r^{n−1}) (r^{n−1} u′(r))′ ≤ γ u(r)³ for all r > R.
Existence results
We start with a simple but interesting example, which suggests that (2.3) is more likely to have negative solutions whenever h itself is negative. We prove that this is the case, at least in dimension n = 4.
Remarks and open problems
We discuss some open problems related to conformal f-Yamabe metrics and to solutions of equation (2.1).
(1) In Theorem 4.1 we stated a partial nonexistence result for radial solutions in the Euclidean space, while Theorem 5.1 provides a general existence result. It would be interesting to prove a sharp condition on the potential function f (or on its derivative) ensuring existence of global solutions to (2.3).
(2) It is well known [8] that global positive solutions of the Yamabe equation have to be radial (and are thus classified). One could ask the same question for (general) solutions to (2.2). For a given f ∈ C∞(R^n), are there any nonradial solutions w? If f is radial, are all solutions to (2.2) radial?
(3) In this paper we studied conformal f-Yamabe metrics for (R^n, g_{R^n}). What about other rotationally symmetric spaces? In particular, what can we say for the hyperbolic space (H^n, g_{H^n}) or the round sphere (S^n, g_{S^n})?
(4) In the existence result (Theorem 5.1) the dimension n = 4 seems to be peculiar, at least from the analytic point of view. Is there a geometric interpretation of this fact?
Proof of Theorem 4.1
Throughout this proof we will need the following particular class of test functions.
Definition 7.1. Let ρ > 0. We say that a nonnegative function φ ∈ C²_c[0, ∞) satisfies the ρ-property if it satisfies conditions (7.1) and (7.2). It is clear that such functions exist; to see this, it suffices to replace any φ satisfying (7.1) with a power φ^k for k sufficiently large, so that (7.2) is also satisfied.
For the proof of Theorem 4.1, we first observe that, since α > 0, the solution of (2.3)-(2.4) is positive and strictly increasing in a right neighborhood of r = 0, say in some maximal interval (0, R). Clearly, among u and u′, the first one which can vanish is u′. But if u′(R) = 0 then, using the lower bound for h, we see that (2.3) yields a contradiction. Therefore, u′ cannot vanish and two cases, (i) and (ii) in (7.3), may occur. The proof will be complete if we show that (ii) occurs. At this point, we distinguish two cases.
In order to prove (ii) in (7.3), we argue by contradiction by assuming that R = ∞, so that u, u′ > 0 for all r > 0. From the assumptions and (2.3), we then infer estimate (7.4) (recall n ≤ 4). To reach a contradiction we need the following estimate, inspired by the method developed by Mitidieri-Pohožaev [10] (see also the proof of [7, Proposition 5]).
Take a function φ₁ satisfying the 1-property and observe that the function φ_ρ(r) := φ₁(r/ρ), ∀ρ > 1, satisfies the ρ-property. Therefore, for all ε > 0, from (7.4) and Lemma 7.2 we infer an integral estimate. Take 0 < ε < 3(n − 2); with the change of variable r = ρt, the latter inequality becomes an estimate over a fixed interval. Since u is increasing on R⁺, we have u(ρt) ≥ u(t) for all ρ > 1, so that the left hand side of this inequality is positive and increasing for ρ ≥ 1. By letting ρ → ∞, the right hand side tends to 0 and this leads to a contradiction which rules out case (i). Hence, case (ii) occurs and the solution u of (2.3)-(2.4) with α > 0 cannot be continued to the whole interval [0, ∞). This completes the proof of Theorem 4.1 in the case n = 3, 4.
Proof. In this proof we will use the Young inequality in the following form:
Let v(r) = ru(r); then v satisfies equation (8.1). If u satisfies (2.3)-(2.4) with α < 0, then u(r) and u′(r) are strictly negative in a right neighborhood of r = 0. By definition of v, also v(r) is strictly negative in a right neighborhood of r = 0. We claim that −2 < v(r) < 0 for all r > 0. If not, let R > 0 be the first time where (8.2) either v(R) = 0 or v(R) = −2 holds.
Multiply (8.1) by r² v′(r) and integrate over [0, R]. If n = 4 and h(r) ≤ 0, we obtain a contradiction, which shows that R does not exist and therefore −2 < v(r) < 0 for all r > 0. This proves the claim. Hence, by (8.1), also v′ and v″ remain bounded and the solution exists globally. This concludes the proof of Theorem 5.1. | 3,185 | 2019-07-19T00:00:00.000 | [ "Mathematics" ] |
Genetic Algorithm Based Improved ESTAR Nonlinear Models for Modelling Sunspot Numbers and Global Temperatures
Smooth Transition Autoregressive (STAR) models are employed to describe cyclical data. Because estimating the parameters of STAR models with conventional nonlinear methods is time-consuming, a genetic algorithm (GA), a powerful optimization procedure, was applied instead. Further, optimal one-step and two-step ahead forecasts, along with their forecast error variances, are derived theoretically for the fitted STAR model using conditional expectations. Given the importance of the issue of global warming, the current paper aims to model the sunspot numbers and global mean temperatures. Finally, appropriate tests are carried out to check whether the model employed is appropriate for the datasets.
Introduction
Linear time-series models, viz. the Autoregressive Integrated Moving Average (ARIMA) family, are useful for modelling and forecasting time-series datasets in order to extract substantial statistics and other characteristics of the data. They take into consideration that data points taken over different time epochs may have internal structure, such as autocorrelation, trend or seasonal variation. A time series model is linear if it can be written as a linear function of past observations, errors and other exogenous variables. These linear models have gained much popularity as they are relatively simple and, further, a good number of computer software packages have inbuilt routines for fitting such models. The linear model, however, is inadequate, as it is not capable of capturing many important characteristic features such as the presence of non-Gaussian colored noise in the error term, cyclicity, nonlinearity, limit cycle behavior, volatility clustering, leverage effects, and chaotic behavior of time-series datasets. These characteristic features of time-series datasets can be modelled efficiently using various nonlinear time series models available in the literature. Keeping this in mind, time-series analysis has moved towards the nonlinear domain over the last four decades, as nonlinear models not only provide a better fit to the data but are also capable of capturing features which cannot be captured by linear models alone. Further, nonlinear models are more suitable for accurately relating the dynamics of a time series and for making better multi-step-ahead forecasts. To this end, to capture cyclical phenomena in time-series data, the Smooth Transition Autoregressive (STAR) family of parametric nonlinear time-series models was propounded by Terasvirta (1994). There are mainly two models under the STAR family: the Exponential Smooth Transition Autoregressive (ESTAR) and the Logistic Smooth Transition Autoregressive (LSTAR).
The advantages of using STAR models can be gauged from the fact that these models are of particular importance for describing data sets that have cyclical variations along with chaotic periods, for example changes in economic aggregates influenced by the behavior of many different agents, where it is highly unlikely that all agents react simultaneously to a given economic signal. In STAR models, transitions are possible along a continuous scale, making the regime-switching process smooth, which helps overcome the abrupt switch in parameter values characteristic of simpler Threshold Autoregressive (TAR) models.
The importance of precise modelling of climatic variables such as temperature and sunspot numbers can be judged from the fact that researchers have used a variety of models ranging from parametric to nonparametric and, recently, machine learning too (Citakoglu et al.). In this manuscript, after a brief introduction in Section 1, Section 2 describes ESTAR models in detail. In Section 3, a brief narrative on the fitting of STAR models is given. In Section 4, optimal one-step and two-step ahead forecasts of the ESTAR model are theoretically derived. Section 5 illustrates the procedure with practical time-series datasets. Finally, in Section 6 some concluding remarks are given and the paper ends with a delineation of some pertinent research problems for future work.
Description of ESTAR Model
Over the last few decades, nonlinear time-series models have been applied in many fields of research. One such parametric family is the State dependent models (SDM), which are the general class of nonlinear models that include the Bilinear and Threshold autoregressive as special cases, but allow much greater flexibility. SDM are essentially Autoregressive moving average (ARMA) models in which the parameters are functions of past values of time-series (Young et al., 2001). Although SDM are of a general nature, they are nevertheless amenable to statistical analysis. This approach to nonlinear time-series analysis offers two major advantages, namely (i) SDM may be used directly in connection with the problem of forecasting.
(ii) Since these can be fitted to data without any specific prior assumptions about the form of the nonlinearity, they may be used to give an "overview" of the nonlinearity inherent in the data, and thus indicate whether, for example, a Bilinear, a Threshold Autoregressive, or even a linear model is appropriate. Toivonen (2003) derived discrete-time models with state-dependent parameters for nonlinear systems. Linearization followed by integration over the sampling interval was used to represent these systems. The representation gave an exact solution of the nonlinear system. Numerical simulations also showed that these models could be represented by a neural network approximator trained on input-output data. Models which allow for state-dependent or regime-switching behavior have been most popular for such analysis. One of the most popular models with regime-switching behavior is the Threshold Autoregressive (TAR) model. The central idea of TAR models is to describe a given stochastic process by a piecewise linear autoregressive model, where the determination as to whether each of the sub-models is active or not is driven by the value of a known variable. The Smooth Transition Autoregressive (STAR) family of parametric nonlinear time-series models is a generalization of the threshold models, avoiding discontinuities in the autoregressive parameters, as the transition from one regime to the next is governed by a continuous nonlinear function.
It is to be noted that the STAR family of parametric nonlinear time-series models has the potential to capture the non-Gaussian characteristics at different time epochs of time-series datasets. The STAR model (1) expresses the current observation as the sum of a linear autoregressive part and a second autoregressive part weighted by a transition function, where the errors form a sequence of independent normal (0, σ²) variables. Two different transition functions are used in smooth transition autoregressive models: a logistic function and an exponential function. The model with the logistic transition function (2) is called the LSTAR model, and the model with the exponential transition function (3) is called the ESTAR model. The difference between the two transition functions is that the logistic function changes monotonically with the deviation of the transition variable from the location parameter c, while the exponential function changes symmetrically about c. It should be kept in mind, however, that both functions become steeper when γ is large, which means that the speed of the transition is more rapid. One of the characteristic features of (1) is that the minimum value of the transition function can be equal to zero. It is also worth noting that other linear as well as nonlinear models can be obtained as special cases of the STAR specification.
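The displayed forms of equations (1)-(3) did not survive extraction, so the sketch below uses the standard Teräsvirta (1994) transition functions as an assumption: a logistic function for LSTAR and an exponential function for ESTAR, together with a toy ESTAR(1) simulator. All function and variable names are illustrative and are not taken from the paper.

```python
import numpy as np

# Standard STAR transition functions (Terasvirta, 1994); assumed forms, since the
# paper's own displays (1)-(3) are not recoverable from this extraction.
def logistic_transition(s, gamma, c):
    """LSTAR transition: changes monotonically with (s - c)."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def exponential_transition(s, gamma, c):
    """ESTAR transition: symmetric about c, U-shaped in (s - c)."""
    return 1.0 - np.exp(-gamma * (s - c) ** 2)

def simulate_estar1(n, phi, theta, gamma, c, sigma=1.0, seed=0):
    """Simulate a toy ESTAR(1) series:
       y_t = phi*y_{t-1} + theta*y_{t-1}*G(y_{t-1}; gamma, c) + eps_t."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        g = exponential_transition(y[t - 1], gamma, c)
        y[t] = phi * y[t - 1] + theta * y[t - 1] * g + rng.normal(0.0, sigma)
    return y

if __name__ == "__main__":
    series = simulate_estar1(500, phi=0.9, theta=-0.5, gamma=2.0, c=0.0)
    print(series[:5])
```

Because γ controls the steepness of the transition, raising it in the simulator above makes the regime switch visibly sharper, which mirrors the remark in the text.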
Fitting of STAR models
The
An illustration
The data are at monthly frequency for global land-ocean temperatures (GT) and sunspot numbers. Preliminary tests indicated that the series cannot be regarded as a static transformation of a linear Gaussian random process. Further, it was also found that the periodogram ordinates were significant, thereby signifying the presence of cyclicity. Obviously, the conventional ARIMA modelling approach may not be able to describe these two time-series datasets satisfactorily.
The preliminary data analysis justifies the application of ESTAR nonlinear time-series model to describe the two time-series dataset. Accordingly, several ESTAR models were fitted to the data and the best model was identified on the basis of minimum AIC criterion. The best ESTAR models for global land-ocean temperatures and sunspot numbers are respectively obtained as
Derivation of formulae for out-of-sample forecasts
One of the main objectives when dealing with time-series analysis is the forecasting of future values of the series of interest. In this section, we confine our attention to deriving out-of-sample one-step and two-step forecast formulae in respect of the ESTAR model. However, formulae for more than two-step ahead forecasts, though quite complicated, can be derived along similar lines. The optimal predictor which minimizes the mean one-step-ahead squared prediction error is the conditional expectation, from which the one-step ahead forecast error variance follows. For out-of-sample forecasting, parameters are replaced by their corresponding estimates. Using the above procedure, one-step optimal forecasts are computed. To validate whether the formula provides reasonable forecasts, the errors were tested for randomness using the runs test available in the R software package. The test statistics for sunspot numbers and global mean temperature were calculated as 0.148 and −1.024, respectively, with corresponding p-values of 0.88 and 0.31. So, we can infer that the error series are random for both datasets, thereby implying that the model was able to explain the data in an appropriate manner.
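As a hedged illustration of the forecasting and diagnostic steps described above (the paper's own forecast formulae are not reproduced here), the sketch below plugs the last observation into an assumed ESTAR(1) skeleton to obtain a one-step-ahead conditional-mean forecast and applies a simple Wald-Wolfowitz runs test to the forecast errors; the parameter names and the ESTAR(1) form are assumptions, not the fitted models of Section 5.

```python
import numpy as np
from math import erf, sqrt

def estar1_one_step_forecast(y_last, phi, theta, gamma, c):
    """One-step-ahead conditional-mean forecast of an assumed ESTAR(1) model.
    Because the regressor y_t is known at the forecast origin t, E[y_{t+1}|F_t]
    is obtained by plugging y_t into the model skeleton (no simulation needed)."""
    g = 1.0 - np.exp(-gamma * (y_last - c) ** 2)
    return phi * y_last + theta * y_last * g

def runs_test(residuals):
    """Wald-Wolfowitz runs test on the signs of a residual series;
    returns the normal test statistic and a two-sided p-value."""
    x = np.asarray(residuals, dtype=float)
    signs = np.sign(x[x != 0])
    n1 = np.sum(signs > 0)
    n2 = np.sum(signs < 0)
    runs = 1 + np.sum(signs[1:] != signs[:-1])
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n1 - n2) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mu) / np.sqrt(var)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p
```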
Conclusions
In this present investigation, a methodology for fitting the ESTAR nonlinear time-series model through a Genetic Algorithm is described. It is suggested that, for modelling and forecasting cyclical time-series data, researchers should apply this model rather than the ARIMA model. Given the importance of the issue of global warming, the methodology is employed for modelling the sunspot numbers and global temperature datasets, whose salient features have been captured by the model employed. Further, optimal forecast formulae have also been derived. As future work, effort can also be directed towards considering more parsimonious subset models. | 2,157.4 | 2021-08-17T00:00:00.000 | [ "Environmental Science", "Mathematics", "Physics" ] |
An ML-Based Estimate and the Cramer-Rao Bound for Data-Aided Channel Estimation in KSP-OFDM
We consider the Cramer-Rao bound (CRB) for data-aided channel estimation for OFDM with known symbol padding (KSP-OFDM). The pilot symbols used to estimate the channel are positioned not only in the guard interval but also on some of the OFDM carriers, in order to improve the estimation accuracy for a given guard interval length. As the true CRB is very hard to evaluate, we derive an approximate analytical expression for the CRB, that is, the Gaussian CRB (GCRB), which is accurate for large block sizes. This derivation involves an invertible linear transformation of the received samples, yielding an observation vector of which a number of components are (nearly) independent of the unknown information-bearing data symbols. The low SNR limit of the GCRB is obtained by ignoring the presence of the data symbols in the received signals. At high SNR, the GCRB is mainly determined by the observations that are (nearly) independent of the data symbols; the additional information provided by the other observations is negligible. Both SNR limits are inversely proportional to the SNR. The GCRB is essentially independent of the FFT size and the used pilot sequence, and inversely proportional to the number of pilots. For a given number of pilot symbols, the CRB slightly increases with the guard interval length. Further, a low complexity ML-based channel estimator is derived from the observation subset that is (nearly) independent of the data symbols. Although this estimator exploits only a part of the observation, its mean-squared error (MSE) performance is close the CRB for a large range of SNR. However, at high SNR, the MSE reaches an error floor caused by the residual presence of data symbols in the considered observation subset.
INTRODUCTION
Multicarrier systems have received considerable attention for high data rate communications [1] because of their robustness to channel dispersion. To cope with channel dispersion, the multicarrier system inserts between blocks of data a guard interval, with a length larger than the channel impulse response. The most commonly used types of guard interval are cyclic prefix, zero padding, and known symbol padding. In cyclic prefix OFDM, the guard interval consists of a cyclic extension of the data block whereas in zero-padding OFDM, no signal is transmitted during the guard interval [2]. In OFDM with known symbol padding, (KSP-OFDM), which is considered in this paper, the guard interval consists of a number of known samples [3][4][5]. One of the advantages of KSP-OFDM as compared to CP-OFDM and ZP-OFDM is its improved timing synchronization ability: in CP-OFDM and ZP-OFDM, low complexity timing synchronization algorithms like the Schmidl-Cox [6] algorithm, typically result in an ambiguity of the timing estimate over the length of the guard interval, whereas in KSP-OFDM, low complexity timing synchronization algorithms can be found avoiding this ambiguity problem by properly selecting the samples of the guard interval [7].
In KSP-OFDM, the known samples from the guard interval can serve as pilot symbols to obtain a data-aided estimate of the channel. However, as the length of the guard interval is typically small compared to the FFT length (to keep the efficiency of the multicarrier system as high as possible), the number of known samples is typically too small to obtain an accurate channel estimate. To improve the channel estimation accuracy, the number of pilot symbols must be increased. This can be done by increasing the guard interval length or by keeping the length of the guard interval constant and replacing, in the data part of the signal, some data carriers by pilot carriers. As the former strategy results in a stronger reduction of the OFDM system efficiency than the latter [8], the latter strategy will be considered.
In this paper, we derive an approximative analytical expression for the Gaussian Cramer-Rao bound (GCRB) for channel estimation when the pilot symbols are distributed over the guard interval and pilot carriers. The paper is organized as follows. In Section 2, we describe the system and determine the GCRB. Further, we derive a low complexity ML-based estimate for the channel in Section 3. Numerical results are given in Section 4 and the conclusions are drawn in Section 5.
System model
In KSP-OFDM, the data symbols to be transmitted are grouped into blocks of N symbols: the ith symbol block is denoted a_i = (a_i(0), . . . , a_i(N − 1))^T. As explained below, a_i contains information-bearing data symbols and pilot symbols. The symbols a_i are then modulated on the OFDM carriers using an N-point inverse FFT. The guard interval, consisting of ν known samples, is inserted after each OFDM symbol (this corresponds to the dark-gray area in Figure 1(a)), resulting in the N + ν time-domain samples s_i of block i given by (1), where F is the N × N matrix corresponding to the FFT operation, that is, F_{k,l} = (1/√N) e^{−j2π(kl/N)}, and b_g = (b_g(0), . . . , b_g(ν − 1))^T corresponds to the ν known samples of the guard interval.
The sequence (1) is transmitted over a dispersive channel with L taps h = (h(0), . . . , h(L − 1))^T and disturbed by additive white Gaussian noise w. The zero-mean noise components w(k) have variance N_0. To avoid interference between symbols from neighboring blocks, we assume that the duration of the guard interval exceeds the duration of the channel impulse response, that is, ν ≥ L − 1. Without loss of generality, we consider the detection of the OFDM block with index i = 0, and drop the block index for notational convenience. Taking the condition ν ≥ L − 1 into account, the corresponding N + ν received time-domain samples can be written as in (2), where (H_ch)_{k,k′} = h(k − k′) is the (N + ν) × (N + ν) channel matrix. For data detection, the known samples are first subtracted from the received signal. Then, the ν samples of the guard interval are added to the first ν samples of the data part of the block, as shown in Figure 1(b), and an FFT is applied to the resulting N samples. As the known samples are distorted by the channel (as can be seen in Figure 1(b)), the channel needs to be known before the contribution from the known samples can be removed from the received signal.
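The following minimal Python sketch illustrates the KSP-OFDM system model just described: carrier symbols are IFFT-modulated, the known-symbol guard interval is appended, and the block is passed through an L-tap dispersive channel with additive Gaussian noise. The normalization and the interference from the previous block's guard samples are simplified here, so treat the scaling as an assumption rather than the paper's equations (1)-(2).

```python
import numpy as np

rng = np.random.default_rng(1)

N, nu, L = 64, 8, 8                      # FFT size, guard length, channel taps (nu >= L - 1)
# QPSK symbols on the N carriers (data + pilot carriers, not distinguished here)
a = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
b_g = rng.choice([-1.0, 1.0], nu).astype(complex)        # known guard-interval samples

s_data = np.fft.ifft(a) * np.sqrt(N)     # unitary N-point IFFT of the carrier symbols
s = np.concatenate([s_data, b_g])        # append the known-symbol guard interval

# Random L-tap channel, unit average energy (the paper uses a deterministic profile)
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * L)

# Linear convolution truncated to N + nu samples; the tail leaking from the
# previous block's guard samples is ignored in this simplified sketch.
r_noiseless = np.convolve(s, h)[: N + nu]
N0 = 0.01
noise = np.sqrt(N0 / 2) * (rng.normal(size=N + nu) + 1j * rng.normal(size=N + nu))
r = r_noiseless + noise
```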
To estimate the channel, we assume that M pilot symbols are available. As we select the length of the guard interval in function of the channel impulse length and not in function of the precision of the estimation, only ν of the M pilot symbols can be placed in the guard interval. This implies that M − ν carriers in (1) must contain pilot symbols, which are denoted by b c = (b c (0), . . . , b c (M −ν−1)) T . We define I p and I d as the sets of carriers modulated by the pilot symbols and the data symbols, respectively, with I p ∪ I d = {0, . . . , N − 1}. Hence, the symbol vector a contains M − ν pilot symbols b c and N + ν − M data symbols which are denoted by a d . We assume that the data symbols are independent identically distributed (i.i.d.) with E[|a d (n)| 2 ] = E s and the pilot symbols are selected such that E[|b g (m) The normalization factor N/(N + ν) in (1) then gives rise to E[|s(m)| 2 ] = E s . It can easily be verified that the observation of the N + ν time-domain samples corresponding to one OFDM block (as shown in Figure 1(c)) contains sufficient information to estimate h. Rewriting (2), we obtain where B = B g +B c is a (N +ν)×L matrix. The matrix B g contains the contributions from the pilot symbols in the guard interval, and is given by where |x| K is the modulo-K operation of x yielding a result in the interval [0, K[, and b g (k) = 0 for k ≥ ν. The matrix B c consists of the contributions from the pilots transmitted on the carriers, where The vector s p equals the N-point IFFT of the pilot carriers only, that is, s p = F p b c . The N × (M − ν) matrix F p consists of a subset of columns of the IFFT matrix F + corresponding to the set I p of pilot carriers. Note that s p (k) = 0 for k < 0 or k ≥ N. The disturbance in (3) can be written as where
Gaussian Cramer-Rao bound
First, let us determine the Cramer-Rao bound for the estimation of h from the observation r. The Cramer-Rao bound is defined by R_{h−ĥ} − J^{−1} ≥ 0 [9], where R_{h−ĥ} is the autocorrelation matrix of the estimation error h − ĥ, ĥ is an estimate of h, and J is the Fisher information matrix. Hence, the MSE of an estimator is lower bounded by trace(J^{−1}). In our analysis, we assume that s_d = F_d a_d is zero-mean Gaussian distributed; this yields a good approximation for large N + ν − M (say, for large block sizes) and results in the Gaussian CRB (GCRB). In this case, r given h is Gaussian distributed, that is, r | h ∼ N(Bh, R_w), where R_w = E_s (N/(N + ν)) H F_d F_d^+ H^+ + N_0 I_{N+ν} is the autocorrelation matrix of the disturbance w and I_K is the K × K identity matrix. Hence, the log-likelihood ln p(r | h) follows, where C is an irrelevant constant and |R_w| is the determinant of R_w. Note that as the autocorrelation matrix R_w depends on the channel taps h to be estimated, we need the derivatives of R_w and R_w^{−1} with respect to h to obtain the Fisher information matrix, and hence the GCRB. As in general these derivatives are difficult to obtain, the computation of the GCRB is in general very complex. In order to find an analytical expression for the GCRB and avoid the difficulty of finding the derivatives of R_w and R_w^{−1} for a general autocorrelation matrix R_w, we suggest the following approach.
Let us consider the approximation HF_d = F H of the data contribution HF_d a_d in (6). In this approximation, we have neglected, in the contribution from a_d to r, the transient at the edges of the received block; this approximation is valid for long blocks, that is, when N ≫ ν. Applying an invertible linear transformation that is independent of the parameter to be estimated to the observation r has no effect on the CRB. Further, note that Ha_d contains only N + ν − M < N + ν components. Therefore, it is possible to find an invertible linear transformation T that maps r to an (N + ν) × 1 vector r = [r_1^T r_2^T]^T, where r_1 depends on the transmitted data symbols a_d and r_2 is independent of a_d. This transform can be found by performing the QR-decomposition of the matrix F, where U is an upper triangular matrix. Taking this decomposition into account, the transform matrix T is then given by T = Q^+, and the resulting observations are given by (11). In (11), B_1 and B_2 correspond to the first N + ν − M and last M rows of TB, respectively. Because of the unitary nature of the matrix T, the noise contributions w_1 and w_2 are statistically independent and have the same mean and variance as the noise w.
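A generic sketch of the transformation just described: QR-factorizing a tall matrix whose columns span the (approximate) data subspace and taking T as the conjugate transpose of Q splits the observation into a data-dependent part r_1 and an (approximately) data-free part r_2. The matrix passed in below is a placeholder for the paper's matrix F, whose exact definition is garbled in this extraction.

```python
import numpy as np

def split_observation(r, A):
    """r: (N+nu,) received vector; A: (N+nu, N+nu-M) tall matrix whose columns
    span the (approximate) data subspace.  Returns (r1, r2, T) with T = Q^H,
    so that r2 is orthogonal to range(A) and hence (nearly) data-free."""
    n_tot, n_data = A.shape
    Q, U = np.linalg.qr(A, mode="complete")   # Q: (n_tot, n_tot) unitary, U: (n_tot, n_data)
    T = Q.conj().T
    r_t = T @ r
    r1 = r_t[:n_data]      # depends on the data symbols
    r2 = r_t[n_data:]      # (approximately) independent of the data symbols
    return r1, r2, T
```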
We now compute the GCRB related to the estimation of the channel taps h based on the observation r = Tr using the approximation HF d = F H. The observation r given h is also Gaussian distributed, that is, r | h∼N(TBh, R w ), where R w is the autocorrelation matrix of the disturbance w = T w and is given by where As r 1 and r 2 given h are statistically independent, it can easily be verified that the Fisher information matrix is given by with i = 1, 2; and
We now compute the Fisher information matrices J_1 and J_2 separately. First, we determine J_2. As the observation r_2 = B_2 h + w_2 is independent of the data symbols, and p(r_2 | h) ∼ N(B_2 h, N_0 I_M), where B_2 is independent of h, the expression (15) for J_2 can easily be found. Note that the CRB of an estimate cannot increase by using more observations. Hence, the GCRB obtained from the observation r_2 only is an upper bound for the GCRB obtained from the whole observation. Next, we determine J_1, based on the observation r_1 = B_1 h + U H a_d + w_1 only. Note that, although B_1 is independent of h, the autocorrelation matrix R_1 of the disturbance U H a_d + w_1 is not. Recall that to compute J_1, we need the derivatives of R_1 and (R_1)^{−1} with respect to h. These derivatives can be written in an analytical form using the following approximation: when M − ν ≪ N, F F^+ can be approximated by the identity matrix I_{N+ν−M}. When this assumption holds, R_1 can be written as in (16), where T_1 consists of the first N + ν − M rows of T, and Δ is a diagonal matrix with elements α defined in (17). Because F has rank N + ν − M, T_1 F is a full-rank square matrix. When A and B are square matrices, it follows that |AB| = |A| |B|. Hence, ln |R_1| reduces to (18). Further, as T_1 F has full rank, the inverse of R_1 (16) can easily be computed as in (19). Using (18) and (19), the derivatives of ln |R_1| and (R_1)^{−1} with respect to h can easily be computed. Defining the quantities in (20), it follows after tedious but straightforward computations (see the appendix) that the Fisher information matrix J_1 is given by (21). Combining (15) and (21), the total Fisher information matrix, based on the observation of both r_1 and r_2, is given by (22) (see the appendix).
To evaluate the low E s /N 0 limit of the (G)CRB, we approximate B + B by its average over all possible pilot sequences, that is, We assume that the pilot symbols are selected in a pseudorandom way. In that case, The components of the second term E[B + c B c ] are given by When the pilot symbols are evenly distributed over the carriers (i.e., the set I p of pilot carriers is given by Further, if we split the first term of (22) as B + R −1 w B = B + 1 R −1 1 B 1 + (1/N 0 )B + 2 B 2 (see the appendix), it can be verified from the definitions of B 1 , B 2 , and R 1 that the first term B + 1 R −1 1 B 1 is independent of E s /N 0 and the second term (1/N 0 )B + 2 B 2 is proportional to E s /N 0 at high E s /N 0 . Hence, the Fisher information matrix at high E s /N 0 is dominated by the term (1/N 0 )B + 2 B 2 so the high SNR limit of the GCRB equals CRB = trace [N 0 (B + 2 B 2 ) −1 ], which is inversely proportional to E s /N 0 . This high SNR limit equals the GCRB corresponding to J −1 2 , which corresponds to exploiting for channel estimation only the observations r 2 that are independent of the data symbols. This indicates that at high SNR, the information contained in the observations r 1 , that are affected by the data symbols can be neglected as compared to the information provided by r 2 . Based on this finding, we will derive in Section 3 a channel estimator that only makes use of the observations r 2 .
Finally, note that both the low and high E s /N 0 limits of the GCRB are independent of h.
THE SUBSET ESTIMATOR
The ML estimate of a vector h from an observation z is defined as [9]: In the previous section, we have found that all observations were linear in the parameter h to be estimated: where ω is zero-mean Gaussian distributed with autocorrelation matrix R ω . If R ω is independent of h, the ML estimate can easily be determined.
In the problem under investigation, the autocorrelation matrix of the additive disturbance becomes independent of h only for the observation r 2 . Based on the observation r 2 , we can easily obtain the ML estimate of h: We call this the subset estimator, as only a subset of observations is used for the estimation. The mean-squared error of this estimate is given by Hence, the MSE of this estimate reaches the subset GCRB which equals trace (J −1 2 ), that is, the estimate is a minimum variance unbiased (MVU) estimate. However, it should be noted that (27) is valid under the assumption HF d = F H, which for finite block sizes is only an approximation. For finite block sizes, the observation r 2 is affected by a residual contribution from the data symbols. In that case, the MSE of the estimate (26) is given by where D = (B + 2 B 2 ) −1 B + 2 T 2 and T 2 consists of the last M rows of T. Note that the matrix D is proportional to (E s ) −1/2 . At low E s /N 0 , the autocorrelation matrix R w converges to N 0 I N+ν , in which case (28) converges to (27), which is inversely proportional to E s /N 0 . At high E s /N 0 , however, the residual contribution of the data symbols will be dominant, and the dominant part of R w that contributes to (28) is proportional to E s . Hence at high E s /N 0 the MSE, (28) will become independent of E s /N 0 : an error floor will be present, corresponding to MSE = trace (E s (N/(N + ν))DHF d F + d H + D + ). Note that the subset estimate (26) is only a true ML estimate as long as the assumption HF d = F H is valid; for finite block size, (26) is rather an ML-based ad hoc estimate.
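A minimal sketch of the subset estimator (26) and of the subset GCRB it attains in the ideal case, assuming B_2 and the data-free observation r_2 have been formed as above; numerically, the pseudo-inverse is obtained with a least-squares solver rather than an explicit matrix inverse.

```python
import numpy as np

def subset_channel_estimate(r2, B2):
    """Subset ("ML-based") channel estimate: ordinary least squares on the
    data-free observations r2 = B2 h + w2.  B2 depends only on the known
    pilots and carrier allocation, so the solver matrix can be precomputed."""
    h_hat, *_ = np.linalg.lstsq(B2, r2, rcond=None)
    return h_hat

def subset_gcrb(B2, N0):
    """Gaussian CRB attained by the subset estimator in the ideal (data-free)
    case: trace(N0 * (B2^H B2)^{-1})."""
    gram_inv = np.linalg.inv(B2.conj().T @ B2)
    return (N0 * np.trace(gram_inv)).real
```

For finite block sizes the residual data contribution in r_2 is not exactly zero, so a Monte Carlo evaluation of this estimator would reproduce the error floor discussed above rather than the ideal bound.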
As the transform T is obtained by the QR-decomposition of F, and F is known when the positions of the data symbols are known, B 2 only depends on the known pilot symbols and the known positions of the data carriers and the pilot carriers. Hence, B 2 is known at the receiver and (B + 2 B 2 ) −1 B + 2 can be precomputed. Therefore, the estimate (26) can be obtained with low complexity.
NUMERICAL RESULTS
In this section, we evaluate the GCRBs obtained from the whole observation (r_1, r_2) (22) and from the data-free observation r_2 only (15). Without loss of generality, we assume that the comb-type pilot arrangement [10] is used for the pilots transmitted on the carriers. We assume that the pilots are equally spaced over the carriers. Note however that the results can easily be extended to other types of pilot arrangements. From the simulations we have carried out, we have found that the equally spaced pilot assignment yields the best performance results. Further, we assume an L-tap channel with h(ℓ) = h(0)(L − ℓ), for ℓ = 0, . . . , L − 1, which is normalized such that Σ_{ℓ=0}^{L−1} |h(ℓ)|² = 1; we have selected L = 8. The pilot symbols are BPSK modulated and generated independently from one block to the next. Unless stated otherwise, we compute the GCRB and the MSE for a large number of blocks and average over the blocks, in order to obtain results that are independent of the selection of the pilot symbols.
In Figure 2, we show the normalized GCRB, defined as CRB = ((N/(N + ν))(E s /N 0 )) −1 NCRB, as a function of the SNR = E s /N 0 for the total observation (r 1 , r 2 ) and the subset r 2 of observations only. Further, the low SNR limit trace (N 0 (B + B) −1 ) of the (G)CRB is shown. As expected, for low SNR (<−10 dB), the GCRB of the total observation coincides with the low SNR limit of the (G)CRB. At high SNR, the GCRB reaches the GCRB (27) for the subset observation. Further, it can be observed that the low SNR limit of the NCRB is essentially equal to L/M, as was shown in Section 2. Note that the difference between the low SNR limit and the high SNR limit is quite small (in our example the difference amounts to about 10%); this indicates that most of the estimation accuracy comes from the observation r 2 .
In Figure 3, the NCRB is shown as a function of M for different values of the SNR. The (N)CRB is inversely proportional to M for a wide range of M. At low and high values of M, the NCRB is increased as compared to L/M. This can be explained by Figure 4, which shows the influence of the pilot sequence on the GCRB. In this figure, the GCRB is computed for 50 randomly generated pilot sequences. Further, the average of the GCRB over the random pilot sequences is shown. Note that the GCRB depends on the values of the pilots through the first term in (22) only. At high values of M, the pilot spacing becomes 2 (for N/4 < M − ν < N/2 = 512) and 1 (for M − ν > N/2 = 512); in that case the pilots are not evenly spread over the carriers but grouped in one part of the spectrum, and the approximation in the last line of (24) is no longer valid. This effect causes the peaks in the curve at high M. The GCRB in this case clearly depends on the values of the pilots: we observe an increase of the variance. The effect disappears when M − ν is close to N/2 = 512 or N = 1024: the spreading of the pilots over the spectrum becomes uniform again. Also at low values of M, the average value of the GCRB and the variance of the GCRB are increased. At low M, the contribution of the guard interval pilots is dominant. From simulations, it follows that this contribution strongly depends on the values of the pilots in the guard interval, and has large outliers when the guard interval pilots are badly chosen. Assuming the pilots in the guard interval are BPSK modulated, the lowest GCRB in this case is achieved when the BPSK pilots alternate, that is, b_g = {1, −1, 1, −1, . . . }. When M increases, the relative importance of the guard interval pilots reduces and the contribution of the pilot carriers becomes dominant. The GCRB turns out to be essentially independent of the values of the pilot carriers, as these pilots are multiplied with complex exponentials, which have a randomizing effect on their contributions. Hence, for increasing M, the GCRB becomes essentially independent of the used pilot sequence. Figure 5 shows the dependency of the NCRB on the guard interval length for a fixed total number of pilots. It is observed that the NCRB slightly increases for increasing guard interval length. This can be explained by noting that when ν increases, the number of guard interval pilots increases while the number of pilot carriers decreases. Hence, when ν increases, the relative importance of the contribution of the guard interval pilots will increase. As shown in Figure 4, this will cause an increase of the GCRB. Hence, as the GCRB increases for increasing guard interval length when the total number of pilots is fixed, it is better to keep the guard interval length as small as possible (i.e., ν = L − 1 in order to avoid intersymbol interference) and put the other pilots on the carriers.
The dependency of the GCRB on the FFT size N is shown in Figure 6. The GCRB is constant over a wide range of N. Only at low values of N, the GCRB slightly increases. Note that for low N, the approximations HF d = F H and F F + = I N+ν−M do not hold, and the approximate analytical expression for the GCRB looses its practical meaning. However, for the range of N for which the derived approximation for the GCRB is valid, we can conclude that the GCRB is independent of N. This can intuitively be explained as follows. The FFT size N will mainly contribute to the GCRB through the data symbols a d , as the number of data symbols increases with increasing N. However, we have shown that most of the estimation accuracy of the GCRB comes from the observation r 2 , which is the data-free part of the observation. Therefore, the presence of the data symbols will have almost no influence of the GCRB, resulting in the GCRB to be independent of N.
In Figure 7, we show the GCRB for both the total observation and the subset observation, along with the low SNR limit of the (G)CRB. Although it follows from Figure 2 that the GCRB and the subset GCRB are larger than the low SNR limit of the (G)CRB, the difference is small: the curves in Figure 7 are close to each other. In Figure 7, we also show the MSE (28) of the proposed subset estimator. As can be observed, the MSE coincides with the subset GCRB for a large range of SNR. Only for large SNR (>20 dB), the MSE shows an error floor as shown in the previous section, indicating that for E s /N 0 > 20 dB the approximation HF d = F H is no longer valid. Further, we show in Figure 7 optimal ML-based estimator for the channel, derived in [8] and based on the estimator given in [11]. In the latter estimator, it is assumed that the autocorrelation matrix R w of the disturbance w (6) is known. Assuming the autocorrelation matrix R w does not depend on the parameters to be estimated (which is not the case), the latter estimator is derived based on the ML estimation rule. It is clear that the estimator proposed in this paper outperforms the estimator from [8]. Further, in the latter estimator the autocorrelation matrix R w is in general not known but must be estimated from the received signal. Therefore, the complexity of the estimator from [8] is much higher than that of the proposed estimator, as in the former case, the autocorrelation matrix first has to be estimated from the received signal before channel estimation can be carried out.
CONCLUSIONS AND REMARKS
In this paper, we have derived an approximation (which is accurate for large block size) of the Cramer-Rao bound, that is, the Gaussian Cramer-Rao bound, for data-aided channel estimation in KSP-OFDM, when the pilot symbols are distributed over the guard interval and pilot carriers. An analytical expression for the GCRB is derived by applying a suitable linear transformation to the received samples. It turns out that the GCRB is essentially independent of the FFT length, the guard interval, and the pilot sequence, and is inversely proportional to the number of pilots and to E_s/N_0. At low SNR, the GCRB obtained in this paper coincides with the low SNR limit of the true CRB, derived in [8]. At high SNR, the GCRB reaches the GCRB corresponding to the data-independent subset of the observation, indicating that at high SNR, observations affected by data symbols can be safely ignored when estimating the channel. Further, we have compared the MSE of the subset estimator with the obtained GCRB and with the MSE of the ML-based channel estimator from [8]. The MSE of the proposed estimator coincides with the subset GCRB for a large range of SNR. Only at large SNR does the MSE show an error floor. However, the proposed estimator outperforms the estimator from [8], both in terms of complexity and performance.
In CP-OFDM, the N samples corresponding to the data part of the received signal are transformed to the frequency domain by an FFT, and the guard interval samples are not transformed. In ZP-OFDM, first the samples from the guard interval are added to the first ν samples from the data part of the received signal, and then the N samples from the data part are applied to an FFT, while the guard interval samples are not transformed. In both cases, the used transform is an invertible linear transformation that is independent of the parameter to be estimated. As the different carriers do not interfere with each other, it can be shown that the FFT outputs corresponding to the pilot carriers contain necessary and sufficient information to estimate the channel. Therefore, the observations that are used to estimate the channel in CP-OFDM and ZP-OFDM are the FFT outputs corresponding to the pilot carriers; the observations corresponding to the data carriers and the guard interval samples are neglected. Hence, in CP-OFDM and ZP-OFDM channel estimation is performed in the frequency domain. As the FFT outputs at the pilot positions are independent of the transmitted data, the ML channel estimate and associate true CRB for CP-OFDM and ZP-OFDM are easily to obtain [8]. However, in KSP-OFDM, such a simple linear transformation cannot be found to obtain M observations independent from the data symbols, that is, the pilots are split over the guard interval and the carriers, and the data symbols interfere with the guard interval carriers. Therefore, channel estimation in KSP-OFDM is in general more complex than for CP-OFDM and ZP-OFDM.
A. DETERMINATION OF J 1 (21)
Taking into account (18) and (19), the derivative of ln p(r 1 | h) with respect to h(k) is given by (A.2) 1 k is a vector of length L with a one in the kth position and zeros elsewhere; and α , γ k, , and β k are defined as in (17), (20). Hence, the elements of the Fisher information matrix J 1 are given by Taking into account that and trace (XY) = trace (YX), it follows that trace ( Q k R 1 ) = trace (X k Δ) and trace ( Q + k R 1 Q k R 1 ) = trace (X + k ΔX k Δ). Further, note that Δ = diag(α ), then it follows that
| 8,052.8 | 2008-01-01T00:00:00.000 | [ "Computer Science", "Engineering" ] |
The PDE4 Inhibitor Tanimilast Blunts Proinflammatory Dendritic Cell Activation by SARS-CoV-2 ssRNAs
Phosphodiesterase 4 (PDE4) inhibitors are immunomodulatory drugs approved to treat diseases associated with chronic inflammatory conditions, such as COPD, psoriasis and atopic dermatitis. Tanimilast (international non-proprietary name of CHF6001) is a novel, potent and selective inhaled PDE4 inhibitor in advanced clinical development for the treatment of COPD. To begin testing its potential in limiting hyperinflammation and immune dysregulation associated to SARS-CoV-2 infection, we took advantage of an in vitro model of dendritic cell (DC) activation by SARS-CoV-2 genomic ssRNA (SCV2-RNA). In this context, Tanimilast decreased the release of pro-inflammatory cytokines (TNF-α and IL-6), chemokines (CCL3, CXCL9, and CXCL10) and of Th1-polarizing cytokines (IL-12, type I IFNs). In contrast to β-methasone, a reference steroid anti-inflammatory drug, Tanimilast did not impair the acquisition of the maturation markers CD83, CD86 and MHC-II, nor that of the lymph node homing receptor CCR7. Consistent with this, Tanimilast did not reduce the capability of SCV2-RNA-stimulated DCs to activate CD4+ T cells but skewed their polarization towards a Th2 phenotype. Both Tanimilast and β-methasone blocked the increase of MHC-I molecules in SCV2-RNA-activated DCs and restrained the proliferation and activation of cytotoxic CD8+ T cells. Our results indicate that Tanimilast can modulate the SCV2-RNA-induced pro-inflammatory and Th1-polarizing potential of DCs, crucial regulators of both the inflammatory and immune response. Given also the remarkable safety demonstrated by Tanimilast, up to now, in clinical studies, we propose this inhaled PDE4 inhibitor as a promising immunomodulatory drug in the scenario of COVID-19.
INTRODUCTION
SARS coronavirus 2 (SARS-CoV-2), the causative agent of the pandemic Coronavirus disease 2019 (COVID-19), is a positive-sense ssRNA virus belonging to the family of Coronaviridae (1). In a subgroup of patients, COVID-19 develops as acute respiratory distress syndrome (ARDS) featuring intense lung injury, sepsis-like manifestations and multi-organ failure (2). Dysfunctional immune response and hyper-inflammation with subsequent cytokine storm were shown to play a key role in the development of severe and fatal forms of COVID-19 (3). We recently described a novel mechanism of SARS-CoV-2-dependent activation of innate immune cells, based on the recognition of sequences of viral genomic ssRNA (SCV2-RNA) by endosomal pattern recognition receptors, namely TLR7 and TLR8 (4). Of note, SCV2-RNA recapitulated potent lung inflammation in vivo and induced a strong release of proinflammatory cytokines and Th1 polarization in vitro.
Several immunomodulatory therapies targeting the inflammation-driven damaging stages were proposed for the treatment of severe COVID-19 (5). Among these, inhibitors of phosphodiesterases (PDEs) have been put forward based on the analogy between the clinical features of COVID-19 and other pathologies, associated with inflammation, for which these drugs are already approved (6). PDEs are a superfamily of 11 isoenzymes that modulate signal transduction by degrading cyclic nucleotides (cAMP and/or cGMP). PDE4s, comprising PDE4A, PDE4B, PDE4C and PDE4D, are cAMP-specific PDEs abundantly expressed in leukocytes (7), where they promote the production of pro-inflammatory cytokines and lipid mediators (8). Inhibition of PDE4 leads to accumulation of intracellular cAMP and to a shift of the anti-inflammatory/pro-inflammatory balance (8). Such upstream anti-inflammatory mechanism, makes these agents particularly interesting to master critical conditions characterized by overt release of multiple cytokines, as compared to other single downstream anti-cytokine drugs (9). Nevertheless, side effects such as gastrointestinal disturbances, particularly nausea and emesis as well as headache and weight loss are typically associated with oral PDE4 inhibitors (10). Tanimilast (international non-proprietary name of CHF6001) is an inhaled, selective inhibitor of PDE4 isoforms A-D (11) endowed with anti-inflammatory properties in several in vitro and in vivo models (12,13) which is particularly well tolerated as compared to oral PDE4 inhibitors (13) given its high lung retention coupled with low systemic exposure (14). Published data by our group highlighted that Tanimilast can reduce the secretion of inflammatory and Th1/Th17 polarizing cytokines by fine tuning the activity of the master inflammatory transcription factor NF-kB, which could be useful to control Th-1 and Th-17 driven pathologies without inducing a global repression of the inflammatory and immune responses (15).
Dendritic cells (DCs) are innate immune cells that, by expressing several nucleic acid sensors, play a crucial role in recognizing viral pathogens and mounting protective inflammatory and interferon responses. In addition, DCs are specialized antigen presenting cells capable of activating and shaping the adaptive response, both CD4 + and CD8 + T cell-mediated, to clear the infection (16). Given the central role of DCs in the regulation of the immune response, excessive activation of these cells may unleash overt immunity and tissue damage (16,17). During the progression of SARS-CoV-2 infection, both DCs and CD4 + and CD8 + T cell are recruited to the lung (18,19), with Th1/Th17 effectors reported to play a pivotal role in severe COVID-19 pneumonia (20,21). Thus, DCs represent an interesting pharmacological target to modulate detrimental immune responses, possibly including those observed in severe forms of COVID-19.
This study was designed to investigate the effects of Tanimilast on DC activation induced by SCV2-RNA with the aim of uncovering the potential beneficial immunomodulatory effects of such drug in COVID-19.
Cell Preparation and Culture
Buffy coats from blood donations of anonymous healthy donors were obtained and preserved by the Centro Trasfusionale, Spedali Civili of Brescia, according to the Italian law concerning blood component preparation and analysis. Peripheral blood mononuclear cells (PBMC) were obtained by density gradient centrifugation, and monocytes were subsequently purified by immunomagnetic separation using anti-CD14-conjugated magnetic microbeads (Miltenyi Biotec) according to the manufacturer's protocol and as previously published (22). Briefly, monocytes were cultured for 6 days in tissue culture plates in complete medium (RPMI 1640 supplemented with 10% heat-inactivated, endotoxin-free FBS, 2 mM L-Glutamine, penicillin and streptomycin; all from Gibco, Thermo Fisher Scientific) in the presence of 50 ng/ml GM-CSF and 20 ng/ml IL-4 (Miltenyi Biotec). Untouched peripheral blood cDC1 and cDC2 (cDCs) and pDCs were obtained from PBMC after negative immunomagnetic separation with the Myeloid Dendritic Cell Isolation kit (Miltenyi Biotec) and the Plasmacytoid Dendritic Cell Isolation kit II (Miltenyi Biotec), respectively.
Cell Stimulation With SCV2-RNA
Complexation of SCV2-RNA (5'-UGCUGUUGUGUGUUU-3'; genome position: 15692-15706) with DOTAP Liposomal Transfection Reagent (Roche) was performed as previously described (4). Briefly, 5 µg RNA in 50 µl HBS buffer (20 mM HEPES, 150 mM NaCl, pH 7.4) was combined with 100 µl DOTAP solution (30 µl DOTAP plus 70 µl HBS buffer) and incubated for 15 minutes at RT. After the complexation, 150 µl of HBS was added and the mixture was used to stimulate the cells (final concentration of 5 µg/ml). Where indicated, cells (2×10⁶/ml in 48-well plates) were pretreated for 1 hour with the indicated concentrations of Tanimilast or β-methasone (provided by Chiesi Farmaceutici S.p.A.). The maturation process was conducted in RPMI containing 2% FBS and supplemented with 0.01% DMSO to avoid the sequestration of Tanimilast by serum proteins.
Cytokine Detection
TNF-α, IL-1β, IL-6, IL-12p70, CXCL8, CXCL10, CCL3 and CCL17 were measured by ELISA (R&D Systems). IFN-α was detected using a specific Module Set ELISA kit (eBioscience) and IFN-β was measured by a bioluminescence kit (InvivoGen). All assays were performed on cell-free supernatants according to the manufacturer's protocol.
Flow Cytometry
DCs were stained with the following antibodies from Miltenyi Biotec: Vioblue-conjugated anti-human CD86 (clone FM95), PEconjugated anti-human CD83 (clone REA714), FITC-conjugated anti-human BDCA2 (clone AC144), APC-conjugated anti-human CCR7 (clone REA546). Cell viability was assessed by LIVE/DEAD staining according to the manufacturer's instruction (Molecular Probes, Thermo Fisher Scientific). Samples were read on a MACSQuant Analyzer (Miltenyi Biotec) and analysed with FlowJo (Tree Star Inc.). Response definition criteria were defined post-hoc. Raw data can be provided per request.
T Cell Proliferation Assay
Because Tanimilast was previously shown to exert direct effects on T lymphocytes (23,24), stimulated moDCs were collected and thoroughly washed to avoid any Tanimilast contamination of the cocultures. Allogeneic naïve CD4+ T cells and CD8+ T cells were isolated from buffy coats using the naïve CD4+ T cell Isolation kit II (Miltenyi Biotec) and the CD8+ T cell Isolation kit (Miltenyi Biotec), respectively. Purified T cells were labeled with CellTrace-CFSE (Molecular Probes, Thermo Fisher Scientific) at a final concentration of 5 µM. Subsequently, T cells (6×10⁴ cells/well) were co-cultured with graded numbers of moDCs in 96-well round-bottom culture plates in complete RPMI medium. After 6 days, alloreactive T cell proliferation was assessed by measuring the loss of the dye CellTrace-CFSE upon cell division using flow cytometry. Positive controls of T cell proliferation were routinely performed using IL-2 plus PHA. Response definition criteria were defined post-hoc. Dead cells were excluded by LIVE/DEAD staining according to the manufacturer's instructions. Raw data can be provided per request.
Statistical Analysis
Sample group normality was confirmed by the Shapiro-Wilk test before application of parametric statistical analysis. Statistical significance among the experimental groups was determined using one-way ANOVA with Dunnett's post-hoc test (GraphPad Prism 7, GraphPad Software), as indicated in each figure legend. P < 0.05 was considered significant.
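For readers who prefer a scripted equivalent of this workflow, the sketch below reproduces the same sequence of tests (normality check, one-way ANOVA, Dunnett's post-hoc comparison against the control) in Python. The group values and sizes are invented placeholders, not study data, and SciPy >= 1.11 is assumed for scipy.stats.dunnett.

```python
# Minimal sketch of the statistical workflow described above (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(1000.0, 120.0, size=6)   # e.g. SCV2-RNA alone (pg/ml)
tan_low = rng.normal(800.0, 120.0, size=6)    # + Tanimilast, lower dose
tan_high = rng.normal(450.0, 120.0, size=6)   # + Tanimilast, higher dose

# 1) Confirm normality of each group (Shapiro-Wilk) before parametric tests.
for name, grp in [("control", control), ("tan_low", tan_low), ("tan_high", tan_high)]:
    w, p = stats.shapiro(grp)
    print(f"Shapiro-Wilk {name}: W = {w:.3f}, p = {p:.3f}")

# 2) One-way ANOVA across the experimental groups.
f_stat, p_anova = stats.f_oneway(control, tan_low, tan_high)

# 3) Dunnett's post-hoc test: each treatment compared against the control.
dunnett = stats.dunnett(tan_low, tan_high, control=control)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Dunnett p-values (vs control): {dunnett.pvalue}")
```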
RESULTS
Tanimilast Selectively Reduces the Secretion of Cytokines and Chemokines by moDCs Stimulated With SCV2-RNA (SCV2-moDCs)
The effects of Tanimilast on the pro-inflammatory properties of SCV2-moDCs were assessed in terms of cytokine and chemokine regulation. moDCs were pre-treated with Tanimilast (10⁻¹¹, 10⁻⁹, 10⁻⁷ M) for 1 hour and then stimulated with an optimal concentration of SCV2-RNA (4). The concentrations of Tanimilast used in this study were previously shown to be effective in moDCs without reducing cell viability [(15) and data not shown]. β-methasone (10⁻⁷ M), a glucocorticoid anti-inflammatory drug commonly used to treat overactive inflammation (25), was used as a comparison. Figure 1A shows that Tanimilast dose-dependently decreased the production of the pro-inflammatory cytokine TNF-α and of the Th1-polarizing cytokines IL-12 and IFN-β, although with different efficacy. Similarly, the myelomonocyte-attracting chemokine CCL3 and the Th1-attracting chemokines CXCL9 and CXCL10 were also dose-dependently reduced (Figure 1B). Figures 1A, B show the calculated IC50s, which in most cases lie in the nanomolar range, a result consistent with previously published data and indicating a high potency of Tanimilast (12,14,15). However, in the case of IFN-β and CXCL10, Tanimilast at a concentration of 10⁻⁷ M (the maximal concentration of solubility in our system) inhibited less than 50% of the secreted cytokine. At 10⁻⁷ M the maximal inhibitory effect on PDE4 is reached (15,23); therefore, it is likely that at this concentration the maximal inhibitory effect of Tanimilast against these two cytokines is also reached, and their IC50s can only be defined as >100 nM. Tanimilast did not inhibit the secretion of the neutrophil-attracting chemokine CXCL8, nor that of IL-6 (Figure 1C). In most cases, β-methasone showed a similar inhibition pattern. Unlike Tanimilast, however, β-methasone effectively reduced IL-6 and CXCL8 secretion (60% and 50% reduction, respectively; Figure 1C), while it did not counteract the induction of CXCL9 and CXCL10 (Figure 1B).
Interestingly, significant inhibition of cytokine secretion could also be observed when Tanimilast was administered together with, or 1 hour after, the stimulation with SCV2-RNA, which better mimics a setting in which Tanimilast is used as a therapeutic agent (Figure 1D).
These results indicate that both Tanimilast and β-methasone reduce the overall pro-inflammatory potential of SCV2-moDCs. Interestingly, the modulatory pattern of target cytokines differs between the two drugs.
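As an illustration of how the IC50 values discussed above can be obtained from a dose-response curve, the sketch below fits a four-parameter logistic model to a hypothetical inhibition curve. The concentrations, responses and starting parameters are invented for illustration only; they are not the measured values from Figure 1.

```python
# Illustrative sketch of IC50 estimation from a dose-response curve (made-up data).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_c, top, bottom, log_ic50, hill):
    """Four-parameter logistic inhibition curve on a log10 concentration axis."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_c - log_ic50) * hill))

log_conc = np.log10([1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7])   # molar
response = np.array([98.0, 90.0, 75.0, 55.0, 40.0, 32.0])      # % of stimulated control

popt, _ = curve_fit(four_pl, log_conc, response,
                    p0=[100.0, 30.0, -9.0, 1.0], maxfev=10000)
ic50 = 10.0 ** popt[2]
print(f"Estimated IC50 ~ {ic50:.2e} M")
# If the highest tested concentration (1e-7 M) inhibits less than 50% of the
# secretion, the fit is unreliable and the IC50 is reported as > 100 nM,
# as done for IFN-beta and CXCL10 in the text.
```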
Tanimilast Does Not Impair the Acquisition of Maturation Markers by SCV2-moDCs
Consistent with our previous findings in LPS-treated moDCs (15), Tanimilast pre-treatment (10⁻⁷ M) did not restrain the upregulation of the costimulatory molecules CD83 and CD86 or of the lymph-node-homing receptor CCR7 induced by SCV2-RNA (Figure 2A, left panels). Indeed, the expression of these markers tended to be even higher in the presence of Tanimilast, both on a per-cell basis, as demonstrated by higher MFI, and in terms of the percentage of positive cells (Figure 2A, center and right panels). Similarly, Tanimilast pretreatment did not block the upregulation of MHC-II, while consistently reducing the MFI of MHC-I (Figure 2B, left and center panels). As expected, both of these markers were expressed by 100% of the cells in the population in all conditions (Figure 2B, right panels). By contrast, β-methasone counteracted the SCV2-RNA-dependent upregulation of all these markers. Neither drug modified the phenotype of unstimulated moDCs (Figures 2A, B, white bars). Thus, unlike β-methasone, Tanimilast does not grossly impair the phenotypical maturation of moDCs. However, it selectively targets the upregulation of MHC-I, which may result in the modulation of antigen presentation to CD8+ T cells.
Tanimilast Restrains CD8+ T Cell Activation by SCV2-moDCs
Based on the findings described above, we set up allogeneic co-culture experiments to characterize the CD8+ T-cell activating properties of SCV2-moDCs in the presence of Tanimilast. Figure 3A shows that, consistent with the observed MHC-I reduction, both Tanimilast and β-methasone impaired CD8+ T cell proliferation induced by stimulation with SCV2-moDCs, as assessed by CFSE staining. In addition, both drugs also reduced the percentage of cells producing IFN-γ and Granzyme B, two key effector molecules of activated CD8+ T cells (Figure 3B).
SCV2-moDCs Induce a Th2-Skewed CD4+ T Cell Response in the Presence of Tanimilast
The same experiments were performed using naïve CD4+ T cells as responders. As expected, based on the lack of MHC-II and costimulatory molecule modulation, Tanimilast did not affect the proliferative response of CD4+ T cells induced by SCV2-moDCs (Figure 4A). By contrast, β-methasone reverted T cell proliferation almost to basal levels, in accordance with MHC-II downregulation. Next, the effects of Tanimilast on the polarizing properties of SCV2-moDCs were assessed by measuring the levels of intracellular cytokines in activated CD4+ T cells. We have previously shown that SCV2-RNA induces a prominent Th1 response (4), which was consistently blocked by both Tanimilast and β-methasone (Figure 4B). Interestingly, pretreatment with Tanimilast, but not with β-methasone, enhanced the development of T cells producing IL-4 and IL-13, which characterize Th2-skewed CD4+ effectors (Figure 4B). Of note, Tanimilast alone did not induce either IL-4+ or IL-13+ T cells (data not shown). We also stained for IL-17 production, but this was undetectable in our experimental conditions (data not shown). Taken together, these results indicate that DCs matured in the presence of Tanimilast fully retain the stimulatory capacity to induce CD4+ T cell proliferation while skewing the T helper response toward a Th2 profile. By contrast, β-methasone results in a general inhibition of CD4+ T cell activation, resembling the inhibition observed on CD8+ T cells.
Primary DC Subsets Recapitulate the Effects of Tanimilast Pre-Treatment of moDCs
To confirm the results obtained in moDCs in primary DCs as well, we immunomagnetically sorted the two main subsets of circulating DCs, namely cDCs and pDCs. Because of the rarity of these cells, only a fixed concentration of Tanimilast was used (10⁻⁷ M). In cDCs, the substantial lack of CD86 and CCR7 modulation by Tanimilast was confirmed, both in terms of the percentage of positive cells and of the mean fluorescence intensity of the population (Figure 5A). By contrast, the production of TNF-α was significantly decreased (Figure 5B). Regarding pDCs, although Tanimilast did not interfere with the acquisition of a mature phenotype, characterized by the upregulation of CD86 and downregulation of BDCA2 (Figure 5C), it decreased IFN-α secretion to 40% (Figure 5D).
DISCUSSION
Tanimilast is a novel inhaled PDE4 inhibitor currently undergoing phase III clinical development for COPD, with promising pharmacodynamic results associated with a good tolerability and safety profile (14,24). Tanimilast was previously shown to act as a potent anti-inflammatory agent in several cell-based models (23), including leukocytes derived from asthma (26) and COPD patients (27) and rhinovirus-infected human bronchial epithelial cells (12), as well as in experimental rodent models of pulmonary inflammation (13). In this study, Tanimilast was investigated as an agent capable of modulating the strong inflammatory activation induced by SCV2-RNA in human DCs. Consistent with previous work from our group (15), Tanimilast reduced the secretion of selected, but not all, cytokines without affecting the acquisition of a mature phenotype. This is a condition previously defined as "semi-mature DCs", suited to prevent excessive responses in peripheral tissues (28). Our analysis was conducted in parallel with β-methasone, since corticosteroids are established drugs in the treatment of overactive immune conditions and are also undergoing clinical trials for the treatment of COVID-19 (5). Unlike Tanimilast, β-methasone induced a widespread and clear-cut shift from competent to suppressive moDCs. Tanimilast decreased the expression of TNF-α and CXCL10, cytokines highly correlated with the severity and mortality rate of COVID-19 (29,30).
Additionally, Tanimilast induced a marked reduction in the release of chemokines that amplify the inflammatory and immune response via the recruitment of innate cells (e.g. CCL3) or Th1 effector cells (e.g. CXCL9/10). To date, the many ongoing trials testing the efficacy of anti-TNF-α or anti-IL-6 drugs in severe COVID-19 have provided conflicting results (31,32). It is tempting to speculate that Tanimilast may prove beneficial because of its broad modulatory effect on several cytokines, as compared to drugs selectively targeting one specific cytokine. We observed that Tanimilast, unlike β-methasone, did not inhibit the SCV2-RNA-dependent release of the neutrophil-attracting CXCL8, another prognostic marker in COVID-19 (33). However, CXCL8 is produced by many cell types in addition to DCs and was shown to be efficiently blocked by Tanimilast in other experimental settings (15,34). Thus, our experimental model, by focusing on DCs, may not fully recapitulate the modulation of CXCL8 occurring in vivo upon administration of Tanimilast. By contrast, DCs are by far the principal producers of type I IFN, which was decreased by Tanimilast. Because both type I IFNs and pDCs play crucial protective roles in the early phases of SARS-CoV-2 infection (35,36), the administration of Tanimilast may need to be carefully timed during SARS-CoV-2 infection, especially when tissue damage depends mostly on overwhelming immune activation rather than on viral replication per se. This holds true, and has been clearly established, also for corticosteroids, for which early administration impairs viral eradication, while late-stage usage reduces symptoms and immune dysregulation (37).
In the in vitro experimental setting utilized in this study, the combined reduction of selected cytokines elicited by Tanimilast, together with the conserved expression of co-stimulatory molecules and MHC class II, skewed the predominantly Th1 polarization of CD4+ naïve T cells induced by SCV2-activated DCs (4) towards a Th2-oriented activation, without affecting T cell proliferation. This apparent Th2-skewing effect of Tanimilast appears to be related to the presence of the SCV2 stimulus; indeed, we observed that Tanimilast alone induced neither IL-4- nor IL-13-producing T cells. Additionally, Tanimilast is very effective in inhibiting allergen-induced eosinophilia in rats, which is Th2 driven (13). Further evidence of the modulatory effects of Tanimilast on Th2-driven pulmonary inflammation comes from its ability to reduce the allergen challenge response in asthmatic patients (38). In this regard, the effect of β-methasone was a clear-cut inhibition of phenotypical maturation and CD4+ T cell proliferation together with Th1 blockade, with no observed skewing towards Th2 polarization. We could not evaluate the effects of Tanimilast on Th17 polarization because it was not induced in our experimental setting. However, we demonstrate a reduction in the secretion of crucial Th17-polarizing cytokines such as IL-6 and TNF-α. This is of particular importance, since Th1/Th17 responses have been associated with COVID-19 immunopathogenesis and exacerbation (20,21). SARS-CoV-2-specific CD4+ effector cells generally do not express Th2 traits (39), which could play a protective role, as suggested by the lower susceptibility and less severe outcomes of COVID-19 in asthmatic and atopic patients (40,41). Accordingly, IL-13 was shown to reduce viral burden, possibly by downregulating the expression of angiotensin-converting enzyme 2 (ACE2) in airway epithelial cells (42,43). In addition, M2 macrophage polarization induced by IL-4 and IL-13 fosters tissue repair and resolution of inflammation in ARDS (44). Finally, Th2 cytokines rescue the antithrombotic properties of endothelial cells by inhibiting the expression of pyrogen-induced tissue factor (45), which is highly expressed in the lungs of severe COVID-19 patients (46). A number of reports, however, have described a Th2 signature and eosinophilia in the inflamed lung areas of subgroups of severe COVID-19 patients (47). This complex picture reinforces the hypothesis that Tanimilast administration may prove beneficial in blunting the excessive inflammatory response that can occur in severe COVID-19, provided careful patient evaluation and stratification is performed.
Tanimilast reduced the expression of MHC-I molecules. This effect may depend on increased levels of cAMP, reproducing the activation of the cAMP/PKA/ICER pathway previously described to repress MHC-I transcription (48). In addition, PDE4 inhibition by Rolipram was shown to reduce antigen production (and therefore MHC-I expression) by decreasing the activity of the ubiquitin-proteasome system in rodent skeletal muscle cells (49). Further research is warranted to elucidate whether these mechanisms are involved in the block of MHC-I upregulation in Tanimilast-treated moDCs. MHC-I reduction, together with IL-12 blockade, is likely responsible for the observed curtailing of CD8+ T cell proliferation and activation, characterized by a decrease of both IFN-γ and Granzyme B levels. This effect is shared by both Tanimilast and β-methasone. Activated CD8+ effector cells play a dual role in SARS-CoV-2 infection, being critical for virus eradication but also detrimental when excessive cytotoxic activation results in lung damage, potentially even more lethal than viral replication itself (50). Both hyperactive and exhausted cytotoxic T cells have been described in COVID-19 patients, possibly correlating with the course of the illness (51,52). Indeed, an early immune profile characterized by high expression of interferon-stimulated genes and high viral load with limited lung damage was shown to precede a later stage with low interferon-stimulated gene levels, low viral load and abundant infiltration of activated cytotoxic cells (53). In addition, the continual proliferation and overactivation of CD8+ T cells observed in severe, late-stage COVID-19 were correlated with disease aggravation (54). Thus, the inhibition of CD8+ T cell proliferation and activation observed upon Tanimilast treatment may be beneficial in alleviating cytotoxic hyperactivation, but might not be relevant, if not contraindicated, in COVID-19 cases displaying an exhausted CD8+ T cell phenotype.
Although this study did not investigate the molecular mechanisms underlying Tanimilast modulation of DC activation by SCV2-RNA, the observed differences in its potency in inhibiting different cytokines suggest a promoter-specific action rather than a direct perturbation of TLR7/8 signaling and NF-κB activation. This hypothesis is also supported by the observation that genes requiring NF-κB for efficient transcription, such as CCR7 (55), are upregulated in the presence of Tanimilast. A similar mechanism was previously described in LPS-activated moDCs, where we observed that Tanimilast could decrease the recruitment of NF-κB subunits to specific promoters without affecting its nuclear translocation (15). This could depend on reduced recruitment of NF-κB coactivators, as originally demonstrated for the prototypic PDE4 inhibitor Apremilast (56). Indeed, a promoter-specific regulation is well suited to explain the variegated modulation of DC activation described in this paper.
It remains to be established whether immunomodulation by Tanimilast can also be observed when DCs are infected by SARS-CoV-2 instead of being challenged with SCV2-RNA. It was previously shown that Tanimilast could efficiently block rhinovirus-induced cytokines (12). In addition, a recent paper showed that intact SARS-CoV-2 activates innate immune cells via TLR7/8, thus reproducing the mechanism of activation by SCV2-RNA (57). Based on this, it is possible to hypothesize a similar inhibitory effect acting on TLR7/8 downstream pathways, as previously discussed, also in the presence of intact SARS-CoV-2. It should also be considered that, in addition to immunomodulation, Tanimilast may interfere with SARS-CoV-2 infection via other mechanisms. For example, Rolipram and Roflumilast were both shown to inhibit viral replication (58,59). In addition, compounds with PDE4-inhibiting properties were suggested to bind the N-terminal RNA-binding domain of the SARS-CoV-2 N-protein, a critical component of the viral replication and genome packaging machinery, which may affect viral replication (60,61). By analogy with other PDE4 inhibitors, it is tempting to speculate that Tanimilast may be helpful in COVID-19 pneumonia not only by regulating the inflammatory balance but also by directly reducing viral replication and load. However, this aspect could not be investigated using our system of moDC stimulation by SCV2-RNA. Overall, the data presented in this study suggest that the PDE4 inhibitor Tanimilast could be a promising inhaled immunomodulator in the scenario of COVID-19, given its remarkable safety demonstrated in healthy subjects as well as in asthma and COPD patients (14) and its mechanism of action, which is non-redundant with corticosteroids. Nevertheless, further studies are needed to evaluate the benefits of this agent in clinical settings. In particular, it will be important to determine the optimal disease stage at which to start Tanimilast administration, with a particular focus on the identification of subgroups of patients (clinical phenotypes) with increased chances of therapeutic success.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.
| 5,765.2 | 2022-01-24T00:00:00.000 | ["Medicine", "Environmental Science", "Biology"] |
Analyzing EEG of Quasi-Brain-Death Based on Dynamic Sample Entropy Measures
Giving a more definite criterion for brain death determination based on the electroencephalogram (EEG) is vital both for reducing risks and for preventing medical misdiagnosis. This paper presents several novel adaptive computable entropy methods based on approximate entropy (ApEn) and sample entropy (SampEn) to monitor the varying symptoms of patients and to determine brain death. The proposed method is a dynamic extension of the standard ApEn and SampEn obtained by introducing a shifted time window. The main advantages of the developed dynamic approximate entropy (DApEn) and dynamic sample entropy (DSampEn) are real-time computation and practical use. Results from the analysis of 35 patients (63 recordings) show that the proposed methods are effective and perform well in evaluating brain consciousness states.
Introduction
Brain death is defined as the complete, irreversible, and permanent loss of all brain and brainstem functions [1][2][3][4]. Under this definition, however, it is hard to make a precise brain death judgement for several clinical reasons. Traditional clinical tests are expensive, time consuming, and even dangerous in some cases (e.g., the apnea test). To avoid these disadvantages, we have proposed an EEG preliminary examination procedure carried out before the test of spontaneous respiration, which makes the test easier and more effective and brings fewer risks [5]. To determine quasi-brain-death (QBD, where quasi means that it is a preliminary decision), EEG, an important clinical tool for observing brain signals, has been widely used in many countries to evaluate the absence of cerebral cortex function [5][6][7]. Our research aim is to provide several signal processing tools to determine brain death based on EEG analysis and to help clinicians conduct the diagnosis in practice.
The complexity of nonlinear physiologic signals has been widely used in evaluating the differences between health and disease states [8]. The complexity information contained in a physiologic time series directly reflects the state of the physiologic system [9]. The concept of entropy has been extensively used for complexity measures [10,11]. Approximate entropy (ApEn) and sample entropy (SampEn) are effective approaches used in complexity analysis and help us better understand biological systems. Pincus first introduced ApEn [11], a measure of system complexity closely related to entropy, which has performed well in analyzing clinical cardiovascular and other time series. One defect of ApEn, however, is that its statistics are inconsistent. Therefore, Richman and Moorman developed SampEn [12] as an improvement: ApEn is biased by the counting of self-matches whereas SampEn is not, so that SampEn agrees with theory much more closely than ApEn over a broad range of conditions. In our studies, we further illustrate the improved accuracy of SampEn statistics for brain death diagnosis. This paper presents dynamic extensions of ApEn and SampEn, since the static methods can only deal with a limited length of time series, whereas data of long recording length are common in biological systems. Analysis of a small segment of the original data may introduce large errors or even failures (e.g., when the segment is seriously contaminated by noise) and can therefore be misleading, so the dynamic method enables us to gain a more comprehensive and global view of a complex system. On the other hand, our dynamic method can decrease the amount of calculation in a simulation process and improve the efficiency of the analysis of the full data. As a result, the analysis of the successively changing information contained in a total time series becomes available. The paper is organized as follows. In Section 2, we first recall a set of computable entropy methods including ApEn and SampEn and then derive the extension formulas of the proposed DApEn and DSampEn. In Section 3, we present the real-world EEG data recording and data analysis results. Section 4 gives the conclusions.
[Figure 1: Diagram of the dynamic measures. Time sections of 6 channels (Fp1, Fp2, F3, F4, F7, and F8) are shown as an example; the dynamic method processes the data in a moving time window of fixed length.]
Methods of EEG Data Analysis
Approximate Entropy (ApEn). Given a time series u(1), u(2), ..., u(N), form the m-dimensional template vectors x(i) = [u(i), u(i+1), ..., u(i+m-1)] for 1 ≤ i ≤ N − m + 1. The distance between two vectors is d[x(i), x(j)] = max_{0≤k≤m−1} |u(i+k) − u(j+k)|. (1) Given a threshold r and each i ≤ N − m + 1, let N_i^m(r) be the number of vectors x(j) within r of x(i), and we define C_i^m(r) = N_i^m(r)/(N − m + 1) (2) and Φ^m(r) as a mean of ln C_i^m(r): Φ^m(r) = (N − m + 1)^{−1} Σ_{i=1}^{N−m+1} ln C_i^m(r). (3) Equation (2) is mainly defined to calculate the probability that, for each i and j, the two vectors are similar within the threshold r, while (3) is used to calculate the average. By finding Φ^{m+1}(r), ApEn(m, r, N) takes the form ApEn(m, r, N) = Φ^m(r) − Φ^{m+1}(r). (4) This is how ApEn is defined to measure the self-similarity of the time series [11].
Sample Entropy (SampEn).
SampEn deals with the same m-dimensional vectors x(i) and x(j) as defined in ApEn. The distance between two vectors is calculated by (1). In SampEn, let B_i^m(r) denote the number of vectors x(j) within r of x(i) times (N − m)^{−1}, for j ranging from 1 to N − m + 1 and j ≠ i, excluding self-matches. We then define B^m(r) as a mean of B_i^m(r), for all 1 ≤ i ≤ N − m + 1: B^m(r) = (N − m + 1)^{−1} Σ_{i=1}^{N−m+1} B_i^m(r). (5) By increasing the space dimension to m + 1 and repeating the steps in (1) and (5), we can obtain B^{m+1}(r). Then SampEn can be obtained as SampEn(m, r, N) = −ln[B^{m+1}(r)/B^m(r)]. (6) This is how SampEn is defined to measure the self-similarity of the time series [11,12].
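The following sketch mirrors the definitions above in Python. It is a direct, unoptimised transcription intended to make the counting and the self-match handling explicit; production implementations may differ in edge handling and normalisation details.

```python
# Minimal implementations of ApEn and SampEn following the formulas above.
import numpy as np

def _embed(u, m):
    """Stack the m-dimensional template vectors x(i) = [u(i), ..., u(i+m-1)]."""
    n = len(u)
    return np.array([u[i:i + m] for i in range(n - m + 1)])

def apen(u, m=2, r=None):
    u = np.asarray(u, dtype=float)
    if r is None:
        r = 0.15 * np.std(u)            # tolerance, e.g. 0.15 x SD as in the text
    def phi(mm):
        x = _embed(u, mm)
        # Chebyshev distance between every pair of template vectors
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        c = np.mean(d <= r, axis=1)     # self-match included, as in ApEn
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

def sampen(u, m=2, r=None):
    u = np.asarray(u, dtype=float)
    if r is None:
        r = 0.15 * np.std(u)
    def count(mm):
        x = _embed(u, mm)
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        np.fill_diagonal(d, np.inf)     # exclude self-matches
        return np.sum(d <= r)
    b, a = count(m), count(m + 1)
    return -np.log(a / b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noise = rng.normal(size=1000)                      # irregular -> higher entropy
    sine = np.sin(np.linspace(0, 40 * np.pi, 1000))    # regular -> lower entropy
    print("ApEn:  ", apen(noise), apen(sine))
    print("SampEn:", sampen(noise), sampen(sine))
```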
Dynamic Extensions of ApEn and SampEn.
The dynamic ApEn (DApEn) and dynamic SampEn (DSampEn) are proposed for analyzing physiologic time series. The values of ApEn or SampEn are calculated in a set of consecutive time windows, each marked by its starting moment t and of length W, taken from the whole data of length N, as shown in Figure 1. The expressions of DApEn and DSampEn are then obtained as ApEn(m, r, W)_t and SampEn(m, r, W)_t, where the subscript t denotes the time window over which ApEn or SampEn is computed. As a result, if t ranges, for example, from t_1 to t_2 with a step length equal to the window length W, the values of ApEn or SampEn are obtained in several non-overlapping windows. DApEn is then defined by DApEn(t) = ApEn(m, r, W)_t, (7) while DSampEn is defined by DSampEn(t) = SampEn(m, r, W)_t. (8)
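A short sketch of the windowed computation is given below. It reuses the apen/sampen helpers from the earlier snippet; the window length, step and per-window tolerance are illustrative choices, not prescriptions from the text.

```python
# Sketch of the dynamic extension: entropy evaluated in consecutive windows.
import numpy as np

def dynamic_entropy(u, entropy_fn, window, step=None, m=2, r_factor=0.15):
    """Return (window start indices, entropy value per window)."""
    if step is None:
        step = window                       # non-overlapping windows, as in (7)-(8)
    u = np.asarray(u, dtype=float)
    starts = np.arange(0, len(u) - window + 1, step)
    values = []
    for s in starts:
        seg = u[s:s + window]
        values.append(entropy_fn(seg, m=m, r=r_factor * np.std(seg)))
    return starts, np.array(values)

# Example: one value per second for a 1000 Hz recording
# starts, dsampen = dynamic_entropy(eeg_channel, sampen, window=1000)
```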
Experimental Results
In the present study, the EEG experimental protocols were executed in the ICUs of a hospital. The EEG data were recorded directly at the bedside of the patient, where high environmental noise from medical machines seriously corrupted the recording procedure. The EEG recording instrument was a portable NEUROSCAN ESI-32 amplifier associated with a laptop computer. During EEG recording, a total of nine electrodes were placed on the forehead of the patients: six channels at the corresponding electrodes (Fp1, Fp2, F3, F4, F7, and F8), two electrodes (A1, A2) placed on the ears as reference, and an additional channel, GND, serving as the ground. The sampling rate was set to 1000 Hz and the electrode resistances were kept under 8 kΩ. Experimental data were obtained from 35 patients (19 male, 16 female) with ages ranging from 17 to 85 years; 19 of them were diagnosed as comatose and the rest as brain dead. The average length of the EEG recordings from these patients was about five minutes. Instead of using the EEG cap of the standard EEG recording, we used individual electrodes with high-chloride electrolyte gel (suitable for using DC power) in all data recording sessions.
Comparison of ApEn and SampEn.
The EEG signals of the coma cases include brain waves together with environmental noise, while the EEG signals of the quasi-brain-death cases include only environmental noise. Therefore, we consider that regular or predictable components exist in the EEG signals of coma patients, and both ApEn and SampEn are expected to assign much lower values to coma cases.
Let us begin by looking at the tables, where the ApEn and SampEn results of two patients, respectively in the state of coma (C) and in the state of brain death (D), are given for each channel with different thresholds. In general, we took samples of 100-5000 points, set m = 2, and set the tolerance to r = 0.15 × standard deviation (SD) or r = 0.25 × SD, according to [11,12]. In order to demonstrate the differences between the two types, we present one case from each group.
From Tables 1 and 2, we find that the calculated values of both ApEn and SampEn differ significantly between the coma and the brain death cases. The average difference between the two states indicates the capability to classify them. Particularly in the SampEn case with the threshold r = 0.15 × SD, there is a considerably larger difference between the two states; we therefore regard SampEn as the preferred method. To eliminate the possibility that the divergence is caused by different patients, we take another example, for a single patient, as a further result in Table 3. This patient progressed from coma to brain death.
Obviously, the results of both ApEn and SampEn for this patient, who exhibited both states, are markedly different, which shows that the approach is an applicable and meaningful method for carrying out brain death diagnosis.
Statistical Results.
To obtain more convincing results, ApEn and SampEn were applied to all our recordings. As shown in Figure 2, the box plots of ApEn and SampEn were calculated for all six channels with the same parameters. According to the two-way ANOVA against the null hypothesis that the two groups have the same mean, significance is found and the null hypothesis is rejected for our EEG experiment. Note that the performance of each electrode differs because of the different distance to the reference; a comparison of the complexity measures across channels is also given by two-way ANOVA and summarized in Table 4. As seen from the box plots, the results obtained from channels Fp1 and Fp2 are more reliable and convincing for our brain death diagnosis. For the other channels, the determination of coma and brain death states is statistically clear; however, the values of the two states overlap within a common range, possibly causing misdiagnosis for an individual patient. Our further effort is therefore based on the data sampled through channel Fp1 or Fp2. A t-test is also used to show the significance (p < 0.05) for the two groups, marked with * for each channel in Figure 2.
Results for DApEn and DSampEn.
For real-time applications such as monitoring the state of a patient, it is necessary to introduce dynamics-based analysis to explore changes in the brain wave activity of patients over time. As shown in Figure 3, ApEn and SampEn are calculated for each second over the time coordinate (0-800 s) of the EEG signals; the figure shows the results of DApEn and DSampEn for a coma case (a), a quasi-brain-death case (b), and a case in which the patient progressed from coma to quasi-brain-death (c). Under the same experimental environment, the results of DApEn and DSampEn are obtained by (7) and (8). To obtain a smoother curve, the moving average method is applied to reduce the high-frequency part by averaging every ten points. For the coma case, the values of SampEn (green) remain low over time, while the values of ApEn (red) remain slightly higher than those of SampEn. For the quasi-brain-death case, SampEn assigns a higher value (purple) than ApEn (blue). This indicates a more powerful capability of DSampEn to classify the two brain consciousness states. Around 750 s, the large fluctuation is caused by serious contamination by noise. Moreover, the results of DApEn and DSampEn for the patient whose coma state and brain death state were both recorded are plotted in Figure 3(c). In this case, the patient's two states are well discernible because of the large difference in the values of both DApEn and DSampEn; however, the results for the coma state vary slightly and several data segments are under the influence of interfering noise. To make the estimates in Figure 3 solid, we give the corresponding average values and error bars in Figure 4. We define the average distance between the error bars as an estimate of the difference between the two methods and of the capability of state determination. It also helps show that the DSampEn values of the coma state and of the brain death state differ more than the corresponding DApEn values, which means that DSampEn is better at discerning the states of coma and brain death. As an expansion of the standard SampEn, DSampEn remains effective over long recordings.
From all the obtained results, we firmly believe that the complexity of the coma and brain death recordings can be used for brain consciousness determination. The plotted dynamic ApEn or SampEn indicates the state of a patient, and such time-dependent methods also help monitor the trend of a patient's state, so that clinicians can carry out emergency medical care before a critical deterioration; prediction methods themselves, however, are beyond the scope of this paper. With the help of our dynamic algorithm, an online EEG preliminary brain death determination system becomes feasible.
Conclusions
This paper has applied the recently introduced complexity analysis based on calculating the entropy of a time series in physiologic systems. For the approximate entropy and sample entropy measures, we have found a complexity-analysis-based criterion for the determination of quasi-brain-death, providing a reference for our proposed EEG preliminary procedure. Furthermore, based on this criterion, we have tested all our recordings and obtained statistically reliable results that are fully consistent with the clinical brain death determination. Finally, we have developed the novel DApEn and DSampEn algorithms for the online EEG preliminary brain death determination system. These methods may also be applied to the analysis of other physiologic time series.
| 3,266.2 | 2013-12-22T00:00:00.000 | ["Computer Science"] |
Aether Scalar Tensor (AeST) theory: Quasistatic spherical solutions and their phenomenology
There have been many efforts in the last three decades to embed the empirical MOND program into a robust theoretical framework. While many such theories can explain the profile of galactic rotation curves, they usually cannot explain the evolution of the primordial fluctuations and the formation of large-scale-structures in the Universe. The Aether Scalar Tensor (AeST) theory seems to have overcome this difficulty, thereby providing the first compelling example of an extension of general relativity able to successfully challenge the particle dark matter hypothesis. Here we study the phenomenology of this theory in the quasistatic weak-field regime and specifically for the idealised case of spherical isolated sources. We find the existence of three distinct gravitational regimes, that is, Newtonian, MOND and a third regime characterised by the presence of oscillations in the gravitational potential which do not exist in the traditional MOND paradigm. We identify the transition scales between these three regimes and discuss their dependence on the boundary conditions and other parameters of the theory. Aided by analytical and numerical solutions, we explore the dependence of these solutions on the theory parameters. Our results could help in searching for interesting observable phenomena at low redshift pertaining to galaxy dynamics as well as lensing observations; however, this may warrant proper N-body simulations that go beyond the idealised case of spherical isolated sources.
INTRODUCTION
The presence of an invisible form of matter, dark matter, permeating the Universe throughout its cosmic evolution is the most popular explanation for the origin and number of large-scale-structures in the Universe as well as for their internal dynamics. Yet dark matter particles remain elusive despite the vast array of experimental programmes searching for such particles with great sensitivity, see e.g. Agrawal et al. (2021); Aalbers et al. (2022); Aprile et al. (2023).
While the particle dark matter hypothesis is still far from being excluded, the task of coming up with alternative theories which fit galactic and cosmological observations remains important and becomes increasingly so, especially in the absence of direct evidence for particle dark matter. One such alternative, Modified Newtonian Dynamics (MOND), was formulated in Milgrom (1983) and Bekenstein & Milgrom (1984) to explain the anomalous rotation curves of galaxies at large radii. MOND postulates that Newton's second law must be modified for accelerations a with magnitude smaller than a0 ∼ 1.2 × 10⁻¹⁰ m/s² to explain the rotation curves of galaxies (Begeman et al. 1991; Milgrom 1988a). This is neatly captured by changing Newton's second law to µ(y) a = −∇Φ, with y ≡ |a|/a0 and Φ being the gravitational potential determined from the standard Poisson equation sourced by baryons only. This works if µ(y) ≃ y for y ≪ 1 (the so-called MOND regime) and µ(y) ≃ 1 for y ≫ 1 (the Newtonian regime).
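To make the effect of the modified second law concrete, the sketch below numerically solves µ(|a|/a0) a = GM/r² for a point mass and shows the resulting circular velocity flattening to (G M a0)^(1/4). The "simple" interpolation µ(y) = y/(1+y) and the galaxy mass are assumptions made purely for illustration; they are not choices fixed by the text above.

```python
# Numerical illustration of mu(|a|/a0) a = g_N for a point mass,
# assuming the common "simple" interpolation mu(y) = y/(1+y).
import numpy as np
from scipy.optimize import brentq

G = 4.30091e-6      # kpc (km/s)^2 / Msun
a0 = 3.7e3          # 1.2e-10 m/s^2 expressed in (km/s)^2 / kpc
M = 5e10            # Msun, a rough galaxy-scale baryonic mass (illustrative)

def mond_accel(gN):
    """Solve mu(a/a0) * a = gN for the true acceleration a."""
    f = lambda a: (a / a0) / (1.0 + a / a0) * a - gN
    return brentq(f, 1e-12, 1e6)

for r in [1.0, 10.0, 100.0]:                       # kpc
    gN = G * M / r**2
    a = mond_accel(gN)
    v = np.sqrt(a * r)                             # circular velocity in km/s
    print(f"r = {r:6.1f} kpc   v = {v:6.1f} km/s")

print("asymptotic flat velocity (G M a0)^(1/4) =", (G * M * a0) ** 0.25, "km/s")
```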
MOND faces a fundamental theoretical problem: it is not a relativistic theory but rather an empirical model. As such, MOND cannot be used to compute the formation and distribution of large-scale-structures in the Universe. A number of relativistic theories, typically extensions of General Relativity (GR), have been proposed to palliate this issue, and all have been constructed to lead to MOND behaviour in the non-relativistic limit (see Famaey & McGaugh (2012b) for a review).
One of these theories, the Tensor-Vector-Scalar (TeVeS) theory (Sanders 1997; Bekenstein 2004), introduces a unit-timelike vector and a scalar field in addition to the usual gravitational metric tensor. These fields are combined to define a second metric tensor which is used to determine the geodesics of ordinary standard model matter. With this bimetric disformal (owing to the vector field) structure, TeVeS leads to the equality of dynamical and lensing mass and thus can generate the right amount of gravitational lensing produced by baryon-only galaxies, as if dark matter were present. Perturbations of the vector field also play a major role cosmologically, as their time evolution can lead to matter power spectra in line with observations (Skordis et al. 2006; Dodelson & Liguori 2006). However, despite these important improvements over other theories, TeVeS fails to fit the angular distribution of the CMB (Skordis et al. 2006). Moreover, TeVeS leads to a tensor-mode gravitational wave speed cGW which differs from the speed of light cEM and has thus been ruled out by the simultaneous observation of the GW170817 and GRB170817A events (Abbott et al. 2017).
The recent Aether Scalar Tensor (AeST) proposal of Skordis & Zlosnik (2021) retains the unit-timelike vector field and the scalar from TeVeS, but has only one metric tensor and thus no bi-metric structure. Unit-timelike vector fields have been dubbed "aether fields" in other instances, see Jacobson & Mattingly (2001), hence the naming of this theory. The success of AeST theory rests on maintaining cGW = cEM in all situations and on fitting the CMB and matter power spectrum data quite convincingly, while retaining a MOND limit in galaxies and the correct gravitational lensing.
Cosmologically, the scalar in AeST theory evolves as in shift-symmetric k-essence (Scherrer 2004), which results in its cosmological energy density behaving like dust, i.e. ∝ (1+z)³ plus small decaying corrections. This k-essence-like behaviour of AeST theory leads to spontaneous breaking of time diffeomorphisms, as in the Ghost condensate (GC) theory (Arkani-Hamed et al. 2004a), which results in the metric potential Ψ (see Skordis & Zlosnik (2021, 2022) for details) acquiring a mass term µ. The quasistatic long-distance behaviour of AeST theory thus departs from MOND, but is still different from the GC due to the presence of the non-canonical MOND term and the vector field. Our aim in this paper is to understand precisely the behaviour of AeST theory in spherically symmetric static situations, given the new features introduced by µ.
We present the AeST theory in Section 2 and derive the gravitational equations for quasistatic distributions of matter. In Section 3, we study the solutions obtained for density profiles assuming spherical symmetry. This work is a critical step towards N-body simulations of large-scale-structure formation in the late Universe, needed to compare AeST theory with non-linear observables, such as those related to galaxy clusters (including the bullet cluster). It can also provide a guide on how extensions of GR at both early and late times may be distinguished from models based on the dark matter hypothesis.
THEORETICAL MODEL
We now summarise the main features of AeST theory as proposed in Skordis & Zlosnik (2021), the theory parameters, and the transition scales between the three regimes (Newtonian, MOND and "µ-domination") which feature in this theory. More details regarding the underlying setup and derivation of the weak-field static equations may be found in Skordis & Zlosnik (2021, 2022).
Static weak-field equations
The fields of AeST theory are the metric gµν, a unit-timelike vector field Aµ such that AµA^µ = −1, and a scalar field φ. Perturbing the field equations around Minkowski spacetime and ignoring time derivatives reduces the field dependence to just two potentials, Φ and χ; see Skordis & Zlosnik (2021, 2022). The gravitational potential Φ results from the metric perturbation. Matter fields couple minimally to this potential and follow its geodesics, so that the Einstein equivalence principle holds; specifically, gravitational accelerations are determined from ∇Φ. The potential χ is a gauge-invariant combination of a scalar mode contained in the spatial part of the vector field Aµ and the perturbation of the scalar field φ. In addition to Φ and χ, we find it useful to define the potential Φ̂ through the relation Φ = Φ̂ + χ, (1) treating Φ and χ as the fundamental fields and Φ̂ as a derived variable. With these variables, the weak-field equations of AeST theory, which extend the standard Poisson equation and determine the spatial dependence of the potentials Φ, χ and Φ̂, can be written in the form of equations (2) and (3), while a third, non-independent equation is (4). In these equations, GN is Newton's constant, ρb is the baryonic density, J = J(Y) is a function of the variable Y = (g^µν + A^µA^ν)∇µφ∇νφ, µ is the mass parameter and β0 is the inverse screening parameter, the screening parameter being λs ≡ 1/β0. The latter determines how much χ contributes to Φ through (1) in the large gradient limit of (2), (3) and (4); see below. The mass parameter µ is determined through a specific combination of parameters which enter the action of the theory, see Skordis & Zlosnik (2021, 2022) for details; however, for our purposes we treat it as a free parameter in this work. It can be shown that linear stability of the theory on Minkowski space imposes λs > 0 and µ² > 0; in practice, however, µ⁻¹ must be of order 1 Mpc or larger to ensure that MOND solutions are attainable at galactic scales. Indeed, precise observations of the extent of flat galactic rotation curves can be used to put strong constraints on µ⁻¹ (Mistele et al. 2023). In the static weak-field case the "spatial gradient" combination Y reduces to Y = |∇χ|². (5) The function J then controls the behaviour of static weak-field configurations through its dependence on Y, or more specifically on √Y = |∇χ|. When √Y ≫ a0 the function J is constructed so as to recover GR as closely as possible. This can happen if J satisfies the limit given in (6), which also serves as the definition of the screening parameter λs, a parameter which must be part of the function J. As λs → ∞ (equivalently β0 → 0), then χ → 0, so that the contribution of χ to Φ through (1) is totally screened. In the opposite limit, √Y ≪ a0, the function J is constructed to lead to MOND behaviour, which is possible if J satisfies the limit given in (7). Three comments are in order: • In deriving (2)-(4) one also finds that the two standard weak-field metric potentials Φ and Ψ are equal. This ensures that the lensing mass is equal to the dynamical mass. This means that for any source configuration where the MOND limit of AeST theory successfully replaces dark matter, the gravitational lensing signal for the same source will also be as if dark matter were present.
• The case χ = 0 corresponds to having Φ = Φ̂, and the above system of equations reduces to a single equation for Φ which is identical to the non-relativistic weak-field limit of the Ghost condensate model (Arkani-Hamed et al. 2004b).
• Because of the term µ²Φ, the system of equations (2)-(4) is not precisely that of TeVeS theory but tends to it as µ → 0. However, dust-like solutions in cosmology require µ to be non-zero, with µ → ∞ corresponding to the pure Higgs phase (Arkani-Hamed et al. 2004b) where the solutions are exactly dust-like throughout the entire history of the Universe. Thus AeST theory will have solutions which depart from the MOND paradigm at a large enough distance away from the matter source. This results in a third regime, different from both GR and MOND, as we study thoroughly below.
Newtonian, MOND and µ-domination limits and corresponding transition scales
We define the dimensionless variable x ≡ |∇χ|/a0 (8) and the new function f = f(x), (9) which corresponds to the usual MOND interpolation function and is designed to provide a continuous transition between the Newtonian and MOND regimes as follows:
f → λs (large gradient limit, x ≫ 1), f → x/(1 + β0) (small gradient limit, x ≪ 1), (10)
with β0 ≡ 1/λs (see also the equivalent limits in terms of J in equations (6) and (7)). We note that the large gradient limit corresponds to Newtonian behaviour to lowest order in the potentials and GR-like behaviour in the full relativistic strong-field case (up to parametrically small post-Newtonian corrections).
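As a quick numerical check of the limiting behaviour in (10), the sketch below evaluates a made-up interpolation function that has the required small- and large-gradient limits. The functional form is an illustrative assumption only; it is not one of the functions (27)-(29) used later in the paper.

```python
# Illustration of the limits in (10) with an invented interpolation function.
import numpy as np

def f_example(x, lam_s):
    """Tends to x/(1+beta0) for x << 1 and to lam_s for x >> 1 (beta0 = 1/lam_s)."""
    beta0 = 1.0 / lam_s
    return x / (1.0 + beta0 + x / lam_s)

lam_s = 2.0
for x in [1e-3, 1e-1, 1e1, 1e3]:
    print(f"x = {x:8.3g}   f = {f_example(x, lam_s):8.4f}   "
          f"small-x limit = {x / (1 + 1 / lam_s):8.4f}   large-x limit = {lam_s:.4f}")
```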
2.2.1 Large gradient limit: strong-field/Newtonian regime
Consider first the large gradient limit, which leads to the strong-field (and Newtonian) regime. Setting f = λs in equation (3) and integrating twice results in χ = β0Φ̂ + χ0, where we have neglected a possible curl term, which is not relevant in spherically symmetric situations (see also the discussion in 3.3). Substituting this relation into (1) gives Φ̂ = (Φ − χ0)/(1 + β0), and with these conditions (2) takes the form of the inhomogeneous Helmholtz equation (11). Equation (11) also results from taking the non-relativistic weak-field limit of the Ghost condensate model (Arkani-Hamed et al. 2004b). For scales smaller than ∼ µ⁻¹/√(1 + β0), (11) reduces to the standard Poisson equation describing Newtonian gravity. The solution to (11) for scales larger than ∼ µ⁻¹/√(1 + β0) is not physically relevant to the models studied in this article, since the systems we study first transition to the small gradient regime before the scale ∼ µ⁻¹/√(1 + β0) is reached.
Note that the integration constant χ0 does not affect (11) and so does not play a role in the large gradient regime. However, its value survives into the small gradient regime and, as we discuss below, it leads to observable effects at very large distances from the source.
Small gradient limit: MOND and µ regimes
Consider now the small gradient limit, where f = x/(1 + β0); see (10). Then (4) becomes an equation which reduces to (13), the MOND equation for χ, in the limit where µ → 0, while Eq. (2) turns into a corresponding equation for Φ̂. The latter has an exterior solution scaling as |∇Φ̂| ∼ 1/r², while the deep-MOND solution to (13) (for µ = 0) is |∇χ| ∼ 1/r. Hence, for large enough r (but not too large, so that we remain within the regime where µ has no influence and can be set to zero) we have |∇χ| ∼ 1/r, implying that the solution for Φ is also MOND-like.
When r becomes even larger, the (1 + β0)µ²Φ term in the above equation becomes important, leading to a new regime specific to AeST theory which departs from MOND. This happens because, since |∇χ| ∼ 1/r, the first term is expected to scale as 1/r³, while the second term grows as ln r and is therefore bound to become important at some scale rC. We call this the µ-dominated regime.
We now derive the transition scales from Newton to MOND and from MOND to µ-domination in the case of a point source.
Transition from Newtonian to MOND regime
The transition from Newtonian to MOND behaviour in the "classical" MOND paradigm occurs when the Newtonian force is equal to the MOND force, and is given by the familiar MOND radius rM = √(GN M/a0). However, in AeST theory, the MOND force is coupled only to the χ component of the field. Therefore, the radius at which the total force deviates from the Newtonian force is determined by the χ component only. In the Newtonian limit of AeST theory, χ = β0Φ/(1+β0), with ∇Φ tracking the Newtonian force (i.e. |∇Φ| = GN M/r²). Hence ∇χ departs from the Newtonian limit and enters the MOND regime when ∇χ is no longer proportional to |∇Φ| = GN M/r². This happens when the two limiting situations in (10) become comparable, that is, when λs ∼ x/(1 + β0). Hence, different transition scales may occur depending on whether we consider interior or exterior solutions. In the exterior case, we call this transition radius rχ, defined by equation (16), such that rχ ≤ rM always. Given Eq. (16) and the fact that β0 ≪ 1, we expect rχ ≪ rM. To determine when the total AeST force enters the MOND regime, we also need to require that χ is the dominant component of Φ, that is ∇Φ̂ ≪ ∇χ. This domination occurs at a distance r̃M, defined as the point when the two gradients become comparable, such that r̃M ≥ rM always. In practice, β0 is expected to be small, so that the two scales rM and r̃M are approximately equal.
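For orientation, the snippet below evaluates the familiar MOND radius rM = √(GN M/a0) for a few illustrative point-source masses; the masses themselves are arbitrary examples, not values used in the paper.

```python
# Quick numerical check of the MOND radius r_M = sqrt(G_N M / a0).
import numpy as np

G = 4.30091e-6          # kpc (km/s)^2 / Msun
a0 = 3.7e3              # 1.2e-10 m/s^2 in (km/s)^2 / kpc
for M in [1e9, 5e10, 1e12]:                  # Msun (illustrative)
    rM = np.sqrt(G * M / a0)
    print(f"M = {M:.1e} Msun  ->  r_M = {rM:6.2f} kpc")
```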
The different transition scales above are depicted in Fig. 7 where we discuss how the solutions change when we vary the parameter λs.
Transition from MOND to µ-domination
To estimate the transition between MOND and µ-domination, which we define to occur at the scale rC, we consider (13) for a point source of mass M. Estimating derivatives as d/dr ∼ 1/r and treating the logarithmic terms appearing in the MOND solution for Φ, i.e. ln(r/rM), as being O(1), the terms on the LHS of (13) become comparable at the scale rC when the condition in (18) holds, where χ ∼ Φ ∼ √(GN M a0) × O(1). This gives the simple estimate rC ∼ (rM/µ²)^(1/3).
Nevertheless, defining χout can be done in an unambiguous way, so that a better estimate for rC which interpolates between the two extreme cases above can be derived.We summarise this here and leave the details for appendix-A.In the small gradient regime, see 2.2.2, the details of the interpolation function are unimportant and we may set f → x/(1+β0) from (10) resulting in (13).First, assuming that µ = 0 the solution is χ = √ GNM a0 χout + ln r rM , which serves to define χout as χout ≡ χ(rM)/ √ GNM a0.To determine when the full µ = 0 solution deviates from the pure MOND solution, we expand the former in terms of the MOND solution plus a small perturbation.We find that when χout takes a specific value that we denote as χ(max) out , the transition scale rC reaches a maximum.In appendix A we show that this specific value is given by We then denote deviations from this extreme value as ∆, so that a general value for χout is determined from thus, the actual value for the boundary condition is fully determined from ∆.
We then determine (appendix A) a better estimate for rC which is where the factor of 1/3 is inserted to create a more conservative estimate.Thus, our first naive estimate above (rC ), simply corresponds to the maximum rC case which is obtained when ∆ = 0, while the 2nd extreme case above is equivalent to ∆ 1.To summarize, rC denotes the scale where deviations from pure MOND solutions can occur due to the onset of the µ regime.As we show below, for distances r > rC the solution to the weak-field equations becomes oscillatory but still decaying (similarly to the Helmholtz equation).Importantly, one may use data to constrain rC, leading to global constraints on µ and specific constraints for ∆ which may be different for individual astrophysical sources.We discuss the dependence of the solution on ∆ in section 4.2.3.
NUMERICAL SOLUTIONS: SETTING UP THE PROBLEM
In order to investigate how AeST theory solutions differ from those of the classical MOND theory, we define a baryonic system representative of an idealized spherical galaxy and calculate the solutions of the field equations, as described in the previous section. In the case of MOND, solutions for the force for spherical systems can be obtained analytically starting from a Newtonian solution. However, due to the mass term µ, analytical solutions are no longer possible in AeST theory and we need to rely on a numerical approach. We describe in this section the setup that we use to obtain the spherically symmetric solutions.
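To illustrate the "classical MOND from a Newtonian solution" route mentioned above, the sketch below computes the radial force for a spherical baryonic source (a Hernquist profile, introduced in the next subsection) by algebraically inverting µ(g/a0) g = gN. The simple interpolation µ(y) = y/(1+y) is an assumption used here only for illustration; the paper's interpolation functions f are introduced later.

```python
# Classical spherical MOND force obtained from the Newtonian force profile,
# assuming the simple interpolation mu(y) = y/(1+y).
import numpy as np

G = 4.30091e-6          # kpc (km/s)^2 / Msun
a0 = 3.7e3              # (km/s)^2 / kpc
M, a_h = 5e10, 2.0      # total baryonic mass [Msun] and scale radius [kpc], illustrative

r = np.logspace(-1, 3, 200)                     # kpc
M_enc = M * r**2 / (r + a_h)**2                 # Hernquist enclosed mass
gN = G * M_enc / r**2                           # Newtonian force per unit mass

# Algebraic inversion of mu(g/a0) g = gN for the simple function:
g = 0.5 * gN * (1.0 + np.sqrt(1.0 + 4.0 * a0 / gN))

for ri, gNi, gi in zip(r[::50], gN[::50], g[::50]):
    print(f"r = {ri:8.2f} kpc   gN = {gNi:10.3e}   g_MOND = {gi:10.3e}  (km/s)^2/kpc")
```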
Density profiles
We use two different density profiles for the analysis: a top-hat and a Hernquist profile, the latter being a good description of the baryonic component of spherical galaxies; see Hernquist (1990).
The top-hat profile is defined as a sphere of radius a_h of constant density ρc embedded in a uniform background, i.e. ρ(r) = ρc for r ≤ a_h and equal to the background density for r > a_h, (22) where ρc is the baryonic density at the centre of the profile. For our purposes, we fix the parameters of the profile to ρc = 3.45 × 10⁹ M⊙ kpc⁻³, which is ∼10⁷ times larger than the critical density of the Universe, and a_h = 2 kpc. This is representative of a galactic system. We discuss how changing the density affects the solutions in Section 4.2.5. The Hernquist density profile is given by (Hernquist 1990)
ρ(r) = M a_h / [2π r (r + a_h)³], (23)
where M is the total mass. Note that, for simplicity, we use the same symbol a_h to represent the radius of the top-hat sphere as well as the scale radius of the Hernquist profile. The Newtonian potential associated with this profile is (Binney & Tremaine 2008)
Φ_N(r) = −GN M / (r + a_h). (24)
[Figure 1: The three interpolation functions defined by (27), (28) and (29); for each function, the result is shown for three different values of λs: 1, 2 and 10.]
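The two profiles are straightforward to code; the sketch below follows (22)-(24) with the parameter values quoted in the text. The uniform background outside the top-hat is set to zero here purely for simplicity, which is an assumption of this sketch.

```python
# The two source profiles used in the analysis, coded from (22)-(24).
import numpy as np

G = 4.30091e-6                    # kpc (km/s)^2 / Msun
rho_c, a_h = 3.45e9, 2.0          # Msun / kpc^3, kpc (values from the text)

def tophat_density(r):
    """Top-hat: constant density inside a_h; background ignored (set to zero)."""
    return np.where(r <= a_h, rho_c, 0.0)

def hernquist_density(r, M):
    return M * a_h / (2.0 * np.pi * r * (r + a_h) ** 3)

def hernquist_potential(r, M):
    """Newtonian potential of the Hernquist profile, Phi_N = -G M / (r + a_h)."""
    return -G * M / (r + a_h)

M_tophat = 4.0 / 3.0 * np.pi * a_h**3 * rho_c    # total top-hat mass
r = np.array([0.5, 2.0, 10.0])                   # kpc
print("top-hat rho [Msun/kpc^3]:", tophat_density(r))
print("Hernquist Phi_N [(km/s)^2]:", hernquist_potential(r, M_tophat))
```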
AeST field equations for spherical isolated sources
In this section, we focus on solving (2) and (4) in order to predict the radial dependence of Φ̂ and χ, which are critical for understanding the behaviour of gravity for spherical isolated sources in AeST theory. The potential Φ used for particle accelerations is calculated from (1). The spherical versions of (2) and (4), from which we obtain the numerical solutions, are equations (25) and (26), where x = |∇χ|/a0. For setting boundary conditions we employ the analytic solutions of these equations at the (non-zero) radius where we start the numerical integration. Throughout this work, we use a0 = 1.2 × 10⁻¹⁰ m s⁻² (Begeman et al. 1991). We will use different expressions for the function f. Two are given in the literature (Famaey & Binney 2005; Zhao & Famaey 2006), equations (27) and (28), and agree with the limits defined in (10). However, we will also explore the consequences of a different function, defined in (29), which exhibits a sharp Newtonian-to-MOND transition and returns the correct limits defined in (10). This function has a turning point at x → λs + 1 and is not designed to be fully consistent with observations, but rather serves as a test function to expose the Newtonian-to-MOND transition radii and vice versa. Fig. 1 shows these three functions for three different values of λs. See McGaugh (2008) for additional discussion of interpolation functions.
Analytic solutions for the top-hat profile
Analytic solutions are required for setting up inner boundary conditions for the numerical solvers, as well as for testing that our numerical implementation of the complete solutions is correct, at least in the cases where µ = 0 or r ≪ rC, for which these analytic solutions exist. While our numerical solutions are obtained using (25) and (26), the analytical calculations are easier to perform using (25) together with the spherical version of (3).
We now derive the analytic solutions for Φ̂ and χ in the case of a top-hat source profile with the simple interpolation function. In Appendix B we present solutions for the top-hat profile with the sharp interpolation function, while in Appendix C we display the solution for the Hernquist profile with the simple interpolation function.
The solution for Φ̂ can be obtained by re-scaling the gravitational constant in the solution of the standard Poisson equation (e.g. Binney & Tremaine 2008). We then integrate (3) once to obtain equation (32). The field ∇×k appearing there is divergenceless and was discussed in detail in Bekenstein & Milgrom (1984), where it was shown to be exactly zero for particular symmetries (including spherical) and to behave at least as r⁻³ in non-symmetric situations (the effects of this so-called curl term on non-linear structure formation with pure MOND were studied in detail by Llinares et al. (2008) and Llinares (2011)). Since we are assuming spherical symmetry, we can ignore ∇×k and invert equation (32) to find ∇χ. Applying this procedure with the simple interpolation function (27) gives an expression for ∇χ which can be integrated to obtain a solution for χ. Letting M ≡ (4/3)πa_h³ρc be the total mass of the system, and defining the scale rI ≡ 4a_h³/rχ², the solution, equation (34), contains two integration constants, χin ≡ χ(0)/√(GN M a0) and χout, corresponding to the inner and outer regions respectively, both normalized to √(GN M a0) for later convenience. These two integration constants are not independent but are related by matching the solution at the boundary r = a_h in (34). Lastly, we start our numerical integration at a radius r0 (in the interior) and set the boundary condition as χ0 = χ(r0)/√(GN M a0), which by appropriate use of (34) may then be related to χin and χout. We study the dependence of the solution on the boundary condition in Section 4.2.3.
RESULTS
We first analyse the dependence of the numerical solutions on the model and density profile parameters.We then present a list of physical effects which make AeST theory different from standard MOND.
General properties of the solutions
Fig. 2 shows the numerical solution for the potentials (left) and their derivatives (right) for a fiducial set of parameters (λs, µ) = (1, 1 Mpc⁻¹), the "simple" interpolation function f(x) from (27) and the Hernquist density profile defined in (23). The blue solid curve in Fig. 2 corresponds to the Newtonian solution for Φ, which we denote as ΦN, and the green solid curve is the solution for Φ in AeST theory with µ = 0, which is equivalent to MOND and thus denoted as ΦMOND. Both ∇ΦN and ∇ΦMOND agree in the central region of the galaxy and thus give rise to the same force profile there. Farther away from the centre, the gradients fall below a0 (see horizontal lines in the right panel), leading to the characteristic logarithmic MOND potential for ΦMOND outside the source and a force that follows a 1/r relation.
The three additional curves in both panels of Fig. 2 are solutions of the field equations (25) and (26), with the total potential provided by the relation (1). The green dotted curve is the solution for Φ (and ∇Φ); the yellow dashed curves show the solution for χ (and ∇χ); and, finally, the pink dash-dot curve is the solution for the potential Φ (and ∇Φ). As we move farther away from the centre, between the MOND radius rM and rC, the theory tends to the classical MOND behaviour (i.e. a force law which not only has the same dependence on r, but also the same normalization). Farther away, when r > rC, the solutions enter the oscillatory regime, where the potential develops additional potential wells and the force can become repulsive.
In the next two sections, we describe how variations of the AeST theory parameters affect these reference solutions and what additional physical effects are associated with them.
Dependence on the free parameters
We now study the dependence of the solutions on the free parameters of the model and on the central density (and mass) of the source, assuming a top-hat density profile (22).
Interpolation function
Fig. 3 shows solutions for the gravitational potential (left column) and force (right column) obtained with the three interpolation functions f(x) defined by (27), (28) and (29). Each of the first three rows corresponds to a different potential; from top to bottom, these are Φ, χ and Φ. We fix the boundary condition χ(r0) at r0 to be the same for all three interpolation functions; hence, the inner Newtonian regime is identical in all cases displayed in the first three rows of Fig. 3. For the same reason, the MOND regime for the forces (right panels of Fig. 3) is also identical in all solutions; however, the potentials (left panels of Fig. 3) differ. The reason why the potentials reach different values as the system evolves towards the MOND regime is the sharpness or smoothness of the interpolation function f(x). Smooth transitions from the Newtonian to the MOND regime lead to higher values of the potential before it settles onto the MOND track (see also Fig. 1), after which its evolution becomes the same for all interpolation functions. When r ≳ rC, the solutions enter the oscillatory regime, and the value of Φ = Φ + χ then plays a role due to the mass term µ (see (25) and (26)). At that stage, since the value of the potential already differs between interpolation functions, the result is a change in the oscillation pattern, although the overall oscillation envelope stays roughly the same; see Fig. 3.
The last row of Fig. 3 shows the result when the boundary conditions are calculated individually for each interpolation function according to the prescription described in Section 4.2.3, as well as in Appendices A and B. With the boundary conditions appropriately calculated, the µ-dominated region shows identical behaviour across all the interpolation functions shown. This is expected, as all interpolation functions have the same limiting form at small gradients. However, given that they now have different boundary conditions at r0, the inner Newtonian regime (and, partially, the transition to the MOND regime) differs slightly between the functions. We do not show the exponential interpolation function, since we do not have an analytical solution for it with which to specify the χ boundary condition according to our prescription of Section 2.2.4 and Appendix A.
For the remainder of this work we will mainly focus on the "Simple" interpolation function (27), so as to simplify the plots and standardise the results.
Mass parameter µ
Setting the mass parameter µ to zero gives a standard MOND-like behaviour, in which the force follows the Newtonian solution in the inner regions at r < rM and transitions into a MOND regime when r > rM. Letting µ take positive values initiates spatial oscillations in the solutions for r ≳ rC. Choosing λs = 1 and the top-hat profile (leading to rM = 12.5 kpc), Fig. 4 shows solutions for three different values of µ = {0, 1, 10} Mpc⁻¹, corresponding to rC = {∞, 156, 33.6} kpc; see (21). All solutions for the force give the same result up to ∼ rC. After this point, the solutions deviate from each other, with the oscillations in the µ-dominated region starting at smaller r for larger µ. Observations of galaxies do not show evidence for the presence of repulsive gravitational forces affecting their internal dynamics, but only evidence of the MOND regime. Requiring rC to be larger than the virial radius of the Milky Way (∼ 200 kpc) gives the estimate µ⁻¹ ≳ 1 Mpc. Parameter estimation using several different astrophysical objects and proper modelling of the observable quantities may provide more accurate bounds on this parameter, but this goes beyond the scope of our current work; see, however, Mistele et al. (2023) for a first attempt in this direction. Fig. 5 shows Newtonian and AeST forces in the region r ≳ rC, assuming µ = 1 Mpc⁻¹ and λs = 1. Owing to the non-linearity of the AeST theory equations, the oscillations are far from sinusoidal, acquiring a very steep slope when crossing zero. We generally find that the mean spatial frequency of the oscillations decreases with µ; however, given the non-linear nature of the equations, we do not have an analytical estimate for the exact relation. Furthermore, the wavelength is not constant throughout the domain, but decreases slightly with r. Further detail of the zero crossing is shown in the bottom panel of Fig. 5, indicating that the second derivative might become singular at the crossing. This, however, is an artifact: an alternative way of solving the system of equations (2) and (4) using a Hamiltonian approach (Durakovic & Skordis 2023) shows that the solution remains regular there.
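As a rough sanity check on the scales quoted above, the MOND radius can be evaluated directly once a total mass is fixed. The short sketch below is not taken from the paper; it simply assumes the standard definition rM = √(GN M/a0) and inverts it to see which total mass corresponds to the quoted rM = 12.5 kpc, using a0 = 1.2 × 10⁻¹⁰ m s⁻².

```python
# Hypothetical sanity check: invert r_M = sqrt(G*M/a0) for the quoted r_M = 12.5 kpc.
# Assumes the standard MOND-radius definition; this is not code from the paper.
G = 6.674e-11          # m^3 kg^-1 s^-2
a0 = 1.2e-10           # m s^-2 (Begeman et al. 1991)
kpc = 3.0857e19        # m
Msun = 1.989e30        # kg

r_M = 12.5 * kpc                     # quoted MOND radius for the fiducial top-hat source
M = r_M**2 * a0 / G                  # mass implied by r_M = sqrt(G*M/a0)
print(f"implied total mass: {M / Msun:.2e} Msun")   # ~1.3e11 Msun, a Milky-Way-like value

# Length scale associated with the mass parameter mu = 1 Mpc^-1
Mpc = 1e3 * kpc
mu = 1.0 / Mpc
print(f"mu^-1 = {1.0 / mu / kpc:.0f} kpc")          # 1000 kpc, well beyond the virial radius
```

The implied mass of order 10¹¹ solar masses is consistent with the bulge-like fiducial source used throughout the paper.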
Boundary conditions
The Poisson and classical MOND equations depend on the potential only through its derivatives; hence, adding a constant to the solution is not physically relevant and does not affect the gravitational force. The situation is different in AeST theory due to the presence of the mass term µ²Φ in (25) and (26), which makes the value of the potentials physically relevant at r > rC. In other words, the boundary conditions are important in AeST theory. We now investigate their impact on the solution, which determines the depth and radial profile of the gravitational potential Φ and its derivative.
As discussed in Sec. 2.2.4, the boundary condition ∆ directly affects where the solution departs from pure MOND and enters the µ-regime. This is estimated according to (21). The effect of varying ∆ is illustrated in Fig. 6 and, as the numerics confirm, ∆ = 0 constitutes the fiducial zero point corresponding to the maximum rC.
To investigate this, we choose a value χ0 at r = r0 such that χout = χout^(max) + 1 − (1/2) ln 2 through the use of (34), where χout^(max) is defined in (19). This corresponds to ∆ = 0 in (20), as shown by the magenta dash-dot curve in Fig. 6. The expectation from (21) is that this also corresponds to the maximum rC when all the other parameters, e.g. µ and λs, are kept fixed. Indeed, this is what is observed in the upper row of Fig. 6. Varying ∆ to negative (positive) values results in χout being smaller (larger). However, in both cases rC decreases from the value reached for ∆ = 0, as expected from (21), where ∆ enters with an absolute value.
There are two further related effects that are worth describing. Firstly, when ∆ is positive, the force ∇Φ passes through zero at smaller distances than in the fiducial ∆ = 0 case; hence, one observes a smaller oscillation phase. Conversely, for negative ∆ the opposite happens: ∇Φ passes through zero at larger distances than in the fiducial ∆ = 0 case, and one observes a larger oscillation phase. This is seen in both the upper and lower panels of Fig. 6. Secondly, when ∆ deviates significantly from zero, the force reaches an extensive plateau, which is either negative (for ∆ > 0) or positive (for ∆ < 0), before entering the oscillatory regime. This effect is more pronounced in the lower panel of Fig. 6.
In addition, λs controls the effective gravitational strength which couples Φ to the density ρ; see (25) and also (30) and (31). Fig. 7 shows the effect of varying λs on the solution for the potential Φ whilst keeping the other parameters fixed. We have used the Sharp interpolation function (29) to show these effects clearly. The zoom-in portion of the figure (bottom left) shows the transition from Newtonian to MOND behaviour in the exterior of the source. We see the two scales rM and rχ at work, and their dependence on λs. The scale rχ signifies when the χ component, and the total field Φ = χ + Φ, change from a Newtonian to a MOND behaviour; the transition to the full MOND force is essentially complete when the point rM is reached. We note that both the Newtonian and MOND magnitudes of the force are independent of λs; only the transition is affected by this parameter. Towards larger radii, when r ∼ rC, the solution approaches the µ-dominated regime as discussed above. There is a mild dependence on λs as to where that happens, since rC ∝ λs^(1/3)/(1 + λs)^(1/3), and this is clearly seen in the zoom-in region (bottom right). The main effect of this shift is on the starting point of the oscillations, which sets the overall oscillation phase.
Central Density and total mass
In our discussion so far, we have assumed a fiducial central density representative of the bulge of the Milky Way or an average spherical galaxy (Widrow et al. 2008). In Fig. 8 we show the effect of changing this central density on the total potential Φ (left panel), on the total force ∇Φ (middle panel), and on the total force when the mass is kept constant by adjusting the source size ah (right panel). We observe that larger central densities (resulting in a larger total mass) make the inner potentials deeper and increase the overall force throughout the Newtonian and MOND regimes, but also increase the magnitude of the oscillations in the µ-dominated regime. The opposite happens when decreasing the central density, and this overall behaviour is what we expect based on the dependence of the transition scales on the total mass M. Indeed, in the right panel we see the effect of keeping M constant, which is that the exterior solution remains the same and the only differences occur in the interior.
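The statement that the exterior solution is controlled by the total mass can be made concrete with the top-hat relation M = (4/3)π ah³ ρc used earlier. The snippet below is only an illustration (the fiducial ah is not quoted in the text, so the value used here is an assumption): it rescales ρc and shows how ah must be adjusted to keep M fixed, which is what the right panel of Fig. 8 does.

```python
import numpy as np

def total_mass(rho_c, a_h):
    """Top-hat mass M = (4/3) * pi * a_h^3 * rho_c (units: Msun, kpc)."""
    return 4.0 / 3.0 * np.pi * a_h**3 * rho_c

rho_fid = 3.45e9      # Msun / kpc^3, fiducial central density (Fig. 8)
a_h_fid = 2.0         # kpc, assumed fiducial top-hat radius (not quoted in the text)
M_fid = total_mass(rho_fid, a_h_fid)

for factor in (0.5, 1.0, 2.0):
    rho = factor * rho_fid
    # radius needed to keep the total mass equal to the fiducial one
    a_h_fixed_M = (3.0 * M_fid / (4.0 * np.pi * rho)) ** (1.0 / 3.0)
    print(f"rho_c x{factor}: M(a_h fixed) = {total_mass(rho, a_h_fid):.2e} Msun, "
          f"a_h for fixed M = {a_h_fixed_M:.2f} kpc")
```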
CONCLUSIONS
In this paper, we predict the profile of the gravitational potential and the associated force expected in spherical galaxies in the AeST theory. The latter is the first extension of GR that successfully fits the CMB angular and matter power spectra without a dark matter component. The non-relativistic limit of AeST theory differs from the classical MOND theory in that the field equations (equivalent to the Poisson equation in the standard gravity case) include a mass term. This new ingredient leads to a different gravitational phenomenology, which we investigate here for two different spherical density distributions: a top-hat and a Hernquist profile.
We identified three characteristic regimes in the solutions, independent of the density profile: a Newtonian regime, a MOND regime and finally, an oscillatory regime, where the mass term dominates and the fields (as well as the forces and their associated dynamical mass) develop spatial oscillations.Focusing on the case where the MOND regime appears for intermediate scales and the oscillatory regime at larger scales, we find that the transition from the Newtonian to the MOND regime depends on the usual acceleration parameter a0, and the screening parameter λs, while the transition from MOND to the oscillatory regime depends on the mass parameter µ, the inner boundary condition and the total mass of the gravitating object through (21).This means that the oscillations do not appear at a fixed radius, but have a more complex dependence which is different for every object and depends on their mass distribution.On the other hand, we find no strong dependence of the solutions with the free function that regulates the speed of the transition between the Newtonian and MOND regimes.
The distinct behaviour of the gravitational potential and forces observed here should have remarkable consequences. Accurate predictions would require N-body simulations, but we do expect the matter density and distribution to be impacted and to differ from those in the ΛCDM model. Given the oscillations at large scales corresponding to the µ-dominated regime, one could speculate that these will translate into the existence of ring-like structures far away from the galactic centre, which may eventually resemble observed structures such as galactic rings (Buta & Combes 1996). Furthermore, the enhancement (suppression) of the AeST gravitational potential in the inner galactic regions for certain values of the inner boundary condition (see Fig. 6) and interpolation function (see Fig. 3) could be misinterpreted as a higher (lower) dark matter density in the ΛCDM framework. This could affect lensing analyses, galaxy cluster profiles, dark matter indirect (and potentially also direct) detection predictions, and presumably the whole galactic (and possibly early Universe) evolution.
The main conclusion of our work is that AeST theory is a potential alternative to particle dark matter, but N-body simulations are needed to make sure that the non-linear formation of large-scale structure in this theory and the gravitational structure of galaxies are consistent with observations.
Figure 1.Interpolation functions explored in this work.Different colours correspond to the different functions defined by (27), (28) and (29).For each function, we show the result for three different values of λs: 1, 2 and 10.
Figure2.Numerical solution for the potentials (left) and their derivatives (right) for the Hernquist density profile and the fiducial model parameters with (λs, µ) = (1, 1 Mpc −1 ).The blue, green and red regions delineate the Newtonian, MOND and Oscillatory regions respectively.The yellow and green dashed lines are for the potentials Φ and χ respectively, and the purple dash-dot line is the potential Φ which is responsible for defining particle accelerations through its derivative.We have included the Newtonian (blue) and classical MOND (green) solutions for comparison.The break in the blue curve at ∇Φ = 10 −5 is not physical, but related to the symlog scaling that we use for the vertical axis of the right panel.
Figure3.Sensitivity of the solutions to changes on the interpolation function f for the top-hat profile with (λs, µ) = (1, 1Mpc −1 ).On the left we show the potentials and on the right their derivatives.From top to bottom, the panels contain Φ, χ and Φ.The horizontal dash-dot lines in the right panels denote Milgrom's constant a 0 .For each panel, the three lines each show an interpolation function and we also show the Newtonian solution in blue for comparison.The break in the blue curve at ∇Φ = 10 −5 is related to the symlog scaling that we use for the vertical axis of the right panels.
Figure 4. Sensitivity of the solutions to changes on the mass parameter µ on the potential Φ (Left) and its derivative (Right) for the top-hat profile and λs = 1.The horizontal dash-dot lines in the right panels denote Milgrom's constant a 0 .The blue solid line is the Newtonian solution, shown for comparison.The other three lines correspond to three different values of µ = {0, 1, 10}Mpc −1 .The break in the curves at ∇Φ = 10 −5 is related to the symlog scaling that we use for the vertical axis of the right panels.
Figure6.Sensitivity of the solutions to changes on the inner boundary condition employed to obtain numerical solutions.We show solutions for metric perturbation Φ for the top-hat profile and (λs, µ) = (1, 1Mpc −1 ).Left and right columns are the field and its radial derivative (i.e. the force that defines particle trajectories).The horizontal dash-dot lines in the right panels are the MOND constant a 0 .The two different rows correspond to mild (upper row) and large (bottom row) offsets on the fiducial boundary condition which we define as the value of the Newtonian solution in the center.The blue line is the Newtonian solution, shown for comparison.The break in the blue curve at ∇Φ = 10 −5 is related to the symlog scaling that we use for the vertical axis of the right panels.
Figure7.Sensitivity of the solutions to changes on the model parameter λs.We show solutions for metric perturbation Φ for the tophat profile and µ = 1Mpc −1 .In this particular figure we used the Sharp interpolation function to highlight the transition between the Newtonian and MOND limits.Top left and right plots are the field and its radial derivative (i.e. the force that defines particle trajectories).The horizontal dash-dot lines are the MOND constant a 0 .The blue line is the Newtonian solution, shown for comparison.The break in the blue curve at ∇Φ = 10 −5 is related to the symlog scaling that we use for the vertical axis of the top right panel.See section 4.2.4 for the explanation of the bottom panels.
Figure 8. Sensitivity of the solutions to changes in the normalization of the density profile. The left panel shows the total potential Φ, and the middle and right panels show its radial derivative. While in the left and middle panels the profile radius ah is kept constant (so that the total mass differs), in the right panel the mass is kept the same for all curves by adjusting ah. The continuous cyan curve is the fiducial model used in previous figures, with ρc = 3.45 × 10⁹ M⊙ kpc⁻³. The other curves show perturbations around this value. The inner boundary condition for the potential is the same as used in previous figures.
"Physics"
] |
Microtremor data and HVSR method in the kaligarang fault zone Semarang, Indonesia
The data presented contains a collection of microtremor measurement data in the Kaligarang Fault zone, Indonesia. This study aims to present data regarding the structure and lithology of the rocks in the Kaligarang Fault zone. Data was obtained using a triaxial geophone VHL PS 2B and recorded with a data logger type GL 240 with a duration of 10 minutes. The data in this study are primary data taken using a single-station microtremor device with a total of 75 acquisition points in the Kaligarang Fault zone. This data can then be used in the analysis of the Horizontal to Vertical Spectral Ratio (HVSR) which will later produce the hv curve. The vp, vs, density and thickness profiles of each data were obtained from carrying out the inversion process using dinver.
Field conditions and coordinates were recorded during raw data measurement. Potential movement or disturbance around the seismometer is also recorded as noise in the data. The seismometer must be checked first so that it is integrated with the data logger. The seismometer must be placed on stable soil, cleared of gravel, grass and roots. The bubble on the seismometer must be centred in the bull's-eye level and the instrument must be oriented towards north. In this paper, 10 minutes of microtremor data is considered sufficient to represent each location.
Value of the Data
• Microtremor data can be used to improve further testing on other fault zone cases.
• The microtremor data set can be used as a comparison and linked to other geophysical measurement tools or methods.
• The data allow for analysis of lithology and subsurface structures, such as stratification within geological units, and faults.
• Provide information related to fault zones to the local government for consideration in urban planning for the area around the fault zone.
• Raw data can be reprocessed to produce 3D images of the Kaligarang fault zone.
Objective
This data set was collected to investigate rock structures in fault zone areas. The knowledge gained from this research can be used to determine fault structure, seismic vulnerability, amplification factors, etc. The data set is measured over a fairly large range of locations. Having sufficient and extensive data avoids excessive extrapolation, so that subsurface conditions along the Kaligarang fault zone, Indonesia, can be known thoroughly, broadly and accurately. This research also adds to the existing body of data on the Kaligarang fault zone, Indonesia.
Data Description
The Kaligarang Fault is a fault located in the city of Semarang, oriented in a north-south direction. According to Helmy [1], the Kaligarang fault is a shear fault with a roughly north-south strike (N5°E-N185°E). There are 7 fault locations in the Kaligarang main fault zone. This zone continues to develop to the present day, so that restraining and releasing structures have formed [2]. Fig. 1 shows the location of data collection for a total of 75 points in the Kaligarang fault zone with a 2D acquisition model. The vertical axis in Fig. 1 shows latitude and the horizontal axis shows longitude. The star symbol in Fig. 1 marks the location of the chain structure. The elevation at each data point varies greatly, from about 13 to 358 meters above sea level. Location data are provided in the attachment coordinates.xlsx, which can be opened with Ms. Excel. The file coordinate.xlsx provides information regarding the location of each point in two coordinate systems: UTM and longitude-latitude degrees. Research on the Kaligarang Fault zone at the study site was carried out using a microtremor instrument and a global positioning system (GPS), while the software used for data processing comprises Microsoft Excel, Notepad++, Geopsy (for microtremor data), Dinver, and Surfer 13. The raw microtremor data are in .txt form, which can be opened in the Notepad application. The data in the .txt file describe the microtremor signal received by the seismometer and recorded by the data logger.
The raw data are then processed using the HVSR method. The sampling frequency used in processing the HVSR data is 20 Hz. In addition, this study uses different tentative window lengths (25, 30, 35, and 40 s) to obtain a proper and good HVSR curve. The resulting HVSR curves can be seen in Fig. 2. The output of the HVSR processing is a set of HVSR curves in .hv format, which can be opened using Notepad++. The .hv file contains the frequency and the minimum, maximum and average values of the H/V amplitude.
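Because the .hv files described above are plain-text tables, the fundamental frequency can be extracted with a few lines of code. The sketch below assumes Geopsy-style files with commented header lines starting with '#' and the column order frequency, average, minimum, maximum; since the text lists the quantities in a different order, the exact layout should be checked against the actual files.

```python
import numpy as np

def read_hv(path):
    """Read a Geopsy-style .hv file; assumed columns: frequency, average, min, max."""
    data = np.loadtxt(path, comments="#")
    freq, avg, lo, hi = data.T
    return freq, avg, lo, hi

def fundamental_frequency(path):
    """Return (f0, A0): the frequency and amplitude of the H/V peak."""
    freq, avg, _, _ = read_hv(path)
    i = np.argmax(avg)
    return freq[i], avg[i]

# Hypothetical usage for one of the 75 acquisition points:
# f0, A0 = fundamental_frequency("point_01.hv")
# print(f"f0 = {f0:.2f} Hz, A0 = {A0:.2f}")
```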
HV curve processing can then be done using the dinver software. The vp, vs, density, and thickness profiles for each record were obtained by carrying out the inversion process using dinver. The shape of the inversion results produced by the dinver software can be seen in Fig. 3. The inversion result is in .model format and can be opened using Notepad++.
HVSR Methods
The principle of the HVSR method is to compare the spectrum of the horizontal components with the spectrum of the vertical component of the microtremor record, assuming that most of the microtremor wavefield consists of shear waves and ignoring surface waves (Rayleigh and Love waves). This method is almost the same as the transfer function between wave vibrations in sediment and bedrock. According to Nakamura [3], this is an analytical method based on observations of shear wave propagation due to earthquake events for various geological conditions, which can be estimated from the peak period of the microtremor H/V ratio [4]. In general, the HVSR method is a passive seismic method that uses three components in its measurement, namely two horizontal components (East-West and North-South) and one vertical component. Based on Herak [5], the natural frequency and amplification are important parameters resulting from the HVSR method and can be used to determine the local geological characterization. Ambient noise data can be used for HVSR and array analysis, which is important for obtaining the fundamental frequency of the location and the ellipticity of the fundamental mode of the Rayleigh wave at the measured location. Array analysis is useful for obtaining dispersion curves, which are needed to estimate the shear wave velocity profile [6].
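For readers who want to reproduce the H/V curves outside Geopsy, the core of the method is short: split each component into windows, compute amplitude spectra, merge the two horizontal components, and divide by the vertical spectrum. The sketch below is a minimal illustration of that workflow with numpy/scipy; it is not the Geopsy implementation, and window selection (anti-trigger criteria), spectral smoothing (e.g. Konno-Ohmachi), and averaging conventions all differ in detail there.

```python
import numpy as np
from scipy.signal import detrend, windows

def hvsr(ns, ew, v, fs=20.0, win_s=30.0):
    """Minimal H/V estimate from three equally long components (NS, EW, vertical).

    fs    : sampling frequency in Hz (20 Hz in this data set)
    win_s : window length in seconds (25-40 s were tested in this study)
    Returns (frequencies, mean H/V curve over all non-overlapping windows).
    """
    n = int(win_s * fs)
    taper = windows.tukey(n, 0.1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    curves = []
    for start in range(0, len(v) - n + 1, n):
        sl = slice(start, start + n)
        spec = {}
        for name, comp in (("ns", ns), ("ew", ew), ("v", v)):
            seg = detrend(comp[sl]) * taper
            spec[name] = np.abs(np.fft.rfft(seg))
        h = np.sqrt(spec["ns"] * spec["ew"])            # geometric mean of the horizontals
        curves.append(h / np.maximum(spec["v"], 1e-12))  # avoid division by zero
    return freqs, np.mean(curves, axis=0)

# Hypothetical usage: ns, ew, v are numpy arrays read from one 10-minute .txt record.
# freqs, hv = hvsr(ns, ew, v, fs=20.0, win_s=30.0)
```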
Processing Data
Data processing using the HVSR method was carried out with the Geopsy software. The results of processing the data with Geopsy are H/V curves and the frequency (f0), period, and amplitude values at each data collection point [7]. Subsequent processing is carried out in the dinver software. Dinver is an inversion application that is used to obtain density, vp, vs, and depth [8]. The values of density, vp, vs, and depth can then be processed further, e.g. in Ms. Excel, to obtain other parameters [9].
Ethic Statements
This work does not involve human subjects, animal experiments, or any data collected from social media platforms.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 1,915.2 | 2023-07-01T00:00:00.000 | [
"Geology"
] |
Optimization over the Boolean Hypercube via Sums of Nonnegative Circuit Polynomials
Various key problems from theoretical computer science can be expressed as polynomial optimization problems over the boolean hypercube. One particularly successful way to prove complexity bounds for these types of problems are based on sums of squares (SOS) as nonnegativity certificates. In this article, we initiate the analysis of optimization problems over the boolean hypercube via a recent, alternative certificate called sums of nonnegative circuit polynomials (SONC). We show that key results for SOS based certificates remain valid: First, for polynomials, which are nonnegative over the $n$-variate boolean hypercube with constraints of degree $d$ there exists a SONC certificate of degree at most $n+d$. Second, if there exists a degree $d$ SONC certificate for nonnegativity of a polynomial over the boolean hypercube, then there also exists a short degree $d$ SONC certificate, that includes at most $n^{O(d)}$ nonnegative circuit polynomials.
Introduction
An optimization problem over a boolean hypercube is an n-variate (constrained) polynomial optimization problem where the feasibility set is restricted to the vertices of an n-dimensional hypercube:

    min f(x), subject to p_1(x), ..., p_m(x) ≥ 0,    (1.1)
    x ∈ {a_i, b_i}^n,  f, p_1, ..., p_m ∈ ℝ[x].
The formulation (1.1) captures a class of optimization problems, that belong to the core of theoretical computer science. However, it is known that solving the above formulation is NP-hard in general, since one can easily cast, e.g., the Independent Set problem in this framework.
One of the most promising approaches in constructing efficient algorithms is the sum of squares (SOS) hierarchy [GV01,Nes00,Par00,Sho87], also known as Lasserre relaxation [Las01]. The method is based on a Positivstellensatz result [Put93] saying that the polynomial f , which is nonnegative over the feasibility set given in (1.1), can be expressed as a sum of squares times the constraints defining the set. Bounding a maximum degree of a polynomial used in a representation of f provides a family of algorithms parametrized by an integer d. Finding a degree d SOS certificate for nonnegativity of f can be performed by solving a semidefinite programming (SDP) formulation of size n Opdq . Finally, for every (feasible) n-variate unconstrained hypercube optimization problem there exists a degree 2n SOS certificate.
On the one hand, the SOS algorithm provide the best available approximation algorithms for a wide variety of optimization problems. For example, the degree 2 SOS for the Independent Set problem implies the Lovász θ-function [Lov79] and gives the Goemans-Williamson relaxation [GW95] for the Max Cut problem. The ARV algorithm of the Sparsest Cut [ARV09] problem can be captured by SOS of degree 6. Finally, the subexponential time algorithm for Unique Games [ABS10] is implied by a SOS of sublinear degree [BRS11,GS11]. More recently, it has been shown that Op1q degree SOS is equivalent in power to any polynomial size SDP extended formulation in approximating maximum constraint satisfaction problems [LRS15]. Other applications of the SOS method for combinatorial optimization can be found in [BRS11, BCG09, Chl07, CS08, CGM13, dlVKM07, GS11, Mas17, MM09,RT12]. For a more detailed overview on the use of SOS in approximation algorithms, see the surveys [CT,Lau03a,Lau09a].
On the other hand, it is known that the SOS algorithm admits certain weaknesses. First, for some hypercube optimization problems the SOS algorithm performs much worse than other known methods. Grigoriev in [Gri01] shows that an Ω(n) degree SOS certificate is needed to detect that the Knapsack instance {x ∈ [0,1]^n : ∑_{i=1}^n x_i = ⌊n/2⌋ + 1/2} contains no integer point. Simpler proofs can be found in [GHP02, Lau03b, KLM16]. Other SOS degree lower bounds for Knapsack problems appeared in [Che07, KLM17a]. Another example is the problem of scheduling unit size jobs on a single machine to minimize the number of late jobs. The problem is solvable in polynomial time using the Moore-Hodgson algorithm; an Ω(√n) degree SOS algorithm, however, still attains an unbounded integrality gap [KLM17b]. A further example is a polynomial optimization problem that admits a degree 2 SOS certificate, but for which every degree 2 SOS certificate has exponential bit complexity. Moreover, in [RW17] this example is modified and cast into a hypercube optimization problem again having a degree 2 SOS certificate, which, however, has super-polynomial bit complexity for certificates up to degree O(√n). For small d, this excludes the possibility that known optimization tools used for solving SDP problems, like the ellipsoid method [Kha80, GLS88], are able to find degree d certificates in time n^O(d) for optimization problems of the form (1.1). The above arguments motivate the search for new methods for solving hypercube optimization problems efficiently.
In this article, we initiate an analysis of hypercube optimization problems of the form (1.1) via sums of nonnegative circuit polynomials (SONC). SONCs are a nonnegativity certificate introduced recently by Iliman and the third author [IdW16], which are independent of sums of squares; see Definition 2.1 and Theorem 2.4 for further details. Similarly to the Lasserre relaxation for SOS, a Schmüdgen-like Positivstellensatz yields a converging hierarchy of lower bounds for polynomial optimization problems with compact constraint set; see [DIdW17, Theorem 4.8] and Theorem 2.5. These bounds can be computed via a convex optimization program called relative entropy programming (REP) [DIdW17, Theorem 5.3]. Our main question in this article is: can SONC certificates be an alternative to SOS methods for optimization problems over the hypercube? We answer this question affirmatively in the sense that we prove SONC complexity bounds for (1.1) analogous to the SOS bounds mentioned above. More specifically, we show: (1) For every polynomial which is nonnegative over an n-variate hypercube with constraints of degree at most d there exists a SONC certificate of nonnegativity of degree at most n + d; see Theorem 4.7 and Corollary 4.8.
(2) If a polynomial f admits a degree d SONC certificate of nonnegativity over an n-variate hypercube, then the polynomial f also admits a short degree d SONC certificate that includes at most n^O(d) nonnegative circuit polynomials; see Theorem 4.9. For a discussion of the remaining open problems needed to turn these results into efficient algorithms, see Section 2.3 and the end of Section 4.2.
Furthermore, we show some structural properties of SONCs: (1) We give a simple, constructive example showing that the SONC cone is not closed under multiplication. Subsequently we use this construction to show that the SONC cone is neither closed under taking affine transformations of variables, see Lemma 3.1 and Corollary 3.2. (2) We address an open problem raised in [DIdW17] asking whether the Schmüdgenlike Positivstellensatz for SONCs (Theorem 2.5) can be improved to an equivalent of Putinar's Positivstellensatz [Put93]. We answer this question negatively by showing an explicit hypercube optimization example, which provably does not admit a Putinar representation for SONCs; see Theorem 5.1 and the discussion afterwards. Our article is organized as follows: In Section 2 we introduce the necessary background about SONCs. In Section 3 we show that the SONC cone is closed neither under multiplication nor under affine transformations. In Section 4 we provide our two main results regarding the degree bounds for SONC certificates over the hypercube. In Section 5 we prove the non-existence of an equivalent of Putinar's Positivstellensatz for SONCs and discuss this result.
Preliminaries
In this section we collect basic notions and statements on sums of nonnegative circuit polynomials (SONC). Throughout the paper, we use bold letters for vectors, e.g., x = (x_1, ..., x_n) ∈ ℝ^n. Let ℕ* = ℕ \ {0} and let ℝ≥0 (ℝ>0) be the set of nonnegative (positive) real numbers. Furthermore, let ℝ[x] = ℝ[x_1, ..., x_n] be the ring of real n-variate polynomials; the set of all n-variate polynomials of degree less than or equal to 2d is denoted by ℝ[x]_{n,2d}. We denote by [n] the set {1, ..., n}, and the sum of binomial coefficients ∑_{k=0}^{d} (n choose k) is abbreviated by (n choose ≤d). Let e_1, ..., e_n denote the canonical basis vectors in ℝ^n.

2.1. Sums of Nonnegative Circuit Polynomials. Let A ⊂ ℕ^n be a finite set. In what follows, we consider polynomials f ∈ ℝ[x] supported on A. Thus, f is of the form f(x) = ∑_{α ∈ A} f_α x^α with f_α ∈ ℝ and x^α = x_1^{α_1} ··· x_n^{α_n}. A lattice point is called even if it is in (2ℕ)^n, and a term f_α x^α is called a monomial square if f_α > 0 and α is even. We denote by New(f) = conv{α ∈ ℕ^n : f_α ≠ 0} the Newton polytope of f.
We call the terms f_{α(0)} x^{α(0)}, ..., f_{α(r)} x^{α(r)} the outer terms and f_β x^β the inner term of f. For every circuit polynomial we define the corresponding circuit number Θ_f as in (2.2). The first fundamental statement about circuit polynomials is that nonnegativity is determined entirely by the circuit number Θ_f and the inner coefficient f_β:

Theorem 2.2 ([IdW16], Theorem 3.8). Let f be a circuit polynomial with inner term f_β x^β and let Θ_f be the corresponding circuit number, as defined in (2.2). Then the following statements are equivalent: (1) f is nonnegative.
Therefore, expressing a polynomial as a sum of nonnegative circuit polynomials (SONC) is a certificate for the polynomial's nonnegativity.
Definition 2.3. We define, for every n, d ∈ ℕ*, the set of sums of nonnegative circuit polynomials (SONC) in n variables of degree 2d as C_{n,2d}. We denote by SONC both the set of SONC polynomials and the property of a polynomial of being a sum of nonnegative circuit polynomials.
In what follows, let P_{n,2d} be the cone of nonnegative n-variate polynomials of degree at most 2d and Σ_{n,2d} the corresponding cone of sums of squares. An important observation is that SONC polynomials form a convex cone independent of the SOS cone:

Theorem 2.4 ([IdW16], Proposition 7.2). C_{n,2d} is a convex cone satisfying: (1) C_{n,2d} ⊆ P_{n,2d} for all n, d ∈ ℕ*; (2) C_{n,2d} ⊆ Σ_{n,2d} if and only if (n, 2d) ∈ {(1, 2d), (n, 2), (2, 4)}; (3) Σ_{n,2d} ⊈ C_{n,2d} for all (n, 2d) with 2d ≥ 6.
2.2. SONC Certificates over a Constrained Set. In [DIdW17, Theorem 4.8], Iliman, the first, and the third author showed that for an arbitrary real polynomial which is strictly positive on a compact, basic closed semialgebraic set K there exists a SONC certificate of nonnegativity. Hereinafter we recall this result.
We assume that K is given by polynomial inequalities g_i(x) ≥ 0 for i = 1, ..., s and is compact. For technical reasons we add 2n redundant box constraints l_j(x) := N ± x_j ≥ 0 for some sufficiently large N ∈ ℕ, which always exists due to our assumption of compactness of K; see [DIdW17] for further details. Hence, we have K := {x ∈ ℝ^n : g_i(x) ≥ 0 for i ∈ [s] and l_j(x) ≥ 0 for j ∈ [2n]}. In what follows we consider polynomials H^(q)(x), defined in (2.4) as products of at most q ∈ ℕ* of the polynomials g_i, l_j and 1, where each factor h_k ∈ {1, g_1, ..., g_s, l_1, ..., l_{2n}}. Now we can state:

Theorem 2.5. Let f, g_1, ..., g_s ∈ ℝ[x] be real polynomials and let K be a compact, basic closed semialgebraic set as in (2.3). If f > 0 on K, then there exist d, q ∈ ℕ* such that f admits an explicit representation as a finite sum of terms s(x) H^(q)(x), where the s(x) are contained in C_{n,2d} and every H^(q)(x) is a product as in (2.4).
The central object of interest is the smallest value of d and q that allows a decomposition of f as in Theorem 2.5. This motivates the following definition of a degree d SONC certificate.
Definition 2.6. Let f ∈ ℝ[x] be positive on the set K given in (2.3). Then f has a degree d SONC certificate if it admits, for some q ∈ ℕ*, a decomposition f = ∑_finite s(x) H^(q)(x), where the s(x) are SONCs, the H^(q)(x) are products as in (2.4), and deg(∑_finite s(x) H^(q)(x)) ≤ d.
2.3. The Complexity of Finding a Degree d SONC Certificate. One can decide nonnegativity of a single given circuit polynomial by solving a system of linear equations. This is due to Theorem 2.2 and the fact that the λ_j are unique (and thus trivially nonnegative), since the α(j) are affinely independent by construction. For finding a degree d SONC certificate, there are two main bottlenecks that might affect its complexity. The first one is to guarantee the existence of a sufficiently short degree d SONC certificate. If the first bottleneck is resolved, then a second one might occur: even if the existence of a short degree d SONC certificate is guaranteed, it is not a priori clear whether one can search through the space of n-variate circuit polynomials of degree at most d efficiently in order to find such a short certificate.
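The first sentence of this subsection can be made concrete with a short sketch that decides nonnegativity of a single circuit polynomial along the lines of Theorem 2.2. It assumes the usual convention from the SONC literature that the circuit number is Θ_f = ∏_j (f_{α(j)}/λ_j)^{λ_j}, where the λ_j are the (unique) barycentric coordinates of β with respect to the outer exponents α(0), ..., α(r); since (2.2) is not reproduced in this extraction, that formula should be read as an assumption.

```python
import numpy as np

def circuit_nonnegative(outer_exps, outer_coeffs, beta, f_beta):
    """Decide nonnegativity of a circuit polynomial (assumed convention from [IdW16]).

    outer_exps   : even lattice exponent vectors alpha(0), ..., alpha(r)
    outer_coeffs : their positive coefficients f_alpha(j)
    beta, f_beta : inner exponent (in the interior of the simplex) and its coefficient
    """
    # barycentric coordinates: sum_j lam_j * alpha(j) = beta, sum_j lam_j = 1
    A = np.vstack([np.array(outer_exps, dtype=float).T, np.ones(len(outer_exps))])
    b = np.append(np.array(beta, dtype=float), 1.0)
    lam, *_ = np.linalg.lstsq(A, b, rcond=None)
    if np.any(lam <= 0):
        raise ValueError("beta is not in the relative interior of the simplex")
    theta = np.prod([(c / l) ** l for c, l in zip(outer_coeffs, lam)])  # circuit number
    if all(e % 2 == 0 for e in beta):      # even inner exponent: f_beta >= -theta suffices
        return f_beta >= -theta
    return abs(f_beta) <= theta

# Example: 1 + x1^2 - 2*x1 (the polynomial p1 from Section 3) is nonnegative:
print(circuit_nonnegative([(0,), (2,)], [1.0, 1.0], (1,), -2.0))   # True
```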
Regarding the first bottleneck: given that a polynomial f admits a degree d SONC certificate, it is open whether there also exists a degree d certificate which consists of a bounded (ideally n^O(d)) number of components.
The answer to the equivalent question for SOS degree d certificates follows from the fact that a polynomial is SOS if and only if the corresponding matrix of coefficients of size n^O(d), called the Gram matrix, is positive semidefinite. Since every real, symmetric, positive semidefinite matrix M admits a decomposition M = V V^T, this yields an explicit SOS certificate including at most n^O(d) squared polynomials. For more details we refer the reader to the excellent lecture notes [BS16].
In this paper we resolve the first bottleneck regarding the existence of short SONC certificates affirmatively. Namely, we show that one can always restrict oneself to SONC certificates including at most n^O(d) nonnegative circuit polynomials; see Section 4.2 for further details.
Properties of the SONC cone
In this section we show that the SONC cone is neither closed under multiplication nor under affine transformations. First, we give a constructive proof of the fact that the SONC cone is not closed under multiplication, which is simpler than the initial proof of this fact in [DIdW17, Lemma 4.1]. Second, we use our construction to show that the SONC cone is not closed under affine transformations of variables.
Lemma 3.1. For every d ≥ 2 and n ∈ ℕ*, the SONC cone C_{n,d} is not closed under multiplication in the following sense: if p_1, p_2 ∈ C_{n,d}, then p_1 · p_2 ∉ C_{n,2d} in general.
Proof. For every d " 2n, n P N˚we construct two SONC polynomials p 1 , p 2 P C n,d such that the product p 1 p 2 is an n variate, degree 2d polynomial that is not inside C n,2d .
Let n " 2. We construct the following two polynomials p 1 , p 2 P Rrx 1 , x 2 s: First, observe that p 1 , p 2 are nonnegative circuit polynomials, since, in both cases, λ 1 " λ 2 " 1{2, f αp1q " f αp2q " 1, and f β "´2, thus 2 " Θ f ě |f β |. Now consider the polynomial rpx 1 , x 2 q " p 1 p 2 " pp1´x 1 qp1´x 2 qq 2 . We show that this polynomial, even though it is nonnegative, is not a SONC polynomial. Note that x 2 2 ; the support of r is shown in Figure 1. Assume that r P C 2,4 , i.e., r has a SONC decomposition. This implies that the term´2x 1 has to be an inner term of some nonnegative circuit polynomial r 1 in this representation. Such a circuit polynomial necessarily has the terms 1 and x 2 1 as outer terms, that is, r 1 px 1 q " p 1 px 1 , x 2 q " 1`x 2 1´2 x 1 Since Θ r 1 " 2 the polynomial r 1 is indeed nonnegative and, in addition, we cannot choose a smaller constant term to construct r 1 . Next, also the term´2x 2 has to be an inner term of some nonnegative circuit polynomial r 2 . Since this term again is on the boundary of Newprq the only option for such an r 2 is: r 2 px 2 q " p 2 px 1 , x 2 q " 1`x 2 2´2 x 2 . However, the term 1 has been already used in the above polynomial r 1 , which leads to a contradiction, i.e., r R C 2,4 . Since C n,2d Ď C n`1,2d , the general statement follows.
Hereinafter we show another operation which behaves differently for SONC than for SOS: similarly to multiplication, affine transformations do not preserve the SONC structure. This observation is important for possible degree bounds on SONC certificates when considering optimization problems over distinct descriptions of the hypercube. Proof. Consider the polynomial f(x_1, x_2) = x_1² x_2². Clearly, the polynomial f is a nonnegative circuit polynomial, since it is a monomial square; hence f ∈ C_{n,d}. Now consider the affine transformation of the variables x_1 ↦ 1 − x_1 and x_2 ↦ 1 − x_2. After applying the transformation, the polynomial f equals the polynomial p_1 p_2 from the proof of Lemma 3.1 and thus is not inside C_{n,d}.
An Upper Bound on the Degree of SONC Certificates over the Hypercube
In the previous section we showed that the SONC cone is not closed under taking an affine transformation of variables, Corollary 3.2. Thus, if a polynomial f admits a degree d SONC certificate proving that it is nonnegative on a given compact semialgebraic set K, then it is a priori not clear whether a polynomial g, obtained from f via an affine transformation of variables, admits a degree d SONC certificate of nonnegativity on K, too. The degree needed to prove nonnegativity of g might be much larger than d according to the argumentation in the proof of Corollary 3.2.
In this section we prove that every n-variate polynomial which is nonnegative over the boolean hypercube has a degree n SONC certificate. Moreover, if the hypercube is additionally constrained by polynomials of degree at most d, then a polynomial nonnegative over such a set has a degree n + d SONC certificate. We show this fact for all hypercubes {a_i, b_i}^n; see Theorem 4.3 for further details.
Formally, we consider the following setting. We investigate real multivariate polynomials in ℝ[x]. For j ∈ [n] and a_j, b_j ∈ ℝ with a_j < b_j, let g_j(x) := (x_j − a_j)(x_j − b_j) be a quadratic polynomial with two distinct real roots. Let H ⊂ ℝ^n denote the n-dimensional hypercube given by ∏_{j=1}^n {a_j, b_j}. Moreover, let P := {p_1, ..., p_m : p_i ∈ ℝ[x], i ∈ [m]} be a set of polynomials, which we consider as constraints p_i(x) ≥ 0 with deg(p_i(x)) ≤ d for all i ∈ [m], as follows. We define H_P := {x ∈ ℝ^n : g_j(x) = 0, j ∈ [n], p(x) ≥ 0, p ∈ P} as the n-dimensional hypercube H constrained by the polynomial inequalities given by P.
Throughout the paper we assume that |P| = poly(n), i.e. the size of the constraint set P is polynomial in n. This is usually the case, since otherwise the problem becomes less tractable from the optimization point of view.
As a first step, we introduce a Kronecker delta function δ_v; see (4.1). Next we show that the term "Kronecker delta" is justified, i.e., we show that for every v ∈ H the function δ_v(x) takes the value zero for all x ∈ H except for x = v, where it takes the value one.
Lemma 4.2. For every v ∈ H it holds that δ_v(v) = 1 and δ_v(x) = 0 for every x ∈ H \ {v}. Proof. On the one hand, if x ∈ H \ {v}, then there exists an index k such that x_k ≠ v_k. This implies that there is at least one multiplicative factor in δ_v which attains the value zero, due to (4.1). On the other hand, if x = v, then every factor in (4.1) equals one. The main result of this section is the following theorem.
Theorem 4.3. Let f(x) ∈ ℝ[x]_{n,n}. Then f(x) ≥ 0 for every x ∈ H_P if and only if f has a representation of the form (4.2), where s_1, ..., s_{2n} ∈ C_{n,n−2}, c_v ∈ ℝ≥0 and p_v ∈ P.
Since we are interested in optimization over the boolean hypercube H, we assume without loss of generality that the polynomial f considered in Theorem 4.3 has degree at most n. Indeed, if f has degree larger than n, one can efficiently reduce the degree of f by iteratively applying polynomial division with respect to the polynomials g_j with j ∈ [n]. The remainder of the division process is a polynomial of degree at most n that agrees with f on all the vertices of H.
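This degree-reduction step is easy to carry out in practice, because {g_1, ..., g_n} is a Gröbner basis of the ideal it generates (cf. the proof of Proposition 4.6 below). The following sympy sketch, written for the hypercube {0,1}^n purely as an illustration (so g_j = x_j² − x_j), reduces a polynomial modulo the g_j and returns a multilinear representative of degree at most n that agrees with f on all vertices.

```python
import sympy as sp

def hypercube_reduce(f, xs, a=0, b=1):
    """Reduce f modulo the ideal <(x_j - a)(x_j - b) : j>; the remainder has degree <= n
    and agrees with f on the hypercube {a, b}^n."""
    gens = [(x - a) * (x - b) for x in xs]
    _, remainder = sp.reduced(sp.expand(f), gens, *xs, order="grlex")
    return sp.expand(remainder)

x1, x2 = sp.symbols("x1 x2")
f = (x1 + x2) ** 3                      # degree 3 in two variables
r = hypercube_reduce(f, [x1, x2])       # over {0,1}^2
print(r)                                # multilinear polynomial of degree <= 2

# quick check on the vertices
for v1 in (0, 1):
    for v2 in (0, 1):
        assert f.subs({x1: v1, x2: v2}) == r.subs({x1: v1, x2: v2})
```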
We begin with proving the easy direction of the equivalence stated in Theorem 4.3.
Lemma 4.4. If f admits a decomposition of the form (4.2), then f(x) is nonnegative for all x ∈ H_P.
Proof. The coefficients c_v are nonnegative, and all s_j(x) are SONC and hence nonnegative on ℝ^n. We have ±g_j(x) ≥ 0 for all x ∈ H, and for all choices of v ∈ H we have p_v(x) ≥ 0 for all x ∈ H_P, and δ_v(x) ∈ {0, 1} for all x ∈ H. Thus, the right hand side of (4.2) is a sum of nonnegative terms for all x ∈ H_P.
We postpone the rest of the proof of Theorem 4.3 to the end of the section. Now, we state a result about the representation of the Kronecker delta function δ_v.
Lemma 4.5. For every v ∈ H the Kronecker delta function can be written as a sum ∑_j s_j H^(n)_j(x) for s_1, ..., s_{2^n} ∈ ℝ≥0, with every H^(n)_j given as in (2.4) for q = n and K given by the hypercube constraints g_1, ..., g_n and l_1, ..., l_{2n}.
Proof. First note that the function δ_v can be rewritten as a product of the linear factors x_j − a_j (if v_j = b_j) and b_j − x_j (if v_j = a_j), multiplied by the constant ∏_{j=1}^n 1/(b_j − a_j) ∈ ℝ≥0. Now the proof follows by noting that for every j ∈ [n] both inequalities −x_j + b_j ≥ 0 and x_j − a_j ≥ 0 are among the constraints defining K.
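Since (4.1) is not reproduced in this extraction, the following sympy sketch spells out the product form of δ_v that the proof above relies on and verifies Lemma 4.2 on a small hypercube; the explicit formula used here is an assumption, chosen to be consistent with the factors (x_j − a_j) and (b_j − x_j) appearing in the proof.

```python
import sympy as sp
from itertools import product

def kronecker_delta(v, a, b, xs):
    """Assumed explicit form of (4.1):
    delta_v(x) = prod_j (x_j - a_j)/(b_j - a_j) if v_j = b_j, else (b_j - x_j)/(b_j - a_j).
    It equals 1 at x = v and 0 at every other hypercube vertex."""
    factors = []
    for x, vj, aj, bj in zip(xs, v, a, b):
        num = (x - aj) if vj == bj else (bj - x)
        factors.append(num / (bj - aj))
    return sp.prod(factors)

xs = sp.symbols("x1 x2 x3")
a, b = (0, 0, 0), (1, 1, 1)
for v in product(*zip(a, b)):                      # all vertices of {0,1}^3
    d = kronecker_delta(v, a, b, xs)
    values = {w: d.subs(dict(zip(xs, w))) for w in product(*zip(a, b))}
    assert all(val == (1 if w == v else 0) for w, val in values.items())
print("Lemma 4.2 verified on {0,1}^3")
```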
The following statement is well-known in similar variations; see e.g. [BS14, Lemma 2.2 and its proof]. For clarity, we provide our own proof here.
Proposition 4.6. Let f ∈ ℝ[x]_{n,2d} be a polynomial vanishing on H. Then f = ∑_{j=1}^n p_j g_j for some polynomials p_j ∈ ℝ[x]_{n,2d−2}.
Proof. Let J := ⟨g_1, ..., g_n⟩ be the ideal generated by the g_j's. Let V(J) denote the affine variety corresponding to J, let I(V(J)) denote its radical ideal, and let I(H) denote the ideal of H. It follows from ∏_{j=1}^n g_j ∈ J that V(J) ⊆ H and hence I(H) ⊆ I(V(J)) = J. The last equality holds since J itself is a radical ideal. This follows from Seidenberg's Lemma (see [KR00, Proposition 3.7.15]) by means of the following observations. The affine variety V(J) consists exactly of the points defining H; therefore J is a zero-dimensional ideal. Furthermore, for every j ∈ [n] the polynomial g_j satisfies g_j ∈ J ∩ ℝ[x_j] and gcd(g_j, g_j') = 1. Thus, every f ∈ I(H) is of the form f = ∑_{j=1}^n p_j g_j. Moreover, G := {g_1, ..., g_n} is a Gröbner basis for J with respect to the graded lexicographic order ≺_glex. This follows from Buchberger's Criterion, which says that G is a Gröbner basis for J if and only if for all pairs i ≠ j the remainder on division of the S-polynomial S(g_i, g_j) by G with respect to ≺_glex is zero. Consider an arbitrary pair g_i, g_j with i > j. Then the corresponding S-polynomial is S(g_i, g_j) = x_j² g_i − x_i² g_j. Applying polynomial division with respect to ≺_glex yields the remainder 0, and hence G is a Gröbner basis for J with respect to ≺_glex. Therefore, we conclude that if f ∈ ℝ[x]_{n,2d}, then deg(p_j) ≤ 2d − 2.
For an introduction to Gröbner bases see for example [CLO15].
Theorem 4.7. Let d ∈ ℕ and let f ∈ ℝ[x]_{n,2d+2} vanish on H. Then there exist s_1, ..., s_{2n} ∈ C_{n,2d} such that f = ∑_{j=1}^n s_j g_j + ∑_{j=1}^n s_{n+j} (−g_j). Proof. By Proposition 4.6 we know that f = ∑_{j=1}^n p_j g_j for some polynomials p_j of degree ≤ 2d. Hence, it is sufficient to show that every single term p_j g_j is of the form ∑_{j=1}^n s_j g_j + ∑_{j=1}^n s_{n+j} (−g_j) for some s_1, ..., s_{2n} ∈ C_{n,2d}. Let p_j = ∑_{i=1}^ℓ a_{ji} m_{ji}, where every a_{ji} ∈ ℝ and every m_{ji} is a single monomial. We show that p_j g_j has the desired form by investigating an arbitrary individual term a_{ji} m_{ji} g_j.
Case 1: Assume the exponent of m_{ji} is contained in (2ℕ)^n. If a_{ji} m_{ji} is a monomial square, then a_{ji} m_{ji} is a circuit polynomial. If a_{ji} < 0, then −a_{ji} m_{ji} is a monomial square. In both cases we obtain a representation s_{ji}(±g_j), where s_{ji} ∈ C_{n,2d}.
Case 2: Assume the exponent β of m_{ji} contains odd entries. Without loss of generality, assume that β = (β_1, ..., β_k, β_{k+1}, ..., β_n) is such that the first k entries are odd and the remaining n − k entries are even. We construct a SONC polynomial s_{ji} = a_{α(1)} x^{α(1)} + a_{α(2)} x^{α(2)} + a_{ji} x^β such that, by the construction (4.3), α(1), α(2) ∈ (2ℕ)^n and β = (1/2)(α(1) + α(2)). Thus, s_{ji} is a circuit polynomial and, by (4.4), the coefficients a_{α(1)}, a_{α(2)} are chosen large enough so that |a_{ji}| is bounded by the circuit number corresponding to s_{ji}. Thus, s_{ji} is nonnegative by [IdW16, Theorem 1.1]. We therefore obtain a_{ji} m_{ji} g_j = s_{ji} g_j + (a_{α(1)} x^{α(1)} + a_{α(2)} x^{α(2)})(−g_j), where s_{ji}, a_{α(1)} x^{α(1)}, and a_{α(2)} x^{α(2)} are nonnegative circuit polynomials.
Degree: All involved nonnegative circuit polynomials are of degree at most 2d. In Case 1 this follows by construction. In Case 2 we have for the circuit polynomial s_{ji} that deg(α(1)) = deg(α(2)) = deg(β) if k is even, and deg(α(1)) = deg(β) + 1, deg(α(2)) = deg(β) if k is odd. Since β is an exponent of the polynomial f, we know that deg(β) ≤ 2d. If k is odd, however, then deg(β) = ∑_{j=1}^k β_j (a sum of odd numbers) + ∑_{j=k+1}^n β_j (a sum of even numbers); i.e., deg(β) is a sum of k many odd numbers, with k being odd, plus a sum of even numbers. Thus, deg(β) has to be an odd number and hence deg(β) < 2d. Therefore, all degrees of terms in s_{ji} are bounded by 2d and thus s_{ji} ∈ C_{n,2d}. Conclusion: By Cases 1 and 2 and the degree argument, we have s_{ji} ∈ C_{n,2d} for every i, j, and by defining s_j = ∑_{i=1}^{ℓ_j} s_{ji} ∈ C_{n,2d} we obtain the desired representation of f.
4.1. Proof of Theorem 4.3. We now combine the results obtained above and finish the proof of Theorem 4.3.
Due to Lemma 4.4, it remains to show that f(x) admits a decomposition of the form (4.2) if f(x) ≥ 0 for every x ∈ H_P.
Hence, when restricted to the hypercube H, the polynomial f can be represented as f(x) = ∑_{v∈H} δ_v(x) f(v), where the last equality follows by Lemma 4.2. Note that there might exist a vector v ∈ H \ H_P such that f attains a negative value at v. If f(v) < 0, then let p_v ∈ P be one of the polynomials among the constraints satisfying p_v(v) < 0; otherwise, let p_v = 1. Since by Lemma 4.2 we have δ_v(x) p_v(x) = δ_v(x) p_v(v) for every v, x ∈ H, we can split the sum over H_P and H \ H_P accordingly. Thus, the polynomial f(x) − ∑_{v∈H_P} δ_v(x) f(v) − ∑_{v∈H\H_P} δ_v(x) p_v(x) f(v)/p_v(v) has degree at most n + d and vanishes on H. By Theorem 4.7 we finally get f(x) = ∑_{j=1}^n s_j(x) g_j(x) + ∑_{j=1}^n s_{n+j}(x)(−g_j(x)) + ∑_{v∈H_P} δ_v(x) f(v) + ∑_{v∈H\H_P} δ_v(x) p_v(x) f(v)/p_v(v), for some s_1, ..., s_{2n} ∈ C_{n,n−2} and p_v ∈ P. This, together with Lemma 4.5, finishes the proof.
Corollary 4.8. For every polynomial which is nonnegative over the boolean hypercube constrained with polynomial inequalities of degree at most d there exists a degree n + d SONC certificate.
Proof. The claim follows directly from Theorem 4.3 by noting that the right hand side of (4.2) is a SONC certificate of degree n + d (see Definition 2.6).
4.2. Degree d SONC Certificates. In this section we show that if a polynomial f admits a degree d SONC certificate, then f also admits a short degree d certificate that involves at most n^O(d) terms. We conclude the section with a discussion regarding the time complexity of finding a degree d SONC certificate.
Theorem 4.9. Let f be an n-variate polynomial, nonnegative on the constrained hypercube H_P with |P| = poly(n). Assume that there exists a degree d SONC certificate for f. Then there exists a degree d SONC certificate for f involving at most O((n choose ≤d)) many nonnegative circuit polynomials.
Proof. Since there exists a degree $d$ SONC proof of the nonnegativity of $f$ on $H_P$, we know that $f$ admits a representation in which the summation is finite, the $s_j$'s are SONCs, and every $H_j^{(q)}$ is a product as defined in (2.4).
Step 1: We analyze the terms $s_j$. Since every $s_j$ is a SONC, we know that there exists a representation such that $\kappa_j, \mu_{1j}, \ldots, \mu_{k_j j} \in \mathbb{R}_{>0}$, $\sum_{i=1}^{k_j} \mu_{ij} = 1$, and the $q_{ij}$ are nonnegative circuit polynomials. Since $s_j$ is of degree at most $d$, we know that $Q_j := \{q_{1j}, \ldots, q_{k_j j}\}$ is contained in $\mathbb{R}_{n,d}[x]$, which is a real vector space of dimension $\binom{n+d}{d}$. Since $s_j/\kappa_j$ is a convex combination of the $q_{ij}$, i.e., lies in the convex hull of $Q_j$, and $\dim(Q_j) \le \binom{n+d}{d}$, applying Carathéodory's Theorem, see e.g. [Zie07], yields that $s_j/\kappa_j$ can be written as a convex combination of at most $\binom{n+d}{d} + 1$ many of the $q_{ij}$.
Step 2: We analyze the terms $H = g_{j_1} \cdots g_{j_s} \cdot l_{r_1} \cdots l_{r_t} \cdot p_{\ell_1} \cdots p_{\ell_v}$ with $j_1, \ldots, j_s \in [n]$, $r_1, \ldots, r_t \in [2n]$, and $\ell_1, \ldots, \ell_v \in [m]$. Since the maximal degree of $H_j^{(q)}$ is $d$, the number of different $H_j^{(q)}$'s is bounded from above by $\binom{n+2n+m}{d}$.
Conclusion: In summary, we obtain a representation in which, since $m$ is assumed to be bounded by $\mathrm{poly}(n)$, the total number of summands is $\mathrm{poly}(n)^{O(d)} = n^{O(d)}$, and we have found the desired representation with at most $n^{O(d)}$ nonnegative circuit polynomials of degree at most $d$.
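Making the count in the conclusion explicit, the following is an added remark that only combines the bounds from Steps 1 and 2, under the stated assumption $m = \mathrm{poly}(n)$.

```latex
% Added remark: each SONC s_j needs at most \binom{n+d}{d} + 1 circuit polynomials
% (Step 1, Caratheodory), and there are at most \binom{n+2n+m}{d} distinct products
% H_j^{(q)} (Step 2). Hence the number of summands is at most
\Bigl(\tbinom{n+d}{d} + 1\Bigr)\cdot \tbinom{n+2n+m}{d}
\;\le\; 2\,(n+d)^{d}\,(3n+m)^{d}
\;=\; n^{O(d)} \qquad \text{for } m = \mathrm{poly}(n).
```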
Theorem 4.9 states that when searching for a degree $d$ SONC certificate it is enough to restrict to certificates containing at most $n^{O(d)}$ nonnegative circuit polynomials. Moreover, as proved in [DIdW17, Theorem 3.2], for a given set $A \subseteq \mathbb{N}^n$, searching through the space of degree $d$ SONC certificates supported on the set $A$ can be done via a relative entropy program (REP) of size $n^{O(d)}$. However, the above arguments do not necessarily imply that the search through the space of degree $d$ SONC certificates can be performed in time $n^{O(d)}$. The difficulty is that one needs to restrict the configuration space of $n$-variate degree $d$ SONCs to a subset of order $n^{O(d)}$ to be able to formulate the corresponding REP in time $n^{O(d)}$. Since the current proof of Theorem 4.9 only guarantees the existence of a short SONC certificate, it is currently not clear how to search for a short certificate efficiently. We leave this as an open problem.
There Exists No Equivalent to Putinar's Positivstellensatz for SONCs
In this section we address the open problem raised in [DIdW17] asking whether Theorem 2.5 can be strengthened by requiring $q = 1$. Such a strengthening, for a positive polynomial over some basic closed semialgebraic set, would provide a SONC decomposition equivalent to Putinar's Positivstellensatz for SOS; for background see e.g. [Lau09b, Put93]. We answer this question in the negative. More precisely, we provide a polynomial $f$ which is strictly positive over the hypercube $\{\pm 1\}^n$ such that there does not exist a SONC decomposition of $f$ with $q = 1$. Moreover, we prove this not only for the most natural choice of box constraints, that is $l_i = 1 \pm x_i$, but for a generic type of box constraints of the form $\ell_i = 1 + c_i \pm x_i$ with $c_i \in \mathbb{R}_{\ge 0}$. We close the section with a short discussion.
Let $H = \{\pm 1\}^n$ and consider the following set of polynomials parametrized by a natural number $a$: $f_a(x) := (a-1)\cdots$. These polynomials satisfy $f_a(e) = a$ for the vector $e = \sum_{i=1}^{n} e_i$ and $f_a(x) = 1$ for every other $x \in H \setminus \{e\}$. We define for every $d \in \mathbb{N}$
$$S_d := \Big\{ \sum_{\text{finite}} s \cdot h \;:\; s \in C_{n,2d},\; h \in \big\{1,\ \pm(x_i^2 - 1),\ 1 + c_i \pm x_i \,:\, i \in [n],\ c_i \in \mathbb{R}_{\ge 0}\big\} \Big\}$$
as the set of polynomials admitting a SONC decomposition over $H$ given by Theorem 2.5 for $q = 1$. The main result of this section is the following theorem.
Theorem 5.1. For every $a > \frac{2^n - 1}{2^{n-2} - 1}$ we have $f_a \notin S_d$ for all $d \in \mathbb{N}$. Before we prove this theorem, we show the following structural results. Note that similar structural observations were already made for AGIforms by Reznick in [Rez89] using a different notation.
Lemma 5.2. Every $s(x) \in C_{n,2d}$ attains at most two different values on $H = \{\pm 1\}^n$. Moreover, if $s(x)$ attains two different values, then each value is attained on exactly half of the hypercube vertices.
Proof. By Definition 2.1, every nonnegative circuit polynomial is of the form $s(x) = \sum_{j=0}^{r} f_{\alpha(j)} x^{\alpha(j)} + f_{\beta} x^{\beta}$. Note that for $j = 0, \ldots, r$ we have $\alpha(j) \in (2\mathbb{N})^n$. Hence, when evaluated over the hypercube $x \in H = \{\pm 1\}^n$, $s(x)$ can take only one of the at most two different values $\sum_{j=0}^{r} f_{\alpha(j)} \pm f_{\beta}$. If $s(x)$ attains two different values over $H$, then there has to exist a non-empty subset of variables that have an odd entry in $\beta$. Let $I \subseteq [n]$ be this subset. Then $s(x) = \sum_{j=0}^{r} f_{\alpha(j)}(x) - f_{\beta}(x)$ for $x \in H$ if and only if $x$ has an odd number of $-1$ entries in the set $I$. The number of such vectors is equal to $2^{n-1}$, i.e., exactly half of the hypercube vertices.

Lemma 5.3. Every polynomial $s(x)\ell_i(x)$, with $s \in C_{n,2d}$ and $\ell_i = 1 + c_i \pm x_i$ being a box constraint, attains at most four different values on $H = \{\pm 1\}^n$. Moreover, each value is attained on at least one fourth of the hypercube vertices.
Proof. By Lemma 5.2, $s(x)$ attains at most the two values $\sum_{j=0}^{r} f_{\alpha(j)} \pm f_{\beta}$ on $H$. Similarly, $\ell_i(x)$ attains at most the two values $1 + c_i \pm x_i$ over $H$. Thus, a polynomial $s(x)\ell_i(x)$ attains at most the four different values $\big(\sum_{j=0}^{r} f_{\alpha(j)} \pm f_{\beta}\big)(1 + c_i \pm x_i)$ on $H$. Let $I$ be as in the proof of Lemma 5.2, i.e., the subset of variables that have an odd entry in $\beta$. If $I = \emptyset$, then the first term $\sum_{j=0}^{r} f_{\alpha(j)} + f_{\beta}$ is constant over the hypercube $H$; thus $s(x)\ell_i(x)$ takes two different values depending on the $i$-th entry of the vector. Each value is attained on exactly half of the vectors.
If $I \ne \emptyset$ and $i \notin I$, the claim holds since the value of the first term depends only on the entries in $I$ and the value of the second term depends on the $i$-th entry. Hence, the polynomial $s(x)\ell_i(x)$ attains four values, each on exactly one fourth of the vectors of $H$.
Finally, let $I \ne \emptyset$ and $i \in I$. Partition the hypercube vertices into two sets depending on the $i$-th entry. Each set has cardinality $2^{n-1}$. Consider the set with $x_i = 1$. For the vectors in this set the second term takes the constant value $2 + c_i$. Over this set the polynomial $s$ takes one of the values $\sum_{j=0}^{r} f_{\alpha(j)}(x) \pm f_{\beta}(x)$, depending on whether $x$ has an odd or even number of $-1$ entries in the set $I \setminus \{i\}$. In both cases the number of such vectors is equal to
$$2^{n-|I|} \sum_{\substack{j=0 \\ j \text{ odd}}}^{|I|-1} \binom{|I|-1}{j} \;=\; 2^{n-|I|}\, 2^{|I|-2} \;=\; 2^{n-2}.$$
The analysis for the case $x_i = -1$ is analogous. Now we can provide the proof of Theorem 5.1.
Proof (of Theorem 5.1). Assume $f_a \in S_d$ for some $a \in \mathbb{N}$ and $d \in \mathbb{N}$. We prove that $a$ has to be smaller than or equal to $\frac{2^n - 1}{2^{n-2} - 1}$.
Since $f_a \in S_d$ we know that
$$f_a(x) = s_0(x) + \sum_{i=1}^{n} s_i(x)\ell_i(x) + \sum_{j=1}^{n} \Big(\tilde{s}_j(x)\,(x_j^2 - 1) + \tilde{s}_{j+n}(x)\,(1 - x_j^2)\Big)$$
with $s_0, \ldots, s_n, \tilde{s}_1, \ldots, \tilde{s}_{2n} \in C_{n,2d}$. Since $\pm(x_j^2 - 1)$ for $j \in [n]$ vanishes over the hypercube $H$, we can conclude
$$(5.1)\qquad f_a(x) = s_0(x) + \sum_{i=1}^{n} s_i(x)\ell_i(x) \quad \text{for all } x \in H$$
for some $s_0, s_1, \ldots, s_n \in C_{n,2d}$. Let $s_{0,k}$ and $s_{i,j}$ be nonnegative circuit polynomials such that $s_0 = \sum_k s_{0,k}$ and $s_i = \sum_j s_{i,j}$. Thus we obtain a chain of estimates in which the first inequality comes directly from Lemmas 5.2 and 5.3 and the last equality from the fact that $f_a(e) = a$.
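The chain of estimates referred to in the previous sentence can be read as follows; this is our reconstruction rather than a verbatim display from the paper, using only Lemmas 5.2 and 5.3 (each circuit summand is nonnegative on $H$ and attains each of its values on at least $2^{n-2}$ vertices) together with $f_a(e) = a$.

```latex
% Reconstructed estimate (our reading, not a verbatim display from the paper):
\sum_{x \in H}\Bigl(s_0(x) + \sum_{i=1}^{n} s_i(x)\,\ell_i(x)\Bigr)
\;\ge\; 2^{\,n-2}\Bigl(s_0(e) + \sum_{i=1}^{n} s_i(e)\,\ell_i(e)\Bigr)
\;=\; 2^{\,n-2}\, f_a(e) \;=\; 2^{\,n-2}\, a .
```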
On the other hand, by the properties of the function $f_a$ and the equality (5.1), we know that
$$\sum_{x \in H} \Big( s_0(x) + \sum_{i} s_i(x)\ell_i(x) \Big) = 2^n - 1 + a,$$
which makes the resulting inequality a necessary requirement for $f_a \in S_d$. Speaking from a broader perspective, we interpret Theorem 5.1 as an indication that the real algebraic structures which we use to handle sums of squares do not apply in the same generality to SONCs. We find this not at all surprising, given that in the 19th century Hilbert initially used SOS as a certificate for nonnegativity and many of the algebraic structures in question were developed afterwards with Hilbert's results in mind; see [Rez00] for a historic overview. Our previous work shows that SONCs, in contrast, can, e.g., very well be analyzed with combinatorial methods. We thus see Theorem 5.1 as further evidence of the very different behavior of SONCs and SOS and as an encouragement to take methods besides the traditional real algebraic ones into account for the successful application of SONCs in the future. | 9,906.4 | 2018-02-01T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
T-dependent RMF Model Applied to Ternary Fission Studies
Received: January 30, 2021 Accepted: May 25, 2021 Published Online: August 31, 2021 Ternary decay is a comparatively rare phenomenon. The yield distribution for the thermal neutron-induced fission of 236U was investigated within the Temperature-dependent Relativistic Mean Field (TRMF) approach and statistical theory. The binding energy obtained from TRMF for the ground state and at a specific temperature is used to evaluate the fragment excitation energy, which is needed to calculate the nuclear level density. Using the ternary convolution, the yield for α-accompanied fission of 236U* is calculated. Initial results are presented which show a maximum yield for the fragment pair Tc + Ag + α. Further, the ternary pre-existence probability for the spontaneous fission of 236U was studied considering fixed third fragments of α, 10Be and 14C using the area of the overlapping region. No significant change in the yield distribution was observed when fragment deformations are considered. However, the heavy group for the probable pair remains 132Sn, with a change in the mass number of the lighter fragment.
Introduction
Unstable nuclei undergo radioactive decay by emitting radiation such as α, β and γ. Nuclear fission is another important process in which the nucleus splits into fragments, spontaneously or through induced processes, along with the release of energy. When the excitation energy of the fragments is small, no neutrons are emitted and the phenomenon is known as cold fission. In such processes, one of the fragments is found to be associated with closed-shell nuclei.
Ternary fission is an exotic decay mode in which the parent system splits into three fragments, and it can be used as a probe to study nuclear structure. α-accompanied fission is the most commonly observed light-charged-particle-accompanied fission, with an energy spectrum from 6 to 40 MeV. The size of the third particle varies from a neutron up to the case of true ternary fission, in which the three fission fragments are of equal mass. Light particles such as H, Li, Be, and C were also observed in the spontaneous fission of various parent systems.
The experimental investigation [1] of the neutron-induced fission of 235U indicated the presence of one α-particle per 250 fissions. Further, Tsien et al. [2] studied the mass and kinetic energy of fission fragments in the tripartition of 235U. Mostly, the third particle was observed perpendicular to the two heavier masses. In addition, the authors reported quadripartition of the uranium nucleus, with a frequency of occurrence of 1/3000 that of bipartition. The α-particle-accompanied and binary fission of 235U were experimentally investigated by Asghar et al. [3], who observed a similar distribution of fragment kinetic energy for the two fission modes. Furthermore, the mass distributions obtained for binary and α-accompanied fission have a similar form, with a narrower distribution for the latter case. The relative yields of 3H, 3He, and 4He and their energy distributions were studied for the thermal neutron-induced fission of 235U [4][5][6].
To understand the ternary decay mechanism, various theoretical models have been developed. The ternary decay of heavy nuclei was studied using the Three Cluster Model (TCM) proposed by Manimaran and Balasubramaniam [7][8][9][10]. Vijayaraghavan et al. [11] also used the potential energy surface to study the fragment arrangements for the ternary fragmentation of 252 Cf. One of the authors has studied the binary [12] and ternary mass distribution [13] for the thermal neutron-induced fission of 236 U within the dynamical and scission point model respectively.
A brief account of yield calculation for the ternary decay of 236 U using the relativistic mean-field approach and statistical theory is described in the next section. Following that, the ternary pre-existence probability estimation of 236 U using the area of the overlapping region will be presented. In the subsequent sections, the preliminary results obtained by the two approaches are presented, followed by a summary.
Methodology
The ternary decay in the thermal neutron-induced fission of 236U is considered with the α-particle as the fixed third fragment. The possible mass fragmentations are generated by comparing with the mass table [14] along with the constraints A1 ≥ A2 ≥ A3 and A = A1 + A2 + A3. A3 is the smallest fragment, taken as fixed and considered here to be the α-particle. The interaction potential is calculated using Eq. (1), assuming deformed fragments (A2 and A1) in the equatorial arrangement. The temperature-dependent binding energy, BEi(T), is calculated using the relativistic mean-field (RMF) formalism, which is briefly described below. Here, a constant temperature corresponding to the compound nucleus (CN) excitation energy is used. To evaluate the Coulomb and nuclear potentials, the quadrupole deformation values obtained from the RMF approach are used rather than experimental data. As a preliminary calculation, we have restricted ourselves to the quadrupole deformation values only. The expression for the Coulomb potential and the terms involved are defined in Refs. [15,16].
S(β1, β2) gives the strength of the nuclear interaction, and the associated terms are described in Refs. [15,17]. The proximity potential for spherical fragments [18] is expressed in terms of the inverse of the root-mean-square radius, the nuclear surface energy term, the surface width, and the universal function, respectively.
The obtained fragmentation potential is then minimized with respect to the charge number and, further, we have restricted the input mass fragment window to start from A2 = 60.
Relativistic Mean-Field Theory
The phenomenological description of the ground-state properties of nuclei has been successfully established using relativistic mean-field calculations. Interactions between nucleons are assumed to occur through mesonic fields. The relativistic description accounts for the spin-orbit interaction and the shell-model properties of nuclei. It is possible to calculate nuclear binding energies, quadrupole deformations, r.m.s. radii, and matter density distributions within the RMF approach.
RMF Formalism
Nucleons are considered as Dirac spinors interacting among themselves through the exchange of mesons. The relativistic Lagrangian density for the nucleon-meson interaction is taken from Refs. [19,20]. The nucleon and meson equations form a set of coupled equations and are solved iteratively. In the expression for the total energy, G is the pairing force constant and u_i^2 and v_i^2 are the occupation probabilities. Temperature is included in the partial occupancies through the Fermi-Dirac distribution function, with the quasiparticle energy expressed in terms of the single-particle energy ε_i, the chemical potential λ, and the pairing gap Δ. The chemical potential λ for neutrons and protons is obtained from the particle number conservation equations. Here we have calculated the binding energies of the nuclei using the TRMF formalism.
The difference between the binding energies obtained from the TRMF formalism and the experimental values is shown in Fig. (1). The binding energies obtained by the TRMF approach are found to be comparable with the experimental values for the considered mass range. Hence, this model may be appropriate to evaluate the binding energy in the ground and excited states.
Statistical Theory
The statistical theory considers nucleons as non-interacting fermions [21]. The Fermi-Dirac occupation probability can be used to estimate the energy of the nucleus. The level density of the system is expressed in terms of the level density parameter a_i, given as a_i = E_i*/T^2. The relative fission probability at the scission point is proportional to the folded level density of quantum states of the fission fragments.
The ternary convoluted level density ρ123 [22,23] is obtained by folding the level densities of the three fragments, with j = 1, 2 and 3. E_i* is the excitation energy of fragment i (= 1, 2 and 3). For the minimized cases, the excitation energies of the fragments are evaluated using the binding energy values obtained from the TRMF formalism. The ternary fission yield is taken as the ratio between the probability of a given fragmentation and the sum of the probabilities of all possible ternary fragmentations.
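The yield definition above lends itself to a simple numerical illustration. The sketch below is our own illustration, not code from the paper: it assumes the common Fermi-gas form ρ ∝ exp(2√(aE*)) for the individual level densities, approximates the ternary convolution by the product ρ1·ρ2·ρ3, and normalizes over the candidate fragmentations; the fragment list, excitation energies and temperature are placeholder values.

```python
import math

# Hypothetical fragmentations (A1, A2, A3 = alpha) with assumed excitation
# energies E* (MeV); these numbers are placeholders, not data from the paper.
fragmentations = {
    ("132Sn", "100Zr", "4He"): (12.0, 18.0, 2.0),
    ("104Tc", "128Ag", "4He"): (20.0, 22.0, 2.0),
    ("86Br",  "146Cs", "4He"): (15.0, 16.0, 2.0),
}

T = 1.0  # constant temperature (MeV), as assumed for the compound nucleus

def level_density(e_star, temperature):
    """Fermi-gas-like level density with a = E*/T^2 (assumed form)."""
    a = e_star / temperature**2
    return math.exp(2.0 * math.sqrt(a * e_star))

def folded_density(excitations, temperature):
    """Approximate the ternary convolution by a product of level densities."""
    rho = 1.0
    for e_star in excitations:
        rho *= level_density(e_star, temperature)
    return rho

# Relative yield: probability of one fragmentation over the sum over all of them.
weights = {pair: folded_density(es, T) for pair, es in fragmentations.items()}
total = sum(weights.values())
for pair, w in weights.items():
    print(pair, f"relative yield = {w / total:.3e}")
```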
Ternary Pre-existence Probability
According to Gamow's model, the preformation probability or spectroscopic factor gives a measure of the formation probability of fission fragments within the CN. Different approaches have been developed to account for the fission process and to determine the preformation probabilities of nuclei. Within the fission model, the preformation probability was considered as the penetrability of the prescission part of the barrier. Two approaches were used for the preformation probability estimation: the fission model, developed on the basis of fission theory, and the preformed cluster-decay model (PCM) [24,25], based on the collective-model picture. Prescission and postscission parts of the potential are considered in the fission model, whereas only the outer part is present in the PCM. Poenaru et al. [26] estimated the preformation probability within the unified fission approach. The penetration probability of the inner part of the barrier, or the overlapping potential, is considered as the preformation probability. One of us has studied the complete binary decay of 56Ni, 116Ba, 226Ra and 256Fm within the Unified Fission Model (UFM) [27]. As an extension, we also reported the pre-existence probability for the spontaneous ternary fission of various Cf isotopes from 242Cf to 256Cf in steps of two mass units with different choices of third fragments [28,29].
Fragmentation Potential -Overlapping Area
The spontaneous ternary fission of 236U is considered assuming an equatorial arrangement of fragments. The possible fragmentations are generated with the constraints A1 ≥ A2 ≥ A3 and A = A1 + A2 + A3. The potential is shown in Fig. (2) for the exit channel 132Sn + 94Sr + 10Be, assuming spherical and deformed fragments. The dotted line corresponds to the overlapping region assuming deformed fragments, and the solid line depicts the overlapping region assuming spherical fragments. Here the overlapping region can be approximated as a triangle of base R0t − RP fm and height V(R0t) − Q MeV for spherical fragments, and of base Rt − RP fm and height V(Rt) − Q MeV for deformed fragments. RP is the radius of the parent system; R0t and Rt are, respectively, the scission points for spherical and deformed fragments. Thus, the area of the overlapping region can be calculated using the values of the potential, the Q value, and the radii of the nuclei at the scission point. The area can be correlated with the pre-existence probability of the fission fragments, and the overlapping area is computed separately for the spherical and deformed cases. Then, in a manner similar to the WKB approximation, the probability can be obtained for each exit channel, where m is the reduced mass of the ternary system. The normalized pre-existence probability P0 is obtained by normalizing the channel probabilities over all fragmentations.
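To make the area-based estimate concrete, the following sketch is our own illustration rather than the paper's code: it computes the triangular overlap area from the scission-point potential, the Q value and the radii, and then forms a WKB-like penetrability exp(−k·√(2m)·Area) with an arbitrary proportionality constant k, since the exact expression is given only by the paper's (omitted) equations; all numerical inputs are placeholders.

```python
import math

HBARC = 197.327  # MeV fm

def overlap_area(v_scission, q_value, r_scission, r_parent):
    """Triangular approximation of the overlapping region:
    base (R_t - R_P) in fm, height (V(R_t) - Q) in MeV."""
    return 0.5 * (r_scission - r_parent) * (v_scission - q_value)

def channel_probability(area, reduced_mass, k=2.0 / HBARC):
    """WKB-like penetrability built from the overlap area (assumed form)."""
    return math.exp(-k * math.sqrt(2.0 * reduced_mass) * area)

# Placeholder channels: (V(R_t) [MeV], Q [MeV], R_t [fm], R_P [fm], mu [MeV/c^2])
channels = {
    "132Sn + 94Sr + 10Be": (205.0, 185.0, 14.5, 7.4, 8500.0),
    "132Sn + 90Kr + 14C":  (210.0, 186.0, 14.8, 7.4, 11500.0),
}

raw = {name: channel_probability(overlap_area(v, q, rt, rp), mu)
       for name, (v, q, rt, rp, mu) in channels.items()}
total = sum(raw.values())
for name, p in raw.items():
    print(name, f"P0 = {p / total:.3e}")  # normalized pre-existence probability
```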
Ternary Yield Distribution of 236 U * using TRMF
The ternary yield distribution for the thermal neutron-induced fission of 236U is studied within the TRMF formalism and statistical theory, following the nuclear level density treatment used earlier [23,30]. The fragmentation potential is then calculated with the resulting deformation and binding energy values. For the charge-minimized fragmentations, the excitation energy is evaluated using Eq. (10). Then the nuclear level density and the relative yield are evaluated using Eqs. (9) and (11), and the preliminary result obtained is shown in Fig. (3). The maximum yield is observed for the nearly symmetric breakup corresponding to the fragment pair Tc + Ag + α. Further, a secondary maximum is observed for the fragmentation Br + Cs + α. However, Sn + Zr was previously reported as the probable fragment pair in the α-accompanied fission of 236U* [13]. It is planned to include the effect of channel temperature satisfying energy balance conditions within the TRMF formalism. Figure 4: Ternary pre-existence probability distribution for the spontaneous fission of 236U considering α, 10Be and 14C as the third particles.
Ternary Yield Distribution of 236 U * using Overlapping Area Approach
The area of the overlapping region may be taken as a measure of the pre-existence probability. The normalized pre-existence probability distribution for the spontaneous ternary decay of 236U accompanied by the fixed third fragments 4He, 10Be and 14C is shown in Fig. (4), assuming spherical and deformed fragments. The fragment pairs with maximum yield are also marked in the plot. The probable pairs are the same for spherical and deformed fragments. However, the heavier group remains the closed-shell nucleus 132Sn for all three choices of the third fragment, with a corresponding shift in the light fragment mass number. A decrease in the yield values is observed when fragment deformations are considered, except at the probable fragment pairs. In addition, with an increase in the mass number of the third fragment, there is a gradual shift from a narrow P0 distribution to a broader one.
Summary
Two approaches were used to study the ternary yield distribution for the α-accompanied fission of 236U. The ternary convolution was used in statistical theory to obtain the relative yield using the binding energies of the ground and excited states derived via TRMF. Quadrupole deformations of the fragments from the TRMF formalism were also used to choose the minimized fragmentations. The fragment pair Tc + Ag + α has the maximum yield. The pre-existence probability distribution for the α-, 10Be- and 14C-accompanied spontaneous fission of 236U was studied using an analytical method. The area of the overlapping region was correlated with the pre-existence probability of the fragments. The fragment pairs corresponding to maximum yield remain the same for spherical and deformed fragments.
"Physics"
] |
Structural Refinement of Carbimazole by NMR Crystallography
The characterization of the three-dimensional structure of solids is of major importance, especially in the pharmaceutical field. In the present work, NMR crystallography methods are applied with the aim of refining the crystal structure of carbimazole, an active pharmaceutical ingredient used for the treatment of hyperthyroidism and Graves' disease. Starting from previously reported X-ray diffraction data, two refined structures were obtained by geometry optimization methods. Experimental 1H and 13C isotropic chemical shifts, measured by suitable 1H and 13C high-resolution solid state NMR techniques, were compared with DFT-GIPAW calculated values, allowing the quality of the obtained structure to be checked experimentally. The refined structure was further validated through the analysis of 1H-1H and 1H-13C 2D NMR correlation experiments. The final structure differs from that previously obtained from X-ray diffraction data mostly in the positions of the hydrogen atoms.
Introduction
In the determination of the solid-state structure of crystalline compounds, NMR crystallography [1] has gradually grown in importance and is now considered complementary and supplementary to X-ray diffraction crystallography, the established leading technique in the field. The combination of the two techniques is particularly powerful in providing in-depth analyses of crystalline materials. Indeed, NMR techniques can cope with some limitations of X-ray diffractometry (XRD), such as the requirement of high quality and large single crystals. Of course, powder X-ray diffraction (PXRD) can also be applied in this case, but solving structure from PXRD still remains a challenging operation and the obtained structures are usually of lower quality than those derived from single crystal diffraction data. More importantly, XRD can have difficulty in making distinctions between isoelectronic species and atoms with similar atomic numbers. Finally, hydrogen atoms are poorly localized by XRD, as diffractometric techniques locate the centroid of the electron density, not the nuclear positions. On the other hand, NMR is intrinsically sensitive to the nuclear species and provides chemically selective information. In particular, the positions and interactions of hydrogen atoms can be finely probed by 1 H NMR, and also exploiting heteronuclei, such as 13 C, 14/15 N, 31 P and others.
In some research fields, the accurate characterization of the three-dimensional structure of solids is extremely important. This is particularly true in the pharmaceutical field, where the presence of an unknown polymorph could lead to extremely serious consequences [2,3]. Moreover, accurate crystal structures of solid pharmaceutics can be used to calculate important parameters using periodic density functional theory (DFT) calculations [4][5][6]. To this aim, structural studies capable of obtaining additional information to those derived from only diffractographic techniques are often required.
NMR has provided crystallographic information from its earliest days [7]; nowadays, thanks to the enormous developments made in the NMR field, there are many examples in the literature where NMR crystallography was successfully applied in the field of biochemistry [8,9], in the study of inorganic materials [10], crystalline microporous materials [11], supramolecular assemblies [12], and pharmaceutical systems [13,14].
The ability to calculate NMR parameters, thanks to the improved efficiency of DFT codes, and in particular, thanks to the development of the gauge-including projectoraugmented waves (GIPAW) method, has allowed rapid development of NMR crystallography, especially for the study of small organic molecules [15]. With respect to methods based on gauge-including atomic orbitals (GIAOs) [16], although extended to account for periodic conditions [17], in recent years, GIPAW has become the most popular method, as a first-principle theoretical framework in the context of NMR crystallography [18,19]. The reason for its success stems from the development of well-developed codes using periodic boundary conditions in conjunction with plane-waves, as basis sets and accurate dedicated pseudopotentials to build Bloch states, and simulate magnetic properties of crystalline (and, more in general, solid-state) systems. In this area, NMR methods have been used in a wide range of applications, for example assisting the structure solution process from PXRD data [20][21][22], establishing molecular conformations [23,24], confirming and rationalizing intermolecular interactions [25][26][27], and in some cases, deriving complete structures in absence of diffraction data [28][29][30].
In this context, an important research field regards the validation of structures derived from diffractographic data. This process often results in the optimization of atom positions in the unit cell of the crystal. The validation of diffractographic structures is based on the comparison between experimentally measured NMR parameters with those calculated with DFT methods. This process was proven effective at resolving ambiguities related to the molecular structure [31], to choose between alternative proposed structures [32,33], and to refine them through optimization of atom positions in the unit cell [34][35][36][37]. Although optimization of hydrogen atoms usually has the biggest effect, changes in heavy atom positions obtained through full optimization of the molecule sometimes results in improved agreement with experimental NMR data.
The present article reports the structural refinement of carbimazole by NMR crystallography. Carbimazole is, currently, one of the most used drugs for the treatment of hyperthyroidism and Grave's disease. Its antithyroid action is attributed to its metabolization to methimazole in the body, which inhibits the first step of thyroid hormone synthesis in thyroglobulin [38][39][40][41]. Although anti-thyroid drugs (methimazole, carbimazole, propylthiouracil) have been used for over 70 years, despite a lot of research, their mechanisms of action are still not fully understood, especially at the molecular level.
The crystal structure of carbimazole has been independently studied by two research groups [42,43], and is reported in the Cambridge Structural Database (JOVDIH and JOVDIH01). The two structures are very similar and their main parameters are reported in Table 1. Delage et al. [42] derived the crystal structure by single crystal XRD (SCXRD) at ambient temperature with CuKα radiation in 1990, while the structural determination performed by D. Das and co-workers [43] with X-ray diffraction was a secondary aspect of a more general study looking at the biological activity of carbimazole and its analogues. Therefore, the diffractometric procedure and the determined structure were not described and discussed in detail. In addition, to the best of our knowledge, no solid state NMR spectra of carbimazole are present in the literature thus far. All of the mentioned aspects make carbimazole an interesting case study for structural refinement by NMR crystallography.

Table 1. Main parameters of the crystal structures JOVDIH [42] and JOVDIH01 [43] reported in the literature.
JOVDIH: space group Pna21; JOVDIH01: space group Pnma.

Here we present the first solid-state NMR (SSNMR) characterization of carbimazole. In particular, 13C cross polarization (CP)/magic angle spinning (MAS), 1H MAS, 1H combined rotation and multiple pulse spectroscopy (CRAMPS), 1H-1H double quantum-single quantum (DQSQ), and 1H-13C heteronuclear correlation (HETCOR) experiments were performed, and a complete assignment of the NMR peaks was achieved. The structural refinement was performed by using DFT with PAW pseudopotentials by optimizing hydrogen atoms only or all atoms in the cell. 1H and 13C isotropic chemical shifts were calculated for the raw and refined structures. The comparison between experimentally measured and calculated chemical shift values confirmed the better quality of the refined structures. These were further validated through the analysis of 2D NMR correlation 1H-1H DQSQ and 1H-13C HETCOR experiments.
DSC, TGA and PXRD
First, we performed differential scanning calorimetry (DSC), thermogravimetric analysis (TGA) and PXRD in order to obtain a basic characterization of our carbimazole sample. All of these experiments indicate that the sample under study is a pure, crystalline, and anhydrous form. In particular, the DSC thermogram (Figure 1a) shows the melting peak of carbimazole at 126.4 °C, in agreement with the value reported in DrugBank [44], equal to 123.5 °C. In addition, DSC and TGA (Figure 1b) do not show anomalies or weight loss around 100 °C or below, confirming that the investigated sample is anhydrous. The PXRD spectrum also confirms that the solid form investigated is the same polymorph studied by Delage et al. [42] and Das et al. [43].
1D High-Resolution 13 C and 1 H SSNMR Spectra
The 13C CP-MAS spectrum of carbimazole recorded at room temperature and at a MAS frequency (νMAS) of 22 kHz is reported in Figure 2, together with the signal assignment, and shows seven narrow and well-resolved peaks. The absence of signal multiplicity confirms the presence of a single independent molecule in the unit cell (Z' = 1), as previously reported by Delage and co-workers [42]. The assignment of the spectrum was carried out by comparison with the 13C solution-state NMR spectrum of carbimazole [43,45] and was confirmed by the 1H-13C HETCOR experiment (vide infra). It is worth noting that the signal intensities in the 13C CP-MAS spectrum reflect the number of protons directly linked to the carbon nuclei, as expected.
The 1H MAS spectrum recorded at νMAS = 22 kHz (Figure 3a) shows scarce resolution; nevertheless, at least three heavily superimposed peaks centered at 2.4, 4.3, and 6.9 ppm can be identified. In order to improve the spectral resolution, MAS had to be combined with suitable pulse sequences, such as phase modulated Lee-Goldburg (PMLG) and decoupling using mind boggling optimization (DUMBO), aimed at better removing the 1H homonuclear dipolar coupling. The spectra so obtained are reported in Figure 3b,c, respectively. Both PMLG-MAS and DUMBO-MAS spectra show greatly improved spectral resolution: five partially overlapped peaks are now clearly distinguishable, corresponding to the five groups of inequivalent protons, as expected on the basis of the molecular structure. As for the 13C spectrum, also in this case the spectral assignment was performed by comparison with the 1H solution-state NMR spectra [43,46] and with the assistance of the 1H-13C HETCOR experiment. All experimental isotropic 1H and 13C chemical shift values are reported in Table 2 along with the assignment of the peaks.
Optimization of the Crystallographic Structure
Two crystal structures of carbimazole exist in the literature (JOVDIH [42] and JOVDIH01 [43]). They are in fair agreement on the lengths of the axes of the orthorhombic unit cell belonging to the Pmmm point group (space groups Pna21 and Pnma for JOVDIH and JOVDIH01, respectively), measuring 7.689 Å, 6.637 Å, and 17.364 Å for JOVDIH and 7.698 Å, 6.650 Å, and 17.388 Å for JOVDIH01. The unit cell contains four molecules (for a total of 88 atoms) generated from a single independent (Z' = 1) molecule via the symmetry operations of the point group (see Table 1). We focused our analysis on the estimation of the isotropic chemical shifts of both the 13C and 1H nuclei, which are reported in Table 2, starting from the more recent JOVDIH01 crystal structure. All of the simulations were performed by imposing the experimental values of the lengths of the crystal axes. Within the cell, we used three levels of local optimization: (i) no optimization at all, considering the experimental positions derived directly from the X-ray structure; (ii) a local optimization of the H atoms only (X-ray experimental C-H distances are in fact underestimated by about 10% with respect to known typical values); (iii) a complete local optimization of all the atoms within the unit cell. The data collected from the DFT-GIPAW simulations were corrected by using linear regression functions, whose analytic expressions are reported for each case in Table 5. Since the validation of the refined structures is mainly based on the values of the RMSD between calculated shielding values and experimental chemical shifts, performing a separate regression for each set of data allowed systematic deviations to be minimized. As is immediately evident from the reported values, the simulated isotropic chemical shifts estimated on the bare experimental positions carry a significant RMSD for both 1H and 13C species, due to the aforementioned underestimation of the C-H distances; moreover, the slope of the regression function is remarkably far from unity for both species, indicating the difficulty of reconciling experimental and simulated values in this case. The situation improves when considering levels (ii) and (iii) of local optimization: the RMSDs are reduced to the typical values reported in the literature for the two considered species [47,48], and the regression functions are characterized by slopes near unity. Interestingly, the lowest RMSD value is achieved when only H atoms are optimized, indicating that the optimization of the heavier atoms at the DFT-GGA level slightly worsens the agreement with the experimental values. This can be expected, as hybrid XC-functionals are more accurate in predicting chemical shifts for this type of molecule [49], but the chosen periodic approach allowed the use of the gradient-corrected functional only. PXRD patterns were also simulated for the three levels of optimization and turned out to be very similar to each other (see Figure 1 and Supplementary Materials).
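As an illustration of the regression step described above, the following sketch is our own, not the authors' code: the calculated absolute shieldings σ are mapped onto chemical shifts δ by a least-squares line fitted separately for each nucleus, and the RMSD between the mapped values and the experimental shifts is used as the quality metric; the numbers below are placeholders.

```python
import numpy as np

# Placeholder data: DFT-GIPAW absolute shieldings (ppm) and experimental
# isotropic chemical shifts (ppm) for one nuclear species (e.g. 13C).
sigma_calc = np.array([30.5, 55.2, 120.8, 140.1, 150.3, 160.9, 175.4])
delta_exp  = np.array([150.8, 128.0, 63.2, 45.0, 35.1, 25.0, 12.4])

# Linear mapping delta = a * sigma + b fitted by least squares,
# done separately for 1H and 13C to absorb systematic deviations.
a, b = np.polyfit(sigma_calc, delta_exp, 1)
delta_fit = a * sigma_calc + b

rmsd = np.sqrt(np.mean((delta_fit - delta_exp) ** 2))
print(f"delta = {a:.3f} * sigma + {b:.1f}; RMSD = {rmsd:.2f} ppm")
```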
2D SSNMR Spectra: Validation of the Optimized Structure
1 H-13 C HETCOR and 1 H-1 H DQSQ experiments are often applied to enlarge the amount of information in NMR crystallography studies [50][51][52]. Here, these experiments were performed in order to obtain a validation of the refined structure of carbimazole by a semi-quantitative comparison of the signal intensities in the spectra with the distances of the corresponding coupled nuclei measured from the optimized crystal structure.
The 1H-13C HETCOR spectrum (Figure 4) shows signals whose intensities depend on the strength of the heteronuclear dipolar interactions, and, in turn, primarily depend on the distance of the coupled nuclei (Table 3). Indeed, the most intense peaks correspond to the directly bonded pairs of 1H-13C nuclei (peaks 1, 2, 3, 4, and 5). Among these peaks, the intensity is roughly proportional to the number of hydrogen atoms directly bonded to the carbon nucleus: maximum for the signals of the methyl groups and minimum for those of the olefinic groups. Among the other peaks, the signals with larger intensities are those corresponding to the intramolecular interactions C6-H7 and C7-H6 (peaks 6 and 8, respectively) and to the intermolecular interactions C4-H7* and C7-H4* (peaks 7 and 9, respectively). The C4-H7*/C7-H4* pairs show larger signal intensities than other pairs of nuclei characterized by shorter internuclear distances, since, in the crystal structure, there are 12 H4 atoms at distances shorter than 5 Å from each C7 (and vice versa there are 12 H7 atoms at distances shorter than 5 Å from each C4, as shown in Figure 5). The remaining signals in the spectrum also show intensities compatible with the C-H distances obtained from the optimized crystal structure.
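The distance argument used above can be mimicked numerically. The short sketch below is our own illustration with made-up coordinates, not data from the refined structure: for a given carbon position it counts the protons within a 5 Å cutoff and sums their 1/r³ dipolar-coupling-like contributions, which is the kind of quantity the HETCOR intensities are being compared against in a semi-quantitative way.

```python
import numpy as np

# Placeholder Cartesian coordinates (angstrom) of one carbon and several
# surrounding protons; real values would come from the refined unit cell.
c_atom = np.array([0.0, 0.0, 0.0])
h_atoms = np.array([
    [1.1, 0.0, 0.2],   # directly bonded proton
    [2.5, 1.0, 0.3],
    [3.8, 2.2, 1.5],
    [5.6, 0.5, 2.0],   # beyond the 5 angstrom cutoff
])

cutoff = 5.0
distances = np.linalg.norm(h_atoms - c_atom, axis=1)
within = distances < cutoff

n_close = int(np.count_nonzero(within))
# Dipolar coupling strength scales as 1/r^3; sum the contributions of close protons.
dipolar_sum = np.sum(1.0 / distances[within] ** 3)

print(f"protons within {cutoff} A: {n_close}, sum of 1/r^3 = {dipolar_sum:.3f} A^-3")
```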
Table 3. Signals in the 1H-13C HETCOR spectrum, numbered in order of decreasing intensity. For each signal, the nuclei involved in the interaction, the normalized intensity, and the minimum distance between the nuclei calculated from the optimized crystal structure are reported. Asterisks in the coupled nuclei column denote intermolecular interactions.

In the 1H-1H DQSQ spectrum (Figure 6), the signals corresponding to the interactions between 1H nuclei belonging to the same chemical group (peaks 1, 2, and 3) are characterized by the largest intensities, since the distances between these pairs of nuclei are smaller than any other. The difference in intensity between peaks 1 and 3, both arising from a methyl group, clearly suggests interpreting the intensity of the signals only in a semi-quantitative manner. Contrary to the HETCOR experiment, in fact, in the DQSQ spectrum, a correspondence between the trends of the minimum distances of H pairs and of the intensities of the corresponding signals in the spectrum cannot be established (Table 4). In any case, only the signals arising from 1H pairs showing a distance smaller than 2.9 Å in the optimized structure can be clearly detected in the DQSQ spectrum. Even in this case, the strongest intermolecular dipolar interactions are those between the protons belonging to the methyl groups 4 and 7 (Figure 5).

Table 4. Signals of the 1H-1H DQSQ spectrum, numbered and reported in order of decreasing intensity.
For each signal, the pair of protons involved in the interaction, the normalized intensity, and the minimum distance between the nuclei calculated from the optimized crystal structure are reported. Although we could not interpret the signal intensities of the 2D spectra in a strictly quantitative way, these experiments were useful, on the one hand, to support the signal assignment of the 1H and 13C 1D high-resolution spectra and, on the other hand, to validate the crystal structure optimized by DFT. The observed deviations from the theoretical relationship between signal intensity and the inverse of the third power of the internuclear distance can have different sources. First, due to the difficulty of resolving all peak superpositions in the 2D spectra, the intensities were taken as the heights of the unresolved 2D peaks. Second, the distances are calculated from an ideal "frozen" structure, so the presence, at the experimental room temperature, of both rotational and vibrational molecular motions certainly introduces discrepancies between actual and calculated internuclear distances. Although such discrepancies could, in principle, be strongly reduced by combining ab initio molecular dynamics with DFT GIPAW calculations, this is computationally very demanding and, in any case, beyond the scope of this work [53]. Third, possible artifacts can arise from the experiments due, for instance, to RF inhomogeneity.
Materials and Methods
Sample. The carbimazole sample was purchased from TCI (Tokyo, Japan) (CAS RN 22232- ). Differential Scanning Calorimetry (DSC). DSC was performed by heating at 10 K/min under a dry nitrogen atmosphere using a Perkin Elmer (Waltham, MA, USA) DSC8500 calorimeter. Thermogravimetric Analysis (TGA). TGA measurements were performed with a TGA4000 thermogravimetric analyzer (Perkin Elmer) in a temperature range from 20 °C to 600 °C, with a rate of 10 °C/min under a dry nitrogen atmosphere (flow rate 20 mL/min).
Powder X-ray diffraction (PXRD). The PXRD spectrum was collected on a powdered sample using a Bruker (Rheinstetten, Germany) D8 Advance diffractometer with CuKα radiation (λ = 1.54 Å) and a LynxEye detector, operating in Bragg-Brentano geometry. Scans were recorded at room temperature (300 K) over angles ranging from 6 to 60° (2θ), with a step size of 0.03° and continuous scan mode.
NMR Methods. Solid State NMR spectra were recorded on a Bruker Avance Neo spectrometer working at Larmor frequencies of 500.13 and 125.77 MHz for 1 H and 13 C nuclei, respectively, equipped with triple-resonance CP-MAS probehead accommodating rotors, with an external diameter of 2.5 mm. The 90 degree pulse duration was 2.08 and 5 µs for 1 H and 13 C nuclei, respectively. The 1 H-13 C CP-MAS spectrum was recorded at a MAS frequency of 22 kHz, using a contact time of 2 ms and accumulating 1000 scans. The 1 H MAS spectrum was recorded at a MAS frequency of 22 kHz accumulating 4 scans. The 1 H PMLG-MAS spectrum [54] was recorded at a MAS frequency of 15 kHz accumulating 32 scans. The 1 H DUMBO-MAS spectrum [55] was recorded at a MAS frequency of 12 kHz accumulating 32 scans. The 1 H-13 C HETCOR spectrum with FSLG decoupling in the indirect dimension [56] was recorded at a MAS frequency of 15 kHz, using a contact time of 0.5 ms, accumulating 128 rows and 64 scans. The 1 H-1 H DQ-SQ spectrum [57] was recorded at a MAS frequency of 12 kHz, using the eDUMBO-1 22 scheme [58] for decoupling during acquisition, accumulating 256 rows and 16 scans. In all relevant experiments, a SPINAL-64 decoupling scheme [59] was applied on 1 H nuclei while acquiring the 13 C signal. In all cases, the measurements were carried out at room temperature (about 296 K) and a recycle delay of 10 s was used.
Computational simulations. All DFT calculations were performed by using the Quantum Espresso (QE) suite of programs [60], employing plane-augmented-wave (PAW) pseudopotentials [61], the PBE-D2 XC-functional [62,63], and plane-waves as basis sets to build Bloch states, with proper periodic boundary conditions inside the unit cell, determined by the experimental data of the JOVDIH01 structure (crystal axes measuring 7.698 Å, 6.650 Å, and 17.388 Å, within an orthorhombic cell belonging to the Pnma space group) [43]. Cut-offs on the wave function and electronic density were set to 60/600 Ry (1 Ry = 313.8 Kcal/mol) and the first Brillouin cell in the reciprocal space was sampled according to a (4 × 4 × 2) mesh of k points. Calculations were performed spin-restricted by applying a Gaussian smearing of the one-particle energy levels of 0.002 Ry. NMR chemical shifts (CS) were simulated by using the GIPAW approach [19] implemented in QE. For comparison with the experimental NMR data, the absolute isotropic shielding values (σ, ppm) obtained by DFT were transformed into isotropic chemical shifts (δ, ppm) through a linear least-squares fitting, which, for the calculations reported in Table 2, gave the results reported in Table 5.
Conclusions
In this work, the crystalline structure of carbimazole was investigated in depth by a combined solid state NMR-DFT approach, also exploiting previously reported XRD data. The carbimazole sample was subjected to a preliminary screening (by DSC, TGA, and PXRD), which confirmed the equivalence between its crystalline form and the form previously described in the literature and characterized by XRD [42,43]. Afterwards, isotropic 1H and 13C chemical shifts were experimentally determined by high-resolution solid state NMR techniques offering the best accuracy, e.g., using DUMBO-MAS to obtain a well resolved 1H spectrum. The experimental isotropic chemical shifts were quantitatively compared with those calculated by DFT-GIPAW methods for different structures (those reported in the literature as derived directly from XRD data, and those obtained after DFT optimization of the positions of the H atoms only, or of the whole molecule). The agreement achieved for the optimized structure was excellent, with the obtained RMSD values reflecting the state of the art in NMR crystallography (about 1% of the whole spectral range explored by each nucleus, i.e., 0.2 and 2 ppm for 1H and 13C, respectively [48]). The refined structure differs from the XRD structure almost exclusively in the positions of the H atoms, which could be determined very accurately. Importantly, the refined structure was further validated through the analysis of two 2D correlation experiments (1H-13C HETCOR and 1H-1H DQSQ), whose signal intensities were all found to be in excellent (although semi-quantitative) agreement with the strengths of the dipolar couplings predicted from the internuclear distances of the optimized structures. Here, the relevant role of intermolecular dipolar interactions for specific chemical groups (e.g., methyl groups 4 and 7) was clearly observed and highlighted. Deviations from a fully quantitative agreement between 2D signal intensities and calculated dipolar coupling strengths must be at least partially ascribed to vibrational and inter-conformational motions, which can be effectively investigated by solid-state NMR through the measurement of interaction anisotropies and relaxation times [64][65][66][67]. This will be the subject of a future paper.
This study clearly confirms the importance and reliability of NMR crystallography, especially in the pharmaceutical field. To the best of our knowledge, this is the first time that solid-state NMR experiments were performed on carbimazole.
Supplementary Materials: The following are available online, Figure S1: Simulated XRPD spectra of carbimazole, Table S1: RMSD of the atom positions of the optimized structure with respect to the SCXR structure.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 6,400.8 | 2021-07-29T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
Calcium-Dependent Protein Kinase 28 Maintains Potato Photosynthesis and Its Tolerance under Water Deficiency and Osmotic Stress
Calcium-dependent protein kinases (CDPKs) are implicated in signal transduction in eukaryotic organisms. It is largely unknown whether StCDPK28 plays a role in the response to water deficiency and osmotic stress in potato plants (Solanum tuberosum L.). Potato cv. Zihuabai was cultivated under natural, moderate, and severe water deficiency conditions; to induce osmotic stress, potato plants were treated with 10% or 20% PEG. StCDPK28-overexpression and StCDPK28-knockdown plants were constructed. StCDPKs were evaluated by qRT-PCR. The subcellular location of the StCDPK28 protein was observed with confocal scanning laser microscopy. Phenotypic changes were assessed by photosynthetic activity, the contents of H2O2, MDA and proline, and the activities of CAT, SOD and POD. The results showed that water deficiency and osmotic stress altered StCDPK expression patterns. StCDPK28 exhibited membrane, cytosolic and nuclear localization. Water deficiency and osmotic stress induced StCDPK28 upregulation. Photosynthetic activity was enhanced by StCDPK28 overexpression, while it was decreased by StCDPK28 knockdown under water deficiency and osmotic stress. StCDPK28 overexpression decreased H2O2 and MDA and increased proline, while StCDPK28 knockdown showed the reverse, compared with the wild type, in response to water deficiency and osmotic stress. StCDPK28 overexpression increased the activities of CAT, SOD and POD, while StCDPK28-knockdown plants showed the reverse trend under water deficiency and osmotic stress conditions. Regulation of StCDPK28 expression could be a promising approach to improve the tolerance of potato plants to drought or high-salt media.
Introduction
Potato (Solanum tuberosum L.) is a staple and important food for populations in developing countries, particularly in the temperate region of northern China [1]. Unfortunately, potato plants are exposed to several environmental stressors that restrict their growth, development and harvest [2]. Drought is an important adverse factor for plant cultivation, and water deficiency affects the entire life cycle of the potato plant, such as tuber formation [3]. Nowadays, water deficiency is a growing conundrum in the context of global climate change [4]. As another limiting factor, osmotic stress causes a loss of approximately 20-98% of potato production [5]. Consequently, high priority should be given to enhancing the transient drought tolerance of potato plants based on conventional breeding and biotechnology. The development of transgenic potato plants enhances the drought and salt stress tolerance and increases tuber yield [6]. Calcium-dependent protein kinases (CDPK) are identified as crucial modulators for signaling transduction in eukaryotic organisms [7], which can be used as potential targets for biotechnology.
Protein kinases are a class of proteins whose catalytic domains consist of 250-300 amino acids and are responsible for phosphate transfer reactions. Emerging data indicate that plants have evolved protein kinases to respond to stress conditions such as water deficiency and osmotic stress [8][9][10]. CDPKs are characterized by a calmodulin-like domain with EF-hand Ca2+-binding sites, a junction domain, a typical Ser/Thr protein kinase domain, and a variable N-terminal domain [11]. Ca2+ is a second messenger in the molecular regulatory mechanism and it is involved in the activation of CDPKs [12]. Multiple stimuli, such as water deficiency and osmotic stress, induce changes in the cellular Ca2+ concentration and, through the binding of Ca2+ to the calmodulin-like domain, the subsequent activation of CDPKs [13,14]. Downstream targets are then modulated in plants, and numerous physiological processes are changed in response to these stressors [13,14]. In potato plants, 26 StCDPKs have been predicted and classified into four groups (subfamilies I, II, III and IV) [15]. Their functions have been successively elucidated. For example, StCDPK1 has been shown to play a role in tuber transition and sprouting by affecting the vascular system of stems and roots [16].
CDPK28 is one of the calcium-dependent protein kinases that has been extensively studied in Arabidopsis thaliana [17][18][19][20][21][22]. The regulatory function of CDPK28 at the phenotypic level has been increasingly revealed. AtCDPK28 was found to regulate hypocotyl and lignification phenotypes, which is attributed to the phosphorylation of methionine adenosyltransferases that affect lignin deposition and ethylene biosynthesis [17]. Further, CDPK28 is reported to mediate the sodium-chloride- and mannitol-sensitive phenotypes in Arabidopsis [23]. In terms of the molecular mechanism, CDPK28 has been shown to be a high-affinity Ca2+/CaM-binding protein, and the peptide kinase activity of CDPK28 is associated with its autophosphorylation in response to the Ca2+ messenger [18]. In the context of biotic stressors, CDPK28 modulates defense signaling related to reactive oxygen species and shows activity in balancing phytohormones [19]. Ding et al. recently elucidated that plasma-membrane-localized CDPK28 senses cold-induced Ca2+ signals and relays them to the nucleus [24]. However, it is still unknown whether StCDPK28 plays a role in the response to water deficiency and osmotic stress in potato plants.
To explore whether StCDPK28 functions under water deficiency and osmotic stress, we established StCDPK28-overexpressing and StCDPK28-knockdown plants and subjected them to water deficiency and osmotic stress treatments. The results showed that StCDPK28 expression was responsive to water deficiency and osmotic stress, and that StCDPK28 strengthened the physiological activity of potato plants under these stresses.
Expression Profiles of StCDPKs in Potato Plants under Water Deficiency and Osmotic Stress, and Analysis of CDPK28 Protein
Potato plants were cultivated under severe water deficiency and osmotic stress conditions, and StCDPKs were relatively quantified by qRT-PCR. Figures 1 and 2 show that water deficiency and osmotic stress changed the expression patterns of the StCDPKs. It is worth noting that StCDPK28 (reported as StCDPK26 by Frantino et al.) was continuously transcribed with increasing incubation time under water deficiency or osmotic stress conditions. Therefore, StCDPK28 was considered to mediate potato tolerance to water deficiency and osmotic stress. The CDPK28-like protein in potato was predicted to encode a calcium-dependent protein kinase with 564 amino acids (sequence ID: XP_006340738.1). According to protein sequence matches with the basic local alignment search tool (BLAST) (https://blast.ncbi.nlm.nih.gov/Blast.cgi?PROGRAM=blastp&PAGE_TYPE=BlastSearch&LINK_LOC=blasthome, accessed on 27 September 2021), the StCDPK28 protein shows sequence homology to SlCDPK28, NtCDPK28, SpCDPK28, AtCDPK16, AtCDPK18, AtCDPK28, OsCDPK4 and OsCDPK18, as depicted in Figure 3a. The main domain structures of StCDPK28 were mapped with the conserved domains search (CD-Search) tool (https://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi, accessed on 27 September 2021): the active site, ATP binding site, polypeptide substrate binding site, activation loop and EF-hand motif, as well as the STKc_CAMK domain (the catalytic domain of the CAMK family serine/threonine kinases; STKc transfers the γ-phosphoryl group from ATP to serine/threonine residues) and the PTZ00184 superfamily domain (calmodulin) (Figure 3b). These results indicate that StCDPK28 acts as a calcium-dependent protein kinase.
Subcellular Localization of the StCDPK28 Protein and Construction of Potato Plants with Over- or Under-Expression of StCDPK28
To better understand the function of the StCDPK28 protein in the defense against water deficiency and osmotic stress, we investigated its subcellular localization in vivo in Nicotiana tabacum leaves. As shown in Figure 4a, green fluorescence generated by the recombinant pCAM35s-GFP-StCDPK28 construct was strongly observed in the membrane, cytoplasm and nucleus. To determine whether StCDPK28 plays a role in the tolerance to water deficiency and osmotic stress, transgenic plants overexpressing StCDPK28 (Figure 4b) and pCPBI121-miRcdpk28 transformants (Figure 4c) were developed. StCDPK28-overexpressing plants (OE-1 to OE-6) showed relatively strong expression of StCDPK28 (p < 0.05) (Figure 4d), whereas StCDPK28-knockdown plants (KD-1 to KD-6) showed weak expression of StCDPK28 (p < 0.05) (Figure 4e).
StCDPK28 Was Responsive to Water Deficiency and Osmotic Stress in Photosynthesis
For photosynthesis, we considered the net photosynthetic rate, transpiration rate, stomatal conductance and water use efficiency. Increases in net photosynthetic rate and transpiration rate were induced by StCDPK28 overexpression under water deficiency and osmotic stress (p < 0.05), whereas the reverse was observed in StCDPK28-knockdown plants (p < 0.05) (Figure 5a,b). Stomatal conductance increased markedly in StCDPK28-overexpressing plants and dropped in StCDPK28-knockdown plants under water deficiency and osmotic stress (p < 0.05) (Figure 5c). Water use efficiency was likewise increased in StCDPK28-overexpressing plants under water deficiency and osmotic stress (p < 0.05) and decreased in StCDPK28-knockdown plants (p < 0.05) (Figure 5d). Interestingly, there were no obvious changes in the net photosynthetic rate, transpiration rate, stomatal conductance or water use efficiency of StCDPK28-overexpressing and StCDPK28-knockdown plants compared to the non-transgenic (NT) plants under normal conditions (p > 0.05) (Figure 5a-d).
StCDPK28 Modulated the Contents of H 2 O 2 , Malondialdehyde (MDA), and Proline for Tolerance to Water Deficiency and Osmotic Stress
Water deficiency and osmotic stress significantly decreased the H2O2 content in StCDPK28-overexpressing plants relative to the non-transgenic plants (p < 0.05), whereas the H2O2 content in StCDPK28-knockdown plants was not obviously changed under moderate water deficiency and osmotic stress (p > 0.05); only severe water deficiency elevated the H2O2 content in StCDPK28-knockdown plants (p < 0.05) (Figure 6a). MDA, an essential indicator of plant resistance to water deficiency and osmotic stress, was reduced in StCDPK28-overexpressing plants exposed to water deficiency (moderate and severe) and 20% PEG6000 (p < 0.05), whereas an elevation in MDA content was noticed in StCDPK28-knockdown plants (p < 0.05) (Figure 6b). Proline content was evidently increased by StCDPK28 overexpression in response to water deficiency and osmotic stress (p < 0.05), while it decreased in StCDPK28-knockdown plants (p < 0.05) (Figure 6c). Consistent with the photosynthesis results, StCDPK28 overexpression and knockdown had no major effect on the adaptation of potato plants under normal conditions (p > 0.05) (Figure 6a-c).
Catalase (CAT), Superoxide Dismutase (SOD) and Peroxidase (POD) Activities in StCDPK28-Overexpressing and StCDPK28-KnockDown Plants under Water Deficiency and Osmotic Stress
StCDPK28 overexpression increased CAT activity, while StCDPK28 knockdown resulted in a marked decline in CAT activity under moderate and severe water deficiency conditions as well as osmotic stress conditions (10% and 20% PEG) (p < 0.05) (Figure 7a). Similarly, a significant enhancement in SOD activity was detected in potato plants overexpressing StCDPK28 after cultivation under water deficiency or osmotic stress (p < 0.05); on the contrary, a significant decrease in SOD activity was noticed in StCDPK28-knockdown plants (p < 0.05) (Figure 7b). Regarding POD activity, a positive effect of StCDPK28 overexpression (p < 0.05) and a negative effect of StCDPK28 knockdown (p < 0.05) were recorded in response to water deficiency and osmotic stress (Figure 7c). However, neither StCDPK28 overexpression nor knockdown had a significant effect on CAT, SOD or POD activity under normal conditions (p > 0.05) (Figure 7a-c).
Figure 5. StCDPK28 overexpression in potato enhances photosynthesis, transpiration, stomatal conductance and water use efficiency in response to water deficiency and osmotic stress. After the plants were cultivated under water deficiency and osmotic stress for 24 h, (a) net photosynthetic rate, (b) transpiration rate, (c) stomatal conductance, and (d) water use efficiency were evaluated. Data are presented as the means ± standard deviation of nine replicates. The letters indicate significant differences according to one-way ANOVA corrected by Dunnett's test (p < 0.05).

Figure 6. Upregulation of StCDPK28 counteracts the effects of water deficiency and PEG stress on the contents of H2O2, MDA and proline. Contents of (a) H2O2, (b) MDA, and (c) proline were detected 24 h after water deficiency and PEG treatment. Data are presented as the means ± standard deviation of nine replicates. The letters indicate significant differences according to one-way ANOVA corrected by Dunnett's test (p < 0.05).
Sequence Accession Numbers
Sequence information presented in this article can be obtained in the protein libraries using the accession code: StCDPK28, XP_006340738.
Discussion
Plant growth is influenced by multiple exogenous factors, and this influence is strongly linked to a sophisticated interplay of stress-associated kinases [25,26]. In the present study, we provided experimental evidence that water deficiency and osmotic stress altered the expression patterns of StCDPKs, and StCDPK28 was identified as a positive modulator of plant stress responses. Water deficiency and osmotic stress induced StCDPK28 expression in potato plants. StCDPK28 conferred no significant effects on physiological activities under normal conditions; however, under stress it acted as a critical modulatory kinase implicated in the water deficiency and osmotic stress response by enhancing physiological activities.
Phylogenetic analysis revealed that CDPK28 of potato is orthologous to SlCDPK28, NtCDPK28, SpCDPK28, AtCDPK16, AtCDPK18, AtCDPK28, OsCDPK4 and OsCDPK18, and StCDPK28 was identified via sequence alignment in this study. In addition, transient expression analysis revealed that StCDPK28 exhibited membrane, cytosolic and nuclear localization. Studies have reported that potato plants establish their defense mechanisms by changing their CDPK expression levels in response to environmental stress, including biotic and abiotic factors [15,27]. Phytophthora infestans causes systemic StCDPK7 overexpression in potato leaves [15]. StCDPK2 expression is highly induced by light treatment, which is associated with its light-responsive, cis-acting elements [27]. Our work showed that StCDPK28 displayed increased expression in potato leaves in response to water deficiency and osmotic stress. Furthermore, several studies have reported that CDPK28 kinase activity strictly depends on calcium, which modulates CDPK28 phosphorylation [18,20]. Together, the upregulation of StCDPK28 by water deficiency and osmotic stress might function in the response to adverse conditions, in which the Ca2+ messenger is involved.
Under normal circumstances, CDPK28 modulates phenotypic changes in lignin deposition [17], morphology [19], plant stem elongation and vascular development [20], as well as regulating the immune signaling [21,22] of Arabidopsis thaliana. In this study, we observed that under normal conditions the photosynthetic activity of StCDPK28-overexpressing or StCDPK28-knockdown plants was not significantly changed, nor were the contents of H2O2, MDA and proline or the activities of CAT, SOD and POD. A mechanism previously described for CDPK28, in which autophosphorylation contributes to constitutive kinase activity, may also apply to StCDPK28 [18]. The activity of Ca2+-responsive StCDPK28 might not be changed in StCDPK28-overexpressing or StCDPK28-knockdown potato plants because the physiological concentration of Ca2+ remains unchanged. However, complementary experiments are required to prove the precise mechanism.
Interestingly, compared with wild-type plants after water deficiency and osmotic stress treatment, transgenic lines with four- to five-fold higher StCDPK28 expression levels showed an obvious overexpression phenotype, including increased photosynthetic activity, altered contents of H2O2, MDA and proline, and enhanced activities of CAT, SOD and POD. Likewise, physiological changes indicative of loss of function were observed in the plant lines that expressed low levels of StCDPK28 in response to water deficiency and osmotic stress. Calcium is one of the well-documented signaling components regulating the abiotic and biotic stress signaling mechanisms in plants [26]. Calcium-dependent protein kinases dominate the calcium-dependent stress signaling mechanisms in response to drought, osmotic and other environmental stressors [28]. Drought decreases water potential and photosynthetic activity [29]. In addition, drought induces changes such as abscisic-acid-dependent stomatal movement, osmotic adjustment and oxidative damage [30]. Water deficiency provokes changes in the cytosolic Ca2+ concentration and thereby activates CDPKs, which induce the release of abscisic acid [31]. Hence, our results suggest that StCDPK28 might be upregulated in potato to maintain physiological properties in response to water deficiency and osmotic stress.
Plasmid Construction and Transformation
Whole RNA from the leaves of potato cv. Zihuabai was extracted using the TRIzol RNA Extraction Kit (Invitrogen, Carlsbad, CA, USA). The cDNA was synthesized using the First-Strand cDNA Synthesis Kit (TransGen Biotech, Beijing, China) according to the user's manual. The generated cDNA was used as the template for StCDPK28 (GenBank Accession No. XM_006340676.2) amplification with specific primers (forward primer 5′-GCTCTAGACCTTTCTTCCTCCTCCTATTTC-3′ carrying an Xba I site [5′-GCTCTAGA-3′] and reverse primer 5′-CGGGATCCGCTGGTTGTTTAGGAGAAAGC-3′ carrying a BamH I site), and the StCDPK28 sequence was then inserted into the pCPB plasmid (pCPB-StCDPK28). A microRNA sequence targeting StCDPK28 mRNA was generated with the designed primers (miR-s 5′-GATATTGTAATGAGACCCCGCTTTCTCTCTTTTGTATTCC-3′, miR-a 5′-GAAAGCGGGGTCTCATTACAATATCAAAGAGAATCAATGA-3′, miR-s* 5′-GAAAACGGGGTCTCAATACAATTTCACAGGTCGTGATATG-3′, and miR-a* 5′-GAAATTGTATTGAGACCCCGTTTTCTACATATATATTCCT-3′), and the generated miR-cdpk28 sequence (5′-TATTGTAATGAGACCCCGCTT-3′) was ligated into the pCPBI121 plasmid (pCPBI121-miRcdpk28), with reference to Zhou's description [32]. Agrobacterium tumefaciens LBA4404 was transformed with pCPB-StCDPK28 or pCPBI121-miRcdpk28, and transgenic potato plants were developed following the methods of Si et al. [33]. In short, potato tuber discs with a diameter of 0.5 cm and a thickness of 2 mm were infected with the LBA4404 inoculum for 10 min, followed by cultivation in MS medium containing 3% sucrose under an 8-h dark period (15 °C) and a 16-h photoperiod (22 °C, 3500 lx) in a Biotron at 22 °C. After 4 weeks, transgenic plants were selected by incubation with 75 mg/L kanamycin and confirmed by PCR, as described below.
Water Deficiency and Osmotic Stress Treatment
To analyze the effects of water deficiency and osmotic stress on StCDPK expression, 4-week-old potato plantlets were continually cultivated in MS medium supplemented with 8% sucrose in the dark. After 30 days, the generated potato tubers were collected, and the sprouting tubers were transferred into pots (height: 11.2 cm, width: 16.8 cm) filled with soil and vermiculite (1:1, v/v). The sprouting tubers were cultured for 30 days. For the water deficiency treatment, the plants were cultivated without irrigation, and the leaves were analyzed on days 0, 1, 2, 3, 4, 5 and 6 for StCDPK expression by qRT-PCR; for osmotic stress, StCDPK expression was analyzed in leaves of 20% PEG-treated potato plants collected 0, 1, 2, 4, 8, 16 and 24 h after treatment. The sprouting tubers generated from wild-type and transgenic potato plantlets (4 weeks old) were cultured for 30 days and then allocated to one of the water deficiency treatments: control (75-80% soil water content), moderate water deficiency (55-60% soil water content) or severe water deficiency (30-35% soil water content). For PEG treatment, potato plants were treated with or without PEG (10% or 20%). Potato plants were subjected to downstream analysis 24 h after treatment.
Validation of the Transgenic Potato Plants by PCR
The cDNA from the transgenic and non-transgenic plants was subjected to a PCR assay for the NPTII marker. PCR was carried out using the TaKaRa Ex Taq Kit (Takara, Tokyo, Japan) according to the supplier's instructions. PCR conditions were 30 cycles of denaturation for 30 s at 94 °C, annealing for 30 s at 55 °C and extension for 1 min at 72 °C. The constructed plasmids (pCPB-StCDPK28 and pCPBI121-miRcdpk28) were assayed for NPTII and served as positive controls (P). Potato plants without transfection served as the control (CK), and ddH2O served as the negative control (N). The primer sequences were as follows: NPTII forward primer 5′-GCTATGACTGGGCACAACAG-3′ and reverse primer 5′-ATACCGTAAAGCACGAGGAA-3′.
Quantification of Gene Expression by qRT-PCR
To investigate the relative mRNA levels of StCDPKs, cDNA generated from the whole RNA of plant leaves was used for qRT-PCR analysis. Each qPCR reaction mixture consisted of 100 ng of cDNA, 0.6 µL of specific primers (10 µM), 10 µL of 2× SuperReal PreMix Plus and 0.4 µL of 50× ROX Reference Dye (Tiangen Biotech, Beijing, China) in a final volume of 20 µL. The thermal profile on the ABI3000 system (Applied Biosystems, Foster City, CA, USA) was as follows: initial denaturation for 2 min at 94 °C, followed by 40 cycles of denaturation for 30 s at 94 °C, annealing for 34 s at 60 °C and extension for 30 s at 72 °C. Cycle threshold (CT) values were obtained and relative mRNA levels were calculated with the 2^(−ΔΔCt) method. Each experiment was carried out with three technical and three biological replicates. StEf1a was used as the internal control. Primer sequences were taken from our previous study [34].
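For clarity, the relative expression calculation can be written out explicitly. The sketch below is a minimal illustration of the 2^(−ΔΔCt) method; the CT values are hypothetical placeholders, not measurements from this study.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA level by the 2^-ddCt method.

    ct_target / ct_ref: CT of the gene of interest and of the internal
    control (here StEf1a) in the treated sample.
    ct_target_ctrl / ct_ref_ctrl: the same CTs in the untreated control.
    """
    d_ct_treated = ct_target - ct_ref            # normalize to StEf1a
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treated - d_ct_control          # relative to control sample
    return 2.0 ** (-dd_ct)

# Hypothetical CT values for illustration only
print(relative_expression(22.1, 18.0, 24.3, 18.1))  # ~4.3-fold induction
```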
Subcellular Location of StCDPK28
To investigate the subcellular localization of the StCDPK28 protein, the full-length StCDPK28 sequence was ligated into pCAM35s-GFP. The plasmid was provided by Bioediates (Shanxi, China). The empty vector (pCAM35s-GFP) served as a localization control. For agroinfiltration, a mixture of GV3101 supernatant (OD600 = 0.4, 125 µL) and infiltration medium (375 µL) was used to infiltrate three-week-old Nicotiana tabacum plants according to a previous method [35]. The agroinfiltrated plants were kept at 24 °C for 2 days in the dark. Segments (2 cm²) were excised from the infiltrated zone of the leaf tissue and observed under a Leica confocal microscope (Leica, Wetzlar, Germany).
Photosynthesis, Transpiration, Stomatal Conductance and Water Use Efficiency
To determine photosynthesis and transpiration, a portable photosynthesis system LI-6400XT (LI-COR, Lincoln, NE, USA) was used. Measurements were performed on the second fully expanded leaf counted from the apex. The light response was examined at an irradiance of 1500 µmol·m⁻²·s⁻¹, and the CO2 concentration was controlled by the instrument at 400 µmol·mol⁻¹. Water use efficiency was defined as the ratio of the net photosynthetic rate to the transpiration rate.
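As a small worked example of the definition above (the numbers are illustrative, not measurements from this study), instantaneous water use efficiency is simply the ratio of the two gas-exchange quantities:

```python
net_photosynthesis = 12.4   # µmol CO2 m^-2 s^-1 (illustrative value)
transpiration      = 3.1    # mmol H2O m^-2 s^-1 (illustrative value)

water_use_efficiency = net_photosynthesis / transpiration
print(f"WUE = {water_use_efficiency:.2f} µmol CO2 / mmol H2O")  # 4.00
```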
CAT, SOD and POD Activity
The fifth or sixth leaf below the growing point was selected for the determination of CAT, SOD and POD activity. Fresh leaves (0.5 g) were ground in pre-cooled potassium phosphate buffer (pH 7.0) and transferred into a 25 mL volumetric flask. The extract was kept at 5 °C for 10 min and then centrifuged at 4000 rpm for 15 min, after which the supernatant was collected. CAT, SOD and POD activity was determined as described previously [36] and as presented in our previous study [34].
H 2 O 2 , MDA and Proline Contents
H2O2 content was determined according to a previous method [37] with minor modifications. In brief, 0.5 g of leaves was extracted with 5 mL of 0.1% (w/v) TCA, and the extract was centrifuged (12,000 rpm, 15 min). The supernatant (0.5 mL) was then mixed with 1 mol/L KI and 0.5 mL of potassium phosphate buffer (10 mM, pH 7.0), and the absorbance was measured spectrophotometrically at 390 nm.
MDA content was determined following Heath's method [38] with minor modifications. In short, 0.2 g of fresh leaves was extracted with 5 mL of 10% TCA. The extract was centrifuged at 4000× g for 10 min and the supernatant was collected. The supernatant (2 mL) was mixed with 2 mL of 0.6% TBA prepared in 10% TCA and incubated at 100 °C for 15 min. After centrifugation (3500 rpm, 10 min), the absorbance at 532 nm, 600 nm and 450 nm was determined.
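The three-wavelength readings are typically converted to an MDA concentration with a correction formula; the exact coefficients used by the authors are not stated here, so the sketch below assumes the widely used three-wavelength equation of the TBA method, which may differ from the variant actually applied in this study.

```python
def mda_umol_per_l(a532, a600, a450):
    """Estimate MDA concentration (µmol/L) from TBA-assay absorbances.

    Assumes the commonly used correction
    MDA = 6.45*(A532 - A600) - 0.56*A450; the original study may have
    applied a different variant of the formula.
    """
    return 6.45 * (a532 - a600) - 0.56 * a450

print(mda_umol_per_l(0.420, 0.035, 0.180))  # ~2.38 µmol/L (illustrative)
```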
Proline content was determined according to the method of Bates [39] with minor modifications. Briefly, potato leaves (0.2 g) were homogenized in 5 mL of 3% sulfosalicylic acid and the mixture was kept in a boiling water bath for 10 min. After cooling, 2 mL of the supernatant was mixed with 3 mL of 2.5% ninhydrin and 2 mL of acetic acid. The color reaction was continued for 40 min in a boiling water bath. The product was extracted with toluene, and the absorbance was determined at 520 nm.
Statistical Analysis
Each experiment was conducted with three biological and three technical replicates. Statistical analysis was performed using IBM SPSS 19.0 statistical software (IBM, Chicago, IL, USA). Multiple comparisons were carried out using one-way ANOVA with Dunnett's correction. Data are presented as the mean ± standard deviation, and p-values less than 0.05 were considered statistically significant.
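A minimal sketch of the comparison scheme described above is given below; it assumes SciPy ≥ 1.11 (which provides scipy.stats.dunnett), and the arrays are placeholder replicate values, not the measured data of this study.

```python
import numpy as np
from scipy import stats

# Placeholder replicate measurements (e.g., proline content) for the
# non-transgenic control (NT), an overexpression line and a knockdown line.
nt = np.array([1.02, 0.98, 1.05, 0.99, 1.01, 1.00, 1.03, 0.97, 1.02])
oe = np.array([1.41, 1.38, 1.45, 1.36, 1.42, 1.40, 1.39, 1.44, 1.37])
kd = np.array([0.71, 0.69, 0.74, 0.70, 0.72, 0.68, 0.73, 0.70, 0.71])

# One-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(nt, oe, kd)

# Dunnett's test: each transgenic line compared against the NT control
dunnett_res = stats.dunnett(oe, kd, control=nt)
print(p_anova, dunnett_res.pvalue)  # significance threshold p < 0.05
```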
Conclusions
Taken together, the altered expression profiles of the StCDPKs suggest that StCDPKs are involved in the response of the potato plant to water deficiency and osmotic stress. StCDPK28, a calcium-dependent protein kinase, showed water-deficiency- and osmotic-stress-responsive features. In potato plants, StCDPK28 was localized in the membrane, cytoplasm and nucleus. This study demonstrated the modulatory roles of StCDPK28 in photosynthetic and physiological activities in response to water deficiency and osmotic stress. Further work will be required to investigate the downstream targets of StCDPK28, which may decipher the underlying mechanisms whereby StCDPK28 mediates the responsiveness of potato plants to water deficiency and osmotic stress. Regulation of StCDPK28 expression could be a promising approach to improve the tolerance of potato plants to water deficiency or high-salt media. The findings may provide a framework for implementing water-saving deficit irrigation under different local soil and climatic conditions.

Data Availability Statement: The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
"Agricultural And Food Sciences",
"Biology"
] |
Review: Polymeric-Based 3D Printing for Tissue Engineering
Three-dimensional (3D) printing, also referred to as additive manufacturing, is a technology that allows for customized fabrication through computer-aided design. 3D printing has many advantages in the fabrication of tissue engineering scaffolds, including fast fabrication, high precision, and customized production. Suitable scaffolds can be designed and custom-made based on medical images such as those obtained from computed tomography. Many 3D printing methods have been employed for tissue engineering. There are advantages and limitations for each method. Future areas of interest and progress are the development of new 3D printing platforms, scaffold design software, and materials for tissue engineering applications.
Introduction
Three-dimensional (3D) printing is a commonly used term that is often considered synonymous with additive manufacturing. 3D printing has drawn a lot of public attention, especially for its use in medical research. Additive manufacturing refers to a group of techniques that can generate a model with reduced waste and higher energy efficiency compared to those of conventional fabrication methods. The ability to create a 3D structure in a green and sustainable way through the use of 3D printing has taken fabrication techniques to a new level.
Currently, 3D printing technology can be used for tissue regeneration purposes. In the past two decades, increasing attention has been given to tissue engineering. With tissue and organ regeneration, the hurdles of traditional therapeutic methods may be overcome by autologous transplantation. As these technologies gain acceptance, the shortage of donor organs or chronic rejection of transplants may no longer be a problem.
The goal of tissue engineering is to create tissue or organ replacement strategies. Scaffolds play an important role in tissue engineering. They serve as templates for cell adhesion and the recruitment of cells to infiltrate deep into a defect site. Moreover, scaffolds can provide mechanical supports during tissue regeneration. With biomimetic scaffolds, researchers attempt to create an environment close to the natural extracellular matrix (ECM) of that organ, in which cells could be guided to create a new tissue with appropriate function.
Conventional scaffold fabrication methods include solvent casting and particulate leaching [1,2], fiber spinning [3], emulsion freeze drying [4], and phase separation [5]. Polymer-based scaffolds can then be acquired. These methods have been studied extensively [6][7][8][9]. Various polymers have been crafted into scaffolds using these methods and tested. Although conventional scaffold fabrication techniques have been improved, the physical properties of scaffolds fabricated by these methods still have limitations (i.e., controlling scaffold pore size, geometry, and porosity). Moreover, it is difficult to control the shape and dimension of scaffolds using these methods.
Unlike conventional scaffold fabrication techniques, which are highly process-dependent, additive manufacturing is design-dependent for scaffold fabrication. The size, geometry, and porosity can be precisely controlled during additive manufacturing to a patient's specification. In addition, scaffolds made using additive manufacturing techniques are highly reproducible. More importantly, a custom-made scaffold with specified dimensions and geometry can be prepared. When applying a reasonable design, the cell-cell interaction and cell-ECM interaction can be manipulated. Scaffold design can be performed easily with computer-aided design [10]. By adjusting the parameters of manufacturing, tissue engineering scaffolds can be made to fit different purposes.
Fused Deposition Modeling
Various additive manufacturing techniques have been applied in tissue engineering. They can be categorized into two large groups according to the power source used during fabrication, namely heat or light. Fused deposition modeling (FDM) is a typical heat-based technique for 3D scaffold fabrication. A scheme of FDM is shown in Fig. 1. In this method, a filament of the desired material is fed into a liquefier and melted by heat before extrusion from the nozzle. The melted polymer is extruded from the nozzle and deposited layer by layer to create a scaffold. The process temperature depends on the melting temperature of the building material and is generally too high for cells to survive or for bioactive molecules to retain their activity. Zein et al. [11] fabricated a honeycomb-structured polycaprolactone (PCL) scaffold with a channel size of 160-700 µm, a filament diameter of 260-370 µm, and a porosity of 48-77 %. The working temperature was determined as 125 ± 5 °C, which is considered a relatively narrow process window for polymer processing. Hsu et al. used poly(D,L-lactide) (PLA) as the feed material. Scaffolds with various fiber stacking orientations were produced and examined [12]. They also fabricated and tested scaffolds with concentric cylinder geometry (with interconnected hollows). Furthermore, collagen was placed in a poly(D,L-lactide-co-glycolide) (PLGA) scaffold to promote chondrocyte growth [13].
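As a rough illustration of how laydown geometry maps to porosity in such filament-deposited scaffolds, the sketch below uses an idealized 0/90° woodpile model (cylindrical struts, square spacing, layer height equal to the strut diameter). It is a geometric approximation with illustrative numbers, not the model used in the cited studies.

```python
import math

def woodpile_porosity(filament_diameter_um, center_spacing_um):
    """Porosity of an idealized 0/90-degree laydown scaffold.

    Each layer is a row of parallel cylindrical struts with the given
    center-to-center spacing; layer height equals the strut diameter, so
    solid fraction per layer = strut cross-section / (spacing * diameter).
    Overlap at strut crossings is ignored in this estimate.
    """
    d, s = filament_diameter_um, center_spacing_um
    solid_fraction = (math.pi * d**2 / 4.0) / (s * d)   # = pi*d/(4*s)
    return 1.0 - solid_fraction

# Example: 300 µm filaments at 800 µm spacing -> roughly 71 % porosity
print(f"{woodpile_porosity(300, 800):.2f}")
```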
Liquid Frozen Deposition Manufacturing
There are some drawbacks to FDM. During the process, the use of heat as the power source to melt the material can have undesired effects: the operating temperature of the system is too high for cells and other biomolecules. With this limitation, cells can hardly be printed together with the material to form a cell-containing scaffold, and it is also difficult to incorporate biomolecules such as growth factors into the scaffold. To overcome the limitations associated with FDM, a lower-temperature process, called liquid frozen deposition manufacturing (LFDM), was developed. A scheme of LFDM is shown in Fig. 2. A low-temperature platform/chamber is required for the process.
LFDM involves low temperatures during processing. Natural (e.g., chitosan) and synthetic (e.g., PLGA) scaffolds were made with LFDM from polymer solutions by Hsu et al. for various applications [14][15][16]. Chitosan dissolved in acetic acid was printed and freeze-dried [14]. PLGA scaffolds were fabricated from a PLGA solution in an organic solvent (1,4-dioxane) using LFDM. The surface pore size of each of the stacked fibers was controlled by adjusting the concentration of the PLGA solution in the organic solvent: the pore size decreased from 2-3 µm to <1 µm as the concentration of the PLGA solution was increased from 15 to 25 %. These scaffolds were found to promote the secretion of ECM from chondrocytes, which formed natural lacunae [15]. PLGA scaffolds were combined with alginate gel for the chondrogenesis of mesenchymal stem cells (MSCs) [16]. More recently, Hung et al. [17] developed a water-based system for printing polyurethane scaffolds, in which the organic solvent was replaced by water. Moreover, Xiong et al. [18] manufactured poly(L-lactic acid) (PLLA)/tricalcium phosphate composite scaffolds for bone tissue engineering. LFDM is considered a more efficient procedure since it does not require heating. However, because LFDM normally requires freeze-drying after fabrication, cells cannot be printed with the materials during the process. Although cells cannot be directly printed, bioactive compounds or biomolecules could be incorporated into the scaffold during the process [19].
Stereolithography
Stereolithography (SLA) employs a single-beam laser to polymerize or crosslink a photopolymer resin. A scheme of SLA is shown in Fig. 3. By drawing on the liquid photopolymer resin with a light beam, thin layers of polymer are stacked layer by layer. A mixture of diethyl fumarate (DEF) and poly(propylene fumarate) was used by Cooke et al. [20] to fabricate a scaffold. An 80-layer scaffold with a 4-mm thickness was fabricated using SLA. Holes and slots of various sizes, as well as protrusions, were made on the scaffold, demonstrating the ability of SLA to build scaffolds with various geometries. Melchels et al. [21] prepared a mathematically defined scaffold. The porous scaffold was built with two kinds of resin, either a PLA-based resin or a poly(D,L-lactide-co-ε-caprolactone)-based resin. By changing the pore size, resin selection, and pore architecture, the mechanical properties of the scaffold can be manipulated. Flexible and elastic materials can also be crafted into scaffolds via SLA. Schüller-Ravoo et al. used a poly(trimethylene carbonate)-based resin to build scaffolds for cartilage tissue engineering [22]. When the scaffolds were seeded with bovine chondrocytes, glycosaminoglycans and fibrillar collagens were deposited after 6 weeks of culture, and the resulting constructs presented a 50 % increase in compressive modulus.
In addition to stiff resins, hydrogels may be rendered as scaffolds through SLA processes [23][24][25]. When using a hydrogel as the building material, the temperature is generally low enough for cells to survive, which makes it possible to encapsulate cells during scaffold fabrication. Dhariwala et al. [23] used a photopolymerizable hydrogel as the building material: poly(ethylene oxide) (PEO) and poly(ethylene glycol) dimethacrylate were mixed and used in the study. The resulting hydrogels did not have a high elastic modulus; however, their mechanical properties were comparable to those of soft tissues (e.g., breast tissue). Furthermore, Chinese hamster ovary cells were successfully encapsulated in these hydrogel scaffolds, suggesting that hydrogels may be used to encapsulate cells while maintaining cell viability. A PEO/poly(ethylene glycol) diacrylate (PEGDA) hydrogel was used to build scaffolds by Chan et al. [24]. The elastic moduli varied from 4.73 ± 0.46 to 503 ± 57 kPa, depending on the molecular weight of the PEGDA used in the hydrogels. With a wider range of elastic moduli, the hydrogels have more possibilities for various applications. NIH/3T3 cells were encapsulated in the hydrogel and retained long-term viability, an important step for SLA in cell encapsulation. Seck et al. [25] produced hydrogel structures with SLA using poly(ethylene glycol)/PDLA-based resins. Both porous and non-porous structures were prepared. The pore size of the porous structure ranged from 387 to 558 µm, with an average size of 423 µm. Based on micro-computed tomography (µCT) data, a porosity of 52 % was determined, while the porosity of the designed architecture was 55 %. SLA processes have been used to render the internal and external morphology of scaffolds with high accuracy, and have the ability to build structures as designed. For a patient-specific tissue, Du et al. [26] constructed a viable artificial bone substitute with SLA through a series of manufacturing processes. With the use of µCT images, the constructs had the correct external shape and optimized internal channels.
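Porosity figures like the 52 % value above are typically computed from segmented µCT voxel data. A minimal sketch of that calculation is shown below, assuming the scan has already been binarized into a solid/void array; the toy volume is synthetic, not data from the cited study.

```python
import numpy as np

def porosity_from_voxels(binary_volume):
    """Porosity of a scaffold from a binarized µCT volume.

    binary_volume: 3D numpy array, 1 = solid material, 0 = pore space,
    already cropped to the region of interest.
    """
    solid_fraction = binary_volume.mean()
    return 1.0 - solid_fraction

# Toy example: a random volume with ~55 % void space
rng = np.random.default_rng(0)
volume = (rng.random((64, 64, 64)) > 0.55).astype(np.uint8)
print(f"porosity = {porosity_from_voxels(volume):.2f}")  # ~0.55
```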
Digital Light Processing
Digital light processing (DLP) 3D printing uses a laser to cure a polymer. A scheme of DLP is shown in Fig. 4. Compared to SLA, which is a bottom-up process, DLP is a top-down process and is relatively faster. During the process, a digital mirror device (DMD) is used to control the curing laser beam. The DMD has an array of micro-mirrors, which can rotate independently to switch the beam to an on or off state. With the use of a DMD, an entire layer can be cured at once, which makes DLP faster than the conventional SLA process. For tissue engineering, PEGDA hydrogel scaffolds were fabricated by Lu et al. [27] via DLP. In their study, murine-bone-marrow-derived cells were successfully encapsulated in the construct. A complex porous scaffold was fabricated by Gauvin et al. [28]. This hydrogel scaffold used gelatin methacrylate (GelMA) as the building material. By varying the structure and the prepolymer concentration, the mechanical properties of the scaffolds can be tuned. Furthermore, the interconnected pores allow for a uniform distribution of human umbilical vein endothelial cells (HUVECs). As a result, scaffolds with high cell density and homogeneous cell distribution can be generated at the end of the culture period.
Selective Laser Sintering
Selective laser sintering (SLS) is another technique commonly used in scaffold fabrication (as shown in Fig. 5) [33]. Chondrocytes were seeded in collagen and further loaded into the scaffold. Studies on pore geometry and distribution were performed. The results showed that a customized, designed scaffold could be made with the combination of these technologies for cartilage tissue engineering. Regarding the starting materials for the SLS process, a Ca-P/poly(hydroxybutyrate-co-hydroxyvalerate) nanocomposite material was used by Duan and Wang to fabricate microspheres [34]. Normally, bioactive molecules are not able to retain their activity after the SLS process. These microspheres can encapsulate proteins and are suitable for SLS processes to build scaffolds for tissue regeneration. Although the encapsulation efficiency was only 24.51 ± 0.60 %, this study demonstrated the potential of incorporating biomolecules within the materials used for SLS scaffold fabrication. A summary of the advantages and disadvantages of various 3D printing techniques is shown in Table 1.
Other Techniques
There are still many techniques in the field of additive manufacturing that remain to be explored for their use in tissue engineering. Compared to the techniques introduced above, some methods have higher resolution, allowing smaller line widths in the fabricated scaffold. Some processes are suitable for "printing" a scaffold and cells at the same time, or for printing cells directly as materials that are fused layer upon layer during scaffold rendering. With these kinds of techniques, cell-containing scaffolds can be fabricated. Kolesky et al. [35] printed perfusable channels, 45-500 µm in diameter, with a custom-built 3D printer (ABG 10000, Aerotech Inc., Pittsburgh, PA). They used cell-laden GelMA and Pluronic F127 to print a heterogeneous tissue construct with perfusable channels as vasculature. Billiet et al. [36] fabricated hydrogel scaffolds using a bioplotter (EnvisionTEC, Gladbeck, Germany); the cells were printed with the scaffold during the process. A scaffold-free system was introduced by Norotte et al. using an in-house-built bioprinter [37]. Multicellular spheroids and cellular cylinders were used as the building blocks, leaving channels for vascular tissue engineering.
Challenges and Prospects
Additive manufacturing has many advantages, but many challenges remain to be overcome. Firstly, the materials usable in additive manufacturing are limited to those compatible with each technique, and few materials can be used in more than one 3D printing modality. Incorporating bioactive molecules is another challenge. Bioactive molecules may be sensitive to the printing environment: if the printing process involves a solvent or extreme temperature, the folding of proteins can be affected or the proteins can be denatured. Methods suitable for bioactive molecule incorporation in 3D implants are limited. Moreover, the biocompatibility of scaffolds produced by successful but novel fabrication techniques must be evaluated. Given the limited number of commercially available materials, it may be challenging to control degradation, mechanical properties, pore size, and surface properties. These topics are discussed below.

Control of the degradation rate is important for scaffolds used for tissue regeneration. The degradation rate should be tuned carefully to synchronize with the regeneration rate of the neotissue. For instance, poly(α-hydroxy esters) have been used to create scaffolds for a variety of biomedical applications [1, 3-5, 7, 38-40]. The degradation rate of these scaffolds strongly depends on the size and geometry of the product [38,39]. The resolution of additive manufacturing techniques varies. Overall porosity and pore interconnectivity also affect the degradation rate [40]. When designing a scaffold, these parameters should be taken into consideration. Scaffold degradation byproducts have been studied for most of the materials used in tissue engineering, and most have good biocompatibility. However, fast-degrading polymers may cause an inflammatory response in vivo. Based on the degradation profile and degradation byproducts, the biocompatibility of materials should be evaluated as part of the scaffold design.
Since the function of a scaffold is to provide a biomimetic environment for cell attachment, proliferation and extracellular matrix secretion, suitable mechanical properties (e.g., similar to those of natural tissue) are important for 3D printed scaffolds. This would help cells maintain their phenotype and could induce the correct matrix secretion for the neotissue. Huang et al. designed a 3D environment for the maintenance of the spheroid morphology of MSCs [41]. It should be noted that 3D printing can sometimes produce scaffolds that are stiffer than those that can be fabricated using conventional methods. Although hydrogels are used to fabricate scaffolds, their mechanical strength may be insufficient. To improve the strength of hydrogel scaffolds, Wüst et al. developed a special hydrogel composite [42]. They used a two-step gelation process to make a mixture of alginate and gelatin hydrogel. Furthermore, hydroxyapatite was added to the hydrogel at various ratios to provide a mechanically tunable construct.
In addition to the mechanical properties, the microenvironment varies with tissue. Scaffold pore size requirements vary between different tissues and organs. In conventional scaffold fabrication, the control of the pore size strongly depends on the process [5,[43][44][45]. For instance, scaffolds fabricated from solution freeze-drying depend on the concentration of the solution and the size of the ice crystal [43,44]. The advantage of additive manufacturing is the ability to accurately and precisely control the pore size and geometry [11,13,21,25,31,32]. By adjusting fabrication parameters, various pore sizes may be easily achieved. However, with pore size control being possible, it is equally important to determine the optimal pore size needed for the regenerative process. New studies on pore size are needed.
Surface properties are another critical parameter for tissue engineering scaffolds. Surface properties include topography, hydrophobicity, and roughness. These surface features are important in cell-scaffold interactions, as they affect how cells respond to the scaffold. For example, the surface of a scaffold from an SLS process is usually excessively rough. Although a rougher surface may increase cell attachment, overly sharp features may damage cells. The surface of a scaffold fabricated using the FDM method may be smooth and more suitable for cells. Yen et al. demonstrated that LFDM scaffolds with a rough surface (1-2 µm pores) may benefit the proliferation of attached chondrocytes [15]. The hydrophobicity of the scaffold surface can also be modified, for example by plasma treatment [14], which reduced hydrophobicity and thereby enhanced cell seeding: after plasma treatment, cells could be seeded in the scaffolds more easily. Finally, direct organ fabrication is the ultimate goal of additive manufacturing in tissue engineering. There is the possibility of printing a complete organ that could be directly transplanted into the human body. In this situation, the patterning of cells and materials in a printed scaffold would need to be carefully designed.
In conclusion, recent developments in tissue engineering include various new approaches for creating 3D scaffolds. Compared to conventional fabrication methods, additive methods allow scaffolds to be made quickly and accurately. Moreover, this technology could lead to custom-made scaffolds for patients. Further developments in additive manufacturing for tissue engineering will require new biomaterials, scaffold design optimization, and better knowledge of cell and organ physiology.
"Materials Science"
] |
A Late-fusion Approach to Community Detection in Attributed Networks
The majority of research on community detection in attributed networks follows an "early fusion" approach, in which the structural and attribute information about the network is integrated as the guide to community detection. In this paper, we propose an approach called late fusion, which looks at this problem from a different perspective. We first exploit the network structure and node attributes separately to produce two different partitionings. Later on, we combine these two sets of communities via a fusion algorithm, where we introduce a parameter for weighting the importance given to each type of information: node connections and attribute values. Extensive experiments on various real and synthetic networks show that our late-fusion approach can improve detection accuracy over using only the network structure. Moreover, our approach runs significantly faster than other attributed community detection algorithms, including early-fusion ones.
Introduction
In many modern applications, data is represented in the form of relationships between nodes forming a network, or interchangeably a graph. A typical characteristic of these real networks is the community structure, where network nodes can be grouped into densely connected modules called communities. Community identification is an important issue because it can help to understand the network structure and leads to many substantial applications [6]. While traditional community detection methods focus on the network topology where communities can be defined as sets of nodes densely connected internally, recently, increasing attention has been paid to the attributes associated with the nodes in order to take into account homophily effects, and several works have been devoted to community detection in attributed networks. The aim of such process is to obtain a partitioning of the nodes where vertices belonging to the same subgroup are densely connected and homogeneous in terms of attribute values.
In this paper, we propose a new method for community detection in attributed networks, called late fusion. This is a two-step approach: we first identify two sets of communities based on the network topology and the node attributes respectively, and then merge them to produce the final partitioning of the network, which exhibits the homophily effect, according to which linked nodes are more likely to share the same attribute values. The communities based on the network topology are obtained by simply applying an existing algorithm such as Louvain [2]. For graphs whose node attributes are numeric, we utilize existing clustering algorithms to obtain the communities (i.e., clusters) based on node attributes. We extend this to binary-attributed graphs by generating a virtual graph from the attribute similarities between the nodes and performing traditional community detection on the virtual graph. Despite its simplicity, extensive experiments have shown that our late-fusion method is competitive in terms of both accuracy and efficiency when compared against other algorithms. Our main contributions in this work are: 1. A new late-fusion approach to community detection in attributed networks, which allows the use of traditional methods as well as the integration of personal preference or prior knowledge. 2. A novel method to identify communities that reflect attribute similarity for networks with binary attributes. 3. Extensive experiments to validate the proposed method in terms of accuracy and efficiency.
The rest of the paper is organized as follows: In Section 2, we provide a brief review of community detection algorithms suited for attributed networks, next we present our late fusion approach in Section 3. Experiments to illustrate the effectiveness of the proposed method are detailed in Section 4. Finally, we summarize our work and point out several future directions in Section 5.
Related Work
How to incorporate node attribute information into the process of network community detection has been studied for a long time. One of the early ideas is to transform attribute similarities into edge weights. For example, [13] proposes the matching coefficient, which is the count of shared attributes between two connected nodes in a network; [15] extends the matching coefficient to networks with numeric node attributes; [4] defines edge weights based on self-organizing maps. A drawback of these methods is that new edge weights are only applicable to edges that already exist, hence the attribute information is not fully utilized. To overcome this issue, a different approach is to augment the original graph by adding virtual edges and/or nodes based on node attribute values. For instance, [14] generates content edges based on the cosine similarity between node attribute vectors, in graphs where nodes are textual documents and the corresponding attribute vector is the TF-IDF vector describing their content. The kNN-enhance algorithm [9] adds directed virtual edges from a node to one of its k-nearest neighbors if their attributes are similar. The SA-Clustering approach [17] adds both virtual nodes and edges to the original graph, where the virtual nodes represent binary-valued attributes, and the virtual edges connect the real nodes to the virtual nodes representing the attributes that the real nodes own.
Another class of methods is inspired by the modularity measure. These methods incorporate attribute information into an optimization objective like the modularity. [5] injects an attribute based similarity measure into the modularity function; [1] combines the gain in the modularity with multiple common users' attributes as an integrated objective; I-Louvain algorithm [3] proposes inertia-based modularity to describe the similarity between nodes with numeric attributes, and adds the inertia-based modularity to the original modularity formula to form the new optimization objective.
With the widespread adoption of deep learning, network representation learning and node embedding (e.g., [8]) have motivated new solutions. [12] proposes an embedding-based community detection algorithm that applies representation learning to learn a feature representation of the network structure, which is combined with node attributes to form a cost function; minimizing it yields the optimal community membership matrix.
Probabilistic models can be used to depict the relationship between node connections, attributes, and community membership. The task of community detection is thus converted to inferring the community assignment of the nodes. A representative of this kind is the CESNA algorithm [16], which builds a generative graphical model for inferring the community memberships.
Whereas the majority of the previous methods exploit both types of information simultaneously, we propose a late-fusion approach that combines two sets of communities obtained separately and independently from the network structure and the node attributes via a fusion algorithm.
The Late-fusion Method
Given an attributed network G = (V, E, A), with V being the set of m nodes, E the set of n edges, and A an m × r attribute matrix describing the values of the r attributes of the nodes, the goal is to build a partitioning P = {C_1, ..., C_k} of V into k communities such that nodes in the same community are densely connected and similar in terms of attributes, whereas nodes from distinct communities are loosely connected and different in terms of attributes.
For networks with numeric attributes, we can directly apply a community detection algorithm F_s on G to identify a set of communities based on node connections, P_s = {C_1, C_2, ..., C_ks}, and a clustering algorithm F_a on A to find a set of clusters based on node attributes, P_a = {C_1, C_2, ..., C_ka}. For binary-attributed networks, traditional clustering algorithms are not directly applicable; we instead build a virtual graph G_a that shares the same node set as G, but where an edge is added only when the two nodes are similar enough in terms of attributes. Then we apply F_s on G_a and obtain P_a. Note that we omit categorical attributes, since categorical values can easily be converted to the binary case.
The second step is to combine the partitionings P_s and P_a. We first derive the co-membership adjacency matrices D_s and D_a from P_s and P_a respectively, where d_ij = 1 when nodes i and j are in the same community of the partitioning and d_ij = 0 otherwise. Next, an integrated adjacency matrix D is computed as D = α·D_s + (1 − α)·D_a, where α is the weighting parameter that leverages the strength between network topology and node attributes. In this way, the information about the network topology and node attributes of the original graph G is represented in D. Now G_int, the graph derived from the adjacency matrix D, is an integrated, virtual, weighted graph whose edges embody the homophily effect of G. Algorithm 1 shows the steps of our late-fusion approach applied to networks with binary attributes.
Algorithm 1: Late-fusion on networks with binary attributes
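The body of Algorithm 1 is not reproduced above, so the following Python sketch outlines the late-fusion steps as described in the text: a virtual attribute graph built from a similarity threshold, two independent Louvain partitionings, weighted integration with α, and a final Louvain run. It assumes the networkx and python-louvain packages, and the function and variable names are illustrative rather than taken from the authors' released code.

```python
import itertools
import networkx as nx
import numpy as np
import community as community_louvain  # python-louvain


def late_fusion_binary(G, A, alpha=0.5):
    """Late-fusion community detection on a binary-attributed graph.

    G: networkx graph whose node order matches the rows of A;
    A: (m, r) binary attribute matrix; alpha: weight on structure.
    """
    nodes = list(G.nodes())
    m = len(nodes)

    # 1. Virtual attribute graph G_a: connect node pairs whose attribute
    #    inner product exceeds the median pairwise similarity (MT rule).
    S = A @ A.T
    threshold = np.median(S[np.triu_indices(m, k=1)])
    G_a = nx.Graph()
    G_a.add_nodes_from(nodes)
    for i, j in itertools.combinations(range(m), 2):
        if S[i, j] > threshold:
            G_a.add_edge(nodes[i], nodes[j])

    # 2. Independent partitionings from structure and attributes.
    P_s = community_louvain.best_partition(G)
    P_a = community_louvain.best_partition(G_a)

    # 3. Integrated weighted graph: co-membership adjacency matrices
    #    combined as D = alpha * D_s + (1 - alpha) * D_a.
    G_int = nx.Graph()
    G_int.add_nodes_from(nodes)
    for u, v in itertools.combinations(nodes, 2):
        w = alpha * (P_s[u] == P_s[v]) + (1 - alpha) * (P_a[u] == P_a[v])
        if w > 0:
            G_int.add_edge(u, v, weight=w)

    # 4. Final communities from the integrated graph.
    return community_louvain.best_partition(G_int, weight="weight")
```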
Here we address an important detail: how to build the virtual graph G_a from the node-attribute matrix A. We compute the inner product as the similarity measure between each node pair, and if the inner product exceeds a predetermined threshold, we regard the nodes as similar and add a virtual edge between them. The threshold can be determined heuristically based on the distribution of the node similarities. However, the threshold should be chosen such that the resulting G_a is neither too dense nor too sparse, since both cases could harm the quality of the final communities. Under this guidance, we put forward two thresholding approaches (a small sketch of both rules is given after this list): 1. Median thresholding (MT): Let S be the m × m similarity matrix of all nodes in V; we take all the off-diagonal, upper-triangular (or lower-triangular) entries of S and set their median as the threshold. This approach guarantees that virtual edges are added for the half of all node pairs whose similarity is higher than the other half's.
2. Equal-edge thresholding (EET): Let ρ = 2n/(m(m − 1)) be the density of G and set q = 1 − ρ. Then the q-th quantile of the similarity distribution is the chosen threshold, so that G_a contains roughly as many edges as G. In this approach, we let the original graph G_s be the proxy that decides how we construct the virtual graph G_a.
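A small sketch of the two thresholding rules is given below; the function name is illustrative, and the equal-edge rule follows the reading above, in which the quantile is derived from the density of G so that G_a ends up with roughly as many edges as G.

```python
import numpy as np
import networkx as nx


def virtual_graph(A, G=None, rule="MT"):
    """Build the virtual graph G_a from a binary node-attribute matrix A (m x r)."""
    m = A.shape[0]
    S = A @ A.T                                  # inner-product similarity of every node pair
    rows, cols = np.triu_indices(m, k=1)         # off-diagonal, upper-triangular entries
    sims = S[rows, cols]
    if rule == "MT":                             # median thresholding
        tau = np.median(sims)
    else:                                        # equal-edge thresholding (EET)
        tau = np.quantile(sims, 1.0 - nx.density(G))
    G_a = nx.Graph()
    G_a.add_nodes_from(range(m))
    G_a.add_edges_from(
        (int(i), int(j)) for i, j, s in zip(rows, cols, sims) if s > tau
    )
    return G_a
```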
Experiments
Our proposed method has been evaluated through experiments on multiple synthetic and real networks and results are presented in this section. For networks with numeric attributes, we take advantage of existing clustering algorithms to obtain communities based on attributes (i.e., clusters), and for networks with binary attributes, we employ Algorithm 1 to perform community detection. We have also released our code so that readers can reproduce the results 3 .
Synthetic Networks with Numeric Attributes
Data We use an attributed graph generator [10] to create three attributed graphs with ground-truth communities, denoted as G_strong, G_medium and G_weak, indicating that the corresponding ground-truth partitionings are strong, medium, and weak in terms of modularity Q. To examine the effect of attributes on community detection, for each of G_strong, G_medium and G_weak, we assign three different attribute distributions as shown in Figure 1, where attributes in 1a and 1b are generated from a Gaussian mixture model with a shared standard deviation, and 1c presents the original attributes generated by [10]. In this way, for each graph having a specific community structure (G_strong, G_medium, G_weak) we also have three types of attributes, denoted strong, medium and weak attributes, leading to 9 datasets in total.
Evaluation measures and baselines Normalized Mutual Information (NMI), Adjusted Rand Index (ARI), and running time are used to evaluate algorithm accuracy and efficiency (a minimal example of computing NMI and ARI is sketched after this paragraph). Louvain [2] and SIWO [7] have been chosen as baseline algorithms that utilize only the links to identify network communities. Note that since the attribute distribution does not affect Louvain and SIWO, the results of Louvain and SIWO are only presented in Table 3. We choose Spectral Clustering (SC) and DBSCAN as two representative clustering algorithms, as they both can handle non-flat geometry. We treat the number of clusters as a known input parameter of SC, and the neighborhood size of DBSCAN is set to the average node degree. We adopt default values of the remaining parameters from the scikit-learn implementations of these two algorithms. Finally, we take the implementation of the I-Louvain algorithm, which exploits links and attribute values, as our contender. The code of I-Louvain is available online 4 . Given Louvain, SIWO, SC, and DBSCAN, we correspondingly have four combinations for our late-fusion method. In all experiments, the α parameter in Algorithm 1 is chosen to be 0.5, i.e., the same weight is allocated to structural and attribute information.
Results Table 3, corresponding to strong attributes, shows that late fusion is the best-performing algorithm in terms of NMI on G_strong and G_medium, and very close to SC on G_weak (0.765 against 0.768), whereas it is better in terms of ARI on this last graph. In Tables 4 and 5, corresponding respectively to medium and weak attributes, the accuracy of late fusion degrades with the deterioration of the attribute quality, but it still remains at a consistently high level compared to I-Louvain and the clustering algorithms. Moreover, the performance degradation of the late-fusion methods is less susceptible to the deterioration of community quality compared to the clustering algorithms, thanks to the complementary structural information. As for the running time, it is expected that the classic community detection algorithms Louvain and SIWO are the fastest, as they do not consider node attributes, but the late-fusion method still outperforms I-Louvain by a remarkable margin.
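Both accuracy measures are available in scikit-learn; a minimal, illustrative example (with made-up label vectors) is:

```python
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

truth = [0, 0, 1, 1, 2, 2]   # ground-truth community labels (illustrative)
pred  = [0, 0, 1, 2, 2, 2]   # labels returned by a detection method

nmi = normalized_mutual_info_score(truth, pred)
ari = adjusted_rand_score(truth, pred)
print(f"NMI = {nmi:.3f}, ARI = {ari:.3f}")
```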
Real Network with Numeric Attributes
Data and baselines Sina Weibo 5 is the largest Chinese micro-blog social networking website. Table 2 shows the corresponding properties of the Sina Weibo network built by [9] 6 . It includes the within-inertia ratio I, a measure of the attribute homogeneity of data points that are assigned to the same subgroup; the lower the within-inertia ratio, the more similar the nodes in the same community are. As the DBSCAN algorithm performs poorly on the Sina Weibo network and it is costly to infer a good combination of its hyper-parameters, it has been replaced by k-means as a supplement to spectral clustering. The number of clusters required as an input by k-means and SC is inferred from the 'elbow method' (see the sketch after this paragraph), which happens to be 10, the actual number of clusters. Moreover, since we have the prior knowledge that the ground-truth communities are based on the topics of the forums from which the users were gathered, we reckon that the formation of communities depends more on the attribute values than on the structure, and we set the parameter α to 0.2. Table 6 presents the results on the Sina Weibo network. The two baseline algorithms Louvain and SIWO and the contending algorithm I-Louvain perform poorly on the Sina Weibo network, whereas the clustering algorithms show a high accuracy. In particular, the k-means algorithm, together with our four late-fusion methods that emphasize attribute information, produces the best NMI and ARI. This is because the modularity of the Sina Weibo network is low (0.05, as indicated in Table 2) and the within-inertia ratio is also low (0.04). The results also validate our assumption that communities in this network are mainly determined by the attributes. We will further explore the effect of α in Section 4.4.
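The elbow method mentioned above can be sketched as follows; the attribute matrix here is random placeholder data rather than the Sina Weibo attributes, and the choice of k is made where the within-cluster inertia curve flattens.

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(200, 5)        # placeholder for the node-attribute matrix
inertias = []
for k in range(2, 21):
    km = KMeans(n_clusters=k, n_init=10).fit(X)
    inertias.append(km.inertia_)

# The chosen k is where the decrease of the inertia levels off
# (visually, the "elbow" of the inertia curve).
for k, inertia in zip(range(2, 21), inertias):
    print(k, round(inertia, 2))
```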
Real Network with Binary Attributes
Data The Facebook dataset [11] contains 10 egocentric networks with binary attributes, corresponding to anonymized information about the users' names, work, and education, together with ground-truth communities. This dataset is available online 7 and Table 7 presents the properties of these networks. We still treat Louvain and SIWO as our baselines. We use the CESNA algorithm [16], which is able to handle binary attributes in addition to the links, as our contender 8 . To compare the two thresholding strategies proposed in Section 3, we present experimental results of four late-fusion methods: Louvain + equal-edge thresholding (denoted Louvain-EET), Louvain + median thresholding (denoted Louvain-MT), SIWO + equal-edge thresholding (denoted SIWO-EET), and SIWO + median thresholding (denoted SIWO-MT). We set α to its default value 0.5. Results Results in terms of NMI, ARI, and running time are presented in Tables 8, 9, and 10, respectively. In terms of NMI, the results in Table 8 show again that our late-fusion algorithms can significantly improve the community detection accuracy over Louvain. On average, the late-fusion method Louvain+EET outperforms Louvain, SIWO, and CESNA by 30.8%, 42.2%, and 33.2%, respectively; the late-fusion method Louvain+MT outperforms the three by 14.1%, 24.0%, and 16.2%, respectively. However, all of the late-fusion methods perform poorly when evaluated by ARI. This results from the objective of our late-fusion approach: recall that we aim to find a set of communities such that nodes in the same subgroup are densely connected and similar in terms of attributes, whereas nodes residing in different communities are loosely connected and dissimilar in attributes. This objective led the late-fusion approach to over-partition communities that are formed by only one of the two sources of information, and the over-partitioning greatly hurts the ARI results. A postprocessing model to resolve the over-partitioning issue with late fusion is left as future work. The running time results shown in Table 10 again manifest the efficiency advantage of our late-fusion methods over CESNA.
Effect of Parameter α
In the Sina Weibo experiment, we see the advantage of having a weighting parameter to accordingly leverage the strength of the two sources of information.
In this section, we dive deeper into the effect of α on the community detection results. To do so, we devise an experiment using G_strong and G_weak introduced in Table 1, but with the attributes swapped: we assign weak attributes to G_strong and strong attributes to G_weak. Then we run our late-fusion algorithm on these two graphs with varying α values. In this experiment, we choose SIWO as F_s and k-means as F_a. Table 11 presents the NMI and ARI of late fusion with SIWO and k-means as α varies. G_strong has communities with a strong structure but weak attributes, so the NMI and ARI scores go up as we put more weight on the structure; on the contrary, G_weak has weak structural communities but strong attributes, hence the accuracy scores decrease as α increases. One can also notice that when α is sufficiently high or low, late fusion becomes equivalent to using community detection or clustering only, which is in accordance with our observation from the Sina Weibo experiment.
In practice, when the network communities are mainly determined by the links, α should be greater than 0.5; α < 0.5 is recommended if the attributes play a more important role in forming the communities; when prior knowledge about the network communities is unavailable or both sources of information contribute equally, α should be 0.5.
Complexity of Late Fusion
It is a known drawback of attributed community detection algorithms that they are very time-consuming due to the need to consider node attributes. Our late-fusion method tries to circumvent this problem by taking advantage of the existing community detection and clustering algorithms that are efficiently optimized, and combining their results by a simple approach. To further show the computational efficiency of our late-fusion method, we compute the running time of the late-fusion method and compare it with other methods.
We test the running time of four different community detection methods on five graphs with 2000, 4000, 6000, 8000, and 10000 nodes. These graphs are also generated by the attributed graph generator [10]. We keep the modularity of each graph within the range 0.64-0.66 and keep the other hyperparameters the same. For each size, we randomly sample 10 graphs from the graph generator and plot the average running time of each method. As we can see in Figure 2, our late-fusion method is, as expected, inevitably slower than the two community detection methods that only utilize node connections. However, our algorithm runs considerably faster than the I-Louvain algorithm, although both scale approximately linearly with the network size.
Conclusion and Future Direction
In this paper, we proposed a new approach to the problem of community detection in attributed networks that follows a late-fusion strategy. We showed with extensive experiments that, most often, our late-fusion method is not only able to improve the detection accuracy of traditional community detection algorithms, but can also outperform the chosen contenders in terms of both accuracy and efficiency. We learned that combining node connections with attributes to detect communities is not always the best solution; in particular, when one source of information is strong while the other is weak, using only the stronger source can lead to better detection results. It is part of our future work to understand when and how we should use the extra attribute information to help community detection. ARI suffers greatly from the over-partitioning issue of our late fusion when applied to networks with binary attributes, and a postprocessing model to resolve this issue is desired. We also hope to extend the late-fusion approach to networks with a mix of binary and numeric attributes, as well as to networks with overlapping communities. | 4,929.4 | 2020-04-27T00:00:00.000 | [
"Computer Science"
] |
The role of absorption terms in Dirichlet problems for the prescribed mean curvature equation
In this paper we study existence and uniqueness of solutions to Dirichlet problems as $$ \begin{cases} g(u) -{\rm div}\left(\frac{D u}{\sqrt{1+|D u|^2}}\right) = f&\text{in}\;\Omega,\\ u=0&\text{on}\;\partial\Omega, \end{cases} $$ where $\Omega$ is an open bounded subset of $\mathbb{R}^N$ ($N\geq 2$) with Lipschitz boundary, $g:\mathbb{R}\to\mathbb{R}$ is a continuous function and $f$ belongs to some Lebesgue space. In particular, under suitable saturation and sign assumptions, we explore the regularizing effect given by the absorption term $g(u)$ in order to get a solution for data $f$ merely belonging to $L^1(\Omega)$ and with no smallness assumptions on the norm. We also prove a sharp boundedness result for data in $L^{N}(\Omega)$ as well as uniqueness if $g$ is increasing.
We are concerned with problems of the form
$$ \begin{cases} g(u) -{\rm div}\left(\frac{D u}{\sqrt{1+|D u|^2}}\right) = f&\text{in}\;\Omega,\\ u=0&\text{on}\;\partial\Omega, \end{cases} \tag{1.1} $$
where $\Omega$ is an open bounded subset of $\mathbb{R}^N$ ($N\geq 2$) with Lipschitz boundary, $g:\mathbb{R}\to\mathbb{R}$ is a continuous function, and the datum $f$ belongs to $L^1(\Omega)$.
The main purpose of this paper is to describe the regularizing effect of zero order absorption terms on the existence of solutions for boundary value problems as in (1.1).
Let us recall that the equation in (1.1), if $f = 0$ and without any absorption term, falls within the well-known case of the minimal surface equation
$$ {\rm div}\left(\frac{D u}{\sqrt{1+|D u|^2}}\right) = 0, $$
the name deriving from the fact that, for a smooth function $u$, the involved operator computes the mean curvature of the graph of $u$ at each point $(x, u(x))$; due to this fact such an operator is also called the non-parametric mean curvature operator.
Several cases of the (non-parametric) prescribed mean curvature equation of the type
$$ -{\rm div}\left(\frac{D u}{\sqrt{1+|D u|^2}}\right) = f \qquad \text{in}\;\Omega \tag{1.2} $$
have been considered in the literature, starting with [33], [22,23], [21], and [18], to give a non-complete list.
It is also worth pointing out that the equation in (1.1), with $g(s) = s$, corresponds to the resolvent equation of an associated evolution equation (1.3); roughly speaking, proving existence and uniqueness for (1.1) can be considered a first step towards applying the Crandall-Liggett theory ([13]) to look for mild solutions to the corresponding evolution problem. In a more general context, these types of arguments have been successfully applied in order to obtain existence and uniqueness for Cauchy initial-boundary value problems involving equations as in (1.3), in the framework of entropy-type solutions and with $L^1$-initial data (see [16,4,3,5,24] for a quite exhaustive account on this issue).
Concerning less theoretical issues, problems as in (1.2) arise in the study of combustible gas dynamics (see [30] and references therein), as well as in capillary surface problems such as pendant liquid drops ([17,11,12,18]), and also in the design of water-walking devices ([25]; see also [28]).
Prescribed mean curvature equations as in (1.2) formally represent the Euler-Lagrange equation of a functional of the form $v \mapsto \int_\Omega \sqrt{1+|Dv|^2} - \int_\Omega f\,v$, involving the area functional.
As regards the solvability of problems as in (1.2), a smallness assumption on the data naturally appears: indeed, if we formally integrate the equation in (1.2) over a smooth sub-domain $A \subset \Omega$, an application of the divergence theorem gives the necessary condition
$$ \left|\int_A f(x)\, dx\right| = \left|\int_{\partial A} \frac{Du}{\sqrt{1+|Du|^2}}\cdot \nu_A \, d\mathcal{H}^{N-1}\right| \le \mathrm{Per}(A), $$
where $\mathrm{Per}(A)$ indicates the perimeter of $A$ and $\nu_A$ is the outer normal unit vector. That is, some sort of smallness assumption on the datum $f$ is needed in order to get existence for problems as (1.2). This is a typical feature of problems arising from functionals with linear growth, as, for instance, the one driven by the 1-Laplacian (see for instance [10,26,15]). See also Remark 3.4 below for more details on this structural obstruction. In [21] M. Giaquinta shows the unique solvability of (1.2), in a variational sense, in the space of functions of bounded variation provided $f$ is measurable and there exists $\varepsilon_0 > 0$ such that for every smooth $A \subseteq \Omega$
$$ \int_A f(x)\, dx \le (1 - \varepsilon_0)\,\mathrm{Per}(A) . \tag{1.4} $$
In [22] a general condition on $\|f\|_{L^N(\Omega)}$, expressed through $\omega_N$, the measure of the unit ball of $\mathbb{R}^N$, is shown to guarantee (1.4), and this request is sharp in order to get bounded solutions for problem (1.2) (see [19]).
Less regularity for data f ∈ L q (Ω) below the threshold q = N is known to be more challenging than the classical variational setting of BV -solutions for equations arising from functionals with linear growth and one needs a different approach, see for instance [31] and [27].We also point out that these generalized solutions are, in general, bounded only for data f ∈ L N,∞ (Ω) with small norm.
As we said, our main focus consists in analyzing the regularizing effect of zero order absorption terms for problems as in (1.1), where $g : \mathbb{R} \to \mathbb{R}$ is a continuous function such that $g(s) \to \pm\infty$ as $s \to \pm\infty$ and $g(s)s \ge 0$.
We show that solutions of (1.2) do exist for general data $f \in L^1(\Omega)$, regardless of the size of $f$, and, if $g : \mathbb{R} \to \mathbb{R}$ is increasing, the solution is unique. Moreover, if $f \in L^N(\Omega)$, then solutions to (1.1) lie in $L^\infty(\Omega)$, again without any restriction on the norm of $f$. Remarkably, this result is sharp on the Lorentz scale since, as we will show by means of an explicit counter-example, unbounded solutions may exist for data $f \in L^{N,\infty}(\Omega)$. The boundedness of solutions for $L^N$-data is somewhat unexpected, since the extension of the Calderón-Zygmund regularity theory only guarantees bounded solutions when data belong to $L^m(\Omega)$ for $m > N$.
In the first part we work by approximation proving existence of a BV -solution of problem (1.1) when f ∈ L N (Ω); in this case the regular approximation scheme is suitably chosen involving p-Laplacian type operators.This part has some overlap with [16] or [3] in case g(s) = s.We present our results in case of a generic nonlinearity g and with a quite different approach based on the L ∞ -estimate.In the second part we look for infinite energy solutions of problem (1.1) when f is a merely integrable function.In this case the approximation scheme is given by solutions to problems as (1.1) whose existence has been proven in the first part and we only approximate the datum f .We remark that, from a different point of view, problem (1.2) (again in the case of linear absorption) with L 1 -data is studied in [4] and [24].
The plan of the paper is the following: in Section 2 we set the basic machinery on BV spaces (the natural space in which these problems are well settled), and the Anzellotti-Chen-Frid theory of pairings between bounded vector fields whose divergence lies in some Lebesgue spaces and gradients of BV functions.Section 3 is devoted to present the existence and uniqueness theory of finite energy solutions to problem (1.1) in case of data f ∈ L N (Ω).The core of the paper is the content of Section 4 in which we prove existence and uniqueness of infinite energy solutions to (1.1) in full generality.In Section 5 we discuss the existence of finite energy solutions to (1.1) when f does not necessarily belong to L N (Ω).In particular, if f ∈ L N,∞ (Ω), a smallness assumption is needed to guarantee the boundedness of the solutions; this hypothesis turns out to be sharp as shown by an explicit example.
Notation and preparatory tools
From here on Ω will always represent an open bounded subset of $\mathbb{R}^N$ ($N \geq 2$) with Lipschitz boundary. We denote by $\mathcal{H}^{N-1}(E)$ the $(N-1)$-dimensional Hausdorff measure of a set $E$, while $|A|$ stands for the $N$-dimensional Lebesgue measure $\mathcal{L}^N$ of a set $A \subset \mathbb{R}^N$. We denote by $\chi_A$ the characteristic function of a set $A \subset \mathbb{R}^N$. By $\mathcal{M}(\Omega)$ we indicate the space of Radon measures with finite total variation over Ω, and we call mutually singular (or mutually orthogonal) two Radon measures $\mu$ and $\nu$ in $\mathcal{M}(\Omega)$ for which there exists a measurable set $A \subset \Omega$ on which one measure is concentrated while the other vanishes. For a fixed $k > 0$, we use the truncation functions $T_k : \mathbb{R} \to \mathbb{R}$ and $G_k : \mathbb{R} \to \mathbb{R}$ (recalled below). If not otherwise specified, we denote by $C$ several positive constants whose value may change from line to line and, sometimes, within the same line. These values will only depend on the data, but they will never depend on the indexes of the sequences we will gradually introduce. Let us explicitly mention that we will not relabel an extracted compact subsequence. For simplicity's sake, and if there is no ambiguity, we will often write $\int_\Omega f := \int_\Omega f\,dx$ and, if $\mu$ is a Radon measure, $\int_\Omega f \mu := \int_\Omega f\, d\mu$.
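Assuming the convention that is standard in this literature, the truncations read
$$ T_k(s) := \max\{-k,\ \min\{k,\ s\}\}, \qquad G_k(s) := s - T_k(s), \qquad s \in \mathbb{R},\ k > 0 . $$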
Finally, we will denote by $\mathrm{sgn}(s)$ the multi-valued sign function, namely $\mathrm{sgn}(s) = 1$ if $s > 0$, $\mathrm{sgn}(s) = -1$ if $s < 0$, and $\mathrm{sgn}(0) = [-1,1]$.
2.1. BV spaces and the area functional. We refer to [1] for a complete account on BV-spaces. We say that $u \in BV(\Omega)$ if $u \in L^1(\Omega)$ and $Du \in \mathcal{M}(\Omega)^N$; by $Du \in \mathcal{M}(\Omega)^N$ we mean that each distributional partial derivative of $u$ is a Radon measure with finite total variation. The total variation of $Du$ is denoted by $|Du|(\Omega)$. Each $u \in BV(\Omega)$ exhibits a trace on $\partial\Omega$ which belongs to $L^1(\partial\Omega)$; henceforth we will use the same notation for a BV-function and its trace. We underline that $BV(\Omega)$, endowed with the norm $\|u\|_{BV(\Omega)} = \|u\|_{L^1(\Omega)} + |Du|(\Omega)$, is a Banach space. A Radon measure $\mu$ can be uniquely decomposed as $\mu = \mu^a + \mu^s$, where $\mu^a$ is absolutely continuous with respect to the Lebesgue measure $\mathcal{L}^N$ while $\mu^s$ is concentrated on a set of zero Lebesgue measure, i.e. $\mu^a$ and $\mu^s$ are mutually singular.
If $u \in BV(\Omega)$, the measure $\sqrt{1+|Du|^2}$ is defined for any Borel set $E \subseteq \Omega$. The notation $\int_\Omega \sqrt{1+|Du|^2}$ stands for the total variation of the $\mathbb{R}^{N+1}$-valued measure which formally represents $(\mathcal{L}^N, Du)$. Notice that, if $u$ is smooth, then $\int_\Omega \sqrt{1+|Du|^2}$ gives the area of the graph of $u$. Let us also observe that, decomposing into the absolutely continuous and singular parts with respect to the Lebesgue measure, one obtains the formula recalled below, where we use the notations $D^a u := (Du)^a$ and $D^s u := (Du)^s$.
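Assuming the standard convention of the BV literature, the decomposition just mentioned reads
$$ \int_E \sqrt{1+|Du|^2} \;=\; \int_E \sqrt{1+|D^a u|^2}\, dx \;+\; |D^s u|(E), \qquad E \subseteq \Omega \ \text{Borel}. $$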
In what follows we will use the following semicontinuity classical results; firstly, the functional is lower semicontinuous in BV (Ω) with respect to the L 1 (Ω) convergence.On the other hand the functional defined on functions |v| ≤ 1 is weakly upper semicontinuous in L 1 (Ω) (see Corollary 3.9 of [8]).
2.2.The Anzellotti-Chen-Frid theory.Let us briefly present the L ∞ -vector fields theory due to [6] and [9] in the case of bounded fields z whose divergence is in L q (Ω).Let q ≥ 1 and X(Ω) q := {z ∈ L ∞ (Ω) N : div z ∈ L q (Ω)}.In [6], under suitable compatibility conditions that we shall outline later, given a function v ∈ BV (Ω) and a bounded vector field z ∈ X(Ω) q , the following distribution (z, Dv) : Let us stress that (2.2) is well defined provided one of the following compatibility conditions hold: (2.4) We point out that an admissible compatibility condition is also v ∈ BV (Ω) and div z ∈ L N,∞ (Ω), where L N,∞ (Ω) is the usual Lorentz space (see [32] for an introduction on such function spaces) also known as Marcinkiewicz space of exponent N .Moreover, it holds for all open set U ⊂⊂ Ω and for all ϕ ∈ C 1 c (U ), and for all Borel sets B and for all open sets U such that B ⊂ U ⊂⊂ Ω.Every z ∈ X(Ω) q has a weak trace on ∂Ω of its normal component which is denoted by [z, ν], where ν(x) is the outward normal unit vector defined for H N −1 -almost every x ∈ ∂Ω (see [6]), such that The following Green formula holds (see [6, Theorem 1.9]): provided one of the compatibility conditions (2.3) or (2.4) is in force.
Let us recall the following technical result due again to [6,Theorem 2.4].
provided one of the compatibility conditions (2.3) or (2.4) is in force.
An algebraic inequality.
In what follows we will have to apply an algebraic inequality, which we record here for the sake of completeness. If $a \ge 0$ and $0 \le b \le 1$, then
$$ \sqrt{1+a^2} \;\ge\; ab + \sqrt{1-b^2}. \tag{2.6} $$
To check it, just square and simplify: (2.6) is equivalent to
$$ \big(a\sqrt{1-b^2} - b\big)^2 \ge 0, $$
and this inequality holds since the left-hand side is a square. As a consequence of (2.6) and the Cauchy-Schwarz inequality, we deduce that if $A, B \in \mathbb{R}^N$ with $|B| \le 1$, then
$$ \sqrt{1+|A|^2} \;\ge\; A \cdot B + \sqrt{1-|B|^2}. \tag{2.7} $$
BV -solutions in presence of L N -data
In this section we deal with the following problem:
$$ \begin{cases} g(u) -{\rm div}\left(\frac{D u}{\sqrt{1+|D u|^2}}\right) = f&\text{in}\;\Omega,\\ u=0&\text{on}\;\partial\Omega, \end{cases} \tag{3.1} $$
where $f$ belongs to $L^N(\Omega)$ and $g$ is a continuous function such that
$$ \lim_{s\to\pm\infty} g(s) = \pm\infty \quad\text{and}\quad g(s)s \ge 0 . \tag{3.2} $$
Let us start by specifying what we mean by a solution to (3.1).
which, after a simple manipulation, gives the right-hand of (3.4).
It is also worth mentioning that, under the assumptions of Lemma 2.2, (3.4) turns out to be equivalent to require that both z and (z, Du) s = |D s u|, holds (see [16]).
We stress that condition (3.4) was leveraged in [16, Theorem 3.1] in order to characterize the subdifferential of the associated functional. Let us also stress that, once (3.6) is in force, $z$ is uniquely defined; this is a striking difference with some other flux-limited diffusion operators, such as the 1-Laplacian or the transparent media one ([5,2,20]). With regard to (3.5), it is by now the classical way in which the Dirichlet datum is understood for these types of equations since, in general, the trace of the solutions is not attained pointwise. Roughly speaking, it means that at any point of $\partial\Omega$ either $u$ is zero or the modulus of the weak trace of the normal component of $z$ attains its maximal possible value at the boundary.
Let us also observe that it follows from (3.3) that $g(u) \in L^N(\Omega)$. We conclude by stressing that, by standard embeddings, (3.3) also holds when tested with functions in $BV(\Omega)$ as well as in $\mathcal{D}(\Omega)$.
Let us state the existence result of this section: Remark 3.4.Let us stress again that, in absence of the absorption zero order term, existence of BV -solutions are expected only for small f 's belonging to L N (Ω).To check that a smallness condition is needed in this case, assume that there exists a solution of problem (3.1) without the absorption term and let z be the associated vector field.Then, for every v ∈ W 1,1 0 (Ω), Green's formula implies 3 consequently shows that when dealing with the regularizing absorption term one gains that a solution always exists, and it belongs to BV (Ω), avoiding any small condition on the size of f .It is also worth noting that the hypothesis (3.2) is necessary to get existence of a solution for every f ∈ L N (Ω).Indeed, assume that g is bounded from above, that is, there exists M > 0 such that g(s) ≤ M for all s ∈ R and let f ∈ L N (Ω) with f ≥ 0. If u is a solution to problem (3.1) and define So, for what we said before, we infer that Appealing now to Sobolev's inequality, we obtain Therefore, the datum f cannot be arbitrary.
An analogous argument can be developed assuming that g is bounded from below.
By appealing to the presence of the regularizing zero order term, we show that the BV -solution of (3.1) is unique provided g is increasing.
Theorem 3.5.Let g be an increasing function.Then there is at most one solution to problem (3.1) in the sense of Definition 3.1.
3.1.
Existence of finite energy solutions. In order to prove Theorem 3.3, for $p > 1$, we consider an approximation scheme (3.8), where $g_p(s) = T_{\frac{1}{p-1}}(g(s))$ for any $s \in \mathbb{R}$, and whose solvability follows by standard monotonicity arguments ([29]).
We start proving that u p is bounded uniformly with respect to p by appealing to an idea in [14].
Lemma 3.6. Let $f \in L^N(\Omega)$, let $g$ satisfy (3.2), and let $u_p$ be a solution to (3.8). Then $\|u_p\|_{L^\infty(\Omega)} \le C$ for some positive constant $C$ not depending on $p$.
Proof.We take G k (u p ) where k > 0 as a test function in (3.9), yielding to For the first term on the left-hand of (3.11) we have while for the second term on the left-hand of (3.11) one has For the right-hand of (3.11) we write after applications of the Hölder and Sobolev inequalities (here S 1 is the best constant in the Sobolev inequality for functions in W 1,1 0 (Ω)) and h > 0 to be chosen.Now we gather (3.12), (3.13) and (3.14) into (3.11),obtaining that Now we fix h large enough in order to have (3.16) for any 1 < p < p 0 .From now on, we only consider those p satisfying 1 < p < p 0 .This allows to deduce from (3.15) that it holds An application of the Sobolev inequality gives that (ℓ > k) Estimate (3.17) is sufficient in order to apply standard Stampacchia machinery (see [34]) to deduce that u p is uniformly bounded with respect to p.The proof is concluded.
The previous lemma easily allows us to derive a BV-estimate for $u_p$.
Lemma 3.7. Let $f \in L^N(\Omega)$ and let $g$ satisfy (3.2). Let $u_p$ be a solution to (3.8). Then $u_p$ is uniformly bounded in $BV(\Omega)$ (with respect to $p$), and a further estimate holds with a constant $C$ independent of $p$.
Proof.It is sufficient to pick v = u p as a test function in (3.9) obtaining Therefore it follows from gathering (3.20) into (3.19) that thanks also to (3.2).This concludes the proof.
From Lemmas 3.6 and 3.7 we immediately deduce the following corollary.
From now on, when referring to u we mean the function found in Corollary 3.8.
Lemma 3.9.Let f ∈ L N (Ω) and let g satisfy (3.2).Then there exists z ∈ X(Ω) N such that Proof.Let u p be the solution of (3.8).Firstly observe that, since It also follows from the weak lower semicontinuity of the norm that ||z|| L ∞ (Ω) N ≤ 1.Moreover, the above argument, Lemmas 3.6 and 3.7, and Corollary 3.8 give that (3.21) holds true.Indeed, we only need to show that the third term in (3.8) goes to zero in the sense of distributions as p → 1 + ; to do that, consider ϕ ∈ C 1 c (Ω) and observe that from the Hölder inequality and from (3.18), one has which gives that lim p→1 + (p − 1) ˆΩ |∇u p | p−2 ∇u p • ∇ϕ = 0.This implies (3.21) and, in particular, that z ∈ X(Ω) N as g(u) ∈ L ∞ (Ω).
Let us also underline, for later purposes, that, since u ∈ L ∞ (Ω) and f ∈ L N (Ω), then it holds Hence, let 0 ≤ ϕ ∈ C 1 c (Ω) and consider v = u p ϕ in (3.9); this takes to (3.28) Dropping the nonnegative fourth term in (3.28), one gets where we used that (3.30) Now we aim to take the liminf as p → 1 + in (3.29).As u p strongly converges to u in L q (Ω) for any q < ∞ and f p strongly converges to f in L N (Ω) as p → 1 + , we can easily pass to the limit in the first and in the last term of (3.29).The second term on the left-hand side of (3.29) is lower semicontinuous with respect to the L 1 convergence.The nonpositive third term on the left-hand side of (3.29) is weakly lower semicontinuous with respect to the L 1 convergence (recall (2.1)).Concerning the fourth term on the left-hand side of (3.29) we use the weak * convergence of ∇u p (1 + |∇u p | 2 ) − 1 2 to z in L ∞ (Ω) N as well as the strong convergence of u p in L q (Ω) for any q < ∞ as p → 1 + .Let us finally focus on the last term on the left-hand side of (3.29) for which we reason as for (3.24).Indeed one can apply the Hölder inequality with indexes p p−1 , p obtaining that whose right-hand goes to zero as p → 1 + thanks to (3.18) and since u p is uniformly bounded with respect to p.
Then we have proved that ˆΩ g(u)uϕ Hence, using (2.2), it holds and we take v = S δ,k (u p ) in (3.9) yielding to getting rid of the nonnegative second and third term.Then taking the limsup first as p → 1 + and then as δ → 0 + , one obtains (3.23).
Lemma 3.10.Let f ∈ L N (Ω) and let g satisfy (3.2).Then it holds where u and z are the function and the vector field found in Corollary 3.8 and in Lemma 3.9.
Proof.Let u p be a solution to (3.8) and let us take v = u p in (3.9) yielding to since u p has zero Sobolev trace.Moreover reasoning as for (3.30) (with ϕ = 1) one obtains Now we can take the liminf as p → 1 + acting similarly to what done in Lemma 3.9.This allows to deduce that ˆΩ g(u Now, recalling (3.25) and (2.5), one can write Then the proof of Lemma 3.10 follows by (3.22) and by the fact that |[z, ν]| ≤ 1 H N −1 −a.e. on ∂Ω.
Proof of Theorem 3.3.Let u p be a solution to (3.8).Then it follows from Lemmas 3.6 and 3.7 that u p is bounded in BV (Ω) ∩ L ∞ (Ω) with respect to p. Corollary 3.8 guarantees that u p converges, up to subsequences, to u in L q (Ω) for every q < ∞.Then (3.3) and (3.4) are proved in Lemma 3.9.Finally (3.5) follows from Lemma 3.10.The proof is concluded.
Uniqueness of finite energy solutions.
In this section we prove Theorem 3.5.Let us explicitly highlight that our proof of the uniqueness result is strongly related to the presence of the absorption term.
Proof of Theorem 3.5.Let u 1 and u 2 be solutions to (3.1) and let z 1 and z 2 be the corresponding vector fields.Using (3.3) (recall Remark 3.2), we readily have We take v = u 1 − u 2 in the difference between two weak formulations (3.32) related to u 1 and u 2 , obtaining after an application of (2.5).
Observe first that from (3.5) it holds Hence one can rewrite the boundary terms as which are nonnegative since |[z i , ν]| ≤ 1 for i = 1, 2. Gathering (3.34) into (3.33)gives that Moreover, using (3.4), one gets as measures in Ω.This easily follows by splitting the measures in the absolutely continuous and singular parts.Let us observe that for the absolutely continuous part of the measures one needs that which are given by (2.7) once one recalls that For the singular part it is sufficient to recall that Hence we have shown that which concludes the proof as g is increasing.
The case of L 1 data
Here we deal with (3.1) in presence of a merely integrable datum f and, once again, g satisfying (3.2).
In this case one can not expect finite energy solutions.We specify how a weak solution of problem (3.1) is meant in this case.
A function u which is almost everywhere finite in Ω and such that both g(u) ∈ L 1 (Ω) and T k (u) ∈ BV (Ω) for any k > 0, is a solution to problem (3.1) if there exists for any k > 0.
For the reverse inequality it is sufficient to observe that Then the above becomes an equality, which yields to We also state the following uniqueness result:
4.1.
Existence of infinite energy solutions.By exploiting the results of Section 3, we work by approximation via the following problems where f n := T n (f ).The existence of a solution u n ∈ BV (Ω) ∩ L ∞ (Ω) is proved in Theorem 3.3.This means that there exists u n (sgn We begin by proving estimates in BV (Ω) with respect to n for any truncation of the approximating solutions.
Lemma 4.5.Let f ∈ L 1 (Ω) and let g satisfy (3.2).Let u n be the solution to (4.6) given by Theorem 3.3.Then where C is a positive constant not depending on n.Then there exists an almost everywhere finite function u such that T k (u) ∈ BV (Ω) for any k > 0.Moreover, up to subsequences, u n → u a.e. on Ω, and Then recalling (2.5), (4.9) and the fact that u n ∈ BV (Ω), one gets Now, recalling (4.8) and the discussion in Remark 4.2, one has which readily implies that T k (u n ) is bounded in BV (Ω) with respect to n for any k > 0. This is sufficient to deduce the existence of a limit function u to whom u n converges, up to subsequences, almost everywhere in Ω as n → ∞.Moreover T k (u) ∈ BV (Ω).It remains to show that g(u n ), up to subsequences, converges to g(u) in L 1 (Ω) as n → ∞.First observe that an application of the Fatou Lemma with respect to n in (3.23) gives that g(u) ∈ L 1 (Ω); observe, in particular, that it implies that u is almost everywhere finite in Ω.Hence, one has that ∀ε > 0 there exists h such that |{|g(u n )| ≥ h}| < ε.Using the first assumption in (3.2) there exists a increasing sequence Therefore, the equi-integrability of g(u n ) is a consequence of (3.23) with k h in place of k.The proof is concluded.
Proof.Let u n be the solution to (4.6) given by Theorem 3.3 with vector field z n such that Then, recalling that from Lemma 4.5 g(u n ) converges, up to subsequences, to g(u) in L 1 (Ω) as n → ∞, it is easy to prove that (4.10) holds since f n converges to f in L 1 (Ω).Now in order to prove (4.11) one can take that, using (2.2), gives Then, recalling Remark 4.2, one has that, for every k > 0 which gathered in (4.12) yields to Now one can let n go to ∞ in the previous identity recalling that the left-hand is lower semicontinuous as already shown in the proof of Lemma 3.9.In particular, for the second term on the left-hand one uses that z n χ {|un|≤k} converges to zχ {|u|≤k} weakly in L 1 (Ω) N as n → ∞, for almost every k > 0. Moreover the first term on the right-hand simply passes to the limit since f n ,g(u n ) converge in L 1 (Ω) and T k (u n ) converges weak * in L ∞ (Ω).Finally, the last term easily pass to the limit as T k (u n ) converges in L 1 (Ω) and z n converges weak * in L ∞ (Ω) N .This argument takes to (recall that z k := zχ {|u|≤k} ) where the last passages follow from (4.10) and (2.2) respectively.From now on the reasoning to deduce (4.11) is similar to the one given in the proof of Lemma 3.9.Indeed it is sufficient to observe that z . This shows (4.11) for almost every k > 0. Now observe that, reasoning as in Remark 3.2, from (4.11) one readily gets for almost every k > 0. We claim that, for any fixed k > 0, z = 0 almost everywhere in {|u| = k}.If this is the case, then z n χ {|un|≤k} converges to zχ {|u|≤k} weakly in L 1 (Ω) N as n → ∞, for every k > 0 and this concludes the proof.Let us finally check the claim; let us fix h > k such that (4.14) holds for z h = zχ {|u|≤h} .Then, since z h = 0 almost everywhere in {|u| = k}, also z = 0 almost everywhere on the same set.This is not accurate as we only ask for u to have truncations in BV (Ω) so that D a u is not well defined in general.Nevertheless, reasoning as in [7] it is possible to define (see for instance [4, Lemma 1]) a generalized gradient for functions whose truncation is in BV (Ω) for which, in turn, (4.15) holds a.e. in Ω.Indeed, let u be a measurable function finite a.e. on Ω such that T k (u) ∈ BV (Ω) for any k > 0. Then D a T k (u) is well defined for any k > 0. A standard argument allows us to select a unique measurable vector function Using this generalized gradient, the vector field z given in Definition 4.1 can be uniquely identified by (4.15).Remark 4.8.For subsequent use, we underline that in the previous proof we have shown that z n χ {|un|≤k} converges to zχ {|u|≤k} weakly in L 1 (Ω) N as n → ∞ and for every k > 0. Lemma 4.9.Let f ∈ L 1 (Ω) and let g satisfy (3.2).Then it holds for any k > 0 where u and z are the function and the vector field found in Lemma 4.5 and in Lemma 4.6.
Proof.Let u n be the solution of (4.6) given by Theorem 3.
and, after an application of the (2.5), to Now using both (4.13) and (4.9), it follows from (4.16) that Recalling also Remark 4.8, we can take n → ∞ by lower semicontinuity of the left-hand of (4.17).For the right-hand it is sufficient to use the strong convergence of both f n and g(u n ) in L 1 (Ω) and the * -weak convergence in L ∞ (Ω) of T k (u n ) as n → ∞.Then one deduces Now observe that from (4.10) one has that (f − u)T k (u) = −T k (u) div z.Then an application of (2.5) in (4.18) gives which, from (4.11), implies Proof of Theorem 4.4.Let u 1 and u 2 be solutions to (3.1) and let z 1 and z 2 be the corresponding vector fields.Then one has that where we also used (2.5).From (4.3) one has
This means that Moreover, using (4.2), one gets (z i,k := z i χ {|ui|≤k} for i = 1, 2) hold as measures in Ω.Once again, this follows by splitting it in the absolutely continuous and singular parts.For the absolutely continuous part of the measures one needs that For the singular part it is sufficient to recall that for any k > 0. The proof is concluded as g is increasing.
Finite energy and unbounded solutions
So far we have shown the existence of a bounded BV -solution when the datum f lies in L N (Ω) (Theorem 3.3) while we proved the existence of an infinite energy solution when f is merely integrable (Theorem 4.3).One could wonder what happens to the solution's regularity when the datum f is in between these two extreme cases.
In particular, in the next result, we consider data lying in the Marcinkiewicz space $L^{N,\infty}(\Omega)$; again, let us refer to the monograph [32] for an introduction and basic properties. Among other things, let us recall that for functions in $BV(\Omega)$ the natural embedding is into the Lorentz space $L^{\frac{N}{N-1},1}(\Omega)$ (see [32]), with an explicit best Sobolev constant $S_1$ depending only on $N$. Let us state a first regularity result, in which we prove that a bounded solution exists provided the $L^{N,\infty}$-norm of the datum is small enough. This result shows how the absorption given by a general $g$ only satisfying (3.2) is too weak to infer boundedness for any data in the Marcinkiewicz space, and it fits with the result in [19]. Proof. The proof strictly follows the lines of the one of Theorem 3.3, once one uses the Sobolev inequality in $L^{\frac{N}{N-1},1}(\Omega)$.
Remark 5.2. As a technical remark, let us stress that, in the previous theorem, the smallness condition on the norm of $f$ is needed since we are no longer able, in general, to fix $h$ large enough in order to deduce (3.16) as in the proof of Lemma 3.6 when $f \in L^{N,\infty}(\Omega)$. This is not only a technical obstruction; in fact, in Example 1 below we will show that unbounded solutions can exist if $f \in L^{N,\infty}(\Omega)$ with $\|f\|_{L^{N,\infty}(\Omega)}$ equal to the critical threshold, yielding optimality of the result of Theorem 5.1.
In the next regularity result we show how the existence of a finite energy solution can be proven, also below the critical threshold N for the datum, provided some stronger growth assumption on g is required.
Theorem 5.3.Let us assume that there exist c 0 > 0 and q > 1 such that g(s)s ≥ c 0 |s| q for any s ∈ R. Moreover let f ∈ L q ′ (Ω).Then there exists a solution u ∈ BV (Ω) to (3.1) in the sense of Definition 4.1 such that u ∈ L q (Ω).
Proof.In Theorem 4.3 we have found the solution u to (3.1) as the almost everywhere limit (up to subsequences) in n of u n ∈ BV (Ω) ∩ L ∞ (Ω) solution to (4.6).Hence, to show that u ∈ BV (Ω), we only need to show that u n is bounded in BV (Ω) ∩ L q (Ω) with respect to n.
To this aim we take $u_n$ as a test function in (4.6). From (5.1), and after an application of the Young inequality, one obtains (5.2). From (5.2) it is easy to see that $u_n$ is bounded in both $BV(\Omega)$ and $L^q(\Omega)$ with respect to $n$.
The proof is concluded.
Let us now summarize some of the results proven so far.If f ∈ L N (Ω) then Theorem 3.3 gives the existence of a bounded solution u to (3.1) in the sense of Definition 3.1.
As we have just seen in Theorem 5.1, we can enlarge the set of admissible data to $L^{N,\infty}(\Omega)$ and still deduce existence of a bounded BV-solution to (3.1), as long as we require a smallness condition on $f$. Finally, in Theorem 5.3, we proved that no smallness condition on $f$ is required to obtain BV-solutions, provided a suitable growth assumption is imposed on $g$. In this case, in general, the solutions are not bounded anymore.
Both of the results of this section are sharp as the next example shows.
The following example shows that, for a suitable $f \in L^{N,\infty}(\Omega)$ with norm above the critical threshold found in Theorem 5.1, the unique solution to (3.1) (here $g(s) = s$; see Theorem 3.5) is not bounded anymore; a similar computation can be found in Example 1 of [19].
and let g satisfy (3.2).Then there exists a bounded solution u to problem (3.1) in the sense of Definition 3.1.
Remark 4.2.
Let us explicitly observe that a function u, solution to (3.1) in the sense of Definition 3.1, is also a solution to the same problem in the sense of Definition 4.1.Indeed, if z k = zχ {|u|≤k} , it follows from (3.7) that
Theorem 4.3.
Equations (4.4) and (4.5) show that (4.2) holds. This is sufficient to conclude that $u$ is a solution to (3.1) in the sense of Definition 4.1. We stress that conditions (4.2) and (4.3) are the translation of, respectively, (3.4) and (3.5) to the $L^1$-setting, and they formally tend to them as $k \to +\infty$. Let $f \in L^1(\Omega)$ and let $g$ satisfy (3.2). Then there exists a solution to problem (3.1) in the sense of Definition 4.1.
Theorem 4.4.
Let $g$ be an increasing function. Then there is at most one solution to problem (3.1) in the sense of Definition 4.1.
4.2.
and this concludes the proof since |[z, ν]| ≤ 1 H N −1 −a.e. on ∂Ω.Proof of Theorem 4.3.Let u n be the solution to(4.6) given by Theorem 3.3.It follows from Lemma 4.5 that T k (u n ) is bounded in BV (Ω) with respect to n and for any k > 0. In the same lemma it is shown that u n converges, up to subsequences, as n → ∞ almost everywhere in Ω to a function u, which is almost everywhere finite.Moreover g(u n ) converges to g(u) in L 1 (Ω) as n → ∞.Requests (4.1) and (4.2) are proved in Lemma 4.6.The boundary condition (4.3) is shown in Lemma 4.9.The proof is concluded.Uniqueness of infinite energy solutions.In this section we prove the uniqueness Theorem 4.4 by strictly following the lines of the proof of Theorem 3.5.
Let us observe that the main difference with respect to the proof of Theorem 3.5 relies on the fact that $u_1$, $u_2$ are no longer admissible test functions in (4.19). Hence we have to take $v = T_k(u_1) - T_k(u_2)$ in the difference between the two weak formulations (4.19) related to $u_1$ and $u_2$. | 8,778 | 2023-07-26T00:00:00.000 | [
"Mathematics"
] |
Method of ultrasonic nano-crystal surface modification for improvement of surface characteristics of AISI D2 heat-treated with different tempering conditions
AISI D2, an alloy tool steel, has been widely used as a cold-work die steel because of its excellent wear resistance, toughness, and machinability. However, when it is used as a mold or tool under a high load, high hardness and wear resistance are required to improve its service life. This study aimed to apply ultrasonic nanocrystal surface modification (UNSM), a local surface hardening technology, to the surface of D2 steel. To maximize the surface improvement achievable through UNSM, it is important to select the optimal process conditions according to the characteristics of the base metal. Therefore, the effects of UNSM were compared for different initial hardness values of D2 in this study. To this end, the base metal was subjected to different heat treatment conditions to exhibit different hardness values (approximately 40, 45, 50, and 55 HRc). Changes in the surface of the base metal were observed according to the three main parameters of the UNSM process: the load, inter-pass interval, and feed rate. No significant changes in the surface roughness and hardness were caused by the feed rate. As the inter-pass interval of UNSM decreased and the static load increased, the surface roughness of the base metal increased. However, for the specimen with low initial hardness (D2-H40), the roughness improvement decreased as the load increased. On the other hand, the surface hardness improvement increased as the inter-pass interval decreased and the load increased, regardless of the initial hardness of the base metal. It was also found that the specimen with the highest initial hardness (D2-H55) exhibited the greatest hardness improvement rate of 9% and the smallest hardened layer thickness of 220 μm. In addition, the largest compressive residual stress (− 1,130.6 MPa) was formed on the surface of D2-H55 after UNSM, which occurred because the base metal with high initial hardness had limited plastic flow due to fine grains and low ductility, thus causing the energy repeatedly delivered by UNSM to be concentrated in a limited area. Consequently, reducing the inter-pass interval is effective in improving the surface roughness and hardness regardless of the material hardness. However, the load must be applied at an appropriate level, depending on the material hardness. Therefore, when D2 is heat-treated and used according to the product requirements, it will be effective to consider the post-UNSM changes as a function of the post-heat-treatment hardness, as derived in this study.
Introduction
AISI D2 or simply D2 is a tool steel that has been widely used as a high-carbon, high-chromium alloy steel. It has been employed in various industries owing to its excellent toughness and wear resistance and has mainly been utilized as a cold-work tool steel, such as in cutting tools, forming rolls, and dies for press. Therefore, it is necessary to improve the mechanical characteristics and fatigue lives of cold-work tool steels to be used as parts and dies [1]. In general, the mechanical and microstructural characteristics of metallic materials are improved by heat treatment. D2 has excellent quenching performance due to the addition of molybdenum and vanadium and can exhibit a high hardness of more than 58 HRc when processed by normalizing (at 500-530 °C). In addition, heat treatment makes the microstructure finer and increases the wear resistance by generating martensitic structures. Therefore, the mechanical characteristics can be controlled by varying the heat-treatment conditions according to the product requirements.
However, cold-work dies and tools are exposed to environments that make them vulnerable to fatigue cracking due to repeated friction. To address this problem, surface hardening studies have been conducted. Surface hardening is a post-treatment process that increases the hardness and wear resistance and maintains the toughness of the material. Surface hardening is mainly divided into chemical surface hardening, which changes the chemical composition, and physical surface hardening, which does not change the chemical composition. Chemical surface hardening includes nitriding, in which steel is heated in ammonia gas over an extended period of time to form a nitride layer [2][3][4][5][6], and carburizing, in which carbon penetrates the metal surface [5][6][7]. Physical surface hardening includes metal spraying, in which molten metals are attached to a surface by spraying them with compressed air [8,9]; induction hardening, in which a surface is hardened using high-frequency current and coolant injection [10]; and shot peening, in which the surface layer is hardened by injecting metals at high speed [11][12][13][14][15][16][17][18][19][20][21]. For D2 steel, physical surface hardening methods, such as shot peening and laser shock peening, have mainly been used.
In shot peening, a type of cold working, 0.5-0.1 mm metal shot balls strike the metal surface through high-speed airflow. A thin plastically deformed layer is formed on the surface through repeated impacts. In such a deformed layer, compressive residual stress is produced. Compressive residual stress extends the fatigue life by offsetting the tension under repeated tensile loads. Shot peening includes hot shot peening, which is performed near the austenite or martensite transformation temperature, and a water jet in which water is injected at a high speed for processing. Lee et al. [11] applied shot peening to SCM920, a material for automotive gears, and confirmed an improvement in wear resistance. Kim et al. applied shot peening to aluminum alloys and verified that compressive residual stress improved the fatigue life [12]. Cheong et al. observed changes in the hardness of spring steel SPS 5A specimens with different hardness values by varying the shot peening parameters [13]. In addition, the surface and mechanical characteristics have been improved using shot peening in many studies [14][15][16][17]. However, shot peening has low precision because it is impossible to adjust the positions of the injected metal balls accurately. In addition, the shot balls used cause environmental pollution. Studies on peening to address these shortcomings have been conducted.
Laser shock peening is a method of hardening by transferring the energy of a high-power laser beam to the material surface in the form of heat and shock waves, and it can also produce the self-quenching effect. This technology generates structures with high hardness values and prevents thermal deformation and stress. Laser shock peening uses 30-300 μm laser spots, thereby enabling finer and more precise processing than shot peening [18]. Kim et al. improved the surface and mechanical characteristics of high-strength titanium alloys using laser peening technology [19]. Yang et al. observed the compressive residual stress and hardness of SM45C according to the power of an yttrium-aluminumgarnet laser. They reported that irregular surfaces and tensile residual stress were produced when the laser power was not appropriate but that the appropriate laser power improved the surface and produced compressive residual stress [20]. Laser shock peening technology, however, requires a robot dedicated to laser beam irradiation, and thermal deformation of the material may occur owing to plasma exposure [21].
In this study, surface hardening using ultrasonic nanocrystal surface modification (UNSM) was investigated. UNSM is a technology that modifies a surface crystalline structure using ultrasonic vibration energy. It is more precise and ecofriendly than shot peening and more economical than laser shock peening. In addition, many studies have verified that UNSM technology improves the surface roughness, wear resistance, and fatigue strength of metal. Lee et al. confirmed the possibility of extending the service lives of press dies through wear tests [22]. Moreover, Cho et al. improved the surface characteristics of an aluminum alloy, AlSi10Mg, produced by direct metal laser sintering through the application of UNSM, and Kheradmandfard et al. utilized UNSM to improve the surface hardness of the titanium-niobiumtantalum-zirconium alloy [1,23]. Thus, many studies have been conducted to improve the surface and mechanical characteristics of various materials by applying UNSM technology [24][25][26]. Among them, there have been research cases in which UNSM has been applied to D2 steel. Amanov et al. applied UNSM treatment to D2 steel at 500 °C and confirmed an improvement in performance [27]. Pyun et al. analyzed the effects of the number and direction of UNSM treatment on the surface and mechanical characteristics of D2 and observed the surface changes between before and after heat treatment [28].
However, there have been no studies in which UNSM technology has been applied after varying the initial hardness of D2 and the surface and mechanical characteristics have been compared. D2 has been used as a tool steel in various fields owing to its numerous benefits, as described above. In particular, the hardness and toughness of D2 steel can be controlled under different heat treatment conditions, depending on the product requirements. UNSM can be applied to heat-treated D2 for further improvement. In this case, UNSM causes different changes depending on the characteristics of D2 after heat treatment. Therefore, this study focused on the changes caused by UNSM depending on the process conditions and initial hardness of the D2 base metal. D2 specimens were prepared by applying different tempering temperature conditions after quenching. UNSM was then applied to the D2 specimens using the load, interpass interval, and feed rate, among the UNSM process conditions, as parameters. After UNSM treatment, changes in the surface roughness and hardness of the specimens were observed. To analyze the causes of the property changes, the microstructures of the specimens were observed and the compressive residual stress was examined.
Specimen preparation
The base metal used in this study was AISI D2, an alloy tool steel commonly used for cold pressing. Table 1 describes the chemical composition of the D2 base metal [29]. This material has excellent wear resistance and toughness owing to its high carbon content. In addition, it can have various characteristics following heat treatment owing to the addition of molybdenum and vanadium. Thus, it is mainly hardened by quenching and tempering treatment. Figure 1 presents the detailed heat treatment conditions utilized in this study. Firstly, quenching was performed by heating D2 to the austenitizing temperature and then rapidly cooling it after a certain period of time. In the quenching process, D2 was heated to 1010 °C after preheating it twice to 650 and 850 °C. After heating, N 2 cooling was performed in a vacuum furnace. The tempering process in which heating and cooling were performed at temperature below the transformation point (723 °C) was then implemented to remove the brittleness and internal stress resulting from quenching. In this instance, the hardness and toughness of the base metal were determined by the tempering temperature. The hardness decreased as the tempering temperature increased, whereas it increased as the tempering temperature decreased. In this study, tempering was performed at 620, 600, 580, and 560 °C. The tempering process in which cooling was performed after maintaining the tempering temperature for 2 h was repeated three times. The hardness of the base metal was varied by changing the tempering temperature, and the changes in the characteristics of each base metal after UNSM treatment were observed.
UNSM
UNSM induces severe plastic deformation on the material (or component) surface by causing tens of thousands of collisions per second, with an amplitude of several μm, using ultrasonic vibration energy. It can refine the microstructure of a material to a certain depth from its surface and create regular micro-dimples on the surface. The refined structure improves the wear and impact resistance by enhancing the hardness, compressive residual stress, and wear and fatigue characteristics. Figure 2 shows a schematic diagram and the application of UNSM to the material surface. The static load is controlled with weights, and ultrasonic waves of a certain frequency are generated by the ultrasonic oscillator. The generated ultrasonic waves are transmitted through the transducer and booster devices, which deliver the load to an indenter on which a ball tip of WC is attached. WC is a cemented carbide with a hardness of about 1700 Hv. Lee et al. observed that a WC ball produced the highest hardness increase compared with Si3N4 and stainless steel 304 ball materials [22]. Generally, tool wear is observed after extended operation time, and it is recommended to replace the WC tip after 20 to 50 h of processing [30]. Here, the delivered load corresponds to the static load of the weights combined with the dynamic load resulting from the vibration energy of the ultrasonic waves. Because the UNSM device is installed on a numerical control/computer numerical control (NC/CNC) machine tool, precise processing of the surface is possible.
In the authors' previous study, the effects of the ball tip diameter (range of 2.38-6.0 mm) on the hardness and surface roughness were investigated in order to determine the optimal conditions for maximizing the hardness and minimizing the roughness. Kim et al. [30] studied the effect of the UNSM amplitude on the microstructures, residual stress, hardness, and corrosion properties of Alloy 600, using amplitudes of 30, 40, and 50 µm. According to Ye et al. [31], the repeated, high-frequency (10-30 kHz) strikes cause severe plastic deformation, which leads to nanocrystallization and compressive residual stresses at and below the material surface to a certain depth that depends on the amplitude, load, and strike rate. Based on these previous works, a ball tip diameter d of 2.38 mm, an amplitude A of 30 μm, and a frequency f of 30 kHz were set as fixed parameters in this study. The other main parameters, that is, the static load, inter-pass interval, and feed rate, were varied to analyze the effects of UNSM on heat-treated D2. Table 2 lists the fixed parameter values, and Table 3 summarizes the strain energy densities calculated under each condition. In each case, UNSM was applied once to the top surface of the D2 base metal with dimensions of 10 × 50 mm (Fig. 3).
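As a rough illustration of how the fixed and varied parameters combine, the sketch below estimates the number of ball strikes delivered per unit area for each combination of feed rate and inter-pass interval. The expression N = f/(v·s), assuming one strike per vibration cycle and uniform coverage, is our own simplification and is not the strain-energy-density formula used for Table 3.

```python
# Hypothetical illustration: estimate the number of ball strikes per unit area
# for the UNSM parameter combinations described above. The expression
# N = f / (v * s) (strikes per mm^2) assumes one strike per vibration cycle
# and uniform coverage; it is an assumption, not a formula from the paper.

FREQ_HZ = 30_000          # fixed vibration frequency f (30 kHz)

def strikes_per_mm2(feed_mm_per_min: float, interval_mm: float) -> float:
    """Approximate strike density delivered to the surface."""
    feed_mm_per_s = feed_mm_per_min / 60.0
    return FREQ_HZ / (feed_mm_per_s * interval_mm)

if __name__ == "__main__":
    for feed in (1000, 2000, 3000):             # feed rates, mm/min
        for interval in (0.010, 0.050, 0.090):  # inter-pass intervals, mm
            n = strikes_per_mm2(feed, interval)
            print(f"feed={feed:4d} mm/min, interval={interval*1000:3.0f} um "
                  f"-> ~{n:,.0f} strikes/mm^2")
```

Under this simplification, a small inter-pass interval and a low feed rate both raise the strike density, which is consistent with the higher energy input discussed in the results sections below.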
Experimental details
The purpose of this study was to find the optimal UNSM conditions that most improve a D2 surface. Surface roughness and hardness were selected as response values to analyze changes in surface characteristics before and after UNSM treatment. To measure the roughness of the UNSM-treated surface, a contact-type surface roughness meter (Mitutoyo, SJ-410) was used. For each specimen, the surface roughness (Ra) was measured ten times perpendicular to the UNSM path, as shown in Fig. 4a, and the average was adopted as the response value. Figure 4b presents an example of a surface profile measured by the SJ-410. The hardness before and after UNSM treatment was measured using a micro-Vickers hardness tester from Mitutoyo (HV-112), and indentation was performed for 10 s using a load of 0.1 kgf. In addition, the hardness in a cross section was measured to examine the hardness inside the material after UNSM treatment (Fig. 3). Each specimen was cut, mounted, and then polished to #2000 grit using silicon carbide (SiC) sandpaper. Finally, it was polished using 1 and 3 µm diamond suspensions. Upon completion of all pretreatments, the hardness was measured at 20 μm intervals in the depth direction, starting from a point 20 µm below the top surface subjected to UNSM treatment. Ten measurements were performed at each depth, and the average values were used as representative values.
An etching solution (HNO3 : 3, distilled water : 1) was used at an applied voltage and current of 2 V and 2 A, respectively. The microstructure of the etched surface was examined using a field-emission scanning electron microscope (FE-SEM; MIRA3, TESCAN). Additionally, electron backscatter diffraction (EBSD) analyses were performed to determine the phases and phase fractions. For the specimens treated with UNSM, the residual stress was measured. A Stresstech XSTRESS 3000 was utilized for the measurement, and the sin²ψ method, which determines the stress from the strain of the lattice spacing, was employed. If stress exists on the metal surface, the lattice spacing varies, changing the position of the X-ray diffraction peak. The residual stress can be measured by obtaining the lattice spacing as a function of the X-ray diffraction angle (sin²ψ) [32]. The surface residual stress was analyzed with a 2-mm diameter collimator on a 70 mm arc using Cu-Kα radiation.
Changes in roughness
Figure 5a presents photographs of the specimens that were heat-treated at different tempering temperatures and then subjected to UNSM treatment under the conditions listed in Table 3. Surface changes caused by the different conditions can be observed. Figure 5b shows the surface of the heat-treated D2 after the application of UNSM in more detail. A glossy surface is observed because micro-dimples were formed as the UNSM ball tip, vibrating 1.8 million times per minute, struck the metal surface [22].
This study aimed to determine the UNSM process conditions suitable for surface improvement of the D2 base metal, which was heat-treated to have hardness values of 40, 45, 50, and 55 HRc (designated as D2-H40, D2-H45, D2-H50, and D2-H55, respectively) as shown in Table 4. The average surface roughness Ra before UNSM treatment was 0.7 μm (see Table 4). The surface roughness values of all the UNSM-treated specimens decreased, regardless of the hardness and treatment conditions. Therefore, the roughness improvement rate Φ between before and after UNSM treatment was calculated as Φ (%) = (ΔRa / Ra_initial) × 100 (Eq. 1), where ΔRa is the change in surface roughness between before and after UNSM treatment and Ra_initial is the roughness value before treatment; the result is expressed as a percentage.
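A minimal sketch of Eq. (1) is given below; the numerical values are hypothetical examples rather than measurements from this study, and the same expression is reused later for the hardness improvement rate Φ′.

```python
# Minimal sketch of Eq. (1): the improvement rate Phi between the pre- and
# post-UNSM surface roughness (or, analogously, hardness). The numeric values
# below are hypothetical examples, not measurements from the paper.

def improvement_rate(initial: float, treated: float) -> float:
    """Return Phi (%) = |initial - treated| / initial * 100."""
    return abs(initial - treated) / initial * 100.0

if __name__ == "__main__":
    ra_initial, ra_treated = 0.70, 0.25      # um, hypothetical Ra values
    hv_initial, hv_treated = 620.0, 676.0    # HV, hypothetical hardness values
    print(f"Roughness improvement Phi = {improvement_rate(ra_initial, ra_treated):.1f} %")
    print(f"Hardness improvement Phi' = {improvement_rate(hv_initial, hv_treated):.1f} %")
```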
To compare the surface characteristics under different UNSM process conditions, Fig. 6 depicts the roughness before and after treatment together with the improvement rate. Figure 6a shows the roughness results obtained when the feed rate was increased from 1000 to 2000 and 3000 mm/min, while the load and inter-pass interval were fixed at 30 N and 50 μm, respectively. Under all conditions, the roughness is lower after treatment. In addition, the roughness after treatment increases with increasing feed rate for D2-H50 but shows no relationship to the feed rate for D2-H40, D2-H45, and D2-H55. This finding indicates that the feed rate does not significantly influence the roughness improvement.
The inter-pass interval was increased from 10 to 50 and 90 μm, while the load and feed rate were fixed at 30 N and 2000 mm/min, respectively. Figure 6b presents the results. Regardless of the initial hardness, the roughness decreases, and the roughness improvement rate increases, as the interval decreases. The greatest improvement rate is observed with an inter-pass interval of 10 μm applied to D2-H45. Figure 6c shows the roughness results obtained when the load was varied, while the inter-pass interval and feed rate were fixed at 50 μm and 2000 mm/min, respectively. For D2-H45, D2-H50, and D2-H55, with relatively high initial hardness, the roughness decreases and the roughness improvement rate increases as the static load increases. A smooth surface can be obtained under high static loads owing to the increased peening effect. In the case of D2-H40, however, the roughness improvement rate decreases as the static load increases. Meanwhile, the roughness improvement rate of D2-H55 is significantly lower than those of the other specimens; however, with the highest applied load (50 N), the surface roughness of D2-H55 is the lowest and its improvement is the highest. From these results, it can be concluded that a sufficiently high load must be applied for the surface of a high-hardness material to exhibit a significant change in surface roughness. Figure 7 shows the scanning electron microscope (SEM) images at × 500 magnification for the surfaces treated at different inter-pass intervals. The set inter-pass interval of the UNSM is consistent with the spacing between the formed tracks. When the interval is 90 μm, a relatively rough surface is evident, regardless of the initial hardness. However, as the interval decreases, a smoother surface is observed, because the number of impacts per unit area increases as the number of overlapping passes increases. Such overlapping passes deliver a higher energy density to the metal surface and are expected to affect the microstructure, mechanical characteristics, and surface characteristics significantly.
Surface topography
For detailed observation of the surface characteristics according to the initial hardness and UNSM conditions, the treated surface was analyzed. Figure 8 presents the surface images of D2-H40 and D2-H55 subjected to static loads of 10, 30, and 50 N. As shown in Fig. 8a, D2-H40 exhibits micro-valleys with crests and troughs formed by the passing ball tip under all load conditions. The micro-valleys become clearer as the static load increases. For detailed observation of the micro-valleys, the surfaces were examined at higher magnification (× 1000), revealing a relatively smooth surface under a 10 N load because the difference between the crests and troughs is not large. For the surfaces under loads of 30 and 50 N, however, a rough surface is observed. In particular, with a 50 N applied load, irregular damage is evident owing to the high load, because the static load applied to the base metal with a low initial hardness exceeds the critical range (not specified in this paper). An excessive load increases the friction between the ball tip and the material. It appears that such an increase in friction causes excessive plastic deformation [33] and decreases the roughness improvement rate. On the other hand, with low loads applied to D2-H55, as shown in Fig. 8b, regular patterns are not observed on the surface. However, with high applied loads, regular micro-valleys are evident, significantly improving the surface roughness. As described, a static load within the critical load range is helpful for improving the roughness owing to the peening effect, whereas a static load that exceeds the critical level reduces the roughness improvement.
As described above, the application of the same load (50 N) resulted in different changes in D2-H40 and D2-H55. To observe the micro-valleys caused by UNSM treatment, the specimens were cut in a direction perpendicular to the UNSM treatment passes, as shown in Fig. 9. Figure 9a presents the results of applying a static load of 50 N to D2-H40. Cracks are observed in the surface zones marked with white dotted lines. Wave-shaped patterns appear owing to the plastic deformation of the material under an excessive load, and cracks are observed under the waves. It is estimated that the roughness improvement rate is reduced when a load of 50 N is applied to D2-H40 owing to the generation of these wave patterns and cracks. In addition, these features may act as starting points for corrosion, thereby reducing the corrosion resistance [30]. In the case of D2-H55, however, the top surface was leveled by the ball tip, unlike D2-H40, as shown in Fig. 9b. When magnified at × 7000 magnification, uniformly compressed microstructures are observed in the moving direction of the UNSM ball tip. Consequently, the surface roughness could be decreased by applying an appropriate static load according to the initial hardness of the base metal. For D2-H45, D2-H50, and D2-H55, a load of 50 N was appropriate and decreased the surface roughness; for D2-H40, however, the application of 50 N damaged the surface and increased the roughness. These results indicate that the roughness improvement increases as the inter-pass interval decreases regardless of the initial hardness of the base metal, but that the static load effectively improves the roughness only when it lies within the critical range. An increase in the static load (within the critical range) may also increase the dislocation density by leveling the surface, thereby improving the corrosion resistance [30].

Fig. 10: Comparison of the surface hardness and hardness improvement rate (Φ′) according to the UNSM process conditions of (a) feed rate, (b) inter-pass interval, and (c) load.
Surface hardness
To examine the changes in surface hardness after UNSM treatment, the surface hardness of each treated specimen was measured. The micro-Vickers hardness values of D2-H40, D2-H45, D2-H50, and D2-H55 before UNSM treatment were 420, 460, 530, and 620 HV, respectively. As with the surface roughness change described above, the surface hardness increased for all specimens after UNSM treatment, regardless of the process conditions. The hardness improvement rate Φ′ was calculated in the same manner as the surface roughness improvement rate in Eq. (1). Figure 10 presents the results. Figure 10a depicts the hardness results according to the feed rate; as with the surface roughness, there is no clear relationship between the hardness and the feed rate. Figure 10b shows the hardness results according to the inter-pass interval. For all specimens, the highest hardness is observed when the interval is 10 μm, because the areas impacted by the ball tip overlap with each other as the interval decreases, thereby increasing the compression of the surface microstructure and the grain refinement. Regarding the hardness change due to the load (Fig. 10c), the hardness increases, and the hardness improvement rate also increases almost linearly, as the load increases. An increase in the load applied per unit area causes larger plastic deformation, which decreases the grain size and hinders dislocation sliding. Therefore, an increase in the load increases the hardness owing to the strain hardening of the material. The Hall-Petch relation, σ_y = σ_i + k_y·d^(-1/2) (Eq. 2), indicates that the yield strength σ_y of a material increases as the average grain diameter d decreases; here, σ_i is the friction stress that hinders intragranular dislocation motion, and k_y is a constant representing the relative hardening contribution of the grain boundary. The relation can also be expressed in terms of hardness as Hv = H_0 + k_H·d^(-1/2) (Eq. 3), which indicates that the hardness Hv increases as the grain size d decreases; here, H_0 and k_H are hardness-based constants [34].
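The sketch below illustrates the Hall-Petch-type hardness relation of Eq. (3). The constants H_0 and k_H are not reported in the paper, so the values used here are hypothetical and serve only to show the trend of increasing hardness with grain refinement.

```python
# Minimal sketch of the Hall-Petch-type hardness relation Hv = H0 + kH * d**-0.5
# discussed above. H0 and kH are material constants that are not reported in the
# paper; the values below are hypothetical and only illustrate the trend that
# hardness rises as the grain size d (in um) is refined.
import math

H0 = 400.0    # hypothetical base hardness, HV
KH = 170.0    # hypothetical Hall-Petch coefficient, HV * um**0.5

def vickers_hardness(grain_size_um: float) -> float:
    """Hardness predicted from grain size via the Hall-Petch relation."""
    return H0 + KH / math.sqrt(grain_size_um)

if __name__ == "__main__":
    for d in (0.65, 0.53, 0.51, 0.36):   # grain sizes reported for D2-H40/H55, um
        print(f"d = {d:.2f} um -> Hv ~ {vickers_hardness(d):.0f}")
```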
Based on this principle, the hardness increases as the grain size decreases owing to UNSM treatment. In addition, as the amount of energy applied per unit area increases, the hardness increases owing to the grain refinement. Therefore, as the inter-pass interval decreases and the load increases, the hardness improvement rate increases because the conditions are more favorable for grain refinement and plastic deformation. The next section presents quantitative comparisons of grain refinement and compressive residual stress.
Hardness in cross-section
An increase in load also affects the depth to which the impact of UNSM is delivered. Khan reported that a high static load in UNSM increased the hardening depth by generating smaller grains and deeper nanocrystalline layers [35]. The cross-sectional hardness values of the specimens under the highest static load of 50 N were measured. Figure 11 shows the hardness values of D2-H40, D2-H45, D2-H50, and D2-H55 in the depth direction. As with the surface hardness, the hardness near the surface is increased by UNSM, and a significant increase in hardness is observed mainly in the upper part. The hardness decreases as the depth increases, and it converges to the initial hardness of each base metal when a certain depth is reached. The depth affected by UNSM and the hardness improvement differ depending on the base metal, as depicted in Fig. 12. First, the depth affected by UNSM ranges from 220 to 400 μm. D2-H40 exhibits the greatest UNSM-affected depth (400 μm). For D2-H55, on the other hand, the hardness increase by UNSM is observed only to a depth of 220 μm. In other words, the UNSM-affected depth tends to decrease as the initial hardness of the base metal increases. For the material with low hardness, refined grains are formed at a greater depth under the same load because plastic flow occurs more easily. On the other hand, D2-H40 exhibits the lowest hardness improvement rate of 6.5%, whereas D2-H55, which has the smallest UNSM-affected depth, exhibits the greatest hardness improvement rate of 9%. UNSM is a technology that modifies microstructures by delivering impact energy using ultrasonic waves. Such impacts can be considered as micro-cold forging: they cause plastic and elastic deformation in the surface layer, leading to refined structures and compressive residual stress. The fine structures and residual stress were analyzed to investigate the behavior of the hardness improvement rate in the depth direction.
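One simple way to extract the UNSM-affected depth from a cross-sectional hardness profile is to find the depth at which the hardness has converged back to the base-metal value, as sketched below. The profile used here is synthetic and only mimics the qualitative shape of Fig. 11.

```python
# Minimal sketch of how the UNSM-affected depth could be estimated from a
# cross-sectional hardness profile: the depth at which the hardness has
# converged back to the base-metal value (within a tolerance). The profile
# values below are hypothetical, not the measured data of Fig. 11.

def affected_depth(depths_um, hardness_hv, base_hv, tol_hv=5.0):
    """Return the first depth at which hardness stays within tol of the base value."""
    for depth, hv in zip(depths_um, hardness_hv):
        if abs(hv - base_hv) <= tol_hv:
            return depth
    return depths_um[-1]  # never converged within the measured range

if __name__ == "__main__":
    depths = list(range(20, 521, 20))                              # 20 um steps
    profile = [660 - 0.6 * d if d < 400 else 420 for d in depths]  # hypothetical
    print(f"UNSM-affected depth ~ {affected_depth(depths, profile, base_hv=420)} um")
```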
Changes in grain size
As can be seen from the Hall-Petch relation (Eq. 3), a decrease in grain size corresponds to an increase in hardness. If refined grains are formed in deep areas, the thickness of the hardened layer increases. For D2-H40, tempering was performed at a relatively high temperature. For D2-H55, however, tempering was performed at a low temperature; thus, a high hardness was obtained owing to the fine microstructure, but the ductility decreased. To examine the microstructural refinement according to the heat treatment conditions of the base metal, the grain changes were observed using EBSD. Figure 13 shows the phase diagrams of the specimens with different hardness values. Red represents martensite (or ferrite), blue corresponds to austenite, and the other colors indicate carbide. Regardless of the hardness of the base metal, most of the phases were transformed from austenite to martensite due to quenching. The austenite phase either appears rarely (0.03%) or is not observed. Chromium carbides (Cr23C6 and Cr7C3) are partially distributed in the structures and precipitated during quenching. Figure 14 compares the grain sizes of D2-H40 (Fig. 14a) and D2-H55 (Fig. 14b) before and after UNSM treatment. The average grain size of D2-H55 before the treatment is 0.51 μm, which is smaller than that of D2-H40 (0.65 μm). The average grain size of D2-H40 decreases by approximately 18%, from 0.65 μm before treatment to 0.53 μm after treatment. Meanwhile, the average grain size of D2-H55 decreases by approximately 29%, from 0.51 to 0.36 μm. As the grains in the upper part of D2-H55 are further refined, its hardness improvement rate is higher than that of D2-H40 (Fig. 12). D2-H55 exhibits limited plastic flow because it has finer grains and more grain boundaries than D2-H40. D2-H55, however, has a relatively low energy absorption through deformation because of its low ductility. Consequently, D2-H55 exhibits a small UNSM-affected depth under the same load, as shown in Fig. 12, because it has relatively limited plastic flow and low energy absorption. Figure 15 shows the microstructural changes of D2-H40 and D2-H55 under the same load. For D2-H40, the plastic flow was free because its ductility was relatively high, which caused a large hardening depth by dispersing the impact energy into the surrounding area. Because the repeated impact loads of UNSM could easily be dispersed into the surrounding area, the degree of refinement was lower than that of D2-H55. Meanwhile, for D2-H55, grain refinement occurs only in a limited area even under repeated impact energy because the plastic flow is limited owing to the fine grains. Therefore, for D2-H40 (Fig. 15a), the plastic flow and energy transfer caused by UNSM treatment extend over wide ranges. In the case of D2-H55 (Fig. 15b), on the other hand, only a narrow portion of the upper part exhibits plastic deformation because the plastic flow is limited compared with that of D2-H40, but higher grain refinement is evident owing to the concentration of energy without transfer.

Fig. 12: Φ′ and UNSM-affected depth according to the initial base metal hardness.
Changes in residual stress
If a material is subjected to repeated non-uniform plastic processing, heating, or cooling, residual stress is generated inside it. If such residual stress exists in the form of tensile stress, it may cause premature fracture of the material. UNSM forms a large compressive residual stress in the surface layer by delivering repeated compressive loads, improving the fatigue life of the material. To examine the formation of compressive residual stress according to the initial hardness of D2 under the same UNSM conditions, the surface residual stress was indirectly measured using the sin²ψ method, which determines the stress from the strain of the lattice spacing caused by residual stress [32]. The residual stress was calculated using the following simple relation [36]: σ = [E / (1 + ν)] × (d_ψ − d_0) / (d_0 · sin²ψ), where σ is the residual stress, d_0 is the lattice spacing measured in the stress-free state, d_ψ is the lattice spacing in the ψ direction changed by the stress, E is the Young's modulus of the material (used value: 211 GPa), and ν is the Poisson's ratio (used value: 0.3).
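The sketch below illustrates the sin²ψ evaluation described above, obtaining the stress from the slope of the lattice spacing versus sin²ψ using the elastic constants quoted in the paper. The d-spacing values are synthetic placeholders, not measured diffraction data.

```python
# Minimal sketch of the sin^2(psi) evaluation described above: the residual
# stress is obtained from the slope of the lattice spacing d_psi versus
# sin^2(psi), scaled by the elastic constants E and nu used in the paper.
# The d-spacing values below are hypothetical, not measured data.
import numpy as np

E_GPA, NU = 211.0, 0.3          # elastic constants quoted in the paper

def residual_stress_mpa(sin2psi, d_psi, d0):
    """Stress (MPa) from the linear fit of d_psi/d0 - 1 against sin^2(psi)."""
    strain = np.asarray(d_psi) / d0 - 1.0
    slope = np.polyfit(np.asarray(sin2psi), strain, 1)[0]
    return (E_GPA * 1e3 / (1.0 + NU)) * slope   # GPa -> MPa

if __name__ == "__main__":
    d0 = 1.1700                                     # Angstrom, hypothetical
    sin2psi = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
    d_psi = d0 * (1.0 - 7.0e-3 * sin2psi)           # synthetic compressive trend
    print(f"sigma ~ {residual_stress_mpa(sin2psi, d_psi, d0):.0f} MPa")
```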
Figure 16a presents the difference in the diffraction peaks before and after UNSM treatment. A peak shift was observed after UNSM treatment; the leftward shift of the diffraction peak indicates that compressive residual stress was formed [37]. Figure 16b compares the compressive residual stresses formed on the surfaces of the specimens. Firstly, before UNSM treatment, a low average tensile stress of 12.4 MPa is observed on the surface of each specimen, because stress relief occurred as the tempering was repeated three times after quenching. For each specimen, the compressive residual stress is at least 20 times higher after UNSM treatment, regardless of the initial hardness of the base metal. Here, the same UNSM treatment conditions, that is, a load of 50 N, an inter-pass interval of 20 μm, and a feed rate of 2000 mm/min, were applied to each specimen. For D2-H40, a compressive residual stress of 679.2 MPa is observed. As the initial hardness of the base metal increases, the compressive residual stress increases, and D2-H55 exhibits the highest compressive residual stress of 1130.6 MPa. Under the same UNSM conditions, the hardness improvement rate also increases as the initial hardness of the base metal increases, owing to the increase in the compressive residual stress formed on the surface. These results show that higher grain refinement and larger compressive residual stress are generated with increasing initial hardness. Therefore, the specimens with higher initial hardness values exhibit higher hardness improvement rates.
Conclusions
In this study, the parameters (load, inter-pass interval, and feed rate) of UNSM were varied and applied to D2 base metals with different initial hardness values (40, 45, 50, and 55 HRc) through heat treatment at different temperatures. In addition, the changes in the surface and mechanical characteristics after UNSM were observed for each specimen. It was found that the surface roughness improvement rate increased as the inter-pass interval decreased regardless of the initial hardness of the base metal. In addition, an increase in the static load within the critical load range improved the roughness. For D2-H40, the specimen with an initial hardness of 40 HRc, however, the roughness improvement decreased under a load of 50 N, which was excessive. Furthermore, the surface hardness improvement increased as the load increased and the inter-pass interval decreased regardless of the initial hardness of the base metal. High loads and overlapping passes could deliver a higher energy density to the metal surface, which increased the hardness improvement because of the formation of fine crystal structures and large compressive residual stress. D2-H40, with the lowest initial hardness, exhibited the highest hardening depth of 400 μm after the UNSM treatment. D2-H55, which had the highest initial hardness, exhibited the greatest hardness improvement rate of 9%. For D2-H40, the plastic flow was free because its ductility was relatively high, which enabled grain refinement in deep areas because the impact energy of the UNSM was dispersed in the surrounding area. The hardness improvement rate of D2-H40, however, was lower than that of D2-H55 because the degree of grain refinement was small as the impact energy was dispersed.
In the case of D2-H55, grain refinement occurred only in a narrow area because the plastic flow was limited. The hardness improvement was therefore greater owing to the concentrated grain refinement in this limited area. These results were confirmed by grain size measurements through EBSD. After UNSM treatment, the grain size decreased by approximately 18% for D2-H40 and 29% for D2-H55. In addition, when the residual stress was measured, it was found that D2-H55 exhibited the largest compressive residual stress of − 1130.6 MPa after UNSM treatment under the same load. The results of this study are expected to serve as basic data for setting the UNSM process conditions for the surface hardening of D2. However, the tendencies of the surface roughness and hardness according to the UNSM feed rate were not clearly observed in this study. It appears that the influence of the feed rate was not significant because the treated area of the ball tip was small. Therefore, in the future, the effects of the feed rate on the surface and mechanical characteristics will be observed by increasing the UNSM treatment area. In addition, the changes in the fatigue characteristics due to the increased compressive residual stress will be evaluated under the same conditions for D2-H40, D2-H45, D2-H50, and D2-H55. | 8,451.8 | 2021-06-04T00:00:00.000 | [
"Materials Science"
] |
Positronium microwave spectroscopy using Ramsey interferometry
We describe an experimental arrangement suitable for performing microwave spectroscopy measurements of the positronium (Ps) n = 2 fine structure using Ramsey interferometry. Simulations are presented concerning the production of energetic 2³S₁ Ps atoms from a Xe gas cell, and their subsequent interactions with a pair of waveguides in which coherent microwave radiation fields are applied. We conclude that, for realistic experimental parameters optimized using a Monte Carlo simulation (a gas cell length of 10 cm, a gas pressure of 0.5 Pa, and a positron beam energy of 40 eV), count rates on the order of 50 Hz should be possible.
Introduction
Because positronium (Ps) [1] has no nuclear structure [2], it is expected to be described by QED theory to extremely high precision [3], with weak-force contributions occurring at the part-per-trillion level [4]. As a result, Ps can, in principle, be used to search for 'new physics' [5], since any confirmed deviation of Ps properties from QED would necessarily involve physical mechanisms not included in that theory [6]. Such investigations are only possible, however, if Ps measurements can be performed with a precision comparable to that of the corresponding calculations. Unfortunately, Ps experiments are in general far less precise than theory.
QED calculations of Ps energy levels are complete up to order O(mα⁶) (e.g., [7,8]), and work on the higher-order terms is ongoing (e.g., [9]). For the n = 2 fine structure intervals, the uncertainty in theory, obtained from estimates of the magnitudes of uncalculated terms, is 80 kHz [10]. Until recently, the experimental uncertainties of these intervals were on the several-MHz scale [11][12][13].
Recently, new measurements of the Ps n = 2 fine structure 2³S₁ → 2³P_J (J = 0, 1, 2) have been performed [14,15]. For these measurements, a positron trap was used to generate a pulsed positron beam [16] which was implanted into a mesoporous silica film to generate a pulsed Ps gas [17]. Excited state Ps atoms were then obtained using laser excitation [18]. This approach is more efficient than the previous methods [19], and also results in the creation of slower atoms, with typical energies of around 50 meV [20]. This means that Ps atoms passing through waveguides (whose size is dictated by the frequency of the radiation used) experience less transit-time broadening, and that lower power microwave radiation can be employed, reducing power broadening effects (e.g., [21]) and thus resulting in narrower lineshapes.
These advances allowed sub-MHz precision to be obtained, with limitations arising primarily from statistical uncertainties. However, despite the improved precision, a disagreement with theory was observed [14], as well as asymmetries in some of the measured lineshapes [15], suggesting the presence of unknown systematic effects. Subsequent numerical simulations of the experimental apparatus have indicated that reflections of microwave radiation in the vacuum chamber back into the waveguides may have been responsible for the observed asymmetries [22].
An intrinsic limitation of direct spectroscopic measurements of the Ps n = 2 fine structure is that the transitions have a natural linewidth of 50 MHz, determined by the 3.19 ns radiative lifetime of the 2³P_J levels. In order for experiments to reach a level of precision commensurate with the 80 kHz theory uncertainties, the 50 MHz line must be split by a factor of approximately 1000. While this is certainly possible (for example, Beyer and co-workers have obtained measurements with a precision corresponding to one part in 10,000 of the 20 MHz linewidth in one-photon 2S-4P transitions in hydrogen [23]), it requires a complete understanding of the lineshape; even small effects that distort such broad lineshapes can result in large uncertainties in the inferred transition frequencies.
As discussed in more detail in Sect. 4, this problem can be mitigated using the Ramsey technique of separated oscillatory fields (SOF) [24]. The SOF method addresses the linewidth problem because it allows one to effectively obtain narrower linewidths by replacing the single interaction region with a pair of (coherent) microwave radiation fields, separated by some distance D. Atoms are excited to a superposition state in one field, and then pass with speed v_Ps through the field-free region for a time T = D/v_Ps. The second interaction returns the atoms to the initial state, yielding an interference effect that narrows the line by a factor of two, even for T = 0. For T > 0 the linewidth is further narrowed, and depends on T, rather than the Ps lifetime (essentially, by selecting longer-lived atoms). The price of this advantage is loss of signal; the loss fraction depends on the particular superposition state populations, but for lifetimes close to 3.2 ns, a factor of 5 linewidth reduction (i.e., T = 16 ns) would incur a signal loss of two orders of magnitude. This highlights why standard Ps production methods [19] cannot be used for SOF measurements: 50 meV Ps atoms have a speed v_Ps = 9 × 10⁶ cm/s, and would travel only 0.14 cm in 16 ns. Since this is much less than the ≈ cm dimensions of the relevant waveguides used for the fine structure measurements [15], a SOF signal cannot be obtained using such slow atoms.
Here we discuss an experimental scheme in which a beam of Ps atoms in the 2³S₁ level (henceforth Ps*) may be generated with an energy sufficient to allow SOF measurements of the Ps fine structure to be performed. Such measurements require atoms with energies of several tens of eV, allowing them to travel cm-scale distances during the mean 2P lifetime rather than mm, so as to match the waveguide length scales. Most solid-state Ps converters emit low-energy Ps atoms, and offer no or limited energy tunability [19]. Some materials do emit more energetic Ps (for example, MgO emits 4 eV Ps [25]), but they generally do so with broad energy and angular distributions [26].
Forward-directed Ps beams with higher energies and lower energy spread can, however, be obtained using a gas cell target [27,28], a beam-foil method [29], or a photodetached Ps ion beam [30,31]. The latter technique is somewhat involved and is not well suited to the production of excited state Ps beams. The beam-foil method has a low efficiency, and it is not known whether any significant fraction of excited state atoms is produced. However, gas cell beam production is relatively straightforward, and has the advantage that, for certain gas targets, a useful fraction of excited state (n = 2) atoms will be spontaneously generated [32,33], allowing for the production of a forward-directed and energy-tunable Ps* beam. Using Monte Carlo methods, we simulate the properties of such a beam and describe its interaction with two spatially separated waveguides to generate SOF lineshapes for different experimental conditions.
Experimental apparatus
A schematic of the proposed experimental setup is shown in Fig. 1. A standard [34] slow positron beam is produced using a 22Na radioactive source with a solid neon moderator [35]. Typical conical neon moderators have efficiencies of ≈ 0.5% [36] (NB: higher values that have been reported in the literature probably do not represent long-term beam production). Thus, using a 50 mCi (1.85 GBq) source it is possible to generate positron beams of ≈ 10⁷ s⁻¹. The DC positron beam will be magnetically guided and passed through a gas cell to generate both ground state and excited state Ps atoms [37]. The length of the gas cell and the gas pressure required to optimize Ps* production have been determined by Monte Carlo simulations, as discussed in Sect. 3. Experimentally, Ps* production will be optimized by measuring the beam intensity on a position-sensitive microchannel plate (MCP) detector. The MCP will be used to measure both the efficiency of Ps* production and the Ps* beam divergence; the latter can be inferred by measuring the spatial profile of the beam at different distances from the gas cell.
In a magnetic field free region, the Ps* mean lifetime will be 1136 ns [38], which means a 30 eV beam (see below) will travel ≈ 261 cm in one lifetime. The experiment therefore requires simultaneous optimization of the beam loss through annihilation, transmission through the waveguides and detection, gas cell pressure/length, and the positron beam energy. Moreover, ground state atoms will also be produced with almost the same energy as the Ps* beam. Therefore, the system geometry must also take into account the requirement to reduce shorter-lived ground state Ps detection (i.e., by using longer path lengths). We note also that, by using mu-metal shielding, we expect to reduce the magnetic field in the measurement region to negligible levels. Ps* detection can be verified and discriminated from Ps, electron, or positron detection in various ways: (1) a transverse magnetic field can be applied to deflect charged particles away from the MCP, (2) an electric field can be applied using a set of quenching plates to induce Ps* to decay, and (3) the MCP front plate can be biased to reflect charged particles (either positive or negative).

Fig. 1: Schematic of the Ps* beam and waveguide arrangement. Excited state (2³S₁) atoms are emitted from the Xe gas cell and subsequently pass through a double waveguide SOF setup. The microwave radiation fields in the two waveguides (WG1 & WG2) are controlled by two phase-locked signal generators (RFG1 & RFG2), and the transmitted atoms are detected using a microchannel plate detector (MCP) after passing through a set of electric field quenching plates (QP).
In addition to Ps* beam optimization, it will also be necessary to investigate the conditions for optimal signal generation. The main compromise will be the improved resolution versus the increased loss obtained as the length of the region between the waveguides is increased. Details regarding the expected SOF signal are discussed in Sect. 4, but in general a waveguide separation of several cm will be required. This spacing is a crucial parameter for SOF measurements, and the waveguides will be built so that their separation can be varied in situ.
Gas cell simulations
Ps beams based on gas cells are useful in that they are energy tunable over a broad range, as opposed to the relatively slow quasi-monoenergetic emission from solid targets. However, Ps production in gas cells may be limited by secondary processes such as fragmentation or scattering. In order to optimize the gas cell length and gas pressure, and to estimate the fraction of excited state positronium produced in the experimental apparatus, a Monte Carlo simulation was performed. In this simulation, positrons were generated with an energy distribution expected to match the beams produced by neon moderators, i.e., with a variable beam energy and an intrinsic energy spread of ≈ 2 eV full width at half maximum (FWHM) [39]. The beam was modeled considering only the 1-D velocity distribution in the z direction, defined as the positron beam axis. The fraction of Ps* emitted into the forward angles (0 to 6 degrees) was estimated using the calculated differential cross sections of McAlinden and Walters [40]. These data give the fraction of positronium scattered into the forward angles as 22%. This number was then verified using the experimental data of Shipman et al. [41].
The MC code propagated incident positrons through the gas cell with a spatial step size dl = 1 mm. The probability of generating Ps atoms was determined using the (measured) Ps formation cross section Q_Ps [42] for xenon. The cross section is related to the cell parameters and Ps intensities via I_T = I_0 exp(−n_d Q_Ps l), where n_d is the number density of the gas, l is the gas cell length, I_T is the transmitted intensity, and I_0 is the initial intensity. This equation was used to determine the probability that a positron entering the cell will form a ground-state Ps atom. The energy of the produced positronium is given by E_Ps = E_+ − E_i + 6.8 eV/n², where E_+ is the energy of the incoming positron beam, E_i is the ionization threshold of xenon, and n is the principal quantum number. The ionization threshold for xenon is 12.13 eV [43]. The probability that a Ps atom was formed in an excited (n = 2) state was obtained using Q_2P, the cross sections measured by Murtagh et al. [33]. These measurements, shown in Fig. 2a, give the fraction of Q_2P formation relative to the total positronium formation (Q_Ps) for all n. The measured data were sensitive to Ps atoms produced in 2P states. Assuming that all n = 2 Ps states are populated equally in positron-xenon collisions, the cross section for Ps* formation will be a factor of 4 smaller than Q_2P. To give a crude approximation, a 4th-order polynomial fit to Q_2P was used to interpolate the cross section to arbitrary positron energies (see Fig. 2a).
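The sketch below outlines a single Monte Carlo trial of the kind described above, assuming per-step interaction probabilities of the form P = 1 − exp(−n_d·Q·dl). The cross-section values, the Ps* fraction and the simplified treatment of fragmentation (a fragmented atom is simply discarded) are placeholders rather than the data of refs. [33, 42, 44].

```python
# Minimal sketch of a gas-cell Monte Carlo trial, assuming per-step interaction
# probabilities P = 1 - exp(-n_d * Q * dl). All cross sections and the Ps*
# fraction are placeholders, not the measured data used in the paper, and a
# fragmented atom is simply discarded rather than returned to the positron pool.
import math
import random

K_B = 1.380649e-23          # Boltzmann constant, J/K

def number_density(pressure_pa: float, temperature_k: float = 293.0) -> float:
    """Ideal-gas number density of the Xe target (m^-3)."""
    return pressure_pa / (K_B * temperature_k)

def propagate_positron(cell_length_m=0.10, dl_m=0.001, pressure_pa=0.5,
                       q_ps=3e-20, frac_2s=0.05, q_frag=1e-19):
    """One MC trial; returns True if a Ps* (2 3S1) atom exits the cell."""
    n_d = number_density(pressure_pa)
    p_form = 1.0 - math.exp(-n_d * q_ps * dl_m)    # Ps formation per step
    p_frag = 1.0 - math.exp(-n_d * q_frag * dl_m)  # Ps* fragmentation per step
    z = 0.0
    while z < cell_length_m:
        z += dl_m
        if random.random() < p_form:
            if random.random() > frac_2s:          # not formed in the 2 3S1 level
                return False
            while z < cell_length_m:               # drift the Ps* to the exit
                if random.random() < p_frag:
                    return False                   # simplification: discard on fragmentation
                z += dl_m
            return True
    return False

if __name__ == "__main__":
    trials = 100_000
    hits = sum(propagate_positron() for _ in range(trials))
    print(f"Ps* yield per positron ~ {hits / trials:.4%}")
```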
Excited state Ps atoms were further propagated through the gas cell with checks for fragmentation. If the Ps* atoms were ionized, then the positron continued through the cell and was checked for further Ps production. Ps fragmentation was determined using the calculated ionization cross sections of Starrett et al. [44]. These calculations, performed for ground state atoms, show that the Ps fragmentation cross section increases rapidly from 0 eV and peaks around 50 eV, meaning that Ps* beam losses due to fragmentation will increase for higher positron beam energies in the range of interest. Based on geometrical scaling, one can expect this cross section to scale with n⁴ [45], meaning that it will be a factor of 16 larger for Ps*. The calculated ionization cross sections are in agreement with measurements [46] performed in the Ps energy range 18-30 eV.
The total Ps* production efficiency, including Ps fragmentation, is shown in Fig. 2b. This percentage is relative to the number of incoming positrons sampled, which was 100,000. These data indicate that useful Ps* production should occur for positron beam energies up to 50 eV.
The optimum gas cell pressure and length were determined from MC simulations performed for positron energies ranging from 0 to 60 eV, pressures ranging from 0 to 6 Pa, and cell lengths up to 15 cm. The results of these simulations using positron energies of 21 and 40 eV are shown in Fig. 3a, b, respectively. These data show that the optimum Ps* formation can occur over a range in which the gas pressure and cell length are balanced. That is, a greater cell length allows for more positronium production, but the positronium produced has to travel further to exit the cell, increasing the chance of fragmentation. Similarly, a higher gas pressure increases the efficiency of positronium formation but also increases the chance of fragmentation. For both of these positron beam energies, the optimum gas cell length and cell pressure were found to be 10 cm and 0.5 Pa. However, there is also good Ps* production for shorter cell lengths and higher pressures, for example at 7 cm and 1 Pa for a 21 eV beam. Due to the large uncertainties in the experimental data used for this MC, there is a large estimated uncertainty (> 30%) in the MC results.
Separated oscillatory field lineshape
The Ps fine structure comprises three intervals 2³S₁ → 2³P_J (J = 0, 1, 2). For the present discussion, we focus on the transition to the J = 0 state, which is expected to occur at a transition frequency of 18.49825 GHz [10]. The waveguides used for these measurements are therefore of type WR-51, which have dimensions 12.95 × 6.48 mm [15], meaning that the d dimension described in Fig. 4 is 6.48 mm. The field-free region is a parameter that will need to be optimized experimentally. For the current simulations we use a spacing of 4 × d = 25.92 mm.
A general schematic of the SOF measurement scheme is shown in Fig. 4, in which two waveguides define regions of width d containing an RF field, and are separated by a field-free region of width D. For a positron beam energy of 40 eV, the mean Ps n = 2 energy is 29.6 eV, with a Ps speed of 2.3 × 10⁸ cm/s. Thus the field interaction time is τ = 2.8 ns, and the field-free interaction time is T = 11.4 ns. The maximum loss of signal due to radiative decay from the 2P component of the superposition states will be proportional to exp[−γ₂(τ + T)]. For the parameters above, this amounts to a loss of 98.8% of the signal. Note that this loss rate will be lower for superposition states containing more S character.
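The arithmetic behind these numbers is summarised in the short sketch below, which assumes only the Ps mass (two electron masses), the waveguide geometry given above and the 3.19 ns 2P lifetime.

```python
# Minimal sketch reproducing the transit-time and survival estimates quoted
# above for the 2 3S1 -> 2 3P0 geometry (waveguide width d = 6.48 mm, field-free
# gap D = 4*d). Ps mass is two electron masses; the constants are standard
# values, and the 3.19 ns 2P lifetime is taken from the text.
import math

M_E = 9.109_383_7e-31     # electron mass, kg
Q_E = 1.602_176_6e-19     # elementary charge, J per eV
TAU_2P = 3.19e-9          # 2P radiative lifetime, s

def ps_speed(energy_ev: float) -> float:
    """Ps speed (m/s) for a given kinetic energy; Ps mass = 2 m_e."""
    return math.sqrt(2.0 * energy_ev * Q_E / (2.0 * M_E))

if __name__ == "__main__":
    e_ps = 40.0 - 12.13 + 6.8 / 2**2      # Ps* energy for a 40 eV positron beam
    v = ps_speed(e_ps)
    d, big_d = 6.48e-3, 4 * 6.48e-3       # waveguide width and field-free gap, m
    tau, big_t = d / v, big_d / v
    loss = 1.0 - math.exp(-(tau + big_t) / TAU_2P)
    print(f"E_Ps* = {e_ps:.1f} eV, v = {v:.2e} m/s")
    print(f"tau = {tau*1e9:.1f} ns, T = {big_t*1e9:.1f} ns, max loss = {loss:.1%}")
```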
In SOF measurements, linewidth reduction occurs via two effects, termed interference narrowing and uncertainty-principle narrowing by Fabjan and Pipkin [47]. Interference narrowing arises from the combined effect of the two separated field regions (d) and depends on their relative phase. Uncertainty-principle narrowing occurs by selecting longer-lived atoms that are able to traverse the region D. The expected SOF lineshapes were modeled following the treatment of Fabjan and Pipkin [47], in which the system is described by a wave function of the form |ψ(t)⟩ = c₁(t)|1⟩ + c₂(t)|2⟩, where |1⟩ corresponds to the 2³S₁ level and |2⟩ corresponds to the 2³P₀ level. The decay rate of |1⟩ is γ₁ = (1136 ns)⁻¹, the 2³S₁ annihilation rate. The decay rate of |2⟩ is γ₂ = (3.2 ns)⁻¹, which is the 2P radiative decay rate. The amplitudes of the wavefunction after passing through the two separated waveguides are given by Eqs. (19a) and (19b) of reference [47], in which μ_E is the transition dipole moment, E₀ is the amplitude of the applied RF field, and ω₁ and ω₂ are the resonance angular frequencies of the respective sublevels. These equations describe the evolution of the wavefunction of an atom that traverses two separated oscillatory fields in a time τ, and a field-free region in a time T. The first RF field is described by Ē₀ cos(ωt), and the second field by Ē₀ cos(ωt + δ), with oscillation frequency ω, amplitude Ē₀, and relative phase δ. We assume here that the fields turn on and off instantly and with ideal amplitude profiles. The measurement consists of detecting the surviving Ps* fraction on the MCP detector. This is equivalent to measuring c₂(τ + T + τ), and the SOF signal S is then given by the difference of measurements with the fields in and out of phase. Simulated SOF signals for different Ps* energies are shown in Fig. 5. This signal is normalized and applies to any arbitrary microwave power as long as there is no significant power broadening. In this figure, the center peak occurs at the resonant frequency ν₀ = 18 498.25 MHz. This peak is narrower than the single-excitation lineshape owing to the interference term that results from Eq. 13, as well as the selection of longer-lived atoms. Thus, the centre-peak width is determined by the interaction time within the field-free region (T) [47]. Lower energy positronium beams will take longer to traverse the field, and will therefore exhibit narrower linewidths, although this comes at the cost of signal intensity.
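As an illustration of how such lineshapes can be generated numerically, the sketch below propagates a decaying two-level system through the field-τ / free-T / field-τ sequence using a rotating-wave approximation, rather than the closed-form expressions of Fabjan and Pipkin [47]. The Rabi frequency is a placeholder value, and the sign and phase conventions are our own assumptions.

```python
# Minimal numerical sketch of an SOF signal for a decaying two-level system,
# assuming a rotating-wave (RWA) model rather than the full treatment of
# Fabjan and Pipkin [47]. States |1> (2 3S1) and |2> (2 3P0) evolve under a
# non-Hermitian effective Hamiltonian; the SOF signal is the difference in the
# surviving |1> population for in-phase and out-of-phase second fields.
# The Rabi frequency is a placeholder value, not a fitted experimental number.
import numpy as np
from scipy.linalg import expm

GAMMA1 = 1.0 / 1136.0      # 2 3S1 decay rate, ns^-1
GAMMA2 = 1.0 / 3.19        # 2 3P0 decay rate, ns^-1

def segment(delta, omega_rabi, phase, duration):
    """Propagator for one segment (field on if omega_rabi != 0); times in ns."""
    h_eff = np.array([
        [-0.5j * GAMMA1,                          0.5 * omega_rabi * np.exp(1j * phase)],
        [0.5 * omega_rabi * np.exp(-1j * phase),  delta - 0.5j * GAMMA2],
    ])
    return expm(-1j * h_eff * duration)

def survival(delta, omega_rabi, tau, big_t, rel_phase):
    """|c1|^2 after field (tau) + free flight (T) + field (tau, phase shifted)."""
    psi = np.array([1.0 + 0j, 0.0 + 0j])
    psi = segment(delta, omega_rabi, 0.0, tau) @ psi
    psi = segment(delta, 0.0, 0.0, big_t) @ psi
    psi = segment(delta, omega_rabi, rel_phase, tau) @ psi
    return abs(psi[0]) ** 2

if __name__ == "__main__":
    tau, big_t = 2.8, 11.4                       # ns, as estimated above
    omega = 0.3                                  # placeholder Rabi frequency, rad/ns
    for d in np.linspace(-0.6, 0.6, 13):         # detuning around resonance, rad/ns
        s = survival(d, omega, tau, big_t, 0.0) - survival(d, omega, tau, big_t, np.pi)
        print(f"detuning {d:+.2f} rad/ns -> SOF signal {s:+.4e}")
```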
In a real experiment, the positron (and hence also the positronium) beam will have some spread in energies. Using a neon moderator, this is expected to be ≈ 2 eV [16]. An example of the variations in the signal arising from this is shown in Fig. 6. A Ps* velocity spread will result in a corresponding spread of field interaction times, which broadens the signal. It can be seen from Fig. 6 that, while the linewidth is increased, an SOF signal is still present. This is compared to a Lorentzian with a linewidth of 50 MHz (representing the best-case lineshape expected from a single-field measurement of the 2³S₁ → 2³P₀ interval [15]). These data indicate that, even after taking into account the beam velocity spread, the SOF measurement still offers a significant improvement in linewidth.
Discussion and conclusions
We have shown via simulations that a beam of Ps* atoms (i.e., atoms in the 2³S₁ level) suitable for performing Ramsey SOF measurements can be produced using a gas cell apparatus. For a positron beam energy of 40 eV, the optimum value for the gas cell length was found to be 10 cm, with a xenon gas pressure of 0.5 Pa. Using these parameters and measured production cross sections [33,42], the Ps* production efficiency was estimated to be 0.9% for a positron beam energy of 40 eV, with a concomitant Ps* beam energy on the order of 30 eV. Based on the Ps* production cross sections, a higher production efficiency is expected at lower energies, but the overall SOF count rates benefit from higher Ps* energies.
Energetic Ps beams produced using gas cells have not previously been used for spectroscopic measurements, which generally require slow atoms to minimize Doppler effects, transit-time broadening and various other unwanted effects [19]. However, for SOF measurements many of these effects are not significant. Moreover, for measurements conducted with short-lived superposition states, fast atoms are essential so as to allow atoms to travel between different spatially separated regions before they decay. We note that this requirement is not unique to Ps experimentation: 30 eV Ps atoms have a speed of ≈ 2.3 × 10⁸ cm/s, which is comparable to the 3.2 × 10⁸ cm/s hydrogen atoms recently used in a Lamb-shift measurement based on an SOF variant [48] (frequency-offset separated oscillatory fields (FOSOF) [49]).
In the SOF measurements described in Sect. 4, Ps* atoms are expected to pass through the two waveguides with an average loss fraction of 98.8%. Therefore, for an incident positron beam intensity of 5 × 10⁶ s⁻¹, we can expect an overall count rate on the order of 50 Hz. This takes into account a 78% loss through beam collimation. The detection efficiency was estimated to be 50% for an MCP [50].
If successful, the SOF measurements we have described here could be extended to more advanced FOSOF measurements [49]. Rather than measuring lineshapes directly, the FOSOF technique relies on a frequency offset between the separated SOF fields, leading to a continuous phase shift between the two fields. The resulting SOF interference signal then also includes an additional oscillating component that depends on the applied offset frequency. Comparing this signal with the applied offset signal allows the atomic resonance frequency to be determined via a straight line fit, removing any need to fit a (possibly inadequate) function to the full spectral lineshape. Thus, the measurement is not sensitive to the complete frequency response of the system, because it depends on the phase of the signal rather than its amplitude. Since variations in the frequency response seem to be the limiting factor of our previous Ps experiments [22], this would represent a significant advantage for improved Ps n = 2 fine structure measurements.
Author contributions
T. J. Babij and D. B. Cassidy discussed the concept and wrote the manuscript. T. J. Babij performed the simulations.
Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors' comment: The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. Email<EMAIL_ADDRESS>] Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 5,682.6 | 2022-07-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Performance and interaction of sodium silicate activated slag with lignosulfonate superplasticiser added at different mixing stages
This paper investigated the effect of adding lignosulfonate (LS) superplasticiser at the different stages of mixing on the workability and rheological behaviour of sodium silicate activated slag (SSAS) in order to find a practically feasible approach to tackling the incompatibility issue between superplasticiser and alkaline activator. In addition to rheology and minislump tests, adsorption, zeta potential and environmental scanning electron microscopy tests were also undertaken to understand the interactions between the lignosulfonate and the fresh SSAS in order to reveal the mechanisms behind the observation. The results show that adding the LS and the activator separately at the different stages of mixing can significantly improve both the initial minislump and 60-min minislump retention due to the increased adsorption of LS and the improved dispersion of slag particles, with the prior addition of LS better than the delayed addition. However, a nonlinear rheological behaviour of SSAS was observed in the LS-superplasticised SSAS under separate addition and, consequently, modified Bingham model was found to be more suitable for describing this kind of rheological behaviour.
Introduction
Rheological behaviour and fresh properties of concrete are not only important for transporting, placing, compacting and finishing of fresh concrete, but also essential for hardened concrete to achieve desirable mechanical and durability properties. Given this importance, the rheological behaviour of Portland cement (PC)-based cementitious materials has been widely studied in the past, and most of the results show that fresh PC concrete is a thixotropic material, which is featured by a decrease in viscosity when a certain amount of shear is applied and a gradual recovery of viscosity when the shear is removed [1]. To achieve a good understanding of the fresh properties of concrete, considerable efforts have been made to model the rheological behaviour of PC concrete [2]. It is now generally agreed that, at a moderate/high shear rate, the rheological behaviour of PC-based cementitious materials without chemical admixture can be best fitted by the Bingham model (Equation (1): τ = τ₀ + μγ̇) [3], which depicts a linear relationship between shear stress and shear rate. In addition, the yield stress τ₀, defined as the intercept in the Bingham model of Equation (1), can be considered as the transition point below which the substance behaves as a solid and above which it becomes a fluid [4,5], resulting from the attractive inter-particle forces responsible for flocculation and/or from hydration [6,7]. Thus, substances with a lower yield stress reflect a better dispersion and fluidity. On the other hand, the plastic viscosity μ, defined as the slope in the Bingham model, depends largely on the volume fraction of solid particles and the packing density [8]. As a result, a low plastic viscosity might cause segregation [9]. However, at a low shear rate, and especially in the presence of superplasticiser (SP), the rheological behaviour may deviate from the linear relationship defined by the Bingham model. This is, in particular, the case when SP is added together with other chemicals. For example, when higher dosages of SP and viscosity modifying agent are added in self-compacting concretes, negative yield stress and non-linear shear thickening or thinning (in which the apparent viscosity increases or decreases, respectively, with increasing shear rate) have been reported [10,11]. However, it must be emphasized that a negative yield stress does not have any physical meaning. It is, thus, questionable whether the linear Bingham model is still suitable to fit these nonlinear rheological behaviours. To tackle these issues, nonlinear rheological models, such as the modified Bingham model (MB model, Equation (2): τ = τ₀ + μγ̇ + cγ̇²) or the Herschel-Bulkley model (HB model, Equation (3): τ = τ₀ + Kγ̇ⁿ), have been considered as the most favourable alternatives [9,11,12]. In these two models, the value of the exponent n (HB model) and the term c/μ (MB model) can be applied to indicate the non-linear rheological behaviour of the materials, e.g., shear thinning (n < 1 or c/μ < 0), shear thickening (n > 1 or c/μ > 0) and Bingham behaviour (n = 1 or c/μ = 0) [13,14].
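As an illustration, the sketch below fits a synthetic flow curve with both the Bingham and the modified Bingham models and reports the c/μ term; the data are placeholders and not measurements from this study.

```python
# Minimal sketch of fitting a measured flow curve (shear stress vs. shear rate)
# with the Bingham and modified Bingham models discussed above, using
# scipy.optimize.curve_fit. The flow-curve data below are synthetic
# placeholders, not measurements from this study.
import numpy as np
from scipy.optimize import curve_fit

def bingham(rate, tau0, mu):
    return tau0 + mu * rate

def modified_bingham(rate, tau0, mu, c):
    return tau0 + mu * rate + c * rate**2

if __name__ == "__main__":
    shear_rate = np.linspace(5, 100, 20)                            # 1/s
    shear_stress = 12 + 0.9 * shear_rate + 0.004 * shear_rate**2    # Pa
    shear_stress += np.random.default_rng(0).normal(0, 0.5, shear_rate.size)

    (tau0_b, mu_b), _ = curve_fit(bingham, shear_rate, shear_stress)
    (tau0_m, mu_m, c_m), _ = curve_fit(modified_bingham, shear_rate, shear_stress)

    print(f"Bingham:          tau0 = {tau0_b:6.2f} Pa, mu = {mu_b:.3f} Pa.s")
    print(f"Modified Bingham: tau0 = {tau0_m:6.2f} Pa, mu = {mu_m:.3f} Pa.s, "
          f"c/mu = {c_m / mu_m:+.4f} s (>0 indicates shear thickening)")
```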
Alkali-activated slag (AAS), which typically consists of ground granulated blast-furnace slag (GGBS for short hereafter) and alkaline activators, has received increased attention worldwide due to its sustainable nature and superior performance in some aspects over PC, such as low heat of hydration, high early strength and excellent durability in some aggressive environments [15][16][17][18][19]. In recent years, attempts have also been made by the authors to explore its cast-in-situ application for reinforced structures [20] as well as the immobilisation of nuclear wastes [21]. Slag, a by-product of iron manufacture, is often in the form of a granulated glassy calcium-magnesium aluminosilicate powder. It can react with alkaline activators, such as sodium silicate and sodium hydroxide, to form a cementitious binder at ambient temperature. However, compared to the PC system, AAS involves a different cement chemistry and reaction mechanism. For instance, a gel-like matrix is formed through a fast 'through-solution' precipitation in sodium silicate activated slag, whilst needle-like products are formed through a slow dissolution-precipitation mechanism involving 'topotactic' growth of C-S-H in PC systems [22]. Additionally, the composition of the pore solution and the surface chemistry of AAS also differ from those of PC systems [23,24]. All these could lead to different inter-particle forces, which could partly explain the more viscous nature and higher resistance to shear of AAS as compared to PC-based systems [15]. Consequently, the rheological behaviour of fresh AAS might be affected, and the suitability of the Bingham model for describing the rheological behaviour of AAS becomes questionable [25,26]. On the other hand, to develop high performance AAS concretes, attempts have also been made by researchers to add SPs into AAS systems [27,28]. However, the current commercially available SPs have been developed based on the chemistry of PC-based cementitious materials and may not be suitable for AAS systems due to the different cement chemistry involved [29]. For example, it was found that adding polycarboxylate-based SP was less efficient in reducing the yield stress of sodium silicate-activated slag (SSAS) compared to its performance in PC systems [25]. Whilst the lignosulfonate-based (LS) SP has demonstrated some limited improvement in the workability of AAS, the workability retention is considered to be insufficient [30]. The reduced performance of SPs designed for Portland cement in AAS systems has been mainly attributed to two reasons. The first is the competitive adsorption between the negatively charged alkaline activator and the SPs (which are also negatively charged); the second is the instability and, hence, the change of the chemical structure of SPs designed for Portland cement in the highly alkaline environment [25,27,31]. It should be highlighted that the latter effect could become even worse if the SP is directly added into the alkaline activator before mixing with slag due to the strong alkaline nature of the activator [27].
In an attempt to address the above issues, some preliminary studies were carried out by the authors in which the SP was added at different stages of mixing, in order to avoid the competitive adsorption between the SP and the activator and to improve the stability of the SP in the alkaline activator [32]. This approach is very similar to that adopted in PC systems, where the workability of concrete is improved by delayed addition of SP during mixing, which allows time for the gypsum and clinker to react without competing with the SP. However, it has been shown that delayed addition of SP changes the rheological behaviour of PC-based cementitious systems [33]. Hence, it is likely that adding the SP and the alkaline activator at different time intervals during the mixing stage could also make the rheological behaviour of AAS more complex and uncertain.
The aim of this study is, therefore, to investigate the rheological behaviour and some fresh properties of SSAS when a lignosulfonate SP was added at the different stages of mixing.The rheological behaviour of the fresh SSAS was then analysed using the flow curves obtained from the rheology tests by fitting into different rheology models.Moreover, the interaction between the lignosulfonate SP and the fresh SSAS was also characterised by adsorption, zeta potential and environmental scanning electron microscopy (ESEM) tests.Based on the information obtained, the possible mechanisms involved are then proposed and discussed in this paper.
Materials
The slag used in this study complies with BS EN 15167-1:2006 and was supplied by Civil and Marine Ltd. UK (now Hanson Heidelberg Cement Group). Its chemical composition, obtained by X-ray fluorescence spectroscopy, is reported in Table 1. It is a granulated product with a specific surface area of 527 m²/kg (Blaine method) and can be categorised as a neutral slag according to its basicity coefficient, K_b = (CaO + MgO)/(SiO₂ + Al₂O₃), which equals 0.96. Moreover, its particle size distribution was obtained with a Mastersizer (Mastersizer 2000, Malvern, UK) and is presented in Fig. 1. As shown in the figure, 65.62% of the slag was smaller than 10 μm.
A liquid sodium silicate solution with a silica modulus of 2.58 was obtained from Charles Tennant & Co Ltd; its chemical and physical properties are shown in Table 2. The sodium silicate solution was modulated to a modulus (molar ratio of SiO₂ to Na₂O) of 1.5 by adding sodium hydroxide (NaOH, obtained from Tennants Distribution), and was then used as the alkaline activator in this study. A lignosulfonate-derived superplasticiser (LS), a dark brown dry powder supplied by Tianjin Jiangong Special Material Co. Ltd., was used as the superplasticiser.
Sample preparation
All the samples were prepared and mixed in a 5 L planetary-action high-shear mixer, and a low speed (140 rpm) was employed throughout the mixing procedure. The water to slag ratio of all the mixes was fixed at 0.45, which was established from trial mixes to ensure that sufficient workability could be achieved for the rheology tests. The dosages of LS were controlled at 0, 0.4, 0.8, 1.2, 1.6 and 2.0% (by the mass of slag). The liquid sodium silicate with a silica modulus of 1.5 was used as the activator and its content was fixed at 4 wt% (counted as Na₂O equivalent) by the mass of slag. Both the activator and the LS were first dissolved in water. When the activator and the LS were added separately at different stages of mixing (i.e., the separate addition methods described below), the total mixing water was split into 2/3 and 1/3 for the first and second components respectively (as indicated in Fig. 1), and the activator or the LS was then dissolved in the corresponding fraction of water and added into the mixer accordingly during the mixing stage. Three different approaches of adding LS were investigated in this study, named as follows: 1) simultaneous addition (SA): adding the LS and the activator together when mixing with the slag; 2) delayed addition (DA): adding the activator to the slag first and then the LS at a 3 min interval; and 3) prior addition (PA): adding the LS to the slag first, then the activator at a 3 min interval. The 3-min interval was determined based on the results obtained from preliminary experiments. The mixing procedure of all the mixes was strictly controlled throughout this study in order to compare the rheological behaviour on a like-for-like basis; the details are schematically presented in Fig. 2. The total mixing time was fixed at 5 min for all three addition methods.
Test procedure 2.3.1. Minislump test
The minislump test was carried out with a PVC plate and a cone with a lower inner diameter of 38.1 mm, an upper inner diameter of 19 mm and a height of 57.2 mm by following Palacios's method [28].The diameters of the spread from the minislump test were measured at two perpendicular directions and the average diameter was reported.All the minislump measurements were first conducted at 7 min after mixing, which is referred to as the 'initial minislump' in this paper.In addition, to assess the workability retention capacity of the SSAS, the minislumps were also measured over time at 15, 30 and 60 min since mixing.
Rheological test
The rheological behaviour of the SSAS pastes was determined with a rheometer (Viscotester 550) for all three SP addition methods (namely SA, DA and PA). Immediately after mixing (i.e., at the 5th minute shown in Fig. 1), approximately 800 ml of freshly mixed paste was transferred into a 1000 ml plastic cup (95 mm in diameter and 180 mm in length), which was then fixed into the sample holder of the rheometer before a six-bladed vane (40 mm in diameter and 60 mm in length) was inserted to establish the relationship between the shear stress and the shear rate. The samples were then subjected to a cycled measuring procedure as proposed by Palacios et al. [25], as follows: the shear rate was kept constant at 150 s⁻¹ for 2 min during pre-shearing, then up-ramped from 0 to 10 s⁻¹ in 1 min, raised further from 10 to 150 s⁻¹ in 1 min and finally reduced from 150 to 0 s⁻¹ in 1 min. The results were recorded with in-house developed software. To ensure that reliable data could be obtained from the rheological tests, the 'pure' AAS was first tested five times to check the repeatability of the measurement [34]. Once the reliability requirements were met, the flow curves of the SSAS pastes under the different SP addition methods were obtained. The area of the hysteresis loop of the flow curve was then calculated and used to quantitatively analyse the structural breakdown of the paste [35]. Moreover, the down curve of the flow curve, which is believed to reflect the rheological properties of cementitious materials [36], was used to fit the different rheological models described by Equations (1)-(3).
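A minimal sketch of how the flow-curve processing described above could be implemented is given below. The variable names and the use of scipy are illustrative assumptions; the hysteresis (thixotropic) area is taken as the area between the up- and down-curves, and the down-curve is fitted to the Bingham, modified Bingham and Herschel-Bulkley forms.

```python
import numpy as np
from scipy.optimize import curve_fit

def bingham(gd, tau0, mu):              # Eq. (1)
    return tau0 + mu * gd

def modified_bingham(gd, tau0, mu, c):  # Eq. (2)
    return tau0 + mu * gd + c * gd**2

def herschel_bulkley(gd, tau0, K, n):   # Eq. (3)
    return tau0 + K * gd**n

def thixotropic_area(gd_up, tau_up, gd_down, tau_down):
    """Area enclosed between the up- and down-curves (Pa/s)."""
    return np.trapz(tau_up, gd_up) - np.trapz(tau_down, gd_down)

def fit_down_curve(gd, tau):
    """Fit the down-curve to the three models and report parameters and R^2."""
    results = {}
    for name, model, p0 in [("Bingham", bingham, (1.0, 0.1)),
                            ("Modified Bingham", modified_bingham, (1.0, 0.1, 0.0)),
                            ("Herschel-Bulkley", herschel_bulkley, (1.0, 0.1, 1.0))]:
        popt, _ = curve_fit(model, gd, tau, p0=p0, maxfev=10000)
        resid = tau - model(gd, *popt)
        r2 = 1.0 - np.sum(resid**2) / np.sum((tau - tau.mean())**2)
        results[name] = (popt, r2)
    return results
```

Comparing the R² values returned for each down-curve is the same kind of check that is reported in Table 3.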
Adsorption test
The amount of LS adsorbed by the SSAS was determined with a UV spectrophotometer (Camspec 550) at a wavelength of 286 nm. Prior to the experiments, a calibration curve was established by plotting the concentrations of added LS against the UV absorbance of the corresponding LS solutions. To overcome the matrix effect, the calibration solutions were prepared by dissolving the LS, at concentrations ranging from 0.025 mg/ml to 0.150 mg/ml, in the supernatant of SSAS paste. To measure the amount of LS adsorbed by the SSAS, mixes consisting of 5 g of slag, 1.04 g of sodium silicate solution (4 wt% Na₂O equivalent by the mass of slag), 19.45 g of water and different dosages of LS, namely 0, 0.4, 0.8, 1.2, 1.6 and 2.0% by the mass of slag, were prepared by hand mixing. The suspension of each mix was then centrifuged at 700 rpm for 3 min before being filtered through quantitative filter paper. The obtained solution was then diluted with deionised water into the measurable range of the UV spectrophotometer. The amount of LS adsorbed by the SSAS was finally calculated from the difference between the LS concentrations before and after mixing with the SSAS.
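As an illustration only (the function arguments below are placeholders, not the paper's values), the adsorbed amount could be computed from the UV calibration along these lines:

```python
import numpy as np

def calibrate(conc_mg_ml, absorbance):
    """Linear calibration: absorbance = slope * concentration + intercept."""
    slope, intercept = np.polyfit(conc_mg_ml, absorbance, 1)
    return slope, intercept

def adsorbed_ls(abs_supernatant, dilution, slope, intercept,
                ls_added_mg, water_ml, slag_g):
    """Adsorbed LS (mg per g of slag) from the drop in solution concentration."""
    c_residual = (abs_supernatant - intercept) / slope * dilution  # mg/ml left in solution
    ls_residual_mg = c_residual * water_ml
    return (ls_added_mg - ls_residual_mg) / slag_g
```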
Zeta potential test
The change in the electrostatic properties of the SSAS was investigated through the measurement of zeta potential. The suspensions used for the zeta potential test were prepared by dispersing 1 g of slag and the corresponding amount of activator (as detailed in Section 2.2) in 200 ml deionised water at different dosages of LS, by hand mixing for 5 min. The zeta potential of the SSAS suspension was then determined at 20 °C using a Malvern Nano ZS90 (Malvern Instruments Ltd., UK).
ESEM
The dispersion of the freshly mixed paste samples was observed by environmental scanning electron microscopy (ESEM) in a FEI Quanta 200 scanning electron microscope at 20 kV. Mixes containing approximately 0.5 g of slag and the corresponding SP solution and activator were first mixed by hand for 3 min in a beaker outside the environmental chamber of the ESEM. After that, the fresh paste was rapidly poured into a steel sample holder before being placed into the sample chamber of the ESEM. The observation started 10 min after mixing under ESEM mode, with the stage cooled to 5 °C and water vapour used as the imaging gas at a pressure of 5.0 torr. To quantitatively analyse the dispersion of the slag particles, the Heywood diameter of the slag particles, which refers to the diameter of a circle with the same area as the projected particle, was obtained from the ESEM images using the software ImageJ [37,38]. For each mix, ten ESEM images were quantitatively analysed. It should be pointed out that, in this method, the outline of the particle shape is automatically identified and converted into a particle size and area. It has, therefore, been confirmed to be an effective approach for identifying the agglomeration behaviour of particles [39].
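A minimal sketch of the Heywood-diameter calculation (the image-segmentation step and the pixel size are assumptions; ImageJ performs the equivalent operation in the paper):

```python
import numpy as np
from scipy import ndimage

def heywood_diameters(binary_mask, um_per_pixel):
    """Heywood diameter d = 2*sqrt(A/pi) for each labelled particle in a binary mask."""
    labels, n = ndimage.label(binary_mask)
    areas_px = ndimage.sum(binary_mask, labels, index=np.arange(1, n + 1))
    areas_um2 = areas_px * um_per_pixel**2
    return 2.0 * np.sqrt(areas_um2 / np.pi)

def fraction_below(diameters_um, threshold_um=5.0):
    """Fraction of particles below a size threshold, as used to compare addition methods."""
    return np.mean(diameters_um < threshold_um)
```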
Workability
The results of the initial minislump tests of SSAS with the LS added at different stages of mixing are shown in Fig. 3. It is obvious that, compared with the reference (i.e., the mix containing 0% LS), adding LS increased the initial minislump spread regardless of the addition method, and the spread increased with increasing LS dosage. However, an increment of less than 10 mm was observed in the initial minislump for the simultaneous addition even at 2.0% LS, which is less significant than the increments obtained from the separate addition methods (i.e., both delayed and prior addition). In contrast, the spread diameters of the initial minislump obtained from both the prior and delayed addition methods were at least 10 mm higher than that of the SA, which correlates well with the yield stress results (as presented in Figs. 7(a), 8(a) and 9(a) below).
In addition to the initial minislump measurements, the change of the minislump was also monitored over time, at 15, 30 and 60 min, to assess the workability retention of SSAS when the LS was added at different mixing stages. The minislump results over 60 min for the SSAS mixes with 1.2% LS, which is close to its saturation dosage, added by the different methods are shown in Fig. 4. It should be noted that similar trends were also observed at the other LS dosages (results not shown in this paper). To clearly identify the effect of LS on the workability retention of SSAS, the minislump of the SSAS without LS was also monitored over the same period of time and the results are presented together in Fig. 4. It is evident that, without adding LS, the minislump of SSAS was the lowest throughout the 60 min period. Moreover, it also showed the poorest workability retention. When the LS was added by SA, although both the initial minislump and the minislump retention of SSAS were slightly improved compared with the SSAS without LS over 60 min, they were still poorer than those of the separate addition methods. As can be seen from Fig. 4, although the initial minislump of the SSAS with SA reached a high value of 120 mm, it dropped to less than 85 mm at the 60th minute, representing an approximately 30% minislump loss, which is much worse, both in terms of the initial minislump and the workability retention, than the SSAS with the separate addition methods. A similar result was also reported by Palacios and Puertas when a melamine-based superplasticiser and sodium silicate activator were added by an SA method [28]. On the contrary, when the activator and the LS were added separately (i.e., by DA or PA), the workability retention was much improved, with less than 12% minislump loss being observed. Moreover, slightly better workability retention was obtained from PA.
Fig. 3. Initial minislump of SSAS with different addition methods.
The above results indicate that the separate addition methods can not only enhance the initial workability, but also improve the workability retention of LS-superplasticised SSAS. This is, in particular, the case when the LS is added before the alkaline activator during mixing.
Rheological behaviour 3.2.1. Flow curve and thixotropic behaviour
The flow curves of the SSAS pastes obtained from the different LS addition methods at an LS dosage of 1.2% are shown in Fig. 5. The flow curve of the SSAS without SP (i.e., no LS; hereinafter referred to as 'pure' SSAS) is also presented in Fig. 5(a) for comparison. Additionally, by integrating the area between the up-curve and the down-curve of each paste, the thixotropic area of the SSAS was calculated following the method proposed by Chen and co-workers [40], as shown in Fig. 5(b). It can be seen that, without LS, the hysteresis loop of the 'pure' SSAS was larger than those of the LS-superplasticised SSAS pastes. As the area of the thixotropic loop is related to the energy needed to break down the reversible flocculation of particles [41], the reduced area of the loop when LS SP is added suggests that it becomes easier for the particles to be deflocculated. It should be noted that, with increasing LS dosage, the thixotropic area increased under SA, whereas no obvious change was observed for DA or PA. This phenomenon could be due to the complex interactions among the LS, the activator and the slag, which should be further explored. Furthermore, among the three LS-superplasticised SSASs, the paste with SA showed a much larger thixotropic loop area than those of DA and PA, indicating that a better dispersion of the slag particles was achieved by both DA and PA. However, it should be noticed that the area of the thixotropic loop of SA was only slightly smaller than that of the 'pure' SSAS, indicating that SA is not efficient in dispersing slag particles in SSAS compared to the separate addition methods. From the results shown in Fig. 5(a), it is also evident that not only the addition of LS, but also its addition method, can change the down-curve of the paste. For example, the down-curves of both the 'pure' paste mixed without SP and the paste mixed by SA showed a near-linear relationship between the shear stress and the shear rate, while those mixed by PA and DA showed a non-linear relationship (further details can be seen in Fig. 6). Similar patterns were also identified at the other LS dosage levels (namely 0.4%, 0.8%, 1.6% and 2.0%).
In Fig. 6, the down curves of the hysteresis loops of the paste without LS and of those with 1.2% LS added by the different methods (i.e., SA, DA and PA) are fitted by the Bingham, modified Bingham and Herschel-Bulkley models, respectively. It is evident that, for the reference mix (i.e., no LS), the fitted curves of the three rheological models are close to each other. However, in the presence of LS, the fitted curve of the Bingham model is slightly different from those of the other two non-linear models (i.e., modified Bingham and Herschel-Bulkley). Furthermore, the differences between the fitted curves of the Bingham model and the two non-linear models are further enlarged by the separate addition of the SP and the activator (i.e., DA and PA). It can easily be noticed from Fig. 6(c) and (d) that the regression curves of both non-linear models fit the data better than that of the Bingham model for DA and PA. A negative yield stress, which has no physical meaning, is only obtained by applying the Bingham model, indicating that errors can occur with linear regression. To obtain a holistic understanding of the rheological behaviour of the LS-superplasticised SSASs, all the down curves of the pastes mixed by SA, PA and DA at the other LS dosages (namely 0.4%, 0.8%, 1.6% and 2.0%) were also fitted to the Bingham, MB and HB models. The regression equations and R-squared values obtained from these analyses are listed and further compared in Table 3. Even though good regressions (as indicated by R² > 0.96) were achieved with all three rheological models, adding the LS and the activator separately at different mixing stages reduced the R² value of the Bingham model, indicating that the reliability of the Bingham model for the separate addition methods has been reduced. By comparing the data in Table 3, it can be concluded that, instead of the Bingham model, the MB model shows the best suitability for describing the rheological behaviour of SSAS, in particular the SSAS with LS SP. A similar trend has also been reported for self-compacting concrete by Fey et al. [42]. The possible reasons leading to this changed rheological behaviour, i.e., from Bingham to non-Bingham behaviour, will be discussed in Section 3.5 in the context of the observations made with ESEM.
Rheological parameter
Since a non-linear relationship between the shear stress and the shear rate was identified for all the mixes with separate addition of LS and activator, the rheological behaviour of the LS-superplasticised SSAS pastes is further compared in this section using the rheological parameters, namely the yield stress, the plastic viscosity/consistence factor, the exponent n and the term c/μ, obtained from the Bingham (linear), modified Bingham (non-linear) and Herschel-Bulkley (non-linear) models, wherever relevant.
Yield stress.
As shown in Figs. 7-9, in the presence of LS, the general trend is that the yield stresses of the LS-superplasticised SSAS pastes calculated from the Bingham (Fig. 7(a)), modified Bingham (Fig. 8(a)) and Herschel-Bulkley models (Fig. 9(a)) reduced with increasing LS dosage, especially when the LS and the activator were added separately at different mixing stages (i.e., both DA and PA). Compared to SA, the separate addition methods significantly reduced the yield stress of the LS-superplasticised SSAS pastes, with the highest reduction being achieved by PA in all three models. However, negative yield stresses were obtained from the Bingham model for some of the mixes prepared by PA and DA. As a negative yield stress cannot occur in reality, the suitability of the Bingham model for describing the rheological behaviour of the LS-superplasticised SSASs, in particular when the LS is added by DA or PA, is thus questionable [43]. Nonetheless, from the yield stress results it could be anticipated that a better workability would be obtained from the separate addition methods, especially PA. This is well corroborated by the minislump results presented in Fig. 3 and will be further discussed in Section 3.2.2.4 below.
Plastic viscosity and consistence factor.
The effects of the different LS addition methods on the plastic viscosity (based on the Bingham model and the MB model) of the SSASs are plotted in Figs. 7(b) and 8(b), respectively. Apparently, increasing the LS dosage had less effect on the reduction of plastic viscosity when the LS and the activator were added simultaneously. In fact, a slight increase in the plastic viscosity was even noticed from both the Bingham and the MB models with the SA method. On the contrary, when the LS and the activator were added separately at different mixing stages, the plastic viscosity was significantly decreased upon adding LS at a dosage of 0.4% in both models. However, there was no further reduction at dosages above 0.4% and the difference between PA and DA was also insignificant. Nonetheless, the values of plastic viscosity were already very low (close to zero in both cases). On the other hand, the effects of the different addition methods on the consistence factor (based on the HB model) were different. From Fig. 9(b), it can be seen that there is no obvious trend for the consistence factor under SA as the SP dosage is increased. However, the consistence factors obtained from both PA and DA were lower than those from SA, with an even slightly lower value being achieved from DA at dosage levels below 1.2%. Beyond 1.2%, a plateau was reached for both PA and DA and further change of the LS dosage had no effect on the consistence factor. Again, very low values of the consistence factor were achieved above 1.2% LS dosage.
Exponent and c/μ.
The exponents from the HB model are plotted in Fig. 9(c). It is clearly shown in the figure that the exponent of the reference paste (i.e., no LS) was around 1.0 and the addition of LS by SA had little effect on the exponent value. However, the exponent increased when the LS and the activator were added separately, indicating that a shear thickening rheological behaviour could have occurred with both the DA and the PA methods, which will be further discussed in Section 3.5.
It should be noted that the MB model can be considered as an extension of the linear Bingham model, with a second-order term introduced so that the non-linear behaviour can be well described [14]. Therefore, as shown in Equation (4), there exists a theoretical relationship between c/μ (Fig. 8(c)) and n (Fig. 9(c)) [13], where μ stands for the plastic viscosity, a for a random parameter, c for the second-order parameter and n for the exponent. From the data in Table 3, it is apparent that the c/μ values of all the pastes were positive, which correlates closely with the exponents obtained from the HB model, indicating again that shear thickening behaviour could have occurred in the LS-superplasticised SSAS.
Relationship between minislump and rheological parameters.
It has been well established by various researchers that a correlation exists between the slump/minislump and the yield stress [44]. To further verify the suitability of the different rheological models for SSAS systems, the relationships between the yield stress obtained from the three rheological models and the minislump spread of the SSASs are compared in Fig. 10. From the R² values of the regression analysis, it can be seen that the highest R² was obtained from the MB model (0.9248 vs 0.8773 from the Bingham model and 0.8576 from the Herschel-Bulkley model), which indicates again that, among the three models, the MB model provides the best prediction of the workability through its yield stress. However, no relationship between the plastic viscosity (or consistence factor) and the minislump was found with changing LS dosage (not shown in this paper) in the studied SSAS system, which is in agreement with that reported for PC systems [44].
Adsorption isotherm
It is generally believed that the adsorption of the superplasticiser by hydrating cement particles is linked to the dispersion of cement particles in water.That is, the higher the adsorption of SP, the better the dispersion [45].The influence of the different addition methods on the adsorption of LS in SSAS was, thus, investigated and the results are shown in Fig. 11.It is evident from Fig. 11 that, compared with the mix without the activator (i.e. the mix prepared with only slag and water, which was used as a control in the adsorption test for comparison purpose), the addition of activator by all the three methods (SA, DA and PA) resulted in a significant reduction in the adsorption of LS on the slag surface, which implies that either there is a competition between the activator and the LS during the adsorption to the slag surface or there are some adverse effects from the activator on the adsorption process of LS, or both [31,46].Moreover, in the presence of the activator, compared with the simultaneous addition method at all dosage levels, the adsorptions of LS with both prior and delayed additions (i.e., PA and DA) were higher.It, thus, seems that the separate addition of the LS and the activator at different mixing stages can reduce the possible competition (if there are any) between the LS and the activator in adsorbing onto the surface of slag particles.This is, in particular, the case for the PA method, which might have provided a window (3 min) for the LS to play its dispersing role before the potential decomposition (if there are any) could have happened after the high alkaline activator is added.However, it should be noticed that, even for the PA method, the adsorption of the LS was still lower than that of the slag without activator, which suggests that the competitive adsorption and desorption from negatively charged sodium silicate activator could still have existed [46].Fig. 11 also shows that the adsorbed amount of the LS increased rapidly with the increase of the LS dosages up to around 1.2% LS.At higher dosages (>1.2%), the adsorption increased slowly and gradually reached a plateau, following the trend of Langmuir isothermal adsorption.The quantitative analysis of the adsorption isothermal was, therefore, conducted using Langmuir adsorption (Equation ( 5)) and the results are presented in Table 4.
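Equation (5) is not reproduced in this extraction; assuming the standard Langmuir form, consistent with the parameter definitions given below and with the linearisation used for the fitting, it reads:

```latex
% Assumed standard Langmuir isotherm (Equation (5)) and its linearised form used for fitting.
\begin{equation}
  A = \frac{A_s K C}{1 + K C}
  \qquad\Longleftrightarrow\qquad
  \frac{C}{A} = \frac{1}{A_s}\,C + \frac{1}{K A_s}
  \tag{5}
\end{equation}
```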
where C stands for the equilibrium concentration of superplasticiser (LS in this study) (mg·L⁻¹), A for the amount of superplasticiser adsorbed by the slag (mg·g⁻¹), A_s for the saturated adsorbed amount of superplasticiser (mg·g⁻¹) and K for the adsorption constant.
A linear relationship of C/A (g·L⁻¹) with C is normally observed, with a regression coefficient close to one [47]. A_s can be calculated as 1/slope of the regressed straight lines, and the constant K can then be obtained as slope/intercept. In this study, both K and A_s were calculated and are summarised in Table 4.
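A minimal sketch of the linearised Langmuir fit described above (array names are placeholders):

```python
import numpy as np

def langmuir_fit(C_mg_per_L, A_mg_per_g):
    """Fit C/A = C/A_s + 1/(K*A_s); return saturated adsorption A_s and constant K."""
    y = C_mg_per_L / A_mg_per_g            # g/L
    slope, intercept = np.polyfit(C_mg_per_L, y, 1)
    A_s = 1.0 / slope                      # mg/g
    K = slope / intercept                  # L/mg
    return A_s, K
```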
As can be seen from Table 4, when no activator was added, the highest A_s value (0.95 mg·g⁻¹) was obtained, suggesting that essentially all of the slag surface might have been occupied by LS. However, when the activator and the LS were added simultaneously (SA), the A_s value dropped by more than half, to 0.43 mg·g⁻¹, indicating that less than half of the slag surface might have been occupied by the LS. This suggests that, when the activator and the LS were added simultaneously, the activator was more prone to being adsorbed onto the surface of the slag. Compared to SA, the characteristic plateaus (A_s) of PA and DA increased from 0.43 mg·g⁻¹ (the A_s for SA) to 0.51 mg·g⁻¹ and 0.46 mg·g⁻¹, respectively. This implies that, by adding the activator and the LS separately at different mixing stages, the adverse effect that the alkaline activator may have on the adsorption of LS onto the slag surface could be reduced. As a result, the relative amount of LS adsorbed onto the slag surface was increased. Since the characteristic plateau of PA (0.51 mg·g⁻¹) is higher than that of DA (0.46 mg·g⁻¹), PA is more effective than DA in reducing this adverse effect, although both separate addition methods reduce it. Consequently, it can be deduced that the workability of SSAS with PA should be better than with DA and SA, which is well supported by the minislump results presented in Fig. 3.
As illustrated in Fig. 12, although it is difficult to establish a quantitative relationship between the adsorbed LS and the minislump value for all the three addition methods, separate trends can be identified for SA and (DA + PA) respectively, indicating that different mechanisms might be functioning in SA and (DA + PA).Nonetheless, regardless of the possible mechanism involved, it still can be seen that, in general, a higher adsorbed amount of LS fosters a better workability of SSAS.Moreover, at similar adsorption of LS, the minislumps of the SSAS pastes produced by the separate addition methods, i.e., DA and PA, were higher than that from SA.This could be attributed to the reduction of the possible competitive adsorption as well as the improved chemical stability of the LS due to the separate addition of the SP and the activator [48].As reported by Palacios and Puertas [27], the SPs designed for Portland Cement are not chemically stable in highly alkaline sodium silicate solution, and could result in the loss of surface activity.This might be the reason that adding SPs simultaneously with alkaline activator (i.e., SA) is not suitable for alkali-activated cementitious materials (AACM) because in this method the SP is directly mixed with the alkaline activator, which could lead to the loss of surface activity of SP.On the contrary, in the case of separate addition of SP and activator (such as DA and PA), the SP is exposed to a less alkaline environment, which could potentially reduce or even prevent the interaction between the SP and the activator and, hence, improve the performance of SP in AACM.The result obtained from the LS-superplasticised SSAS in the current study is just a good example to exemplify this hypothesis.Nonetheless, similar to what has already been reported in PC system, separate addition of SP and activator, in turn, could lead to different rheological behaviour of the AACM, as, again, exemplified by the rheological behaviour of the LS-superplasticised SSAS reported in this study.
Zeta potential
Zeta potential is the potential at the shear plane between the suspended solid particles and the liquid phase [49], which has often been used to describe the interactions between the particles and the superplasticisers in cement research [50].Generally, it is believed that a higher absolute zeta potential value is linked to a better dispersion (and, hence, a lower yield stress) between solid particles and an increased workability [51].
The results of the zeta potential of SSAS pastes are shown in Fig. 13.As can be seen, in the absence of LS, the zeta potential of SSAS was close to − 47.3 mV whereas the 'pure' slag mix (i.e., the slag was only mixed with water without adding sodium silicate) only showed a zeta potential of around − 15 mV.
When slag is mixed with water, a basic pH environment is expected to form [52]. As a result, some silanol groups on the slag surface can deprotonate and thus induce a net negatively charged surface. With further dissolution of the alkali and alkaline earth cations from the slag into the solution under this alkaline environment, cationic species such as Na⁺, K⁺ and Ca²⁺ are released into the solution. However, since the rate of dissolution of these cations is faster than that of their diffusion, some cationic species, in particular Ca²⁺, can accumulate and then be adsorbed onto the deprotonated silanol groups, introducing some positively charged sites in the double-layer structure [53]. This may explain why the initially negative zeta potential is observed to shift to less negative, and sometimes even slightly positive, values over time [54]. In the current study, a zeta potential of −15 mV was measured for the pure slag mix, which is similar to values reported in the literature [55]. When sodium silicate solution is added to the slag mix (i.e., without adding the LS), not only can a surface reaction similar to that in the pure slag mix (i.e., the slag/water mix) occur, but a more intense surface reaction is also to be anticipated due to the strongly alkaline environment introduced by the sodium silicate solution. As a result, one might expect that more cationic species, such as Na⁺, K⁺ and Ca²⁺, would be released from the slag surface into the solution and then adsorbed onto the surface of the slag particles, and consequently a more positive zeta potential value would be anticipated [54]. In contrast, a more negative zeta potential, i.e., −47.3 mV, was measured in this study when sodium silicate solution was added, which indicates that, as a strong electrolyte, sodium silicate itself has more influence on the surface charge of slag [55]. This is mainly due to the fact that the negatively charged silicate species from the sodium silicate can also adsorb or precipitate onto the surface of the slag particles, leading to a more negative zeta potential value and overriding its activation effects on the slag. Similar results have also been reported by Kashani et al. [55].
When the LS was added into the 'pure' slag mix (i.e., without activator), a further reduction of the zeta potential can be observed in Fig. 13 and, in general, the zeta potential was decreased with the increase of the LS dosages.For example, with 1.2% LS being added, the zeta potential of the slag without activator was reduced from − 15 mV to around − 29 mV and this was then becoming almost stable, which is similar to the pattern observed from the adsorption results in Fig. 11.Therefore, the increased absolute value of the zeta potential (i.e., more negative) might be attributed to the higher amount of LS adsorbed onto the surface of slag particles.As highlighted before, when the slag is dispersed in water, depending on the type of slag, a significant amounts of the cations, i.e., Ca 2+ , Mg 2+ , K + and Na + , are released during the dissolution process [54] and these cations can adsorb onto the negatively charged silanol groups on the surface of the slag particles, introducing some positively charged sites.Since the LS in hydrolysed form is a type of negatively charged polymer, once dissolved in water the LS could then be adsorbed onto the surface of the slag via the electrostatic attraction to the cations that have already adsorbed onto the surface of slag particles, leading to a further decreased zeta potential (i.e., more negative) of the LS-superplasticised slag.
However, when the LS was added into the SSASs, regardless of the addition method, the reduction in the zeta potential was less significant and the reduction pattern observed in the LS-superplasticised 'pure' slag mix was missing. Considering the fact that a zeta potential of −47.3 mV was developed by the pure SSAS and a range of zeta potentials between −25 mV and −30 mV by the LS-superplasticised slags (i.e., no activator), it becomes immediately obvious from Fig. 13 that the combination of the activator and the LS is not just a simple superposition of the zeta potentials of the two. Additionally, it can be noticed from the same figure that the zeta potential trend of the LS-superplasticised SSASs cannot be correlated with the Langmuir adsorption pattern presented in Fig. 11. As it is generally accepted that a higher superplasticiser adsorption should result in a higher zeta potential [31], the contradicting results between the adsorption in Fig. 11 and the zeta potential in Fig. 13 in the current study indicate that some interactions could have occurred between the LS and the sodium silicate solution. Because of these interactions, it is rather complex, and almost impossible, to correlate the zeta potential with the LS adsorption as well as with the rheology of the SSAS reported above. Consequently, the zeta potential data cannot be used to interpret the possible mechanisms involved when the different LS addition methods were adopted in this study. For example, it can be seen from Fig. 13 that, compared with SA, even though the absolute values of the zeta potential of the SSASs under both separate addition methods decreased (i.e., moved towards the positive direction), this did not lead to a higher yield stress (and reduced workability) but actually resulted in a lower yield stress (shown in Fig. 7) and a better workability (shown in Fig. 3), which contradicts the commonly accepted theory [56]. Although further research is still needed before the exact nature of these interactions can be fully understood, the complicated structure of the sodium silicate micelles should have, at least, partly contributed to this complexity [57,58]. Furthermore, due to this complexity, one cannot simply interpret the zeta potential results by considering the effects of the LS and the activator separately. A synergistic effect of the sodium silicate solution and the LS has to be considered.
Fig. 13. Zeta potential of LS in SSAS under different addition methods.
ESEM
The effect of different addition methods on the dispersion of slag particles in SSAS mixes was further investigated by ESEM, and the selected images of the SSAS at 10 min after mixing are shown in Fig. 14.However, as the ESEM images can only provide qualitative information based on visual observation, a judgement on the dispersion of slag particles fully based on ESEM images could, thus, be subjective.To identify the exact effect that different addition methods may have on the dispersion of slag particles, a more objective method is preferred.In this study, the ESEM images shown in Fig. 14 were, therefore, quantitatively analysed by ImageJ [59] and the resulting particle size (i.e., Heywood diameter of slag particles) distributions and cumulative curves of the slag particles are presented in Fig. 15.
As can be seen from Fig. 14(a), without adding LS, there exists a considerable number of agglomerates of slag particles in the 'pure' SSAS (i.e., SSAS without LS). This observation is further confirmed by the particle size distribution obtained with ImageJ in Fig. 15, which clearly shows that the proportion of larger particles in the SSAS without LS is higher than in the SSASs with LS under the different addition methods. For example, for Heywood diameters ≥ 10 μm, the SSAS without LS showed the highest proportion of particles among the SSASs investigated. In contrast, for Heywood diameters < 5 μm, the SSAS without LS showed a smaller proportion of particles. As discussed previously, the addition of sodium silicate activator to slag could increase the complexity of the surface chemistry of the slag, due to the further release of alkali and alkaline earth cations as well as the increased electrostatically countered silanol groups [55] generated from the accelerated dissolution of slag under this strongly alkaline environment. Moreover, the quick formation of some initially precipitated hydration products could also have affected the dispersion of the SSAS [68]. Therefore, it is generally agreed that, compared to PC, AAS usually exhibits a poor workability [28]. Furthermore, as presented in Fig. 5(b), the 'pure' SSAS showed the largest thixotropic area, which can also be used to verify this observation because it already indicated that some flocculation of the slag should exist.
However, when the LS was added into the SSASs, regardless of the addition method, fewer agglomerates and a better dispersion of the slag particles were, in general, observed (Fig. 14(b), (c) and (d)), which implies that LS can improve the dispersion of slag particles even in a highly alkaline medium. As shown in Fig. 15(a), compared to the 'pure' SSAS, a higher portion of small particles with Heywood diameters < 5 μm, and a smaller portion of large particles with Heywood diameters > 15 μm, was observed in the SSASs with LS. This result is further confirmed by Fig. 15(b), in which the particle size distribution curves of the SSASs with LS are all shifted to the left of that of the 'pure' SSAS, suggesting that more small particles exist in the SSASs with LS. This observation also correlates well with the increased minislump and the decreased yield stress of the LS-superplasticised SSAS in Fig. 3 and Figs. 7(a), 8(a) and 9(a), respectively. Since LS is a highly cross-linked polymer consisting of various phenyl-propanoid units, with coniferyl alcohol, sinapyl alcohol and p-coumaryl alcohol identified as the three main components [60], a negatively charged backbone and countering cations are formed when it is hydrolysed. As a result, electrostatic repulsion has been considered as the dominant dispersion mechanism. Although deteriorated SP performance in AAS has been reported by various researchers [61][62][63][64], which was primarily attributed to the cleavage of some functional groups of the SPs, the current results suggest that not all the functional groups are unstable in an alkaline environment and, hence, the addition of LS can still increase the dispersion of SSAS [48].
Furthermore, as anticipated, among the three addition methods, fewer agglomerates were observed in the SSAS pastes prepared by both DA and PA (as shown in Fig. 14(c) and (d), respectively) compared to that of SA (Fig. 14(b)). In addition, as can be seen in Fig. 15(a), the largest portion of particles with Heywood diameters < 5 μm was obtained under PA, indicating that the best dispersion among the three addition methods was achieved by PA. This observation again correlates very well with the results presented in Fig. 5(b), which showed that, regardless of the LS dosage, the smallest thixotropic area was obtained under PA. Furthermore, the observation from Fig. 14 is also corroborated by the adsorption results presented in Fig. 11, where PA showed the highest LS adsorption of the three addition methods and is therefore expected to show the best dispersion of slag particles [31].
To conclude, based on the ESEM observation as well as the quantitative analysis by ImageJ, it is evident that better dispersion of slag particles can be achieved from the separate addition of LS and alkaline activator.In particular, the best is achieved by the PA method (i.e., adding LS 3 min before activator), which is believed to be due to the reduced competition in adsorption between LS and alkaline activator as well as the increased stability of the LS prior to the inclusion of alkaline activator in the SSASs [48].It has been well-established that the presence of sufficient amounts of small particles is one of the prerequisite conditions for the occurrence of shear thickening behaviour [65].Therefore, the shear thickening behaviour observed in this paper when LS and alkaline activator were added separately is mainly attributed to the improved dispersion of the slag particles as clearly showed in Figs. 14 and 15.This is because, in theory, a shear thickening behaviour could occur under two conditions, namely, high-volume fraction of the solids and non-flocculated particles [11].Under such a circumstance, when the hydrodynamic forces overcome the repulsive forces, such as the electrostatic force, Brownian force and steric force [65,66], temporary agglomerates can be formed.As a result, shear thickening could occur as the viscosity increases with increasing shear rate due to the enlargement of the particle clusters [67].
Conclusions
Based on the results presented in this paper, the following conclusions can be drawn: 1) Both the initial workability and the 60-min workability retention of lignosulfonate-superplasticised SSAS (in terms of the minislump) can be effectively improved by the separate addition of LS and alkaline activator at different stages of mixing. Furthermore, the prior addition of LS demonstrated even better workability than the delayed addition of LS. Therefore, the separate addition of LS and alkaline activator at different mixing stages could be a simple and effective approach to tackle the incompatibility issue of applying PC-based SPs in alkali-activated slag. 2) The separate LS addition methods changed the rheological behaviour of SSAS from linear to non-linear. As a result, the Bingham model is no longer the best rheological model for SSAS systems when the LS and the alkaline activator are added separately. Through the detailed analysis presented in this paper, the modified Bingham model has been identified as the best model for describing the rheological behaviour of lignosulfonate-superplasticised SSAS.
3) The addition of LS increased the magnitude of the zeta potential of SSAS. However, compared to the simultaneous addition of LS and alkaline activator, the magnitude of the zeta potential of SSAS decreased with both the prior and delayed addition methods, even though the workability and rheological behaviour of the lignosulfonate-superplasticised SSASs were improved. This observation contradicts the commonly accepted theory, although it could presumably be attributed to the complicated structure of the sodium silicate micelles. Further research is therefore still needed to investigate the synergistic effect of the sodium silicate activator and the LS on the SSASs before the fundamental mechanisms behind the rheological behaviour observed in the current study can be fully understood.
Fig. 4 .
Fig. 4. Minislump of SSAS with different addition methods over time (1.2% LS dosage by the mass of slag).
Fig. 6 .
Fig. 6.Rheological models on flow curve (down curve) of SSAS with different LS addition.
Fig. 7 .Fig. 8 .
Fig. 7. Effect of LS dosage on the rheological properties of SSAS with different addition methods fitted by Bingham model.
Fig. 9 .
Fig. 9. Effect of LS dosage on the rheological properties of SSAS with different addition methods fitted by Herschel-Bulkley model.
Fig. 10 .
Fig. 10.Relationship between initial minislump spread and yield stress calculated by different rheological model.
Fig. 11 .
Fig. 11.Adsorption of LS in SSAS under different addition methods.
Fig. 14 .
Fig. 14.Selected ESEM image of SSAS with different LS addition methods (1.2% LS dosage by the mass of slag).
Fig. 15 .
Fig. 15.Particles size (Heywood Diameter) distribution and cumulative curve of SSAS with different LS addition methods from ESEM image.
Table 1
Chemical composition of slag.
Table 2
Chemical and physical properties of sodium silicate 'Crystal 0503'.
Fig. 2. Flow chart of mixing procedure.
Table 3
Regression analysis of the flow curves fitted by different rheological models.
Table 4
Adsorption characteristics of the LS in SSAS with different addition methods.
Fig. 12. Relationship between adsorption and minislump of SSAS.
"Engineering",
"Environmental Science",
"Materials Science"
] |
Measurement of the branching fraction for the decay $K_S \to \pi \mu \nu$ with the KLOE detector
Based on a sample of 300 million $K_S$ mesons produced in $\phi \to K_L K_S$ decays recorded by the KLOE experiment at the DA$\Phi$NE $e^+e^-$ collider we have measured the branching fraction for the decay $K_S \to \pi \mu \nu$. The $K_S$ mesons are identified by the interaction of $K_L$ mesons in the detector. The $K_S \to \pi \mu \nu$ decays are selected by a boosted decision tree built with kinematic variables and by a time-of-flight measurement. Signal efficiencies are evaluated with data control samples of $K_L \to \pi \mu \nu$ decays. A fit to the reconstructed muon mass distribution finds $7223 \pm 180$ signal events. Normalising to the $K_S \to \pi^+ \pi^-$ decay events the result for the branching fraction is $\mathcal{B}(K_S \to \pi \mu \nu) = (4.56 \pm 0.11_{\rm stat} \pm 0.17_{\rm syst})\times10^{-4}$.
Introduction
The branching fraction for semileptonic decays of charged and neutral kaons together with the lifetime measurements are used to determine the |V us | element of the Cabibbo-Kobayashi-Maskawa quark mixing matrix.The relation among the matrix elements of the first row, |V ud | 2 + |V us | 2 + |V ub | 2 = 1, provides the most stringent test of the unitarity of the quark mixing matrix.Different factors contribute to the uncertainty in determining |V us | from kaon decays [1,2,3] and among the six semileptonic decays the contribution of the lifetime uncertainty is smallest for the K S meson.Nevertheless, given the lack of pure high-intensity K S meson beams contrary to the case of K ± and K L mesons, the K S → πeν decay provides the least precise determination of |V us |, and the branching fraction B(K S → πµν) has not yet been measured.Measurement of this decay mode allows an independent determination of |V us | and to extend the test of lepton-flavour universality to K S semileptonic decays by comparison with the expected value of (4.69 ± 0.06) × 10 −4 [4] derived from B(K S → πeν).
We present a measurement of the K S → πµν branching fraction performed by the KLOE experiment at the DAΦNE φ-factory of the Frascati National Laboratory based on an integrated luminosity of 1.6 fb −1 .DAΦNE [5] is an electron-positron collider running at the centre-of-mass energy of 1.02 GeV colliding e + and e − beams at an angle of π−0.025 rad and with a bunch-crossing period of 2.71 ns.The φ mesons are produced with a small transverse momentum of 13 MeV and K L -K S pairs are produced almost back-to-back with a cross section times the φ → K L K S branching fraction of about 1 µb.The beam energy, the energy spread, the beams transverse momentum and the position of the interaction point are measured using Bhabha scattering events [6].
The K S (K L ) mesons are identified (tagged ) by the observation of a K L (K S ) meson in the opposite hemisphere.This tagging procedure allows the selection efficiency for K S → πµν to be evaluated with good accuracy using a sample of the abundant decay K L → πµν tagged by the detection of K S → π + π − decays.The branching fraction is extracted normalising the number of K S → πµν events to the number of K S → π + π − events recorded in the same dataset.
The KLOE detector
The detector consists of a large-volume cylindrical drift chamber surrounded by a lead/scintillating-fiber finely segmented calorimeter. A superconducting coil around the calorimeter provides a 0.52 T axial magnetic field. The beam pipe at the interaction region is spherical in shape with a 10 cm radius, made of a 0.5 mm thick beryllium-aluminum alloy. Final-focus quadrupoles are located at ±50 cm from the interaction region. Two small lead/scintillating-tile calorimeters [7] are wrapped around the quadrupoles.
The drift chamber [8], 4 m in diameter and 3.3 m long, has 12582 drift cells arranged in 58 concentric rings with alternating stereo angles and is filled with a low-density gas mixture of 90% helium-10% isobutane.The chamber shell is made of carbon fiber-epoxy composite with an internal wall of 1.1 mm thickness at 25 cm radius.The spatial resolution is σ xy = 0.15 mm and σ z = 2 mm in the transverse and longitudinal projections, respectively.The momentum resolution is σ pT /p T = 0.4%, tracks vertices are reconstructed with a spatial resolution of about 3 mm.
The calorimeter [9] is divided into a barrel and two endcaps and covers 98% of the solid angle. The readout granularity is 4.4 × 4.4 cm², for a total of 2440 cells arranged in five layers. Each cell is read out at both ends by photomultipliers. The energy deposits are obtained from the signal amplitudes, while the arrival time and the position along the fibers are obtained from the time differences between the two signals. Cells close in time and space are grouped into energy clusters. The cluster energy E is the sum of the cell energies; the cluster time and position are energy-weighted averages. Energy and time resolutions are σ_E/E = 0.057/√E(GeV) and σ_t = 54 ps/√E(GeV) ⊕ 100 ps, respectively. The cluster spatial resolution is σ_∥ = 1.4 cm/√E(GeV) along the fibers and σ_⊥ = 1.3 cm in the orthogonal direction.
The first-level trigger [10] uses both the calorimeter and the drift chamber information; the calorimeter trigger requires two energy deposits with E > 50 MeV in the barrel and E > 150 MeV in the endcaps; the drift chamber trigger is based on the number and topology of hit drift cells.A second-level cosmicray veto rejects events with at least two energy deposits above 30 MeV in the outermost calorimeter layer.The trigger time is determined by the first particle reaching the calorimeter and is synchronised with the DAΦNE radio frequency signal.The time interval between bunch crossings is smaller than the time spread of the signals produced by the particles, thus the time of the bunch crossing originating the event, T 0 , is determined after event reconstruction and all the times related to that event are shifted accordingly.Data for reconstruction are selected by an on-line filter [11] to reject beam backgrounds.The filter also records the events into different output files for analysis according to their properties and topology (event classification), 5% of the events are recorded without applying the filter to control the efficiency of the event classification.
Data sample and event preselection
Processes of interest for the analysis are simulated with the GEANFI Monte Carlo (MC) program [11] for an integrated luminosity equal to that of the data.All φ decays are generated according to their branching fractions as well as other final states produced in e + e − annihilation.The operating conditions of DAΦNE during data taking as well as measurements of beam parameters are included in the MC on a run-by-run basis.Calorimeter energy deposits and drift chamber hits from beam background acquired with a random trigger are overlaid onto the simulated events.The simulated events are processed with the same reconstruction algorithms as the data.
Kaons from φ-meson decays are emitted into two opposite hemispheres with mean decay paths λ_S = 5.9 mm and λ_L = 3.4 m, thus about 50% of the K_L mesons reach the calorimeter before decaying. The velocity of the K_L in the φ reference system is β* = 0.22. K_S mesons are tagged by K_L interactions in the calorimeter, named K_L-crash in the following, with a clear signature of a late signal of about 25 ns not associated to tracks. The following requirements are applied to select K_L-crash:
• a cluster with energy E_clu > 100 MeV not associated to tracks (neutral cluster); the centroid of the neutral cluster defines the K_L direction with a resolution of ∼1°;
• polar angle of the neutral cluster 15° < θ_clu < 165° to suppress small-angle beam backgrounds;
• 0.17 < β* < 0.28 for the velocity in the φ reference system of the particle originating the neutral cluster; β* is obtained from the velocity in the laboratory system, β = r_clu/(c t_clu), with t_clu being the cluster time and r_clu the distance from the nominal interaction point, together with the φ transverse momentum determined run-by-run and the angle between the φ momentum and the K_L-crash direction.
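A minimal sketch of the boost from the laboratory velocity to the φ rest frame, assuming the standard relativistic velocity transformation (the variable names are illustrative, not taken from the KLOE reconstruction code):

```python
import numpy as np

C_CM_PER_NS = 29.9792458  # speed of light in cm/ns

def beta_star(r_clu_cm, t_clu_ns, p_phi_mev, e_phi_mev, cos_alpha):
    """Velocity of the K_L-crash candidate in the phi rest frame.

    r_clu_cm, t_clu_ns : cluster distance from the interaction point and cluster time
    p_phi_mev, e_phi_mev : phi momentum and energy (measured run-by-run)
    cos_alpha : cosine of the angle between the phi momentum and the cluster direction
    """
    beta_lab = r_clu_cm / (C_CM_PER_NS * t_clu_ns)
    beta_phi = p_phi_mev / e_phi_mev
    gamma_phi = 1.0 / np.sqrt(1.0 - beta_phi**2)

    # decompose the lab velocity parallel/perpendicular to the phi momentum
    b_par = beta_lab * cos_alpha
    b_perp = beta_lab * np.sqrt(max(0.0, 1.0 - cos_alpha**2))

    # relativistic velocity addition into the phi rest frame
    denom = 1.0 - b_par * beta_phi
    b_star_par = (b_par - beta_phi) / denom
    b_star_perp = b_perp / (gamma_phi * denom)
    return np.hypot(b_star_par, b_star_perp)
```

Candidates satisfying 0.17 < β* < 0.28 would then be retained as K_L-crash.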
Assuming the neutral kaon mass, the K_S 4-momentum is obtained from the K_L-crash direction and the φ 4-momentum. The K_S → πμν candidates are selected by requiring two tracks of opposite curvature forming a vertex inside a cylindrical fiducial volume around the interaction point. In case more than one vertex is found, the one closest to the interaction region is chosen. The above requirements define the event preselection. After preselection, the data sample contains about 300 million events and its composition, as evaluated by simulation, is shown in Table 1. The large majority of events are K_S → π⁺π⁻ decays, and there is also a large contribution from φ → K⁺K⁻ events where one kaon or its decay products generate a fake K_L-crash and the other kaon decays early into π±π⁰. The distribution of β* is shown in Figure 1 for data and simulated events. Two peaks are visible: the first is associated to events triggered by photons or electrons, and the second to events triggered by charged pions. The trigger is synchronised with the bunch crossing, and the time difference between a photon (or electron) and a pion (or muon) arriving at the calorimeter corresponds to a time shift of about one bunch crossing.
Selection of signal and normalisation events
The selection of signal events is performed in two steps; first a selection based on the event kinematics using only tracking variables and then a selection based on the time-of-flight measured with the calorimeter.The two groups of variables are uncorrelated.In order to assign a time to the particles each track is associated to a cluster.The track-to-cluster association (TCA) is applied as follows: for each track connected to the vertex a cluster with E clu > 20 MeV and 15 • < θ clu < 165 • is required whose centroid is within 60 cm of the track extrapolation to the calorimeter front surface.The event is retained only if TCA is satisfied by both tracks.
Five variables with good discriminating power against the background are used in a multivariate analysis. A boosted decision tree (BDT) classifier is built from these variables, whose distributions are shown in Figure 2 for data and simulated events.
After preselection, two cuts are applied to suppress the background in the tails of the distributions: p < 320 MeV for both tracks and ∆p < 190 MeV. The training of the BDT classifier is done on a simulated sample of 5,000 K_S → πμν events and a sample of 50,000 background events; samples of the same size are used for the test. After training and testing, the classification is run on all events of the MC and data samples. The distribution of the BDT classifier output is shown in Figure 3 for data and simulated events. The data distribution is well reproduced by the simulation in the region populated by the signal. To suppress the large background, a cut on the BDT classifier output is chosen to maximise the ratio S/√(S + B), where S and B are the signal and background yields.
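A minimal sketch of the figure-of-merit scan used to choose such a working point (arrays of BDT outputs for simulated signal and background events are assumed inputs):

```python
import numpy as np

def best_bdt_cut(bdt_sig, bdt_bkg, w_sig=1.0, w_bkg=1.0, n_steps=200):
    """Scan thresholds on the BDT output and return the one maximising S/sqrt(S+B)."""
    thresholds = np.linspace(min(bdt_sig.min(), bdt_bkg.min()),
                             max(bdt_sig.max(), bdt_bkg.max()), n_steps)
    best_t, best_fom = None, -np.inf
    for t in thresholds:
        S = w_sig * np.sum(bdt_sig > t)   # expected signal yield above the cut
        B = w_bkg * np.sum(bdt_bkg > t)   # expected background yield above the cut
        if S + B == 0:
            continue
        fom = S / np.sqrt(S + B)
        if fom > best_fom:
            best_t, best_fom = t, fom
    return best_t, best_fom
```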
The selected events contain ππ, Kπ, eπ track pairs for the main backgrounds and µπ for the signal. A selection based on the time-of-flight measurement is performed to identify µπ pairs. For each track associated with a cluster, the difference δt_i = t_clu,i − L_i/(c β_i) is computed, where t_clu,i is the time of the cluster associated with track i, L_i is the length of the track, and β_i = p_i/√(p_i² + m_i²) is a function of the mass hypothesis for track i. To reduce the uncertainty due to the T_0 determination, the difference between the δt values of the two tracks is used to determine the mass assignment to the tracks. The ππ hypothesis is tested first; the distribution of δt_ππ = δt_1,π − δt_2,π is shown in Figure 4 (left). A fair agreement is observed between data and simulation, the K_S → πµν and K_S → πeν distributions are well separated, and the K+K− background is isolated in the tails of the distribution; however, the signal is hidden under a large K_S → π+π− background. To reduce this background, a cut is applied on δt_ππ. The number of surviving events in the data sample is 38686 and its composition as evaluated by simulation is listed in Table 2. After the mass assignment to the two tracks, the squared invariant mass m²_µ of the charged particle identified as the muon is evaluated using p²_miss = (p_KS − p_π − p_µ)², with E_KS and p_KS the energy and momentum reconstructed using the tagging K_L, and p_π, p_µ the momenta of the candidate pion and muon tracks.
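A small sketch of the per-track time-of-flight test under a mass hypothesis; the difference of the two tracks' δt values then cancels the common event T_0, as used above. All numerical values below are placeholders, not measured quantities:

```python
import numpy as np

C_LIGHT = 29.9792458           # speed of light in cm/ns
M_PI, M_MU = 139.57, 105.66    # charged-pion and muon masses in MeV

def delta_t(t_clu, track_length, p, mass):
    """Cluster time minus expected time of flight under the given mass hypothesis."""
    beta = p / np.sqrt(p**2 + mass**2)
    return t_clu - track_length / (C_LIGHT * beta)

# Placeholder track: 170 cm path, 200 MeV momentum, 6.0 ns cluster time.
dt_pi = delta_t(6.0, 170.0, 200.0, M_PI)
dt_mu = delta_t(6.0, 170.0, 200.0, M_MU)
print(f"delta_t(pi) = {dt_pi:.2f} ns, delta_t(mu) = {dt_mu:.2f} ns")
```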
The number of signal events is extracted with a fit to the m²_µ distribution with the MC shapes of three components: K_S → πµν, K_S → π+π−, and the sum of all other backgrounds. The fit is performed in the range −6000 < m²_µ < 24000 MeV² with 48 degrees of freedom. The third component, which is peaked around m²_e, is constrained to a negligible value by the fit. Figure 5 shows the distribution of m²_µ for data, simulated events and the fit, and Table 3 presents the result of the fit. The number of signal events is N_πµν = 7223 ± 180, with χ²/ndf = 30/48. The normalisation sample of K_S → π+π− events is selected by requiring 140 < p < 280 MeV for both tracks (Figure 2). This requirement selects N_ππ = (282.314 ± 0.017) × 10⁶ events with a purity of 99.9% as determined by simulation.
Determination of efficiencies
The branching fraction for the K_S → πµν decay is evaluated as
B(K_S → πµν) = (N_πµν / ε_πµν) · (ε_ππ / N_ππ) · R · B(K_S → π+π−),   (6)
where N_πµν and N_ππ are the numbers of K_S → πµν and K_S → π+π− events, ε_πµν and ε_ππ are the respective selection efficiencies, and R is the ratio of the efficiencies for the trigger, on-line filter and preselection for the two decays.
The signal selection efficiency is determined with K_L → πµν control samples (CS) and evaluated as
ε_πµν = ε_CS · (ε^MC_πµν / ε^MC_CS),   (7)
where ε_CS is the efficiency measured on the control sample and ε^MC_πµν, ε^MC_CS are the efficiencies obtained from simulation for the signal and control samples, respectively.
The K_L → πµν decay [12,13] is kinematically identical to the signal, the only difference being the much longer decay path. For the control sample the tagging is done with K_S → π+π− decays, preselected in the same way as for the signal sample with the additional cut |m_ππ − m_K0| < 15 MeV to increase the purity. The radial distance of the K_L vertex is required to be smaller than 5 cm to match the signal selection, but greater than 1 cm to minimise the ambiguity in identifying the K_L and K_S vertices. The control sample is composed mainly of K_L → πeν, K_L → π+π−π0 and K_L → πµν decays, while most K_L → π0π0π0 decays are rejected by the requirement of two tracks.
The distribution of the missing mass, m²_miss, of the two tracks connected to the K_L vertex, assigning the charged-pion mass, shows a narrow isolated peak at the π0 mass; a cut m²_miss < 15000 MeV² efficiently rejects the K_L → π+π−π0 decays. The number of events in the control sample is 911757.
In order to evaluate the signal selection efficiency, two control samples are used, one selected based on kinematic variables and the other based on time-of-flight (TOF), the two groups of variables being largely uncorrelated.
The control sample for evaluating the efficiencies of the selection with kinematic variables and BDT classifier is selected by applying a cut on the two-dimensional (δt_πµ, δt_µπ) distribution that removes most of the K_L → πeν events. The purity of the sample as determined with simulation is 86%. The resolutions in the measurement of the tagging K_S (control sample) are similar to those of the tagging K_L (signal sample), and the same BDT classifier is used for both samples. The BDT MC distributions for the signal and control sample are compared in Figure 6 (left). Applying to the control sample the same selections as for the signal, the efficiencies evaluated with Eq. (7) are ε(kinem. sel.) = 0.982 ± 0.004_stat and ε(BDT) = 0.417 ± 0.003_stat. To evaluate the TCA and TOF efficiencies for the signal, the T_0 determination using the earlier of the two clusters associated with the K_S decay has to be considered. The control sample selection therefore requires the earliest cluster to be associated with one of the charged secondary particles from the K_L decay, and a cut on the (m_ππ, m²_miss) distribution to reject K_L → πeν events. The purity of the sample as determined with simulation is 87%. The MC distributions of δt_µ for the signal and control sample are compared in Figure 6 (right). Applying to the control sample the same analysis procedure as for the signal, the efficiencies evaluated with Eq. (7) are ε(TCA) = 0.347 ± 0.002_stat and ε(TOF) = 0.392 ± 0.003_stat.
The correction factors in Eq. (7) differ from unity by less than 10%, except for ε(TOF), where the difference is 20%.
The tails of the m²_µ distribution in Figure 5 (left) are not included in the fit to improve its stability; the corresponding relative efficiency is 0.991 ± 0.001.
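As a quick cross-check, multiplying the partial efficiencies quoted above (including the 0.991 fit-range factor) reproduces the total signal efficiency quoted just below, up to the correlations between control samples that the paper accounts for:

```python
import math

# Partial efficiencies quoted in the text (statistical errors omitted here).
eps = {"kinem": 0.982, "BDT": 0.417, "TCA": 0.347, "TOF": 0.392, "fit range": 0.991}

total = math.prod(eps.values())
print(f"naive product of partial efficiencies: {total:.4f}")   # ~0.0552
```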
The signal selection efficiencies are summarised in Table 4, where only the statistical errors are shown. Combining the values, accounting for the correlation of the control samples, we obtain ε_πµν = 0.0552 ± 0.0005. The ratio R in Eq. (6) accounts for several effects, all depending on the global properties of the event: trigger, on-line filter, event classification, T_0 determination, K_L-crash and K_S identification. The various contributions to R evaluated with simulation are listed in Table 5, where only the statistical errors are shown. The efficiency of the K_S → π+π− normalisation sample is measured using the preselected data by varying the cut on the vertex transverse position, as in Eq. (1), in 1 cm steps from ρ^max_vtx = 1 cm to ρ^max_vtx = 4 cm, based on the observation that ρ_vtx and the track momenta are very loosely correlated. Using Eq. (7) and extrapolating to ρ^max_vtx = 5 cm, the efficiency is ε_ππ = (96.569 ± 0.004)%. Alternatively, the efficiency is evaluated using the K_S → π+π− data sample (with ρ^max_vtx = 5 cm): ε_ππ = (96.657 ± 0.002)%. The latter value is used as the efficiency and the difference between the two values is taken as a systematic uncertainty.
T_0 - The systematic uncertainty is evaluated by analysing the data and MC T_0 distributions for the decays with the most different timing properties: K_S → π+π− and K_S → π0π0 [14]. The data over MC ratio is one with an uncertainty of less than 0.1%.
K_L-crash and β* selection - The systematic uncertainty is evaluated by comparing data and simulated events tagged by K_S → π+π− and K_S → π0π0 decays, which have different timing and topology characteristics. The data over MC ratio is 1.001 with negligible error.
K_S identification - The systematic uncertainty due to the requirement of two tracks forming a vertex in the cylinder defined by Eq. (1) is evaluated separately for the signal and normalisation samples. The first is evaluated with K_L → πµν events selected with the same vertex requirements as for the signal but tagged by K_S → π0π0 decays. For the K_S → π+π− sample the efficiency is evaluated by tagging with K_L-crash and removing the vertex requirement. Combining the two values gives a data over MC ratio of 1.002 ± 0.017, where the error is due to the purity of the samples.
The total systematic uncertainty on R is estimated by combining the data over MC ratios and amounts to 1.7%.
All systematic uncertainties are summarised in Table 6.
Result
From Eq. (6) with N_πµν = 7223 ± 180, N_ππ/ε_ππ = (292.08 ± 0.27) × 10⁶, the values of the efficiencies ε_πµν = 0.0552 ± 0.0017 and R = 1.472 ± 0.025, and the value B(K_S → π+π−) = 0.69196 ± 0.00051 measured by KLOE [15], we derive the branching fraction B(K_S → πµν) = (4.56 ± 0.11_stat ± 0.17_syst) × 10⁻⁴ = (4.56 ± 0.20) × 10⁻⁴. This is the first measurement of this decay mode and completes the set of kaon semileptonic decays. The branching fraction for the K_S (K_L) → πℓν decay is related to the weak coupling constant and V_us through a relation, Eq. (8), in which f_+(0) is the hadronic form factor at zero momentum transfer, m_K and τ_S are the K_S mass and lifetime, I_K is the phase space integral, S_EW is the short-distance electroweak correction [17] and δ^K_EM is the long-distance electromagnetic correction [18,19]. Assuming universality of the kaon-lepton coupling, the expected value [4] is B(K_S → πµν) = (4.69 ± 0.06) × 10⁻⁴, as derived from the value of the branching fraction B(K_S → πeν) measured by KLOE [15] and the ratio R(I_K) of the phase space integrals for the semileptonic decays K_L → πµν and K_L → πeν measured by KTeV [16]. Inverting Eq. (8) and using I^µ_K = 0.10262 ± 0.00047 [3] we derive |f_+(0) V_us|_{K_S→πµν} = 0.2126 ± 0.0046.
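The central value can be reproduced from the quoted inputs with a few lines of arithmetic; the uncertainty below is a naive uncorrelated propagation, not the paper's full statistical and systematic treatment:

```python
import math

# Inputs quoted above.
N_sig, dN_sig = 7223, 180
N_over_eps_pipi, dN_over_eps = 292.08e6, 0.27e6
eps_sig, deps_sig = 0.0552, 0.0017
R, dR = 1.472, 0.025
B_pipi, dB_pipi = 0.69196, 0.00051

B = (N_sig / eps_sig) / N_over_eps_pipi * R * B_pipi
rel = math.sqrt((dN_sig / N_sig) ** 2 + (dN_over_eps / N_over_eps_pipi) ** 2
                + (deps_sig / eps_sig) ** 2 + (dR / R) ** 2 + (dB_pipi / B_pipi) ** 2)
print(f"B(KS -> pi mu nu) ~ {B:.3e} +/- {B * rel:.1e}")   # ~4.56e-4 +/- 0.20e-4
```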
These results are consistent with those determined for the other kaon semileptonic decays [1,3] though less precise mainly due to the intrinsic limitations related to µ-π discrimination in the momentum range 100-250 MeV.
Conclusion
A measurement of the branching fraction for the decay K_S → πµν is presented, based on data collected with the KLOE experiment at the DAΦNE e+e− collider corresponding to an integrated luminosity of 1.6 fb⁻¹. The φ → K_L K_S decays are exploited to select samples of pure and quasi-monochromatic K_S mesons and data control samples of K_L → πµν decays. The K_S → πµν decays are selected by a boosted decision tree built with kinematic variables and by a measurement of time-of-flight. The efficiencies for detecting the K_S → πµν decays are derived from K_L → πµν data control samples. A fit to the m²_µ distribution finds 7223 ± 180 signal events. Normalising to K_S → π+π− decay events, the result for the branching fraction is B(K_S → πµν) = (4.56 ± 0.11_stat ± 0.17_syst) × 10⁻⁴, to be compared with the expected value of (4.69 ± 0.06) × 10⁻⁴ assuming lepton-flavour universality.
Figure 1: Distribution of β* after preselection for data and simulated events.
BDT input variables: p_1, p_2: the track momenta; α_{1,2}: the angle at the vertex between the two momenta in the K_S reference system; α_SL: the angle between p_sum = p_1 + p_2 and the K_L-crash direction; ∆p: the difference between |p_sum| and the absolute value |p_KS| of the K_S momentum determined using the tagging K_L; m_ππ: the invariant mass reconstructed from p_1 and p_2, in the hypothesis of the charged-pion mass.
Figure 2: Distributions of the variables used in the multivariate analysis for data and simulated events after preselection. From top left: track momenta (p_1, p_2), angle between the two tracks in the K_S reference system (α_{1,2}), angle between the K_L and K_S directions (α_SL), two-track invariant mass in the hypothesis of charged pions (m_ππ), ∆p = |p_sum| − |p_KS|.
δt: difference between the time measured by the calorimeter and the time-of-flight computed along the particle trajectory.
Figure 3: Distribution of the BDT classifier output for data and simulated events.
Figure 4: Distributions of δt_ππ (left) and δt_µ (right) for data and simulated events.
Figure 5: The m²_µ distribution for data, MC signal and background (left); comparison of data with the fit (right).
Figure 6: Normalised Monte Carlo distributions of the BDT classifier output (left) and δt_µ (right) for K_L → πµν and K_S → πµν events.
Table 1: Number of data and simulated events after preselection.
Table 2: Number of events after the δt_µ selection for data and simulated events.
Table 3: Result of the fit to the m²_µ distribution.
Table 4: Efficiencies for the signal selections. The errors are statistical; the error of the total efficiency accounts for the correlation of the control samples.
Table 5: Contributions to the ratio of efficiencies R in Eq. (6). The error on R is calculated as the quadratic sum of the errors of the single ratios.
Table 6: Summary of systematic uncertainties of ε_ππ, ε_πµν and R. | 6,147.4 | 2019-12-12T00:00:00.000 | [
"Physics"
] |
Optimal Vaccination of a General Population Network via Genetic Algorithms
Herein we extend the work of Patel et al. (1) to find the approximate, optimal distribution of vaccinations against a virus spreading on a network, with the use of Genetic Algorithms (GAs). We couch our investigation in an online social network, a Facebook graph of ~4000 nodes. Within this framework we compare the performance of an optimized vaccine distribution scheme to that of a random distribution benchmark, and show the algorithm is superior to this benchmark at all vaccine coverage values.
Introduction
The role of vaccination in public health can hardly be overstated. The reductions in rates of Polio and Measles are a direct result of the preventative effectiveness of vaccines. The ability of such prophylactics to save lives, and to a lesser extent reduce the financial burden on a society, is therefore of great importance to the scientific community. The topic of vaccination is a large and complex field (2). There has been a fair amount of work done on the optimal method to vaccinate a population under various schemes. From the perspective of mean-field models, these schemes seek the best distribution over time, and generally with respect to some cost of the vaccine. Under these deterministic models one can employ methods from Optimal Control to find analytical solutions to the problem (3)(4)(5), for example: vaccination in a two-strain Tuberculosis model (6). This forward-and-back optimization is computationally highly effective, but the lack of flexibility of the methodology works against it. Mathematical Epidemiology has increasingly turned to modeling of spreading processes on complex networks (7), as they relax some of the harder assumptions of mean-field models by providing a contact structure. With this added complexity come rich, realistic dynamics (Fig. 1), but the models often move further from analytical tractability. Some aspects of epidemics on networks can nevertheless be characterized analytically, such as the critical transmissibility needed to infect a sizable fraction of the network (akin to the basic reproduction number), and spreading rates (2). As these models grow in sophistication, so do the vaccination schemes. Vaccination schemes on networks can readily adopt the framework of Statistical Physics and map the problem to site percolation; hence some of the characterizations mentioned above are possible. Given this thread of thinking, some schemes aim to increase the percolation threshold (by removing nodes through vaccination) and thereby reduce the spread of the virus. Alternately, many schemes are based on heuristic models characterized by node measures of the network, such as degree correlations, or differing types of centrality, such as betweenness, eigenvector, random-walk, and closeness centrality (2,8). All of these methods assume global knowledge of the network, which in reality is not often available. There has been work on vaccination methods on networks using only local properties, such as acquaintance vaccination (2). Finally, there are other more specific targeting schemes such as ring vaccination (9). Although this subfield is well populated, we believe there is a need for a method to find an (approximate) optimal solution, for use in the field, and also to benchmark the heuristic measures mentioned. We therefore look to the work of Patel et al. (1) and extend it to networks. Patel et al. devised a scheme to deliver the optimal number of vaccinations to each age group in their age-stratified, metapopulation Influenza model. They considered two numerical optimization schemes to benchmark against a random distribution of vaccines, namely Genetic Algorithms and Random Hill Climbing with mutation. The former was found to be optimal, and hence is the focus of this study. Genetic Algorithms, succinctly described by Hamblin (10), "...are a heuristic global optimization technique mimicking the action of natural selection to solve hard optimization problems..." The work of Patel and co.
was novel, but lacked generality beyond a metapopulation model to a network contact structure. It is here that our work sits. We extend their model to a general contact-structure network and at the same time bring parsimony to certain aspects of the algorithm, to create a general, flexible method for finding an optimal vaccination strategy on general networks.
Spreading and Vaccination
For our work, we analyze a simple contagion model; we use the SIR model as our basis, for its generality with respect to mimicking many viral traits. We adapt the model to a network, where each node of the network is a person. We assume that a person can be infected by infected nearest neighbors with a probability given by Eq. (1), where η is the number of infected nearest neighbors and β is the probability that one infected neighbor will infect a susceptible neighbor in a given timestep dt. Nodes recover with a probability γ in a given timestep dt. These surprisingly rich dynamics are shown in Fig. 1.
Fig. 1. Simulated outbreaks of Eq. (1) on the Facebook network (Fig. 2): 500 outbreaks (with random starting locations, a single node infected each time) are shown individually as grey lines, whose mean is given in red. Recovery time is modeled after Influenza, six days (γ = 1/6).
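Eq. (1) is not reproduced legibly above; a common choice for the per-timestep infection probability with η infected neighbours is 1 − (1 − β)^η, and the sketch below assumes that form (the graph, state encoding, and function names are ours, not the authors'):

```python
import random

def sir_step(graph, state, beta, gamma):
    """One synchronous SIR update on an adjacency-list graph.
    state[i] is 'S', 'I' or 'R'. Assumes P(infection) = 1 - (1 - beta)**eta."""
    new_state = dict(state)
    for node, neighbours in graph.items():
        if state[node] == "S":
            eta = sum(state[n] == "I" for n in neighbours)   # infected nearest neighbours
            if eta and random.random() < 1.0 - (1.0 - beta) ** eta:
                new_state[node] = "I"
        elif state[node] == "I" and random.random() < gamma:
            new_state[node] = "R"
    return new_state

# Tiny illustrative graph (not the Facebook network from the paper).
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
s = {0: "I", 1: "S", 2: "S", 3: "S"}
for _ in range(20):
    s = sir_step(g, s, beta=0.4, gamma=1 / 6)
print(s)
```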
In our model, nodes which are initially Susceptible can be vaccinated. That is, vaccination has the role of shifting the health state from Susceptible, to Recovered, and is assumed to be 100% effective. Immunity is considered lifelong.
Optimization Scheme. For the optimization scheme, we follow the path laid out by Patel and co. (1), which we briefly review here, with an adjustment appropriate for networks.
In their work, the spreading model consists of an age-stratified, metapopulation model. Here we remove the stratification (such that all nodes are homogeneous) but enrich the contact structure to a network of N nodes (people). We suppose that we have n_v vaccines available (one per person) at the beginning of the simulation, before any infection, that we can distribute. The model is conducted as follows. Construct m individuals, conceptually thought of as vaccination strategies, which are vectors of genes, where each vector is the genome of the individual (in the nomenclature of GAs). We denote the i-th individual/strategy as v_i. Each index of an individual maps to a distinct node on the network. The elemental value at that index, referred to as the gene, or locus, can be 0 or 1, denoting that the node to which it maps is either non-vaccinated or vaccinated, respectively; the efficacy of the vaccines is assumed to be 100%. Note then that the genes of each individual must sum to n_v. An outbreak is initiated post-vaccination, where one random node is infected per simulation. For each individual, the simulation of a virus spreading on the network is run e_n times (the ensemble size). The numbers of recovered nodes (excluding those initially vaccinated) are summed to give the total number of nodes affected by the disease. The lower this number, the higher the so-called fitness of the individual. We encode this mathematically as Eq. (3).
where f_i^j is the fitness of the i-th individual for the j-th simulation of the ensemble, and R_i^j denotes the final number of recovered nodes at the end of that simulation. The mean fitness for a given individual is therefore f̂_i = (1/e_n) Σ_j f_i^j. Note: simulations are conducted until no infected nodes are left.
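A minimal sketch of the fitness evaluation described above. The exact cost function of Eq. (3) is not legible here, so the fitness is assumed to be minus the ensemble-averaged number of recovered nodes excluding initially vaccinated ones; `run_outbreak` is a hypothetical helper, not part of the paper:

```python
def mean_fitness(strategy, graph, beta, gamma, ensemble_size, run_outbreak):
    """Average fitness of one vaccination strategy (a 0/1 gene per node).

    run_outbreak(graph, vaccinated, beta, gamma) is assumed to simulate a single
    outbreak to extinction and return the set of nodes that ended up Recovered.
    Fitness here is minus the number of recovered nodes that were not vaccinated;
    the exact cost function of Eq. (3) may differ in detail.
    """
    vaccinated = {node for node, gene in enumerate(strategy) if gene == 1}
    total = 0
    for _ in range(ensemble_size):
        recovered = run_outbreak(graph, vaccinated, beta, gamma)
        total += len(recovered - vaccinated)
    return -total / ensemble_size
```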
For each individual, the mean fitness is found for a given generation. The whole procedure is iterated over many generations, g. Between each generation, genetic information exchange occurs between individuals. Following Patel et al.'s work, we use both Tournament selection (below) and elitist selection (10): select the top fittest half of the individuals to be immediately passed to the next generation. Tournament selection is used to construct the remaining individuals such that the total number of individuals per generation remains constant. The tournaments are as follows.
Mating Scheme and Tournaments. To construct the remaining individuals to pass onto the next generations, firstly, select a pool of 10 individuals without replacement from the whole set. From these find the one with the highest average fitness: the paternal individual. Construct another tournament from the current pool of individuals sans the paternal individual, of the same size. Select the one with the highest fitness: the maternal individual. From here we mate the two via crossover breeding. The mixing of gene states (0 or 1) is not necessarily 50/50. We designate this as the crossbreeding factor, c B (≤ 1).
In this situation, to construct the offspring individual from the paternal and maternal ones, we consider each gene index (vector index) in sequence. With probability c_B we select the gene from the paternal individual; otherwise we select it from the maternal individual. Once all genes have been chosen, we need to balance the number of vaccines issued in the offspring individual, which may be greater or less than the total number of vaccines required, n_v. If a greater number of vaccines is issued to the offspring, we collect all indexes whose value equals unity and, without replacement, select at random the same number of genes as the excess number of vaccines issued; those genes are set to 0. If too few vaccines are issued, we collect all null genes and, without replacement, select at random the number of genes needed to make up the total number of vaccines issued per individual; those selected genes are set to unity. This final adjustment of the offspring individual serves as a mutation step in the creation process. Here we depart from the original authors' scheme, due to the differing nature of our model and in seeking a more parsimonious algorithm.
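A minimal sketch of the tournament pick and the crossover-plus-repair step described above (function and parameter names are ours; drawing the maternal individual from a pool that excludes the paternal one is left to the caller):

```python
import random

def tournament_pick(population, fitness, k=10):
    """Pick the fittest individual among k sampled without replacement."""
    pool = random.sample(range(len(population)), k)
    return max(pool, key=lambda i: fitness[i])

def crossover(parent_a, parent_b, n_vaccines, c_b=0.8):
    """Gene-wise crossover followed by the repair/mutation step described above."""
    child = [a if random.random() < c_b else b for a, b in zip(parent_a, parent_b)]
    issued = sum(child)
    if issued > n_vaccines:                       # too many vaccines: unset random ones
        ones = [i for i, g in enumerate(child) if g == 1]
        for i in random.sample(ones, issued - n_vaccines):
            child[i] = 0
    elif issued < n_vaccines:                     # too few vaccines: set random ones
        zeros = [i for i, g in enumerate(child) if g == 0]
        for i in random.sample(zeros, n_vaccines - issued):
            child[i] = 1
    return child
```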
Convergence. We set the convergence of the algorithm in a similar fashion to before: if the top mean fitness of an individual has not changed after c generations, or the algorithm has exceeded g generations of computation, we assume convergence.
Random Vaccination.
To benchmark our work, we measure the approximate optimal vaccination scheme against a random distribution of vaccines over the network, irrespective of the nodal hierarchy. Again, we run the random distribution over an ensemble of simulations, and take the mean fitness to represent the benchmark vaccination proficiency level.
Model Initialization and Parameters.
To investigate the performance of our methodology we require a representative real-world contact network. We have thus chosen to analyze a Facebook network provided by SNAP (11). This network is undirected, with 4039 nodes, 88234 edges, and a diameter of 8 - a size which is not trivial, but still computationally tractable. This network was investigated in (12) and is displayed in Fig. 2. To compare the GA vaccination scheme to the random benchmark scheme, we consider the same network structure in each case. During the simulations, the viral and simulation parameters are held constant: namely, we set γ = 1/6 and β = γ · 2.5. All simulations are initiated by infecting a single node that varies randomly for each simulation; this variation is accounted for by running the simulation many times (the ensemble size). We consider a range of vaccine coverage rates [5-90%], and run the simulations on the network such that, for the given spreading process, each simulation runs until there are no more infected nodes on the network.
Fig. 3. We show the performance of the GA approach ('GA.', red) against a random vaccine strategy ('Rand.', black). We also include no vaccination ('Sans', black dot-dashed line). Simulations are conducted on the network shown in Fig. 2 for a simple contagion process [β = 1/6, γ = β · 2.5, Eq. (1)]. On the y-axis we show the average number of people infected (and then recovered) from the virus, less the number of vaccines issued, for each simulation. As expected, the mean number infected drops linearly with a random strategy, but we see a marked difference in the effectiveness of the GA scheme, which is able to find a far better arrangement at all vaccine coverage values. The remaining parameters are listed in Table 1.
For the GA, we use 50 individuals. The initial (random) individuals for the GA are chosen prior to infecting the network for any simulation; the network is immunized before selecting the infected node. The top 25 fittest individuals are passed to each new generation. The remaining 25 are created via a tournament scheme composed of 10 individuals. We select our crossover breeding percentage to be 80%, as in Patel et al. We run the GA for 20 generations, or until the algorithm converges. The parameters for our simulations are given in Table 1.
Vaccination Scheme Performances. In Fig. 3, for each combination of parameters, we show the mean fitness of the GA and the random benchmark (including no vaccination at all). We see in Fig. 3 that the GA approach is superior to the random strategy at every vaccination level. The random strategy is approximately linear in its effectiveness, whereas the GA response could likely be described as quadratic. In comparison: to ensure that half of the population is safe from the virus, the random strategy on average requires a 40% coverage rate, whereas the GA strategy requires 15% coverage, an extremely efficient response.
Discussion and Conclusion
We have extended the work by Patel et al. (1) to consider a general network structure with a simple spreading process upon it. We have shown, via Fig. 3, that our Genetic Algorithm scheme was able to find an approximate, optimal vaccination strategy, for the network considered in Fig. 2, that consistently beats the random scheme for any value of vaccine coverage. Through this we show the value of this meta-heuristic approach: With the advent of cheap computational power, one can find the optimal vaccination scheme for highly dimensional, complex models. The flexibility of this model, coupled with its simplicity, is its strength.
Future Work. As this work is a preliminary use-case investigation, there is much to do in terms of future work with respect to this framework. Simply, one could test the sensitivity of the results with respect to the viral parameters, and also with respect to network structure. It could be that the GA is more efficacious for some structural classes of networks than others. If that is the case, benchmarking this algorithm against other meta-heuristic optimization algorithms, such as Simulated Annealing (15), would be beneficial.
As this method is effectively rather simple, the cost of computation is non-trivial. One could investigate the computational cost of convergence, with respect to the algorithmic parameters, namely: tournament size, individual size, etc. Finally: It will be interesting to see how the methodology can be adapted to other optimization problems on networks, such as managing disaster spreading via external resources (8), or in situations of complex contagion, such as advertising and opinion spreading [which would likely be reflected through changes to the fitness/cost function, Eq. (3)].
Heuristic strategies inspired by GAs. We believe that the GA scheme offers more than simply a method to find the optimal solution to the vaccination problem on networks: It can be used to inspire other heuristic strategies of vaccination. Namely, when conducting simulations, we believe there will likely be certain nodes of greater importance to minimizing spread through their own vaccination. It is therefore likely, that over a range of coverage values, these nodes are picked out via the algorithm more than others -essentially weighting their importance more. Comparing the likelihood of vaccination for a given coverage as a function of nodal degree (or any other node measure on a network) could then inspire, or help validate some of the heuristic schemes mentioned in the Introduction, which are based on these nodal properties.
Machine Learning based on GA feature sets. Looking further ahead: The GA can also be used as a basis to create a training set for Machine Learning algorithms with respect to vaccination schemes. One could pose the question: Can we teach an algorithm, given a snapshot of a susceptible network, the viral parameters, and the vaccine coverage, the likely best nodes to vaccinate? We believe, the GA framework explicated here, could be up to the task. Let us outline a possible avenue of investigation.
One could generate many different synthetic networks (be they small-world, lattice, random, or scale-free), alongside real network datasets, to run large ensembles of outbreaks with differing viral parameters. In each case, we could use the GA to find the approximate optimal solution given the appropriate cost function. Once the vaccination scheme is found, a feature set could be developed including the viral parameters, the network, network statistics, degree distributions, and vaccine coverage. The output could be the proportion of vaccines given to each nodal degree in the network degree distribution. This would then constitute a data point in a training set. Given a large enough training set, and using appropriate machine learning algorithms, one could then train the algorithm to output the vaccination scheme, given the inputs above.
In so doing, one would have developed a machine learning algorithm to vaccinate populations, given a snapshot of the network. | 3,988.6 | 2017-12-11T00:00:00.000 | [
"Computer Science"
] |
Bayesian Nonparametric Modeling for Rapid Design of Metamaterial Microstructures
We consider the problem of rapid design of massive metamaterial (MTM) microstructures from a statistical point of view. A Bayesian nonparametric model, namely a Gaussian Process (GP) mixture, is developed to generate the mapping from the microstructure's geometric dimension to the electromagnetic response, which is approximately expressed in a closed form of Drude-Lorentz type model. This GP mixture model is able to tackle nonstationarity and discontinuities in the mapping function. The inference is performed using a Markov chain relying on Gibbs sampling. Experimental results demonstrate that the proposed approach is highly efficient in facilitating rapid and accurate design of MTMs.
Introduction
Metamaterials (MTMs) are artificially engineered materials that exhibit extraordinary electromagnetic (EM) response, such as negative refraction, superresolution imaging, and planar slab lensing [1][2][3], which cannot be found in naturally available materials.The macroscopic properties of MTMs are harnessed by engineering the geometric dimensions of artificially engineered subwavelength MTM particles, which are required to generate the desired magnetic and/or electric responses to externally applied fields [4][5][6][7][8][9].The electromagnetic responses are usually characterized by constitutive parameters like permittivity and permeability.Values of such parameters can be tailored by changing the geometric dimensions of the MTM particles.Recently, a great deal of research has been involved with designs of MTMs with inhomogeneous constitutive parameters, based on which technique of optical transformation can be implemented as a powerful tool to manipulate electromagnetic waves in various desired manners [6,[10][11][12][13][14][15][16][17][18].However, optical transformation dramatically increases the complexity of MTMs, because the constitutive parameters are often spatially inhomogeneous [6,11,12,14,[16][17][18] and anisotropic [17,18].In addition, the overall size of the MTMs can be much larger [6,14].The large number of particle elements required in a MTM design can represent a substantial computational burden resulting in long design cycles.
To design a MTM, analytical method can be derived by setting up equivalent circuit models [4,[19][20][21] for the MTM particles.The problem is that such analytical method is only efficient for limited classes of particle structures and is often unable to accurately predict the macroscopic behavior of MTMs.In practice, MTMs are more often designed by full-wave electromagnetic simulations such as finite element method (FEM) and finite integration technique (FIT).Commercial software like FIT-based CST Microwave Studio and FEM solver of ANSOFT HFSS has been widely used.The common approach to design a MTM requires repeating simulations to optimize the constitutive parameters for any single MTM particle.Optimization techniques, such as density method [22], genetic algorithm [23,24], and topology optimization [25][26][27], have been taken into account in improving the efficiency in designing MTM particles.We argue that, even with such optimization techniques, it would still be too time-consuming for designing MTMs which contain massive different particles, because the fullwave simulations consume a considerably large amount of time and the optimization requires running a large number of simulations.
To address the aforementioned time-consuming design step, Liu et al. developed an automated and rapid method that is applied once the spatial distribution of the constitutive parameters has been determined [6]. Unlike previous methods, the approach in [6] requires only a comparatively small number of full-wave simulations, relying on a simple statistical regression modeling scheme to mathematically generate the functional dependence of the constitutive parameters on the geometric dimensions of the MTM particles.
Following the idea of employing statistical techniques to reduce the number of full-wave simulations, we introduce an advanced statistical modeling scheme, namely Bayesian nonparametrics, into the context of rapid design of MTMs. Compared with [6], which did not disclose details either of the regression model or of its associated inference algorithm, this work reveals all the necessary details of the models and the corresponding inference algorithm. A more significant difference lies in the statistical model. In [6], a simple polynomial-like model is used to fit a series of responsive data points which can be connected by a smooth curve. In contrast, our model can not only fit smooth data, but also accommodate nonstationarities and/or discontinuities embedded in the data, thanks to the idea of combining mixture models with Bayesian nonparametrics. In other words, we provide a more flexible model that has predictive ability even for much more sophisticated data, which is likely to appear in the practice of MTM design. With this model at hand, it is easy to predict the constitutive parameter values for any specific design over a large, diverse range of MTM particles. By doing so, many full-wave simulations can be avoided, even for very complicated MTM design tasks.
The remainder of this paper is organized as follows.Section 2 presents the proposed GP mixture modeling approach as well as the Gibbs sampling based inference mechanism.Section 3 presents the experimental results of a MTM prototype designed with our method.Finally, we conclude the paper in Section 4.
Methodology
In this section, we first provide a brief review on the basic idea of GP regression.Then, we introduce the mixture of GP regression model as well as the proposed inference algorithm.The mixture of GP regression model is used as a surrogate model for the full-wave simulations required in MTM design.The efficiency of our approach is demonstrated by real MTM experiments as presented in Section 3.
2.1. Overview of GP Regression. GP regression has been proven to be a powerful tool for nonlinear regression problems [28]. The problem of (possibly nonlinear) regression can be stated as follows: assume that we are given some noisy data D = {(x_i, y_i)}_{i=1}^n, x_i ∈ X = R^d, y_i ∈ Y = R, i ∈ {1, ..., n}, where n is the number of data points and d is the dimensionality of the input vectors. Let (x_i, y_i) be drawn independently and identically distributed from a probability density p(x, y) = p(y | x) p(x). The purpose is to find a regression function f ∈ F, f : X → Y, such that the risk E[ℓ(f(x), y)] is minimized, where ℓ : Y × Y → R+ specifies the pointwise regression loss; in our case the quadratic loss ℓ(y_1, y_2) = (y_1 − y_2)² is used, and F denotes the function set.
GP regression deviates subtly from the standard formulation above because it is really a transductive method that does not provide a single regression function but a posterior density over target values for the test or working set.
A GP is completely specified by its mean function and covariance function. The mean function m(x) and the covariance function k(x, x') of a real process f(x) are defined as m(x) = E[f(x)] and k(x, x') = E[(f(x) − m(x))(f(x') − m(x'))]. The covariance function specifies the covariance between pairs of random variables; here we use the squared-exponential form [29], k(x_p, x_q) = exp(−|x_p − x_q|² / (2 l²)), where l is called the characteristic length-scale, which informally can be thought of as roughly the distance one has to move in input space before the function value can change significantly.
In the above we take the mean function to be zero, which is usually adopted for notational simplicity.
Prediction of the function values f_* corresponding to new test inputs X_* can be made from the joint posterior distribution by evaluating its mean and covariance: f_* | X, y, X_* ~ N( K(X_*, X) K(X, X)^{-1} y, K(X_*, X_*) − K(X_*, X) K(X, X)^{-1} K(X, X_*) ), where K(X, X_*) denotes the matrix of the covariances evaluated at all pairs of points in X and points in X_* [8], and N(m, Σ) denotes the Gaussian density function with mean m and covariance Σ.
Observe that the length-scale can be varied. In general, such a free parameter is called a hyperparameter. The hyperparameter is set by optimizing the log marginal likelihood [29], log p(y | X) = −(1/2) yᵀ K^{-1} y − (1/2) log |K| − (n/2) log 2π, where K is the covariance matrix for y. As the observations, namely the electromagnetic response, are assumed to be noise-free, the covariance matrix for y is identical to the covariance matrix for the noise-free latent f. Standard GP regression has two main limitations in our setting. First, it is computationally impractical for large datasets [28]. Second, it is difficult to specify priors and perform learning in GP models if we require nonstationary covariance functions, multimodal output, or discontinuities [28][29][30][31][32].
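A compact sketch of noise-free GP prediction and the log marginal likelihood for one-dimensional inputs, assuming the unit-variance squared-exponential kernel quoted above (the paper's actual kernel and its PSO-based optimizer may differ in detail):

```python
import numpy as np

def se_kernel(xa, xb, length_scale=1.0):
    """Squared-exponential covariance k(x, x') = exp(-|x - x'|^2 / (2 l^2))."""
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(x_train, y_train, x_test, length_scale=1.0, jitter=1e-10):
    """Noise-free GP posterior mean/covariance and the log marginal likelihood."""
    K = se_kernel(x_train, x_train, length_scale) + jitter * np.eye(len(x_train))
    Ks = se_kernel(x_test, x_train, length_scale)
    Kss = se_kernel(x_test, x_test, length_scale)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    # Log marginal likelihood, used to set the length-scale hyperparameter.
    _, logdet = np.linalg.slogdet(K)
    lml = -0.5 * y_train @ alpha - 0.5 * logdet - 0.5 * len(y_train) * np.log(2 * np.pi)
    return mean, cov, lml

x = np.array([0.5, 1.0, 2.0, 3.0])
y = np.sin(x)
mu, cov, lml = gp_predict(x, y, np.array([1.5, 2.5]), length_scale=1.0)
print(mu, lml)
```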
Mixtures of GP Regression
We develop a GP mixture model to conquer the above limitations that may appear in the context of MTM design.The idea of combining GP with mixture modeling framework has appeared in the literature, see, for example, in [30][31][32], while the model we develop here is custom-tailored to the problem of MTM design and the inference mechanism is different.We summarize the operations of the proposed method in a flowchart as shown in Figure 1.We introduce each part of this algorithm as follows.
Input Training Data.
The training data consist of a set {(x_i, y_i)}_{i=1}^n obtained from full-wave simulations, where n denotes the data size, x_i denotes the geometric parameter of the microstructure, and y_i denotes the parameters of the numerical model that is used to approximate the relevant electromagnetic response data.
Initialization of GP Mixture Model.
The GP mixture modeling and inference mechanism is initialized by a random partition operation. Denote by c = (c_1, ..., c_n) the configuration vector, where c_i is the discrete indicator variable assigning data point i to a cluster, and denote by K the number of clusters. Initialize the values c_i ∈ {1, ..., K} using the roulette wheel selection method [33,34] to partition the data set D = {(x_i, y_i)}_{i=1}^n into K clusters randomly and uniformly.
Parameter Optimization for GP Mixture Model.
For each cluster, a GP model is prescribed using the data points belonging to this cluster as the training set.The hyperparameter of the GP is set by optimizing the marginal likelihood by particle swarm optimization (PSO) [35].Such PSO algorithm produces a globally optimal estimate of the hyperparameter.
Dirichlet Process-Based Data
Clustering. The operation of data clustering is conducted based on the Dirichlet Process (DP), and the purpose is to find an optimal allocation of the data points into different clusters. A DP can be defined as the limit of a Dirichlet distribution when the number of clusters tends to infinity [36][37][38][39][40]. Let us begin with a symmetric Dirichlet prior on the mixing proportions, (π_1, ..., π_K) ~ Dir(α/K, ..., α/K), where α is the (positive) concentration parameter. In the infinite limit, the conditional probability of a single indicator is derived to be proportional to n_{−i,k} for components with n_{−i,k} > 0, and proportional to α for all other components combined, where c_{−i} denotes all indicators except number i, and n_{−i,k} = Σ_{j≠i} δ(c_j, k) denotes the occupation number of cluster k excluding observation i; the delta function selects data points assigned to class k. The probabilities are thus proportional to the occupation numbers. To bring in input dependence, we estimate this occupation number as a kernel-weighted sum over the members of each cluster [30], replacing each count by K_φ(x_i, x_j), where K_φ is the kernel function parameterized by φ. We use a Gaussian kernel function, K_φ(x_i, x_j) = exp(−|x_i − x_j|² / (2 φ²)).
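A schematic Gibbs update for a single indicator under the input-dependent, CRP-like prior sketched above. The exact weights of the paper's equations are paraphrased, `loglik` is a hypothetical stand-in for the per-cluster GP likelihood, and the opening of new clusters is only noted in a comment:

```python
import numpy as np

def gaussian_kernel(xi, xj, phi):
    """Input-space kernel used to localise the occupation counts."""
    return np.exp(-0.5 * (xi - xj) ** 2 / phi**2)

def resample_indicator(i, x, c, phi, loglik):
    """One Gibbs update of indicator c[i] among the currently occupied clusters."""
    clusters = sorted(set(c[:i]) | set(c[i + 1:]))
    logw = []
    for k in clusters:
        members = [j for j in range(len(x)) if j != i and c[j] == k]
        occ = sum(gaussian_kernel(x[i], x[j], phi) for j in members)  # soft occupation number
        logw.append(np.log(occ + 1e-300) + loglik(i, k))
    logw = np.asarray(logw)
    # (A fully nonparametric sampler would also allow a new cluster with weight ~ alpha.)
    probs = np.exp(logw - logw.max())
    probs /= probs.sum()
    return int(np.random.choice(clusters, p=probs))
```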
Stopping Rule.
We record the clustering results of five last iterations.If they are identical, we determine that the Markov chain has adequately sampled the posterior and then stop the iterative process; otherwise, continue the iterations.
Experimental Verification
In this section, we present experimental results of a MTM prototype designed with the proposed approach.To begin with, we introduce the related background information on our experiment.
Background.
Our MTM sample prototype is fabricated on copper-clad printed circuit board with an FR4 substrate (the substrate thickness is 0.2 mm, with a dielectric constant of 3.85 + 0.02). The prototype is a cube in physical appearance. The spatial distribution of the desired refractive index is predetermined as shown in Figure 2. The refractive index is designed to change along one in-plane coordinate of the sample and to remain unchanged along the other, and the rate of change of the refractive index along the varying coordinate is constant. The purpose of using such a spatial distribution of the refractive index is to steer one polarized EM wave beam to a desired direction at an ad hoc frequency point, 13 GHz in our experiment. The deflection angle of the incident wave in an inhomogeneous material mainly depends on two factors: (1) the rate of change of the refractive index along the varying coordinate of the MTM and (2) the number of material layers along that coordinate [3]. Denoting the deflection angle by θ, it satisfies a relation, Eq. (11), involving N, the number of material layers along that coordinate, and Δn, the difference in the refractive index between any two neighboring unit cells distributed along that coordinate. We select the maximum and minimum values of the refractive index to be n_max = 2.2 and n_min = 1.3, respectively, and then distribute 200 and 80 microstructures along the two coordinates, respectively; the desired deflection angle then follows from Eq. (11). See Figure 3 for the simulation result of the macroscopic property of this design.
As the spatial distribution of the refractive index has been determined, the remaining task is to find appropriate microstructures to fill the spatial area of the MTM sample. The microstructures are repeated periodically and fit within the planar slab's thickness. The periodicity along both in-plane axes is 2.5 mm.
We use a microstructure which is similar as used in [6].The microstructure's geometric topology is characterized by parameter "".The overall topology and a cross section of the microstructure are shown in Figures 4 and 5, respectively.As is shown, the geometric parameter "" just controls the physical appearance of the microstructure.The value space of "" is specified to be [0.2 mm, 4.4 mm].In cases with ≤ 2.3 mm, the cross section of the microstructure is shown to be a rectangle with height "" and fixed width 0.2 mm; otherwise if 2.3 mm < ≤ 4.4 mm, it becomes an "I" shape.Although simple, this type of microstructure has been successfully used to design complicated MTMs, for example, in [6]; we adopt this topology here due to its simplicity in parameter representation and capability for designing complicated MTMs.
The relationship between the microstructure (as shown in Figures 4 and 5) and the macrostructure of the MTM prototype (corresponding to Figure 2) is conceptually explained by Figure 6. The finally fabricated MTM prototype consists
of multiple pieces of MTM as conceptually illustrated in Figure 7.
The Modeling Process.
Recall that to build a statistical model we first need to collect some data, as presented in Section 2. We call these data the training data, which consist of a set of predetermined geometric parameter values and the corresponding EM response data. Given a specific value of the geometric parameter, the responsive data are obtained over a predetermined range of frequencies via full-wave simulations. To use a lower-dimensional representation y in the process of building the regression function f : X → Y, we seek a closed-form expression for the responsive data.
The unit cell size in our design is about one-tenth of the wavelength, so the procedure of homogenization enables effective constitutive parameters - the electric permittivity and the magnetic permeability - to be defined and used to characterize the composite [41]. However, the retrieved parameters always display anomalous and often nonintuitive behavior due to the spatial dispersion effect [41,42]. The unfamiliar response leads to a significant discrepancy between the theoretical prediction, which is typically in the form of a Drude or Drude-Lorentz model [43][44][45], and the practical retrieval result. We get around this obstacle using the spatial dispersion model developed in [42], which establishes the relationship between a theoretical local-field-average response and the retrieved macroscopic equivalent electromagnetic parameters. Then, a Drude/Lorentz-type model developed in [46] is utilized to fit the local-field-average effective parameters at the tested frequency points. This model, Eq. (12), expresses the EM response parameter (permittivity or permeability) over a predetermined frequency range as a closed-form function of the frequency, with a resonance frequency and a small set of additional coefficients constituting the parameter set of the model; "data" denotes the local-field-average parameter values at the tested frequency points, which are obtained by modifying the full-wave simulation data via the spatial dispersion model [42]. For details about the derivation of model (12), readers are referred to [34][35][36]. Several papers in the literature, for example Chapter 3 of [3] and [42,[47][48][49], also discuss Drude/Lorentz models in detail. We selected model (12), rather than the other similar Drude/Lorentz models in the literature, because model (12) gives good numerical stability in our experimental tests. An example fitting result with this model for a set of local-field-average parameters, namely the electric permittivity, is depicted in Figure 8. As is shown, this model provides a very accurate fit to the resonant data.
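Since the exact Drude/Lorentz-type expression of Eq. (12) is not reproduced above, the sketch below fits a generic Lorentzian resonance (real part) to synthetic data with scipy, purely to illustrate the parameterisation step; the functional form and all numbers are ours, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentz_eps(f, eps_inf, amp, f0, gamma):
    """Generic Lorentz-type permittivity (real part); a stand-in for the paper's Eq. (12)."""
    return eps_inf + amp * f0**2 * (f0**2 - f**2) / ((f0**2 - f**2) ** 2 + (gamma * f) ** 2)

f = np.linspace(5.0, 15.0, 101)                                        # frequency grid in GHz
truth = lorentz_eps(f, 2.0, 1.5, 11.0, 0.8)
data = truth + 0.02 * np.random.default_rng(1).normal(size=f.size)    # fake "local field average" data
popt, _ = curve_fit(lorentz_eps, f, data, p0=(2.0, 1.0, 10.0, 1.0))
print(popt)   # fitted (eps_inf, amp, f0, gamma)
```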
We also give a very brief review of the spatial dispersion effect that causes the unusual form of the constitutive parameters obtained from the retrieval methods [42], and of the closed-form expressions we used for the constitutive parameters.
The Measurement Result.
We obtained the training data by first drawing a set of values of geometric parameter "" uniformly from its value space and then getting the corresponding EM responsive data.Given the training data, we constructed the GP mixture model and made predictions of the EM response parameters for massive different microstructures.Then we got a library containing the massive geometric designs as well as their corresponding EM characteristics.In Figure 9, we just plot the achievable scope of the refractive index for this type of microstructure with 0.2 mm ≤ ≤ 4.4 mm.Finally, from the library, we obtained the desired geometric parameter values of all the MTM particles which constitute the MTM sample.
We monitored the running speed of the proposed modeling approach in predicting EM responses for arbitrary microstructure designs and compared it with that of a standard CST simulation running on the same computer. On average, the time spent on one microstructure is of the order of tens of milliseconds for the proposed approach, while it is of the order of minutes for CST. In summary, we obtain at least a thousand-fold speedup using our approach compared with standard simulation methods. The real test result for our MTM sample, including the mapping of the field, the magnitude, and the phase, is presented in Figure 10, which indicates a deflection angle of 19°. This is very close to the desired value of 21.1°.
Concluding Discussions
Motivated by the vast need to implement rapid design of complicated MTMs, which usually consist of a very large number of inhomogeneous microstructures, we introduce an advanced Bayesian modeling scheme, namely the GP mixture model, into the context of MTM design. This model is used to generate the mapping function from a structure's geometric dimension to its corresponding EM response. Thanks to the predictive property of this model, many time-consuming full-wave simulations are avoided in the phase of determining appropriate geometric parameter values for the microstructures.
The experimental result of a MTM prototype demonstrates that the proposed method can facilitate the rapid design of MTMs to a large extent with high accuracy.
In this paper we selected a simple microstructure topology as an example to present our approach, but we argue that the proposed modeling scheme is adaptable to other topologies. To make this claim conceptually clear, we can separate our modeling scheme into two parts. The first part is represented by Eq. (12), which parameterizes the EM response data; the second part is the mixture of GP regression model, which connects the geometric dimension to the parameters of Eq. (12). The first part, Eq. (12), is only applicable to EM response data with no or one resonance pattern. If there are two or more resonance patterns in the EM response data, Eq. (12) will not work; then we need to find a new model (probably with more parameters) to parameterize the EM response. The second part of our approach, namely the mixture of GP regression model, is itself a generic data-driven modeling approach, which means the structure of the model is adaptively determined by the data to be modeled. This kind of data-driven modeling technique has been a hot research topic recently in the machine learning and Bayesian statistics communities. We introduce it to a new application area, MTM design, which also requires such advanced statistical machinery.
Figure 1: Flowchart of the mixture of GP regression algorithm.
Figure 2: Schematic diagram showing the spatial distribution of the refractive index and the corresponding desired effect on modulating EM waves.
Figure 3: Comsol simulation result corresponding to the spatial distribution of the refractive index shown in Figure 2.
Figure 4: Two examples showing the microstructures' topology: the left panel corresponds to a case with the geometric parameter equal to 1 mm, and the right panel to 4.4 mm. See the explanation of the parameter in Figure 5 and the text.
Figure 5: Parametric representation of the topology of the microstructures under use: the left-hand figure shows that, if the parameter is ⩽ 2.3 mm, the cross section of the microstructure is a rectangle with that parameter as its height and a fixed width of 0.2 mm; the right one shows that, for values between 2.3 mm and 4.4 mm, the cross section has an "I" shape.
Figure 6: A conceptual illustration of the relationship between the microstructure and the macrostructure of the MTM prototype.
Figure 7: A schematic diagram of the MTM prototype.
Figure 8: An example showing the discrete real and imaginary local-field-average values and the corresponding fitted curves yielded by the numerical model.
Figure 9: The range of the real part of the refractive index over the frequency band 5-15 GHz.
Figure 10: The upper plot shows the measured 2D field mapping (E-field); the middle and bottom figures correspond to the magnitude and the phase, respectively, at the frequency point 13 GHz. The arrow shows the incident direction of the EM wave. | 5,014.8 | 2014-06-16T00:00:00.000 | [
"Computer Science"
] |
Protein-protein interaction of zinc finger LIM domains with protein kinase C.
The LIM domain comprising two zinc-finger motifs is found in a variety of proteins and has been proposed to direct protein-protein interactions. During the identification of protein kinase C (PKC)-interacting proteins by a yeast two-hybrid assay, a novel protein containing three LIM domains, designated ENH, was shown to associate with PKC in an isoform-specific manner. Deletion analysis demonstrated that any single LIM domain of ENH associates with the NH2-terminal region of PKC. ENH associated with PKC in COS-7 cells and was phosphorylated by PKC in vitro. Upon treatment of the cells with phorbol ester, ENH in the membrane fraction was translocated to the cytosol fraction in vivo. Other LIM domain-containing proteins, such as Enigma and LIM-kinase 1, also interacted with PKC through their LIM domains. These results suggest that the LIM domain is one of the targets of PKC and that the LIM-PKC interaction may shed light on undefined roles of LIM domain-containing proteins.
interactions (1,2). In fact, five proteins have recently been reported as highly specific targets of particular LIM domains: the LIM1 domain of zyxin binds to the LIM-only protein CRP by LIM-LIM interaction (6); CRP forms a homodimer by LIM-LIM interaction (7); the LIM domain of RBTN2 binds to the bHLH (basic helix-loop-helix) domain of the TAL1 protein (8); and the LIM2 and LIM3 domains of Enigma interact with the Tyr-containing tight-turn motifs of the GDNF receptor (GDNFR, known as a Ret Tyr kinase) and the insulin receptor (InsR), respectively (9, 10). Although most LIM domains adopt a similar zinc-coordinated finger consisting of well conserved amino acid sequences, no protein has yet been identified as a common target of LIM domains.
The PKC family consists of at least 11 isoforms, which play distinct roles in many cellular functions but show only subtle differences in substrate specificity in in vitro phosphorylation studies (11,12). Therefore, it is reasonable to assume that there are mechanisms by which each PKC isoform recognizes its specific substrate proteins in vivo. Recently, several proteins associating with PKC have emerged as governing the subcellular localization of this enzyme family (13)(14)(15)(16).
We report here a novel PKC-binding protein containing three LIM domains, designated ENH, and show the association of PKC with LIM domains of different proteins including this novel PKC-binding protein, suggesting that protein-protein interaction with PKC is a general property of LIM domains.
EXPERIMENTAL PROCEDURES
Yeast Two-Hybrid Assay-The yeast two-hybrid assay (17) was conducted in the yeast strain CG-1945, a derivative of HF7c (18), using a fusion between the GAL4 DNA binding domain and the regulatory domain of rat PKC I (residues 1-340) (19) as a bait. β-Galactosidase activity in yeast cells was measured by plate assay methods. All measurements were repeated at least four times.
Expression of Epitope-tagged ENH in COS-7 Cells-We constructed two parental vectors, pTB701-FLAG and pTB701-HA, for expression of NH2-terminal epitope-tagged fusion proteins in COS-7 cells, by inserting the sequences encoding the FLAG (20) and HA epitopes (21) under the SV40 early promoter of pTB701 (22), respectively. We constructed pTB701-FLAG-ENH by fusing the ENH cDNA downstream of the FLAG epitope sequence of pTB701-FLAG. Similarly, pTB701-HA-PKC I was constructed by inserting the PKC I cDNA into pTB701-HA. HA-tagged PKC α, γ, δ, ε, and one further isoform were constructed using each insert cDNA (23,24). A kinase-negative mutant of PKC I-HA was generated by replacing Lys-371 with Met by site-directed mutagenesis, and designated K371M PKC I-HA.
Immunoprecipitation and Phosphorylation Assay-COS-7 cells coexpressing FLAG-tagged ENH and HA-tagged PKC βI from a 10-cm plate were suspended in 500 μl of lysis buffer (50 mM Tris, pH 7.5, 150 mM NaCl, 1 mM EDTA, 1 mM EGTA, 10 mM 2-mercaptoethanol, 50 mM NaF, 1 mM Na3VO4, 1 mM phenylmethylsulfonyl fluoride, 1 tablet/50 ml Complete protease inhibitor mixture tablets (Boehringer Mannheim), 1% (v/v) Triton X-100). Cleared lysates (500 μl) were incubated for 1 h on ice with 2 μg of either anti-FLAG (M2, Eastman Kodak Co.) or anti-HA (12CA5, Boehringer Mannheim) monoclonal antibody and then mixed with 20 μl of protein G-Sepharose 4 Fast Flow (50% slurry, Pharmacia Biotech, Uppsala, Sweden). After incubation at 4°C for 1 h with rotation, the beads were washed four times with lysis buffer and subjected to Western blotting. For the phosphorylation assay, the beads were mixed with 25 μl of the reaction mixture (20 mM Tris, pH 7.
In Vitro Phosphorylation Assay-The reaction mixture (25 μl) and 50 ng of PKC (a mixture of α, βI, βII, and γ) purified from rat brain (25) were added to 5 μg of GST-fused ENH bound to glutathione-Sepharose 4B. After addition of 1 μl of [γ-32P]ATP (10 mCi/ml), the reaction mixture was incubated for 5 min at 30°C. Samples were analyzed by SDS-PAGE and autoradiographed.
RESULTS AND DISCUSSION
We used a yeast two-hybrid system to identify proteins that bind to the regulatory domain of PKC I. Six positive clones were isolated independently from a rat brain cDNA library. Sequence analysis showed that one of the positive clones encodes a novel protein containing LIM domains. A full-length cDNA clone (1,896 base pairs) was obtained by a rapid amplification of cDNA ends method from the same cDNA library (Fig. 1). The cDNA encodes a novel polypeptide sequence of 591 amino acid residues with a calculated molecular weight of 63,197. The deduced protein sequence has two Pro/Ser-rich regions (Pro/Ser-1 (residues 106 -216): Pro, 18.9%; Ser, 19.8%; Pro/Ser-2 (residues 308 -394): Pro, 16.1%; Ser, 23.0%) and three LIM domains (LIM1-3; residues 415-585), which resembles the molecular organization of human Enigma (455 amino acid residues) (9). Residues 120 -591 of this protein show high similarity to the full-length Enigma, and approximately 37% of the amino acid residues are identical between this protein and Enigma. Therefore, we termed this protein ENH (Enigma homolog). Northern blot analysis of adult rat tissues revealed that the 1.9-kb ENH mRNA was efficiently expressed in heart and skeletal muscle, and the 4.4-kb ENH mRNA was expressed in various tissues, such as heart, brain, spleen, liver, and kidney (Fig. 2).
To investigate the in vivo interaction between ENH and PKC βI, we constructed expression vectors for FLAG-tagged ENH (ENH-FLAG) and HA-tagged PKC βI (PKC βI-HA). COS-7 cells coexpressing ENH-FLAG and PKC βI-HA were lysed, and proteins were immunoprecipitated with either anti-FLAG or anti-HA antibody. Western blotting analysis (Fig. 3A, top and bottom panels) showed that ENH-FLAG (70 kDa) associated with PKC βI-HA (80 kDa) in vivo. TPA treatment prior to cell lysis had no effect on the association of the two proteins. A phosphorylation assay (Fig. 3A, middle panel) indicated that PKC βI-HA can phosphorylate ENH-FLAG. When the kinase-negative mutant of PKC βI-HA (K371M PKC βI-HA) was used under the same conditions (Fig. 3A), the mutant still associated with ENH-FLAG, but phosphorylation of ENH-FLAG was not observed. These results indicate that the association of PKC βI with ENH is independent of the enzymatic activity of PKC βI.
We next examined whether ENH is a substrate for PKC by incubating a bacterially expressed GST-ENH fusion with purified rat cPKC. The GST-ENH fusion protein was phosphorylated by PKC in vitro, whereas GST alone was not (Fig. 3B, top and bottom panels). H1 histone, known as a good substrate for PKC (26), was phosphorylated more efficiently than GST-ENH (Fig. 3B, middle panel). H1 histone is rapidly phosphorylated by PKC, and after a prolonged reaction approximately 1.8 mol of phosphate was incorporated per mol of H1 histone (26). Under similar conditions, the reaction with GST-ENH was relatively slow; after a prolonged reaction, approximately 0.9 mol of phosphate was incorporated per mol of GST-ENH. These data suggest that ENH is a good substrate of PKC, at least under the conditions employed.
Western blotting of subcellular fractions of COS-7 cells coexpressing ENH-FLAG and PKC βI-HA showed that ENH-FLAG is distributed equally between the cytosol and membrane fractions (Fig. 3C). After TPA treatment, ENH-FLAG in the membrane fraction disappeared, and the amount of ENH-FLAG in the cytosol fraction increased. When a PKC inhibitor, calphostin C (100 nM) (27), was added 15 min prior to TPA treatment (Fig. 3C), ENH-FLAG remained in the membrane fraction. Staurosporine (100 nM), a general inhibitor of Ser/Thr protein kinases (28), also blocked the TPA-induced translocation of ENH. The same results were obtained when ENH-HA was used instead of ENH-FLAG. Thus, ENH interacts with PKC βI in vivo, and activation of PKC results in translocation of ENH from the membrane to the cytosol.
We next delineated the region of ENH that interacts with the regulatory domain of PKC βI using the yeast two-hybrid system. A series of deletion mutants of ENH fused to the GAL4 DNA binding domain was examined for interaction with the regulatory domain of PKC βI (Fig. 4A). The NH2-terminal region of ENH (residues 1-414; ΔLIM) is not required for PKC binding, whereas each LIM domain (LIM1, LIM2, LIM3) can bind to PKC βI. In addition, the COOH-terminal half of the LIM1 domain (1/2 LIM1) was unable to interact with PKC βI. These results indicate that a single intact LIM domain is the region of ENH essential for PKC βI binding. Using the regulatory domains of various PKCs as baits, it was shown that residues 415-591 of ENH (the ENH LIM1-3 domain) interact with PKC βI, γ, and ε but not with PKC α, δ, and ζ. To further delineate the region interacting with the ENH LIM domains, we assayed deletion mutants of the regulatory domains of PKC βI and ε (Fig. 4B). Mutants harboring the NH2-terminal V1 region of either PKC βI or ε retained an intact ability to bind the ENH LIM1-3 domain. The V1 region of PKC is thus critical for the ENH-PKC interaction. Since the V1 regions of PKCs are highly divergent (29), the LIM domains of ENH likely recognize PKC isoforms through a specific sequence in the V1 region rather than through the overall conformation of the PKC molecule.
FIG. 4. The interaction of ENH and PKC in the yeast two-hybrid system. β-Galactosidase activity of yeast transformants was assayed by the plate method (17). After developing for 10 h, the yeast cells were classified into positive colonies showing either dark blue (++) or blue (+), and negative colonies showing white (−). A, delineation of the domain of ENH which interacts with the regulatory domain of PKC βI. P/S1, P/S2, 1, 2, and 3 in the schema represent Pro/Ser-rich region 1, Pro/Ser-rich region 2, LIM1, LIM2, and LIM3, respectively. B, delineation of the domain of PKC βI and ε which interacts with the ENH LIM1-3. Based on sequence similarity, the primary structure of the regulatory domain of PKC βI and ε is divided into conserved domains (boxes) separated by variable regions (lines) (29).
Because human Enigma (9) has three highly related LIM domains at its COOH terminus (the Enigma LIM1-3 domain), we investigated the interaction of the Enigma LIM domains with PKC by a pull-down assay. Cell lysates from COS-7 cells expressing one of the HA-tagged PKCs (α, βI, γ, δ, ε, or ζ) were mixed with GST or GST-Enigma LIM1-3, and the proteins bound to glutathione-Sepharose 4B were analyzed with anti-HA antibody. The Enigma LIM1-3 domain was found to interact with PKC α, βI, and ζ, but not with PKC γ, δ, and ε (Fig. 5A, top panel), indicating that the LIM domains of Enigma and ENH have individual specificities for PKC isoforms. Previous studies showed that the LIM2 and LIM3 domains of Enigma bind to GDNFR and InsR, respectively (9,10). Each LIM domain of Enigma was found to associate with PKC, as in the case of ENH. These results raise the possibility that one LIM domain interacts with more than one molecule. Enigma has been postulated to be either a scaffold or an anchoring protein that coordinates the subcellular localization and activity of GDNFR and InsR (9,10), and it also associates with PKC as described above. Since ENH shows the same molecular organization as Enigma, ENH might have a role similar to that of Enigma, with its subcellular localization controlled by PKC.
By pairwise alignment analysis of LIM domains, Dawid et al. (1) demonstrated that LIM domains can be classified into five discrete groups (groups A to E). All LIM domains of ENH and Enigma belong to group D. We tested LIM domains from other groups using human LIM-kinase 1 (LIMK1) (30,31), which harbors two LIM domains at its NH2 terminus (LIM1 from group A, LIM2 from group B). In COS-7 cells, we found that LIMK1 interacts tightly with PKC γ and ζ, and weakly with PKC α, βI, δ, and ε (Fig. 5B, bottom panel). To confirm that the LIMK1-PKC interaction is mediated by the LIM domains of LIMK1, we constructed expression vectors for three LIMK1 mutants in which the LIM domains were rendered unable to coordinate zinc by substituting Gly for the consensus Cys residues (32) (Fig. 5C). COS-7 cells coexpressing each LIMK1 mutant and PKC γ-HA, which bound efficiently to LIMK1, were lysed, and proteins were immunoprecipitated with either anti-LIMK1 (30,31) or anti-HA antibody. Western blotting (Fig. 5C, third and fourth panels) indicated that the LIM2 domain of LIMK1 (group B) is critical for the binding between LIMK1 and PKC γ. Further analysis of other groups of LIM domains should yield insight into the general role of the LIM-PKC interaction.
In this study, we demonstrated that the LIM domains selectively bind to PKC isoforms in vitro and in vivo. The LIM-PKC interaction may be an important clue for the understanding of PKC isoform-specific functions in vivo and the roles of LIM domain-containing proteins.
Acid-base adjustments and first evidence of denticle corrosion caused by ocean acidification conditions in a demersal shark species
Global ocean acidification is expected to chronically lower seawater pH to 7.3 (>2200 µatm pCO2) by the year 2300. Acute hypercapnia already occurs with increasing frequency along the South African west and south coasts due to upwelling and low-oxygen events. In the present project we investigated the impact of hypercapnia on the endemic demersal shark species Haploblepharus edwardsii. Specifically, we experimentally analysed acid-base regulation during acute and chronic hypercapnia, as well as the effects of chronic hypercapnia on growth rates and on denticle structure and composition. While H. edwardsii is physiologically well adapted to acute and chronic hypercapnia, we observed, for the first time, denticle corrosion as a result of chronic exposure. We conclude that denticle corrosion could increase denticle turnover and compromise hydrodynamics and skin protection.
Materials and Methods
Experimental animals. All research presented here was conducted under permission of the Research Ethics Committee: Animal Care and Use of Stellenbosch University (SU-ACUM14-00006) and is in accordance with the relevant guidelines and regulations. Eighty specimens of the puffadder shyshark H. edwardsii were caught in the harbour basin of the False Bay Yacht Club in Simons Town, South Africa (34.07°S, 18.33°E) in austral spring. Sharks were caught by hand by SCUBA divers after setting out bait (sardines) in perforated 5 l plastic bottles. Caught sharks were collected in nets until they were transferred into an 800 l tank on a car trailer. Water in the tank was continuously supplied with oxygen (technical grade) from a cylinder. After capture, sharks were transported within an hour to holding tanks at the Research Aquarium of the Department of Agriculture, Forestry and Fisheries (DAFF) in Cape Town.
In Cape Town, they were weighed and maintained in round flow-through holding tanks (4500 l) for four months prior to experimentation (pH ranged from 7.9 to 8.1; T A from 8.4 to 16.8 °C). They were fed rations of 5% of average body mass with pieces of squid once a week. Sharks were not fed in the week of experimentation.
Experimental procedures. For acute exposure, 66 larger sharks (179 ± 52 g, 67% male) were acclimatized for 48 h prior to experimentation in smaller round tanks (∅ = 1.2 m, h = 1 m, 1130 l). Tanks were well mixed by propellers and aerated with compressed air. At the start of the trial, individuals were distributed between two replicate control (normocapnic) or hypercapnic tanks (141 l rectangular glass tanks) with pH levels of ~8.0 and ~7.3, respectively. Weight of animals did not differ significantly between replicate groups (ANOVA, p = 0.993). For blood sampling, individuals were removed from tanks at the respective time points of incubation, i.e. after 1.5, 3, 6 and 24 hours, alternating between replicate tanks. Each time point represents samples from both replicates of the same treatment (3 per replicate for the hypercapnic treatment, 2 or 3 for the normocapnic treatment). For the starting value (time 0), blood was collected from sharks from the same batch; these sharks were not incubated thereafter. To test recovery, some sharks were transferred after 24 h of exposure into tanks with normocapnic conditions and sampled after a further 8 h (i.e. a total of 32 h from the start of the trial). After sampling, sharks were transferred back into the acclimation tank and not used further. Blood was sampled (see below) only once from each shark. Analyses were carried out in the statistical software environment R (Ver. 3.0.1.), including the nlme package (Ver. 3.0.1.) 33. Differences between parameters were tested for each interval against the base value (0 h) for the normocapnic (control) and the hypercapnic (treatment) groups by means of ANOVA. Response variables were modelled as a function of the interaction between sampling time and treatment with linear mixed-effects models. Tank was initially included as a random effect, but was found to increase the AIC value when tested against a fixed-effects model and was consequently dropped. Filtered seawater for the system was provided by the main water storage tank of the research aquarium, which supplied each replicate via a header (mixing) tank. In two of the header tanks, pH was adjusted by a dedicated CO2 supply for the hypercapnic treatment. This was accomplished using a pH controller (7074/2, TUNZE, Germany) with a solenoid valve (7074.111) and a pH electrode (7070.110) attached to a 9 kg CO2 bottle (technical grade). A pH of 7.3 was selected as this level is predicted for the year 2300 7. The experiment was carried out in a room with stable ambient air temperature (16-18 °C) so that additional control of seawater temperature was unnecessary. During the acute experiment, seawater conditions were tested five times in each replicate tank (summarised in Table 1) and did not differ significantly between replicates of each treatment (pH, T; ANOVA).
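As a minimal sketch of the random-effect versus fixed-effect comparison described here (and repeated for the chronic experiment below), the following uses the nlme package named in the text; the data frame, column names and values are hypothetical, not the study's data.

```r
library(nlme)

# Hypothetical long-format data: blood pH per shark, with sampling time,
# treatment and replicate tank (illustrative values only).
set.seed(1)
acute <- expand.grid(time_h = c(1.5, 3, 6, 24),
                     tank   = c("norm1", "norm2", "hyper1", "hyper2"),
                     shark  = 1:3)
acute$treatment <- ifelse(grepl("hyper", acute$tank), "hypercapnic", "normocapnic")
acute$pH_blood  <- 7.75 + 0.05 * (acute$treatment == "hypercapnic") +
  rnorm(nrow(acute), 0, 0.03)

# Mixed model with tank as a random intercept; ML fit so AIC values are comparable
m_mixed <- lme(pH_blood ~ factor(time_h) * treatment, random = ~ 1 | tank,
               data = acute, method = "ML")
# Fixed-effects-only equivalent
m_fixed <- gls(pH_blood ~ factor(time_h) * treatment, data = acute, method = "ML")

AIC(m_mixed, m_fixed)  # if the random effect raises AIC, drop it, as reported above
anova(m_fixed)         # sampling time x treatment interaction from the retained model
```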
Subsequently, the remaining 13 smaller sharks were taken from the holding tank (see above) for chronic exposure. They were weighed (w), their total length (L) was measured, and they were tagged with barbed dart tags (D-tag; 89 mm, Ø 1.4 mm; Hallprint Pty Ltd, South Australia) left of the first dorsal fin (121.4 ± 34.2 g, 85% male; see Results for more details). Thereafter they were transferred into normocapnic or hypercapnic replicate tanks as described above, except that rectangular 1000 l plastic tanks were used here. All tanks were well mixed and aerated. ANOVA revealed no difference in w and L between the four replicate groups (p = 0.993; 0.914). In the two weeks prior to chronic exposure, seawater temperature in the holding tanks was in the range of 15.7-16.3 °C. Sharks of both treatments were acclimatized for a week to around 18 °C, after which the pH for the hypercapnic treatment was lowered in two steps over five days from approximately 8.1 to 7.3, using a pH control system as described above. The experimental pH of 7.3 was selected as this level is predicted to be reached by the year 2300 7. It will possibly be reached earlier in the BCLME, as it is close to values already attained over short periods during severe upwelling events and after the decay of algal blooms 10. Seawater temperature was allowed to fluctuate with the incoming seawater; similar seawater temperatures were recorded in all replicate tanks throughout experimentation. Seawater parameters were measured daily, with the exception of AT, which was measured three times a week (summarised in Table 1), and did not differ significantly between replicates of the same treatment (pH, T; ANOVA). Sharks remained under experimental conditions for 63 days (~9 weeks). To record growth and to adjust the food rations, sharks were re-weighed and measured after 4, 6 and 9 weeks. Paired t-tests were carried out to test for significant changes in length and mass within treatments. Differences at the end of incubation (blood parameters, elemental composition and physical damage of denticles) were modelled with linear mixed-effects models that included treatment as a fixed effect and tank as a random effect. As in the acute experiment, tank was found to increase the AIC value when tested against a fixed-effects model and was consequently dropped (R Ver. 3.0.1.) 33.
Seawater pCO2, [CO32−] and [HCO3−] were calculated from measured pH, salinity, ambient temperature (TA) and total alkalinity (AT) 34, used as inputs to the CO2SYS software 35. Oxygen concentration was determined using a Multi 350i meter set (WTW, Germany). Water quality was monitored by measuring NH3 concentration (Ammonia test kit, Sera, Germany), which never exceeded 0.09 mg l−1.
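The CO2SYS step can be illustrated by writing out the carbonate-system relations it solves. The sketch below is a simplified stand-in that treats total alkalinity as carbonate alkalinity only and takes the equilibrium constants K0, K1 and K2 as user-supplied values (CO2SYS derives them from temperature and salinity); the constants in the example call are placeholders, not the values used in the study.

```r
# Simplified carbonate-system calculation from pH and total alkalinity (AT).
# Assumes AT ~ [HCO3-] + 2[CO3 2-]; K0, K1 and K2 must be supplied by the user.
carb_from_pH_AT <- function(pH, AT_umol_kg, K0, K1, K2) {
  H     <- 10^(-pH)               # [H+] (mol/kg)
  AT    <- AT_umol_kg * 1e-6      # total (carbonate) alkalinity (mol/kg)
  HCO3  <- AT / (1 + 2 * K2 / H)  # partition of alkalinity into HCO3- and CO3 2-
  CO3   <- K2 * HCO3 / H
  CO2aq <- H * HCO3 / K1          # dissolved CO2 (mol/kg)
  pCO2  <- CO2aq / K0             # Henry's law; atm
  c(HCO3_umol_kg = HCO3 * 1e6, CO3_umol_kg = CO3 * 1e6, pCO2_uatm = pCO2 * 1e6)
}

# Placeholder constants only; use CO2SYS-derived values for real work.
carb_from_pH_AT(pH = 7.3, AT_umol_kg = 2300, K0 = 4e-2, K1 = 1.2e-6, K2 = 8e-10)
```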
Sampling. At the given intervals during the acute experiment and at termination of the chronic experiment, sharks were removed from their tanks, their heads (eyes) were covered with a seawater-soaked cloth to reduce stress and prevent curling of the tail, and the animals were placed upside down on a seawater-soaked cloth. In addition, head and tail were held by hand to avoid movement. Approximately 1 ml of blood was immediately withdrawn from the caudal vein through a hypodermic needle (Neomedic 1 ml, 26 G) into a heparin-treated 2 ml syringe before the animals were carefully returned to the tank. All animals from the chronic experiment were subsequently sacrificed using ethylene glycol monophenyl ether (C8H10O2, 0.8 ml l−1). Skin samples were then taken dorso-laterally next to the first dorsal fin and frozen at −20 °C for electron-microscopic and elemental analyses.
Analysis of denticles. Micrographs of shark skin areas and denticles were obtained by scanning electron microscopy (SEM) using a Leo 1430VP (Zeiss, Germany) on gold-platinum-sputtered samples, whereas the elemental composition of the outer denticle surface was analysed using energy-dispersive X-ray spectroscopy (EDX) with an ESEM Quanta 400 FEG instrument (Thermo Scientific, USA) after sputtering with gold and palladium (80:20) 36,37. On the resulting SEM micrographs, the ratios of damaged to intact denticles were quantified by counting.
Blood acid-base balance. Blood pH was measured within 20 s of sampling using an Orion 3 Star pH meter equipped with an Orion 8220 BNWP micro pH electrode (Thermo Scientific, USA). Calibration was performed with NBS precision buffers (Applichem, Germany) at the same temperature as the ambient seawater of the experimental tanks. A blood subsample (50 µl) was immediately injected into a de-gassing (magnetic stirrer) chamber containing 200 µl of 100 mM H2SO4, and the liberated total CO2 (cCO2) was determined as described previously 38. From the measured pH and cCO2 values, pCO2 and [HCO3−] were calculated using rearrangements of the Henderson-Hasselbalch equation (I and II). The required solubility coefficient αCO2 and dissociation constant pK'1 of carbonic acid were obtained from Boutilier et al. 39.
Haematocrit. Subsamples of 500 μl blood were immediately transferred into an EDTA pre-treated K2E reaction vessel (BD Microtainer, USA) for measurement of haematocrit. The vessels were closed and the samples shaken to ensure mixing with the EDTA. Thereafter, 80 iu/ml sodium-heparinised micro haematocrit capillaries (Marienfeld, Germany) were completely filled, sealed with plasticine and spun at room temperature for 5 min in a Haematospin 1300 centrifuge (Lasec, SA). Haematocrit was then quantified using a Micro Haematocrit reader (Hawksley, UK).
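The two Henderson-Hasselbalch rearrangements referred to above as equations I and II can be made explicit. In the sketch below, pH and total CO2 (cCO2) are the measured inputs, while αCO2 and pK'1 are taken from the tabulated values cited in the text; the numbers in the example call are placeholders rather than those values.

```r
# Blood pCO2 and [HCO3-] from measured pH and total CO2 (cCO2, mmol/l),
# using rearranged Henderson-Hasselbalch relations:
#   cCO2 = alphaCO2 * pCO2 * (1 + 10^(pH - pK1))   =>   pCO2
#   [HCO3-] = cCO2 - alphaCO2 * pCO2
blood_acid_base <- function(pH, cCO2, alphaCO2, pK1) {
  pCO2 <- cCO2 / (alphaCO2 * (1 + 10^(pH - pK1)))  # Torr
  HCO3 <- cCO2 - alphaCO2 * pCO2                   # mmol/l
  c(pCO2_Torr = pCO2, HCO3_mmol_l = HCO3)
}

# Example with placeholder constants (alphaCO2 in mmol l^-1 Torr^-1)
blood_acid_base(pH = 7.8, cCO2 = 6.0, alphaCO2 = 0.05, pK1 = 6.1)
```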
Results
Blood acid-base balance during acute hypercapnia. The acute exposure experiment revealed differences between the responses to normocapnic and hypercapnic conditions. In the normocapnic control group, there were no significant changes in pH (Table 2). Extracellular total CO2 (cCO2) levels remained in a narrow range between 5 and 6 mmol l−1 throughout the course of the experiment. Accordingly, calculated values of [HCO3− + CO32−] and pCO2 showed very little change. This is also evident from the Henderson-Hasselbalch diagram (Fig. 1).
Despite the increase in seawater pCO2 and subsequently blood pCO2 (Table 2), the extracellular pH of sharks in the hypercapnic treatment increased by 0.1 units in the first 3 h after sudden exposure to hypercapnia (Table 2). From this point, it followed a course similar to that observed in normocapnic sharks (see above). By the 24 h mark, this amounted to an over-compensation of pH by approximately 0.15 pH units, or a reduction in [H+] of 5.1 nM (29%), compared with initial levels. pH increased by another 0.05 units during the subsequent 8 h of recovery in normocapnic seawater. All treatment × time interactions were significant from 1.5 h of incubation onwards.
Extracellular cCO2 levels rose by 114% from an initial 6.3 mmol l−1 during 24 h of hypercapnic exposure (Table 2). These increases were significant from 1.5 h of incubation onwards. Subsequent recovery in normocapnic seawater caused a sharp decline of 58% (not sig. different from 0 h) to below the initial level. Values for [HCO3− + CO32−] followed a similar trend (Table 2). The partial pressure of CO2 increased from its initial level, peaking at the 24 h time interval (sig. different from 1.5 h incubation), after which it decreased sharply to below the initial value during recovery (not sig.). These data were used to construct a Henderson-Hasselbalch diagram (Fig. 1). No acidosis occurred initially after exposure to hypercapnia, which would have been indicated by a shift to the left. Compensation by an increase in [HCO3− + CO32−] started immediately, buffering the blood and leading to an alkalosis relative to the initial pH measured; this is indicated by a shift to the right. This alkalosis is carried over to the 32 h (recovery) value, although pCO2 and [HCO3− + CO32−] dropped sharply during this period. In contrast, there is very little change in sharks exposed to normocapnic conditions (Fig. 1). Between treatments, plasma pH differed only up to the 1.5 h time interval, after which both followed a similar path, including 8 h of recovery (Table 2). [HCO3− + CO32−], however, was substantially higher in hypercapnic sharks, with the exception of the recovery values. Concentrations of Ca2+ and Mg2+ were stable in both treatments and did not differ between treatments throughout the experiment (Table 2).
Acid-base balance and other blood parameters during chronic hypercapnia.
The pH levels measured in the blood of hypercapnic- and normocapnic-incubated sharks were identical (not sig.), whereas cCO2 had approximately doubled (sig.) to 8.3 mM under hypercapnia compared with normocapnia (Table 3). Accordingly, calculated pCO2 was 2.2 Torr compared with 1.1 Torr (not sig.), and bicarbonate was 8.1 mM compared with 4.2 mM (sig.), under hypercapnia versus normocapnia, respectively. Water pCO2 rose from 0.3 to 1.4 Torr during hypercapnic incubation, but venous blood pCO2 was significantly higher at 2.2 Torr. The Henderson-Hasselbalch diagram illustrates the interaction of extracellular pH, calculated plasma bicarbonate and pCO2: at similar pHs in sharks from both treatments, elevation of pCO2 and [HCO3− + CO32−] caused a vertical shift in hypercapnic sharks, indicating bicarbonate buffering that results in a slight alkalosis (Fig. 2). [Ca2+] and [Mg2+] were at the same level after both treatments (not sig.; Table 3). Haematocrit levels were similar after chronic treatment and not sig. different (Table 3).
Analysis of denticles following chronic hypercapnia. SEM analysis of skin areas covered with denticles revealed contrasting results: denticles from normocapnic sharks were mostly intact and had a shiny surface with sharp edges (Fig. 3). In contrast, many denticles from hypercapnic sharks were damaged, and their surfaces appeared corroded with less sharp edges (Fig. 3). Comparison of SEM images of individual denticles confirmed this; surfaces looked smoother on normocapnic denticles (Fig. 4). Quantitative analysis of denticles from SEM micrographs revealed that significantly fewer denticles were damaged (pieces broken off) on normocapnic sharks (9.2 ± 3.2%) than on hypercapnic ones (25.0 ± 6.7%) (n = 3, p < 0.05).
Elemental composition of denticles from the two treatments revealed some significant differences: two of the elements that form fluorapatite (Ca5(PO4)3F) and hydroxyapatite (Ca5(PO4)3OH) and are normally present in substantial quantities, Ca (−35%) and P (−26%), had a lower proportion in denticles from hypercapnic sharks (sig.), resulting in a lower Ca:P ratio (Table 4). In contrast, the proportions of C (+29%) and O (+15%) were elevated, although not significantly in the case of O.
Impact of chronic hypercapnia on body size. Sharks from the chronic normocapnic treatment increased in total length by 0.2 ± 0.6 cm (0.5%) from 31.6 ± 2.8 to 32.4 ± 3.0 cm and in mass by 7.7 ± 9.9 g (6.2%) from 123.8 ± 33.4 to 131.5 ± 37.1 g. Total length of hypercapnic sharks remained relatively stable (+0.1 ± 0.5 cm = 0.2%), from 31.4 ± 2.1 to 31.5 ± 2.1 cm, and their mass changed by 0.8 ± 4.5 g (0.7%), from 118.5 ± 27.6 to 119.3 ± 29.0 g, in nine weeks. The changes in length and mass were not significantly different either within or between treatments.
Discussion
The main findings of the present study are that (1) Haploblepharus edwardsii adjusts well physiologically (i.e. regulation of acid-base balance) to acute hypercapnia. (2) This regulation can be maintained during chronic hypercapnic exposure. (3) The prolonged regulation is likely to be energetically costly but in the present study no significant depression of somatic growth was observed. (4) Although the sharks can maintain their acid-base balance, prolonged exposure to hypercapnia has detrimental chemical effects that cannot be compensated, namely the dissolution of their denticles' surface.
As a result of upwelling, the habitat of H. edwardsii is characterised by short periods of strong hypercapnia. Acute environmental hypercapnia results in increased extracellular pCO2 and a decline of pH and, if uncompensated, would most likely impair gas exchange at the haemoglobin and at the tissues. Compensation mechanisms exist in many marine animals in the form of an elevation of bicarbonate ([HCO3−]) levels that returns pH to close to original values 3. In addition, the outward gradient for CO2 removal would become difficult to maintain, because the extracellular pCO2 would be insufficiently high compared with the increased environmental pCO2. Such compensatory capacity is present in H. edwardsii.
We could demonstrate that H. edwardsii possesses the necessary compensation mechanisms to react to a sudden onset of hypercapnia: a rapid elevation of [HCO3−] by a net 7.1 mmol l−1 (+115%) after 24 h was observed, with a near doubling (+4.3 mmol l−1) already after 90 min of exposure, likely rapid enough to prevent a decline of arterial plasma pH. At the same time, the arterial plasma pCO2 was elevated by a net 1.2 Torr (+50%). This increased the outward CO2 gradient to 1.9 Torr, despite the elevation of ambient pCO2 by 1.4 Torr. The resulting ambient pCO2 of 1.7 Torr was well within the range of resting plasma pCO2 and, without this compensation, would have made gas exchange impossible. In the normocapnic group, the gradient was 1.2 Torr after the same incubation period. Such a fast response is indicative of exposure and adaptation of the species to frequently elevated pCO2 in its habitat 40.
The Henderson-Hasselbalch diagram (Fig. 1) illustrates the interaction of arterial plasma pH, calculated plasma bicarbonate and pCO2: whereas values are concentrated in a very restricted area throughout the entire experiment in the normocapnic group, the situation is very different in the hypercapnic group. Over-compensation of plasma pH (a move to the right, alkalosis) due to the elevation of bicarbonate levels (an upward move) caused an increase in pH. In other species, a transitional acidosis (a drop in plasma pH) lasting several hours has been observed before compensation through bicarbonate became effective 41,42. In our study, a further increase (along a constant pCO2 isopleth) was caused by the bicarbonate increase until 24 h of exposure. Such rapid responses indicate the presence of an efficient Cl−/HCO3− exchanger, as previously found in the gills of sharks 43, possibly supplemented by a Na+/NH4+ exchanger 42. In addition, an involvement of a Na+-K+-ATPase, as in teleosts, is possible 44. After the sharks were returned to normocapnic conditions after 24 h, alkalosis (high pH) persisted despite a substantial and rapid decline in bicarbonate concentration. Adjustment of pH to initial levels probably takes longer, to allow cellular processes to adjust. The rapid recovery indicates the reversibility of this mechanism. Esbaugh et al. 26 have hypothesised that species adapted to low-level hypercapnia may no longer rely on traditional short-term acid-base regulation and may use morphological changes (gill permeability, diffusion distances) instead or in addition. In H. edwardsii this may be a contributing factor, but it was not studied here.
It is important to note that acute physiological responses often differ distinctly from those to chronic exposure. In H. edwardsii, however, the response shown after acute exposure was maintained for a period of more than 60 days. In both treatments, plasma pH levels were very similar and there was no acidosis, as would be expected if the compensatory mechanisms could not be maintained for prolonged periods. The increase in bicarbonate concentrations seen in the Henderson-Hasselbalch diagram also makes it apparent that elevated bicarbonate levels are, at least in part, responsible for these mechanisms, ensuring that the outward pCO2 gradient (0.6 Torr) is maintained at a level close to that found in normocapnia (0.8 Torr). However, long-term compensation is likely to come at a cost: lower metabolic rates and dissolution of hard structures such as shells and carapaces have been reported 3,45,46. Although metabolic depression 47 is an adequate, reversible strategy to mitigate short-term hypercapnic exposure, the concomitant reduction in somatic growth and reproductive output might have negative effects during chronic exposure. This was previously shown in Port Jackson sharks 48.
Structural and compositional changes of denticles under chronic hypercapnic conditions were evident from structural scans and from the elemental composition of samples. Weakening and deformation of CaCO3 shell and skeleton elements has been observed in a host of different marine invertebrates 30 and fishes 49 and largely attributed to the negative effects of increased pCO2 on calcification or to chemical dissolution. Shark denticles differ from invertebrate shells and skeletal structures in that they are composed of calcium fluorophosphate (fluorapatite) and calcium hydroxyl phosphate (hydroxyapatite) 37. Although both materials are only weakly soluble 50, the H+ concentration of 50 nM in our experiment appears to have been sufficient to dissolve a measurable portion of the apatites. This is evident from the lower concentrations of Ca, P and F in denticles exposed to those conditions (Table 4). The observed changes are not the result of a physiological process, as the time it takes to form new denticles is in the order of 4 months 51 and therefore exceeds the duration of the experiment. The observed effects are the result of chemical dissolution, for which no information on shark denticles under chronic hypercapnia was previously available. Green and Jutfelt 21 found no visible change in Scyliorhinus canicula denticle morphology after short-term (one month) hypercapnic exposure (pH 7.7). S. canicula and H. edwardsii are both common in areas with strong coastal upwelling and large fluctuations in seawater conditions, including temperature and pH, and are therefore likely adapted to acute changes in these. Our results, however, suggest that chronic exposure to severely hypercapnic (pH 7.3) conditions causes the dissolution of fluorapatite and in turn corrosion and weakening of the denticle surface. Shark denticles have been attributed a number of different functions, including protection against skin abrasion during hunting and mating and the improvement of hydrodynamics 52. Our results suggest that chronic exposure to lower pH commensurate with future ocean acidification scenarios might impair the functionality of the denticles and in turn have negative effects on feeding and hydrodynamics. The latter might not be important in demersal sharks such as H. edwardsii, which are buccal ventilators and do not need to swim in order to breathe. We speculate that, in large pelagic elasmobranchs, which are facultative ram ventilators and must maintain forward motion to sufficiently ventilate their gill surface, impaired hydrodynamics caused by denticle corrosion may affect metabolic CO2 removal from the gills. However, further research is needed to examine this hypothesis. The intact denticle surface is responsible for as much as a 12% increase in swimming speed 52. Reduced hydrodynamics would not only make swimming less effective but would also make it more difficult to remove metabolic CO2; in a mackerel (a teleost), higher swimming speed was shown to eliminate CO2 more efficiently from the blood 39. In addition, corrosion of the tooth surface of all sharks would impair feeding.
Conclusions
H. edwardsii is already well adapted to hypercapnic conditions owing to their frequent occurrence after coastal upwelling and subsequent low-oxygen events. Despite these adaptations, we observed a negative consequence of chronic hypercapnia: denticle corrosion.
Denticle corrosion and the resultant increase in denticle turnover can potentially compromise hydrodynamics and skin protection. As denticles and shark teeth are structurally and materially identical, chemical dissolution of teeth at a similar rate can be expected. We speculate that a combination of these multiple effects might negatively affect the populations of this and other endemic, coastal elasmobranch species for which range shift is impossible, as they reside at the southern tip of the African continent. We suggest that these multiple stressors make chondrichthyans particularly susceptible to ocean acidification, and additional studies are urgently needed to elucidate the extent of this effect on already vulnerable species.
Table 4. Elemental composition (in wt %) of denticles from adult H. edwardsii after exposure to normocapnic and hypercapnic conditions for nine weeks. Values are means ± S.D. *Significantly different from the normocapnia group (Student's t-test; p < 0.05).
Dual regulation by ethanol of the inhibitory effects of ketamine on spinal NMDA-induced pressor responses in rats
Background Acute exposure to ethanol (alcohol) inhibits NMDA receptor function. Our previous study showed that acute ethanol inhibited the pressor responses induced by intrathecally applied NMDA; however, prolonged ethanol exposure may increase the levels of phosphorylated NMDA receptor subunits, leading to changes in the inhibitory potency of ethanol on NMDA-induced responses. The present study was carried out to examine whether acute ethanol exposure influences the effects of ketamine, a noncompetitive NMDA receptor antagonist, on spinal NMDA-induced pressor responses. Methods The blood pressure responses induced by intrathecal injection of NMDA were recorded in urethane-anesthetized rats weighing 250-275 g. The levels of several phosphorylated residues on NMDA receptor GluN1 subunits were determined by western blot analysis. Results Intravenous injection of ethanol or ketamine inhibited spinal NMDA-induced pressor responses in a dose-dependent and reversible manner. Ketamine inhibition of NMDA-induced responses was synergistically potentiated by ethanol when ethanol was applied just before ketamine. However, ketamine inhibition was significantly reduced when ketamine was applied 10 min after ethanol administration. Western blot analysis showed that intravenous ethanol increased the levels of phosphoserine 897 on GluN1 subunits (pGluN1-serine 897), selectively phosphorylated by protein kinase A (PKA), in the lateral horn regions of the spinal cord at 10 min after administration. Intrathecal administration of cAMPS-Sp, a PKA activator, at doses elevating the levels of pGluN1-serine 897, significantly blocked ketamine inhibition of spinal NMDA-induced responses. Conclusions The results suggest that ethanol may differentially regulate ketamine inhibition of spinal NMDA receptor function depending on ethanol exposure time and the resulting changes in the levels of pGluN1-serine 897.
Background
Ethanol has several effects on the central nervous system, such as intoxication, tolerance, and withdrawal. Although these mechanisms are still not well understood, much evidence suggests an important role of the glutamate neurotransmitter system in the effects of ethanol [1][2][3]. It has been repeatedly reported that ethanol antagonizes the central effects of glutamate by acting at NMDA (N-methyl-D-aspartate) receptors, a subtype of ionotropic glutamate receptors, at pharmacologically relevant concentrations [4,5]. The ability of ethanol to inhibit NMDA-activated current is linearly related to its potency for causing intoxication [6]. The NMDA receptor subunit family comprises seven members: a GluN1 subunit, a family of GluN2 subunits (A, B, C, D), and two GluN3 subunits (A and B) [7]. NMDA receptor function is regulated by several kinases and phosphatases. There are several serine residues on the GluN1 subunit; serine residues 896 and 897 are specifically phosphorylated by protein kinase C (PKC) and cAMP-dependent protein kinase (PKA), respectively [8].
Sympathetic preganglionic neurons (SPNs), located in thoracolumbar spinal cord, are the final site where sympathetic activity is integrated within the central nervous system [9]. SPNs provide projections to sympathetic ganglia and adrenal medulla, whose activation elicits an increase in peripheral sympathetic activity and the underlying cardiovascular responses. Our previous studies showed intrathecal injection of NMDA into the T7-T9 segments of spinal cord may cause an increase in blood pressure resulting from the activation of NMDA receptors in SPNs; intravenous injection of ethanol selectively inhibited the NMDA-induced pressor responses [10]. We further demonstrated that prolonged application of ethanol may increase the phosphorylated levels of NMDA receptors by activating signaling pathways and subsequently regulate (counteract) ethanol inhibition of the NMDA receptor function [11], which may contribute to the development of acute ethanol tolerance. We suggest that depending on exposure time and the resulting alteration of the phosphorylated levels of NMDA receptors, acute ethanol may have differential influences on NMDA receptor function. Whether ethanol intake differentially modulates the inhibitory effects of NMDA receptor antagonists on NMDA receptor function remains unclear. The present study was undertaken to examine the hypothesis that acute ethanol exposure may affect the inhibitory effects of ketamine, a non-competitive NMDA receptor channel blocker, on the responses of NMDA receptors in spinal sympathetic neurons using an in vivo model established previously; the magnitude of increases in blood pressure induced by intrathecal injection of NMDA was used as an index for responses of NMDA receptors in vivo.
Animals
Sprague-Dawley (SD) rats purchased from BioLASCO Co., Ltd. (Taipei, Taiwan) were used to establish a breeding colony at the Laboratory Animal Center, Tzu Chi University, Taiwan. Adult male rats weighing 250-270 g selected from the colony were used in the present study. All procedures were carried out in accordance with the guidelines of the Institutional Animal Care and Use Committee of Tzu Chi University. To avoid unnecessary sacrifice and suffering, the number of animals used was minimized, and anesthetics were used throughout the experiments.
Determination of blood ketamine and ethanol levels
To avoid perturbing the blood pressure recording, blood ketamine and ethanol concentrations were measured in a separate group of male rats under the same conditions as the experimental animals. The rats were anaesthetized with urethane. The right femoral vein was cannulated for intravenous injection of ketamine (2 mg/kg or 4 mg/kg) or ethanol (0.025 g or 0.16 g). Ketamine (1 ml/kg) or ethanol (1 ml) at known concentrations was injected into the femoral vein over 100 seconds. A blood sample of 0.2 ml was withdrawn from the right femoral artery at 10 min and 40 min after intravenous injection. Plasma ketamine concentrations were measured by gas chromatography-mass spectrometry (Hewlett Packard GC 6890 coupled to an MS 5973 mass detector) equipped with an autosampler and an HP-5MS capillary column (12.5 m × 0.20 mm i.d., 0.33 μm film thickness; Agilent Technologies, Palo Alto, CA) [12]. Blood ethanol concentrations were determined with a commercially available alcohol diagnostic kit (Diagnostic Chemicals Limited, Oxford, CT); the rate of increase in absorbance at 340 nm was recorded with a spectrophotometer (Beckman DU650).
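As a worked example of the 340 nm read-out, the sketch below converts an NADH absorbance change to an ethanol concentration via the Beer-Lambert law, assuming an endpoint alcohol-dehydrogenase reaction with 1:1 ethanol-to-NADH stoichiometry; the commercial kit used here is rate-based and relies on its own calibration, so this is only an illustration of the underlying arithmetic.

```r
# Endpoint conversion of an A340 change to ethanol concentration (illustrative).
# Assumes 1 mol NADH formed per mol ethanol oxidised and
# epsilon(NADH, 340 nm) = 6.22 mM^-1 cm^-1.
ethanol_mM <- function(delta_A340, path_cm = 1, dilution = 1) {
  (delta_A340 / (6.22 * path_cm)) * dilution
}

# e.g. delta A340 of 0.31 in a 1 cm cuvette after a 100-fold sample dilution
ethanol_mM(0.31, path_cm = 1, dilution = 100)  # about 5 mM in the original sample
```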
Intrathecal administration and blood pressure measurement
Procedures for intrathecal administration to anesthetized rats were similar to those described previously [10,13]. The rats were anaesthetized with urethane (1.2 g/kg, i.p.). Additional urethane (0.3 g/kg, i.p.) was applied if the rats responded to tail pinch or to intrathecal insertion of polyethylene tubing. The left femoral artery was cannulated with polyethylene tubing (PE 50) and connected to a pressure transducer with its output to a Gould EasyGraf recorder (TA420) for recording of blood pressure. The signals from the recorder were sent to a data acquisition system (MP 100, BIOPAC Systems, Inc.) for continuous recording of blood pressure, and the built-in function of the acquisition system provided simultaneous measurement of mean arterial pressure (MAP). The right femoral vein was cannulated for intravenous injection of ketamine or ethanol. Ketamine (1 ml/kg) at known concentrations was injected into the femoral vein over 100 seconds. Rats were mounted in a stereotaxic head holder and implanted with a spinal catheter for intrathecal injection. A slit was made in the atlanto-occipital membrane and the catheter (PE-10 tubing) was inserted down into the spinal subarachnoid space so that the tip was placed in the vicinity of the T7-T9 segments. Reagents at known concentrations were injected intrathecally in a volume of 10 μL, followed by 10 μL of saline to wash in the agent. As a negative control, intrathecal saline did not elicit any significant changes in blood pressure. NMDA was applied at intervals of 30 min. NMDA and other chemicals were dissolved in saline at known concentrations and injected with a microsyringe pump (KDS 100). After NMDA-induced responses were stable over two consecutive tests, experiments were carried out as follows. First, to examine whether ketamine or ethanol affected NMDA-induced responses, ketamine or ethanol was injected intravenously 10 min before the next application of NMDA. Second, to examine the effects of ethanol on ketamine inhibition of NMDA-induced responses, ketamine was applied immediately, or at 10 min or 30 min, after intravenous injection of ethanol; NMDA was applied 10 min after administration of ketamine. Third, to examine the effects of pretreatment with PKA activators on the modulation of ketamine effects, the activator was applied intrathecally 10 min prior to intravenous ketamine; NMDA was applied 10 min and 40 min after administration of ketamine.
Western blot analysis
The procedure for Western blot analysis of spinal cord tissue was similar to that described in earlier studies [11,14]. Rats were anaesthetized by intraperitoneal injection of urethane (1.2 g/kg). A group of 3 rats was used to determine the levels of several phosphorylated residues on NMDA receptor subunits following administration of ethanol or PKA activators. For determination of the effects of ethanol, a control rat was sacrificed after the surgery without administration of ethanol; one rat each was sacrificed at 10 and 30 min following intravenous injection of ethanol. For each dose tested, these experiments were repeated 4-5 times. For determination of the effects of PKA activators, a control rat was sacrificed at 10 min after intrathecal injection of saline; one rat each was sacrificed at 10 min after intrathecal injection of cAMPS-Sp (0.5 or 5 nmol). The thoracic segments of the spinal cord were removed immediately after cardiac perfusion with normal saline containing protease inhibitors (Complete protease inhibitor cocktail tablets, Roche Diagnostics, GmbH). Coronal 1000 μm-thick sections from the T7-T9 segments of the spinal cord were prepared and quickly frozen by cold spray (FREEZE 75; CRC Industry Europe NV, Zele, Belgium). The lateral horn regions of the slices from each rat were punched out with a tissue puncher (0.75 mm in diameter). The isolated tissues were frozen in liquid nitrogen and stored at −85°C until use. The tissue was homogenized in 60 μL of lysis buffer (0.32 M sucrose, 1 mM EDTA and 1 mTIU·mL−1 aprotinin) with a homogenizer (Glas-Col, Terre Haute, IN) on ice. SDS was added to the sample to a final concentration of 0.1%, and 20 μg of protein was electrophoresed and transferred onto a nitrocellulose (NC) membrane. After blocking and washing of the NC membrane, blots were probed overnight at 4°C on a 2D shaker with primary antibodies, rabbit anti-GluN1 polyclonal antibody (1:800, Upstate Biotechnology Inc., Lake Placid, NY, USA) and rabbit anti-pGluN1 antisera (serine 896, 1:1500 and serine 897, 1:1500, Upstate Biotechnology Inc.), in TBS-T containing 5% skimmed milk powder. The blots were then incubated with a secondary goat anti-rabbit antibody conjugated to horseradish peroxidase (1:2000, Santa Cruz Biotechnology Inc., Santa Cruz, CA, USA). The specific protein bands were visualized using enhanced chemiluminescence (ECL) reagents (Western Blotting Luminol Reagent, Santa Cruz Biotechnology, Santa Cruz, CA). The chemiluminescent signal was detected on X-ray film (Fuji Photo Film Co., Ltd., Tokyo), and the intensity of the bands was digitized with a scanner and analysed with UN-SCAN-IT gel software version 6.1 for Windows (Silk Scientific Corporation, Orem, UT, USA). Protein concentrations were determined by the bicinchoninic acid method (Sigma Co.) using bovine albumin as standard.
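A small illustration of how the digitised band intensities can be turned into relative phosphorylation levels, assuming the common normalisation of each phospho-band to the corresponding total GluN1 band and then to the control lane; the normalisation scheme and the numbers are illustrative assumptions, not the authors' documented procedure.

```r
# Hypothetical densitometry values (arbitrary units) for one blot:
# phospho-GluN1 (Ser897) and total GluN1, control vs. 10 min after ethanol.
pSer897 <- c(control = 1150, ethanol_10min = 2100)
GluN1   <- c(control = 3000, ethanol_10min = 2950)

ratio      <- pSer897 / GluN1           # phospho-signal per unit total GluN1
rel_to_ctl <- ratio / ratio["control"]  # fold change relative to control
rel_to_ctl                              # e.g. ~1.0 (control) vs ~1.9 (ethanol)
```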
Chemicals and statistical analysis
cAMPS-Sp triethylammonium salt, a PKA activator, was obtained from Tocris Cookson Ltd. (Bristol, UK). Ethanol was purchased from Riedel-de Haen (Deisenhofen, Germany). NMDA, ketamine, aprotinin and other reagents used for Western blot analysis were purchased from Sigma Co. (St. Louis, Missouri, USA). Stock solutions were prepared in distilled water; further dilutions were made in saline. The reagents for electrophoresis were obtained from Bio-Rad Laboratories (Richmond, CA).
Data are presented as mean ± SEM and were plotted and analysed statistically with GraphPad Prism version 4.0 for Windows, GraphPad Software (San Diego, CA). The time-effect relationship of ethanol or ketamine on NMDA-induced pressor responses was analysed using repeated measure ANOVA followed by Newman-Keuls post-test. The effects of prior administration of ethanol or pretreatment with cAMPS-Sp on ketamine action at different times after administration of ketamine were analysed using two-way ANOVA followed by Bonferroni post-test. The statistical evaluation of western blots was analysed using one-way ANOVA followed by Newman-Keuls post-test. P < 0.05 was considered statistically significant.
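For readers working outside GraphPad Prism, the testing scheme described above can be approximated in R; the sketch below uses a hypothetical split-plot data set (pretreatment between rats, sampling time within rats) and Bonferroni-adjusted pairwise t-tests as a stand-in for the Newman-Keuls and Bonferroni post-tests named in the text.

```r
# Hypothetical data: percentage NMDA response per rat at 10 and 40 min after
# ketamine, with intrathecal pretreatment (saline or cAMPS-Sp) between rats.
set.seed(2)
dat <- data.frame(rat          = factor(rep(1:6, each = 2)),
                  pretreatment = factor(rep(c("saline", "cAMPS-Sp"), each = 6)),
                  time         = factor(rep(c(10, 40), times = 6)))
dat$pct_response <- 60 + 25 * (dat$pretreatment == "cAMPS-Sp") + rnorm(12, 0, 8)

# Split-plot ANOVA: pretreatment (between rats) x time (repeated within rats)
summary(aov(pct_response ~ pretreatment * time + Error(rat/time), data = dat))

# Bonferroni-adjusted pairwise comparisons across pretreatment x time cells
pairwise.t.test(dat$pct_response, interaction(dat$pretreatment, dat$time),
                p.adjust.method = "bonferroni")
```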
Ketamine inhibition of spinal NMDA-induced pressor responses
Resting mean arterial pressure (MAP) of the urethane-anesthetized rats was 80.2 ± 4.3 mmHg (n = 26). As in our previous study [10], MAP increased in a dose-dependent manner following intrathecal injection of NMDA (1 and 2 nmol, 10 μL), by 14.3 ± 2.5 mmHg (n = 6) and 26.5 ± 2.3 mmHg (n = 16), respectively. Consecutive intrathecal administration of NMDA (2 nmol) at intervals of 30 min elicited reproducible increases in MAP. Intravenous injection of ketamine (2 and 4 mg/kg) alone did not cause significant changes in MAP. However, NMDA-induced pressor effects were dose-dependently attenuated by intravenous administration of ketamine. A representative recording of ketamine inhibition of the spinal NMDA-induced pressor response is illustrated in Figure 1a. The time course of the percentage changes in NMDA-induced pressor responses and the corresponding blood concentrations of ketamine are illustrated in Figure 1b and 1c, respectively. The decline in blood ketamine over time after a single injection was accompanied by a comparable reduction in ketamine inhibition of NMDA-induced pressor responses; NMDA-induced pressor responses decreased by 31% and 45% when blood ketamine concentrations were about 195 and 460 ng/mL at 10 min after intravenous injection of 2 mg/kg and 4 mg/kg of ketamine, respectively.
Ethanol inhibition of spinal NMDA-induced pressor responses
Similar to the results in our previous reports [10,11], spinal NMDA-induced pressor responses were dose-dependently attenuated by intravenous administration of ethanol.
Ethanol potentiated ketamine inhibition of spinal NMDA-induced pressor responses
Ethanol (0.025 g) applied alone had little effect on NMDA-induced responses. However, ethanol (0.025 g) significantly potentiated ketamine inhibition of NMDA-induced pressor effects when ethanol was applied just before ketamine (Figure 3a, b). Intravenous injection of ethanol (0.16 g) followed by ketamine (2 or 4 mg/kg) caused a strong inhibition of NMDA-induced pressor effects at 10 min after the injection (Figure 3c); the degree of inhibition induced by combined administration of ethanol and ketamine was greater than the sum of that induced by ethanol or ketamine alone (Figure 3b, c). These results revealed that ethanol synergistically potentiated ketamine inhibition of NMDA-induced responses.
Ketamine inhibition of spinal NMDA-induced pressor responses was reduced at 10 min after administration of ethanol
The degree of inhibition by ketamine (2 and 4 mg/kg) of NMDA-induced pressor effects was examined immediately (0 min) or at 10 min or 30 min after intravenous injection of ethanol (0.16 g). The degree of ketamine inhibition of NMDA-induced pressor effects at 10 min after intravenous ethanol was significantly less than that at 0 and 30 min (Figure 4). NMDA-induced responses decreased by 92%, 55%, and 90% when ketamine (2 mg/kg) was applied at 0 min, 10 min, and 30 min after injection of ethanol (0.16 g), respectively; NMDA-induced responses decreased by 100%, 63%, and 96% when ketamine (4 mg/kg) was applied at 0 min, 10 min, and 30 min after ethanol (0.16 g), respectively. These results show a reduction of ketamine and/or ethanol inhibition at 10 min after administration of ethanol.
Discussion
The present study showed for the first time that ethanol differentially regulates ketamine inhibition of spinal NMDA receptor-mediated responses, depending on the ethanol exposure time and the underlying changes in the levels of pGluN1-serine 897. In addition, our study provides the first evidence that an increase in the levels of pGluN1-serine 897 reduces the inhibitory potency of ketamine on NMDA receptor function.
Our previous study found that intravenous ethanol, at doses without noticeable effects on intrathecal AMPA-mediated pressor responses, significantly inhibited NMDA-mediated pressor responses, indicating that NMDA receptors are the main targets of ethanol in sympathetic neurons of the spinal cord [10]. The present study demonstrated that intravenous ethanol inhibited spinal NMDA-induced pressor responses, as in our previous findings. In addition, we found that intravenous ketamine inhibited NMDA-induced pressor effects in a blood concentration-dependent and reversible manner. An interesting finding was that intravenous injection of ethanol just before ketamine produced synergistic effects on the inhibition of NMDA-induced responses, i.e. the combined inhibition was greater than the sum of the individual inhibitions. Such synergistic inhibition may be of particular concern with respect to fetal brain damage, because NMDA receptors play an important role during the developmental period of synaptogenesis [15,16]. Because of its ability to produce mind-altering effects, ketamine is frequently abused. Abuse of ketamine mixed with other drugs is increasingly common at rave parties [17,18], where intake of ketamine is often combined with that of ethanol. There have been reports of lethal mixed-drug intoxication involving ketamine and ethanol [19]. The current findings, showing a potentiated inhibition of NMDA receptor function by ketamine in the presence of ethanol, may provide a molecular mechanism for the neurotoxicity induced by combined intake of ethanol and ketamine.
Patch-clamp studies of NMDA-activated currents indicate that ketamine may inhibit the NMDA receptor by two distinct mechanisms. First, ketamine blocks the open channel and thereby reduces the channel mean open time; second, ketamine may bind to the closed receptor and decrease the frequency of channel opening by an allosteric mechanism. Low concentrations of ketamine predominantly caused closed-channel blockade, whereas both open- and closed-channel blockade occurred at higher ketamine concentrations [20]. Ethanol is well known to inhibit NMDA receptor function; however, the precise site and the mechanism underlying alcohol inhibition have not been established [21]. Ethanol inhibits NMDA receptors in a noncompetitive manner and does not appear to act by interfering with either glutamate or glycine binding. In studies of cells transfected with NMDA receptor subunits, acute alcohol treatment has been shown to interact with several transmembrane domains of NMDA receptors, because alterations in certain amino acids of NMDA receptor subunits changed alcohol sensitivity [22,23]. Alcohol may also act at a site located in a domain exposed to the extracellular environment [24]. In addition, the ethanol sensitivity of NMDA receptors may be modulated by discrete sites within the TM3 and TM4 domains of the GluN1 subunit [25]. These studies suggest a putative binding site on the NMDA receptor for ethanol.
Figure 4 Bar graphs show percentage changes in NMDA-induced increases in MAP 10 min and 40 min after intravenous injection of ketamine (2 mg/kg, left part; 4 mg/kg, right part) applied immediately (0 min) or at 10 min and 30 min after intravenous injection of ethanol (0.16 g). The peak magnitude of the NMDA-induced increase in MAP immediately prior to application of ethanol and/or ketamine is taken as control (100%). *Statistically significant difference between groups analyzed using two-way ANOVA followed by Bonferroni post-test.
The current study demonstrated that ethanol may synergistically potentiate ketamine inhibition of NMDA-induced responses, suggesting that the binding site of ethanol may be different from that of ketamine on NMDA receptors and that the two binding sites may interact in regulating NMDA receptor function. Indeed, a recent study also showed that ifenprodil, a selective NMDA receptor GluN2B subunit antagonist, may enhance the anti-hyperalgesic effect of ketamine [26]. Phosphorylation is important in the direct and indirect modulation of NMDA receptors. The serine residue 897 on the GluN1 subunit is specifically phosphorylated by PKA; serine residues 890 and 896 are phosphorylated by PKC; the tyrosine residue 1336 on the GluN2B subunit (pGluN2B-tyrosine 1336) is specifically phosphorylated by protein tyrosine kinases [27]. Acute and chronic ethanol exposure affects the function of specific intracellular signaling pathways, including the PKA, PKC, and tyrosine kinase signaling pathways [28,29]. Some kinases, such as Fyn tyrosine kinase, PKC and PKA, are able to reduce NMDA receptor sensitivity to acute ethanol [30][31][32]. Our previous study demonstrated that continuous intravenous ethanol infusion may activate PKA, PKC, and Src tyrosine kinase, leading to increases in the levels of pGluN1-serine 896, pGluN1-serine 897, and pGluN2B-tyrosine 1336, respectively, in the lateral horn of the spinal cord [11]. We also showed that the increase in the levels of pGluN1-serine 896 and pGluN2B-tyrosine 1336, but not pGluN1-serine 897, may counteract ethanol inhibition of spinal NMDA-induced pressor responses and be responsible for acute ethanol tolerance during prolonged ethanol exposure. In the current study, we found that a single bolus intravenous injection of a low dose of ethanol caused an increase in the levels of pGluN1-serine 897, but not pGluN1-serine 896, a short period of time (10 min) after the injection, when blood ethanol concentrations were about 90 mg/dL (0.09%). The increase in the levels of pGluN1-serine 897 induced by ethanol may reduce the inhibitory potency of ketamine, as supported by our results showing that intrathecal cAMPS-Sp, a PKA activator, at doses elevating the levels of pGluN1-serine 897, significantly blocked ketamine inhibition of spinal NMDA-induced responses. These results provide the first in vivo evidence that PKA signaling pathways may participate in the regulation of ketamine inhibition of NMDA receptor function. In addition, our results have an important implication: alteration of the phosphorylation levels of NMDA receptor subunits may influence the effects of NMDA receptor antagonists. A recent study showed that inhibition of cAMP hydrolysis by phosphodiesterase inhibitors significantly reversed ketamine-induced anesthesia in mice [33], further supporting the idea that cAMP signaling pathways are involved in the regulation of ketamine effects.
Our previous study showed that the immunoreactivity to pGluN1-serine 896 (regulated by PKC) was decreased at 10 min but increased at 40 min in neurons of the intermediolateral cell column following continuous ethanol infusion [11]. The present study also showed that pGluN1-serine 896 was significantly reduced in the lateral horn regions of the spinal cord at 10 min after administration of the higher dose of ethanol. Ethanol is widely reported to regulate the function of PKC [34,35]. PKC may regulate NMDA receptor function through nonreceptor tyrosine kinases [36]. It has been suggested that ethanol-induced tyrosine dephosphorylation of NMDA receptor subunits plays an important role in mediating the inhibitory effects of ethanol on NMDA receptor function [37]. It is possible that decreases in the levels of pGluN1-serine 896 contribute to the inhibitory effects of ethanol on spinal NMDA-induced responses; further work is required to establish this. In addition to NMDA receptors, GABA receptors are another important target for ethanol action. Ethanol increases GABA receptor function in several brain regions [3,38]. Ethanol inhibition of NMDA receptor activity has been suggested to occur both directly, through actions on the NMDA receptors, and indirectly, possibly through potentiation of GABA receptor activity [39]. Therefore, the possibility that ethanol regulation of NMDA-induced responses was secondary to effects of ethanol on other targets such as GABA receptors cannot be ruled out.
Heavy alcohol consumption is associated with cardiovascular dysfunction such as hypertension and stroke [40]. However, epidemiological studies have also shown that moderate consumption of alcohol is associated with a reduced risk of coronary artery disease [41]. A positive relationship between alcohol consumption and blood pressure is well established in epidemiologic studies [42,43]. Although the mechanisms underlying alcohol-induced hypertension remain unclear, alteration of central sympathetic activity has been suggested to participate in alcohol-induced changes in blood pressure [44,45]. It is likely that differential regulation by ethanol of NMDA receptor function in spinal sympathetic neurons contributes to ethanol regulation of cardiovascular function, although further studies are required to clarify this.
Conclusion
In summary, our results indicate that simultaneous intake of ethanol and ketamine may produce synergistic inhibitory effects on NMDA receptor function. However, intake of ethanol a certain period of time prior to ketamine may reduce the inhibitory effects of ketamine by activating PKA signaling pathways, resulting in elevated levels of pGluN1-serine 897.
of Laboratory Medicine and Biotechnology, Tzu Chi University, Hualien, Taiwan. 4 Department of Pharmacology, Tzu Chi University, Hualien, Taiwan. 5 Department of Radiological Technology, Tzu Chi College of Technology, Hualien, Taiwan. 6 General Education Center, National Taitung Junior College, Taitung, Taiwan. | 5,422.2 | 2012-02-02T00:00:00.000 | ["Medicine", "Biology"] |
Behavior of Rubber Materials under Exposure to High Electric Fields
The effect of high electrical stress on rubber materials is investigated by performing breakdown tests and tracking resistance tests on selected samples. The study focuses on the relationship between the dielectric strength and the thickness of the samples, as well as the influence of the interfaces between different layers of material. Tracking resistance tests are also performed on the rubber materials. The purpose is to provide a complete study of the applicability of rubber materials in thunderstorm environments.
Introduction
In recent years, new technologies have been developed to increase the efficiency of wind turbine blades, some of which involve the use of rubber materials in the blade structure. Amongst these technologies, deformable flaps aim at reducing the load on the blade, thus alleviating the fatigue strain on the whole wind turbine [1]. This is achieved by installing a rubber flap in the trailing edge of the blade (Fig. 1) and controlling its deflection using a compressed-air system. Once installed in the blade, the rubber flap will be subjected to severe ambient conditions. This paper focuses on the effect of high electric fields on the electrical performance of rubber materials. During their lifetime, wind turbines are repeatedly exposed to high electric fields from thunderstorms, which progressively degrade the insulating properties of the blade materials [2]. The interaction between the thunderstorm electric field and the fiberglass material usually used in wind turbines has been widely studied [3]. However, the behavior of rubber materials in a thunderstorm environment is not fully known and therefore needs to be assessed. The IEC standard on lightning protection of wind turbines [2] defines the tests to be performed on wind turbines to reproduce the effects of a direct lightning strike. However, there is a need for tests aimed at assessing the effects of repeated high electric field and discharge exposure on the insulating materials of the blade. In this study, the rubber behavior under high electric fields is investigated by performing breakdown and tracking resistance tests. These methods have been used previously to evaluate the performance of blade insulating materials against lightning [3], [4]. The criterion used to evaluate the test results on fiberglass was whether the material was affected in the same way as observed in blades in service.
Samples of four different rubber materials have been subjected to breakdown and tracking tests. The samples and setups used in the tests are described in section 2. Section 3 summarizes the test results, comparing the different rubber materials. The relationship between the breakdown strength and the thickness of the samples and the influence of the interfaces between the layers of material are also investigated and described in this section. Finally, the outcome of the tests is presented in the Discussion section, where the suitability of rubber materials for wind turbine blades is discussed.
Breakdown strength tests
The specimens used for this test are square shaped, with a side length of 100 mm. Each material has been tested with thicknesses of 1, 2, 3 and 4 mm. The sample is placed between two electrodes inside a container filled with silicone oil (Fig. 2). The upper electrode is spherical, with a diameter of 12.5 mm, and is connected to the high impulse voltage generator. The lower electrode is cylindrical with rounded edges, with a diameter of 70 mm, and is connected to ground. The purpose of the silicone oil is to increase the electrical breakdown strength of the medium around the specimen, in order to prevent side flashovers. The test follows the procedure described in [5]. It consists of applying a high voltage impulse with a rise
time and decay to half value of 1.2 and 50 μs respectively, according to [6]. The test starts at a relatively low voltage, where there is no risk of breakdown, and the voltage is increased progressively until breakdown of the material is reached. This procedure is repeated 6 times. Each time, the peak value of the voltage impulse that produces breakdown and the peak value of the withstand voltage prior to breakdown are measured. The breakdown and withstand voltages of each material reported in section 3 correspond to the average value of these measurements.
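As a minimal illustration of this averaging step, the sketch below computes the mean breakdown and withstand voltages over the six repetitions and converts the breakdown voltage to a field strength in kV/mm; the voltage values shown are illustrative placeholders, not measurements from this study.

```python
import numpy as np

def breakdown_summary(breakdown_kv, withstand_kv, thickness_mm):
    """Average the six impulse repetitions described in the procedure and
    convert the breakdown voltage into a field strength (kV/mm).

    breakdown_kv, withstand_kv : peak voltages (kV), one value per repetition
    thickness_mm               : sample thickness (mm)
    """
    v_bd = np.mean(breakdown_kv)     # average breakdown voltage
    v_ws = np.mean(withstand_kv)     # average withstand voltage (last level before breakdown)
    e_bd = v_bd / thickness_mm       # breakdown field strength, kV/mm
    return v_bd, v_ws, e_bd

# Illustrative numbers only (not measured values from the paper):
v_bd, v_ws, e_bd = breakdown_summary([108, 112, 110, 109, 111, 110],
                                     [104, 107, 106, 105, 106, 105],
                                     thickness_mm=1.0)
print(f"breakdown {v_bd:.1f} kV, withstand {v_ws:.1f} kV, strength {e_bd:.1f} kV/mm")
```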
Tracking resistance tests
The setup arrangement and the test procedure follow the standard described in [7]. The specimens are mounted on an insulating support, which stands at an angle of 45 degrees from the horizontal. Two electrodes are placed at the top and the bottom of the sample, connected to high voltage and ground respectively (Fig. 3). The specimens are arranged in sets of 5 samples. Each sample is 50 x 120 mm, with a thickness of 2 mm (Fig. 4). The procedure follows Method 2 of [7]: stepwise tracking voltage. It consists of applying a sequence of AC voltage levels across the sample while a contaminant solution based on NH4Cl flows over the lower surface of the sample. The initial voltage is chosen such that no sample will fail during the first three steps, and it is increased by 250 V every hour. The end-point criterion used in this test is "End-point criterion A: the value of current through the specimen exceeds 60 mA". In order to determine when the current exceeds the maximum value allowed, a fuse is installed in the HV circuit of each sample.
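The stepping logic of Method 2 can be summarized with the small sketch below, which raises the voltage by 250 V every hour and stops when end-point criterion A (specimen current above 60 mA) is met; the current-versus-voltage relation used in the example is an arbitrary stand-in for the measured leakage current.

```python
def stepwise_tracking(initial_v, leakage_current_ma, step_v=250.0, max_steps=20):
    """Step the AC test voltage up by `step_v` every hour until end-point
    criterion A is met (current through the specimen > 60 mA).

    leakage_current_ma(v) is a stand-in for the measured current at level v;
    in the real test the fuse in the HV circuit detects the over-current.
    """
    v = initial_v
    for hour in range(max_steps):
        if leakage_current_ma(v) > 60.0:
            return hour, v          # failure hour and failing voltage level
        v += step_v
    return None, v                  # sample withstood the whole sequence

# Illustrative current model (mA) growing with voltage -- an assumption, not measured data:
hour, final_v = stepwise_tracking(2000.0, lambda v: 0.02 * (v / 100.0) ** 2)
print(hour, final_v)
```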
Tests results
This section summarizes the results of the breakdown strength and the tracking resistance tests on the rubber materials.
Breakdown strength tests
The breakdown strength tests were performed on 6 samples of each thickness, for the four different rubber materials, and the average breakdown and withstand voltages were computed for each material and thickness. According to the test results, the Santoprene material presents the highest breakdown field strength of 110 kV/mm, followed by silicone rubber with 72 kV/mm. It is also observed that the breakdown strength decreases with increasing sample thickness. This can be explained by the so-called volume effect, where an increase of the material thickness involves a higher probability of impurities or microscopic defects. These inhomogeneities, normally small particles and air bubbles, enhance the electric field around or inside them and lead to an earlier breakdown of the material. The volume effect can be found in all the tested materials except the EPDM. Considering that this material shows a very low breakdown strength, the negative influence of possible impurities is less dominant. In order to further study the volume effect and the influence of interfaces, additional tests were performed on Santoprene, the material showing the highest breakdown strength. These tests consisted of testing two layers of material together. The results were compared to the breakdown strength of a single layer with the same total thickness and with the theoretical breakdown strength calculated from the independent layers. The tests reveal that the double-layer setup has a higher breakdown strength than the single-layer setup. At first glance this may be surprising, since there is an additional interface in the material. On the other hand, each layer of the double-layer setup contains fewer impurities per volume than the thicker single layer, which makes the result more plausible. It is also found that the breakdown strength of the double-layer setup is lower than the value calculated from each independent layer. This indicates that in the double-layer setup, the breakdown occurs first in the weakest layer, directly followed by the other layer. Therefore, the weakest layer determines the breakdown of both layers. This phenomenon is more evident in the case of mixed thickness (1+2 mm), where the difference between the calculated and the actual breakdown strength of the double layer is greater.
Tracking resistance tests
The tracking resistance tests were performed on sets of five samples of each of the four different materials. Table 1 summarizes the results. The initial and final voltages are the voltage level applied to the samples at the beginning of the test and the voltage level at which the first sample failed, respectively. The classification of the material according to [7] corresponds to: IEC Class - method used to apply the voltage / end-point criterion / maximum level of voltage withstood. It is seen in the pictures (Figs. 7-10) that all the samples show a matt appearance between the electrodes. However, this band is different in each material, and only some materials present a deep track. The Santoprene and EPDM materials have a narrow track. The Santoprene samples present erosion only next to the bottom electrode, while the EPDM samples have severe erosion along the whole path between the electrodes. The silicone and PUR materials have a wider dry band and no significant erosion. A failed sample of each material is shown in Fig. 11.
Discussion
In this paper, the assessment of materials is done by comparison of the test results regarding breakdown and tracking. Although relevant for the wind turbine flap application, the tests themselves do not directly provide information to determine whether the material is suitable for the flap application with respect to interaction with lightning discharges. The breakdown strength tests show that the Santoprene material performs better than the other rubber materials, and its performance is comparable to the fiberglass materials used in wind turbine blades [3]. It is also observed that the thickness of the sample has a significant impact on the breakdown strength of the material due to the volume effect. Regarding the tracking resistance tests, all rubber materials reach similar voltage levels, except the EPDM material, which failed at the beginning of the tests. Furthermore, significant differences in erosion are observed among the materials that withstand the same level of voltage. Finally, it has to be considered that a rubber flap installed in a blade in service will be subjected to mechanical fatigue. Therefore, mechanical tests should be performed on the tested samples in order to evaluate how the erosion due to tracking affects the overall performance of the material.
Conclusions
Breakdown and tracking tests were performed on a selection of rubber materials following the same procedures previously used for GFRP materials for wind turbine blades.
The test results show that the Santoprene material performs better than the other materials and has properties comparable to the fiberglass material used in blades with respect to tracking resistance and breakdown strength. Therefore, it can be considered a suitable candidate for the wind turbine blade flap application with regard to its performance in interaction with lightning discharges. | 2,458.4 | 2018-02-16T00:00:00.000 | ["Physics", "Materials Science"] |
Directed assembly of optically bound matter
We present a study of optically bound matter formation in a counter-propagating evanescent field, exploiting total internal reflection on a prism surface. Small ensembles of silica microspheres are assembled in a controlled manner using optical tweezers. The structures and dynamics of the resulting optically bound chains are interpreted using a simulation implementing generalized Lorentz-Mie theory. In particular, we observe enhancement of the scattering force along the propagation direction of the optically bound colloidal chains leading to a microscopic analogue of a driven pendulum which, at least superficially, resembles Newton’s cradle. ©2012 Optical Society of America OCIS codes: (240.0240) Optics at surfaces; (290.4020) Mie theory; (290.4210) Multiple scattering; (350.4855) Optical tweezers. References and links 1. K. Dholakia and P. Zemánek, “Gripped by light: Optical binding,” Rev. Mod. Phys. 82(2), 1767–1791 (2010). 2. C. D. Mellor, J. Leckner, and C. D. Bain, “Pattern formation in evanescent wave optical traps,” Proc. SPIE 5930, 59301C, 59301C-10 (2005). 3. C. D. Mellor and C. D. Bain, “Array formation in evanescent waves,” ChemPhysChem 7(2), 329–332 (2006). 4. M. Šiler, M. Sery, T. Cizmar, and P. Zemanek, “Submicron particle localization using evanescent field,” Proc. SPIE 5930, 59300R, 59300R-9 (2005). 5. M. Šiler, T. Cizmar, M. Sery, and P. Zemanek, “Optical forces generated by evanescent standing waves and their usage for sub-micron particle delivery,” Appl. Phys. B 84(1-2), 157–165 (2006). 6. S. Kawata and T. Sugiura, “Movement of micrometer-sized particles in the evanescent field of a laser beam,” Opt. Lett. 17(11), 772–774 (1992). 7. M. Gu, J.-B. Haumonte, Y. Micheau, J. W. M. Chon, and X. Gan, “Laser trapping and manipulation under focused evanescent wave illumination,” Appl. Phys. Lett. 84(21), 4236 (2004). 8. C. D. Mellor, T. A. Fennerty, and C. D. Bain, “Polarization effects in optically bound particle arrays,” Opt. Express 14(21), 10079–10088 (2006). 9. P. J. Reece, V. Garcés-Chávez, and K. Dholakia, “Near-field optical micromanipulation with cavity enhanced evanescent waves,” Appl. Phys. Lett. 88(22), 221116 (2006). 10. V. Garcés-Chávez, R. Quidant, P. J. Reece, G. Badenes, L. Torner, and K. Dholakia, “Extended organization of colloidal microparticles by surface plasmon polariton excitation,” Phys. Rev. B 73(8), 085417 (2006). 11. M. D. Summers, R. D. Dear, J. M. Taylor, and G. A. D. Ritchie, “Controlled formation of optically bound matter in evanescent fields,” Proc. SPIE 7762, 776213, 776213-8 (2010). 12. N. J. van Leeuwen, L. J. Moore, W. D. Partridge, R. Peverall, G. A. D. Ritchie, and M. D. Summers, “Near-field optical trapping with an actively-locked cavity,” J. Opt. 13(4), 044007 (2011). 13. K. C. Neuman and S. M. Block, “Optical trapping,” Rev. Sci. Instrum. 75(9), 2787–2809 (2004). 14. M. M. Burns, J.-M. Fournier, and J. A. Golovchenko, “Optical binding,” Phys. Rev. Lett. 63(12), 1233–1236 (1989). 15. M. M. Burns, J.-M. Fournier, and J. A. Golovchenko, “Optical matter: crystallization and binding in intense optical fields,” Science 249(4970), 749–754 (1990). 16. S. A. Tatarkova, A. E. Carruthers, and K. Dholakia, “One-dimensional optically bound arrays of microscopic particles,” Phys. Rev. Lett. 89(28), 283901 (2002). 17. N. K. Metzger, K. Dholakia, and E. M. Wright, “Observation of bistability and hysteresis in optical binding of two dielectric spheres,” Phys. Rev. Lett. 96(6), 068102 (2006). 18. A. Constable, J. Kim, J. Mervis, F. Zarinetchi, and M. 
Prentiss, “Demonstration of a fiber-optical light-force trap,” Opt. Lett. 18(21), 1867–1869 (1993). 19. M. Guillon, O. Moine, and B. Stout, “Longitudinal optical binding of high optical contrast microdroplets in air,” Phys. Rev. Lett. 96(14), 143902 (2006). 20. V. Garcés-Chávez, D. Roskey, M. D. Summers, H. Melville, D. McGloin, E. M. Wright, and K. Dholakia, “Optical Levitation in a Bessel Light Beam,” Appl. Phys. Lett. 85(18), 4001 (2004). 21. V. Karásek, O. Brzobohaty, and P. Zemanek, “Longitudinal optical binding of several spherical particles studied by the coupled dipole method,” J. Opt. A, Pure Appl. Opt. 11(3), 034009 (2009). 22. J. M. Taylor and G. D. Love, “Optical binding mechanisms: a conceptual model for Gaussian beam traps,” Opt. Express 17(17), 15381–15389 (2009). 23. P. C. Chaumet and M. Nieto-Vesperinas, “Optical binding of particles with or without the presence of a flat dielectric surface,” Phys. Rev. B 64(3), 035422 (2001). 24. Y. L. Xu and B. A. S. Gustafson, “Comparison between multisphere light-scattering calculations: rigorous solution and discrete-dipole approximation,” Astrophys. J. 513(2), 894–909 (1999). 25. J. M. Taylor, L. Y. Wong, C. D. Bain, and G. D. Love, “Emergent properties in optically bound matter,” Opt. Express 16(10), 6921–6929 (2008). 26. J. P. Barton, D. R. Alexander, and S. A. Schaub, “Theoretical determination of net radiation force and torque for a spherical particle illuminated by a focused laser beam,” J. Appl. Phys. 66(10), 4594–4602 (1989). 27. D. W. Mackowski, “Analysis of radiative scattering for multiple sphere configurations,” Proc. R. Soc. Lond. A 433(1889), 599–614 (1991). 28. H. Chew, D.-S. Wang, and M. Kerker, “Elastic scattering of evanescent electromagnetic waves,” Appl. Opt. 18(15), 2679–2687 (1979). 29. E. E. M. Khaled, S. C. Hill, and P. W. Barber, “Scattered and internal intensity of a sphere illuminated with a Gaussian beam,” IEEE Trans. Antenn. Propag. 41(3), 295–303 (1993). 30. J. A. Stratton, Electromagnetic Theory (McGraw-Hill, 1941). 31. S. Chang, J. H. Jo, and S. S. Lee, “Theoretical calculations of optical force exerted on a dielectric sphere in the evanescent field generated with a totally-reflected focused Gaussian beam,” Opt. Commun. 108(1-3), 133–143 (1994). 32. G. Videen, “Light scattering from a sphere on or near a surface,” J. Opt. Soc. Am. A 8(3), 483–489 (1991). 33. D. W. Mackowski, “Exact solution for the scattering and absorption properties of sphere clusters on a plane surface,” J. Quant. Spectrosc. Radiat. Transf. 109(5), 770–788 (2008). 34. M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions (Dover, 1972). 35. P. Zemanek, A. Jonas, P. Jakl, J. Jezek, M. Sery, and M. Liska, “Theoretical comparison of optical traps created by standing wave and single beam,” Opt. Commun. 220(4-6), 401–412 (2003). 36. J. Lekner, “Force on a scatterer in counter-propagating coherent beams,” J. Opt. A, Pure Appl. Opt. 7(5), 238–248 (2005). 37. V. Karásek, T. Cizmár, O. Brzobohatý, P. Zemánek, V. Garcés-Chávez, and K. Dholakia, “Long-range one-dimensional longitudinal optical binding,” Phys. Rev. Lett. 101(14), 143601 (2008).
Introduction
There have been a number of recent studies involving the trapping of multiple particles using evanescent waves formed at interfaces by counter-propagating laser beams, as comprehensively reviewed by Dholakia and Zemánek [1]. The spatial distributions adopted by the particles are determined both by the underlying optical potential associated with the interference fringes and by interparticle light scattering, while the relative contribution of each effect in determining these "optically bound" structures is a sensitive function of the particle size parameter, x (= 2πa/λ, where a is the radius of the particle and λ is the wavelength of the trapping radiation). The first observations of one-dimensional optical binding in evanescent wave systems were reported by Mellor and Bain [2,3] and Šiler et al. [4,5] in 2005. These studies built upon earlier work by Kawata and Sugiura [6], where colloidal particles were confined on the surface of a prism using counter-propagating beams incident at an angle just above the critical angle. The optical gradient provided by the resulting evanescent fields caused particles near the surface to accumulate in the region of highest light intensity [7]. Interestingly, experiments using sub-µm diameter particles resulted in the formation of close-packed 2-d crystals, strongly influenced by both interference fringes produced by the counter-propagating coherent beams and coherent scattering between multiple particles [8]. Similar experiments conducted by Reece et al. enhanced the optical forces using resonant dielectric waveguides [9] and surface plasmon resonance techniques [10]. Alternatively, the evanescent field intensity can be enhanced by placing the interface within an actively locked optical cavity [11,12].
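For orientation, the size parameter of the spheres used later in this work can be estimated with the snippet below; the refractive index of water (1.33) is an assumed value, used here to convert the 1064 nm trapping wavelength to the wavelength in the surrounding medium.

```python
import math

def size_parameter(diameter_um, wavelength_nm, n_medium=1.33):
    """Size parameter x = 2*pi*a/lambda for a sphere of radius a, with lambda
    taken as the trapping wavelength in the surrounding water (assumed n = 1.33)."""
    a = diameter_um * 1e-6 / 2.0
    lam = wavelength_nm * 1e-9 / n_medium
    return 2.0 * math.pi * a / lam

print(size_parameter(3.5, 1064))   # roughly 13.7 for the 3.5 um silica spheres used here
```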
While optical tweezing forces are well understood [13], interactions in multiple particle systems are less so. The origin of these binding forces was first described in 1989 in the work of Burns et al. [14], examining the forces induced by an optical field between two dielectric spheres, and was later extended to studies of colloidal crystals illuminated by multiple optical standing waves [15]. The term "optically bound matter" was used to describe the structures formed by dielectric colloidal particles in the presence of applied optical fields. Subsequently, optical binding effects have been observed in other experiments [16][17][18] dealing with dielectric particles in counter-propagating light fields, in experiments using only a single levitating beam [19], as well as with different beam geometries, such as Bessel beams [20].
A number of theoretical models have been used to explore these optical binding effects. These include the coupled dipole method [21] and Generalized Lorenz-Mie Theory (GLMT) [22], both of which use the same underlying vector wave treatment of light. The coupled dipole method is a volume discretization method more easily applied to arbitrarily shaped bodies and complex light beams [23], but one which can suffer from subtle convergence issues [24]; GLMT is arguably more analytically complex but is computationally more efficient, which can be a crucial advantage, particularly for studies of dynamics. Such models typically deal with small ensembles of particles due to the complex nature of multiple scattering processes. Recent experiments [25] have shown that, for large chains of particles, multiple scattering effects become increasingly important relative to the background optical potential. There is still much work to be done to bridge the gap between these simple model systems and the more complex structures observed in reality, and our aim in this paper is to further explore the transition between these two regimes using a simple counter-propagating evanescent trap in combination with optical tweezers [12]. The controlled assembly of one-dimensional chains using the optical tweezers allows us to explore the interparticle interactions and develop a corresponding implementation of GLMT which is capable of predicting both static and dynamic properties of this optically bound matter.
Experimental method
The experiment is constructed around a modified Leica microscope as shown in Fig. 1. A 1064 nm solid state laser (Crystalaser) is collimated and aligned into the back aperture of the imaging objective (Leica Hi-Plan, 100× NA = 1.2) using a 4f imaging system. The tweezers position can therefore be controlled using the single mirror located in the conjugate plane. The optical power to the tweezers can be controlled using a half-wave plate and polarizing beam splitter placed before the steering optics. Precise z-axis placement of the tweezers spot in the focal plane of the microscope can be adjusted a small amount using lenses L1 and L2 in the 4f system, with focal lengths of 100 and 200 mm respectively, before causing noticeable off-axis aberration of the tweezers. A truncated isosceles fused silica prism is mounted with the top surface in the focal plane of the objective lens. White light illumination is provided from below the prism by the microscope light source, allowing samples on the top surface to be imaged onto a CCD using the same microscope objective [12].
The evanescent component of the trap is built around the silica glass isosceles prism (n = 1.45) cut to an angle of 68° (θc = 66.7°). The output from a Laser Quantum Forte system (maximum power of 400 mW at 1064 nm) is coupled into a polarization-maintaining fiber splitter to produce two output beams of equal intensity, which are focused onto the prism from opposing directions using 50 mm lenses held in x-y-z adjustable mounts. The exact experimental conditions used are as follows: two coherent counter-propagating spots with a major radius of 20 μm and a separation of 50 μm; an angle of incidence 1.3° greater than the critical angle; a peak intensity contribution of 0.48 mW μm⁻² on the surface from each fiber; and p-polarization used in both propagation directions (controlled by half-wave plates). Sample chambers are prepared using thickness #1 coverslips and thin polyvinyl spacers, which allow the focal plane of the tweezers to lie on the prism surface. The samples are pipetted directly onto the prism surface and the sample chamber placed on top. The samples themselves consist of 3.5 µm diameter silica spheres (Bangs Laboratories SS05N) suspended in Milli-Q water (Millipore) in concentrations optimized to avoid clutter in the region of interest. Typical sample lifetimes are on the order of 1 hour. Fig. 1. The experimental setup. The apparatus is divided into 2 parts. Both the evanescent optical trap and the optical tweezers are built into a commercial microscope (Leica). A CrystaLaser (350 mW at 1064 nm, but with typically only 10 mW used in the tweezers itself) is focused by a 100× NA = 1.2 microscope objective (Leica) onto the prism surface. The resulting tweezers are controlled manually by manipulation of the steering mirror. A Laser Quantum Forte laser is coupled into a polarization-maintaining optical fiber splitter, producing 100 mW at 1064 nm of light in each output arm. The two mutually coherent beams are then focused onto the prism surface at an angle greater than the critical angle of the prism. Polarization is controlled using the half-wave plates in each arm. Imaging of the trapped samples is done through the microscope objective and recorded on a CCD (Watec 902-H3).
GLMT simulation of evanescent optical binding
Our experimental system is simulated using an exact vector description of coherent monochromatic light. This optical model can be applied to particles of any size, but is most efficient for Mie particles, i.e. of dimensions comparable to the wavelength of the light. Our approach is broadly similar to that used in previous work [25], including theoretical work by other authors to include considerations such as optical forces acting on particles [26] and multiple particle interactions [27]. In addition, important roles are played by two factors that are not normally considered in GLMT calculations; these are the effects of scattered light reflected by the prism surface and the focused nature of the illuminating Gaussian beams. Both of these effects are detailed below.
Gaussian evanescent wave
To our knowledge, there is very little published work that takes full account of the Gaussian profile of the laser beam used to produce the evanescent trapping field by total internal reflection (TIR).In the case of an extremely broad laser spot, such effects can be ignored and the beam can be approximated as an ideal evanescent wave (using the formulation used previously in [25] and derived in [28]).If the laser is more tightly focused the particles will be exposed to different evanescent field strengths depending on their position on the prism surface.Such a scenario can still be modeled using the standard beam shape coefficients for an ideal evanescent wave, but with the intensity that any given particle is exposed to being determined by its position in the field.In such a scheme the local field around the particle is still modeled as that of an ideal evanescent wave.While this model is an improvement it still cannot, for example, represent gradient force effects drawing particles towards the center of the laser beam spot.A more rigorous approach considers the incoming Gaussian beam in terms of its plane wave spectrum.Each individual plane wave is treated separately in its interaction with the interface; depending on the angle of incidence of that plane wave component, the wave may merely be refracted at the interface or may undergo TIR, producing an evanescent wave on the far side of the interface.When all these components are recombined, the correct representation of the field on the far side of the interface in response to the incident beam is obtained.
Our simulation uses the plane wave decomposition method of Khaled et al. [29], calculating the interaction of each wave vector with the interface [30] and representing the components using standard evanescent or plane wave expansions as appropriate. This is similar to the approach used by Chang et al. [31], except for our retention of non-evanescent components of the incident beams, which can have a measurable effect when the angle of incidence is close to the critical angle. This approach gives a good representation of a paraxial Gaussian beam, although naturally the plane wave decomposition of the beam breaks down for very high numerical apertures. In our case, the beam waist is of the order of 10 μm and the paraxial approximation is quite valid.
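Two quantities that set the scale of this evanescent illumination, the critical angle and the 1/e intensity decay length above the surface, can be estimated as below; the refractive index of the aqueous medium (1.33) is an assumption, and the decay-length expression is the standard single-plane-wave result rather than anything specific to the focused-beam treatment described above.

```python
import math

def critical_angle_deg(n_prism, n_medium):
    return math.degrees(math.asin(n_medium / n_prism))

def penetration_depth_nm(wavelength_nm, n_prism, n_medium, theta_deg):
    """1/e intensity decay length of the evanescent field above the prism surface
    for a single plane wave incident at theta_deg (> critical angle)."""
    s = n_prism * math.sin(math.radians(theta_deg))
    return wavelength_nm / (4.0 * math.pi * math.sqrt(s**2 - n_medium**2))

n_prism, n_water = 1.45, 1.33             # n_water is an assumed value
theta_c = critical_angle_deg(n_prism, n_water)
print(theta_c)                             # ~66.5 deg, close to the 66.7 deg quoted earlier
print(penetration_depth_nm(1064, n_prism, n_water, theta_c + 1.3))   # ~460 nm
```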
Surface reflections
When a particle scatters light from the evanescent field, the resultant outgoing wave impinges upon the substrate and the wave may be reflected. These reflections may then impinge upon other particles, including the original particle, modifying the field around them and hence the optical force acting on the system. Unfortunately, a planar interface such as our prism surface does not fit well with the spherical symmetry of the basis functions used in GLMT. Although a perfectly reflecting surface can be treated in terms of image sources, and for a single particle the approximation of normal incidence may sometimes be reasonable [32,33], the general case of reflection by a dielectric surface is an extremely challenging problem that often requires numerical solution of integrals to determine the matrix coefficients for the basis-set transformation. To this end we have implemented a scheme based on the equations derived by Mackowski [33]. One detail of the implementation worth noting relates to the Jacobi polynomials P_n^(α,β), which appear in the reflection integrals and must be evaluated for relatively large, negative values of α. This is a situation in which standard recursive or iterative evaluation methods [34] suffer from catastrophic cancellation when implemented using standard double-precision arithmetic. The solution to this is to factorize the relevant Jacobi polynomials once at the start of the simulation (using extended precision arithmetic). Subsequent to this, a polynomial can be efficiently evaluated as a product of its factors using double-precision arithmetic.
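A rough sketch of this pre-factorization strategy is given below: the Jacobi polynomial is built once from its explicit sum using extended-precision arithmetic (mpmath), its roots are extracted, and later evaluations reduce to a double-precision product over those roots. The binomial helper uses the falling factorial so that negative, non-classical parameter values can be handled; the (n, α, β) values in the example are illustrative only, and the snippet is not the implementation used in this work.

```python
import mpmath as mp
import numpy as np

def _binom(a, k):
    """C(a, k) for real a and non-negative integer k via the falling factorial,
    which stays well defined for negative parameter values."""
    out = mp.mpf(1)
    for i in range(k):
        out *= (a - i) / (k - i)
    return out

def jacobi_factored(n, alpha, beta, dps=60):
    """Pre-factorize P_n^(alpha,beta) once using extended precision.
    Returns the leading coefficient and the polynomial roots."""
    mp.mp.dps = dps
    coeffs = [mp.mpf(0)] * (n + 1)                      # coeffs[i] multiplies x**i
    for k in range(n + 1):
        c = _binom(n + alpha, k) * _binom(n + beta, n - k) / mp.mpf(2) ** n
        # expand (x - 1)**(n - k) * (x + 1)**k and accumulate the coefficients
        for i in range(n - k + 1):
            for j in range(k + 1):
                coeffs[i + j] += c * _binom(n - k, i) * (-1) ** (n - k - i) * _binom(k, j)
    roots = mp.polyroots(list(reversed(coeffs)), maxsteps=200, extraprec=100)
    return complex(coeffs[n]), np.array([complex(r) for r in roots])

def jacobi_eval(lead, roots, x):
    """Evaluate the factored polynomial in ordinary double precision."""
    return (lead * np.prod(x - roots)).real

lead, roots = jacobi_factored(12, -20.5, 3.0)           # illustrative parameters only
print(jacobi_eval(lead, roots, 0.3))
```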
To highlight the importance of surface reflections in our system, Fig. 2 plots the optical forces between two spheres, both of diameter 3.5 μm and resting on the substrate, as a function of their separation for a number of different calculation methods.Firstly, the force is shown without taking into account any reflections from the substrate, and also with a full reflection calculation.It can be readily seen that the inclusion of surface reflections can significantly alter the inter-particle separation at which the optical force between the particles is zero.Secondly, an approximation is shown where the planar substrate is replaced by a large sphere with the same properties as the substrate.This approximation, not described elsewhere in the literature to our knowledge, was very useful for quickly estimating the significance or otherwise of surface reflections.The same (exact) background evanescent field is still used, but surface reflections are approximated using the appropriate GLMT multiple reflection coefficients for the large sphere used as a proxy for the substrate.In our case it established that surface reflections are important and prompted the implementation of the full, rigorous calculation.It also provides an independent verification of the correctness of the full reflection model implementation, with surprisingly good agreement between the full reflection calculation and the approximation.Fig. 2. The optical force between two 3.5 μm diameter spheres resting on the substrate as a function of inter-sphere spacing.Forces are shown with and without substrate reflections taken into account, and results are also shown for an alternative, approximate method of handling surface reflections that uses a large dielectric spherical surface as an approximation to the planar surface.
Particle equilibrium conditions
In order to examine optical binding effects as closely as possible and to facilitate a comparison with our GLMT model, we used the tweezers to assemble linear chains of 3.5 µm diameter silica microspheres in the evanescent field on the prism surface. This particle size was selected due to the weak interaction of the particles with the underlying optical potential [35,36], ensuring that scattering/binding forces are principally responsible for determining the resultant structures.
Figure 3 shows a typical example of the controlled formation of optically bound chains as individual particles are added sequentially, clearly showing that the average separation between the particles decreases as the number of particles in the chain increases. In this case we observe "chain collapse" when a 5th particle is added to the chain; this corresponds to a phase transition towards a configuration where the particles are almost in physical contact [37]. This chain collapse is a general phenomenon and is commonly observed when a 5th particle is added; however, we note that in some cases the collapse does not occur until the addition of a 6th particle. Chain collapse on addition of the 5th particle is evident in Fig. 4a, which shows values of the interparticle separation for a single example run, averaged over 30 seconds. Before making comparisons with theory we note firstly that the interparticle separation is not constant within the chain but is largest between the end particles and their nearest neighbours in the chain; similar effects have been observed and modeled in counter-propagating fibre traps [22]. Secondly, for a given number of particles in the chain, the optically bound structures can access a wide range of sizes. For example, Fig. 4b shows the spread of particle separations observed for two particles over a period of one minute as they move about an average value of 8.6 µm with a standard deviation of 1.2 µm. Thirdly, as the optical forces present in our experiment are far weaker than those acting on particles in, for example, optical tweezers traps (tens of fN compared to pN), we must consider the role of electrostatic repulsion between the colloidal particles in determining the optically bound structures. We have conducted studies using both ultra-pure Milli-Q water and a range of samples with NaNO3 concentrations of up to 0.5 mM; in the former low-screening case the Debye length is on the order of 1 μm, while for the most highly charge-screened sample used the Debye length is 14 nm. In general we find that increased screening leads to a modest decrease in the average chain length for a fixed particle number, but that this change is generally smaller than the variation in chain lengths observable both within a single run, due to the Brownian motion of the particles within the chain, and between repeated measurements on different samples. We therefore conclude that electrostatics do not play an important role in determining the structures of the observed optically bound matter. Fig. 3. Controlled assembly of a chain of 3.5 μm silica microspheres in an evanescent field (top half of each image). Particles are moved from the pool of loose particles (bottom half of each image) using optical tweezers. The chain is observed to fill the optical potential, collapsing in the centre when a 5th particle enters the chain. Each screenshot is taken approximately 1 minute after the addition of a new particle to allow the chain equilibrium positions to be reached.
The resulting force plots from our simulations provide equilibrium chain lengths, and examples of these are shown in Fig. 5; the intercepts of each curve with the x-axis show points at which the net force acting on the outermost particles in the chain equals zero, and those intercepts with positive-gradient intersections represent stable positions. For the larger chains, the number of degrees of freedom makes it difficult to fully explore the parameter space in this rather challenging calculation. As a result we have applied a number of restrictions to our simulations. For these examples of stable chains we have assumed symmetry about the centre of the chain. For the 4-particle plot shown here, the spacing between the central two particles was allowed to reach equilibrium as well, but for 5 and 6 particles we have assumed the central particles to be in contact with each other (minimum possible spacing). The positive compressive force on these central particles (not shown in the figure) confirms that such a chain configuration as a whole is stable. Thus the results obtained from simulation are fully stable configurations, but we note that other stable configurations with reduced symmetry may also be possible. Fig. 5. A typical output plot from our evanescent wave optical binding simulation, in the absence of charge effects. In this plot, where compressive forces are shown as positive, stable chains are indicated by positive-gradient intersections of each curve with the x-axis. The full conditions on the central particles are indicated in the text. The interplay between optical forces and scattering effects acting on the spheres in small linear chains leads to complete coalescence of the chain with the addition of a 5th particle, to form a chain where all particles are in contact.
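As a concrete illustration of how such stable positions can be read off a computed force curve, the short sketch below locates zero crossings with positive gradient (the stability criterion quoted above) by linear interpolation; the force curve used in the example is purely illustrative and is not output from the simulation described here.

```python
import numpy as np

def stable_spacings(spacing, net_force):
    """Locate stable equilibria from a force curve: zero crossings of the net
    (compressive-positive) force on the outermost particle where the local
    gradient is positive, following the convention used in Fig. 5."""
    spacing, net_force = np.asarray(spacing), np.asarray(net_force)
    stable = []
    for i in range(len(spacing) - 1):
        f0, f1 = net_force[i], net_force[i + 1]
        if f0 == 0.0 or f0 * f1 < 0.0:
            slope = (f1 - f0) / (spacing[i + 1] - spacing[i])
            if slope > 0.0:
                # linear interpolation for the crossing point
                stable.append(spacing[i] - f0 * (spacing[i + 1] - spacing[i]) / (f1 - f0))
    return stable

# Illustrative force curve only (fN vs um), not simulation output from the paper:
d = np.linspace(3.5, 12.0, 200)
f = 5.0 * np.sin(2.0 * np.pi * d / 4.0) * np.exp(-d / 10.0)
print(stable_spacings(d, f))
```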
These results indicate that there are multiple stable chain configurations for a given number of particles. Our simulation results report the largest stable chain lengths, since we expect these to be the kinetic product in experiments where chains are built up by addition of further particles at initially large distances from the trap centre. At the laser intensities used, the particles have a low probability of crossing the potential barriers required to access the smaller equilibrium interparticle spacings. In order to compare our experimental and theoretical results, we have calculated the average chain separations from several sets of experiments. For example, for a system of two particles, we observe average chain lengths ranging from 6.4 to 9.3 µm in different runs, with standard deviations of around 1 µm for each run. In Fig. 6 we have plotted the average of these values, with the error bars being the standard deviation from a single representative run. We see good qualitative agreement between experiment and theory for 2, 3 and 4 particles, with a divergence at 5 particles due to the simulation predicting complete chain collapse at this point while the experiment often requires a 6th particle to initiate collapse. We have also included data from a single example run where collapse at 5 particles was observed, which shows closer agreement with the simulation. It is clear that the chains very rarely collapse such that the particles are truly in physical contact. Fig. 6. Plot comparing simulated equilibrium chain lengths and experimental results for charge-screened samples. The experimental points are the average of several runs, with the error bars representing the standard deviation observed within a single typical run. Since the experimental data points are averages over multiple experimental runs, the point at 5 particles includes some chains which have collapsed and some which have not. As such, the results diverge from the simulation, which predicts complete collapse at this point. An example of an experimental run which does demonstrate collapse at 5 particles has also been included for comparison. Uncertainty in the simulation parameters is estimated on the basis of a potential 5% uncertainty in the sphere radii and illuminating spot radii, 1% in the refractive index of the spheres and 0.2% in the angle of incidence of the beam.
Particle dynamics
During our studies we also observe emergent dynamic behaviour in longer chains such as chain collapse, tilting and off-axis drift [25].Perhaps the most striking dynamical effect observed was what we have dubbed "Newton's Cradle" motion, in which particles are ejected from the ends of chains with velocity components both in and out of the plane of the interface; at a certain height and distance from the optically bound chain the free particle then rejoins the chain under the influence of both gravity and optical binding/scattering forces.Media 1 shows this process occurring over a 2 minute interval, playing at 8x speed, with screenshots given in Fig. 7.This emergent behaviour was only observable under experimental conditions that used the highest intensities available.Fig. 7. Screenshots of the "Newton's Cradle" motion from above from top left to bottom right taken from Media 1.A particle is allowed to drift in from the left hand side into a linear chain of 6 particles.The two right-most particles are ejected out of the plane (moving out of focus) due to the force due to the scattered light being guided along the chain being greater than that provided by the evanescent field.The particles then fall back into the evanescent field due to gravity and are pushed back into the chain, repeating the process on the other side.The period of oscillation is approximately 2 minutes in this example, with the video playing at 8x speed.The complex, non-linear dynamics of this driven system are qualitatively predicted by our simulations, and these are shown in Fig. 8.We have already noted the difficulties in dealing with larger chains of particles and the increased number of degrees of freedom, and therefore the simulations presented here are restricted in that they consider particle Brownian motion but ignore the effects of surface reflections.The simulations predict a period of oscillation a factor of four smaller than we observe experimentally, perhaps due to the treatment of scattered light and the ejection of a single particle rather than the two ejected experimentally.Despite these differences, the simulation does however offer real insight into the physical processes which govern the chain dynamics as we now explain.
The refractive index contrast between the microspheres and the surrounding water results in two closely coupled phenomena: frustrated total internal reflection, and "waveguiding" effects, both of which can be seen in Fig. 8.As a sphere approaches from the left and passes over the left-hand laser focus (a) it causes an additional beam of light to escape through the prism and illuminate the chain of spheres.This radiation is then waveguided through the optically bound chain structure and is sufficiently intense to draw the rightmost sphere away from the prism surface (b).Once this has occurred, the horizontal and vertical restoring force exerted on it by the right-hand laser focus is dramatically reduced due to its evanescent decay with height, while the waveguiding effect ensures that the influence of the left-hand laser on it continues.This lack of a restoring force means that the particle can be ejected from the end of the chain and away from the trapping region (c).After travelling some distance this particle then falls under the influence of gravity (d), returning to the substrate surface where it is drawn back into the trapping region (e), thus perpetuating the circulatory process and resulting in the effect reminiscent of a "Newton's cradle" toy.
Critical to this effect is the presence of the intensity-modulated fringes caused by interference between the two counter-propagating laser beams.These provide a potential barrier that individual spheres must cross in order to move along the substrate surface.If the intensity is too great then particles will require a strong additional force to cross the fringes, while at low intensities particles are largely unaffected by their presence.At an intermediate intensity the fringes are able to introduce hysteresis into the system in terms of the horizontal position of the particles, an effect aided by small vertical displacements of particles within the chain which experience a reduced evanescent field.This hysteresis appears to be a key element of the "Newton's cradle" effect, enabling regular periodic motion in this overdamped system.
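The overdamped character of this motion can be captured with a minimal Brownian-dynamics update of the kind sketched below; the Stokes drag (with no wall correction), room-temperature water viscosity and the toy harmonic force used in the example are all assumptions for illustration, not parameters of the simulation described above.

```python
import numpy as np

def brownian_step(x, force_fn, dt, radius=1.75e-6, eta=8.9e-4, T=293.0):
    """One step of overdamped (Brownian) dynamics:
    x_{n+1} = x_n + F(x) dt / gamma + sqrt(2 kT dt / gamma) * xi,
    with Stokes drag gamma = 6*pi*eta*a (free-space drag assumed)."""
    kB = 1.380649e-23
    gamma = 6.0 * np.pi * eta * radius
    noise = np.sqrt(2.0 * kB * T * dt / gamma) * np.random.standard_normal(x.shape)
    return x + force_fn(x) * dt / gamma + noise

# Illustrative: two particles in a toy harmonic binding potential (stiffness assumed)
force = lambda x: -1e-8 * (x - np.array([0.0, 8.6e-6]))
x = np.array([0.5e-6, 8.0e-6])
for _ in range(1000):
    x = brownian_step(x, force, dt=1e-4)
print(x * 1e6, "um")
```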
We also note that under the same experimental conditions there exist trajectories for which particles can be ejected from the chain back in the direction from which they have just arrived. This has also been observed in numerical simulations, and the direction in which particles are ejected appears to be extremely sensitive to the exact experimental conditions, as well as to the instantaneous configuration of particles within the chain.
Conclusion
We have explored the dynamics of 1-dimensional chains of optically bound matter in evanescent fields by combining optical tweezers with an evanescent trap utilizing total internal reflection at the surface of a prism. Both equilibrium and dynamic effects have been investigated under conditions in which the scattering forces are relatively strong compared to the contribution of the evanescent interference fringes typically characteristic of such systems. These studies were used to assist in the development of a simulation of light scattering in evanescent fields using Generalized Lorenz-Mie Theory (GLMT). Our theoretical model agrees with the observed results for short particle chains sufficiently well to explain the compression of chains with particle number due to the modified light field, as well as dynamic effects caused by coherent light scattering between multiple particles in such an ensemble.
With a working model of the behavior of optically bound matter in evanescent fields, we will investigate more complex systems and a range of particles of varying morphology and composition. The hybrid tweezers/evanescent trapping geometry developed here will also allow mapping of forces in the evanescent field and the exploration of optical effects peculiar to a total internal reflection geometry, such as the Goos-Hänchen shift.
Fig. 4. a (Left) Average particle positions for an example experimental run as a function of the number of particles in the chain. Positions are averaged over 30 seconds and clearly show both the "chain collapse" at 5 particles, as the particles in the chain come into contact, and the decrease in interparticle separation observed towards the chain centre for a given chain length. b (Right) The probability density of interparticle separation for the case of 2 particles. The variation in length of a chain of two particles is taken over a period of one minute and shows a standard deviation of 1.2 µm.
Fig. 8. Simulation screenshots of "Newton's Cradle" motion (side view) with the light intensity marked in yellow. The particles are ejected out of the plane, in agreement with experiment. | 7,737.8 | 2012-01-16T00:00:00.000 | ["Physics"] |
ON THE USAGE OF ELLAM TO SOLVE ADVECTION-DIFFUSION EQUATION DESCRIBING THE POLLUTANT TRANSPORT IN PLANETARY BOUNDARY LAYER
The paper deals with the numerical solution of the advection-diffusion equation describing pollutant transport in the atmosphere. The idea is to use the less common ELLAM framework and to compare the results with the state-of-the-art Walcek method, which is widely used to solve advection problems. A real wind model was chosen for the tests in order to get a good idea of the performance of the two methods. From the performed experiments and the calculation times, one can conclude that ELLAM is suitable for solving the presented problem.
INTRODUCTION
Air pollution modelling is an important and very topical subject. It allows the progress of pollutant dispersion to be predicted from a specific time into the near future and thus helps to deal with low-quality air or to prevent possible contamination.
The model of pollutant transport is described by the partial differential equation 1. It consists of several parts that describe the whole process. The first and, from the point of view of pollutant behaviour, most important part is the advection term. It is driven by the wind field, which commonly changes in time and space and is the most influential term in the equation. The diffusion/dispersion is the second important part, and it also changes in time and space. The rest of the equation can be summarized in the reaction term, which describes the behaviour of the reactant in the atmosphere. All three mentioned parts are balanced by the other side of the equation, which consists of the source term describing the source(s) of the pollution.
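Equation 1 can be written in the commonly used advection-diffusion-reaction form sketched below; the exact notation is an assumption inferred from the description here (S denotes the source term, a symbol introduced for this sketch):

$$\frac{\partial c}{\partial t} + \nabla \cdot (\mathbf{u}\, c) - \nabla \cdot (D \nabla c) + R(c) = S .$$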
Variable c is the concentration, u is the velocity field, D is the diffusion-dispersion model, R is the reaction model and t is time.
The model is usually solved by an operator splitting technique, where the equation is separated into advection, diffusion and reaction sub-models. All the sub-models are then evaluated separately and the calculated concentrations are combined. Although simple specialized methods can be used to solve the separate parts, an error due to splitting, which is often neglected, arises [3].
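As an illustration of this splitting, a first-order (Lie) split over a single time step can be written as below; the three sub-solvers are placeholders standing in for whatever advection, diffusion and reaction schemes are chosen.

```python
def split_step(c, dt, advect, diffuse, react):
    """One time step of first-order (Lie) operator splitting: the advection,
    diffusion and reaction sub-problems are solved one after the other, each
    over the full step dt.  The sub-solvers are placeholders here."""
    c = advect(c, dt)    # e.g. the Walcek scheme or ELLAM for the advective part
    c = diffuse(c, dt)   # diffusion/dispersion update
    c = react(c, dt)     # reaction update
    return c

# Trivial placeholders just to show the call pattern:
c_new = split_step(1.0, 0.1,
                   advect=lambda c, dt: c,
                   diffuse=lambda c, dt: c,
                   react=lambda c, dt: c * (1 - 0.01 * dt))
print(c_new)
```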
One such method was proposed by [8] and was successfully tested and compared against the best methods of its time for solving the pure advection part of the ADR equation. Subsequently, another method, or more precisely a framework, for the solution of advection-dominated pollutant dispersion models was developed. Although ELLAM is much more complicated than Walcek's method, it has the advantage of being able to incorporate the other parts, diffusion or reaction, and the very important boundary conditions, especially those near the ground. ELLAM has many variants, which were mostly adapted to water-flow environments, where the behaviour of the flow differs from atmospheric turbulent flows.
In this paper, this form of ELLAM was tested against Walcek's method. Besides artificial tests, real wind models with truly measured data from the performed experiments were chosen for the evaluation of the schemes.
The paper is organized as follows. In the following section, the general ELLAM framework and the scheme used in this work are outlined. The scheme evaluation, including the test inputs and methodology, is then described in the ELLAM Evaluation section. Conclusions and future work are given at the end of the paper.
ELLAM
The ELLAM method is based on the algebraic theory of [2]. In this theory, test functions are used to define the weak form of the governing equation. In the following subsections, the general framework as well as the modified version used in this work is presented.
General ELLAM Framework
The procedure of the general ELLAM framework is based on the idea of converting equation 1 into the corresponding weak formulation. This is done by multiplying the governing equation by a test function. The whole equation is then integrated using Green's formula.
The resulting equation contains several integral terms that have to be evaluated. If a suitable form of the test function is chosen, one of the integrals vanishes; this choice follows from the adjoint equation of Equation 1 [6].
The integrals can be evaluated analytically for simple cases; for practical problems, however, numerical approximation has to be used. The time discretization is usually done by backward Euler or Runge-Kutta methods. An advantage of the scheme is that ELLAM can generally be combined with finite difference, finite volume, or finite element approaches. Another advantage of the ELLAM framework is its ability to incorporate boundary conditions naturally: they simply appear as additional integral terms in the equation.
An important part of the ELLAM scheme is accurate characteristic tracking of points. The tracking problem is described by ordinary differential equations, so the solution can be obtained by various numerical methods. More details on the general ELLAM method can be found, for example, in [6].
The Presented ELLAM Implementation
Our current implementation of the ELLAM framework is two-dimensional (on a domain Ω), designed for the advection-diffusion equation on a rectangular grid. It builds on the work of [4], where the space discretization is based on the finite element method. The governing equation is the advection-diffusion part of Equation 1 with a source term f(x, t), where f describes the source of the pollution and all other variables have the same meaning as in Equation 1. The weak formulation for a specified time t^n (Equation 3) is obtained after multiplication by a test function w(x, t^n) and application of Green's formula; it involves the outward unit normal vector n on the boundary, the time domain J^n, and the boundary domain Γ^n. The second integral on the left-hand side of Equation 3 is the diffusion term, the third integral is a boundary term, and the second integral on the right-hand side is the source term.
To evaluate Equation 3, the following procedure is used. The test function is chosen as piecewise linear, as is usual for ELLAM schemes [6]. The terms integrated in time are approximated by the backward Euler method. The remaining integrals can be evaluated by numerical integration, for example Gaussian quadrature with appropriate integration points. It remains to evaluate the test function w(x, t) along the characteristics, which leads to the solution of an ordinary differential equation backwards in time. Common integration methods such as the Euler method can be used; in our case we decided to use the 4th-order Runge-Kutta method. It is a good trade-off between speed and accuracy and behaved very well in the performed experiments.
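A minimal sketch of this backward characteristic tracking, assuming the velocity field is available as a callable u(x, t), could look as follows (the function names and the example wind field are illustrative, not the authors' code):

```python
import numpy as np

def track_back_rk4(x, t_start, t_target, velocity, n_steps=1):
    """Trace the characteristic through point x at time t_start backwards
    to time t_target by integrating dx/dt = u(x, t) with the classical
    4th-order Runge-Kutta method."""
    x = np.asarray(x, dtype=float)
    t = t_start
    dt = (t_target - t_start) / n_steps   # negative when tracking back in time
    for _ in range(n_steps):
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(x + dt * k3, t + dt)
        x = x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
    return x

# Illustrative rigid-rotation wind field around the point (2000 m, 2000 m)
omega = 1.0e-3
u = lambda x, t: omega * np.array([-(x[1] - 2000.0), x[0] - 2000.0])
foot = track_back_rk4([2400.0, 2000.0], t_start=600.0, t_target=0.0,
                      velocity=u, n_steps=10)
```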
The last thing to explain is the space discretization. We used a rectangular grid of points and the standard FEM process. Equation 3 has to be solved on the whole domain, so the elements on which the approximation of the unknown function c is defined have to be assembled together. This leads to a system of algebraic equations that has to be solved at each time step.
ELLAM EVALUATION
Most tests of numerical schemes use artificial wind velocity fields and evaluate the schemes on them. Such wind velocities do not necessarily represent the performance of a scheme in concrete practical applications. Therefore, we decided to test the ELLAM scheme against the scheme presented by [8] on wind velocity fields based on real wind models and real measured data.
Used Wind Model
The wind model used is stationary and depends on the z (height) variable.
Several coefficients are needed in order to calculate the wind speed at a given position: the Monin-Obukhov length (L), the roughness length (z0), the von Kármán constant (k), and the friction velocity (u*) [7,9]. The wind speed is then given by Equation 4, with the coefficient zb = min(|L|, 0.1 · h), where h is the height of the unstable boundary layer.
The stability function Ψm is given by Equation 5.
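For orientation, a commonly used similarity profile of this kind is

\[ u(z) = \frac{u_{*}}{k}\left[\ln\frac{z}{z_{0}} - \Psi_{m}\!\left(\frac{z}{L}\right) + \Psi_{m}\!\left(\frac{z_{0}}{L}\right)\right], \]

with, for unstable stratification (L < 0), the Businger-Dyer form

\[ \Psi_{m}(\zeta) = 2\ln\frac{1+x}{2} + \ln\frac{1+x^{2}}{2} - 2\arctan x + \frac{\pi}{2}, \qquad x = (1 - 16\,\zeta)^{1/4}. \]

The authors' Equations 4 and 5 (which also involve the coefficient zb) may differ from these reference forms in detail.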
Experiment Data
The wind model explained above has specific parameters that differ between experiments. The experiments carried out in Copenhagen [1] were chosen to fully describe the wind model.
Nine experiments were performed in Copenhagen, in which all of the required parameters were measured. All parameters of the experiments that were used for the calculations are shown in Table 1.
The Experiments
Experiments were carried out for all cases of the Copenhagen campaign. The space was discretized into 100 × 100 points with 40 m spacing. The wind velocity field was pre-calculated using Equation 4 and the data in Table 1. The initial conditions for the test cases were defined by the initial concentration profile at time zero.
Two cases were tested. The first has the shape of a cone and the second has an initial shape in the form of a cylinder. The cylinder case simulates very sharp concentration edges, which is critical for numerical schemes.
Table 1 The parameters of the performed experiments in Copenhagen

The cone shape, on the other hand, has a single maximum concentration value, which numerical schemes often tend to smooth out. The diffusion in these tests was set to zero, so the concentration profile should remain the same at the end of the experiments.
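A sketch of the test setup described above, with illustrative values for the profile centre, radius, and peak concentration (these specific numbers are assumptions, not values reported in the paper):

```python
import numpy as np

nx, ny, spacing = 100, 100, 40.0          # 100 x 100 grid points, 40 m apart
x = np.arange(nx) * spacing
y = np.arange(ny) * spacing
X, Y = np.meshgrid(x, y, indexing="ij")

# Illustrative profile parameters (centre, radius, peak concentration)
cx, cy, radius, c_max = 1200.0, 2000.0, 400.0, 1.0
r = np.hypot(X - cx, Y - cy)

cone = np.where(r < radius, c_max * (1.0 - r / radius), 0.0)  # single smooth peak
cylinder = np.where(r < radius, c_max, 0.0)                   # very sharp edges
```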
Fig. 1 The concentration profiles at the beginning of the simulation (red) and at the end of the simulation (blue) in the case without added artificial diffusion
Oscillations
For very steep concentration profiles such as the cylinder, numerical oscillations of the ELLAM solution appear. The situation is shown in Figure 1.
To avoid the oscillations, artificial diffusion was added to the model, after which the oscillations disappeared. The new situation is shown in Figure 2.
Fig. 2 The concentration profiles at the beginning of the simulation (red) and at the end of the simulation (blue) in the case with added artificial diffusion
Error Measurement and Results
Each of the nine experiments was evaluated by several error measures. The first is the peak error (Equation 6). It indicates how well the scheme preserves the level of the concentration with respect to a given profile. Oscillations have a very negative effect on this measure.
Here peak_c and peak_0 denote the peaks, and min_c and min_0 the minima, of the calculated and exact concentration profiles, respectively.
The second measure is the distribution error (Equation 7). It describes the difference between the concentration distribution at the beginning and at the end of the simulation; the position of the profile has no influence on the error size.
Ω_i and Ω_j refer to the parts of the domain where calc_{i,j} and exact_{i,j}, respectively, differ from min_0.
The last error measure refers to the mass conservation principle: the sum of the concentration over the domain at the beginning of the simulation should equal the sum at the end. For completeness of the evaluation, we also measured the calculation times of the schemes.
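Since Equations 6 and 7 are referenced but not written out above, the following sketch shows one plausible implementation of the three measures in the spirit of their descriptions; the exact normalisations are assumptions, not the authors' definitions:

```python
import numpy as np

def peak_error(calc, exact):
    """Relative error of the peak-to-minimum range of the calculated profile."""
    range_c = calc.max() - calc.min()
    range_0 = exact.max() - exact.min()
    return abs(range_c - range_0) / range_0

def distribution_error(calc, exact):
    """Difference between the value distributions of the two profiles,
    insensitive to where the profile sits in the domain."""
    min0 = exact.min()
    c = np.sort(calc[calc != min0])[::-1]
    e = np.sort(exact[exact != min0])[::-1]
    n = max(c.size, e.size)
    c = np.pad(c, (0, n - c.size), constant_values=min0)
    e = np.pad(e, (0, n - e.size), constant_values=min0)
    return np.abs(c - e).sum() / np.abs(e - min0).sum()

def mass_error(calc, exact):
    """Relative change of the total concentration over the domain."""
    return abs(calc.sum() - exact.sum()) / exact.sum()
```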
The results of the first set of tests are summarized in Table 2. The ELLAM scheme achieved a smaller peak error than the Walcek scheme and similar distribution and final mass errors. The calculation times were nearly the same; however, it should be noted that the ELLAM scheme described in section 2.2 includes the calculation of the diffusion term, which was set to zero for the purposes of the error evaluation. The second test uses the cylindrical concentration profile. As mentioned in section 3.4, undesired oscillations due to the very sharp edges can occur during the simulations. We therefore performed the tests for two versions of the ELLAM scheme: the first with zero diffusion and the second with artificial diffusion added to the model. The results are shown in Table 3, where the scheme with diffusion is marked ELLAM_D. With zero diffusion, the ELLAM scheme produced a relatively large peak error but a very low distribution error, and vice versa with artificial diffusion; an appropriate trade-off between the two cases should therefore be sought. For further work on this problem we can draw inspiration from [10], where the authors developed a technique to prevent the oscillations in the 1D case. The calculation times are nearly the same as in the previous test case.
CONCLUSION
We used a specific ELLAM scheme to solve the advection-diffusion equation with atmospheric wind velocity fields. We performed experiments and compared the scheme with the advection integration scheme proposed in [8]. From the experiments it is clear that the ELLAM scheme is at least an equal competitor to the state-of-the-art method: the errors as well as the calculation times were similar, even though the ELLAM framework also included the diffusion computation.
For further development we plan, first, to try to avoid the oscillations in the scheme while preserving the low distribution error. The next step would be testing the ELLAM scheme with the complete advection-diffusion atmospheric model and evaluating it against real measured concentration data. From the efficiency point of view, we plan to port the ELLAM code to a parallel platform using the popular OpenCL framework [5].

and computer graphics. For more information see http://www.fit.vutbr.cz/idvorak.
František Zbořil is an Associate Professor of Computer Science at the Faculty of Information Technology, Brno University of Technology, Czech Republic. He received his M.Sc. in 1968 and his Ph.D. in 1978 (both in Computer Science) at the same university. He started his research activities on analogue and hybrid computers with the simulation of continuous systems, namely systems described by partial differential equations. His later research focused on classical artificial intelligence, robotics, and neural networks. Now, the main objects of his professional interest are the simulation of combined dynamic systems and soft computing problems. He is the author of more than 100 papers and several lecture notes, and he has supervised about 200 bachelor's, master's, and doctoral theses. He is a member of the board of the Czech and Slovak Simulation Society (CSSS) and of several other educational, research, and academic boards and societies.
Table 2
The results of the performed experiments in case of cone profile
Table 3
The results of the performed experiments in case of cylinder profile
"Physics"
] |
Megapneumonia Coinfection: pneumococcus, Mycoplasma pneumoniae, and Metapneumovirus
We report a young girl who died of Streptococcus pneumoniae 19A pneumonia, septic shock, and hemolytic uremic syndrome despite prior pneumococcal vaccination, appropriate antibiotics, and aggressive intensive care support. Serotype 19A is not covered by the 7- or 10-valent pneumococcal vaccines. Mycoplasma pneumoniae and metapneumovirus were simultaneously detected by PCR in the nasopharyngeal and tracheal aspirates. The pneumococcus was penicillin sensitive. Although infections with each of these pathogens alone are typically mild, this case highlights that co-infection with the three respiratory pathogens possibly contributed to the fatal outcome of this child. Also, the new policy in Hong Kong to use PCV13 may help prevent further cases of serotype 19A infection.
Case
A previously healthy 4-year-old girl became dyspneic and critically ill with pneumonia and multiorgan system failure. She had been febrile with chills for 4 days (39 °C) and breathless and coughing for 2 days. She had visited general practitioners twice and was treated symptomatically, without an antibiotic prescription. She had vomited and passed watery stools. Her immunizations were up to date and she had received one dose of the 7-valent pneumococcal vaccine at age 2 years. Her young brother had recovered from a recent febrile coughing episode. Because of increasing shortness of breath, the mother took her to the emergency department. Her vital signs were: heart rate 165/min, respiratory rate 30/min, tympanic temperature 36.7 °C, SaO2 79% in room air, and decreased air entry to the right lung. Supplemental oxygen at 6 liters/min was immediately given. Chest radiography showed right-sided pneumonia with pleural effusion (Figure 1).
On admission to the pediatric ward, her temperature was 36.9 °C, pulse 125/min, respiratory rate 46/min, blood pressure 94/52 mmHg, and SaO2 95% on 100% oxygen. Shortly afterwards, the patient became lethargic with cold peripheries, cyanosis, and chest indrawing. She was given normal saline boluses and dobutamine (10 mcg/kg/min) and promptly transferred to the pediatric intensive care unit (PICU). There, she received cardiopulmonary resuscitation (13 minutes) because of cardiopulmonary failure, septic shock, and bradycardia. She was ventilated and treated with intravenous cefotaxime, high-dose ampicillin, vancomycin, clarithromycin, and packed cell, fresh frozen plasma, and cryoprecipitate transfusions. Abnormal laboratory findings included hemoglobin 8.0 g/dL, platelets 87 × 10^9/L, and white cell count 1 × 10^9/L; activated partial thromboplastin time (APTT) was 47.9 seconds, prothrombin time (PT) 13.4 seconds, d-dimer 2192 ng/mL, C-reactive protein 51.9 mg/L, plasma urea 18.2 mmol/L, and creatinine 214 μmol/L. Her lowest PaO2/FiO2 was 42, and the alveolar-arterial gradient was 610 mmHg. Echocardiography showed reduced cardiac contractility (ejection fraction 48%) but no pericardial effusion. The pleural effusion was drained, yielding 40 mL of dark turbid fluid (Figure 1, insert). Despite multiple inotrope infusions of dobutamine, adrenaline, noradrenaline, and milrinone, her mean blood pressure remained low (50 mmHg) and she suffered another episode of cardiac arrest 9 hours after admission. Hemodiafiltration was instituted for intractable septic shock, renal failure, acidosis, and fluid retention. The patient ran a deteriorating course despite aggressive ICU support. She developed a further episode of cardiac arrest and succumbed 39 hours after hospitalization. Nasopharyngeal and tracheal aspirates yielded Mycoplasma pneumoniae (by polymerase chain reaction, PCR) and human metapneumovirus RNA (by RT-PCR). Blood culture, pleural fluid, and tracheal aspirate yielded leucocytes and Streptococcus pneumoniae (serotype 19A) sensitive to cefotaxime and penicillin. CSF was not obtained during the critical episode.
Postmortem examination confirmed generalized necrotizing pneumonia with diffuse alveolar damage and right middle lobe hemorrhagic infarct. The proximal tubules of both kidneys showed acute tubular necrosis but no microangiopathic thrombosis. Histology of the heart and the brain was unremarkable. Tissue cultures were negative.
Discussion
Pneumococcus is an important pathogen in childhood [1][2][3]. Invasive pneumococcal disease refers to unlocalised bacteremia, pneumonia, or meningitis. Despite the availability of effective vaccines, new serotypes continue to evolve [1][2][3]. The Hong Kong Government introduced the 7-valent pneumococcal vaccine in September 2009. In 2010, the vaccine was changed to the 10-valent vaccine, and in 2011 a recommendation was made to switch to the 13-valent vaccine.
Locally, the serotype coverage of the 7-valent or 10-valent vaccines and of the 13-valent vaccine was 65% and 90%, respectively [2]. The parents reported that the child had received one dose of the 7-valent vaccine before 3 years of age, in early 2010. It is possible that 19A was a commonly circulating strain before the introduction of the 10-valent or 13-valent vaccines. The 13-valent vaccine should stop the circulation of 19A when the program is fully implemented.
A history of immunization with the 7-valent vaccine does not guarantee prevention of pneumococcal infection in children [1][2][3]. Evolving serotypes associated with severe lobar pneumonia, pleural effusion, and PICU admission despite prior immunization have previously been reported locally [4].
Evidence-based guidelines for the management of infants and children with community-acquired pneumonia (CAP) were prepared by an expert panel comprising clinicians and investigators representing community pediatrics, public health, and the pediatric specialties of critical care, emergency medicine, hospital medicine, infectious diseases, pulmonology, and surgery [5]. Amoxicillin should be used as first-line therapy for previously healthy, appropriately immunized infants and preschool-aged children with mild to moderate CAP suspected to be of bacterial origin. Amoxicillin provides appropriate coverage for S. pneumoniae, the most prominent invasive bacterial pathogen. Macrolide antibiotics should be prescribed for the treatment of children (primarily school-aged children and adolescents) evaluated in an outpatient setting with findings compatible with CAP caused by atypical pathogens. Laboratory testing for M. pneumoniae should be performed if available in a clinically relevant time frame. Ampicillin or penicillin G should be administered to the fully immunized infant or school-aged child admitted to a hospital ward with CAP when local epidemiologic data document a lack of substantial high-level penicillin resistance for invasive S. pneumoniae. Empiric therapy with a third-generation parenteral cephalosporin (ceftriaxone or cefotaxime) should be prescribed for hospitalized infants and children who are not fully immunized, in regions where the local epidemiology of invasive pneumococcal strains documents high-level penicillin resistance, or for infants and children with life-threatening infection, including empyema. Empiric combination therapy with a macrolide (oral or parenteral), in addition to a beta-lactam antibiotic, should be prescribed for the hospitalized child for whom M. pneumoniae and C. pneumoniae are significant considerations; diagnostic testing should be performed if available in a clinically relevant time frame. Accordingly, earlier initiation of antibiotics might have increased the chances of survival in this child.
Antibiotic resistance has also developed in Hong Kong [1,2]. Serotype 19A is especially virulent and may be difficult to isolate in patients who have already been started on antibiotics. The pathogen has been reported to be associated with the hemolytic uremic syndrome [6][7][8][9][10][11]. Penicillin can be used against sensitive pneumococci [1,2]. In patients with pneumonia not responding satisfactorily to initial treatment, more invasive investigative/therapeutic management, including a pleural drain to obtain a biological specimen, is indicated to guide management. Local antimicrobial sensitivity in PICU patients has been reported [1,2]. The pathogen was sensitive to penicillin and cefotaxime in this case. In patients not responding satisfactorily to initial antibiotics but with known penicillin sensitivity, a higher dose of penicillin should be tried.
Coinfections by viral and bacterial agents in critically ill patients have been reported [12][13][14][15]. Mycoplasma pneumoniae usually affects older children and causes the clinical entity of atypical or "walking" pneumonia. Metapneumovirus usually causes co-infection [13]. Occasionally, both pathogens can cause severe acute respiratory symptoms resembling SARS (severe acute respiratory syndrome) [16].
In-house real-time (RT-)PCRs were performed according to the hospital laboratory's standard operating procedures for the qualitative detection of human metapneumovirus (hMPV) RNA and of Mycoplasma pneumoniae (MP). The amplification target for hMPV was the nucleoprotein (N) gene, with primer sequences and method as described by Hopkins et al.; 45 cycles were run on a real-time PCR instrument (ABI Prism 7900HT FAST) [17]. Positive and negative controls were included, and a Ct value of ≤37 was considered positive. The amplification target for MP was the ADP-ribosylating toxin gene encoding the CARDS (community-acquired respiratory distress syndrome) toxin, using the primer pairs and method described by Winchell et al. [18]. An internal DNA control and positive and negative controls were included in each run. A Ct value of ≤34 was considered positive.
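For illustration only, the reported cycle-threshold cutoffs translate into a simple decision rule; the helper below is hypothetical and not part of the laboratory's software:

```python
from typing import Optional

# Positivity cutoffs stated above: Ct <= 37 for hMPV, Ct <= 34 for M. pneumoniae
CT_CUTOFF = {"hMPV": 37.0, "MP": 34.0}

def interpret_ct(target: str, ct: Optional[float]) -> str:
    """Classify a qualitative real-time PCR result; None means no amplification."""
    if ct is None:
        return "negative"
    return "positive" if ct <= CT_CUTOFF[target] else "negative"

print(interpret_ct("hMPV", 32.5))   # positive
print(interpret_ct("MP", 36.0))     # negative (above the 34-cycle cutoff)
```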
It is difficult to ascertain whether metapneumovirus and Mycoplasma contributed to this fatal illness. These pathogens were detected in the tracheal and nasopharyngeal aspirates but not in the postmortem lung tissue cultures.
Unlike in the nasopharynx, the presence of a pathogen in tracheal aspirates represents infection of the lower respiratory tract rather than carriage in the upper airway. Both pathogens are known to cause pneumonia on their own, and neither is commonly carried by healthy young persons [5,13,[19][20][21][22]]. These facts support the argument that they were copathogens with the pneumococcus, which was found in pleural fluid, tracheal aspirates, and blood. In conclusion, the simultaneous isolation of three respiratory viral and bacterial pathogens has not previously been reported in our locality and may have contributed to the fatal outcome of this unfortunate child.
"Medicine",
"Biology"
] |
Spontaneous Physical Activity Downregulates Pax7 in Cancer Cachexia
Emerging evidence suggests that the muscle microenvironment plays a prominent role in cancer cachexia. We recently showed that NF-kB-induced Pax7 overexpression impairs the myogenic potential of muscle precursors in cachectic mice, suggesting that lowering Pax7 expression may be beneficial in cancer cachexia. We evaluated the muscle regenerative potential after acute injury in C26 colon carcinoma tumor-bearing mice and healthy controls. Our analyses confirmed that the delayed muscle regeneration observed in muscles from tumor-bearing mice was associated with a persistent local inflammation and Pax7 overexpression. Physical activity is known to exert positive effects on cachectic muscles. However, the mechanism by which a moderate voluntary exercise ameliorates muscle wasting is not fully elucidated. To verify if physical activity affects Pax7 expression, we hosted control and C26-bearing mice in wheel-equipped cages and we found that voluntary wheel running downregulated Pax7 expression in muscles from tumor-bearing mice. As expected, downregulation of Pax7 expression was associated with a rescue of muscle mass and fiber size. Our findings shed light on the molecular basis of the beneficial effect exerted by a moderate physical exercise on muscle stem cells in cancer cachexia. Furthermore, we propose voluntary exercise as a physiological tool to counteract the overexpression of Pax7 observed in cancer cachexia.
Introduction
Cachexia is a multifactorial syndrome mostly associated with chronic illnesses and characterized by severe skeletal muscle atrophy. In cancer patients, cachexia is observed in 50-80% of advanced-stage malignancies and accounts for 20% of all cancer-related deaths [1,2]. So far, major efforts to study the disruptive catabolic events occurring in cachectic muscle tissues focused on the molecular processes occurring within the fibers. Circulating proinflammatory cytokines and tumor-released factors are the best-established players triggering muscle wasting during cancer progression [3,4]. Their activity alters muscle fiber amino acid metabolism, transport, and proteolysis, affecting protein synthesis and ultimately leading to cell death. These alterations in metabolic pathways, in combination with diminished regenerative capabilities, mediate the severe loss of skeletal muscle mass and function observed in cancer-related cachexia [1].
Although significant advances have been achieved in the understanding of the catabolic events occurring within cachectic muscle fibers, not much is known about the muscle microenvironment, where the different muscle progenitor cells reside [5,6]. We recently investigated the role of resident pools of stem cells in cachectic muscle [6]. Our findings showed that cachexia is linked to the overexpression of Pax7 in satellite cells and other myogenic precursors in both C26 colon carcinoma-bearing mice and pancreatic cancer patients. Circulating tumor factors (e.g., proinflammatory cytokines) promote fiber damage, which is followed by the activation of both satellite cells and nonsatellite cells. Proinflammatory cytokines activate IKK/NF-kB, and this results in a subsequent deregulation of Pax7, which ultimately impairs myogenic cell differentiation [6]. Physiological Pax7 expression drives myogenic stem cell commitment, while its persistent expression in cancer cachexia has been shown to inhibit the differentiation of muscle stem cells [6]. Overall, the impairment of muscle regeneration, together with the increase in muscle degeneration, synergizes with an unbalanced muscle homeostasis, leaning toward a cachectic state [1,6]. In particular, the reduced regenerative potential of skeletal muscle tissue in cancer cachexia represents a pivotal determinant of the pathological progression [6][7][8].
Loss of muscle function is another important feature of cancer cachexia directly impacting patient's quality of life and many studies suggest ameliorative effects of physical activity in cancer patients [3]. It has been shown that physical activity reduces fatigue [9], counteracts cancer therapy side effects, both during and after treatment [10,11], and, in general, improves patients' quality of life [3,12]. Specifically, systematic review of 16 randomized clinical trials for different types and stages of malignancies showed that both aerobic and resistance exercises counteract cachexia and improve muscle strength more than usual care measures in treated cancer patients [13]. The molecular mechanisms by which physical activity prevents cancer cachexia and its beneficial effects in cachectic patients have been shown to involve the anti-inflammatory properties of specific cytokines [3,14]. Indeed, it has been reported that physical activity reduces systemic inflammation by increasing the circulating level of IL-10, a known anti-inflammatory cytokine that acts locally to counteract muscle wasting [3,15,16]. Furthermore, physical activity is considered a promising intervention strategy for the prevention and the treatment of cancer-related cachexia also because of its antioxidant effects. These involve an enhanced activity of antioxidant enzymes such as superoxide dismutase (SOD) and glutathione peroxidase (GPx), which counteract ROS-mediated muscle damage [3]. Moreover, exercise improves insulin sensitivity enhancing skeletal muscle metabolism [3].
Previous studies on cancer-related muscle wasting performed in C26-bearing mice showed a strong reduction of the myogenic potential of muscle stem cells in a Pax7-dependent manner [6][7][8].
Here we show that the impairment of muscle regeneration after local damage in cancer cachexia is associated with a prolonged inflammation and increased Pax7 expression. Furthermore, using a voluntary wheel running exercise protocol in C26-bearing mice, we demonstrate for the first time a direct involvement of aerobic exercise in the removal of the myogenic differentiation block exerted by the persisting expression of Pax7 in cachectic muscles, associated with diminished NF-kB activation.
Mice

Female, 7-week-old BALB/c mice were used for this study. To induce cancer-associated cachexia, a 0.5 mm³ solid fragment of colon carcinoma C26 was subcutaneously implanted in the back of the animals, as previously described [17]. Two different experimental protocols were adopted. For muscle regeneration analysis, muscle damage was induced by freeze-injury two weeks after tumor implantation: a steel probe precooled in dry ice was applied to the Tibialis Anterior (TA) muscle of anesthetized animals for 10 seconds, as previously described [18]. These mice were euthanized 3, 6, 8, and 10 days after muscle damage. Mice for the running experiments were housed in standard conditions with 12-hour day/night cycles and food ad libitum and were euthanized 19 days after tumor implantation.
All the animal studies were performed in accordance with ARRIVE guidelines and following the three Rs rule of Replacement, Reduction, and Refinement principles [19]. Animals were treated with protocols approved by the animal experimentation ethics committee of Sapienza University of Rome, Italy.
Voluntary Wheel Running Exercise.
To analyze the effects of voluntary wheel running, we compared mice hosted in standard cages versus wheel-equipped cages. Cages were prepared as previously described [20]. Briefly, all wheels were supplied with a tachometer in order to record physical activity data, including total speed, total distance covered, and their daily averages. Mice in the running groups were hosted in wheel-equipped cages from the day of tumor implantation until sacrifice (19 days). Mice hosted in normal cages were considered unexercised.
Dry Weight Muscle Measurement

TA, quadriceps femoris (QU), and gastrocnemius (GA) muscles were dissected, weighed (wet weight), frozen in liquid nitrogen, lyophilized in a vacuum chamber for 18-20 hours, and weighed again before defrosting (dry muscle weight). This approach was used to calculate the percentage of water content of the samples after the different treatments.
Histology and Histochemistry.
TA muscles were dissected, embedded in tissue freezing medium (Leica, Wetzlar, Germany), and frozen in liquid nitrogen-cooled isopentane. Muscle cryosections of 8 μm thickness were obtained using a Leica cryostat (Leica Biosystems). For histological analysis, the sections were stained with hematoxylin and eosin (H&E, Sigma) using standard methods.
To measure NADH transferase activity, the sections were treated as previously described [17]. Esterase staining was performed as previously reported [5]. Photomicrographs were obtained using an Axioscop 2 plus system equipped with an AxioCamHRc (Zeiss, Oberkochen, Germany) at standard 1300 × 1030 pixel resolution.
After washing in TBST, blots were incubated with anti-mouse or anti-rabbit secondary antibody HRP-conjugated (BioRad, Hercules, CA) diluted 1 : 10000 in TBST and detected by using Super Signal West Pico Chemiluminescent Substrate (Pierce, Rockford, IL).
Morphometric Analysis.
Morphometric analysis was performed on type IIb fibers (i.e., fibers with low NADH transferase activity) as previously described [21]. For each muscle, the cross-sectional area of all fibers in a muscular cross-section was measured and the median was calculated as an index of fiber size. Scion Image software was used to calculate the fiber cross-sectional area.
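A minimal sketch of this index, assuming the individual fiber areas have already been exported from the image analysis (the numbers below are hypothetical):

```python
import numpy as np

def fiber_size_index(cross_sectional_areas_um2):
    """Median cross-sectional area (in square micrometres) of all fibers
    measured in one muscle cross-section, used as the index of fiber size."""
    return float(np.median(np.asarray(cross_sectional_areas_um2, dtype=float)))

# Hypothetical type IIb fiber areas from a single section
print(fiber_size_index([1840.0, 2210.5, 1975.3, 1102.8, 2408.9]))
```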
Statistical Analysis.
All quantitative data are presented as mean or as mean ± SEM. Statistical analysis was performed using ANOVA or Student's t-test, using the software available on the VassarStats web page (http://faculty.vassar.edu/lowry/VassarStats.html). A p value less than 0.05 was considered significant; a p value less than 0.01 was considered highly significant.
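As an illustration of the comparisons described above (the analyses in the paper used the VassarStats web tools, and the values below are hypothetical), equivalent tests are available in SciPy:

```python
import numpy as np
from scipy import stats

# Hypothetical per-animal values (e.g., regenerating fiber areas in square micrometres)
control = np.array([611.0, 598.2, 640.7, 587.5])
c26 = np.array([330.0, 351.4, 318.9, 342.2])
c26_wheel = np.array([575.1, 560.8, 602.3, 588.0])

t_stat, p_ttest = stats.ttest_ind(control, c26)            # two-group comparison
f_stat, p_anova = stats.f_oneway(control, c26, c26_wheel)  # one-way ANOVA

print(p_ttest < 0.05)   # significant at the 0.05 level
print(p_anova < 0.01)   # highly significant at the 0.01 level
```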
Impaired Muscle Regeneration in C26-Induced Cancer Cachexia Is Associated with Prolonged Local Inflammation.
We analyzed muscle regeneration by histological evaluation of center-nucleated fibers 2 weeks after tumor implantation when tumors were demonstrated to begin growing exponentially and muscles developed fiber atrophy [5]. Analysis of regeneration 6 days after injury showed the presence of more mononuclear interstitial cells in the injured site of C26-bearing mice with respect to controls, indicative of a prolonged inflammatory phase following injury (Figure 1(a)).
In addition, a marked deficit of regeneration was revealed by the evaluation of the cross-sectional area of fibers with centrally located nuclei. At day 6 of regeneration, the mean area of regenerating fibers in muscles from C26-bearing mice was 330 μm², approximately half the area of regenerating fibers in muscles from control mice, which was 611 μm² (Figure 1(b)). The deficit in the number of regenerating fibers increased over time compared with control muscles, which showed a greater number of regenerating fibers at 8 and 10 days after damage (Figure 1(c)). Further confirmation of a hampered muscle regeneration capability in cancer cachexia came from both histological and WB analyses of IgG expression (Figures 2(a) and 2(b)). Increasing expression of IgG was observed in regenerating muscles of tumor-bearing mice at all time points analyzed following injury (with a peak at day 8), while an opposite trend in the expression profile was detected in muscles from control mice (Figures 2(a) and 2(b)). In line with these findings, histological analyses of focal lesions revealed a higher burden of recruited macrophages within the regenerative area of muscles from C26-bearing mice at all time points, with a peak at day 8 after muscle damage (Figure 2(a)), in agreement with the IgG expression pattern.
Muscle Regeneration after Acute Damage in Cachectic Muscle Is Associated with Prolonged Pax7 Expression

WB analysis of regenerative markers confirmed that C26 negatively affected muscle regeneration (Figure 3). Indeed, Pax7, MyoD, and Desmin expression was significantly higher in muscles from C26-bearing mice than in controls up to 8 days following injury, when these markers were almost undetectable in control muscles, a sign of completed regeneration (Figure 3). Interestingly, Pax7 expression in C26-bearing mice decreased at day 8 of regeneration, allowing MyoD to drive the regenerative process, while this stage appeared already completed in control muscles (Figure 3).

Figure 3: Early regenerative marker expression pattern in injured muscles. Representative WB images for Pax7, MyoD, and Desmin at 3, 6, and 8 days after freeze-injury. Blots were performed in duplicate for both control and C26-bearing mice. The first lane was loaded with the extract from a healthy, uninjured control muscle. GAPDH was used as loading control.
Voluntary Wheel Running Removes the Pax7-Mediated Block of Myogenic Differentiation in Cancer Cachexia and Rescues Muscle Physiology

Cachexia is characterized by muscle damage and defective muscle regeneration associated with elevated levels of Pax7 expression [6,7]. Genetic reduction of Pax7 expression has been shown to rescue muscle homeostasis [6]. We tested whether physical activity, known to be able to rescue muscle homeostasis in cachexia, was also sufficient to restore physiological Pax7 expression levels and promote stem cell progression from myoblasts to nascent myofibers.
To analyze the effects of voluntary wheel running on muscle wasting, we hosted BALB/c mice in wheel-equipped cages, as previously described [20]. Mice were divided into two groups, C26 tumor-bearing mice and healthy control mice, and individual physical activity was recorded daily by tachometers connected to the wheels. Since the exercise regimen started on the same day as tumor implantation, this approach allowed us to analyze the effects of voluntary wheel running during the development of pathological symptoms. The distance run by C26-bearing BALB/c mice was lower than that run by the healthy controls: we found nearly a 50% decrease in the total as well as in the daily distance covered by cachectic mice (6 km/day versus 11 km/day, respectively) [20]. However, no significant differences were observed in average speed between cachectic and control mice, suggesting that the two groups exercised at similar intensity levels. With the aim of investigating the molecular signature of the muscle response to cachexia and how it is modulated by exercise, we performed WB analysis of markers of muscle regeneration, such as MyoD and Pax7, in C26-bearing and control mice, with or without exercise. A striking upregulation of both Pax7 and MyoD expression in cachectic muscle was found (Figures 4(a)-4(d)). This confirmed the occurrence of myoblast activation in cancer-mediated muscle wasting, as previously reported [6]. However, we found, for the first time, that voluntary physical activity per se did not induce a significant increase in the expression of Pax7 (Figures 4(a) and 4(b)) or MyoD (Figures 4(c) and 4(d)), although it specifically downregulated Pax7 expression in the musculature of tumor-bearing mice, likely releasing a block to muscle regeneration (Figures 4(a) and 4(b)). In addition, analysis of NF-kB confirmed its increase in muscles from C26-bearing mice, as reported by other authors [22,23]. Moreover, enhanced voluntary physical activity, together with the Pax7 decrease, was also associated with a downregulation of both total and activated (p65) levels of NF-kB compared to those observed in muscles from C26-bearing mice at rest (Figure 5).
Given the observed exercise-induced downregulation of Pax7 to physiological levels in cachexia, we analyzed the effects of exercise on muscle homeostasis. In order to avoid the bias of a possible contribution of inflammatory edema to muscle mass, we evaluated the dry weight of TA, QU, and GA muscles in all experimental conditions. Muscles from tumor-bearing mice placed in standard cages (without a wheel) showed a significant weight loss (Table 1), while wheel running was able to revert the loss of muscle mass in C26-bearing mice to levels comparable to those observed in non-exercised healthy control mice (Table 1). In addition, we evaluated the areas of glycolytic fibers, which are strongly reduced in cancer-related muscle atrophy [24] (Figures 6(a) and 6(b)) but not after exercise-induced Pax7 downregulation (Figures 6(a) and 6(c)). Overall, morphometric analyses of TA muscles from healthy controls and C26-bearing mice, with or without wheel running, showed that the latter improves muscle mass in cachectic mice by increasing the cross-sectional area of glycolytic fibers (C26 versus C26 wheel: 1126; p = 0.0001) (Figures 6(b) and 6(c)).
Discussion
Deregulated levels of Pax7 have recently been shown to contribute to muscle wasting in cancer cachexia [6,7]. To further investigate the procachectic role of Pax7 in skeletal muscle tissue in tumor-bearing mice, we characterized the cellular response to muscle damage occurring in muscles obtained from cachectic mice after acute damage. In a previous work, we exploited a pharmacologically or genetically induced Pax7 downregulation to obtain rescue of muscle homeostasis [6]. In this study, we demonstrate that the same effect can be obtained by a physiological regulation of Pax7 expression mediated by physical activity. The latter may have readily applicable translational implications.
Upon focal injury of both healthy and C26-bearing mice, histological evaluations showed a considerable delay of muscle regeneration in C26-bearing mice compared with healthy controls, both in terms of quantity (number) and quality (size) of the regenerating fibers involved. Furthermore, the early phases following the mechanical destruction of both fibers and connective tissue were characterized by a significant accumulation of infiltrating mononuclear cells. This massive recruitment of inflammatory cells is confirmed by high immunoglobulin expression, which appears stronger and more prolonged over time in muscle from C26-bearing mice compared to controls. Our previous study on the C26 cachectic model showed that, overall, neutrophils and lymphocytes do not accumulate within the stromal compartment of cachectic muscles, whereas the macrophage content declines [5]. Interestingly, esterase staining reveals that macrophages are the most abundant interstitial mononuclear cells in muscles from C26-bearing mice after acute damage. We noticed a prolonged expression of Pax7 in muscles from C26-bearing mice after acute injury. Likely, this elevated Pax7 expression represents an inhibitory signal keeping satellite cells in an undifferentiated, proliferating status. Since macrophages promote fiber membrane repair during regeneration [25], while the overexpression of Pax7 triggers the onset of muscle wasting [6], the asynchronous trend observed between these waves of events involved in divergent homeostatic processes mirrors the remarkable delay in muscle regeneration observed in cachectic mice. Pax7 deregulation also impacts the functionality of MRFs [6,26,27]. We found that voluntary wheel running, considered a low-intensity [28,29] and aerobic model of exercise [3], downregulates Pax7 close to control levels, thus reestablishing the typical expression pattern observed in healthy muscle. Moreover, the absence of MyoD and the early detectable presence of Pax7 in muscles from exercised healthy controls confirm that voluntary free running is associated with skeletal muscle adaptations related to an increased anabolism [3,30]. Within the molecular adaptation of skeletal muscles mediated by physical activity in cancer cachexia, we also observed a downregulation of total and activated NF-kB levels in C26-bearing mice hosted in wheel-equipped cages compared to those hosted in standard cages. These data confirm the pivotal role of NF-kB observed in cancer-related muscle wasting [22,23] and show that the decline of Pax7 during voluntary wheel running is associated with a reduction of NF-kB activity, further suggesting a regulatory link between the two factors, as previously demonstrated [6].
In summary, here we demonstrate that, in addition to the already known beneficial effects in cancer patients [29][30][31], physical activity downregulates Pax7 and restores muscle mass by increasing glycolytic fiber size. Indeed, muscle weight analysis revealed a significant rescuing effect of exercise against muscle weight loss. Several clinical reports have shown abnormally high levels of Pax7 in skeletal muscle tissue from patients with different cancers, including gastric [32] and pancreatic tumors [6] and rhabdomyosarcomas [33]. Thus, the possibility of counteracting Pax7 overexpression by adopting a protocol of low-intensity physical exercise, compatible with the clinical features of the primary disease, may represent an important tool to be used in association with common therapeutic strategies.
Conclusion
Pax7 is one of the key mediators of the impaired myogenic ability observed in cancer-induced cachexia. Our data demonstrate that in cachectic mice, which display prolonged Pax7 expression, muscle regeneration after acute damage is delayed compared to controls. In addition, we showed that in C26 tumor-bearing mice voluntary wheel running downregulates Pax7 expression to levels similar to those observed in non-exercised healthy mice.
Taken together, our findings suggest that the beneficial effects of moderate physical activity on cachectic muscles are mediated, at least in part, by its ability to downregulate the expression of Pax7 and the activation of NF-kB, thus removing the myogenic differentiation block observed in cancer cachexia.
"Biology"
] |
Bilateral neuroinflammatory processes in visual pathways induced by unilateral ocular hypertension in the rat
Background
Glaucoma is one of the leading causes of irreversible blindness in the world. The major risk factor is elevated intraocular pressure (IOP), leading to progressive retinal ganglion cell (RGC) death from the optic nerve (ON) to the visual pathways in the brain. Glaucoma has been reported to share mechanisms with neurodegenerative disorders. We therefore hypothesize that neuroinflammatory mechanisms in central visual pathways may contribute to the spread of glaucoma disease. The aim of the present study was to analyze the neuroinflammatory processes that occur from the pathological retina to the superior colliculi (SCs) in a rat model of unilateral ocular hypertension induced by episcleral vein cauterization (EVC).
Results
Six weeks after unilateral (right eye) EVC in male Long-Evans rats, we evaluated both the neurodegenerative process and the neuroinflammatory state in visual pathway tissues. Immunolabeling of RGCs (Brn3a+) in the ipsilateral whole flat-mounted retina demonstrated peripheral RGC loss associated with tissue macrophage/microglia activation (CD68+). Gene expression analysis of hypertensive and normotensive retinas revealed a significant increase of pro-inflammatory genes such as CCL2, IL-1β, and Nox2 mRNA expression compared to naïve eyes. Importantly, we found an upregulation of pro-inflammatory markers such as IL-1β and TNFα and astrocyte and tissue macrophage/microglia activation in hypertensive and normotensive RGC projection sites in the SCs compared to a naïve SC. To understand how neuroinflammation in the hypertensive retina is sufficient to damage both the right and left SCs and the normotensive retina, we used an inflammatory model consisting of a unilateral stereotaxic injection of TNFα (25 ng/μl) into the right SC of naïve rats. Two weeks after TNFα injection, using an optomotor test, we observed that rats had visual deficits in both eyes. Furthermore, both SCs showed an upregulation of genes and proteins for astrocytes, microglia, and pro-inflammatory cytokines, notably IL-1β. In addition, both retinas exhibited a significant increase of inflammatory markers compared to a naïve retina.
Conclusions
All these data evidence the complex role played by the SCs in the propagation of neuroinflammatory events induced by unilateral ocular hypertension and provide new insight into the spread of neurodegenerative diseases such as glaucoma.
Background
Glaucoma is the second leading cause of irreversible blindness and is projected to affect 80 million people worldwide by 2020, including at least 7 million people who will become bilaterally blind [1]. Glaucoma is a chronic degenerative optic neuropathy in which intraocular pressure (IOP) is abnormally elevated, leading to the progressive loss of retinal ganglion cells (RGCs) and an alteration of their axons. Today, high IOP is the major risk factor, and the current treatment strategy focuses on reducing IOP by pharmacological or surgical means [2]. However, mechanisms other than the IOP increase seem to be involved in the development and progression of this degenerative disease. Indeed, glaucoma progression can be observed in 15-25 % of patients despite appropriate IOP control [3,4], and normal-tension glaucoma comprises a significant proportion of glaucoma cases in which an elevated IOP cannot explain neurodegeneration. In addition, abnormal activation of autoimmunity has been observed, which could induce the loss of RGCs in glaucoma patients without elevated IOP [5]. Consequently, this disease may progress through an IOP-independent mechanism and may involve other factors. Indeed, numerous data indicate that inflammatory responses in the retina could appear earlier than the IOP rise [6][7][8][9]. In the DBA/2J mouse glaucoma model, it has been reported that young mice present microglial activation in the retina and optic nerve in the absence of elevated IOP [10,11]. In addition, it has been suggested that astrocytes and Müller cells exhibit an activated phenotype with morphologic changes in the retina before the increase in IOP. However, inflammatory events induced by glaucoma occur not only in the retina and the optic nerve (ON). In the human brain, it has been clearly observed that neurodegenerative processes also affect all layers of the lateral geniculate nucleus (LGN), which constitutes the major target of RGCs (corresponding to up to 80-90 % of RGC fibers) [12,13]. In contrast to humans, in the rodent visual system, after decussating, the majority of RGC axons (90 %) project directly to the contralateral superior colliculus (SC) [14][15][16]. Thus, in a rat model of acute intraocular hypertension, a strong astrogliosis has been shown in the SC contralateral to the hypertensive eye [17]. Moreover, it has recently been demonstrated that a deregulation of cytokine signaling in the SC can appear before the IOP elevation or axonal degeneration in the DBA/2J mouse glaucoma model [18].
The normotensive eye could also be affected by these inflammatory changes. It has been observed that microglia and astrocyte reactivity is upregulated in the retina contralateral to the hypertensive eye in mice [19,20]. However, the pathophysiology of the neuroinflammatory events responsible for contralateral eye alteration remains to be clarified. Nevertheless, some data have provided evidence of RGC axon projections from one retina to the other via the optic chiasm, the so-called retino-retinal projection pathway [21,22]. However, this bypass pathway remains minor and concerns only a marginal proportion of axons [23,24]. The SC could be the major structure of communication by which neuroinflammatory events could spread to both eyes. In rodents, after the optic chiasm, each SC receives RGC axonal projections from both retinas in a proportion of around 85/15 % [14-16, 22, 23, 25].
The purpose of this study was to understand how neuroinflammatory processes could spread to the normotensive eye in the unilateral ocular hypertension glaucoma model. To address this question, we first analyzed the neuroinflammatory consequences of unilateral ocular hypertension in visual pathway tissues, including retinas, optic nerves, and SCs, in both episcleral vein cauterization (EVC) and contralateral tissues. Then, to demonstrate the role played by the SC in the propagation of neuroinflammatory events induced by glaucoma, we used a model of unilateral injection of TNFα into the right SC.
Animal model
Fifty male 8-week-old Long-Evans rats (Janvier Labs) weighing 250-300 g were used. Animals were kept in pathogen-free conditions with food and water available ad libitum and housed in a 12-h light/12-h dark cycle.
All experiments were conducted after evaluation and approval by the Institutional Animal Care and Use Committee, Comité d'éthique pour l'expérimentation animale Charles Darwin (reference number: 03858.02), in accordance with the guidelines from Directive 2010/63/ EU of the European Parliament on the protection of animals used for scientific purposes. For sample size and number of rats used in each experiment see Table 1.
Induction of ocular hypertension and IOP measurements
The surgical ocular hypertension (OHT) model was induced in the right eye of each rat by cauterization of three episcleral veins after conjunctival dissection, under general anesthesia (intraperitoneal injection of ketamine (100 mg/kg; Virbac, Vauvert, France) and xylazine (10 mg/kg; Bayer HealthCare, Whippany, USA)), as previously described [26,27]. Briefly, the superotemporal, superonasal, and inferotemporal episcleral veins were located and cauterized using a standard disposable ophthalmic cautery. The left eyes were used as contralateral controls, and eyes from naïve animals were used as naïve controls. At baseline and after the surgery, the animals were maintained for a 6-week period and monitored for IOP once a week using a handheld tonometer (TonoLab, Medtronics, Jacksonville, USA). In this study, IOP average values were calculated from three initial readings from eight rats for each of the three groups (hypertensive, normotensive, and control eyes) at each time point. During IOP measurements, all the animals were awake.
Stereotaxic CTB-Alexa 594 and TNFα injection
Fourteen male Long-Evans rats (7 weeks old) were deeply anesthetized with ketamine (100 mg/kg) and xylazine (10 mg/kg) and placed on a digital stereotaxic frame. A small drop of ophthalmic gel was placed on both eyes to prevent corneal dehydration throughout the procedure. The scalp was removed and the skull was exposed. A small craniotomy (1 mm × 1 mm) was drilled above the SC in the right hemisphere (mediolateral, 1.6 mm; anteroposterior, −6.72 mm; dorsoventral, −4.2 mm from the bregma). Four microliters of sterile recombinant rat TNFα (25 ng/μl; Biorad Laboratories, Nazareth, Belgium) or cholera toxin subunit B (CTB) Alexa 594 (1 μg/μl; Life Technologies, Paisley, UK) were unilaterally injected into the SC (1.2 μl/min). After the injection, the needle was left in place for 5 min and then removed slowly. The skin over the skull was sutured and, for recovery, rats were placed at 30 °C for 1 h. The tissue was analyzed 2 weeks after TNFα injection and 5 days after CTB injection.
Optomotor response
The optokinetic tracking threshold was measured under photopic conditions by observing the optomotor responses of rats to rotating sinusoidal gratings (OptoMotry, CerebralMechanics, Alberta, Canada). Rats reflexively respond to rotating vertical gratings by moving their head in the direction of the grating rotation [28]. The protocol yields independent measures of right and left eye acuity based on the unequal sensitivity of the two eyes to pattern rotation: the right and left eyes are most sensitive to counter-clockwise and clockwise rotations, respectively. Once the rat became accustomed to the pedestal, the test was initiated by presenting the rat with a sinusoidal striped pattern of varying spatial frequency that rotated either clockwise or counterclockwise. The software randomly increased the spatial frequency of the grating until the animal no longer responded. The process of changing the spatial frequency of the test grating was repeated a few times until the highest spatial frequency the rat could track was identified, which defines the optokinetic tracking threshold under the experimental conditions. Experiments were conducted after TNFα administration by two observers masked to the animal's treatment and previously recorded thresholds.
Tissue preparation
At the end of the experiment, rats from each group were deeply anesthetized via a single IP injection of ketamine (75 mg/kg) and xylazine (10 mg/kg) and fixed by a 50 ml intraaortic perfusion of 0.9 % NaCl solution followed by 400 ml of 4 % paraformaldehyde in 1× phosphate-buffered saline (PBS; pH 7.4). After fixation, the eyes were carefully removed and post-fixed for 1 h, then rinsed in 1× PBS solution. The optic nerves were cut immediately behind the eye, and the brain was then removed together with the optic nerves. The ONs and brains were post-fixed for 24 h, and the ONs only were cryoprotected for 24 h by immersion in a 1× PBS solution containing 30 % sucrose at 4 °C. For the immunofluorescence experiments, we used the part of the optic nerve closest to the eye, over a length of 3 mm. The ONs were then embedded in a 7.5 % gelatin, 10 % sucrose mix and frozen at −20 °C. Retinal tissues were dissected intact from the globe and flat-mounted.
Dual immunofluorescence labeling in whole flat-mounted retina
For primary and secondary antibody references and dilutions, see Table 2.
The retinas were incubated for 2 h in a blocking-permeabilizing solution of 1× PBS containing 10 % bovine serum albumin (BSA), Triton X-100 2 %, and Tween 20 0. Secondary antibodies included donkey anti-mouse and anti-goat antibodies conjugated with Alexa Fluor 488 and 594 (1/500, Life Technologies) and biotinylated horse anti-rabbit antibodies (1/500, Vector Labs, Burlingame, USA), followed by a streptavidin Alexa Fluor 488 and 594 conjugate (1/500, Life Technologies). Staining with DAPI (1/2000), a nuclear marker, was also used. Sections were then washed, mounted on gelatin-coated glass slides, and coverslipped with Fluoromount. Control sections were processed in parallel in the absence of either primary or secondary antibodies.
Immunolabeling
Free-floating sections were incubated with 3 % H2O2 for 20 min and then with blocking/permeabilizing buffer (see previous section). Sections were incubated overnight at 4 °C (in 1× PBS containing 2 % BSA, 2 % NHS, and 0.15 % Triton X-100) with GFAP and Iba1 antibodies. The sections were then incubated with biotinylated goat anti-mouse and horse anti-goat antibodies (1/500, Vector Labs) for 1 h and with the Vectastain ABC kit (1/250, Vector Labs) for 1 h. The color reaction was developed for 3-5 min with DAB+ (Sigma-Aldrich). The sections were washed in 0.05 Tris buffer, pH 7.6, dehydrated in Safesolv (a non-toxic xylene substitute, VWR Q Path, Leighton Buzzard, UK), and coverslipped with Eukitt mounting medium.
Data analysis
Images were captured with a DM6000 microscope (Leica, Nanterre, France) and analyzed using MetaMorph software (Molecular Devices, Sunnyvale, CA). For labeling density analysis, image files were inverted and opened in gray-scale. Subsequently, using the thresholding function of Fiji [29] to discriminate objects of interest from the surrounding background, the total surface occupied by immunoreactive structures (i.e., total stained pixels) above the set threshold was estimated within a standard area (ROI manager). Images were taken with a ×20 objective on a Leica microscope. The resulting values were expressed in surface units corresponding to 306,804 μm². The results were expressed as the mean ± SEM of six to eight values per structure and per animal. Images for co-localization were taken with an Olympus FV1000 laser-scanning confocal microscope (Olympus, Philadelphia, USA), and acquisitions were obtained using the Olympus Fluoview software version 4.1.
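As an illustration only (the actual analysis used a threshold set manually in Fiji), a comparable measurement can be scripted as follows; the Otsu threshold, the helper name, and the pixel-to-area conversion are assumptions:

```python
import numpy as np
from skimage import filters

def labeled_fraction(gray, roi_mask, threshold=None):
    """Fraction of the region of interest occupied by above-threshold
    (immunoreactive) pixels in an inverted gray-scale image."""
    if threshold is None:
        threshold = filters.threshold_otsu(gray[roi_mask])
    stained = (gray > threshold) & roi_mask
    return stained.sum() / roi_mask.sum()

# Stained surface within the standard field of 306,804 square micrometres
# stained_area_um2 = labeled_fraction(image, roi) * 306_804
```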
RGCs and activated tissue macrophage/microglia labeling and counting
Eight microscopic images were captured with a ×20 objective in the peripheral retina, and then in the peripheral, middle, and central regions, of whole flat-mounted retinas labeled with Brn-3a and CD68 (which plays a role in the phagocytic activity of tissue macrophages) antibodies, respectively. Images were captured with a DM6000 microscope (Leica), and automatic enumeration of RGC nuclei was performed in a blind manner with MetaMorph software (Molecular Devices). Iba1- and CD68-positive cells were counted with the Fiji cell-counter plugin (ImageJ software, NIH, Bethesda, MD, USA) [30]. The Iba1- and CD68-positive cells in the central retina were quantified around the ON head. In order to analyze the density of the neuronal cell population (labeled with the NeuN marker) in the SC of the EVC and control groups, we used automatic cell-counting software (Explora Nova Morpho Strider; 2D analysis).
mRNA extraction, reverse transcription, and real-time polymerase chain reaction (RT-PCR)
Tissue preparation
At the end of the experiment, rats from each group were deeply anesthetized using a single IP injection of ketamine (75 mg/kg) and xylazine (10 mg/kg) and flushed with an intraaortic perfusion of 0.9 % NaCl. The eyes were carefully removed and dissected, and the retinas were frozen on dry ice. The rat brains were removed from the skull and directly frozen on dry ice. The brains were then placed in a stainless steel coronal brain matrix (Harvard Apparatus), and 1-mm slices were taken and microdissected. Regions of interest (superior colliculi) were frozen in liquid nitrogen.
mRNA extraction, reverse transcription, and RT-PCR
Total RNA was extracted from retinal tissues and brain punches using the NucleoSpin RNA II and NucleoSpin RNA XS purification kits, respectively (Macherey-Nagel, Düren, Germany). RNA concentrations were evaluated from absorbance measurements with a NanoDrop ND-1000 spectrophotometer (Wilmington, USA).
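The figure legends in this article report mRNA levels as fold changes relative to naïve rats after normalization to the housekeeping gene GAPDH. The excerpt does not state the exact quantification method, but a common way to obtain such values from RT-qPCR data is the 2^(−ΔΔCt) approach, sketched below in Python with invented Ct values for illustration only.

```python
def fold_change(ct_target, ct_gapdh, ct_target_naive, ct_gapdh_naive):
    """Relative expression by the 2^(-ΔΔCt) method.

    ΔCt  = Ct(target) - Ct(GAPDH) for each sample
    ΔΔCt = ΔCt(sample) - ΔCt(naïve reference)
    """
    d_ct_sample = ct_target - ct_gapdh
    d_ct_naive = ct_target_naive - ct_gapdh_naive
    return 2 ** -(d_ct_sample - d_ct_naive)

# Illustrative Ct values only (not measured data): gives a ~4-fold increase
print(fold_change(ct_target=24.1, ct_gapdh=18.0,
                  ct_target_naive=26.3, ct_gapdh_naive=18.2))
```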
Statistical analysis
All values are expressed as means ± SEM.
IOP measurements were analyzed using two-way ANOVA for repeated measures followed by the Bonferroni post hoc test. RGC counts, RT-qPCR data, CD68-positive cell counts, and c-fos counts were analyzed using one-way ANOVA followed by the Tukey, Kruskal-Wallis, or Dunnett multiple comparison post hoc tests. ON and SC immunostaining levels of GFAP and Iba1 were analyzed using one-way ANOVA followed by the Bonferroni post hoc test. Statistical analyses were performed at a significance level of 0.05 with Prism 6 (GraphPad Software, San Diego, USA).
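As a minimal illustration of the group-comparison workflow described above (one-way ANOVA followed by a post hoc test), the Python sketch below uses SciPy and statsmodels on made-up RGC counts; the study itself used Prism 6, and these numbers are not the experimental data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical RGC counts per field for the three groups
naive = np.array([101, 98, 105, 99, 103, 100])
nt    = np.array([95, 92, 97, 90, 96, 93])
ht    = np.array([80, 78, 83, 76, 82, 79])

# One-way ANOVA across the three groups
f_stat, p_val = f_oneway(naive, nt, ht)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey post hoc test for pairwise comparisons at alpha = 0.05
values = np.concatenate([naive, nt, ht])
groups = ["naive"] * len(naive) + ["NT"] * len(nt) + ["HT"] * len(ht)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```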
Results
Ocular hypertension leads to RGC death in the cauterized eye and also affects the contralateral eye
IOP was measured with a tonometer and followed once a week for 40 days. Immediately after the surgery, the cauterized animals showed elevated IOP, which remained stable between 30 and 40 mmHg. Naïve and normotensive (NT) eyes maintained a normal IOP of about 20 mmHg (Fig. 1a). Six weeks after cauterization, the animals were sacrificed and RGCs were immunolabeled in whole flat-mounted retinas; all experiments were performed at this time point. The images show peripheral areas of naïve, NT, and hypertensive (HT) eye retinas (Fig. 1b). First, we observed that RGC density differed between naïve and HT eyes: the HT group exhibited a significant 22 % decrease (p = 0.0184) in RGC density in the peripheral retina compared to the naïve group (Fig. 1c). We did not observe any loss of RGCs in the middle and central retina of HT eyes compared with naïve eyes (data not shown). Moreover, there was a trend toward a decrease in RGC density in the NT group, with an 8 % non-significant decrease compared to the naïve group. Thus, ocular hypertension induced RGC degeneration in the peripheral retina 6 weeks after cauterization.
Ocular hypertension induces retinal inflammation and tissue macrophage activation in HT and NT eyes
An earlier study showed that in a unilateral laser-induced OHT model, the NT eye is also altered [20]. We aimed to explore and further detail these findings in our experimental model of OHT. In all experiments, we compared the EVC and contralateral structures to the naïve structures.
We found tissue macrophage activation in the HT eye retina and a trend toward an increase in the NT eye retina. To investigate whether the increase in CD68 mRNA levels reflected tissue macrophage/microglia activation, double immunofluorescence for Iba1 and CD68 was performed on whole flat-mounted retinas. The pictures show peripheral areas of naïve, contralateral, and EVC retinas (Fig. 2c). We observed a change in microglial morphology in HT and NT eye retinas: microglial cells acquired a reactive, so-called ameboid profile, with short or even absent processes and enlarged somata.
Quantification of Iba1 + CD68 + cells (tissue macrophages/ microglia) in peripheral retina showed a twofold increase in contralateral eyes compared to naïve eyes and in HT eyes compared to NT eyes (Fig. 2d). We observed the same type of result in the middle retina but to a lesser extent (Fig. 2e). In the naïve eyes, the basal level of tissue macrophages/microglia appeared to be higher in the central retina than in the peripheral and middle retina. A significant increase was still found in the NT eyes compared to naïve eyes (Fig. 2f).
We also found a significant increase in the CCL2 mRNA level and in the number of tissue macrophages in HT eye retinas. Given that previous studies have implicated CCL2 in monocyte recruitment during inflammation [30], we next sought to determine which cells express CCL2 in the present model.
Ocular hypertension induces greater CCL2 expression by astrocytes in HT and NT eye retinas
We showed that OHT resulted in increased expression of CCL2 in HT eye retinas compared to naïve and NT eye retinas and in NT eye retinas compared to naïve retinas.
To define in which cell type CCL2 was expressed, we double-stained the retinas for CCL2 and GFAP, a marker of astrocytes and Müller cells. Confocal images showed that CCL2 was widely co-localized with these cells, specifically in the RGC layer (Fig. 2g). Moreover, GFAP and CCL2 labeling revealed a significant increase in the HT eye retina compared to the naïve retina. These data suggest that CCL2 is probably released by astrocytes and Müller cells under both basal and injury conditions.
Retinal astrogliosis and microgliosis are transmitted along the ON
Segments of the ONs were examined for GFAP and Iba1 expression by immunofluorescence (Fig. 3a). Quantitative analysis of the surface area covered by GFAP staining within the ON revealed an almost twofold increase in the HT eye group compared to the naïve eye group (p = 0.0415) (Fig. 3b). Surprisingly, the extent of Iba1 immunoreactivity in the NT eye group was at the same level as in the naïve eye group but showed a twofold increase in the HT eye group (p = 0.0476) (Fig. 3c).
Neuroinflammation in central visual projection pathways
After decussating, ON fibers divide in two ways in the rodent brain. The minor synaptic relay, which concerns only a small percentage of fibers, is the LGN. The major ON fiber relay in the rat brain is the SC. We found in the SC an inflammation profile similar to that in the retina (Fig. 4). EVC led to astrocyte and tissue macrophage activation (increase in GFAP and CD68 mRNA levels [p = 0.0136 and p = 0.0018, respectively] with respect to naïve rats) (Fig. 4a). There were higher levels of mRNA for proinflammatory and oxidative stress factors: IL-1β (p = 0.0115), TNFα (p = 0.0134) (Fig. 4b), Nox2 (p = 0.0004), and Nox4 (p < 0.0001) (Fig. 4c) in HT eye retinas. For some genes, we found an increase in mRNA levels in the NT eye: TNFα (p = 0.0175), CD68 (p = 0.0291), Nox2 (p = 0.003), and Nox4 (p = 0.0105).
To confirm these results, we performed immunohistochemistry on SCs. On the images of rat brain sections, the SCs are circled in red (Fig. 5a). In naïve SCs, GFAP labeling was very sparse and concentrated around the blood vessels and under the pia mater (Fig. 5b). In the left SC, dense GFAP labeling was observed, indicating astrocyte activation. In left SCs, microglia appeared to have larger somata than in naïve SCs (Fig. 5b).
Ocular hypertension induces c-fos and phosphorylated/activated p38 (p-p38) activation in the SC
(See figure on previous page.) Fig. 2 EVC induces retinal inflammation and microglial activation in both HT and NT eyes. a, b RT-qPCR analysis of the effect of cauterization on GFAP, CD68, IL-1β, CCL2, and Nox2 mRNA levels in naïve, NT, and HT eye retinas. mRNA levels are presented as a fold change relative to naïve rats after normalization with respect to the housekeeping gene (GAPDH). Each bar is the mean ± SEM; n = 8-10 animals/group. c Peripheral images from whole flat-mounted retinas double-immunolabeled with Iba1 and CD68 antibodies in naïve, NT, and HT eyes. Arrows show colocalization between markers. Scale bar = 100 μm. d-f Quantification of peripheral (d), middle (e), and central (f) mononuclear phagocyte (Iba1+CD68+ cell) density per retina in the naïve (n = 6), NT (n = 7), and HT (n = 7) eye groups. The average mononuclear phagocyte density values were determined from eight peripheral, middle, or central images per retina. g Confocal images of double immunolabeling of CCL2 with GFAP, an astrocytic marker, in naïve, NT, and HT eye retinas. Arrowheads in the inset show colocalization between markers. Scale bar = 100 μm. Results are expressed in arbitrary units and correspond to the means ± SEM. One-way ANOVA followed by the Kruskal-Wallis multiple comparisons post hoc test: *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001
Naïve, right, and left SC sections were analyzed by immunofluorescence with c-fos and NeuN markers (Fig. 6a), and the number of neurons exhibiting the neuronal activation marker c-fos was counted (Fig. 6b).
The quantitative analysis showed a twofold increase in c-fos-positive neurons in the right SC compared to naïve SCs (p = 0.0385) and nearly a threefold increase in the left SC (p = 0.001). These data suggest that retinal inflammation increased second-order neuron activation in both right and left SCs compared to naïve SCs.
In the next series of experiments, we investigated the activation of the p-p38 pathway in the SCs (Fig. 6c). In naïve SCs, only a few cells were immunopositive for p-p38. In contrast, in right and left SCs, p-p38 was detected exclusively in microglia, and the distribution of positive cells did not differ between right and left SCs. Quantification of p-p38-positive cells confirmed that this activation occurred only in right and left SCs (Fig. 6d) (p < 0.0001).
HT eye inflammation is mostly transmitted to the NT eye via the SCs
We hypothesized that the SC (the first relay in the rat visual system) is the structure by which inflammation can be transmitted to the contralateral eye. To verify this hypothesis, we injected CTB-Alexa 594 into the right SC of naïve rats. First, to test the quality of the injection, we made a single injection of CTB into the right SC (Fig. 7b). We did not find CTB in the left SC (Fig. 7a). The injection was localized to the superficial layers of the SC [31], with most of the CTB fluorescence (in red) located in the superficial SC (Fig. 7b). Five days after injection into the right SC, CTB was found in the right optic tract nerve fibers (Fig. 7c). In rats, the majority of fibers originating from the left retina project to the right SC. Indeed, we found a large number of RGCs containing the fluorescent tracer in the left retina (Fig. 7d), while fewer labeled RGCs were detected in the right retina (Fig. 7e).
Fig. 4 legend (continued): …, and Nox4 (c) mRNA levels in naïve, right, and contralateral SCs. For each marker, mRNA levels are presented as a fold change relative to naïve rats after normalization with respect to the housekeeping gene (GAPDH). Each bar is the mean ± SEM; n = 8-10 animals/group. We performed a one-way ANOVA followed by the Dunnett multiple comparisons post hoc test for GFAP, IL-1β, Nox2, and Nox4 and the Kruskal-Wallis test for TNF-α and CD68: *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001
After validating the quality and depth of injection, we injected TNFα (25 ng/μl) into the right SC of naïve rats. Figure 8a illustrates the rat visual pathway and the injection site. We verified whether or not RGC fibers decussate at the optic chiasma. We also assessed whether the treatment could impair rat visual function by studying the optomotor responses of the rats to rotating sinusoidal gratings 5, 7, 9, and 15 days after TNFα injection into the SC (Fig. 8b). These data indicate that the visual function of rats treated with TNFα was impaired from 7 days up to 15 days after injection (p = 0.0446 for the left eye versus the naïve eye and p = 0.0211 for the right eye versus the naïve eye at 7 days; p = 0.0032 for the left eye versus the naïve eye and p = 0.0006 for the right eye versus the naïve eye at 9 days; p = 0.0025 for the left eye versus the naïve eye and p = 0.0146 for the right eye versus the naïve eye at 15 days).
TNFα injection produced widespread astrogliosis and tissue macrophage/microglia activation (Fig. 8f) in both left and right SCs. Quantification of Iba1+CD68+ cells showed an increase in the number of tissue macrophages/microglia in the left and right superficial SCs compared to naïve SCs (Fig. 8g). We also observed activation of the p-p38 pathway in microglia in the left and right SCs, while no p-p38-positive cells were seen in naïve SCs (Fig. 8f). Quantification of the number of Iba1+p-p38+ cells confirmed these results (Fig. 8h).
Fig. 6 legend (continued): …, and contralateral (n = 7) in superficial SCs. c Double-immunofluorescent labeling of p-p38/Iba1 in naïve, right (ipsi), and contralateral SCs. d Quantification of Iba1+p-p38+ cells in naïve (n = 6), right (n = 6), and contralateral (n = 6) SCs. Scale bars = 200 μm. Results are expressed in arbitrary units and correspond to the means ± SEM. We performed a one-way ANOVA followed by the Dunnett multiple comparisons test: *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001
To evidence damage in both eyes, we performed double immunohistochemistry for Iba1 and CD68 on whole flat-mounted retinas. The images show areas of naïve, right, and left retinas (Fig. 9a). We observed morphological changes of microglia in the right and left retinas corresponding to activated tissue macrophages. Quantification of Iba1+CD68+ cells (tissue macrophage/microglia activation) in the retinas showed more than a twofold increase in right and left eyes compared to naïve eyes (Fig. 9b). To support these results, we performed RT-qPCR analysis of the left and right retinas of TNFα-treated rats compared to naïve rats. We observed a significant increase of GFAP (…). We then evaluated RGC density in the projecting retinas after TNFα microinjection into the right SC (Fig. 9f). TNFα injected into the right SC induced a loss of ≈20 % of RGCs in both retinas as compared to controls (p < 0.001 in the left retina and p < 0.001 in the right retina compared to the naïve retina).
Discussion
In this study, we used an experimental model of glaucoma (induced by EVC) that is very close to the human form of the disease. Numerous studies have characterized and used this EVC model, which reproduces specific glaucoma features: an IOP increase with a reduction of aqueous humor outflow, progressive RGC degeneration, an increase in aqueous humor TGF-β2 levels, and trabecular cell death [2,3,32]. The EVC model used in this study shares several features with human primary open-angle glaucoma, although it is known to involve impaired ocular blood flow.
Indeed, following episcleral/vortex vein cauterization, arterial inflow is suddenly left with insufficient venous outflow from the globe. This rapidly produces significant ocular venous congestion, which is consistent with the immediate rise in IOP observed in this ocular hypertensive model [33]. In this EVC glaucoma model, RGC loss is progressive (22 % decrease in the peripheral retina 6 weeks after cauterization). This peripheral RGC loss has also been observed in other glaucoma models, albeit with markedly different percentages of RGC loss. One study reported an 11 % decrease in RGC density 12 weeks after intracameral microbead injection in mice [34], while another group showed a 30 % decrease 4 months after a combination of microbead injection and laser photocoagulation of the trabecular meshwork in mice [35]. Other groups have developed EVC models in other rat strains. One study reported 15 % RGC loss in the peripheral retina 8 weeks after EVC. In another study, the authors found a 40 % decrease in RGC density 26 weeks after EVC [36], whereas two other groups observed, 7 weeks after EVC, 30 % RGC loss in the peripheral retina [37] and in the whole retina [38], respectively. In neurodegenerative diseases, neuronal degeneration is almost always associated with inflammation. Accordingly, in glaucomatous human eyes, an activated phenotype of astrocytes, Müller cells, and microglial cells is found in the retinal layers [39]. Indeed, the hallmarks of glial and microglial activation, such as cell body hypertrophy and increased expression of glial intermediate filaments (GFAP) and of tissue macrophage/microglia markers such as CD68, are classically observed in glaucomatous eyes in humans [40] as well as in experimental models of OHT [11,41,42].
In this study, we demonstrated that the increase in the number of activated tissue macrophages/microglia in the HT eye retina is correlated with an increase in CCL2 mRNA. In addition, we observed that the CCL2 chemokine is expressed by astrocytes. It is well known that CCL2 is strongly implicated in the chemoattraction of monocytes from the blood circulation to the inflammatory site [30]. Activated tissue macrophages could stem either from activation of resident microglia or from activation of infiltrating monocytes. Tissue macrophage/microglia activation could be responsible for the increase in pro-inflammatory cytokines (TNFα and IL-1β) observed in the retina [42]. This result is corroborated by a recent study that clearly demonstrated that pro-inflammatory mediators signal the progression of pathological changes in the visual projection in the DBA/2J glaucoma model prior to functional transport loss and RGC death [18].
(See figure on previous page.) Fig. 8 Unilateral TNFα injection induces visual defects, inflammation, and p-p38 pathway activation in both SCs. a Diagram showing the neuronal visual pathway in rodents. TNFα (25 ng/μl) is injected into the right SC. b The optomotor set-up allowed determination of the optokinetic tracking threshold (cycles per degree) for the left and right eyes, scored independently (clockwise and counterclockwise responses, respectively) under photopic conditions. Right and left eye sensitivity for naïve (n = 5), left (n = 6), and right (n = 6) rats at 5, 7, 9, and 15 days after TNFα injection. RT-qPCR analysis for GFAP (c), CD68 (d), and IL-1β (e) mRNA levels in naïve, left, and right SCs. For each marker, mRNA levels are presented as a fold change relative to naïve rats after normalization with respect to the housekeeping gene (GAPDH). Each bar is the mean ± SEM; n = 8-10 animals/group. Immunofluorescence of GFAP, Iba1/CD68, and Iba1/p-p38 (f) in naïve, left, and right SCs. Arrows show colocalization between markers. Scale bar = 200 μm (GFAP) and 50 μm (Iba1/CD68 and Iba1/p-p38). Quantification of Iba1+CD68+ cells (g) and Iba1+p-p38+ cells (h) in naïve (n = 5), left (n = 6), and right (n = 6) SCs. Results are expressed in arbitrary units and correspond to the means ± SEM. Two-way ANOVA for repeated measures followed by the Bonferroni post hoc test was used for optomotor responses: *p < 0.05, **p < 0.01, and ***p < 0.001
Fig. 9 legend (partial): … naïve, right, and left eyes. We performed a one-way ANOVA followed by the Dunnett multiple comparisons test (**p < 0.01, ***p < 0.001, and ****p < 0.0001) and the Kruskal-Wallis test for IL-1β. For each marker, mRNA levels are presented as a fold change relative to naïve rats after normalization with respect to the housekeeping gene (GAPDH). Each bar is the mean ± SEM; n = 8-10 animals/group. Results are expressed in arbitrary units and correspond to the means ± SEM. f Quantitation of RGC loss in the projecting retina after TNFα stereotaxic microinjection into the right SC. The resulting values were expressed in surface units corresponding to 306,804 μm². One-way ANOVA followed by the Dunnett multiple comparisons test (**p < 0.01, ***p < 0.001, and ****p < 0.0001) and the Kruskal-Wallis test for GFAP and IL-1β.
We hypothesized that loss of RGCs can lead to RGC axon impairment and ON gliosis. Accordingly, we clearly observed strong radial astrogliosis and tissue macrophage activation in the ON of the HT eye. These results are consistent with other studies that have examined the glaucomatous ON. In a diode laser burn model, strong GFAP immunostaining was found in association with degeneration of ON fibers [43]. In 12-month-old DBA/2J mice, the authors found lower β-tubulin labeling than in control mice [44]. This suggests a deficit in anterograde transport that can lead to damage in the SCs. In our experimental model of glaucoma, the SC is also injured: the left SCs showed astrogliosis, microglial activation, and elevated pro-inflammatory cytokine expression. In a model of acute perfusion of the anterior chamber of the right eye with saline solution, an increase in the number of GFAP-positive astrocytes was found throughout the superficial layers of the left SC [17]. In a model of laser photocoagulation of the perilimbal and episcleral veins, a significant enlargement of GFAP staining was found within the left SC and also in the right SC [45].
Increasing evidence supports the idea that the glaucomatous tissue stress (apoptosis of RGCs) initiated by inflammation involves reactive oxygen species (ROS) [46,47]. This study found an increase in an oxidative stress marker (Nox2) in the HT eye retina. The Nox2 enzyme is implicated in ROS synthesis, particularly in superoxide ion production. This result is corroborated by the demonstration that, in another experimental model in which glaucoma is induced by chronic injection of hyaluronic acid into the eye's anterior chamber, a decrease in superoxide dismutase and catalase activity was observed in the total retina after 1 month [48]. Moreover, we found that the oxidative stress markers Nox2 and Nox4 were also significantly elevated in the SCs. It is already known that oxidative stress caused by ROS overproduction may significantly contribute to neurodegeneration [49].
In the present study, we hypothesized that retinal inflammation in the HT eye could be transmitted to the NT eye via the SC (the main projection site of RGCs in rodents). Indeed, RGC axons from one retina project to both SCs, with each SC receiving 10-15 % of its RGC fibers from the ipsilateral eye and 85-90 % from the contralateral eye [14-16, 22, 23, 25]. In our unilateral glaucoma model, we observed activation of second-order neurons and/or interneurons in both SCs. Indeed, there was a significant increase in the number of c-fos-positive cells in both SCs compared to naïve SCs. c-fos is a marker that identifies activated neurons in histological preparations; it is a proto-oncogene expressed within neurons following voltage-gated calcium entry into the cell [50]. Neuronal excitation leads to a rapid and transient induction of c-fos. We demonstrated that c-fos is activated in the left and right SCs after unilateral OHT. We postulate that neuroinflammation and RGC death induce activation of second-order neurons in both SCs. Analysis of the right SC revealed, unexpectedly, an injury characterized by microglial activation and elevated proinflammatory cytokine expression. We hypothesized that the 10-15 % of RGC fibers from the HT eye induce neuroinflammation in the right SC.
We also found activation of the p-p38 pathway in microglia in both left and right SCs. It is already known that p-p38 immunostaining is found in glia of the human glaucomatous retina [39]. Moreover, p38 can be activated and phosphorylated by proinflammatory cytokines such as TNFα and IL-1β [51]. This observation is particularly relevant in our model because we found an increase in TNFα expression in both left and right SCs. p-p38 activation is also involved in apoptotic signaling, especially in RGCs [52]. Analysis of the NT eye of EVC rats also revealed astrogliosis and tissue macrophage/microglia activation. A research group has already shown damage in the NT eye, but only in the retina, of OHT animals [19,20]. We wished to identify how this inflammation could be transmitted to the NT eye. In the present experimental model of OHT, the results clearly show that the widespread inflammation observed in the HT eye spreads to the NT eye. We hypothesized that the SC is the major site of antero- and retrograde communication between the eyes. To test this hypothesis, we injected an inflammatory cytokine (TNFα) into the right SC, which receives approximately 10-15 % of its RGC fibers from the right eye and 85-90 % from the left eye. We showed that after unilateral injection of TNFα into the right SC, both right and left SCs present astrogliosis and tissue macrophage/microglia activation. In addition, a significant increase in the number of activated tissue macrophages in both left and right eyes was associated with inflammation. These results contribute new information to the understanding of glaucoma development. Inflammation in one eye could activate RGC fibers and induce inflammation in both SCs, which in turn may activate inflammation in the other eye by retrograde transport along RGC fibers. Supporting this idea, RGCs have been traced from the SC using a neurotracer dye such as Fluorogold [53], which preferentially concentrates in the cell bodies of RGCs over the following 3-4 days. Moreover, in a model of Parkinson's disease, retrograde degeneration of dopaminergic neurons was observed after 6-hydroxydopamine (6-OHDA) injection into the striatum, which receives dopaminergic fibers from the substantia nigra: striatal fibers degenerate first, followed by the cell bodies located in the substantia nigra [54]. | 9,024.4 | 2016-02-20T00:00:00.000 | [
"Biology",
"Medicine"
] |
Species Diversity of Trichoderma Associated with Soil in the Zoige Alpine Wetland of Southwest China
Background: The Zoige alpine wetland is one of the most important wetlands in China because of its complex natural environment, abundant ecological resources, and unique climatic conditions. The ecology of its soil fungi is poorly understood, and recent comprehensive reports on Trichoderma are not available for any region, including the Zoige alpine wetland ecological region in China. Our results may be used as a reference for a greater understanding of soil microorganisms in various ecological regions, for ecological rehabilitation and reconstruction, and as microbial resources. Results: One hundred soil samples were collected from different soil types and soil layers in Zoige alpine wetland ecological regions in 2013. Using the traditional suspension plating method, a total of 80 Trichoderma strains were isolated. After a preliminary classification based on morphological characteristics and the glyceraldehyde-3-phosphate dehydrogenase gene (gpd), 57 representative strains were selected and eventually identified as seven species via phylogenetic analyses of multilocus sequences based on the genes encoding translation elongation factor 1 alpha (tef1), RNA polymerase II subunit B (rpb2), and ATP citrate lyase (acl1). Among them, Trichoderma harzianum was the dominant species and had the highest isolation frequency (23%) in this zone, while Trichoderma polysporum and Trichoderma pyramidale were rare species, with isolation frequencies of less than 1%. Conclusions: Our detailed morphological observations and molecular phylogenetic analyses support the recognition of Trichoderma zoigense, described here for the first time as a new species.
Recent comprehensive reports on Trichoderma are not available for any region, including the Zoige alpine wetland ecological region in China. In fact, only Feng et al. (2009) have analyzed the fungal community structure in the soil of this region, via a combination of BIOLOG analysis and traditional culture methods. Because morphological and molecular tools are ideal for assessments of species diversity in all geographical regions, the work described here was designed to investigate the species diversity of the genus Trichoderma in the unique ecological environment of the Zoige alpine wetland, with an emphasis on four major soil types (peat soil, meadow soil, subalpine meadow soil, and aeolian sandy soil). Our results may be used as a reference for a greater understanding of soil microorganisms in various ecological regions, for ecological rehabilitation and reconstruction, and as microbial resources.
Study region
The Zoige alpine wetland (32°10′~34°10′N, 101°45′~103°55′E) is located in the northwestern part of Sichuan Province in China, on the eastern edge of the Qinghai-Tibet Plateau, and has an average altitude of 3400 m above sea level and an area of 19,600 km². It is a relatively pristine natural area with an annual temperature of 0.6-1.0 ℃ and annual precipitation of 580-860 mm. The cold, humid weather slows the decomposition of soil organic matter and facilitates its accumulation in the soil (Sun 1998; Ding et al. 2004; Feng 2009). Peat soil, meadow soil, subalpine meadow soil, and aeolian sandy soil are extensively developed and are the most common soil types in this area because of its unique ecological conditions.
Isolates and specimens
A total of 100 soil samples were collected in June 2013 across a range of soil types (peat soil, meadow soil, subalpine meadow soil and aeolian sandy soil) and soil layers (depth 0-10, 10-20, 20-30, 30-50, and 50-100 cm) in the Zoige alpine wetland ecological regions. Global positioning system technology (GPS Map 76; Garmin Ltd, USA) was used to determine the sampling locations. After removal of vegetation debris, approximately 300 g of each soil sample was immediately placed in a sterile plastic bag in a cooler, transported to the laboratory within 48 h and then stored at 4 ℃.
Soil fungi were isolated using the suspension plating method (Mueller et al. 2011). Briefly, suspensions (1 mL) of various dilutions (10⁻¹, 10⁻², and 10⁻³) were placed on 90-mm-diameter petri plates, and Martin medium was then added and mixed evenly with the suspension. The plates were kept in the dark at 25 ℃ for 5 d, and the fungal colonies were observed and counted. Three replicates were performed for each concentration. According to colony characteristics, the purified fungal colonies were transferred onto potato dextrose agar (PDA) and kept in tube slants and glycerol for further taxonomic identification. The specimens were deposited in the Fungal Herbarium of Sichuan Agricultural University, with accession numbers T1-T80. The holotypes of the new species and the new record species were deposited in the China General Microbiological Culture Collection Center (CGMCC), with accession numbers CGMCC3.20145 and CGMCC3.20167.
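The plating step above yields colony counts at several dilutions; converting those counts back to an abundance in the original suspension is a routine calculation, sketched below in Python. The plate counts and the dilution used in the example are hypothetical, since the text does not report them.

```python
def cfu_per_ml(colony_counts, dilution, volume_plated_ml=1.0):
    """Colony-forming units per mL of the undiluted soil suspension.

    colony_counts: replicate colony counts at one dilution (three plates here)
    dilution: dilution factor of the plated suspension, e.g. 1e-3
    """
    mean_count = sum(colony_counts) / len(colony_counts)
    return mean_count / (dilution * volume_plated_ml)

# Hypothetical counts from the three replicate plates at the 10^-3 dilution
print(f"{cfu_per_ml([42, 38, 45], dilution=1e-3):.2e} CFU/mL")
```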
Fungal colony characteristics were observed on CMD, PDA, MEA, and SNA media, with cultures grown under 12 h of white light and 12 h of darkness at 20 ℃ and 25 ℃. Colony textures and the presence or absence of exudates were recorded using a stereomicroscope (OLYMPUS SZX16, Japan). Colony morphologies were photographed weekly with a digital camera (Nikon D3100, Japan). Micromorphological characteristics were observed after 3-7 d or 14 d of cultivation, and microscopic observations were performed in 3% KOH. Chlamydospores were measured from 7-30-day-old cultures on CMD or SNA plates under a compound microscope using a 100× objective. The following characteristics of each isolate were measured: length and width of conidia (n = 50), length of phialides (n = 50), width of phialides at the base (n = 50), and width of phialides at the widest point (n = 50). Nomarski differential interference contrast (DIC) was used for observations and measurements, and data were gathered using a Carl Zeiss microscope (Axio Imager Z2, Germany). Colors were determined with Methuen's Handbook of Colour (Kornerup & Wanscher 1981).
To identify the optimal growth temperature and differentiate the growth rates of the species, 3 representative strains, or all strains when a species had ≤ 3 in total, were selected to determine the growth rate on CMD at five temperature levels (15 ℃, 20 ℃, 25 ℃, 30 ℃, and 35 ℃) as described previously with minor modifications (Jaklitsch 2009). The strains were pre-grown on PDA for 48 h or 72 h at 25 ℃. For new cultures, 5-mm agar blocks were cut from the margin of the colonies and transferred to fresh medium at the edge of a 9-cm petri dish. The maximum colony radius was measured every day until the plates were entirely covered with mycelium. The growth rate was calculated by linear regression of the colony radius r against the incubation time t (t = time of incubation and r = radius measured from the edge of the agar plug). Every treatment was repeated twice, with three replicates each time.
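The growth rate is simply the slope of colony radius versus incubation time; a minimal Python sketch of that regression is shown below, with invented radius readings standing in for the daily measurements.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical daily colony radii (mm) measured from the edge of the agar plug
days  = np.array([1, 2, 3, 4, 5])
radii = np.array([6.0, 13.5, 21.0, 27.5, 35.0])

fit = linregress(days, radii)
print(f"Growth rate: {fit.slope:.1f} mm/day (r^2 = {fit.rvalue**2:.3f})")
```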
Phylogenetic analyses
For approximate identification, all sequences of the 57 strains listed in Table 2 were compared with the NCBI sequence database using the BLAST algorithm. The two markers (ITS and gpd) sequenced in the present study were analyzed separately. Their closest matches were aligned with ClustalX (Thompson et al. 1997), and a distance tree was built with the neighbor-joining (NJ) algorithm in MEGA v. 6.0 with 1000 bootstrap replicates (Tanaka et al. 2009; Tamura et al. 2011). Combined rpb2, tef1, and acl1 gene sequences were analyzed as a multilocus dataset. A phylogenetic analysis was performed on the sequences of a total of 101 strains obtained from the present study or from previous studies, complemented with GenBank sequences (Jaklitsch 2009; Jaklitsch & Voglmayr 2015) (Table 2).
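The alignment and NJ tree were built with ClustalX and MEGA; purely as an illustration of the same neighbor-joining step, the Python sketch below builds an NJ tree from an existing multiple sequence alignment with Biopython. The alignment file name is a placeholder, and the simple identity distance is a stand-in for the substitution model used in MEGA.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Placeholder file: an aligned FASTA of the gpd (or ITS) sequences
alignment = AlignIO.read("trichoderma_gpd_aligned.fasta", "fasta")

calculator = DistanceCalculator("identity")    # simple p-distance between sequences
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)      # neighbor-joining topology

Phylo.draw_ascii(nj_tree)                      # quick text rendering of the tree
```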
The strains from this study are indicated in bold (T = ex-type). Strains T42, T44, and T48 were deposited in the China General Microbiological Culture Collection Center (CGMCC), and the rest in the Fungal Herbarium of Sichuan Agricultural University.
Maximum parsimony (MP) analyses of the combined DNA matrix were performed with PAUP* v. 4.0b10 (Swofford 2002) using 1000 replicates of a heuristic search with random addition of sequences. All characters were unordered and given equal weight, and all gaps were treated as missing data.
Relationship with ecological factors
The isolation frequency was calculated at the species level using the following formula: F (%) = (n / N) × 100, where F = the isolation frequency (%), n = the number of soil samples from which a given species was isolated, and N = the total number of soil samples. The relationships between the isolation frequency and soil types and soil layers were subsequently analyzed.
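A small Python sketch of this calculation is given below; the per-sample species records are invented for illustration only.

```python
from collections import defaultdict

# Hypothetical records: soil sample ID -> Trichoderma species found in it
samples = {
    "S01": ["T. harzianum"],
    "S02": ["T. harzianum", "T. alni"],
    "S03": [],
    "S04": ["T. rossicum"],
    "S05": ["T. harzianum"],
}

def isolation_frequency(samples):
    """F (%) = (number of samples containing the species / total samples) * 100."""
    counts = defaultdict(int)
    for species_list in samples.values():
        for sp in set(species_list):
            counts[sp] += 1
    total = len(samples)
    return {sp: 100.0 * n / total for sp, n in counts.items()}

print(isolation_frequency(samples))  # e.g. {'T. harzianum': 60.0, ...}
```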
Trichoderma species collection
A total of 80 strains were obtained from the 100 soil samples collected from Zoige alpine wetland ecological regions in China. Details of the strains isolated from the soil samples are given in Table 1. All strains were subsequently used for morphological identification, while fifty-seven were used for phylogenetic analysis.
Phylogenetic analysis
The ITS region, used preliminarily as a species identification criterion, was analyzed with TrichOKey at www.ISTH.info. However, the ITS region has a low number of variable sites and long insertions in certain species; thus, it is not suitable for a phylogenetic reconstruction of this group. In our study, most fragments of the genes tef1, rpb2, and acl1 were successfully amplified. We also designed a pair of new primers based on the full-length tef1 gene, 5'-GAGAAGTTCGAGAAGGTGAGC-3' and 5'-ATGTCACGGACGGCGAAAC-3', with which a 1.4-kb fragment was amplified for most isolates.
All samples analyzed in our study were divided into 4 primary clades based on the gpd gene region, comprising 49 strains from the Trichoderma harzianum complex, 3 Trichoderma rossicum strains, 1 Trichoderma polysporum strain, and one unknown species (4 Trichoderma sp. strains) (Fig. 1). Maximum parsimony analysis was conducted on 101 strains, with Protocrea farinosa (CPK 2472) and P. pallida (CBS 299.78) used as outgroups (Table 2). The dataset for the rpb2, tef1, and acl1 genes contained 3403 characters, among which 1152 were parsimony-informative, 988 were variable and parsimony-uninformative, and 1263 were constant. The most parsimonious trees are shown in Fig. 2.
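The character categories reported above follow the usual definitions: a site is constant if all sequences share one state, parsimony-informative if at least two states each occur in at least two sequences, and otherwise variable but parsimony-uninformative. A small Python sketch of this classification on a toy alignment (not the study's data) is shown below.

```python
from collections import Counter

def classify_site(column):
    """Classify one alignment column (gaps/ambiguous bases are ignored)."""
    counts = Counter(base for base in column if base in "ACGT")
    if len(counts) <= 1:
        return "constant"
    if sum(1 for c in counts.values() if c >= 2) >= 2:
        return "parsimony-informative"
    return "variable, parsimony-uninformative"

def summarize(alignment):
    """alignment: list of equal-length sequences; returns counts per category."""
    return dict(Counter(classify_site(col) for col in zip(*alignment)))

# Toy alignment for illustration only
aln = ["ACGTACGT",
       "ACGTACGA",
       "ACCTACGA",
       "ACCTACGT"]
print(summarize(aln))  # {'constant': 6, 'parsimony-informative': 2}
```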
Growth rates
As shown in Fig. 3, the Trichoderma strains from the Zoige alpine wetland ecological regions were able to grow in a range from 15-35 ℃, and the suitable growth temperature for most species ranged from 20 ℃ to 30 ℃. All seven species identified had normal viability at a relatively low temperature (15 ℃), and they rarely grew well over 35 ℃, except for T. zoigense. For T. atrobrunneum, T. harzianum, and T. pyramidale, the optimum growth temperature on CMD was 25-30 ℃. Trichoderma alni and T. rossicum preferred a cool growth environment, with an optimum temperature of 25 ℃, whereas T. zoigense was more partial to a warm environment, with an optimum temperature of 30 ℃, and it even grew well up to 35 ℃. T. polysporum was the only slow-growing species; it grew at less than 6.0 mm/d between 15 ℃ and 30 ℃ and did not survive at 35 ℃. The above results showed that all species had different growth rates but were not completely differentiated from each other on CMD. These species were roughly divided into four groups based on their optimum growth temperature.
Relationship with ecological factors
Our results revealed a substantial disparity in the number and distribution of Trichoderma species among Zoige alpine wetland ecological regions (Tables 3, 4). Table 3 shows that T. harzianum was found in all four soil types, but most isolates of this species were obtained from peat soil. T. rossicum, T. alni, and T. zoigense were also present in meadow soil and subalpine meadow soil, whereas T. atrobrunneum was found in aeolian sandy soil and peat soil. T. polysporum was found only in peat soil. As shown in Table 4, T. harzianum was widely distributed across the five soil layers at depths of 0-100 cm. T. rossicum, T. alni, and T. zoigense were isolated mainly from soil layers at depths of 0-50 cm. Both T. atrobrunneum and T. pyramidale were only isolated from depths of 0-10 cm, and T. polysporum was found only in the soil layer at a depth of 50-100 cm.
Regarding isolation frequency, T. harzianum was the most common of the seven species, with a 23% isolation frequency, and it was therefore the dominant species in the zone, while the rare species T. polysporum and T. pyramidale had the lowest isolation frequencies, at 1%.
Discussion
To characterize the biodiversity and establish the species composition of Trichoderma associated with soil in the Zoige alpine wetland ecological region of Southwest China, morphological characterization and multilocus phylogenetic analyses were performed to identify 80 strains as T. harzianum (48 strains, 60%), T. alni (15 strains, 18.75%), T. zoigense (a new species, 8 strains, 10%), T. rossicum (4 strains, 5%), T. atrobrunneum (3 strains, 3.75%), T. polysporum (1 strain, 1.25%), and T. pyramidale (1 strain, 1.25%). This is the first comprehensive report on the population structure of Trichoderma in the Zoige alpine wetland. A specialized analysis of Trichoderma from 100 soil samples shows high richness of Trichoderma species in this region and indicates the presence of latent resources that require further study, such as new species.
Although many studies have focused on the identification of Trichoderma, identifying Trichoderma species based only on morphological characteristics remains difficult. Amplification of four universal fungal genes, gpd, acl1, rpb2, and tef1, showed that the gpd gene could be used to divide the 57 representative strains approximately into 4 clades, which were exactly aligned with the previous 4 morphological groups. The gpd gene was suitable for categorizing large groups but was not useful for the accurate identification of species within the Trichoderma complex (Druzhinina et al. 2010). In fact, any single gene among acl1, rpb2, and tef1 can play an important role in the identification of Trichoderma species but cannot accurately distinguish Trichoderma at the species level. Notably, although the primer pair EF1-728F and TEF1LLErev for tef1 was useful, it did not always successfully amplify all tested DNA materials. Admittedly, many factors affect PCR amplification, not all of which can be attributed to the primers; the quality of the DNA may also be one of these factors. Phylogenetic studies of many species have shown that the most accurate method of species identification is to combine phylogenetic analysis with morphological phenotypic characteristics. In this study, when the genes acl1, rpb2, and tef1 were used in a multilocus phylogenetic analysis, the phylogenetic relationships among taxa were consistent with those identified in previous studies in which the phylogenetic tree was built based on the genes rpb2 and tef1, either singly or in combination. We found that the Longibrachiatum clade contained a new species, T. zoigense, which was phylogenetically distinct from any other species of Trichoderma (Fig. 2), with only a low level of support for relationships with T. citrinoviride (C.P.K. 2005) and T. saturnisporum (ATCC 18903) (Fig. 2, MPBP = 62%). With respect to the morphological characteristics of these two species, T. zoigense was difficult to distinguish from T. citrinoviride and T. saturnisporum by colony morphology and spores. However, T. zoigense produced a diffusing yellow pigment and a fragrance on all tested media and readily produced chlamydospores (Samuels et al. 1998). The results of our studies demonstrated significant differences in the abundance and distribution of the Trichoderma species isolated in the Zoige alpine wetland natural region. T. harzianum showed the highest abundance among the species isolated across the five soil layers and four soil types, implying that this species has good adaptability and can survive under most environmental conditions. Only T. polysporum was isolated at a soil depth of 50-100 cm, indicating that it prefers to live in a low-temperature environment (Domsch et al. 2007). In general, it is assumed that some Trichoderma species have stricter requirements for the growth environment and, thus, a narrower range for survival.
Etymology: zoigense (Latin), the specific epithet referring to the place where the type was found.
Conclusion
Description: Cultures and anamorph: optimal growth at 25 ℃ on all four media. On CMD after 72 h, growth is 25-28 mm at 20 ℃ and 28-31 mm at 25 ℃. The colony is dense and has a wavy to crenate margin. The surface becomes distinctly zonate and white to grayish-green, but celadon to atrovirens later; it is granular in the center and distinctly radially downy outside and shows whitish surface hyphae and reverse-diffusing croci to pale brown pigment. Aerial hyphae are numerous to punctate and long, forming radial strands, with white mycelial patches appearing in aged cultures. Autolytic excretions are rare, with no coilings observed. Conidiation is noted after 3-4 d at 25 ℃, and a yellow or greenish color appears after 7 d; conidiation is effuse and in intense tufts, erect conidiophores occur around the plug and on aerial hyphae, and they are mainly concentrated along the colony center, show a white color that turns green, and finally degenerate, with conidia often adhering in chains. Conidiophores are short and simple with asymmetric branches. Branches produce phialides directly. Phialides are generally solitary along main axes and side branches and sometimes paired in the terminal position of the main axes, sometimes in whorls of 2-3. Phialides are 4.5-10.5 × 2-5 µm (x̄ = 7.5 ± 1.5 × 3 ± 0.5, n = 50) and 1.5-2.5 µm (x̄ = 2 ± 0.2) wide at the base, lageniform or ampulliform, mostly uncinate or slightly curved, less often straight, and often distinctly widened in the middle. Conidia are 3-4.5 × 2.3-4 µm (x̄ = 3.5 ± 0.3 × 3 ± 0.3, n = 50) and initially hyaline; they turn green and are oblong or ellipsoidal, almost with constricted sides, and smooth, eguttulate or with minute guttules, with indistinct scars.
On PDA, after 72 h, growth is 35-41 mm at 20 ℃ and 50-55 mm at 25 ℃, and the mycelium covers the plate after 5 d at 25 ℃. Colonies are dense with wavy to crenate margins, and the mycelia are conspicuously differentiated in the width of the primary and secondary hyphae. The surface becomes distinctly zonate, yellowish-green to prasinous in color and celadon to atrovirens later; it is farinose to granular in the center, distinctly radially downy outside, with whitish surface hyphae and reverse-diffusing brilliant yellow to fruit-green pigment. Aerial hyphae are numerous, long, and ascend several millimeters, forming radial strands, with white mycelial patches appearing in aged cultures. Autolytic excretions are rare, and no coilings are observed. The odor is indistinct or fragrant. Chlamydospores, examined after 7 d, are 4.5-9 × 4.5-7.5 µm (x̄ = 6 ± 1.1 × 6 ± 0.7, n = 50); they are terminal and intercalary, globose or ellipsoidal, and smooth. Conidiation is noted after 3-4 d and is yellow or greenish after 7 d. Conidiophores are short and simple with asymmetric branches. Phialides are similar to those on CMD. Conidia are greenish, ellipsoidal, and smooth.
On SNA, after 72 h, growth is 13-15 mm at 20 ℃ and 16-21 mm at 25 ℃, and the mycelium covers the plate after 12-13 d at 25 ℃. The colony is similar to that on CMD, with a slightly wavy margin, although the mycelia are looser and slower on the agar surface. Aerial hyphae are relatively inconspicuous and long along the colony margin. Autolytic activity and coiling are absent or inconspicuous. No diffusing pigment or distinct odor is produced. Conidiation is noted after 3-4 d at 25 ℃, and many amorphous, loose, white or aqua cottony tufts occur, mostly median from the plug outwards; they are confluent into masses, white at first but then turning green. From the inside, after 4-5 d, conidiation becomes dense within the tufts, which are loose at their white margins, with long, straight or slightly sinuous sterile ends in the periphery. Tufts consist of a loose reticulum with branches often at right angles that give rise to several main axes. Main axes are regular and tree-like, with few or many paired or unpaired side branches. Branches are flexuous, and phialides are solitary along the main axes and side branches; they are sometimes paired in the terminal position of the main axes, sometimes in whorls of 2-3 that are often cruciform, or in pseudo-whorls of up to 4. Phialides and conidia are similar to those on CMD; phialides are often constricted below the tip to form a narrow neck, 4.5-8 × 2-3.5 µm (x̄ = 6 ± 0.8 × 2.5 ± 0.3, n = 50) and 1-2.5 µm (x̄ = 2 ± 0.3) wide at the base. Conidia are subglobose to ovoid, 3-4.5 × 2.5-3.3 µm (x̄ = 3.5 ± 0.3 × 3 ± 0.2, n = 50), laurel-green to bright green, smooth, and ellipsoidal.
Description: Cultures and anamorph: optimal growth at 25 ℃ on all media, with little growth at 35 ℃. On CMD, after 72 h, growth is 29-32 mm at 20 ℃ and 48-53 mm at 25 ℃, and the mycelium covers the plate after 5-6 d at 25 ℃. Colonies show distinct zonation. Mycelium is loose and thin; hyphae are narrow, sinuous, and often form strands at the margin. Aerial hyphae are slight and form a thin, white to green, downy, fluffy or floccose mat. A brown pigment is produced, but no distinct odor is noted. Conidiophores are hyaline and thick, with side branches on several levels at the base of the elongations, mostly paired and at right angles, with phialides in whorls of 3-5. Phialides are 5-9.5 × 2.5-3 µm (x̄ = 7 ± 1.1 × 3 ± 0.3, n = 50) and 1-2.5 µm (x̄ = 1.5 ± 0.3) wide at the base and are often short, wide, and ampulliform. Conidia are 2.5-4 × 2.5-3.5 µm (x̄ = 3.5 ± 0.3 × 3 ± 0.2, n = 50), green, smooth, and ellipsoidal.
On SNA, after 72 h, growth is 33-35 mm at 20 ℃ and 38-40 mm at 25 ℃, and the mycelium covers the plate after 7-8 d at 25 ℃. Colonies show distinct zonation. Mycelium is thin, yellow to green; hyphae are wide and sinuous, with indistinct strands at the margin. The margin is thin and ill-defined. Aerial hyphae are slight and form a thin, white, downy, fluffy or floccose mat in distal parts of the colony. No diffusing pigment or distinct odor is noted. Conidiation is similar to that on CMD.
Description: Cultures and anamorph: optimal growth at 25 ℃ on all media. On CMD, growth is 10-11 mm/d at 20 ℃ and 15-17 mm/d at 25 ℃, and the mycelium covers the plate after 6-7 d at 20 ℃. The colony is dense with a wavy margin, and the surface becomes distinctly zonate. Aerial hyphae are numerous, long, and villiform on the plate. No diffusing pigment or odor. Autolytic activity is variable, and coilings are scarce or inconspicuous. Conidiation is noted after 3-4 d at 20 ℃. Conidiation is effuse and in intense tufts that are hemispherical or irregular, showing wide wheel-grain banding that is gray-green to deep green. Conidiophores radiate from the reticulum and are broad, straight, sinuous or helically twisted, show distally slightly pointed elongations, taper from the main axes to the top branches, and present primary branches arranged in pairs or in whorls of 2-3, with secondary branches down to solitary. Phialides are 4.5-14 × 2.5-4 µm (x̄ = 7 ± 1.5 × 3.5 ± 0.3, n = 50) and 2-3.5 µm (x̄ = 3 ± 0.4) wide at the base, ampulliform, and in whorls of 3-6. Conidia are 3.5-5.5 × 2.5-4 µm (x̄ = 4.5 ± 0.5 × 3 ± 0.2, n = 50), short cylindrical, and gray when single and pea green to yellow green in mass.
On SNA, growth is 8-13 mm/d at 20 ℃ and 8-12 mm/d at 25 ℃, and the mycelium covers the plate after 6-7 d at 25 ℃. The colony is hyaline, thin, and dense, and the mycelium degenerates rapidly. Aerial hyphae are inconspicuous, autolytic activity is scant, and coilings are distinct. Conidiation is noted after approximately 4 d and starts in white fluffy tufts spreading from the center to form concentric zones; these compact into pustules that are white to greenish in color.
Figure 1 Neighbor-joining tree based on partial gpd gene sequences from 57 Trichoderma isolates. Parsimony bootstrap values of more than 50% are shown at the nodes | 6,095.6 | 2021-09-21T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Ordered Convex Metric Spaces
Introduction
Menger [1] initiated the study of convexity in metric spaces, which was further developed by many authors [2-4]. The terms "metrically convex" and "convex metric space" are due to [2]. Subsequently, Takahashi [5] introduced the notion of convex metric spaces and studied their geometric properties. Takahashi also proved that all normed spaces and their convex subsets are convex metric spaces and gave an example of a convex metric space that is not embedded in any normed/Banach space. Kirk [6] showed that a metric space of hyperbolic type is a convex metric space. Afterward, Shimizu and Takahashi [7] gave the concept of uniformly convex metric spaces, studied their properties, and constructed examples of uniformly convex metric spaces. Beg [8] established some inequalities in uniformly convex complete metric spaces analogous to the parallelogram law in Hilbert spaces, together with their applications. Beg [9] proved that a closed convex subset of a uniformly convex complete metric space is a Chebyshev set. Recently, Abdelhakim [10] studied convex functions on these spaces. The aim of this note is to continue the research in this direction by introducing the concept of ordered convex metric spaces and studying their structure.
We conclude with the plan of the paper. In Section 2, we recall some basic notations and definitions from the existing literature on convex metric spaces, order structure, and general topology. In Section 3, we introduce the new concept of ordered convex metric spaces and study some of their basic properties. Several characterizations of these spaces are also proven that allow geometric interpretations of the new concepts. Finally, Section 4 concludes with a summary statement.
Preliminaries
In this section, basic results about convex metric spaces and order structure are given.
Definition 1 (see [5]). Let (X, d) be a metric space and I = [0, 1]. A mapping ω : X × X × I → X is said to be a convex structure on X if for each (a, b, λ) ∈ X × X × I and u ∈ X,
d(u, ω(a, b ; λ)) ≤ λ d(u, a) + (1 − λ) d(u, b).
A metric space (X, d) together with the convex structure ω is called a convex metric space. A nonempty subset K ⊂ X is said to be convex if ω(a, b ; λ) ∈ K whenever (a, b, λ) ∈ K × K × I.
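As a quick sanity check of Definition 1 (not taken from the paper), the following LaTeX-style derivation verifies that the usual line-segment map on a normed space satisfies the defining inequality, which is exactly Takahashi's observation cited in the introduction:

```latex
% In a normed space (E, \|\cdot\|) define \omega(a,b;\lambda) = \lambda a + (1-\lambda) b.
% Then for any u \in E,
\begin{align*}
d\bigl(u,\omega(a,b;\lambda)\bigr)
  &= \bigl\|u - \lambda a - (1-\lambda) b\bigr\| \\
  &= \bigl\|\lambda (u - a) + (1-\lambda)(u - b)\bigr\| \\
  &\le \lambda \|u - a\| + (1-\lambda)\|u - b\|
   = \lambda\, d(u,a) + (1-\lambda)\, d(u,b),
\end{align*}
% so every normed space, and every convex subset of one, is a convex metric space.
```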
Definition 3 (see [11]). A binary relation a≼b defined for some pairs a, b of elements of a set X is called an order relation in X if ≼ is reflexive, transitive, and antisymmetric. A reflexive and transitive relation ≼ is called a preorder.
Remark 4 (see [11]). Let ≼ be a binary relation on a set X. By a ≺ b we mean a ≼ b and a ≠ b. The relation ~ is defined by a ~ b if a ≼ b and b ≼ a. The inverse of ≼ is defined by a ≽ b if b ≼ a. Incomparable elements a and b (i.e., a ≰ b and a ≱ b) are denoted by a ⊳⊲ b. Definition 5 (see [11]). An ordered set is called totally ordered if it has no incomparable elements.
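To make Definitions 3 and 5 and Remark 4 concrete, the following Python sketch (ours, not from the paper; the helper names are hypothetical) tests whether a binary relation on a finite set, encoded as a boolean matrix R with R[i][j] meaning x_i ≼ x_j, is a preorder, a partial order, and a total order:

```python
import numpy as np

def is_reflexive(R):
    return bool(np.all(np.diag(R)))

def is_antisymmetric(R):
    n = len(R)
    return all(not (R[i, j] and R[j, i]) for i in range(n) for j in range(n) if i != j)

def is_transitive(R):
    n = len(R)
    return all((not (R[i, j] and R[j, k])) or R[i, k]
               for i in range(n) for j in range(n) for k in range(n))

def classify(R):
    R = np.asarray(R, dtype=bool)
    preorder = is_reflexive(R) and is_transitive(R)
    partial = preorder and is_antisymmetric(R)
    # "Totally ordered" (Definition 5): no incomparable pair a ⊳⊲ b.
    total = partial and bool(np.all(R | R.T))
    return preorder, partial, total

# Divisibility on {1, 2, 3, 4, 6}: a partial order that is not total (2 ⊳⊲ 3).
elems = [1, 2, 3, 4, 6]
R = [[b % a == 0 for b in elems] for a in elems]
print(classify(R))  # (True, True, False)
```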
Proposition 6 (see Proposition 4.1 of [12]). A topological space is disconnected if and only if it has a nonempty proper subset that is both open and closed.
Proposition 7 (see [13]). Let X be a connected topological
Ordered Convex Metric Spaces
In this section, first, we introduce the property ðLÞ on a convex metric space. Next, we present some notations and definitions related to an order relation ≼ on a convex metric space. Finally, we define ordered convex metric space and prove several interesting results related to ordered convex metric spaces.
Each normed space has property (L) if we define ω(a, b; t) = ta + (1 − t)b. In Definition 8, taking μ = 0 and using Remark 2, we obtain: Let X be a convex metric space and ≼ an order relation on X. First, we define some notation for subsequent use. For any a, b, c in X and λ ∈ I, write A_≼(a, b, c) = {λ ∈ I : ω(a, b; λ) ≼ c}, and define A_≺(a, b, c), A_≽(a, b, c), A_≻(a, b, c), and A_⊳⊲(a, b, c) analogously. Definition 9. (i) A relation ≼ on a convex metric space X is said to be continuous if for all a, b, c in X, the sets A_≼(a, b, c) and A_≽(a, b, c) are closed.
(ii) A relation ≼ on a convex metric space X is said to be Archimedean if for all a, b, c, d in X with d ⊳⊲ a ≺ b ⊳⊲ c, there exist λ, μ ∈ (0, 1) such that ω(a, c; λ) ≺ b and a ≺ ω(b, d; μ). When the relation ≼ is total, the space is called Archimedean.
(iii) A relation ≼ on a convex metric space X is said to have the betweenness property if for all a, b in X, all c ∈ {a, b} and all λ ∈ (0, 1), a ≼ b if and only if ω(a, c; λ) ≼ ω(b, c; λ). Proof. Let ≼ be an Archimedean relation on X and let A_≺(a, b, c) be closed. Without loss of generality, we can assume that A_≺(a, b, c) is nonempty. Choose λ in A_≺(a, b, c). Now, continuity of ≼ and λ ∉ A_≽(a, b, c) imply that there exists t > 0 such that N_t(λ) = {α : |α − λ| < t} is contained in the complement of A_≽(a, b, c).
Assume there exists β ∈ N_t(λ) ∩ A_⊳⊲(a, b, c). Continuity of ≼ implies that A_⊳⊲(a, b, c) is an open set in I. Thus, A_⊳⊲(a, b, c) is a union of at most countably many mutually disjoint open intervals. The axiom of choice further implies that there exists among these intervals an open interval F such that β ∈ F. If β ≺ λ, then set δ = sup F; otherwise, δ = inf F. Then δ ∈ A_≺(a, b, c). By Definition 8 (ii), we have

Proof. Let ≼ be not totally ordered on the convex metric space X. Then there exist u, v ∈ X with u ⊳⊲ v and a, b ∈ X with a ≺ b. Let a ≺ u. Then, using Remark 4, a ≺ v or v ≺ u. Since u ⊳⊲ v, therefore a ≺ v. Thus, a ≺ u and a ≺ v. Now, we prove

Obviously, A_≺(a, u, v) ∩ A_≺(a, u, u) ⊂ A_≼(a, u, v) ∩ A_≼(a, u, u). To prove the other inclusion, choose λ ∈ A_≼(a, u, v) ∩ A_≼(a, u, u). If v ~ ω(a, u; λ), then it follows from transitivity and ω(a, u; λ) ≼ u that u ≽ v, which contradicts u ⊳⊲ v. Therefore, v ≻ ω(a, u; λ), i.e., λ ∈ A_≺(a, u, v). In a similar way, if u ~ ω(a, u; λ), then u ≼ v, which contradicts u ⊳⊲ v. Thus, λ ∈ A_≺(a, u, u). Now, u ⊳⊲ v and Remark 2 (i) imply that 0 ∉ A_≺(a, u, v) ∩ A_≺(a, u, u) and 1 ∈ A_≺(a, u, v) ∩ A_≺(a, u, u). Continuity of ≼ further implies that A_≼(a, u, u) ∩ A_≼(a, u, v) is closed. Using Equality (3), we obtain that A_≺(a, u, v) ∩ A_≺(a, u, u) is a closed set. On the other hand, Proposition 12 implies that A_≺(a, u, u) ∩ A_≺(a, u, v) is an open set. Thus, we have a nonempty closed-open proper subset of I. Since I is connected, this contradicts Proposition 6. Similarly, we can derive a contradiction in the case a ≻ b. Hence, ≼ is a totally ordered relation.
Corollary 16. Let X be an ordered convex metric space with property (L). If ≼ is an Archimedean relation, then the space X is also Archimedean.
Concluding Remarks
Order, convexity, and metric are three fundamental concepts in mathematics. These ideas have beautiful geometric properties with significant applications in approximation and optimization (see [14, 15]). In this work, we tried to combine these three indispensable notions of order, convexity, and metric. We introduced the new concept of ordered convex metric spaces and studied some of their properties. Several characterizations (Propositions 12, 17, 18, and 19 and Theorem 20) of these spaces were proven that allow us to make geometric interpretations of the new concepts. The author recommends studying further applications of ordered convex metric spaces to economics, preference modelling, control theory, functional analysis, etc.
Data Availability
No data were used to support this study.
Conflicts of Interest
The author declares that he has no conflict of interest.
"Mathematics"
] |
Comparative Analysis of the Complete Mitochondrial Genomes of Five Species of Ricaniidae (Hemiptera: Fulgoromorpha) and Phylogenetic Implications
Simple Summary: Although previous studies have recently explored the phylogenetic relationships among the planthopper families, the taxonomic relationships between Ricaniidae and other families of Fulgoroidea need to be further explored. Meanwhile, the morphological definitions of the two largest genera, Pochazia Amyot & Serville, 1843 and Ricania Germar, 1818 (the type genus of Ricaniidae), remain controversial, and their monophyly has never been established. This study aims to clarify the relationship of Ricaniidae with other families of Fulgoroidea and to provide evidence to clarify the differences between these two related genera for species attribution. Our results support the monophyly of Ricaniidae and the sister-group status of the two families Flatidae and Ricaniidae but fail to support the monophyly of Pochazia and Ricania. Diagnoses between these two genera cannot be resolved until more evidence is acquired. This study provides new evidence toward the phylogenetic analysis and revision of the distinguishing characteristics of related genera in this family.

Abstract: Ricaniidae is a relatively small planthopper family with about 69 genera and 442 species worldwide. Members of this family occur throughout the warm temperate and tropical regions. Some species cause devastating damage to major agricultural and economic plants. However, the relationship between Ricaniidae and other families of Fulgoroidea needs to be further explored. The morphological definitions of the two biggest genera, Pochazia Amyot & Serville, 1843 and Ricania Germar, 1818 (the type genus of Ricaniidae), remain controversial. In this study, the mitogenomes of five representatives of these two genera were decoded using next-generation sequencing and genome assembly. Results showed that their complete mitogenomes are circular DNA molecules of 15,457 to 16,411 bp. All protein-coding genes (PCGs) begin with the start codon ATN, GTG or TTG and end with TAA, TAG, or an incomplete stop codon (a single T or a single A). A lost DHU arm was discovered in the trnS gene of the five mitogenomes and in the trnV gene of Pochazia confusa, Pochazia guttifera and Ricania simulans. The remaining tRNAs folded into clover-leaf structures. The sliding window, genetic distance, and Ka/Ks analyses indicated that the cox1 gene is the slowest evolving and is relatively conserved. The phylogenetic tree topologies support (Delphacidae + (((Issidae + (Lophopidae + Caliscelidae)) + (Flatidae + Ricaniidae)) + (Achilidae + (Dictyopharidae + Fulgoridae)))) as the best topology, as recognized by PhyloBayes, RAxML and MrBayes based on four data sets (PCG, PCGRNA, PCG12, PCG12RNA). The monophyly of Ricaniidae and the sister-group status of the two families Flatidae and Ricaniidae are supported, but all analyses failed to support the monophyly of Pochazia and Ricania. The diagnoses between these two genera cannot be resolved until more evidence is acquired.
Introduction
Ricaniidae Amyot & Audinet-Serville, 1843 is an economically important family in the superfamily Fulgoroidea. It comprises 442 species (3.2% of the Fulgoromorpha) in 69 genera (2.8% of the Fulgoromorpha) divided into two subfamilies [1]. This family of planthoppers is distributed throughout the warm temperate and tropical regions of the world. They are associated with both herbaceous and woody plants and have adapted to various habitats ranging from rainforests to semideserts.
Some phytophagous members of this family are major agricultural pests. For example, Ricania speculum (Walker, 1851), Pochazia shantungensis (Chou & Lu, 1977) and Orosanga japonica (Melichar, 1898) are major agricultural pests having a very wide range of hosts. Their damage, caused by sap suction and by egg-laying, may lead to the withering of the host tissues [2].
The genera Ricania Germar, 1818 and Pochazia Amyot & Audinet-Serville, 1843 are the most species-rich (Ricania 82 spp., Pochazia 44 spp.) within the family Ricaniidae. The morphological definitions of both genera are unclear and based on external characters, mainly the size of the forewing, the apical angle, and the ratio of the apical margin to the claval suture. In Pochazia, the forewing is large, the apical angle is prominent, and the apical margin is longer than the claval suture. In Ricania, the forewing is relatively small, the apical angle is broadly rounded, and the apical margin is nearly as long as the claval suture [10–12].
Both genera (in their recent definition and composition) are non-monophyletic groups (Stroiński, pers. comm.). Further research into the scope and definition of these taxa is needed.
Sample Preparation and DNA Extraction
Adult specimens of five Ricaniid species were studied. Pochazia confusa (Figure 1A), Pochazia discreta (Figure 1B), Pochazia guttifera (Figure 1C), and Ricania fumosa (Figure 1E) were collected in Guangdong Province (Table S1). Ricania simulans (Figure 1D) was collected in Hunan Province (Table S1). All specimens were preserved in 100% ethanol at −20 °C to allow DNA extraction. All specimens were identified by the first author before DNA extraction. The genomic DNA was extracted using the DNeasy DNA Extraction Kit (Qiagen).
Sequencing, Assembly, Annotation, and Analysis
The whole genomic DNA of each of the five Ricaniid species was sequenced once by next-generation sequencing on the Illumina NovaSeq platform. Results were aligned with Sanger sequencing results to ensure accuracy. The quality-trimming and assembly of the paired reads were checked with Geneious v 11.0.2 using default parameters [19], employing the closely related Ricania speculum (Hemiptera: Ricaniidae; MT834932) [17] as a reference sequence.
The annotation of genomic features was conducted using Geneious v 11.0.2, with Ricania speculum and Pochazia shantungensis as references. Open reading frames (ORFs) were identified with ORF Finder based on the invertebrate mitochondrial genetic code. The mitogenomic maps of the five species were visualized using the CGView Server [20]. The secondary structures of tRNAs were predicted by the MITOS Web Server [21]. According to the predicted results, the tRNAs of the five species were edited using Adobe Illustrator CS2020.
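The annotation workflow above relies on GUI tools; as a rough illustration only (not part of the authors' pipeline, and with a hypothetical file name), a Biopython sketch like the following can tabulate the annotated genes of a finished mitogenome record downloaded from GenBank:

```python
from Bio import SeqIO

# Hypothetical file name; any annotated mitogenome GenBank record works.
record = SeqIO.read("ricaniid_mitogenome.gb", "genbank")
print(record.id, len(record.seq), "bp")

counts = {"CDS": 0, "tRNA": 0, "rRNA": 0}
for feature in record.features:
    if feature.type in counts:
        counts[feature.type] += 1
        name = feature.qualifiers.get("gene", feature.qualifiers.get("product", ["?"]))[0]
        strand = "+" if feature.location.strand == 1 else "-"
        print(f"{feature.type:5s} {name:12s} {strand} "
              f"{int(feature.location.start)}..{int(feature.location.end)}")

# A typical insect mitogenome should show 13 CDS, 22 tRNA and 2 rRNA features.
print(counts)
```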
Base composition and RSCU (relative synonymous codon usage) were calculated with PhyloSuite v 1.2.2 [22]. The Tandem Repeats Finder online server was employed to obtain the tandem repeats in the control region [23]. The sliding window analysis was performed with DnaSP v 6.0 based on concatenated alignments of PCGs and rRNA genes among eight Ricaniidae mitogenomes [24]. The average non-synonymous (Ka)/synonymous (Ks) substitution rates and average genetic distances were estimated via DnaSP v 6.0 and MEGA-X, respectively, based on each PCG of the eight Ricaniidae mitogenomes [25]. The mitogenomes of Pochazia confusa, Pochazia discreta, Pochazia guttifera, Ricania simulans and Ricania fumosa were deposited in GenBank with accession numbers MZ617458, MZ673797, MZ617457, MZ617459, and MZ617460, respectively (Table 1).
Mitogenome Sequence Alignment and Analyses of Sequence Heterogeneity
Five newly sequenced Ricaniidae mitogenomes and a total of 44 known mitogenome sequences of Fulgoroidea were selected as ingroups, including 3 species of Ricaniidae, 10 species of Delphacidae, 5 species of Achilidae, 1 species of Dictyopharidae, 14 The extraction of 13 PCGs, 2 rRNAs, and amino acid sequences (PCG-AA) was implemented with PhyloSuite v 1.2.2 [22]. All 13 PCGs were aligned with the G-INS-i algorithm and codon alignment mode in MAFFT 7 [26]. Alignments of the two rRNAs were produced with the Q-INS-i algorithm in MAFFT 7. Alignments of PCG-AA were produced using the G-INS-i algorithm in MAFFT 7. We used Gblocks v 0.91b [27] to remove all gaps and poorly aligned sites of the 13 PCGs, 2 rRNAs, and PCG-AA. The concatenation of all alignments was performed with PhyloSuite v 1.2.2. Five different data sets were generated to reconstruct the phylogeny: (1) The sequence divergence heterogeneity of the five data sets was assessed using AliGROOVE [28] with the default sliding window size. The gaps in the nucleotide data sets were treated as ambiguity, and a BLOSUM62 matrix was used as the default amino acid substitution matrix.
Phylogenetic Analyses
The phylogenetic analyses under site-homogeneous models were reconstructed by Bayesian Inference (BI) and Maximum Likelihood (ML) methods. The optimal partitions and best models for both ML and BI trees were selected by PartitionFinder 2.1.1 (Tables S9 and S10) [29], with greedy algorithm and BIC criterion. ML analyses were conducted in IQ-TREE 1.6.5 [30] using 1000 replicates of ultrafast bootstraps. BI analyses were performed using MrBayes 3.2.6 [31], each run for 10,000,000 generations, with sampling every 100 generations. A consensus tree was calculated from the remaining samples after burn-in of the first 25% of trees.
We used PhyloBayes MPI v1.5a on CIPRES [32] to reconstruct Bayesian inferences with a site-heterogeneous CAT+GTR model and the default parameter. Two independent chains proceeded simultaneously until the runs were converged (maxdiff was <0.1). The initial 25% of the two chains were discarded as burn-in and a consensus tree was generated from the remaining samples.
Mitogenome Organization and Base Composition
The circular complete mitogenomes of Pochazia confusa, Pochazia discreta, Pochazia guttifera, Ricania simulans and Ricania fumosa were 16,121, 16,411, 16,153, 15,457 and 16,016 bp in length, respectively ( Figure 2). The total length of the complete mitogenomes is associated with the variation in length of the control region. The five newly sequenced mitogenomes comprised the typical 37 genes: 13 protein-coding genes (PCGs), 22 transfer RNA genes (tRNAs), two ribosomal RNA genes (rRNAs), and an A+T-rich region (control region). Gene arrangement was consistent with other planthopper mitogenomes. The majority strand (J-strand) encoded 9 PCGs and 14 tRNAs, while the remaining genes were encoded on the minority strand (N-strand) (Tables S2-S6).
The whole mitogenomes of the five Ricaniid species presented a positive AT skew and a negative GC skew. A high A+T content was observed in the five Ricaniid mitogenomes: 76.7, 73.6, 75.1, 77.3, and 76.1%, respectively (Table S7). This situation has also been observed in other planthopper species.
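For readers unfamiliar with the skew statistics quoted above, the following short Python sketch (ours, with a made-up toy sequence) computes the A+T content and the standard skews, AT skew = (A − T)/(A + T) and GC skew = (G − C)/(G + C):

```python
def composition_stats(seq: str):
    seq = seq.upper()
    a, t, g, c = (seq.count(base) for base in "ATGC")
    at_content = 100.0 * (a + t) / max(len(seq), 1)
    at_skew = (a - t) / (a + t) if (a + t) else 0.0
    gc_skew = (g - c) / (g + c) if (g + c) else 0.0
    return at_content, at_skew, gc_skew

# Toy example only; a real analysis would read the full mitogenome FASTA.
toy = "ATAATTGCATTAATACCTATA"
at_content, at_skew, gc_skew = composition_stats(toy)
print(f"A+T = {at_content:.1f}%, AT skew = {at_skew:+.3f}, GC skew = {gc_skew:+.3f}")
```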
Protein-Coding Genes and Codon Usage
The total length of the PCGs ranged from 10,914 bp (Pochazia confusa) to 10,956 bp (Ricania fumosa) among the five newly sequenced Ricaniidae mitogenomes. Comparing the PCGs of the five Ricaniid mitogenomes, the A+T contents of Pochazia confusa, Pochazia discreta, Pochazia guttifera, Ricania simulans, and Ricania fumosa were 75.6, 72.9, 73.3, 76.4, and 75.6%, respectively. All PCGs presented a negative AT skew and GC skew. The A+T content of the third codon position was the highest, while that of the second codon position was the lowest. The AT skew and GC skew of the first codon position were the highest (Table S7). In the Ricaniid mitogenomes, most PCGs initiated with the typical start codon ATN (ATA/T/G/C), except for the nad5 gene in Ricania simulans and Ricania fumosa, which began with GTG and TTG, respectively. Correspondingly, most PCGs terminated with a TAA/TAG codon, but the cox2, atp6, and nad4 genes ended with a single T, except for atp6 in Ricania marginalis, which used TAG as the stop codon. The nad1 gene in Ricania speculum, Pochazia shantungensis, and Ricania marginalis ended with a single A, a single A, and TAA, respectively, whereas in the five newly sequenced mitochondrial genomes it terminated with T (Table S8). A large poly(A) segment appeared in the nad4 and nad5 genes of the five newly sequenced Ricaniidae mitogenomes.
The RSCU (relative synonymous codon usage) of eight Ricaniidae mitogenomes is shown in Figure 3. Phe (UUU), Ile (AUU), Met (AUA), Ser (UCA), and Leu (UUA) were observed to be the most frequently used codons. The amino acid compositions were mostly A or U, indicating the strong AT bias in the whole mitochondrial genome. This codon usage pattern across these eight Ricaniidae mitogenomes was consistent with other planthoppers. However, the codon Thr (ACG) was not found in Ricania speculum and the codons Arg (CGG) and Ala (GCG) were not observed in Pochazia confusa.
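As background for the RSCU values discussed above, this illustrative Python sketch (ours, not the PhyloSuite implementation) computes RSCU for one amino-acid family: each codon's observed count divided by the mean count over its synonymous codons:

```python
from collections import Counter

# Synonymous family for leucine under the invertebrate mitochondrial code.
LEU_CODONS = ["UUA", "UUG", "CUU", "CUC", "CUA", "CUG"]

def rscu(codon_counts, family):
    """RSCU = observed count / (total count of the family / family size)."""
    total = sum(codon_counts.get(c, 0) for c in family)
    if total == 0:
        return {c: 0.0 for c in family}
    expected = total / len(family)
    return {c: codon_counts.get(c, 0) / expected for c in family}

# Toy counts; real values would come from the concatenated PCGs of a mitogenome.
counts = Counter({"UUA": 120, "UUG": 15, "CUU": 30, "CUC": 5, "CUA": 25, "CUG": 2})
for codon, value in rscu(counts, LEU_CODONS).items():
    print(codon, round(value, 2))
```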
Transfer and Ribosomal RNA Genes
The 22 tRNAs were scattered throughout the whole mitogenome in the five newly sequenced Ricaniid species (Figure 2). Their total lengths ranged from 1406 bp (Ricania simulans) to 1424 bp (Pochazia discreta). The tRNAs presented a positive AT skew and GC skew in the five Ricaniid mitogenomes. The A+T content of the tRNAs reached 76.0, 74.8, 75.5, 76.1, and 76.1% in Pochazia confusa, Pochazia discreta, Pochazia guttifera, Ricania simulans, and Ricania fumosa, respectively (Table S7); such a heavy AT bias has also been found in other sequenced planthoppers.
The loss of the DHU arm was found in the trnS gene of the five Ricaniid mitogenomes and the trnV gene within Pochazia confusa, Pochazia guttifera, and Ricania simulans. The remnant tRNAs folded into clover-leaf structures. All five newly sequenced mitogenomes had an unpaired base in the anticodon stem of the trnL2 and trnR genes. In addition, six types of unmatched base pairs, G-U, U-U, A-A, G-A, A-C, and U-C, were found in the secondary structure of tRNAs in these five Ricaniid mitogenomes. The total number of unmatched base pairs were 30 in Pochazia confusa, 29 in Pochazia discreta, 28 in Pochazia guttifera, 32 in Ricania simulans and 28 in Ricania fumosa ( Figures S1-S5).
The Control Region
The control region, encoded on the J-strand, was located between rrnS and trnI ( Figure 2). The control region of all five newly sequenced mitogenomes represented a positive AT skew and negative GC skew, except Ricania simulans, which presented a negative AT skew and positive GC skew. The total length of this region was 1721 bp in Pochazia confusa, 1985 bp in Pochazia discreta, 1763 bp in Pochazia guttifera, 1078 bp in Ricania simulans, and 1652 bp in Ricania fumosa (Tables S2-S7). Comparing tandem repeat regions of eight Ricaniid mitogenomes, the results showed that one repeat region was detected in Pochazia confusa, Pochazia discreta, Pochazia guttifera, Pochazia shantungensis, and Ricania simulans, and three repeat regions were present in Ricania fumosa. In addition, the control region of Ricania speculum and Ricania marginalis had two tandem repeat regions with a second repeat unit of "ATAATATAT". We also found poly (A) or poly (T) in some Ricaniid species (Figure 4).
The average Ka/Ks rates of the 13 PCGs were calculated among eight Ricaniidae species with Cromna sinensis as the reference sequence. The Ka/Ks values were all less than 1, indicating purifying selection on each gene. The Ka/Ks ratio of cox1 (ω = 0.088) exhibited the strongest purifying selection, while atp8 (ω = 0.536) exhibited the weakest. Furthermore, the average genetic distances among the eight Ricaniidae species, with Cromna sinensis as the reference sequence, showed that nad2 (0.428) is evolving comparatively fast, while cox1 (0.198) is relatively slower (Figure 6).

According to the AliGROOVE assessment, heterogeneity was higher in pairwise sequence comparisons involving the Delphacidae, whereas heterogeneity was lower among the other planthoppers. Results also indicated that PCG-AA had lower heterogeneity than the other data sets. Comparing the sequence composition heterogeneity of the four nucleotide data sets (PCG, PCGRNA, PCG12 and PCG12RNA), we found that the third codon positions of the PCGs had higher heterogeneity (Figure 7).
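The genetic distances above come from MEGA-X; as a rough, simplified illustration (uncorrected p-distance rather than the model-based distance the authors may have used, with hypothetical toy fragments), two aligned gene sequences can be compared in Python as follows:

```python
def p_distance(seq1: str, seq2: str) -> float:
    """Proportion of differing sites between two aligned sequences, ignoring gaps."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper()) if a != "-" and b != "-"]
    if not pairs:
        return 0.0
    diffs = sum(a != b for a, b in pairs)
    return diffs / len(pairs)

# Toy aligned fragments (hypothetical); real input would be aligned cox1 or nad2 genes.
cox1_a = "ATGACTTTATACATAATTTTTGG"
cox1_b = "ATGACATTATATATAATCTTTGG"
print(f"p-distance = {p_distance(cox1_a, cox1_b):.3f}")
```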
Phylogenetic Analyses
To establish the evolutionary relationships within Ricaniidae, the complete mitogenomes of 44 other planthopper species were downloaded from GenBank (Table 1).
Comparative Analysis of Ricaniid Mitogenomes
Current evidence shows that Pochazia discreta has the largest size with 16,411 bp while Ricania simulans has the smallest size with 15,457 bp [16][17][18]. The size variation depends on the variation of length in the control region. Here we found diversity in the repeat unit among the control region of Ricaniid species. The control region of the genus Pochazia had one repeat unit. There are different repeat units in the genus Ricania, but Ricania speculum and Ricania marginalis had two tandem repeat regions with the second repeat unit of "ATAATATAT". Tandem repeats were also found in the control region in other species of Fulgoroidea [18,[33][34][35][36][37].
This study found that cox2, atp6, and nad4 ended with an incomplete stop codon T, which is universal in planthoppers [33,34,36]. In addition, nad1 terminated with an incomplete A in Pochazia shantungensis and Ricania speculum, which rarely occurs in other planthoppers. Meanwhile, a large poly(A) segment, which probably makes mitochondrial genome sequencing rather difficult, was found in the nad4 and nad5 genes of all five Ricaniidae mitogenomes. It has also been found in some other planthopper species [18,33,34].
Nucleotide Diversity of Ricaniid Mitogenomes
When fully analyzed, the cox1 gene can provide reliable, rapid species-level classifications and/or species identifications, and it has been used for this purpose by Ceotto [13] and many others [38–44]. This analysis indicates that the cox1 gene is the slowest evolving and the most conserved relative to the other PCGs. The nad2 gene has a relatively faster evolutionary rate, suggesting that nad2 would be a suitable candidate marker for species classification in Ricaniidae.
Phylogeny
The monophyly of Ricaniidae and the sister grouping of Ricaniidae and Flatidae were supported in all analyses. This was consistent with the explorations of Emeljanov (1990) and Song and Liang (2013) [6,9]. Within Ricaniidae, the phylogenetic relationships among the eight species were stable across the different data sets and analytical methods: Ricania speculum was recovered as sister to Ricania marginalis, and Pochazia shantungensis grouped with Pochazia guttifera, all with high node support. This was consistent with the previous studies of Kang and others [12,16,17,45,46]. In addition, all analyses here failed to support the monophyly of both Pochazia and Ricania, which is congruent with the result of Akiner et al. (2019) [46]. Therefore, the diagnoses between these two genera cannot be resolved until more evidence is acquired.
Prior to this study, two mitochondrial data sets each were available in GenBank for Pochazia shantungensis and Ricania speculum. We consider the two mitochondrial sequences of Ricania speculum (Ricania speculum-KX371891 and Ricania speculum-MT834932) to be from the same species because the similarity between the two sequences (without the control region) is 99.0%; therefore, either of the two mitochondrial sequences can be used in this analysis. The similarity between the two mitochondrial sequences of Pochazia shantungensis (without the control region) (Ricania shantungensis-MW036196 and Ricania shantungensis-MT898421) from Korea is 93.6%. This genetic gap seemed too large, so we compared the cox1 gene of Pochazia shantungensis from China with the cox1 genes of the two mitochondrial data sets of Pochazia shantungensis from Korea and found that Ricania shantungensis-MT898421 and Pochazia shantungensis from China had the higher similarity (97.7%); that sequence was therefore used in this analysis.
The results presented in this paper are the beginning of a new period of research on the phylogenetic position of the family Ricaniidae and its phylogeny and evolution as well as genetic diversity within the family and between species. In addition, the species used in this study will constitute the basis for in-depth research on population and species variability for species that are listed as pests. For example, for Pochazia shantungensis more than 200 host plants (81 families, 157 genera) have currently been reported, and for Ricania speculum, more than 140 host plants (54 families, 108 genera) are known (Stroiński, unpublished). Are they related and how? Do they belong to the same genus? Further molecular studies from a larger area of distribution will answer this question.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biology11010092/s1. Figure S1. The secondary structure for the tRNAs of Pochazia confusa. Figure S2. The secondary structure for the tRNAs of Pochazia discreta. Figure S3. The secondary structure for the tRNAs of Pochazia guttifera. Figure S4. The secondary structure for the tRNAs of Ricania simulans. Figure S5. The secondary structure for the tRNAs of Ricania fumosa. Figure S6. Phylogenetic trees obtained from PhyloBayes based on the data sets of PCG, PCGRNA, PCG12 and PCG12RNA. Figure S7. Phylogenetic trees obtained from IQ-TREE and MrBayes based on the data sets of PCG-AA. Figure S8. Phylogenetic trees obtained from PhyloBayes based on the data sets of PCG-AA. Table S1. Species investigated and their related information. Table S2. Mitogenomic organization of Pochazia confusa. Table S3. Mitogenomic organization of Pochazia discreta. Table S4. Mitogenomic organization of Pochazia guttifera. Table S5. Mitogenomic organization of Ricania simulans. Table S6. Mitogenomic organization of Ricania fumosa. Table S7. Nucleotide composition of mitogenomes of the five species. Table S8. Start and stop codons of eight Ricaniidae mitochondrial genomes. Table S9. Best partitioning schemes and models based on five data sets for IQ-TREE analysis. Table S10. Best partitioning schemes and models based on five data sets for MrBayes analysis.
"Biology",
"Environmental Science"
] |
[D-Arg1,D-Trp5,7,9,Leu11]Substance P coordinately and reversibly inhibits bombesin- and vasopressin-induced signal transduction pathways in Swiss 3T3 cells.
The novel substance P (SP) analogue, [D-Arg1,D-Trp5,7,9,Leu11]SP like [D-Arg1,D-Phe5,D-Trp7,9,Leu11]SP inhibited DNA synthesis induced by bombesin, vasopressin, and bradykinin, but did not interfere with the mitogenic response induced by other growth factors or pharmacological agents in Swiss 3T3 cells. [D-Arg1,D-Trp5,7,9,Leu11]SP reversibly inhibited bombesin-induced DNA synthesis, causing a 6-fold greater rightward shift in the bombesin dose response than [D-Arg1,D-Phe5,D-Trp7,9,Leu11]SP at identical concentrations (10 μM). We found that the new, more potent, SP analogue coordinately and reversibly inhibited bombesin-induced Ca2+ mobilization and protein kinase C (PKC) and mitogen-activated protein (MAP) kinase activation. The dose-response curves for bombesin-induced Ca2+ mobilization and MAP kinase activation were similarly displaced (51- and 40-fold, respectively) by [D-Arg1,D-Trp5,7,9,Leu11]SP. In addition, [D-Arg1,D-Trp5,7,9,Leu11]SP reversibly inhibited bombesin-induced tyrosine phosphorylation of Mr 110,000-130,000 and 70,000-80,000 bands as well as p125 focal adhesion kinase. [D-Arg1,D-Trp5,7,9,Leu11]SP also reversibly and coordinately inhibited vasopressin-induced Ca2+ mobilization, PKC stimulation, MAP kinase activation, tyrosine phosphorylation, and DNA synthesis in Swiss 3T3 cells. Surprisingly, deletion of the terminal Leu of [D-Arg1,D-Phe5,D-Trp7,9,Leu11]SP to yield [D-Arg1,D-Phe5,D-Trp7,9]SP1-10 resulted in a selective loss of inhibitory activity of this analogue against bombesin- but not vasopressin-stimulated DNA synthesis, Ca2+ mobilization, and MAP kinase activation. Collectively, these results suggest that SP analogues act at the receptor level to coordinately and reversibly antagonize bombesin- or vasopressin-induced signal transduction in Swiss 3T3 cells.
Interestingly, multiple neuropeptides, including bombesin and vasopressin, have also been implicated as autocrine and paracrine growth factors for small cell lung cancer (36), breast (37), and prostate cancer (38). Consequently, it may be useful to develop inhibitors of neuropeptide action, which have a broader spectrum than specific receptor antagonists (39).
SP analogues were initially synthesized to generate specific SP antagonists. Surprisingly, the synthetic SP analogues, [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Leu 11 ]SP and [Arg 6 ,D-Trp 7,9 , MePhe 8 ]SP (6 -11) were found to inhibit the action of a broad range of neuropeptides structurally unrelated to SP, including bombesin-and vasopressin-stimulated DNA synthesis in Swiss 3T3 cells (40 -44). In contrast, they did not inhibit mitogenesis stimulated by either vasoactive intestinal peptide, which induces cAMP accumulation via G s , or platelet-derived growth factor, which signals through receptors with intrinsic tyrosine kinase activity (45). More recently, [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Leu 11 ]SP and [Arg 6 ,D-Trp 7,9 ,MePhe 8 ]SP (6 -11) have been shown to inhibit small cell lung cancer cell proliferation in liquid culture, soft agar, and as xenografts in nude mice (46 -48). Despite their intriguing biological effects and potential importance as antiproliferative agents, the mechanism of action of SP analogues as broad spectrum inhibitors of neuropeptide-mediated signal transduction remains incompletely understood.
A recent report has proposed that [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Leu 11 ]SP selectively uncouples PIP 2 -PLC-β from the bombesin receptor (49). It was therefore suggested that the inhibitory effect of the SP analogues on cell proliferation could be attributed to disruption of the coordinated regulation of bombesin-induced signal transduction pathways. This proposal was based on the fact that high concentrations of bombesin reversed the inhibition of MAP kinase but not of PIP 2 -PLC-β activation caused by [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Leu 11 ]SP in Swiss 3T3 cells (49). In contrast, we have recently demonstrated that high concentrations of either bombesin or vasopressin reverse the inhibitory effect of this SP analogue on inositol phosphate production in Swiss 3T3 cells (50). These discrepant results prompted us to examine the inhibitory effect of several novel SP analogues on the multiple signal transduction pathways induced by bombesin and vasopressin in Swiss 3T3 cells.
EXPERIMENTAL PROCEDURES
Cell Culture-Stock cultures of Swiss 3T3 fibroblasts were maintained in DMEM supplemented with 10% fetal bovine serum in a humidified atmosphere containing 10% CO2 and 90% air at 37°C. For experimental purposes, cells were plated in 33- or 90-mm Nunc Petri dishes at 10^5 cells or 6 × 10^5 cells/dish, respectively, in DMEM containing 10% fetal bovine serum and used after 6-8 days, when the cells were confluent and quiescent.
Assays of [3H]Thymidine Incorporation-Quiescent and confluent cells in 33-mm dishes were washed twice in DMEM and incubated at 37°C in 2 ml of a 1:1 mixture of DMEM and Waymouth medium containing 1 µCi/ml [3H]thymidine with various additions as indicated. After 40 h, unless otherwise indicated, acid-precipitable material was measured as described previously (51).
Measurement of Intracellular Calcium-[Ca2+]i was measured with the fluorescent Ca2+ indicator fura-2/AME using a modification of the procedure previously described (52). Quiescent cells in 90-mm dishes were washed twice in DMEM and then incubated at 37°C for 10 min in 5 ml of DMEM with 1 µM fura-2 tetraacetoxymethyl ester. The dishes were washed three times in phosphate-buffered saline at 37°C, and the cells were then suspended in 2 ml of electrolyte solution containing 120 mM NaCl, 5 mM KCl, 1.8 mM CaCl2, 0.9 mM MgCl2, 25 mM glucose, 16 mM Hepps, 6 mM Tris, and an amino acid mixture equivalent to DMEM (pH 7.2) by gentle scraping and transferred to a quartz cuvette. The suspension was stirred continuously and maintained at 37°C. Various factors were added as indicated in the figure legends. Fluorescence was monitored in a Perkin-Elmer LS-5 luminescence spectrophotometer with an excitation wavelength of 336 nm and an emission wavelength of 510 nm. [Ca2+]i was calculated using the formula [Ca2+]i (nM) = K(F − Fmin)/(Fmax − F), where F is the fluorescence at the unknown [Ca2+]i, Fmax is the fluorescence after addition of 0.02% Triton X-100, and Fmin is the fluorescence after the Ca2+ in the solution is chelated with 10 mM EGTA. The value of K was 220 nM for fura-2/AME (52).

32P-Labeling of Cells and Analysis of 80K/MARCKS Phosphorylation-Quiescent and confluent cultures in 33-mm dishes were washed twice in phosphate-free DMEM and incubated at 37°C in this medium containing 50 µCi/ml of carrier-free [32P]Pi. After 18 h, various factors were added for the indicated times. The cells were then lysed in 500 µl/dish of a solution containing 10 mM Tris/HCl, pH 7.6, 5 mM EDTA, 50 mM NaCl, 30 mM sodium pyrophosphate, 50 mM NaF, 100 µM Na3VO4, 50 mM phenylmethylsulfonyl fluoride, and 0.5% Triton X-100 (lysis buffer) supplemented with 3.5 µg/ml aprotinin and 1 µg/ml leupeptin, and the lysates were clarified by centrifugation at 15,000 × g for 20 min at 4°C. The supernatants were then immunoprecipitated with a specific anti-80K/MARCKS antibody (53), and the immunoprecipitates were further analyzed by SDS-PAGE prior to autoradiography.
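The calibration formula above maps raw fluorescence to [Ca2+]i; as an illustrative helper only (our own sketch, with made-up fluorescence readings), it can be applied in Python as follows:

```python
def calcium_nM(F: float, F_min: float, F_max: float, K: float = 220.0) -> float:
    """[Ca2+]i (nM) = K * (F - Fmin) / (Fmax - F), with K = 220 nM for fura-2."""
    if not (F_min <= F < F_max):
        raise ValueError("expected Fmin <= F < Fmax")
    return K * (F - F_min) / (F_max - F)

# Hypothetical readings from one cuvette run (arbitrary fluorescence units).
F_min, F_max = 20.0, 300.0
for label, F in [("basal", 90.0), ("peak after bombesin", 210.0)]:
    print(f"{label}: {calcium_nM(F, F_min, F_max):.0f} nM")
```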
Immunoprecipitation of Tyrosine Phosphorylated Proteins-Quiescent and confluent cultures of cells in 33-mm dishes were washed twice with DMEM and then treated with factors as indicated in the figure legends prior to lysis at 4°C in 1 ml of lysis buffer. Lysates were centrifuged at 15,000 × g for 20 min, and the supernatants were incubated for 4 h at 4°C with anti-mouse IgG agarose-linked mAbs directed against phosphotyrosine (Py72). The immunoprecipitates were washed three times with lysis buffer and further analyzed by SDS-PAGE and Western blotting (see below). Cells from parallel cultures treated in an identical fashion were suspended by trypsinization and counted using a Coulter counter to ensure equal numbers of cells per condition.
Immune Complex Assay of p42 MAPK Activity-Quiescent cultures of Swiss 3T3 cells were treated with factors as described in the figure legends and lysed at 4°C. Lysates were clarified by centrifugation at 15,000 × g for 20 min at 4°C, and the supernatants were immunoprecipitated using the polyclonal anti-p42 MAPK antibody together with protein A-agarose beads (40 µl, 1:1 slurry) for 2 h. Immune complexes were collected by centrifugation and washed once in lysis buffer and three times in kinase buffer (15 mM Tris-HCl, 15 mM MgCl2). The kinase reaction was performed by resuspending the pellet in 25 µl of kinase assay mixture containing kinase buffer, 1 mg/ml myelin basic protein-peptide (APRTPGGRR), 100 µM ATP, 100 µCi/ml [γ-32P]ATP, and 200 µM microcystin LR. Incubations were performed for 10 min (linear assay conditions) at 30°C and terminated by spotting 20 µl of the supernatant onto P81 chromatography paper (Whatman). Filters were washed four times, 5 min each, in 0.5% orthophosphoric acid, immersed in acetone, and dried before counting. The average radioactivity of two blank samples containing no immune complex was subtracted from the result of each sample. Results are expressed as a percentage of the maximum response obtained with either bombesin or vasopressin alone in the absence of inhibitors. The specific activity of the [γ-32P]ATP used was 900-1200 cpm/pmol.

SDS-PAGE-Slab gel electrophoresis was performed essentially according to the method of Laemmli (54). Specifically, the slab gels were 1.5 mm thick with 1.5 cm of a 4% acrylamide stacking gel and 12 cm of an 8 or 10% acrylamide resolving gel. Samples (100 ml) were electrophoresed at 20 V for 30 min, then run overnight at 50 V, and finally at 150 V for 30 min before terminating the run. Gels for 80K/MARCKS experiments were fixed in 25% methanol, 10% acetic acid (v/v) prior to drying under vacuum for 2 h at 80°C. Radioactivity was detected at −70°C using Fuji x-ray film with exposure times of 12-72 h.
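To illustrate how the counting data described above turn into the reported kinase activities (our own sketch; the numbers are hypothetical), blank-corrected cpm can be converted to pmol of incorporated phosphate via the specific activity and then normalized to the maximal agonist response:

```python
def kinase_activity(sample_cpm, blank_cpm, specific_activity_cpm_per_pmol=1000.0):
    """Blank-corrected 32P incorporation in pmol for one P81 filter."""
    corrected = sample_cpm - sum(blank_cpm) / len(blank_cpm)
    return max(corrected, 0.0) / specific_activity_cpm_per_pmol

blanks = [210.0, 230.0]                      # filters with no immune complex
maximal = kinase_activity(15500.0, blanks)   # bombesin alone, no inhibitor
treated = kinase_activity(6200.0, blanks)    # bombesin plus SP analogue

print(f"maximal response: {maximal:.1f} pmol")
print(f"with SP analogue: {100.0 * treated / maximal:.0f}% of maximum")
```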
Materials-Bombesin, vasopressin, EGF, and IgG-agarose were obtained from Sigma. Protein A-agarose was from Boehringer Mannheim. Anti-Tyr(P) mAb clone Py72 was obtained from the hybridoma development unit, Imperial Cancer Research Fund, London, UK. PY20 anti-Tyr(P) mAb was from ICN. 4G10 anti-Tyr(P) mAb was from Upstate Biotechnology Inc., Lake Placid, NY. The anti-p125 FAK mAb for Western blotting was obtained from AFFINITI Research Products Ltd., Nottingham, UK. The polyclonal anti-p42 MAPK (anti-ERK-2) antibody raised against a COOH-terminal peptide (EETARFQPGYRS) was a generous gift from Dr. J. Van Lint (Katholieke Universiteit Leuven, Belgium). 125 11 ]SP resulted in a broad spectrum neuropeptide antagonist which was 5-fold more potent (44). We reasoned that further substitutions at this position might result in SP analogues with increased potency. We established that substitution of D-Phe at position 5 with D-Tyr did not significantly affect antagonistic activity against bombesin (data not shown). In contrast, when this residue was replaced by D-Trp to form [D-Arg 1 ,D-Trp 5,7,9 ,Leu 11 ]SP, an increase in inhibitory activity was obtained (see below). In view of these results, [D-Arg 1 ,D-Trp 5,7,9 ,Leu 11 ]SP was selected for further investigation. (Fig. 3, upper). In particular, the ability of [D-Arg 1 ,D-Trp 5,7,9 ,Leu 11 ]SP to block MAP kinase activation was, just like Ca2+ mobilization, reversed at high concentrations of bombesin. To further substantiate the results obtained with the immune complex MAP kinase assay, lysates of Swiss 3T3 cells stimulated with bombesin in the absence or presence of [D-Arg 1 ,D-Trp 5,7,9 ,Leu 11 ]SP were subjected to SDS-PAGE followed by Western blotting with anti-MAP kinase antibody (55). The activation of MAP kinase induced by bombesin was inhibited by increasing concentrations of the SP analogue, as judged by the disappearance of the slower migrating forms in the mobility shift assay (Fig. 3, lower).
[D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ]SP 1-10 Reversibly Inhibits Vasopressin- but Not Bombesin-induced DNA Synthesis, Ca2+ Mobilization, and MAP Kinase Activation-Recently, it has been shown that SP analogues are metabolized predominantly by oxidation of the amino acid at the COOH terminus (56). To test the effect of additional SP analogues on neuropeptide-stimulated mitogenesis and signal transduction, we used synthetic peptide analogues with substitutions or deletions of the terminal amino acid of [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Leu 11 ]SP (shown in Fig. 6). Replacement of the terminal Leu of [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Leu 11 ]SP with Val to produce [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Val 11 ]SP did not change the inhibitory activity of the SP analogue against either bombesin- or vasopressin-induced mitogenesis (Fig. 6), Ca2+ mobilization, or MAP kinase activation (Fig. 7). In addition, the inhibitory effect of both SP analogues could be reversed by high concentrations of either neuropeptide (data not shown). In contrast, substitution of the terminal Leu with Gly resulted in a SP analogue with almost no inhibitory effect on mitogenesis, Ca2+ mobilization, or MAP kinase activation stimulated by either neuropeptide (Figs. 6 and 7).
Deletion of the terminal Leu to form [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ]SP 1-10 produced an antagonist which potently inhibited vasopressin-induced mitogenesis (Fig. 6), Ca2+ mobilization, and MAP kinase activation (Fig. 7). Surprisingly, [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ]SP 1-10 did not significantly block mitogenesis (Fig. 6) and only weakly inhibited Ca2+ mobilization and MAP kinase activation (Fig. 7).

[D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Leu 11 ]SP has been previously shown to block the action of multiple neuropeptides including bombesin and vasopressin in Swiss 3T3 cells (4, 40-44) and to inhibit small cell lung cancer cell growth in vitro and as xenografts in vivo (46-48). Here, we demonstrate that substitution of D-Phe at position 5 with D-Trp to form [D-Arg 1 ,D-Trp 5,7,9 ,Leu 11 ]SP produced an SP analogue with a further increase in potency against neuropeptide-induced mitogenesis. Importantly, this new SP analogue, like previously identified SP antagonists, inhibited DNA synthesis induced by bombesin, vasopressin, and bradykinin, but did not interfere with the mitogenic response induced by other growth factors or pharmacological agents.
A recent report has suggested that [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Leu 11 ]SP can selectively inhibit the activation of PIP 2 -PLC-β but not of MAP kinase at high bombesin concentrations in Swiss 3T3 cells (49). It was proposed that disruption of the coordinate regulation of bombesin-induced signaling pathways contributes to the growth-inhibitory properties of [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Leu 11 ]SP (49). This implies that the growth-inhibitory effects of the SP analogues should not be reversed by high concentrations of agonist. However, our results demonstrate that 1) the inhibition of DNA synthesis by [D-Arg 1 ,D-Trp 5,7,9 ,Leu 11 ]SP was reversed by increasing concentrations of bombesin, 2) although the inhibition curves for [D-Arg 1 ,D-Trp 5,7,9 ,Leu 11 ]SP on bombesin-induced Ca2+ mobilization and MAP kinase activation were slightly different, the dose-response curves for bombesin-induced Ca2+ mobilization and MAP kinase activation were similarly displaced by the SP analogue, and 3) importantly, the inhibitory effect of the SP analogue on both Ca2+ mobilization and MAP kinase activation could be completely reversed at high bombesin concentrations. These results prompted us to perform additional experiments to further test the mechanism of action of [D-Arg 1 ,D-Trp 5,7,9 ,Leu 11 ]SP and other related peptides.
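The "rightward shift" language used throughout refers to the ratio of EC50 values with and without antagonist. A minimal sketch of how such a fold-shift can be estimated (ours, with simulated data, assuming a simple four-parameter Hill model rather than the authors' actual fitting procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(log_dose, bottom, top, log_ec50, slope):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_ec50 - log_dose) * slope))

log_doses = np.linspace(-11, -6, 9)              # log10 [bombesin], M
control = hill(log_doses, 5, 100, -9.0, 1.0)     # simulated control curve
shifted = hill(log_doses, 5, 100, -7.3, 1.0)     # simulated curve with SP analogue

p0 = [0, 100, -8, 1]
(_, _, ec50_ctrl, _), _ = curve_fit(hill, log_doses, control, p0=p0)
(_, _, ec50_sp, _), _ = curve_fit(hill, log_doses, shifted, p0=p0)

print(f"fold shift = {10 ** (ec50_sp - ec50_ctrl):.0f}x")  # ~50-fold in this toy example
```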
It is well established that bombesin induces a rapid increase in the tyrosine phosphorylation of multiple substrates including p125 FAK (29-35), through a signal transduction pathway that is mediated by p21 Rho (58,59). This pathway is not dependent on either PKC activation or Ca2+ mobilization (32,34,60). Recently, we verified that the bombesin receptor transfected and stably expressed in Rat-1 cells mediates Ca2+ mobilization, PKC activation as well as tyrosine phosphorylation of multiple substrates including p125 FAK (61). In the present study we demonstrate that [D-Arg 1 ,D-Trp 5,7,9 ,Leu 11 ]SP reversibly inhibited bombesin-induced tyrosine phosphorylation of multiple substrates including p125 FAK in Swiss 3T3 cells. These data support the proposition that this SP analogue coordinately inhibits the activation of the signal transduction pathways emanating from the bombesin receptor.
Vasopressin binds to a distinct G q -coupled receptor that also induces PIP 2 -PLC-β-mediated Ca2+ mobilization, 80K/MARCKS phosphorylation, MAP kinase activation, and tyrosine phosphorylation of multiple substrates including p125 FAK in Swiss 3T3 cells (reviewed in Ref. 2). In order to substantiate our findings with bombesin, we also studied the effect of [D-Arg 1 ,D-Trp 5,7,9 ,Leu 11 ]SP on vasopressin-stimulated signaling events. As previously seen with bombesin, we found that this SP analogue coordinately inhibited vasopressin-induced Ca2+ mobilization, 80K/MARCKS phosphorylation, MAP kinase activation, tyrosine phosphorylation, and reinitiation of DNA synthesis. In particular, the dose responses for vasopressin-induced Ca2+ mobilization and MAP kinase activation were similarly displaced by [D-Arg 1 ,D-Trp 5,7,9 ,Leu 11 ]SP.
The coordinate inhibition of neuropeptide-stimulated signal transduction pathways could be a feature specific to the new SP analogue, [D-Arg 1 ,D-Trp 5,7,9 ,Leu 11 ]SP, rather than a common property of all SP analogue antagonists. We verified that [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Leu 11 ]SP could also coordinately block bombesin- and vasopressin-induced mitogenesis, Ca2+ mobilization, and MAP kinase activation. Furthermore, we examined the effect of additional SP analogues generated by substitutions or deletion of the terminal amino acid of [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Leu 11 ]SP. The conservative substitution of Leu 11 with Val yielded a peptide that behaved identically to [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Leu 11 ]SP and [D-Arg 1 ,D-Trp 5,7,9 ,Leu 11 ]SP. Thus, three different SP analogues inhibit neuropeptide-induced mitogenesis, Ca2+ mobilization, and MAP kinase activation in a reversible and coordinate fashion.
A model that accounts for the coordinate inhibition of bombesin- or vasopressin-stimulated signal transduction by [D-Arg 1 ,D-Trp 5,7,9 ,Leu 11 ]SP and related peptides is that these SP analogues interfere with agonist binding to their receptors. In fact, we have previously shown that SP analogues competitively inhibit ligand binding (50), but these findings could not rule out an indirect mechanism mediated by uncoupling of a G protein from the receptor. Surprisingly, deletion of the terminal Leu of [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Leu 11 ]SP to form [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ]SP 1-10 has provided novel mechanistic insight into this problem. We found that [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ]SP 1-10 potently inhibited vasopressin-induced mitogenesis, Ca2+ mobilization, and MAP kinase activation. In contrast, [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ]SP 1-10 did not significantly block bombesin-induced mitogenesis and only weakly inhibited Ca2+ mobilization and MAP kinase activation, even at concentrations where these responses were completely inhibited by either [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Leu 11 ]SP or [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ,Val 11 ]SP. Therefore, the SP analogue [D-Arg 1 ,D-Phe 5 ,D-Trp 7,9 ]SP 1-10 coordinately inhibits vasopressin- but not bombesin-induced signal transduction. This differential modulation strongly suggests that the truncated SP analogue acts as a potent vasopressin (but not bombesin) receptor antagonist. Our results imply that these inhibitory molecules block neuropeptide-mediated signal transduction at the receptor level.
"Biology",
"Chemistry"
] |
Skew Killing spinors in four dimensions
This paper is devoted to the classification of 4-dimensional Riemannian spin manifolds carrying skew Killing spinors. A skew Killing spinor $\psi$ is a spinor that satisfies the equation $\nabla_X \psi = AX \cdot \psi$ with a skew-symmetric endomorphism $A$. We consider the degenerate case, where the rank of $A$ is at most two everywhere, and the non-degenerate case, where the rank of $A$ is four everywhere. We prove that in the degenerate case the manifold is locally isometric to the Riemannian product $\mathbb{R} \times N$ with $N$ having a skew Killing spinor, and we explain under which conditions on the spinor the special case of a local isometry to $S^2 \times \mathbb{R}^2$ occurs. In the non-degenerate case, the existence of skew Killing spinors is related to doubly warped products whose defining data we will describe.
Introduction
Let (M n , g) be an n-dimensional Riemannian spin manifold. A generalised Killing spinor on M is a section ψ of the spinor bundle ΣM of M satisfying the overdetermined differential equation ∇ X ψ = AX · ψ for some symmetric endomorphism field A of T M . Here and as usual, "·" denotes the Clifford multiplication on ΣM . Numerous papers have been devoted to the classification of Riemannian spin manifolds carrying such spinors. Several results have been obtained for particular A but it is still an open problem to get a complete classification for general A. Let us quote some of these results. First, recall that when A is the zero tensor field, that is, the corresponding spinor is parallel, then McK. Wang [22] showed that such manifolds can be characterised by their holonomy groups which can be read off the Berger classification. The case where A is a nonzero real multiple of the identity is that of classical real Killing spinors. It was shown by C. Bär [2] that real Killing spinors correspond to parallel spinors on the (irreducible) cone over the manifold, to which then McK. Wang's result applies. Furthermore, in dimension n ≤ 8, there are several results on a classification up to isometry [5,16]. When the tensor A is parallel [18], or a Codazzi tensor [4] or both A and g are analytic [1] (see also [8]), it is shown that the manifold M is isometrically embedded into another spin manifold of dimension n + 1 carrying a parallel spinor and that the tensor A is the half of the second fundamental form of the immersion. We also cite the partial classification of generalised Killing spinors on the round sphere [21,19] and on 4-dimensional Einstein manifolds of positive scalar curvature [20] where in some cases the generalised Killing spinor turns out to be a Killing spinor.
In this paper, we are interested in an equation dual to the generalised Killing one, which we call the skew Killing spinor equation. More precisely, on a given Riemannian spin manifold (M^n, g), a spinor field ψ is called a skew Killing spinor if it satisfies, for some skew-symmetric endomorphism field A of TM, the differential equation ∇_X ψ = AX · ψ for all X ∈ TM. This equation was originally defined in [14]. Each skew Killing spinor is a parallel section with respect to the modified metric connection ∇ − A ⊗ Id; in particular, it has constant length. Moreover, for a given skew-symmetric endomorphism field A of TM, the space of skew Killing spinors is a complex vector space of dimension at most rk_C(ΣM) = 2^{[n/2]}.
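The constant-length claim follows from the skew-adjointness of Clifford multiplication by tangent vectors; a short derivation (added here for the reader, not in the original) is

```latex
X\bigl(|\psi|^2\bigr)
  = 2\,\mathrm{Re}\,\langle \nabla_X\psi,\psi\rangle
  = 2\,\mathrm{Re}\,\langle AX\cdot\psi,\psi\rangle
  = -2\,\mathrm{Re}\,\langle \psi, AX\cdot\psi\rangle
  = 0,
```

since ⟨X · φ, ψ⟩ = −⟨φ, X · ψ⟩ for every tangent vector X, so |ψ| is constant.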
Very few examples of Riemannian spin manifolds (M n , g) carrying skew Killing spinors are known for which A = 0. For 2-dimensional manifolds, apart from R 2 or quotients thereof with trivial spin structure, only the round sphere of constant curvature can carry such spinors and in that case they correspond to restrictions of Killing spinors from S 3 onto totally geodesic S 2 [14]. In that case, the tensor A coincides with the standard complex structure J induced by the conformal class of S 2 or with −J depending on the sign of the Killing constant chosen on S 3 . Each skew Killing spinor on S 2 immediately gives rise to a three-dimensional example, namely to a skew Killing spinor on S 2 × R, where A = ±J on S 2 is trivially extended to the R-factor. More generally, for a manifold of dimension n = 3 the following is known [14,Prop. 4.3]. If M 3 admits a skew Killing spinor ψ, then, locally, ψ can be transformed into a parallel spinor by a suitable conformal change of the metric. In particular, M 3 is locally conformally flat. If, in addition, M 3 is simply-connected, then this conformal change is defined globally. Conversely, if (M 3 , g) admits a nonzero parallel spinor, then for any conformal change of g, there exists a skew Killing spinor with respect to the new metric. See Section 4.1 for more detailed information.
Obvious examples in four dimensions can be obtained as products N × R, where N is a threedimensional manifold admitting a skew Killing spinor, see Example 4.1. A special case of this construction is the product S 2 ×R 2 , see Example 4.2. For each of the endomorphisms A ± := ±J ⊕0, this manifold admits the maximal number of skew Killing spinors.
The main purpose of this work is to establish a classification result when the dimension of M is four. Note that the pointwise rank of A is either zero, two or four. We will split the classification into two parts. In Section 4 we will study the degenerate case, where the rank of A is at most two everywhere. In Section 5 we will consider the case where rk(A) = 4 on all of M . Before we start the classification, we determine the general integrability conditions in arbitrary dimensions arising from the existence of a skew Killing spinor, see Section 2. In Sections 3 and 4, we specify these conditions to four dimensions, especially to the degenerate case. We use that the spinor bundle ΣM splits into the eigenspaces Σ + M and Σ − M of the volume form and the bundle of two-forms splits into those of self-dual and of anti-self-dual forms, which act on Σ ± M . We also adapt some techniques used in [20] but for a skew-symmetric endomorphism A. We use the integrability conditions to achieve the following classification result in case that the Killing map is degenerate everywhere.
Theorem A. Let (M 4 , g) be a connected Riemannian spin manifold carrying a skew Killing spinor ψ, where the rank of the corresponding skew-symmetric tensor field A is at most two everywhere. Then either ψ is parallel on M or, around every point of M , we have a local Riemannian splitting R × N with N having a skew Killing spinor. If, in addition, the length of the summand ψ + in the decomposition ψ = ψ + + ψ − ∈ Σ + M ⊕ Σ − M is not constant, then we are in the second case with N = R × S 2 , that is, (M, g) is a local Riemannian product S 2 × R 2 around every point.
For a more detailed formulation see Theorem 4.13, where we also discuss the global structure of (M, g) if M is complete.
Let us turn to the case where the Killing map is non-degenerate everywhere. In Section 5.1 we will prove that, essentially, the existence of a skew Killing spinor ψ with non-degenerate Killing map A is equivalent to the existence of a Killing vector field η and an almost complex structure J satisfying certain conditions, see Proposition 5.1 for a detailed formulation. The spinor ψ and the data η and J are related by the equations J(X) · ψ − = iX · ψ − and g(η, X) = X · ψ + , ψ − /|ψ| 2 for all X ∈ T M .
In Section 5.2, we consider the special case where Aη is parallel to Jη. Then AJ = JA holds and J is integrable, see Remark 5.3. Manifolds with skew Killing spinors satisfying these conditions are related to doubly warped products. A doubly warped product is a Riemannian manifold (M, g) of the form (I × M̂, dt² ⊕ ρ(t)² ĝ_η̂ ⊕ σ(t)² ĝ_η̂⊥), where (M̂, ĝ) is a Riemannian manifold with unit Killing vector field η̂, and ĝ_η̂, ĝ_η̂⊥ are the components of the metric ĝ along Rη̂ and η̂⊥, respectively, I ⊂ R is an open interval and ρ, σ : I → R are smooth positive functions on I. Locally, doubly warped products can be equivalently described as local DWP-structures, see the appendix. On M̂, we define a function τ̂ by ∇̂_X η̂ = τ̂ · Ĵ(X) for X ∈ η̂⊥, where Ĵ is a fixed Hermitian structure on η̂⊥. Locally, (M̂, ĝ) is a Riemannian submersion over a two-dimensional base manifold B. Let K̂ denote the Gaussian curvature of B. We obtain the following result, see Theorem 5.5 and Corollary 5.8.
Theorem B. Let (M, g) admit a skew Killing spinor such that Aη || Jη and |η| ∉ {0, 1/2} everywhere. Then M is locally isometric to a doubly warped product for which the data K̂ and τ̂ are constant and ρ and σ satisfy the differential equations. Conversely, if M is isometric to a simply-connected doubly warped product for which the data K̂ and τ̂ are constant and ρ and σ satisfy the above differential equations, then (M, g) admits a skew Killing spinor such that Aη || Jη.
The differential equations in Theorem B can be locally solved and one obtains explicit formulas for the doubly warped product. Let us finally mention that the skew Killing spinors on M = I × M̂ are related to quasi Killing spinors in the sense of [10] on M̂, see Remark 5.10.
The Hodge star operator satisfies ∗² = (−1)^{p(n−p)} on p-forms and has the following useful properties for any vector field X. Recall also that the Clifford multiplication between a vector field X and a differential p-form ω is defined as X · ω = X ∧ ω − X⌟ω, from which the identity X · Y + Y · X = −2g(X, Y) follows for any vector fields X and Y.
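For instance, specialising the formula to a 1-form ω = Y gives the quoted anticommutation relation (a short check added here):

```latex
X \cdot Y = X \wedge Y - g(X,Y), \qquad
Y \cdot X = Y \wedge X - g(Y,X) = -\,X \wedge Y - g(X,Y),
```

and adding the two identities yields X · Y + Y · X = −2g(X, Y).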
From now on, we assume M to be spin with a fixed spin structure. In that case, there exists a Hermitian vector bundle ΣM → M, called the spinor bundle, on which the tangent bundle TM acts by Clifford multiplication, TM ⊗ ΣM → ΣM; X ⊗ ψ → X · ψ. We will write XY · ψ instead of X · Y · ψ. Recall that a real p-form also acts by Clifford multiplication in a formally self- or skew-adjoint way according to its degree: for any p-form ω and any spinors ϕ, ψ, we have ⟨ω · ϕ, ψ⟩ = (−1)^{p(p+1)/2} ⟨ϕ, ω · ψ⟩. The Levi-Civita connection ∇ on M defines a metric connection, also denoted by ∇, on ΣM with respect to the Hermitian product ⟨· , ·⟩, and this connection preserves Clifford multiplication. In other words, for all X, Y ∈ Γ(TM) and all spinors ϕ, ψ, the rules X⟨ϕ, ψ⟩ = ⟨∇_X ϕ, ψ⟩ + ⟨ϕ, ∇_X ψ⟩ and ∇_X(Y · ψ) = (∇_X Y) · ψ + Y · ∇_X ψ hold. Denoting by R^{ΣM} the curvature tensor associated with the connection ∇, the spinorial Ricci identity relates R^{ΣM} to the Ricci curvature of (M, g) for all ψ and X; see e.g. [5, Eq. 1.13].
In the following, we will assume the manifold M to carry a skew-Killing spinor field ψ with corresponding skew-symmetric endomorphism A. We make A into a 2-form via the metric g, that is, we consider (X, Y) → g(AX, Y), which we still denote by A. In a pointwise orthonormal basis {e_i}_{i=1,...,n} of T M, we have A = (1/2) Σ_{j=1}^n e_j ∧ Ae_j (mind the factor 1/2). In particular, Clifford multiplication of any spinor field ψ by A is given by the corresponding sum of Clifford products. In the next proposition, we compute the curvature data arising from the existence of such a spinor. These integrability equations will play a crucial role for the classification in the 4-dimensional case.
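The Clifford action of the 2-form A mentioned just above can be spelled out as follows; this is a worked restatement in the notation of this paragraph rather than new material.

```latex
\[
  A \cdot \psi \;=\; \frac12 \sum_{j=1}^{n} e_j \cdot A e_j \cdot \psi ,
\]
% which follows from A = (1/2) \sum_j e_j \wedge A e_j together with the rule for
% Clifford multiplication by a 2-form, using g(e_j, A e_j) = 0 since A is skew-symmetric.
```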
Proposition 2.1 Let ψ be any solution of (1) on a spin manifold (M n , g) for some skew-symmetric endomorphism field A of T M . Then the following identities hold for X, Y ∈ Γ(T M ) where d is the exterior derivative and δ is the codifferential w.r.t. the metric g.
Proof: We derive (1) and take suitable traces of the identities obtained. First, if x ∈ M and X, Y ∈ Γ(T M ) such that ∇X = ∇Y = 0 at x, then which is the first identity.
Next we fix a local orthonormal basis of T M, which we denote by (e_j)_{1≤j≤n}. Using the spinorial Ricci formula (4) and the identities (3), we compute each term separately. First, Σ_{j=1}^n e_j · (∇_X A)(e_j) · ψ = 2(∇_X A) · ψ by (5), where we see ∇_X A as a 2-form on M. The second sum can be computed in terms of the exterior and the covariant derivatives of A. It remains to notice that, by Equations (3), we have Σ_{j=1}^n e_j · (Ae_j ∧ AX) · ψ = Σ_{j=1}^n e_j · Ae_j · AX · ψ + Σ_{j=1}^n g(Ae_j, AX) e_j · ψ. This shows the second equation.
To obtain the scalar curvature, we trace the spinorial Ricci identity. Given a local orthonormal basis (e_j)_{1≤j≤n} of T M, this yields the last identity. Here, we use the identity Σ_{j=1}^n e_j ∧ (e_j ⌟ ω) = p ω, which holds for any p-form ω.
The vector fields η and ξ in four dimensions
In this section, we consider a 4-dimensional spin manifold (M, g) that carries a skew Killing spinor. On spin manifolds of even dimension 2m, the complex volume form (vol_g)_C := i^m e_1 · e_2 · . . . · e_{2m}, where (e_j)_{j=1,...,2m} is an arbitrary orthonormal frame, splits the spinor bundle into two orthogonal subbundles that correspond to the eigenvalues ±1 of (vol_g)_C. Hence, on our four-dimensional manifold (M, g), we have ΣM = Σ+M ⊕ Σ−M. The spaces Σ±M are preserved by the connection ∇ of the spinor bundle and are interchanged by Clifford multiplication by tangent vectors. According to this decomposition, we write any spinor field ψ as ψ = ψ+ + ψ− and we set ψ̄ := ψ+ − ψ−. Recall now that differential forms act on the spinor bundle ΣM as follows: for any differential p-form ω on M and ψ ∈ Γ(ΣM), ω · ψ = ∗ω · ψ̄ for p = 1, 2 and ω · ψ = −(∗ω) · ψ̄ for p = 3, 4.
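For orientation, here are the defining relations for the vector fields η and ξ as they appear elsewhere in this text (Sections 3 and 5); they are collected here only as a reading aid, not re-derived.

```latex
% Collected from relations quoted elsewhere in this text:
\[
  g(\eta, X) \;=\; \frac{\langle X\cdot\psi^{+},\, \psi^{-}\rangle}{|\psi|^{2}}
  \quad\text{for all } X \in TM,
  \qquad
  \psi^{+} \;=\; \xi\cdot\psi^{-} ,
\]
\[
  \rho := |\eta|, \qquad \rho = \frac{|\xi|}{1+|\xi|^{2}} ,
  \qquad \xi = -\frac{|\xi|}{\rho}\,\eta \quad \text{(Equation (9))}.
\]
```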
We collect some properties of η and ξ that will be used later on.
Proof: Differentiating the function |ψ−|² along any vector field X ∈ T M proves 1. To prove 2, we consider two vector fields X and Y that can be assumed to be parallel at some point x ∈ M; the resulting computation also yields the first part of 3. The divergence of η is clearly zero by 2 and the fact that A is skew-symmetric. Finally, a further computation together with 1 gives 4. The open sets M_0 and M_1 are dense in {p ∈ M | A_p ≠ 0}. Indeed, if, e.g., ψ− vanishes on some open set U ⊂ {p ∈ M | A_p ≠ 0}, then so does its covariant derivative and therefore AX · ψ+ = 0 on U. Hence A = 0 on U, which contradicts the assumption on A.
With the notation introduced above, we have We define also the set
The degenerate case
In this section, we assume that rk(A) ≤ 2 everywhere on M⁴, which is equivalent to supposing that the kernel of A is at every point either 4- or 2-dimensional. Then AX ∧ A = 0 for all X ∈ T M. In particular, dA = 0 on M′′ by Lemma 3.1.
Let us prove the above statement. Recall that the spinor bundle of M = N × R is given by ΣM = ΣN ⊕ ΣN and the Clifford multiplication on M is related to the one on N by [3] where ∂ t is the unit vector field on R and X ∈ T N . Now we set ψ := ϕ + ∂ t · ϕ according to the above decomposition. Let A denote the Killing map associated with ψ. Then we can easily check that ∇ ∂t ψ = 0 and, for X ∈ T N , Hence ψ is a skew-Killing spinor on M . The vector field ξ in this example is just −∂ t which is parallel. Since |∂ t | = 1, we have |ψ + | = |ψ − |.
Let us recall at this point what is known about three-dimensional manifolds with skew Killing spinors. As already mentioned in the introduction, each skew Killing spinor on S² immediately gives rise to a three-dimensional example, namely to a skew Killing spinor on S² × R. Furthermore, if dim N = 3 and if (N, g) admits a skew Killing spinor ψ, then N is locally conformally flat [14, Prop. 4.3]. Indeed, locally, there exists a function u such that ψ transforms into a parallel spinor ψ̄ with respect to the metric ḡ := e^{2u} g, and three-dimensional Riemannian manifolds with a non-trivial parallel spinor field are flat. If N is simply-connected, then u is globally defined. In the latter case the metric ḡ is not necessarily complete even if (N, g) is. Conversely, if (N, g) admits a nonzero parallel spinor, then for any conformal change of the metric on the manifold N there exists a skew Killing spinor with respect to the new metric. We conclude this overview with the flat case N = R³. If ψ ≠ 0 is a solution of (1) on N = R³ endowed with the flat metric, then A = 0 and ψ is a parallel spinor field. Indeed, as mentioned above, there exists a globally defined function u on R³ such that the metric ḡ := e^{2u} g admits a parallel spinor. Hence, ḡ is also flat. In particular, the scalar curvature S̄ vanishes. On the other hand, S̄ = 8 e^{−2u} e^{−u/2} ∆(e^{u/2}) since ḡ arises by conformal change from the flat metric g. Thus ∆(e^{u/2}) = 0, that is, e^{u/2} is a harmonic function on R³. But since e^{u/2} > 0, Liouville's theorem implies that e^{u/2}, and so u itself, is constant. This shows A = 0.
For each of these endomorphisms, the space of skew Killing spinors is four-dimensional. It can be spanned by elements with non-vanishing Aη and it also can be spanned by elements for which Aη = 0 holds.
Let us prove this statement. The spinor bundle of S² × R² is pointwise given by Σ(S² × R²) = ΣS² ⊗ ΣR², and the Clifford multiplication on it is determined by its action in the two factors, for X ∈ T S² and Y ∈ T R². Now, we consider on S² a skew Killing spinor ϕ, corresponding to the standard complex structure J, and a parallel spinor σ in Σ+(R²) of norm 1. The spinor field ψ := ϕ ⊗ σ is clearly a skew-Killing spinor, since the required identity holds in the S²-direction and ∇_Y ψ = 0 in the R²-direction. The same computation holds when replacing J by −J and choosing σ ∈ Σ−(R²). As the spaces of skew-Killing spinors ϕ corresponding to the standard complex structure J or its opposite on S² are each complex 2-dimensional, we deduce that the space of skew Killing spinors with Killing map A+ is at least, and therefore exactly, 4-dimensional. The same holds for A−. In particular, each skew Killing spinor on S² × R² is a linear combination with constant coefficients of skew Killing spinors for A+ and also one of skew Killing spinors for A−. Note that the vector field ξ, associated to the above-defined skew Killing spinor ψ, is the one coming from the spinor ϕ on S², since T S² ≃ Σ+S². Therefore, ξ = ξ_{S²} and A²ξ = J²ξ_{S²} = −ξ_{S²}, which cannot vanish on the sphere. Thus Aη ≠ 0. If we consider instead of the above constructed ψ the spinor ψ + Y · ψ̄ for a parallel vector field Y on R² with |Y| = 1, we obtain a skew Killing spinor with ξ = −Y, hence Aη = 0.
Classification
Let us first assume that ρ = 1/2 on an open set. By definition of ρ, this condition is equivalent to |ψ + | = |ψ − |. We prove that, under this assumption, the manifold is locally isometric to that in Example 4.1. Proof: Let ψ be a skew Killing spinor of norm one such that |ψ + | = |ψ − |. Then f = 0 by definition of f . Thus η is parallel by Lemma 3.1. In this case η ⊥ is integrable and the spinor ψ restricts to a skew Killing spinor on the integral manifolds. In fact, for any given integral manifold N , its spinor bundle is identified with Σ + M , so the spinor ϕ = ψ + restricts to a skew Killing spinor on N . Indeed, In the next part of the section, we want to exclude the case ρ = 1/2 and make the stronger assumption for any vector field X.
Proof: We take the orthogonal projection of the formulas in Proposition 2.1 to Σ+M and Σ−M. This gives, after using ψ+ = ξ · ψ−, dA = 0 and A ∧ AX = 0, two identities. Equation (16) gives the first; hence, by formula (6), we obtain (10). Equation (17) yields the second. Now, by taking the scalar product with ψ− and identifying the real part, the 0-th order term must vanish. This is Equation (12). The isomorphism from Λ²₋M to the orthogonal complement (ψ−)⊥ yields Equality (11) from the above identity. Equation (18) gives (13). Finally, Equation (19) yields a further identity. Taking the Hermitian product with ψ−, we obtain Equations (14) and (15) after identifying the real parts.
In the following, we will further simplify the equations in Lemma 4.4.
for every X ∈ T M .
Remark 4.6 We can prove integrability conditions analogous to those in Lemma 4.4 and Proposition 4.5 also for arbitrary rank of A. These general conditions are more involved. Since we will not use them in the present paper, we do not state them here. Proof: Assume that Aη = 0 on an open set U. We know that η is a Killing vector field on M. Moreover, by Lemma 3.1, the vector field η has constant length on U. By [6, Thm. 4], since (22) implies Ric(η) = 0, we can conclude that η is parallel on U. But this contradicts item 2 of Lemma 3.1 since f ≠ 0 and A ≠ 0 everywhere by assumption.
In the following, we will often assume that Aη ≠ 0 on all of M. If Aη ≠ 0, then we have A²η ≠ 0 everywhere, thus the vectors Aη/|Aη| and A²η/|A²η| form an orthonormal basis of the image of A. As A is of rank 2, we obtain Equation (27). Furthermore, note that (27) already implies a formula for A², where the last equality comes from the identity (21). This equation has been extensively studied in [12]. Using this formula, we now express the Ricci tensor of the vector field Aη.
In the following, we will compute the Ricci curvature of the vector field (∗A)η. Notice first that (∗A)η = η ⌟ (∗A) = ∗(η ∧ A). Hence, this vector field belongs to the kernel of A, as can be checked for any X ∈ T M. Based on the fact Aη ⌟ (∗A) = ∗(Aη ∧ A) = 0, we first compute one expression for Ric((∗A)η). On the other hand, by (26) and (28), we obtain a second one. Comparing the two identities gives the second equation in (30). Equation (31) can be deduced from computing Ric(Aη, (∗A)η) in two ways from (30), taking the scalar product by (∗A)η in the first formula and by Aη in the second one. Remember that (∗A)η lies in the kernel of A.
In the following, we will establish and prove three technical lemmas (Lemmas 4.9, 4.10 and 4.11), which will show that the kernel and the image of the endomorphism A are integrable and totally geodesic. Then the proof of Theorem A will follow from the de Rham theorem.
Lemma 4.9 Assume that (GA) holds. Then the identity (33) holds. Proof: By continuity, it suffices to prove the assertion on the set {p ∈ M | (Aη)_p ≠ 0}, since this set is dense in M by Lemma 4.7. Thus we may assume that Aη ≠ 0 everywhere. For any X ∈ T M, we compute using Equation (25) in the last equality. Thus, from Lemma 4.8, we find an expression for Ric(A²η). Moreover, δ(A²η) = 0. Indeed, for any two-form ω in four dimensions and any vector X, the formula δ(X ⌟ ω) = ∗(dX ∧ ∗ω) − δω(X) holds. Using δA = 0 and 4 d(Aη) = ddf = 0, this yields the claim. Now, by taking the divergence of both sides of (27), we compute further. Furthermore, the divergence of Aη is equal to f S/4, as an easy consequence of tracing Equation (25). This finally gives (33).
The following technical lemma expresses a partial trace of the Ricci tensor.
Lemma 4.10 Assume that (GA) holds and that Aη ≠ 0 everywhere. Then the following identity holds: Proof: The proof relies on taking the scalar product of Ric(A²η) in Lemma 4.9 with the vector field A²η. Hence, again by (28), we find the required expression. Finally, the identity for Ric(Aη, Aη), which follows from Lemma 4.8, leads to the required equality. Proof: As in the proof of Lemma 4.9, we may assume that Aη ≠ 0 everywhere. By Lemma 4.8 we know that a certain identity holds. We take the divergence of both sides. We start with the left hand side. Note that for any vector field X ∈ Γ(T M) the formula δ(Ric(X)) = g(δRic, X) − Σ_{i=1}^n g(Ric(e_i), ∇_{e_i} X) holds, where e_1, . . . , e_n is any pointwise orthonormal basis. Using this and δ(Aη) = fS/4, we compute the left hand side. To get the divergence of the right hand side, we first compute that of the vector field Ric((∗A)η). For this, we use the same formula as above and again dS = −2δRic. In the last equality, we used (31). Inserting (26) into (36), we find an expression which in turn gives δ(f · Ric((∗A)η)) = −g(df, Ric((∗A)η)) + f · δ(Ric((∗A)η)) by (30). Comparing Equations (35) and (37), we obtain a relation. On the other hand, this sum can be computed in the particular orthonormal frame Aη/|Aη|, A²η/|A²η|, e_3, e_4 with e_3, e_4 in the kernel of A; using Lemma 4.10, we write it out. Comparing these two computations yields a further relation. The Cauchy-Schwarz inequality gives an estimate. We take the square of this inequality. Then we use (38) and (39) to express the left and the right hand side, respectively. We obtain an inequality in which, besides (21), which says that S = 4|A|², we used |A²|² = (|A|²)²/2, which follows from the fact that A is skew-symmetric of rank two. This inequality is only true if Aη(S) = 0. But then (40) is an equality. Hence, Ric is a multiple of A² at every point. If, in addition, the length of ψ+ is not constant, then (M, g) is a local Riemannian product S² × R² around every point and the Killing map equals ±J ⊕ 0.
If, in addition, (M, g) is complete, then (M, g) is globally isometric to the Riemannian product S 2 × Σ 2 , where Σ 2 is either flat R 2 , a flat cylinder with trivial spin structure or a flat 2-torus with trivial spin structure.
Proof: We define U := {p | A p = 0} and U ′ := U ∩ M ′ , U ′′ := U ∩ M ′′ . Recall that U ′ ⊂ U is dense. We know that Equation (29) But then p ∈ U would imply that S/2 > 0 is an eigenvalue of Ric p and p ∈ W would imply that Ric p = 0, a contradiction.
Note that, as we already noticed in [12, Theorem 2.4], the manifold (M, g) must be globally isometric to the product S 2 × Σ 2 , where Σ 2 is a quotient of flat R 2 . The reason is that the fundamental group of M can act on the S 2 -factor only in a trivial way. It remains to recall that a parallel spinor descends from R 2 to a nontrivial quotient (flat cylinder or torus) if and only if the fundamental group acts on the spin structure of R 2 in a trivial way, that is, the quotient Σ 2 carries the trivial spin structure.
Skew Killing spinors with non-degenerate Killing map A
This section is devoted to the case where we have a skew Killing spinor ψ whose Killing map A is non-degenerate everywhere. Recall that ψ defines a vector field η by (8). As above, we put ρ := |η|.
Here, we want to assume that M = M′′. This is a sensible restriction since M′′ is dense in M if A is non-degenerate everywhere, see Section 3. Working on M′′ has the advantage that we do not have to care about the sign of f. Indeed, as explained in Remark 3.3, up to a possible change of orientation on each connected component we may assume that f > 0. In particular, f is defined by ρ = |η| via f = √(1 − 4ρ²), which will be important for the reverse direction of Proposition 5.1.
Equivalent description by complex structures
Let M be a manifold and A be a skew-symmetric endomorphism field on M. Define a tensor field C_A. Proposition 5.1 Let M be a four-dimensional spin manifold and A be a skew-symmetric endomorphism field on M. Put C := C_A.
If (M, g) admits a skew Killing spinor ψ associated with A such that M = M′′, then there exist an almost Hermitian structure J and a nowhere vanishing vector field η of length |η| =: ρ < 1/2 such that η and J satisfy certain conditions, where f := √(1 − 4ρ²) and C_P := C(s, Js) for any unit vector s ∈ P, and such that the sectional curvature K_P in direction P satisfies a corresponding condition, where A_P := g(As, Js) for any unit vector s ∈ P.
If M is simply-connected, then also the converse statement is true. Suppose now that (41) and (42) hold.
In the following computation, the sign '≡' means equality up to a term S(X, Y ) for some symmetric bilinear map S. We compute This implies Using Equations (46) and (47) we obtain (48).
Proof of Prop. 5.1: Before we start the proof of the two directions of the assertion, let us first suppose that, on M, we are given a Hermitian structure J and a nowhere vanishing vector field η of length ρ < 1/2. We want to define a vector field ξ such that the identities ξ = −(|ξ|/ρ) · η and ρ = |ξ|/(1 + |ξ|²) hold according to Equation (9). Since this leads to a quadratic equation, we have to choose one of the solutions. Here we use our assumption M = M′′ and define f = √(1 − 4ρ²) and ξ = 2(f − 1)⁻¹ η, compare Remark 3.3, which motivates this choice. Assume that the orientation on M is such that orthonormal bases of the form s_1, Js_1, s_2, Js_2 are negatively oriented. We define a one-dimensional subbundle E of ΣM by (49). We want to show that E is parallel with respect to ∇̃ defined by ∇̃_X ϕ := ∇_X ϕ − AX · ϕ if and only if J and η satisfy (41) and (42). Let X and Y be vector fields satisfying ∇X = ∇Y = 0 at p ∈ M. Then we have at p ∈ M a first identity. This equals iX · (∇_Y ϕ)⁻ if and only if (∇_Y J)(X) = 2 X ⌟ (Jξ ∧ AY + ξ ∧ JAY) holds, which is equivalent to Equation (41). Furthermore, a second identity holds at p; this equals ξ · (∇_X ϕ)⁻ if and only if ∇_X ξ = (1 − |ξ|²)AX − 2g(Aξ, X)ξ holds, which is equivalent to (42). Consequently, E is parallel with respect to ∇̃ if and only if J and η satisfy (41) and (42).
Assume that ∇̃ reduces to a connection ∇̃^E on E. Then Equations (41) and (42), and therefore also (46), (47) and (48), hold. We will show that the curvature R̃ of ∇̃^E vanishes if and only if the Riemannian curvature R of M equals the tensor B, defined for all vector fields X and Y on M. By an easy calculation similar to that in the proof of Proposition 2.1, we get an expression for R̃.
This shows that R̃ vanishes if and only if
for all vector fields X and Y and all sections ϕ of E. In the following, we will use that Λ²±M acts trivially on Σ∓M and that, for any nowhere vanishing section ϕ± of Σ±M, the maps defined by (7) are isomorphisms. Let ϕ be a section of E such that ϕ+(x) ≠ 0, ϕ−(x) ≠ 0 for all x ∈ M (here we use that ξ does not vanish). Then (50) holds for all X, Y ∈ X(M) and all Z ∈ Γ(P). Recall that (46) holds in our situation, which we will use in the following computations. Equations (55) and (56) are equivalent to the two equations which are equivalent to (43) and (44), respectively. Now we can prove both directions of the proposition. Suppose that there exists a spinor field ψ on M satisfying ∇_X ψ = AX · ψ for all X ∈ T M such that M = M′′. The latter condition means that the vector field η defined in (8) satisfies 0 < ρ = |η| < 1/2. In particular, ψ− ≠ 0 everywhere and we can define an almost Hermitian structure J by J(X) · ψ− = iX · ψ−. Thus we may apply our above considerations. If we define E ⊂ ΣM and ∇̃ as above, then ψ is a ∇̃-parallel section of E. In particular, ∇̃ reduces to a connection ∇̃^E and the curvature of ∇̃^E vanishes, thus (41)–(45) hold.
Conversely, suppose we are given an almost Hermitian structure J and a nowhere vanishing vector field η of length 0 < ρ = |η| < 1/2 such that (41)–(45) are satisfied. Then we can define a one-dimensional subbundle E ⊂ ΣM by (49), together with a flat covariant derivative ∇̃ on E. If M is simply-connected, then E admits a parallel section, which is a skew Killing spinor.
Remark 5.3
Let J be an almost Hermitian structure on a four-dimensional manifold M such that (41) and (42) hold for a skew-symmetric endomorphism field A and a vector field ξ. Then J defines a reduction of the SO(4)-bundle SO(M) to U(2). Here we want to give the intrinsic torsion of this bundle in the special case where A and J commute. The two components of the intrinsic torsion of this bundle are the Nijenhuis tensor N of J and the differential dΩ of the Kähler form Ω := g(J·, ·). A direct calculation using (41) and (42) shows that under the assumption AJ = JA these components are given by N = 0 and dΩ = −2A ∧ (ξ ⌟ Ω).
The case where Aη is parallel to Jη
Let us assume again that the Killing map A is non-degenerate everywhere. We want to consider the case where Aη is parallel to Jη in more detail. We will see that, in this situation, the existence of skew Killing spinors is related to doubly warped products and to local DWP-structures. These notions and their basic properties are explained in the appendix.
Lemma 5.4
Assume that M admits a skew Killing spinor with nowhere vanishing Killing map A that satisfies Aη = uJη for some function u. Then A 2 η = −u 2 η. In particular, AJ = JA.
Proof. Note first that Lemma 3.1, 4 and Eq. (46) give an identity for all X, Y ∈ T M. Consequently, f ∇_η A = 0. Moreover, Eq. (41) gives (∇_η J)η = 0. Now, by differentiating the equality Aη = uJη in the direction of η and using the fact that ∇_η η = f Aη and f ∇_η A = 0, we get that η(u) = 0 and f² A²η = −u² f² η. The latter equation implies the assertion. Let (M̂³, ĝ, η̂) be a minimal Riemannian flow, i.e., an orientable three-dimensional Riemannian manifold together with a unit Killing vector field η̂. Then, locally, (M̂, ĝ) is a Riemannian submersion over a two-dimensional base manifold B. Let us fix a Hermitian structure Ĵ on η̂⊥ and put ω := ĝ(·, Ĵ·). We define a function τ̂ on M̂, which is constant along the fibres, by ∇_X η̂ = τ̂ · Ĵ(X) for X ∈ η̂⊥. Furthermore, let K̂ denote the Gaussian curvature of B. Now consider the metric g_rs = r² ĝ_η̂ ⊕ s² ĝ_η̂⊥ on M̂, where ĝ_η̂, ĝ_η̂⊥ are the components of the metric ĝ along Rη̂ and η̂⊥, respectively. Then (M̂, g_rs, r⁻¹η̂) is again a minimal Riemannian flow and we obtain new functions τ̂ and K̂, say τ̂_rs and K̂_rs; these functions satisfy simple rescaling relations. If our four-dimensional manifold M is endowed with a DWP-structure, then every three-dimensional leaf associated with this structure can be understood as a minimal Riemannian flow. In this way, we obtain functions τ and K on M.
Conversely, suppose that M is simply-connected and admits a local DWP-structure (ν, η) on M such that the length ρ of η satisfies 0 < ρ < 1/2. Moreover, assume that K and τ satisfy (58) for f := √(1 − 4ρ²). Then M admits a skew Killing spinor ψ such that η is associated with ψ according to (8) and such that Aη||Jη.
Proof: Assume first that M admits a skew Killing spinor such that Aη||Jη and 0 < ρ < 1/2 everywhere. We define a vector field ν and functions A_E and A_P. Then η is a Killing vector field, see Remark 3.2. Equation (42) then yields a further identity. We want to show that (ν, η) is a DWP-structure. The next lemma will prove all properties of such a structure except the conditions for the Weingarten map W = −∇ν and its eigenvalues.
Indeed, (46) implies g(C(s_1, s_2), η) = 0, thus we obtain an identity which gives s_1(A_E) = 0. Using (65) and taking into account that [s_1, s_2] is a multiple of s_1, we obtain a further relation. Hence we proved that besides ρ also A_E and A_P are constant on the integral manifolds of ν⊥. Thus also µ and λ are constant along these leaves. Consequently, (ν, η) is a local DWP-structure on M. By (62), the associated function τ satisfies an explicit formula, where s ∈ {η, ν}⊥ is of length one. This proves the first equation in (58).
It remains to prove that also the second equation in (58) is true. Let N be an integral manifold of ν ⊥ . Then, locally, N is a Riemannian submersion over a base manifold B. The following lemma will relate the sectional curvature K P in direction of P = span{s 3 , s 4 } to the Gaussian curvature K of B, which will almost finish the proof of the forward direction of Theorem 5.5.
Lemma 5.7 Let (ν, η) be a local DWP-structure such that the coefficients of the Levi-Civita connection satisfy (64) with respect to an orthonormal frame s 1 = −η/ρ, s 2 = ν, s 3 , s 4 . Then the Gaussian curvature K of B equals Proof Let A denote the fundamental tensor used in O'Neill's formulas. We have The O'Neill formula for R N now gives which combined with (68) implies the assertion.
Up to rescaling of the metric, each integral manifold N in our construction has a Sasakian structure, see [7] for a definition of such structures. Indeed, η restricted to N is a Killing vector field of constant length and ∇η restricted to η⊥ equals |η|τ J|_{η⊥}, where also τ is constant. The Nijenhuis tensor of J|_{η⊥} vanishes since η⊥ is two-dimensional. Consequently, ξ̃ := η/(τ|η|) is the Reeb vector field of a Sasakian structure on (N, g̃ := τ²g). The scalar curvature of (N, g̃) equals S̃ = 4λ/(f τ) + 2.
Thus we are up to a change of orientation exactly in the situation described above.
In dimension three, Sasakian quasi-Killing spinors of this type can also be understood as transversal Killing spinors, see [11] for a definition. If we return to our original metric g on N, this means that the restrictions of ψ± to N are transversal Killing spinors. Indeed, the corresponding equation holds for the transversal covariant derivative ∇ on N.
It remains to notice that η̂ must be a Killing vector field along (M̂, ĝ) since it is already Killing on (M, g) and is tangent to M̂. On the whole, we obtain the doubly warped product metric as required. | 10,088.4 | 2020-05-02T00:00:00.000 | [
"Mathematics"
] |
Ultrafast Femtosecond Laser Maskless Patterning for Multitype Microsupercapacitors
Downsizing electrode architectures provides great potential for the fabrication of microscale energy storage devices. With their extended voltage window and high energy density, asymmetric microsupercapacitors (MSCs) play an essential role in various applications. However, the efficient manufacturing of asymmetric MSCs remains challenging. Herein, maskless lithography–based ultrafast fabrication of multitype micron-sized planar MSCs in one step through temporally and spatially shaped femtosecond laser is presented. MSCs are only 10 × 10 µm 2 in size and have a minimum line width of 200 nm. MXene and 1T-MoS 2 can be combined with laser-induced MXene-derived TiO 2 and 1T-MoS 2 -derived MoO 3 to form various types of asymmetric and symmetric MSCs in the same material system within seconds. The asymmetric MSC exhibits an ultrahigh specific capacitance (220 mF cm −2 and 1101 F cm −3 ), cycling stability (98.3% capacitance retention after 15,000 cycles), energy density (0.495 Wh cm −3 ) and power density (28 kW cm −3 ). This versatile strategy overcomes the current limitations of MSC manufacturing. Multitype MSCs fabricated herein have high resolution, enhancing the feasibility and flexibility of the preparation of microscale energy storage devices.
Introduction
The growing demand for miniaturized, multifunctional portable electronics has greatly stimulated the development of microintegrated energy systems. The microsupercapacitor (MSC) plays an integral role in microscale energy storage devices. [1] With technological advances, various types of MSCs have been developed. Unlike symmetric MSCs, asymmetric MSCs can be assembled using two electrode materials, providing a larger voltage window and significantly increasing the energy density. [2,3] Thus, the practical applications of MSCs can be extended. Substantial progress has been made for asymmetric MSCs in recent years. Conventional asymmetric MSCs are referred to as sandwich-type MSCs. Thinner, smaller, more flexible planar asymmetric MSCs require multiple fabrication steps and are difficult to further downsize. [4,5] Specifically, the preparation process is extremely complex because of the unique structure of the inconsistent electrode materials, and accurate control of the electrode materials is challenging. The minimum size of asymmetric supercapacitors remains at the micron or millimeter level. The precise control of electrode material assembly, as well as the simultaneous formation of patterns of two types of materials in one step, is challenging to accomplish through conventional methods such as electrodeposition, [6] inkjet printing, [7] laser etching, [8] and photolithography. [9] Pseudocapacitance materials are common asymmetric electrode materials, but due to their instability and poor electrical conductivity, their power density is low and their lifespan is short. In recent years, two-dimensional materials have attracted considerable scholarly attention because of their excellent electrochemical properties. 1T-MoS 2 and MXene, the most notable, have highly reversible surface redox reactions and favorable metallic conductivity. [10,11] A recent study reported that 1T-MoS 2 demonstrated excellent conductivity and a high cycle life as an electrode material for asymmetric supercapacitors. Moreover, the voltage window was large. [12] In other investigations, MXene materials were combined with various metal oxides and carbon-based materials to construct asymmetric supercapacitors, which achieved excellent electrochemical performance. [13,14] In this study, we present a method for the ultrafast fabrication of submicron-scale symmetric and asymmetric MSCs on the same 1T-MoS 2 /MXene thin films by using temporally and spatially shaped femtosecond laser. Three types of MSCs, namely a 1T-MoS 2 /MXene symmetric MSC, a laser-induced symmetric MSC prepared on MXene-derived TiO 2 and 1T-MoS 2 -derived MoO 3 thin films, and an asymmetric MSC prepared on 1T-MoS 2 /MXene/laser-induced MXene-derived TiO 2 and 1T-MoS 2 -derived MoO 3 thin films, were fabricated. The composition of the laser-induced MoO 3 and TiO 2 thin films was manipulated through laser pulse delay and energy. Using the proposed approach, more than 150 groups of MSCs could be fabricated every minute, and each MSC was only 10 × 10 µm 2 in size.
Discussion
The Preparation of Multitype MSCs
First, 1T-MoS 2 /MXene hybrid thin films of variable thickness were prepared through vacuum filtration. We transferred thin films of differing thicknesses to the glass substrate. As shown in Figure 1a, a confocal spatial pulse shaping system for femtosecond lasers was constructed to achieve ultrafast, high-precision patterning. In this process, materials are subjected to laser ablation according to controllable energy. By designing any combination of light fields, various types of symmetric and asymmetric MSCs can be fabricated. Almost any material, including ultrahard, ultrastrong materials, can be subjected to such processing. [15] Phase-adjustable spatial light modulators (SLMs) were employed to focus the Gaussian femtosecond laser into a femtosecond laser with a varying spatial distribution. Through the design of distinct phases, arbitrary changes in the light field were realized within an extremely short period to produce multiple types of MSCs. SLMs can load computer-generated holograms (CGHs) to focus light in space. Before the Gaussian laser entered the SLM, it was focused using a Michelson interferometer. The Gaussian femtosecond pulse was integrated into an evenly divided double pulse sequence with a pulse delay of 10 ps. The subsequent subpulses focused the laser on the material in the preceding sequence.
When the front sequence pulse contacted the material, numerous freely moving electrons were excited.
The subsequent pulse sequence further interacted with the seed electrons generated by the front sequence pulse before the material was modified or ablated, leading to avalanche ionization and the eruption of more free electrons. This occurs because the pulse delay between the two pulse sequences is on the picosecond order, which is substantially shorter than the time required for material phase transition.
For our initial beam (800 nm, 35 fs), we customized a special algorithm to calculate the original incident beam, adjusting it according to our target light field. The optimized GS algorithm ensured a more uniform light field distribution. [16] In previous experiments using SLMs, each pattern corresponded to a CGH. [17,18] However, considering the pattern processing of multiple types of MSCs, various patterns must be processed in the asymmetric MSCs in situ at the same time. Thus, the processing technology requires optimization. We overlaid the target pattern on multiple target images, and programming was applied to load various CGHs into SLMs to realize continuous changes in multiple light fields over a 0.001-s duration. Multiple spatially focused light fields were smoothly focused by the objective lens from the SLM outlet through the 4f relay system. As shown in Figure 1b, the fabricated MSCs were extremely small (10 × 10 µm 2 ), had extremely high processing consistency, and could be rapidly prepared in a very short time (150 groups/min) across a large area (Figures 2b and 2c). The MSCs clearly exhibited regularity, and the minimum line width was continuous. The interdigital MSCs with differing shapes and fingers could be completely prepared, such that the influence of fingers on electrochemical performance could be determined. MSCs of different shapes were prepared by controlling the shapes of the target light field (Supplementary Figure 1). This enabled the controllable and personalized preparation of microscale electronics, and the flexibility surpassed that achieved through conventional processing techniques.
Moreover, we could control the laser frequency in the actual processing procedure; laser pulses could be shot out of the laser extremely rapidly. In the processing of MSCs with a size of 10 × 10 µm 2 , the translation stage was set to move at a rate of 2000 µm/s under a laser frequency of 200 Hz. Therefore, 200 subpulses could be used to pattern 200 symmetric MSCs in 1 s. However, because multiple patterned light fields are required to realize the fabrication of an asymmetric MSC, the maximum number of MSCs processed per second is 100. Supplementary Figure 2 presents MSC arrays (with varying sizes and line widths) prepared through this method. As indicated in Supplementary Figure 3, we designed patterned MSCs with differing line widths by adjusting the patterns and parameters of the light field with focused laser pulses. These line widths were adjustable from the micron scale (5 µm) to the nanometer scale (200 nm). Furthermore, we achieved ultrahigh-resolution machining by setting a delay in temporally and spatially shaped femtosecond laser to control pulse shaping near the ablation threshold of the material. We employed femtosecond laser pulses to realize one-step pattern processing of various electrode material systems because this technology can be used to process almost any material.
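A back-of-the-envelope check of these throughput figures; the numbers are those quoted in the text, and the two-light-fields-per-asymmetric-MSC assumption is also taken from the text.

```python
# Rough throughput estimate for the patterning numbers quoted above.
PULSE_RATE_HZ = 200          # laser frequency quoted in the text
STAGE_SPEED_UM_S = 2000      # translation-stage speed
MSC_PITCH_UM = 10            # one 10 x 10 um^2 device per step

symmetric_per_second = PULSE_RATE_HZ                 # one sub-pulse -> one device
asymmetric_per_second = PULSE_RATE_HZ // 2           # two light fields -> one device
steps_per_second = STAGE_SPEED_UM_S / MSC_PITCH_UM   # stage keeps up: 200 steps per second

print(symmetric_per_second, asymmetric_per_second, steps_per_second)
# -> 200 symmetric or 100 asymmetric MSCs per second, matching the text.
```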
As shown in Figure 2d, regular pattern processing was conducted on various materials (a metal-organic framework, graphene, WS 2 , MoTe 2 , MnO 2 , and RuO 2 ). Raman characterization of the laser-patterned area confirmed the complete removal of the material through laser ablation. This demonstrates that our technology can not only be applied to the pattern processing of two-dimensional materials but also be employed for the high-precision processing of metal oxide materials. These results indicate that our technology is promising for the preparation of microelectronics and microscale energy storage devices. We conducted a review of technologies used in the processing of asymmetric supercapacitors.
Supplementary Table 1 presents a comparison of our technology with other processes in terms of the size and the maximum resolution of the asymmetric supercapacitors fabricated. Our asymmetric MSCs are at the micron scale, which is dozens or even hundreds of times smaller than conventional asymmetric supercapacitors. [19] In the high-resolution transmission electron micrographs of typical orthogonal MoO 3 nanorods, lattice fringes are clearly visible. The distance between adjacent fringes is approximately 0.23 nm, indicating that the nanorods grew in the (200) direction. [20] MT was found among the laser-induced materials, but the morphology differed due to the influence of laser power and pulse delay. The transmission electron micrographs revealed that the corresponding metal oxides were formed during the laser ablation of MXene/1T-MoS 2 . We speculate that the temporally and spatially shaped femtosecond laser first excited a large number of free electrons. After laser pulse bombardment, these free electrons increased in number. The extremely high instantaneous power of the femtosecond laser pulses generated defects in the Mo-S and Ti-C bonds. The numerous free electrons facilitated the combination of the material with oxygen in the air. Moreover, the femtosecond laser pulses ionized the oxygen to produce oxygen bonds. Thus, the original MXene material was easily transformed into metal oxide.
To further explore the effect of laser ablation on MXene/1T-MoS 2 , X-ray photoelectron spectroscopy (XPS) was conducted. Studies have reported that the binding energy of 1T-MoS 2 is almost 0.9 eV lower than that of 2H-MoS 2 in non-laser-processed materials. [19,21] As presented in Figure 3a, the high-resolution XPS spectra of Mo 3d could be deconvoluted into peaks assigned to Mo 3d 3/2 and Mo 3d 5/2 . The peaks at 231.6, 232.8, 234.6, and 235.8 eV in various MT thin films indicated that the Mo 3d 5/2-3/2 doublets corresponded to MoO 3 and MoS 2 (with Mo 5+ 3d 5/2 peaks at 231.6 eV, Mo 5+ 3d 3/2 peaks at 234.6 eV, Mo 6+ 3d 5/2 peaks at 232.8 eV, and Mo 6+ 3d 3/2 peaks at 235.8 eV, respectively). [19,24] The content of Mo in distinct valence states could be summarized from the XPS analysis. The Ti 2p spectra confirmed the presence of TiO 2 . The peaks centered at 455.1 and 461.2 eV (Figure 3b) corresponded to Ti-C bonds. The peaks centered at 458.5 and 464.4 eV were assigned to Ti-O 2p 3/2 and Ti-O 2p 1/2 , revealing that the oxygen in TiO 2 resulted in the formation of C-Ti-O. [25,26] As shown in Table 1, the proportion of Mo fluctuated with changes in laser power and pulse delay, confirming that the composition of mixed materials and the content of MoO 3 can be adjusted by modifying laser parameters. Results regarding the unprocessed MXene hybrid materials and the TM materials laser processed under various parameters were also summarized. After laser ablation, the Ti-C bond in MXene was substantially reduced. By contrast, the Ti-O bond was considerably increased, indicating the production of titanium oxide. As displayed in Figure 3c
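As an illustration of the kind of deconvolution described above (this is not the authors' fitting procedure; the peak centers are the binding energies quoted in the text, while the widths and the synthetic data are hypothetical):

```python
# Illustrative sketch: fit a Mo 3d region with two spin-orbit doublets (Mo5+, Mo6+).
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, center, sigma):
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def mo3d_model(x, a5, a6, sigma, offset):
    # 3d5/2 : 3d3/2 amplitude ratio fixed to 3:2 for each oxidation state
    return (gauss(x, a5, 231.6, sigma) + gauss(x, 2 / 3 * a5, 234.6, sigma)
            + gauss(x, a6, 232.8, sigma) + gauss(x, 2 / 3 * a6, 235.8, sigma)
            + offset)

be = np.linspace(228, 239, 400)                       # binding-energy axis (eV)
counts = mo3d_model(be, 900, 600, 0.6, 50)            # synthetic spectrum
counts += np.random.default_rng(0).normal(0, 10, be.size)

popt, _ = curve_fit(mo3d_model, be, counts, p0=[500, 500, 0.8, 0])
a5, a6 = popt[0], popt[1]
print(f"relative Mo5+ doublet weight ~ {a5 / (a5 + a6):.2f}")
```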
Effects of Laser Parameters on Materials and Their Electrochemical Properties
When focused laser pulses are applied to MXene and 1T-MoS 2 composite materials, they not only cause changes in the material properties but also produce new laser-induced metal oxide materials. Controlling these composite materials through laser parameters and determining the optimal parameters are of great research relevance. In this study, as shown in Figure 4a, we analyzed changes in the Mo content in the composite material by adjusting the laser power and pulse delay. As the laser power was increased, the proportion of Mo 5+ and Mo 6+ increased gradually, whereas the Mo 4+ content initially decreased substantially and then remained stable. These results can be explained as follows: As the laser pulses acted on the material, Mo 4+ was oxidized and modified, leading to an increase in the valence state. Taken together with the XPS data, these results suggest that the modified Mo formed a more stable bond with oxygen, generating molybdenum oxide. We also analyzed the mixed materials generated by laser-induced MXene targets. As presented in Figure 4b, the proportion of Ti-C bonds in MXene decreased as the laser power was increased. At the same time, the Ti-O bonds increased gradually. This is in line with the premise that the laser pulses acted on 1T-MoS 2 . In sum, metal oxides were formed during laser processing. When the laser power remained unchanged but the pulse delay was adjusted, a slight change was noted in the composite material. This is mainly related to the ionization of electrons and materials under the pulse delay. An increase in the pulse delay resulted in increased valence states and a more uniform distribution of Mo and Ti in the composite material. This also confirms that the pulse delay excited more free electrons. The Raman results were consistent with other studies. [27,28] Small peaks of molybdenum oxide near 263, 342 and 831 cm −1 were assigned to the orthorhombic MoO 3 compound. [29] Further characteristic peaks were located at 150, 198, 401, and 515 cm −1 . MSCs were processed under various laser parameters, and the cyclic voltammetry (CV) curves are shown in Figure 4e. The MSCs had excellent electrochemical performance, which may be due to the laser ablation-induced production of multivalent molybdenum oxide in the hybrid materials. The multivalent metal oxides affected the electrochemical performance of the materials. Because of the material modification threshold, the femtosecond laser pulses fully oxidized the material. Moreover, to maximize electrochemical performance, the electrode material on the surface was not removed. Figure 4f demonstrates the influence of laser parameters, including the laser pulse delay, on MSC performance. This is mainly attributable to the excellent conductivity and capacitance properties of two-dimensional materials; the conductivity of metal oxides is slightly inadequate in comparison. The galvanostatic charge-discharge (GCD) profiles of the two types of MSCs (Figures 5d and 5e) are consistent with previous results. A performance comparison of the two MSCs with differing electrode materials was conducted on the basis of the GCD profiles. To clearly observe the differences in performance, we compared the CV curves generated at the same scan rate and the GCD profiles established at the same current density (Figures 5g-5i).
Electrochemical Performance of the Multitype MSCs
Next, the asymmetric MSCs were subjected to comprehensive electrochemical testing. As shown in Figure 6a, favorable capacitance characteristics were retained even after the voltage window was expanded from 1.2 to 1.8 V in 1 M H 2 SO 4 aqueous electrolyte. This large voltage window contributed to the high energy density of the asymmetric MSCs. Through our approach, the shapes, number of fingers, and microscopic size of the interdigitated MSCs were controlled. As presented in Figure 2, interdigitated MSCs with various numbers of fingers were fabricated, and the influence of the number of fingers on electrochemical performance was investigated. GCD profiles generated at the same current density within the 1.8 V window indicated favorable characteristics. The GCD profiles in Figure 6b indicated that MSCs with more fingers outperformed those with fewer fingers; this is related to the area of the active interface site in contact with the electrolyte. Multiple fingers can increase electrode-electrolyte contact and facilitate rapid charge transfer, thus enhancing electrochemical performance. CV curves were measured at varying scan rates. A rectangular shape and similar curves were noted (Figure 6c). The specific capacitance results support that MSCs with more fingers have more favorable electrochemical performance. We examined the electrochemical properties of MSCs with four fingers. Figures 6d and 6e display the GCD profiles and CV curves corresponding to these MSCs under various current densities and scan rates, respectively. We determined the areal and volumetric capacitance of the MXene-derived asymmetric MSCs (Figure 6h). Notably, the MSCs retained more than 98.8% of the initial capacitance after 15,000 cycles. For comparison, we extracted several GCD profiles from the cycling loop. They were almost consistent under a voltage window of 1.8 V. CV curves of asymmetric MSCs under differing bending states were generated (Supplementary Figure 10). The CV curves were generally consistent, indicating that the MSCs had excellent flexibility and flexural resistance. Figure 6i presents a Ragone plot comparing the energy and power density of our MSCs and other energy storage devices. [16,31-34] The energy density of 0.495 Wh cm −3 achieved in the present study is several orders of magnitude higher than those of other capacitors or batteries. It is also substantially higher than those of MSCs presented in previous studies. In addition, our MSCs exhibited an excellent power density of 28295 W cm −3 , which is attributable to their favorable capacitance characteristics under a high scan rate. In sum, our MSCs have high potential for application to microscale energy storage devices.
Conclusion
Through our simple maskless patterning approach, high-performance multitype MSCs were prepared. Leveraging the unique advantages of temporally and spatially shaped femtosecond laser, the ultrafast fabrication of multipatterned MSCs was realized. Our method can be applied to various material systems, including the construction of asymmetric electrode structures from laser-oxidized materials.
MXene/1T-MoS 2 Thin Film Preparation
MXene dispersion (2.5 mg mL −1 , 200-500 nm) purchased from Nanjing/Jiangsu XFNANO Materials Tech Co., Ltd was mixed with a dispersion of chemically exfoliated single-layer 1T-MoS 2 nanosheets purchased from the same supplier. The two solutions underwent exfoliation for 5 h, separately. Then, the two diluted solutions were mixed together, treated by sonication for 2 h and stirred for 1 h. The prepared mixed solution was vacuum filtered through a fiber filter. After about 6 h of filtration, a layer of mixed film was produced, which was then vacuum-dried. The nitrocellulose membrane for vacuum filtration was purchased from Merck Millipore Ltd. Sulfuric acid (95%-98%) was obtained from Sigma Aldrich and used to dissolve the cellulose film when transferring the mixed film. In a simple vacuum filtration process, MXene and monolayer 1T-MoS 2 nanosheets are filtered through a membrane with an aperture of 25 nm to form a stacked MXene/metallic 1T-MoS 2 membrane. The thickness of this re-stacked film is controllable, depending on the volume of the filtered solution. In our test, the thickness of the composite film prepared by us is 1 µm.
The Shaped Femtosecond Laser
A Ti:sapphire regenerative amplification system was used to deliver a Gaussian beam with a central wavelength of 800 nm and a pulse duration of 35 fs. A Holoeye Pluto spatial light modulator receives the loaded phase distribution and reflects the beam. The designed electrode shape determines the intensity distribution by locating a 256×256 pixel region on a black 1080×1920 background image. We use an improved GS algorithm, increasing the number of iterations and using a function to optimize the distance between the beam spots, so that different expected light fields can be obtained. Then, the gray-level phase hologram is loaded onto the SLM to generate a light field of arbitrary geometry. The shaped beam is focused by an Olympus objective lens (20×, NA = 0.45). The sample was placed horizontally on the six-axis translation stage (M840.5DG, PI, Inc.).
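A minimal sketch of the kind of iterative hologram calculation described above; this is a generic Gerchberg–Saxton-style loop, not the authors' improved algorithm, and the toy target pattern is hypothetical (only the 256×256 region inside a 1080×1920 frame follows the text).

```python
# Generic phase-retrieval sketch for computing a phase-only SLM hologram.
import numpy as np

def gs_hologram(target_amplitude, iterations=50):
    """Return a phase map whose far field approximates the target amplitude."""
    rng = np.random.default_rng(0)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        far_field = np.fft.fft2(np.exp(1j * phase))                        # SLM plane -> focal plane
        far_field = target_amplitude * np.exp(1j * np.angle(far_field))    # impose target amplitude
        near_field = np.fft.ifft2(far_field)                               # back to the SLM plane
        phase = np.angle(near_field)                                       # keep phase only (phase-only SLM)
    return phase

frame = np.zeros((1080, 1920))        # black background frame, as in the text
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0          # toy electrode-shaped target region (placeholder)
frame[:256, :256] = target
hologram = gs_hologram(frame)         # grey-level phase map to load on the SLM
```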
Characterization of laser-induced MXene/1T-MoS 2
The morphology and microstructures were characterized by scanning electron microscopy (SEM) using an SU8220 microscope (Hitachi, Japan) at Tsinghua University. X-ray photoelectron spectroscopy (XPS) analysis was performed using an ESCALAB 250Xi spectrometer with a monochromatic Al Kα source (7.5 µm beam spot). Raman spectra were acquired using a Renishaw inVia Reflex spectrometer with an excitation laser line at 532 nm. The XRD patterns were collected on a D8 Advance diffractometer (Bruker) with Cu Kα radiation. Optical microscopy images were taken with an Olympus metallographic microscope. Confocal laser scanning microscopy was performed with an MPLAPONLEXT ×20 lens.
Electrochemical characterization of the result micro-supercapacitors
Further analysis was performed to determine the current density using the software package NanoScope Analysis. Electrochemical testing was performed on a CHI760E electrochemical workstation connected through a probe station with polyamide-coated platinum probes (tip diameter, approximately 5 µm) as the current collectors. To ensure a stable electrochemical environment, the open-circuit potential (Eocp) was measured for one hour, until the fluctuation was less than 10 mV in 10 minutes, before every electrochemical measurement. The electrochemical performance of the MSCs was measured in a two-electrode system and analysed using cyclic voltammetry (CV), galvanostatic charge/discharge (GCD) and electrochemical impedance spectroscopy (EIS). The areal capacitance (mF cm −2 ) per electrode was derived from the CV and GCD tests by using Eqs. (1) and (2), respectively, where I, ϑ and V represent the applied current, scanning rate and voltage (Vf and Vi are the final and initial voltages).
where I is the discharge current, and dV/dt is the slope of the discharge curve. Cycling stability measurements were performed by repeating constant-current charge-discharge at 1 mA for 12,000 cycles. The energy densities (mWh cm −2 ) of the supercapacitors were calculated from the areal capacitance and the operating voltage window ∆E. The power density (µW cm −2 ) of the obtained supercapacitor was then obtained by dividing the energy density by the discharge time t (t = ∆V/ϑ).
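A small sketch of how these quantities can be computed from raw CV/GCD data; the prefactor conventions (e.g. the per-electrode factor) are assumptions, since the paper's exact equations are not reproduced above.

```python
# Hedged sketch of common capacitance / energy / power formulas for a two-electrode device.
import numpy as np

def areal_capacitance_cv(time_s, current_a, area_cm2, v_window):
    """C_A (F/cm^2) from one full CV cycle: C = Q / (2 * A * dV)."""
    charge = np.trapz(np.abs(current_a), time_s)      # total charge over the cycle (C)
    return charge / (2 * area_cm2 * v_window)

def areal_capacitance_gcd(i_discharge_a, dv_dt, area_cm2):
    """C_A (F/cm^2) from the GCD discharge slope dV/dt (V/s)."""
    return i_discharge_a / (area_cm2 * abs(dv_dt))

def areal_energy_mwh(c_f_cm2, v_window):
    """E (mWh/cm^2) = C * dV^2 / 2, converted from J to mWh."""
    return 0.5 * c_f_cm2 * v_window ** 2 * 1000 / 3600

def areal_power_mw(energy_mwh_cm2, t_discharge_s):
    """P (mW/cm^2) = E / t, converting mWh back to mJ."""
    return energy_mwh_cm2 * 3600 / t_discharge_s

# Example with made-up numbers: 0.18 F/cm^2, 1.8 V window, 20 s discharge.
e = areal_energy_mwh(0.18, 1.8)
p = areal_power_mw(e, 20.0)
print(f"E = {e:.3f} mWh/cm^2, P = {p:.1f} mW/cm^2")
```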
Declarations
Data availability
All relevant data that support the plots within this article and other findings of this study are available from the corresponding authors upon request. Table 1 is available in the supplementary files section. Figure 1 Schematic of the SLM-based maskless patterning method for the ultrafast manufacturing of multitype MSCs. a) The original Gaussian laser is transformed by the Michelson interferometer into a double pulse with a pulse delay. It then passes through the SLM and is transported to the objective lens by the 4f system to realize micro/nano processing. b) Schematic of the laser-induced synthesis of materials for different MSCs. c) Magnified image of the objective lens and the processed sample, which can be processed within an extremely short period of time by controlling the 1, 2, and 3 subpulses to obtain various types of MSCs. | 5,607 | 2022-02-01T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Multidimensional electrostatic energy and classical renormalization
Interest in problems in higher space dimensions is growing and has attracted the attention of many investigators in a variety of fields in physics. In this paper, the electrostatic energy of two geometries (a charged spherical shell and a non-conducting sphere) is calculated in a higher space dimension N. It is shown that as the space dimension increases, up to N = 9, the electrostatic energy of the two geometries decreases, and beyond N = 9 it increases. Furthermore, we discuss a simple example which illustrates classical renormalization in electrostatics in higher dimensions.
INTRODUCTION
The space dimension N plays an important role in studying many physical problems. It has been used for the radial wave functions of hydrogen-like atoms in N dimensions [1,2]. Exactly solvable models have also been investigated [3,4]. In addition, a great deal of recent work in field theory [5], high energy physics [6], and cosmology [7] has been conducted. Furthermore, problems of mathematical interest have been investigated in higher dimensions [8,9]. One of the fundamental quantities in physics is the electrostatic energy, which is currently investigated by many workers in various areas [10-12]. Therefore, the present author is motivated to consider the effect of space dimension on the electrostatic energy of two simple, but illustrative, systems. A technique connected to electrostatic energy is renormalization in classical field theory. Renormalization is needed to eliminate divergences which appear in the computation of Feynman graphs so that sensible physical results can be achieved [13-15]. Just recently, Corbò [16] considered the renormalization technique in classical fields and Tort [17] discussed renormalization of electrostatic energy. So in the present paper, we will consider an example of classical renormalization of electrostatic energy in higher space dimensions. The organization of the present paper is as follows: In Section 2, we consider the electrostatic energy of a hyper spherical shell. In Section 3, we calculate the electrostatic energy of a non-conducting hyper sphere. In Section 4, we present an example of renormalization of electrostatic energy in higher space dimensions. Section 5 is devoted to conclusions.
ELECTROSTATIC ENERGY OF A HYPER SPHERICAL SHELL
We consider a charged hyper spherical shell of radius R and charge Q in N-dimensional space. Our purpose is to calculate the electrostatic energy of the shell by two methods. In the first method, we calculate the work done to bring the charge Q infinitesimally from infinity to the surface of the shell, while in the second method, we evaluate the volume integral of the square of the electric field E. The two methods require the electric field and the electric potential in space. Gauss's Law in N dimensions reads ∮ E · dA = Q_enc/ε₀ (Eq. 1). The angular surface integral gives the surface area of a unit shell, S = 2π^(N/2)/Γ(N/2) (Eq. 2) [18], where Γ(x) is the Gamma function. Since the charge is distributed on the surface, the above two equations yield E = Q/(ε₀ S r^(N−1)) for r > R and E = 0 for r < R. The electric potential is then given by V(r) = Q/((N−2) ε₀ S r^(N−2)) for r ≥ R. The first method yields the electrostatic energy W = (1/2) Q V(R) = Q²/(2(N−2) ε₀ S R^(N−2)) (Eq. 5), which can be written in terms of the surface charge density σ = Q/(S R^(N−1)), where S is the surface area of a unit shell as given in Eq. 2. The second method enables us to write W = (ε₀/2) ∫ E² dV (Eq. 7), which gives the same result as in Eq. 5. It is interesting to note that our result yields the well-known result [19] for the three-dimensional case (N = 3), namely W = Q²/(8πε₀R). It is noticed that the electrostatic energy of the hyper shell depends on the space dimension N. It is illustrative to calculate the electrostatic energy W_shell, with R = 1, for different values of N. This is calculated in units of Q²/(8πε₀) and is shown in the second column of Table 1. Our calculated results clearly show that the electrostatic energy has a minimum at the space dimension N = 9. This can be explained as follows: In higher space dimensions, there are more orientations in space and thus more angles (N−1). This implies that it is relatively easy to assemble electric charges on the hyper surface of the shell, which explains the decrease in the electrostatic energy as the space dimension increases up to N = 9. However, beyond this value of N, the surface area of the shell becomes smaller and smaller, so that the decrease in the surface area, as N increases, dominates over the increase in the angular orientation. In mathematical terms, the surface area times (N−2) has a maximum at N = 9 and thus the electrostatic energy has a minimum at that value of N. It is tempting to investigate the behavior of the electrostatic energy for very large N. This can be checked by using Stirling's formula [20], n! ≈ √(2πn) (n/e)^n. In the infinite dimensional space, the above equation gives an infinite electrostatic energy in the limit as N → ∞. In this limiting case the surface area of the shell vanishes, as can be seen from Eq. 2 and the use of Stirling's formula. Therefore, the shell behaves like a point charge in the infinite dimensional space and thus one expects the divergence of the electrostatic energy as an infinite self energy of a point particle.
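A short numerical check of the N-dependence just described; the closed form used below is a reconstruction from the derivation above, verified against the quoted N = 3 result.

```python
# Shell energy W_shell = Q^2 / (2 eps0 (N-2) S R^(N-2)), with S = 2 pi^(N/2) / Gamma(N/2)
# the surface area of a unit shell; evaluated with R = 1 in units of Q^2 / (8 pi eps0).
import math

def unit_sphere_area(n):
    return 2.0 * math.pi ** (n / 2.0) / math.gamma(n / 2.0)

def w_shell(n):
    # in units of Q^2 / (8 pi eps0), with R = 1
    return (8.0 * math.pi) / (2.0 * (n - 2) * unit_sphere_area(n))

for n in range(3, 15):
    print(n, round(w_shell(n), 4))
# w_shell(3) = 1, i.e. the familiar Q^2/(8 pi eps0 R); the minimum occurs at N = 9.
```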
ELECTROSTATIC ENERGY OF A CHARGED NON-CONDUCTING HYPER SPHERE
Our main purpose here is to calculate the electrostatic energy of a uniformly charged non-conducting sphere in N-dimensional space. Following the second method of Section 2, we calculate the electric field inside and outside the sphere. The application of Gauss's Law given in Eq. 1 gives E = Qr/(ε₀ S R^N) for r ≤ R and E = Q/(ε₀ S r^(N−1)) for r ≥ R, where Q is the total charge of the sphere. The electrostatic energy of the hyper sphere is thus W = (ε₀/2) ∫ E² dV. The radial integrals yield R^(2−N)/(N+2) + R^(2−N)/(N−2), and the integral over Ω is given by Eq. 2. Therefore, the electrostatic energy is simplified to W = N Q²/((N²−4) ε₀ S R^(N−2)), which can be written as W = Q²/((N²−4) ε₀ V_N R^(N−2)) (Eq. 12), where V_N = π^(N/2)/Γ(N/2 + 1) is the volume of the unit sphere in the N-dimensional space [18]. Clearly, the above electrostatic energy depends on the space dimension N, and it yields the well-known result [19] for N = 3, namely W = 3Q²/(20πε₀R). It is illustrative to calculate this electrostatic energy, with R = 1, for different values of N. This is shown in the last column of Table 1. As before, the electrostatic energy has a minimum at the space dimension N = 9. But here, the volume of the hyper sphere times (N²−4) has a maximum at N = 9 and hence the electrostatic energy has a minimum at that value. As was checked in the previous section, the electrostatic energy becomes infinite in the infinite dimensional space (N → ∞). In this limiting case the volume of the hyper sphere vanishes and thus the sphere behaves as a point charge with an infinite self energy.
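The analogous numerical check for the solid sphere, under the same assumptions as the shell calculation above.

```python
# Solid-sphere energy W_sphere = N Q^2 / (eps0 (N^2 - 4) S R^(N-2)); R = 1,
# expressed in units of Q^2 / (8 pi eps0) for comparison with the shell values.
import math

def unit_sphere_area(n):
    return 2.0 * math.pi ** (n / 2.0) / math.gamma(n / 2.0)

def w_sphere(n):
    return (8.0 * math.pi) * n / ((n ** 2 - 4) * unit_sphere_area(n))

print([round(w_sphere(n), 4) for n in range(3, 15)])
# w_sphere(3) = 1.2, i.e. 3Q^2/(20 pi eps0 R), and the minimum again occurs at N = 9.
```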
RENORMALIZATION OF ELECTROSTATIC ENERGY
Renormalization, as is widely believed, is required in quantum field theory [21][22][23]. The main task of renormalization is to handle and eliminate the divergences so that one can obtain sensible physical results. Recently, it has been reported that renormalization can also be applied to classical fields: for example, Corbò [16] gave two examples of renormalization of the electrostatic potential, and Tort [17] presented an example of renormalization of the electrostatic energy. Our purpose here is to generalize Tort's example to higher space dimension N. Besides its mathematical interest, we will show that the divergence (or so-called singularity) of the electrostatic energy persists in the infinite-dimensional space. Following Tort's model for the classical atom, we consider a point electric charge of magnitude Ze, where Z is the atomic number and e is the electron charge, surrounded by a concentric thin hyper-spherical shell of radius R and electric charge equal to -Ze. Ionization (partial or total) of this atom amounts to the removal of part of or the entire negative charge from the shell. This can be achieved by letting the shell charge become -(1-λ)Ze, where 0 ≤ λ ≤ 1. We will show below that the change in the electrostatic energy (ΔU) upon ionization in N dimensions is given by

$$\Delta U = \frac{(\lambda Ze)^2}{2(N-2)\,\varepsilon_0 S_N R^{N-2}}.$$

The electric field inside the shell is due only to the point charge, since no contribution comes from the shell. Thus, the application of Gauss's law, given in Eq. 1, yields

$$E(r) = \frac{Ze}{\varepsilon_0 S_N r^{N-1}} \quad \text{for } r < R.$$

The initial electrostatic energy before ionization, $U_{initial} = \frac{\varepsilon_0}{2}\int_{r<R} E^2\, dV$, diverges at the origin, and thus we have a singular point at r = 0. As Tort suggested, we can avoid this problem by introducing a finite, non-null radius a for the point charge, and thus

$$U_{initial} = \frac{(Ze)^2}{2(N-2)\,\varepsilon_0 S_N}\left(\frac{1}{a^{N-2}} - \frac{1}{R^{N-2}}\right).$$

Now, when the atom is ionized, part of the charge of the shell, λZe, moves to infinity, and thus the charge enclosed within a hyper-spherical Gaussian surface of radius r > R becomes λZe. It is clear that the electric field, for r < R, remains the same as before ionization, while for r > R Gauss's law immediately gives

$$E(r) = \frac{\lambda Ze}{\varepsilon_0 S_N r^{N-1}}.$$

Therefore the final electrostatic energy becomes

$$U_{final} = U_{initial} + \frac{\varepsilon_0}{2}\int_{r>R} E^2\, dV.$$

The first term is just $U_{initial}$, and the integral in the second term has the same form as that of Eq. 7; thus one gets

$$U_{final} = U_{initial} + \frac{(\lambda Ze)^2}{2(N-2)\,\varepsilon_0 S_N R^{N-2}}.$$

Therefore, the change in the electrostatic energy is $\Delta U = U_{final} - U_{initial} = \frac{(\lambda Ze)^2}{2(N-2)\,\varepsilon_0 S_N R^{N-2}}$, which is exactly the same as the electrostatic energy of a hyper-spherical shell (of charge λZe) that we found in Section 2. It is noticed that the variation of the electrostatic energy is finite for all finite values of the space dimension N, except in the limit N → ∞, where ΔU becomes infinite. Therefore, the renormalization of the electrostatic energy works out for all space dimensions but fails in the infinite-dimensional space. The persistence of the singularity in the infinite-dimensional space is a result of the infinite electrostatic energy of the hyper-shell in that space, as we outlined in Section 2.
CONCLUSIONS
We have obtained the electrostatic energy of two systems (a charged spherical shell and a charged non-conducting sphere) in N-dimensional space. Our calculated results show that the electrostatic energy decreases as the space dimension increases up to N = 9 and increases without limit beyond that; thus the electrostatic energy of each system has a minimum at this value, as shown in Eqs. 6 and 12. Our results also show that the electrostatic energy, for both systems, becomes infinite in the infinite-dimensional space. Furthermore, we considered classical renormalization of the electrostatic energy for a simplified model of a classical atom in higher space dimension. It was shown that the variation in electrostatic energy (the final minus the initial energy) is exactly the same as that of the hyper-shell, and thus the singularity persists in the infinite-dimensional space.
Table 1 .
The electrostatic energy of the shell and the sphere as a function of space dimension. | 2,429 | 2010-07-29T00:00:00.000 | [
"Physics"
] |
The relationship among information and communication technologies and GDP in Middle East countries: panel cointegration approach
This paper studies the causality relationship between Foreign Direct Investment (FDI) and Information and Communication Technologies (ICT) based on panel data covering Middle East countries during the period 1990-2010. The empirical results support a short-run cointegration relationship after allowing for the heterogeneous country effect. The long-run relationship is estimated using fully modified OLS (FMOLS). Pedroni's heterogeneous panel cointegration test reveals a long-run equilibrium relationship between FDI and economic growth. The analysis of the results indicates that the growth contribution of ICT and FDI was quite low for these countries. The econometric results show a positive and significant impact of ICT and FDI in these countries.
INTRODUCTION
The growth of international production is driven by economic and technological forces.It is also driven by the ongoing liberalization of Foreign Direct Investment and trade policies.The rapidly rising level of economic integration, stimulated by advances in Information and Communication Technology (ICT), renders technology adoption, coming from foreign developed countries, a matter of great importance for economic growth and productivity improvement.Foreign Direct Investment is considered, among others, an important channel for technology diffusion, which in turn raises the host country's productivity growth.On the other hand, the new 'information economy' of the past decades is associated with increased diffusion of ICTs, which are expected to deliver higher productivity gains and enhanced growth (Dimelisa and Papaioannou, 2010).
Moreover, FDI provides much-needed resources to developing countries such as capital, technology, managerial skills, entrepreneurial ability, brands, and access to markets. These are essential for developing countries to industrialize, develop, and create jobs, thereby attacking poverty in their countries (Louzi and Abadi, 2011). As a result, most developing countries recognize the potential value of FDI and have liberalized their investment regimes and engaged in investment promotion activities to attract investment from various countries. Fewer studies have been conducted at the macro or international level given the lack of long time-series data on FDI, ICT and other relevant country characteristics (Dimelisa and Papaioannou, 2010).
Existing empirical evidence, in contrast with more settled theoretical evidence, shows mixed results about the relationship between FDI and economic growth in host countries, and about the determinants of FDI. Several reasons may be advanced to explain such disparity of empirical results. To mention a few: first, tests are traditionally conducted using data sets that usually belong to heterogeneous groups of countries. Second, previous studies have used a variety of theoretical models. Third, empirical studies have usually implemented a number of different econometric techniques in testing and estimation (Al-Iriani and Al-Shamsi, 2008).
In this paper, we examine the effects of FDI and ICT on productivity growth, following the comparative analysis of developing and developed countries in Dimelisa and Papaioannou (2010).
The rest of this article is organized as follows.The next section introduces the econometric specification.In the penultimate section, the econometric results are shown and discussed.Finally, the last section concludes.
ECONOMETRIC APPROACH
To capture FDI and ICT effects on productivity growth, a production function is specified with several types of inputs. The present study considers the accumulation of FDI or ICT as special types of knowledge and technology capital introduced in the production process. Consequently, the regression analysis is carried out by decomposing the overall effect of total capital into its individual domestic, foreign and ICT components. Thus, following the paradigm of Hall and Mairesse (1995) and Dimelisa and Papaioannou (2010), an aggregate Cobb-Douglas production function is specified, which incorporates four inputs: domestic capital (K), labor (L), foreign capital (F) and ICT capital:

$$Y_{it} = A_{it}\, K_{it}^{\alpha}\, L_{it}^{\beta}\, F_{it}^{\gamma}\, ICT_{it}^{\delta}\, e^{\varepsilon_{it}} \quad (1)$$

where the subscripts i and t denote country and year, respectively; Y measures gross output of each country; A is an index of technical progress; while K and F are taken to represent non-ICT capital. The parameters α, β, γ and δ are the elasticities of domestic capital, labor, foreign capital and ICT with respect to output, and ε is the error term capturing unobserved variations between countries and over time. After taking logarithms and following the assumption of constant returns to scale, the level of output per worker can be expressed as a function of the domestic, foreign and ICT capital-to-labor ratios (Dimelisa and Papaioannou, 2010):

$$\ln(y_{it}) = \ln(A_{it}) + \alpha \ln(k_{it}) + \gamma \ln(f_{it}) + \delta \ln(ict_{it}) + \varepsilon_{it} \quad (2)$$

where lower-case letters denote figures per worker. Writing (2) in first differences, we obtain the following growth regression:

$$\Delta \ln(y_{it}) = c + \alpha\, \Delta\ln(k_{it}) + \gamma\, \Delta\ln(f_{it}) + \delta\, \Delta\ln(ict_{it}) + \varepsilon_{it} \quad (3)$$

Following common practice in the growth literature, equation (3) is further augmented by the lagged level of the dependent variable.
According to the neoclassical growth model, a negative impact is expected, implying that more developed economies are closer to their steady-state equilibrium and display lower growth rates (Dimelisa and Papaioannou, 2010).
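To make the specification concrete, the following minimal sketch estimates equation (3) by pooled OLS on a long-format country-year panel. It is illustrative only, not the authors' estimation code, and the file name and column names (country, year, gdp_pw, k_pw, f_pw, ict_pw) are hypothetical.

```python
# Minimal sketch (not the authors' code): pooled OLS estimate of Eq. (3),
# regressing Delta ln(y) on Delta ln(k), Delta ln(f) and Delta ln(ict).
# The file name and column names below are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("panel.csv").sort_values(["country", "year"])

# Log-difference each per-worker series within each country.
for col in ["gdp_pw", "k_pw", "f_pw", "ict_pw"]:
    df["dln_" + col] = df.groupby("country")[col].transform(lambda s: np.log(s).diff())

df = df.dropna()
X = sm.add_constant(df[["dln_k_pw", "dln_f_pw", "dln_ict_pw"]])
print(sm.OLS(df["dln_gdp_pw"], X).fit().summary())   # alpha, gamma, delta estimates
```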
ECONOMETRIC METHODS -ENDOGENEITY ISSUES
When dealing with panel data growth regressions, the standard practice is to use either the fixed or the random effects estimator, depending on the correlation between the cross-section effect and the explanatory variables. Nevertheless, their use might not always provide precise estimates in the presence of endogenous variables (Chowdhury and Mavrotas, 2006). The inclusion of the lagged level of the dependent variable in the empirical specification of model (3) may also create endogeneity problems through its relation to the dependent variable, causing correlation with the error term (Dimelisa and Papaioannou, 2010).
1. A Growth Accounting Approach
Given the construction of ICT stocks, it is interesting to perform a preliminary growth accounting exercise and analyze the relative contribution of each production factor. In this way, the growth accounting analysis can motivate the econometric analysis that constitutes the main part of this study. We start with the production function specified in (1). In growth accounting we assume that constant returns to scale are present, so that α + β + γ + δ = 1. After taking logarithms, differentiating both sides of equation (1) and accepting the hypothesis of constant returns to scale, we obtain (Dimelisa and Papaioannou, 2010):

$$\hat{Y} = \hat{A} + \alpha\,\hat{K} + \beta\,\hat{L} + \gamma\,\hat{F} + \delta\,\widehat{ICT}$$

where the hats above letters denote variables in logarithmic differences. In the above equation, output growth is decomposed into TFP growth ($\hat{A}$) and a weighted average of domestic ($\hat{K}$), foreign ($\hat{F}$) and ICT ($\widehat{ICT}$) capital growth and labor ($\hat{L}$) growth.
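As a simple numerical illustration of this decomposition, the sketch below computes the Solow residual (TFP growth) from assumed factor shares and growth rates; all numbers are hypothetical placeholders, not estimates from the paper.

```python
# Minimal sketch: growth-accounting decomposition with hypothetical factor shares.
# TFP growth is the residual: A_hat = Y_hat - alpha*K_hat - beta*L_hat - gamma*F_hat - delta*ICT_hat.
alpha, beta, gamma_, delta = 0.25, 0.55, 0.10, 0.10   # assumed shares, summing to one

def tfp_growth(dY, dK, dL, dF, dICT):
    return dY - alpha * dK - beta * dL - gamma_ * dF - delta * dICT

# Hypothetical annual log-difference growth rates:
print(tfp_growth(dY=0.04, dK=0.05, dL=0.02, dF=0.06, dICT=0.10))
```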
2. The panel unit roots test
In order to investigate the possibility of panel cointegration, it is first necessary to determine the existence of unit roots in the data series. For this study we have chosen the Im, Pesaran and Shin test (IPS, hereafter), which is based on the well-known Dickey-Fuller procedure. Investigations into unit roots in panel data have recently attracted a lot of attention. Levine and Lin (1993) propose a panel-based ADF test that restricts parameters by keeping them identical across cross-sectional regions, as follows:

$$\Delta y_{it} = c_i + \rho\, y_{i,t-1} + \sum_{k=1}^{p_i} \varphi_{ik}\, \Delta y_{i,t-k} + \varepsilon_{it},$$

where t = 1, ..., T indexes time periods and i = 1, ..., N indexes members of the panel. LL test the null hypothesis of ρ = 0 for all i against the alternative of ρ₁ = ρ₂ = ... = ρ < 0 for all i, with the test based on the statistic $t_\rho = \hat{\rho}/\mathrm{s.e.}(\hat{\rho})$. One drawback is that ρ is restricted by being kept identical across regions under both the null and alternative hypotheses (Lee, Chien-Chiang, 2005).
For the above reason, IPS (1997) relax the assumption of identical first-order autoregressive coefficients of the LL test and allow ρ_i to vary across regions under the alternative hypothesis. IPS test the null hypothesis of ρ_i = 0 for all i against the alternative of ρ_i < 0 for all i. The IPS test is based on the mean-group approach, which uses the average of the individual ADF statistics to form the following standardized t-bar statistic:

$$\Psi_{\bar{t}} = \frac{\sqrt{N}\left(\bar{t} - \frac{1}{N}\sum_{i=1}^{N} E(t_i)\right)}{\sqrt{\frac{1}{N}\sum_{i=1}^{N}\mathrm{Var}(t_i)}}, \qquad \bar{t} = \frac{1}{N}\sum_{i=1}^{N} t_i,$$

where the terms E(t_i) and Var(t_i) are, respectively, the mean and variance of each individual statistic; they are generated by simulations and are tabulated in IPS (1997). Hadri (2000) argues differently that the null should be reversed to the stationarity hypothesis in order to have a more powerful test. Hadri's (2000) Lagrange multiplier (LM) statistic can be written as (Lee, Chien-Chiang, 2005):

$$LM = \frac{1}{N}\sum_{i=1}^{N}\frac{\frac{1}{T^2}\sum_{t=1}^{T} S_{it}^2}{\hat{\sigma}_{\varepsilon}^2},$$

where $S_{it}$ denotes the partial sum of the residuals and $\hat{\sigma}_{\varepsilon}^2$ is the consistent Newey and West (1987) estimate of the long-run variance of the disturbance terms.
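As a rough illustration of the mean-group idea behind the IPS statistic, the sketch below averages country-level ADF t-statistics; proper inference additionally requires the simulated moments E(t̄) and Var(t̄) tabulated in IPS (1997), which are not computed here. The file and column names are hypothetical.

```python
# Minimal sketch (illustrative only): IPS-style t-bar from individual ADF tests.
import pandas as pd
from statsmodels.tsa.stattools import adfuller

df = pd.read_csv("panel.csv")                            # hypothetical long-format panel
t_stats = []
for country, g in df.groupby("country"):
    series = g.sort_values("year")["log_gdp"].dropna()   # hypothetical column
    adf_stat, pvalue, *_ = adfuller(series, regression="c", autolag="AIC")
    t_stats.append(adf_stat)

t_bar = sum(t_stats) / len(t_stats)
print("t-bar =", t_bar)    # compare against the IPS (1997) critical values
```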
The next step is to test for the existence of a long-run cointegrating relationship among GDP and the independent variables, using the panel cointegration tests suggested by Pedroni (1999, 2004). Pedroni (1999) considers the following time series panel regression:

$$y_{it} = \alpha_i + \delta_i t + \beta_i x_{it} + e_{it} \quad (8)$$

where $y_{it}$ and $x_{it}$ are the observable variables with dimensions (N*T) × 1 and (N*T) × m, respectively. He develops the asymptotic and finite-sample properties of testing statistics to examine the null hypothesis of non-cointegration in the panel. The tests allow for heterogeneity among individual members of the panel, including heterogeneity in both the long-run cointegrating vectors and the dynamics, since there is no reason to believe that all parameters are the same across countries (Lee, Chien-Chiang, 2005). Two types of tests are suggested by Pedroni. The first type is based on the within-dimension approach and includes four statistics: the panel v-statistic, panel ρ-statistic, panel PP-statistic, and panel ADF-statistic. These statistics pool the autoregressive coefficients across different members for the unit root tests on the estimated residuals. The second type is based on the between-dimension approach and includes three statistics: the group ρ-statistic, group PP-statistic, and group ADF-statistic. These statistics are based on estimators that simply average the individually estimated coefficients for each member. Following Pedroni (1999), all seven statistics are constructed from the estimated residuals $\hat{e}_{it}$ of the cointegration regression (8), together with $\hat{L}_{11i}^2$, the estimated long-run covariance matrix for $\Delta\hat{e}_{it}$, and $\hat{\sigma}_i^2$ and $\hat{s}_i^{*2}$, which are, respectively, the long-run and contemporaneous variances for individual i; the exact expressions are given in Pedroni (1999), with the appropriate lag length determined by the Newey-West method. All seven tests are asymptotically distributed as standard normal. This requires a standardisation based on the moments of the underlying Brownian motion functional. The panel v-statistic is a one-sided test where large positive values reject the null of no cointegration. The remaining statistics diverge to negative infinity, which means that large negative values reject the null. The critical values are also tabulated by Pedroni (1999) (Lee, Chien-Chiang, 2005). Table 2 reports the panel cointegration estimation results. All statistics are significant, so we reject the null of no cointegration. Thus, GDP, ICT, and FDI move together in the long run; that is, there is a long-run steady-state relationship between ICT and GDP for the cross-section of countries. The next step is the estimation of such a relationship.
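Pedroni's seven statistics require the standardization moments tabulated in his paper. As a loose, simplified illustration of the residual-based idea (closest in spirit to the group ADF-statistic, but not Pedroni's actual statistics), one can estimate the cointegrating regression country by country and apply an ADF test to the residuals, as sketched below with hypothetical column names.

```python
# Simplified Engle-Granger-style sketch of residual-based cointegration checking,
# in the spirit of (but NOT identical to) Pedroni's group-mean statistics.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

df = pd.read_csv("panel.csv")                       # hypothetical panel
resid_t = []
for country, g in df.groupby("country"):
    g = g.sort_values("year")
    X = sm.add_constant(g[["log_ict", "log_fdi"]])  # hypothetical regressors
    resid = sm.OLS(g["log_gdp"], X).fit().resid     # cointegrating residuals
    resid_t.append(adfuller(resid)[0])              # ADF t-statistic on residuals

print("average residual ADF t-statistic:", np.mean(resid_t))
```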
In the presence of unit root variables, the effect of superconsistency may not dominate the endogeneity effect of the regressors if OLS is employed. Pedroni (2000) shows how FMOLS can be modified to make inference in cointegrated panels with heterogeneous dynamics. In the FMOLS setting, non-parametric techniques are exploited to transform the residuals from the cointegration regression and to get rid of nuisance parameters (Lee, Chien-Chiang, 2005).
EMPIRICAL INVESTIGATION
Our study uses annual time series for Middle East countries. The empirical period depends on the availability of data; the time period used is 1990-2010. All variables used are in natural logarithms. Given that our variables are cointegrated, the next step is the estimation of the long-run relationship. The OLS estimator is biased and inconsistent when applied to cointegrated panels. Therefore, we estimate the long-run relationship using the FMOLS approach suggested by Pedroni (2000, 2001). The FMOLS estimator not only generates consistent estimates of the β parameters in small samples, but also controls for the likely endogeneity of the regressors and for serial correlation. The panel FMOLS estimator for the coefficient β is constructed from the individual FMOLS estimates, as detailed in Pedroni (2000, 2001). Table 3 reports the results of the individual and panel FMOLS estimates. The panel estimators with and without common time dummies are shown at the bottom of the table. We can see from the table that the estimated coefficients of FDI and ICT are all positive and statistically significant. These findings provide strong evidence that FDI has a positive effect on economic growth.
CONCLUSIONS
This paper investigates the long-run relationship between FDI, ICT and economic growth for a panel of Middle East countries over the period 1990-2010 by using recently developed panel data unit root tests and Pedroni panel data cointegration techniques. The IPS panel unit root test results show that the series in the panel are integrated of order one. The Pedroni panel cointegration test results based on seven test statistics show that there is a long-run relationship between economic growth and FDI. The growth accounting results indicate that the contribution of ICT and FDI was quite low for these countries. The econometric results confirm that the growth impact of ICT is positive and significant in these countries, the effect being larger among developing countries. A positive and significant effect was also found for FDI in the panel of countries. Policies that foster macroeconomic stability, increased investment in capital and productive spending, and thereby improve economic growth, would also have an important effect on FDI in the long run.
Table 1 .
Panel unit root tests.
Table 1
presents the panel unit root tests. At the 5% significance level, all statistics of the level model confirm that all series have a panel unit root.
Table 3 .
Fully modified OLS (FMOLS) estimates (dependent variable is Y). | 3,082.8 | 2014-12-15T00:00:00.000 | [
"Economics"
] |
Ultrasmall compact CMOS imaging system for bioluminescence reporter-based live gene expression analysis
Abstract. Significance: Gene expression analysis is an important fundamental area of biomedical research. However, live gene expression imaging has proven challenging due to constraints in conventional optical devices and fluorescent reporters. Aim: Our aim is to develop smaller, more cost-effective, and versatile imaging capabilities compared with conventional devices. Bioluminescence reporter-based gene expression analysis was targeted due to its advantages over fluorescence-based imaging. Approach: We created a small compact imaging system using micro-CMOS image sensors (μCIS). The μCIS model had an improved pixel design and a patterned absorption filter array to detect the low light intensity of bioluminescence. Results: The device demonstrated lower dark current, lower temporal noise, and higher sensitivity compared with previous designs. The filter array enabled us to subtract dark current drift and attain a clearer light signal. These improvements allowed us to measure bioluminescence reporter-based gene expression in living mammalian cells. Conclusion: Using our μCIS system for bioluminescence imaging in the future, the device can be implanted in vivo for simultaneous gene expression imaging, behavioral analysis, and optogenetic modulation.
Introduction
The usual optical devices for bioimaging applications are the photomultiplier tube (PMT), the charge-coupled device (CCD), and the complementary metal-oxide-semiconductor (CMOS) image sensor. 1 The PMT is the most sensitive of the three; however, it is very costly to make, requires high power, and is usually large and bulky. 2 The CCD is less noisy than the CMOS, but the readout is slower and requires more power due to the serial transfer of electrons across each pixel. 3 The advantage of the CMOS is that it is much faster, smaller, and cheaper than the CCD and PMT.
The CMOS fabrication process is commonly used in making integrated circuits. This makes it less expensive to produce in bulk and easily allows different components to be integrated into one chip. Therefore, CMOS devices can be flexible and multifunctional because they can be incorporated into very large-scale integrated (VLSI) circuits, which can be adapted for numerous applications. 4 It also has the fastest processing time since readout happens in parallel, instead of being passed down serially like in CCD. CMOS image sensors (CIS) are commonly used as camera chips for smartphones since they are easy to integrate and are cost-effective. Finally, another important advantage of the CMOS is the ability to make micro-sized CISs (μCIS), much smaller than conventional CCD or PMT machines. Due to their small size, μCISs can be implanted into biological tissue with very minimal damage, as we have shown previously. [5][6][7] Therefore, μCISs offer a competitive advantage.
We have previously reported the use of these ultrasmall μCIS for fluorescence imaging applications (e.g., ELISA, Ca²⁺ imaging, and FRET). [8][9][10] However, fluorescence imaging requires outside excitation light, which causes high background noise. An effective excitation cut-off filter needs to be designed, which is difficult for small and lensless imaging devices. 11,12 The strong excitation light can pass through the filter and add noise to the image. In addition, it can induce autofluorescence in nontarget biological molecules causing false-positives. It can also lead to phototoxicity in a sample especially during long imaging times. 13 Finally, photobleaching may also occur. These problems can be avoided using bioluminescence imaging (BLI) because it does not need any excitation light and emits a weaker light signal.
BLI offers a higher signal-to-noise ratio since there is less background noise. Small signal changes are easier to detect, which makes bioluminescence more sensitive than fluorescence. Specifically, dynamic gene expression is better imaged using bioluminescence because luciferase has a shorter half-life (3 to 4 h) than conventional fluorescent proteins (25 to 54 h). Changes in gene expression can be more accurately represented by luciferase, since fluorescent proteins take longer to degrade and may not indicate the actual decrease in gene expression. BLI is also better for studies that require longer imaging times because there is less phototoxicity and photobleaching due to its weak light emission. Thus, bioluminescence offers some benefits when imaging gene expression. 14 Moreover, BLI is advantageous when combined with photomodulation techniques, such as optogenetics. This is because the strong excitation light during fluorescence imaging may activate off-target rhodopsins used in optogenetics, if the excitation and activation spectrum overlaps in some tail-end wavelengths. This can cause lower precision when controlling neural circuits. Since BLI does not require excitation light, then cross-talk can be prevented, which is useful when combining imaging and optogenetics.
However, the use of implantable CIS for BLI has not been previously reported because of the low light intensity emitted by luciferase. Therefore, we developed a new compact device with improved sensitivity by upgrading the pixels from 3T-APS to 4T-APS (Fig. 1). Compared with commercial CISs, the pixel area was made larger, while peripheral components, such as the on-chip analog-to-digital converter (ADC) and lenses were removed. The number of input/ output (I/O) pads was also reduced. This enabled the addition of the extra transistor while maintaining the small size of the device.
Another problem for in vivo imaging studies is the effect of temperature. The elements in an integrated circuit are affected by changes in temperature. For example, the leakage current of the photodiode (PD) increases and the voltage threshold of the transistors also change. 15 Noise due to an increase in temperature is the reason that PMT and CCD cameras are placed outside the organism and cooled for bioluminescence detection. 16 But in our case, the μCMOS device needed to be small for future implantation, so a cooling system was not designed. Instead, we proposed a relatively simple modification on the pixel array. By adding a black absorption filter pattern, we can continuously measure dark current changes due to temperature fluctuation and subtract this from the recording pixels.
The improvements and modifications in our μCIS system enabled us to observe gene expression with luciferase as a bioluminescent reporter under CMV promoter. This new imaging system can be used to measure biological activity without the need for excitation light and cooling equipment. We demonstrate a smaller, simpler, and relatively cheaper method compared to conventional devices. In summary, this study features a compact, lensless, microimaging device for bioluminescence applications.
Device Fabrication
μ-CMOS imaging devices were fabricated for bioluminescence detection and measurement. The device mainly consists of a custom-made CIS chip on a printed circuit board (PCB). It is described further in Sec. 3.
First, the CIS chips were cleaned by submerging them in acetone twice then in isopropanol once, all at 80°C for 5 min in each solution. Then, the chips were placed on silicon glass slides, and a fluoropolymer (CYTOP-M 25%) was applied on the back of the chip for easier detachment later on. The fluoropolymer was dried by heating at 120°C for a few minutes.
Then, an optical adhesive (NOA63, Norland) was applied on the back and around the chip and was then cured under UV light for 90 s. This was to provide more surface area and a level substrate for the device for uniform spin coating later on. Afterward, the device and substrate were carefully detached from the silicon glass slide. The bottom of the device and substrate were attached to another glass slide without silicon, using wax adhesive. It was then spin-coated with a black absorption filter (1:1:1 w/w of red, blue, and green resist mixture). The spin coating sequence was as follows: 3 s slope, 3 s at 500 rpm, 5 s slope, and finally, 20 s at 2000 rpm. After spin coating, the black absorption filter was alternately removed using a fourth harmonic (266 nm) Nd:YAG laser (TNS Systems LLC) for patterning. The pattern alternates two rows of pixels with filter (PWF) with two rows of pixels without filter (POF). The filter on the clock, V DD , ground, and V out terminal pads was also removed. The laser power was adjusted to find the optimal energy that would remove the filter and not damage the pixels.
Then, the chips were carefully removed from the optical adhesive substrate and attached on a PCB using epoxy, dried at 120°C for 10 min. All the terminal pads on the chips were then wirebonded onto the PCB. The wires and terminals were then protected with epoxy (Nissin Resin Z-1).
Header pins were then soldered onto the PCB with the image sensor intact, and the corresponding wires were connected to the chip. The device was then tested by shining a flashlight on it to check whether it was working. Horizontal stripe patterns were observed, as expected due to the filter. Pixels were checked for damage. Furthermore, the general resolution of the device was tested using a fiber optic plate (J5734, Hamamatsu) with numbers and symbols.
Device Parameters Testing
After adding the patterned absorption filter and assembling the devices, the (i) dark current, (ii) temporal noise, and (iii) signal sensitivity were tested.
To measure signal sensitivity, a constant LED light source (455, 530, or 625 nm, ThorLabs) was assembled on an optical table and was fitted with an absorptive neutral density filter (ThorLabs) of optical density 3 (0.1% transmittance) or 0.5 (32% transmittance) to reduce the LED's light intensity. The LED's current was manipulated using an LED driver (ThorLabs), and the light intensity was measured using a power meter (ThorLabs). Then, the CMOS devices were placed under the light and their raw digital values were recorded.
Next, the temporal noise of the devices was measured. Temporal noise is the fluctuation in signal across time due to factors such as dark current, reset noise, shot noise, and flicker noise. The amount of temporal noise was measured by getting the standard deviation of 128 consecutive frames in each device while in pitch dark condition. Longer exposure times are needed for the detection of low light levels; however, the amount of temporal noise also increases with longer exposure time. To take this into account, the noise of each device was measured at increasing frame/exposure times. This was indirectly increased by lengthening the pixel row settling time, which was the waiting time for the clock reset signal in each row.
Since dark current is one major contributor to temporal noise, it was also measured. To measure this, the average digital pixel values of each device in dark condition were also recorded.
Finally, to measure the signal-to-noise ratio, the average difference between PWF and POF was computed. That is, since PWF block any incoming light signal, whatever value is registered there must be due to noise. This value was then subtracted from POF to ensure that only the signal from light is measured. A larger difference between the two means a higher signal-to-noise ratio, since the amount of noise is small relative to the signal.
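A minimal sketch of these measurements is given below. It assumes the raw frames are available as NumPy arrays of shape (frames, rows, cols) and that the rows alternate in pairs between filtered (PWF) and unfiltered (POF) pixels; the file names and array layout are assumptions, not the authors' acquisition software.

```python
# Minimal sketch (assumed file names and array layout, not the authors' code):
# estimate dark current, temporal noise, and the POF - PWF light signal.
import numpy as np

dark = np.load("dark_frames.npy").astype(float)      # hypothetical (F, rows, cols)
dark_current = dark.mean()                           # mean dark level in DN
temporal_noise = dark.std(axis=0).mean()             # per-pixel std over frames, averaged

light = np.load("light_frames.npy").astype(float)
row_idx = np.arange(light.shape[1])
pwf_rows = (row_idx // 2) % 2 == 0                   # assumed 2-row filter pattern
pwf = light[:, pwf_rows, :].mean()                   # noise-only pixels
pof = light[:, ~pwf_rows, :].mean()                  # signal + noise pixels

print("dark current (DN):", dark_current)
print("temporal noise (DN):", temporal_noise)
print("light signal, POF - PWF (DN):", pof - pwf)
```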
CMV Transfection and Luciferase Assay
The CMV promoter sequence was subcloned into the multiple cloning site upstream of luc+ in the pGL3-basic vector (Promega) to construct pGL3-CMV. Then, 1 × 10⁶ HEK293T cells were inoculated with 2 mL of Dulbecco's modified Eagle medium (4.5 g∕L glucose) supplemented with 10% fetal bovine serum (Sigma Aldrich), 100 units∕mL penicillin, and 100 μg∕mL streptomycin (Nacalai Tesque) in a 35-mm tissue culture dish (Thermo Scientific Nunc) and incubated overnight. 2.4 μg of pGL3-CMV vector and 7.2 μL of 1 μg∕mL linear form of polyethyleneimine (Polyscience) were mixed with 100 μL of PBS, followed by 20 min incubation at room temperature. Plasmid-polyethyleneimine solutions were added to the culture medium of HEK293T cells. Five hours after incubation, cells were washed with 1 mL of PBS twice and cultured for an additional 48 h. Bioluminescence signals derived from luciferase were generated by exchanging the medium for medium containing 100 μM D-luciferin (Nacalai Tesque). 17 The signal was detected by placing the CMOS microimaging device below the dish.
Data Analysis
Image data were recorded using custom-made software (CIS_NAIST) and then analyzed using MATLAB (Mathworks). Custom-made codes were written to extract the data from RAW files and convert them into matrix values containing three dimensions (a two-dimensional pixel array across time). The background, measured via fixed pattern noise (FPN) cancellation done in the dark before placing the sample, was subtracted from these values. Then, the pixel array data were trimmed by removing the borders and columns with defective pixels. Afterward, the pixel array was separated into PWF and POF. Frame number was converted into time by dividing the frame number by the average frame rate.
After, the average PWF and POF were plotted per frame. Then, an elementwise subtraction was made between the matrices of PWF and POF. The average per frame was graphed, and the values were normalized by dividing by the total range of values in the data set. Figure 2 shows a sample frame extracted from MATLAB. Then, borders and damaged pixels are removed from the pixel array. Afterward, the PWF are separated from POF. Then, the difference and normalized matrices are shown.
The total image of the bioluminescence recording was computed by getting the average of the difference matrix of all frames. Then, the image was deconvolved using blind deconvolution, and then normalized to the range.
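The analysis itself was done in custom MATLAB code; the NumPy sketch below mirrors the described steps (FPN background subtraction, trimming, PWF/POF separation, per-frame difference, and range normalization) under assumed array shapes, file names, and frame rate.

```python
# NumPy sketch of the described pipeline (the original was MATLAB; file names,
# shapes, and the 0.3 fps frame rate used here are assumptions for illustration).
import numpy as np

data = np.load("recording.npy").astype(float)        # hypothetical (frames, rows, cols)
background = np.load("fpn_dark.npy").astype(float)   # FPN reference taken in the dark
data = data - background                             # FPN cancellation

data = data[:, 1:-1, 1:-1]                           # trim borders / defective columns
row_idx = np.arange(data.shape[1])
pwf_rows = (row_idx // 2) % 2 == 0                   # assumed 2-row filter pattern

pwf_mean = data[:, pwf_rows, :].mean(axis=(1, 2))    # noise-only pixels, per frame
pof_mean = data[:, ~pwf_rows, :].mean(axis=(1, 2))   # signal + noise pixels, per frame
diff = pof_mean - pwf_mean
normalized = diff / (diff.max() - diff.min())        # normalize by the total range
time_s = np.arange(len(diff)) / 0.3                  # convert frame index to seconds
```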
μCISs Were Successfully Assembled and Modified
Two kinds of CIS chips were designed and manufactured: the 3T-APS using a 0.35-μm standard CMOS process and the 4T-APS using a 0.18-μm CIS process. These chips were then assembled to create the previous device model 5,6,[8][9][10][11][12] and the new device model featured in this study. The devices were modified and tested in this study to detect bioluminescence signal and image gene expression.
First, a black absorption filter was added onto the sensor pixel array by spin coating [ Fig. 3(a)]. Then, a Nd:YAG laser was used to etch and design an alternating pattern of PWF and POF [ Fig. 3(b)]. This enabled PWF to block any light signal and measure only device noise. Meanwhile, POF measure the light signal plus device noise. By subtracting PWF from POF, the purified light signal is obtained.
Unlike other conventional devices, our total device size is smaller but the pixel area is larger. The CIS chips were 450 μm × 1560 μm in size, and each pixel was 7.5 μm². The on-chip ADC was removed and lenses were not used. In addition, the number of input-output wires was reduced to four terminals (clock, V DD , ground, V out ). This makes the devices very small and compact, which is suitable for in vivo implantation. For brain slice culture (ex vivo) and cell culture (in vitro) setup, the culture dish must rest on a substrate. Therefore, the chips were attached and wire-bonded to a PCB around 4 cm × 4 cm in size [ Fig. 3(c)]. Then, external wires were soldered on the PCB to connect to the computer [ Fig. 3(d)]. Finally, a culture dish can be put on top of the PCB with the chip and encased in a light-tight box placed inside an incubator at controlled temperature and CO₂ levels [ Fig. 3(e)].
Optimization of device fabrication process was also done. It was important to have an even and flat coating of black absorption filter on the pixels so that the filter thickness is constant for all. This was done using an adhesive substrate around the chip to provide more surface area during spin coating. Furthermore, the usage of low Nd:YAG laser intensity for patterning was needed to reduce pixel damage.
Overall, two devices were successfully assembled and fabricated. These are the 3T-APS device (previous model) and the 4T-APS device with patterned filter arrays (new model). These were then compared for their imaging performance.
New Device Model Exhibited Lower Noise and Higher Sensitivity
After device fabrication, the noise and sensitivity of the devices were tested using a constant artificial light source (LED light, ThorLabs). Testing was done using 530 nm wavelength (green LED) since this was closest to our bioluminescent model (FLuc = 560 nm). First, the dark current of each device model was measured. Dark current is the current generated despite having no actual light signal. To measure this, the signal output from the device was recorded in pitch dark conditions. A longer frame time is needed to increase the accumulation of signal during low-light situations such as in BLI. Therefore, the dark current at various frame times was tested [ Fig. 4(a)].
It can be seen that the new device had a lower amount of dark current compared with the previous device. Furthermore, despite the increase of device frame time, the rate of noise increase (i.e., slope) in the new device was only 0.3054. On the other hand, the rate of noise increase of the previous device was 1.4400 as the device frame time was increased. Therefore, the new device exhibited less dark current and lower rate of noise increase compared with the previous device.
Next, after measuring the dark current of the devices, the deviation between consecutive frames given a constant light signal was measured. This standard deviation between 128 consecutive frames was termed as temporal noise. Similar to dark current, temporal noise is proportional to the frame time of the device [ Fig. 4(b)]. It can be observed that the new device had a lower trend of temporal noise compared with the previous device. Therefore, the new device provides a more consistent or stable reading across frames even when increasing the frame imaging time.
After the analysis of noise, the sensitivity of the devices was measured. This was done by measuring the pixel output values of the devices across different light intensities. The frame time used for the test was set at the minimum/shortest possible to get the base sensitivity. In a single device, the PWF and POF were graphed separately to differentiate noise recording versus signal plus noise recording [ Fig. 5(a)]. These were then subtracted to extract pure signal recording [ Fig. 5(b)].
The results show that the output pixel value from the new device is much higher than the previous device. More importantly, the rate of change in output value (i.e., slope) is higher in the new device despite small changes in light intensity/power density. Based on the linear slope, 1 μW∕cm 2 of light corresponded to 405.5 digital number (DN) pixel value with an offset of −6.41 DN in the previous device. On the other hand, the new device corresponded to a 3304.3-DN pixel value with an offset of −93.32 DN. This means that the new device was more sensitive to changes in light signal and was also better at detecting low light signals.
The saturation point of the devices can also be observed in the graph wherein the new device slightly saturates at around 3 to 4 μW∕cm 2 , while the previous device saturates at around 5 μW∕cm 2 . The new device saturates sooner due to its increased sensitivity. However, luciferase has a power density of 0.1 μW∕cm 2 so saturation is not a concern. 18 Low light situation is more relevant for BLI and was highlighted in Fig. 4(b).
Overall, the new CMOS device model showed superior performance because of its lower dark current, lower temporal noise, and higher sensitivity. Therefore, the new device may be better suited for BLI.
Improved Device Enabled Measurement and Imaging of Bioluminescence In Vitro
To test whether the newly designed device can detect bioluminescence, HEK293T cells were transfected with CMV-driven firefly luciferase (CMV-FLuc) using the PEI method. This would yield a constant and strong signal due to the constitutively active CMV promoter. Both previous and new devices were tested. First, no cells or dish was placed on the device to measure the background. Then, after a few minutes, the HEKT cells were placed on the device. The values of the PWF were subtracted from the POF (difference). The fastest frame rate achievable by the device is 84.5 fps or a frame time of 11.82 ms; however, this setting did not yield any BLI result. To increase sensitivity for BLI, the frame time was extended to 3 s or 0.3 fps, which was the longest capability of the system. As observed, the previous device yielded only a small signal for the set-up with cells compared with no cells [ Fig. 6(a)]. However, when the new device was used for imaging, a larger difference in signal between with and without the cells was observed. The values of each pixel in all frames were plotted in a histogram [ Fig. 6(a)]. The histogram showed no distinction between with and without cells for the previous device. On the other hand, a divergence of the data sets in the new device was observed. This means that the new device was able to differentiate the set-up with CMV-FLuc signal from the set-up without any signal.
Then, the result was normalized to the range for comparison between devices [ Fig. 7(a)]. A signal of around three was detected for the new device with cells, but only a signal of one was detected by the previous device. This shows that the new device detected three times increase in signal compared with baseline when the dish with bioluminescent cells was placed. The Hellinger distance between the different datasets was computed to measure the f-divergence of the distributions [ Fig. 7(b)]. This measures the dissimilarity between two distributions of data and is used for applications such as image analysis since it is more robust and less affected by outliers, unlike univariate hypothesis tests. [19][20][21] Using this measure, it can be seen that the new device with cells had high divergence from the rest of the set-ups (blue-green to blue color). The imaging of cells using the new device yielded a Hellinger distance of above 0.5 when compared with other set-ups. On the other hand, the rest had a Hellinger distance of below 0.2 (yellow to orange) when compared with each other. This indicates that the data from the new device when imaging bioluminescent cells were different from the other set-ups and supports that a signal was detected. Furthermore, comparison of both devices with cells showed a Hellinger distance of 0.54, indicating a difference between new and previous devices during cell recording. This may be attributed to the fact that the new device was more sensitive and detected a larger signal than the previous device.
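For reference, a minimal sketch of the Hellinger distance between two pixel-value distributions is shown below; the histogram binning is an assumed choice, not taken from the paper.

```python
# Minimal sketch: Hellinger distance between two pixel-value distributions,
# computed from histograms on a shared, assumed binning.
import numpy as np

def hellinger(a, b, bins=64):
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, _ = np.histogram(a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(b, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Example: values_with_cells and values_no_cells would be flattened pixel arrays.
# print(hellinger(values_with_cells, values_no_cells))
```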
Finally, because our system is an image sensor, we can also see spatial differences in bioluminescence signal, unlike conventional photodetectors. An average image of the bioluminescence recording is presented (Fig. 8). It can be seen that upon addition of confluent cells, both the previous and new device had changes in the intensity of their pixels. However, the new device could detect a higher number and brighter intensity of confluent cells emitting bioluminescence signals. A common plastic-bottom dish with a thickness of 0.8 mm was used for imaging to demonstrate the applicability of our device in usual cell culture laboratories. In addition, our device was lensless to allow for in vivo implantation in future applications. These conditions reduced the resolution of the device, but the improved sensitivity of our microimaging device showed both spatial and temporal differences in bioluminescence signal even without additional optics. All in all, the new and improved device was able to detect bioluminescence signal better than the previous device. This enabled us to record and image spatiotemporal gene expression of CMV-FLuc in mammalian cells.
Improved Micro-CMOS Device
As presented in Sec. 3, an improved device was attained using the 4T-APS pixel architecture and black absorption filter pattern. There are several important advantages of 4T-APS over the 3T-APS pixels (Fig. 1). One main advantage is that the photodetection (photon-to-electron) and photoconversion (electron-to-voltage) regions of the 4T-APS are separated. This is possible due to its additional components: the floating diffusion (FD) for charge storage and the transfer gate (M TG ) for controlled access to FD. The M TG is the additional transistor in the 4T-APS. The PD serves as the photodetection region, whereas the FD serves as the photoconversion region. The light signal gathered by the PD is transferred to the FD. The FD has a smaller size and capacitance than PD, which leads to a larger voltage difference during readout and finally a higher signal reading. This is one reason why 4T-APS is more sensitive than 3T-APS. The PD cannot have a small capacitance because a large surface area is needed for capturing light. Therefore, having an FD with smaller capacitance is integral. Furthermore, the separation of PD and FD allows for the measurement of reset noise via correlated double sampling (CDS). Pixel reset must occur to prepare for the next frame. However, some charges remain in the pixel after reset, which adds noise to the sample. To remove this reset noise, the FD resamples from the PD right after reset, and this is subtracted from the signal. In total, the sampling occurs twice: once during signal readout and once after reset; thus, it is called CDS. 22 Another advantage is that 4T-APS structures have pinned PDs, meaning that the PDs are buried deeper into the substrate to prevent traps caused by the interface between the silicon dioxide layer and silicon layer. In addition, the pinned PD is completely depleted, and thus all of the accumulated charges in the pinned PD can be transferred into FD. This pinned PD structure reduces dark current and allows for easy signal transfer to the FD. 23 However, one drawback of having the 4T-APS design is that it is more difficult to manufacture. In addition, if there is an incomplete PD-to-FD charge transfer, a lag may occur. Despite these minor drawbacks, our more compact 4T-APS CIS displayed better performance than the previous 3T-APS model.
Additional improvements were implemented in our device for BLI. The new device was fabricated using a 0.18-μm CIS process. It has a higher sensitivity due to higher quantum efficiency and lower dark current compared with the previous device that used a 0.35-μm standard CMOS process. Our pixel and PD area were larger than commercial devices that also use a 0.18-μm fabrication process because we wanted to increase sensitivity. But to keep the device small and implantable, we removed other components such as the on-chip ADC and lens. Furthermore, the number of I∕O pads was reduced by recycling the clock signal for both the row and column. Given these modifications, our device has a low temporal dark noise (1 to 3 DN) and high sensitivity similar to commercialized miniature CMOS devices, 24 but our device is more compact and lensless, which is not commonly found in the market.
Since the target application is biomedical imaging, additional noise sources are present due to temperature fluctuation in biological systems and their environment. To take this into account, the black absorption filter was applied on some pixels. This filter blocked any incoming light signal and allowed for the measurement of dark current and other noise sources common to all pixels. This can then be subtracted from the readings of the other pixels exposed to light. By having such a structure, the actual signal readings from the device can be separated from noise due to temperature changes.
Bioluminescence Imaging Applications of the CMOS Device
CMV-driven firefly luciferase BLI was successfully performed using our new device. This is important because detecting bioluminescence has a wide array of applications. For example, a bioluminescence lab-on-chip for luminometry applications such as pyrosequencing has been designed. 25 A 0.18-μm CMOS process was also used to fabricate that device, but with fewer pixels (an 8 × 16 pixel array, compared with our 40 × 120 pixel array). Thus, their device sensitivity was higher, but our resolution was better for cell imaging. In addition, the CIS in smartphone cameras was used to detect luminescence for bacterial monitoring of Pseudomonas fluorescens M3A. 26 The set-up involved a specialized chamber that would collect light and direct it toward the CIS. The exposure time used was up to 60 s, which allowed for longer signal accumulation. Then, bioluminescent bacteria of at least 7.9 × 10⁶ CFU∕mL were detected. Despite the shorter maximum exposure time of our μCIS device (3 s), a bioluminescence signal from mammalian HEK293T cells seeded at 0.5 × 10⁶ cells∕mL was still detected. Thus, incorporating our new μCIS devices may yield better resolution and sensitivity in the aforementioned applications.
Given the characteristics of our device, bioluminescence was detected using a strong active CMV promoter in mammalian HEK293T cells. Although signal intensity of CMV-driven expression varies across cell types, it was reported that CMV had the strongest intensity in HEK293T cells compared with other constitutive promoters such as SV40, CAGG, TRE, and UBC. 27 Furthermore, due to the small surface area of HEK293T cells, a higher density of cells can be achieved in confluent conditions. Also, a high amount of plasmid can be transfected into the cells, thereby increasing the copy of luciferase genes. 28 The combined increased performance of our new device and strong expression of CMV-driven luciferase allowed us to visualize bioluminescence activity in cells. Hence, if natural or endogenous gene expression shall be studied in vivo, the use of brighter luciferases, such as NanoLuc or AkaLuc, would be more suitable than the conventional firefly luciferase used in this study. 29 Notably, this study serves as a proof-of-concept for in vivo BLI of gene expression. Due to the small size of our imaging chips, it can be implanted into freely moving animals. This was demonstrated previously using fluorescent reporters. 6,7,30 However, as mentioned earlier, fluorescence imaging has some drawbacks and so we developed a new device for bioluminescence instead. This new device still follows the same size as our previous studies. Similarly, the device can also be coated with Parylene-C to protect the device and make it biocompatible. 31 Therefore, the device presented in this study can be used for gene expression analysis in freely behaving animals. Measuring gene expression in freely moving conditions is important to understand the relationship between genes and behavior. For example, understanding the relationship of clock genes with behavioral rhythms still remains a major challenge in the field of chronobiology. [32][33][34] Few current techniques can image and measure real-time bioluminescence gene expression in freely behaving animals. Some studies made use of implanted optical fibers; however, this was generally connected to a PMT and so was difficult to set-up and move around. [35][36][37] Furthermore, a single PMT is more expensive than multiple CMOS devices. Another study made use of cooled EM-CCD cameras to image luciferase-reporter-based gene expression noninvasively. 16 The drawback of this however is that the luminescence can only be visualized in a general brain region. Also, the experiment must be done in a very dark box which limits the possible behavioral tests that can be done in tandem with imaging. To the best of our knowledge, none of the previous methods could image in vivo gene expression, conduct behavioral tests, and do optogenetic manipulation of neurons all at the same time.
That is why an implantable μCIS would be advantageous. It can allow the animal to move freely in a semilighted room, which may allow for more behavioral tests. It is also smaller and more portable than other devices. In our case, the CMOS device is only around 1 mm in length, which makes it possible to implant into the brain with minimal damage. Furthermore, a guide cannula can be inserted to allow for administration of chemical substrates. Our CMOS devices can also be incorporated with μLEDs, which allow for photoexcitation of light-sensitive proteins such as rhodopsins needed for optogenetic manipulation of neurons. 5,9,30
Further Developments for In Vivo Bioluminescence Imaging
To image clearer signals during in vivo monitoring of gene expression in a freely moving animal, device sensitivity must be further increased. One of the biggest factors that reduce the detection limit is noise. Noise can occur either at the pixel-level circuit or column-level circuits such as in ADC and amplifiers. 38 Furthermore, noise can be classified into two kinds: temporal noise and FPN. FPN does not change over time and is caused by physical structure defects on the imaging chip, such as uneven pixel fabrication. Since it is time-invariant, FPN is easy to cancel by measuring the pixel values in a completely dark situation. This was done in our case by measuring around 250 frames in pitch dark condition and using this as the baseline value. This can also be done in vivo by turning off outside light and recording before administration of the luciferin substrate. An additional way to cancel FPN noise is through CDS. 39 Temporal noise is more difficult to remove, because it varies across time and is therefore different in each frame. Sources of this noise include: thermal noise, reset noise, 1∕f noise, photon shot noise, and random telegram signal noise. One prominent noise source in low-light situations is the dark shot noise. This is proportional to the square root of exposure time and dark current, where dark current is time-variant and temperature-dependent. 22 These factors are important to measure because a high exposure time is needed for low-light BLI, and temperature fluctuates in biological systems such as in vivo or upon adding chemical substrates in vitro. This is why the black absorption filter pattern was important.
The black filter enabled us to control for the effects of dark current drift caused by temperature changes. By having some pixels measure dark noise instead of light signal, it allowed us to measure and subtract the temperature-dependent background offset without the need for a cooling system. Therefore, our device was still small enough to be implantable. Other modifications can also account or reduce such temperature dependent noise in CMOS devices; 40,41 however, our method was simpler. The trade-off is blocking some pixels from receiving signal, thereby reducing resolution and imaging area. Nonetheless, other filter patterns can be used, such as a mosaic or tiled pattern, to reduce the number of pixels blocked. Since improved sensitivity to low light was the main target, our proposed method is still applicable.
In addition to noise inside the device, outside noise factors can also be considered. An example is the noise susceptibility of the copper wires connecting our device to the circuit board. This is very prone to noise because it carries an analog signal that is only converted to a digital signal in the circuit board. Analog signal is more prone to noise, and one way to solve this is to put ADCs into the pixel or chip architecture itself, as was done by Eltoukhy et al. 25 However, this can lead to a larger chip, more heat, and prohibit in vivo implantation.
Aside from hardware improvements, the use of computational techniques and software is also another way to extract signal from noisy data. Kim et al. 26 made use of a noise reduction ensemble averaging algorithm to filter out noise and amplify the signal. This was done by averaging a group of pixels in a certain circumference to smoothen the image (i.e., circular averaging filter). This resulted in the cancellation of random white noise; however, averaging also reduced signal intensity in the process. To solve this, they accumulated the signal of 10 consecutive frames and iteratively put them on top of each other to extract a brighter luminescence signal. Finally, the signal itself can be improved using brighter luciferases. New bioluminescent proteins are being developed that demonstrate brighter emission, higher stability, and longer wavelengths for better in vivo imaging in the future. 42
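As a rough sketch of that kind of post-processing (per-frame smoothing followed by accumulation of consecutive frames), the snippet below uses a simple uniform smoothing filter and sums groups of 10 frames; it illustrates the general idea only and is not the algorithm of Kim et al.

```python
# Rough sketch (illustrative only, not the exact algorithm of Kim et al.):
# smooth each frame, then sum groups of 10 consecutive frames to accumulate
# the weak luminescence signal while averaging out random noise.
import numpy as np
from scipy.ndimage import uniform_filter

frames = np.load("recording.npy").astype(float)      # hypothetical (F, rows, cols)
smoothed = np.stack([uniform_filter(f, size=3) for f in frames])

group = 10
n = (len(smoothed) // group) * group
accumulated = smoothed[:n].reshape(-1, group, *smoothed.shape[1:]).sum(axis=1)
```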
Conclusion
In summary, an improved μCMOS device suitable for brain implantation was developed and tested. The previous 3T-APS pixel architecture was improved to 4T-APS and modified by adding a filter pattern array. The new device showed superior imaging performance as shown by its lower dark current, lower temporal noise, and higher sensitivity. In addition, the black absorption filter pattern allowed us to measure and subtract temperature-dependent dark current drift. Because of these improvements, the new μCIS device was able to detect bioluminescence of CMV-FLuc HEKT cells, while the previous device model could not. To the best of our knowledge, no other published studies have reported the use of μCIS devices to detect CMV-FLuc bioluminescence in mammalian cells.
By implementing improvements in CMOS device fabrication and circuitry, lower noise and higher sensitivity can be achieved. Computational algorithms may also prove beneficial by using statistics and machine learning to remove noise, increase sensitivity, and improve resolution. Finally, integrating this with better biotechnology by developing new biosensors such as brighter luciferases may enable in vitro to in vivo BLI of gene expression, among other applications.
Disclosures
The authors have no relevant financial interests and no potential conflicts of interest to disclose.
Joshua Philippe Olorocisimo received his BS degree in biology from the University of the Philippines Diliman in 2014 with honors and received an award for best thesis. He pursued his graduate studies at Nara Institute of Science and Technology (NAIST), Japan, and received his MS degree in bionanotechnology in 2020. Currently, he is pursuing his PhD at NAIST and working on brain-implantable devices for imaging and modulation of neurons. His interests include neuroengineering, biophotonics, and understanding brain-behavior relationships in neuropsychiatric diseases.
Jeric Briones received his BS degree and his master of applied mathematics degree from Ateneo de Manila University, Philippines, in 2012 and 2013, respectively. He then received his Dr Eng degree in information science from NAIST, Japan, in 2020. His research mainly involves time series analysis and mathematical modelling, with focus on applications to time series data and mathematical finance. Other research areas he is interested in include stochastic analysis and operations research.
Kiyotaka Sasagawa received his BS degree from Kyoto University in 1999 and his ME and PhD degrees in materials science from NAIST, Japan, in 2001 and 2004, respectively. He was then a researcher with the National Institute of Information and Communications Technology, Tokyo. In 2008, he joined NAIST as an assistant professor and was promoted to associate professor in 2019. His research interests involve bioimaging, biosensing, and electromagnetic field imaging.
Makito Haruta received his BE degree in bioscience and biotechnology from Okayama University, in 2009, and his MS degree in biological science and his Dr Eng degree in materials science from NAIST, Japan, in 2011 and 2014, respectively. Then, he was a postdoctoral fellow with NAIST from 2014 to 2016 and became an assistant professor at NAIST in 2016. His research interests include brain-imaging devices for understanding brain functions related to animal behaviors.
Hironari Takehara received his BE and ME degrees in applied chemistry from Kansai University in 1984 and 1986, respectively, and his PhD in materials science from NAIST, Japan, in 2015. From 1986 to 2012, he was a semiconductor process engineer at Panasonic Corporation and developed BiCMOS, high-voltage SOI, and optoelectronic IC processes. In 2015 and 2019, he joined NAIST as a postdoctoral fellow and assistant professor, respectively. His current research interests involve CMOS image sensors and bioimaging.
Hiroyuki Tashiro received his BE and ME degrees in electrical and electronic engineering from Toyohashi University of Technology, in 1994 and 1996, respectively, and his PhD in engineering from NAIST in 2017. In 1998, he joined Nidek Co., Ltd., and worked on ophthalmic surgical systems and retinal prostheses. In 2004, he became an assistant professor at Kyushu University; he has also been an associate professor at NAIST since 2019. His current research interests include artificial vision systems and neural interfaces.
Norihiro Ishida-Kitagawa received his PhD from Kyoto University. He currently works as an assistant professor at NAIST, and his research interests are concerned with bone biology, especially osteoclasts, which are the only cells with bone resorption activity. He is trying to elucidate the molecular mechanisms of the differentiation and activation processes of osteoclasts in vitro and in vivo.
Yasumasa Bessho received his MD degree from Wakayama Medical University and his PhD in medical science from Kyoto University. He researched molecular biology, developmental biology, and neuroscience at Kyoto University and Washington University School of Medicine, Missouri, USA. Since 2004, he has been a professor of Gene Regulation Research at NAIST. He received the Award for Young Scientists, MEXT, Japan, in 2005.
Jun Ohta received his BE, ME, and Dr Eng degrees in applied physics, all from the University of Tokyo, Tokyo, Japan, in 1981, 1983, and 1992, respectively. In 1983, he joined Mitsubishi Electric Corporation, Hyogo, Japan. From 1992 to 1993, he was a visiting scholar with the Optoelectronics Computing Systems Center, University of Colorado at Boulder. In 1998, he joined the Graduate School of Materials Science, Nara Institute of Science and Technology (NAIST), Nara, Japan, as an associate professor. He was appointed as a professor in 2004, and as an executive director and vice president in 2021. His current research interests include smart CMOS image sensors for biomedical applications and retinal prosthetic devices. He serves as an associate editor for IEEE Transactions on Biomedical Circuits and Systems, and as an editorial board member of IET's Journal of Engineering, and the Japanese Journal of Applied Physics. He is a fellow of IEEE, JSAP, and ITE. | 9,525.6 | 2021-11-01T00:00:00.000 | [
"Biology"
] |
Hidden Symmetries, the Bianchi Classification and Geodesics of the Quantum Geometric Ground-State Manifolds
We study the Killing vectors of the quantum ground-state manifold of a parameter-dependent Hamiltonian. We find that the manifold may have symmetries that are not visible at the level of the Hamiltonian and that different quantum phases of matter exhibit different symmetries. We propose a Bianchi-based classification of the various ground-state manifolds using the Lie algebra of the Killing vector fields. Moreover, we explain how to exploit these symmetries to find geodesics and explore their behaviour when crossing critical lines. We briefly discuss the relation between geodesics, energy fluctuations and adiabatic preparation protocols. Our primary example is the anisotropic transverse-field Ising model. We also analyze the Ising limit and find analytic solutions to the geodesic equations for both cases.
Introduction
In recent years, there has been increasing interest in the study of the geometry of quantum states of quantum many-body systems. While the origin of the geometric approach to characterising quantum states is rooted in the quantum estimation theory developed in the 70's [1,2] (see [3] for a recent review), it only relatively recently became a useful tool for wider applications. Geometric invariants based on quantum geometric tensors have been used to study quantum phase transitions [4][5][6][7][8], to create optimal adiabatic ground-state preparation protocols [9] and to derive bounds for the time integral of energy fluctuations over unit-fidelity protocols [10]. The quantum geometric approach has become an experimentally testable tool for the physics of many-body ground states and non-equilibrium dynamics in a number of setups [11][12][13][14][15][16][17][18].
The idea behind these works is that quantum mechanics can be viewed as a geometric theory in the following sense. The parameter space of an arbitrary quantum system can be endowed with the structures of Riemannian and differential geometry. The simplest and most commonly used way is to introduce a metric in parameter space by considering the overlap amplitude between neighbouring ground states. The resulting object is commonly known as the quantum geometric tensor (QGT). The real symmetric part of the QGT, also called the quantum Fisher-Rao metric, quantum information metric or, somewhat erroneously, the Fubini-Study metric, defines a Riemannian metric on the parameter manifold. In contrast, the imaginary part is related to the Berry curvature associated with the Berry connection. Note, however, that its derivation is entirely generic and does not rely on any adiabatic assumptions. These two complementary parts of the QGT provide a wealth of geometrical and topological structures with which to study quantum many-body systems. From the metric, we can construct geometric quantities such as Killing vectors, Riemann and Ricci tensors, scalar curvatures, et cetera, whereas both the real and imaginary parts provide us with topological data of the quantum parameter manifold, like the Euler and Chern (or Chern-Simons, depending on dimensionality) invariants. Note that these invariants may change abruptly across phase transitions.
In order to have a better picture of the geometry and the shape of a manifold, it is important to understand its symmetries. These are encoded in the so-called Killing vector fields which are intimately related to Lie derivatives. Indeed, these Killing vectors naturally satisfy Lie algebra relations and form the isometry group of the manifold. In 1898 Bianchi (see [19] for a translation of the original text) suggested a classification of low-dimensional (d=1,2,3) Lie algebras which naturally leads to a classification of real and complex manifolds. In 3 dimensions, for example, this distinguishes 11 classes. For later developments and higher dimensions see [20]. In the 80's Thurston conjectured a geometrization program (see the summary book [21]) according to which every closed three-dimensional manifold can be built up out of these Bianchi geometric class model geometries using tools of differential topology. Perelman [22][23][24] proved the geometrization conjecture in 2003.
Following this line of thought, we arrive at the rather intriguing possibility of a Bianchibased classification of the parameter manifolds of the quantum ground states of many-body systems for (at least) a low number of parameters. As a consequence, different quantum phases of matter correspond to different Bianchi classes or can be constructed out of them according to the geometrization conjecture. States corresponding to different classes are separated by quantum phase transitions. We illustrate this approach here with the example of the quantum transverse-field Ising model (TFIM). This model shows an interesting phenomenon: the quantum ground state parameter manifold may have symmetries which are not visible at the level of the Hamiltonian. In particular, one of the phases of the anisotropic TFIM has two continuous symmetry generators while the Hamiltonian itself has only a Z 2 discrete symmetry.
Another facet of the Killing vectors approach is the notion of geodesics. For every Killing vector field, there is a quantity that is conserved along geodesics, according to Noether's theorem. These conserved quantities allow for the explicit integration of the geodesic equations. The latter could aid in the design of optimal quantum state preparation protocols.
Despite the TFIM being the "drosophila" of low-dimensional many-body physics in terms of frequency of study, in both equilibrium and non-equilibrium setups (see e.g. [25] for an extensive review), only a limited number of papers are devoted to its quantum geometric aspects [8-10, 26, 27]. On the other hand, we are not aware of analytical solutions for the geodesic paths of the ground-state manifold of the TFIM spin chain for the full parameter space (h, γ, φ); solutions are only known for two-dimensional sections [28][29][30]. For this simple integrable model, we can find analytical solutions. In order to solve the geodesic equations, we exploit the symmetries of the manifold. Since Noether's theorem associates a conserved charge to each symmetry, with enough symmetries we can constrain the problem completely. Interestingly, we find that some symmetries are lost during phase transitions.
The paper is organized as follows: Sections II and III are devoted to covariant formulations of quantum geometric tensors and related geometric quantities, such as geometric tensors, Christoffel symbols, Killing vectors and symmetries; Section IV deals with the transverse field XY model; In Section V we analyze hidden symmetries of the Killing vector fields and the Bianchi classification of the quantum phases, while a special limit of near pure Ising model is treated in Section VI. Geodesics and the energy fluctuations are considered in Section VII. Possible future directions are discussed in Section VIII.
Geometric tensors
The geometric approach to quantum mechanics sprang from quantum information theory, in the study of quantum parameter estimation [1,2]. In this setting, a metric, the quantum Fisher information matrix or quantum Fisher-Rao metric, is defined on the space of possibly mixed density matrices ρ. This metric is based on the symmetric logarithmic derivative operator formalism. Consider a family of continuous parameters x^µ such that ρ = ρ(x). The quantum Fisher information matrix is defined (in the standard normalization) as F_{µν} = (1/2) Tr[ρ {L_µ, L_ν}], where L_µ denotes the symmetric logarithmic derivative, whose defining equation is in turn ∂_µ ρ = (1/2)(L_µ ρ + ρ L_µ). The Fisher information is equivalent to the Bures metric and it endows the parameter space x^µ with a Riemannian structure. The statistical distance that this metric defines is related to the quantum fidelity F(ρ, σ) = (Tr √(√ρ σ √ρ))². The Fisher information measures the sensitivity of a quantum state with respect to changes in the parameters x^µ (assuming one can trace this state through changes in the Hamiltonian, e.g. there is always a ground state and a gap).
One of the central results of this theory is that the variance Var(x^µ), associated with the estimation of the parameter x^µ after M independent measurements, satisfies the quantum Cramér-Rao bound Var(x^µ) ≥ 1/(M F_{µµ}). One can consult [3,25] for a recent review of this topic. The geometrization of quantum mechanics via quantum information theory is robust and has been studied extensively. However, the generality of this approach turns out to be a disadvantage when working with pure states. Unlike mixed states, the set of pure density matrices, from now on denoted by the projective Hilbert space P(H), is a Kähler manifold. In addition to the Riemannian structure coming from the quantum Fisher information, there is a complex structure and a symplectic structure. To uncover the geometric tensors that define these structures, we take a different route to geometrization and focus our analysis on the properties of the tangent bundle TP(H). We want to emphasize that, in the following, we will only work with pure density matrices ρ = |ψ⟩⟨ψ|. We work with the matrices ρ(x) and not the wavefunctions |ψ(x)⟩ because, as we will see, this simplifies the equations and the results are guaranteed to be gauge invariant. However, as a final step, one can also express all the results in terms of wave functions |ψ⟩. We will discuss some of the subtleties that appear when working with mixed states at the end of this section.
For now, let us assume that the variables x µ are a coordinate patch of P (H), i.e. dim(x µ ) = dim P (H). Later, we will restrict the variables x µ to a much narrower set of physical parameters. The tangent space T ρ P (H) at a point ρ(x) is the vector space spanned by the set of matrices t µ (x) = ∂ µ ρ(x).
This basis is called the coordinate basis of the tangent bundle. Note that our tangent vectors are Hermitian and traceless matrices. Moreover, if ρ(x) = |ψ(x) ψ(x)|, with ψ|ψ = 1, we have that t µ = |∂ µ ψ ψ| + |ψ ∂ µ ψ| , where |∂ µ ψ = ∂ µ |ψ(x) . Since we are working with pure states, t µ = {ρ, t µ }, i.e. t µ is proportional to the symmetric logarithmic derivative L µ . Let us define the linear operator A µ (x) such that A µ (x) |ψ(x) = i |∂ µ ψ(x) . Since ∂ µ ( ψ|ψ ) = 0, A µ (x) must be Hermitian, and our tangent vector can be written in terms of A µ (x) as where [ , ] is the matrix commutator. We conclude that every tangent vector t µ is generated by a Hermitian matrix A µ (x). The converse is also true: if A(x) is a Hermitian matrix, then the commutator i[ρ(x), A(x)] is a tangent vector. The matrices A µ (x) are called adiabatic gauge potentials (AGPs). These potentials are fundamental objects in adiabatic perturbation theory. They also play an essential role in describing the geometry of classical and quantum states. We can even use these potentials to generalize geometric concepts to the case of stationary and non-stationary density matrices. We recommend [31] for a recent review on this topic. As we saw earlier, the Fisher information matrix defines a metric on the tangent bundle There are multiple equivalent ways to write this equation. In terms of the wave function |ψ , and in terms of AGPs the metric reads where X = ψ| X |ψ . One can check that this formula is gauge invariant. That is, the components of this metric are the same even if we change our basis of kets |ψ(x) → e iφ(x) |ψ(x) . This metric is called the Fubini-Study metric. Let us explain the subtle difference between the terms Fisher-Rao metric and Fubini-Study metric. The Fubini-Study metric refers to the Hilbert-Schmidt inner product, or trace product, restricted to the set of pure density matrices. The Fisher-Rao metric, on the other hand, is defined on the set of mixed and pure density matrices via the symmetric logarithmic derivative. The Fisher-Rao metric, when restricted to pure states, reduces to the Fubini-Study metric. Because of this connection, we can relate the Fubini-Study metric to the notion of fidelity susceptibility This relationship has motivated the study of quantum phase transitions from a geometrical perspective [4,8,32].
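The displayed formulas in this passage did not survive extraction; purely as a reference, the standard wavefunction form of this metric and its link to ground-state overlaps (and hence to the fidelity susceptibility) can be written as below. The superscript (FS) is introduced here only to mark the half-trace normalization; the trace-product metric adopted in this text is twice this quantity.

```latex
g^{(\mathrm{FS})}_{\mu\nu}
  = \operatorname{Re}\!\left[\langle \partial_\mu \psi \mid \partial_\nu \psi \rangle
  - \langle \partial_\mu \psi \mid \psi \rangle \langle \psi \mid \partial_\nu \psi \rangle\right],
\qquad
\left|\langle \psi(x) \mid \psi(x+\mathrm{d}x) \rangle\right|^{2}
  \simeq 1 - g^{(\mathrm{FS})}_{\mu\nu}\, \mathrm{d}x^{\mu} \mathrm{d}x^{\nu} .
```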
Recall that an almost complex structure on a complex manifold M is a (1,1)-tensor field J : T_p M → T_p M such that J ∘ J = −1. For P(H), an almost complex structure arises naturally when we consider the vector fields generated by the tangent vectors themselves, J(t_µ) = i[ρ, t_µ]. Since t_µ is a Hermitian matrix, J(t_µ) is a tangent vector and J is a well-defined tensor field of rank (1,1). Note that applying the map twice returns the original tangent vector but with the opposite sign, J(J(t_µ)) = −t_µ. This follows from the property ρ² = ρ and the relations {ρ, t_µ} = t_µ and ρ t_µ ρ = 0. Hence, J is an almost complex structure on P(H). This complex structure is compatible with the Fubini-Study metric: g(J(t_µ), J(t_ν)) = g(t_µ, t_ν).
A metric that has this property is called a Hermitian metric. Finally, we can use the almost complex structure to define the symplectic two-form By using the metric compatibility of J we can show that Ω is antisymmetric, i.e. it is a differential two-form. Moreover, this two form is non-degenerate because the metric is non-degenerate. If we can prove that dΩ = 0, then we have successfully endowed P (H) with a Kähler structure. Let us first demystify the identity of Ω by expressing it in terms of the wave function |ψ(x) , where ρ(x) = |ψ(x) ψ(x)|. This is the Berry curvature, and it is the field strength of the quantum geometric connection A µ Note that the quantum geometric connection depends on our choice of phase e iφ(x) |ψ(x) (as expected from a gauge field), but the field strength Ω = −idA does not. Also observe that A µ are the diagonal components of the AGP A µ . From this, we also conclude that dΩ = 0, since d 2 = 0. This shows that the Fubini-Study metric and the Berry curvature are intimately related. We can express both using a single complex tensor: the quantum geometric tensor When working with mixed states, there are a few generalizations that are worth mentioning. We began our discussion on geometry by introducing the Fisher information matrix, a metric defined on the set of mixed states. This metric is equivalent to the Bures metric, and it is related to the quantum fidelity F (ρ, σ). However, there are other metrics that we can consider. In dynamical response theory, for example, the definitions that appear naturally are a generalization of the connected correlation functions. For [31]. These two definitions only coincide when working with pure states and have different properties otherwise. In this paper, we focus on the Riemannian properties of pure states and leave the mixed states' discussion for future work.
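Similarly, the displayed expressions for the Berry connection, the Berry curvature and the quantum geometric tensor were lost here; their standard form (again in the half-trace normalization, and with overall signs of the curvature differing between references) reads

```latex
\mathcal{A}_\mu = i \langle \psi \mid \partial_\mu \psi \rangle , \qquad
\Omega_{\mu\nu} = \partial_\mu \mathcal{A}_\nu - \partial_\nu \mathcal{A}_\mu , \qquad
Q_{\mu\nu} = \langle \partial_\mu \psi \mid \bigl(\mathbb{1} - |\psi\rangle\langle\psi|\bigr) \mid \partial_\nu \psi \rangle
           = g^{(\mathrm{FS})}_{\mu\nu} - \tfrac{i}{2}\, \Omega_{\mu\nu} .
```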
The ground-state manifold
Let us begin our discussion with a Hamiltonian H(x) that depends on a parameter manifold x µ ∈ M. In this section, dim(x µ ) ≤ dim P (H). So now, our parameters will only parametrize a submanifold of P (H) and not the entire space. For simplicity, we will assume that our Hamiltonian has a non-degenerate ground state |Ω(x) . Depending on the specific Hamiltonian, the ground state |Ω(x) could be an embedding of M into P (H) or not. Recall that an embedding is a smooth map that is injective. Sometimes, |Ω(x) is independent of a variable x µ , and therefore the map is not injective. We are interested in the cases in which |Ω(x) describes an embedding (at least for a subset U ⊆ M). In other words, we want to study the cases in which the set {ρ 0 (x) = |Ω(x) Ω(x)| : x ∈ M} is a well-defined submanifold of P (H). We call this submanifold the ground-state manifold of H(x). Strictly speaking, the ground-state manifold and the parameter manifold M are two different spaces but, since we are dealing with an embedding, we will abuse the notation and refer to both as the ground-state manifold M.
What geometric tensors do we have on the ground-state manifold? The pullback of g defines a Riemannian structure on M, but the pullback of Ω does not always define a symplectic structure. This happens because the pullback of a non-degenerate two-form is not guaranteed to be another non-degenerate two-form. Indeed, if M is odd dimensional, then Ω (restricted to the tangent space of the submanifold), is a degenerate two-form. Nonetheless, Ω still has the interpretation of the Berry curvature. Unfortunately, the pullback of the almost complex structure is not a well defined tensor on M.
We will pay special attention to the Riemannian structure of the ground-state manifold and use this structure to study quantum phase transitions. Given a Riemannian manifold (M, g) there are a few standard quantities that we can compute: the Riemann tensor and its contractions, Killing vector fields and geodesics. Let us quickly recall the definitions of these objects.
A Killing vector field is the infinitesimal generator of an isometry. From an active point of view, isometries are changes in ground-state manifold that leave the metric invariant. Consider a smooth deformation of our ground-state manifold ρ(x, τ ) driven by the parameter τ such that ρ(x, 0) = ρ(x). We should think of this deformation as defining a new, deformed, ground-state manifold M(τ ) = {ρ(x, τ ) : x ∈ parameter space} for each value of τ . We say that this family of diffeomorphisms is a continuous isometry if That is, the metric does not change under the transformation. The Killing vector field that generates this isometry is An example of a Killing vector field is the vector field generated by a constant AGP A We can immediately check this result Since we are working with an embedding of M in an ambient space P (H), we have to consider two types of isometries. If the Killing vector field ξ ∈ T P (H) is part of the tangent bundle T M, i.e. ξ = ξ µ t µ for a coordinate basis {t µ = ∂ µ ρ} of T M then the submanifold is invariant under the isometry and ξ satisfies the Killing equation Where L denotes the Lie derivative. If ξ ∈ T P (H) but not in T M then the isometry does not leave the submanifold invariant. You may think of a rotation that leaves the 2-sphere embedded in R 3 invariant and a translation that changes its position in space. Both are isometries but the Killing vector field of the rotation lies inside the tangent bundle of the 2-sphere and the Killing vector field of the translation does not. We are mostly concerned with the first class of isometries thus, will also require the Killing vector field to be part of the tangent bundle of M.
The Killing equation can also be written in terms of wave functions |ψ⟩ or in terms of the AGPs A_µ. The set of Killing vectors on a manifold M forms a Lie algebra, whose Lie bracket is defined by the differential commutator [[ , ]]. This commutator should not be confused with the matrix commutator; acting on a test function f : M → R it reads [[ξ, η]]f = ξ(η(f)) − η(ξ(f)). Low-dimensional, real Lie algebras (d = 1, 2, 3) have been classified. In 3 dimensions, for example, there are 11 classes. This is called the Bianchi classification [19]. This suggests the possibility of classifying the different quantum ground states of many-body systems using the Lie algebra of their Killing vector fields. Geodesics are paths that locally minimize the distance between two points in a manifold. We can find them by solving the geodesic equations d²x^µ/ds² + Γ^µ_{νλ} (dx^ν/ds)(dx^λ/ds) = 0. Here s is an affine parameter, i.e. g_{µν} (dx^µ/ds)(dx^ν/ds) = 1. Most of the time, we can only solve these equations numerically. One exception happens when we have enough Killing vector fields in our manifold. Each Killing vector ξ has an associated charge Q_ξ = g_{µν} ξ^µ (dx^ν/ds) that is conserved along geodesics x^µ(s). So, each Killing vector corresponds to a first-order differential equation. Requiring that our geodesic is parametrized by an affine parameter gives one extra restriction. In general, we only need dim(M) − 1 Killing vector fields to find the geodesics of a manifold. For completeness, let us recall the equations for the Christoffel symbols and the Riemann tensor. We will not directly discuss these quantities in this paper, but they are important and useful concepts in quantum geometry. Recall that the Christoffel symbols are given by the formula Γ^µ_{νλ} = (1/2) g^{µδ} (∂_ν g_{λδ} + ∂_λ g_{νδ} − ∂_δ g_{νλ}). We can take advantage of the fact that we are working with an embedding and write an expression for these symbols in terms of traces and tangent vectors: Γ^µ_{νλ} = g^{µδ} Tr(t_δ ∂_ν t_λ). We show how to derive this expression in Appendix A. The Riemann tensor and its contractions encode all the information about the curvature of the manifold. In a coordinate basis, the components of the Riemann tensor are given by R^µ_{νλσ} = ∂_λ Γ^µ_{σν} − ∂_σ Γ^µ_{λν} + Γ^µ_{λδ} Γ^δ_{σν} − Γ^µ_{σδ} Γ^δ_{λν}. The contractions of the Riemann tensor are commonly known as the Ricci tensor R_{µν} = R^λ_{µλν} and the Ricci scalar R = R^µ_µ. The Riemann tensor also has topological information, since its integral gives the Euler characteristic of the manifold. This result is known as the Gauss-Bonnet theorem in two dimensions [33] and the Chern-Gauss-Bonnet theorem in any number of even dimensions [34]. In [8], the authors proposed to use the Euler characteristic of the ground-state manifold as a new topological number.
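Because the geodesic equations typically have to be integrated numerically, a compact finite-difference sketch of that workflow may be useful. This is not the paper's code; the metric in the example (a round 2-sphere) is only a stand-in for the ground-state metric, and the step size, tolerances and initial conditions are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def christoffel(metric, x, eps=1e-6):
    """Christoffel symbols Gamma[mu, nu, lam] of metric(x) -> (d, d) array,
    estimated with central finite differences (eps is an arbitrary choice)."""
    d = len(x)
    g_inv = np.linalg.inv(metric(x))
    dg = np.zeros((d, d, d))                      # dg[s, m, n] = d_s g_{mn}
    for s in range(d):
        dx = np.zeros(d)
        dx[s] = eps
        dg[s] = (metric(x + dx) - metric(x - dx)) / (2 * eps)
    # T[delta, nu, lam] = d_nu g_{lam delta} + d_lam g_{nu delta} - d_delta g_{nu lam}
    T = dg.transpose(2, 0, 1) + dg.transpose(2, 1, 0) - dg
    return 0.5 * np.einsum("md,dnl->mnl", g_inv, T)

def geodesic_rhs(s, y, metric):
    """Geodesic equation written as a first-order system in (x, dx/ds).
    The norm g_{mu nu} v^mu v^nu is conserved and can be used as a check."""
    d = len(y) // 2
    x, v = y[:d], y[d:]
    gamma = christoffel(metric, x)
    a = -np.einsum("mnl,n,l->m", gamma, v, v)
    return np.concatenate([v, a])

# Stand-in example: geodesics on a round 2-sphere in (theta, phi) coordinates.
# Any smooth metric function (e.g. a ground-state metric) could be used instead.
def sphere_metric(x):
    theta, _ = x
    return np.array([[1.0, 0.0], [0.0, np.sin(theta) ** 2]])

y0 = np.array([1.0, 0.0, 0.3, 0.5])               # (theta, phi, dtheta/ds, dphi/ds)
sol = solve_ivp(geodesic_rhs, (0.0, 5.0), y0, args=(sphere_metric,), rtol=1e-8)
```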
The anisotropic transverse-field Ising model
Let us apply these concepts to the anisotropic TFIM, also known as the XY model. There are a few reasons why this model is a good example. First, we can solve the model exactly. Second, this model has a rich phase diagram with three different regions: two ferromagnetic phases and one paramagnetic phase. Third, the Hamiltonian depends on three parameters and has a non-vanishing Berry curvature. The model is described by the standard XY Hamiltonian, where σ^α_j are the Pauli matrices of the j-th spin site. To fix our energy scale we set J = 1 and work with the reduced variables h and γ. We will add an additional parameter φ to our Hamiltonian that corresponds to a rotation of all spins around the z-axis by an angle of φ/2. We apply this rotation with the unitary transformation U = ∏_j e^{−iφ σ^z_j /4}, H → U H U†. We assume periodic boundary conditions, σ^α_{N+1} = σ^α_1. The solution of this model is somewhat convoluted and involves Jordan-Wigner, Fourier and Bogoliubov transformations. We will not solve the model here, but the interested reader may find a modern version of the solution in [27]. The mapping to fermions yields a unique ground state that can be represented using a tensor product of Bloch vectors with polar angle θ_k and azimuthal angle φ; see [8] for a detailed description of this ground state. The Brillouin zone of this system is k = 2πn/N, n = 1, 2, . . . , N, or equivalently k = 2πn/N − π. For simplicity, we assume N is an even number, so that every k > 0 is given by k = πn/(N/2), for n = 1, 2, . . . , N/2. These definitions suggest a graphic representation for the ground state |Ω(h, γ, φ)⟩ as a loop in the xy-plane. If we interpret the energy E_k as a distance from the origin of the xy-plane and θ_k as its angle from the x-axis, we find that the allowed energies lie on the ellipse x(k) = h − cos k and y(k) = γ sin k.

Figure 1: Ellipse representation of the anisotropic TFIM ground state. The ellipse is parametrized counterclockwise whenever γ > 0. The winding number |θ_π − θ_0|/π determines whether the Hamiltonian is in the paramagnetic or ferromagnetic phase.
The allowed energies depend on the values of h and γ. There are a few combinations of h and γ that are important (see Fig. 1). When h = 1 the ellipse touches the origin. At this point, our model is a gapless theory. The critical line h = 1 separates the ferromagnetic (h < 1) and paramagnetic (h > 1) phases. In the ferromagnetic region, the ellipse also touches the origin when γ = 0. This is an example of an anisotropic phase transition between a ferromagnet aligned along the X direction (γ > 0) and a ferromagnet aligned along the Y direction (γ < 0).
We can associate a topological number to the ferromagnetic and paramagnetic phases: the winding number of the ellipse (x(k), y(k)) with respect to the origin. A winding number of 1 indicates a ferromagnetic ground state, while a winding number of 0 indicates a paramagnetic one. An analysis of these shapes and their topological properties can be found in [35]. These shapes are widely used in the study of extended TFIMs; see, for example, [36]. Now that we have a ground-state manifold, let us compute its metric. A few properties of |Ω_k⟩ simplify the computation: first, ⟨∂_µ Ω_k|Ω_k⟩ = 0 and, second, Re[⟨∂_φ Ω_k|∂_µ Ω_k⟩] = 0, where µ, ν = h, γ. Using Fig. 1, it is straightforward to find explicit expressions for the components of the metric and the corresponding expressions for the Berry curvature. These expressions may be evaluated by solving six integrals in the thermodynamic limit, and the results have been widely studied in recent years, e.g. [4,8,27,31]. However, let us note that not all sums are independent, and it turns out we only need to evaluate three of them in the thermodynamic limit (N → ∞), where E(k) = E_k.
The last sum is a complicated expression. It corresponds to the ground-state energy of the model, and we need it to compute the components of the Berry curvature. The authors of [37] found this expression and showed that, despite its appearance, it is a smooth function along the line h² + γ² = 1. Let us evaluate the components of the metric tensor. We divide the components of the metric by the system size, g → g/N, and take the thermodynamic limit N → ∞. We focus on the Riemannian structure defined by g, so we do not evaluate the components of the Berry curvature explicitly; however, these can be derived from S_3. The components of the metric tensor we present here, and the ones derived in other papers, e.g. [8], differ by a factor of 2. This depends on the convention used for the metric tensor: we work with the trace product, whilst many authors prefer to work with half of the trace product.
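As a numerical companion to these expressions, the metric components can also be obtained directly from the Bloch-vector form of the ground state, without evaluating the sums analytically. The sketch below assumes the standard Bogoliubov-angle parametrization consistent with the ellipse picture x(k) = h − cos k, y(k) = γ sin k; it uses the half-trace normalization, so values should be doubled to match the trace-product convention used in the text, and the finite-difference step and system size are arbitrary choices.

```python
import numpy as np

def bloch_state(h, gamma, phi, k):
    """Per-mode ground state as a two-component Bloch spinor.
    Assumes tan(theta_k) = gamma*sin(k)/(h - cos(k)), consistent with the
    ellipse picture x(k) = h - cos k, y(k) = gamma * sin k."""
    theta = np.arctan2(gamma * np.sin(k), h - np.cos(k))
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def quantum_metric(h, gamma, phi, N=400, eps=1e-5):
    """Quantum metric g_{mu nu} (mu, nu = h, gamma, phi) of the product
    ground state, per site, from finite-difference quantum geometric tensors.
    Half-trace normalization: multiply by 2 for the trace-product convention."""
    ks = np.pi * np.arange(1, N // 2 + 1) / (N // 2)   # k > 0 modes
    x0 = np.array([h, gamma, phi])

    def dstate(mu, k):
        dx = np.zeros(3)
        dx[mu] = eps
        return (bloch_state(*(x0 + dx), k) - bloch_state(*(x0 - dx), k)) / (2 * eps)

    g = np.zeros((3, 3))
    for k in ks:
        u = bloch_state(h, gamma, phi, k)
        for mu in range(3):
            for nu in range(3):
                du, dv = dstate(mu, k), dstate(nu, k)
                q = np.vdot(du, dv) - np.vdot(du, u) * np.vdot(u, dv)
                g[mu, nu] += q.real
    return g / N

# Example: metric per site at a point inside the ferromagnetic region.
print(quantum_metric(h=0.5, gamma=0.8, phi=0.0))
```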
Hidden symmetries and Killing vector fields
Let us focus momentarily on the ferromagnetic sector of the ground-state manifold (|h| < 1). A simple coordinate transformation to new variables u, v ∈ (−π/2, π/2) reveals a hidden symmetry in the model. Remarkably, the transformed metric is independent of the variable u, meaning that ∂_u is a Killing vector field on the ferromagnetic sector of the ground-state manifold. Note that the vector field ∂_φ is also a Killing vector field on this sector, and in fact of the entire ground-state manifold. This is no surprise, and the reason is quite simple: the spin rotation U = ∏_j e^{−iφ σ^z_j /4} is the unitary operator generated by A_φ = Σ_j σ^z_j, and this generator of the transformation is also the generator of the vector field ∂_φ ρ(x).
Since A φ (x) = A φ is a constant AGP, we conclude that ∂ φ ρ must be a Killing vector field on the ground-state manifold (see Eq. 22). Eq. 49 is perhaps the most important result of this paper. It is striking that this Killing vector field exists, since the transformation u → u + a, for a constant, is not a symmetry of the Hamiltonian. It changes the ground-state and its energy. Moreover, the Killing vector is confined to the ferromagnetic part of the ground-state manifold, and the symmetry is lost once we cross the critical line at h = 1.
Since both Killing vector fields ∂_u and ∂_φ correspond to partial derivatives, they commute, [[∂_u, ∂_φ]] = 0, and their Lie algebra is that of the abelian group (R², +). Here, [[ , ]] is the commutator of differential operators, not to be confused with the matrix commutator [ , ]. The paramagnetic region of the ground-state manifold has only one Killing vector field, ∂_φ, and its algebra is therefore isomorphic to the abelian algebra of the group (R, +). The fact that we can associate a Lie algebra to the different phases of matter suggests the possibility of a Bianchi-based classification of the different quantum phases of matter.
Critical lines and RG flows
Near the Ising phase transition at |h| = 1, the low-energy TFIM is effectively described by a theory of Majorana fermions whose mass gap is proportional to |h − 1|. The arguments of Venuti and Zanardi [5] imply that g_hh ∼ |h − 1|^{−1}, whilst g_µν ∼ 1 for the rest of the components. This argument is based on a simple scaling analysis of the operators associated with the deformations of x^µ. More elaborate arguments, such as the ones presented in [38], give a relationship between Renormalization Group flows, homothetic vector fields and the scaling properties of the quantum metric tensor. However, we argue that this information alone is not enough to determine the Killing vector fields of the ground-state manifold. We can immediately see this from the exact expression for the metric tensor. Close to the critical line, all the components of the metric tensor coincide except the cross term g_hγ. This term is zero in the ferromagnetic manifold and is non-zero (but not divergent) in the paramagnetic manifold. This change alone is enough to spoil the symmetry and prevents the vector field ∂_u from being a Killing vector field, even approximately, in the paramagnetic manifold. That is, knowing that g_µν ∼ 1 is not enough information to fix the isometries of the manifold.
Geodesics
We have two Killing vector fields in the ferromagnetic manifold, each associated with a conserved charge along geodesics. Together with the arc-length parametrization condition, we have three first-order differential equations that suffice to solve for the geodesics of the manifold. Note that we do not have to consider the geodesic equations themselves; the symmetries give us enough constraints. The conserved charges can also be written in terms of the physical variables h and γ. These conserved quantities have also been found in [28] and in [29] by using the Euler-Lagrange equations of motion. To understand these equations better, we need input from numerical solutions. Fig. 2 shows four geodesic solutions. Solutions are generically confined to one of the ferromagnetic regions of the ground-state manifold unless there is fine tuning involved, Q_φ = 0. When Q_φ ≠ 0, γ(s) > ε for some positive ε; we prove this statement below. Hence, solutions do not usually touch or cross the critical line at v = 0 or γ = 0. However, they always touch, and cross, the critical lines at u = ±π/2 or |h| = 1.

Figure 2: Four geodesic solutions in the ferromagnetic sector. Note that only the green curve, with Q_φ = 0 and constant φ(s), probes the two ferromagnetic phases. All solutions eventually touch the h = ±1 planes.
If we restrict ourselves to the domain where the functions u(v) and φ(v) are well defined, we can solve for the derivatives u′(v) and φ′(v). We are interested in studying the behaviour of geodesics that cross the phase transition at v = 0. The values of v where the derivatives diverge correspond to the maximum and minimum values of v along a geodesic. Note that csc v → ∞ when v → 0, so a geodesic crossing the critical line v = 0 must have Q_φ = 0. Fig. 2 shows an example of a solution with Q_φ = 0. A few references [28][29][30] have suggested that the geodesics inside the hγ-plane, i.e. solutions with Q_φ = 0, do not cross the critical line γ = 0 but only touch it. However, if we have two geodesics, one with γ_1(s) ≥ 0 and the other with γ_2(s) ≤ 0, we can connect them as long as h_1|_{γ=0} = h_2|_{γ=0}. The extension of a geodesic into the upper plane is uniquely specified by demanding that the lower-plane geodesic has the same conserved charge Q_u. That is, geodesics with Q_φ = 0 do cross the γ = 0 critical line and connect the two ferromagnetic phases.
Interestingly, at v = 0 the derivative u′(v) vanishes independently of the value of Q_u. This means that, at the critical line, a geodesic is not uniquely specified by its position and its velocity, and we need to take into account higher derivatives. We can see this behaviour explicitly by Taylor expanding the geodesic path solution around v = 0. Recall that u = arcsin(h) and v = sgn(γ)√|γ| + O(|γ|^{3/2}). The intuition behind this behaviour is quite simple. Near the phase transition, the distance between two points is 8∆s² ≈ (∆v)² + cot²v (∆u)². Since cot²v → ∞ when v → 0, ∆u must go to zero if we want to have a small value of ∆s after crossing the critical line.
Near the Ising limit
A detailed analysis of the paramagnetic ground-state manifold is challenging due to the complexity of the metric. Part of the complexity lies in the non-vanishing cross term g_γh. Due to this term, the Killing vector field ∂_u is lost during the phase transition. Even the conserved charge Q_φ = g_φφ φ′(s) has a complicated structure. To simplify the metric, we will restrict ourselves to the parameters (h, φ) and work with a constant value of γ (i.e. dγ = 0). We will refer to this manifold as the hφ-ground-state manifold. We will work near the Ising limit γ → 1.
First, let us perform the coordinate transformation h → csc ψ to simplify the metric in the paramagnetic manifold |h| > 1. Here, ψ ∈ (0, π/2). Although the resulting metric looks messy, another change of variables, ψ → β − (3/8)(γ − 1) sin 2β, for β ∈ (0, π/2), reveals that this is the metric of a 2-sphere. The full coordinate transformation reads h → csc β + (3/4)(γ − 1) cos β cot β, for h > 1. The 2-sphere is a maximally symmetric space with three Killing vector fields. The Lie algebra of these Killing vector fields is the familiar so(3) algebra, which corresponds to a Type IX Lie algebra in the Bianchi classification.
On the other hand, the ferromagnetic part of the hφ-ground-state manifold is a cylinder, where u = arcsin(h) and γ is constant but not necessarily close to one. Again, this is a maximally symmetric space with three Killing vector fields. The Lie algebra of these Killing vectors is the algebra of the isometries of the Euclidean plane, e(2). In the Bianchi classification, this is a Type VII_0 Lie algebra. Note that, despite having restricted ourselves to a hyperplane of the original ground-state manifold of the anisotropic TFIM, we still find that different quantum phases of matter correspond to different algebras.
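For reference, in standard angular coordinates (β, φ) on the round 2-sphere and locally flat coordinates (u, y) on the cylinder (these may differ from the exact coordinates used above by the rescalings discussed in the text), the Killing vector fields and their algebras take the familiar forms:

```latex
% Round 2-sphere, ds^2 \propto d\beta^2 + \sin^2\!\beta\, d\varphi^2 :
\xi_1 = \partial_\varphi, \quad
\xi_2 = \sin\varphi\, \partial_\beta + \cot\beta \cos\varphi\, \partial_\varphi, \quad
\xi_3 = \cos\varphi\, \partial_\beta - \cot\beta \sin\varphi\, \partial_\varphi,
\qquad [[\xi_i, \xi_j]] = \epsilon_{ijk}\, \xi_k \;\; (\mathfrak{so}(3),\ \text{Bianchi IX}).

% Flat cylinder (locally), ds^2 \propto du^2 + c\, d\varphi^2, with y = \sqrt{c}\,\varphi :
\eta_1 = \partial_u, \quad \eta_2 = \partial_y, \quad \eta_3 = u\,\partial_y - y\,\partial_u,
\qquad [[\eta_1,\eta_2]] = 0, \;\; [[\eta_3,\eta_1]] = -\eta_2, \;\; [[\eta_3,\eta_2]] = \eta_1
\;\; (\mathfrak{e}(2),\ \text{Bianchi VII}_0).
```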
Geodesics
Near the Ising point γ ≈ 1, the metric of the hφ-ground-state manifold is that of a cylinder for |h| < 1 and of a 2-sphere for |h| > 1. So, in the ferromagnetic manifold, geodesics are linear functions of the type φ = mu + b, for some constants m and b. In the paramagnetic manifold, geodesics are great circles, characterized by the implicit equation cot β = q cos(φ + φ_o), for some other constants q, φ_o ∈ R. The matching conditions at the boundary give a relationship between the two sets of constants.
These conditions guarantee that geodesics are differentiable functions with a continuous first derivative. We can visualize the geodesics of the hφ-ground-state manifold using an isometric embedding of the plane (h, φ) into R 3 . This technique has been used to visualize the topological properties of the TFIM [8] and it is also useful to visualize geodesics. Taking advantage of the rotational symmetry we parametrize our manifold as a surface of revolution Our task now is to find the functions f (h) and g(h) such that the induced metric corresponds to our metric. We find the following system of differential equations for h > 1 and Note that having a surface of revolution simplifies the computation and gives us a direct result for f (h). For h < 1 we have the set of equations Continuity in f (h) requires that This condition can only be fulfilled if γ = 1. For other values of γ an isometric and continuous embedding into R 3 does not exist (at least as a surface of revolution). Solving the differential equations for γ = 1 we find that The embedding corresponds to a cigar-like surface made from a cylinder with two spherical caps. See Fig. 3.
Geodesics and energy fluctuations
The ideas presented in this paper may appear somewhat abstract; however, they are very physical. For example, we can apply these concepts to develop better ground-state preparation protocols [9,10,39]. Consider a parameter-dependent Hamiltonian H(x^µ) and imagine that we have a system in the ground state |Ω(x^µ_i)⟩ of H(x^µ_i). We can then change the parameters to some final value x^µ_f. Usually, we want to do this in a finite amount of time T. To increase our chances of ending in the ground state |Ω(x^µ_f)⟩, we would like to minimize energy fluctuations as much as possible. The question is: given a fixed time T, how should we change the parameters x^µ to minimize energy fluctuations? The answer is to take the geodesic path x^µ(t).
For now, let us examine protocols that stay as close as possible to the ground-state manifold. Anandan and Aharonov [40] pointed out that the speed of the evolution of a pure state evolving via the Schrödinger equation is proportional to the uncertainty of its energy, where ∂_t ρ(t) = i[ρ(t), H(t)] and ∆H(t)² = Tr(ρH²) − [Tr(ρH)]². In other words, the velocity v = ds/dt of the state is set by the energy fluctuations ∆H(t). The distance s(t) in this relation is the abstract distance in the projective Hilbert space defined by the Fubini-Study metric. Since the geodesic paths on the ground-state manifold minimize this distance, these are also the paths that minimize the integral over the energy fluctuations. One might worry that our argument is too sketchy, but this is indeed the correct answer. A proof of this statement is given in [9,10]. Fig. 4a shows numerical results supporting this argument. We solved the Schrödinger equation for the evolution of the spin chain under a slowly changing Hamiltonian. To improve the results, we evolved our system using a protocol whose velocity is zero at the initial, critical and final times. We can see this in Fig. 4a, where points separated by constant time intervals accumulate at the critical line and at the beginning and end of the protocol.
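The quantitative statement behind this argument, for an evolution that remains close to the ground-state manifold and in a common normalization (the precise prefactor depends on the metric convention; the trace-product convention used in the text differs from the half-trace one by a factor of 2), can be summarised as

```latex
\frac{ds}{dt} \;\propto\; \frac{\Delta H(t)}{\hbar}
\qquad\Longrightarrow\qquad
\int_0^T \Delta H(t)\,dt \;\propto\; \hbar \int_{\text{path}} ds \;\ge\; \hbar\, \ell_{\min},
```

so the time integral of the energy fluctuations is bounded from below by the geodesic length between the initial and final ground states, and the bound is approached along geodesic protocols.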
We implement this by using functions of the type A exp{B[1 − cos³(t/T)]} − 1. Protocols with this property suppress the fast-oscillating terms in the time evolution that contain the initial excitations of the system [10,41]. In Fig. 4b we see that the integral over energy fluctuations approximates, or is larger than, the length of the path traced by the time-evolving ground state |Ω(t)⟩. Since geodesic paths minimize this length, it follows that geodesics are the optimal adiabatic protocols. Note that in some cases the geodesic protocol is well approximated by the linear protocol, as for the yellow trajectories. This is, however, not true for all protocols.
Conclusion
In summary, we have studied the symmetries of the ground-state manifold of the transverse-field Ising model (TFIM), for both the anisotropic case and the Ising limit. Remarkably, some symmetries in the manifold are not visible at the level of the Hamiltonian. For the anisotropic case, we encountered a hidden symmetry in the ferromagnetic sector of the manifold. This symmetry is related to a change in the magnitude of the magnetic field. The transformation modifies the energy and the states of the system. However, it acts as an isometry on the ferromagnetic sector of the manifold. From this result, we proposed a classification of the different quantum phases of a parameter-dependent Hamiltonian based on the Lie algebra of the related Killing vector fields. We found that the ferromagnetic manifold has two Killing vector fields with an abelian Lie algebra. The paramagnetic manifold has only one Killing vector field and a trivial Lie algebra. We argue that a simple scaling analysis near the critical lines |h| = 1 is not enough to determine the Killing vector fields of the metric tensor, since the regular terms in the metric play an important role in defining the isometries of the manifold.
We repeated the analysis for the Ising limit of the anisotropic TFIM, and this resulted in yet more symmetries. The ferromagnetic and the paramagnetic manifolds are both maximally symmetric spaces with three Killing vector fields each. The algebra of the ferromagnetic manifold corresponds to the Lie algebra of the Euclidean isometries e(2) and is a Type VII_0 algebra in the Bianchi classification. The Lie algebra of the paramagnetic manifold is the familiar so(3) algebra and is a Type IX algebra in the Bianchi classification.
We took advantage of these symmetries and computed the geodesics of the ground-state manifold for the cases in which enough symmetries were available. Then, we analyzed the behaviour of these solutions near critical lines. We found that some geodesics are confined to specific regions of the ground-state manifold, but there are always solutions that cross the critical lines. These geodesics have several applications in adiabatic quantum preparation protocols as these are the paths that minimize the integral over energy fluctuations.
A An expression for the Christoffel symbols
When working with an embedding in a flat manifold, like the set of density matrices in C^{n×n}, the covariant derivative may be computed by first taking the partial derivative of the vector field and then orthogonally projecting the result onto the tangent space of the embedding; note that we only keep the tangent components of the partial derivative. This result is known as the Gauss formula. By taking the dot product with another tangent vector t_µ on both sides of this equation, we get a closed formula for the Christoffel symbols, Γ^µ_{νλ} = g^{µδ} Tr(t_δ ∂_ν t_λ). Here we take the dot product with respect to the full vector ∂_ν t_λ, and not just its tangent projection, because the normal components, by definition, vanish. Instead of tangent vectors, we can also express the formula for the Christoffel symbols in terms of bras and kets. Let ρ = |ψ(x)⟩⟨ψ(x)|, then
"Physics"
] |
Hydrogen-deuterium exchange mass spectrometry captures distinct dynamics upon substrate and inhibitor binding to a transporter
Proton-coupled transporters use transmembrane proton gradients to power active transport of nutrients inside the cell. High-resolution structures often fail to capture the coupling between proton and ligand binding, and conformational changes associated with transport. We combine HDX-MS with mutagenesis and MD simulations to dissect the molecular mechanism of the prototypical transporter XylE. We show that protonation of a conserved aspartate triggers conformational transition from outward-facing to inward-facing state. This transition only occurs in the presence of substrate xylose, while the inhibitor glucose locks the transporter in the outward-facing state. MD simulations corroborate the experiments by showing that only the combination of protonation and xylose binding, and not glucose, sets up the transporter for conformational switch. Overall, we demonstrate the unique ability of HDX-MS to distinguish between the conformational dynamics of inhibitor and substrate binding, and show that a specific allosteric coupling between substrate binding and protonation is a key step to initiate transport.
Structural biology of membrane proteins has evolved at an increasing pace over the past few years 1 . The more high-resolution structural information becomes available, the clearer it appears that complementary dynamic information is required to understand the mechanism of a protein of interest 2 . Energy coupling in secondary transporters is a good example of the type of information that static structures cannot directly provide about molecular mechanisms 3 . Although it is clear that these transporters alternate between different conformations, ranging from open to the cytoplasm (inward facing, IF) to open to the extracellular medium (outward facing, OF), the molecular chain of events leading to these transitions is difficult to capture 4 . Specifically, the allosteric networks linking ion and substrate binding, and the ensuing protein conformational changes, are hard to deduce from structural snapshots 5 . Thus, linking structure to mechanism at a molecular level requires characterizing the conformational dynamics of membrane proteins 6 .
Among the techniques available to study conformational changes, hydrogen-deuterium exchange mass spectrometry (HDX-MS) is a newcomer for the study of membrane proteins 7 . This technique reports on the exchange of amide hydrogens on the protein backbone in the presence of deuterated solvent at a peptide level of resolution 8 . The main advantage over more established methods such as Förster resonance energy transfer (FRET) and Double Electron Electron Resonance (DEER) is that it does not require covalent labelling of the protein of interest, thus bypassing a lot of the molecular biology work and controls 9 . The method also requires a lower amount of sample compared to other biophysical methods (such as nuclear magnetic resonance or X-ray crystallography) 10 and tolerates sample heterogeneity and complexity 11,12 . Hydrogen/deuterium (H/D) exchange, however, does not strictly report on distance changes involved in conformational transitions. Rather, it reports on the stability of the H-bonds of the amide backbone, which is mainly conditioned by two parameters: local structural dynamics and solvent accessibility 13,14 . We have shown previously that, for a series of transporters, the changes in solvent accessibility can be correlated with conformational changes in most cases 15 . This is particularly helpful for understanding the molecular mechanism of transporters as they switch between OF and IF conformations 16 . The conformational effect of ligand binding, mutation of conserved residues, or both, can be tested in a systematic way by comparing the H/D exchange pattern in different conditions, in so-called differential (Δ) HDX-MS experiments. Assuming that no major changes in the backbone stability of transporters occur when introducing either the ligand or a mutation, ΔHDX-MS offers a quick and easy readout of the conformational transition between different states.
The symporter XylE of the ubiquitous Major Facilitator Superfamily (MFS) is a bacterial homologue of human glucose transporters GLUTs 1-4 17 , with a sequence similarity of 50%. The xylE gene was first isolated in 1987 18 . The expressed protein was shown to use the proton-motive force to catalyse xylose translocation across the membrane of Escherichia coli 19 . The majority of bacterial sugar transporters rely on ion gradients to energize transport 20 . Mammalian GLUT transporters, in contrast, are facilitators. This difference in sugar transport energetics between humans and bacteria appears to arise from the scarcer availability of sugar for bacteria, compared to humans, whose sugar levels in the blood are in the mM range 21 . Despite this difference, several residues and structural motifs are strictly conserved from XylE to GLUTs 1-4, critical either for substrate recognition or to enable structural rearrangements 19 . The crystal structure of XylE has been solved in multiple conformations: inward-open, inward-occluded and outward-occluded with substrate xylose and inhibitor glucose bound 17,19,22 . The structures of the xylose-bound and glucose-bound protein are virtually identical, with only minor differences in the interaction network at the binding site 17 . This observation raises questions on how the transporter discriminates between substrate and inhibitor and how the potential differences are translated into conformational changes. Despite advances, the coupling between xylose binding, proton binding and conformational changes is not understood. Two transmembrane acidic residues located away from the binding pocket are likely candidates for the protonation step: D27 on helix 1 and E206 on helix 6 [23][24][25] . Biochemical assays have identified D27 as a critical component for active transport, with mutations at this site abolishing function 17,22,23 . Neighbouring residue E206 has been suggested to play a role in modulating the pK a of D27, to regulate its ability to bind and release a proton 22,24 . Binding assays carried out on wild-type (WT) 17,24 and D27N mutant 23,24 show that they both bind xylose with a similar affinity (K_d ≈ 0.3 mM).
In a previous study, we carried out an extensive characterization of the conformational dynamics of XylE by HDX-MS, to establish the mechanistic role of a conserved network of charged residues located on the intracellular side 11 . For benchmarking purposes, we locked the transporter in an OF conformation by replacing a conserved glycine necessary for the structural transition by a bulky tryptophan. This work provided a set of ΔHDX maps associated with transitions toward either the IF or OF states and allowed us to identify regions of the protein that can be used as conformational reporters. Peptic peptides from these regions are used as a fingerprint to guide interpretation of the ΔHDX experiments performed in the present study. Representative peptides from this benchmark experiment are provided in Supplementary Fig. 1. Here we performed HDX-MS measurements of the proton-coupled symporter XylE in the presence of its substrate xylose, inhibitor glucose and mutations at candidate protonation sites D27 and E206. The systematic HDX analysis coupled to molecular dynamics (MD) simulations identifies differences in structural dynamics and allosteric events between xylose and glucose binding, providing a rationale for inhibitor vs. substrate distinction.
Results
To dissect the roles of proton and substrate binding, all possible combinations between the WT and mutants mimicking protonation (D27N, E206Q and E206Q&D27N), in the apo- and substrate-bound states, were tested (Fig. 1a, b). At least three biological replicates were used for each ΔHDX-MS experiment comparing two different protein states, as recommended for this type of experiment 26 . Heat maps of Relative Fractional Uptake per amino acid and Woods plots showing peptides with significant ΔHDX are available as Supplementary Figs. S2 and S3. As the peptides generated by enzymatic digestion can differ between biological replicates, we used Deuteros 27,28 to identify peptides showing a significant difference (confidence interval of 99%) in deuterium uptake for each individual ΔHDX-MS experiment, and carried out an extra curation step to retain only the peptides present in two or more of the replicates. Peptides containing the mutation(s) were excluded from the analysis. Sequence coverage of >90% was obtained in most cases (Supplementary Fig. S3), allowing us to monitor the dynamics of nearly the entire protein.
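A minimal sketch of this curation logic is shown below. It is not the Deuteros workflow itself; the column names, the per-peptide significance flag and the replicate identifiers are assumptions about how such a peptide table might be organised.

```python
import pandas as pd

def curate_peptides(df, min_replicates=2):
    """Keep peptides flagged as significant (99% CI) by the per-experiment
    analysis and present in at least `min_replicates` biological replicates.

    Expected (assumed) columns:
      'sequence'    - peptide sequence
      'replicate'   - biological replicate identifier
      'significant' - boolean flag from the significance test
    """
    significant = df[df["significant"]]
    counts = significant.groupby("sequence")["replicate"].nunique()
    keep = counts[counts >= min_replicates].index
    return significant[significant["sequence"].isin(keep)]

# Example with a toy table spanning three replicates.
toy = pd.DataFrame({
    "sequence": ["GFSAAL", "GFSAAL", "LMWTRE", "LMWTRE", "LMWTRE", "AVDKYQ"],
    "replicate": [1, 2, 1, 2, 3, 1],
    "significant": [True, True, True, False, True, True],
})
print(curate_peptides(toy))
```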
Protonation of D27 controls the conformational transition. We first set out to understand the effect of protonation on the dynamics of XylE in the absence of substrate or inhibitor. To this end, we carried out ΔHDX-MS experiments comparing the WT protein with the mutants. We observe that the protonation mimics D27N and E206Q cause an overall decrease in deuterium uptake on both the extracellular and intracellular sides compared to the WT protein (Fig. 2a). No significant exchange is observed in the transmembrane regions, which are mostly solvent inaccessible. Interestingly, the double-mutant E206Q&D27N shows a decrease of deuterium uptake on the extracellular side coupled to an increase on the intracellular side; this corresponds to a ΔHDX pattern typical of the transition of the transporter toward an IF state (Fig. 2b) 11 . To understand the sequence of events enabling the transition to the IF state, we carried out ΔHDX-MS experiments comparing the single to the double mutants. By comparing the double-mutant E206Q&D27N to the single-mutant E206Q, we found a ΔHDX pattern typical of a transition toward an IF state (Fig. 2c). By contrast, the ΔHDX of E206Q&D27N vs. D27N showed only minor differences in deuterium uptake (Fig. 2d). Taken together, these results suggest that D27 protonation is the main driver of the conformational transition to the IF state, as long as a proton is already present on E206. To confirm that the ΔHDX observed in our experiments was the result of conformational changes and not of changes in global stability caused by the mutations, we performed thermal unfolding experiments monitored by circular dichroism (CD) measurements under a temperature gradient 29,30 . A decrease in global stability often stems from an increase in local unfolding events, which in turn affects H-bond stability, thereby leading to an increase in H/D exchange unrelated to an OF/IF conformational transition 31 . No significant change in stability was observed between the WT and the mutants below 50°C (Supplementary Fig. S4), which reassured us that the changes observed by HDX were mainly caused by conformational changes.
Substrate or inhibitor binding favours the OF state. We then investigated the role of the substrate xylose and inhibitor glucose on the conformational equilibrium of XylE. The protein and mutants were first incubated with 750 µM of xylose and the effect was followed by HDX-MS. The comparison between the protein in the presence and absence of xylose consistently shows that the presence of the substrate leads to a ΔHDX pattern typical of a transition toward an OF conformation, an increase in deuterium uptake on the extracellular side coupled to a decrease in deuterium uptake on the intracellular side ( Fig. 3a and Supplementary Fig. S5). We performed similar experiments with the inhibitor glucose (750 µM). We observed that glucose also stabilizes the OF conformation, regardless of the presence of mutations ( Fig. 3e and Supplementary Fig. S5). We thus observe a systematic shift of the conformational equilibrium toward the OF state in the presence of either xylose or glucose. This transition is observed for all XylE variants, suggesting that substrate binding favours the OF conformation regardless of the prior protonation state of D27 or E206 and the apo conformational ensemble of the transporter. These results are in line with the observed OF states of the ligandbound structures captured by X-ray crystallography 17 . However, this raises the question about how the transition of the loaded transporter toward the IF conformation occurs.
Allosteric coupling between D27 protonation and substrate binding. Next, we went on to characterize how the combined effect of substrate binding and protonation mimics impacts the conformational dynamics, to emulate a fully loaded transporter. We carried out ΔHDX-MS experiments of the mutant proteins vs. the WT, in the presence of xylose. Strikingly, we observed that D27N vs. WT in the presence of xylose (Fig. 4a) presented a different ΔHDX pattern compared to the apo experiment (Fig. 2a). The mutation leads to an increase in deuterium uptake on both sides of the protein, a pattern different from all the other ΔHDX patterns observed so far. This increased uptake on both sides of the transporter suggests that there is a decrease in H-bond stability across the entire protein, suggesting that the combined presence of the mutation and the substrate leads to increased conformational heterogeneity. We hypothesize that D27N + xylose, compared to D27N alone, favours transition-competent conformations, where transition refers to the conformational change that allows the transporter to move between the OF and IF states. A similar ΔHDX pattern was observed for D27N minus E206Q and for the double-mutant E206Q&D27N minus E206Q, but not for E206Q minus WT, suggesting that the coupling between substrate binding and protonation is specific to D27 (Supplementary Fig. S6). We then performed the same experiment comparing D27N with the WT in the presence of the inhibitor glucose. To our surprise, this time we observed a pattern consistent with an OF conformation, suggesting that glucose binding tips the conformational equilibrium even more toward the OF state (Fig. 4b). This comparison between xylose and glucose indicates that only a bona fide substrate can lead to the conformational transition. Overall, our results suggest that D27 protonation is the trigger for conformational cycling of the protein, while protonation of E206 has little effect. The shift toward a "transition-competent conformational ensemble" demonstrates that a specific allosteric coupling exists between the mutation/protonation of D27 and binding of the substrate xylose.
[Figure 1 caption: a Structure of XylE 17 with three charged residues of interest in the proton-binding site (D27, R133 and E206) shown with their inter-residue distances. b Twenty-eight combinations of eight different protein states of XylE WT and three mutants (D27N, E206Q and E206Q&D27N) in the presence or absence of the substrate (xylose), and four combinations of XylE WT and mutant D27N in the presence or absence of the inhibitor (glucose), were studied in this work. All structural representations were generated using PyMol. Mutated residues are indicated by a star. Tables reporting experimental details for each ΔHDX-MS experiment are available as Supplementary Data File 1.]
MD simulations suggest protonation leads to substrate destabilization. To understand the allosteric interplay between the D27 protonation state and xylose binding, we ran all-atom MD simulations on the ligand-bound and apo structures. We calculated the intrinsic pKa values of the residues D27 and E206 in the crystal structures using PROPKA 32. The pKa of D27 ranges from 3.64 to 4.35, and that of E206 from 11 to 12.13, depending on the conformation in which the protein was crystallized (Supplementary Fig. S7). The intrinsic pKa values of these residues suggest that in the conformations captured in the crystal structures, XylE is protonated at E206 and deprotonated at D27.
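To make the pKa argument concrete, a minimal sketch of the Henderson-Hasselbalch relation is given below; it evaluates the fraction of a titratable acidic side chain that carries a proton at pH 7.0 for pKa values in the ranges quoted above. The representative pKa numbers passed in are illustrative picks from those ranges, not additional calculations.

```python
def protonated_fraction(pka: float, ph: float = 7.0) -> float:
    """Fraction of an acidic side chain in the protonated (neutral) form,
    from the Henderson-Hasselbalch equation: [HA] / ([HA] + [A-])."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for name, pka in [("D27 (pKa ~4.0)", 4.0), ("E206 (pKa ~11.5)", 11.5)]:
    print(f"{name}: {protonated_fraction(pka):.6f} protonated at pH 7.0")
# D27 is essentially deprotonated (<0.1%); E206 essentially protonated (>99.99%).
```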
We performed MD simulations of XylE embedded in a 1-palmitoyl-2-oleoyl phosphatidylethanolamine (POPE) lipid bilayer, with residue D27 either unprotonated or protonated and E206 always protonated, using either the xylose-bound or glucose-bound structure. We clearly observe that in the case of unprotonated D27, xylose remains stably bound, essentially retaining the crystal structure pose (Fig. 5a, i-ii). In contrast, xylose adopts multiple rotameric states in the D27-protonated state (Fig. 5b, i-ii), suggesting that xylose-binding stability is conditional on the absence of a proton on D27. Furthermore, the instability of xylose is facilitated by the increased solvation of the substrate-binding site (Fig. 5b, iii). In contrast, the glucose-bound simulation with D27 protonated retains the crystallographic pose, with essentially a similar pattern of substrate stability and solvation as the xylose-bound, D27-unprotonated state (Fig. 5c).
By looking closer at the effect of solvation of the substrate pathway and binding site, we observed that TM1 (bearing residue D27) undergoes a decrease in helical content that is markedly more pronounced in the presence of xylose and protonated D27, compared to either the xylose-bound/D27-unprotonated or glucose-bound/D27-protonated cases (Fig. 6a). This decrease in helicity is correlated with the increased solvation: the water-mediated bonds between TM1 and specific, ordered water molecules disappear as more water molecules come in. Consequently, residue T28 (next to D27) reorients its methyl group toward the binding site, resulting in an overall decrease of TM1 helicity that propagates and amplifies along the extracellular side of the helix (Supplementary Fig. S8). Furthermore, the importance of TM1 flexibility in regulating the conformational transition is corroborated by the observation of a similar loss in TM1 helicity (residues I31-G33) for XylE in the IF-occluded (PDB: 4JA3) and IF-open (PDB: 4JA4) states. To identify the molecular mechanisms leading to such differences between glucose and xylose binding, we carried out a detailed analysis of residue rearrangements happening at the sugar-binding site. The dihedral angles of the residues involved in substrate binding were calculated (Supplementary Fig. S9). Only minor differences between xylose- and glucose-bound states were observed, with the notable exception of residues N294 and Q168. Both residues point away from xylose (Fig. 6b) but towards glucose (Fig. 6c). In contrast, these residues point away from the ligand-binding site in XylE in the IF conformation. The MD predictions corroborate the HDX-MS results at several levels. First, the instability of xylose binding and the increase of water molecules along the substrate pathway observed upon D27 protonation match the global increase in H/D exchange observed. Second, the high calculated pKa value of E206 suggests that this residue is protonated most of the time during HDX-MS experiments carried out at pH 7.0. This explains why the E206Q mutation leads to minor or no changes in ΔHDX-MS experiments carried out in the presence of a substrate (Fig. 4a and Supplementary Fig. S6). Third, the simulations confirm that the coupling between D27 and the substrate-binding site strictly depends on xylose binding, whereas glucose binding does not lead to increased solvation and loss of secondary structure of helix 1. Taken together, this suggests that the combined presence of xylose and a proton on D27 leads to an unstable state, hereafter called the "transition state", which allows the conformational transition underlying transport.
Discussion
As a symporter, XylE binds and co-transports protons alongside its substrate xylose. The coupling between ligand binding and conformational changes is central to active transport, but the molecular determinants leading to the conformational transition are difficult to assess experimentally. Our work demonstrates the capability of HDX-MS to identify the structural signature of such coupling. Combined with predictions from MD simulations, we can decipher the molecular details underlying the interplay between substrate and proton binding. The most striking result of this work shows that the XylE variant D27N leads to a transition state only if xylose is already bound, highlighting an allosteric coupling between the substrate-binding pocket and the charge network. This effect is specific to xylose and shows that the protein can distinguish between substrate and inhibitor. Furthermore, we observe that xylose or glucose binding is protonation independent and favours the OF conformation, in line with the OF ligand-bound structures obtained under basic conditions (pH 9.6) 17. It is worth noting that the mutations we have used as proxies for protonation, while revealing important effects, have their limits. The mutation is permanent, whereas protonation is an equilibrium reaction that depends on solvent accessibility and local pKa values, which are likely to change during the conformational cycle 24. Furthermore, the D27N mutant is known to be functionally inactive, as demonstrated in cell-based uptake assays 23. However, such mutants have already been used successfully to decipher the molecular mechanism of other proton-coupled transporters, such as the MDR transporters AcrB, LmrP, PfMATE and MdfA [33][34][35][36], and identified key structural motifs during the transport cycle. Comparative HDX-MS experiments on proteins harbouring protonation mimics thus appear to be a valuable method to study the molecular mechanism of proton-coupled transporters.
[Figure 6 caption: Helicity content of TM1 and dihedral angle changes at the substrate-binding site. a The decrease in helical content is more pronounced in the protonated D27 + xylose-bound case (red) than with unprotonated D27 (green) or when bound to glucose (blue). b, c Differences in the orientation of N294 and Q168 based on the Chi 1 and Chi 2 dihedral maps generated from the combined trajectory of the two replicates. Protein and ligand are rendered from an equilibrated snapshot of the MD trajectory at t > 100 ns. In b, N294 is freely moving and residue Q168 points down and away from xylose, whereas in c, residue N294 is locked and Q168 points up and towards glucose.]
We propose the following transport cycle (Fig. 7): in its resting state, the WT transporter is protonated at residue E206 most or all of the time, in agreement with the high pK a values observed for E206 in both outward and inward conformations. Binding of xylose to the protonated transporter stabilizes the OF conformation and facilitates solvent accessibility to residue D27 (Figs. 7a and 5). The protonation of D27 when xylose is bound leads to a high-energy transition state, which initiates the conformational switch. This transition state is accessible only through allosteric coupling between D27 and the substrate-binding site, and such coupling is exquisitely sensitive to xylose binding. Under transport conditions (e.g., in the presence of a proton gradient), XylE can then switch toward the IF conformation and release substrate and proton in the cytosol. In contrast, binding of an inhibitor such as glucose further stabilizes the OF conformation, effectively trapping the transporter in an energy well and preventing the conformational cycling required for transport (Fig. 7b). The identification of D27 as the driver of the conformational transition correlates with the known role of equivalent residues for other proton-coupled MFS transporters such as LacY (E325), LmrP (E327), MdfA (D34) and YajR (E320) 37 . This suggests a conserved mechanism of action among proton-coupled symporters of the same structural family.
We surmise that, alongside the resolution revolution, the development of tools and workflows capable of answering mechanistic questions at the molecular level is much needed, and we demonstrate that HDX-MS coupled to MD simulations has a key role to play.
Methods
XylE expression and purification. XylE was overexpressed in E. coli BL21-AI (DE3) (Invitrogen), which was transformed with the xylE gene, with or without the chosen mutations, cloned into the (30 µg/ml) kanamycin-resistant pET28-a plasmid (Novagen) modified with a C-terminal ten-histidine tag, and grown in six baffled flasks each containing 1 L of Lysogeny Broth (LB) media at 37°C and 220 r.p.m. to an OD 600 of 0.8. Expression was induced with 1 mM isopropyl-β-D-1-thiogalactopyranoside and 0.1% (w/v) L-arabinose, and growth continued until no further increase of OD 600 was observed. The cells were collected by centrifugation, washed in 200 mL phosphate-buffered saline (PBS) buffer and centrifuged again for 20 min at 4200 r.p.m. in a Beckman JLA-16.250 rotor. The pellet was then resuspended in 50 mL PBS with 10 mM β-mercaptoethanol and 1 cOmplete protease inhibitor tablet and was frozen at −70°C before purification. Cells were defrosted and incubated with 1.5 µL benzonase nuclease (ThermoFisher) for 10 min at room temperature before being passed through a constant cell disruptor at 25 kPsi, 4°C. The ice-chilled membranes were then isolated by ultracentrifugation for 30 min at 38,000 r.p.m. in a Beckman Ti45 rotor at 4°C. Membrane pellets were solubilized for 2 h with mixing in solubilization buffer [50 mM sodium phosphate pH 7.4, 200 mM NaCl, 10% (v/v) glycerol, 20 mM imidazole, 10 mM β-mercaptoethanol, 2% n-dodecyl-β-D-maltoside (β-DDM, Anatrace), 0.1 mM phenylmethylsulfonyl fluoride (PMSF) and EDTA-free protease inhibitor tablet (Roche)] at 4°C. The protein solution was then isolated by centrifugation for another 30 […] Peptides then underwent electrospray ionization in positive ion mode using a Synapt G2-Si mass spectrometer (Waters). Leucine Enkephalin was applied for mass accuracy correction and sodium iodide was used as calibrant for the mass spectrometer. HDMS E data were collected with a 20-30 V trap collision energy ramp. All the isotope-labelling time points were performed in triplicate.
HDX data evaluation and statistical analysis. Acquired reference MS E data were analyzed by PLGS (ProteinLynx Global Server 2.5.1, Waters) to identify the peptic peptides; then all the HDMS E data, including reference and deuterated samples, were processed by DynamX v.3.0 (Waters) for deuterium uptake determination. Peptide filtration and analysis were performed as described before 11. Woods plots were generated using Deuteros software 27,28.
CD measurements. CD thermal denaturation was performed in an Aviv Circular Dichroism Spectrophotometer, Model 410 (Biomedical, Inc., Lakewood, NJ, USA). All samples of XylE were measured at a protein concentration of 0.14-0.17 mg/ml using a cell path length of 1 mm. The sample was heated in 5°C intervals in SEC purification buffer (50 mM sodium phosphate, 10% (v/v) glycerol, 2 mM β-mercaptoethanol, 0.05% β-DDM (Anatrace), 0.1 mM PMSF, pH 7.4) from 25-95°C. Each sample was scanned two times at a fixed wavelength of 222 nm in 1 nm wavelength steps with an averaging time of 1 s. The mean residue ellipticity ([θ] mre ) at 222 nm was used for further analysis and is calculated using the following equation: $[\theta]_{\mathrm{mre}} = \dfrac{100 \times \theta_{\mathrm{obs}} \times \mathrm{MRW}}{c \times d}$, where θ obs is the observed ellipticity in degrees, d is the path length in cm, and c is the concentration in mg/ml. The mean residue weight (MRW) (~110 for most proteins) is calculated as the molecular mass divided by the number of amino acids − 1.
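A minimal numeric sketch of this conversion is shown below; the input values are placeholders rather than measured data, and the formula simply follows the mean-residue-ellipticity relation stated above (observed ellipticity in degrees, concentration in mg/ml, path length in cm).

```python
def mean_residue_ellipticity(theta_obs_deg: float, mrw: float,
                             conc_mg_per_ml: float, path_cm: float) -> float:
    """Mean residue ellipticity in deg*cm^2/dmol from an observed CD signal.

    theta_obs_deg  : observed ellipticity in degrees
    mrw            : mean residue weight (molecular mass / (n_residues - 1))
    conc_mg_per_ml : protein concentration in mg/ml
    path_cm        : cuvette path length in cm
    """
    return 100.0 * theta_obs_deg * mrw / (conc_mg_per_ml * path_cm)

# Placeholder example: 0.15 mg/ml protein in a 0.1 cm (1 mm) cell.
print(mean_residue_ellipticity(theta_obs_deg=-0.010, mrw=110.0,
                               conc_mg_per_ml=0.15, path_cm=0.1))
```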
Molecular dynamics: simulation setup. MD simulations were initiated from either the xylose-bound (PDB ID: 4GBY) or glucose-bound (PDB ID: 4GBZ) state of XylE 17. Protonation states of the titratable residues were assigned based on pKa calculations performed using PROPKA 3.1 at pH 7 (ref. 38). Thereafter, XylE was embedded in a POPE lipid bilayer using the membrane replacement method in CHARMM-GUI 39. The system was solvated with TIP3P water molecules 40. Thereafter, Na+ and Cl− ions were added, and the system was neutralized with the ionic concentration set to 100 mM. The final system, inclusive of the protein, lipids, water molecules and ions, comprised ~100 K atoms. Subsequently, the system was relaxed by energy minimization for 5000 steps using the conjugate-gradient algorithm and simulated for 5 ns at 310 K, with all the heavy atoms of the protein and the substrate restrained to their crystallographic positions with a force constant of k = 5 kcal/mol/Å². Finally, all the restraints were removed and the systems were simulated for 500 ns.
MD simulation protocol. The simulations were performed with NAMD 2.13 41, employing the CHARMM36 protein and lipid force fields 42. Simulations were performed in an NPT ensemble with periodic boundary conditions. Temperature was maintained at 310 K using Langevin dynamics with a damping constant of 0.5 ps−1. Pressure was maintained at 1 atm using the Nosé-Hoover Langevin piston method 43. The cutoff used for the short-range interactions was 12 Å, with switching applied at 10 Å. Long-range electrostatics was treated by employing the particle mesh Ewald (PME) algorithm 44. Bonded, non-bonded and PME calculations were performed at 2, 2, and 4 fs intervals, respectively.
Analysis: dynamical network analysis. In XylE, coupling in the extracellular and intracellular gates can be understood in terms of the allosteric interactions of residues that efficiently move in a correlated manner. For this, dynamic network analysis was performed using the Network-View plugin 45 in VMD. In a network, all Cα carbons are defined as nodes connected by edges if they are within 4.5 Å of each other for at least 75% of the MD trajectory. Pearson's correlation was used to define the communities in the entire network corresponding to the set of residues that move in concert with each other (Supplementary Figs. S12 and S13).
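The sketch below illustrates the kind of contact-network construction described here, assuming Cα coordinates are available as a NumPy array per frame. The 4.5 Å cutoff and 75% occupancy criterion follow the text, while the use of networkx and the toy random trajectory are illustrative choices rather than the authors' actual NetworkView/VMD workflow.

```python
import numpy as np
import networkx as nx

def contact_network(ca_traj: np.ndarray, cutoff: float = 4.5,
                    occupancy: float = 0.75) -> nx.Graph:
    """Build a residue network from Calpha coordinates.

    ca_traj: array of shape (n_frames, n_residues, 3) in Angstrom.
    An edge connects residues i, j if their Calpha-Calpha distance is below
    `cutoff` in at least `occupancy` of the frames.
    """
    n_frames, n_res, _ = ca_traj.shape
    # Pairwise distances for every frame: shape (n_frames, n_res, n_res).
    diff = ca_traj[:, :, None, :] - ca_traj[:, None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    frac_in_contact = (dist < cutoff).mean(axis=0)
    g = nx.Graph()
    g.add_nodes_from(range(n_res))
    for i in range(n_res):
        for j in range(i + 1, n_res):
            if frac_in_contact[i, j] >= occupancy:
                g.add_edge(i, j)
    return g

# Toy example with random coordinates (replace with real Calpha positions).
rng = np.random.default_rng(0)
traj = rng.normal(scale=5.0, size=(100, 30, 3))
g = contact_network(traj)
print(sorted(nx.betweenness_centrality(g).items(),
             key=lambda kv: -kv[1])[:5])
```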
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
Data supporting the findings of this paper are available from the corresponding authors upon reasonable request. All the deuterium uptake plots of the experiments presented for XylE are available on the figshare data repository using the following link: (https://figshare.com/s/52d498fe3b10c60b64a4). The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD018145. | 6,483.4 | 2020-04-04T00:00:00.000 | [
"Biology",
"Chemistry"
] |
The Effect of Thickness on the Sealing Ability of CEM Cement as a Root-end Filling Material
Background and aims. Different materials have been used for root-end filling during surgical endodontic treatment. The aim of this in vitro study was to evaluate the dye penetration in different thicknesses of calcium enriched mixture (CEM) cement as root-end filling material. Materials and methods. Following root canal filling in 70 extracted human single-rooted premolar teeth, the apical 3 mm of their root-ends was resected; the root-end cavities with depths of 1, 2 and 3 mm were prepared by ultrasonic retrotips and filled with CEM cement. After setting of cement, the roots were immersed in 2% Rhodamine B and the dye leakage was measured under stereomicroscope (×16) using Image J software. The data were analyzed by one-way ANOVA and Bonferroni post hoc tests at 5% significance level. Results. The means and standard deviations of dye penetration in the 1, 2, and 3 mm groups were 3395.5±1893.4, 3410.4±1440.5, and 2581.6±1852.9 μm, respectively. The one-way ANOVA analysis indicated significant differences (P < 0.001); however, the Bonferroni post hoc test revealed that only the positive control group differed significantly from the experimental groups (P < 0.001). Conclusion. The findings demonstrated CEM cement to have an adequate root-end sealing ability in 3-mm thickness.
Introduction
Surgical endodontic retreatment is essential in cases of failed endodontic treatment or when conventional endodontic treatment cannot be undertaken. Endodontic surgery entails raising a mucoperiosteal flap and performing osteotomy, followed by root-end resection, root-end preparation, and root-end filling. A root-end filling material is considered effective when it provides a complete apical seal, preventing the passage of microorganisms into the root canal system and the egress of microorganisms and their by-products from the root canal system. 1 Various materials have been suggested and used for root-end filling. They include zinc oxide eugenol cements, glass ionomer cement, super EBA, polyvinyl resins, composite resins, resin-glass ionomer hybrids, and mineral trioxide aggregate (MTA). [2][3][4][5][6] Microleakage studies have confirmed that MTA has the best apical sealing ability. However, despite its excellent sealing ability and biocompatibility when compared with other root-end filling materials, MTA is expensive, has a delayed setting time, and poor handling properties. 7,8 Asgary et al 9 have introduced a novel endodontic cement with sealing ability comparable to MTA. This biomaterial was formulated using different calcium compounds such as calcium hydroxide, calcium oxide, calcium phosphate, calcium sulfate, calcium silicate, and calcium carbonate. Studies have demonstrated that the calcium-enriched mixture (CEM) cement comprises water-soluble calcium and phosphate, which immediately form hydroxyapatite during and after setting. 9 The composition and surface characteristics as well as the physical and chemical properties of CEM cement have been examined. CEM cement has been shown to have a sealing ability comparable to MTA and superior to IRM. 6,10 The cement exhibits several advantages including high tissue biocompatibility, hard tissue induction, effective sealing ability against the entry of microorganisms, ability to set in an aqueous environment, antibacterial effects, and resistance to washout. [10][11]13 In-vitro studies have documented the equivalence of CEM cement and MTA properties. 10,14,15 In-vivo studies have also produced successful results with CEM cement. 11,12,[16][17][18][19][20] In an in-vitro study, the influence of the thickness of mineral trioxide aggregate on the sealing ability of root-end fillings was assessed by Valois et al. 21 The results revealed a thickness of 4 mm as the most adequate when using MTA as a root-end filling material. 21 In a previous study by Rahimi et al, 22 the microleakage with MTA as root-end filling material was not found to be significantly different among various thicknesses. Investigations of the sealing ability of CEM cement through dye penetration have revealed that the sealing properties of this root-end filling material parallel those of commercial types of MTA. 23 The aim of this study was to compare the sealing ability associated with three different thicknesses of CEM cement as a root-end filling material in cavities prepared by ultrasonic retro-tips.
Materials and Methods
Seventy single-rooted human premolar teeth extracted for periodontal or orthodontic purposes were selected for this study. The teeth were evaluated under stereomicroscope and radiography. Any teeth with caries, cracks, resorption, fractures, morphologic anomalies, and open apices were excluded from the study. The selected teeth were decoronated at cemento-enamel junction (CEJ) level with a diamond disk (D&Z, Darmstadt, Germany) under running water and air spray. Working length was determined with a #15 K-type file (Mani, Utsunomiya, Japan) 1 mm short of the apical foramen. The canals were prepared up to size #40 using the step-back technique and the shaping of the middle and coronal thirds was carried out by Gates Glidden burs 1, 2, and 3. During instrumentation procedures, root canals were irrigated with 10 ml of saline solution. All canals were obturated using gutta percha (Diadent, Korea) and AH-26 sealer (Dentsply, Konstanz, Germany) with the lateral compaction technique. The teeth were stored at 37°C and 100% humidity for 48 hours (Heratherm, Thermo Inc., Switzerland). The apical 3 mm of each tooth was resected perpendicular to the long axis of the tooth with a diamond bur under continuous water and air spray. Teeth were randomly divided into three experimental groups each containing 20 samples, and 5 positive and 5 negative controls. In group 1, root-end cavities were prepared to a depth of 1mm perpendicular to the long axis using ultrasonic retrotips Kis-3D (Spartan, Missouri, USA).
In samples of groups 2 and 3, the same procedure was done to depths of 2 mm and 3 mm, respectively. In positive controls, the cavities were prepared to a depth of 3 mm.
The cavities were then irrigated and dried with paper cones. The CEM cement was mixed according to the manufacturer's instructions on a sterile glass slab and filled into the cavities with the aid of a small condenser (Kerr Hawe, Orange, CA, USA). Any excess material was removed with a sterile cotton swab. The quality of the root-end fillings was confirmed by radiographs in two directions (mesiodistal and buccolingual), while the root-end cavities of the control groups remained empty. In all experimental and positive control groups, two layers of nail varnish were applied to the surface of the teeth up to the level of the resected root-end. In negative controls, the entire surface of the teeth was covered with two layers of nail varnish.
Figure 1. Dye microleakage amounts in experimental and control groups (µm).
All teeth were kept at 37°C and 100% humidity for 48 hours, and then immersed into synthetic tissue fluid (STF) with pH=7 for 48 hours. The specimens were placed horizontally in 2% Rhodamine B (Merk, Darmstadt, Germany) for 48 hours. Then the samples were rinsed for 10 min under running water. After that, two facial and lingual fissures were created along the long axis of the roots using a diamond disc and the roots were longitudinally resected into two mesial and distal halves.
The maximum amount of linear dye penetration was measured under a stereomicroscope (Zeiss, Munich, Germany) at ×16 magnification with 0.1 mm accuracy and using Image J software (35d; National Institutes of Health, USA).
The data were analyzed with one-way ANOVA, and the Bonferroni post-hoc test was utilized to reveal specific group differences. The level of significance was set at P < 0.05.
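As a rough illustration of this analysis pipeline, the sketch below runs a one-way ANOVA followed by Bonferroni-corrected pairwise t-tests on hypothetical dye-penetration values; the group arrays are placeholders rather than the study's raw data, and the log transformation mirrors the one described in the results.

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Hypothetical dye-penetration values (micrometres), not the study's raw data.
groups = {
    "1 mm": np.array([3100., 3600., 2900., 4100., 3300.]),
    "2 mm": np.array([3500., 3200., 3700., 3000., 3600.]),
    "3 mm": np.array([2600., 2400., 2900., 2300., 2700.]),
}

# Natural-log transformation to improve normality, as described in the results.
logged = {k: np.log(v) for k, v in groups.items()}

f_stat, p_val = stats.f_oneway(*logged.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Pairwise t-tests with Bonferroni correction.
pairs = list(combinations(logged, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(logged[a], logged[b])
    p_adj = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: Bonferroni-adjusted p = {p_adj:.4f}")
```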
Results
Significant 'Skewness' and 'Kurtosis' tests confirmed a non-normal data distribution. Natural logarithmic transformation provided a normal distribution of the data. Analysis of variance indicated significant differences among the tested groups (P < 0.001). However, the Bonferroni post-hoc test revealed that only the positive control group differed from the experimental groups (P < 0.001; Table 1). Consequently, there was no significant difference among the sealing abilities of 1, 2 and 3 mm of CEM cement as a retrofilling material.
Discussion
The success of periradicular surgery directly depends on the achievement of a good apical seal, utilizing a well-adapted root-end filling material that prevents the leakage of irritants from the root canal system into the periradicular region. 24 In this study, the microleakage of three different thicknesses of CEM cement as a root-end filling material was evaluated by the dye penetration method. The findings indicated no significant differences among 1, 2, and 3 mm of CEM cement as a root-end filling material. However, at 1-mm and 2-mm depths, the leakage was more than the root-end filling material's depth; in other words, only the 3-mm depth was capable of providing an adequate seal.
Apical microleakage has been assessed using various methods including dye/ink, bacterial/endotoxin leakage, radioisotope tracing, and the fluid filtration technique. [25][26][27][28][29][30][31] There is no evidence to favor the superiority of any particular method. However, the dye penetration technique is widely used for microleakage studies because dyes are cheap, safe, easily available, and also relatively easy to store, use and assess quantitatively. 32,33 Different dyes have been employed for dye penetration tests, such as India ink and methylene blue. In this study, the extent of dye (2% Rhodamine B) penetration was used as the criterion for evaluation. The use of methylene blue in marginal sealing studies has been debated, due to its incompatibility with alkaline substances, which may induce discoloration of the dye. 34 It is known that methylene blue dye presents an acid character and Rhodamine B, a basic one. Rhodamine B is a basic, intense organic dye, soluble in water at room temperature, also soluble in alcohols and common organic solvents, in addition to being highly stable. 35 Since the color stability of organic dyes is an important factor that must be observed in microleakage studies, and because of the alkaline conditions around CEM cement, in this study Rhodamine B dye was used for leakage assessment with CEM cement as the root-end filling material.
An ideal root-end cavity preparation is a class I cavity at least 3 mm deep with parallel walls. 36 Achieving this with the classical method in surgical endodontics is hindered by several difficulties such as limited access, root anatomy, and tooth angulation. To avoid these problems, piezoelectric ultrasonic devices for root-end preparation have been developed and are nowadays used for retrograde cavity preparation. [37][38][39] In another in-vitro study, the influence of the thickness of mineral trioxide aggregate on the sealing ability of root-end fillings was assessed by Valois et al. 21 The 1-mm-thick MTA was the least effective in preventing apical leakage. No significant difference was found between 2- and 3-mm-thick MTA. Four-millimeter-thick MTA was found to be significantly more effective than the other thicknesses tested. These researchers suggested a thickness of 4 mm as the most adequate when MTA is used as a root-end filling material. 21 Similar findings were obtained in our study, and CEM cement in 3-mm thickness presented the most effective sealing ability. The apical sealing ability of CEM cement has been reported to be similar to different commercial types of MTA. 23 In a previous study, microleakage in the 3-mm and 2-mm root-end cavities was less than that in 1-mm-deep cavities, but analysis of variance revealed no significant differences across the three different thicknesses. 22 The current study led to similar results.
Conclusions
Based on the findings of this in vitro study, CEM cement demonstrated an adequate root-end sealing ability at a thickness of 3 mm. | 2,615.2 | 2015-02-09T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
A network topology approach to diagnose the shift of covariance structure
Understanding the shift of covariance matrices in any process is not an easy task. In the literature, the most popular and widely used tests for covariance shift are Jennrich's test and Box's M test. It is important to note that Box and Jennrich each constructed their test by involving the determinant of the sample covariance matrix or, equivalently, the generalized variance (GV), as the multivariate variability measure. However, GV has serious limitations as a multivariate variability measure. Those limitations of GV motivate us to use a proposed test based on an alternative measure of multivariate variability called vector variance (VV). However, if after hypothesis testing the hypothesis of a stable process covariance is rejected, then the next problem is to find the cause of that situation. In this paper, a network topology approach will be used to understand the shift. A case study will be discussed and presented to illustrate the advantage of this approach.
Introduction
The shift of the covariance structure can be determined by testing the stability of the covariance structure. The importance of covariance structure stability has been shown in many areas, for example in financial markets [1][2][3], the real estate industry [4], the service industry [5,6], the manufacturing industry [7,8], the software industry [9,10], the health care industry [11] and even supply chain management [12].
To test the stability of the covariance structure, we use a test proposed by Yusoff and Djauhari [13]. In this study, if the null hypothesis about the stability of the covariance structure is rejected, we use the network analysis approach to find out the root causes of that rejection. For that purpose, the covariance matrix will be considered as representing a network.
To filter the information in that network, we use the minimum spanning tree (MST) issued from Kruskal's algorithm. Then, we construct the network topology of all variables. Its interpretation will be delivered by using centrality measures such as degree centrality and betweenness centrality.
The rest of the paper is organised as follows. In Section 2, we present the related methodology, followed by the results and discussion of a corresponding example in Section 3. At the end, this paper is closed with a conclusion in the last section.
Methodology
In this section we focus on the shift of the covariance structure, i.e. testing the equality of two covariance matrices. However, understanding the shift of covariance is not an easy task. The complexity of multivariate data analysis and the cross-correlation among currencies make the analysis difficult. This motivates us to introduce a covariance network topology as a further analysis step if there exists a shift in the covariance structure; otherwise that analysis is not required.
Case study: Currency exchange rate
In this study, as an illustrative example, covariance matrices of foreign exchange rate time series are analyzed for 78 world currencies (European Euro Analysis), retrieved from the Pacific Exchange Rate Service website. Those data were retrieved from January 2000 until January 2010 and they were analyzed quarterly. Hence, there are 40 corresponding covariance matrices in this study, i.e., S1, S2, …, S40.
Yusoff & Djauhari's test
The statistical test proposed by Yusoff and Djauhari [13] uses VV as the multivariate variability measure, as an alternative measure to substitute GV. The limitations of GV as a multivariate variability measure can be seen in [13] and [14]. VV is the sum of squares of the elements of the sample covariance matrix, i.e., Tr(S²). Using Yusoff & Djauhari's test (1), the hypotheses involved are H0: the process covariance structure is stable (the two covariance matrices are equal) against H1: it is not; the test statistic is a Z statistic that involves Tr(S), Tr(S²) and the sample size n of the covariance matrix (see [13] for the full definition). In the next section, we will determine which variables are responsible for the instability of the covariance matrix. For that purpose, the average of the stable covariance matrices, S̄, is obtained and it will be compared with the most unstable covariance matrix, S35, i.e., the covariance matrix whose Z statistic is the highest.
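A minimal sketch of the vector-variance computation is given below; it only reproduces the VV = Tr(S²) measure and contrasts it with GV for a toy covariance matrix, and does not implement the full Z statistic of Yusoff & Djauhari's test.

```python
import numpy as np

def generalized_variance(s: np.ndarray) -> float:
    """GV: determinant of the sample covariance matrix."""
    return float(np.linalg.det(s))

def vector_variance(s: np.ndarray) -> float:
    """VV: sum of squared elements of S, i.e. Tr(S @ S) for symmetric S."""
    return float(np.trace(s @ s))

# Toy covariance matrix (e.g., of quarterly exchange-rate series).
s = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.5, 0.3],
              [0.1, 0.3, 1.0]])
print("GV =", generalized_variance(s))
print("VV =", vector_variance(s))
```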
Network topology
A network analysis will be used when an unstable covariance matrix occurs, to find the most important variables responsible for that situation. For that purpose, first, we transform S into a similarity matrix S* using one of the three methods for transforming S into S* [15]. In this study, the first method of transforming S into S* is used; see [16]. Then, we transform S* into a dissimilarity matrix D. Based on those dissimilarity matrices, by using Kruskal's algorithm provided in Matlab, we construct the MST. From the MST, we construct the network topology of all variables, which is visualized using the Pajek software. The interpretation of that network is delivered by using centrality measures, i.e., degree centrality [17][18] and betweenness centrality [19][20].
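The sketch below mimics this filtering step with networkx instead of Matlab/Pajek: it converts a covariance matrix into correlations, maps them to distances, extracts a minimum spanning tree with Kruskal's algorithm and reports the two centrality measures. The particular similarity/dissimilarity transformation shown (correlation followed by √(2(1−ρ))) is one common choice and is not necessarily the exact formula of [16].

```python
import numpy as np
import networkx as nx

def mst_centralities(cov: np.ndarray, labels):
    """Build an MST from a covariance matrix and rank nodes by centrality."""
    # Similarity: correlation matrix derived from the covariance matrix.
    d = np.sqrt(np.diag(cov))
    corr = cov / np.outer(d, d)
    # Dissimilarity: a standard metric distance on correlations.
    dist = np.sqrt(2.0 * (1.0 - corr))
    g = nx.Graph()
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            g.add_edge(labels[i], labels[j], weight=dist[i, j])
    mst = nx.minimum_spanning_tree(g, algorithm="kruskal")
    return mst, nx.degree_centrality(mst), nx.betweenness_centrality(mst)

# Toy 3-currency example with made-up numbers.
cov = np.array([[1.0, 0.8, 0.2],
                [0.8, 1.2, 0.1],
                [0.2, 0.1, 0.9]])
mst, deg, btw = mst_centralities(cov, ["GHS", "TRY", "BRL"])
print(sorted(deg.items(), key=lambda kv: -kv[1]))
```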
Results and discussion
MST is a subgraph that connects all the variables (nodes) whose total weight, i.e., total distance, is minimal. In figure 1, we present the visualization of the corresponding MSTs issued from Pajek: figure 1(a) is for S̄ and figure 1(b) is for S35. Those figures show the most important relationships among all currencies in terms of the MST, and they show that the MST of S̄ differs considerably from that of S35. This justifies our conclusion that the covariance matrix S35 is not stable, since its structure differs from the structure of the average of the stable covariance matrices, S̄. According to those figures, for S̄ the most important currencies are GHS, LKR, TRY and ZMK, while for S35 the most important currency is BRL. Those currencies are important because the number of their connections to other currencies is higher than that of the others. This means that all the currencies in S̄ have been influenced by GHS, LKR, TRY and ZMK, and by BRL in S35. To elaborate the above finding more clearly, other information will be presented by using the centrality measures, i.e., degree centrality and betweenness centrality. In figures 2 and 3, we present the network topology where the size and color of a node represent the score of the centrality measure and the rank of importance, respectively. The colors used in this analysis, ordered decreasingly in terms of the rank of importance, are: yellow, red, blue, purple and orange. The higher the score of the centrality measures of a particular node, the more influential that node is. In this study, the most influential node is the node which is responsible for the instability of the covariance matrix.
From figure 2, based on degree centrality, for S̄, GHS has the highest number of connections in the network, i.e., 38, followed by TRY, LKR, ZMK and SAR with 10, 9, 7 and 5 connections, respectively. The rest have only 1, 2, 3 or 4 connections. On the other hand, for S35, BRL has the highest number of connections in the network, i.e., 71, followed by AUD with 3 connections, and ARS, MXN, PEN and XPT with 2 connections each. The rest have 1 connection only. The higher the number of connections, the more influential the particular currency is. The number of connections of a currency is simply the number of currencies connected to it.
In terms of betweenness centrality (see figure 3), for S̄, GHS plays the most important role in the network, followed by, in order of importance: TRY, LKR, JMD and ZMK. For S35, BRL plays the most important role in the network, followed by, in order of importance: AUD, ARS, MXN, PEN and XPT. This means that if those currencies are well managed, then the others will be influenced.
Conclusion
Based on the analysis of the MST in figure 1, we learn that all the currencies in S̄ are influenced by GHS, LKR, TRY and ZMK. On the other hand, for S35, BRL influences the other currencies. Further analysis based on two centrality measures leads us to the following conclusions. (i) For S̄, the following currencies are among the five highest scores for at least one measure, i.e., the most influential currencies: GHS, JMD, LKR, SAR, TRY and ZMK. (ii) For S35, the following currencies are among the five highest scores for at least one measure, i.e., the most influential currencies: ARS, AUD, BRL, MXN, PEN and XPT. Therefore, based on these findings, we conclude that the differences between S̄ and S35 are generally caused by the currencies ARS (Argentine Peso), AUD (Australian Dollar), BRL (Brazilian Real), GHS (Ghana Cedi), JMD (Jamaican Dollar), LKR (Sri Lankan Rupee), MXN (Mexican Peso), PEN (Peruvian Sol), SAR (Saudi Riyal), TRY (Turkish Lira), XPT (Ounces of Platinum) and ZMK (Zambian Kwacha), i.e., these currencies are responsible for the instability of S35. Consequently, these currencies should be given special attention in the foreign exchange rate market.
Figure 1. MSTs of all currencies: (a) for S̄; (b) for S35.
Table 1. The value of the Z statistic. | 2,014.6 | 2018-01-01T00:00:00.000 | [
"Mathematics",
"Engineering"
] |
On quaternion applications in obtaining surfaces
In this paper, we survey the historical development of quaternions and give some recently studies and applications of quaternions of obtaining surfaces.
Quaternions were introduced by the Irish mathematician Sir William Rowan Hamilton in 1843. While he was searching for methods of extending complex numbers to higher dimensional spaces, he invented quaternions almost by accident. He had failed to find a workable method in three-dimensional space; that is, he could not construct a real three-dimensional skew field. However, he constructed the quaternions by working in four-dimensional space. As the story goes, while he was walking with his wife along the Royal Canal in Dublin on 16 October 1843, he experienced an epiphany: the solution was the fundamental relation i² = j² = k² = ijk = −1. Then, without wasting any time, Hamilton carved this equation with his pocketknife into the stone of Broom Bridge (which Hamilton called Brougham Bridge). This event is regarded as the discovery of the quaternion group. Also, in one sense, Hamilton fathered the cross product and dot product of vector algebra. Moreover, Hamilton characterized a quaternion as an ordered quadruple of real numbers, and identified the first item as the "scalar" part and the remaining three as the "vector" part [1][2][3].
Nowadays, quaternions, which are chiefly utilized in computer graphics, control theory and signal processing for representing rotations and orientations, are widely used in spacecraft attitude control systems, both to command spacecraft and to telemeter their current attitude. The reason is that composing quaternion transformations is computationally more efficient and numerically better behaved than composing many matrix transformations. Quaternions do not possess a computational singularity (undefined division by zero) in control and modelling applications, which can occur for quarter-turn rotations (90°) that are achievable by many air, sea and space vehicles. In pure mathematics, quaternions, which form one of the four finite-dimensional normed division algebras over the real numbers, find considerable applications throughout algebra and geometry. Also, quaternions are used not only for rotations but also for reflections [4][5][6][7][8].
Let us now examine the works on applications of quaternions in obtaining surfaces. First, the sum, product, dot product and division of two quaternions, the inverse of a quaternion and its modulus have been defined. In addition, quaternions can be represented by matrices. Also, quaternions are used for rotation. For example, a counterclockwise rotation through an angle a about an axis x̂ can be represented by a unit quaternion (Fig. 1), where x̂ is a unit vector [8].
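To make the rotation picture concrete, a small sketch is given below; it builds the unit quaternion q = cos(a/2) + sin(a/2)·x̂ and rotates a vector by conjugation v → q v q⁻¹, which is the standard quaternion rotation formula. The helper functions are illustrative and not taken from the cited references.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def rotate(vector, axis, angle):
    """Rotate `vector` counterclockwise by `angle` (radians) about unit `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    q = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    v = np.concatenate(([0.0], vector))
    return quat_mul(quat_mul(q, v), q_conj)[1:]

# A 90 degree rotation of the x axis about the z axis gives the y axis.
print(rotate([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], np.pi / 2.0))
```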
If we consider the linear map w → W(w) = qwq, where q = bi + cj + dk, with b, c, d ∈ R, is a unit vector quaternion, then this map is called a reflection (Fig. 2). On the other hand, we give the following definitions with reference [9]: Definition 1. A dual number A has the form A = a + εa*, where a and a* are real numbers and ε is the dual symbol subject to the rule ε² = 0 (ε ≠ 0). A dual quaternion Q is written as Q = q + εq*, where q and q* are real quaternions. The Hamiltonian conjugate of Q, the norm of Q, the reciprocal of Q and the unit dual quaternion are obtained accordingly. Also, screw operators are defined using this dual quaternion.
Definition 2. Letà andB be unit dual vectors in ID 3 (i.e. set of unit vector dual quaternions) the quaternion product of these two dual vectors is given byÃB The expression (1) can rewrite as:ÃB This expression is called screw operator (Fig. 3).
Let us now come to the present day. In computer graphics, slerp is shorthand for spherical linear interpolation, in the context of quaternion interpolation for the purpose of animating 3D rotation [10]. Linear interpolation on the Euclidean sphere has been carried out using quaternions. Also, split quaternions are elements of a four-dimensional associative algebra introduced by James Cockle in 1849.
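A compact sketch of slerp between two unit quaternions is shown below; the formula sin((1−t)Ω)/sin Ω · q0 + sin(tΩ)/sin Ω · q1 is the usual Euclidean-sphere version, and the fallback to linear interpolation for nearly parallel quaternions is an implementation convenience rather than something taken from [10].

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    q0 = np.asarray(q0, dtype=float)
    q1 = np.asarray(q1, dtype=float)
    dot = np.dot(q0, q1)
    if dot < 0.0:             # take the shorter arc on the 3-sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:          # nearly parallel: fall back to lerp and renormalize
        result = q0 + t * (q1 - q0)
        return result / np.linalg.norm(result)
    omega = np.arccos(dot)
    return (np.sin((1.0 - t) * omega) * q0 + np.sin(t * omega) * q1) / np.sin(omega)

# Interpolate halfway between the identity and a 90 degree rotation about z.
q_id = np.array([1.0, 0.0, 0.0, 0.0])
q_z90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(slerp(q_id, q_z90, 0.5))   # ~45 degree rotation about z
```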
Definition 3. A split quaternion q is an expression of the form q = q1 + q2 i + q3 j + q4 k,
where q1, q2, q3 and q4 are real numbers and i, j, k are split quaternion units which satisfy the non-commutative multiplication rules i² = −1, j² = k² = +1, ij = −ji = k, jk = −kj = −i and ki = −ik = j [11]. Linear interpolation on the Lorentzian sphere in Minkowski space has been carried out using split quaternions. This also yields the shortest possible interpolation path between two split quaternions on the unit Lorentzian sphere [12].
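The multiplication rules quoted above can be checked with the small sketch below; the component formula is simply an expansion of those rules, and the implementation is illustrative only.

```python
import numpy as np

def split_quat_mul(p, q):
    """Product of split quaternions p = p1 + p2 i + p3 j + p4 k (likewise q),
    using i^2 = -1, j^2 = k^2 = +1, ij = -ji = k, jk = -kj = -i, ki = -ik = j."""
    p1, p2, p3, p4 = p
    q1, q2, q3, q4 = q
    return np.array([
        p1*q1 - p2*q2 + p3*q3 + p4*q4,
        p1*q2 + p2*q1 - p3*q4 + p4*q3,
        p1*q3 - p2*q4 + p3*q1 + p4*q2,
        p1*q4 + p2*q3 - p3*q2 + p4*q1,
    ])

# Sanity checks against the multiplication table.
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(split_quat_mul(i, i))   # [-1, 0, 0, 0]
print(split_quat_mul(j, j))   # [ 1, 0, 0, 0]
print(split_quat_mul(i, j))   # [ 0, 0, 0, 1]  -> k
print(split_quat_mul(j, k))   # [ 0,-1, 0, 0]  -> -i
```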
Spherical spline quaternion interpolation has been carried out on the sphere in Euclidean space using quaternions (Fig. 4). Spline split quaternion interpolation on the hyperbolic sphere in Minkowski space has been carried out using split quaternions and the Lorentz metric. This interpolation curve is called spherical spline split quaternion interpolation in Minkowski space (MSquad) (Fig. 5) [13].
Quaternions are more usable than the three Euler angles in three-dimensional Euclidean space. Thus, many laws in different fields can be given in terms of quaternions. Canal surfaces and tube surfaces can be obtained by the quaternion product and by the matrix representation. Also, the equation of a canal surface given by the different frames of its spine curve can be obtained by the same unit quaternion. In addition, these surfaces are obtained by a homothetic motion [1]. For example, for a given unit-speed curve a(t), the Frenet frame vectors can be computed, and the tube surface X(t, h) can be obtained by the homothetic motion as X(t, h) = a(t) + rMN(t), where M is the matrix representation of the unit quaternion q(t, h) = cos h + sin h T(t). The resulting graphic is shown in Fig. 6.
All constant slope surfaces have been found in Euclidean 3-space, namely those surfaces for which the position vector of a point of the surface makes a constant angle with the normal to the surface at that point. These surfaces can be thought of as the two-dimensional analogue of the generalized helices (Fig. 7) [15]. Circular surfaces are smooth one-parameter families of circles, and roller coaster surfaces are defined as circular surfaces whose generating circles are lines of curvature. Three main purposes concerning circular surfaces and roller coaster surfaces are pursued. The first one is to reconstruct the equations of spacelike circular surfaces and spacelike roller coaster surfaces by using unit split quaternions and homothetic motions.
The second one is to parametrize timelike circular surfaces and give some geometric properties such as striction curves, singularities, and Gaussian and mean curvatures. Furthermore, the conditions for timelike roller coaster surfaces to be flat or minimal surfaces are obtained. The last one is to express split quaternionic and matrix representations of timelike circular surfaces and timelike roller coaster surfaces [16]. Here s is the arc-length parameter of the spine curve. Using the timelike unit split quaternion q_e = cos h + e sin h, the circular surface is written as C(s, h) = n(s) + r(s) q_e(s, h) × n(s). For r = 1 and r = s/2, the spacelike circular surfaces are illustrated (Fig. 8). Canal surfaces determined by spherical indicatrices of any spatial curve in Minkowski 3-space are obtained by means of timelike split quaternions. Moreover, using orthogonal matrices corresponding to these quaternions, the canal surfaces are obtained as homothetic motions. Then, a relationship between the canal surfaces and unit split quaternions is investigated [17]. Taking r = 1 and r = h, respectively, the equations of the tubular and generalized tubular surfaces KT(s, h) and T(s, h) generated by the tangent indicatrix T are given (Fig. 10).
Conclusion
In this paper, the historical development of quaternions has been surveyed. Also, some recent studies and applications of quaternions for obtaining surfaces have been considered. Within the framework of the knowledge acquired, it is observed that quaternions apply to many areas. The number of these studies is increasing day by day. We were only able to give a small summary. We are very happy to try to contribute to these studies. Our work in this direction continues. | 1,821.4 | 2019-01-01T00:00:00.000 | [
"Mathematics"
] |
Secure D2D Communication for 5G IoT Network Based on Lightweight Cryptography
Device-to-device (D2D) communication is a direct means of communication between devices without an intermediate node, and it helps to expand cell coverage and to increase radio frequency reuse in a 5G network. Moreover, D2D communication is a core technology of 5G vehicle-to-everything (V2X) communication, which is an essential technology for autonomous driving. However, typical D2D communication in a 4G network, which is a typical telecommunication network, has various security challenges including impersonation, eavesdropping, privacy sniffing, free-riding attacks, etc. Moreover, when IoT technology emerges with 5G networks in massive machine type communication (mMTC) and ultra-reliable low latency communication (URLLC) application scenarios, these security challenges are more crucial and harder to mitigate because of the resource-constrained nature of IoT devices. To solve the security challenges in a 5G IoT environment, we need a lightweight and secure D2D communication system that can provide secure authentication, data confidentiality/integrity and anonymity. In this paper, we survey and analyze existing results about secure D2D communication systems in terms of their security considerations and limitations. Then, we propose a secure D2D communication system to address the aforementioned security challenges and the limitations of the existing results. The proposed secure D2D communication was designed based on elliptic curve cryptography (ECC) and lightweight authenticated encryption with associated data (AEAD) ciphers to cover resource-constrained IoT devices.
Introduction
D2D communication is a peer-to-peer communication mechanism between devices without an intermediate node [1,2]. D2D communication has many advantages in mobile networks [3]. First, it can expand the coverage of each cell in a cellular network by acting as a communication bridge for transmitting data to a node located outside of cell coverage. Second, D2D communication helps to reduce the energy consumption of the base station by transmitting data directly between devices. Lastly, the efficiency of reusing the same radio frequency is increased. In D2D communication, the distance between devices is much shorter than the distance between a device and a base station. This means the radio frequency interference decreases in the D2D communication scenario, which helps to transmit multiple data streams using the same radio frequency. Moreover, D2D communication is a core technology of V2X communication [4]. Due to these advantages, the 5G network also includes D2D communication technology, as does the LTE-Advanced (4G) network.
The remainder of this paper is organized as follows. Section 2 introduces related works where we surveyed D2D communication and analyzed security considerations for secure D2D communication.
In Section 3, we propose a secure D2D communication system for a 5G IoT network. In Section 4, we show the simulation results of our proposed D2D communication system. In Section 5, we analyze our proposed D2D communication system based on our security considerations and finally conclude in Section 6.
Typical Security Threats of D2D Communication
D2D communication involves three steps, device discovery, link setup and data transmission, to make a direct connection between devices. In the device discovery step, the device searches for nearby devices. Then devices that are discovered in the previous step make a connection for transmitting data in the link setup step. After a connection is established, the data is transmitted through a direct link in the data transmission step. However, if there are no proper security measures, the data can be vulnerable to some security threats by attackers. Typical security threats of D2D communication introduced in [5] are as follows: • Impersonation attack. In this attack the attacker acts like a legitimate user by using an identity such as an international mobile subscriber identity (IMSI). To prevent this attack proper authentication of users has to be considered. • Eavesdropping. This is a type of attack where the attacker passively listens to communication between users and thereby the attacker can capture the transmitted data and also can fabricate the data. To prevent this attack, data confidentiality and integrity have to be considered. • Privacy sniffing. D2D communication has to broadcast request messages to search for nearby devices. However, the attacker uses this feature to find and track the victim device. To mitigate this security threat, the devices have to use an anonymous identity, and it has to be authenticated. • Free riding attack. Selfish devices receive the desired data from other devices but do not share their resources because of energy consumption and because of this they reduce system availability.
To mitigate this attack, the user identity has to be authenticated and managed by a base station.
•
Location spoofing. In this attack a malicious device may broadcast a request message with wrong or artificial location information to disrupt D2D communication in the device discovery step.
To mitigate this attack, the request message has to be processed only from validated devices in D2D communication.
Security Considerations for a 5G IoT Network
In a 5G network, IoT applications correspond to mMTC and URLLC scenarios. For the security of D2D communication against threats, the D2D communication system has to provide security functions including authentication, data confidentiality/integrity and anonymity. However, IoT devices have limited resources in terms of performance, memory and power consumption. Therefore, the security functions must also provide efficiency, meaning that each security function has to be implemented lightly and run faster. The detailed description of security considerations are as follows: • Authentication. Authentication is a key requirement for securing D2D communication in the 5G IoT network. For most types of attacks, proper user authentication is the most basic and appropriate solution. Every network should be able to verify the identity of users in order to guarantee the security of the network.
•
Data confidentiality and Integrity. The data transmitted in the IoT network contains sensitive information, and due to a variety of attacks that can eavesdrop on or modify that information, confidentiality and integrity are a big concern. For providing this, we have to encrypt the transmitted data and use hash functions or message authentication algorithms.
• Anonymity. Anonymity refers to hiding the identity of origin and sensitive information such as location. Anonymity is a necessary security function to prevent attackers from targeting specific users for their purpose. In such cases, when anonymity is not provided, the attacker can choose a specific target for the attack. If you take the example of autonomous vehicles, the attacker may decide to attack a specific car. Therefore, anonymity should be considered extensively.
•
Efficiency. Efficiency is the communication system's ability to be implemented and to operate economically. This consideration is about availability, which means that authorized users can access the information at any time they request it. This consideration is especially critical when it comes to the IoT network because IoT devices have limited resources.
Existing Research
Mingsheng Cao et al. [11] proposed a secure lightweight D2D communication system with multiple sensors. Their proposed communication system is designed based on a lightweight key generation and distribution scheme leveraging an acceleration sensor, secure near-field authentication using a device's microphone and speaker as sensors, and data transmission, which includes encryption/decryption over audio and RF channels. Adeel Abro et al. [12] proposed a lightweight authentication scheme based on elliptic ElGamal encryption, which is a public key algorithm based on the elliptic curve discrete logarithm problem (ECDLP). This paper presents an authentication scheme based on public key infrastructure (PKI) and uses a combination of ECC to select the key pair and ElGamal encryption to exchange the secret key. Yasir Javed et al. [13] also proposed a lightweight security scheme based on ECC and ElGamal encryption over public key infrastructure. This paper uses ECC to create keys and ElGamal for encryption and decryption. Atefeh Mohseni-Ejiyeh et al. [14] proposed an incentive-aware lightweight secure data sharing scheme for D2D communication in 5G networks. In their proposed scheme, users obtain digital signatures to prove successful data sharing and, in the sharing process, a symmetric encryption algorithm and a MAC are used. Haowen Tan et al. [15] proposed a D2D authentication mechanism employing smartphone sensor behaviour analysis. Their authentication scheme is designed based on certificateless cryptography for group authentication, and the user's behaviour profile extracted from smartphone sensors is employed for continuous authentication. Sheeba Backia Mary Baskaran et al. [16] proposed a lightweight key exchange mechanism for LTE-A assisted D2D communication that can be applied in 5G networks. Their mechanism is designed using ECC-based symmetric keys. Yunqing Sun et al. [17] proposed a privacy-protecting device discovery and authentication mechanism for D2D using identity-based prefix encryption and the ECDH key agreement protocol. All of these studies can provide authentication and data confidentiality/integrity, and most of them use ECC-based cryptographic algorithms. However, they have some limitations in that some of the results cannot provide anonymity or the research did not deeply consider the data transmission step of D2D communication. Moreover, most of the existing schemes use only lightweight public key algorithms, not lightweight symmetric encryption algorithms. Table 1 shows a taxonomy of strategies of existing research in terms of the security functions provided (confidentiality/integrity, authentication, anonymity) and the steps considered (device discovery, link setup, data transmission). Since our proposed system uses ECC and a lightweight AEAD cipher to cover our security considerations and all of the steps in D2D communication, it can improve the efficiency and security of D2D communication. Table 1. Taxonomy of strategies of existing secure device-to-device (D2D) communication.
Proposed D2D System Model
In this section, we propose a secure D2D communication mechanism for a 5G IoT network based on lightweight AEAD ciphers. The proposed secure D2D communication model is shown in Figure 1. The objects participating in D2D communication are 5G network components, including user equipment (UE), the next-generation Node B (gNB), the access and mobility management function (AMF)/security anchor function (SEAF) and user data management (UDM). The UE is a mobile entity in a 5G network and is the actual device that communicates directly with other devices in our system. The gNB is a base station responsible for connecting UEs to mobile networks. In our system, gNBs share their public keys with other gNBs in advance and use their private keys to generate D2D tokens (D2DTK_gNB_x) via ECDSA. The AMF is responsible for the management of mobile entities. The SEAF is an intermediate entity for authentication between the UE and the 5G network and is co-located with the AMF. The UDM stores information about mobile entities in the 5G network. The 5G network provides the 5G-AKA authentication framework to verify the identity of the UE. In the proposed secure D2D communication, 5G-AKA is used to authenticate the UE's validity before a D2D token is generated for use in communication. This corresponds to step 0 of the proposed D2D system, and it is performed only once for each UE.
After generating a D2D token, the D2D communication process has three steps, as in a typical D2D communication system: device discovery, link setup and secure data transmission. However, each step includes security features such as anonymity, authentication and confidentiality/integrity. We discuss the details of these features in Section 3.2. Brief descriptions of each step are as follows:
• Device discovery is the process of searching for nearby nodes. In this step, nodes in a network broadcast a request message to discover other nodes. If a node receives a request message, it sends a response message back. The broadcast or response message in this process includes each UE's encrypted identity (SUCI) and the issued D2D token.
• Link setup is the process of making a peer-to-peer connection between two nodes. During this process, each node sends a verification request to its base station, the gNB, with the SUCI and D2D token of the target UE received in the device discovery phase. After verification, ECDH is used to exchange secret keys for secure data transfer.
• Secure data transmission is the process in which data are transmitted. The main feature of this step is that the data are encrypted using a lightweight AEAD cipher before transmission. In the encryption process, the sender node uses its D2D token identity and a context sequence, so the confidentiality and integrity of the data are ensured. Moreover, authentication is performed on every transmission.
Details of Communication Mechanism
This section deals with the detailed process for the proposed D2D communication system. As described in the system model, the proposed D2D communication system has four steps in total. These four steps may be classified into one pre-processing step performed before D2D communication and the remaining three steps in which actual D2D communication is performed. The pre-processing step is the D2D token generation step (corresponding to step 0), and the steps in which D2D communication is performed are device discovery, link setup and secure data transmission (corresponding to steps 1-3, respectively).
First, in the D2D token generation step, each UE sends a request to the gNB to generate a D2D token for later use in D2D communication. The gNB that receives the D2D token generation request first verifies the identity of the requesting UE. The UE identity is verified via the SUCI, an encrypted identifier introduced for user privacy in 5G networks. Unlike the IMSI, the UE identity used in existing 4G networks, the SUCI can provide anonymity for the UE because it is the result of encrypting the IMSI with a public key (PUK). The SUCI verification is performed using 5G-AKA, the authentication framework for primary authentication of UE registration in 5G networks. The entity that performs the actual verification is the AMF/SEAF, and verification is done by comparing the credentials obtained by decrypting the SUCI with the user credentials stored in the UDM. When the SUCI verification is completed, the result is transmitted to the gNB, which then generates a D2D token and transmits it to the requesting UE. The D2D token is the digital signature computed via ECDSA over the UE's SUCI using the gNB's private key (PRK). Like the SUCI, the issued D2D token also provides anonymity to the UE, since it is a value generated by a cryptographic algorithm from the UE's identity. The issued D2D token can be verified by anyone who knows the SUCI of the UE and the public key of the gNB (note that each gNB shares the public key mentioned in the previous section). The D2D token generation procedure is shown in Figure 2. The following describes the steps in which actual D2D communication is performed; the whole process of the proposed D2D communication is shown in Figure 3.
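To make this step concrete, the sketch below shows how a gNB-side signer could produce a D2D token as an ECDSA signature over the SUCI and how a peer could verify it. It is a minimal sketch using Python's cryptography package; the curve, hash and placeholder SUCI value are illustrative assumptions, not parameters prescribed by the proposed system.

```python
# Minimal sketch of D2D token issuance/verification (illustrative assumptions,
# not the exact parameters of the proposed system).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# gNB key pair; in the proposed system each gNB shares its public key
# with the other gNBs in advance.
gnb_private_key = ec.generate_private_key(ec.SECP256R1())
gnb_public_key = gnb_private_key.public_key()

def issue_d2d_token(suci: bytes) -> bytes:
    """gNB signs the (already 5G-AKA-verified) SUCI with its private key."""
    return gnb_private_key.sign(suci, ec.ECDSA(hashes.SHA256()))

def verify_d2d_token(suci: bytes, token: bytes) -> bool:
    """Any node knowing the gNB public key and the peer's SUCI can verify."""
    try:
        gnb_public_key.verify(token, suci, ec.ECDSA(hashes.SHA256()))
        return True
    except Exception:
        return False

suci = b"example-encrypted-subscriber-identity"   # placeholder SUCI bytes
token = issue_d2d_token(suci)
assert verify_d2d_token(suci, token)
```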
Step 1 is device discovery, which is a process of searching for a nearby device with which to perform D2D communication. Here, each UE desiring D2D communication broadcasts a message requesting to perform D2D communication, and UEs in a state capable of D2D communication transmit a response message to the received D2D request message. Here, the broadcast message or response message includes the D2D token issued in step 0 and its SUCI. If a response message to the broadcasted request message is received, the process proceeds to the next step.
The second stage of D2D communication is the link setup to establish a communication session. In this step, prior to establishing a communication session, the D2D token exchanged during device discovery is verified. The verification of the D2D token performed here is similar to the UE identity verification performed during D2D token generation, but the authentication is performed at the gNB without connecting to the core network. The D2D token can be verified using the public key of the gNB and the SUCI, which confirms that the token was issued by the gNB at the request of a pre-authenticated UE. Once the D2D token is verified, the secret key to be used in the encryption of the data transmission step is exchanged. The exchanged secret key is derived from the key pairs of both UEs using ECDH; therefore, even if an attacker taps the data transmitted during the key exchange, the secret key cannot be derived.
Secure data transmission, the last step of the proposed D2D communication, performs encrypted data communication. The encryption uses a lightweight AEAD cipher, a cryptographic algorithm that provides not only confidentiality but also integrity and authenticity: it encrypts the data to be transmitted and creates a MAC for authenticating data integrity. Moreover, in the encryption process, the AEAD cipher uses additional information about the communication session and the other party, called the associated data (AD), and thereby provides authenticity, meaning the message is transmitted from the right party at the right time. In the proposed D2D communication system, the AD consists of the D2D token and context sequence information, and the sequence is managed for each transmission. Upon receiving a ciphertext produced with the AD configured as described above, the UE can check, along with authentication, whether the other UE in the D2D communication has sent data corresponding to the current context. The data format used in secure data transmission is shown in Figure 4. In this step, any lightweight AEAD cipher can be applied according to the available resources. Table 2 shows available lightweight AEAD ciphers, which are candidate ciphers from the NIST lightweight cryptography standardization process (Round 2) [10].
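The link-setup key agreement and the AD-bound encryption described above can be illustrated with the following minimal sketch. It uses Python's cryptography package, with ECDH over an assumed curve, an assumed HKDF key-derivation step, and AES-GCM standing in for a lightweight AEAD cipher simply because it is widely available; the AD layout (D2D token concatenated with a sequence number) is likewise an assumption.

```python
# Sketch of the link-setup key agreement and AD-bound data protection.
# AES-GCM stands in for a lightweight AEAD cipher purely because it is
# widely available; the AD layout (token || sequence) is an assumption.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each UE holds an ephemeral EC key pair for the ECDH exchange.
ue_a = ec.generate_private_key(ec.SECP256R1())
ue_b = ec.generate_private_key(ec.SECP256R1())

def session_key(own_priv, peer_pub) -> bytes:
    """Both sides derive the same session key from the ECDH shared secret."""
    shared = own_priv.exchange(ec.ECDH(), peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=16,
                salt=None, info=b"d2d-session").derive(shared)

key_a = session_key(ue_a, ue_b.public_key())
key_b = session_key(ue_b, ue_a.public_key())
assert key_a == key_b

# Secure data transmission: the sender binds its D2D token and a
# per-message sequence number into the associated data (AD).
def send(key: bytes, token: bytes, seq: int, payload: bytes):
    nonce = os.urandom(12)
    ad = token + seq.to_bytes(4, "big")
    return nonce, AESGCM(key).encrypt(nonce, payload, ad)

def receive(key: bytes, token: bytes, seq: int, nonce: bytes, ct: bytes) -> bytes:
    ad = token + seq.to_bytes(4, "big")
    return AESGCM(key).decrypt(nonce, ct, ad)   # raises on any tampering

nonce, ct = send(key_a, b"d2d-token-of-A", 1, b"hello over D2D")
assert receive(key_b, b"d2d-token-of-A", 1, nonce, ct) == b"hello over D2D"
```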
Simulation Results
In this section, we conduct a simulation to evaluate the proposed D2D communication system in terms of performance and efficiency. The performance reported here is the whole processing time of the proposed D2D communication process. Moreover, to evaluate the efficiency of the proposed D2D communication, we analyze the implementation cost of lightweight AEAD ciphers and simulate the energy consumption for each AEAD cipher.
The proposed D2D communication includes cryptographic algorithms for providing our security considerations (authentication, data confidentiality/integrity, anonymity). In detail, the applied cryptographic algorithms are the digital signature, the Diffie-Hellman key exchange algorithm and the AEAD cipher. We suppose that the processing time of each cryptographic algorithm is as follows.
Let t_DS-sign denote the processing time for signing with the digital signature, t_DS-ver the processing time for verifying the digital signature, t_DH the processing time for the key exchange and t_AEAD the processing time for the AEAD cipher. We further denote the transmission latency in D2D communication by l_tr. Finally, we can estimate the total D2D communication processing time t_D2D through Equation (1).
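Equation (1) itself is not reproduced in this text; a plausible form, reconstructed from the notation above and the counts in Table 3, is the following (the grouping into per-step counts is an assumption):

```latex
% Assumed form of Equation (1): total time = all transmissions plus all
% cryptographic operations, with N_x denoting the respective usage counts.
t_{\mathrm{D2D}} = \sum_{\mathrm{steps}} N_{\mathrm{tr}}\, l_{\mathrm{tr}}
  + N_{\mathrm{sign}}\, t_{\mathrm{DS\text{-}sign}}
  + N_{\mathrm{ver}}\, t_{\mathrm{DS\text{-}ver}}
  + N_{\mathrm{DH}}\, t_{\mathrm{DH}}
  + N_{\mathrm{AEAD}}\, t_{\mathrm{AEAD}}
```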
To calculate the sum of the processing times, we analyze the proposed D2D communication in terms of the number of transmissions and the usage count of each cryptographic algorithm at every step. In D2D token generation (step 0), there are two transmissions, the request and the response for a D2D token; this step also includes 5G-AKA for user identity authentication, which requires 10 transmissions between the UE, gNB, AMF and UDM. In terms of cryptographic algorithm usage, the D2D token generation step uses ECDSA signing for token generation; in addition, we assume that 5G-AKA consists of one ECDSA signing and one ECDSA verification, because 5G-AKA is based on an ECC certificate. In device discovery (step 1), the requesting UE broadcasts the request message, so the number of transmissions for the request equals the number of devices (m) located near the sender UE; this step also includes one transmission for the response. In link setup (step 2), when the two devices establish the connection, there are six transmissions (two for token verification, two for the verification responses and two for the key exchange) and the cryptographic algorithm is used three times (two ECDSA verifications and one ECDH). Lastly, secure data transmission (step 3) requires a number of transmissions proportional to the amount of data (n bytes); we assume that data are transmitted in packets of 1460 bytes, a typical maximum transmission unit (MTU) size. Moreover, the AEAD cipher is used twice (encryption/decryption) in this step. We can then calculate the total processing time by multiplying each processing time by the number of transmissions or the usage count of the corresponding cryptographic algorithm. Table 3 summarizes the processing time at each step of the proposed D2D communication.
Table 3. The summary of processing time of proposed D2D communication.
Step     Transmissions       ECDSA-Sign          ECDSA-Verify     ECDH        AEAD
Step 0   (2+10) * l_tr       (1+1) * t_DS-sign   1 * t_DS-ver     -           -
Step 1   (m+1) * l_tr        -                   -                -           -
Step 2   (2+2+2) * l_tr      -                   2 * t_DS-ver     1 * t_DH    -
Step 3   (n/1460) * l_tr     -                   -                -           2 * t_AEAD

When we simulate Equation (1) using the processing times in Table 3, we set each time parameter based on 5G network requirements and existing implementation results for the cryptographic algorithms. The 5G network requires a transmission latency of 1 ms [18]; accordingly, we set l_tr = 0.001. Moreover, we set the processing times of the ECC-based algorithms based on the performance presented in [19] (t_DS-sign = 0.122, t_DS-ver = 0.458, t_DH = 0.1672). In the case of t_AEAD, we calculate the processing time by dividing the amount of data (n (Mb) = n (bytes) * 8/10^6) by the throughput (Mbps) of the algorithm. To simulate various AEAD ciphers, we set the parameters for five AEAD ciphers (AES-GCM, ASCON, SpoC, Spook and GIFT-COFB) based on the performance results presented in [20]. Each case of t_AEAD is as follows (measured at 50 MHz): t_AES-GCM = n (Mb)/31.2 (Mbps), t_ASCON = n (Mb)/39.0 (Mbps), t_SpoC = n (Mb)/28.8 (Mbps), t_Spook = n (Mb)/88.3 (Mbps), t_GIFT-COFB = n (Mb)/120.8 (Mbps). Figure 5 shows the simulation result of the proposed D2D communication. The AEAD ciphers used in the simulation consist of one general-purpose AEAD cipher (AES-GCM) and four lightweight AEAD ciphers. Simulation results show that three lightweight AEAD ciphers (ASCON, Spook and GIFT-COFB) are faster than AES-GCM (optimized). In particular, GIFT-COFB is about 18.71% faster than AES-GCM when transmitting 10 KB of data. However, because 5G IoT networks have limited resources, good performance of a cryptographic algorithm alone may not suit all 5G IoT devices; the algorithm also has to be implemented compactly and must consume little power. Table 4 shows the hardware implementation results of the AEAD ciphers [19]. Even though Spook is faster than AES-GCM (optimized), Spook has the highest implementation cost in terms of the area required, measured in look-up tables (LUTs). Figure 6 shows the energy consumption by amount of data based on the energy efficiency in Table 4. In terms of energy consumption, GIFT-COFB and ASCON consume less energy than AES-GCM (optimized), but SpoC and Spook consume more. Considering that both GIFT-COFB and ASCON also show better performance than AES-GCM (optimized) in the performance simulation, when GIFT-COFB or ASCON is applied to the proposed D2D communication, both the speed and energy efficiency of the proposed D2D communication are better than for AES-GCM (optimized)-based D2D communication.
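The timing model can be re-implemented in a few lines; the sketch below assumes the counts from Table 3 and the parameter values quoted above, with the number of nearby devices m and the payload size n left as free parameters.

```python
# Minimal re-implementation of the timing model (assumed Equation (1)),
# using the parameter values quoted in the text. m = number of nearby
# devices, n = payload size in bytes; both are free parameters here.
import math

l_tr = 0.001            # 5G transmission latency target, seconds
t_sign, t_ver, t_dh = 0.122, 0.458, 0.1672   # ECC timings from [19], seconds

# Throughputs in Mbps from [20]; AEAD time = data volume (Mb) / throughput.
throughput_mbps = {"AES-GCM": 31.2, "ASCON": 39.0, "SpoC": 28.8,
                   "Spook": 88.3, "GIFT-COFB": 120.8}

def t_d2d(cipher: str, n_bytes: int, m: int) -> float:
    transmissions = (2 + 10) + (m + 1) + (2 + 2 + 2) + math.ceil(n_bytes / 1460)
    t_aead = (n_bytes * 8 / 1e6) / throughput_mbps[cipher]   # per en/decryption
    return (transmissions * l_tr            # all message exchanges
            + 2 * t_sign + 3 * t_ver        # token generation + verifications
            + 1 * t_dh                      # ECDH key exchange
            + 2 * t_aead)                   # encryption and decryption

for cipher in throughput_mbps:
    print(cipher, round(t_d2d(cipher, n_bytes=10_000, m=5), 4))
```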
Security Analysis
In this section, we perform security analysis of the proposed secure D2D communication system. As we mentioned before, secure D2D communication requires authentication, data confidentiality/integrity and anonymity. In addition, considering the resource-constrained nature of a 5G IoT network, it must be implemented lightly and must perform efficiently. We first discuss the proposed D2D communication system based on our security considerations. Moreover, we discuss security against typical threats of D2D communication.
Analysis Based on Security Considerations
• Authentication: The proposed D2D communication system performs primary authentication using 5G-AKA, the authentication framework provided by 5G, to authenticate a UE before issuing a D2D token. Moreover, the issued token enables secondary authentication: it is verified during the creation of a D2D communication link using the gNB's public key and the SUCI. Finally, in the data transmission step, the token is used as AD to authenticate the other party for each transmission of data. In this way, the UE is authenticated in all stages, from issuing the token for D2D communication to every data transmission, so that more secure communication can be performed.
• Data confidentiality and integrity: The proposed D2D communication system generates the D2D token from the SUCI, which is the encrypted UE identity, and the private key of the gNB during the D2D token generation process. In this process, the unencrypted identity of the UE is never transmitted. In addition, in the actual data transmission step after creating a D2D link, encryption is performed using a lightweight AEAD cipher, which provides integrity and authentication as well as data confidentiality. Therefore, the proposed D2D communication system can guarantee the confidentiality/integrity of both the UE's identity and the communication data.
• Anonymity: In 5G networks, the SUCI is an encrypted identity for UE anonymity, which protects the identity of the UE itself. Moreover, the D2D token used in the proposed D2D communication is obtained by signing the SUCI with the private key of the gNB, which also provides anonymity because the identity of the UE cannot be recognized directly from it.
• Efficiency: Both the authentication process and the data encryption process used in the proposed D2D communication system are based on lightweight cryptography. The lightweight ciphers used in this paper are the ECC-based public key cryptosystem and the lightweight AEAD cipher. The ECC-based public key cryptosystem uses a 256-bit key and operates faster than RSA, which uses a 1024-bit key. Moreover, the lightweight AEAD cipher is designed to be efficiently implemented in a resource-constrained environment such as in IoT and provides data confidentiality/integrity and authentication.
Security against Typical Threats
• Impersonation attack. In the D2D token generation step, each UE is issued a D2D token, which is signed by the gNB. When a gNB generates the token, it authenticates the validity of the UE by comparing the UE's identity with the information stored in the UDM. After this authentication process, the gNB completes the generation of the D2D token using its private key. Because of this procedure, an attacker cannot impersonate another UE.
• Eavesdropping. In the secure data transmission step, every data transmission is protected by the lightweight AEAD cipher. In the AEAD encryption process, the UE uses its D2D token and context sequence as associated data. A MAC is generated from this associated data, so the receiving UE can check the integrity of the message and the validity of the sender. For these reasons, an attacker can neither eavesdrop nor fabricate a message.
• Privacy sniffing. The proposed D2D communication system uses the D2D token, which is generated from the SUCI of the UE and the digital signature of the gNB using ECDSA. The D2D token provides anonymity as a cryptographic identity, so an attacker cannot recover the original identity of the UE.
• Free-riding attack and location spoofing. When the D2D token is generated by the gNB, the validity of the UE is authenticated; thus, each UE's validity is managed by the gNB. The D2D token is verified in the link setup step of the proposed D2D communication system, and the data transmission is protected by AEAD encryption using the D2D token. Therefore, if a free-riding attack or location spoofing occurs in D2D communication, the gNB can handle the situation by eliminating the malicious UE.
Conclusions
In this paper, we propose a secure D2D communication system for a 5G IoT environment. The proposed D2D communication is designed around an ECC-based public key cryptosystem and a lightweight AEAD cipher for efficiency in the 5G use cases corresponding to IoT scenarios, mMTC and URLLC. Before D2D communication is performed, the UE identity is verified based on the 5G-AKA provided by the 5G network, and a token is then generated using ECDSA for the D2D communication. The generated token can authenticate the legitimacy of the corresponding UE identity in the link setup process after device discovery, and this can be done without connecting to the core network. In addition, by performing encrypted communication with the lightweight AEAD cipher, using the token as the associated data in the secure data transmission step, the confidentiality/integrity of the data and the authentication of the UE can be ensured for each data transmission. This approach can provide higher performance and energy efficiency than a general-purpose AEAD cipher-based communication system, and can also provide security against threats such as impersonation, eavesdropping, privacy sniffing, free-riding and location spoofing.
"Computer Science"
] |
Statistical Analysis on the Structural Size of Simulated Thin Film Growth with Molecular Dynamics for Glancing Angle Incidence Deposition
For the purpose of a deeper understanding of thin film growth, several groups have developed models for simulation on the atomistic scale over the last two decades. Models using molecular dynamics as their simulation method already give results comparable to experiments; however, statistical analysis of the simulations themselves has been lacking so far, owing to the limits imposed by the available computational power and by parallelization, which can only be exploited in the lateral dimensions. With advancements in software and hardware, an increase in simulation speed by a factor of up to 10 can be reached. This allows either larger structures and/or a higher throughput of simulations. This paper analyses the significance of increasing the structure size in the lateral dimensions, as well as the repetition of simulations, to gain more insight into the statistical fluctuation contained in the simulations and into how well they coincide with experiment. Glancing angle incidence deposition (GLAD) coatings are taken as an example. The results give important insights regarding the interaction potential used, the structure size and the resulting differences in density, surface morphology, roughness and anisotropy. While larger structures naturally reproduce the real world in more detail, the results show which structure sizes are needed to capture these aspects without wasting computational resources.
Introduction
The simulation and modeling of processes is an important area far beyond thin film growth. However, the computation of the deposition of thin films is a challenge, because for a concise model several orders of magnitude in space and time have to be covered by a multi-scale model. As described in [1][2][3], this involves several models, from direct simulation Monte Carlo (DSMC), to atomistic models like molecular dynamics (MD) or kinetic Monte Carlo (kMC), to quantum mechanical density functional theory (DFT) calculations. Within the framework of the theoretical description, the complete virtualization of the coating process places enormous demands on both the computational technology and the numerical procedures. In addition, the computational effort and numerical precision have to be carefully balanced for each individual sub-area of the virtual coating. Atomistic models are employed to explore the nanostructure properties of the thin films [1,4,5], while glancing angle deposition (GLAD) is of high interest due to its concise nanostructure [6][7][8]. Results that correspond well between experiment and simulation have already been obtained [6,7], taking into account the size of the simulated structures.
The size is limited by computational resources, and although parallelized algorithms are used, this mainly allows the increase of the lateral structure dimensions. The growth height is more restricted by the computational resources, because the deposition of atoms can be parallelized only to a very limited extent. Therefore, it is important to choose a structural size that describes the properties of the material well while not exhausting resources more than needed. Despite advances in computer technology, software and algorithms, the thickness of structures calculated by classical molecular simulation is still about one order of magnitude away from the structures of experimentally produced single layers, while the factor for multi-layers, considering optical thicknesses, is about two to three orders of magnitude. Technological advances will not be able to close this gap completely, at least for multi-layers, in the foreseeable future. Therefore, it is crucial to evaluate the significance of the virtual structures as accurately as possible.
The intention of this manuscript is to conclude which structural sizes are useful to consider for virtual material investigations by a quantitative statistical analysis of several simulations and different structure sizes. Also, the impact of the chosen interaction potential is investigated. Only in this way can the maximum benefit by correlation of the theoretical results with the experiments be derived. The following presentation makes a substantial step towards improved knowledge of the significance of the simulations.
Materials and Methods
A major challenge for the simulation is to describe correctly the nanostructure, which occurs especially in thermal coating processes. In particular, the application of GLAD coatings relies on the proper use of these structural properties, making this coating process ideally suited for evaluation of the simulation results. For the investigation the thin film growth of TiO 2 and SiO 2 is analyzed, while the recent publications from Badorreck et al. and Grineviciute et al. [6,7] serve as a basis. The growth of the thin films is investigated by means of atom deposition within the framework of classical molecular dynamics.
Badorreck et al. [6] investigated GLAD coatings of TiO2 with regard to their birefringence properties. However, the previously performed simulations with deposition under 0°, 30°, 50° and 70° had a small lateral size of 7 × 8 nm²; therefore, the results will be compared here to structures grown on a substrate area of 30 × 20 nm². In addition, the impact of using different interaction potentials is analyzed, namely the potentials from Matsui et al. [9] and Zhang et al. [10]. The latter has the advantage that it can be combined with potentials for other materials, e.g., SiO2, from the same publication, as was done in [5]. The substrate temperature is assumed to be 300 K. More sophisticated potentials, e.g., of the modified embedded atom method (MEAM) type [11] such as the one from Lee et al. [12], are not considered here, because their computational effort is at least one order of magnitude higher compared to the potentials from Matsui and Zhang, which are of the Buckingham type [13], and they are therefore not feasible for this investigation.
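For reference, pair potentials of the Buckingham type combine a short-range exponential repulsion and a dispersion term with the Coulomb interaction; a generic form is given below (the parameters A_ij, ρ_ij and C_ij are what differ between the Matsui and Zhang parameterizations; the exact values are not reproduced here).

```latex
% Generic Buckingham-plus-Coulomb pair potential; A, rho and C depend on
% the chosen parameterization (e.g., Matsui or Zhang).
U_{ij}(r) = A_{ij}\, e^{-r/\rho_{ij}} - \frac{C_{ij}}{r^{6}}
            + \frac{q_i q_j}{4\pi\varepsilon_0\, r}
```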
Grineviciute et al. [7] also analyzed birefringent properties, in this case of SiO2 serial bi-deposition coatings, with the emphasis on the effect of continued coating onto these porous structures, which is essential for the application in all-silica GLAD coatings. Here, the first layer grown by serial bi-deposition is simulated for two different substrate areas of 16 × 7 nm² and 30 × 20 nm², while the growth of the second layer is repeatedly simulated for the 16 × 7 nm² case. The interaction potential from Zhang et al. [10] is used. The substrate temperature is also assumed to be 300 K.
The algorithm for the deposition simulation consists of placing several atoms above the top surface of the structure. For each iteration, the starting positions are computed from a randomly positioned grid, where each position on the grid additionally has its own random variation. At the same time, it is ensured that the distance between the positions is larger than the interaction cutoff, which is sufficient for low deposition energies that do not significantly exceed thermal energies. With the direction of their velocity vectors, the atoms are moved directly towards the surface, keeping to the limits of the interaction cutoff of 1.2 nm, so that the computationally expensive part of the MD is minimized. An illustration of this state from different viewpoints is shown in Figure 1, with the atoms to be deposited shown in green, traveling in the direction of the atoms marked in blue, as indicated by the pink arrow. One MD deposition iteration is performed within the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [14] in two stages, the first within an NVE (constant number of particles, volume and energy) ensemble, the second within an NVT (constant number of particles, volume and temperature) ensemble. The integration time step is 1 fs, while the number of integration steps is determined automatically for each deposition iteration depending on how fast the atoms reach the surface; it usually lies in the range of 10^4 for the NVE stage and 50 for the NVT stage with a damping parameter of 0.01. Long-range Coulomb interactions are calculated by Wolf's summation [15]. A more detailed description of the algorithm can be found in [6]. The computational effort for the following simulations ranges, depending on the structure size, from one month to about six months on a system capable of 10 TFlops.
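The placement logic can be illustrated with a short sketch: start positions are drawn from a randomly shifted grid with per-site jitter and accepted only if they respect the interaction cutoff under the periodic lateral boundaries. The grid spacing and jitter amplitude are illustrative assumptions, not the values used in the actual simulations.

```python
# Illustrative sketch of the randomized-grid placement described above
# (grid spacing and jitter amplitude are assumptions, not the exact values).
import numpy as np

CUTOFF = 1.2   # interaction cutoff in nm

def start_positions(lx, ly, jitter=0.3, seed=None):
    """Draw (x, y) start positions from a randomly shifted grid with
    per-site jitter; reject draws that violate the cutoff distance."""
    rng = np.random.default_rng(seed)
    spacing = CUTOFF + 2 * jitter + 0.1
    shift = rng.uniform(0, spacing, size=2)          # random grid origin
    grid = np.array([[x, y]
                     for x in np.arange(0, lx, spacing)
                     for y in np.arange(0, ly, spacing)]) + shift
    pts = (grid + rng.uniform(-jitter, jitter, grid.shape)) % [lx, ly]
    return np.array([p for p in pts if _clear(p, pts, lx, ly)])

def _clear(p, pts, lx, ly):
    """True if p keeps at least CUTOFF from every other point
    under periodic (minimum-image) lateral boundaries."""
    d = np.abs(pts - p)
    d = np.minimum(d, [lx, ly] - d)                  # minimum image
    r = np.hypot(d[:, 0], d[:, 1])
    return np.sum(r < CUTOFF) <= 1                   # only the point itself

print(len(start_positions(30.0, 20.0, seed=1)), "start positions")
```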
Figure 1. (a) Illustration from the top of a deposition step with a glancing angle of incidence of 50° directed to the top-left. Atoms to be deposited (green) are placed at the top and moved towards the structure keeping the limits of the interaction cutoff. The blue marked atoms indicate the direction of the depositing atoms and would be hit without any interaction, as also indicated by the pink arrow. (b) 3D view of the same illustration with the dimensions of the simulation box. Graphics generated with NGL [16,17].
The analysis of the finally grown thin films is then performed for the density behavior and the surface roughness. Further their anisotropy is analyzed as described in Ref. [6].
TiO2
GLAD coatings are manufactured under very specific conditions. The particle energy must be kept low and the coating temperature is low, so that the structures can develop in a suitable manner. In addition, the angle of incidence must be set in such a way that the shadowing effect leads to the tailored growth of columns. Therefore, we compare the results for structures grown at deposition angles of 0°, 30°, 50° and 70° for the two different substrate areas of 7 × 8 nm² and 30 × 20 nm². Going from the smaller to the larger substrate, the surface area increases by a factor of 10. Due to the computational effort, the smaller structures are simulated with the potential from Matsui et al. [9], while for the larger ones the potential from Zhang et al. [10] is used. To still allow a comparison between the structures despite the different potentials used, the structure for 0° and the 7 × 8 nm² substrate size is also simulated with the potential from Zhang.
The structures in Figure 2, and especially the structures grown under 70°, show that the larger structures exhibit a higher variation of the microstructure than can be modeled by the smaller ones. While the small structure for the 70° deposition angle effectively returns one slanted column, which is periodically continued at the lateral borders and can actually be considered a wall, the larger structure exhibits several individual and complete columns.
A simple qualitative consideration of the structures shows for the large structures that the direction of growth seems to change slightly with an increasing layer thickness resulting in curved columns. Such details cannot be seen in the smaller structures. This already indicates a qualitative difference.
The investigation is performed for two different interaction potentials, in order to also get an idea of the impact of the potential itself on the structural properties. For a better understanding of the comparison between the thin film growth for different substrate sizes, the impact of the two interaction potentials from Matsui et al. [9] and Zhang et al. [10] is explored first. In Figure 3, a comparison of the density is shown for the small substrate size of 7 × 8 nm², performed four times with the Matsui potential and once with the Zhang potential. For the thin film growth using the Matsui potential, the mean of the density profiles of about 3 g/cm³ (excluding the region of the substrate and initial growth zone) is plotted with the standard deviation as a shaded region. The growth with the Zhang potential, by comparison, results in a significantly lower density of about 2.25 g/cm³ over the full height. The difference is much larger than the standard deviation of about ±0.25 g/cm³ shown for the Matsui potential. While no standard deviation for the simulation with the Zhang potential can be given here, the analysis of the following 30 × 20 nm² structures, divided into several parts of 7 × 8 nm² lateral size, indicates a very similar standard deviation of the density profile to that shown for the Matsui potential. Compared with experimental results [6], the structures grown using the Zhang potential obviously underestimate the density. A cross check for structures grown at a high energy of 10 eV also shows a difference in density between both potentials, of 4.05 g/cm³ and 3.25 g/cm³ for the Matsui and Zhang potential, respectively. Although the structures in Figure 2 are grown with different potentials, we analyze the results without correction factors.
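Density profiles of the kind discussed here can be obtained by slicing the structure along the growth direction and dividing the mass in each slab by the slab volume; the sketch below assumes atom coordinates in nm and masses in atomic mass units, with an illustrative bin width.

```python
# Sketch of a density profile along the growth direction z.
# Assumes coordinates in nm and masses in atomic mass units (u);
# the bin width is an illustrative choice.
import numpy as np

U_TO_GRAM = 1.660539e-24          # 1 u in g
NM3_TO_CM3 = 1e-21                # 1 nm^3 in cm^3

def density_profile(z, mass_u, area_nm2, dz=0.5):
    """Return bin centres (nm) and mass density (g/cm^3) per z-slab."""
    edges = np.arange(z.min(), z.max() + dz, dz)
    mass_per_bin, _ = np.histogram(z, bins=edges, weights=mass_u)
    rho = mass_per_bin * U_TO_GRAM / (area_nm2 * dz * NM3_TO_CM3)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, rho

# Example with random placeholder data for a 30 x 20 nm^2 substrate:
z = np.random.uniform(0, 12, size=200_000)        # atom z coordinates, nm
mass = np.full_like(z, 47.87)                     # e.g., Ti atoms only, u
zc, rho = density_profile(z, mass, area_nm2=30 * 20)
print(f"{rho.mean():.2f} g/cm^3 (placeholder data)")
```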
Looking at the density profiles in Figure 4 for the structures from Figure 2, two interesting aspects are revealed. On the one hand, the density profiles for the small structures are very noisy. As already shown in Figure 3, a standard deviation of ±0.25 g/cm³ can be expected; therefore, a variation of up to ±0.5 g/cm³ is not uncommon. Here, the increase of the structure size results in a considerable improvement, with a reduction of the density variation to about ±0.1 g/cm³. Therefore, the dependency of the density on the deposition angle can be considered with much more significance. While in principle the small structures also show this dependency, the fluctuation makes it especially hard to determine a difference between 0° and 30°. The large structures show very clearly that the density for 30° is a bit lower than for a 0° deposition angle.
On the other hand, the larger structures show a significantly lower overall density than the small structures. With the results shown in Figure 3, a significant amount of the difference of 0.75 g/cm³ can be attributed to applying different potentials. However, the reduction of the density is still larger, with a remaining additional drop of about 0.5 g/cm³. While the density roughly varies between 2 and 3 g/cm³ for the small structures, for the larger ones this is reduced to 1 to 1.75 g/cm³; in sum, that is about a factor of 2. Compared to the experimental results presented in [6], the densities for the structures grown with the Zhang potential are significantly too low. Consequently, a modification of the potential seems to be necessary for its application in the future.
Figure 3. Density profiles in growth direction z of TiO2 for an incidence angle of 0° and a substrate area of 7 × 8 nm². The comparison between using different interaction potentials is shown. For the Matsui potential [9] the standard deviation, shown as a shaded region, is computed from four independent simulations. With the Zhang potential [10] a lower density is obtained.
Regarding the surface morphology in Figures 5 and 6 for the small and larger structures, respectively, the morphology shows that the growth for 0° starts with many columnar structures (mean height of 5 nm), while for 70° fewer columns are recognized. This can be considered a direct result of the shadowing effect. Also, thicker structures reduce the number of columns, which is caused by the coalescence or overgrowth of the columns. The comparison of the morphology and roughness of the small structures with the larger ones shows fewer columns and much less roughness for the small ones.
Figure 5. Images of the surface morphology for a substrate area of 7 × 8 nm² and different mean heights. The scale is shifted for different heights, but its range is kept the same to make all images comparable.
Figure 6. Images of the surface morphology for a substrate area of 30 × 20 nm² for different mean heights. The scale is shifted for different heights, but its range is kept the same to make all images comparable.
In Figure 7, the structures from Figure 2 are analyzed with regard to the root mean square (rms) roughness [18]. This is calculated by Welch's method [19] as implemented in NumPy/SciPy [20,21]. The maximum roughness for 7 × 8 nm 2 is reached after 5 to 10 nm while with a larger angle of incidence the maximum is reached faster. When the maximum is reached, the roughness in the following is characterized by a large fluctuation around the maximum with no significant difference between the incidence angles. For 30 × 20 nm 2 the roughness is much higher already for a height of 10 nm and a plateau cannot be identified yet. In addition, the incidence angle of the deposited atoms shows a significant impact on the roughness, the value for 70 • compared to 0 • at a height of 12 nm is about twice as large. This indicates that the lateral size of 7 × 8 nm 2 cannot image the full surface morphology and roughness. For a representative image of the roughness, structures have to cover all characteristic features completely. If the structure and, therefore, the surface area, is smaller, the relevance of the analysis is reduced.
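As an illustration of the roughness evaluation, the sketch below estimates an rms roughness from a height map by integrating a Welch power spectral density with SciPy; the sampling, segment length and row-wise averaging are assumptions, not necessarily the exact procedure used for Figure 7.

```python
# Sketch of an rms roughness estimate from a surface height map via
# Welch's method; sampling, segment length and row-wise averaging are
# illustrative assumptions.
import numpy as np
from scipy.signal import welch

def rms_roughness(height_map, dx=0.1):
    """height_map: 2D array of surface heights (nm) on a grid with
    spacing dx (nm). Returns an rms roughness estimate in nm."""
    rms_sq = []
    for row in height_map:                       # 1D PSDs along x, averaged
        f, psd = welch(row - row.mean(), fs=1.0 / dx,
                       nperseg=min(64, len(row)))
        rms_sq.append(np.trapz(psd, f))          # Parseval: PSD integral ~ variance
    return float(np.sqrt(np.mean(rms_sq)))

surface = np.random.default_rng(0).normal(0.0, 0.4, size=(200, 300))
print(round(rms_roughness(surface), 3), "nm (placeholder surface)")
```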
The analysis of the anisotropy is presented in Table 1. While the fill factors correspond to the density distribution of the simulations shown in Figure 4, the birefringence shows no similar behavior when comparing small and large structures. For the substrate size of 7 × 8 nm², the birefringence steadily increases with larger deposition angles, and this is not reproduced for 30 × 20 nm². The resulting birefringence ∆n for a 0° angle of incidence and 30 × 20 nm² is larger than for 7 × 8 nm², although here a value closer to 0 is expected due to the smaller statistical fluctuation. The structure for a 30° deposition angle shows a larger value of ∆n, while for 50° and 70° the values are between those for 0° and 30°. This can be a result of the extremely porous structure, which prevents a pronounced anisotropy.
In sum, the analysis shows that, especially regarding the Zhang potential for TiO2, there is a need for further investigation and optimization of the potential parameters, because the produced densities are much too low. However, the different structure sizes are still comparable qualitatively. For reproducing features such as several columns and changing growth direction during deposition, and for imaging the surface roughness to a much better extent, the use of a substrate size of at least 30 × 20 nm² can be recommended.
SiO2
For SiO2, an initial porous layer, grown by serial bi-deposition, is continued with a second layer. For the second layer, deposition configurations with incidence angles of 0°, 30° and 50° and a rotating substrate are chosen. Each simulation is performed three times to evaluate the variation of the continued growth caused by the randomized deposition algorithm. The structures are shown in Figure 8. While the lower thirds of the structures are nearly identical, the upper two thirds exhibit differences.
Figure 8. SiO2 structures, with the same initial first zig-zag layer (left), followed by a layer with a deposition angle of 0°, 30° and 50° with rotating substrate. Graphics generated with NGL [16,17].
When looking at the density profiles of the structures in Figure 9, an improvement over the results in previous works by the authors in collaboration with Vilnius University in [7] can be presented. The plot shows the mean and standard deviation of the density profiles evaluated for the three repeated simulations. Here, the impact of the second layer deposition onto the first layer can be better distinguished for the 30° and 50° cases in the growth height region between 15 and 20 nm. The standard deviation shows that the impact and its variation are minimal for 50°, while for 30° the impact is larger and has a higher fluctuation. Due to this variation, the difference between 30° and 50° was hard to determine in [7]. For the height region of the second layer (25 to 50 nm), a slightly denser structure is obtained for 0° compared with 50°, while the density for 30° fluctuates between both. This is also an improvement on [7], where no significant difference between all three deposition angles could be quantified.
The surface morphology and roughness are presented in Figures 10 and 11 and show a large increase of the surface roughness for the first layer (zig-zag), which is followed by a decrease with the continuation of the second layer. For 0° incidence, the largest decrease in surface roughness is reached, and for 50° the value is constantly larger, while for 30° the roughness fluctuates between the values for 0° and 50°.
The evaluation considering the anisotropic properties of the structures is also important for obtaining an idea of the random variations. Table 2 give the results for the first layer region between 7 and 20 nm (which is impacted by the second layer) and the second layer region between 25 and 47 nm, respectively. The fill factor is determined within limits of the standard deviation between 0.6% and 0.9% for 0 • and 30 • . Interestingly, for 50 • the impact on the first layer has a very small variation of 0.05%, which is also caused by the fact that the impact itself is small, while the variation of the second layer is significantly larger with 1.59%. The variation of the birefringence is very small for the first layer, however here only the impact of the second layer on to the first one is obtained. For the second layer, the variation is slightly larger, while the influence from the first layer is still significant at least for 0 • . In sum, the standard deviation is much lower than the birefringence values computed for the first layers and in the range of the changes that describe a decreased birefringence for smaller incidence angles. The number of m = 3 simulations for each angle have to be considered as a compromise between computational effort and statistical significance. Assuming a normal distribution, for calculation of a 95% confidence interval, the multiplicative factor with regard to the Student's t distribution with m-1 degrees of freedom results in being 4.30, while the factor of 1.96 would be reached for m = ∞. Therefore, the statistical significance could be doubled roughly, but with a much higher computational effort. larger with 1.59 %. The variation of the birefringence is very small for the first layer, however here only the impact of the second layer on to the first one is obtained. For the second layer, the variation is slightly larger, while the influence from the first layer is still significant at least for 0°. In sum, the standard deviation is much lower than the birefringence values computed for the first layers and in the range of the changes that describe a decreased birefringence for smaller incidence angles. The number of m = 3 simulations for each angle have to be considered as a compromise between computational effort and statistical significance. Assuming a normal distribution, for calculation of a 95 % confidence interval, the multiplicative factor with regard to the Student's t distribution with m-1 degrees of freedom results in being 4.30, while the factor of 1.96 would be reached for m = ∞. Therefore, the statistical significance could be doubled roughly, but with a much higher computational effort. Figure 10. Surface morphology of the zig-zag structure from serial bi-deposition (first row) for 4, 7 and 14 nm mean height, and continued growth with glancing angle deposition under 0°, 30° and 50° with rotating substrate for 21, 32 and 40 nm mean height. Figure 11. Standard deviation σ of the height during growth of the first layer (zig zag) and the continued growth of the second layer with different coating conditions on the right including also the fluctuation (standard deviation) as shaded region. Figure 11. Standard deviation σ of the height during growth of the first layer (zig zag) and the continued growth of the second layer with different coating conditions on the right including also the fluctuation (standard deviation) as shaded region.
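The confidence-interval factors quoted above follow directly from the Student's t distribution; a minimal sketch (not part of the original analysis, assuming SciPy is available) reproduces them:

```python
# Sketch: 95% confidence-interval factors for m repeated simulations.
from scipy import stats

m = 3                                    # repeated simulations per deposition angle
t_factor = stats.t.ppf(0.975, df=m - 1)  # two-sided 95% interval, m - 1 degrees of freedom
z_factor = stats.norm.ppf(0.975)         # limiting factor for m -> infinity

print(f"t-factor for m = 3:  {t_factor:.2f}")   # ~4.30
print(f"factor for m -> inf: {z_factor:.2f}")   # ~1.96

# The half-width of the confidence interval of the mean is t_factor * s / sqrt(m),
# with s the sample standard deviation of the m repetitions.
```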
Discussion and Conclusions
The results show several aspects that have to be considered when performing and evaluating simulations of thin film growth. The analysis of TiO2 reveals that the chosen interaction potential has a significant impact on the structural properties, most notably the density and, at least as a secondary effect, also on the surface roughness and anisotropy. Here, the potential from Zhang et al. produces densities that are much too low compared to experimental values. This is already visible for the small structures, and the effect is increased for the larger structures. This corresponds partly to the findings of Grigoriev et al. [8] (p. 7), who investigated SiO2 thin film growth with high-energy Si and observed a small decrease of the density for larger structures. However, our decrease in density for the larger TiO2 structures is more pronounced.
Nevertheless, the increase of the lateral structure size from 7 × 8 nm² to 30 × 20 nm² results in a significantly better description, i.e., much less fluctuation, of the density profiles in the growth direction and of the development of the surface rms roughness. Furthermore, the description of the rms roughness is qualitatively enhanced, so that much higher values are reached and a dependence on the angle of incidence becomes visible; the small structures of 7 × 8 nm² cannot reproduce that. The analysis of the anisotropy remains inconclusive. Here, further investigations have to be conducted into how large the influence of the different chosen potentials really is and which effects can be attributed solely to the structure size.
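The rms roughness discussed here is the standard root-mean-square deviation of the surface height from its mean; a minimal sketch (illustrative only, not the authors' evaluation code) shows how it can be computed from a simulated height map:

```python
import numpy as np

def rms_roughness(height_map):
    """Root-mean-square deviation of the surface height from its mean."""
    h = np.asarray(height_map, dtype=float)
    return float(np.sqrt(np.mean((h - h.mean()) ** 2)))

# Example with a synthetic height map on a 0.5 nm grid spanning a 30 x 20 nm^2 area;
# the values are placeholders, not simulation output.
rng = np.random.default_rng(0)
h = rng.normal(loc=40.0, scale=1.2, size=(60, 40))  # heights in nm
print(f"sigma_rms = {rms_roughness(h):.2f} nm")
```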
The analysis of repeated simulations of SiO2 GLAD coatings gives more reliable results and also an estimate of the significance of the simulated structural properties and their differences. Repetitions of the simulations can therefore help to gain more insight, even though, e.g., the columnar structure is probably not a complete representation of the real world, as learned from the analysis of the TiO2 structures. The dimension in y of only 7 nm does not allow the buildup of independent columns; instead, wall-like forms are grown due to the periodic boundary conditions. The density of the SiO2 structures grown with the potential from Zhang et al. [10] yields experimentally reasonable values.
The presented data allow an estimation of the significance of structural properties depending on the structure size. While more data are needed for a detailed dependence, the results already give an important idea of how large the structures have to be for the respective property. Furthermore, the importance of the choice of the potential is emphasized; it needs to be thoroughly tested and compared to experimental findings, especially regarding the density. | 10,544.8 | 2021-04-17T00:00:00.000 | [
"Materials Science",
"Engineering",
"Physics"
] |
A single nucleotide mutation in Nppc is associated with a long bone abnormality in lbab mice
Background: The long bone abnormality (lbab) mouse is a new autosomal recessive mutant characterized by an overall smaller body size with proportionate dwarfing of all organs and shorter long bones. Previous linkage analysis located the lbab mutation on chromosome 1 between the markers D1Mit9 and D1Mit488. Results: A genome-based positional approach was used to identify a mutation associated with lbab disease. A total of 122 genes and expressed sequence tags at the lbab region were screened for possible mutations by using genomic DNA from lbab/lbab, lbab/+, and +/+ B6 mice and high throughput temperature gradient capillary electrophoresis. A sequence difference was identified in one of the amplicons of the gene Nppc between lbab/lbab and +/+ mice. One-step reverse transcriptase polymerase chain reaction was performed to validate the difference of Nppc in the different types of mice at the mRNA level. The mutation of Nppc was unique to lbab/lbab mice among multiple mouse inbred strains. The mutation of Nppc co-segregated with lbab disease in 200 progeny produced from heterozygous lbab/+ parents. Conclusion: A single nucleotide mutation of Nppc is associated with dwarfism in lbab/lbab mice. Current genome information and technology allow us to efficiently identify single nucleotide mutations from roughly mapped disease loci. The lbab mouse is a useful model for hereditary human achondroplasia.
(TJL) [4]. Homozygous mutants exhibit proportionate dwarfing of all organs and shorter long bones. The mutation has been transferred to the C57BL/6J strain to improve reproduction. The genetic locus responsible for the phenotype has previously been mapped to chromosome 1 (Chr 1) between markers D1Mit9 and D1Mit488 (53.5 cM) at TJL [4], but the responsible gene and the nature of mutation remained unclear.
Disease gene hunting has always been time-consuming and labor-intensive. For successful map-based cloning, complicated fine mapping of a major locus is generally essential [5]. Recently, we developed an alternative, sequence-based, positional candidate cloning approach to bypass this bottleneck of cloning, and we have successfully identified several mutated genes in different mouse spontaneous mutants by applying this strategy [6,7]. Our strategy takes advantage of the availability of comprehensive murine sequence databases, polymerase chain reaction (PCR), high throughput PCR product analysis, and sequencing technologies to speed up the process of disease-related gene hunting. Interestingly, we identified a nucleotide mutation (C→G transversion) in the gene Nppc in the lbab mice. Herein, we describe the detailed process of our cloning and validation.
Results
Body growth of lbab/lbab mice All lbab/lbab mice housed at the University of Tennessee Health Science Center died before 7 days of age with remarkable changes in organ weight and body size (Fig. 1, Table 1). No detectable differences in survival rate or body size were noted between the lbab/+ and +/+ mice.
Target region of the mutation in lbab locus
Previous genetic analysis showed that the lbab mutation is located on mouse Chr 1 and is flanked by the molecular markers D1Mit9 and D1Mit488 [4]. According to the Ensembl database, D1Mit488 is located between 91902920 and 91903043 bp. However, there is no physical position for D1Mit9 in the database, although we know from the TJL database that it is positioned at 53.5 cM. From TJL's mouse genome informatics database, we found 13 molecular markers at the 53.5 cM position. The positions of most of these markers [8] are near 84 Mb [9]. Accordingly, we decided to start our investigation in the region between 83 and 90 Mb (Fig. 2A). Genomic sequences within this region are complete in the Ensembl database. There are a total of 122 transcripts, 70 of which represent genes and 52 of which represent expressed sequence tags (ESTs) (Table 2).
Initial screening of the targeted region
Because lbab mice are bred on a C57BL/6J (B6) background, we assumed that the majority of the background genome carrying the lbab mutation was from the B6 strain. Therefore, we isolated genomic DNA from both B6 and lbab/lbab mice. We designed 528 pairs of primers flanking the first and last exons of the 122 candidate transcripts by using Primer3 software [10]. We obtained the primers commercially (Illumina, San Diego, CA) and conducted PCR amplification with genomic DNA from lbab/lbab and B6 mice, the latter regarded as normal controls. The PCR products were then analyzed for the presence of sequence differences between lbab/lbab and normal mice by using the Reveal™ system (SpectruMedix LLC, PA). We found variations in 25 PCR products between normal B6 and lbab/lbab mice (Table 3).
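As an illustration of the kind of batch primer design described here, the following hedged sketch uses the primer3-py bindings; the template sequence, target coordinates, size range, and settings are placeholders, not the values used in the study, and the authors' actual Primer3 configuration is not reported in this excerpt.

```python
# Hypothetical example of designing one primer pair flanking a target region.
import primer3

def design_flanking_primers(seq_id, template, target_start, target_len):
    """Return the best left/right primer pair flanking a target region of `template`."""
    result = primer3.bindings.designPrimers(
        {
            "SEQUENCE_ID": seq_id,
            "SEQUENCE_TEMPLATE": template,
            "SEQUENCE_TARGET": [target_start, target_len],
        },
        {
            "PRIMER_OPT_SIZE": 20,
            "PRIMER_MIN_SIZE": 18,
            "PRIMER_MAX_SIZE": 25,
            "PRIMER_PRODUCT_SIZE_RANGE": [[150, 400]],
        },
    )
    return result["PRIMER_LEFT_0_SEQUENCE"], result["PRIMER_RIGHT_0_SEQUENCE"]
```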
We speculated that the identified sequence variations were likely to arise from the close linkage between the mutated gene and nearby sequences in the parental PL/J strain, even though the background of lbab mice was mainly of the B6 strain. To determine if this were the case, we isolated DNA from PL/J mice obtained from TJL and amplified the 25 variable fragments by using the same panel of primers. By comparing those DNA fragments with those from lbab/lbab mice, we did not find sequence differences between DNA products from lbab/lbab and PL/J mice (data not shown), suggesting that those 25 variations represent polymorphisms between PL/J and B6.
Detecting the mutation in lbab locus in a more focused region
Because of the recognition of sequence polymorphisms between PL/J and B6 mice in the targeted region, we made two changes in our follow-up screening. First, we switched our controls from B6 to PL/J mice. Using another panel of primers (n = 240 pairs) for PCR amplification of every exon of all candidate genes, we identified only one DNA fragment from lbab/lbab mice that was different from PL/J mice DNA. This fragment was from exon 2 of the gene ENSMUSG00000026241, representing the gene for natriuretic peptide precursor C (Nppc). The same pair of primers for this fragment was used for further genotyping as indicated in material and methods.
Because we used PL/J mice as controls for this cycle of screening, there was another concern that the sequence variation might be a polymorphism derived from the B6 strain. To address this issue, we used PCR amplification of genomic DNA from B6 mice and compared the results to the PCR products from PL/J and lbab/lbab mice. The data showed that the amplified DNA fragment from lbab/lbab mice was different from both B6 and PL/J mice, while the fragments from PL/J and B6 were the same (Fig. 2B). To find out which nucleotide(s) was different between lbab/lbab and PL/J controls, we sequenced genomic DNA fragments from PL/J and lbab/lbab mice. The data revealed a C→G change from PL/J to lbab/lbab mice (Fig. 3A, left panel).
To confirm the same difference at the cDNA level, we performed reverse transcriptase PCR (RT-PCR) on total RNA from lbab/lbab, lbab/+, and +/+ mice by using primers that covered the mRNA sequence from exon 2 to exon 3 of Nppc. The resultant RT-PCR products were sequenced using the SpectruMedix system, and the same C→G transversion from PL/J to lbab/lbab mice was found (Fig. 3A, right panel). To evaluate the potential consequence of this point mutation in Nppc, we examined the translated amino acid sequence of Nppc. We found that this transversion predicts the substitution of arginine (R) for glycine (G) in a conserved domain of the Nppc protein (Fig. 3B).
Confirmation of mutation
We conducted further experiments to confirm the single nucleotide change in the Nppc gene. Based on our initial screening, we think it very likely that the C→G change in exon 2 of the Nppc gene is causally related to the phenotypes in lbab/lbab mice. Because the lbab mutation arose from PL/J, theoretically there should be no difference in the Nppc sequence between lbab and PL/J mice except the mutation. In addition, this single nucleotide replacement is the only change between homozygous lbab/lbab and homozygous normal PL/J mice in the 122 transcripts. However, we could not completely rule out the possibility that we may have missed other mutations/polymorphisms. Moreover, one could question whether this is a spontaneous polymorphism in mouse strains or a random mutation that arose after the lbab stock was separated from PL/J. Therefore, we carried out two more experiments to further ensure the association between the mutation and the disease. First, we examined sequence polymorphism in exon 2 of Nppc from nine other inbred strains (Fig. 2C). As shown in Fig. 2D, each such mixture showed multiple bands, indicating the difference between lbab/lbab and those strains. Second, we bred 200 mice from heterozygous lbab/+ parents to evaluate allele frequency in relation to the phenotype. We genotyped every offspring by using the original pair of primers that flank exon 2 of the Nppc gene. From those progeny, we found 14 lbab/lbab mice that exhibited a homozygous G/G genotype and a phenotype of lbab mice, while the remaining progeny had 74 homozygous C/C and 112 heterozygous C/G genotypes, all with a normal phenotype. Taken together, our data indicate that the C→G transversion in exon 2 of the Nppc gene is associated with the phenotypes observed in lbab mice.
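A simple way to express the co-segregation check on the 200 progeny described above is to verify that the affected phenotype coincides exactly with the G/G genotype; the following is an illustrative Python sketch (not the authors' analysis code) using the reported counts.

```python
from collections import Counter

# (genotype, phenotype) pairs as reported: 14 G/G affected, 74 C/C normal, 112 C/G normal.
progeny = (
    [("G/G", "lbab")] * 14
    + [("C/C", "normal")] * 74
    + [("C/G", "normal")] * 112
)

print(Counter(progeny))

# Co-segregation holds if every affected animal is G/G and no C/C or C/G animal is affected.
co_segregates = all((pheno == "lbab") == (geno == "G/G") for geno, pheno in progeny)
print("co-segregation with the lbab phenotype:", co_segregates)  # True for the reported data
```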
Figure 1. Phenotypic assay of normal and lbab mice. A whole-body microCT scanned image of a pair of lbab/lbab and +/+ B6 mice in the same litter at the age of 6 days. A: lbab/lbab mouse; B: +/+ mouse.
Discussion
For the first time, we have identified a single nucleotide mutation by high throughput screening of a large genome region without fine mapping. The initial mapping at TJL was conducted with only 27 F2 animals. Linkage of lbab was first detected on Chr 1 with D1Mit231 and D1Mit9 by using the pooled sample. DNA samples were then typed for the individual 96 animals with these two markers and three additional Chr 1 markers [4]. By the standard strategy of classical positional cloning, the lbab locus could be further mapped. However, with the availability of mouse genome information and a tested protocol for high throughput screening of mutations [6,7], we directly searched genes based on the map from TJL. With the success of finding this mutation and others [6,7], we feel confident that we no longer need fine mapping for most mutations.
In this study, several lines of evidence indicate that a single nucleotide mutation of Nppc is associated with the lbab phenotype. First, Nppc is located within the genetic region of the lbab locus. Second, the Nppc mutation was the only defect detected among candidate genes and ESTs within the lbab locus from lbab mice. Because the lbab mutation was transferred from the PL/J strain to the B6 inbred strain, we evaluated the possibility of close linkages of nearby sequences from the PL/J mice by screening for any sequence difference near the mutation area and later by crossing with PL/J mice. There were no other differences between lbab/lbab mice and their two parental strains, so the possibility of other mutation involvement was ruled out. Third, the cDNA sequence results agreed with the genomic DNA data. Last, we showed that the Nppc genotype is unique to lbab/lbab mice compared with nine other inbred strains, and the G/G Nppc genotype was closely associated with the phenotype in lbab mice.

Figure 2. Schematic of the mutation identification in lbab mice. PCR products from each of those strains were mixed separately with that of lbab/lbab mice. Each mixture showed multiple bands of signal, indicating the difference in their DNA sequences. The X-axis represents the relative size of the PCR products. The Y-axis represents the relative strength of the signal, i.e., the amount of the PCR products.
Recently, several transgenic and knockout mouse studies have demonstrated that Nppc is critical in the prevention and rescue of achondroplasia [11,12]. A recent gene knockout study by Chusho et al. [11] indicated that Nppc null mice of 129/Sv background showed severe dwarfism and early death. The lbab mice have a phenotype similar to Nppc knockout mice with two exceptions. First, the lbab mice develop an overall smaller body size. Second, the mutants exhibit proportionate dwarfing of all organs with the possible exception of the male reproductive tract, which appears extremely small [4]. However, more precise information may be obtained by a direct comparison between Nppc null mice and lbab mice. There may still be some difference between them because of the difference not only in the nature of the mutations but also in the genome backgrounds of those two models. Accordingly, we speculate that the identified point mutation of Nppc in lbab mice is a loss-of-function mutation. As a key positive regulator of endochondral bone formation, Nppc seems to express its activity mainly through natriuretic peptide receptor 2 (Npr2) [11,13-15]. On the other hand, a recent study indicated that Nppc counteracts the activities of fibroblast growth factor signalling, which is a major negative regulatory pathway for long bone development, in both direct and indirect ways [16].
Murine Nppc is structurally similar to that of other species. The affected Arg at codon 117 on the Nppc domain is highly conserved among all members of the natriuretic peptide system and different species (Fig. 3B). The mechanisms for regulating Nppc expression are currently unknown. Importantly, the mutated nucleotide is also located in the common biologically active COOH-terminal 22 amino acid residue area, suggesting the critical significance of this amino acid residue in the functioning of these ligands during skeletal development.
Nppc was not in the list of candidate genes for the allelism test. According to the information on TJL webpage [4], allelisms tested were brachymorphic (bm) [17] with a ratio of disease/total of 0/42 progeny born, achondroplasia (cn) [18] with a ratio of 0/61 progeny born, osteochondrodystrophy (ocd) [19] with a ratio of 0/19 (4 unclassifiable) progeny born, and small (sml) [20] with a ratio of 0/59 progeny born. The first three loci are known to be located on Chr 19 [17], 4 [18], and 19 [19], respectively. The last one, sml, is either on Chr 6 or unknown [20]. Had Nppc been considered as a candidate gene, our initial screening would have been simpler, although we feel that some work is needed to exclude mutations in the nearby genes. In addition to the known function of Nppc, the fact that there is no other mutation in nearby genes in the lbab region is supporting evidence for the potential cause of the lbab phenotype by the single nucleotide mutation in Nppc.
Identifying the Nppc mutation in lbab mice provides useful information for human achondroplasia studies. It also demonstrates that, while candidate genes should be carefully examined based on gene function, it is feasible to identify mutated genes that are only roughly mapped by linkage analysis using a sequence-based positional candidate cloning strategy. We speculate that this strategy will be particularly useful for familial human diseases with small numbers of patients; in those cases, researchers usually have either a rough map or only the name/number of the chromosome of the disease locus. Furthermore, by using functional genomics and rodent models with spontaneous mutations yielding measurable phenotypes, we can rapidly identify mutational events in a cost-effective manner.

Figure 3. Schematic of the sequence changes of the Nppc gene in lbab mice.
Conclusion
In the present study, a sequence-based positional candidate cloning approach was applied to identify a gene mutation in lbab mice with abnormal endochondral ossification. Our results suggest that a single nucleotide mutation in the gene Nppc is likely to be the causative factor and that the lbab mouse may be a useful model for human achondroplasia studies.

References (surviving fragment)

19. […] chromosome 19 encompassing the muscle-deficient osteochondrodystrophy (mdf-ocd) region. Mamm Genome 1998, 9:390-391.

20. Lundberg P, Welander P, Openshaw H, Nalbandian C, Edwards C, Moldawer L, Cantin E: A locus on mouse chromosome 6 that determines resistance to herpes simplex virus also influences reactivation, while an unlinked locus augments resistance of female mice. J Virol 2003, 77:11661-11673. | 3,661 | 2007-04-17T00:00:00.000 | [
"Biology",
"Medicine"
] |
The cut, the hole and the eclipse: Matta-Clark's sections
In 1974, Matta-Clark cut a suburban family house scheduled for demolition, an action called "Splitting": a radical dissection from the house's foundation to the roof through the whole body of the house, creating a new and radical architectural state that anticipated his future interventions. Then came "Conical Intersect" and "Day's End". His works describe a permanent disagreement with architectural functionalism and the modern principles of architecture, and they put forward an endless series of questions about architecture and the sense of space, where the value of space lies not in its functionality but in its metaphorical possibilities.
Introduction
When Louis Kahn, one of the masters of modern architecture who created some of the finest architectural masterpieces of the 20th century, died in 1974, the young artist and architect Gordon Matta-Clark (1943-1978) was creating radical interventions in architecture and exploring alternative ways of experiencing the built environment. The same year as the death of Kahn, who said "the light, the giver of all presences, is the maker of a material, and the material was made to cast a shadow, and the shadow belongs to the light" (Kahn, 2003 [1969], p. 108), Matta-Clark was making shocking interventions in abandoned buildings, modifying their structure and the nature of their space through the 'light cut', revealing another dimension of materials and architectural space.
Matta-Clark's main works took place between 1968, after he received his B.A. in architecture from Cornell University, and his death in 1978. He was part of the Soho avant-garde in New York, and his activities were a permanent critique of American culture. Matta-Clark developed an individual expression combining ideas from the Minimalist movement and Surrealism with his awareness of the dehumanization of modern life and the built environment. Using dumped buildings located in suburban and depreciated areas of the city and manipulating industrial tools, he cut buildings, making radical apertures and dissections. His works describe a disagreement with architectural functionalism and the modern principles of architecture and put forward an endless series of questions about architecture, the sense of space, and urban politics (Diserens, 2003; Moure, 2006).
Splitting, the idea to cut a building
In 1974, Matta-Clark dissected a suburban family house scheduled for demolition, cutting it down the middle and dismembering it, converting the narrow two-level house located at 322 Humphrey Street, New Jersey, into the sculpture he called "Splitting" (Figure 1). The radical cut from the house's foundation to the roof through the whole body of the house was merciless and divided everything: floors, walls, and stairs. The light from the incision invaded the interior, creating a dramatic architectural state that predicts the inevitable.
Matta-Clark transforms the space of familiar and everyday life into an uncanny scenario, creating a series of phantasmagoric images and vertigo (Valdés, 2006; Jenkins, 2006). This radical act, anticipated in the poetic decapitation of a disused engineers' office, "W-Hole House: Atrium Roof" (Genoa, 1973), reached in "Splitting" a total dissection of the building, where walls and doors, roof and ceiling are united by light openings, embodying an in-between state between the building's current reality (the materialness of the human habitat) and its future demolition. Matta-Clark defied the conventional sense of architecture: cutting, fragmenting or dissecting a building emphasizes a bizarre side of architecture when an old building is going to be demolished to build a new one.
On the other hand, "Splitting" changed the nature of a simple architectural object by critical cuts and generate a dissociated three dimensional entity opposed to the idea of an organized system.In this way, 'Splitting' confronts the ideas of Modern Architecture reacting to the serial house and the questionable Le Corbusier's idea of the house as a machine for living.According to Celant, in Splitting "the system of relations is overturned, the abstract signs, such a door or wall, ceiling or corner, opening or closing, are transformed into material quantity, no longer geometrically rational.The aristocracy of architecture is thus forced to withdraw by its vulgarity and practical elementariness" (Celant, 2003(Celant, [1974]], p. 162).Matta-Clark's action, intentionally violent and transgressive, exposes the contradiction between some abstract ideas of modern architecture and the decadence of this model in suburban areas of the city.
'Splitting' "has since become an iconic work of the period -despite its destruction-by virtue of its extreme procedural and visual simplicity" (Crown, 2002, p. 74) and it also represented the tip of the iceberg of the Soho avant-garde in the seventies and it defi ned the extreme approach that anticipated the unique language and expression of his futures interventions.
The hole/the eclipse
"The brilliant August sunset course through a gap creating a physical presence.At the lower left corner, a 20-foot diameter quarter circle has been described and the floor planking removed.Way up in the clerestory, where the roof intersected two corner walls, a circular removal revealed the sky at the edge of the 'interior canal' a jaunty cut-out seemed to mimic the outline of an unfurled sail.Light, air, sky and water.Everything was alive with motion and light" (Gerry Hovagimyan in Crown, 2003, p. 11. Description of 'Day's End', Pier 52, New York. 1975).
A general understanding of the work of Matta-Clark gives us an idea of the stress that is inherent in the evolution of our contemporary cities; his interventions reflect a tension between the idea of progress and the destruction of old buildings that is necessary for the new development of the urban fabric. From 'Pier In/Out, Pier 14' (Figure 2) and 'W-Hole House' (Figure 3) to 'Conical Intersect' and 'Office Baroque', this confrontation is always present, and through the metaphor of the hole Matta-Clark filled the gap between the building's past and its inevitable demolition in the name of progress.
The metaphor of the hole refers to a poetic and ephemeral connection between past and present, and it is physically powerful in 'Conical Intersect' (Paris, 1975), where Matta-Clark inscribed an invisible cone on an axis of 45 degrees into an old property; "the hole recalled the structure of a telescope from outside the building, a periscope from within. [...] As Dan Graham notes, "with the aid of this periscope, viewers could look not only into the interior of the Matta-Clark sculpture/building, but through the conical borings to these other buildings that embody past and present eras of Paris" (Lee, 2001, p. 180). The site was adjacent to the almost finished Centre Pompidou designed by Renzo Piano and Richard Rogers, and his idea was to create a pointed space allowing sight through ruined surfaces and heavy masonry to the white structure of the Centre Pompidou. This temporary transformation was like a shotgun impact on a building in the process of demolition, aiming to connect a historical location with the construction of the Centre Pompidou (Figure 4).
Matta-Clark's vision of the urban transformation was created through the scale of the intervention, by anatomical dissection of materials; a naked drama between lightness and darkness and the ephemeral presence of a conical emptiness defined by the light cut. According to Lee, 'Conical Intersect' is about "the tension between narratives of historical progress -embodied in the construction of the Centre Pompidou- and the destruction of historical site that is prerequisite for progress" (Lee, 2001, p. 171). The hole, in short, becomes a cyclopean vision of destruction and modernization, a dramatic motif of the relationship between history and current events.
On the other hand, the metaphor of the hole refers to a sense of loss. The story of the transformation and subsequent destruction of a building holds the idea of sadness and abandonment, but the hole is also a metaphysical interpretation of our lonely modern life. 'Office Baroque' (Antwerp, 1977) and in particular 'Descending Steps for Batan' (Paris, 1977), in memory of his twin brother who committed suicide, consisted of a deep hole excavated in the foundations of a new gallery in Paris, in contrast to the immaculate space of the exhibition room. In this work the hole expresses his sense of lack and emptiness after the loss of his brother (Lee, 2001). 'Day's End' at Pier 52 (New York, 1975) uses the idea of the hole as a means of radical and extreme understanding of architecture (Figure 5). It represents the transformation of an abandoned warehouse located on the West Side waterfront of the Hudson River into an astonishing intervention that exposes a new urban narrative, with buildings in decayed areas and in an uncertain state. This building with "basilical light and proportions" was a former warehouse with a large steel structure and corrugated façade, and his intervention, in the words of Matta-Clark, was described as "making a mark in a sad moment of history" (Matta-Clark in Lee, 2001, p. 121), in relation to the piers in Manhattan abandoned because of economic changes and new shipping technologies. Once again, this statement reflects the 'Conical Intersect' idea of the periscope and telescope and his worries about the transformation of the city in the name of progress.
The basic plan of this intervention consisted of patterns derived from overlapping circles; thus Matta-Clark and his helpers made huge oval penetrations in the corrugated façade of the structure and cut out the structural steel beams from the underside of the pier, creating an interior canal into the building. "A simple cut or series of cuts acts as a powerful drawing device able to redefine spatial situations and structural components… There is a kind of complexity that comes from taking an otherwise completely normal, conventional, albeit anonymous situation and redefining it, retranslating it into overlapping and multiple readings of conditions past and present. Each building generates its own unique situation" (Matta-Clark in Crown, 2003, p. 19).
The principal cut was facing out over the river, where a large oval penetration emerged like a half moon, whereas inside the building it is like an eclipse with an ecclesiastical aura. This cut recalls the round oculus of the Pantheon, constructed at the beginning of the second century as a tribute meant to commemorate all the gods in Roman times and lit by a single circular window at its apex (Figure 6). The Pantheon embodies the idea of light as "the giver of all presences", using Kahn's words, and represents a dramatic relationship between materiality, shadow and sunlight. And like the Pantheon's oculus, the main hole of 'Day's End' was the focal source of light into the dark interior of the building. The correspondence between light and cut was a composition of lightness spread across darkened surfaces and a metaphorical relation between space, materials and deconstruction by light. Matta-Clark argued, "the thing I would really like to express is the idea of transforming the static, enclosed condition of architecture on a very mundane level into this kind of architecture which incorporates... this sort of animated geometry or this animated tenuous relationship between void and surface… [It] implies a kind of kinetic, internal dynamism of some sort" (Matta-Clark in Lee, 2001, p. 153). The duality of light and cut was evident as a crucial part to contemplate and complete the trilogy: light, void and surface. 'Day's End' itself belongs much more to the domain of architecture than of sculpture, and it embodies Matta-Clark's ideas about the relationship between project and place; the building's circular cuts relate to the sun path and the Hudson River, transforming the site into an unstable urban landscape that exposes a thin limit between art and architecture. The former warehouse itself was transformed from an abandoned industrial structure into a metaphysical social space (Figure 7). The parallel between the Pantheon and 'Day's End' is a bridge that connects the mystery of light through history and architectural space. Today, both are precedents in art and architecture, and particularly in some work of the artist Olafur Eliasson. In 'The Weather Project' at the Tate Modern in London (2004) (Figure 8), Eliasson used an artificial sun that confused the distinction between inside and outside as a metaphor for the Pantheon in Rome. In 'Your Sun Machine' (1997) (Figure 9), the visitors entered an empty room with a large circular hole perforated in the roof; every day the movement of the 'sun' appeared in the space, creating first an elliptical and then a circular outline on the walls and floor of the gallery that conjugated the extemporal dimension of light and space. This project had a close relation to the work of Matta-Clark and to Agrippa's temple in Rome (Grynsztejn, 2003).
Conclusions
Dealing with nothing more complex than the limits of human scale (Matta-Clark in Lee, 2001, p. 131).
The work of Matta-Clark, produced within a short life, incorporates astonishing architectural interventions as well as urban performances and sculptures. His works are well known, despite the fact that they exist today only through 16 mm films and photographic documentation. However, there are still many questions about the motives of his architectural interventions and proposals.
Matta-Clark described his work as 'anarchitecture' (anarchy + architecture), a concept that explains his interest in architectural decay and the social changes within urban space. He turned his focus on the city itself, not from the architect's practice but from the streets and marginal zones, cutting light through abandoned buildings to create architectural interventions in-between life and death; construction and demolition; mass and weight; past and present; light, void and surface.
Just as light was a particular feature of modern architecture, remarked upon by its masters, Matta-Clark's use of it expresses his defiance of the institution and of the conventional practice and knowledge of architecture: a personal fight, but also an urban and social claim. He used disciplinary tools to confront reality, and in this task he gave a wider understanding of architecture's social role. Finally, the idea behind the work of Matta-Clark was to create 'building dissections': while 'Splitting' referred to abandonment, 'Conical Intersect' was about demolition in the name of progress, and 'Day's End' displayed the dark side of architecture, city and progress. We can argue that all these interventions could represent the building's own redemption by cuts, holes and, at the end, the eclipse of its own nature. The redemption of its architectural spirit.
"Physics"
] |
Doubly resonant optical nanoantenna arrays for polarization resolved measurements of surface-enhanced Raman scattering
We report that rhomb-shaped metal nanoantenna arrays support multiple plasmonic resonances, making them favorable bio-sensing substrates. Besides the two localized plasmonic dipole modes associated with the two principal axes of the rhombi, the sample supports an additional grating-induced surface plasmon polariton resonance. The plasmonic properties of all modes are carefully studied by far-field measurements together with numerical and analytical calculations. The sample is then applied to surface-enhanced Raman scattering measurements. It is shown to be highly efficient, since two plasmonic resonances of the structure were simultaneously tuned to coincide with the excitation and the emission wavelength in the SERS experiment. The analysis is completed by measuring the impact of the polarization angle on the SERS signal.
Introduction
The use of metal nanoantennas is advantageous if one aims at localizing light in volumes much smaller than the diffraction limit. The physical reason for this behavior is the excitation of either localized surface plasmon polaritons (LSPPs) or propagating surface plasmon polaritons (PSPPs) [1,2]. Both are collective oscillations of the charge density at the interface between a metal and a dielectric that are driven into resonance by an illuminating wave field. This causes strongly localized electromagnetic fields that decay evanescently away from the interface. PSPPs are mainly confined along one [3,4] or two dimensions [5,6]; LSPPs are confined in all three spatial dimensions. It has been shown that for LSPPs excited at nanoantennas comprising sharp edges or coupled systems thereof [7-12], as well as for PSPPs at interfaces, the field localization is accompanied by a dramatic field enhancement [13]. This enhancement can be used, e.g., to boost the efficiency of nonlinear optical frequency conversion [14] or of surface-enhanced Raman scattering (SERS) applications [15-19,21]. Consequently, the combination of both effects (LSPP and PSPP resonances) has been shown to improve the capability of such devices even further [22-27]. Therein, metal surfaces incorporating hexagonally arranged metallized nanovoids have been shown experimentally and theoretically to exhibit both LSPP (void plasmon) and PSPP modes. In our contribution we present a combination of LSPP and PSPP excitations for two-dimensional sharp-edged rhombic nanoantenna arrays. LSPPs can be excited as fundamental plasmonic dipole modes of the individual nanoantennas; a PSPP can be excited because of the periodic arrangement of the nanoantennas. The most relevant difference of such structures from the previously mentioned approach is the absence of a closed metal surface in our work. Compared to the LSPP and PSPP combinations for structures incorporating such a closed metal surface [22-27], our samples can be understood as a corresponding inverse approach. Thus, the quasi-PSPP mode reported here propagates at the interface of an artificial effective medium whose dispersive properties are dictated by the isolated nanoantenna. This approach permits an individual tuning of two plasmonic resonances to a specified spectral regime. The LSPP mode can be controlled by tailoring the particle's shape and choosing the dispersive material properties it is made of, while the PSPP resonance is predominantly adjusted by selecting a certain grating period. An alternative approach for substrates accessing multiple plasmon resonances consists of using nanoantennas designed to support, next to a dipolar resonance, also higher-order plasmonic eigenmodes (e.g., the electric quadrupole mode). For example, Refs. [9,28] and [29] also aim at realizing more than one plasmonic resonance within the same nanostructure.
By virtue of their multiple plasmon resonances, these structures also provide a dramatic field enhancement, making them promising SERS substrates. The difference to the approach presented here is that the resonance positions of these localized modes (higher-order localized modes as well as LSPPs and PSPPs in samples manufactured by nanosphere lithography) cannot be chosen independently, since they are both controlled by the individual nanoparticle geometry. In contrast, the samples introduced herein allow for an individual adjustment of the spectral properties of both the localized and the propagating plasmon. Furthermore, our structures are fabricated by electron beam lithography, which allows fabricating large sample areas with reproducible properties, suitable from the perspective of more industrially oriented SERS applications. This combination of LSPPs and PSPPs in a deliberately selected spectral interval results in an improvement of near-field-sensitive plasmonic effects, e.g., the SERS enhancement. We present a detailed numerical investigation of the optical properties of our sample that is in excellent agreement with the experimental spectral analysis. In order to separate the effects of the appearing LSPPs from the PSPPs, we also present a detailed analytical electrodynamic model applying well-known classical descriptions of plasmonic nanostructures. The combination of the numerical and the analytical description facilitates the understanding of the plasmonic origin of the respective resonances and allows for a simplified, yet precise, prediction of the optical response of such samples. Their ability to significantly enhance the efficiency of SERS measurements is furthermore experimentally proven. This is possible since the resonances were adjusted to coincide simultaneously with the excitation and the emission wavelength. Thereby we observe that the quasi-PSPP mode yields a SERS enhancement similar to that achieved by localized modes, making the combination of both a promising approach for improved SERS substrates. To provide a comprehensive understanding of our sample, we also put emphasis on the angle-dependent SERS signals, which are used to elucidate the interplay between the plasmonic properties and the SERS signal.
Fabrication
It has been shown in earlier publications that random-rough metal surfaces exhibit a field enhancement due to their sharp edges [30-32]. The field enhancement reported in these pioneering SERS papers is now understood to be the origin of the enhancement of the Raman signals when molecules are brought into close proximity to such surfaces, coining the term surface-enhanced Raman scattering. Since then, various technical approaches have been realized to achieve dramatic near-field enhancements, even reaching single-molecule detection. A simple strategy to manufacture large-area samples supporting such features is to apply self-organization processes. However, the intrinsic drawback of such random-rough surfaces is their non-deterministic character, which makes it difficult to observe a reliable and reproducible enhancement of the SERS signal over large surfaces for such substrates. Moreover, a large fraction of the surface does not contribute to the SERS signal as it is non-resonant. Thus, we are interested in realizing artificial and deterministic SERS substrates comprising nanoantennas with sharp edges that sustain resonances in well-defined spectral domains. Ideally, the fabrication shall be simple and reliable to allow for an up-scaling of the fabrication process.
To fabricate nanostructures that meet these requirements we applied electron beam lithography of suitable masks together with a dry etching technique. The entire process is reported in [33]. Here we briefly summarize the principal fabrication steps. At first, a 20 nm gold (Au) layer was evaporated on a fused silica substrate. The electron-beam resist material (PMMA) was then spin-coated on top. It was afterwards exposed in a crossed-grating approach with two one-dimensional grating masks. Subsequently, the Au layer was etched by an ion beam (Ar). In a final step the remaining resist material was removed by means of oxygen plasma stripping. Parameters of the final structure which are subject to variations and which can be controlled in the fabrication process are the duty cycle, the tilting angle of the two gratings, and their period.
The careful adjustment of all such fabrication parameters allows to control the plasmonic properties of the samples. For illustrative purposes Fig. (1) shows selected examples of fabricated structures. As a result, the final structure is composed of periodically distributed rhombi with lateral dimensions fixed by the grating parameters used for the resist exposure. This allows for the desired fast, reproducible, and large area manufacturing of the optical nanoantenna arrays. Another advantage concerning the application of such substrates as efficient SERS structures is the high density of hot spots resulting in a defined field enhancement across the entire sample.
Besides the high density of hot spots the alignment of the nanoantennas on a regular grid can also be used to launch PSPPs. Their excitation introduces a further resonance into the system. PSPPs can be excited since their momentum mismatch to free space is compensated by the reciprocal grating vector. Taking advantage of this process various samples have been fabricated and thoroughly characterized. In the following we restrict ourselves to three samples. They were selected since they cover the main physical effects occurring in our nanoantenna arrays. Spectral features of other samples are comparable; though they may slightly differ in their resonance wavelengths. Detailed parameters of the three relevant samples are shown in Tab. (1).
Optical characterization and simulations
Besides the structural characterization with SEM and AFM [shown in Fig. (1)], the optical properties of the samples were characterized by their transmission spectra. For the rhomb-like nanoantenna sample 1, only the two plasmonic dipole resonances associated with the principal axes of the structure can be observed [labelled as (1) and (2)]. They appear, due to the different transverse dimensions, at wavenumbers around 8,547 cm⁻¹ and 16,780 cm⁻¹. Additionally, sample 1 exhibits a very sharp apex tip along its major axis, which will lead to an increased near field. Consequently, sample 1 already represents a favorable sample that could be used as an efficient SERS substrate.

Table 1. Detailed parameters of the three nanoantenna samples with different manufacturing parameters that were selected for further investigation. Sample 1 represents a regular array of rhombs, which can be realized by a 34° tilt of both illumination gratings. Samples 2 and 3 are characterized by roughly the same apex angle of the nanoantenna (27° and 34°, respectively) but are fabricated with an increased period of both crossed gratings. This translates into a larger period of the two-dimensional nanoantenna array as well as an expansion of the nanoantenna dimensions itself. The thickness of the Au layer is a final parameter that can be used to tailor the plasmonic properties of the samples. In all samples it was chosen to be 20 nm.

By increasing the period of the illumination gratings, the period of the unit cell as well as the rhombi dimensions increase simultaneously. This was done for samples 2 and 3, whose spectral properties are shown in Fig. (2b,c). The spectral response is slightly modified when compared to sample 1. At first, we observe two resonances that occur at 5,470 cm⁻¹ (1) and 16,340 cm⁻¹ (2) for the two polarizations. Similar to sample 1, they are determined by the lengths of the principal axes of the rhombi. However, in addition, a third resonance occurs (for sample 2 at 13,890 cm⁻¹ and for sample 3 at 11,600 cm⁻¹) for an incident polarization of the electric field parallel to the short axis. For an even larger rhombus, as represented by sample 3, resonance (3) can be observed more clearly [Fig. (2c)], since it is now clearly separated from resonance (2). In order to investigate the origin of this additional resonance (3) and to verify our measured data, we performed simulations of the spectral response of all samples. These simulations were done by using the Fourier Modal Method [35]. This method solves Maxwell's equations without approximation by taking explicit advantage of the periodic nature of the sample. In these three-dimensional simulations the dispersive material properties as well as the exact particle shape, including the substrate and ambient media, are fully taken into account. Results are shown in Fig. (2d-f). The spatial dimensions of the nanoantennas as well as information about period, metal thickness, etc., have been extracted from the SEM and AFM images of the fabricated samples. They were fully taken into account in the simulations. Material data for the Au dielectric function were taken from the literature [36]. By comparing numerical and experimental spectra we observe a good agreement with respect to the resonance positions and the resonance widths for all excited modes. Particularly, all resonances for samples 2 and 3 are also revealed in the simulations.

Figure 2 (caption, partial): ... (1) and (2), respectively. By increasing the size and the period of the 2D nanoantenna array, as done in sample 2 (b) and 3 (c), a third resonance appears. It is understood as a propagating surface plasmon polariton. The comparison to the respective numerically (FMM) simulated spectra is shown in (d-f).

Prior to applying the samples for SERS measurements, we want to prove the plasmonic origin of all occurring resonances. Together with the numerical description, we apply two analytical methods to interpret the observed resonances. We show that, due to apparent similarities, the resonances denoted as (1) and (2) can be treated as localized dipole modes of the two main axes of the rhombus, while resonance (3) can be interpreted as a grating-excited PSPP. For this purpose we describe the scattering response of the individual rhombus within the electric dipole limit in an effective medium approach [37], which allows us to predict the appearance of the LSPP modes. On the contrary, we motivate the mode denoted as (3) as a PSPP mode that is launched due to the alignment of the nanoantennas in a grid. We do so by investigating its angle- and period-dependent excitation frequency [38]. The dispersion relation which we will reveal for this mode corresponds to that of a PSPP at a metal-dielectric interface. For a precise analytical prediction of this dispersion relation, which is compared to numerical results, one only has to take care that the metallic structure is a composite material whose properties can be reliably derived on the grounds of an effective medium theory. Details are given below.
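For orientation, the resonance positions quoted above in wavenumbers can be converted to vacuum wavelengths via λ[nm] = 10⁷/ν̃[cm⁻¹]; a short sketch (added here for convenience, not part of the original text):

```python
def wavenumber_to_nm(nu_cm: float) -> float:
    """Convert a wavenumber in cm^-1 to the corresponding vacuum wavelength in nm."""
    return 1.0e7 / nu_cm

for nu in (8547, 16780, 13890, 11600):
    print(f"{nu:6d} cm^-1 -> {wavenumber_to_nm(nu):6.0f} nm")
# ~1170 nm, ~596 nm, ~720 nm and ~862 nm, respectively
```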
Analytical considerations (LSPP)
At first we want to describe the LSPP modes in detail. For this purpose, we employ an analytical approach that is compared to the rigorous simulations. From the above comparison of sample 2 and sample 3 we might anticipate that the two resonances (1) and (2) are associated with LSPPs. However, the verification that the resonances can be predicted on analytical grounds constitutes an indispensable tool in the design of SERS substrates with predefined resonance wavelengths. To treat the problem analytically, we assume that we can describe the plasmonic properties of an isolated rhombus by its dipole moments. The structure is assumed to be biaxially anisotropic, reflecting the different lengths of the axes. Assuming that this approximation is valid, we can calculate the scattering response of an ensemble of individual rhombic nanoantennas. For this purpose we follow an approach known from the field of metamaterials in terms of an associated effective medium comprising electric dipole interaction. We mention that an electric dipole resonance results in a strongly dispersive effective permittivity with a Lorentzian lineshape,

ε_eff,l(ω) = ε_∞ + A_l / (ω_0l² − ω² − iγ_l ω).   (1)

In Eq. (1), ω_0l is the resonance frequency, A_l is the phenomenological resonance or oscillator strength, γ_l the damping, and ε_∞ the background permittivity for infinite frequencies (here vacuum; ε_∞ = 1). The introduced damping constant accounts for both radiative and non-radiative losses and therefore ensures the correct resonance width. Within the dipole limit we have the possibility to account for the fundamental resonances that are characterized by a unique resonance frequency ω_0l (l ∈ [x, z]) in conjunction with the resonance width and strength. All these parameters are phenomenologically introduced. In the next step we adjust the free parameters of Eq. (1) in order to reproduce the measured spectral properties of the samples. For this purpose, transmission through a slab of material with a permittivity as given in Eq. (1) is calculated and matched to the measured spectra by adjusting the free parameters in a least-squares sense. For obtaining the transmittance spectra shown in Fig. (3) the following parameters have been applied: A_x = 40.5·10³⁰ s⁻², A_z = 1.9·10³⁰ s⁻², ω_0x = 9.8018·10¹⁴ s⁻¹, ω_0z = 3.0159·10¹⁵ s⁻¹, γ_x = 2.8274·10¹⁴ s⁻¹, γ_z = 1.6965·10¹⁴ s⁻¹. In Fig. (3) the transmission spectra of such an effective medium, including a possible set of parameters, are presented. It can be seen that by means of this approximation only two resonances can be obtained, while a third resonance is missing. In passing we note that the third dipolar resonance, associated with the third axis, naturally cannot be probed at normal incidence.

Figure 3. (Color online) The quasi-statically (a) and the numerically (b) obtained transmission spectra for an incident electric field polarized along the long (axis 1) and the short axis (axis 2) of sample 2 are presented. It can be seen that resonances (1) and (2), the two main-axis modes, are excellently reproduced in both simulations. Additionally, resonance (3) is only present in the numerical calculations.
In the next step we will show that this empirically introduced permittivity covering the localized dipole modes of the rhombi is sufficient to derive the origin of the third resonance, which is an effect of the alignment of the nanoantennas in a regular grid, mimicking a PSPP resonance.
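The fitting procedure described above can be sketched in a few lines; the following is a minimal illustration (not the authors' code) of Eq. (1) and a least-squares adjustment of its free parameters, where the slab transmission is replaced by a simple normal-incidence Fresnel estimate for a semi-infinite effective medium, an assumption made only to keep the sketch short.

```python
import numpy as np
from scipy.optimize import least_squares

def eps_eff(omega, A, omega0, gamma, eps_inf=1.0):
    """Lorentzian effective permittivity, Eq. (1)."""
    return eps_inf + A / (omega0**2 - omega**2 - 1j * gamma * omega)

def transmittance(omega, A, omega0, gamma):
    # Simplified stand-in for the slab transmission used in the text:
    # normal-incidence Fresnel transmission from air into the effective medium.
    n = np.sqrt(eps_eff(omega, A, omega0, gamma))
    t = 2.0 / (1.0 + n)
    return np.abs(t) ** 2 * np.real(n)

def fit_lorentzian(omega, T_measured, p0):
    residual = lambda p: transmittance(omega, *p) - T_measured
    return least_squares(residual, p0, x_scale=np.abs(p0)).x

# Synthetic example using the short-axis (z) parameters quoted above:
omega = np.linspace(2.0e15, 4.0e15, 400)                       # rad/s
T_target = transmittance(omega, 1.9e30, 3.0159e15, 1.6965e14)
A_fit, w0_fit, g_fit = fit_lorentzian(omega, T_target, p0=(1.0e30, 3.0e15, 2.0e14))
print(f"fitted omega_0 = {w0_fit:.4e} rad/s")
```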
Analytical considerations (PSPP)
In contrast to the analysis before, which was performed with the intention to elucidate the character of the LSPP eigenmodes, we now proceed by investigating the properties of the quasi-PSPP eigenmodes in detail. For this purpose we consider the nanoantenna array as an effective medium with the specified optical properties expressed by the previously introduced effective electric permittivity ε_eff(ω) [Eq. (1)]. To observe PSPPs on such an effective medium interface, transverse momentum and energy conservation must be satisfied. These conditions can be cast into the equation [40]

(ω/c) · sqrt[ ε_eff,l(ω) ε_d(ω) / (ε_eff,l(ω) + ε_d(ω)) ] = (ω/c) · sqrt[ε_d(ω)] · sin θ + Σ_j m_j (2π/Λ_j),   j ∈ {x, z}.   (2)

The left-hand side of Eq. (2) corresponds to the transverse PSPP wavevector [2] at an interface between an effective medium, described by the effective permittivity ε_eff,l(ω) fixed by the polarization of the incident light, and a dielectric medium with a permittivity ε_d(ω). In contrast to an ordinary PSPP at a metal-dielectric interface, the permittivity of the metal was replaced by the previously derived effective permittivity. The corresponding mode is known as a spoof plasmon [41] or quasi-PSPP mode. The first term on the right-hand side accounts for the transverse wavevector of the incident illumination, where θ is the angle of incidence. In addition to the transverse momentum provided at oblique illumination, there is another transverse wavevector introduced by the 2D grating (second term on the right-hand side). Taking into account the grating vectors in Eq. (2) does not contradict the effective medium approach. The combination of both effects, i.e. a non-vanishing grating vector and the effective medium treatment of the nanoantenna arrays, is covered by the empty lattice approximation [27]. Thus the integer m_j denotes the diffraction order, while Λ_j is the respective period of the unit cell as shown in Fig. (1c).
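A numerical sketch of how Eq. (2) can be solved for the quasi-PSPP resonance frequency at normal incidence is given below; the Lorentzian parameters, the grating period and the root bracket are illustrative placeholders, not the fitted values or the actual sample dimensions.

```python
import numpy as np
from scipy.optimize import brentq

c = 2.998e8   # speed of light, m/s
eps_d = 1.0   # air superstrate

def eps_eff(omega, A=1.9e30, omega0=3.0159e15, gamma=1.6965e14, eps_inf=1.0):
    return eps_inf + A / (omega0**2 - omega**2 - 1j * gamma * omega)

def mismatch(omega, period, theta=0.0, m=1):
    """Real part of the left-hand side minus the right-hand side of Eq. (2)."""
    e = eps_eff(omega)
    k_spp = (omega / c) * np.sqrt(e * eps_d / (e + eps_d))
    k_par = (omega / c) * np.sqrt(eps_d) * np.sin(theta) + m * 2.0 * np.pi / period
    return float(np.real(k_spp) - k_par)

# Find the crossing for an illustrative 750 nm period at normal incidence:
omega_res = brentq(mismatch, 2.0e15, 2.95e15, args=(750e-9,))
print(f"quasi-PSPP resonance near {omega_res:.3e} rad/s")
```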
To test the quasi-PSPP character of the observed resonance (3), we performed numerical calculations (Fourier Modal Method, FMM) as well as transmission measurements at angular incidence, which were compared with predictions from Eq. (2). This was done by numerically computing or measuring the frequency of the excited resonance (taken as the dip in transmission) as a function of a chosen parameter (grating period or angle of incidence); these resonance frequencies were then compared to the predictions of Eq. (2). At first, the grating period was varied numerically at normal incidence (θ = 0°), so that the grating vector of the lattice is the only contribution to the transverse wavevector in Eq. (2). The equation provides the resonance frequency for each grating vector, see Fig. 4(a). We found theoretically and experimentally that the resonance position of the quasi-PSPP mode depends only on the magnitude of the period pointing in the x direction [Fig. 4(d,e)], while the effective permittivity is fixed by the polarization of the incident beam to ε_eff,z(ω). Consequently, we set m_x = 1 and m_z = 0. For the ambient permittivity ε_d(ω) in Eq. (2) we selected air (n = 1). Comparing the dispersive behavior expected from Eq. (2) with the numerical one, we observe some deviations in terms of a constant offset, but qualitative agreement in the overall shape [Fig. 4(a)]. We point out that the rhombus dimensions were fixed during the grating vector variation; thus the choice of the lattice period can be used to tailor the resonance position of the quasi-PSPP mode over a wide spectral interval. In turn, this variation of the unit cell period increases or decreases the number of rhombi in a predefined volume. This mainly changes the oscillator strength A in Eq. (1), since this value accounts for the resonance weighting of the individual rhombus embedded in an effective medium. Considering this effect, the resonance frequency positions predicted with a fixed effective permittivity should be regarded as an approximation that motivates the precisely calculated resonance dependence. In a second step the incidence angle was varied, which modifies the first term on the right-hand side of Eq. (2). The dispersion relation of the quasi-PSPP mode used for the grating period variation was applied without any further adaptation. We observed better agreement between the quasi-PSPP description and the numerical analysis [Fig. 4(b)] than for the grating vector variation, since the effective material parameters now remain fixed and only the excitation conditions are varied. In order to confirm the PSPP origin of the observed resonance, we also performed measurements on the sample for a variable angle of incidence [Fig. 4(d,e)] to verify the numerically predicted resonance behavior. The coincidence between the measured and the analytically as well as numerically calculated resonance positions verifies that the observed mode is not a localized mode, since for localized modes the resonance positions are invariant under varied excitation conditions; such variations only influence the excitation strength of the respective LSPP mode. Here we observe a resonance dependence that can be completely explained by a PSPP excitation and that differs significantly from the behavior of a localized mode.
Finally, we performed FDTD simulations with a spatial resolution of 2.5 nm and the same material parameters as for the FMM simulations in order to rigorously calculate the near-field distributions of both modes. We selected sample 3 since here both modes are spectrally separated, which permits undisturbed observation of the LSPP and the PSPP field patterns [Fig. 4(c,f)] (Media 1, 2). It can be clearly seen that the LSPP near fields are determined by localized features at the rhombus surface, while for the PSPP excitation condition we observe a standing wave pattern in the direction (x) perpendicular to the incident polarization direction (z). We note that we plotted the field component normal to the surface (H_y), which is thus not contained in the excitation field (E_z, H_x). Additionally, we emphasize that the propagation direction differs from that observed for PSPP modes excited with one-dimensional gratings, where the PSPP mode propagates parallel to the incident polarization. Due to the two-dimensional nanoantenna grating reported here, the PSPP-mimicking mode is sensitive to modifications of the grating vector perpendicular to the polarization direction, as observed in the resonance dependence [Fig. 4(b)].
Based on the dispersive behavior observed in the far field simulations and the observed near field distributions, we identify resonance (3) as a surface plasmon polariton excited by the grating. We refer to it as a quasi-PSPP since the metallic surface is not flat and the metal itself constitutes an effective medium. In all calculations, the resonances (2) and (1) (not shown here) remain unaffected by any variation of the period or the illumination conditions of the arrays. Thus they are plasmonic features of the individual nanoantenna itself.
Application to SERS
Finally, we employ the most promising nanoantenna array (sample 2) for SERS measurements at optical frequencies. It is most promising because it is doubly resonant in the spectral region of interest around 15,000 cm⁻¹ (660 nm). We emphasize that by doubly resonant we mean the presence of both the PSPP and the LSPP within a predefined spectral interval. Since both of these plasmonic resonances are associated with strongly enhanced local fields, we expect an improvement of the SERS signal for this particular nanoantenna array. The main advantage of the sample is the possibility of strong field enhancement at the excitation wavelength as well as at the emission wavelength by incorporating two different resonances. This is the first investigation of SERS signals from structures sustaining such double plasmonic resonances. Moreover, the spectral separation between the two resonances can easily be modified by varying, e.g., the angle of incidence or the period of the lattice. This introduces a further degree of freedom to tune the spectral position of the resonances to match a particular SERS application.
For the SERS measurements, the triphenylmethane dye crystal violet was selected as a typical SERS analyte. Figure 5(a) shows the excitation wavenumber (black solid line), provided by a HeNe laser (15,798 cm⁻¹), as well as the wavenumber window of the SERS measurement (black dashed lines). One can see that the spectral domain in which the SERS measurements were performed is nicely covered by both plasmonic resonances: the excitation wavelength is close to the LSPP resonance, while the SERS interval (Stokes shift) includes the quasi-PSPP resonance. The resulting SERS signal for our first measurement is shown in Fig. 5(b). In two different experiments the polarization of the incident laser radiation was aligned along either of the two principal axes of the nanoantenna. The extracted enhancement factors for several well-selected bands were found to be on the order of 10³. We note that there are several possibilities to define the enhancement factor; we applied the definition of the averaged SERS surface enhancement factor (SSEF) given in [42]. To complement our investigations and to support the assumptions on the effective enhancing area, we also performed finite-difference time-domain simulations to investigate the near fields in detail. From these simulations we estimated an effective enhanced near-field area of equal size, as well as an electric near-field enhancement of |E/E_0|⁴ ≈ 10³, matching the experimentally observed SERS enhancement factor. We note that these numerical estimates represent an approximation, since the exact shape and the corresponding field enhancement factor depend strongly on the computational discretization parameters, e.g. the mesh size and the corner radii. Nevertheless, the obtained field enhancement factor correlates with the experimentally observed SERS enhancement factor. In addition to the SERS measurement with a polarization along the short rhombus axis, we systematically rotated the polarization of the illumination in order to investigate the polarization dependence of the SERS signal. As was shown recently [21], such polarization-resolved SERS measurements can be used to elucidate the plasmonic origin of the field-enhancing elements, which in our samples are the nanoantenna arrays. The results of these investigations are summarized in Fig. 5(c). The observed angular dependence of the measured SERS signal corresponds to the numerically calculated absorption dependence at the SERS excitation wavenumber. We note that the absorption of our samples is maximal where the transmission tends to be minimal; thus, absorption peaks as well as transmission dips indicate plasmonic resonances in our system. We observe a strong correlation between the excitation of plasmonic resonances and an enhanced SERS signal for all three selected SERS bands, as can be seen from Fig. 5(c); the spectral positions of the bands are indicated in Fig. 5(b). Moreover, the angular dependence follows approximately a sin²(θ) distribution. This is expected for our dipole-type nanoantennas, since the amplitude excited in a dipole is proportional to the field component along the dipole axis, here ∝ sin(θ), translating into a sin²(θ) dependence of the intensity.
The fact that the SERS enhancement factor is larger than unity even for the polarization direction along the long axis (θ = 0°) is attributed to a non-vanishing field enhancement at the nanoantennas' sharp edges, which occurs for this polarization even along the long rhombus axis.
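For orientation, an averaged surface enhancement factor of the commonly used form SSEF = (I_SERS/N_surf)/(I_ref/N_vol) normalizes the SERS and reference intensities by the number of probed molecules; whether this matches every detail of the normalisation in [42] is not restated here. The sketch below implements this definition with purely hypothetical input numbers; none of the values are taken from the measurements reported in this work.

```python
# Minimal sketch of an averaged SERS surface enhancement factor (SSEF).
# All numerical inputs below are placeholders for illustration only.

def ssef(i_sers, n_surf, i_ref, n_vol):
    """Averaged SERS surface enhancement factor: (I_SERS/N_surf) / (I_ref/N_vol)."""
    return (i_sers / n_surf) / (i_ref / n_vol)

# Hypothetical example: counts per second for one Raman band of crystal violet,
# with molecule numbers estimated from surface coverage and scattering volume.
i_sers = 1.2e4      # band intensity on the nanoantenna array [counts/s]
i_ref = 3.0e3       # same band from the neat reference sample [counts/s]
n_surf = 2.0e6      # molecules adsorbed within the probed, enhancing area
n_vol = 5.0e8       # molecules in the probed volume of the reference

print(f"SSEF ~ {ssef(i_sers, n_surf=n_surf, i_ref=i_ref, n_vol=n_vol):.1e}")
```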
Conclusion
In summary, we have presented a plasmonic nanoantenna array that exhibits localized and propagating plasmon polariton modes. A comprehensive theoretical investigation incorporating numerical and analytical calculations as well as experimental verification has been presented to elucidate the nature of the observed modes, and excellent agreement with the measured transmission spectra was obtained. Finally, we have applied the nanoantenna samples to SERS measurements in the optical domain in order to exploit the specific advantages of the doubly resonant structures. We observed an enhanced SERS signal exactly for the polarization directions at which resonances are excited in the nanoantennas. In addition to the polarized SERS measurements along the principal axes of our structures, we have presented SERS experiments with continuously rotated polarization direction of the illuminating laser radiation. The resulting polarization-dependent SERS signal reproduces the corresponding absorption dependence, indicating that the SERS signal strongly correlates with the excitation of the plasmonic eigenmodes. This work was performed with the intention of introducing a new multimode plasmonic nanoantenna system allowing highly controllable, large-area plasmonic substrates with tailored resonance positions suitable for SERS measurements. For this reason, Au was chosen as a highly biocompatible metal, even though Ag is a better plasmonic material in the optical domain with respect to damping and field enhancement. | 6,889.4 | 2009-10-30T00:00:00.000 | [
"Physics",
"Chemistry"
] |
Chronic Low Dose Morphine Does Not Alter Two In Vitro BBB Models
The blood–brain barrier (BBB) mediates cellular and molecular passage between the central nervous system (CNS) and the peripheral circulation. Compromised BBB integrity has been linked to neurocognitive deficits in multiple diseases and various infections, including those associated with HIV-1 infection. Understanding the impact of exposure to pharmaceuticals, such as those utilized for pain management by patients suffering from CNS disease, on BBB regulation and function is clinically important. In this study, we modelled two different BBB systems: a primary human co-culture and a cell line monoculture. Both systems were exposed to three daily repeat doses of morphine and examined for alterations to BBB integrity via permeability, PBMC transmigration, and chemokine gradient changes. We did not find any significant changes to either BBB system with repeat morphine dosing, suggesting that repeat morphine exposure may not play a significant role in BBB changes.
Introduction
Despite the use of antiretroviral therapy (ART), people living with HIV-1 infections develop neurocognitive disorders at an accelerated rate compared to the general population [1,2]. These neurocognitive impairments are collectively referred to as HIV-1-associated neurocognitive disorders (HAND) and can vary in severity [1][2][3][4]. The development of HAND has been associated with blood-brain barrier (BBB) impairment, which can lead to increased inflammation and immune cell infiltration in the central nervous system (CNS), creating a chronic feedback loop of damage in the CNS and exacerbated HAND progression [5][6][7]. The BBB functions as a highly regulated and selective filter between the CNS and the peripheral circulation, mediating cellular and molecular passage between these biologic compartments [8,9]. The BBB is made up of specialized brain microvascular endothelial cells (BMECs) that are linked together by tight junction proteins (TJPs) and are supported by astrocytes [8][9][10]. Infected immune cells can cross the BBB to enter the CNS; therefore, the function and integrity of the BBB has been highlighted as a central component in the development of neurocognitive impairments [11,12].
Primary Cell Culture
Primary human fetal astrocytes were obtained from the Temple University Comprehensive NeuroAIDS Center (CNAC) Core from 18-week aborted fetuses and subsequently cultured in DMEM with 10% heat-inactivated fetal bovine serum (FBS; Gemini; Thermo Fisher Scientific, Waltham, MA, USA) and 1% penicillin-streptomycin (Thermo Fisher Scientific, Waltham, MA, USA). Astrocytes were used between passages 2 and 6. Cells were grown on 60 cm² Petri dishes (TPP; Sigma-Aldrich, Burlington, MA, USA), 24-well plates (Falcon, Corning, NY, USA), or polycarbonate 0.3 cm² 24-well transwell inserts with a pore size of 3.0 µm (Falcon, Corning, NY, USA). Astrocytes were seeded at a density of 100,000 cells/well in 24-well transwell inserts and grown to functional confluence prior to use in experiments.
Human BBB Model
This model system was developed and characterized by [24] and validated in our laboratory. BMECs and human astrocytes were co-cultured on opposite sides of a gelatin-coated 3.0 µm pore tissue culture insert, according to previously published methods [1,4,26]. Astrocytes were first seeded on the basal side of the inserts, followed by BMEC seeding on the apical side. Cultures were maintained for three days in 24-well plates at 37 °C and 5% CO2 in M199C media. On day three of co-culture, inserts were transferred to a low-serum version of M199C media (lacking human serum) until day four, at which time PBMCs were added to the apical chamber and allowed to transmigrate for 24 h. On day five, PBMCs and conditioned media were collected for flow cytometry and cytokine assays, and the transwell inserts were analyzed for permeability.
Morphine Exposure of Cells
At confluence, on 12-well plates (hCMEC/D3 cultures) or 24-well transwell inserts (primary BMEC-astrocyte co-cultures), cells were exposed to vehicle or 200 nM morphine for up to 72 h, with re-administration every 24 h.
Fluorescein Isothiocyanate-Dextran (FITC-D) Permeability Assay for hCMEC/D3
Monolayers of hCMEC/D3 cells were grown to functional confluence on collagen-coated 3.0 µm polycarbonate transwell inserts within six days, as previously determined by FITC-dextran permeability assays (Supplementary Figure S1). Inserts were exposed to morphine or 1.4 M mannitol (positive control for the assay) for 30 min. At time zero, all chambers were washed with warmed 10 mM HEPES in 1X HBSS, followed by addition of unsupplemented EBM-2 media to the bottom chambers and 2 mg/mL of 70 kDa FITC-dextran (in unsupplemented EBM-2 media) to the top chambers. At 5-min intervals up to 30 min, and at 60, 90, and 120 min, all bottom chambers were sampled by transferring 100 µL to a white-walled optical-bottom 96-well plate. Fluorescence intensity was read on a Fluoroskan Ascent™ Microplate Fluorometer (Thermo Fisher Scientific, Waltham, MA, USA). The six time points at 5-min intervals were used to calculate the permeability coefficient (Pe), or rate of passage through the monolayer, for each treatment condition. Pe = PS/s, where PS (clearance) is the permeability-surface area product of the endothelial monolayer and s is the surface area of the filter (1.12 cm²); Pe is given in units of ×10⁻⁵ cm/s. PS was obtained from 1/PS = 1/me − 1/mf, where me and mf are the slopes of the curves corresponding to endothelial cells on filters and to filters only, respectively, with me and mf calculated by plotting the cleared volume against time. The cleared volume was calculated as (AUa − AUb)/Fi, where AUa is the total fluorescence (arbitrary units) in the basal compartment, AUb the background fluorescence and Fi the fluorescence of the initial solution (AU/mL).
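The clearance-slope calculation described above can be written compactly as follows. The fluorescence readings, background value and initial-solution fluorescence in this sketch are hypothetical placeholders; only the formulas (cleared volume, 1/PS = 1/me − 1/mf, Pe = PS/s with s = 1.12 cm²) follow the description above.

```python
import numpy as np

def cleared_volume(au_basal, au_background, f_initial):
    """Cleared volume [mL] per time point: (AUa - AUb) / Fi."""
    return (au_basal - au_background) / f_initial

def permeability_coefficient(t_min, cleared_cells, cleared_filter, area_cm2=1.12):
    """Pe = PS / s, with 1/PS = 1/me - 1/mf (slopes of cleared volume vs time)."""
    me = np.polyfit(t_min, cleared_cells, 1)[0]    # slope, cells on filter [mL/min]
    mf = np.polyfit(t_min, cleared_filter, 1)[0]   # slope, filter only [mL/min]
    ps = 1.0 / (1.0 / me - 1.0 / mf)               # clearance of the cell layer alone
    pe_cm_per_min = ps / area_cm2                  # normalise by the filter area
    return pe_cm_per_min / 60.0                    # convert to cm/s

# Hypothetical fluorescence readings at the six 5-min time points (arbitrary units);
# AUb (background) and Fi (initial solution, AU/mL) are placeholders as well.
t = np.array([5, 10, 15, 20, 25, 30])              # minutes
au_cells = np.array([120, 230, 345, 460, 570, 690])
au_filter = np.array([400, 800, 1190, 1600, 2000, 2390])
aub, fi = 20.0, 5.0e4

pe = permeability_coefficient(t,
                              cleared_volume(au_cells, aub, fi),
                              cleared_volume(au_filter, aub, fi))
print(f"Pe = {pe*1e5:.2f} x 10^-5 cm/s")
```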
BBB Permeability for Primary Co-Culture
Evans blue dye (0.5%; MilliporeSigma, Burlington, MA, USA) was added to 5% BSA and dissolved for 24 h at 4 °C to make Evans blue-coupled albumin (EBA), which is required for use in in vitro systems [26]. For the permeability assay, transwell inserts were first washed in phenol red-free DMEM (Thermo Fisher Scientific, Waltham, MA, USA). The inserts were transferred to 400 µL of 10% FBS/phenol red-free DMEM, and 200 µL of 0.45% EBA was added to the apical side of the transwell. After a 30-min incubation, the inserts were removed and discarded, and the sample in the basal chamber was collected and read at 620 nm. For these studies, 4 mM and 0.5 M EDTA were used as positive controls for loss of barrier integrity, where an O.D. of 0.2 was determined to be the break point for barrier integrity.
CCL2 (MCP-1) ELISA
To determine the effects of morphine on the BBB in the hCMEC/D3 and primary co-culture systems, the conditioned media was collected from the apical and basal chambers, along with any PBMCs, 24 h after PBMCs were added to the apical chamber. The collected media was centrifuged to remove transmigrated cells; PBMCs isolated from the basal chamber were used for flow cytometry, while the supernatants from both the apical and basal chambers were assayed for CCL2 using the Human CCL2 ELISA procedure as described by the manufacturer (eBioscience, Thermo Fisher Scientific, Waltham, MA, USA). In these assays, stromal cell-derived factor 1 (SDF-1; CXCL12) was used as an additional positive control, as it acts on both resting and activated T cells and can also be a chemoattractant for monocytes [24].
Transmigration/Flow Cytometry
Transmigration assays were performed on both model systems using PBMCs from healthy donors (Human Immunology Core, University of Pennsylvania, Philadelphia, PA, USA) added to the apical chambers at 300,000 cells/well for primary co-cultures and 750,000 cells/well for hCMEC/D3 cells. For positive control wells, 0.1% BSA in PBS (chemokine diluent), CCL2 (200 ng/mL; R&D Systems, Minneapolis, MN, USA), or CXCL12 (100 ng/mL; R&D Systems, Minneapolis, MN, USA) was added to the basal chamber. Transwells were incubated at 37 °C and 5% CO2 for 24 h to allow the PBMCs to transmigrate. After 24 h, the conditioned media and PBMCs were collected from both the apical and basal chambers and centrifuged at 1200 rpm for 5 min at 4 °C to pellet the cells. The removed media was assayed for CCL2. The cells were washed twice in 1% BSA in PBS and centrifuged at 1200 rpm for 5 min at 4 °C. The appropriate antibodies (CD3, CD14, live/dead) were added in 1% BSA in PBS and incubated on ice for 30 min in the dark. The cells were washed a second time in 1% BSA in PBS and centrifuged at 1200 rpm for 5 min at 4 °C. The wash was aspirated, and 2% paraformaldehyde was added to the cells. The prepared cells were then quantified by flow cytometry.
Statistics
Statistics for the FITC-Dextran assay were calculated using the ANOVA method, with log transformation and adjustment for possible effects of time, to compare the untreated to the morphine treatments and to mannitol. Statistics for other assays were calculated using the Student's t-test. p values less than 0.05 were considered significant in all assays. Statistics were calculated utilizing Microsoft Excel and GraphPad Prism version 6.0.
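A possible implementation of this analysis, assuming the data are arranged in a tidy table, is sketched below with statsmodels and scipy; the column names and the simulated values are illustrative and do not reproduce the actual analysis scripts.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy import stats

# Hypothetical tidy table of permeability readings (placeholder data).
df = pd.DataFrame({
    "value":     np.random.lognormal(mean=0.0, sigma=0.2, size=36),
    "treatment": ["vehicle", "morphine", "mannitol"] * 12,
    "time_min":  np.repeat([5, 10, 15, 20, 25, 30], 6),
})

# ANOVA on log-transformed values, adjusting for possible effects of time.
model = ols("np.log(value) ~ C(treatment) + C(time_min)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Student's t-test, as used for the other assays (e.g. CCL2, transmigration).
vehicle = df.loc[df.treatment == "vehicle", "value"]
morphine = df.loc[df.treatment == "morphine", "value"]
print(stats.ttest_ind(vehicle, morphine))
```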
Morphine Exposure Does Not Alter the Basal Rate of Tracer Molecule Passage across hCMEC/D3 BBB Cells, Regardless of Single or Repeated Administration of Morphine
Previous studies have reported differential outcomes of morphine exposure on BBB permeability to tracer molecule passage, depending on whether the system was in a state of constant repeated exposure versus withdrawal post-morphine exposure [22,27]. Here, we used single or repeated administrations of morphine at a concentration of 200 nM, to mimic chronic morphine exposure that would be given therapeutically to treat pain. We found that treatment with biologic concentrations of morphine at 200 nM, whether under single administration ("no spikes") or repeated administration ("spikes"), did not increase baseline leakiness of the monolayer over 72 h, as compared to vehicle (H2O) controls (Figure 1).
Figure 1. Re-administration of vehicle or morphine ("spikes") was performed every 24 h, with the final re-administration conducted two hours before time zero, for all endpoints. Mannitol was incubated on positive control cells 30 min prior to time zero. Following exposure, all chambers were washed, and permeability was assessed by determining the amount of 70 kDa FITC-dextran passing from the upper to the lower chambers over 120 min. The permeability coefficient (Pe) (×10⁻⁵ cm/s) (A) and raw fluorescence intensity (RFI) (B) were calculated. All courses were performed in triplicate and are representative of three independent experiments. Statistical analysis was performed using the ANOVA method with log transformation and adjustment for possible time effects to compare the untreated condition to the morphine treatments and to treatment with mannitol. Based on 95% confidence intervals, no significant change was observed with morphine treatment. Mannitol treatment *** p < 0.0001.
Repeated, Prolonged Morphine Exposure Does Not Induce Cytokine Gradients across a hCMEC/D3 BBB Model
To address the potential role of morphine exposure in the generation of chemotactic gradients across the BBB, conditioned media from the apical and basal chambers were assayed for the expression of the cytokine CCL2 by ELISA. CCL2 was selected due to its recognized roles in the chemotaxis of monocytes in vitro and in vivo [26,[28][29][30]]. The results demonstrated that repeated administrations of morphine (200 nM) over 72 h exposures did not significantly alter the concentrations of CCL2 as compared to the vehicle control (Figure 2).
Repeated, Prolonged Morphine Exposure Does Not Alter PBMC Transmigration across a hCMEC/D3 BBB Model
To examine how morphine may affect healthy PBMC transmigration across the hCMEC/D3 model, the monolayer was treated with vehicle or morphine (200 nM) every 24 h (Figure 3A). To determine when the hCMEC/D3 monolayer reached confluence in the transwell, FITC-D permeability assays were performed and showed a significant reduction in FITC-D passage to the basal chamber on day six that continued through day 10, as compared to the positive mannitol control (Supplementary Figure S1). At day nine, PBMCs were added to the apical chambers at a concentration of 1.5 × 10⁶ cells/mL and allowed to transmigrate for 24 h. We found that daily administrations of morphine (200 nM) over 72 h exposures did not significantly alter the transmigration of T cells or monocytes compared to the vehicle control (Figure 3C-E).
Morphine Exposure Does Not Alter Evans Blue Permeability across a Primary Co-Culture BBB Model
To determine a timeframe for when the primary co-culture reached confluence, we performed Evans blue permeability assays each day and found that confluence was established after day two (Supplementary Figure S2A). To define the point of significant loss of barrier integrity, EDTA at 4 mM and 0.5 M was also examined, which showed that significant barrier impairment consistently occurred at an O.D. of 0.2 (Supplementary Figure S2B). After 72 h of morphine treatment, as measured by Evans blue, there was no further increase in baseline leakiness of the co-culture with morphine treatment as compared to vehicle (H2O) (Supplementary Figure S3).
Repeated and Prolonged Morphine Exposure Does Not Induce Cytokine Gradients across a Primary Co-Culture BBB Model
After morphine treatment, the media from the apical and basal chambers were assayed for expression of the cytokine CCL2 by ELISA. The results demonstrated that repeated administrations of morphine over 72 h did not significantly alter concentrations of CCL2 compared to the control (Figure 4). These results were comparable to those generated by the hCMEC/D3 model, with the caveat that the primary co-culture model generated much higher concentrations of the chemokine. However, the fold-change of each treatment was comparable between the hCMEC/D3 and the primary co-culture models.
Repeated, Prolonged Morphine Exposure Does Not Alter PBMC Transmigration across a Primary Human Co-Culture BBB Model
To examine how morphine may affect healthy PBMC transmigration across the BBB model, the co-culture was treated with vehicle (H2O) or morphine (200 nM) every 24 h for 72 h starting on day three (Figure 5A). On day four, healthy PBMCs were added to the apical chamber at a concentration of 1.5 × 10⁶ cells/mL and allowed to transmigrate for 24 h. After 24 h (day five), the PBMCs from the basal chamber were collected and stained for flow cytometry. The results demonstrated that repeated administrations of morphine over 72 h did not significantly alter transmigration of T cells or monocytes compared to the vehicle control (Figure 5C-E).
Discussion
This study examined the effects of daily morphine exposure in hCMEC/D3 cell line and primary human co-culture BBB models and found no overt alterations induced at the BBB in either model system. The concentrations of morphine used in this study mimic repeat morphine exposure that would be given therapeutically to treat chronic pain, where morphine administered in a hospital setting can result in blood concentrations of 250-350 nM (https://www.wsp.wa.gov/breathtest/docs/webdms/DRE_Forms/Publications/drug/Human_Performance_Drug_Fact_Sheets-NHTSA.pdf; accessed on 14 June 2022).
In the hCMEC/D3 cell line model, we did not find increased baseline leakiness of the monolayer to small molecule passage, regardless of single versus repeated administrations of clinical doses of morphine. It has been shown that immortalized BMECs do not develop high TEER values; therefore, we used FITC-D to measure permeability, as supported in the literature for this model system [25]. In the primary co-culture system, we also did not find increased baseline leakiness of the barrier to Evans blue. Measuring permeability of the co-culture model by Evans blue has been previously established and validated by [24] as an appropriate method for this model system. Additionally, both FITC-D and Evans blue are colorimetric indicators, allowing for ease of comparison [28].
In the next set of experiments, we examined the production of a CCL2 chemokine gradient and the ability of this gradient to drive transmigration across the BBB. In the context of HIV-1 infection, there are elevated levels of pro-inflammatory cytokines in the CNS, which can recruit additional immune cells into the CNS and lead to decreased BBB integrity [13,15,31,32]. An important chemokine involved in transmigration across the BBB is monocyte chemoattractant protein-1 (MCP-1; also referred to as chemokine C-C motif ligand 2, or CCL2), which is produced by many cell types including endothelial cells, fibroblasts, astrocytes, monocytes, and microglia [15,29,31]. CCL2 attracts monocytes, macrophages, T cells, NK cells, and dendritic cells to the site of interest, and CCL2 gradients are capable of influencing BBB permeability through altered expression of TJPs, including occludin, claudin-5, and ZO-1 [22,31]. However, it was unknown whether daily repeated morphine dosing that mimics a pain management regimen is sufficient to create a chemokine gradient across a healthy BBB system. We observed no production of a CCL2 gradient across the chambers in either model system that was sufficient to alter transmigration of healthy PBMCs across the barrier. However, the concentration of CCL2 produced by the co-culture was much higher than that observed in the hCMEC/D3 cell line model, most likely due to the high production of CCL2 by astrocytes as compared to endothelial cells [33]. Additionally, the majority of PBMCs that did transmigrate across the transwells were alive, and the ratio of T cells to monocytes that transmigrated was not significantly altered.
We have previously shown that prolonged morphine exposure on hCMEC/D3 cells increased mRNA and protein expression of ICAM-1, VCAM-1, and ALCAM, and enhanced firm adhesion of PBMCs [22]. In this study, prolonged morphine exposure did not alter transmigration of healthy, uninfected PBMCs across either of the two different model systems. This might be explained if the PBMCs interact with the BBB due to the upregulation of adhesion molecules in hCMEC/D3 cells exposed to morphine; however, as there is no strong CCL2 chemokine gradient in these model systems, these cells may not be induced to cross the barrier into the respective basal chambers. Future experiments could explore this possibility by examining the amount of PBMCs adhered to the transwell in the presence and absence of morphine.
Throughout this study we did not find any significant differences between the two model systems; given the difficulty in acquiring primary human cells, the immortalized cell line system may therefore be a good option for preliminary BBB experimentation. A limitation of this study is that it only examined the effect of daily, repeat morphine exposure in the context of healthy PBMCs; future studies should consider pathogenic conditions, such as HIV-1-positive PBMCs, which may interact with the BBB differently [34].
Conclusions
These studies have shown that repeat administration of clinically relevant concentrations of morphine does not significantly alter two different in vitro models of the BBB. In both the cell line monoculture of hCMEC/D3 cells and the primary co-culture of human BMECs and astrocytes, no changes in permeability, CCL2 concentrations, or PBMC transmigration were observed after 72 h of daily morphine exposure at 200 nM.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/brainsci12070888/s1, Figure S1: Time to confluent hCMEC/D3 barrier as measured by FITC-D; Figure S2: Time to confluent co-culture as measured by Evans blue; Figure S3: Evans blue permeability of the co-culture transwell system. | 5,436.2 | 2022-07-01T00:00:00.000 | [
"Biology"
] |
Fast Fourier Transform detection and reduction of high-frequency errors from the results of surface topography profile measurements of honed textures
This is an open access article under the CC BY license (https://creativecommons.org/licenses/by/4.0/).
Keywords: surface topography, measurement, measurement errors, measurement noise, cylinder liner, oil pocket, dimple, valley, texture features.
Abstract: In this paper, various types of noise detection procedures based on surface topography profile analysis were proposed, compared and evaluated. Honed cylinder liner surface textures with additionally burnished oil pockets were measured with stylus or optical techniques. Measurement errors, defined as high-frequency measurement noise, were given particular consideration. It is proposed to perform noise detection with profile (2D) rather than areal (3D) assessments, since some noise frequencies are much easier to observe in profile than in areal analysis. Moreover, various regular filtration methods, mostly based on Gaussian functions, were compared with Fast Fourier Transform filtration for the detection and reduction of high-frequency measurement errors.
Highlights:
• Power spectral density and autocorrelation function are valuable in noise analysis.
• For minimisation of the effect of measurement errors the 'noise surface' is defined.
• Profile analysis might be more valuable than areal when measurement noise is assessed.
• Fast Fourier Transform is useful in the definition of high-frequency measurement noise.
Introduction
Measurement and analysis of surface topography (ST) are highly advantageous in assessing the tribological performance of 'engineering surfaces' [17], especially wear resistance [6], lubricant retention, sealing, friction, and material contact in general [25]. ST is created in the last stage of the machining process; in addition, medium-scale components of ST, waviness in particular, arise as a result of imperfections in the manufacturing process [21].
Detailed information about ST can be obtained by measurement and analysis of the received raw data. The assessment of the wear behaviour of selected surface textures is affected by many factors that can limit the accuracy of the research carried out. In general, errors in the analysis of ST, dedicated mainly to tribological elements, can be classified as errors of the measuring equipment and environment [2], errors of the measured object, software errors [30] or measuring method errors [37]. Considering measurement uncertainty, errors can be grouped into those typical of the measuring method [24], those caused by the digitisation process [23], those introduced during data processing [26] and other errors [20,29,44].
A few problems can arise in the processing of raw measured data. Usually, the ST parameters of car engine parts are calculated after removal of form (errors of shape and waviness) [32]. Errors of this type of data processing can be especially visible when the edge areas of two-process textures are considered [28]. Improper selection of the form removal procedure can cause properly made parts to be classified as defective and rejected [31].
ST measurement equipment is often roughly divided into stylus and non-contact methods [40]. Despite the significant development of optical techniques, the stylus profilometer is still the most common roughness measuring device in the mechanical industry [22]. In contrast to stylus (contact) methods, which are robust but slow, optical methods are fast but highly sensitive to external effects. In general, optical methods require isolation of the instrument from the external environment, which is a complicated task to accomplish. Therefore, the results of measurement of a rough surface can be affected by errors caused by the occurrence of noise. The noise has different sources, including those generated internally and those defined as external sources received from the environment.
There are many types of noise in surface metrology, e.g. scattering [43], background [14], measurement/instrument [4], outlier [41], static [9] and other noise-like [1] errors. The measurement noise can be briefly described as the noise added to the output signal [12] during the normal use of the measuring instrument. The effect of noise occurrence has been minimised with various methods, e.g. correlogram correlation [16], proposals for limiting and matching the bandwidth of various instruments [3], optimisation approaches for selected measuring methods (coherence scanning interferometry) [7], a frequency de-noising technique based on wavelet decompositions [5], Fourier reduction or random phase exclusion schemes [8], and many other methods [27].
The Fast Fourier Transform (FFT) is widely used in surface texture characterisation [18]. The FFT technique is likely to play an important role in the analysis of dry contact of rough surfaces [35], and in contact assessments in general [19]. For example, the power values of the power spectral density (PSD) of atomic force microscopy (AFM) digital data were determined by FFT algorithms, instead of the root-mean-square (rms) and peak-to-valley values, when the surface morphology of pentacene thin films and their substrates was considered [15]. Moreover, a peak-detecting algorithm that combines the white-light phase-shifting interferometry (WLPSI) method and an FFT coherence-peak-sensing technique was proposed, which could accurately determine the local fringe peak and improve the vertical resolution of the measurement [39]. Application of this type of algorithm could effectively reduce the batwing effects at the edges and solve the problem of positioning error in the maximum modulation.
FFT can also be combined with other filters, e.g. Gaussian [38], for measurement of the surface roughness of optical thin films; there, the surface profile was obtained by fringe pattern analysis using the FFT method. Besides, the asperity radius of curvature and asperity density, which are generally derived from a rough surface simulation with FFT, are the two essential parameters of statistical contact models [11]. Simulated rough surfaces with desired parameters are widely used as input for numerical simulations of tribological behaviour such as asperity contact, lubrication, and wear. Gaussian rough surfaces with a desired autocorrelation function (ACF) and spatial statistical parameters, including skewness (Ssk) and kurtosis (Sku), were generated by combining the FFT, translation process theory, and the Johnson translator system [42]. An FFT algorithm can also reduce the computational time in the calculation of the universal spline filter. Furthermore, for practical use, an improved method applied prior to the FFT algorithm has been proposed to suppress the end effect [45].
In some cases, the FFT was considered one of the essential tools of digital signal processing for surface profile filtration and was compared with existing methods [33]. Moreover, a fast white-light interference method for measuring surface depth profiles at nanometre scales has also been demonstrated using the FFT. White-light profilers have relied either on path-difference scanning or on spectral analysis of the reflection from a fixed interferometer. It was shown that, by performing this spectral analysis with an imaging Fourier transform spectrometer, the high speed of spectral techniques can be successfully combined with the simple data interpretation characteristic of scanning methods [10]. A two-dimensional FFT technique was also proposed for accelerating the computation of the scattering characteristics of features on surfaces using the discrete-dipole approximation; the FFT reduced the execution time of data processing as well [36].
In this paper, the FFT filtration method is proposed for the detection and reduction of high-frequency measurement errors. It is suggested to analyse the ST with profile (2D) rather than areal (3D) assessments. A number of profiling techniques have been developed for measuring the topography of rough surfaces [34]. Nonetheless, there is still a significant problem with the selection of a procedure for the detection of measurement noise. The influence of noise-separation methods on areal and profile ST analysis has also not been comprehensively studied in previous research.
Analysed materials, measurement process and applied methods
In this paper, honed cylinder liner textures with dimples created by burnishing techniques were taken into account. They were measured with a stylus instrument, the Talyscan 150. The nominal tip radius was about 2 μm, the height resolution 10 nm, the measured area 5 mm × 5 mm (1000 × 1000 measured points), and the sampling interval 5 µm.
Detection of noise, especially high-frequency noise, from the raw measured data was performed with the FFT. The FFT approach was applied to areal (3D) details, but in many cases the results are presented for profiles (2D) only. This was motivated by the more direct (visible) detection and reduction of the high-frequency measurement errors in profiles through analysis of the PSD and ACF graphs. For each type of surface texture (whether the topography contains a deep/wide dimple or not), the noise-detection procedure was modified to provide the best fit.
The effect of the distribution of ST features on noise detection and reduction was carefully studied. Results were compared with those obtained by regular filtering methods, e.g. the Gaussian (GF), robust Gaussian (RGF) or spline filters (SF). Furthermore, analysis of the power spectral density and autocorrelation function graphs was performed. The PSD was computed with no additional functions: the Zoom Factor and the number of Loops (iterations) were equal to 1, and no smoothing or window functions were applied. PSDs were obtained with the support of commercial software. The noise removal results were also defined as the results of S-operator filtering, described in detail in surface measurement (ISO) standards.
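A minimal sketch of the profile PSD computation used here (no window, no smoothing, Zoom Factor and Loops equal to 1) is given below; the synthetic profile is only a placeholder standing in for a measured honed-liner profile sampled at 5 µm.

```python
import numpy as np

dx = 5e-3  # sampling interval in mm (5 um), as in the Talyscan measurement

def profile_psd(z, dx):
    """One-sided power spectral density of a profile (no window, no smoothing)."""
    z = z - np.mean(z)                      # remove the mean level
    n = z.size
    spectrum = np.fft.rfft(z)
    freq = np.fft.rfftfreq(n, d=dx)         # spatial frequencies [1/mm]
    psd = (np.abs(spectrum) ** 2) * dx / n  # periodogram normalisation
    return freq, psd

# Placeholder profile (heights in um): waviness, one deep dimple and
# synthetic high-frequency noise; illustration only, not measured data.
x = np.arange(1000) * dx
z = (0.5 * np.sin(2 * np.pi * x / 0.8)                            # waviness
     - 30.0 * np.exp(-((x - 2.5) / 0.25) ** 2)                    # ~30 um dimple
     + 0.05 * np.random.default_rng(0).standard_normal(x.size))   # HF noise

freq, psd = profile_psd(z, dx)
# High-frequency noise shows up at the upper end of `freq`; a deep/wide dimple
# raises the low-frequency part of the PSD and can mask it, as discussed below.
```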
Areal and profile analysis of measurement noise
When considering measurement errors, especially the high-frequency measurement noise (HFMN), it is necessary to describe the noise together with the associated measurement bandwidth. When HFMN was scrutinised, the PSD of the measured detail was carefully examined, explicitly taking into account the existence of high frequencies (frequencies of small scales) in the analysed graph. Moreover, visual detection of HFMN from both contour map plots and PSD graphs of the measurement results depended on the width and depth of the ST features.
It was assumed that the size of the dimples (valleys) additionally burnished into the honed cylinder liner texture has a significant impact on the noise detection process. In Figure 1, contour map plots of a cylinder liner measured with different velocities (a: 0.2 mm/s and b: 1 mm/s) are presented. It was found that if the width (depth) of the considered feature was greater than 0.1 mm (0.010 mm), HFMN detection by PSD graph analysis was exceedingly difficult or infeasible, and the expected (high) noise frequencies did not appear on the diagrams. The occurrence of deep/wide features in the analysed detail made visual (eye-view) noise detection enormously complicated.
It was also proposed to compare the visual disclosure of HFMN for a detail before (raw measured data) and after form removal. 'Flat' surface analysis might give more convincing results than details with larger amplitude (containing form and waviness) when measurement noise is detected by visual assessment of the PSD plot. The form was eliminated with a least-squares fitted polynomial plane of 3rd degree enhanced by the valley-excluding method. This form separation approach was investigated and confirmed in previous studies; the usefulness of the applied scheme for the characterisation of two-process cylinder liner topographies containing (relatively) large dimples was noticeably improved.
When the mentioned PSD diagrams were visually compared (examples c and d against a and b in Figure 1), there were no discrepancies between the obtained results, or they were negligible. Consequently, it was reasonably assumed that detection of HFMN by analysis of the PSD graph of honed cylinder liner topographies including oil reservoirs was not precise, regardless of whether the areal form removal process was completed or not. Nevertheless, the effect of form removal on noise identification (and vice versa) was not thoroughly scrutinised; this was not one of the underlying objectives of the conducted research.
Similar problems in the visual detection of noise from the results of surface texture measurement were obtained with profile (2D) analysis. In Figure 2, profiles (and, respectively, their PSD diagrams) extracted from a honed cylinder liner topography measured with different speeds are presented. Differences in the measurement results were mainly (and easily) noticeable with visual analysis of the profile plots. Nevertheless, the variations in the PSD graphs were trifling or did not exist. In the considered profile, the valley depth and width were around 30 µm and 0.5 mm, respectively.
It was also assumed that when the number (density) of valleys in 2D (3D) analysis increased, the PSD-based detection of HFMN became utterly impracticable. Consequently, the process of noise disclosure based on PSD diagram assessment appeared to be imprecise when honed cylinder liner topographies containing valleys were studied.
The effect of noise amplitude, not analysed in these studies, was also extraordinarily difficult to determine. In further research, where defining the error amplitude would be the primary task, the feature (e.g. valley, dimple) size, density and distribution (considering also the distance between the features) might be of great importance. Moreover, noise identification with PSD assessment was further complicated by the fact that the noise was not visible for every measurement condition (velocity). Additionally, HFMN was discernible with visual profile analysis, in contrast to the PSD graph assessment, for each surface texture measurement velocity.
The effect of feature (dimple) size on the detection of high-frequency measurement errors
The impact of feature (dimple, valley) size (width, depth) on the detection of HFMN from raw measured data was taken into consideration; the number (density) of valleys was not studied. It was assumed that for honed cylinder liner textures containing dimples with a depth greater than 10 µm, the detection of HFMN with PSD analysis was severely impeded. The smaller the valley depth, the more accurate the PSD-based identification of measurement noise. In Figure 3, three different profiles (containing dimples with various depths) extracted from a honed cylinder liner topography are introduced. As assumed in the previous section, when the valley depth was greater than 10 µm, the detection of HFMN with PSD analysis became unattainable (a, b). However, when the dimple depth was smaller than the previously specified value, the noise (high) frequencies were found in the PSD diagram (c). This valley-depth threshold should be related to the Sq of both the surface and the noise (HFMN specifically). Nevertheless, arbitrarily selected values should not be decisive; it is suggested to identify the noise occurring in the measurement results by PSD analysis of out-of-deep-feature (out-of-deep-valley/out-of-deep-dimple, to be more precise) characteristics. This approach is entirely appropriate if the noise amplitude is relatively small with respect to the surface height (for both methods: the out-of-dimple scheme and the approach based on excluding the valleys).
Not only the depth of the surface features affects the PSD-based accuracy of noise detection. It was found that if the depth of the valleys was relatively small (usually smaller than the Sq/Rq value of a non-dimple detail/profile), the HFMN was also challenging to discern. The direct impact of dimple width can be clearly designated by considering the feature width (A) against the width of the out-of-feature detail/profile (B); this description is introduced in Figure 4. Each profile was shortened to reduce the A/B coefficient. The size of the out-of-feature (dimple) part of the profile was equal to 0.85 mm and, correspondingly, the width of the valley was decreased from 0.15 mm (a) to 0.05 mm (c), or its value was negligible (d). Consequently, when the value of the A/B factor decreased, the PSD identification of HFMN occurrence improved. In general, when the width of the feature (A) was smaller than 10% of the profile width (A + B), the PSD disclosure of the noise improved and visual detection of HFMN was facilitated.
Reduction of noise from the results of surface topography measurements
Once the procedure for (high-frequency) noise detection is applied, the approach for the reduction (removal) of measurement errors should be chosen very attentively. The final selection of the noise reduction procedure has a considerable influence on the calculated values of areal surface topography parameters. A properly accomplished measurement process might not provide the desired (adequate) results when the processing of the received raw measured data is not performed competently. The method of reduction of HFMN is, in addition to the form removal procedure, one of the main factors that can cause a false estimation of the detail's properties and its classification as defective and rejection. For reduction of the effect of measurement errors on the values of areal surface texture parameters, FFT filtering was compared with commonly used algorithms (mostly available in the commercial software of measuring equipment). Several approaches to the noise reduction issue were proposed separately. Basically, the HFMN can be defined by analysing the results of the S-filtering process [13], e.g. regular Gaussian, spline or FFT methods. In previous research, it was found that the procedure for the reduction of HFMN can be selected by assessment of the data received from the filtering approach. The results of the S-filtering process can be defined as a noise surface (NS) or a noise profile (NP) when, respectively, 3D or 2D measured data are considered. It was also found that the NS (NP) received by the S-operation should consist of only the high frequencies, or those (high) frequencies should be the overwhelmingly 'dominant' frequencies. Simultaneously, the frequencies that are required to be removed should be those dominant in the results of the noise reduction (here, filtering) procedure. In general, the more dominant the high-frequency components are, the better the results of the HFMN reduction process.
From the analysis of the PSDs of profiles from the honed surface texture after reduction of HFMN, it was found that the most dominant high frequency occurred in the results obtained after S-filtering by the FFT method. In Figure 5, profiles and their PSDs are presented after various types of filtering. Of the Gaussian schemes (GF or RGF), robust filtering caused a greater amplitude of the received NP than the regular filter. When the FFT approach was applied, the high frequencies were the 'dominant' ones; the amplitude of those frequencies (on the PSD graph) was greater than for profiles after noise reduction by the other methods (GF, RGF and SF). The amplitude (maximum height) of the received NP was the greatest when the RGF was used; nonetheless, the amplitude of the HFMN (NS) was not considered in this paper. Some features located on the ACF graph (Figure 6) occurred in a periodic sequence when commonly available (Gaussian or spline) filters were applied; the received frequencies were more 'periodic' than those after noise reduction by the FFT approach. Moreover, the peak density of the ACF also increased when FFT filtering was applied instead of the generally used filters. The quality of the noise reduction process in profile analysis can be judged by definition and assessment of the dominant frequency. Additionally, when the HFMN is to be removed (minimised), the dominant (high) frequency should not be found in the post-processed data received after S-operator filtering. The smaller the amplitude of the dominant (removed) frequency in the de-noised data, the more advantageous the applied algorithm. The HFMN was removed by application of the various filters with bandwidths equal to 0.015 mm, 0.025 mm and 0.035 mm.
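The S-filter-like separation of a noise profile and the checks discussed above (dominant frequency, ACF periodicity) can be sketched as follows. The FFT variant simply keeps the spectral content above the chosen cut-off, and the Gaussian variant uses the residual of an ISO-type Gaussian low-pass (σ = αλc/√(2π), α = √(ln 2/π)); this is an illustrative approximation, not the implementation of the commercial software used in the study. The profile z and sampling interval dx are assumed to come from a measurement (or from the sketch above).

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

dx = 5e-3            # sampling interval [mm]
cutoff = 0.025       # S-filter cut-off / nesting index [mm]

def fft_noise_profile(z, dx, cutoff):
    """High-frequency 'noise profile': content with wavelengths below the cut-off."""
    spectrum = np.fft.rfft(z)
    freq = np.fft.rfftfreq(z.size, d=dx)      # spatial frequencies [1/mm]
    spectrum[freq < 1.0 / cutoff] = 0.0       # keep only wavelengths < cutoff
    return np.fft.irfft(spectrum, n=z.size)

def gaussian_noise_profile(z, dx, cutoff):
    """Residual of an ISO-type Gaussian low-pass (regular GF) as the noise profile."""
    alpha = np.sqrt(np.log(2) / np.pi)
    sigma = alpha * cutoff / (np.sqrt(2 * np.pi) * dx)   # sigma in samples
    return z - gaussian_filter1d(z, sigma)

def dominant_frequency(noise, dx):
    """Spatial frequency of the largest PSD peak of the noise profile."""
    spectrum = np.abs(np.fft.rfft(noise - noise.mean())) ** 2
    freq = np.fft.rfftfreq(noise.size, d=dx)
    return freq[1:][np.argmax(spectrum[1:])]             # skip the DC bin

def acf(noise):
    """Normalised autocorrelation function of the noise profile."""
    n = noise - noise.mean()
    corr = np.correlate(n, n, mode="full")[n.size - 1:]
    return corr / corr[0]

# A well-separated noise profile should have its dominant frequency at the
# high-frequency end of the spectrum and an ACF without periodic waveforms.
```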
Conclusions
The detection and subsequent reduction of high-frequency measurement noise is a highly demanding task in the analysis of surface topography. Nonetheless, the following conclusions can be drawn:
1. For honed cylinder liner textures (plateau-honed in particular) that additionally contain burnished oil reservoirs (dimples, valleys, holes in general), the high-frequency measurement noise was exceedingly difficult to detect in areal analysis with power spectral density or autocorrelation function applications. Visual assessment of the occurrence of high-frequency errors in the measurement results became increasingly complicated when the texture contained deep features.
2. The form (e.g. cylindrical) of the analysed detail has no impact on the detection of high-frequency measurement noise by power spectral density (PSD). The differences between PSD graphs calculated for the raw measured data and for data after areal form removal (e.g. a least-squares-fitted polynomial plane of 3rd order improved by the valley-excluding method) were imperceptible.
3. It is suggested to identify the noise in surface texture measurement results by power spectral density analysis of out-of-feature details (deep valleys in particular). This might be entirely justified when the noise amplitude is relatively small compared with the surface amplitude (height). Moreover, when the number (density) of valleys in profile (areal) analysis increased, detection of high-frequency measurement noise from power spectral density graphs appeared to be infeasible.
4. The feature (dimple) size has a considerable influence on the accuracy of the noise detection procedure. When the depth of a valley was greater than 10 µm (usually when the Sq of the non-dimple area was smaller than 10% of the Sq value calculated for the whole measured detail, including dimples), detection of high-frequency measurement errors (with power spectral density analysis, certainly) became unobtainable. In addition, when the width of the feature was smaller than 10% of the profile width, the precision of noise detection increased.
5. It was suggested to determine the high-frequency measurement noise with noise profile analysis, i.e. from the results of S-filtering methods. A properly obtained noise profile should contain only the 'unwanted' (high-frequency) components. It was suggested that the removed frequency should be the dominant one in the results received after S-filtering of the noise profile.
6. When the received noise profile (surface) was analysed with the autocorrelation function, it was found that a properly defined noise profile (surface) should not contain any periodic waveforms; the noise profile (surface) should, in general, be non-periodic (isotropic).
7. For reduction of high-frequency noise from the results of surface topography measurement, fast Fourier transform (FFT) filtering was proposed. Compared with commonly used procedures (e.g. various Gaussian or spline filters) with various cut-off values (e.g. 0.015 mm, 0.025 mm, 0.035 mm), FFT filtering maximised the amplitude of the dominant (required high-) frequency.
8. It is recommended to select the noise reduction procedure with multivariate analysis, combining visual analysis of the profile (areal) data (contour maps/profile plots) with assessments of power spectral density and autocorrelation function graphs.
| 5,080.6 | 2021-01-01T00:00:00.000 | [
"Physics"
] |
Fully-strange tetraquark states with the exotic quantum numbers J PC = 0 + − and 2 + −
We study the fully-strange tetraquark states with the exotic quantum numbers $J^{PC} = 0^{+-}$ and $2^{+-}$. We construct their corresponding diquark-antidiquark interpolating currents, and apply the QCD sum rule method to calculate both their diagonal and off-diagonal correlation functions. The obtained results are used to construct some mixing currents that are nearly non-correlated, from which we extract the masses of the lowest-lying states to be $M_{0^{+-}} = 2.47^{+0.33}_{-0.44}$ GeV and $M_{2^{+-}} = 3.07^{+0.25}_{-0.33}$ GeV. We apply the Fierz rearrangement to transform the diquark-antidiquark currents to be the combinations of meson-meson currents, and the obtained Fierz identities indicate that these two states may be searched for in the $P$-wave $\phi(1020)f_0(1710)/\phi(1020)f_2'(1525)$ ($\to \phi K\bar{K}/\phi\pi\pi$) channels.
In this paper we shall investigate the fully-strange tetraquark states with the exotic quantum numbers $J^{PC} = 0^{+-}$ and $2^{+-}$ through the method of QCD sum rules. Recently, we have applied this method to study the fully-strange tetraquark states of $J^{PC} = 0^{-+}/1^{\pm\pm}/4^{+-}$ in Refs. [84][85][86][87][88][89][90]. In the present study we shall explicitly add the covariant derivative operator in order to construct the fully-strange tetraquark currents of $J^{PC} = 0^{+-}$ and $2^{+-}$. We shall construct some diquark-antidiquark interpolating currents, and apply the QCD sum rule method to calculate both their diagonal and off-diagonal correlation functions. The obtained results will be used to construct some mixing currents that are nearly non-correlated, from which we shall extract the masses of the lowest-lying states to be $M_{0^{+-}} = 2.47^{+0.33}_{-0.44}$ GeV and $M_{2^{+-}} = 3.07^{+0.25}_{-0.33}$ GeV.
With its large J/ψ data sample, BESIII is carefully studying the physics happening around this energy region [91][92][93][94][95][96][97], as are Belle II [98] and GlueX [99]. Therefore, the above fully-strange tetraquark states of $J^{PC} = 0^{+-}$ and $2^{+-}$ are potential exotic hadrons to be observed in future experiments. The present study would provide not only complementary information for possible counterparts in the fully-charm sector $cc\bar{c}\bar{c}$ [100][101][102], but also a more systematic understanding of exotics in a wider flavor region. This paper is organized as follows. In Sec. II we construct the fully-strange tetraquark currents with the exotic quantum numbers $J^{PC} = 0^{+-}$ and $2^{+-}$. These currents are used in Sec. III to perform QCD sum rule analyses, where we calculate both their diagonal and off-diagonal correlation functions. Based on the obtained results, we use the diquark-antidiquark currents to perform numerical analyses in Sec. III A, and use their mixing currents to perform numerical analyses in Sec. III B. Sec. IV is a summary.
II. TETRAQUARK CURRENTS
In this section we construct the fully-strange tetraquark currents with the exotic quantum numbers $J^{PC} = 0^{+-}$ and $2^{+-}$. Note that these two quantum numbers cannot be reached simply by using one quark field and one antiquark field, nor can they be reached by using only two quark fields and two antiquark fields. Actually, we need two quark fields and two antiquark fields together with one or more derivatives to reach them.
We have systematically constructed three independent diquark-antidiquark currents in Ref. [90] using two quark fields and two antiquark fields together with two derivatives: Here and the symbol S denotes symmetrization and subtracting trace terms in the set Following a similar procedure for η α1α2α3α4 4 +− ;1/2/3 , for J P C = 0 +− and 2 +− currents, we can use the spin-0 and spin-2 projection operators rather than the symmetrization operator S: For J P C = 0 +− , we can construct three independent diquark-antidiquark currents: For J P C = 2 +− , we can also construct three independent diquark-antidiquark currents: The internal orbital angular momenta contained in the above diquark-antidiquark currents are all where L is the total orbital angular momentum, l ρ and l ρ ′ are the momenta inside the diquark and antidiquark respectively, and l λ is the momentum between the diquark and antidiquark, as depicted in Fig. 1.Among the above diquark-antidiquark currents, η .We use lρ and l ρ ′ to denote the momenta inside the diquark and antidiquark respectively, and l λ to denote the momentum between them.We use l ′ ρ and l ′ ρ ′ to denote the momenta inside the two mesons, and l ′ λ to denote the momentum between them.The Fierz identities given in Eqs.(23) indicate that the internal orbital angular momenta contained in the diquark-antidiquark currents η Moreover, the currents η ••• 0/2/4 +− ;1 are constructed by using the S-wave diquark field s T a Cγ µ s b of J P = 1 + , and the currents η ••• 0/2/4 +− ;3 are constructed by using the diquark field s T a Cσ µν s b of J P = 1 ± that contains both the S-and P -wave components, so they may lead to better QCD sum rule results.Oppositely, the currents η ••• 0/2/4 +− ;2 are constructed by using the P -wave diquark field s T a Cγ µ γ 5 s b of J P = 1 − , so their predicted masses are probably larger.The results of Ref. [90] have partly verified these analyses, as summarized in Table I.
Besides the diquark-antidiquark configuration, we can also investigate the meson-meson configuration.We have constructed three independent meson-meson currents of J P C = 4 +− in Ref. [90]: We can similarly construct three independent mesonmeson currents of J P C = 0 +− : and three independent meson-meson currents of J P C = 2 +− : × sa γ α1 γ 5 As depicted in Fig. 1, the internal orbital angular momenta contained in the above meson-meson currents are all where l ′ ρ and l ′ ρ ′ are the momenta inside the two mesons, and l ′ λ is the momentum between them.We can apply the Fierz rearrangement to derive the relations between η We shall use these Fierz identities to study the decay behaviors at the end of this paper.
III. QCD SUM RULE ANALYSES
The QCD sum rule method has been successfully applied to study various conventional and exotic hadrons in the past fifty years [103][104][105][106][107][108][109][110].In this section we apply this non-perturbative method to study the fullystrange tetraquark currents η 0 +− ;1/2/3 of J P C = 0 +− and η β1β2 2 +− ;1/2/3 of J P C = 2 +− .We use the three currents η 0 +− ;1/2/3 of J P C = 0 +− as examples, and assume that they couple to the states where X n is the state that the current η 0 +− ;i can couple to, N is the number of such states, and f in is the 3 × N matrix for the coupling of the current η 0 +− ;i to the state X n .Then we can investigate the diagonal and offdiagonal correlation functions at both the hadron and quark-gluon levels.
At the hadron level we use the dispersion relation to express Π ij (q 2 ) as where s < = 16m 2 s is the physical threshold.The spectral density ρ phen ij (s) can be generally parameterized for the states X n and a continuum as where M n is the mass of the state X n .At the quark-gluon level we apply the method of operator product expansion (OPE) to calculate Π ij (q 2 ), from which we can extract the OPE spectral density In the present study we have calculated the Feynman diagrams depicted in Fig. 2, where we use the strangeness quark propagator as We have considered the perturbative term, the quark condensate ⟨ss⟩, the gluon condensate ⟨g 2 s GG⟩, the quarkgluon mixed condensate ⟨g s sσGs⟩, and their combinations.We have calculated all the diagrams proportional to g N =0 s and g N =1 s , but we have only partly calculated the diagrams proportional to g N ≥2 s .
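Since the equations referred to in this paragraph (the two-point correlation function, the dispersion relation and the parameterization of the spectral density) are not reproduced in this excerpt, the standard textbook forms are sketched below for orientation. These are generic QCD sum rule expressions, not a verbatim copy of the paper's equations; extra Lorentz structures arising for the spin-2 currents are suppressed.

\begin{align}
\Pi_{ij}(q^2) &= i \int d^4x \, e^{iq\cdot x} \, \langle 0 |\, T\big[\eta_i(x)\, \eta_j^\dagger(0)\big] \,| 0 \rangle \, , \\
\Pi_{ij}(q^2) &= \int_{s_<}^{\infty} \frac{\rho_{ij}(s)}{s - q^2 - i\varepsilon} \, ds \, , \qquad s_< = 16\, m_s^2 \, , \\
\rho^{\rm phen}_{ij}(s) &= \sum_{n} f_{in} f_{jn}^{*}\, \delta(s - M_n^2) \; + \; \text{continuum} \, .
\end{align}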
A. Single-channel analysis
In this subsection we perform the single-channel analysis by setting $\rho_{ij}(s)|_{i \neq j} = 0$. This assumption neglects the off-diagonal correlation functions to make the three currents $\eta_{0^{+-};1/2/3}$ "non-correlated", i.e., any two of them cannot mainly couple to the same state $X$; otherwise the off-diagonal correlation functions would not vanish. Accordingly, we further assume that there are three states $X_{1,2,3}$ separately corresponding to the three currents $\eta_{0^{+-};1/2/3}$. We parameterize the spectral density $\rho_{ii}(s)$ as one-pole dominance for the single state $X_i$ between the physical threshold $s_<$ and the threshold value $s_0$, together with a continuum contribution above $s_0$. This simplifies Eq. (42), and the mass $M_i$ can be calculated through Eq. (46). We use the spectral density $\rho_{11}(s)$ extracted from the current $\eta_{0^{+-};1}$ as an example to calculate the mass $M_1$ of the state $X_1$. As given in Eq. (46), the mass $M_1$ depends on two free parameters: the Borel mass $M_B$ and the threshold value $s_0$. We consider three aspects to find their proper working regions: a) the OPE convergence, b) the one-pole-dominance assumption, and c) the dependence of the mass $M_1$ on these two parameters. (FIG. 2: Feynman diagrams calculated in the present study. The covariant derivative $D_\alpha = \partial_\alpha + i g_s A_\alpha$ contains two terms, and we use the green vertex to describe the latter term.)
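The mass formula referred to as Eq. (46) and the pole contribution used in the next paragraph are not written out in this excerpt; the Borel-transformed expressions sketched below are the standard forms used in this kind of analysis, given here on the assumption that the paper follows the usual conventions.

\begin{align}
M_i^2(s_0, M_B) &= \frac{\int_{s_<}^{s_0} \rho_{ii}(s)\, s\, e^{-s/M_B^2}\, ds}{\int_{s_<}^{s_0} \rho_{ii}(s)\, e^{-s/M_B^2}\, ds} \, , \\
\text{Pole contribution} &= \frac{\int_{s_<}^{s_0} \rho_{ii}(s)\, e^{-s/M_B^2}\, ds}{\int_{s_<}^{\infty} \rho_{ii}(s)\, e^{-s/M_B^2}\, ds} \, .
\end{align}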
Firstly, we consider the OPE convergence and require the D = 12/10/8 terms to be less than 5%/10%/20%, respectively: These conditions demand the Borel mass to be larger than M 2 B ≥ 2.31 GeV 2 , as depicted in Fig. 3. Secondly, we consider the one-pole-dominance assumption and require the pole contribution to be larger than 40%: This condition demands the Borel mass to be smaller than M 2 B ≤ 2.57 GeV 2 when setting s 0 = 14.0 GeV 2 , as depicted in Fig. 3.
Altogether we determine the Borel window to be 2.31 GeV 2 ≤ M 2 B ≤ 2.57 GeV 2 for s 0 = 14.0 GeV 2 .We redo the same procedures and find that the Borel windows exist as long as s 0 ≥ s min 0 = 12.5 GeV 2 .Accordingly, we demand the threshold value s 0 to be slightly larger and choose its working region to be 11.0 GeV 2 ≤ s 0 ≤ 17.0 GeV 2 .
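To make the window-finding procedure concrete, the following Python sketch scans the Borel mass for a toy spectral density and evaluates the pole contribution defined above. The 40% bound and the threshold value s0 = 14.0 GeV^2 are taken from the text, while the toy density, the strange-quark mass and the numerical cutoff standing in for infinity are assumptions; the OPE-convergence bound (the lower edge of the window) is omitted because the D = 8/10/12 condensate terms are not reproduced here.

import numpy as np
from scipy.integrate import quad

s_min = 16 * 0.095 ** 2          # physical threshold 16*m_s^2 in GeV^2 (m_s ~ 95 MeV assumed)

def rho_toy(s):
    """Toy spectral density; a real analysis would use the full OPE result."""
    return s ** 4

def pole_contribution(mb2, s0):
    """Fraction of the Borel-transformed correlator coming from below s0."""
    num, _ = quad(lambda s: rho_toy(s) * np.exp(-s / mb2), s_min, s0)
    den, _ = quad(lambda s: rho_toy(s) * np.exp(-s / mb2), s_min, 60.0)   # 60 GeV^2 as a stand-in for infinity
    return num / den

s0 = 14.0                                        # threshold value in GeV^2, as in the text
for mb2 in np.arange(2.0, 5.01, 0.5):            # scan of the Borel mass squared in GeV^2
    pc = pole_contribution(mb2, s0)
    print(f"M_B^2 = {mb2:.2f} GeV^2, pole contribution = {pc:.2f}")
# The upper edge of the Borel window is the largest M_B^2 with pole contribution >= 0.40;
# the lower edge would come from the OPE convergence requirement on the D = 12/10/8 terms.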
Thirdly, we consider the dependence of the mass $M_1$ on $M_B$ and $s_0$. As shown in Fig. 4, the mass $M_1$ is stable against $M_B$ inside the Borel window $2.31~{\rm GeV}^2 \le M_B^2 \le 2.57~{\rm GeV}^2$, and its dependence on $s_0$ is acceptable inside the working region $11.0~{\rm GeV}^2 \le s_0 \le 17.0~{\rm GeV}^2$, where the mass $M_1$ is calculated to be $M_{0^{+-};1} = 3.21^{+0.23}_{-0.28}$ GeV.
Its uncertainty comes from $M_B$ and $s_0$ as well as from the various QCD parameters given in Eqs. (42). Note that the mass $M_1$ has a stability point at around $s_0 \sim 6.0$ GeV$^2$, as shown in Fig. 4(a); however, the Borel window does not exist at this energy point. We apply the same method to study the other two $J^{PC} = 0^{+-}$ currents $\eta_{0^{+-};2/3}$ and the three $J^{PC} = 2^{+-}$ currents $\eta^{\beta_1\beta_2}_{2^{+-};1/2/3}$. The obtained results are summarized in Table I.
B. Multi-channel analysis
In this subsection we perform the multi-channel analysis by taking into account the off-diagonal correlation functions that are actually non-zero, i.e., ρ ij (s)| i̸ =j ̸ = 0. To see how large they are, we choose s 0 = 9.0 GeV 2 and TABLE I: QCD sum rule results for the fully-strange tetraquark states with the exotic quantum numbers J P C = 0/2/4 +− , extracted from the diquark-antidiquark currents η (52) This indicates that η 0 +− ;1 and η 0 +− ;2 are strongly correlated with each other, as depicted in Fig. 5.
To diagonalize the 3 × 3 matrix ρ ij (s), we construct three mixing currents J 0 +− ;1/2/3 : where T 0 +− is the transition matrix.We use the method of operator product expansion to extract the spectral densities ρ ′ ij (s) from the mixing currents J 0 +− ;1/2/3 .After choosing we obtain at s 0 = 9.0 GeV 2 and M 2 B = 1.50 GeV 2 .Therefore, the off-diagonal terms of ρ ′ ij (s) are negligible and the three mixing currents J 0 +− ;1/2/3 are nearly noncorrelated around here, as depicted in Fig. 5. Besides, Eq. ( 55) indicates that the QCD sum rule result extracted from J 0 +− ;3 is non-physical around here due to its negative correlation function.
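The construction of nearly non-correlated mixing currents amounts to diagonalizing the 3x3 matrix of spectral densities (or correlators) at a reference point. The Python sketch below shows the generic linear-algebra step with a made-up symmetric matrix; the actual matrix elements at s0 = 9.0 GeV^2 and M_B^2 = 1.50 GeV^2 are not reproduced in this excerpt, so the numbers are placeholders.

import numpy as np

# Placeholder 3x3 correlation matrix among the currents eta_{0+-;1/2/3}
# (symmetric, with strong 1-2 mixing as described in the text; values are made up).
rho = np.array([
    [1.00, 0.85, 0.10],
    [0.85, 1.20, 0.05],
    [0.10, 0.05, 0.30],
])

# Eigen-decomposition of the symmetric matrix gives a transition matrix T:
# the rows of T define mixing currents J_i = sum_j T_ij * eta_j that are non-correlated here.
eigvals, eigvecs = np.linalg.eigh(rho)
T = eigvecs.T

rho_mixed = T @ rho @ T.T
print("diagonalized matrix (off-diagonal terms ~ 0):")
print(np.round(rho_mixed, 10))
print("eigenvalues:", eigvals)
# A negative eigenvalue would signal a non-physical (negative) correlation function
# for the corresponding mixing current, as noted for J_{0+-;3} in the text.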
Since the off-diagonal terms of ρ ′ ij (s) are negligible around 0 = 9.0 GeV 2 and M 2 B = 1.50 GeV 2 , the procedures used in the previous subsection can be applied to study the three mixing currents J 0 +− ;1/2/3 .We summarize the obtained results in Table I.Especially, the mass extracted from the current J 0 +− is significantly reduced to Similarly, we can investigate the three J P C = 2 +− currents η β1β2 2 +− ;1/2/3 .We construct three mixing currents J β1β2 2 +− ;1/2/3 that are nearly non-correlated at around s 0 = 13.0GeV 2 and M 2 B = 2.0 GeV 2 : where We apply the QCD sum rule method to study the mixing currents J β1β2 2 +− ;1/2/3 , and the obtained results are summarized in Table I.Especially, the mass extracted from the current J β1β2 2 +− ;1 is the lowest: For completeness, we also summarize in Table I the QCD sum rule results obtained in Ref. [90] using the three J P C = 4 +− currents η α1α2α3α4 4 +− ;1/2/3 as well as their mixing currents J α1α2α3α4 4 +− ;1/2/3 that are nearly non-correlated at around s 0 = 11.0GeV 2 and M 2 B = 1.85 GeV 2 : where
IV. SUMMARY AND DISCUSSIONS
In this paper we apply the QCD sum rule method to study the fully-strange tetraquark states with the exotic quantum numbers J P C = 0 +− and 2 +− .We explicitly add the covariant derivative operator to construct some diquark-antidiquark interpolating currents, and apply the method of operator product expansion to calculate both their diagonal and off-diagonal correlation functions.Based on the obtained results, we construct some mixing currents that are nearly non-correlated.
We use both the diquark-antidiquark currents and their mixing currents to perform QCD sum rule analyses. The obtained results are summarized in Table I. Especially, we use the mixing currents $J_{0^{+-};1}$ and $J^{\beta_1\beta_2}_{2^{+-};1}$ to derive the masses of the lowest-lying $J^{PC} = 0^{+-}$ and $2^{+-}$ states to be $M_{0^{+-}} = 2.47^{+0.33}_{-0.44}$ GeV and $M_{2^{+-}} = 3.07^{+0.25}_{-0.33}$ GeV.
In this paper we also construct some fully-strange meson-meson currents of J P C = 0 +− and 2 +− , which are related to the diquark-antidiquark currents through the Fierz rearrangement.We can use these meson-meson currents and their mixing currents to perform QCD sum rule analyses.The results extracted from these mixing currents are the same.
FIG. 4: The mass $M_1$ of the state $X_1$ with respect to (a) the threshold value $s_0$ and (b) the Borel mass $M_B$. In subfigure (a) the short-dashed/solid/long-dashed curves are obtained by setting $M_B^2 = 2.31/2.44/2.57~{\rm GeV}^2$, respectively. In subfigure (b) the short-dashed/solid/long-dashed curves are obtained by setting $s_0 = 11.0/14.0/17.0~{\rm GeV}^2$, respectively. These curves are obtained using the spectral density $\rho_{11}(s)$ extracted from the current $\eta_{0^{+-};1}$.
"Physics"
] |
The contributions of aspartyl residues in the acetylcholine receptor gamma and delta subunits to the binding of agonists and competitive antagonists.
The acetylcholine (ACh) receptors in muscle have the composition α2βγδ and contain two ACh binding sites. One is formed between an α subunit and the γ subunit, and the other is formed between an α subunit and the δ subunit. Among the residues in the ACh binding sites are αCys-192 and αCys-193. The negatively charged δAsp-180 is at an appropriate distance from αCys-192/193 also to be in the ACh binding site and to interact electrostatically with the positively charged ammonium group common to agonists and competitive antagonists. Mutation to Asn of either δAsp-180 or the aligned residue in the γ subunit, γAsp-174, decreased the affinities of three agonists, acetylcholine, tetramethylammonium, and succinyldicholine 170-560-fold. By contrast, these mutations decreased the affinities of three competitive antagonists, (+)-tubocurarine, hexamethonium, and dihydro-β-erythroidine, only 2-15-fold. Agonists, but not antagonists, promote the transitions of the receptor from the resting state to the higher affinity active and desensitized states, and the greater effects of the mutations of γAsp-174 and δAsp-180 on the apparent affinities of agonists could reflect the involvement of these residues in the conformational changes of the receptor corresponding to its transitions to higher affinity states. In these transitions, one possibility is that γAsp-174 and δAsp-180 move closer to bound agonist.
The binding of acetylcholine (ACh) by nicotinic receptors promotes the transitions of the receptor from the resting state to the open and the desensitized states (Katz and Thesleff, 1957). Muscle-type ACh receptors contain two nonidentical ACh binding sites (Damle and Karlin, 1978; Neubig and Cohen, 1979; Dowding and Hall, 1987). Normally, these receptors are pentamers composed of four types of subunits in the stoichiometry α2βγδ (Reynolds and Karlin, 1978; Lindstrom et al., 1979; Raftery et al., 1980). One of the ACh binding sites is formed in the interface between the first α subunit and the γ subunit, and the other site is formed between the second α subunit and the δ subunit (Kurosaki et al., 1987; Blount and Merlie, 1989; Pedersen and Cohen, 1990; Sine and Claudio, 1991; Czajkowski and Karlin, 1991).
Several residues have been identified as in or close to the ACh binding sites. In the ␣ subunit, disulfide-linked and four aromatic residues, were labeled by binding site-directed reagents (Kao et al., 1984;Kao and Karlin, 1986;Dennis et al., 1988;Abramson et al., 1989;Galzi et al., 1990;Cohen et al., 1991;Middleton and Cohen, 1991). Furthermore, the involvement of these six residues in ACh binding was supported by the functional consequences of site-directed mutagenesis (Mishina et al., 1985;Tomaselli et al., 1991;Galzi et al., 1991a;O'Leary and White, 1992). The adjacent cysteines and the four aromatic residues are highly conserved among all ␣-subunit sequences.
An early assumption was that the ammonium group common to all potent agonists and antagonists of the ACh receptor is bound to a negative subsite of the binding site. From the rates of reaction and functional effects of affinity labels of different lengths, it was inferred that this negative subsite is about 12 Å from the binding site disulfide in the resting state of the receptor and about 9 Å from the binding site disulfide in the open state (Karlin, 1969). Using a radiolabeled cross-linker that reacts with a sulfhydryl at one end and a carboxyl group at the other end, we showed that three negatively charged residues in the ␦ subunit of Torpedo ACh receptor, ␦Asp-165, ␦Asp-180, and ␦Glu-182 (see Fig. 1), are within approximately 9 Å of ␣Cys-192/193 (Czajkowski andKarlin, 1991, 1995). 2 Furthermore, in the complex of mouse ACh receptor ␣, , and ␦ subunits, the mutation of mouse ␦Asp-180 to Asn decreased the apparent affinity of the receptor for ACh by 2 orders of magnitude (Czajkowski et al., 1993). In addition, mutation of ␦Glu-189 to Gln decreased the apparent affinity for ACh by 1 order of magnitude. Therefore, both ␦Asp-180 and ␦Glu-189 could contribute to the negative subsite of the ACh binding site formed between the ␣ and ␦ subunits. By contrast, mutations of ␦Asp-165, ␦Glu-182, or eight other negatively charged residues in ␦ had only small effects on the apparent affinity for ACh.
The cross-linking of to ␦Asp-180 supports the location of an ACh binding site in the interface of an ␣ subunit and the ␦ subunit. The photolabeling of the aligned residues ␥Trp-55 and ␦Trp-57 by the competitive antagonist (ϩ)-tubocurarine also supported the location of ACh binding sites in the interfaces between ␣ and ␥ and between ␣ and ␦ (Pedersen and Cohen, 1990;Chiara and Cohen, 1992). Mutation of these Trp residues had modest effects on the binding of agonists and antagonists (O'Leary et al., 1994;Corringer et al., 1995). These Trp residues and some of the aromatic residues identifed in the ␣ subunit could also contribute to the negative subsite; i.e. to the binding of the ammonium group (Dougherty and Stauffer, 1990;Galzi et al., 1990). The ␥ subunit is the most similar in sequence to the ␦ subunit, and mouse-muscle ␥ and ␦ substitute for each other in forming fully functional receptor complexes with just three types of subunits, presumably with the stoichiometries ␣ 2 ␥ 2 or ␣ 2 ␦ 2 (Kullberg et al., 1990;Sine and Claudio, 1991). The residues in ␥ that align with ␦Asp-180 and ␦Glu-189 are ␥Asp-174 and ␥Glu-183 (see Fig. 1). These two residues are identically conserved among all aligned sequences of the ␥, ␦, and ⑀ subunits (Czajkowski et al., 1993). (The ⑀ subunit substitutes for ␥ in adult muscle ACh receptor (Mishina et al., 1986).) We mutated to Asn or Gln, ␥Asp-174, ␥Glu-183, and each of the other eight Asp and Glu residues in a 60-residue segment preceding the first membrane-spanning segment (see Fig. 1) and expressed each mutant ␥ subunit, together with wild-type ␣ and  subunits, in Xenopus oocytes. We also expressed ␦Asp-180 mutated to Asn and ␦Glu-189 mutated to Gln together with wild-type ␣ and  subunits. The salient findings were that the mutation of ␥Asp-174 to Asn, just like the mutation of ␦Asp-180 to Asn, decreased the apparent affinity for ACh by 2 orders of magnitude and that both of these mutations decreased the apparent affinities for agonists much more than the affinities for competitive antagonists.
ACh-induced currents were recorded with a two-electrode voltage clamp at a holding potential of Ϫ40mV as described (Akabas et al., 1992). The currents induced by various concentrations of ACh were fit by the Hill equation, Oocyte membranes were prepared as described (Czajkowski et al., 1993). The final membrane suspension contained about 1.5 g of protein/l. The yield was approximately 5 fmol of 125 I-␣-bungarotoxin binding sites and about 15 g of protein/oocyte.
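The Hill equation mentioned above is not written out in this excerpt; in the standard form used for such dose-response fits it reads I = I_max / (1 + (EC50/[ACh])^n_H). A minimal Python sketch of such a fit is given below; the concentration-response values are made-up illustrative numbers, not the paper's measurements.

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, i_max, ec50, n_h):
    """Standard Hill equation for peak current versus agonist concentration."""
    return i_max / (1.0 + (ec50 / conc) ** n_h)

# Hypothetical ACh concentrations (uM) and normalized peak currents
conc = np.array([0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
current = np.array([0.05, 0.18, 0.45, 0.75, 0.93, 0.99])

popt, pcov = curve_fit(hill, conc, current, p0=[1.0, 5.0, 1.5])
i_max, ec50, n_h = popt
perr = np.sqrt(np.diag(pcov))     # standard errors of the fitted parameters
print(f"I_max = {i_max:.2f}, EC50 = {ec50:.1f} uM, Hill coefficient = {n_h:.2f}")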
The binding of 125 I-labeled ␣-bungarotoxin at different toxin concentrations was determined by diluting 20 l of membrane suspension to 400 l with NP50 (50 mM NaCl/10 mM sodium phosphate/1 mM EDTA, pH 7.0) containing 0.2% Triton X-100, mixing with 125 I-␣-bungarotoxin (at five concentrations from 0.05 to 5 nM), and incubating for about 24 h at 18°C. The samples were diluted with 5 ml of ice-cold wash buffer (0.2% Triton X-100/10 mM NaCl/10 mM sodium phosphate, pH 7.4), filtered through Reeves-Angel glass-fiber filters pre-soaked in 2% polyethyleneimine, and washed two times with 5 ml of wash buffer. The amount of 125 I-␣-bungarotoxin on the filters was determined by liquid scintillation counting. Specific binding was defined as that binding of 125 I-␣-bungarotoxin blocked by 1 M nonradioactive ␣-bungarotoxin. The concentration of toxin giving half-maximal binding, KЈ tox , was estimated by a nonlinear least squares fit of the specifically bound toxin, Y, to the equation, Y ϭ Y max /(1 ϩ (KЈ tox /X) n ), where X is the 125 I-␣-bungarotoxin concentration. Given the rate constants for the association and dissociation of ␣-bungarotoxin (Sine and Claudio, 1991), the binding would not be expected to reach equilibrium in 24 h, especially at low concentrations of ␣-bungarotoxin, and therefore KЈ tox is not an equilibrium dissociation constant.
The binding of 125 I-␣-bungarotoxin to intact oocytes was determined by placing 5-10 oocytes in a final volume of 400 l of 0.5 nM 125 I-␣bungarotoxin in MOR2 (82 mM NaCl/2.5 mM KCl/1 mM Na 2 HPO 4 /5 mM MgCl 2 /0.2 mM CaCl 2 /5 mM Hepes, pH 7.4), containing 0.1% bovine serum albumin, and incubating at 18°C overnight. The oocytes were placed on a single DE81 filter and washed four times with 5 ml of ice cold MOR2 (without bovine serum albumin). The amount of 125 I-␣bungarotoxin on the filter was determined by liquid scintillation counting. In the case of intact oocytes, specific binding was defined as that blocked by 100 nM nonradioactive toxin.
The binding of agonists and antagonists was determined by their inhibition of the binding of 125 I-␣-bungarotoxin to wild-type and mutant receptors to a crude membrane fraction of oocytes. Membranes (50 l), pretreated for 20 min with 200 M diisopropyl phosphofluoridate, 0.75 nM 125 I-␣-bungarotoxin (50 l), and various concentrations of agonist or antagonists (50 l), all in NP50, were combined in a final volume of 150 l. The final 125 I-␣-bungarotoxin concentration was 0.25 nM. After 5 h at room temperature, the suspension was filtered, and the bound 125 I-␣bungarotoxin was determined as above.
The dissociation constant, K ligand , for ligand was obtained by the nonlinear least squares fit of the following equation to the binding data: where Y is the specifically bound 125 I-␣-bungarotoxin (that blocked by 1 M ␣-bungarotoxin), Y 0 is the specifically bound 125 I-␣-bungarotoxin in the absence of ligand, and A is the concentration of ligand. 3 U is the specifically bound 125 I-␣-bungarotoxin not blocked by saturating concentrations of the ligand; U was estimated by the fit. Over all the ligands tested, U ranged from 13 to 42% of specifically bound 125 I-␣-bungarotoxin.
The inhibition data was also analyzed in terms of two binding sites by the following equation: where g is the fraction of blockable binding to site 1, K ligand,1 is the dissociation constant of the ligand for site 1, K ligand,2 is the dissociation constant for site 2, and the other parameters are defined above.
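The one-site and two-site inhibition equations are described in words above but their explicit forms are not shown in this excerpt. A common way to write them, and a sketch of fitting both with scipy, is given below; the functional forms, the starting values and the example data are assumptions made for illustration, not the authors' exact expressions or measurements.

import numpy as np
from scipy.optimize import curve_fit

def one_site(a, y0, u, k):
    """Bound toxin versus competing-ligand concentration a, single dissociation constant k."""
    return u + (y0 - u) / (1.0 + a / k)

def two_site(a, y0, u, g, k1, k2):
    """Two classes of sites with fractions g and 1-g and dissociation constants k1, k2."""
    return u + (y0 - u) * (g / (1.0 + a / k1) + (1.0 - g) / (1.0 + a / k2))

# Hypothetical inhibition data: ligand concentration (uM) versus specifically bound toxin (fraction)
a = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
y = np.array([0.97, 0.92, 0.82, 0.72, 0.63, 0.55, 0.42, 0.31, 0.24])

p1, _ = curve_fit(one_site, a, y, p0=[1.0, 0.2, 1.0])
p2, _ = curve_fit(two_site, a, y, p0=[1.0, 0.2, 0.5, 0.5, 20.0],
                  bounds=(0, [2.0, 1.0, 1.0, 100.0, 1000.0]))

# Compare residual sums of squares; an F-test would decide whether two sites are justified.
rss1 = np.sum((y - one_site(a, *p1)) ** 2)
rss2 = np.sum((y - two_site(a, *p2)) ** 2)
print("one-site K =", round(p1[2], 3), "uM, RSS =", round(rss1, 5))
print("two-site K1, K2 =", round(p2[3], 3), round(p2[4], 3), "uM, RSS =", round(rss2, 5))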
RESULTS
The ten acidic residues between and including ␥Glu-163 and ␥Glu-203 ( Fig. 1) were mutated one at a time, Asp to Asn and Glu to Gln. In addition, ␥Asp-174 was mutated to Glu, and ␥Glu-183 to Asp. These mutant ␥ subunits were expressed in Xenopus oocytes together with wild-type ␣ and  subunits and, where w ϭ 1/(1 ϩ a/K ligand ), k 1 and k Ϫ1 are the toxin-association and toxin-dissociation rate constants, respectively, and r total is the concentration of all receptor. Furthermore, For small values of the exponents, this equation is approximated by which is also the result obtained by assuming that toxin is binding at its initial rate and that dissociation is negligible. Given the rate constants for the binding of ␣-bungarotoxin to mouse muscle ACh receptor, k 1 ϭ 5.5 ϫ 10 5 M Ϫ1 s Ϫ1 and k Ϫ1 ϭ 3.3 ϫ 10 Ϫ6 s Ϫ1 (Sine and Claudio, 1991), and the conditions used here, x ϭ 0.25 nM, t ϭ 5 h, the exact expression is 0, 6, and 12% larger than the approximation at a ϭ 0, a ϭ K ligand , and a Ͼ Ͼ K ligand , respectively. Fitting data generated by the exact expression with the approximate equation yields a value for K ligand 13% larger than that used to generate the data. The error in the ratio of K ligand,mutant /K ligand,wild-type , as used in Tables I and II, would in some cases, together with ␣, , and ␦ subunits. We determined the effects of each of these mutations on the whole cell currents elicited by ACh and on the inhibition by ACh of the binding of 125 I-␣-bungarotoxin. For three mutations of ␥, ␥D174N, ␥D174E, and ␥E183Q, and for two mutations of ␦, ␦D180N and ␦E189Q, we also determined the binding of two additional agonists and of three antagonists (Fig. 2). The complex of wild-type ␣, , and ␥ (␣ 2 ␥ 2 receptor), similar to the complex of ␣, , and ␦ (␣ 2 ␦ 2 receptor; Czajkowski et al., 1993), yielded an EC 50 that was close to that of the complex of wild-type ␣, , ␥, and ␦ (␣ 2 ␥␦ receptor; Table I). No AChinduced current was obtained with just ␣ and  (Table I). Also, Liu and Brehm (1993) found that complexes of ␣ and ␥ or of ␣ and ␦ yielded ACh-induced currents that were 200 or 40 times smaller than the currents yielded by complexes of ␣, , and ␥ or of ␣, , and ␦, respectively. Initially, we tested the ␥ mutants in complexes just with ␣ and .
For 10 of the 12 ␥ mutants, the complexes with ␣ and  gave maximal ACh-induced currents that were 2-50% of wild-type currents ( Table I). The largest change in the EC 50 for ACh was a 4.6-fold increase shown by ␥E176Q; ␥E183Q gave a 4-fold increase; the other EC 50 values were within a factor of 3 of the EC 50 of wild-type ␣ 2 ␥ 2 receptor ( Table I). The Hill coefficients of all mutants were very close to the Hill coefficient of wild-type receptor. Also, for these 10 mutants the K ACh , determined by the inhibition by ACh of 125 I-␣-bungarotoxin binding, was slightly less than that of wild-type ␣ 2 ␥ 2 (Table I).
Two mutants, ␥D174N and ␥E202Q, co-expressed with ␣ and , gave no detectable ACh-induced current at ACh concentrations up to 2 mM. Furthermore, there was no specific binding of 125 I-␣-bungarotoxin to the surface of the intact oocytes expressing ␣ϩϩ␥D174N or ␣ϩϩ␥E202Q. 4 Therefore, we conclude that the complexes formed by these subunits were not transported to the cell surface. These subunits did form agonistbinding complexes in cytoplasmic membranes, the predominant constituents of the crude membrane fraction used in the binding experiments. Because the dissociation constant for ACh of the complex containing ␥E202Q was slightly less than that of all wild-type ␣ϩϩ␥, this mutant was not further characterized (Table I).
Unlike the complex of ␣, , and ␥D174N, the complex of ␣, , ␦, and ␥D174N was expressed on the cell surface. The AChinduced current was characterized by an EC 50 of 24 M, six times the EC 50 of all wild-type ␣ 2 ␥␦ ( Fig. 3; Table I). Because the complex of all wild-type ␣ 2 ␦ 2 receptor had an EC 50 of 2.7 M (Czajkowski et al., 1993) and the complexes of ␣ϩϩ␥D174N were not expressed on the cell surface, the increase in the EC 50 must have been due to the expression of the pentameric complex of ␣ 2 ␦␥D174N. Given that wild-type ␣ 2 ␦ 2 complex may also have been present, the observed EC 50 of 24 M ACh is a lower limit to the EC 50 of the complex of ␣ 2 ␦␥D174N.
The binding of ACh by complexes of ␣, , and ␥D174N in a subcellular membrane fraction of oocytes was characterized as K ACh , derived from the inhibition by ACh of 125 I-␣-bungarotoxin binding. K ACh for ␣ϩϩ␥D174N was 170 times the K ACh for wild-type ␣ϩϩ␥ (Fig. 4A and Table I). All of the other 11 mutants tested had K ACh slightly less than that of wild-type ␣ϩϩ␥.
The mutation ␥D174N also affected the binding of ACh in the 4 Receptor complexes transported to the cell surface are pentamers (Sine and Claudio, 1991), and therefore complexes whose properties were determined by electrophysiological assay we designate with pentameric stoichiometry, e.g. ␣ 2 ␥ 2 . The binding properties of receptor complexes were determined in a membrane fraction that included both surface and cytoplasmic membranes. The latter may contain incompletely assembled receptor complexes, as well as pentamers (Kreienkamp et al., 1995); we designate the mixture of oligomers obtained by expressing combinations of subunits with plus signs, e.g. ␣ϩϩ␥D174N.
FIG. 1. Aligned sequences of Torpedo ␦ subunit and mouse ␥ and ␦ subunits from residue 161 to residue 224 in ␦ and from residue 158 to residue 218 in ␥. Torpedo ␦ and mouse ␦ have the same numbering. In ␦, the first membrane-spanning segment starts near Pro-225 and in ␥, near Pro-219. In Torpedo ␦, the residues shown by cross-linking to be within 9 Å of ␣Cys-192/193 are underlined (Czajkowski et al., 1995). Each Asp and Glu in the mouse ␥ and ␦ segments were mutated individually to Asn or Gln; ␥Asp-174 and ␦Asp-180, the mutations of which caused the largest changes in affinities, are in large print. context of the ␣ϩϩ␦ϩ␥D174N complexes. The binding by these complexes were characterized by two dissociation constants, K 1 and K 2 , with K 1 Ͻ K 2 . K 1 for the mutant receptor was within a factor of 2 of K 1 for wild-type ␣ϩϩ␥ϩ␦ receptor; K 2 for the mutant, however, was 27 times larger than K 2 for wild-type receptor ( Table I). The high affinity site 44 Ϯ 8% of the binding (g in the two-site equation under "Experimental Procedures"), and the low affinity site accounted for 56 Ϯ 8%. The complex(es) formed by the co-expression of ␣, , ␥D174N, and ␦D180N bound ACh with a single dissociation constant of 45 Ϯ 5 M (n ϭ 2), 200 times the dissociation constant of 0.21 M for the high affinity site of wild-type receptor and eight times the dissociation constant of 5.7 M for the low affinity site (Fig. 4B).
We determined the effects of the mutations of ␥Asp-174 and FIG. 3. Current as a function of ACh concentration. The receptor complexes are all wild-type ␣ 2 ␥␦ (circles), ␣ 2 ␦␥D174N (squares), and ␣ 2 ␦␥E183Q (triangles). Oocytes expressing these complexes were superfused with five or six concentrations of ACh for 10 s each with a 5-min wash between, and the peak currents were recorded under twoelectrode voltage clamp at Ϫ40 mV (see "Experimental Procedures"). Each concentration was added twice. The peak current (I) is the average of the duplicates and is plotted as a fraction of the peak current (I max ) at infinite ACh concentration, calculated by fitting the Hill equation to the data (Table I)
TABLE I Effects of mutations in the gamma subunit on ACh-induced current and on ACh binding
Subunit mRNA was injected into Xenopus oocytes, and after 1-3 days ACh-induced currents were recorded, as described under "Experimental Procedures." Peak current as a function of ACh concentration was fitted by the Hill equation. The dissociation constants of ACh were determined from its retardation of the binding of toxin (see "Experimental Procedures"). Where two values are given, the data were better fitted by a two-site than a one-site fit. Means, S.E.s, and number of independent experiments are given. ␥Glu-183, and of the aligned residues, ␦Asp-180 and ␦Glu-189, on the binding of two additional agonists, tetramethylammonium, the smallest agonist of the ACh receptor, and the bisquaternary agonist, succinyldicholine (Fig. 2). The dissociation constant for tetramethylammonium, K TMA , for ␣ϩϩ␥D174N was 180 times larger than for wild-type ␣ϩϩ␥ (Table II). By contrast, K TMA values for ␣ϩϩ␥D174E and for ␣ϩϩ␥E183Q were not appreciably different than K TMA for wild-type (Table II). The dissociation constant for succinyldicholine, K SuCh , was 300 times greater for ␣ϩϩ␥D174N than for wild-type ␣ϩϩ␥ (Table II). The dissociation constants of ␣ϩϩ␦D180N for the three agonists were also 2 orders of magnitude greater than those of wild-type ␣ϩϩ␦ (Table II). Thus, the mutations to Asn of the aligned residues ␦Asp-180 and ␥Asp-174 had similar effects, even though wild-type ␣ϩϩ␦ had an order of magnitude higher affinity for the agonists than did wild-type ␣ϩϩ␥.
None of the mutations tested had much of an effect on the binding of ␣-bungarotoxin. The concentrations giving half-maximal binding (in 24 h) were 130 Ϯ 30 pM for wild-type ␣ϩϩ␥, 160 Ϯ 30 pM for ␣ϩϩ␥D174N, 170 Ϯ 30 pM for ␣ϩϩ␥E176Q, and 290 Ϯ 110 pM for ␣ϩϩ␥E183Q (n ϭ 3 for each). Mutations of ␦Asp-180 and ␦Glu-189 also had little effect on the binding of ␣-bungarotoxin by ␣ϩϩ␦ receptor (Czajkowski et al., 1993). DISCUSSION We previously identified two residues in the ␦ subunit, ␦Asp-180 and ␦Glu-189, that could contribute to the negative subsite of the ACh binding site formed between ␣ and ␦ (Czajkowski and Karlin, 1991;Czajkowski et al., 1993;Czajkowski and Karlin, 1995). One of these, ␦Asp-180, was cross-linked via a 9-Å cross-link to one of the adjacent Cys residues, ␣Cys-192 or ␣Cys-193, that form the binding site disulfide, and the mutation of ␦Asp-180 to Asn caused a 2 orders of magnitude decrease in the apparent affinity of the ␣ϩϩ␦ receptor complex for ACh, measured both by activation and by binding. Although ␦Glu-189 was not cross-linked to ␣Cys-192/193, its mutation caused a 1 order of magnitude decrease in the apparent affinity for ACh, also measured by both methods. ACh binding was uniquely sensitive to the mutation of these two residues among the 11 negatively charged residues in a stretch of 60 residues just preceding the first membrane-spanning segment of the ␦ subunit. Because ␥ and ␦ each forms an ACh binding site with an ␣ subunit (Kurosaki et al., 1987;Blount and Merlie, 1989;Pedersen and Cohen, 1990;Sine and Claudio, 1991;Czajkowski and Karlin, 1991), we determined the effects on agonist and competitive antagonist binding of mutating to Asn or Gln each Asp or Glu residue in the aligned stretch of residues in the ␥ subunit. Among the 10 acidic residues in this stretch of ␥, only the mutation of ␥Asp-174, which aligns with ␦Asp-180, caused
effects of mutations on the binding of agonists
The dissociation constants for ligands were determined from their retardation of the binding of toxin, as described under "Experimental Procedures." Where two values are given, the data were better fitter by a two-site fit than a one site fit. S.E.s and the number if independent experiments are given. Czajkowski et al (1993).
The effects of the mutations of ␥D174N and ␦D180N were most obvious in complexes of just three of the four types of receptor subunits. In mouse receptor, ␥ and ␦ substitute for each other to form a functional complex of three types of subunits with properties very similar to those of the complex with all four types of subunits (Kullberg et al., 1990;Sine and Claudio, 1991). On the other hand, ␣ and  alone, gave no current and ␣ and ␥ or ␣ and ␦, gave much lower currents than ␣, , and ␥, or ␣, , and ␦ complexes (Liu and Brehm, 1993). Therefore, to the extent that we were able to characterize the mutations by their effect on agonist-induced currents, we could be certain that we were characterizing only complexes containing the mutant subunit when either the mutant ␥ or the mutant ␦ was expressed with wild-type ␣ and  subunits. This approach was successful with most of the mutants tested.
␥D174N, however, was not expressed on the oocyte surface with ␣ and  alone. Nevertheless, ␥D174N was incorporated into a functional complex on the oocyte surface when it was expressed together with wild-type ␣, , and ␦. The EC 50 characterizing the ACh-induced current was six times the EC 50 for all wild-type ␣ 2 ␥␦. In the complexes formed by these four subunits in both surface and cytoplasmic membranes, the binding of ACh was characterized by two dissociation constants. The value for the low affinity binding site, presumably the ␣-␥ site (Blount and Merlie, 1989;Sine and Claudio, 1991), was 27 times higher than that for all wild-type ␣ϩϩ␥ϩ␦. When both ␥D174N and ␦D180N were co-expressed with wild-type ␣ and , a single dissociation constant was obtained that was greater than both dissociation constants of wild-type ␣ϩϩ␥ϩ␦. Therefore, both ␥D174N and ␦D180N affect binding, even in complexes of all four types of subunits.
The largest effects of the mutation ␥D174N were obtained after coexpression just with wild-type ␣ and . In this case, the binding of agonists and competitive antagonists were entirely to complexes in intracellular membranes. These complexes could include ␣␥ dimers, ␣␥ trimers, (␣␥) 2 tetramers, and (␣␥) 2  pentamers (Kreienkamp et al., 1995). We obtained little ␣-bungarotoxin binding and negligible toxin binding blocked by ACh when we coexpressed just ␣ and ␥ (data not shown); thus, for this analysis, we can ignore ␣␥-dimers and (␣␥) 2 tetramers. The mutation ␥D174N could, however, have shifted the distribution between ␣␥ trimers and ␣ 2 ␥ 2 pentamers. If these complexes had different binding properties, the effect of ␥D174N on binding could have been due to the shift in the distribution of complexes. ␥Asp-174, however, is not in a region found to effect receptor assembly (Kreienkamp et al., 1995).
Furthermore, the mutation of the aligned residue, ␦D180N, had a comparable effect on ACh binding in functional ␣ 2 ␦ 2 pentamers expressed on the oocyte surface (Czajkowski et al., 1993).
It is remarkable that the mutation to Asn of either ␥Asp-174 or ␦Asp-180 had a 10 -100 times greater effect on the binding of agonists than on the binding of competitive antagonists (Tables II and III). Agonists and competitve antagonists bind to overlapping sites; some of the same residues are labeled by agonist and antagonist affinity labels (Kao et al., 1984;Galzi et al., 1990;Cohen et al., 1991;Middleton and Cohen, 1991), and the mutations of these residues affect the binding of both agonists and antagonists, albeit not equally. Furthermore, ACh receptors altered by chemical modification (Karlin and Winnik, 1968) or by mutations (Bertrand et al., 1992) can be activated by ligands that normally are competitive antagonists, consistent with the overlap of agonist and competitive antagonist sites. Thus, the difference in the effects of mutations on agonist and competitive antagonist binding was not likely due to completely separate sites for the two types of ligands. Nevertheless, the differences could have resulted from nonidentical contacts of agonists and antagonists within overlapping binding sites. In the results presented here, it is clear that the difference was not dependent on the number of ammonium groups on the ligands; the binding of all three agonists was affected much more than the binding of all three antagonists (Tables II and III).
The binding of agonists promotes the transitions of the receptor from the resting state to the higher affinity active and desensitized states. Therefore, alteration of the kinetics of these transitions could affect both the EC 50 and K ligand . The binding of competitive antagonists does not normally promote activation, and the observed K ligand for a competitive antagonist is likely to be simply an equilibrium dissociation constant. The uniformly greater effects of the mutations on agonist binding than on antagonist binding is therefore likely due to effects on the kinetics of the agonist-induced transitions. Consistent with this interpretation, coexpression of ⑀D175N (⑀Asp-175 aligns with ␥Asp-174 and ␦Asp-180), with wild-type ␣, , and ␦, caused an 8-fold increase in EC 50 , most of which could be accounted for by a decrease in the channel opening rate .
Mutations of other residues in or close to the ACh binding site also affected the kinetics of state transitions. Based on photoaffinity labeling, ␣Tyr-93 and ␣Tyr-190 are in or close to the ACh binding site (Dennis et al., 1988;Abramson et al., 1989;Galzi et al., 1990;Cohen et al., 1991). Mutations of these residues also had a much greater effect on agonist binding than on competitive antagonist binding (Sine et al., 1994). An analysis of ␣Y190F showed that the 2 orders of magnitude increases
of mutations on the binding of competitive antagonists
The dissociation constants for ligands were determined from their retardation of the binding of toxin, as described under "Experimental Procedures." Where two values are given the data were better fit by a two-site fit than a one site fit. S.E.s and the number of independent experiments are given. in the EC 50 were attributable to changes both in binding and in gating kinetics (O'Leary and White, 1992;Chen et al., 1995). The structure of the binding site changes on the binding of agonists (Karlin, 1969;Damle and Karlin, 1980) or on the transition to the desensitized state (Galzi et al., 1991b). The involvement of residues in or close to the ACh binding site in the agonist-induced transitions between states is consistent with these residues moving during the transitions. We have determined by cross-linking that ␦Asp-180 is close to the ␣-␦ ACh binding site (Czajkowski and Karlin, 1995), and by symmetry, ␥Asp-174 and ⑀Asp-175 are also likely to be close to the ␣-␥ and ␣-⑀ ACh binding sites. Nevertheless, we do not know whether or not these residues participate directly in the binding of agonists. One possible mechanism, however, that places these residues in the binding sites and incorporates their movement as an integral part of activation is that on the binding of agonist to the ␣ subunit, the side chain of the Asp on the neighboring subunit, ␥Asp-174, ⑀Asp-175, or ␦Asp-180, moves closer to the agonist ammonium group, increasing the electrostatic interaction between these oppositely charged groups and bringing other side chains into more favorable interactions with the agonist. We previously postulated a negative subsite that interacted with the ammonium group of agonists and inferred that on the binding of agonist this negative subsite moved a few Ångstroms closer to the binding site disulfide (Karlin, 1969). We now suggest that ␥Asp-174 contributes to the negative subsite of the ACh binding site formed between ␣ and ␥, and similarly ␦Asp-180 contributes to the negative subsite of the ACh binding site formed between ␣ and ␦ Karlin, 1991, 1995;Czajkowski et al., 1993). The postulated contraction of the binding site crosses the subunit interface and could trigger the sliding of neighboring subunits. This relative movement of the subunits could be a mechanism for the propagation of structural changes across the membrane, from the ACh binding sites, in the extracellular domain, to the gate, close to the intracellular end of the channel (Czajkowski et al., 1993;Akabas et al., 1994;Unwin, 1995). | 7,009.8 | 1996-06-07T00:00:00.000 | [
"Chemistry"
] |
Vortex dynamics in the two-dimensional BCS-BEC crossover
The Bardeen–Cooper–Schrieffer (BCS) condensation and Bose–Einstein condensation (BEC) are the two limiting ground states of paired Fermion systems, and the crossover between these two limits has been a source of excitement for both fields of high temperature superconductivity and cold atom superfluidity. For superconductors, ultra-low doping systems like graphene and LixZrNCl successfully approached the crossover starting from the BCS-side. These superconductors offer new opportunities to clarify the nature of charged-particles transport towards the BEC regime. Here we report the study of vortex dynamics within the crossover using their Hall effect as a probe in LixZrNCl. We observed a systematic enhancement of the Hall angle towards the BCS-BEC crossover, which was qualitatively reproduced by the phenomenological time-dependent Ginzburg-Landau (TDGL) theory. LixZrNCl exhibits a band structure free from various electronic instabilities, allowing us to achieve a comprehensive understanding of the vortex Hall effect and thereby propose a global picture of vortex dynamics within the crossover. These results demonstrate that gate-controlled superconductors are ideal platforms towards investigations of unexplored properties in BEC superconductors.
The phase diagram of superconductors is usually drawn on the T-carrier density plane, whereas, in cold atom systems, the phase diagram is often drawn on the plane of T-1/k_F a_s, where k_F and a_s denote the Fermi vector and scattering length, respectively, and thus 1/k_F a_s represents the normalized interaction strength. Therefore, it has not been possible to directly compare the two BCS-BEC crossover systems yet. Recently, we presented a phase diagram on the T/T_F-Δ/E_F plane for 2D superconductors [1], which is free from the parameters specific to superconductors. On the other hand, in the 2D 6Li system, the experimental determination of Δ/E_F as a function of the interaction strength was recently reported [2]. Combining the phase diagram of the same system published in 2015 [3], we are able to draw a phase diagram on the T/T_F-Δ/E_F plane for the 2D 6Li system. This allows us to construct a unified experimental phase diagram of the BCS-BEC crossover, which is displayed in Supplementary Fig. 1. Though there remain discrepancies due to the difference in definition of each parameter, the phase diagram shows that the data of LixZrNCl and 6Li just overlap with each other and encourages us to consider the BEC limit from the BCS side.
Supplementary Figure 1. A unified BCS-BEC crossover phase diagram from combined experimental data for cold atom superfluids 6 Li and density-controlled superconductors LixZrNCl. Starting from the BCS side, the data points for the 2D superconductor LixZrNCl [1], the system studied in this work, are drawn. Here T * is the gap-opening temperature and Tc is the critical temperature. Starting from the BEC side, the purple data points correspond to the cold atom superfluid 6 Li, taken from previous reports [2,3]. Drawing the phase diagram for both systems of superconductivity and cold atom superfluidity on a common scale reveals the achieved overlap of data points in the BCS-BEC crossover. This ultimately motivates the study of BEC superconductivty starting from the BCS regime in LixZrNCl.
Supplementary Note 3: Basic transport properties and doping level determination 49 To determine the doping level, i.e., Li content x, the Hall effect was used. The linear slope of Supplementary 50 Fig. 2a was used to determine x and a systematic dependency of the slope with changing Li ion concentration is 51 apparent. The device operation for intercalation was analogous to previously established work [4]. To compute the Li 52 content, the Hall coefficient at 150 K was measured and we assume that each Li ion supplies one electron to the ZrNCl 53 system. For the determined doping levels, superconductivity was achieved, and we show the longitudinal resistivity as 54 a function of temperature for each in Supplementary Fig. 2b. With decreasing doping level, the critical temperature 55 T c increased from 11.4 K to 16.8 K for x = 0.47 and 0.0040 respectively. T c was determined by the temperature at 56 which the resistivity is half of the normal-state value at 30 K. The superconducting transition is sharp for high doping 57 levels but is significantly broadened towards lower doping levels. This may be explained by the enhanced fluctuation 58 strength (β in Supplementary Figs. 6 and 7) towards the BCS-BEC crossover. In addition, the dimensional crossover 59 from an anisotropic 3D superconductor to a 2D superconductor can be relevant since the dimensional crossover occurs 60 around x ∼ 0.1, as discussed in previous works [1,4]. In the low doping regime, the transition is better described by in Li x ZrNCl are shown. To calculate E F and k F from n, an ideal parabolic band dispersion in two-dimensions was 81 employed. k F = (4πn layer /ss ) 1/2 and E F = 2 k 2 F /2m * , with n layer the 2D carrier density per layer, s the spin degree 82 of freedom, s the valley degree of freedom, the reduced Planck constant and m * the effective electron mass. In 83 the case of Li x ZrNCl, s = s = 2 and m * = 0.9m 0 , as reported before [5]. Here m 0 denotes the free electron mass. Figure 4. Doping dependence of the ratio of the mean free path l and superconducting coherence length ξ0. Red dots represent the data points from this work while black circles represent the data from the previous work [1]. The green area highlights the moderately clean regime where 0.5 < l/ξ0 < 5, above or below which the Hall anomaly is seemingly not observable as described by Hagen et al. [6]. The inset shows the Hall angle dependence on l/ξ0. Almost no modulation of the l/ξ0 ratio with doping is observed, which is also reflected in the weak dependence of the Hall angle on this ratio. This concludes that the system stays in the relevant range of l/ξ0 over the course of this work and the observed trend in the Hall angle vs doping is not dominated by a change of this ratio.
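As a quick numerical check of the free-electron-like relations quoted above (with the lost subscripts restored, they read k_F = (4π n_layer/(g_s g_v))^(1/2) and E_F = ħ² k_F²/(2m*), with spin and valley degeneracies g_s = g_v = 2 and m* = 0.9 m0), the following Python snippet evaluates k_F and E_F for assumed per-layer carrier densities; the density values themselves are placeholders, not numbers taken from the paper.

import numpy as np

hbar = 1.054571817e-34      # J*s
m0 = 9.1093837015e-31       # kg
e = 1.602176634e-19         # C

g_s, g_v = 2, 2             # spin and valley degeneracies in LixZrNCl
m_eff = 0.9 * m0            # effective mass reported for LixZrNCl

def fermi_params(n_layer_cm2):
    """Return (k_F in 1/m, E_F in meV) for a 2D parabolic band with the above degeneracies."""
    n = n_layer_cm2 * 1e4                        # convert cm^-2 to m^-2
    k_f = np.sqrt(4.0 * np.pi * n / (g_s * g_v))
    e_f = (hbar * k_f) ** 2 / (2.0 * m_eff)
    return k_f, e_f / e * 1e3                    # E_F converted to meV

for n_layer in (1e13, 5e13, 1e14):               # assumed per-layer densities in cm^-2
    k_f, e_f_meV = fermi_params(n_layer)
    print(f"n = {n_layer:.0e} cm^-2 -> k_F = {k_f:.2e} m^-1, E_F = {e_f_meV:.1f} meV")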
Supplementary Note 6: Theoretical determination of T c by Hartree approximation 89 As explained in Methods, the time-dependent Ginzuburg-Landau (TDGL) model is given by where γ = π/8T * , λ = −(1/2T * )∂T * /∂E F , ξ = φ 0 /2πB c2 (0), A(r) = Bxŷ, and ∆(r, t) is the superconducting 91 order parameter varying in space and time. 92 We consider the GL Hamiltonian corresponding to Eq. (S8) as where a is another phenomenological parameter. Replacing |∆| 4 with 2 |∆| 2 |∆| 2 in Eq. (S9) by the Hartree approx-94 imation, we obtain the approximated Hamiltonian (S10) Here, the renormalized mass satisfies the self-consistent equation: where · · · is the canonical average using the Hamiltonian H GL and the temperature T . Expanding ∆(r) as Here, N and q are the Landau level index and its degeneracy index, 99 respectively, H N (z) is the N th Hermite polynomial, l := φ 0 /2πB, and the dimensionless magnetic field is defined 100 as h := (ξ/l) 2 = B/B c2 (0). Then, the self-consistent equation (S11) may be rewritten as where c is a cutoff parameter representing the limitation of the gradient expansion in the GL Hamiltonian (S9).
We further rewrite Eq. (S12) as = ( Assuming large enough c/h and taking c/h → ∞ in [· · · ] in the right-hand side, we can obtain where ψ(z) is the digamma function, which satisfies ψ(z) = lim n→∞ [ln n − for z → ∞. Then, we can obtain the equation for zero magnetic field as (S14) Based on the right-hand side of Eq. (S14), we define the renormalized T c as 108 (S15) Note that the spontaneous symmetry breaking does not occur at finite temperatures since we consider a 2D system, 109 and T c characterizes a typical temperature for significant changes in physical quantities such as conductivity.
Performing the time and space integrations and the canonical average in Eq. (S20), we can finally obtain the following 126 formulas: where µ N := + h + 2N h, and a cutoff c is introduced similarly to Eq. (S12).
Thus, for low enough temperatures, where + h h and σ V ab dominates over the normal-state conductivity σ N ab , 132 the resistivities are given as ρ xx : , and the Hall angle Θ H follows By explicitly setting γ = π/8T * and λ = −(∂T * /∂E F )/2T * in (S25), we finally obtain the opposite sign to that in the normal state.
To gain further insight into the dynamics of the condensate, it would be helpful to rewrite ∆(r, t) as |∆(r, t)|e^{iχ(r,t)} and decompose the TDGL equation (S8), multiplied by e^{−iχ(r,t)}, into real and imaginary parts. The GL Hamiltonian (S9) is introduced in eq. (S27a), which describes the relaxation dynamics of the condensate. In eq. (S27b), we introduce the notation, the latter of which is nothing but (S18). We can regard Eq.
which follows from Eqs. (S27a) and (S27b), and the Ampère-Maxwell equation (see the derivation in [9]). Let us examine the physical meaning of each term in order to confirm that this equation is really a momentum balance relation. We start with the right-hand side. Here the electric and magnetic fields are denoted by ε = −∂A/∂t and h = ∇ × A. The first two terms on the right-hand side represent the electromagnetic Lorentz force. The third term on the right-hand side is the dissipation force due to the time variation of the modulus |∆| (this mechanism was first pointed out by Tinkham [10]). The last term on the right-hand side of Eq. (S30) is the other dissipation force, due to conversion between the superfluid and normal components. This dissipative force is caused by the time variation of the phase of ∆. Thus these two terms show that vortex motion is the source of the dissipative force. On the left-hand side, P represents the hydrodynamic momentum flux tensor, which is given in the present case by an expression that coincides with the momentum flux tensor in the London equation [11]. We then finally identify −ρs Q in the first term on the left-hand side with the superfluid component of the momentum density. We see that this term has the same sign as that of ρs js. When λ < 0, the momentum density is antiparallel to the electric current density js, and thus the condensate corresponds to positive electron density and the dynamics is similar to electron motion.
When λ > 0, on the other hand, the momentum density is parallel to the electric current density js, and thus the condensate corresponds to a deficit of electron density and the dynamics is similar to hole motion. In this section, we argue that the dynamics of the condensate described by the TDGL equation is similar to that of electrons (holes).
Figure 5 (caption). Gap-opening temperature T* and superconducting critical temperature Tc as previously established [1]. The black dashed line is the fitting curve of T*(EF).
Supplementary Note 10: Mean-field critical temperature T *
We discuss the difference between ∂T*/∂µ and ∂T*/∂EF for the 2D Fermi gas model (S39) within the mean-field approximation. Using the two-particle binding energy EB instead of the coupling constant g, we obtain the equation to determine T* for a given µ [17, 18]. If we reduce the doping x of LixZrNCl further than in the present experiment, the distance between conduction electrons can become larger than the range of the effective attractive interaction. Then, the system may be described by the 2D Fermi gas model with a contact attractive interaction (g > 0), where ψσ(r) and ψ†σ(r) are the fermion field operators. In this model, the crossover from the BCS regime to the BEC [...] where θ(z) is the Heaviside step function, P(···) denotes the Cauchy principal value, and µ* is the mean-field chemical potential.
When vortices flow parallel to the superflow, the transport current Jtr and the vortex velocity vv are anti-parallel, and thus E and Jtr × B are parallel, i.e. σxx = 0 and σxy < 0 for B parallel to the z-axis. It then follows that ΘH = −π/2. | 2,962.4 | 2022-09-11T00:00:00.000 | [
"Physics"
] |
Basic Power Inductor Design
A basic procedure for designing a power inductor is presented. Many papers and textbook chapters offer more sophisticated methods, but it is harder to find a clear outline of a basic design process.
Studying and practicing a basic design process is useful for beginners to understand the fundamental tradeoffs in design and to build intuition. For more advanced work, the basic design process is useful as it avoids relying on assumptions that might not be valid with, for example, high-frequency loss effects that are ignored in the development of some more sophisticated methods, or that constrain other methods to narrow, specific cases.
Two options are outlined: starting with a saturation constraint, and then checking the core/winding loss balance; or, starting by optimizing the core/winding loss balance, and then checking the saturation constraint.
Effect of a gap
The weird thing about designing a power inductor is that one ordinarily puts an air gap in the magnetic path. (Any non-magnetic, non-conductive material can be used, as it's functionally equivalent to air for the purpose of magnetic fields.) That adds reluctance and decreases inductance: L = N²/(Rcore + Rgap) (1) where N is the number of turns and Rcore and Rgap are the reluctances of the core and gap. From the point of view of creating inductance, that's a bad thing; or, more from a design point of view, if we know the inductance we need, increasing the reluctance will require a larger number of turns to achieve that inductance, and thus will increase winding resistance and loss. So it's initially a mystery as to why anyone would do that.
The reason for including a gap is to reduce the flux and flux density in the core, which can be calculated as: B(t) = Φ(t)/Ac = Ni(t)/[Ac (Rcore + Rgap)] (3) where Ac is the cross-sectional area of the core and i(t) is the winding current. Having B too high is a concern for two reasons: core loss and saturation.
So the gap size needs to be chosen to avoid the two hazards of too high a flux density in the case of a small gap (3) and too low inductance in the case of a big gap (1). (The latter problem would, in practice, become the problem of needing too many turns to get the inductance we need in a given design problem.) It starts to seem that we have a complicated design process in which there are multiple constraints and considerations. But we can simplify the design process by assuming that we are only interested in the designs that meet an inductance spec, Lsp. In any design, we are going to adjust something to make it meet that spec. Let's assume that we adjust the gap (and thus the gap reluctance) to meet that spec. The next section examines the implications of that constraint.
Constrained designs: reluctance chosen to meet inductance spec
If the gap reluctance is chosen to meet the inductance design spec, then for any number of turns, all the designs we are considering satisfy (1) and thus Rcore + Rgap = N²/Lsp. We can substitute that into (3) to get B(t) = Lsp i(t)/(N Ac). Or, as it is more commonly written, Lsp i(t) = N Ac B(t) (4). This is a funny equation. It is something that becomes true if we choose the right gap to satisfy the inductance spec, not something that describes a physical way to get the inductance we are looking for. But it's very useful, because it can show us the design tradeoffs more directly than considering (1) and (3) simultaneously. In particular, it shows that, for a given core area, inductance spec, and current spec, the real tradeoff is between the number of turns N and the flux density B. We can decrease B by increasing N, at the expense of higher winding loss from the larger number of turns. This conclusion is only true when we maintain constant inductance; examining (3) would lead us to the opposite conclusion about the effect of N on B. But that opposite conclusion would be for the case of constant reluctance, whereas our practical interest is in designing for a particular inductance spec, in which case we use a different reluctance with each N. The beauty of (4) is that it implicitly includes this adjustment of reluctance, so we don't need to fuss with it manually.
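To make the tradeoff concrete, here is a minimal sketch (with made-up example numbers, not from any particular design) that sweeps N at a fixed inductance spec and peak current and reports the flux density implied by Lsp·Ipk = N·Ac·Bpk.

```python
# Illustrative N-vs-B tradeoff at constant inductance (gap re-chosen for each N).
Lsp = 100e-6      # inductance spec, H (assumed)
I_pk = 5.0        # peak current, A (assumed)
Ac = 80e-6        # core cross-sectional area, m^2 (assumed)

for N in (10, 20, 40, 80):
    B_pk = Lsp * I_pk / (N * Ac)      # flux density at the peak current, T
    print(f"N = {N:3d}  ->  B_pk = {B_pk*1e3:6.1f} mT")
```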
Note that because (4) is written in terms of i(t) and B(t), it applies equally to Idc and Bdc, Ipeak and Bpeak, etc. Whatever parameter of the time-varying waveform i(t) you use on the left-hand side, the corresponding parameter for B(t) is the correct thing to use on the right-hand side. If someone asks you, "in the equation LI = NAB, is I the RMS value, the peak value, or what?", the answer is that it can be whichever of those (and more) you want, as long as you use the corresponding version of B on the other side of the equation.
Applying LI = NAB for Design: N vs. B Tradeoff.
In the previous section, we reduced two equations, based on inductance and flux in the magnetic circuit to one equation (4), based on the assumption that we will always adjust the reluctance value to get the inductance we want. (That is accomplished in practice by varying the gap length.) Now we proceed to use this for inductor design.
Based on LI = NAB we see that the tradeoff is between the number of turns N and the flux density B. (You might ask what I mean by B. Peak? Amplitude of the AC component? DC value? The answer is that all of these are inversely proportional to the number of turns, so I don't need to specify yet.) Increasing N is bad because it leads to high winding losses. On the other hand, decreasing N and thus increasing B is bad because it leads to high core loss and/or magnetic saturation. Thus, we'd generally like to decrease N until we run into trouble with either saturation or core losses. In some situations, saturation is the limiting factor, and we decrease N as far as possible without getting too close to saturation ("saturation-limited designs"). In other cases, core loss becomes too high before we get close to saturation, and we increase N beyond what's needed to avoid saturation in order to control loss ("loss-limited designs"). Thus there are two different design criteria. One way to deal with this is to assume that one or the other (core loss or saturation) is going to be the primary consideration, proceed with a design on that basis, and then go back and check that the other issue with the size of B (saturation or core loss) has not become a problem.
Design procedure starting with a saturation limit
Figure 1 shows a design procedure starting by assuming a saturation-limited design. This is often easier than the alternative of considering the trade-off between core loss and winding loss as the initial step, because the saturation calculation is simpler than optimizing the loss trade-off. Methods of performing that optimization are addressed in a separate document. The steps in this procedure are described as follows:
1. Choosing a core. Typically one picks a standard core from a catalog.
Although some magnetics design methods use formulas to help you guess the right size core to start with, those formulas tend to be rough approximations, and if you are going to iterate anyway, you can also simply guess; there is no need to guess right. (If you do want to guess right, think about other designs you've seen and what size they are, such as the two different inductors we used in lab. Very roughly speaking, the size of an inductor should be proportional to its peak energy storage, ½ L Ipeak².)
2. Finding the minimum number of turns to avoid saturation. The core material data sheet will have a value for the saturation flux density. That is the point at which the core is completely saturated, and the inductance is the same as you get with no core at all. That is way past the point at which you would want to operate to avoid any saturation effect. Choosing to design for a flux density equal to that value would be like looking at the datasheet for a polymer material and deciding it is okay to operate at a temperature where it has not only started to soften, but has been incinerated, leaving only ashes. Unfortunately, core data sheets do not typically list a maximum recommended operating flux density. A rule of thumb might be to use 75% of the saturation flux density. As the saturation flux density is a function of temperature, one should use a value for the highest expected operating temperature, which might be 100 to 130 °C. One can also calculate more precisely what happens to the inductance as one approaches saturation [1], but that's rarely done rigorously in practice.
Figure 1 (caption): design procedure starting with the assumption that the design will be saturation limited, and then checking this assumption.
After selecting the maximum safe value of flux density to use (let's call that BMS), one simply must make sure that the operating flux density never exceeds it. In other words, the maximum value that B(t) ever reaches (let's call that Bmax) should be less than or equal to the maximum safe value: Bmax ≤ BMS. The maximum value that B(t) ever reaches will occur when i(t) is at its maximum value; so we can find Bmax from LI = NAB, using in this case the version in which B and I are both the maximum values the functions of time ever reach: L Imax = N Ac Bmax. Note that Imax may occur during some transient or other abnormal condition, and may be higher than the peak value during normal steady-state operation.
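A minimal numerical sketch of this step is given below; the core data, the 75% derating, and the current value are illustrative assumptions only.

```python
# Sketch of step 2: minimum turns to stay below the maximum safe flux density.
import math

Lsp = 100e-6       # H, inductance spec (assumed)
I_max = 8.0        # A, worst-case current, possibly a transient value (assumed)
Ac = 80e-6         # m^2, core cross-section (assumed)
B_sat_hot = 0.32   # T, saturation flux density at max operating temperature (assumed)
B_MS = 0.75 * B_sat_hot          # maximum safe flux density (rule of thumb)

# From L*Imax = N*Ac*Bmax and the requirement Bmax <= B_MS:
N_min = math.ceil(Lsp * I_max / (Ac * B_MS))
print("N_min =", N_min)
```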
3. Finding winding loss. Winding loss would be simply Irms 2 R, except that many windings have frequency-dependent resistance, and many waveforms in power electronics applications include many frequencies, such as harmonics of the switching frequency, and often also include dc. The frequency dependent resistance effects are an advanced topic not covered in this document, but it should be noted that they are more complex and severe than would be predicted by considering only the skin effect. For the purpose of understanding basic inductor design, it is useful to first ignore these effects. This could be valid in cases in which the dc component of the current dominates Irms, or in cases when special techniques, such as the use of litz wire, are used to control high-frequency loss effects in windings.
Resistance can be calculated for a given wire size and number of turns by looking up the resistance per unit length on a wire table, looking up the average length of a turn on a given core on the datasheet of a bobbin for that core, and multiplying by the number of turns to get the total length of wire. Particularly if one is implementing this procedure in a spreadsheet or computer program, it can be useful to calculate resistance for a given diameter from R = ℓρ/At where ρ is the resistivity of the conductor (Cu or Al) at the expected operating temperature, At is the cross-sectional area of the wire (the area of a circle with diameter equal to the diameter of the wire), and ℓ is the total length of wire, and equal to the average length per turn times the number of turns.
The wire size is often chosen to be the maximum that can fit with the specified number of turns. This results in the lowest dc resistance, although it does not necessarily result in the lowest ac resistance. The number of turns that can fit can be estimated based on the available area for the winding, called the window area. This value can typically be found on the datasheet of the bobbin to be used. The area available for an individual turn of wire would be the window area divided by the number of turns, except that when wire is packed into the window, there is still lots of air space between turns, and the insulation also takes up space. These factors can be estimated by using a "packing factor" Fp such that At = Fp Aw/N, where Aw is the window area available for the whole winding. A safe value for packing factor for a hand-wound coil is 0.5. With care, 0.6 can usually be achieved. Professionals using coil winding equipment can achieve higher values, but without special techniques or specially shaped wire, the theoretical limit is ideal square packing, which would achieve Fp = π/4.
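The following sketch strings these winding calculations together; the bobbin window area, mean length per turn, resistivity value, and packing factor are placeholder assumptions standing in for real datasheet and wire-table values.

```python
# Sketch of step 3: dc winding resistance and low-frequency winding loss,
# choosing the largest wire that fits the window (high-frequency effects ignored).
rho_cu_100C = 2.3e-8   # ohm*m, approximate copper resistivity near 100 C (assumed)
N = 42                 # turns (e.g., the N_min found above)
MLT = 0.06             # m, mean length per turn, from the bobbin datasheet (assumed)
Aw = 90e-6             # m^2, bobbin window area (assumed)
Fp = 0.5               # packing factor for a hand-wound coil
I_rms = 6.0            # A (assumed)

At = Fp * Aw / N                    # copper area available per turn, At = Fp*Aw/N
length = N * MLT                    # total wire length
R_dc = rho_cu_100C * length / At    # R = rho * l / At
P_winding = I_rms**2 * R_dc
print(f"At = {At*1e6:.2f} mm^2, R_dc = {R_dc*1e3:.1f} mOhm, P_w = {P_winding:.2f} W")
```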
4. Finding core loss. Usually this is a matter of looking it up on a chart on a core-material data sheet. Unfortunately, core-material data sheets assume purely sinusoidal excitation, and most power electronics uses other waveforms. Techniques exist to approximately correct for this [2], but that is another advanced topic beyond the scope of this introduction, and so we assume here that we can simply use the data-sheet chart. To use the chart, we need to know the frequency and a value of B. But there's a problem: we are assuming that the flux density is a sinusoidal function of time (perhaps with a dc offset), but we need a single numerical value to apply the chart. There are many different parameters of a sinusoid that we might use. We could use the peak value, or the RMS value, or the peak-to-peak value, or the dc value, and very few core material data sheets specify which of these to use. There is no way to figure out which to use from fundamental physics, because it is merely a convention for how these data sheets are expressed. So that leads to the one piece of information in this whole document that is something you couldn't derive yourself: the one truly valuable piece of information here. Ready? The value used on the datasheet is one half of the peak-to-peak amplitude. That's the same as the zero-to-peak amplitude of the ac component of the waveform, and is often written as B̂.
To find B̂, recall that LI = NAB works for any parameter of B as long as we use the same parameter of I on the other side of the equation. So B̂ = L Î/(N Ac), where Î is one half the peak-to-peak amplitude of the current (a.k.a. the zero-to-peak amplitude of the ac component of the current).
The core loss from the chart is provided in loss per unit volume, and then you need to multiply by the volume of the core. You could calculate the volume from the dimensions, but usually the volume is given directly, which makes this much easier. But you didn't need to read this to figure that out. The only special knowledge you need is what parameter of B to use. That's why that information is in bold above.
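Here is a rough sketch of this step; since a data-sheet chart cannot be read programmatically, a Steinmetz-style power-law fit is used as a stand-in, and kf, β, and all the other numbers are assumed illustrative values, not data for any real material.

```python
# Sketch of step 4: core loss from a power-law stand-in for the data-sheet chart.
Lsp = 100e-6     # H (assumed)
I_hat = 1.5      # A, half the peak-to-peak ripple current (assumed)
N = 42
Ac = 80e-6       # m^2 (assumed)
Ve = 12e-6       # m^3, core volume from the core datasheet (assumed)
kf, beta = 5.0e4, 2.5   # assumed fit: Pv [kW/m^3] = kf * B_hat[T]**beta at this frequency

B_hat = Lsp * I_hat / (N * Ac)     # half the peak-to-peak flux density, T
Pv = kf * B_hat**beta              # loss per unit volume, kW/m^3
P_core = Pv * 1e3 * Ve             # total core loss, W
print(f"B_hat = {B_hat*1e3:.1f} mT, Pv = {Pv:.1f} kW/m^3, P_core = {P_core:.2f} W")
```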
5. Comparing core and winding loss. If these are similar, the design is close to optimal for the trade-off between core loss and winding loss. If the core loss is much smaller than the winding loss, it might seem that there would be an opportunity to re-balance losses, but if you got to this point following this design procedure, there is no such opportunity, because you are already at the saturation limit, and cannot increase the flux density or decrease the number of turns. However, if the core loss is much larger than the winding loss, this means that there is an opportunity to reduce total loss by increasing the number of turns. A larger number of turns could reduce the core loss more than it increases the winding loss. One could figure out how much to increase the number of turns by trial and error, or by using the optimization method described in the second half of this article.
6. Checking total loss. If there is a loss specification, one may simply compare the sum of core and winding loss to the specification to determine whether the design meets the spec. The loss also determines the temperature rise, and with a value for thermal resistance, one may determine whether the loss results in an overheating problem. The maximum temperature would be limited either by the range where the core performs well, or the maximum temperature beyond which the insulation materials are damaged. In many cases, there is no definitive loss spec, and one may wish to evaluate the performance and cost of several designs before making a final decision. Note that if the loss is well below the specification, it may be worth considering using a smaller core to reduce cost (and the size and weight if these are important).
7. Gapping. The use of LI = NAB was predicated on the assumption that we would use whatever gap length was necessary to get the desired inductance. So the final step is to calculate or measure the gap length needed. If you do this by calculation, the first step is to calculate the necessary reluctance from (1), and then calculate the gap length needed to achieve that reluctance. The standard reluctance formula for a rectangular block gives a reasonable approximation for gaps that are short compared to their lateral dimensions, but as the gap length gets up to a significant fraction of the lateral dimensions, corrections for "fringing" effects are necessary for an accurate calculation. A simple and crude approximation is to use an effective area for the gap reluctance calculated by adding the gap length to the lateral dimensions of the gap. More precise corrections are described in [3], but in many cases, a crude approximation is the only calculation needed, and the actual length needed can be found experimentally.
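A sketch of the gap calculation with the crude fringing correction is shown below; the lateral dimensions are assumed values, and the core reluctance is neglected relative to the gap reluctance for simplicity.

```python
# Sketch of step 7: gap length from the required reluctance, with the crude
# fringing correction (add the gap length to each lateral dimension of the gap).
import math

mu0 = 4e-7 * math.pi
Lsp = 100e-6          # H (assumed)
N = 42
a, b = 8e-3, 10e-3    # m, lateral dimensions of the center post at the gap (assumed)

R_total = N**2 / Lsp  # required total reluctance, from (1)
R_gap = R_total       # assume R_core << R_gap for this sketch

lg = 0.0
for _ in range(30):                 # fixed-point iteration on the fringing correction
    A_eff = (a + lg) * (b + lg)     # crude effective gap area
    lg = R_gap * mu0 * A_eff        # from R_gap = lg / (mu0 * A_eff)
print(f"gap length ~ {lg*1e3:.2f} mm")
```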
A core-loss model for optimization
Increasing the number of turns in a winding will increase winding loss (proportional to N²), but decrease core loss (assuming the gap is adjusted for each new number of turns to maintain inductance at the specified value). That indicates that there is likely to be a number of turns that minimizes total loss. Let's figure out what that is. To do that analytically, rather than by iteratively trying numbers of turns N, we need an expression for the core loss. The manufacturer's data shows core loss at a particular frequency, as a function of flux density, to be a straight line on a log-log plot, indicating that a power law will fit the data well: PV = kf Ba^β (1) where Ba is half the peak-to-peak flux density. (This is not the same as the peak flux density, Bpk.) This approximation was invented by Steinmetz in 1891 and is called the Steinmetz equation. The constant kf is valid for only one frequency; a different value is needed for each frequency. The best-fit value of β may also be different at each frequency. Typical values of β for ferrites are in the range of 2 to 3.
The term Steinmetz equation is also often applied to a power law as a function of frequency and flux density: PV = k f^α Ba^β (2) which is also a widely useful approximation. However, plots of loss vs. frequency are not as close to straight lines, so (2) is not as good an approximation as (1).
The Appendix describes a simple approach to finding the Steinmetz model parameters from plots or tables of loss data, but first we'll discuss applying such a model to finding the optimal number of turns.
Deriving the optimal number of turns
From L Ia = N A Ba we have Ba = L Ia/(N A), and so the core loss per unit volume is PV = kf [L Ia/(N A)]^β. Thus we see that the dependence on N is N^(−β).
Knowing how core loss and winding loss vary with N, we could write total loss as P = CC N^(−β) + CW N² (4). We could find the two constants, CC and CW, and then proceed to optimize N. Or we can start with an initial design, and consider scaling N by a factor M, i.e., Nnew = M N0. We then have, in terms of the core and winding loss of the initial design, PC,0 and PW,0, respectively, P = PC,0 M^(−β) + PW,0 M² (5). To find the optimum value of M, we take the derivative and set it equal to zero: dP/dM = −β PC,0 M^(−β−1) + 2 PW,0 M = 0 (6). Solving for M: M = [β PC,0/(2 PW,0)]^(1/(β+2)) (7). We can also multiply (6) through by M to obtain β PC,0 M^(−β) = 2 PW,0 M² (8), i.e., at the optimum the core loss is 2/β times the winding loss. A common rule of thumb is to make core losses equal to winding losses. This shows that that's only exactly correct in the case that β = 2. In the more typical case of β ≈ 2.5, the core losses should be 4/5 of the winding losses to minimize total loss.
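As a quick check of these formulas, the sketch below computes the optimal rescaling factor M and the resulting losses for an assumed (illustrative) initial design; the printed core-to-winding loss ratio should come out to 2/β.

```python
# Turns-rescaling sketch: optimal M = N_new/N_0 from assumed initial losses.
beta = 2.5
P_core0, P_wind0 = 3.0, 0.8   # W, losses of the initial design (assumed values)

M_opt = (beta * P_core0 / (2.0 * P_wind0)) ** (1.0 / (beta + 2.0))
P_core = P_core0 * M_opt**(-beta)   # core loss scales as M**(-beta)
P_wind = P_wind0 * M_opt**2         # winding loss scales as M**2
print(f"M_opt = {M_opt:.3f}, P_core = {P_core:.2f} W, P_wind = {P_wind:.2f} W, "
      f"ratio = {P_core/P_wind:.2f} (expect 2/beta = {2/beta:.2f})")
```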
Applicability
The results derived in the previous section for the optimal ratio of winding loss to core loss and the optimal number of turns apply as long as the resulting design does not result in saturation at the maximum current the inductor is required to handle. If a check of saturation fails, one must increase the number of turns to meet that requirement. This will increase the winding loss and decrease the core loss, resulting in an overall increase in loss.
The above analysis is based on several other assumptions: High-frequency winding loss effects (skin effect and proximity effect) have been neglected. The results also hold if the ac resistance factor (ratio of ac resistance to dc resistance) is constant, independent of the number of turns, which would be true with some kinds of winding designs optimized for ac resistance after the number of turns is chosen.
Manufacturers' core loss data and the Steinmetz power law model are based on sinusoidal waveforms, but we are often interested in other waveforms. A power-law model still works well, but the correct constants for that power law will be different.
Procedure for Inductor Design
The result above can be applied in an inductor design procedure as shown in this flowchart. It is essential to check for saturation, because in some cases, particularly in an inductor with a large dc current component, or at low frequencies where core loss is small, the value of N calculated above would lead to saturation.
Another option is to start with the assumption that the design is limited by saturation, find the minimum number of turns to avoid saturation, and only optimize N if necessary, following the flowchart in the first section of this article.
Appendix: Fitting the Steinmetz Equation I. Single Frequency
To find the parameters for a single-frequency Steinmetz model (1), kf and β, all that is needed is a pair of loss data points at different flux densities (but the same frequency). The value of β follows from the ratio of the two points: β = log(PV1/PV2)/log(B1/B2) (12). Then the value of kf can be found simply by plugging in the data for one point.
If the data follows the equation exactly, this approach is adequate. If the data is noisy, or deviates from this law, fitting the data to only two points will be less accurate than using more data points. Two approaches to this are: 1. Plot the data on a log-log scale, and graphically choose lines that fit the portion of the data that you are most interested in. Then record two points from the line, not from the original data, to use in the above equations.
2. Take the log of the data, and perform a linear fit to this data in log space.
II. Incorporating frequency dependence
To derive a power law for frequency dependence as well as amplitude dependence (2), we need, in addition to a pair of loss data points at different flux densities but the same frequency, a pair of points at different frequencies but the same flux density. The value of β is found from the first pair of points as above (12). Then similarly, for the two points at different frequencies, α = log(PV1/PV2)/log(f1/f2). Again, the value of k can be found simply by plugging in the data for one point.
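The two-point fits described above are easy to script; the sketch below uses invented chart readings (they are not from any specific material) to extract β, kf, and α.

```python
# Two-point Steinmetz fits from assumed chart readings (illustrative data only).
import math

# Same frequency, different flux density (B in T, Pv in kW/m^3):
B1, P1 = 0.050, 30.0
B2, P2 = 0.100, 170.0
beta = math.log(P1 / P2) / math.log(B1 / B2)
kf = P1 / B1**beta                   # single-frequency model: Pv = kf * B**beta

# Same flux density, different frequency, for the f**alpha term:
f1, Pf1 = 100e3, 170.0
f2, Pf2 = 200e3, 450.0
alpha = math.log(Pf1 / Pf2) / math.log(f1 / f2)

print(f"beta = {beta:.2f}, kf = {kf:.1f}, alpha = {alpha:.2f}")
```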
Although the assumption that PV is proportional to B^β is very good, the proportionality to f^α is less accurate. To get a good fit in the region of interest it is important to choose data points in the frequency range of interest. Examining a log-log plot of PV vs. f is useful for determining how wide a frequency range can be modeled using one set of parameters. | 5,672.8 | 2021-04-11T00:00:00.000 | [
"Engineering",
"Physics"
] |
Finite-time generalized synchronization of nonidentical delayed chaotic systems ∗
Abstract. This paper deals with the finite-time generalized synchronization (GS) problem of drive-response systems. The main purpose of this paper is to design suitable controllers to force the drive-response systems to realize GS in a finite time. Based on finite-time stability theory and nonlinear control theory, sufficient conditions are derived that guarantee finite-time GS. This paper extends some basic results from generalized synchronization to delayed systems. Because finite-time GS means optimality in convergence time and has better robustness, the results in this paper are important. Numerical examples are given to show the effectiveness of the proposed control techniques.
As one of the kinds of synchronization, CS requires two identical systems, but it is difficult to find two identical systems in practice because of parameter mismatch and distortion. Therefore, it is necessary to investigate generalized synchronization between two nonidentical systems. GS, which means that the state of the response system synchronizes with that of the drive system through a nonlinear smooth functional mapping, has received much attention. GS is an extension of CS, and is a robust form of synchronization [21,40]; that is, when the parameters of the systems change, the systems can still preserve GS. Because of this characteristic, GS has many applications in practice, such as secure communication and biological systems.
Up to now, there are two main approaches used to study GS.One is the auxiliary system approach [1,6,12,15,20,46], introduced by Abarbanel et al. [1], which makes an identical duplication of the response system that is driven by the same driving signals.If the response system and the auxiliary system achieve CS, then the drive system and the response system can realize GS.This approach has the disadvantage that it fails to decide what kind of functional relation exists between the drive and response systems.The other is to design a controller to force two coupled systems to satisfy a prescribed functional relation [35].If we want to know the exact functional relation between two systems, this approach is effective for studying GS of networks.
Recently, there have been papers about finite-time consensus [42], stability [2,3,7,44,52], finite-time boundedness [14], finite-time parameter identification [33], stabilization of general control systems [16,19,28,29] and finite-time synchronization of networks [8,10,17,34,49,50]. It is noticed that most results about synchronization are related to an infinite-time asymptotic process; that is, only when time tends to infinity can the drive-response systems reach GS, and in theory this will not occur in a finite time. But in practice, especially in physical and engineering systems, we often require that systems achieve GS in a finite time, so it is significant to investigate the finite-time GS of networks. In [49], finite-time synchronization between two different chaotic systems with uncertain parameters was investigated. Yang and Cao discussed finite-time synchronization of complex networks with stochastic disturbances [50]. In [8], Chen and Lü investigated finite-time synchronization of complex dynamical networks. In [17], finite-time lag synchronization of delayed neural networks was investigated.
To the best of the authors' knowledge, the finite-time GS of two nonidentical systems has not been studied in detail yet; this motivated our current research interest. In this paper, we investigate the finite-time GS between nonidentical systems. Control laws are designed to achieve finite-time GS of drive-response systems. Based on finite-time stability theory and nonlinear control theory, finite-time GS conditions are given. The main contribution of this paper is that it provides an effective controller to realize the finite-time GS of nonidentical systems. Thus, it has more practical applications than infinite-time results.
The remainder of this paper is organized as follows. In Section 2, the model formulation and some preliminaries are given. The main results are stated in Section 3. Three illustrative examples are given to demonstrate the effectiveness of the proposed results in Section 4. Finally, the conclusion is drawn in Section 5.
Notations and preliminaries
In this section, some elementary notations and lemmas are introduced which play an important role in the proof of the main results in Section 3.
Notation. Throughout this paper, R^n denotes the n-dimensional Euclidean space. The superscript "T" denotes vector transposition. ‖·‖ is the Euclidean norm in R^n. If A is a matrix, ‖A‖ denotes its operator norm, i.e. ‖A‖ = sup{‖Ax‖ : ‖x‖ = 1} = √(λmax(A^T A)), where λmax(A) denotes the largest eigenvalue of A.
Consider the following drive-response chaotic systems (1) and (2), where x = (x1, x2, . . ., xn)^T ∈ R^n and y = (y1, y2, . . ., ym)^T ∈ R^m are the state vectors of the drive system and the response system, respectively. f(·) and g(·) are continuous vector functions. A, B and C are system matrices of proper dimensions. τ(t) is the time-varying delay of system (2), with 0 ≤ τ1 ≤ τ(t) ≤ τ2 and τ̇(t) ≤ h, where τ1, τ2, and h are constants. The initial values of systems (1) and (2) are given accordingly, where x0 is a vector. Assumption 1. The function g satisfies a Lipschitz condition; that is, there exists a positive constant L > 0 such that ‖g(u) − g(v)‖ ≤ L‖u − v‖ for all u, v. Consider the controlled response system (5), where u(t) is a controller. Our aim is to design a proper controller to force the controlled response system to achieve GS with the drive system in a finite time.
Definition 1. Given a vector mapping Φ : R^n → R^m, if there exists a constant t* > 0 (t* depends on the initial values x(0) and y(0)) such that lim_{t→t*} ‖y(t) − Φ(x(t))‖ = 0 and ‖y(t) − Φ(x(t))‖ = 0 for t > t*, then we say systems (1) and (5) achieve generalized synchronization in a finite time.
Main results
Let the synchronization errors between drive system (1) and controlled response system (5) be e(t) = y(t) − Φ(x(t)); then the error system between (1) and (5) can be described as (6). Therefore, the finite-time GS problem between systems (1) and (5) is equivalent to the finite-time stability problem of the error system (6) at the origin. In order to achieve this aim, we design the controller u(t) as in (7), where sgn(e(t)) = diag(sgn(e1(t)), sgn(e2(t)), . . ., sgn(em(t))), Γ = diag(γ1, γ2, . . ., γm) > 0 is a positive matrix which will be determined later, k > 0 is a tunable constant, the real number α satisfies 0 < α < 1, and DΦ(x) is the Jacobian matrix of the mapping Φ(x).
Remark 1. When 0 < α < 1, the controller u(t) is a continuous function with respect to t, which leads to the continuity of the controlled response system (5) with respect to the system state. If α = 0, u(t) becomes a discontinuous one, similar to the controllers that have been considered in [31]. If α = 1 in the controller (7), then it becomes a typical feedback controller, which can only realize asymptotic synchronization in infinite time.
Substituting ( 7) into ( 6), we have e T (s)e(s) ds Theorem 1. Suppose Assumption 1 holds and there exists a constant l > 0 satisfying the following conditions: then under controller (7), the controlled drive-response systems (1) and (5) will realize finite-time GS in a finite time where e T (s)e(s) ds.
Calculating the time derivative of V (t) along the trajectory of (8), we have e T (s)e(s) ds e T (s)e(s) ds e T (s)e(s) ds From Lemma 2, we get Therefore, By Lemma 1, V (t) converges to zero in a finite time, and the finite time is estimated by Hence, the error vector e(t) will converge to zero within t * .Consequently, under controller (7), systems (1) and ( 5) realize finite-time GS.This completes the proof of the theorem.
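For reference, the finite-time convergence step invoked above via Lemma 1 is typically based on a standard comparison lemma; we record a commonly used version here as an aid to the reader (the exact statement of Lemma 1 in the paper may differ in its constants or form). If V(t) is a continuous, positive-definite function satisfying
\[
\dot V(t) \le -k\,V(t)^{\alpha}, \qquad k>0,\; 0<\alpha<1,
\]
then V(t) reaches zero in a finite time t* bounded by
\[
t^{*} \le t_{0} + \frac{V(t_{0})^{1-\alpha}}{k\,(1-\alpha)},
\]
and V(t) ≡ 0 for all t ≥ t*.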
If C = 0, then the controlled response system becomes we design and the error system becomes Theorem 2. Suppose Assumption 1 and the following condition hold: then under controller (12), the controlled drive-response systems (1) and (11) will realize finite-time GS in a finite time where Calculating the time derivative of V (t) along the trajectory of (13), we have Therefore, By Lemma 1, V (t) converges to zero in a finite time, and the finite time is estimated by Hence, the error vector e(t) will converge to zero within t * .Consequently, under controller (12), systems (1) and ( 11) realize finite-time GS.This completes the proof of the theorem.
Remark 2. The sufficient conditions given in Theorems 1 and 2 efficiently avoid the problem that networks realize generalized synchronization only as time tends to infinity, and this is of significant and basic importance in real engineering applications of network synchronization.
Remark 3. Our results have more practical significance than those of [35]; the result in [35] is invalid for delayed systems, while Theorem 1 remains valid for delayed systems, so it is an important and useful extension of [35].
…9953 < 0; thus, from Theorem 1, systems (16) and (17) realize finite-time GS. The initial values are taken as x(0) = (1.5, 2.0, 3.0), y(0) = (0.1, 0.2). Figure 2 shows the relationships between the corresponding state variables of the drive-response systems. Obviously, they exhibit generalized synchronization, not complete synchronization. Figure 3 shows the generalized synchronization errors between the drive-response systems. Figure 4a presents the generalized synchronization time versus α for three different values of k. Figure 4b presents the generalized synchronization time versus k for three different values of α. As shown in Fig. 3, e1, e2, e3 have been stabilized to zero in finite time, that is, systems (16) and (17) reach finite-time GS.
Example 2. Consider the drive Rössler system (18); the response system is a three-dimensional cellular neural network [5], given by (19).
Conclusions
In this paper, by using a Lyapunov-Krasovskii functional, we investigate the finite-time generalized synchronization problem of nonidentical delayed chaotic systems. Control laws are designed to realize the finite-time generalized synchronization of two chaotic systems. The main contribution of this paper is that GS can be realized in finite time. Furthermore, we would like to point out that it is still challenging to investigate the finite-time synchronization of complex networks and fractional neural networks with and without time delays, and it might be possible to extend the current results to stochastic chaotic systems with discontinuous dynamic behaviors, inspired by [24-27]. These will be considered in future papers.
Figure 4 .
Figure 4. (a) The generalized synchronization time versus α for three different values of k.(b) The generalized synchronization time versus k for three different values of α.
Figure 8 .
Figure 8. (a) The generalized synchronization time versus α for three different values of k. (b) The generalized synchronization time versus k for three different values of α. As shown in Fig. 7, e1, e2, e3 have been stabilized to zero in a finite time, that is, systems (18) and (19) reach finite-time GS.
Figure 12 .
Figure 12.(a) The generalized synchronization time versus α for three different values of k.(b) The generalized synchronization time versus k for three different values of α.
| 2,653 | 2016-05-20T00:00:00.000 | [
"Mathematics",
"Computer Science"
] |
Global bifurcation sheet and diagrams of wave-pinning in a reaction-diffusion model for cell polarization
We are interested in wave-pinning in a reaction-diffusion model for cell polarization proposed by Y. Mori, A. Jilkine and L. Edelstein-Keshet. They showed interesting bifurcation diagrams and stability results for stationary solutions of a limiting equation by numerical computations. Kuto and Tsujikawa showed several mathematical bifurcation results for stationary solutions of this problem. We show exact expressions of all the solutions by using the Jacobi elliptic functions and complete elliptic integrals. Moreover, we construct a bifurcation sheet which gives the bifurcation diagrams. Furthermore, we show numerical results on the stability of stationary solutions.
The model is where W = W (x, t) denotes the density of an active protein, V = V (x, t) denotes the density of an inactive protein, ε, D are diffusion coefficients, W 0 (x) denotes the initial density of the active protein, and V 0 (x) denotes initial density of the inactive protein.
It is easy to see that the mass conservation holds, where m is the total mass determined by the mass of the initial densities W 0 (x) and V 0 (x). Letting D → ∞ in (TP), we formally obtain the following time dependent limiting equation: where W = W (x, t),Ṽ =Ṽ (t) is the density depending only on t. W 0 (x) denotes the initial density, andṼ 0 denotes an initial constant density.
Owing to the mass conservation, the stationary problem of (TP) can be reduced to the following Neumann problem with a nonlocal constraint: where W = W (x), V = V (x), and m is a given initial total mass determined by initial densities.
Straight understanding of a stationary limiting problem for (TLP) is The second equation automatically holds from the first and third equation. Hence the above system is equivalent to For simplicity we concentrate on monotone increasing solutions, since we can obtain other solutions by reflecting this kind of solutions. Thus, we get Here it should be noted that we may omit the condition W (0) > 0, since this condition follows from other conditions. Thus we obtain a stationary limiting problem as where m, ε are given positive constants, W = W (x) is an unknown function, andṼ is an unknown nonnegative constant.
Interesting bifurcation diagrams are obtained in [7] by numerical computations. Kuto and Tsujikawa [4] obtained several mathematical results for (SLP) with suitable change of variables.
The main purpose of this paper is to show exact expressions of all the solutions of (SLP) by using the Jacobi elliptic functions and complete elliptic integrals, and to construct a global bifurcation sheet in the space (Ṽ, ε², m). Furthermore, we show numerical results on the stability of stationary solutions.
Each level curve with the height m of the sheet corresponds to the bifurcation diagram in the plane (Ṽ , ε 2 ) for (SLP) with given m. Thus, we can obtain all bifurcation diagrams including all, for instance, even secondary bifurcation branches.
Our method to obtain all the exact solutions is essentially based on the method which originated in Lou, Ni and Yotsutani [6]. It was developed by Kosugi, Morita and Yotsutani [5] to investigate the Cahn-Hilliard equation treated in Carr, Gurtin and Slemrod [1], although we need some extra steps.
This paper is organized as follows. In Section 2 we state the main theorems, and show figures of the global bifurcation sheet, bifurcation diagrams and stability results by numerical computations. In Section 3 we give proofs of the main theorems by using Propositions 3.1-3.4.
The following formulas for the complete elliptic integrals are important.
Now, let us introduce an auxiliary problem to investigate (SLP). LetṼ > 0 be given, let us consider the problem We note that (AP;Ṽ ) is equivalent to for givenṼ > 0, since it is easy to see that a condition 0 < W (0) <Ṽ + 1 holds for any solution of (AP;Ṽ ). The existence and the uniqueness of the solution W (x) of (AP;Ṽ ) is well-known (see, e.g. Smoller and Wasserman [2], and Smoller [3]). However, we need to know more precise information to investigate (SLP). The following theorem gives the representation formula for all solutions of (AP;Ṽ ).
Moreover, the solution is unique. The solution W(x; Ṽ, ε²) has the properties [...]. The solution W(x; Ṽ, ε²) is represented by [...], where (h, s) is the unique solution of the following system of transcendental equations [...] (2.14). Here, sn(·, ·) and cn(·, ·) are Jacobi's elliptic functions, and K(·) is the complete elliptic integral of the first kind.
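For readers who want to evaluate these special functions numerically, the sketch below uses SciPy; note that SciPy parametrizes the Jacobi functions and K by the parameter m = k², so if the argument in the text is a modulus (here called h), one passes h². The value of h below is only an illustrative choice.

```python
# Evaluating Jacobi sn, cn and the complete elliptic integral K with SciPy
# (a generic numerical sketch, not code from the paper).
import numpy as np
from scipy.special import ellipj, ellipk

h = 0.6                      # modulus (illustrative)
m = h**2                     # SciPy parameter m = k**2
K = ellipk(m)                # complete elliptic integral of the first kind

u = np.linspace(0.0, 2.0 * K, 5)
sn, cn, dn, _ = ellipj(u, m)
print("K =", K)
print("sn:", np.round(sn, 4))
print("cn:", np.round(cn, 4))
```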
We show the graph of A(h, s) and E(h, s) in Figures 1 and 2.
Let us define the global bifurcation sheet S by [...]. We obtain an exact representation of the global bifurcation sheet S by Theorem 2.2. For each m, we can obtain the bifurcation diagram directly from the global bifurcation sheet S. We will mathematically investigate precise properties of the global bifurcation sheet and bifurcation diagrams in a forthcoming paper. For instance, we see the following facts: · For m ∈ (0, 1], the bifurcation diagrams are the empty set. · For m ∈ (1, ∞), the bifurcation diagrams given by (2.21) are graphs over the Ṽ axis (smooth single-valued functions in Ṽ), except for m = 2 with Ṽ = 1.
We show figures of bifurcation sheet, bifurcation diagrams, and profiles of W (x;Ṽ , ε 2 ). Figure 3 shows the global bifurcation sheet S by using the expression (2.18). where M(h, s) is defined by (2.19).
We use results in Kosugi, Morita and Yotsutani [5] for proofs of Propositions 3.1 -3.4. We see from Proposition 1.1 and its proof in [5] that the following lemma holds.
are represented by two parameters (h, s) with 0 < h < 1 and 0 < s < 1 as follows, where E(h, s) and A(h, s) are defined by (2.13) and (2.14) respectively. Moreover, Proposition 3.2 immediately follows from the above lemma. Proposition 3.4 follows from the above lemma and Proposition 3.3. We will give proofs of Propositions 3.1 and 3.3 in Sections 4 and 5, respectively. Now, we give proofs of Theorems 2.1 and 2.2.
Proof of Theorem 2.1. We see from Proposition 3.2 and Proposition 3.3 that conclusions hold except (2.6). We see thatṼ is a solution of (AP;Ṽ ). Thus, we obtain (2.6) by the uniqueness of solutions of (AP ;Ṽ ).
We get We have and obtain Hence, we get Therefore, we obtain We prepare several lemmas. We see from Lemma 3.2 and the proof of Lemma 3.4 of [5] that the following lemma holds. and and In addition, Then, r(v) is monotone decreasing in (0, ∞) and as v → ∞.
Proof. It is easy to see that (5.10).
We have Let us put We get By using a tool for obtaining a Gröbner basis, we see that the system of algebraic equations [...]. We can see from Sturm's theorem concerning zeros of a single algebraic equation that (5.2) does not have a real zero h with 0 < h < 1. Hence the system (5.2) for (s, h) has no root in (0, 1) × (0, 1).
Lemma 5.4. LetṼ > 0 be fixed. There exists a unique curve We can obtain the solution exactly, and get .
Thus, we obtain (5.18) by Let us show that E(h, s(h;Ṽ )) is decreasing with respect to h.
Thus, we have
which is easy to prove by differentiation. On the other hand, we show that F3(H, s) > 0 in (0, 1) × (0, 1) (5.24) in the Appendix. Thus we complete the proof.
Thus we complete the proof by Lemmas 5.4 and 5.5. | 1,785 | 2015-11-01T00:00:00.000 | [
"Mathematics"
] |
Hypothetical endogenous SIV-like antigens in Mauritian cynomolgus macaques
Simian immunodeficiency virus (SIV) infection of Mauritian cynomolgus macaques (MCMs) is an increasingly important nonhuman primate model for HIV vaccine research. We previously reported that in MCMs anti-SIV antibodies can be naturally developed without exogenous infection or vaccination, and that a vaccine targeting SIV protease cleavage sites (PCS) can cross-induce antibodies to non-PCS SIV antigens. We speculate that this is potentially caused by the existence of endogenous SIV-like antigens. External stimuli (such as environmental factors and vaccination) may induce expression of endogenous SIV-like antigens to elicit these antibodies. Database and mass spectrometry analyses were conducted to search for such antigens. We identified endogenous SIV-like DNA sequences in cynomolgus macaque genome and non-PCS peptide homologous to SIV Env protein in PBMCs of a PCS-vaccinated monkey. Our preliminary insights suggest that endogenous SIV-like antigens may be one of the possible reasons for the natural and cross-inducible SIV antibodies in MCMs.
During HIV or SIV replication, each of the 12-protease cleavage reactions is essential for the production of a functional viral particle [19]. A novel vaccine strategy targeting the protease cleavage sites (PCS) has been suggested by our studies [14,19,20] and is being evaluated using MCM SIV infection model [12]. Commonly, vaccine studies are carried out in specific pathogenfree animals to rule out the impact of on-going infection or preexisting immune responses in order to solely evaluate the vaccine efficacy absent of confounding variables. In a pilot study, we used PCS peptide antigens (along with non-PCS peptides) to screen for potentially pre-existing natural antibody responses in MCMs [12], while, unlike in several other NHP species [21], no natural immune response screen study had been reported in MCMs. Specifically, the SIV antigens were twelve 20mer peptides overlapping the twelve PCS (-10/+10) and three non-PCS Gag or Env peptides of SIVmac239 [22][23][24]. In some MCMs antibodies to these antigens were found to be very high in the absence of exogenous infection or vaccination [12]. We also observed that immunization of MCMs with PCS vaccine not only elicited antibodies to the PCS peptides, but also cross-induced antibodies to non-PCS peptides, while the non-PCS peptides share no sequence homology with the PCS peptides [12], suggesting that the PCS vaccine could elicit off-target immune response [25] targeting SIV antigens that are not in the vaccine. Since understanding natural and vaccine cross-inducible immune responses is expected to provide important information and clues for vaccine development [21, 25], we report here a possible reason that may contribute to the existence of these anti-SIV antibodies in MCMs.
Methodology: Humane care guidelines:
The MCM plasma sample used in the current study was collected in our recently published animal work. The humane care guidelines were described in detail in that publication [12].
The PCS and non-PCS SIV peptides:
These SIV peptides, derived from SIVmac239 [22][23][24], are twelve 20mer peptides overlapping the twelve protease cleavage sites (-10/+10), named as PCS1 through PCS12, and three non-PCS Gag or Env peptides, named SIVgag, SIVenv1 and SIVenv2. The sequences of these peptides were listed in the recent publication [12]. They were confirmed to be specific for SIV by NCBI protein BLAST and conserved among multiple SIV strains. No sequence homology was shared between PCS versus non-PCS peptides [12].
Identification of SIV-like sequences in the cynomolgus macaque genome: SIV peptide coding sequences were searched in cynomolgus macaque whole genome shotgun sequences for each chromosome using NCBI nucleotide BLAST (BLASTN Suite).
Enrichment of SIV peptide-specific antibodies:
This was performed as previously described [12].
Mass spectrometry identification of endogenous SIV-like antigen:
SIVenv2 peptide-specific antibodies enriched from monkey plasma were incubated with Pierce protein A/G agarose beads (ThermoFisher Scientific, Catalog 20422) and further cross-linked using dimethyl pimelimidate (DMP) (Sigma Aldrich, St. Louis, MO; Catalog D8388). Potential SIVenv2-like antigen was enriched from monkey PBMC lysates by immunoprecipitation using these SIVenv2 antibody-coupled beads and then analyzed by 2D LC-MS/MS as below.
The protein/peptide samples were digested with trypsin in solution. Briefly the samples were concentrated to near-dryness (1-5µl) using a vacuum centrifuge (Savant Speed/Vac Concentrator, Thermo Fisher Scientific). The samples were re-suspended in 25µl of 50mM HEPES pH 8.3 and mixed for 30 minutes. 2µl of 50mM DTT (Sigma) in 100mM ammonium bicarbonate (AB, Fisher Scientific) was added and the samples mixed and incubated at 37°C for 1 hour. 1µl of 200mM Iodoacetamide (Sigma) in AB, was added, the samples mixed and incubated at room temperature (in the dark) for 10 minutes. Trypsin (Pierce, Thermo Scientific, 2.5µg per 100µg protein) was added and the samples were mixed and incubated at 37°C overnight in a humidified chamber. After digestion, the tryptic peptides were collected and concentrated to near-dryness (1-5µl) using a vacuum centrifuge and re-suspended in MS buffer A (below) for further analysis.
Samples were analyzed using a nano-flow Easy nLC I connected in-line to an LTQ Orbitrap XL mass spectrometer with a nanoelectrospray ion source at 2.1 kV (ThermoFisher Scientific, San Jose, CA). The peptide fractions were loaded (5µl) onto a C18-reversed phase trap column (3 cm long, 100 µm inner diameter, 5 µm particles) with 100% buffer A (2% acetonitrile, 0.1% formic acid) for a total volume of 30 µl, and then separated on a C18-reversed phase column (15 cm long, 75µm inner diameter, 2.4µm particles). Peptides were eluted using a linear gradient of 2-35% buffer B (98% acetonitrile, 0.1% formic acid) over 40 min at a constant flow rate of 250nl/min. Total LC/MS/MS run-time was 80 minutes, including the loading, linear gradient, column wash at 95% buffer B, and the equilibration.
Data was acquired using a data-dependent method, dynamically choosing the top 5 abundant precursor ions from each survey scan for isolation in the LTQ (2.0 m/z isolation width) and fragmentation by CID (35% normalized collision energy, with 30 ms activation time). The survey scans were acquired in the Orbitrap over m/z 300-1700 with a target resolution of 60000 at m/z 400, and the subsequent fragment ion scans were acquired in the LTQ iontrap. The lower threshold for selecting a precursor ion for fragmentation was 1000 ions. Dynamic exclusion was enabled using a list size of 500 features, a m/z tolerance of 15ppm, a repeat count of 1, a repeat duration of 30s, and an exclusion duration of 60s, with early expiration disabled.
Raw files were loaded into PEAKS Studio v7.5 (Bioinformatics Solutions Inc., Waterloo, ON, Canada). Data were refined without merging, with precursors corrected and only spectra with a charge of +2 to +6 being accepted, with a filter quality of 0.65 or better. De novo searching was performed on the peak list using the following search parameters: carbamidomethylation was selected as a fixed modification, oxidation as a variable modification, fragment ion mass tolerance of 0.5 Da, parent ion tolerance of 10 ppm, and trypsin as the enzyme. The de novo list was exported with an ALC(%) cutoff of 50. The exported peptides were then searched by protein BLAST [29] using the PAM30 scoring matrix with 'blastp-short' task settings but with composition-based scoring turned off. (Specific settings: matrix=PAM30, comp_based_stats=0, ungapped=True, seg=no, neighboring word threshold=16, window for multiple hits = 15.) The resultant hit list was screened for the term "SIV" by parsing in python, and then manually verified.
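For illustration, a hypothetical version of that screening step might look like the sketch below; the tab-separated column layout and file names are assumptions made for the example, since the actual hit-table format used in the study is not specified here.

```python
# Hypothetical sketch of screening a BLAST hit table for the term "SIV".
# Assumed layout: tab-separated rows of (query, subject accession, subject title, e-value).
import csv

def screen_for_siv(hit_table_path, out_path):
    kept = []
    with open(hit_table_path, newline="") as fh:
        for row in csv.reader(fh, delimiter="\t"):
            # keep rows whose subject title mentions SIV; manual verification follows
            if len(row) >= 3 and "SIV" in row[2]:
                kept.append(row)
    with open(out_path, "w", newline="") as fh:
        csv.writer(fh, delimiter="\t").writerows(kept)
    return kept

# Example usage (placeholder file names):
# siv_hits = screen_for_siv("denovo_blast_hits.tsv", "siv_hits.tsv")
```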
Results: A proposed model: Endogenous SIV-like antigens may contribute to natural or vaccine cross-inducible anti-SIV antibodies:
We previously conducted a pilot study to test a novel HIV vaccine strategy, which targets the twelve viral protease cleavage sites (PCS), using the MCM SIV infection model [12]. SIV peptides based on the sequences of the PCS sites (PCS peptides), along with non-PCS peptides, were used as antigens to screen SIV antibody-negative animals for vaccination experiments. PCS peptides were also delivered as immunogens in the form of recombinant vesicular stomatitis viruses and nanoparticles (the PCS vaccine). We detected high-level natural antibodies to PCS and non-PCS SIV peptides [12]. The source of antigens that induced natural SIV antibodies in these monkeys is unknown, since they were healthy animals obtained from SIV-free breeding colony and without any on-going SIV infection. An important clue was suggested by the vaccination experiment, in which the PCS vaccine surprisingly induced antibodies to non-PCS antigens, although the non-PCS antigens share no sequence homology with the PCS antigens [12]. One possible explanation for the cross-induction is that the SIV PCS peptides might, by unknown mechanism(s), have stimulated the production of endogenous SIV-like antigens that contain peptide sequences antigenically similar to non-PCS SIV peptides (Figure 1).
It was reported that expression of endogenous retroviral elements [30-32] can be triggered by external stimuli such as HIV infection in humans [30, 33-42]
and that endogenous retroviral antigens present in normal baboon reproductive tissues can be cross-recognized by HIV/SIV antibodies [43,44]. While other possibilities may exist, we speculate that external stimuli such as environmental factors and the PCS vaccine might activate endogenous retrovirus (ERV), leading to the expression of endogenous SIV-like antigens and induction of antibody responses to the resulting viral antigens (Figure 1).
Identification of potential endogenous SIV-like antigens:
We BLAST searched cynomolgus macaque genomic shotgun sequences for the three non-PCS peptide-coding sequences and identified SIV non-PCS-like sequences on multiple chromosomes (Figure 2A). To examine the expression of endogenous SIV-like antigen at the protein level, we coupled a non-PCS peptide, SIVenv2, to affinity purification columns to enrich its specific antibodies from a PCS-vaccinated monkey. The purified antibodies were then used to immunoprecipitate antigens from peripheral blood mononuclear cell (PBMC) lysates of the PCS-vaccinated monkey. The enriched antigens were then analyzed by mass spectrometry and searched in sequence databases. A SIVenv2-like peptide sequence was identified from the monkey PBMC by this procedure (Figure 2B). We then performed a BLAST search for the DNA sequence of the SIVenv2-like peptide identified in PBMC and found that it was aligned to the MCM genome (Figure 2C). These results are consistent with the hypothesis that activation of dormant SIV-like antigens in MCMs by external stimuli might be one of the potential mechanisms for generation of SIV antibodies (natural or vaccine cross-induced) (Figure 1). This possibility will need to be validated by future investigations.
Discussion:
Based on the presence of natural and vaccine cross-inducible anti-SIV antibodies as described in the background, we speculated that endogenous dormant SIV-like antigens could be expressed upon activation by environmental factors or vaccination and subsequently induce antibody responses. A broad body of literature showed the presence of endogenous retroviral sequences in primate genomes [30-33, 43-50]. A vast array of germline-integrated retroviruses, formed during primate evolution and transmitted vertically (from parents to offspring) in Mendelian manner, are classically defined as endogenous retroviruses. As remnants of ancient retroviral infections, these are commonly structurally incomplete with mutations and deletions. However, some of them retain the potential to express viral proteins or peptides in response to cellular stimuli or stress [30, [33][34][35][36][37][38][39]. For example, HIV infection activates human ERVs including HERV-K and LINE-1 [33-42, 51]. It has also been proposed that other than inherited, dormant proviruses can also be acquired through previous infection by retroviruses from the hosts' living environments and carried by the hosts without viral replication and subsequent induction of host immune responses [30,48]. For simplicity, these and the classical endogenous retroviruses are collectively referred to as "endogenous" retroviruses (ERV) in the context of this work, considering their nature of being dormant "intrinsic" residents within the hosts. Importantly, studies in normal baboons suggested antigenic similarity of ERVs with HIV or SIV [43,44]. We speculate that activation of SIV-like ERV in MCMs by environmental factors (such as stress or infection) or vaccination might be an endogenous source of antigens that induce anti-SIV antibodies. This hypothesis seems to be supported by our identification of SIV-like sequences in the genome and SIV-like peptide in PBMC of these animals.
Given SIV-like ERVs as a possible source of antigens that might induce anti-SIV antibodies, their specific identification would be an interesting future direction. In addition, apart from their possible activation by SIV PCS peptides in immunization experiments, in the context of their potential involvement in natural antibody induction, the stimuli that might trigger their activation remain to be explored. ERVs can be activated by a variety of stress signals from the surroundings of the cell, tissue, organ or system levels that alter their transcription environment or epigenetic status, such as infection, injury, oxidative stress and psychological stress [30]. | 2,846.2 | 2018-02-28T00:00:00.000 | [
"Biology"
] |