Selective Area Epitaxy of Highly Strained InGaAs Quantum Wells (980–990 nm) in Ultrawide Windows Using Metalorganic Chemical Vapor Deposition

We employed the selective-area-epitaxy technique using metalorganic chemical vapor deposition to fabricate and study semiconductor heterostructures that incorporate highly strained InGaAs quantum wells (980–990 nm emission wavelength). Selective area epitaxy of InGaAs quantum wells was performed on templates with a patterned periodic structure consisting of a window (where epitaxial growth occurred) and a passive mask (where epitaxial growth was suppressed), each element 100 µm wide. Additionally, a selectively grown potential-barrier layer was included, characterized by an almost parabolic curvature profile of the surface. We studied the influence of the curvature profile of the growth surface on the optical properties of the InGaAs quantum wells and on the spatial distribution of composition in an ultrawide window. Our results showed that, under fixed selective-area-epitaxy conditions, the composition of the InxGa1−xAs and the wavelength of the quantum-well emission changed across the width of the window. Our study demonstrates that increasing the curvature of the growth-surface profile of highly strained quantum wells leads to a transition in the photoluminescence wavelength distribution across the window, from quasi-parabolic to inverted parabolic.

Introduction

Light-emitting devices based on semiconductor heterostructures are currently widely used in practice. The main types of such devices include light-emitting diodes (LEDs), semiconductor lasers, and optical amplifiers. The key element of such devices is the active region. In most cases, active-region designs are based on structures with pronounced quantum-size properties. The main types of such structures are quantum wells, quantum wires (filaments), and quantum dots [1]. The use of quantum-well structures significantly reduces threshold currents, which is important for semiconductor lasers, and expands the range of compositions that can be used in strained structures, which is important for all types of light-emitting devices [1,2]. However, classical approaches based on standard epitaxy (SE), in which the layer grows over the entire planar surface, significantly limit design optimization and the development of new structures. In particular, SE yields a uniform active-region (AR) composition over the entire structure surface. In this case, wavelength control is feasible, for example, through the use of various types of photonic crystals [3–8], including conventional Bragg gratings; however, the source efficiency and the tuning range will still be limited by the AR composition and thickness, which do not change in the case of SE [9]. This is primarily due to the shift of the photonic-crystal operating wavelength from the maximum of the material gain spectrum. At the same time, there are a number of topical tasks that require flexible control of the AR spectral characteristics within a single heterostructure, implemented through local changes in the composition and thickness of the quantum-well ARs. Such tasks include the creation of photonic integrated circuits, where a number of active elements with various spectral properties (lasers, detectors, amplifiers, waveguides, and modulators) must be combined within a single monolithic structure [10–12], as well as the creation of multispectral laser sources operating effectively over a wide spectral range, for example, for
wavelength division multiplexing [13–15], and the formation of an absorption section for mode-locked lasers [16].

The available technique that allows flexible control of the quantum-well AR properties is selective area epitaxy (SAE). Within the framework of this technique, growth occurs in windows on a pre-prepared surface with a dielectric mask, and growth is suppressed on the mask surface. The sizes of the windows and masks allow flexible control of the composition and growth rate. As a result, structures with arrays of ARs have been demonstrated, showing tuning ranges of up to 86 nm [17] and 150 nm [18] for the maximum of the photoluminescence spectrum, achieved by varying the AR composition. Despite the achieved results, the problem of describing the basic features of SAE in the case of highly strained quantum wells (QWs) remains. The fundamental principles of SAE in the case of bulk AlGaAs layers were studied in [19,20]. Experimental studies of the growth mechanisms of InGaAs/GaAs quantum wells are considered in [17]. However, the SAE of such structures includes a number of aspects that have not been considered so far and that are important for optimizing both the design and the process parameters. To date, most studies describe the SAE of QWs in windows with a width not exceeding several tens of microns [21,22]. The results of studies of QWs obtained by SAE in windows ≥ 100 µm wide are presented in [17,23–25]; however, a number of important issues related to the effect of curvature (bending) of layers obtained by SAE on the properties of QWs, as well as the possibility of describing the QW characteristics using existing simulation models, remain unexplored.

Here, we consider the SAE mechanisms of highly strained QWs in ultrawide windows. The significant effect of the lower SAE-grown waveguide layer on the radiative characteristics of InGaAs/GaAs QWs is demonstrated, the experimental results on the PL characteristics of InGaAs/GaAs QWs are analyzed within the framework of the vapor-phase diffusion model, and potential limitations in describing the growth behavior of highly strained InGaAs/GaAs QWs are identified.

SAE QW Experimental Samples and Research Technique

Experimental samples were grown by metalorganic chemical vapor deposition (MOCVD). For growth, we used an EMCORE GS3100 setup (EMCORE Corp., Somerset, NJ, USA) with a vertical reactor and resistive heating of the substrate holder. The growth temperature was 750 °C; the rotation speed of the substrate holder was 1000 rpm. Trimethylgallium (TMGa), trimethylindium (TMIn), and trimethylaluminum (TMAl) (Elma-Chem, Zelenograd, Russia) were used as group-III reagents, and arsine (AsH3) (Salyut, Nizhny Novgorod, Russia) was used as the group-V reagent. The carrier gas was hydrogen. Epitaxial growth was carried out on an n-GaAs (100) substrate (Wafer Technology Ltd., Milton Keynes, UK).

As part of the research, two sets of samples were grown. Samples of the first set were grown by SE and included a 0.5 µm thick GaAs buffer, a 0.4 µm thick Al0.3Ga0.7As lower cladding, a 0.8 µm thick GaAs waveguide, an InGaAs QW located in the waveguide center, and a 0.25 µm thick Al0.3Ga0.7As upper cladding. The first set included two samples that differed in the QW growth time: 7 s for the SENQW (standard epitaxy narrow QW) sample and 14 s for the SEWQW (standard epitaxy wide QW) sample. These samples were grown for the purpose of evaluating the technique, which is further utilized for determining the composition and thickness of the QWs.
The second set of samples consisted of heterostructures obtained by SAE. These samples were made in two stages. In the first stage, a preform was grown by SE: a 0.5 µm thick GaAs buffer, a 0.4 µm thick Al0.3Ga0.7As lower cladding, and a 0.4 µm thick lower part of the GaAs waveguide. After that, a 100 nm thick SiO2 dielectric coating was deposited on the obtained preform by reactive ion-plasma sputtering. Next, a mask pattern with alternating 100 µm wide stripes (SiO2 mask/window) was formed using lithography and a buffered oxide etchant (BOE 5:1). The stripes were oriented in the [011] direction. Then the second stage of sample growth, using SAE, began. Within the second stage, the lower part of the GaAs waveguide layer was grown onto the preforms obtained. The thickness of the lower waveguide part varied in the range of 0.12–1.9 µm; the growth conditions were the same for all samples and corresponded to a growth rate at SE equal to 19 nm/min, with the different thicknesses obtained by changing the growth time. Next, the InGaAs QWs were grown by SAE. The process parameters were consistent with those of the SENQW sample growth and were the same for all SAE samples. Next, a 0.3 µm thick upper part of the GaAs waveguide and a 0.3 µm thick Al0.11Ga0.89As upper cladding were grown, with the same growth time for all SAE samples, at a rate corresponding to 21.3 nm/min for SE of Al0.11Ga0.89As.

The thicknesses of the SAE-grown layers shown in Table 1 are measured at the window center. The samples of this set differed in the thickness of the GaAs lower waveguide part grown by SAE (SAEWG): 0.12 µm for the SAEWG1 sample, 0.6 µm for SAEWG2, 1.2 µm for SAEWG3, and 1.9 µm for SAEWG4 (the description of the samples is given in Table 1). This remark is necessary because the lower-waveguide-layer thickness profile changes across the window width in the SAE samples. As the layer thickness increases, the difference between the layer thickness at the window center and at the window edge increases. The mechanisms determining the thickness profile across the window width, described in [19] for bulk GaAs layers, show that a change in thickness significantly modifies the growth mode (a transition from step flow to step bunching is observed) and the structure of monoatomic steps. In addition, the layer-thickness drop between the edge and the center of the window changes; in our case, it increases from 13.5 to 272 nm as the lower SAE waveguide thickness increases from 0.12 to 1.9 µm.

We used the following techniques in the study:
- Spatially resolved microphotoluminescence (µPL) was used to study the QW luminescence characteristics of samples from both sets. The µPL measurements were performed at room temperature using a T64000 spectrometer (Horiba Jobin Yvon, Longjumeau, France) equipped with a confocal microscope. The spectra were measured using continuous-wave (cw) excitation at 532 nm (2.33 eV) from a Nd:YAG laser (Torus, Laser Quantum, Stockport, UK) with a power on the samples as low as ~40 µW. The spectra were recorded using a 600 lines/mm grating and a liquid-nitrogen-cooled charge-coupled device (CCD) camera, with a Mitutoyo 100× NIR (NA = 0.90) long-working-distance objective lens focusing the incident beam into a spot of ~2 µm diameter. The measurements were carried out with point-to-point scanning with a step of 1 µm.
- For the SAEWG samples, measurements of the thickness profile across the window were carried out using an AmBios XP-1 profilometer (Ambios Technology Inc., Santa Cruz, CA, USA). To do this, the SiO2 mask was preliminarily removed from the samples.
Experimental and Theoretical Studies of the Characteristics of SAE QWs

For the first set of samples, PL studies were carried out at a temperature of 300 K. The SENQW and SEWQW samples had PL spectrum maxima at 932 and 1001 nm, respectively. These samples were grown with the purpose of evaluating the characteristics of SE QWs (thickness and composition) and aligning the measurement results with calculations for analyzing the compositions and thicknesses of SAE QWs. Since a given wavelength can be obtained with QWs of different compositions and thicknesses, calculations of the possible combinations of compositions and thicknesses that provide wavelengths of 932 nm and 1001 nm were carried out. These calculations were based on solving the Schrödinger equation for a rectangular QW of finite depth. Mechanical strains in the QW and band discontinuities between the QW and the waveguide were taken into account in accordance with [26]. The results are plotted in Figure 2.

Next, the composition of the InxGa1−xAs QW was determined for which the following condition is satisfied: the transition in wavelength from 932 nm to 1001 nm is achieved by doubling the QW thickness. Figure 2 shows that this condition is satisfied only for an InxGa1−xAs QW composition of x = 0.34 and QW thicknesses of 16 Å and 32 Å for the SENQW and SEWQW samples, respectively.
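A bound-state calculation of this kind underlies the curves of Figure 2. The sketch below illustrates the general procedure with a plain finite rectangular well solved by bisection: the band gap, band offsets, and effective masses used here are illustrative placeholders and the strain and band-offset treatment of [26] is not reproduced, so the printed wavelengths are only indicative of how composition–thickness pairs map to an emission wavelength.

```python
import numpy as np
from scipy.constants import hbar, m_e, e, h, c
from scipy.optimize import brentq

def confinement_energy(L, V0, m_eff):
    """Ground-state energy (J) of a particle of mass m_eff in a finite square
    well of width L (m) and depth V0 (J), from k*tan(k*L/2) = kappa."""
    def f(E):
        k = np.sqrt(2.0 * m_eff * E) / hbar
        kappa = np.sqrt(2.0 * m_eff * (V0 - E)) / hbar
        return k * np.tan(k * L / 2.0) - kappa
    # the ground state lies below the well depth and the first divergence of tan
    E_upper = min(V0, (np.pi * hbar / L) ** 2 / (2.0 * m_eff)) * (1.0 - 1e-9)
    return brentq(f, 1e-9 * V0, E_upper)

def emission_wavelength_nm(L_angstrom, E_gap_eV, dEc_eV, dEv_eV, me_rel, mhh_rel):
    """Transition wavelength of a rectangular QW: gap plus electron and heavy-hole
    confinement energies (excitonic and detailed strain corrections omitted)."""
    L = L_angstrom * 1e-10
    E_e = confinement_energy(L, dEc_eV * e, me_rel * m_e)
    E_hh = confinement_energy(L, dEv_eV * e, mhh_rel * m_e)
    return h * c / (E_gap_eV * e + E_e + E_hh) * 1e9

# Illustrative parameters only (NOT the values of Ref. [26]).
for L_qw in (16.0, 32.0):
    lam = emission_wavelength_nm(L_qw, E_gap_eV=1.10, dEc_eV=0.25,
                                 dEv_eV=0.15, me_rel=0.055, mhh_rel=0.35)
    print(f"L = {L_qw:4.1f} A  ->  lambda ~ {lam:6.1f} nm")
```

In the actual analysis this forward calculation is scanned over (x, d) pairs to find the combinations compatible with the two measured PL maxima.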
Figure 3 shows experimental thickness profiles across the window width (solid lines) for samples SAEWG1–SAEWG4 obtained by SAE. The dotted lines show the calculated total-thickness profile across the width of the window. We used the growth-rate-enhancement (GRE) distributions for GaAs and Al0.11Ga0.89As [20] to calculate the total thickness. The thickness H of a single SAE-grown layer is determined as follows [20]:

H = GRE × Vplanar × t,   (1)

where Vplanar is the growth rate of a given material at SE and t is the growth time of the SAE layer.

The total thickness Hs of the SAE-grown multilayer for the SAEWG samples is determined as follows:

Hs = GRE(GaAs) × V(GaAs) × (tlw + tuw) + GRE(AlGaAs) × V(AlGaAs) × tuc,   (2)

where GRE(GaAs) and GRE(AlGaAs) are the GREs of the GaAs and Al0.11Ga0.89As layers, respectively [20]; V(GaAs) and V(AlGaAs) are the GaAs and Al0.11Ga0.89As layer growth rates at SE, respectively; and tlw, tuw, and tuc are the growth times of the SAE layers of the lower waveguide, upper waveguide, and upper cladding, respectively (Table 1). The QW thickness was not taken into account because it is negligible compared with the thickness of the other layers.

Figure 3 demonstrates that the calculated thickness values align fairly accurately with the corresponding experimental values. However, the simulation and experiment are partially inconsistent for SAEWG3 compared with the other samples. One possible reason for this behavior is a change in the growth mode, such as a transition from step flow to step bunching, which can affect the growth rate. The presence of a strained QW may also enhance this effect. However, pinpointing the factors that contribute to this behavior requires further research, especially surface-morphology studies.
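The dotted curves of Figure 3 amount to evaluating Equations (1) and (2) point by point across the window. The short sketch below does exactly that; the GRE profiles are simple assumed parabolas rather than the vapor-phase-diffusion-model profiles of [20], and the growth times are example values, not the entries of Table 1, so only the structure of the calculation is meaningful.

```python
import numpy as np

w = np.linspace(-50.0, 50.0, 101)                  # position across the 100 um window, um

def gre_profile(center, edge, w, half_width=50.0):
    """Illustrative GRE profile rising from the window centre to the edges.
    Real profiles follow from the vapor-phase diffusion model of Ref. [20]."""
    return center + (edge - center) * (w / half_width) ** 2

gre_gaas = gre_profile(1.70, 1.90, w)              # assumed GRE of GaAs
gre_algaas = gre_profile(1.60, 1.80, w)            # assumed GRE of Al0.11Ga0.89As

v_gaas = 19.0 / 60.0                               # SE growth rate from the text, nm/s
v_algaas = 21.3 / 60.0                             # SE growth rate from the text, nm/s

t_lw, t_uw, t_uc = 1900.0, 950.0, 850.0            # example growth times, s (not Table 1)

# Equation (1): thickness of one SAE-grown layer across the window.
h_lower_waveguide = gre_gaas * v_gaas * t_lw       # nm

# Equation (2): total thickness of the SAE-grown stack (QW neglected).
h_total = gre_gaas * v_gaas * (t_lw + t_uw) + gre_algaas * v_algaas * t_uc

print(f"total thickness at centre: {h_total[50]:7.1f} nm")
print(f"total thickness at edge:   {h_total[0]:7.1f} nm")
```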
Figure 4a–d show the µPL spectral maps at 300 K for the SAEWG1–SAEWG4 samples. The maximum intensity is observed at the window center for all samples, whereas it decreases significantly towards the window edges. It can also be seen from Figure 4a,b that the SAEWG1 and SAEWG2 samples exhibit a gradual redshift in wavelength as one moves from the window center towards the edges. Figure 4c,d reveal a qualitatively different wavelength distribution for samples SAEWG3 and SAEWG4, with minimal change in the central region and a rapid increase in wavelength towards the window edges. To enhance the accuracy of the subsequent analysis, we show the distribution of the µPL spectrum maximum across the window for all SAEWG1–SAEWG4 samples in Figure 5. According to these dependencies, a continuous increase in wavelength is observed from the center (982 nm) to the edge (992 nm) of the window for the SAEWG1 and SAEWG2 samples. In the case of the SAEWG3 sample, the wavelength undergoes a small change (a 4 nm shift) in the central part of the window (−45 µm to 47 µm), followed by a sharp increase to 994 nm towards the edges. For the SAEWG4 sample, the wavelength gradually decreases from 984 nm (the window center) to 976 nm (the edges of the interval from −44 µm to 46 µm) as the distance from the center increases towards these boundaries. As the distance from the window center increases further towards the window edges, the wavelength sharply increases, reaching up to 994 nm. Note that the SAEWG samples differ only in the thickness of the lower waveguide layer grown by SAE. These results suggest that the observed dependencies are affected by the changes in the curvature of the lower-waveguide-layer profile. Using Equation (1) to estimate the change in SAE-layer thickness between the center and the edge of the window, we find that this change increases from 25 nm for the SAEWG1 sample to 395 nm for the SAEWG4 sample. Our findings indicate that the wavelength variation across the window has a form similar to that observed in [17,23] up to a thickness of 0.6 µm of the lower waveguide layer grown via SAE. For larger thicknesses, the wavelength variation across the window exhibits a different behavior.

An analysis of the variation in composition and thickness of the QW layer grown through SAE was performed next. To do this, the basic parameters of the SAE QW (composition and thickness) were calculated based on the experimental wavelength values obtained in the window (Figure 5). The vapor-phase diffusion model [20,27] was used for the simulation. The main characteristic of the deposited layer during SAE is the change in GRE across the window width. Using the GRE values of the binary compounds, the GRE of the resulting ternary solid solution can be calculated as

GRE(InGaAs) = x0 × GRE(InAs) + (1 − x0) × GRE(GaAs),   (3)

where GRE(InGaAs), GRE(InAs), and GRE(GaAs) are the GREs of InGaAs, InAs, and GaAs, respectively, and x0 is the In mole fraction in the InGaAs solid solution under the same growth conditions at SE. By applying the vapor-phase diffusion model and taking into account the
composition value x0 and layer thickness d0 during SE, the changes in composition x and thickness d of the layer across the window width during SAE can be estimated (Equations (4) and (5)). In the case of a QW, the obtained values of the composition x and thickness d make it possible to calculate the wavelength at each point of the window by solving the Schrödinger equation for a rectangular QW of finite depth. When an effective diffusion length (D/k) of 85 µm is used for Ga within the vapor-phase diffusion model, there is good agreement between the experimental and calculated results for GaAs, as shown in [20]. In the present study, for the purpose of the QW simulation, the D/k value of 85 µm was chosen for Ga. The D/k value for In was adjusted to obtain a wavelength of 982 nm at the window center. According to [25,28], D/k for Ga is larger than for In. Estimates show that the required wavelength at the window center is obtained for a D/k for In equal to 25 µm. Figure 6 shows the calculated GRE values across the window for GaAs, InAs, and In0.34Ga0.66As, whose composition corresponds to SE.

Figure 7 shows the thickness d and composition x profiles across the window for the InxGa1−xAs QW grown via SAE under conditions corresponding to SE of a QW with thickness d0 = 16 Å and composition x0 = 0.34. The dependence of the QW emission wavelength across the window width can be derived from the calculated composition x and thickness d of the QW layer grown via SAE, as shown in the inset of Figure 7. It can be seen that the wavelength changes from 982 nm at the window center to 1060 nm at the window edge. Thus, with the D/k values determined for Ga (85 µm) and In (25 µm), the change in the wavelength corresponding to the maximum of the photoluminescence spectrum is as high as 78 nm across the window width. The calculated value is significantly higher than that obtained from the experimental data, where the maximum difference in the photoluminescence wavelength between the central and outer regions of the window is only about 10 nm (see Figure 5). According to [17], InGaAs QWs display a pronounced wavelength redshift from the window center to the edge, which is attributed to an increase in the QW thickness and the In content of the InxGa1−xAs QW grown via SAE as one moves from the window center to the edge.
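The calculation behind Figures 6 and 7 can be summarized compactly. The sketch below uses one standard form of the vapor-phase diffusion relations: Equation (3) for the ternary GRE, the local thickness scaling with GRE(InGaAs), and the local In fraction following from the In flux relative to the total (assumed forms of Equations (4) and (5)). The GRE profiles for D/k = 85 µm (Ga) and 25 µm (In) come from solving the diffusion problem; here they are replaced by assumed parabolic stand-ins, so the printed numbers are purely illustrative.

```python
import numpy as np

w = np.linspace(-50.0, 50.0, 101)                  # position across the window, um

def gre_profile(center, edge, w, half_width=50.0):
    """Stand-in for the diffusion-model GRE profiles of Figure 6 (assumed shape)."""
    return center + (edge - center) * (w / half_width) ** 2

# A smaller D/k (In, 25 um) gives a more strongly curved profile than Ga (85 um).
gre_gaas = gre_profile(1.70, 1.95, w)              # assumed
gre_inas = gre_profile(1.60, 2.40, w)              # assumed

x0, d0 = 0.34, 16.0                                # SE composition and thickness (A)

# Equation (3): GRE of the ternary solid solution.
gre_ingaas = x0 * gre_inas + (1.0 - x0) * gre_gaas

# Assumed forms of Equations (4) and (5):
d = d0 * gre_ingaas                                # local QW thickness, A
x = x0 * gre_inas / gre_ingaas                     # local In mole fraction

print(f"centre: x = {x[50]:.3f}, d = {d[50]:5.1f} A")
print(f"edge:   x = {x[0]:.3f}, d = {d[0]:5.1f} A")
```

The wavelength profile of the Figure 7 inset is then obtained by feeding each (x, d) pair into a finite-well calculation of the kind sketched earlier.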
Discussion of the Results

To explain the significant difference between the experimental (Figure 5) and calculated (Figure 7, inset) results for the change in the PL wavelength of the SAE QW across the window, further analysis was conducted. The wavelength distribution of the SAEWG1 sample (Figure 5) was selected for analysis, as it had the lowest curvature of the lower waveguide layer, which should minimize the influence of the profile and morphology on the SAE-grown QW. The experimental dependence was described by a fourth-degree polynomial for ease of analysis (Figure 5, inset). It is known that the maximum PL wavelength of the QW is determined by its composition, thickness, and energy depth relative to the waveguide layer, the latter also being influenced by the waveguide-layer composition. The waveguide-layer composition was kept constant in all experiments. Therefore, the composition and thickness of the QW were the primary factors that influenced the wavelength. The behavior of the maximum PL wavelength across the window for the SAE QW was analyzed using two approaches. The first approach used the GRE dependence for GaAs (red curve in Figure 6) to calculate the SAE QW thickness using Equation (4). This was justified by the fact that the QWs investigated in this study contained more Ga than In. However, it should be noted that the In content in the QW grown by SAE was 0.34, which is significantly higher than the content of 0.12 in the QW studied in [17]. To calculate the composition variation of the SAE QW across the window, we followed a two-step process. First, at each point of the window, we obtained the thickness and emission wavelength of the QW. Then, we solved the Schrödinger equation for a rectangular QW of finite depth to determine the composition needed to produce the specified wavelength at the given thickness. The second approach relied on the GRE dependence obtained for the In0.34Ga0.66As layer (Figure 6, blue curve). Using these data and a process similar to the first approach, we determined the composition of the QW across the window. Figure 8 illustrates the results obtained by both approaches, with the QW thickness (d) and composition (x) of InxGa1−xAs plotted against the window position (approach 1: dashed curves; approach 2: solid curves). The possible dependencies of the composition of the SAE-grown InxGa1−xAs QW on the window position were calculated based on both the dependencies of the QW thickness on the window position, determined using the approaches described above (Figure 8, curves d1 and d2), and the experimental distribution of the QW emission wavelength across the window width for the SAEWG1 sample (Figure 5, inset). From Figure 8 (curves x1 and x2), it can be seen that, in order to achieve the experimentally observed change in the wavelength across the window width, the InxGa1−xAs QW composition should decrease when moving from the window center to the edge as the thickness of the QW increases.
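The two-step inversion described above can be written out compactly: at every window position, take the QW thickness from the chosen GRE curve and the measured wavelength from the polynomial fit, then solve for the composition that reproduces that wavelength at that thickness. In the sketch below the wavelength model is a simple monotone surrogate rather than the strained finite-well Schrödinger solution actually used, and the thickness and wavelength profiles are illustrative stand-ins (the coefficients of the Figure 5 fit are not reproduced), so only the structure of the procedure is meaningful.

```python
import numpy as np
from scipy.optimize import brentq

def wavelength_nm(x, d_angstrom):
    """Surrogate for the strained finite-well calculation: monotonically
    increasing in both In fraction x and thickness d (illustrative only)."""
    return 850.0 + 200.0 * x + 10.0 * x * d_angstrom

def invert_composition(lam_target, d_angstrom):
    """Composition x reproducing lam_target (nm) at thickness d (approach 1/2)."""
    return brentq(lambda x: wavelength_nm(x, d_angstrom) - lam_target, 0.05, 0.60)

w = np.linspace(-50.0, 50.0, 11)                        # window position, um
lam_meas = 982.0 + 10.0 * (w / 50.0) ** 4               # stand-in for the Figure 5 fit, nm
d_qw = 16.0 * (1.70 + 0.40 * (w / 50.0) ** 2) / 1.70    # stand-in thickness profile, A

for wi, lam, di in zip(w, lam_meas, d_qw):
    xi = invert_composition(lam, di)
    print(f"w = {wi:+6.1f} um   d = {di:5.2f} A   x = {xi:.3f}")
```

With these stand-ins the recovered x falls slightly from the centre to the edge while d grows, mirroring the qualitative trend of curves x1 and x2 in Figure 8; the quantitative values depend entirely on the assumed inputs.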
It should be noted that such behavior is observed for both approaches used to estimate the change in QW thickness across the window width. Within the framework of the vapor-phase diffusion model, such behavior of the composition can occur only when D/k(In) > D/k(Ga), despite the opposite relation between the D/k values reported for In and Ga in the literature [25,28]. The observed behavior can be attributed to the fact that, under certain growth conditions (such as the growth temperature), a maximum In concentration can be achieved in the strained InGaAs QW. Once this maximum concentration is reached, any further increase in the In content in the vapor phase leads to a decrease in its concentration within the QW, as reported in [29].

According to [29], this reduction is caused by the formation of unstrained InAs islands, into which indium can incorporate without a strain-induced barrier. Some of the excess indium is desorbed from the surface of these InAs clusters during growth, and a small part is accumulated in defects, which can be identified as dislocation loops located around small regions with higher In content [29]. During SAE, the In content in the vapor phase at the window edge is higher than at the window center. This distribution is confirmed by the calculated GRE for In (Figure 6, black curve). However, it should be understood that the actual distributions across the window width of the thickness d and composition x of the InxGa1−xAs QW obtained by SAE lie somewhere between the values obtained within the two approximations used. In order to verify this assumption regarding the behavior of the QW composition, future studies will focus on QWs with lower In content. This approach should help eliminate the influence of the maximum-In-incorporation factor in highly strained QWs. It can also be assumed that, at high In concentrations, the curvature of the lower part of the waveguide layer obtained by SAE may affect In incorporation.

Conclusions

It has been shown that a highly strained In0.34Ga0.66As QW (with a composition corresponding to standard epitaxy) grown by selective area epitaxy on an n-GaAs (100) substrate with an alternating stripe pattern of windows and masks, each 100 µm wide, exhibits an InxGa1−xAs composition distribution across the window in which the x value reaches its maximum at the window center and decreases towards the edges. This distribution is opposite to the commonly accepted one [17], which we believe is due to the conditions of our selective-area-epitaxy process, under which the maximum concentration of In in the QW is reached. As the concentration of the In precursor in the vapor phase increases, clusters enriched with In are formed on the surface. The formation of these InAs clusters results in a reduced In content in the QW. In our case, the In precursor concentration in the vapor phase increases from the window center to the edge.
We also demonstrated that the curvature of the growth-surface profile, resulting from variations in the thickness profile of the lower waveguide layer grown by selective area epitaxy, affects the wavelength variation across the window during growth of the QW under identical growth conditions. Up to a thickness of 0.6 µm (in the central part of the window) of the lower waveguide layer grown by selective area epitaxy, the wavelength changes smoothly from the center (982 nm) to the edge (992 nm). Upon a further increase in the thickness of the lower waveguide layer, up to 1.9 µm, the wavelength changes very weakly across most of the window and only begins to increase sharply a few micrometers from the edge, reaching 994 nm at the edge of the window.

Besides this, as can be seen from Figure 6, reducing D/k for the solid solutions leads to the formation of a profile with greater curvature. On the other hand, as the window widens, the GRE value at the window center decreases and approaches 1, which corresponds to the growth conditions during SE. This demonstrates additional possibilities for controlling the QW composition distribution across the window.

Funding: This research received no external funding.
Figure 1. Schematic illustration of the samples under study: (a) the sample obtained by SE and (b) the sample obtained by SAE.

Figure 2. Thickness (d) of the InxGa1−xAs QW as a function of the composition (x), providing emission wavelengths of 932 and 1001 nm.

Figure 3. The thickness of the structure grown by SAE across the window width. Solid lines are experimental data; dotted lines represent calculated data obtained using Equation (2).

Figure 5. Experimental variation of the µPL peak wavelength across the window for the SAEWG1–SAEWG4 samples, with a 4th-degree polynomial approximation of the experimental data for the SAEWG1 sample shown in the inset.

Figure 6. Distribution of GRE across the window width for D/k values of 85 µm for Ga and 25 µm for In.

Figure 7. Calculated variation in thickness (d) and composition (x) of the InxGa1−xAs QW grown by SAE for D/k values of 85 µm for Ga and 25 µm for In. Inset: emission wavelength across the window width for the QW with the parameters shown in Figure 7.

Table 1. Description of experimental samples. * For SAE layers, the thicknesses are given at the window center and the growth time is indicated.
Discrete-event simulation of uncertainty in single-neutron experiments

A discrete-event simulation approach which provides a cause-and-effect description of many experiments with photons and neutrons exhibiting interference and entanglement is applied to a recent single-neutron experiment that tests (generalizations of) Heisenberg's uncertainty relation. The event-based simulation algorithm reproduces the results of the quantum theoretical description of the experiment but does not require knowledge of the solution of a wave equation, nor does it rely on concepts of quantum theory. In particular, the data satisfy uncertainty relations derived in the context of quantum theory.

INTRODUCTION

Quantum theory has proven extraordinarily powerful for describing a vast number of laboratory experiments. The mathematical framework of quantum theory allows for a straightforward (at least in principle) calculation of numbers which can be compared with experimental data, as long as these numbers refer to statistical averages of measured quantities, such as atomic spectra or the specific heat and magnetic susceptibility of solids. However, as soon as an experiment is able to record the individual clicks of a detector which contribute to the statistical average, a fundamental problem appears. Although quantum theory provides a recipe to compute the frequencies for observing events, it does not account for the observation of the individual events themselves [1–4]. Prime examples are the single-electron two-slit experiment [5], single-neutron interferometry experiments [6], and optics experiments in which the click of a detector is assumed to correspond to the arrival of a single photon [7].

From the viewpoint of quantum theory, the central issue is how it can be that experiments yield definite answers. On the other hand, it is our brain which decides, based on what it perceives through our senses and cognitive capabilities, what a definite answer is and what it is not. According to Bohr [8], "Physics is to be regarded not so much as the study of something a priori given, but rather as the development of methods of ordering and surveying human experience. In this respect our task must be to account for such experience in a manner independent of individual subjective judgment and therefore objective in the sense that it can be unambiguously communicated in ordinary human language." This quote may be read as a suggestion to construct a description in terms of events, some of which are directly related to human experience, and the cause-and-effect relations among them. Such an event-based description obviously yields definite answers and, if it reproduces the statistical results of experiments, it also provides a description on a level to which quantum theory has no access. For many interference and entanglement phenomena observed in optics and neutron experiments, such an event-based description has already been constructed; see Michielsen et al. [9] and De Raedt et al. [10,11] for recent reviews. The event-based simulation models reproduce the statistical distributions of quantum theory without solving a wave equation, but by modeling physical phenomena as a chronological sequence of events. Here, events can be actions of an experimenter, particle emissions by a source, signal generations by a detector, interactions of a particle with a material, and so on [9–11].
The basic premise of our event-based simulation approach is that current scientific knowledge derives from the discrete events which are observed in laboratory experiments and from the relations between those events. Hence, the event-based simulation approach is concerned with how we can model these experimental observations, but not with what "really" happens in Nature. This underlying premise strongly differs from the assumption that the observed events are signatures of an underlying objective reality which is mathematical in nature, but it is in line with Bohr's viewpoint expressed in the above quote.

The general idea of the event-based simulation method is that simple rules define discrete-event processes which may lead to the behavior that is observed in experiments. The basic strategy in designing these rules is to carefully examine the experimental procedure and to devise rules such that they produce the same kind of data as those recorded in experiment, while avoiding the trap of simulating thought experiments that are difficult to realize in the laboratory. Evidently, mainly because of the lack of knowledge, the rules are not unique. Hence, it makes sense to use the simplest rules one can think of until a new experiment indicates that the rules should be modified. The method may be considered entirely "classical" since it only uses concepts which are directly related to our perception of the macroscopic world, but the rules themselves are not necessarily those of classical Newtonian dynamics.

The event-based approach has successfully been used for discrete-event simulations of quantum optics experiments such as the single-beam-splitter and Mach-Zehnder interferometer experiments, Wheeler's delayed-choice experiments, a quantum eraser experiment, two-beam single-photon interference experiments and the single-photon interference experiment with a Fresnel biprism, Hanbury Brown-Twiss experiments, and Einstein-Podolsky-Rosen-Bohm (EPRB) experiments, as well as of conventional optics problems such as the propagation of electromagnetic plane waves through homogeneous thin films and stratified media; see Michielsen et al. [9], De Raedt et al. [10], and references therein. For applications to single-neutron interferometry experiments, see De Raedt et al. [10,11]. The same methodology has also been employed to perform discrete-event simulations of quantum cryptography protocols [12] and universal quantum computation [13]. In this paper, we extend this list by demonstrating that the same approach provides an event-by-event description of recent neutron experiments [14,15] devised to test (generalizations of) Heisenberg's uncertainty principle. It is shown that the event-by-event simulation generates data which comply with the quantum theoretical description of this experiment. Therefore, these data also satisfy the inequalities which, in quantum theory, express (generalizations of) Heisenberg's uncertainty principle. However, as the event-by-event simulation does not resort to concepts of quantum theory, these findings indicate that there is little intrinsically "quantum mechanical" about these inequalities, in concert with the idea that quantum theory can be cast into a "classical" statistical theory [16–28].

EXPERIMENT AND QUANTUM THEORETICAL DESCRIPTION

A block diagram of the neutron experiment designed to test uncertainty relations [14,15] is shown in Figure 1.
We now describe this experiment in operational terms and, as we go along, we also give the quantum theoretical description in terms of spin-1/2 particles such as neutrons. Conceptually, the neutron experiment [14,15] exploits two different physical phenomena: the motion of a magnetic moment in a static magnetic field and a spin analyzer that performs a Stern-Gerlach-like selection of the neutrons based on the direction of their magnetic moments.

A magnetic moment S in an external, static magnetic field B e (of magnitude B along the unit vector e) experiences a rotation about the direction of e. The unitary transformation that corresponds to such a rotation is given by

exp(i γ t B σ·e / 2) = exp(i ϕ σ·e / 2),   (1)

where γ is the gyromagnetic ratio of the particle, t is the time that the particle interacts with the magnetic field, the variable ϕ = γtB is introduced as a shorthand for the angle of rotation, and σ = (σx, σy, σz) denotes the Pauli spin matrices. The spin analyzer acts as a projector. It is straightforward to show (see pages 172 and 250 in Ballentine [3]) that, within quantum theory, an ideal spin analyzer directed along the unit vector n is represented by the projection operator

M(S, n) = (11 + S σ·n) / 2,   (2)

where 11 is the unit matrix and S = ±1 selects one of the two possible alignments of the spin polarizer along n [14,15]. Using Equations (1) and (2), it is straightforward to construct the quantum theoretical description of each of the three stages in the experimental setup.

Stage 1. The purpose of the first spin analyzer (SA1) is to prepare neutrons with their magnetic moments in the direction of the static magnetic field B0 z. Then the particle travels for some time in a region where the field B0 z is present, but as its magnetic moment is aligned along z, the magnetic moment does not rotate. As will become clear later, to test Ozawa's inequality [14,15,29], it is necessary to be able to prepare initial states in which the magnetic moment lies in the x−y plane. In the experiment, this is accomplished by putting in place the spin flipper SF1. The spin flipper (SF1), in essence a static magnetic field aligned along the x-direction, rotates the magnetization about the x-axis by an amount proportional to the magnetic field. For simplicity, it is assumed that this rotation changes the direction of the magnetic moment from z to y [14,15]. The position of SF1, relative to the direction of flight of the neutrons, is variable. By moving SF1, one can change the time during which the particles perform rotations about the z-axis; hence, one can control the direction of the magnetic moment in the x−y plane as the particle leaves stage 1. Quantum theoretically, the action of the components of stage 1 is described by a product of unitary matrices (Equation (3)), where θ0 = π/2 if SF1 is in place and θ0 = 0 if it is not, and θ1 is the variable (through the variable position of SF1) rotation angle, the value of which will be fixed later. Obviously, in the case that SF1 is not present, because the incoming neutrons have their moments aligned along the z-direction, these moments do not perform rotations at all.

Stage 2. This stage consists of a pair of spin flippers (SF2, SF3) and a spin analyzer (SA2). The position of (SF2, SF3), relative to the direction of flight of the neutrons, is variable, whereas the position of SA2 is fixed. The action of the components of stage 2 is described by a product of matrices (Equation (4)), where, as a consequence of the variable position of (SF2, SF3), the rotation angles θ2, θ3, and θ4 change with the position of (SF2, SF3). The value of the variable S1 = ±1 labels one of the two possible alignments of the spin polarizer along z. Note that, because of the projection M(S1, z), the matrix T2 is not unitary.
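Equations (1) and (2) are simple 2×2 matrix operations, and the stage operators are just products of such rotations and projectors. The snippet below builds them explicitly; the overall sign convention of the rotation exponent and the example stage-1-like sequence are assumptions made for illustration, not the exact operators T1–T3.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and the 2x2 identity
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def sigma_dot(n):
    return n[0] * SX + n[1] * SY + n[2] * SZ

def rotation(phi, e):
    """Equation (1): exp(i*phi*sigma.e/2), rotation of the moment about unit vector e."""
    return expm(0.5j * phi * sigma_dot(e))

def analyzer(S, n):
    """Equation (2): projector M(S, n) = (11 + S*sigma.n)/2 of an ideal spin analyzer."""
    return 0.5 * (I2 + S * sigma_dot(n))

# Example (illustrative, not T1 itself): spin prepared along +z, rotated by pi/2
# about x (spin flipper), then by some angle theta1 about z, then analyzed along z.
spin_up = np.array([1.0, 0.0], dtype=complex)
psi = rotation(np.pi / 4, (0, 0, 1)) @ rotation(np.pi / 2, (1, 0, 0)) @ spin_up
prob = np.real(psi.conj() @ analyzer(+1, (0, 0, 1)) @ psi)
print("probability of passing the z analyzer with S = +1:", prob)
```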
Stage 3. The final stage consists of a spin flipper SF4 and a spin analyzer SA3. The time evolution of the magnetic moment as it traverses this stage is given by Equation (5), where θ5 is a fixed rotation angle. The value of the variable S2 = ±1 labels one of the two possible alignments of the spin polarizer along z. The matrix T3 is not unitary.

According to the postulates of quantum theory, the probability to detect a neutron leaving stage 3 is given by Equation (6) (see Ballentine [3]), where it is assumed that the detector simply counts every impinging neutron (which, in view of the very high detection efficiency, is a very good approximation; see Rauch and Werner [6]). In Equation (6), the initial state of the S = 1/2 quantum system is represented by the density matrix

ρ = (11 + a·σ) / 2,   (7)

where the real-valued vector a (|a| ≤ 1) completely determines the initial state of the quantum system. Using Equations (3)–(7), we find an expression (Equation (8)) that is independent of θ3 and θ5. Recall that, by construction of the experimental setup, θ2 + θ4 is fixed. We can make contact with the expressions used in the analysis of the neutron experiment [14,15] by substituting θ1 = 0, θ2 = φ + π/2, and θ4 = −φ − π/2, where φ is called the detuning angle [14,15]. We obtain Equation (9). As explained in detail in subsection 2.1, the experimental setup can be interpreted as performing successive measurements of the operators σφ = σx cos φ + σy sin φ and σy, their eigenvalues being S1 and S2, respectively. Note that these two operators do not commute unless cos φ = 0 and that the observed eigenvalues S1 and S2 of these two operators are correlated, as is evident from the contribution S1 S2 sin φ in Equation (9). From Equation (9), the expectation values of the various spin operators follow immediately; specifically, we obtain Equation (10), and since σφ² = σy² = 11, the variances ⟨σφ²⟩a − ⟨σφ⟩a² and ⟨σy²⟩a − ⟨σy⟩a² are completely determined by Equation (10).

FILTERING-TYPE MEASUREMENTS OF ONE SPIN-1/2 PARTICLE

The neutron experiment [14,15] can be viewed as a particular realization of a filtering-type experiment [3,30]. The layout of such an experiment is shown in Figure 2. According to this diagram, the experiment consists of performing successive Stern-Gerlach-type measurements on one spin-1/2 particle at a time. Conceptually, assuming a stationary particle source, the neutron experiment [14,15] and the filtering-type experiment shown in Figure 2 are identical; see also Figure 1 in Erhart et al. [14] and Sulyok et al. [15]. In practice, the difference between the filtering-type experiment and the neutron experiment [14,15] is that, in the latter, four experiments (labeled by the variables S1 = ±1 and S2 = ±1) are required, one for each combination of the two opposite orientations of the two spin analyzers SA2 and SA3, whereas the setup depicted in Figure 2 directly yields the results of the four separate runs.

The pictorial description of the filtering experiment goes as follows. A particle enters the Stern-Gerlach magnet M0, with its magnetic field along direction b. M0 "sends" the particle either to Stern-Gerlach magnet M1 or M2. The magnets M1 and M2, identical and both with their magnetic field along direction c, redirect the particle once more and, finally, the particle is registered by one of the four detectors D+1,1, D−1,1, D+1,2, and D−1,2.
This scenario is repeated until the statistical fluctuations of the counts of the four detectors are considered to be sufficiently small. We label the particles by a subscript α. After the αth particle leaves M1 or M2, it will hit one (but only one) of the four detectors. We assume ideal experiments, that is, at any time one and only one of the four detectors fires. We write x_α^(i,j) = 1, with i = ±1 and j = 1, 2, if the αth particle was detected by detector Di,j, and x_α^(i,j) = 0 otherwise. We define two new dichotomic variables by Equation (11). Note that, for each incoming particle, only one of the detectors clicks; hence, only one of the x_α^(i,j) is nonzero.

In the quantum theoretical description of the filtering experiment, if S1,α = ±1, the spin has been projected onto the ±b direction. Likewise, if S2,α = ±1, the spin has been projected onto the ±c direction. In other words, S1,α and S2,α are the eigenvalues of the spin operator projected onto the directions b and c, respectively. Then, according to quantum theory, the probability to observe a pair of eigenvalues (S1, S2) is given by Equation (12) (see Ballentine [3] and De Raedt et al. [30]), where the state ρ is given by Equation (7) and the M's denote projection operators. It is easy to see that Equation (9) is a particular case of Equation (12). Thus, for virtually all cases of interest, none of the operators in Equation (12) commute, yet quantum theory yields the probability P(S1, S2|a) for all cases. Clearly, the statement that one can determine the eigenvalues of two non-commuting operators in one experiment contradicts the conventional teaching that non-commuting operators cannot be diagonalized simultaneously and therefore cannot be measured simultaneously. The reason for this apparent contradiction is the hidden assumption that diagonalization and the act of measurement in a laboratory (i.e., a click of the detector) are equivalent in some sense. The filtering-type experiment is a clear example which shows that they are not: according to quantum theory, the eigenvalues S1 and S2 of the operators σ·b and σ·c, respectively, can always be measured simultaneously even though these operators cannot always be diagonalized simultaneously.
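A compact way to check that the successive-measurement description behaves as stated is to compute the filtering probability directly. The sketch below uses the standard two-step projection rule with ρ from Equation (7); the exact operator ordering of Equation (12) is assumed to be this standard form, and the Bloch vector and detuning angle are example values. With b = (cos φ, sin φ, 0) and c = (0, 1, 0) it reproduces ⟨S1⟩ = ax cos φ + ay sin φ, ⟨S2⟩ = sin φ ⟨S1⟩, and ⟨S1S2⟩ = sin φ, the expressions quoted later in the text.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def sigma_dot(n):
    return n[0] * SX + n[1] * SY + n[2] * SZ

def density_matrix(a):
    """Equation (7): rho = (11 + a.sigma)/2 for a Bloch vector a with |a| <= 1."""
    return 0.5 * (I2 + sigma_dot(a))

def M(S, n):
    """Equation (2): ideal spin analyzer along n with outcome S = +/-1."""
    return 0.5 * (I2 + S * sigma_dot(n))

def filtering_probability(S1, S2, a, b, c):
    """P(S1, S2 | a): project on +/-b first, then on +/-c (assumed form of Eq. (12))."""
    rho = density_matrix(a)
    return np.real(np.trace(M(S2, c) @ M(S1, b) @ rho @ M(S1, b)))

a = (0.6, 0.3, 0.0)                       # example initial Bloch vector
phi = 0.4                                 # example detuning angle, rad
b = (np.cos(phi), np.sin(phi), 0.0)
c = (0.0, 1.0, 0.0)

exp_s1 = sum(S1 * filtering_probability(S1, S2, a, b, c)
             for S1 in (1, -1) for S2 in (1, -1))
exp_s1s2 = sum(S1 * S2 * filtering_probability(S1, S2, a, b, c)
               for S1 in (1, -1) for S2 in (1, -1))
print("<S1>   :", exp_s1, "vs", a[0] * np.cos(phi) + a[1] * np.sin(phi))
print("<S1 S2>:", exp_s1s2, "vs", np.sin(phi))
```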
EVENT-BY-EVENT SIMULATION

A minimal, discrete-event simulation model of single-neutron experiments requires a specification of the information carried by the neutrons, of the algorithm that simulates the source and the devices used in the experimental setup (see Figure 1), and of the procedure used to analyze the data.

• Messenger: A neutron is regarded as a messenger carrying a message. In principle, there is a lot of freedom to specify the content of the message, the only criterion being that, in the end, the simulation should reproduce the results of real laboratory experiments. We adopt Occam's razor as a guiding principle to determine which kind of data the messenger should carry with it, that is, we use the minimal amount of data. The pictorial description that will be used in the following should not be taken literally: it is only meant to help visualize, in terms of concepts familiar from macroscopic physics, the minimal amount of data the messenger should carry. Picturing the neutron as a tiny classical magnet, we can use the spherical coordinates θ and ϕ to specify the direction of its magnetic moment m = (cos ϕ sin θ, sin ϕ sin θ, cos θ)^T (Equation (13)) relative to the fixed frame of reference defined by the static magnetic field B0 z. The messenger should also be aware of the time it takes to move from one point in space to another. The time of flight and the direction of the magnetic moment are conveniently encoded in a message of the type [10,11]

u = (e^(iψ(1)) cos(θ/2), e^(iψ(2)) sin(θ/2))^T.   (14)

Within the present model, the state of the neutron, that is, the message, is completely described by the angles ψ(1), ψ(2), and θ and by rules (to be specified) by which these angles change as the neutron travels through the network of devices. This model suffices to reproduce the results of single-neutron interference and entanglement experiments and of their idealized quantum theoretical descriptions [10,11]. In specifying the message of Equation (14), we exploited the isomorphism between the algebra of Pauli matrices and rotations in three-dimensional space, not because the former connects to quantum mechanics but only because we find this representation most convenient for our simulation work [9–11]. The direction of the magnetic moment follows from Equation (14) through Equation (15). A messenger with message u at time t' and position r' that travels with velocity v along the direction q during a time interval t − t' changes its message according to Equation (16). Hence, as the messenger passes through a region in which a magnetic field is present, the message u changes into the message u ← e^(i g μN T σ·B/2) u (Equation (17)), where g denotes the neutron g-factor, μN the nuclear magneton, and T the time during which the messenger experiences the magnetic field. In the event-based simulation of the experiment shown in Figure 1, the time of flight T determines the angle of rotation of the magnetic moment through Equation (17) and can, so to speak, be eliminated by expressing all operations in terms of rotation angles. However, this simplification is no longer possible in the event-based simulation of single-neutron interference and entanglement experiments [10,11].

• Source: When the source creates a messenger, its message needs to be initialized. This means that the three angles ψ(1), ψ(2), and θ need to be specified. In practice, instead of implementing stage 1, it is more efficient to prepare the messengers such that the corresponding magnetic moments are along a specified, fixed direction. For instance, to mimic fully coherent, spin-polarized neutrons that enter stage 2 with their spin along the x-axis, the source would create messengers with θ = π/2 and, without loss of generality, ψ(1) = ψ(2) = 0.

Ignoring all the details of the interaction of the magnetic moments with the Stern-Gerlach magnet, the operation of separating the incoming beam into spatially separated beams is captured by the very simple probabilistic model defined by Equation (18), where x = −1, 1 labels the two distinct spatial directions, 0 < r < 1 is a uniform pseudo-random number, Θ(x) is the unit step function and, as before, S = ±1 labels the orientation of the spin analyzer. For each incoming messenger, a new pseudo-random number is generated. Recall that |mz| ≤ 1 (see Equation (13)); hence, the first term of Equation (18) is a number between zero and two. If we set mz = ⟨σ·n⟩, the model defined by Equation (18) would generate minus and plus ones according to the projection operator of Equation (2). Applied to the neutron experiments [14,15], the function of the spin analyzer is to pass particles with, say, spin up only. In the simulation model, this translates into letting the messenger pass if x = 1 and destroying the messenger if x = −1.
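The two ingredients described above, the precession of the message in a magnetic field and the pseudo-random spin analyzer, translate into a few lines of code. In the sketch below the pass probability (1 + mz S)/2 is one simple realization of the rule described around Equation (18), and the physical prefactor of the rotation angle is folded into a single parameter; neither should be read as the exact rules of Refs. [9–11].

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1234)

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def new_message(theta, psi1=0.0, psi2=0.0):
    """Message of Equation (14): two complex amplitudes encoding theta, psi(1), psi(2)."""
    return np.array([np.exp(1j * psi1) * np.cos(theta / 2),
                     np.exp(1j * psi2) * np.sin(theta / 2)])

def precess(u, B, angle_per_field_time):
    """u <- exp(i*g*muN*T*sigma.B/2) u; g*muN*T is lumped into one angle parameter."""
    sB = B[0] * SX + B[1] * SY + B[2] * SZ
    return expm(0.5j * angle_per_field_time * sB) @ u

def m_z(u):
    """z-component of the moment carried by the message: |u0|^2 - |u1|^2 = cos(theta)."""
    return np.abs(u[0]) ** 2 - np.abs(u[1]) ** 2

def analyzer_passes(u, S):
    """Pseudo-random analyzer: pass with probability (1 + m_z*S)/2 (assumed form of Eq. (18))."""
    return rng.random() < 0.5 * (1.0 + m_z(u) * S)

# Example: moment initially along +x, precessed about z by pi/2, analyzed along +z.
u = new_message(theta=np.pi / 2)
u = precess(u, B=(0.0, 0.0, 1.0), angle_per_field_time=np.pi / 2)
passes = sum(analyzer_passes(u, S=+1) for _ in range(100_000)) / 100_000
print(f"fraction passing the analyzer (expected ~0.5): {passes:.3f}")
```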
It is of interest to explore the possibility whether different models for the spin analyzer can yield averages over many events which cannot be distinguished from the results obtained by employing the probabilistic model Equation (18). As the extreme opposite to the probabilistic model, we consider a deterministic learning machine (DLM) defined by the update rule where x = −1, 1 labels the two distinct spatial directions and −1 ≤ u ≤ 1 encodes the internal state of the DLM (the equivalent of the seed of the pseudo-random number generator). The parameter 0 ≤ γ < 1 controls the pace at which the DLM learns the value m z S. The properties of the time series of x's generated by the rules Equation (19) have been scrutinized in great detail elsewhere, see Michielsen et al. [9] and references therein. Suffice it to say here that for many events, the average of the x's converges to m z S and that the x's are highly correlated in time. Obviously, the DLM-based model is extremely simple and fully deterministic. It may easily be rejected as a viable candidate model by comparing the correlations in experimentally observed time series with those generated by Equation (19). However, if the experiment only provides data about average quantities, there is no way to rule out the DLM model. Unfortunately, the neutron experiments [14,15] do not provide the data necessary to reject the DLM model, simply because the spin analyzer passes particles with say, spin-up, only. Hence there is no way to compute time correlations. Although we certainly do not want to suggest that the spin analyzers used in the experiments behave in the extreme deterministic manner as described by Equation (19), it is of interest to test whether such a simple deterministic model can reproduce the averages computed from quantum theory and also obeys the same uncertainty relations as the genuine quantum theoretical model. • Detector: As a messenger enters the detector, the detection count is increased by one and the messenger is destroyed. The detector counts all incoming messengers. Hence, we assume that the detector has a detection efficiency of 100%. This is a good model of real neutron detectors which can have a detection efficiency of 99% or more [31]. • Simulation procedure and data analysis: First, we establish the correspondence between the initial message u initial and the description in terms of the density matrix Equation (7). To this end, we remove all devices from stage 1 and 2 and simply count the number of messages that pass SA3 with S 2 = 1, for instance. It directly follows from Equation (18) that the relative frequency of counts is given by m z , the projection of the message onto the zaxis. In other words, we would infer from the data that in a quantum theoretical description the z-component of the density matrix a z is equal to m z . By performing rotations of the original message it follows by the same argument that a = m initial . For each pair of settings (S 1 , S 2 ) of the spin analyzers (SA2,SA3) and each position of the pair of spin flippers (SF2,SF3) represented by a rotation of φ about the z-axis (see Section 2), the source sends N messengers through the network of devices shown in Figure 1. The source only creates a new messenger if (i) the previous messenger has been processed by the detector or (ii) the messenger was destroyed by one of the spin analyzers. In other words, direct communication between messengers is excluded. 
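The update rule of Equation (19) is only referred to, not spelled out, in the text above. As a hedged illustration, the following Python sketch implements one plausible deterministic-learning-machine rule with the properties described: an internal variable −1 ≤ u ≤ 1, a learning parameter 0 ≤ γ < 1, and an output x = ±1 chosen so that the updated internal variable stays as close as possible to the target m_z S, so that the average of the x's converges to m_z S while successive x's remain strongly correlated in time. The rule actually used in Michielsen et al. [9] may differ in detail.

import numpy as np

def dlm_analyzer(w, n_events, gamma=0.99, u0=0.0):
    # Deterministic learning machine (a plausible reconstruction of Eq. (19)):
    # the internal variable u in [-1, 1] tracks the target w = m_z * S, and the
    # output x = +/-1 is chosen so that the updated u stays as close to w as possible.
    u, xs = u0, np.empty(n_events)
    for k in range(n_events):
        u_plus = gamma * u + (1.0 - gamma)       # candidate update if x = +1
        u_minus = gamma * u - (1.0 - gamma)      # candidate update if x = -1
        x = +1 if abs(u_plus - w) <= abs(u_minus - w) else -1
        u = u_plus if x == +1 else u_minus
        xs[k] = x
    return xs

w = 0.3                                          # target value m_z * S
xs = dlm_analyzer(w, 200_000)
print("average of x:", xs.mean())                # converges to ~0.3
print("lag-1 autocorrelation:", np.corrcoef(xs[:-1], xs[1:])[0, 1])  # far from zero: strong time correlations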
As a device in the network receives a messenger, it processes the message according to the rules specified earlier and sends the messengers with the new message to the next device in the network. If the device is a spin analyzer, it may happen that the messenger is destroyed. The detector counts all messengers that pass SA3 and destroys these messengers. For a sequence of N messengers all carrying the same initial message a = m initial , this procedure yields a count N(S 1 , S 2 |a) (recall that φ is fixed during each sequence of N events). Repeating the procedure for the four pairs of settings yields the relative frequencies . (20) Note that the numerator in Equation (20) is not necessarily equal to N because messengers may be destroyed when they enter a spin analyzer. From Equation (20) we compute • Validation. The event-based model reproduces the results of the quantum theoretical description if, within the usual statistical fluctuations, we find that F(S 1 , S 2 |a) ≈ P(S 1 , S 2 |a) with P(S 1 , S 2 |a) given by Equation (9). This correspondence is most easily established by noting that for fixed φ and a, the three expectations Equations (21)(22)(23) completely determine Equation (20) and that, likewise, the quantum theoretical distribution Equation (9) is completely determined by the expectations Equations (21)(22)(23) with F(S 1 , S 2 |a) replaced by P(S 1 , S 2 |a). In other words, for the event-based model to reproduce the results of the quantum theoretical description of the neutron experiment [14,15], it is necessary and sufficient that the simulation results for Equations (21)(22)(23) are in agreement with the quantum theoretical results (see Equation (10) S 1 = a x cos φ + a y sin φ, S 2 = sin φ S 1 , and S 1 S 2 = sin φ). In Figures 3, 4 we present representative results of eventbased simulations of the neutron experiment [14,15], showing that the simulation indeed reproduces the predictions of the quantum theoretical description of the neutron experiment [14,15]. Comparing Figures 3, 4 we can only conclude that it does not matter whether the model for the spin analyzers uses pseudo-random numbers Equation (18) or is DLM-based Equation (19). Summarizing: the event-based simulation model of the neutron experiment [14,15] presented in this section does not rely, in any sense, on concepts of quantum theory yet it reproduces all features of the quantum theoretical description of the experiment. Although the event-based model is classical in nature, it is not classical in the sense that it cannot be described by classical Hamiltonian dynamics. UNCERTAINTY RELATIONS: THEORY The neutron experiment [14,15] was conceived to test an errordisturbance uncertainty relation proposed by Ozawa [29]. By introducing particular definitions of the measurement error (A) of an operator A and the disturbance η(B) of an operator B, Ozawa showed that and In Ozawa's model of the measurement process, the state of the system + measurement devices is represented by a direct product of the wavefunction of the system and the wavefunction of the measurement devices [29]. The operators A and B refer to the dynamical variables of the system while the M A and M B refer to the dynamical variables of two different measurement devices. Furthermore, it is assumed that both the system and the measuring devices (probes) are described by quantum theory, i.e., the time evolution of the whole system is unitary [29,32]. 
Although this basic premise is at odds with the fact that experiments yield definite answers [1][2][3]33], within the realm of the quantum theoretical model, it "defines" the measurement process, see Allahverdyan et al. [4] for an extensive review. Following Fujikawa [32], Fujikawa and Umetsu [34], inequalities such as Equation (24) Assuming that [C, D] = 0, we have (30) or, taking expectation values, Taking the absolute value of both sides of Equation (31) and using the triangle inequality we find Next, we apply the inequality 2 (X) (Y) ≥ | [X, Y] | [3,35] to each of the three terms in Equation (32) and obtain The derivation of Equation (33) only makes use of the triangle inequality, the notion of a non-negative inner product on a vector space, the Cauchy-Schwarz inequality and the assumption that [C, D] = 0. Therefore Equation (33) is "universally valid" [29,32,34] In contrast, the common interpretation of Heisenberg's original writings [36] suggests an uncertainty relation which reads [14,15,29,32,34] Thereby it is assumed, without solid justification, that (A) and η(B) correspond to the "uncertainties" which Heisenberg had in mind, see also Busch et al. [37,38]. Unlike Equation (24), inequality Equation (34) lacks a mathematical rigorous basis and therefore it is not a surprise that it can be violated [1]. Indeed, the data recorded in the neutron experiment clearly violate Equation (34) [14,15]. In general, in mathematical probability theory as well as quantum theory, inequalities such as the Cramér-Rao bound [39], the Robertson inequality [35], Equations (24,33) are mathematical identities which result from applications of the Cauchy-Schwarz inequality. Being mathematical identities within the realm of standard arithmetic, they are void of any physical meaning and cannot be violated. Therefore, if an experiment indicates that such an identity (i.e., inequality) might be violated, this can only imply that there is an ambiguity (error) in the mapping between the variables used in the theoretical model and those assigned to the experimental observations [30,40]. Any other conclusion that is drawn from such a violation cannot be justified on logical/mathematical grounds. Following Erhart et al. [14], Sulyok et al. [15], we assume that the state of the system is represented by the density matrix ρ = |z z|, that is the magnetic moment of the neutrons are assumed to be aligned along the z-direction. With A = σ x and B = σ y we have Erhart et al. [14], Sulyok et al. [15] M A = σ φ = σ x cos φ + σ y sin φ, and σ(A) = σ(B) = 1. Combining Equations (24,35) yields [14,15] Note the absence ofh in Equation (36), in agreement with work that shows thath may be eliminated from the basic equations of (low-energy) physics by a re-definition of the units of mass, time, etc. [41,42]. Conceptually, the application of Equation (24) to the neutron experiment [14,15] is not as straightforward as it may seem. In a strict sense, in the neutron experiment [14,15], there are no measurements of the kind envisaged in Ozawa's measurement model. This is most obvious from the quantum theoretical description of the experiment given in Section 2: for fixed S 1 and S 2 , the relative frequency of detector counts is given by Equation (9), and "noise" caused by "probes" does not enter the description. Indeed, from the expressions of 2 (A) and η 2 (B) in terms of spin operators, see Equation (35), it is immediately clear that in order to determine 2 (A) and η 2 (B), there is no need to actually measure a dynamical variable. 
Moreover, in the laboratory experiment, the values of S 1 and S 2 are not actually measured but, as they represent the orientation of the spin analyzers SA2 and SA3, are kept fixed for a certain period of time. Unlike in the thought experiment for which Equation (24) was derived, the outcome of an experimental run is not the set of pairs (S 1 , S 2 ) but rather the number of counts for this particular set of settings. Nevertheless, with some clever manipulations [14,15], it is possible to express the unit operators that appear in Equation (35) in terms of dynamical variables, the expectations of which can be extracted from the data of single-neutron experiments. If the state of the spin-1/2 system is described by the density matrix ρ = |z z|, we have [14,15] where we used ±z|M A |±z = ±z|M B |±z = 0 and P(S 1 , S 2 |a) is given by Equation (9). The expressions Equation (37) are remarkable: they show that 2 (A) and η 2 (B) have to be obtained from two incompatible experiments, namely with initial magnetic moments along x and y, respectively. From the point of view of probability theory, this immediately raises the question why, in this particular case, it is possible to derive mathematically meaningful results that involve two different conditional probability distributions with incompatible conditions. As first pointed out by Boole [40] and generalized by Vorob'ev [43], this is possible if and only if there exists a "master" probability distribution for the union of all the incompatible conditions. For instance, in two-and threeslit experiments [3,[44][45][46] such a master probability distribution does not exist by construction of the experiment. Another prominent example is the violation of one or more Bell inequalities which is known to be mathematically equivalent to the statement that a master probability distribution for the relevant combination of experiments does not exist [30,40,47,48]. However, in contrast to these two examples, in the case of the neutron experiment, one can devise a realizable experiment that simultaneously yields all the averages that can be obtained from two experiments (one with a = x and another one with a = y) of the kind shown in Figures 1 or 2. Our proof is based on the extension of the filtering-type experiment shown in Figure 2 to three dichotomic variables. Imagine that instead of placing detectors in the output beams that emerge from magnets M 1 and M 2 , we place four identical magnets with their magnetic fields in the direction d and count the particles in each of the eight beams. A calculation, similar to the one that lead to Equation (12) for the probability to observe the given triple (S 1 , S 2 , S 3 ). Choosing a = x, c = y, and b = d = x cos φ + y sin φ it follows immediately that S 1 , S 2 , and S 1 S 2 obtained from Equation (12) with a = x agree with the same averages computed from Equation (38). Likewise, S 1 , S 2 , and S 1 S 2 obtained from Equation (12) with a = y coincide with S 2 S 3 , S 1 S 3 and S 1 S 2 computed from Equation (38). UNCERTAINTY RELATIONS: EVENT-BASED SIMULATION In the neutron experiment [14,15] and therefore also in our event-based simulation, the numerical values of (A) and η(B) are obtained by counting detection events. Let N(S 1 , S 2 |a) denote the count for the case in which the direction of the magnetic moment of the incoming neutrons (after stage 1) is a and the analyzers SA2 and SA3 are along the directions S 1 and S 2 , respectively. Then, we have As shown in Erhart et al. [14], Sulyok et al. 
[15], the neutron counts observed in the single-neutron experiment yield numerical values of ε(A)η(B) + ε(A)σ(B) + σ(A)η(B) which are in excellent agreement with the quantum theoretical prediction 2√2 cos φ sin(φ/2) + 2 sin(φ/2) + √2 cos φ. We have already demonstrated that the "classical" event-based simulation model produces results for the averages which, within the statistical errors, cannot be distinguished from those predicted by quantum theory. Therefore, it is to be expected that the data generated by the event-by-event simulation also satisfy the universally valid error-disturbance uncertainty relation Equation (24) and, as shown in Figure 5, this is indeed the case. As expected, the data produced by the event-based simulation also violate Equation (34), independent of whether we use pseudo-random numbers (see Equation 18) or the DLM rule (see Equation 19) to model the operation of the spin analyzer. Finally, for the sake of completeness, we show that the event-by-event simulation produces data which comply with the standard Heisenberg-Robertson uncertainty relation σ(σ_x)σ(σ_y) ≥ |⟨σ_z⟩|. Without loss of generality, the state of the spin-1/2 particle may be represented by the density matrix Equation (7), also if it is interacting with other degrees of freedom, and the inequality σ²(σ_x)σ²(σ_y) ≥ ⟨σ_z⟩² holds or, using Equation (7), (1 − a_x²)(1 − a_y²) ≥ a_z² (Equation (40)). The last inequality also trivially follows from the constraint a_x² + a_y² + a_z² ≤ 1. As in the case of Equation (36), there is no ℏ in Equation (40), in agreement with the idea that ℏ may be eliminated by re-defining the units of mass, time, etc. [41,42]. The simulation procedure that we use is as follows. […] 5. Go to step 1 as long as a_z ≤ 1. 6. Plot the results for (1 − ⟨σ_x⟩²)(1 − ⟨σ_y⟩²) and ⟨σ_z⟩² as a function of a_z. The results of the event-based simulation are shown in Figure 6. Within the usual statistical errors, the classical, statistical model produces data which comply with the Heisenberg-Robertson uncertainty relation Equation (40). DISCUSSION We have shown that a genuine classical event-based model can produce events such that their statistics satisfies the (generalized) Heisenberg-Robertson uncertainty relation which, according to present teaching, is a manifestation of truly quantum mechanical behavior. One might be tempted to argue that in the event-based model, the direction of the magnetic moment is known exactly and can therefore not be subject to uncertainty. However, this argument is incorrect in that it ignores the fact that the model of the spin analyzers generates [through the use of pseudo-random numbers, see Equation (18), or the update rule Equation (19)] a distribution of outcome frequencies. In fact, as is well-known, the variance of any statistical experiment (including those that are interpreted in terms of quantum theory) satisfies the Cramér-Rao bound, a lower bound on the variance of estimators of a parameter of the probability distribution in terms of the Fisher information [39]. The Cramér-Rao bound contains, as a special case, Robertson's inequality σ(x)σ(p) ≥ ℏ/2 [16,19,[23][24][25]49]. The observation that a classical statistical model produces data that comply with "quantum theoretical" uncertainty relations is a manifestation of this general mathematical result. The uncertainty relations provide bounds on the statistical uncertainties in the data and, as shown by our event-based simulation of the neutron experiment [14,15], are not necessarily a signature of quantum physics, conjugate variables, etc.
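These two statements, that Equation (36) always holds while Equation (34) can be violated, are easy to check numerically. The Python sketch below evaluates the quantum theoretical prediction quoted above, 2√2 cos φ sin(φ/2) + 2 sin(φ/2) + √2 cos φ, against the lower bound |⟨[σ_x, σ_y]⟩|/2 = |⟨σ_z⟩| = 1 for ρ = |z⟩⟨z|. The split of that total into ε(A) = 2 sin(φ/2) and η(B) = √2 cos φ (with σ(A) = σ(B) = 1) is our assumption, chosen only so that the three terms reproduce the quoted sum; the range is restricted to 0 ≤ φ ≤ π/2 so that all factors are non-negative.

import numpy as np

phi = np.linspace(0.0, np.pi / 2, 200)             # restrict to cos(phi) >= 0

# Assumed split consistent with the quoted total: eps(A) = 2 sin(phi/2),
# eta(B) = sqrt(2) cos(phi), sigma(A) = sigma(B) = 1 for the state |z><z|.
eps_A = 2 * np.sin(phi / 2)
eta_B = np.sqrt(2) * np.cos(phi)
ozawa_lhs = eps_A * eta_B + eps_A * 1.0 + 1.0 * eta_B
bound = 1.0                                         # |<[sigma_x, sigma_y]>| / 2 = |<sigma_z>| = 1

print("Eq. (36) holds for all phi:", bool(np.all(ozawa_lhs >= bound - 1e-12)))
print("Heisenberg-type product eps*eta < 1 somewhere:", bool(np.any(eps_A * eta_B < bound)))
print("e.g. at phi = pi/2: eps*eta =", float((eps_A * eta_B)[-1]))   # ~0, violating Eq. (34)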
As mentioned in the introduction, the event-based approach has successfully been applied to a large variety of single-photon and single-neutron experiments that involve interference and entanglement. In the present paper, we have shown that, without any modification, the same simulation approach can also mimic, event-by-event, an experiment that probes "quantum uncertainty." As none of these demonstrations rely on concepts of quantum theory and as it is unlikely that the success of all these demonstrations is accidental, one may wonder what it is that makes a system genuine "quantum." In essence, in our work we adopt Bohr's point of view that "There is no quantum world. There is only an abstract physical description" (reported by Petersen [50], for a discussion see Plotnitsky [51]) and that "The physical content of quantum mechanics is exhausted by its power to formulate statistical laws" [8]. Or, to say it differently, quantum theory describes our knowledge of the atomic phenomena rather than the atomic phenomena themselves [52]. In other words, our viewpoint is that quantum theory captures, and does so extremely well, the inferences that we, humans, make on the basis of experimental data [53]. However it does not describe cause-and-effect processes. Quantum theory predicts the probabilities that events occur, but it cannot answer the question "Why are there events?" [54], very much as Euclidean geometry cannot answer the question "What is a point?." On a basic level, it is our perceptual and cognitive system that defines, registers and processes events. Events and the rules that create new events are the key elements of the event-based approach. There is no underlying theory that is supposed to give rise to events and everything follows by inference on the basis of the generated data, very much like in real experiments. The implication of the work presented in our paper is that the beautiful single-neutron experiments [14,15] can be explained in terms of cause-and-effect processes in an event-byevent manner, without reference to quantum theory and on a level of detail about which quantum theory has nothing to say. Furthermore, our work suggests that the relevance of "quantum theoretical" uncertainty relations to real experiments needs to be reconsidered.
9,546.6
2014-03-07T00:00:00.000
[ "Physics" ]
Gate-reflectometry dispersive readout and coherent control of a spin qubit in silicon Silicon spin qubits have emerged as a promising path to large-scale quantum processors. In this prospect, the development of scalable qubit readout schemes involving a minimal device overhead is a compelling step. Here we report the implementation of gate-coupled rf reflectometry for the dispersive readout of a fully functional spin qubit device. We use a p-type double-gate transistor made using industry-standard silicon technology. The first gate confines a hole quantum dot encoding the spin qubit, the second one a helper dot enabling readout. The qubit state is measured through the phase response of a lumped-element resonator to spin-selective interdot tunneling. The demonstrated qubit readout scheme requires no coupling to a Fermi reservoir, thereby offering a compact and potentially scalable solution whose operation may be extended above 1 K. While further improvements in single-and two-qubit gates can be expected, growing research efforts are now being directed to the realization of scalable arrays of coupled qubits [15][16][17][18][19] . Leveraging the well-established silicon technology may enable facing the scalability challenge, and initiatives to explore this opportunity are on the way 20 . Simultaneously, suitable qubit device geometries need to be developed. One of the compelling problems is to engineer scalable readout schemes. The present work addresses this important issue. It has been shown that a microwave excitation applied to a gate electrode drives Rabi oscillations via the electric-dipole spin resonance mechanism 4-6,21-23 . The possibility of using a gate as sensor for qubit readout would allow for a compact device layout, with a clear advantage for scalability. Gate reflectometry probes charge tunneling transitions in a quantum dot system through the dispersive shift of a radiofrequency (rf) resonator connected to a gate electrode [24][25][26][27] . Jointly to spin-selective tunneling, e.g. due to Pauli spin blockade in a double quantum dot (DQD), this technique provides a way to measure spin states. In a similar fashion, the phase shift of a superconducting microwave resonator coupled to the source of an InAs nanowire has enabled spin qubit dispersive readout 22 . In Si, recent gate reflectometry experiments have shown single-shot electron spin detection [28][29][30] . Here, we combine coherent spin control and gate dispersive readout in a compact qubit device. Two gates tune an isolated hole DQD, and two distinct electric rf tones (one per gate) allow spin manipulation and dispersive readout. Spin initialization and control are performed without involving any charge reservoir; qubit readout relies on the spin-dependent phase response at the DQD charge degeneracy point. We assess hole single spin dynamics and show coherent spin control, validating a protocol for complete qubit characterization exploitable in more complex architectures. Results Double quantum dot dispersive spectroscopy. The experiment is carried out on a double-gate, p-type Si transistor fabricated on a silicon-on-insulator 300-mm wafer using an industrystandard fabrication line 6 . The device, nominally identical to the one in Fig. 1c, has two parallel top gates, G R and G C , wrapping an etched Si nanowire channel. The gates are defined by e-beam lithography and have enlarged overlapping spacers to avoid doping implantation in the channel. The measurement circuit is shown in Fig. 1a. 
At low temperature (we operate the device at 20 mK using a dilution refrigerator), DC voltages V C and V R are applied to these gates to induce two closely spaced hole quantum dots. The control gate G C delivers also sub-μs pulses and microwave excitation in the GHz range to manipulate the qubit. The readout gate, G R , is wire-bonded to a 220 nH surface-mount inductor. Along with a parasitic capacitance and the device impedance, the inductor forms a tank circuit resonating at f 0 = 339 MHz. Figure 1b shows the phase ϕ and attenuation A of the reflected signal as a function of the resonator driving frequency f R . From the slope of the phase trace at f 0 we extract a quality factor Q loaded ≃ 18. The qubit device acts as a variable impedance load for the resonator, and the resonant frequency f 0 undergoes a dispersive shift according to the state of the qubit. To determine the charge stability diagram of our DQD, we probe the phase response of the resonator while sweeping the DC gate voltages V R and V C (see Supplementary Note 2 and Supplementary Fig. 2). The diagonal ridge in Fig. 2a denotes the interdot charge transition we shall focus on hereafter. Along this ridge, the electrochemical potentials of the two dots line up enabling the shuttling of a hole charge from one dot to the other. This results in a phase variation Δϕ in the reflected signal. Quantitatively, Δϕ is proportional to the quantum capacitance associated with the gate voltage dependence of the energy levels involved in the interdot charge transition. Interdot tunnel coupling results in the formation of molecular bonding (+) and anti-bonding (−) states with energy levels E + and E − , respectively. These states have opposite quantum capacitance since C Q,± = −α 2 (∂ 2 E ± /∂ε 2 ) 27 . Here ε is the gate-voltage detuning along a given line crossing the interdot charge transition boundary, and α is a leverarm parameter relating ε to the energy difference between the electrochemical potentials of the two dots (we estimate α ≃ 0.58 eV V −1 along the detuning line in Fig. 2a). The width of the Δϕ ridge, once translated into energy, gives the interdot tunnel coupling, t. We estimate t between 6.4 and 8.5 μeV, depending on whether thermal fluctuations contribute or not to the dispersive response (see Supplementary Note 3). The total charge parity and the spin character of the DQD states can be determined from the evolution of the interdot ridge in an applied magnetic field, B 31 . Figure 2b shows the B-dependence of the phase signal at the detuning line indicated in Fig. 2a. Four representative traces taken from this plot are shown in Fig. 2c. The interdot phase signal progressively drops with B. At B = 0.355 T the line profile is slightly asymmetric, while a double-peak structure emerges at B = 0.46 T. The two peaks move apart and weaken by further increasing B, as revealed by the trace at B = 0.85 T. The observed behavior can be understood in terms of an interdot charge transition with an even number of holes in the DQD, in a scenario equivalent to a (0, 2) ↔ (1, 1) transition. We shall then refer to a "(0, 2)" and a "(1, 1)" state, even if the actual number of confined holes is larger (we estimate around ten, see Supplementary Note 2). The ε dependence of the DQD states at finite B is presented in Fig. 2d. 
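For orientation, the numbers quoted above can be tied together with two textbook relations: the resonance condition f_0 = 1/(2π√(LC)) of the lumped-element tank circuit, and the quantum capacitance C_Q,± = −α²(∂²E_±/∂ε²) of the bonding/anti-bonding branches E_± = ±½√(ε² + 4t²) of a two-level double-dot model. The Python sketch below is an illustration under these stated assumptions, with ε expressed as an energy and the lever arm folded in as (eα)² (one common convention), using L = 220 nH, f_0 = 339 MHz, α ≈ 0.58 and t ≈ 7 μeV from the text.

import numpy as np

e = 1.602176634e-19                      # elementary charge (C)

# Tank circuit: f0 = 1/(2*pi*sqrt(L*C)) -> capacitance seen by the 220 nH inductor
L, f0 = 220e-9, 339e6
C_total = 1.0 / (L * (2 * np.pi * f0) ** 2)
print(f"total (parasitic + device) capacitance ~ {C_total * 1e12:.2f} pF")

# Quantum capacitance of the bonding/anti-bonding branches of a two-level DQD,
# E_pm(eps) = +/- 0.5*sqrt(eps^2 + 4 t^2), with eps expressed as an energy and
# the lever arm alpha folded in as (e*alpha)^2.
alpha = 0.58                             # lever arm from the text
t = 7.0e-6 * e                           # tunnel coupling ~7 ueV, in joules
eps = np.linspace(-60e-6, 60e-6, 401) * e
curvature = 2 * t ** 2 / (eps ** 2 + 4 * t ** 2) ** 1.5    # |d2 E_pm / d eps^2|
C_Q = (e * alpha) ** 2 * curvature
print(f"|C_Q| at zero detuning ~ {C_Q.max() * 1e15:.2f} fF (peak width set by t)")

With these numbers the fractional frequency pull is roughly Δf_0/f_0 ≈ C_Q/(2C_total) ~ 10⁻³, i.e. a phase response of order a degree for the loaded quality factor quoted above (a rough estimate only).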
Deeply in the positive detuning regime, different g-factors for the left (g à L ) and the right dot (g à R ) result in four non-degenerate (1, 1) levels corresponding to the following spin states: |⇓⇓〉, |⇑⇓〉, |⇓⇑〉, |⇑⇑〉 22,32,33 . At large negative detuning, the ground state is a spin-singlet state S(0,2) and the triplet states T(0,2) lie high up in energy. Around zero detuning, the |⇑⇓〉, |⇓⇑〉 states hybridize with the S(0, 2) state forming an unpolarized triplet T 0 (1, 1) and two molecular singlets, S g and S e , with bonding and anti-bonding character, respectively (Supplementary Note 3). We use the spectrum of Fig. 2d to model the evolution of the interdot phase signal in Fig. 2b, c. Importantly, we make the assumption that the average occupation probability of the available excited states are populated according to a Boltzmann distribution with an effective temperature T eff , which is used as a free parameter. Because the reflectometry signal is averaged over many resonator cycles, Δϕ ¼ P i hΔϕi i , where 〈Δϕ〉 i is the phase response associated to state i weighted by the respective occupation probability 31 (here i labels the DQD levels in Fig. 2d). Figure 2e shows 〈Δϕ〉 i as a function of ε for T eff = 250 mK. The spin polarized triplet states T − and T + (i.e. |⇓⇓〉 and |⇑⇑〉, respectively) are linear in ε and, therefore, they do not cause any finite phase shift; S g , S e , and T 0 (1, 1), on the other hand, possess a curvature and are sensed by the reflectometry apparatus (Supplementary Note 3). We note that the phase signal for T 0 (1, 1) has a peak-dip line shape whose minimum lies at positive ε (blue trace), partly counterbalanced by the positive phase signal due to S e . The S g state causes a pronounced dip at negative ε (green trace), dominating over the peak component of T 0 . The overall net result is a phase signal with an asymmetric double-dip structure consistent with our experimental observation. This simple model, with the chosen T eff = 250 mK, qualitatively reproduces the emergence of the double-dip structure at B~0.4 T, as well as its gradual suppression at higher B, as shown in the inset to Fig. 2b and in Fig. 2f (increasing the Zeeman energy results in the depopulation of the S g and T 0 excited states in favor of the T − (1,1) ground state, for which Δϕ = 0). Dispersive detection of electric-dipole spin resonance. Now that we have elucidated the energy-level structure of the DQD, we can discuss the operation of the device as a single-hole spin qubit with electrical control and dispersive readout. Electric-dipole spin resonance (EDSR) 6,23,34 is induced by a microwave voltage modulation applied to gate G C . To detect EDSR dispersively, the resonating states must have different quantum capacitances. The DQD is initially tuned to the position of the red star in Fig. 2c, where the DQD is in a "shallow" (1,1) configuration, i.e. close to the boundary with the (0,2) charge state (more details in Supplementary Note 4 and Supplementary Fig. 4). Figure 3a shows the dispersive measurement of an EDSR line. The microwave gate modulation of frequency f C is applied continuously and B is oriented along the nanowire axis. We ascribe the resonance line to a second-harmonic driving process where 2hf C = gμ B B (h the Planck's constant, μ B the Bohr magneton and g the effective hole g-factor). From this resonance condition we extract g = 1.735 ± 0.002, in agreement with previous works 6, 23 . The first harmonic signal, shown in the inset to Fig. 3a, is unexpectedly weaker. 
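As a deliberately simplified illustration of the Boltzmann-weighted model described above, the Python sketch below keeps only three levels: the bonding and anti-bonding singlets S_g and S_e of a two-level double-dot Hamiltonian and a Zeeman-shifted T−(1, 1) branch that is linear in ε and therefore carries no curvature. It weights the level curvatures with Boltzmann factors at T_eff = 250 mK and shows how the net dispersive signal at zero detuning is suppressed as the Zeeman energy pushes T−(1, 1) below the singlet ground state. The hybridized T0(1, 1) branch and the g-factor difference between the dots are omitted, so only the qualitative trend of Fig. 2b, f is reproduced.

import numpy as np

kB = 1.380649e-23                        # Boltzmann constant (J/K)
e = 1.602176634e-19
ueV = 1e-6 * e

def net_signal(E_levels, eps, T_eff=0.25):
    # Boltzmann-weighted dispersive signal ~ -sum_i p_i(eps) * d2E_i/deps^2
    kT = kB * T_eff
    w = np.exp(-(E_levels - E_levels.min(axis=0)) / kT)
    p = w / w.sum(axis=0)
    curv = np.array([np.gradient(np.gradient(E, eps), eps) for E in E_levels])
    return -(p * curv).sum(axis=0)

t = 7.0 * ueV
eps = np.linspace(-80, 80, 801) * ueV
S_g = -0.5 * np.sqrt(eps ** 2 + 4 * t ** 2)      # bonding singlet (negative curvature)
S_e = +0.5 * np.sqrt(eps ** 2 + 4 * t ** 2)      # anti-bonding singlet
for Ez_ueV in (0.0, 10.0, 40.0):                 # Zeeman energy of the T-(1,1) branch
    T_minus = 0.5 * eps - Ez_ueV * ueV           # linear in eps -> zero curvature
    sig = net_signal(np.array([S_g, S_e, T_minus]), eps)
    print(f"E_Z = {Ez_ueV:5.1f} ueV -> signal at eps = 0 (arb. units): {sig[len(eps) // 2]:.2e}")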
Though both first and second harmonic excitations can be expected 35 , the first harmonic EDSR line (inset to Fig. 3a) is unexpectedly weaker. A comparison of the two signal intensities requires the knowledge of many parameters (relaxation rate, microwave power, field amplitude) and calls for deeper investigations. The visibility of the EDSR signal can be optimized by a fine tuning of the gate voltages. Figure 3b shows a high-resolution measurement over a narrow region of the stability diagram around the interdot charge transition boundary at B = 0.52 T; the interdot line has a double-peak structure, consistently with the data in Fig. 2b, c. The measurement is performed while applying a continuous microwave tone f C = 7.42 GHz. EDSR appears as a distinct phase signal around V C ≃ 362.5 mV and V R ≃ 1040 mV, i.e. slightly inside the (1,1) charge region, pinpointed by the black arrow as I/R. Such EDSR feature is extremely localized in the stability diagram reflecting the gate-voltage dependence of the hole g-factor 23 . Figure 3c displays line cuts across the interdot transition line at fixed V R and different microwave excitation conditions. With no microwaves excitation, we observe the double-peak line shape discussed above. With a microwave gate modulation at f C = 7.42 GHz, the spin resonance condition is met at V C ≃ 362.45 mV, which results in a pronounced EDSR peak, the same observed at point I/R in Fig. 3b (see also Supplementary Fig. 4). The peak vanishes when f C is detuned by 20 MHz (cyan trace). At point I/R, resonant microwave excitation enables the spectroscopy of the T 0 (1, 1) state. The inset to Fig. 3c shows the signal we expect from our model (Supplementary Note 4). In a small detuning window, the populations of T − (1, 1) and T 0 (1, 1) are assumed to be balanced by EDSR (see the energy levels in the inset to Fig. 3b); this results in a phase signal dramatically enhanced resembling the feature centered at I/R in the main panel. A further confirmation that the spin transitions are driven between T − (1, 1) and T 0 (1, 1) is given by the extrapolated intercept at 0 T of the EDSR transition line in Fig. 3a, found much smaller (<100 MHz) than t. In the following, we shall use point I/R to perform qubit initialization and readout. Qubit control and readout. The device is operated as a spin qubit implementing the protocol outlined in Fig. 4a. The voltage sequence in the upper part of Fig. 4a tunes the DQD at the control point C (≃1 mV deep in the (1, 1) region) where holes are strongly localized in either one or the other dot with negligible tunnel coupling. A microwave burst of duration τ burst and frequency f C drives single spin rotations between |⇓⇓〉 and |⇑⇓〉; the system is then brought back to I/R in the "shallow" (1, 1) regime for a time t wait for readout and initialization. The dispersive readout eventually relies on the spin-resolved phase shift at I/R, though the reflectometry tone f R is applied during the whole sequence period T M and the reflected signal is streamed constantly to the acquisition module. First, we determine the lifetime T 1 of the excited spin state at the readout point I/R by sweeping t wait after a π-burst at point C. The results are shown in Fig. 4b. The phase signal rapidly diminishes with increasing t wait because spin relaxation depopulates the excited spin state in favor of the non-dispersive T − (1, 1) ground state. The estimated spin lifetime at the readout position is T 1 = 2.7 ± 0.7 μs (see Supplementary Note 5). 
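The T_1 value quoted above is obtained from the decay of the dispersive signal with t_wait. A minimal fitting sketch in Python (requires NumPy and SciPy; synthetic data stand in for the measured trace, and a single-exponential decay to a constant background is assumed) illustrates the procedure.

import numpy as np
from scipy.optimize import curve_fit

def decay(t, A, T1, B):
    # Single-exponential decay of the readout phase signal toward a constant background
    return A * np.exp(-t / T1) + B

rng = np.random.default_rng(3)
t_wait = np.linspace(0.1, 12.0, 25)                              # waiting time in microseconds
data = decay(t_wait, A=1.0, T1=2.7, B=0.05) + 0.03 * rng.standard_normal(t_wait.size)  # synthetic trace

popt, pcov = curve_fit(decay, t_wait, data, p0=(1.0, 3.0, 0.0))
T1_fit, T1_err = popt[1], np.sqrt(np.diag(pcov))[1]
print(f"T1 = {T1_fit:.2f} +/- {T1_err:.2f} us")                  # ~2.7 us, cf. the value quoted above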
By shifting the position of a 100 ns microwave burst within a 12 μs pulse, no clear decay of the dispersive signal is observed, which suggests a spin lifetime at the manipulation point longer than 10 μs. We demonstrate coherent single spin control in the chevron plot of Fig. 4c. The phase signal is collected as a function of microwave burst time τ burst and driving frequency f C . The spin state is initialized at point I/R (t wait ~ T 1 ). In Fig. 4d the phase signal is plotted as a function of τ burst with f C set at the Larmor frequency f Larmor . The Rabi oscillations have 15 MHz frequency, consistent with Refs. roro and gtensor. The non-monotonous envelope is attributed to random phase accumulation in the qubit state by off-resonant driving at f Larmor ± f R due to upconversion of microwave and reflectometry tones during the manipulation time. Data in Fig. 4d have been averaged over 30 measurements, though the oscillations are easily distinguishable from single scans where each point is integrated over 100 ms. Figure 4 witnesses the success of using electrical rf signals both for coherent manipulation by EDSR and for qubit-state readout by means of gate reflectometry. [Figure 3 caption: a Phase response as a function of B and microwave frequency f C . B is oriented along the y direction with respect to the frame of Fig. 1a. The linear phase ridge denoted by a red arrow is a characteristic signature of EDSR. It corresponds to a second-harmonic signal, while the much weaker first harmonic is shown in the lower inset. b Stability diagram at B = 0.52 T (orientation β = 55° and θ = 90° according to the diagram in the upper inset of a) with f C = 7.42 GHz and microwave power P C ≈ −80 dBm. EDSR between T − (1, 1) and T 0 (1, 1) (purple arrows in inset) is driven at point I. In the stability diagram, the change of population induced by EDSR is visible as a localized phase signal at point I/R. c Phase shift at V R = 1039.9 mV as a function of V C without microwave irradiation (dark), and under on-resonance and off-resonance excitation at f C = 7.42 and 7.60 GHz, respectively. EDSR-stimulated transitions appear as a pronounced peak whose position and line shape are compatible with our model (inset).] Discussion The measured T 1 is compatible with the relaxation times obtained for hole singlet-triplet states in acceptor pairs in Si 36 and in Ge/Si nanowire double quantum dots 37 ; in both cases T 1 has been measured at the charge degeneracy point with reflectometry setups similar to ours. Nonetheless, charge detector measurements have shown T 1 approaching 100 μs for single hole spins in Ge hut wire quantum dots 38 and ≲1 ms for Ge/Si singlet-triplet systems 39 . This suggests that, despite the intrinsic spin-orbit coupling, single spin lifetimes in the ms range might be achievable in Si too. Strategies to boost T 1 at the readout point may consist of inserting rf isolators between the coupler and the amplifier to reduce the backaction on the qubit and avoiding high-κ dielectrics in the gate stack to limit charge noise. We note that T 1 could depend on the orientation of the magnetic field as well 40 . Future studies on magnetic field anisotropy will clarify whether T 1 , along with the effective g-factors (and hence the dispersive shift for readout) and Rabi frequency, can be maximized at once along a specific direction.
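The chevron pattern and the 15 MHz Rabi oscillations described above follow the standard detuned-Rabi expression P(τ, Δ) = Ω²/(Ω² + Δ²) · sin²(√(Ω² + Δ²) τ/2), with Ω the on-resonance angular Rabi frequency and Δ = 2π(f C − f Larmor). The following Python sketch (illustrative values only; the envelope damping and the off-resonant driving at f Larmor ± f R mentioned above are not modeled) generates such a map.

import numpy as np

def rabi(tau, delta_f, f_rabi=15e6):
    # Detuned Rabi oscillation: excited-state population vs. burst length tau (s)
    # and drive detuning delta_f = f_C - f_Larmor (Hz), for a bare Rabi frequency f_rabi.
    omega, delta = 2 * np.pi * f_rabi, 2 * np.pi * delta_f
    omega_eff = np.hypot(omega, delta)
    return (omega / omega_eff) ** 2 * np.sin(omega_eff * tau / 2) ** 2

tau = np.linspace(0, 300e-9, 301)                 # burst length up to 300 ns
detuning = np.linspace(-40e6, 40e6, 161)          # f_C - f_Larmor
chevron = rabi(tau[None, :], detuning[:, None])   # 2D map reproducing the chevron shape
print("on-resonance pi-time ~", 1 / (2 * 15e6), "s")   # ~33 ns for a 15 MHz Rabi frequency
print("map shape:", chevron.shape)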
Technical improvements intended to enhance the phase sensitivity, like resonators with higher Q-factor and parametric amplification, could push the implemented readout protocol to distinguish spin states with a micro-second integration time, enabling single shot measurement as reported in a recent experiment with a gateconnected superconducting resonant circuit 41 . Lastly, the resonator integration in the back-end of the industrial chip could offer the possibility to engineer the resonant network at a wafer scale, guaranteeing controlled and reproducible qubit-resonator coupling. The gate-based dispersive sensing demonstrated here does not involve local reservoirs of charges or embedded charge detectors. This meets the requirements of forefront qubit architectures (e.g. Ref. 16 ), where the spin readout would be performed at will by any gate of the 2D quantum dot array by frequency multiplexing. Dispersive spin detection by Pauli blockade has a fidelity not constrained by the temperature of the leads. As recently shown 42 , isolated DQDs can serve as spin qubits even if placed at environmental temperatures exceeding the spin splitting, like 1 K or more. This should relax many cryogenic constraints and support the co-integration with classical electronics, as required by a scale-up perspective 19 . Methods Device fabrication. The fabrication process of the device was carried out in a 300 mm CMOS platform and is described in Ref. 6 . Experimental set-up. The experiment is performed by exciting the resonator input at f R = f 0 = 339 MHz and power P R ≈ −110 dBm. We measure the phase variation Δϕ of the reflected signal isolated from the incoming wave by a directional coupler, amplified by 35 dB at 4 K and demodulated to baseband using homodyne detection. The complete circuit diagram of the experimental setup for qubit manipulation and dispersive readout is provided in Supplementary Note 1 and Supplementary Fig. 1. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
4,620
2018-11-11T00:00:00.000
[ "Physics" ]
The small GTPases Ras and Rap1 bind to and control TORC2 activity Target of Rapamycin Complex 2 (TORC2) has conserved roles in regulating cytoskeleton dynamics and cell migration and has been linked to cancer metastasis. However, little is known about the mechanisms regulating TORC2 activity and function in any system. In Dictyostelium, TORC2 functions at the front of migrating cells downstream of the Ras protein RasC, controlling F-actin dynamics and cAMP production. Here, we report the identification of the small GTPase Rap1 as a conserved binding partner of the TORC2 component RIP3/SIN1, and that Rap1 positively regulates the RasC-mediated activation of TORC2 in Dictyostelium. Moreover, we show that active RasC binds to the catalytic domain of TOR, suggesting a mechanism of TORC2 activation that is similar to Rheb activation of TOR complex 1. Dual Ras/Rap1 regulation of TORC2 may allow for integration of Ras and Rap1 signaling pathways in directed cell migration. The migration of cells in response to chemical signals (chemotaxis) is central to normal physiology and is involved in many pathological conditions, such as cancer cell metastasis. The Target of Rapamycin Complex 2 (TORC2) recently emerged as a key, conserved player in chemotaxis 1 . TOR complex 1 (TORC1) and TORC2 are conserved complexes formed by the serine/threonine kinase TOR. TORC1 is a master regulator of cell growth and its biochemistry and regulation is well understood 2 . TORC2 plays evolutionarily conserved roles in controlling F-actin organization, and is involved in the regulation of various processes, including cell survival, protein synthesis, and metabolism 3,4 . However, in contrast to TORC1, little is understood as to how TORC2 is regulated in any system. Studies performed in Dictyostelium, a widely used model of eukaryotic chemotaxis, suggest that TORC2 acts as an integrator of cell movement and chemoattractant signal relay (when stimulated cells transmit chemoattractants to neighboring cells), thereby promoting group cell migration. In Dictyostelium, TORC2 is activated downstream from the chemoattractant (cAMP) receptor cAR1, a Ras guanine exchange factor (GEF)-containing complex termed Sca1C, and the Ras protein RasC, which is required for cAMP production and the signal relay response [5][6][7][8] . Here, using an unbiased proteomics approach to identify novel regulators of TORC2 in chemotaxis, we discovered that Rap1 binds the TORC2 component RIP3/SIN1, and that, in addition to RasC, Dictyostelium Rap1 regulates TORC2 activity. Further, we found that this interaction is conserved for human Rap1 and SIN1. Finally, we found that Dictyostelium RasC binds the catalytic domain of TOR, which suggests that RasC may regulate TORC2 via a mechanism similar to which Rheb regulates TORC1 19 . We propose that the Ras and Rap1 regulation of TORC2 promotes the integration of Ras and Rap1 signaling pathways in response to chemoattractants, thereby coordinating signal relay with the motility cycle during chemotaxis. Results Rap1 is a conserved binding partner of TORC2 component RIP3/SIN1 whereas RasC binds TOR. To identify proteins regulating TORC2 function, we expressed His/Flag-tagged Pianissimo (HF-Pia), the Dictyostelium orthologue of mammalian TORC2 essential component Rictor, in piaA null cells, and used HF-Pia to purify TORC2 from cells stimulated by the chemoattractant (Fig. 1a). Proteins that co-purify with HF-Pia were identified by mass spectrometry. 
As expected, known components of TORC2, and of protein synthesis and folding complexes co-purify with HF-Pia (see Supplementary Table S1). Of interest, we found that the small GTPase Rap1 was specifically pulled-down with HF-Pia (Fig. 1b). In addition, in a pull-down screen that we previously performed with recombinant, purified GST-fused Rap1 pre-loaded with non-hydrolyzable GppNHp (active state) or GDP (inactive state) 16 , we found that the TORC2 component RIP3 (orthologue of mammalian SIN1) specifically co-purifies with Rap1 GppNHp (Fig. 1c). Together with our finding that Rap1 co-purifies with Pia, this observation suggests that Rap1 interacts with TORC2 by directly binding RIP3. Similar to most Ras/Rap effectors, RIP3/SIN1 contains a Ras Binding Domain (RBD) 20,21 . Despite low amino acid sequence conservation, all RBDs have a typical ubiquitin-like fold that facilitates binding to Ras proteins. The RBD of RIP3 was previously shown to be important for TORC2 function in Dictyostelium chemotaxis and to bind the active form of the Ras protein RasG, and not RasC, in vitro 22 . However, in vivo, RasC, and not RasG, promotes TORC2 activation 5,23 . To determine if Rap1 directly interacts with the RBD of RIP3 (RIP3 RBD ), we assessed their binding in vitro, compared to the binding of RIP3 RBD to other Dictyostelium Ras proteins (RasB, RasC, RasD, RasG and RasS). We found that only the active forms of Rap1 and RasG bind RIP3 RBD in vitro (Fig. 1d). To quantify and compare the binding of RIP3 RBD to Rap1 and RasG, we performed a guanine nucleotide dissociation (GDI) assay 18 . Nucleotide dependent binding of an effector to a GTP-bound G protein stabilizes the interaction between the G protein and nucleotide. This stabilization results in inhibition of nucleotide dissociation from the G protein/effector complex. RasG and Rap1 were pre-loaded with fluorescent mGppNHp, and then incubated in the presence of excess unlabeled GppNHp with or without RIP3 RBD . The exchange of mGppNHP for GppNHp was measured by monitoring fluorescence decay. The resulting observed rate constant (k obs ) of fluorescence decay was then determined as a measure of effector binding. In the presence of 1 μ M RIP3 RBD , the k obs for mGppNHp dissociation from Rap1 is greatly reduced (k obs Rap1 = 5.0 × 10 −5 s −1 versus k obs Rap1+RIP3 = 1.0 × 10 −7 s −1 ), whereas a RIP3 RBD mutant that disrupts Ras/Rap1 binding 22 (see Supplemental Fig. S1b) has no effect, indicating that RIP3 is binding Rap1 with high affinity, and in a nucleotide dependent way (Fig. 1e). On the other hand, the presence of 1 μ M RIP3 RBD did not inhibit nucleotide exchange on RasG (k obs RasG = 5.9 × 10 −5 s −1 and k obs RasG+RIP3 = 6.0 × 10 −5 s −1 ). Consequently, the observation that RIP3 binds with high affinity to Rap1 GppNHp suggests RIP3 is a Rap1 effector, whereas the lack of high binding affinity of RIP3 for RasG GppNHp suggests RIP3 is not likely an effector of RasG, which is consistent with previous in vivo studies 23 . We then asked if Rap1 binding tRIP3 is conserved in mammals by assessing the interaction of human Rap1b to SIN1 RBD using purified proteins in vitro. We observed that SIN1 RBD binds GppNHp-loaded Rap1b and not nucleotide free Rap1b (Fig. 1f). Therefore, our findings suggest that active Rap1 binds TORC2 through the TORC2-specific subunit RIP3/SIN1, and that this interaction is conserved in mammals. 
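For readers unfamiliar with the GDI assay, the k_obs values quoted above come from single-exponential fits of the mGppNHp fluorescence decay. The Python sketch below (requires NumPy and SciPy; synthetic data stand in for the measured traces) illustrates such a fit and converts the two quoted rate constants into nucleotide-exchange half-lives, which makes the stabilization conferred by RIP3 RBD binding explicit.

import numpy as np
from scipy.optimize import curve_fit

def fluo(t, F0, k_obs, C):
    # Single-exponential fluorescence decay as mGppNHp exchanges for unlabeled GppNHp
    return F0 * np.exp(-k_obs * t) + C

rng = np.random.default_rng(4)
t = np.linspace(0, 8 * 3600, 200)                 # an 8 h acquisition, in seconds
y = fluo(t, 1.0, 5.0e-5, 0.1) + 0.01 * rng.standard_normal(t.size)   # synthetic "Rap1 alone" trace
popt, _ = curve_fit(fluo, t, y, p0=(1.0, 1e-5, 0.0))
print(f"fitted k_obs ~ {popt[1]:.1e} s^-1")        # recovers ~5e-5 s^-1

# Effector binding stabilizes the bound nucleotide: compare exchange half-lives
for label, k in (("Rap1 alone", 5.0e-5), ("Rap1 + RIP3_RBD", 1.0e-7)):
    print(f"{label:16s} t_1/2 = ln(2)/k_obs ~ {np.log(2) / k / 3600:.1f} h")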
Although others and we have clear evidence that RasC promotes TORC2 activation in response to chemoattractant stimulation in Dictyostelium (present study) 5,6 , we find that, unlike Rap1, RasC does not bind RIP3 (Fig. 1d). However, a constitutively active RasC mutant was reported to co-purify with RIP3 6 , suggesting a possible indirect interaction. Interestingly, in a screen for potential RasC-interacting proteins, performed using a protocol similar to that described to identify Rap1-interacting proteins in Fig. 1c, we found that TOR kinase itself specifically co-purifies with RasC GppNHp (Fig. 1g). We verified that RasC binds directly to TOR using recombinant, purified proteins in vitro. We observed that active RasC, using either GppNHp-loaded or a constitutively active RasC mutant, binds the catalytic domain of TOR [FKBP-Rapamycin Binding (FRB)/kinase domain] with an average of 9 ± 3.3% efficiency in vitro, whereas no binding was observed with inactive, GDP-loaded RasC or a constitutively active Rap1 mutant ( Fig. 1h and Supplementary Fig. S3). Thus, together with previous observations, these results suggest that RasC activates TORC2 by directly regulating TOR kinase. Rap1 regulates the RasC-mediated activation of TORC2. To assess whether Rap1, in addition to RasC, can induce TORC2 activation, we first examined the ability of purified, recombinant Rap1, loaded with GppNHp, to induce TORC2 activation in cell lysates. We monitored TORC2 activation by evaluating its phosphorylation of Akt/Protein Kinase B (PKB) and related kinase PKBR1 hydrophobic motif (T P 435 and T P 470, respectively) 5,7,23 . Consistent with previous findings 5, 6 , RasC GppNHp stimulation of wild-type cell lysates induces the TORC2-mediated phosphorylation of both PKB (~50 kDa band) and PKBR1 (~70 kDa band) (Fig. 2). Interestingly, we found that Rap1 GppNHp also stimulates PKBR1 T470 phosphorylation in cell lysates, suggesting activation of TORC2. We do not know why Rap1 GppNHp stimulation of cell lysates induces PKBR1 phosphorylation and not that of PKB, but this may suggest an involvement of Rap1 in TORC2 substrate selection and will be the subject of future investigations. Rap1 GppNHp and RasC GppNHp do not induce PKBR1 phosphorylation in piaA null cell lysates (see Supplementary Fig. S4), confirming that the observed responses are mediated by TORC2. Further, RasG GppNHp , RasC GDP , and Rap1 GDP fail to promote TORC2 activation in wild-type cell lysates, indicating that only active RasC and Rap1, and not RasG, can promote TORC2 activation. As cells lacking Rap1 are not viable, to test whether Rap1 controls TORC2 activity in vivo, we compared the chemoattractant-induced TORC2 activation in wild-type cells to that in cells displaying elevated Rap1 activity either by overexpressing wild-type Rap1 (Rap1 OE ), expressing a constitutively active Rap1 mutant (Rap1 CA , G12V mutation), or in cells lacking one of the Rap1-specific GAP, RapGAP1 (rapgap1 null), in which Rap1 activity is considerably elevated 14,24 (see Supplementary Fig. S6a). In this assay, PKB T435 phosphorylation is sometimes difficult to detect, but PKBR1 T470 phosphorylation is easily traced. As shown in Fig. 3a, we reproducibly observed elevated and extended chemoattractant-induced PKBR1 phosphorylation in all three conditions tested Table S1. (c) Mass spectrometry data identifying RIP3 in the pull-down performed with GST-Rap1 GppNHp . 
(d) Interaction between GST-RIP3 RBD and His-tagged Rap1, RasG, RasB, RasC, RasD, and RasS was assessed using recombinant, purified proteins in vitro, comparing the binding of constitutively active (CA) and GDP-bound (inactive) Rap1/Ras proteins. His-Rap1/Ras proteins were detected by immunoblotting. Immunoblots were cropped, but no other bands were present. Amount of Ras/Rap1 proteins used is shown in Supplementary Fig. S1a. (e) Dissociation of mGppNHp from Rap1 and RasG was measured in the presence and absence of 1 μ M RIP3 RBD or RIP3 (K680E, R681E)RBD . (f) Interaction between GST-SIN1 RBD and His-tagged GppNHp-bound or nucleotide free (EDTA) human Rap1b was assessed using recombinant, purified proteins in vitro. His-peptide was used as control. Loadings are equivalent between the input and His-Pull-down conditions. Band at ~24 kDa in GST-SIN1 RBD input corresponds to free GST. Pull-down proteins were revealed by Commassie Blue (CB) staining. (g) Mass spectrometry data identifying TOR in the pull-down performed with GST-RasC GppNHp . (h) The interaction between GFP-fused TOR catalytic domain (FRB/Kin TOR ) and GST-fused GppNHp-or GDP-bound RasC was assessed using recombinant, purified proteins in vitro. GFP-FRB/Kin TOR and GST-RasC were revealed by immunoblotting. Input represent 33% of protein used in assay. Uncropped gel and immmunoblots are shown in Supplementary Fig. S2. Data are representative of at least two independent experiments. where Rap1 activity is elevated compared to that in wild-type cells. Although the extent of the effect of Rap1 CA on PKBR1 phosphorylation varied between experiments, likely due to varying levels of Rap1 CA expression related to plasmid copy number per cell as was previously described [25][26][27] , quantification of the data nonetheless revealed significant (at 40 and 60 sec) or near-significant effects (5 sec, p = 0.053; 10 sec, p = 0.084; 20 sec, p = 0.077). For Rap1 OE and rapgap1 null cells, the effects are strongly significant (for 10 and 20 sec time points, p is between 0.0003 and 0.004). Consistent with an increase in TORC2-mediated PKBR1 phosphorylation, PKBR1 kinase activity is also prolonged in Rap1 CA , Rap1 OE , and rapgap1 null cells, although the effect is not as pronounced as that on PKBR1 T470 phosphorylation (Fig. 3b). This difference is likely due to the fact that the PKB kinase activity is not only regulated by TORC2 but also by other chemotactic effectors and regulatory mechanisms (e.g. GSK3, Protein Phosphatase 2A 28,29 ). The increase in PKBR1 phosphorylation observed in rapgap1 null, and in Rap1 CA and Rap1 OE cells is not observed in rapgap1/ripA double null cells or ripA null cells expressing Rap1 CA or Rap1 OE , nor is there any TORC2-mediated PKBR1 phosphorylation observed in ripA null cells expressing the RIP3 (K680E,R681E) mutant (Fig. 3c). These observations indicate that Rap1 interaction with RIP3 is necessary for the observed effect of elevated Rap1 activity on TORC2-mediated PKBR1 phosphorylation and, thus, that Rap1 plays an important role in controlling TORC2 activation. Of note, we sometimes detect constitutive PKB phosphorylation in ripA null cells ( Fig. 3c and Supplementary Fig. S6b), as was previously observed 7 , but the meaning of which is unknown. Interestingly, however, neither Rap1 OE nor Rap1 CA induce PKBR1 phosphorylation in cells lacking RasC, suggesting that RasC is essential for TORC2 activation and the Rap1-mediated effect on TORC2 (Fig. 3d). 
Consequently, we propose that Rap1 positively regulates the RasC-mediated activation of TORC2 in response to chemoattractants. Discussion Our findings reveal new, and likely conserved, mechanisms by which Dictyostelium TORC2 is regulated, where both RasC and Rap1 control TORC2 activity through binding of distinct TORC2 components, TOR and RIP3/ SIN1, respectively (Fig. 4). In addition, our findings suggest that RasC plays a major role in TORC2 activation and that Rap1 regulates the RasC-mediated activation of TORC2. Whereas we can't exclude the possibility that Rap1 binds RIP3/SIN1 independently of TORC2, the observation that Rap1 co-purifies with the TORC2 component Pia/Rictor strongly suggests Rap1 binds the TORC2-associated RIP3/SIN1 and, thereby, directly interacts with TORC2. The finding that RasC binds to the catalytic domain of TOR is particularly interesting as it indicates that RasC may activate TORC2 using a mechanism that is similar to the reported Rheb-mediated activation of mTORC1 19 , and that RasC may regulate TORC1 as well. Moreover, as human H-Ras was reported to co-purify with mTORC2 21 , we believe that Ras binding to TOR is likely conserved in mammals. Unfortunately, we were unable to test direct binding of human H-Ras to mTOR in vitro due to the difficulty to obtain stable, recombinant mTOR constructs. Other proteins that we found associated with Dictyostelium TORC2 that are known to or could play a role in regulating TORC2 function in chemotaxis include Rac1A and Rab small GTPases, as well as the actin nucleator Formin A. TORC2 interaction with Formin A could represent a potential link between TORC2 and F-actin. In mammals, Rac1 binds TOR and mediates TORC1 and TORC2 localization to specific membranes 30 , and in fission yeast, a Rab-family GTPase was shown to bind and regulate TORC2 signaling 31 . Therefore, our finding that Rac1A and two Rab GTPases, Rab11A and RabC, bind TORC2 suggests that the role of these small GTPases in regulating TORC2 may be conserved in Dictyostelium. Finally, although the identification of ribosomal proteins in our HF-Pia pull-down is expected, as HF-Pia was exogenously over-expressed, it is possible that the binding of ribosomal proteins to HF-Pia represents a functional interaction. Indeed, some TORC2 functions were shown to require its association with ribosomes in both yeasts and mammals 32,33 . We propose that the dual RasC-and Rap1-mediated regulation of TORC2 in Dictyostelium allows for integration of RasC and Rap1 signaling pathways during chemotaxis and, thereby, coordination of cytoskeletal remodeling, substrate adhesion, and relay of the chemoattractant signal to neighboring cells (Fig. 4). The organized migration of groups of cells is crucial to Dictyostelium and human embryonic development as well as to wound healing, and is also involved in cancer metastasis [34][35][36][37][38][39] . For cells to achieve collective migration as a cohesive group, they must synchronize their movement, which is achieved through cell-cell communication such as signal relay during chemotaxis. Our findings place TORC2 in an ideal position to promote the coordinated regulation of these processes and control group cell migration. Since the interactions between TORC2 and Ras/Rap1 appear conserved, we suggest TORC2 integrates these signals to coordinate cellular migrations in many systems. DNA constructs. 
His-Flag-Pianissimo (HF-Pia) was generated by adding 6XHis and Flag tags, in tandem, by PCR to the N-terminus of Pia's coding sequence, which was cloned in the pDM304 vector containing a neomycin resistance cassette. Myc-tagged constitutively active Rap1 (Myc-Rap1 CA ; G12V mutation) obtained from Rick Firtel 14 was transferred to the pDM358 vector containing a hygromycin resistance cassette. FRB/Kin TOR (aa 1820-2380) was amplified by PCR and subsequently cloned in the pDM317 vector containing N-terminal GFP and neomycin resistance cassette. The RIP3 RBD (aa 648-717) used in the in vitro binding assay (Fig. 1d), and RasC, were expressed as N-terminal GST-fusion from a pGEX-4T-1 vector. Constitutively active RasC (RasC CA , G62L mutation) was generated by the method of Quick change. 6XHis-tagged wild-type and constitutively active forms of the Rap1 and Ras proteins were described previously 41 . The RIP3 RBD (aa 511-838) and SIN1 RBD (aa 266-374), used in the GDI experiment (Fig. 1e) and in vitro binding assay (Fig. 1f), respectively, were expressed as N-terminal GST-fusion from a pGEX-4T-3 plasmid. His-Rap1b was a kind gift of Alfred Wittinghofer. Cell culture and strains used. Dictyostelium cells were grown in axenic HL5 medium (ForMedium, Hunstanton, Norfolk, UK) at 22 °C and transformants were generated by electroporation. Transformed cells were selected in 20 μ g/ml Geneticin or 50 μ g/ml Hygromycin B (both from Life Technologies, Grand Island, NY, USA) and expression confirmed by immunoblotting. Aggregation-competent cells were obtained by pulsing cells with 30 nM cAMP every 6 min for 5.5h in 12 mM Na/K phosphate pH 6.1 at 5 × 10 6 cells/ml. Wild-type cells are AX3 and all transformants and null strains have an AX3 background. gbpD null cells were described elsewhere 18 , piaA null and rapgap1 null cells were provided by Peter Devreotes and Rick Firtel, respectively, and were previously described 24,42 . The rapgap1/ripA double null strain was generated by disrupting ripA in the rapgap1 null background, as described previously 43 . Pull-downs and mass spectrometry. Sequential His-Flag purification and identification of the isolated proteins by mass spectrometry was performed as previously described 5 . The pull-down screens for RasC and Rap1 effectors were performed as previously described 16,44 . In vitro binding studies. GST-fused RIP3 RBD , -SIN1 RBD , -Rap1, and -RasC were purified by GSH affinity and size exclusion chromatography as previously described 18,45 . Purification of His-tagged Rap1 and Ras proteins, and the in vitro interaction assay with RIP3 RBD was performed as described previously 41 . Purified proteins' quality was verified on gel and quantified (see Supplemental Fig. S1), and equal amounts were used for each interaction assessed. His-Rap1/Ras proteins were detected by immunoblotting with anti-His monoclonal antibody (sc-8036; Santa Cruz Biotechnology, Dallas, TX, USA). Interaction between His-tagged GppNHp-bound or nucleotide free (EDTA) human Rap1b and GST-SIN1 RBD was tested using 25 μ g of the purified proteins incubated in binding buffer (50 mM Tris-Cl, 150 mM NaCl, 5 mM MgCl2, 1 mM β -mercaptoethanol, pH 7.5), and proteins were pulled-down with Ni-NTA affinity resin (Qiagen, Valencia, CA, USA) for 2 hours at 4 °C. The beads were washed three times with ice-cold binding buffer containing 500 mM NaCl, and eluted with 300 mM imidazole in binding buffer. The proteins were resolved on SDS-PAGE and revealed by Coomassie Blue staining. 
(Figure legend fragment: ... detected by immunoblotting. Equal loading was controlled with Coomassie Blue staining (CB). (b) cAMP-induced kinase activity of immunoprecipitated PKBR1 was assessed in the indicated strains using H2B as substrate. H2B phosphorylation was detected by autoradiography and PKBR1 was revealed by immunoblotting. Graphs represent mean ± SEM of densitometry quantification of immunoblots or autoradiographs from at least three independent experiments, expressed as % of the maximal signal detected in wild-type control cells. *p < 0.05, **p < 0.01, ***p < 0.005. Uncropped immunoblots and autoradiographs are shown in Supplementary Fig. S7.) GFP-FRB/Kin TOR was isolated from Dictyostelium cell lysates using anti-GFP antibody coupled to protein A Sepharose beads. Interaction between GFP-FRB/Kin TOR and GST-fused GppNHp- or GDP-loaded RasC was tested using 25 μg of the purified proteins incubated in binding buffer (50 mM Tris-Cl, 150 mM NaCl, 5 mM MgCl2, 1 mM β-mercaptoethanol, pH 7.5), and proteins were pulled down with GSH affinity resin. The beads were washed three times with ice-cold binding buffer containing 500 mM NaCl and eluted with 20 mM glutathione in binding buffer. The proteins were resolved on SDS-PAGE and revealed by GFP and GST immunoblotting. Biochemical assays. The guanine nucleotide dissociation inhibition (GDI) assay, Rap1 activity assay, and PKBR1 kinase assay were performed as previously described 14,18,40 . To test the ability of Rap1, RasC or RasG to activate TORC2 in cell lysates, the GTPases were loaded with GDP or GppNHp as previously described 18 . Aggregation-competent wild-type cells were harvested by centrifugation and re-suspended in buffer containing 50 mM Tris-Cl (pH 7.5), 150 mM NaCl, 5 mM MgCl2, 5 mM DTT, and 5% glycerol. Cells were lysed through a 5 μm Nuclepore filter and the lysate was cleared by centrifugation at 16,000 × g for 5 min at 4 °C. Total cell lysate protein content and purified Ras/Rap1 were quantified with Bradford reagent, and 400 μg of cell lysate was stimulated with 1 μM of nucleotide-bound GTPases; samples were collected at the indicated times. For each sample, 50 μg of protein was loaded on gel. TORC2 activity was assessed by evaluating the TORC2-mediated phosphorylation of PKB and PKBR1 as described previously 5 , with the exception that an anti-phospho-p70S6K antibody (Cell Signaling Technology, Danvers, MA, USA) was used to detect phosphorylation of the PKB/PKBR1 hydrophobic motif (phospho-T435 and phospho-T470, respectively). Significance of the data was analyzed using an unpaired t-test. In Dictyostelium, chemoattractant (cAMP) stimulation leads to the heterotrimeric G protein (Gαβγ)-dependent activation of RasC (through the RasC-specific GEF RasGEFA) and Rap1. In turn, RasC activates TORC2, thereby promoting actin remodeling and the PKB/PKBR1-dependent cAMP production and release (signal relay). Rap1 promotes cell-substrate adhesion and actin remodeling through PI3K and Rac-specific GEFs, and inhibits myosin assembly at the front of migrating cells through Phg2. Here, we show that RasC directly binds the kinase domain of TOR and that Rap1 positively regulates the RasC-mediated activation of TORC2 by directly binding to RIP3/SIN1, providing a possible mechanism through which TORC2 integrates the RasC and Rap1 signals during chemotaxis. We propose that this integration allows linking of the signal relay response to cytoskeletal remodeling and substrate adhesion and, thereby, coordination of group cell migration.
Furthermore, our finding that RasC binds TOR suggests RasC may also regulate TORC1. Note: TORC2 components are depicted according to their previously proposed molecular organization 46 .
5,317.2
2016-05-13T00:00:00.000
[ "Biology" ]
Aberrant activation of bone marrow Ly6C high monocytes in diabetic mice contributes to impaired glucose tolerance Accumulating evidence indicates that diabetes and obesity are associated with chronic low-grade inflammation and multiple organ failure. Tissue-infiltrated inflammatory M1 macrophages are aberrantly activated in these conditions and contribute to hyperglycemia and insulin resistance. However, it is unclear at which stage these cells become aberrantly activated: as precursor monocytes in the bone marrow or as differentiated macrophages in tissues. We examined the abundance, activation state, and function of bone marrow-derived Ly6Chigh monocytes in mice with diabetes and/or obesity. Ly6Chigh monocytes were FACS-purified from six groups of male mice consisting of type 2 diabetes model db/db mice, streptozotocin (STZ) induced insulin depletion mice, high fat diet (HFD) induced obesity mice and each control mice. Ly6Chigh monocytes were then analyzed for the expression of inflammation markers by qRT-PCR. In addition, bone marrow-derived Ly6Chigh monocytes from db/+ and db/db mice were fluorescently labeled and injected into groups of db/db recipient mice. Cell trafficking to tissues and levels of markers were examined in the recipient mice. The expression of many inflammation-related genes was significantly increased in Ly6Chigh monocytes from db/db mice, compared with the control. Bone marrow-derived Ly6Chigh monocytes isolated from db/db mice, but not from db/+ mice, displayed prominent infiltration into peripheral tissues at 1 week after transfer into db/db mice. The recipients of db/db Ly6Chigh monocytes also exhibited significantly increased serum glucose levels and worsening tolerance compared with mice receiving db/+ Ly6Chigh monocytes. These novel observations suggest that activated Ly6Chigh monocytes may contribute to the glucose intolerance observed in diabetes. Introduction Chronic low-grade inflammation is an important contributor to multiple organ failure in patients with diabetes and obesity [1,2]. The prevalence of metabolic syndrome including obesity and diabetes continues to increase. We have previously reported that inflammatory and oxidative stress mediators, including reactive oxygen species produced via protein kinase C (PKC)-dependent activation of NAD(P)H oxidase, contributes to the development of atherosclerotic complications in patients with diabetes and metabolic syndrome [3,4]. Inflammatory M1 macrophages are thought to play important roles in diabetes and obesity through infiltration into adipose tissue and production of reactive oxygen species and inflammatory mediators, which cause chronic local inflammation [5,6] and dysregulation of adipocytokines [7]. Tissue macrophages originate from precursor monocytes produced in the bone marrow, which circulate through the blood until they migrate into and differentiate within tissues. In patients with diabetes and obesity, it is not known whether the precursor monocytes are already activated before arrival at the tissue or become activated upon differentiation. Similarly, the extent to which bone marrow-derived monocytes may contribute to the chronic inflammation observed in obesity and diabetes remains unclear. Pro-inflammatory (CD14 + CD16+) monocytes are more abundant in the peripheral blood of patients with type 2 diabetes compared with normal subjects [8], suggesting that this may indeed be the case. 
In mice, multiple phenotypic and functionally distinct monocyte subsets have been described, including the so-called "inflammatory" (Ly6C high ) and "patrolling" (Ly6C low ) subsets [9]. However, there have been no previous studies of potential changes in the abundance and function of bone marrow-derived monocytes in obese or diabetic animals. To address this, we employed three mouse models of diabetes and obesity and investigated (i) diseaserelated alterations in the number and inflammatory status of bone marrow-derived monocytes and (ii) the potential contribution of Ly6C high monocytes to tissue inflammation and dysregulation of glucose homeostasis. We confirmed that diabetic bone marrow monocytes had abnormal activation and had an effect on chronic inflammation. Animals and diet Seven-week-old male C57BL/KsJ db/db mice, an experimental model of type 2 diabetes, lean db/+ littermates, and wild-type C57BL/6J mice were purchased from Oriental Yeast (Tokyo, Japan) and housed for 1 week to allow acclimation before use in experiments. Mice were maintained under standard pathogen-free conditions with free access to water and normal laboratory chow. Diabetes was induced in 8-week-old C57BL/6J mice by i.p. injection of streptozotocin (STZ, 50 mg/kg body weight) (Sigma-Aldrich, St. Louis, MO, USA) in 0.1 mol/l citrate buffer (pH 4.5) once daily for 5 consecutive days (n = 11). Diabetes was confirmed by the presence of hyperglycemia (plasma glucose levels >300 mg/dl). Mice injected with citrate buffer alone served as non-diabetic controls (n = 11). In addition, groups of 8-week-old C57BL/6J mice (n = 11) were fed normal control diet (CD) or a high-fat diet (HFD) for an additional 8 weeks. The CD contained 11.5, 70.3, and 18.2% calories from fat, carbohydrate, and protein, respectively, and a total of 3.53 kcal/g. The HFD contained 62.2, 19.6, and 18.2% calories from fat, carbohydrate, and protein, respectively, and a total of 5.06 kcal/g. The mice used in our experiment were male only. All mice were anesthetized with isofurane and bone marrow was harvested by flushing the femurs and tibia, and then they were killed by exsanguination. All methods were performed in accordance with the relevant guidelines and regulations. Every efort was made to minimize the number of animals used and their suffering. All animal protocols were reviewed and approved by the Committee on the Ethics of Animal Experiments, Graduate School of Medical Sciences, Kyushu University (Protocol Number: A19-046-0). Monocyte isolation and labeling Bone marrow was harvested by flushing the femurs and tibia with RoboSep Buffer (Stemcell Tech, Vancouver, BC, Canada) using a syringe with a 27-gauge needle. Cell clumps were removed by passing the cell suspension through a 70-μm mesh nylon strainer and the cells were centrifuged at 300 × g for 6 min. The pelleted cells were resuspended at 1 × 10 8 cells/ml, and monocytes were enriched with an EasySep Mouse Monocyte Enrichment kit according to the manufacturer's instructions. Isolated monocytes (5 × 10 5 to 1 × 10 6 ) were labeled with PKH26 using a PKH26 Labeling kit (Sigma-Aldrich, St. Louis, MO, USA) according to the manufacturer's instructions. PKH26 fluorescence (a yellow-orange fluorescent dye with long aliphatic tails) technology provides stable incorporation into lipid regions of cell membranes and has been found to be useful for in vitro and in vivo cell tracking applications in a wide variety of systems. 
In brief, cells were washed once in serum-free RPMI-1640 medium, resuspended in 2 mL kit diluent solution C, mixed with PKH26 at 2 × 10−3 mol/L in diluent C, and incubated for 10 min at room temperature in the dark. An equal volume of medium containing 10% FBS was added, and the cells were centrifuged, washed once, and resuspended in serum-containing medium for further analysis. FACS sorting Purified monocytes were incubated with Fc blocker (BD Biosciences, San Jose, CA, USA) at 4˚C for 5 min, counted, and incubated for an additional 30 min in the dark on ice with fluorescently labeled antibodies against mouse F4/80, CD11b, CD115, Ly6G, and Ly6C. All antibodies were purchased from Biolegend (San Diego, CA, USA). Cell sorting was performed using a FACSAria II flow cytometer (BD Biosciences, San Jose, CA, USA). Adoptive transfer of monocytes to db/db mice Ly6C high monocytes from db/db or db/+ mice were labeled with PKH26, resuspended in PBS at 4 × 10 5 viable cells/0.2 ml, and injected into the jugular vein of groups of recipient db/db mice. One week later, one set of mice was analyzed for tissue infiltration. The liver and kidneys were collected, cryopreserved in OCT compound, cut into 10-μm-thick sections on a Leica CM1950 cryostat (Leica Biosystems, Nussloch, Germany), and mounted on slides. Adipose tissue was collected, cryopreserved in super cryoembedding medium (SCEM-(L1)), cut into 50-μm-thick sections, and mounted. PKH26+ cells were detected by fluorescence microscopy (model BZ-X700; Keyence, Osaka, Japan) at excitation and emission wavelengths of 551 and 567 nm, respectively. For quantification, the number of cells in 16 high-power fields per sample was counted. RNA extraction and quantitative RT-PCR RNA was extracted from FACS-sorted monocytes using an RNeasy Plus Mini Kit (250) (Qiagen, Chatsworth, CA, USA) and reverse-transcribed using a QuantiTect Reverse Transcription Kit (Qiagen). PCR was performed on a Chromo4 real-time PCR system (Bio-Rad, Hercules, CA, USA) with GoTaq Green Master Mix (Promega, Woods Hollow, WI, USA), as described previously [10][11][12]. The mRNA expression of each gene was normalized to the expression of the reference gene β-actin. The specificity of PCR amplification was confirmed by melting curve analysis and agarose gel electrophoresis. Luseogliflozin treatment The SGLT2 inhibitor luseogliflozin (TS-071: (1S)-1,5-anhydro-1-[5-(4-ethoxybenzyl)-2-methoxy-4-methylphenyl]-1-thio-D-glucitol) was synthesized and kindly provided by Taisho Pharmaceutical Co., Ltd. (Tokyo, Japan). Luseogliflozin was administered by mixing it with the chow at a concentration of 0.1%. This dose was selected to ensure normalization of mild hyperglycemia regardless of variations in daily food consumption in individual mice. Two groups each of 8-week-old db/db and db/+ mice were assigned to receive food mixed with luseogliflozin or a normal diet for 12 weeks. Body weights and blood glucose levels were monitored every 2 weeks. After 12 weeks, Ly6C high monocytes were isolated from bone marrow samples and analyzed by RT-PCR as described above. IPGTT and IPITT Glucose and insulin tolerance tests were performed in db/db mice 4 weeks after transfer of PKH26-labeled Ly6C high monocytes from db/db or db/+ mice. Mice were fasted for 16 h and then injected i.p. with glucose at 1.0 g/kg body weight. Plasma glucose was determined as described previously [12]. The AUC was calculated by the trapezoidal rule. For the IPITT, mice were injected i.p.
with 0.5 U/kg of human biosynthetic insulin (Novo Nordisk, Bagsvaerd, Denmark) 4 weeks after cell transfer, and blood glucose levels were measured as described previously [12]. Urinalysis Four weeks after monocyte transfer to db/db mice, the mice were placed in metabolic cages and urine was collected for 24 h. The urine was mixed, centrifuged at 7500 × g for 5 min, purged of air with a stream of nitrogen to prevent formation of oxidation products, and then stored at −80˚C until analysis. 8-OHdG levels were measured using an 8-OHdG Check ELISA kit (Japan Institute for the Control of Aging, Fukuroi, Japan) as previously described [13]; albumin concentrations were measured using a Mouse Albumin ELISA Kit (AKRAL-121, Shibayagi, Gunma, Japan); and creatinine concentrations were measured using an automated analyzer. 8-OHdG and albumin levels are expressed relative to the urinary creatinine level. Statistical analysis Data are expressed as the means ± standard error (SEM). Student's t-test was used when two groups were compared. Multiple comparisons among the groups were conducted by one-way ANOVA with Fisher's PLSD post hoc test. (Figure 1 legend: Upper row: Eight-week-old db/+ and db/db mice were fed normal control diet (CD) for an additional 8 weeks. Middle row: Eight-week-old C57BL/6 mice were injected with citrate buffer (vehicle) or streptozotocin (50 mg/kg) once daily for 5 days and fed CD for an additional 8 weeks. Lower row: Eight-week-old C57BL/6 mice were fed CD or a high-fat diet (HFD) for an additional 8 weeks. Mice were sacrificed and bone marrow cells were analyzed by flow cytometry for the ratio of Ly6C high to Ly6C low monocytes. Ly6C high monocytes were sorted and analyzed by quantitative RT-PCR. (B-D) Representative flow cytometry dot plots (upper panels) and quantification (lower panels) of CD115+ Ly6C high and Ly6C low monocytes obtained from the bone marrow of (B) db/db and db/+ mice, (C) vehicle- and STZ-treated mice, and (D) Ctrl- or HFD-fed mice. Data are expressed as the means ± SEM. N = 11 mice/group. Ctrl, control diet; HFD, high-fat diet; STZ, streptozotocin. N.S., not significant. (unpaired t-test).) Body weight and glucose levels in diabetic mice To assess the development of diabetes in each of the three mouse models studied here, we examined the body weights and blood glucose levels of 8-week-old db/db and db/+ mice, C57BL/6 mice administered STZ or vehicle, and C57BL/6 mice fed CD or a HFD until they were 16 weeks of age (Figs 1 and 2). As shown in Fig 2, the postprandial blood glucose levels were significantly higher in db/db mice and STZ-treated mice than in the age- and sex-matched control mice, starting at the first week of analysis (Fig 2B and 2D), whereas HFD-fed mice showed a trend towards higher glucose levels than CD-fed mice that did not reach statistical significance (Fig 2). In contrast, although the body weights of db/db and HFD-fed mice were significantly higher than those of the control mice (Fig 2A and 2E), the STZ-treated mice had significantly lower body weights than the vehicle-treated mice (Fig 2C). Streptozotocin destroys pancreatic β cells and thereby decreases insulin secretion. Because glucose can no longer be taken up into the tissues, a chronic energy deficit develops, and weight loss can occur as energy is instead obtained from fat and muscle.
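For readers who want to reproduce the simple quantitative steps above, the following is a minimal Python sketch (not the authors' code) of the trapezoidal-rule AUC calculation and the unpaired t-test described in the IPGTT/IPITT and Statistical analysis sections. All glucose and AUC values in the example are invented placeholders; n = 6 mirrors the group sizes reported in the figure legends.

```python
# Minimal sketch: trapezoidal-rule AUC for an IPGTT curve and an unpaired
# t-test between two groups. Values below are illustrative placeholders only.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

time_min = np.array([0, 15, 30, 60, 120])                 # sampling times (min)
glucose_db_plus = np.array([180, 420, 460, 400, 320])     # mg/dl, hypothetical
glucose_db_db = np.array([200, 480, 520, 470, 390])       # mg/dl, hypothetical

auc_plus = trapezoid(glucose_db_plus, time_min)           # trapezoidal rule
auc_db = trapezoid(glucose_db_db, time_min)

# Unpaired (two-sample) t-test on per-mouse AUC values (hypothetical, n = 6)
auc_group_plus = np.array([30500, 31200, 29800, 32000, 30900, 31500])
auc_group_db = np.array([34800, 35600, 33900, 36200, 35100, 34500])
t_stat, p_value = stats.ttest_ind(auc_group_plus, auc_group_db)

print(f"AUC, db/+ example curve: {auc_plus:.0f} mg/dl*min")
print(f"AUC, db/db example curve: {auc_db:.0f} mg/dl*min")
print(f"unpaired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```

For comparisons involving more than two groups, scipy.stats.f_oneway would provide the one-way ANOVA mentioned in the Statistical analysis section.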
Analysis of Ly6C high monocyte subpopulations and expression of oxidative stress- and other inflammation-associated genes To determine whether pro-inflammatory (Ly6C high ) monocytes were more abundant and/or displayed a more activated phenotype in diabetic and obese mice compared with control mice, we harvested bone marrow cells from db/+ and db/db mice, vehicle- or STZ-treated mice, and CD- or HFD-fed mice at 16 weeks of age (Fig 1). Monocytes were identified as F4/80+Ly6G-CD11b+CD115+ cells. Ly6C high and Ly6C low monocytes were enumerated, and the former (Ly6C high ) subpopulation was purified by sorting for qRT-PCR analysis. Interestingly, neither the ratio of Ly6C high to Ly6C low monocytes nor the absolute numbers of Ly6C high and Ly6C low monocytes differed significantly between the three types of diabetic and/or obese mice and their relevant controls (~85% and 15%, respectively; Fig 1B-1D), indicating that induction of diabetes or obesity did not increase the proportion of pro-inflammatory monocytes. Therefore, we next compared the functional status of the Ly6C high monocytes by analyzing expression of a panel of genes related to inflammation, including oxidative stress markers. As shown in Fig 3A, mRNA levels of the NADPH oxidase subunits gp91phox and p22phox, peptidyl-prolyl cis-trans isomerase NIMA-interacting 1 (Pin1), p66shc, CCL2, TLR4, mincle, S100a8, and S100a9 were significantly higher in Ly6C high monocytes from db/db mice than in the cells from db/+ mice (Fig 3A). In addition, gp91phox, TLR4, NLRP3, S100a8, and S100a9 were significantly higher in Ly6C high monocytes from STZ-treated mice than in control mice, with modest increases in other inflammatory markers (Fig 3B). However, only CCR2, S100a8, and S100a9 mRNA levels were elevated in Ly6C high monocytes from HFD-fed compared with CD-fed mice (Fig 3C). Thus, Ly6C high monocytes from diabetic or obese mice expressed significantly higher levels of oxidative stress and other inflammation markers than cells from the control mice, consistent with a more proinflammatory phenotype. Normalization of blood glucose does not suppress aberrant activation of Ly6C high monocytes in diabetic mice To evaluate whether the elevated expression of inflammation-related markers in Ly6C high monocytes of diabetic mice was related to high glucose levels, we treated one set of db/db and db/+ mice with luseogliflozin, an inhibitor of sodium glucose transporter 2 (SGLT2), the major protein responsible for glucose resorption in the kidney. Luseogliflozin was administered for 12 weeks by mixing with food (Fig 4A). db/+ mice showed no effects of luseogliflozin on body weights or blood glucose levels, as expected; however, luseogliflozin-treated db/db mice showed significantly reduced blood glucose levels at all time points examined, and their body weights were significantly elevated compared with the control mice after 8 weeks of treatment (Fig 4B and 4C). qRT-PCR analysis of bone marrow-derived Ly6C high monocytes from these mice revealed that, as expected, basal mRNA levels of most inflammation-associated genes were significantly higher in cells from untreated control db/db mice than from db/+ mice (Fig 4D). However, with the exception of S100a8 and S100a9, luseogliflozin had no effect on the expression of inflammation-associated markers in the cells from db/db mice, although there was a small but insignificant trend towards reduced expression of some genes (Fig 4D).
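The relative expression values compared above come from qRT-PCR normalized to β-actin (see Methods). The paper does not state the exact calculation; the sketch below assumes the common 2^-ΔΔCt approach and uses invented Ct values, so it illustrates the normalization rather than reproducing the study's pipeline.

```python
# Hedged sketch of relative qRT-PCR quantification normalized to beta-actin.
# The 2^-ddCt formula is one standard way to implement such normalization and
# is an assumption here, as are the Ct values and gene/sample names.
def relative_expression(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    """Fold change of a target gene in a sample relative to a control sample."""
    d_ct_sample = ct_target - ct_actin          # normalize target to beta-actin
    d_ct_ref = ct_target_ref - ct_actin_ref     # same normalization in the control
    dd_ct = d_ct_sample - d_ct_ref
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: a target gene in a db/db vs. a db/+ Ly6Chigh sample
fold = relative_expression(ct_target=24.1, ct_actin=17.8,
                           ct_target_ref=26.5, ct_actin_ref=17.9)
print(f"Fold change (db/db vs. db/+), hypothetical target gene: {fold:.2f}")
```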
Gene expression profiles of Ly6C high monocytes from db/+ vs db/db mice differed somewhat between the two experiments (in particular, between Fig 3A and Fig 4D). This discrepancy may be explained in part by the fact that Ly6C high cells were collected at different time points and degrees of hyperglycemia (~600 mg/dl blood glucose at 16 weeks vs ~800 mg/dl at 20 weeks). Infiltration of adoptively transferred Ly6C high monocytes into liver, kidneys, and adipose tissue of db/db mice The liver, adipose tissue, and kidneys play important roles in the regulation of insulin resistance and the development of microvascular complications in obesity and/or diabetes. Therefore, we next asked whether Ly6C high monocytes from db/db mice have a greater propensity to infiltrate into these tissues than do the corresponding db/+ cells. To examine this, Ly6C high monocytes were purified from the bone marrow of db/+ or db/db mice, labeled with the fluorescent dye PKH26, and injected i.v. into recipient db/db mice (Fig 5A). One week later, we assessed the number of PKH26+ cells present in the organs, and we found that a significantly higher number of db/db-derived than db/+-derived Ly6C high monocytes were present in the liver, kidney, and adipose tissue of recipient mice (Fig 6). Moreover, we could detect crown-like structures in the adipose tissue of db/db mice injected with db/db Ly6C high monocytes, but not db/+ Ly6C high monocytes (Fig 6E and 6F). Exacerbation of diabetes by transfer of Ly6C high monocytes into db/db mice To clarify the influence of aberrantly activated Ly6C high monocytes on glucose metabolism and homeostasis in diabetic mice, we examined a number of phenotypic changes in db/db mice at 4 weeks after transfer of db/db- or db/+-derived Ly6C high monocytes (Fig 5A). Although the body weights of the two groups of mice did not differ at any time point (Fig 5B), serum glucose levels were significantly elevated at 3 and 4 weeks after transfer of db/db-derived compared with db/+-derived Ly6C high monocytes (Fig 5C and 5D). Additionally, glucose tolerance and insulin sensitivity were both lower in mice that received monocytes from db/db mice compared with db/+ mice (Fig 5E-5H). However, although serum fructosamine and urinary glucose, albumin, and 8-OHdG levels tended to be higher in animals after transfer of db/db-derived monocytes than db/+-derived monocytes, the differences were not statistically significant (Fig 5I-5L). Discussion In this study, we investigated the potential contribution of aberrantly activated Ly6C high monocytes to the development and phenotype of diabetic and obese mice. Although there were no differences between the ratios of Ly6C high and Ly6C low monocytes in the bone marrow of control vs diabetic and obese mice, the latter animals contained more aberrantly activated Ly6C high monocytes expressing higher levels of inflammation-related genes compared with the control mice. The inflammation-related markers were also expressed at higher levels in Ly6C high monocytes from db/db mice than from STZ-induced mice, and relatively few genes were upregulated in the cells from HFD-induced mice. This finding suggested that hyperglycemia may have contributed to the aberrant activation of Ly6C high monocytes; however, treatment of db/db mice with luseogliflozin had little effect on gene expression, with the exception of S100a8 and S100a9. We attempted to evaluate the effect of insulin resistance alone on inflammatory changes in bone marrow monocytes.
Type 2 diabetes is a multifactorial disease with insulin secretion failure and insulin resistance. HFD-treated mice were created as a model for insulin resistance without overt hyperglycemia. An alternative explanation is that the composition or duration of HFD feeding may have been insufficient to promote inflammatory changes in the bone marrow-derived monocytes. However, Ly6C high monocytes from both diabetic and obese mice expressed significantly higher levels of S100a8 and S100a9 than control cells. These proteins are known to be involved in NAD(P)H oxidase activation, recruitment of leukocytes [14,15], and promotion of cytokine and chemokine production. Although changes in S100a8 and S100a9 expression could clearly contribute to the aberrant activation of monocytes, clarification of the underlying mechanisms must await further study. To determine whether the aberrant activation of Ly6C high monocytes in db/db and STZ-treated mice was due to hyperglycemia, we treated db/db mice with luseogliflozin to prevent glucose resorption, thus lowering blood glucose levels. However, this had no significant effect on inflammation-related gene expression in Ly6C high monocytes from db/db mice, making it unlikely that high glucose levels per se contributed to the monocyte activation status. Several alternative explanations are possible. One possibility is that damage-associated molecular patterns (DAMPs) accumulate as a consequence of tissue damage and that these changes are not reversed within a relatively short treatment period. Another possibility is that, although blood glucose levels declined, obesity progressed and insulin resistance worsened, which could have cancelled out the glucose-lowering effect. Besides reducing the glucose level, luseogliflozin treatment increased the body weight of db/db mice. Our observations are consistent with previous reports in mice treated with SGLT2 inhibitors [16,17], which reported that SGLT2 inhibitors may increase appetite and energy intake, thereby attenuating the body weight reduction expected from urinary calorie loss. Future studies should investigate the effects of sustained and larger reductions in blood glucose, possibly by administration of insulin. (Figure 4 legend: Eight-week-old db/+ and db/db mice were fed CD or CD mixed with the SGLT2 inhibitor luseogliflozin for an additional 12 weeks. Mice were sacrificed and Ly6C high monocytes were sorted from the bone marrow and analyzed. Body weight (B) and blood glucose levels (C) in db/+ and db/db mice treated as described in (A). Results are presented as the means ± SEM. N = 6 mice/group. *p<0.05, **p<0.01. CD, control diet; Ctrl, control; Luse, luseogliflozin. (D) qRT-PCR analysis of expression of the indicated genes in monocytes from mice treated as described in 4A. Data are expressed as the means ± SEM. N = 6 mice/group, *p<0.05, **p<0.01. Luse, luseogliflozin; N.S., not significant. (ANOVA with Fisher's PLSD). https://doi.org/10.1371/journal.pone.0229401.g004) We have developed and validated three pathological models to evaluate the function of bone marrow-derived inflammatory monocytes. Type 2 diabetes is a multifactorial disease with impaired insulin secretion and insulin resistance. Streptozotocin-treated mice are a model reflecting insulin secretion failure, and HFD-treated mice are a model reflecting insulin resistance. db/db mice are a model with both impaired insulin secretion and insulin resistance. In our experimental system, db/db mice were positioned as the model most similar to type 2 diabetes.
We examined monocyte trafficking to evaluate the relationship between aberrant activation of db/db-derived Ly6C high monocytes and chronic inflammation in multiple organs. Previous work has examined the differentiation of circulating monocytes into tissue-resident macrophages in both lean and obese mice and showed that the CCR2/MCP-1 system was a factor contributing to monocyte migration to adipocytes and was the main signal controlling the appearance of mobilized macrophages in the liver [18]. We found that db/db-derived Ly6C high monocytes showed a higher propensity than their db/+-derived counterparts to infiltrate the liver, kidney, and adipose tissue. Moreover, at longer times after cell transfer (2-4 weeks), glucose and insulin tolerance had deteriorated further in animals injected with Ly6C high monocytes from db/db mice compared with db/+ mice. Previous reports have shown that adipocytes, hepatocytes, and other cell types can release chemokines that promote monocyte migration and tissue infiltration [19][20][21][22]. The CCR2-MCP-1 axis is thought to play a particularly important role in this regard, but other chemotactic signals can also contribute [18]. Thus, upregulation of the CCR2-MCP-1 axis and other chemotactic signals may have contributed to the increased trafficking of db/db-derived compared with db/+-derived Ly6C high monocytes. The impaired glucose tolerance of db/db mice injected with db/db-derived Ly6C high monocytes may also have resulted from secretion of elevated inflammatory cytokine and chemokine levels after differentiation of Ly6C high monocytes into M1 macrophages in the infiltrated tissues. Although IL-1β is known to be involved in the autoimmune process leading to type 1 diabetes, it is also upregulated in the pancreatic islets of diabetic patients and type 2 diabetes animal models, where it causes impaired glucose tolerance [23]. Alternatively, M1 monocytes may have directly infiltrated the pancreatic islets and interacted with β cells via cytokine secretion, further enhancing islet inflammation [24]. Finally, it is possible that the upregulated expression of TLR4 on transferred db/db-derived Ly6C high monocytes may have exacerbated glucose intolerance. A recent study showed that TLR4 signaling mediates inflammatory responses in adipose tissue and skeletal muscle leading to HFD-induced insulin resistance [25]. In the present study, crown-like structures (CLS) were observed in the adipose tissue of mice that received PKH26-stained db/db-derived monocytes, but not in mice that received db/+-derived monocytes. CLS are considered to scavenge residual lipid droplets of necrotic adipocytes and to reflect adipose inflammation. This result may indicate that abnormally activated db/db bone marrow-derived Ly6C high monocytes caused strong inflammation in the adipose tissue they infiltrated. In addition, a previous report demonstrated that relatively few macrophages infiltrate into adipose tissue in lean mice compared with obese mice [18], which is consistent with the findings of the present study. (Figure 5 legend: Ly6C high monocytes were sorted from the bone marrow of db/+ or db/db mice, labeled with PKH26, and injected i.v. into db/db mice. One group of mice was sacrificed after 1 week and analyzed for infiltration of PKH26+ cells in liver, kidneys, and adipose tissue. A second group of mice was tested for phenotypic changes at various times for 4 weeks after cell transfer. (B-D) Body weight (B), fasting blood glucose levels (C), and postprandial blood glucose levels (D) in db/db mice at the indicated times after transfer of db/+ Ly6C high monocytes (white circles) or db/db Ly6C high monocytes (black circles) as described in 4(a). Results are expressed as the means ± SEM. N = 6 mice/group. *p<0.05. (unpaired t-test) (E and F) IPGTT of db/db mice at 2 weeks after transfer of db/+ or db/db bone marrow-derived Ly6C high monocytes as described in 4(A). (E) Blood glucose levels. (F) IPGTT AUC. Results are expressed as the means ± SEM. N = 6 mice/group, *p<0.05. (unpaired t-test) (G and H) IPITT of db/db mice at 4 weeks after transfer of db/+ or db/db bone marrow-derived Ly6C high monocytes as described in 4(A). (G) Blood glucose levels. (H) AUC. Results are expressed as the means ± SEM. N = 6 mice/group. *p<0.05. (unpaired t-test) (I-L) Serum fructosamine levels (I), urinary glucose levels (J), urinary albumin levels (K), and urinary 8-hydroxy-2'-deoxyguanosine (8-OHdG) levels (L) in db/db mice at 4 weeks after injection of db/+ or db/db Ly6C high monocytes as described in 4(A). Results are expressed as the means ± SEM. N = 6 mice/group. *p<0.05. (unpaired t-test) Cre, creatinine. https://doi.org/10.1371/journal.pone.0229401.g005) The current study has some limitations. First, in the transplantation experiments, the effects of inflammatory monocytes from diabetic donor mice on wild-type recipient mice were not evaluated. Second, the effects on other systemic organs, including the pancreas, were not examined. Therefore, future studies are needed to confirm these detailed mechanisms. In conclusion, our study shows for the first time that (i) Ly6C high monocytes are aberrantly activated in the bone marrow of diabetic mice, (ii) db/db-derived Ly6C high monocytes traffic more effectively into the liver, kidneys, and adipose tissue of recipient db/db mice than do db/+-derived cells, and (iii) recipients of db/db-derived Ly6C high monocytes display worse glucose tolerance than do recipients of db/+-derived Ly6C high monocytes. Thus, aberrantly activated bone marrow-derived Ly6C high monocytes may contribute to the glucose intolerance of diabetic animal models. Supporting information S1
6,297.2
2020-02-25T00:00:00.000
[ "Biology" ]
Impact of Hemolysis on Multi-OMIC Pancreatic Biomarker Discovery: Derisking Precision Medicine Biomarker Development Cancer biomarker discovery is critically dependent on the integrity of biofluid and tissue samples acquired from study participants. Multi-omic profiling of candidate protein, lipid, and metabolite biomarkers is confounded by the timing and fasting status of sample collection, participant demographics, and treatment exposures of the study population. Contamination by hemoglobin, whether caused by hemolysis during sample preparation or underlying red cell fragility, contributes 0-10 g/L of extraneous protein to plasma, serum, and buffy coat samples and may interfere with biomarker detection and validation. We analyzed 617 plasma, 701 serum, and 657 buffy coat samples from a 7-year longitudinal multi-omic biomarker discovery program evaluating 400+ participants with or at risk for pancreatic cancer, known as Project Survival™. Hemolysis was undetectable in 93.1% of plasma and 95.0% of serum samples, whereas only 37.1% of buffy coat samples were free of contamination by hemoglobin. Regression analysis of multi-omic data demonstrated a statistically significant correlation between hemoglobin concentration and the resulting pattern of analyte detection and concentration. Although hemolysis had the greatest impact on identification and quantitation of the proteome, distinct differentials in metabolomics and lipidomics were also observed and correlated with severity. We conclude that quality control is vital to accurate detection of informative molecular differentials using OMIC technologies and that caution must be exercised to minimize the impact of hemolysis as a factor driving false discovery in large cancer biomarker studies. Introduction The current healthcare ecosystem is rapidly evolving toward deploying precision medicine strategies for increasing optimal stratification of patients to improve clinical outcomes. These actions will predominantly focus on the use of molecular, digital, and clinical biomarkers that will characterize patients on multiple dimensions of phenotypic presentation. Standardization of quality parameters governing sample collection is important to ensure accuracy and reproducibility of potential discoveries, ultimately easing translation back into the clinic. Molecular markers, whether genetic, proteomic, lipidomic or metabolomic, hold tremendous promise to deconvolute the biological presentation of patients. The composition of adaptive biological molecules (proteins, lipids, and metabolites) can be significantly influenced by patient demographics, pharmacological agents, and sample handling processes, which can hinder potential biomarker discovery and development. Hemolysis represents a common sample processing outcome and can be due to handling, but also to disease etiologies that render red blood cells (RBCs) more labile to lysis. Hemolysis can occur for a variety of reasons and leads to the release of free hemoglobin into blood collection samples [1]. Due to some medical conditions, or as the result of taking certain medications, this breakdown of RBCs can be increased. Hemolysis has the potential to drastically alter the observed proteome of buffy coat samples due to contamination by hemoglobin and other high-abundance proteins seen in RBCs. RBCs are composed mainly of hemoglobin and carbonic anhydrase-1, contributing 97% and 1% of the entire RBC proteome, respectively [2].
The buffy coat fraction of whole blood has been observed to be less than 1% of the blood by volume [3]. As a result, even minor contamination of RBCs into the other fractions, or increased hemolysis due to medical reasons, can increase the concentration of hemoglobin and carbonic anhydrase-1 and potentially impact the observed proteome. Guidelines governing omics analysis of clinical samples have been developed over the past decade as the use of such platforms has been broadly adopted in R&D and clinical trial assessments [4,5]. This includes standardized sample preparation approaches and techniques, quality controls, and the recommended size of cohorts required to ensure statistical significance of potential findings. However, protocols for quality controls regarding sample collection are deficient. Several key challenges have already been demonstrated in using biofluids for biomarker discovery, such as chemical modifications of proteins or sample degradation during storage. Further, plasma and serum, which are often employed for convenience of collection, exhibit a wide dynamic range of protein concentrations, making the identification of low-abundance potential biomarkers all the more challenging. One potentially impactful occurrence that should be included is the effect of hemolysis, which can directly contribute to both aforementioned challenges. Herein, we performed mass spectrometry-based lipidomics, metabolomics and proteomics analysis of plasma and serum from over 420 individuals in a pancreatic biomarker clinical trial. Buffy coat samples were subjected only to proteomics analysis, based on the sample amount obtained. Study Design There were 420 patients enrolled in this study: 224 males and 196 females. These fell into one of five categories as follows: healthy volunteers: 33, patients with pancreatitis: 113, early pancreatic cancer: 67, local pancreatic cancer: 115, and metastatic pancreatic cancer: 92. All volunteers participating in this clinical study (NCT02781012) gave their informed consent for inclusion before they participated in the study. Research use of the samples was conducted in accordance with the terms outlined within the informed consent form and with the tenets of the Declaration of Helsinki and its later amendments or comparable ethical standards. Sample Collection Whole blood samples were collected via venipuncture into EDTA tubes. All samples were processed and frozen at −80 °C within 3 hours of the blood draw. The plasma fraction was separated by centrifugation at 1200 × g for 10 minutes at room temperature and was aliquoted into separate tubes and frozen. During centrifugation, the buffy coat layer also separated from the red blood cells. The buffy coat layer was collected and diluted with 8 mL RPMI buffer, transferred into a 50 mL Leucosep tube, and centrifuged at 1200 × g for 10 minutes at room temperature to separate the buffy coat layer further from the red blood cells. Buffy coat was washed three times with PBS and pelleted to remove solution. Finally, the buffy coat was resuspended in 200 µL of PBS and split between two tubes before being frozen at −80 °C. A separate vial of blood was collected for serum sample collection in serum separator tubes and was left at room temperature for 30-45 minutes to allow the clot to form. Serum separator tubes were then centrifuged at 1200 × g for 10 minutes at room temperature. Separated serum was aliquoted and frozen at −80 °C.
A subset of the samples obtained was impacted by hemolysis, resulting in contamination of the matrix of interest. A comprehensive assessment of expressional patterns of proteins, lipids and metabolites was performed to identify hemolytic contamination in these samples. The proteome of buffy coat was most impacted, resulting in expressional changes of proteins originating from red blood cells. The use of markers impacted by hemolysis should be considered with caution for exploration as biomarkers. Detection Of Hemolysis Upon receipt, all samples were accessioned and qualitatively assigned a colorimetric hemolysis score of 1-3 for plasma and serum and 0-4 for buffy coat following the color scale in Fig. 1 [6]. A score of zero was reserved for buffy coat samples appearing clear to opaque white, when buffy coat cells were most pure. Given the natural yellowish appearance of plasma and serum, a score of zero was never given, and a score of 1 was considered most pure. Proteomics Protein Extraction 65 µL of raw plasma/serum was filtered through a pre-wet 0.22 µm cellulose acetate spin filter. 40 µL of the filtered plasma/serum was pipetted onto another pre-wet 0.22 µm cellulose acetate spin filter and combined with 20 µL of 80 mg/mL lipid removal agent (LRA). The mixture was placed on a shaker for 30 minutes and then centrifuged. The resulting filtrate was roughly 40 µL in volume and was combined with 120 µL of Agilent Buffer A. The sample was then loaded into vials and placed on the Agilent 1260 series HPLC, and the top 14 abundant proteins were depleted using the Multi-Affinity Removal Column 14 from Agilent. The depleted samples were collected into vials and protein concentration was determined using the Bradford assay. Buffy coat samples were lysed with a lysis buffer containing 5 M urea, 50 mM Tris-HCl pH 8.3, 0.1% SDS, 1% Protease and Phosphatase Inhibitor Cocktail, and Optima LC/MS water. 100 µL of lysis buffer was added to each sample and mixed by pipetting up and down, and the whole sample was then immediately transferred out of the sample vial and into a 1.5 mL Eppendorf tube. Each sample was sonicated with four 3-second pulses at 20% amplitude to fully lyse the cells. Sonicated samples were centrifuged at 17,000 × g for 10 minutes, and the supernatant was then used in the Bradford assay to determine the protein concentration. Trypsin Digestion Extracted proteins were trypsin digested as previously described [7]. In brief, proteins were reduced with 10 mM Tris(2-carboxyethyl)phosphine (TCEP) and alkylated with 18.75 mM iodoacetamide before being precipitated in acetone overnight and digested with trypsin the next day. TMT Labeling of Peptides Extracts were divided into three parts: 75 µL for gas chromatography combined with time-of-flight high-resolution mass spectrometry, 150 µL for reversed-phase liquid chromatography coupled with high-resolution mass spectrometry, and 150 µL for hydrophilic interaction liquid chromatography with tandem mass spectrometry, and analyzed as previously described [8][9][10][11][12]. The NEXERA GC system was fitted with a Gerstel temperature-programmed injector and cooled injection system (model CIS 4). An automated liner exchange (ALEX) (Gerstel, Muhlheim an der Ruhr, Germany) was used to eliminate cross-contamination from the sample matrix that was occurring between sample runs. Quality control was performed using a metabolite standards mixture and pooled samples, applying the methodology previously described [13][14][15][16].
A quality control sample containing a standard mixture of amino and organic acids, purchased from Sigma-Aldrich as certified reference material, was injected daily to perform an analytical system suitability test and to monitor the day-to-day reproducibility of recorded signals, as previously described [8][9][10][11][12]. A pooled quality control sample was obtained by taking an aliquot of the same volume from all samples in the study and was injected daily with each batch of analyzed samples to determine the optimal dilution of the batch samples and to validate metabolite identification and peak integration. Collected raw data were manually inspected, merged, imputed, and normalized by the sample median. Metabolite identification was performed using in-house authentic standards analysis. Metabolite annotation utilized recorded retention times and retention indexes. Mediator Lipidomic Analysis A mixture of deuterium-labeled internal standards was added to aliquots of 100 µL serum or plasma, followed by 3× the sample volume of cold methanol (MeOH). Samples were vortexed for 5 minutes and stored at −20 °C overnight. Cold samples were centrifuged at 14,000 × g at 4 °C for 10 minutes, the supernatant was then transferred to a new tube, and 3 mL of acidified H2O (pH 3.5) was added to each sample prior to solid-phase extraction on C18 SPE columns (Thermo Pierce), performed as described [19]. The methyl formate fractions were collected, dried under nitrogen, and reconstituted in 50 µL MeOH:H2O (1:1, v/v). Samples were transferred to 0.5 mL tubes and centrifuged at 20,000 × g at 4 °C for 10 minutes. Thirty-five µL of supernatant were transferred to LC-MS vials for analysis using the BERG LC-MS/MS mediator lipidomics platform as described. Data Analysis Proteins that had missing values in more than 85% of samples were considered unreliable and therefore removed from further analysis. Data were normalized according to a median centering and variance scaling approach applied across samples [20,21]. Batches due to study cohort were corrected using an empirical Bayesian framework, ComBat [22,23]. Briefly, this method performs location and scale adjustments based on estimated batch effect parameters per protein and returns a corrected dataset for further analysis. The data were then used for identifying differential expression between different hemolysis scores in plasma, serum and buffy coat. Missingness was calculated as the proportion of missing proteins in each sample. Workflow, Design and Summary To evaluate the impact of hemolysis on biomarker discovery utilizing a multi-omics platform, we compared proteins, lipids, and metabolites identified across plasma, serum, and buffy coat samples (proteomics only) acquired from 420 non-diseased and pancreatic cancer patients. The workflow of the proteomics, lipidomic and metabolomic analysis is shown in Fig. 1. A hemolysis score was recorded for each sample, ranging from 0-4 for buffy coat and 1-3 for plasma and serum. A summary of the distribution of hemolysis scores within each sample type can be found in Fig. 2. Buffy coat yielded the largest proportion of hemolyzed samples (score 0: 37.1%, score 1: 25.1%, score 2: 24.8%, score 3: 12.4%, score 4: 0.4%). The protocol for isolating buffy coat from blood may be one of the major reasons for the large number of contaminated buffy coat samples. In proteomics, 7302, 1971, and 2146 proteins were identified and quantified in buffy coat, serum and plasma, respectively, using TMT labeling and 2D online LC-MS/MS.
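As a rough illustration of the Data Analysis steps described above (removal of proteins missing in more than 85% of samples, per-sample median centering and variance scaling, and per-sample missingness), the following Python sketch shows one way these operations could be written. The data layout (samples as rows, proteins as columns) is an assumption, and the ComBat batch correction is only indicated by a comment.

```python
# Hedged sketch of the proteomics pre-processing described above: drop proteins
# missing in >85% of samples, then median-center and variance-scale each sample,
# and compute per-sample missingness. Not the authors' actual pipeline.
import pandas as pd

def preprocess(intensities: pd.DataFrame, max_missing: float = 0.85) -> pd.DataFrame:
    # 1) Filter proteins (columns) with too many missing values across samples
    keep = intensities.isna().mean(axis=0) <= max_missing
    filtered = intensities.loc[:, keep]

    # 2) Median centering and variance scaling applied per sample (row-wise)
    centered = filtered.sub(filtered.median(axis=1), axis=0)
    scaled = centered.div(centered.std(axis=1), axis=0)

    # 3) Batch correction (ComBat) would follow here, e.g. via a dedicated
    #    implementation such as the pycombat/inmoose packages; omitted in this sketch.
    return scaled

def missingness(intensities: pd.DataFrame) -> pd.Series:
    """Proportion of missing proteins per sample, as used for the missingness boxplots."""
    return intensities.isna().mean(axis=1)
```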
After filtering the data for proteins that have less than 85% missing values, a total of 3648, 453, and 492 proteins in buffy coat, serum and plasma, respectively, were obtained and used for further analysis. In lipidomics, 1318 structural lipids and 106 mediator lipids were identified and quantified in plasma and serum samples after data filtration. In metabolomics, a total of 514 and 508 metabolites were identified and quantified in plasma and serum samples, respectively, after data filtering, and kept for further analysis. Differentially Expressed Metabolites And Lipids Lipidomics analysis revealed no significant changes in lipid expression for the mediator lipidomics data when comparing samples with hemolysis scores of 2+ to 1 in both plasma and serum. However, for the structural lipidomics analysis, 5 lipids were down-regulated and 2 lipids up-regulated in plasma, and 14 lipids were down-regulated and 11 lipids up-regulated in serum (Table 1). More profound effects were seen in the metabolomics data. When comparing samples with hemolysis scores of 2+ to 1, a total of 51 metabolites were down-regulated and 25 were up-regulated due to hemolysis in plasma (Table 1). For the same comparison in serum, 93 metabolites were down-regulated and 21 were up-regulated due to hemolysis (Table 1). A summary of these results can be found in Supplemental Table 1. * Differentially expressed species due to hemolysis Missingness A subset of samples with the lowest hemolysis score was created, in this case a score of 0 for buffy coat samples and a score of 1 for plasma and serum samples. This subset was used to filter the proteins, and only the proteins that have less than 85% missing values were kept in the full proteomics data. The missing proportions of proteins for each sample were computed, and samples were then grouped by hemolysis score (score 0: 244 samples, score 1: 165 samples, score 2: 163 samples, score 3+: 85 samples in buffy coat) (Fig. 2). The boxplots clearly indicate that as the hemolysis score of a sample increases, the number of proteins identified across the set within the sample decreases; the medians of the proportions of missing proteins are 0.299, 0.353, 0.406, and 0.410 for the groups with hemolysis scores 0, 1, 2, and 3+, respectively (Fig. 3). This can be explained by an increase in the signal derived from the more abundant hemoglobin proteins contributed by the lysed red blood cells, suppressing the signal of the less abundant proteins and changing the dynamic range of the protein content that would ideally be identified from samples with little to no hemolytic contamination. Differentially Expressed Proteins To assess the effect of hemolysis on relative protein expression in buffy coat, comparisons between hemolysis groups were performed as shown by volcano plots (Fig. 4A). Overall, 657 samples were included in this analysis. A total of 3,647 proteins were identified when assessing the differentially expressed proteins between samples with a score of 0 vs. 1 (Fig. 4A; Table 1). Lastly, we compared samples with a score of 0 vs. 3+ (Fig. 4C): a total of 592 proteins were consistently identified across all samples, with 238 differentially expressed proteins down-regulated and 187 up-regulated at a 1.3 fold-change threshold and a p-value of 0.05. Hemolysis not only impacted the proteins identified but also impacted the quantitation of the differentially expressed proteins.
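The differential-expression selection used for the volcano plots (minimum 1.3-fold change, p-value below 0.05) amounts to a simple per-protein filter. The sketch below assumes log2-transformed, normalized intensities and uses Welch's t-test as the per-protein test, which the preprint does not specify; it is illustrative only.

```python
# Hedged sketch of the volcano-plot style filter: per-protein fold change between
# two hemolysis-score groups plus a p-value, thresholded at 1.3-fold and p < 0.05.
# The per-protein test (Welch's t-test) is an assumption, not stated in the preprint.
import numpy as np
import pandas as pd
from scipy import stats

def differential_proteins(log2_a: pd.DataFrame, log2_b: pd.DataFrame,
                          fc_threshold: float = 1.3, alpha: float = 0.05) -> pd.DataFrame:
    rows = []
    for protein in log2_a.columns.intersection(log2_b.columns):
        a = log2_a[protein].dropna()
        b = log2_b[protein].dropna()
        if len(a) < 3 or len(b) < 3:
            continue  # skip proteins with too few observations in either group
        log2_fc = a.mean() - b.mean()                  # difference of log2 means
        _, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
        rows.append((protein, log2_fc, p))
    table = pd.DataFrame(rows, columns=["protein", "log2_fc", "p_value"])
    hit = (table["p_value"] < alpha) & (table["log2_fc"].abs() >= np.log2(fc_threshold))
    return table[hit].sort_values("p_value")
```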
Further, comparisons were made between samples with no visual hemolysis (scores of 0 for buffy coat, scores of 1 for plasma and serum) and samples with visual hemolysis (scores of 1-3+ for buffy coat, scores of 2+ for plasma and serum). Differential expression of proteins was assessed using the volcano plots shown in Fig. 4, using a threshold of 1.3-fold change with a corresponding p-value of 0.05 to be considered differentially expressed. Overall, 250 proteins were found to be down-regulated and 208 up-regulated in buffy coat. Similarly, in plasma and serum, 2 proteins were found to be down-regulated in samples scored 2+ compared to samples scored 1. A total of 22 proteins in plasma and 13 proteins in serum were found to be up-regulated in the same comparison. A summary of these results can be found in Supplemental Table 1. Impact Of Hemolysis On Hemoglobin To study hemolysis via protein identification and relative quantitation, we assessed the expression of Hemoglobin Subunit Alpha (HBA1), Hemoglobin Subunit Beta (HBB), and Hemoglobin Subunit Delta (HBD) across all sample types, grouped by hemolysis score within each sample type. Hemolysis is generally classified as the lysis of RBCs in circulation or during sample preparation, and as hemoglobin is one of the most abundant proteins in red blood cells, hemoglobin expression increases due to hemolysis (Fig. 5A) and increased stepwise with increasing hemolysis score. A similar pattern was seen in both plasma and serum, with lower levels observed in samples with a hemolysis score of 1 and significantly higher levels observed in samples scored 2+ (Fig. 5B and Fig. 5C). We also assessed the expression of carbonic anhydrase (CA1), histone H2B type 1-L (HIST1H2BL), and ubinuclein-2 (UBN2) (Fig. 6). CA1 is another major protein found in RBCs and is responsible for processing carbon dioxide in the body. The expression of CA1 is low in samples classified with a hemolysis score of 0 and increases, similarly to the hemoglobin protein expression, with increasing hemolysis score (Fig. 6). HIST1H2BL and UBN2 are both nuclear proteins whose identification is expected in buffy coat samples and not from red blood cells. HIST1H2BL and UBN2 expression follow the expected result, with higher expression in samples with a hemolysis score of 0 and lower expression with increasing hemolysis score (Fig. 6), indicating signal suppression of these proteins as a result of hemolysis. Discussion Translation of biomarkers into clinical practice requires a comprehensive understanding of the impact of sample handling to avoid false discovery of processing markers rather than disease-associated biomarkers. Adaptive omic technologies such as proteomics, lipidomics, and metabolomics demonstrate tremendous promise in associating the patient phenotype with causal biology but are also significantly impacted by red blood cell contamination in plasma, serum, or buffy coat. In the current study, we uncovered that buffy coat was the most significantly affected by hemolysis in a prospective biomarker study investigating pancreatic cancer and at-risk populations. The incidence of hemolysis was independent of disease conditions (data not shown) but did influence detection and quantification of analytes. Contamination by proteins found in RBCs due to hemolysis has also been demonstrated in red blood cell storage, in an occurrence known as storage lesions.
Storage lesions are progressive changes in the morphology, biochemistry, and function of RBCs during storage that result in changes in the viability of the RBCs and accumulation of contaminating proteins and cells. These changes in RBCs ultimately lead to hemolysis and, consequently, a release of the cytosolic contents into solution [1,24]. A study observing changes in the protein distribution of RBC supernatant over a storage period identified appreciable increases in proteins, including carbonic anhydrase 1 and 2 (CA1 and CA2), peroxiredoxin-1 and -2 (PRDX1 and PRDX2), and catalase, as well as others, due to hemolysis of RBCs over time in these storage lesions [25]. Similarly, our findings also identify these proteins as contaminants in plasma, serum, and buffy coat due to hemolysis that may occur in vivo or during sample processing. The identification of proteins in a sample depends on the dynamic range of the proteins. Identifying less abundant proteins in a sample via LC-MS/MS analysis is challenging at low concentrations, as current mass spectrometry capabilities allow for identification over a range of 3-4 orders of magnitude [26]. Hemolysis increases the hemoglobin content in the sample of interest. Given that hemoglobin accounts for 97% of the composition of RBCs, with carbonic anhydrase accounting for another 1%, this can create significant suppression of the signal of low-abundance proteins in the biofluid of choice for a proteomic study [27]. In proteomics, sample quantitation is performed using equal volumes of fluid or equal concentrations of protein content. In this study, equal concentrations of protein were used for semi-quantitation, supplemented by Tandem Mass Tags for protein quantitation. The general hypothesis is that the samples are identical with minor changes. Quantitation of proteins is impacted by hemolysis, which leads to an increase in the concentration of red blood cell proteins. As contamination increases, the proportion of proteins of interest in the sample decreases, which can lead to inaccurate quantitation and false discovery of biomarkers. Hemolyzed samples should be avoided in omics studies to minimize data analysis variability and data interpretation errors. The use of differentially expressed species (Supplemental Table 1) as biomarkers of disease in any study should be viewed with caution due to hemolysis. For instance, carbonic anhydrase-1 has been demonstrated as a serum biomarker for prostate cancer [28]. Further, peroxiredoxin-2 was identified as a biomarker in a panel of plasma proteins for Anderson-Fabry disease [29]. While this may in fact be the case, careful consideration should be given to sample quality during testing to avoid false positives, and analysis should be performed to confirm that these proteins had little to no contribution to their signal from sample handling issues or hemolysis. In clinical settings, omics analysis on serum, plasma or buffy coat samples requires caution while handling samples to avoid hemolysis. Following a set protocol is required when collecting and handling samples, and any deviation in sample handling needs to be recorded. In some cases, even after all sample handling precautions have been taken, hemolysis may still occur due to underlying biological factors. In these scenarios, various methods can be used during data analysis to minimize the impact of contamination by proteins like hemoglobin. One approach is to ignore any contributing red blood cell proteins as biomarkers, if they are considered contamination.
A second approach is to use proteins such as hemoglobin or carbonic anhydrase to normalize the data, specifically normalizing only non-red-blood-cell proteins; this can minimize the impact of hemolysis on quantitation, although any such correction may not completely negate it (a minimal code sketch of these ideas is included below). A third approach is to move toward equal-volume quantitation rather than equal-concentration quantitation; however, this might require technical advancements in instrumentation and technology. Identification of contaminating proteins cannot be avoided, and the expression of those proteins rises with increasing hemolysis score. Sophisticated LC-MS/MS technology, biochemical procedures for sample preparation, and advanced bioinformatics tools need to be used for omics analysis in precision medicine. The use of stringent purification procedures is of key importance when using blood samples for the identification and application of biomarkers. This study comprehensively assessed omics variables significantly impacted by increasing hemolysis score in buffy coat and plasma/serum. Differences were identified that were associated with increasing hemolysis score, including missingness of identified proteins. Integration of lipidomics, metabolomics, and proteomics data provided an expanded, comprehensive insight into the impact of hemolysis. Overall, our results will serve as a comprehensive resource to the biomarker community in the field of blood analysis. Diagnostic applications will be able to leverage these proteins, lipids, and metabolites identified as hemolytic contamination in future biomarker studies. Declarations Ethics approval and consent to participate This study was IRB approved and all patients consented to participate. Consent for publication MAK: Drafted the work or substantively revised it, provided substantial contributions to the conception and design of the work and acquisition, analysis Figure 1 Workflow of the methods used to study the impact of hemolysis. Initially, clinical samples were assigned a hemolysis score of 0-4 following the hemolysis scale color legend. In proteomics, plasma and serum were filtered and depleted of the top 14 most abundant proteins, and buffy coat cells were lysed. Proteins were extracted and digested with trypsin before being labeled with TMT 10-Plex. TMT-labeled peptides were analyzed using a 2D LC-MS/MS platform and quantified using Proteome Discoverer v1.4. In lipidomics, structural lipids were extracted via a liquid/liquid extraction method on an automated Hamilton Robotics STARlet system. Extracted lipids were analyzed via direct-injection electrospray ionization TOF-MS. Further, mediator lipids were acidified and extracted using SPE. Eluted lipids were dried and resuspended for LC-MS analysis. In metabolomics, metabolites were extracted in organic conditions and analyzed using gas chromatography-mass spectrometry (GC/MS), reversed-phase liquid chromatography-mass spectrometry (RP-LC/MS), and hydrophilic interaction liquid chromatography-tandem mass spectrometry (HILIC-LC/MS/MS). Post-processing of data included inspection, merging, and imputation. Figure 4 Volcano plots showing a comparison of protein expression between hemolysis scores 3 vs. 0, 2 vs. 0, and 1 vs. 0. The protein expression ratio to the QCP is shown as log2 fold change and compared to -log10 of the p-value. Significant proteins required a minimum 1.5-fold change and a maximum p-value of 0.05.
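The fold-change/p-value thresholding used for the volcano plots and the "normalize only non-RBC proteins" idea mentioned above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline (which used Proteome Discoverer and TMT reporter-ion quantitation); the protein symbols, the column names, and the specific normalization rule shown here are assumptions made for the example.

```python
# Minimal sketch (assumptions: hypothetical column names and an illustrative
# normalization rule, not the study's actual workflow).
import numpy as np
import pandas as pd

RBC_PROTEINS = {"HBA1", "HBB", "HBD", "CA1", "CA2", "PRDX1", "PRDX2"}

def flag_differential(df, fc_threshold=1.3, p_threshold=0.05):
    """Volcano-style flagging; df has columns protein, mean_a, mean_b, pvalue."""
    out = df.copy()
    out["log2_fc"] = np.log2(out["mean_b"] / out["mean_a"])
    out["de"] = (out["log2_fc"].abs() >= np.log2(fc_threshold)) & (out["pvalue"] <= p_threshold)
    out["direction"] = np.where(out["log2_fc"] > 0, "up", "down")
    return out

def normalize_non_rbc(sample):
    """sample: dict of protein -> intensity for one specimen.
    One plausible reading of the second approach: express each non-RBC protein
    as a fraction of the total non-RBC signal, so hemolysis-derived RBC proteins
    do not inflate the normalization denominator."""
    non_rbc_total = sum(v for p, v in sample.items() if p not in RBC_PROTEINS)
    return {p: (v / non_rbc_total if p not in RBC_PROTEINS else v)
            for p, v in sample.items()}

# Toy usage with made-up intensities:
df = pd.DataFrame({"protein": ["ALB", "HBB"], "mean_a": [10.0, 5.0],
                   "mean_b": [14.0, 40.0], "pvalue": [0.01, 0.001]})
print(flag_differential(df))
```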
Figure 6 is a boxplot of buffy coat protein expression for CA1 (carbonic anhydrase 1), HIST1H2BL (histone H2B type 1-L), and UBN2 (ubinuclein-2), which were identified as significantly differentially expressed proteins when compared across hemolysis scores of 0, 1, 2, and 3+. Expression values are log2 ratios to the pool. Supplementary Files A list of supplementary files is associated with this preprint.
6,067.6
2021-06-10T00:00:00.000
[ "Medicine", "Biology" ]
Is biotechnology (more) acceptable when it enables a reduction in phytosanitary treatments? A European comparison of the acceptability of transgenesis and cisgenesis Reduced pesticide use is one of the reasons given by Europeans for accepting new genetic engineering techniques. According to the advocates of these techniques, consumers are likely to embrace the application of cisgenesis to apple trees. In order to assess the acceptability of these techniques, we estimate a Bayesian multilevel structural equation model, which takes into account the multidimensional nature of acceptability and individual, national, and European effects, using data from the 2010 Eurobarometer 73.1 survey on science. The results underline the persistence of clear differences between European countries and, despite considerable distrust, a relatively wider acceptability of vertical gene transfer as a means of reducing phytosanitary treatments, compared to horizontal transfer. Introduction The European controversy over the acceptability of biotechnologies, and notably their use in food, has been developing since the 1990s [1]. The reduction of phytosanitary treatments is often brought up by European consumers as the main reason that could lead to an acceptance of GM (genetically modified) foods, as shown by several studies [2,3,4,5,6]. At the same time, an international team [7] has recently fully decoded the apple tree genome (Malus domestica L. Borkh), creating future possibilities for more advanced genetic engineering applications, notably for the development of new apple varieties using cisgenesis (intra-species gene transfer). This breakthrough has led authors such as Jacobsen and Shouten [8] to promote the potential of this technique, on the condition that the communication errors made during the development of GMOs (Genetically Modified Organisms) do not occur again. A marker gene for antibiotic resistance was traditionally inserted during genetic manipulation in order to distinguish the cells in which the genetic modification was successful from those in which it failed. In the case of the cisgenic apple, whose initial form was developed by Vanblaere et al. [9], this marker gene is eliminated (for example, through the integration of a DNA element producing an enzymatic cut). Righetti et al. [10] show that these new breeding technologies can lead not only to cisgenic plants but also to marker-free transgenic plants. Jacobsen and Shouten [8] propose to exclude organisms created using cisgenesis from the legislation applied to GMOs. The term "gene evolution", applied to cisgenesis (intraspecies vertical transfer), rather than "gene revolution", applied to transgenesis (interspecies horizontal transfer), would capture, according to these researchers, the greater potential acceptability of this type of biotechnology for consumers. The purpose of this article is to consider this question seriously at a time when the development of new technologies in genetic engineering is clouding the scientific debate and challenging public regulation [11,12,13,14]. For example, as described by Kuzma, gene editing involves changing DNA sequences at targeted locations, usually using site-directed nucleases (such as CRISPR-Cas9), and may be a safer process than first-generation GE techniques owing to its precision. Therefore, "ironically the same (..)
developers who claimed that the process of [genetic engineering] does not matter for regulatory purposes are now arguing that changes to the engineering process justify looser regulatory scrutiny" [14]. Based on a survey of the opinions of Europeans concerning biotechnologies (Eurobarometer 73.1), our objective is to highlight the differentiated acceptability of different genetic techniques (cisgenic or transgenic) in European countries. In what follows, we use the terms genetic engineering, genetically modified, and genetically modified organism following the Agricultural Biotechnology Glossary of the USDA (see https://www.usda.gov/topics/biotechnology/biotechnology-glossary): an organism that is generated through genetic engineering is considered to be genetically modified (GM), and the resulting entity is a genetically modified organism (GMO), i.e., a genetically engineered organism. A Bayesian multilevel structural equation model is estimated in order to take into account individual, national, and European effects of these techniques. The results underline the importance of a "country" effect and, despite strong opposition, a relatively wider acceptability of cisgenesis in comparison to transgenesis as a means of reducing phytosanitary treatments. In a broad sense, phytosanitary treatments are pest controls (herbicides, fungicides, and insecticides) corresponding to the main apple diseases. We do not use the narrow sense (in export regulation), for which "Phytosanitary treatments are official pre-shipment or quarantine processes recognized internationally and used by National Plant Protection Organisations (NPPOs) to mitigate biosecurity risks associated with plants or plant-based products." [15]. Original on an empirical level, our contribution is also original in the methodology developed: a multivariate methodology designed to explicitly take into account the complex nature of the data from a pan-European survey that measures attitudes (latent variables) rather than behavior [16,17], which is clearly different from a previous analysis of the survey [18]. After a brief review of the literature in order to position the terms of the debate between transgenesis and cisgenesis, and the specificity of its application in the case of the apple, we present our data. We then explain our Bayesian econometric strategy. Lastly, we discuss the results obtained by comparing them to studies that have used similar or dissimilar methods in order to specify their validity. Literature review: Four proposals regarding the acceptability of transgenesis and cisgenesis [...] considered to be different from breeding and commonly classified as a form of genetic modification". Finally, Delwaide et al. [38], measuring consumers' willingness to pay, show that European consumers may accept cisgenic food products more readily than transgenic food products. We can thus formulate an initial research proposal: H1. There is a greater acceptability of cisgenesis than transgenesis, while noting a strong correlation between the two. The different studies carried out on the social acceptability of biotechnology all underline that the absence of perceived utility is one of the determining factors in the opposition to biotechnologies, regardless of the field of application [1,33]. An interest in the environment can lead to perceiving certain genetic manipulation techniques as relatively acceptable, as long as they are presented as an extension of more traditional methods.
Consumers often mention the reduction of phytosanitary treatments as the main reason that could lead them to accept GM foods [2,3,4,5,6], even if its importance can be controversial [39]. Studying the acceptability of GM tomatoes in the USA, Loureiro & Bugbee [3] find that consumers are willing to pay the highest premium for the "enhanced flavor" attribute, followed by both the "enhanced nutritional value" and "pesticide reduction" attributes. The situation may be different for apples, as fruit tree farming is highly dependent on phytosanitary treatments. The control of apple scab, which has a considerably negative impact on the propensity of consumers to purchase apples [40], requires 10 to 20 antifungal treatments per year. Using a choice experiment survey in New Zealand, Kaye-Blake et al. [2] find that the value of GM apples is determined by the specific benefits that can be provided: the willingness to pay for GM apples increases with either improved flavor or fewer insecticides, but the premium is higher for the latter than for the former. Different studies underline that consumers are concerned by the pollution caused by the spread of pesticide residues into the environment. Heiman [6] argues that information on reduced pesticide use in GM crops primes at least two attributes simultaneously: health, and contribution (damage) to the environment. Consumers with a greater interest in science (or training in these areas) generally accept biotechnologies more readily [35,41,42]. Two research proposals arise from this: H2. Environmental concerns are important for the acceptability of both techniques. H3. A general interest in science or biotechnology is an important factor for the acceptability of both techniques. Joly & Marris [43] underline the specific structure of the debate in each country, highlighting very different acceptability levels between countries, within the same country, and for different applications. Nayga et al. [44] emphasize a greater acceptability of genetically modified plants in South Korea than in the United States. This point is confirmed by meta-analyses of experimental economics studies underlining greater resistance from European consumers than from American or Asian ones [45,46]. European studies [1,47] and comparative studies between countries [43] show that, beyond the average European citizen, there is great diversity in national configurations. There is a convergence between European countries on the general attitude towards biotechnologies, with the caveat that recent members of the European Union show an increase in the number of citizens who are, ex ante, more favorable to them [48,49]. Specifically concerning intragenesis, Lusk & Rozan [50,51] and Rozan et al. [36] have shown, on the one hand, a greater acceptance of this technique when compared with other gene transfer techniques and, on the other, major differences between France and the United States on this point. Delwaide et al. [38] have estimated significant differences in willingness to pay (WTP) for cisgenesis and transgenesis across countries. We can therefore propose: H4. A considerable portion of the heterogeneity of individual preferences towards cisgenesis and transgenesis is explained by taking national aspects into consideration. Presentation of data: Eurobarometer 73.1 We used data from Eurobarometer 73.1, concerning the attitudes of Europeans towards science in 2010 (see Kronberger et al. [18] for univariate statistics).
Approximately 1000 people per country were questioned using a random multi-stage sampling process. The survey covers the population aged 15 years and over residing in each member state of the European Union, as well as some associated countries (such as Norway, Iceland, and Turkey). A series of questions was asked, preceded by an initial scenario that each respondent was asked to consider: Some European researchers think there are new ways of controlling common diseases in apples, things like scab and mildew. There are two new ways of doing this. Both mean that the apples could be grown with limited use of pesticides, and so pesticide residues on the apples would be minimal. The first way is to introduce artificially a resistance gene from another species such as a bacterium or animal into an apple tree to make it resistant to mildew and scab (. . .) The second way is to artificially introduce a gene that exists naturally in wild/crab apples which provides resistance to mildew and scab. An assimilation is made between the "vertical transfer of genes" and cisgenesis on the one hand, and between the "horizontal transfer of genes" and transgenesis on the other [1]. As previously indicated, this assimilation is correct on the whole. It does, however, ignore one aspect of transgenesis, namely that it requires the use of marker genes (not present in cisgenesis). This dimension is one of the controversial elements of transgenesis, absent from cisgenesis. Given the way in which the questions were asked, it is unlikely that Europeans used this argument to accept one technique rather than another. To simplify, we use the terms "cisgenesis" and "transgenesis". Note finally that, according to recent studies [10], marker-free transgenic plants may be produced in the near future. Table 1 shows the different rates of agreement with proposals concerning the genetic manipulation of apple trees. The beliefs are generally expressed on a Likert scale (totally agree, agree vs. disagree and totally disagree, and don't know), except for two (label support for transgenesis and cisgenesis), for which a dummy is used (yes vs. no). Unfortunately, the model cannot handle responses with different distributions. Following Gaskell et al. [34], we dichotomize the responses and consider only the positive responses (agree and totally agree) versus the negative ones. Note that our strategy is also a way to address the existence of country-specific response styles that may lead to biased analysis [52]. It is important to underline that the wording of the questions varies in part between the two techniques, but our model handles this problem by providing various estimators (individual and national factors, determinants of each response, etc.); see Section 4. We also report (in Table 1) the descriptive statistics for four countries (Luxembourg with the lowest support for transgenesis, the Netherlands with the highest support, Turkey with the lowest support for cisgenesis, and Hungary with the highest). The table gives some outline of the European heterogeneity. The first conclusion concerns the opposition of the majority towards these two techniques, which make the people questioned "feel uneasy". Europeans emphasized the requirement for the labeling of these apples (81% for transgenesis and 71% for cisgenesis).
The second conclusion is that Europeans appear more favorable to vertical gene transfer for apple trees (a lower proportion replying that it harms the environment or makes them feel uneasy, and a higher proportion replying that it could be useful or should be encouraged). However, the variation between countries appears greatest for this technique: in Hungary, 71.7% of the population think that cisgenesis will harm the environment, compared with a rate of 22.4% in the Netherlands, a gap of nearly 50 percentage points between these two opposite positions. For transgenesis, the gap is only 27.3 percentage points. Finally, we can also note that even in the country with the highest support, agreement with the proposition that transgenesis should be encouraged is relatively low (only 39.8% for the Netherlands). In Table 2, the correlations between the various responses are reported. Only one is not significant (between "gene transfer from other species is fundamentally unnatural" and "gene transfer from the same species will be useful"). As responses across and within each kind of gene transfer are highly correlated, a multivariate analysis taking this structure into account is required. We can therefore ask what determines this attitude and how the answers to the different questions correlate. One hypothesis could be that the observed response depends not only on a vector of observed variables (socio-demographic factors, but also values and interests), but also on a general unobserved individual attitude (depending on a vector of individual determinants), and on a general unobserved attitude shared by the citizens of the same country. The last point leads us to try to measure the importance of the national aspect for individual attitudes. Econometric strategy: A multilevel structural equation model Using a standard statistical model is not appropriate when the data have a clear hierarchical structure, notably when the intragroup correlation is statistically significant [53]. A bias in the estimated variance is created when all responses are considered independent. If, on the contrary, we carry out the analysis on groups, using average values, the correlation between the variables created in this way is biased, leading to the ecological fallacy [54]. Using a multilevel model is therefore a standard approach for Eurobarometer data [55]. Our data source also creates a major problem: contrary to experimental economics surveys, it does not directly measure behavior but rather attitudes. This creates a problem of misreporting, or measurement error, because of a "social desirability bias": participants may be led to "simply stating a principle" [56]. One solution is to estimate an econometric model with measurement error using auxiliary data [57]. Another approach is to collect data on products that consumers have already purchased [58]. As European consumers are not able to buy real genetically modified apples, both techniques are inapplicable. We choose to take these data seriously within a latent variable framework. As in psychometrics, our hypothesis is that the attitude cannot be directly observed but must be inferred from the coherence of the answers given by individuals [59,60].
Therefore, note that this latent variable framework does not have the usual economic interpretation (individual utility). New multilevel factor models [17,61,62,63,64] are appropriate to deal correctly with the multidimensionality of the relationships between Europeans and biotechnologies without an excessive addition of parameters. They notably take into account heterogeneity both at the individual level and at the group level. A model with two individual and two national factors is given below [16]. For response r of individual i in country j, we have:

$$ g\big(P(y_{rij}=1)\big) = \sum_h \beta_{h,r}\, x_{h,ij} + \lambda^{(1)}_{1,r}\eta^{(1)}_{1,ij} + \lambda^{(1)}_{2,r}\eta^{(1)}_{2,ij} + \lambda^{(2)}_{1,r}\eta^{(2)}_{1,j} + \lambda^{(2)}_{2,r}\eta^{(2)}_{2,j} + u^{(2)}_{rj}, $$

with g(.) the probit link function and $u^{(2)}_{rj}$ a response-specific country-level residual with variance $\sigma^{(2)}_{ur}$. As Grilli & Rampichini point out [17], the choice of link function for a binary response often has little influence on the results; we choose the probit link for convenience, as the underlying latent variable can then be considered Gaussian (hence the link to the traditional factor model). Equivalently, the latent response $y^*_{rij}$ equals the linear predictor above plus a residual $e_{rij} \sim N(0,1)$, with $y_{rij} = 1$ if $y^*_{rij} > 0$. Here $y_{rij}$ is the response, with $r = 1,\dots,14$ indexing the 14 different responses, and $x_{h,ij}$ are the independent variables. We use the usual socio-demographic variables (age, gender, occupation, location) and attitudinal variables (political scale; environmental, science, and biotechnology attitudes). $\lambda^{(2)}_{1,r}$ and $\lambda^{(2)}_{2,r}$ are the loadings of response r on the two factors at the national level; $\lambda^{(1)}_{1,r}$ and $\lambda^{(1)}_{2,r}$ are the loadings of response r on the two factors at the individual level. For model identifiability, reasonable constraints must be imposed: the variance of each factor is set to a given value (normally one), and the loading of one of the responses on one of the factors is set to a given value (normally zero). In our example, the nullity of some loadings arises naturally. We can group the different dependent variables into two factors: an attitude factor for cisgenic apples and an attitude factor for transgenic apples; these two factors can be correlated. As the factors have the same scale (unit variance), the loadings for the same response r can be compared between different factors at the same level or at different levels. On the other hand, as the latent variables $y^*_{rij}$ have different scales, the loadings cannot be compared between responses; it is therefore necessary to standardize them. We can establish an ICC (residual, or conditional, intraclass correlation coefficient) for response r, corresponding to the share of variance explained by the country level:

$$ \mathrm{ICC}_r = \frac{\mathrm{Var}_2(y^*_{rij})}{\mathrm{Var}_1(y^*_{rij}) + \mathrm{Var}_2(y^*_{rij})}, $$

where $\mathrm{Var}_1(y^*_{rij})$ is the variance at level 1 (that of the individual) and $\mathrm{Var}_2(y^*_{rij})$ is the variance at level 2 (the country level). This coefficient gives the percentage of the variance in acceptance accounted for by the inclusion of the country level. Similarly, as for every factor model, we can calculate the communalities, that is to say the amount of variance of response r explained by the factors. The communality is also known as the proportion of variance that response r has in common with the other responses. The total communality is the sum of the communality at the country level ($\mathrm{Com}^{(2)}_r$) and at the individual level ($\mathrm{Com}^{(1)}_r$). This model is estimated within the Bayesian framework using MCMC (Markov chain Monte Carlo) [65]. We use the Realcom software developed by the Centre for Multilevel Modelling [63].
This type of modeling has been shown to be unbiased for models with dichotomous or categorical response variables [66], for cross-classified models [67], as well as for cases where the number of units at the upper level is low [68,69,70]. The Bayesian estimator does not generally allow analytical solutions; recourse to draws from the posterior parameter distribution is required. Several estimation methods are possible, the most popular being the Metropolis-Hastings method and Gibbs sampling. The latter is implemented in Realcom. With the diffuse (or "flat") priors proposed by Browne [65], we used 100,000 iterations after an initial burn-in of 50,000 iterations. Bayesian models do not give a single point estimate but rather an estimate of the parameter distribution. We follow Koop [71] and report the posterior mean and credible interval of each parameter. A parameter can be considered significantly different from zero if the credible interval (at 90%, 95%, or 99%) does not include zero [72]. Lastly, the question of the choice amongst the alternative specifications arises. We follow Bayesian model selection [73] using the DIC (Deviance Information Criterion) proposed by Spiegelhalter et al. [74]. A generalization of information criteria to the framework of multilevel models, the DIC is asymptotically equivalent to the AIC (Akaike information criterion) in the presence of non-informative priors [74]. The lower the DIC, the "better" the model. Therefore, Jeffreys' rule of thumb can be used [73,74,75]: a difference of 10 between two DICs essentially rules out the model with the higher DIC, as it implies that the model with the lower DIC has posterior odds of approximately 150:1 of being the true model [73]. In Table 3, we show how our hypotheses can be tested using the parameters of our multilevel model. One of the main advantages of our empirical strategy is that we can test the four hypotheses fully simultaneously. Sequential testing is based on a strong assumption, namely that the hypotheses are independent of each other; this assumption is relaxed here [16]. Results We report in Table 4 the comparison of DICs for the various models. This comparison (a huge difference of 553 between the DICs of models M6 and M5) leads to the selection of model M6 (with two correlated individual factors and two correlated national factors), notably due to a considerable reduction in the number of parameters relative to M5, even if $\bar{D}$ is slightly higher. Therefore, we report and comment on only model M6 in the following tables. Table 5 shows what these different factors are made of. On the first national factor (loadings $\lambda^{(2)}_{1,r}$), the responses "unnatural", "harms the environment", and "makes me feel uneasy" are the best represented on the positive side. The responses "useful" and "to be encouraged" go the other way. This factor can be interpreted as a general attitude of opposition to cisgenesis at the national level. The corresponding factor at the individual level (loadings $\lambda^{(1)}_{1,r}$) can be interpreted in a similar manner, although the standardized loadings are considerably higher here. In other words, the individual determinants have considerably more influence than the national determinants on the attitude towards cisgenesis. For the factors with loadings $\lambda^{(2)}_{2,r}$ and $\lambda^{(1)}_{2,r}$, concerning transgenesis, the interpretation is similar except on one point: at both the individual and national levels, the responses to the questions "promising idea", "safe", and "to be encouraged" go against the other responses.
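As a quick check on the model-selection rule of thumb quoted above, the 150:1 figure can be recovered from the usual approximation that a DIC difference of Δ corresponds to odds of roughly exp(Δ/2) in favor of the lower-DIC model, assuming equal prior model probabilities. This back-of-the-envelope sketch is not part of the original analysis.

```python
# Back-of-the-envelope sketch (assumption: equal prior model probabilities and
# the usual exp(delta_DIC / 2) approximation): a difference of 10 gives odds of
# about e^5 ≈ 148, i.e. roughly the 150:1 quoted in the text, and the difference
# of 553 between models M5 and M6 is therefore overwhelming.
import math

def approx_posterior_odds(delta_dic: float) -> float:
    """Approximate odds in favor of the lower-DIC model."""
    return math.exp(delta_dic / 2.0)

print(approx_posterior_odds(10))   # ~148.4
print(approx_posterior_odds(553))  # astronomically large
```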
We can also compare the loadings $\lambda^{(2)}_{1,r}$ and $\lambda^{(2)}_{2,r}$ for the questions asked in an identical manner for both types of gene transfer (namely: harms the environment, unnatural, makes me feel uneasy, to be encouraged, and GM label). With the exception of the questions concerning whether the gene transfers should be encouraged, we can see that the standardized loadings are greater for $\lambda^{(2)}_{1,r}$ than for $\lambda^{(2)}_{2,r}$. (Table 3 maps each hypothesis to the parameters used to test it; for H1, the relevant quantity is the magnitude and significance of the correlation between the two factors.) In short, the national context has more influence on the attitude towards cisgenesis than towards transgenesis. We also have $\sigma^{(2)}_{\eta_{12}} = 0.43$ (standard error = 0.19) and $\sigma^{(1)}_{\eta_{12}} = 0.54$ (standard error = 0.01): both factors are strongly correlated at the national and individual levels (and slightly more so at the individual level). These empirical findings are clearly in line with our first hypothesis (H1). Table 6 summarizes the importance of the inclusion of a "country" level. The inclusion of this level accounts for 3 to 8% of the total variance, depending on the response. Interpretation of the ICC value differs among researchers, with some arguing that a value of less than 5% indicates that multilevel modeling is not needed, whereas others advocate that, in the presence of categorical variables, even small amounts of variance can result in significant differences in model fit [17,76,77]. Such values, although moderate in terms of the latent responses, imply variations in the probabilities of the responses observed for each country. This last point is confirmed by the loadings in the previous table. The responses to the questions of whether transgenesis is "unnatural", "makes people feel uneasy", and should "be given a label" are more readily explained by the unobserved variables at the country level ($\sigma^{(2)}_{ur}$ is relatively high). They also have a total communality ($\mathrm{Com}_r$) that is relatively low (very low for the request for a label); they tend to vary independently of the other responses. Here we can see a greater influence of the way in which the public debate is structured. The responses "a promising idea" and "to be encouraged" are less explained by the unobservable national variables (as $\sigma^{(2)}_{ur}$ is very low), whilst considering transgenesis "unnatural" or saying it leads to "feeling uneasy" depends considerably on these unobservable variables. Concerning cisgenesis, the same interpretation holds for the responses "risky" and "unnatural", which are highly dependent on unobservable national variables, with a relatively high ICC. Feeling uneasy with this technique is, on the other hand, highly dependent on observable individual variables ($\mathrm{Com}^{(1)}_r$ high and ICC low). In general, with higher $\mathrm{Com}_r$, the attitude regarding cisgenesis appears to be more homogeneous than that regarding transgenesis. To summarize, the multilevel factor analysis gives us mixed evidence about the importance of national influence (H4). Lastly, the final table (Table 7) provides an understanding of the influence of the different explanatory variables on the responses of Europeans to the different questions on transgenic and cisgenic apples. As these are probit responses, the estimation of marginal effects is relatively straightforward.
These marginal effects correspond to discrete changes for dichotomous independent variables [78]. We thus highlight strong effects of age on attitudes to transgenic apples, whereas age seems to have less influence on attitudes regarding cisgenic apples. Practicing a religion increases the probability of replying that transgenic apples harm the environment (+3%), are "unnatural" (+4%), and make people feel uneasy (+13%). It also increases the probability of replying that cisgenic apples are risky (+8%), harm the environment (+5%), and make people feel uneasy (+12%). Among other things, practicing a religion also reduces the probability of considering cisgenic apples as useful (-3%) and as something to be encouraged (-3%). Lastly, in relation to our research hypotheses H2 and H3, we underline the fact that expressing an interest in the environment has contrasting effects on the respective acceptability of cisgenic apples (+9% as useful, +7% as unnatural, +4% to be encouraged, +4% to be given a label) and transgenic apples (+7% harms the environment, +12% unnatural, +17% makes one feel uneasy, +12% to be given a label). Conversely, for people with an interest in science or biotechnology, the effects are more similar between the two technologies, leading to a greater acceptability; in each case, however, there is a positive effect on the demand for a label specific to these apples. Discussion and conclusion We have highlighted a general attitude toward genetically modified apples. The two factors expressing opposition to transgenic and cisgenic apples are highly correlated at the individual and national levels. In general, with a higher $\mathrm{Com}_r$, the attitude toward cisgenesis appears to be more homogeneous than that toward transgenesis. For the latter, we find the same type of heterogeneity (plurality of attitudes, means of justification, types of opposition) as defined in previous publications [4,34,39,79]. Consistent with other studies on European consumers [18,38], we underline opposition to all genetic engineering techniques, even if our study reveals mixed responses, with contrasting impacts of an interest in the environment. It is as if cisgenic apples have become part of a new "utility"/"risk" dilemma, as highlighted previously by Gaskell et al. [34]. In effect, we underline a more important age effect for cisgenesis than for transgenesis, with increasingly weaker support as age increases (as in Rousselière & Rousselière [47]). Consumers can balance risk against the expected benefits of a technology, but this trade-off takes plural forms [34], depending on the social and cognitive resources available, which may influence their perception of biotechnologies. Note, however, that the effect of population ageing may be complex and can have structural effects on European societies. The development of functional foods or organic foods, even with new biotechnologies, may for example lead to a greater acceptance by middle-aged and elderly consumers [13,80]. Contrary to previous research, our empirical strategy allows us to test the four hypotheses simultaneously (see Table 8). Our first hypothesis H1 therefore seems validated, as a high correlation between the social acceptability of cisgenesis and transgenesis is highlighted, together with a higher acceptance of cisgenesis.
In relation to our research hypotheses H2 and H3, we underline the fact that expressing an interest in the environment has contrasting effects on the acceptability of cisgenic and transgenic apples. Conversely, expressing an interest in science or biotechnology leads to greater acceptability. Finally, our multilevel modeling provides mixed evidence about H4. Although the factor loadings are significant at the national level, the estimated values of the various ICCs seem relatively low, or at least moderate according to various rules of thumb. Therefore, it is as if there were a convergence between European countries. Unfortunately, our model is not flexible enough to include random effects, as in Rousselière & Rousselière [47], where the divergence between European countries is largely explained by the strategies of national political parties. However, if we compare transgenesis and cisgenesis, there is still a large difference between countries in social acceptability. Although our work is an extension of previous research, one way to address these issues is to extend it further to finite mixture modeling. Multilevel latent class analysis would allow us to construct a typology of individuals, useful for understanding simultaneously the various profiles of opponents to biotechnologies and the typology of countries [81,82]. New developments (mixture structural equation models) proposed by Lee & Song [83], which allow parameters to vary across clusters, may be a fruitful modeling approach for future studies. Table 8 (extract). H3: validated in part; the marginal effects of interest in science and interest in biotechnology are significant for 9 and 10 responses, respectively. H4 (a considerable portion of the heterogeneity of individual preferences towards cisgenesis and transgenesis is explained by taking the national aspects into consideration): mixed evidence, with low ICCs (between 3 and 8%) but significant loadings on the national factors (https://doi.org/10.1371/journal.pone.0183213.t008). Several issues can be emphasized in closing. The first concerns public policy toward biotechnologies. The study confirms the presence of clear differences across the fields of application for biotechnologies. The different studies carried out on the social acceptability of biotechnologies highlight that the absence of perceived utility is a key point [1,32,33,84,85]. Medical treatments developed from biotechnologies are considered less risky than the development of an illness. Conversely, the development of biotechnologies in ornamental horticulture, in other words the use of biotechnologies in an explicitly leisure context, is strongly rejected [84,86,87]. The second issue concerns the differences observed between European countries. Our article highlights a greater variability in attitudes toward cisgenesis between European countries in comparison with transgenesis. Significantly, this is also the result found by Lusk & Rozan [51] when they compared the United States and France. Thus, according to these authors, intraspecies or intragenesis transfers are mostly accepted by American consumers (from 52.7% for the transfer of numerous genes of different plants to 77.3% for the transfer of a gene coming from the same plant), while they are mostly refused by French consumers (17.5% to 37.5% support, respectively). Consumers in both countries reject other types of gene transfers overall.
This study could be extended to understand the origins of this difference of opinion between countries. According to different studies, the acceptability difference first stems from a "trust gap" between countries, highlighted by Priest et al. [88]. Controlling for the level of knowledge, trust in scientists [89,90], public authorities [91,92], or manufacturers [5] has a positive impact on the acceptability of genetically modified foods, whereas distrust in public authorities leads to a greater acceptance of alternative foods (organic or local) [93]. On the other hand, trust in environmental associations [94,95] reduces this acceptability. The "trust gap" explains the difference in the acceptability of GMOs in Europe and the United States by the fact that Europeans have greater trust in consumer and environmental protection associations, whereas people in the United States have greater trust in the "biotechnology system". This study confirms that it is necessary to distinguish between an increase in the flexibility of regulations regarding organisms arising from cisgenesis (relative to regulations for organisms arising from transgenesis) and the absence of product labeling policies for these organisms. Advocates of cisgenesis recognize this distinction [8,96]. While cisgenesis is likely to encounter greater acceptability among European consumers, there is still considerable opposition to contend with (beyond the question of breaking the "barrier between species" or the environmental argument). By contrast, we know that consumer tolerance of apple scab is possible with a label indicating organic agriculture and/or more environmentally friendly practices [97,98]. We also find elements in support of the position of the European Commission, which classifies this type of technique as particularly close to more traditional transgenic techniques [30]. A label for this type of product would be demanded if such products were to be developed and authorized for sale. Nonetheless, as for all species subject to pollen dispersion, the question of the coexistence of different techniques remains [99,100].
7,807.8
2017-09-06T00:00:00.000
[ "Economics", "Environmental Science" ]
Calcium targets for production of the medical Sc radioisotopes in reactions with p, d or α projectiles Scandium radioisotopes for medical applications can be produced in reactions of calcium with proton, deuteron, or alpha projectiles. Enriched isotopic calcium material is commercially available mainly as calcium carbonate, which can be used directly for the production of Sc radioisotopes or can be converted into other calcium compounds or into the metallic form. The superiority of calcium oxide is shown through an analysis of the use of each target chemical form. Introduction The majority of radioisotopes of medical interest are produced in reactions induced by neutrons, i.e., in reactors. Nevertheless, alternative methods for their production are being extensively developed. The advantages and drawbacks of each production route are well presented by M. A. Synowiecki et al. in [1]. The studies on alternative methods were triggered by unplanned shut-downs of reactors several years ago, which caused a shortage of isotopes (e.g., 99Tc) widely used in medical applications. These studies are also stimulated by the development of diagnostic techniques and by the search for replacements with longer half-lives than the isotopes currently used in PET scanning. One alternative method of producing these radioisotopes is the use of reactions induced by accelerated projectiles such as protons, deuterons, or alpha particles. The majority of studies on scandium isotope production via reactions induced by p, d, or α projectiles are performed using calcium as the target nucleus. Ti is an alternative target nucleus for producing Sc isotopes, but a comparison of the cross sections for reactions on both nuclei (Table 1) shows that reactions on the Ca nucleus promise much higher production efficiency. As can be seen from these data, production of 43Sc with very high efficiency can be achieved even with a target composed of natural calcium, employing the reaction of 40Ca (96.94% natural abundance) with α particles [3]. Table 2 (fragments): (−) unstable in air; quick manipulation is required, or the material is best handled in an inert atmosphere. Ca: (−) has to be prepared by CaCO3 conversion, a two-step procedure; (−) unstable in air, requires handling in an inert atmosphere. Under the beam: (−) thermal insulator; decomposes producing CaO + CO2, and the target cracks when exposed to intense beams due to this process (see Fig. 1); (−) production of a large amount of 13N (decaying in ~10 min via β+ to 13C); (−) can melt in the beam if cooling is insufficient. Treatment after irradiation: (+) dissolves very easily in weak acids; (+) dissolves easily in weak acids, only slightly more difficult than the carbonate. Efficiency: see the next table. Chemical form of the target Production of the medical Sc radioisotopes in reactions on the Ca nucleus can be done working with the unprocessed enriched material, i.e., with calcium carbonate (CaCO3, the chemical form in which enriched Ca isotopes are mostly available commercially), or with material converted into either calcium oxide (CaO) or metal (Ca). Work with each target form has advantages and drawbacks (Table 2). The thermal damage mentioned in Table 2 for calcium carbonate can be eliminated by mixing the target material with a good heat conductor, e.g., graphite or aluminium, as discussed in [5]. The activity of the Sc radioisotopes produced during the same irradiation time with a metallic Ca target would be nearly tripled compared with the activity produced using CaCO3 (or doubled when working with CaO).
This is due to the number of Ca nuclei per cm² (N) in targets whose thicknesses cover the projectile range in CaCO3, CaO, and Ca. The ratio of the nuclei numbers in these targets is ~1:2:3, respectively, and so is the produced activity. (Av: Avogadro constant = 6.022140857×10²³ mol⁻¹; * calculated using the SRIM 2013 code; ** example energy for 43Sc isotope production in the reaction of 40Ca with α.) However, conversion of CaCO3 into metallic Ca is a time-consuming, two-step reduction process [6,7]. The process can be carried out under vacuum by decomposing the carbonate to the oxide, followed by reduction of the oxide to Ca using metallic (Me) reductants such as Zr or Ti: (1) CaCO3 → CaO + CO2 (heating at temperatures > 700 °C); (2) 2CaO + Me → 2Ca + MeO2 (reduction). In addition, the process may introduce additional contaminants into the target beyond those present in the available starting material. The process efficiency (lower than 80%) also has to be kept in mind when considering a metallic target. Therefore, it is better to avoid this conversion if it is not essential. Working with metallic Ca would also require special vacuum containers and/or the construction of a vacuum transfer line to the cyclotron to prevent contact of the Ca with air. Taking these difficulties into account, it is much better to work with calcium oxide as the target. Although the activity produced is only doubled compared with a CaCO3 target of adequate thickness (see Table 3), the conversion to CaO is much easier than the conversion to Ca. It can be done either by heating the carbonate in a flow of inert gas [8] or in vacuum using resistive heating. The advantages of the second method are the instant/online control of the decomposition process by monitoring the vacuum and the possibility of cooling down the produced CaO in an air-free atmosphere. Carrying out the conversion in a special vessel/crucible with a perforated cover (Fig. 2) and venting the vacuum apparatus with inert gas after completion of the procedure allow the produced CaO to be transferred to a glove box for the manipulations needed to produce the final target (e.g., pressing the pellet, encapsulating it in a container) without special precautions. In addition, the decrease of the oxygen content in the target results in a significant decrease of the side radioactivity in the irradiation area related to the production of 13N in the 16O(p,x)13N or 16O(d,x)13N reactions. The oxide targets, prepared as inserts in a graphite bed as described in [5], survived a 45-min irradiation with a 15 µA proton beam very well; there were no signs of thermal damage to the target. These irradiation conditions are sufficient to produce ~8 GBq of 44gSc when irradiating CaO converted from 44CaCO3 enriched up to 99.2%. Taking into account the activity losses during the isotope separation and labelling process, such an amount of 44Sc should be enough for diagnosing ~75 patients (an estimate based on clinical studies of 44Sc [9], in which 50.5 MBq of 44Sc-PSMA-617 was applied for a single diagnosis). Conclusions Production of research quantities of Sc radioisotopes can easily be performed using targets made directly from calcium carbonate. However, for clinical applications, when higher activities are required, it is more favourable to work with targets made from calcium oxide. Conversion of the carbonate into the oxide is a one-step process with practically no losses of the often expensive, enriched calcium material.
As has been shown, using calcium oxide instead of the carbonate gives nearly double the activity within the same irradiation time, and much less undesirable radioactivity (originating from the decay of 13N formed in the side reaction) is produced in the irradiation area.
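To make the ~1:2:3 scaling and the patient estimate above concrete, the sketch below evaluates N = ρ · R · n_Ca · N_A / M for each chemical form. The densities and molar masses are standard values, but the projectile ranges R are placeholder numbers standing in for SRIM-calculated values (the paper computed them with SRIM 2013), and the assumed recovery fraction is an illustration only, so the printed numbers are indicative rather than the paper's results.

```python
# Illustrative calculation, not the paper's actual numbers: the ranges below are
# placeholders for SRIM-calculated projectile ranges, so only the structure of
# the estimate (N = rho * R * n_Ca * N_Avogadro / M) should be taken from this sketch.
N_AV = 6.022140857e23          # Avogadro constant, mol^-1

targets = {
    # form: (density g/cm^3, molar mass g/mol, Ca atoms per formula unit, assumed range cm)
    "CaCO3": (2.71, 100.09, 1, 0.050),   # range values are hypothetical
    "CaO":   (3.34,  56.08, 1, 0.045),
    "Ca":    (1.55,  40.08, 1, 0.110),
}

for form, (rho, molar_mass, n_ca, rng) in targets.items():
    n_per_cm2 = rho * rng * n_ca * N_AV / molar_mass   # Ca nuclei per cm^2
    print(f"{form:6s}: N ≈ {n_per_cm2:.2e} Ca nuclei/cm^2")

# Rough patient estimate quoted in the text: ~8 GBq of 44gSc produced,
# 50.5 MBq of 44Sc-PSMA-617 per diagnosis, and (assumption) roughly half the
# activity lost during separation and labelling.
produced_mbq = 8000.0
per_patient_mbq = 50.5
assumed_recovery = 0.5
print(int(produced_mbq * assumed_recovery / per_patient_mbq), "patients (order of magnitude)")
```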
1,661.8
2020-01-01T00:00:00.000
[ "Medicine", "Physics", "Chemistry" ]
Quantifying the Speed of Chromatophore Activity at the Single-Organ Level in Response to a Visual Startle Stimulus in Living, Intact Squid The speed of adaptive body patterning in coleoid cephalopods is unmatched in the natural world. While the literature frequently reports their remarkable ability to change coloration significantly faster than other species, there is limited research on the temporal dynamics of rapid chromatophore coordination underlying body patterning in living, intact animals. In this exploratory pilot study, we aimed to measure chromatophore activity in response to a light flash stimulus in seven squid, Doryteuthis pealeii. We video-recorded the head/arms, mantle, and fin when squid were presented with a light flash startle stimulus. Individual chromatophores were detected and tracked over time using image analysis. We assessed baseline and response chromatophore surface area parameters before and after flash stimulation, respectively. Using change-point analysis, we identified 4,065 chromatophores from 185 trials with significant surface area changes elicited by the flash stimulus. We defined the temporal dynamics of chromatophore activity to flash stimulation as the latency, duration, and magnitude of surface area changes (expansion or retraction) following the flash presentation. Post stimulation, the response’s mean latency was at 50 ms (± 16.67 ms), for expansion and retraction, across all body regions. The response duration ranged from 217 ms (fin, retraction) to 384 ms (heads/arms, expansion). While chromatophore expansions had a mean surface area increase of 155.06%, the retractions only caused a mean reduction of 40.46%. Collectively, the methods and results described contribute to our understanding of how cephalopods can employ thousands of chromatophore organs in milliseconds to achieve rapid, dynamic body patterning. INTRODUCTION Unlike the slower chromatophore control of flatfish (2-8 s; Ramachandran et al., 1996), coleoid cephalopods can change body patterns in milliseconds. For decades, scientists in the field of cephalopod vision have focused on the goal of creating a complete characterization of the sophisticated coleoid body patterning abilities. As a result, existing reports are sufficient to describe and explain several known body patterns in cephalopods for camouflage and communication (Hanlon and Messenger, 1988;Hanlon, 2007;Langridge et al., 2007;Zylinski et al., 2009;How et al., 2017). Nevertheless, a theoretical framework on cephalopod body patterning, which does not include the dimension of time, will be inherently inadequate in modeling, holistically, the range of dynamic, rapid transformations observed in animals living in the wild. One approach toward studying this topic is by stimulating the visual system of a living, intact animal, using a light flash to elicit muscular activation of chromatophores, and quantifying the response dynamics by tracking surface area changes in time. Experiments conducted in the Gilly laboratory revealed how light flashes elicit startle jet-escape responses in squid, Doryteuthis opalescens (Berry, 1911). The brief, intense light stimulus activates the central nervous system (CNS) at the magnocellular and palliovisceral lobes, which relay information to the stellate ganglia to modulate forceful muscle contractions of the mantle expelling water through the funnel in the process (Otis and Gilly, 1990;Gilly et al., 1991;Gilly and Lucero, 1992;Neumeister et al., 2000;Preuss and Gilly, 2000). 
Within the stellar nerve, a group of non-giant motor axons innervates chromatophore muscles (Ferguson et al., 1988). In one of these studies (Neumeister et al., 2000), which investigated the effects of temperature on escape responses in restrained squid, the flash stimulus produced transient chromatophore expansions. Responding to the light flash startle stimulus, animals exhibited a robust jet-escape startle response with transient chromatophore expansions. However, when light intensity was decreased by "positioning the flash unit further from the squid" (Neumeister et al., 2000, p. 551), the animal showed chromatophore expansions as sub-jet-threshold startle responses (in the absence of jetting). Squid are a useful species for studying chromatophores because they have fewer and larger chromatophore organs (density: 8 mm −2 , maximum diameter: 120-1,520 µm; Hanlon, 1982) compared to octopus (density: 230 mm −2 ; maximum diameter: 300 µm; Packard and Sanders, 1971) and Sepia (density: 35-50 mm −2 ; maximum diameter: 300 µm; Hanlon and Messenger, 1988), offering a simpler model to study chromatophore control. The Neumeister et al. (2000) study validates a reliable method of using flash stimulation and video-recording the skin, from a close-up perspective, to investigate the synchronicity of chromatophore activity at the single-organ level in squid. Since studying chromatophore response dynamics across all body regions was not the study's primary focus, chromatophore expansions only on the mantle were reported. For this exploratory pilot study, we aim at replicating the sub-jetthreshold behavioral responses to flash stimulation with a different species, Doryteuthis pealeii (Lesueur, 1821), to examine the mechanisms and temporal dynamics of the sensorimotor system underlying chromatophore control in intact animals (Hadjisolomou, 2017). Due to ethical and logistical issues involved with long-distance transportation of D. opalescens for experimentation, D. pealeii was chosen as this species is available to be studied in Woods Hole, Massachusetts. Further, in addition to the mantle, we expanded observations to include chromatophore activity from the understudied regions of the arms, head, and dorsal fin (Figure 1). Young (1976) reported on the CNS control of chromatophores in D. pealeii, elaborating that separate chromatophore lobes in the brain control different body regions. Specifically, the posterior chromatophore lobes (PCL) mainly control chromatophores on the mantle and fin regions, while chromatophores on the arms and head are primarily controlled by the anterior chromatophore lobes (ACL) and pedal lobes (PL). Axons from the PCL connect without a synapse to chromatophore organs through the pallial nerve. Electrode stimulation of PCL neurons in Lolliguncula brevis (Blainville, 1823) causes chromatophore expansion on the mantle and fin (Dubas et al., 1986), but it did not result in retraction of any expanded chromatophores. Both species are part of the same family, Loliginidae (Lesueur, 1821), and have anatomical similarities (Díaz-Santana-Iturrios et al., 2019), thus allowing for approximations between them. We chose these body regions to observe any discrepancies in timed responses due to circuitry differences. By video-recording all body regions in intact, living squid, we quantified the temporal dynamics from light flash stimulation to expansions and retractions at the single-organ level across thousands of chromatophores. Similar to the Reiter et al. 
(2018) study, which used unrestrained European cuttlefish (Linnaeus, 1758), we measured chromatophore activity from unrestrained squid. FIGURE 1 | D. pealeii (mantle length approximately 14 cm) expressing disruptive body patterning, with some chromatophores expanded (dark bands) while others are retracted. Numbers indicate the different body regions measured in the study: 1 = head/arms, 2 = mantle, and 3 = fin. The procedures and methodologies described below enable non-invasive data collection of chromatophore activity from living animals to study behavioral responses in intact organisms. Animals Adult D. pealeii were collected from coastal waters near Woods Hole, Massachusetts, US, in 2014. From large population holding tanks, eight healthy animals (mantle length: 12-15 cm; unknown sex and age) without any visible physical injuries were selected for inclusion. We transported individual squid and housed them together in a 2 m × 1.5 m × 1 m rectangular, light-brown opaque, fiberglass housing chamber connected to an open, temperature-controlled (17-19 °C) seawater system. Gravel and sand on the bottom of the housing tank provided a natural substrate for the animals to settle. Animals were fed twice a day on an ad libitum diet of live Fundulus fish and crabs. Experimental Design Here, n refers to the number of different body regions examined (head/arms, mantle, and fin). Each body region, therefore, was considered to be an experimental unit. The study was a within-subjects design consisting of one group of three experimental units, and there were eight animals. One animal was excluded from data analysis due to a lack of significant chromatophore responses (see "Results" section). Experimental Set-Up To collect measurements, we constructed a rectangular rig covered with a layer of black cloth and an additional layer of an opaque, black tarp to prevent light from entering. Experimental Tank and Acclimation The rectangular experimental tank, measuring 53 cm × 43 cm × 18 cm, consisted of white, opaque plastic walls containing 10 L of seawater (Figure 2). For each trial, one squid was placed within the experimental tank inside the rig. To establish habituation to the experimental apparatus, each squid was placed in the experimental tank for 10 min and then returned to the group home tank, 24 h before experimental trials began. We created a white "V-shaped" partition configuration to enable the squid to settle naturally at the bottom of the tank, thus preventing chromatophore displacement outside of the camera frame. We placed an overhead light source at a 45° angle to illuminate the animal for video recording. The ambient light and visual environment determined the chromatophores' state (expanded or retracted) before light flash stimulation. The animals adopted a lighter skin tone to camouflage in the white, uniformly lit tank during trials. Thus, consistent with this lighter skin tone, most chromatophores were retracted before flash stimulation. FIGURE 2 | Diagram of the experimental tank set-up, measuring 53 cm × 43 cm × 18 cm (situated inside the rig; external rig structure and black tarp and opaque covers not shown). The flash unit (1) providing the visual startle stimulus was fixed on the rig at a right angle and 50 cm above the animal (4). The camera (2) and light source (3) were at a 45° angle above the animal. The white "V-shaped" partition configuration (5) enabled squid to settle naturally at the bottom of the white, rectangular tank (6).
Startle Stimulus and Sub-Jet-Threshold Startle Response Animals were presented with light flashes to elicit the startle reflex response. To deliver the startle stimulus in a top-down direction, a Canon SpeedLite 580EX-RT flash unit was fixed on the rig at a right angle and 50 cm above the animal. Similar to the Neumeister et al. (2000) study, we found that D. pealeii exhibit jet-escape startle responses and transient chromatophore expansions to intense light flashes. For this study, the duration of each light flash stimulus was ∼100 µs, with an illuminance of 12,500 lx, providing an even exposure of the stimulus on the animal from this distance. The entire animal was illuminated, but we video-recorded only one specific body region per trial for analysis. The stimulus was sufficient to produce chromatophore muscle contraction but well below the jet-escape sensory threshold, minimizing jetting. Thus, this study's behavioral responses consisted of chromatophore expansions and retractions to light flashes in the absence of jetting. Experimental Trial Procedure Once in the experimental rig, animals were allowed to acclimate and settle on the bottom of the tank, as evidenced by the animal remaining motionless for at least 5 min. Once an animal habituated, it received a sequence of approximately 90 flashes. For this study, we used a 10-s interstimulus interval (ISI), which does not cause attenuation due to learning, fatigue, or a combination of both (Otis and Gilly, 1990). With an ISI of 10 s, the total sequence lasted 15 min per body region, and each region was tested during a different session. The duration and ISI were tested in preliminary trials and found to be appropriate for the purposes of this study. The rationale was to reduce the number of testing sessions to one per body region, since 15 min were sufficient. Each flash stimulation was considered an individual trial. The purpose was to elicit the sub-jet-threshold startle response. Each animal received 90 trials for each of the three body regions, for a total of 270 trials per animal and thus 2,160 trials in total across the eight animals. One body region per animal was tested at a time (we counterbalanced the order of the body regions tested per animal). For details on video-recording, scoring, image analysis, and statistical analysis, see Supplementary Material. Chromatophore Surface Area Changes Following Light Flash Presentation Out of the 2,160 total trials, 230 were suitable for analysis by Change Point Analysis (CPA) (Taylor, 2000). Based on CPA, 185 were identified as having significant chromatophore surface area changes. A total of 4,065 individual chromatophores responded to the startle stimulus with either transient expansion or retraction of the pigment. These chromatophores were further analyzed to characterize response activity pre- and post-stimulation. The remaining 45 trials showed no significant responses by CPA and were excluded, including all Squid #8 trials and all expansion trials in Squid #3. Additionally, the numbers of trials with chromatophore responses were not equivalent across squid (Squid #2, for example, did not show any retraction responses in any trials). Furthermore, not all squid had all body regions significantly responding to the flash stimulus, and in other cases, there were trials with both significant expansion and retraction instances on the same body region. Thus, there is an unequal distribution of chromatophore numbers and body regions represented in the data (see Supplementary Figures 2-5).
The discrepancies in this dataset reflect observed behavioral differences between animals; a few animals would swim back and forth often enough to invalidate significant parts of the footage. Additionally, trials were excluded in the process of image analysis if the software was unable to detect chromatophores (Hadjisolomou and El-Haddad, 2017). In such cases, image noise due to fluctuations of color and luminance created artifacts that interfered with chromatophore detection and tracking. However, each significant expansion or retraction followed the same pattern regardless of which body region or squid showed the response. Within the 166 trials with significant expansions, 4,000 chromatophores (98% of the total 4,065) showed significant expansion. On the head/arms, there were 1,598 chromatophores; from the mantle, there were 1,743; and on the fin, there were 659. Within the 19 trials with significant retractions, we tracked and measured 65 chromatophores showing significant retraction. On the head/arms, there were 39 chromatophores; from the mantle, 21; and on the fin, there were five. Temporal Dynamics We calculated descriptive statistics on the temporal dynamics of chromatophore surface area changes following the startle stimulus (see Table 1 and Figure 3). Response time (tR) is the time to reach or pass the 5% value of the maximal response; delay time (tD) is the time to reach or pass the 50% value of the response; rise time (tRt) is the time required to reach the 100% value of the response; response duration (rD) is the time between the 5% values of response before and after the peak. We estimated each value with a margin of error of ±16.67 ms, determined by the inter-frame interval when recording at 60 frames per second. Magnitude of Response We calculated the magnitude of chromatophore expansion or retraction activity by comparing peak response values with pre-stimulation surface area values. Expansion On average, the relative chromatophore surface area increased by 155.06% across all body regions (4,000 chromatophores). DISCUSSION In this exploratory pilot study, we systematically elicited behavioral responses using a light flash stimulus in intact, living squid and analyzed the temporal dynamics and magnitude of thousands of chromatophore surface area changes at the single-organ level. Here, we report a replication of the following Neumeister et al. (2000) findings using a different squid species: D. pealeii with uniform light skin patterns before stimulation responded to flashes with jetting and chromatophore expansion, and lower flash intensities triggered transient darkening in the absence of jetting. Our results demonstrate that it is feasible to use intact, living animals to measure, non-invasively, the temporal dynamics of chromatophore control during body patterning. We also report the following novel observations: this is the first record of chromatophore activation to light flash stimulation on regions other than the dorsal mantle; our videos show chromatophore activation on the head, arms, and fin, in addition to the mantle. Also, for the first time, we show chromatophore retraction to light flash stimulation: chromatophores that were expanded before stimulation (such as dark bands on the mantle or expanded chromatophores on the head) responded with a transient retraction. Further, we observed synchronous chromatophore expansion and retraction on different parts of the mantle in the same trial (for example, chromatophores on dark bands on the mantle retracted, while chromatophores on light skin expanded).
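To make the timing definitions above concrete, the following is a minimal sketch, not the authors' analysis pipeline, of how tR, tD, tRt and rD could be extracted from a per-frame surface-area trace of a single chromatophore recorded at 60 frames per second; the trace values, thresholds and function names are illustrative assumptions only.

# Minimal sketch (not the authors' analysis code): extracting the timing
# metrics defined above (tR, tD, tRt, rD) from a per-frame surface-area
# trace of one chromatophore sampled at 60 frames per second.

FPS = 60
FRAME_MS = 1000.0 / FPS  # ~16.67 ms, the margin of error of each estimate

def timing_metrics(areas, flash_frame):
    """areas: surface area per video frame; flash_frame: frame of the flash."""
    baseline = sum(areas[:flash_frame]) / max(flash_frame, 1)   # pre-flash area
    post = areas[flash_frame:]
    peak_idx = max(range(len(post)), key=lambda i: abs(post[i] - baseline))
    peak_change = post[peak_idx] - baseline

    def first_frame_reaching(fraction):
        target = abs(peak_change) * fraction
        for i, a in enumerate(post):
            if abs(a - baseline) >= target:
                return i
        return None

    t_r = first_frame_reaching(0.05)              # response time: 5% of max
    t_d = first_frame_reaching(0.50)              # delay time: 50% of max
    t_rt = peak_idx                               # rise time: 100% (peak)

    # response duration: time between the 5% crossings before and after the peak
    after_peak = next((i for i, a in enumerate(post[peak_idx:], start=peak_idx)
                       if abs(a - baseline) < abs(peak_change) * 0.05), len(post) - 1)
    r_d = None if t_r is None else after_peak - t_r

    to_ms = lambda frames: None if frames is None else frames * FRAME_MS
    return {"tR_ms": to_ms(t_r), "tD_ms": to_ms(t_d),
            "tRt_ms": to_ms(t_rt), "rD_ms": to_ms(r_d)}

# Example: a brief expansion that peaks roughly 100 ms after the flash.
trace = [100] * 5 + [100, 104, 130, 180, 250, 255, 240, 180, 130, 104, 100, 100]
print(timing_metrics(trace, flash_frame=5))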
The general temporal dynamic patterns emerging from these data are the following: the speed of expansion and retraction activation was the same across body regions. Differences in response durations were not dependent on the magnitude of response. Finally, the head/arms were faster in most measurements compared to other body regions. The short latencies reported here are suggestive of a reflexive component of the response. Chromatophore Expansion Packard and others have described how flashes of light can elicit responses in chromatophores from dissected octopus skin (Packard and Brancato, 1993; Ramirez and Oakley, 2015). More relevant to this study, Neumeister et al. (2000) reported how light flashes elicit sub-jet-threshold chromatophore expansion in the squid D. opalescens. In agreement with the Neumeister study findings, we demonstrate that the presentation of light flashes elicits chromatophore expansion in a different squid species, D. pealeii. Also, we validate a method to measure chromatophore activity from unrestrained squid. Chromatophore Retraction This is the first study to report chromatophore retraction in response to presentation of a light flash stimulus. Comparing the two different types of chromatophore activity, expansions and retractions, enables a more thorough characterization of the sensorimotor system, since the mechanisms underlying each type of action are not well understood. However, out of 4,065 chromatophores analyzed, only 65 showed retraction. As stated in the "Experimental Tank and Acclimation" section, only a small number of chromatophores were expanded in the original experimental set-up. Therefore, chromatophores responded in the only possible way given their original state: retracted chromatophores expanded and expanded chromatophores retracted. These findings demonstrate the method's validity in studying the retraction mechanism, an essential part of the chromatophore system in rapid body patterning. Response Time (tR) Our findings indicate similarities when comparing expansion with retraction and between the different body regions. The average response time to reach or pass the 5% value of the maximal response was 50 ms (±16.67 ms). This was identical across all body regions and between expansions and retractions. These results echo the timing of the startle response mentioned in previous studies (Neumeister et al., 2000; Mooney et al., 2016). Based on these findings, the speed of the onset of rapid body patterning in squid is characterized by a latency of 50 ms. Delay Time (tD) When measuring the average time to reach or pass the 50% value of the response, the head/arms region reaches this mark faster than the other two body regions in both expansions and retractions. We believe this difference can be explained by the fact that chromatophores on this body region are controlled by separate lobes (ACL and PL; Young, 1976), and thus the temporal discrepancies may be due to the circuitry. The differences between fin and mantle timings average out when we aggregate data for both expansion and retraction. Rise Time (tRt) The rise time to reach the 100% value of the response peak is the same within body regions in expansion and retraction, though there are differences between regions. Thus, each body region has specific temporal benchmarks of maximum response regardless of the chromatophore change type. The chromatophores on the head/arms are the fastest between body regions, followed by the fin in second place, and lastly, the mantle.
Considering the slight differences in the magnitude of response between the body regions, it is surprising that the chromatophores on the head/arms are about 33 ms faster than those on the mantle. The time difference cannot be explained by response magnitude, since these two body regions are almost identical in that dimension. Differences in circuitry (Young, 1976) may explain these temporal discrepancies between the head/arms (ACL and PL) and the mantle (PCL). Response Duration (rD) Most discrepancies were found in the response duration, the time between the 5% values of response before and after the peak, between and within body regions when comparing expansion and retraction. We calculated the duration by finding the time difference between the initial response and the return to the pre-flash state following the peak response. Across response types and body regions, chromatophore change duration is short, between 217 and 384 ms. Compared to color changes seen in other species (Ramachandran et al., 1996), the sub-second cephalopod chromatophore change is unparalleled. When it comes to expansion, the chromatophores on the head/arms are the fastest to complete the response and reach pre-flash surface area values at 300 ms, followed by the fin (+34 ms) and the mantle (+84 ms). A different pattern was observed with retraction responses: chromatophores on the fin had the shortest duration of response at 217 ms, followed by those on the mantle (+33 ms), and lastly by those on the head/arms (+50 ms). It is worth noting that the response duration was the only dimension in which retraction had a shorter overall interval than expansion. For example, the longest response duration during retraction (267 ms) was still shorter than the briefest response duration in expansion (300 ms). One explanation for this phenomenon is that chromatophore expansion and retraction may depend on separate mechanisms; during expansion, the surrounding radial muscles pull and expand the pigment (Bell et al., 2013). The retraction mechanism, however, is still not fully understood. Characterization of the Magnitude of Sub-Jet-Threshold Responses Results indicate differences in the scale of chromatophore surface area changes when a chromatophore expands or retracts. While the surface area increased by 155.06% on average during an expansion, retraction only caused a 40.46% decrease. As discussed in the "Response Duration" section, one reason for this may be the different mechanisms involved in expansion compared to retraction. Other discrepancies were found when analyzing chromatophores across the body regions. The mantle and head/arms showed the largest surface area expansions, with corresponding changes of 168% and 159%, respectively. The fin had a 116% increase on average. It is unclear why there is a difference of roughly 50 percentage points between the fin and the other regions. This may be due to differences in the type and distribution of chromatophores on the fin compared to the head/arms and mantle when it comes to body patterning. It is necessary to investigate further whether fin chromatophores expand less than those on the mantle and head/arms and, if so, why that would be the case.
LIMITATIONS AND FUTURE DIRECTIONS Unequal Distribution of Trials Between Body Regions and Animals The number of trials with significant chromatophore responses was not equal per body region within each animal nor between animals, and thus there was an unequal distribution of body regions and chromatophores represented in the dataset. This unequal distribution precludes running statistical analyses to determine significant differences in the temporal dynamics and magnitude of responses. Also, due to ethical considerations, we determined that using a larger number of animals was not warranted. For future studies, we advise scheduling shorter trials over several days so more data can be collected from fewer animals. Unequal Number of Significant Surface Area Changes Between Expansion and Retraction Out of the 4,065 chromatophores showing significant responses, only 65 showed retractions. The small sample size makes it difficult to generalize the retraction results. To promote the animals adopting a darker skin tone, we ran additional pilot trials using black tanks and white gravel to generate visual contrast between the substrate and walls. The contrast increased the probability of squid expressing a disruptive or uniformly dark pattern. When squid experienced light flashes while having dark patches of skin, we observed more retractions. However, replicating these trials using black tanks within the rig was not possible due to video noise resulting from reduced visibility. Future studies on chromatophore retraction may utilize visual contrast in the environment and appropriate equipment to remove videography noise. Potential Extraocular Chromatophore Responses The overall results of our study showed that the response time (tR) was in line with timings from Otis and Gilly (1990). They argue that "[t]he 50-ms delay for giant axon excitation in the startle-escape is similar to that for mantle contraction, indicating that the major source of behavioral delay lies in the central nervous system and not in conduction time along the giant axon (<10 ms) or muscle activation" (p. 2912). Thus, we may conclude that squid chromatophore responses are dependent on the CNS. To test the possibility that squid skin responds directly to light flashes, we used flash stimulation with a recently deceased squid from the main population holding tank. The squid showed spontaneous chromatophore activity before stimulation, and the aim was to observe whether there were any extraocular chromatophore responses to the flash stimulus. We found no discernible changes due to stimulation. However, since we only used one deceased squid for this test, we cannot exclude the possibility that extraocular responses may have contributed to chromatophore activity changes in this study. CONCLUSION In the natural world, cephalopods are renowned for the dynamic range and speed of adaptive body patterning used in camouflage and communication. In this exploratory study, we used a light flash stimulus to elicit transient chromatophore surface area changes in order to quantify the chromatophore system's temporal dynamics in living, intact animals. Our measurements verify the early onset and unparalleled speed of the sub-second chromatophore changes in body patterning.
Based on our findings, we argue that measuring the temporal dynamics of complete behavioral responses during body patterning in intact, living animals is a feasible and essential complement to studies using excised, isolated skin preparations. The unexpected differences between body regions and between expansion and retraction responses exemplify the need to continue this line of research. Such detailed timing of the temporal dynamics is essential for comprehensive and quantitative descriptions of body patterning. The methodology and findings described in this study collectively contribute to our understanding of how cephalopods can employ thousands of chromatophore organs within milliseconds for rapid, adaptive body patterning. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT Ethical review and approval was not required for the animal study because, at the time this study took place (July 2014), Institutional Animal Care and Use Committee (IACUC) protocols were not issued for invertebrate research in the United States or at the institution where the experiments with live animals were carried out. Nevertheless, procedures were performed to minimize pain and distress of the animals involved. AUTHOR CONTRIBUTIONS SH, KK, AC, and IA contributed to the conception and design of the study. SH ran the video trials, collected data, organized the database, and wrote the manuscript's first draft. SH and RE-H performed the statistical analysis. RE-H wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version. FUNDING This study was funded by the City University of New York Doctoral Student Research Grant. The funder had no role in study design, data collection and analysis, decision to publish, or manuscript preparation.
A Standard Indoor Spatial Data Model — OGC IndoorGML and Implementation Approaches † With the recent progress in indoor spatial data modeling, indoor mapping and indoor positioning technologies, several spatial information services for indoor spaces have been provided, as they have for outdoor spaces. In order to support interoperability between indoor spatial information services, IndoorGML was published by OGC (Open Geospatial Consortium) as a standard data model and XML-based exchange format. While previous standards that also cover indoor space, such as IFC (Industry Foundation Classes) and CityGML, aim at feature modeling, the goal of IndoorGML is to establish a standard basis for the indoor space model. As IndoorGML defines a minimum data model for indoor space, more effort is required to discover its potential aspects, which are not explicitly explained in the standard document. In this paper, we investigate the implications and potential aspects of IndoorGML and its basic concept, the cellular space model, and discuss the implementation issues of IndoorGML for several purposes. In particular, we discuss the issues of cell determination, subspacing and the hierarchical structure of indoor space from the IndoorGML viewpoint. Additionally, we also focus on two important issues: the computation of indoor distance and the implementation of indoor context-awareness services based on IndoorGML. We expect that this paper will serve as a technical document for a better understanding of IndoorGML through these discussions. Introduction Since the first GIS was developed in the 1960s, there has been significant progress in geospatial technologies. We can interpret the history of geospatial technologies in terms of the size of the spatial extent. The spatial extent of GIS in the first generation covered a large area, such as an entire country or a whole city, and the user groups were limited to experts, like urban designers and civil engineers. Due to the progress of computing and web technologies and positioning methods including GPS, geospatial technologies became available to the public, for example through car navigation and web map services. Accordingly, the users of geospatial technologies expanded to car drivers and web users, while the size of the spatial extent, covering for example several blocks in a city, became smaller than in the previous generation. Since the 2010s, mobile technologies such as smartphones have allowed pedestrian navigation, which requires an even smaller spatial extent of less than a hundred meters, and the user population of geospatial technologies has grown significantly. Additionally, we may forecast that the size of the spatial extent will become much smaller and the user group of geospatial technologies will become bigger, and indoor spatial information services will be an example of this trend. More particularly, the indoor spaces of buildings are becoming bigger and more complex due to rapid urbanization and large populations in urban areas. For this reason, the efficient management of indoor spatial information is a crucial demand in huge buildings, and a few commercial services, such as Google Indoor, have already been provided to meet this demand.
Like other types of information, indoor spatial information has a life cycle consisting of: (1) data collection; (2) data management; (3) data sharing; and (4) spatial analysis and services; where each step is supported by proper methods and technologies. However, we also need a basic indoor spatial theory supporting the entire life cycle, and the indoor spatial data model forms a core part of this theory. A clear taxonomy of indoor spatial data models was presented from three viewpoints, including geometry, symbolic space and network connectivity, in [1]. Although many spatial data models for indoor space have been developed, particularly by geospatial standard communities, they mainly focus on only one of these viewpoints. For example, IFC (Industry Foundation Classes) [2] of buildingSMART and CityGML [3] of OGC (Open Geospatial Consortium) define exchange data formats for 3D building models and standard spatial data models for indoor space. However, their spatial data models are feature models rather than space models and do not fully reflect the properties of indoor space, such as indoor topology. In order to overcome the weakness of previous standard spatial data models for indoor space, IndoorGML was published by OGC [4] in 2014. It includes all of the properties addressed in [1]: geometry, symbolic space and network topology. Unlike the previous spatial data models, it defines an indoor space model rather than feature models for indoor space. It provides a basis for the indoor spatial data model, so that it can be easily extended to meet the requirements of any indoor spatial application and integrated with other standards such as CityGML. While IndoorGML defines an indoor spatial data model, limited work has been done on its utilization for different applications. In this paper, we investigate the key concepts of the spatial model introduced in IndoorGML and explore how to apply it to diverse applications of indoor spatial information. The goal of the paper is therefore to explore the potential of IndoorGML and investigate how to utilize it for practical applications. In particular, we discuss issues about cell determination in IndoorGML for several practical cases. Additionally, we also propose novel algorithms to compute indoor distance for different cases and a framework for indoor context-awareness using IndoorGML. The paper is organized as follows: In Section 2, we survey the previous work on standard indoor spatial data models and discuss the requirements of a spatial data model for indoor space in Section 3. The basic concepts of IndoorGML will be introduced in Section 4, and advanced concepts and implementation issues of IndoorGML will be investigated in the following sections. We will discuss the cell determination issue for IndoorGML in Section 5, indoor routing and distance computation in Section 6, which are two basic functions of indoor spatial services, and the implementation of indoor context-awareness with IndoorGML in Section 7. Finally, we conclude the paper in Section 8. Indoor Spatial Data Models Indoor spatial data models are a fundamental basis of indoor spatial technologies related to each step in the life cycle of indoor spatial information. An excellent survey on indoor spatial data models was presented in [1].
Most of the works on indoor spatial data models are classified into two approaches: the geometric approach and the symbolic approach. The first approach is mainly focused on the geometric representation of indoor features. For example, the boundary representation models [5][6][7] or tessellation models [8,9] belong to this approach. On the other hand, the symbolic approach emphasizes the semantics and ontology aspects of unit spaces rather than their geometric properties. It also aims at representing the properties of each unit space identified by a symbol and the topological relationships between unit spaces [10,11]. For example, [11] introduced the concept of the symbolic and semantic space model and raised related issues. Additionally, an elaborate symbolic model was proposed for indoor navigation services in [10]. More detailed discussions are given in the rest of this paper. While this classification of indoor spatial data models is useful and most of the models belong to one of these approaches, the approaches are complementary to each other. Since each approach has its strengths and weaknesses, it may be recommended to integrate the strengths of multiple approaches into a single indoor spatial data model to compensate for the weaknesses. For example, a hybrid data model may represent geometric properties on the one hand and also support symbolic concepts of indoor space on the other. This is a fundamental requirement of the standard indoor spatial data model that will be discussed in Section 3. In addition to geometric models and symbolic space models, other data models were also proposed to facilitate the computation of indoor distance [12,13]; these are based on graph models, but also include geometric properties. Additionally, some indoor spatial data models were introduced to support specific indoor spatial information services. For example, Li et al. [14] proposed a spatial data model to support geo-encoding of multimedia in indoor space, and a similar data model for indoor navigation maps was proposed for visually-impaired people by [15]. These models are mainly based on hybrid approaches combining the symbolic model and the geometric model. An interesting indoor spatial model, called the multi-layered space model, was proposed to represent multiple space layers and integrate them via inter-layer connections in [6]. This model is very useful for interpreting an indoor space from different viewpoints. A hierarchical structure model for indoor space was also proposed by [16] for indoor pedestrian navigation with hierarchical graph structures. The indoor spatial data models listed above are useful for their respective purposes, but limited to a specific scope. This means that each model is suitable for its application, but its range is not general enough for a standard indoor spatial data model. We need a general standard model that fulfills the requirements and combines the strengths of previous indoor spatial data models.
Standards for Indoor Spatial Information Prior to IndoorGML, several standards had been published for spatial data models covering indoor space, among which IFC (Industry Foundation Classes) and CityGML are the most widely accepted ones. The IFC specification was developed and is maintained by buildingSMART International as the BIM (Building Information Model) standard [2] and was also accepted as the ISO 16739 standard. It defines a conceptual data model and an exchange file format for BIM data [17]. The scope of IFC covers interior as well as outdoor spaces, where the part for the interior space model mainly defines the models for indoor features, such as walls, doors, slabs, windows, spaces, etc. The topological relationships between indoor space units are excluded from the data model. CityGML [3] is a geospatial information model and XML-based encoding for the representation, storage and exchange of virtual 3D city and landscape models. It is defined as an application schema of GML 3.1.1 [18], and its geometric models are based on ISO 19107 [5]. CityGML provides a standard spatial data model and mechanism for describing 3D objects with respect to their geometry, semantics and appearance. Like IFC, its scope covers indoor and outdoor spaces, and it defines five different Levels of Detail (LoD), where LoD 4 specifies the feature model for interior space. Although it includes more detailed feature types of indoor space than IFC, such as interior furniture and installations, its main focus is still on feature modeling rather than space modeling. As in IFC, topological relationships are not explicitly included in the model. A Chinese standard for indoor location, called IndoorLocationGML, has also recently been published [19] as an application schema of GML 3.2.1 [18]. While IndoorGML mainly focuses on the space concept of indoor space, IndoorLocationGML emphasizes the indoor location framework supporting both relative and absolute reference systems. Since IndoorGML does not provide relative indoor reference systems in an explicit way, IndoorLocationGML can be used with IndoorGML as a complement. Even though these standards cover indoor space, they do not fully meet the requirements mentioned in the next section. In order to overcome the weaknesses of previous standards, IndoorGML was published by OGC [4]. The major goal of OGC IndoorGML is to represent and allow for the exchange of spatial information required to build and operate indoor navigation systems [4]. For this purpose, it provides a basic framework of the indoor cellular model and a semantic extension for indoor navigation with an XML application schema. Since its publication, several studies on its basic concepts and applications have been carried out, such as geo-tagging in indoor space by IndoorGML [14], an indoor navigation map for visually-impaired people as an extension of IndoorGML [15] and a comparison between IndoorGML and CityGML LoD 4 [20]. A brief analysis of IndoorGML is given with regard to smart cities in [21].
The minimality of the standard scope was one of the key considerations during the development of IndoorGML, to achieve flexibility and extensibility as well as to avoid conflicts with existing standards. This means that we have to explore the potential and the implementation concepts of the standard. The previous works on IndoorGML are, however, limited to specific applications and do not provide a sufficient survey on the strengths and potential of IndoorGML. The goals of this paper are therefore to explain the fundamental concepts of IndoorGML and explore its potential and implementation concepts. Requirements for Indoor Spatial Data Models In this section, we study the characteristics of indoor space by comparing it with outdoor space and then analyze the requirements for standard indoor spatial data models. Indoor Distance One of the most fundamental differences between indoor and outdoor spaces lies in the definition of distance. In general, outdoor space is classified into Euclidean space and constrained space, the latter being defined as a space where the distance between two points is determined by constraints between them. Road network space is an example of constrained space in the outdoors, since the distance between two points in a road network is determined by the network constraints [22]. The types of constrained spaces depend on the characteristics of the constraints. Indoor space is also a constrained space, where the constraints differ from the network constraints or city facility constraints of outdoor constrained spaces. Indoor space, composed of a number of architectural components, such as rooms, corridors, floors, wings and elevators, has a more complicated structure than outdoor space. In order to compute the distance between two points in indoor space, we therefore have to consider the architectural structure, more precisely two factors: architectural components, such as walls, doors, stairs, etc., and the connectivity between indoor spaces, as shown in Figure 1. This means that information about indoor connectivity and architectural components is necessary to compute indoor distance. First, the indoor connectivity, called the indoor accessibility graph [13,23], has to be prepared, just as we need a road network graph to compute distance on the road network. An indoor accessibility graph is represented as G = (V, E), where a node n ∈ V is a room or a space unit in indoor space and an edge e ∈ E represents the connectivity between two adjacent space units, for example via a door. The edge may carry additional attributes such as the length of the connection. However, a simple edge connecting two nodes does not fully reflect the distance information, particularly when the room connected to the edge has a big area or complicated geometry [24]. In order to overcome this problem, we need the geometry information of the room or space unit surrounded by architectural components, such as walls and doors. Once indoor geometry information is provided, the indoor distance is computed by dividing the total path into point-to-door and door-to-door distances, as illustrated in Figure 2.
In this figure, the path from point p to point q is divided into several sub-paths: the first sub-path from p to door d1, the second from d1 to door d3 and the third from d3 to q. While the distances from p to d1 and from d3 to q are called point-to-door distances, the distance from d1 to door d3 is called the door-to-door distance. The point-to-door distance may be computed by line-of-sight [12] or the Minkowski sum [24], while the door-to-door distance can be easily computed by the shortest path algorithm with a pre-computed door-to-door graph [12]. Since the vertical distance is not isotropic with the horizontal distance, the indoor distance between two points on different floors is computed in a different way from the distance on the same floor. Furthermore, the vertical distance depends on the type of locomotion, whether elevators, stairs or escalators. For this reason, an alternative is to define indoor distance as the expected traveling time between two points rather than the physical distance. More information, such as the type of locomotion and contextual information, is required to compute the expected traveling time in addition to the accessibility graph and geometry of indoor space. Consequently, the standard indoor spatial data model should contain not only geometry, but also connectivity topology, semantics and additional information, such as context information, to support indoor distance computation. Complex Structures of Indoor Space The structure of indoor space is mainly determined by architectural structures that have unique properties. First, the indoor space consists of a number of cells surrounded by architectural components, such as walls, ceilings and floors, where each cell is separated from the others. Cells in indoor space are horizontally or vertically connected in sophisticated ways via specific types of architectural components like doors and stairs. Furthermore, indoor spatial properties, such as cell geometry and the connectivity structures between cells, differ depending on the type of building. For example, subway stations are normally composed of long hallways and platforms on different levels, while office buildings normally have a number of small office rooms connected via corridors. Second, indoor spaces of complex buildings are often composed of areas with different purposes. For example, a shopping mall has a number of stores, warehouses, control rooms, cinemas, sports centers, subway stations, etc., each of which has unique requirements and functions. Such complex buildings make the indoor space very complicated. Third, a single indoor space may be interpreted from different viewpoints. For example, an indoor space is partitioned into rooms and corridors, while it may also be partitioned into public areas and private areas according to security levels. Since each interpretation forms a space layer with a proper partitioning criterion, an indoor space may have multiple space layers. These complicated structures of indoor space are rarely found in outdoor space. Therefore, the indoor spatial data model should efficiently support complex structures of indoor space in terms of geometry, network connectivity and multiple interpretations.
Cell-Based Context Awareness As claimed by an early work on ubiquitous and context-aware computing [25], context has three important aspects: where you are, who you are with and what resources are nearby. The first and third aspects are related to the location of the user, which is normally represented as (x, y, z) coordinates in outdoor space. However, it is more relevant to represent the location of the user in indoor space with a room number than with (x, y, z) coordinates, since the context is mainly determined by the type or function of the room. For example, staying in a classroom for an hour has a totally different context from staying in a washroom for an hour. One of the most basic functions of indoor context-awareness services is therefore to identify the room or space unit where the user is currently located. In general, we call the unit of indoor space the cell; rooms, corridors and staircases are examples of cells. Therefore, the indoor spatial data model should contain the notion of cell to support indoor cell-awareness. Indoor cell-awareness is defined as being aware of the cell where the user is currently located. In order to implement it, the indoor spatial data model has to provide the following functions. First, the boundary of the cell must be clearly defined either in two dimensions or three dimensions, such that we can easily discover the cell containing the point (x, y, z) acquired from any indoor positioning method. For example, if the boundary geometry is represented only by multiple surfaces, it is very hard to determine whether the point is within the cell or not. It is therefore required to represent the boundary of the cell by a closed geometry, such as a polygon in two-dimensional space or a solid in three-dimensional space. Second, we need a function to identify the correct cell from noisy position data, since most indoor positioning methods contain a certain level of error. For example, the reported accuracies of WLAN-based indoor positioning methods are mostly more than two meters [26], which is not sufficiently accurate to identify the current cell, since the thickness of a wall is much less than two meters. This function is similar to the map matching function of car navigation services, which corrects the current road segment from inaccurate GPS data. For this reason, we call this function indoor map matching. For the implementation of indoor map matching, information about the indoor network and cell types is required, as well as cell geometry, and should be included in the standard indoor spatial data model. Integrating Multiple Datasets For several reasons, the integration of multiple datasets is a fundamental requirement of an indoor spatial data model. First, the integration of indoor and outdoor spatial datasets is crucial for seamless services between indoor and outdoor spaces. The indoor spatial data model should therefore provide a mechanism to integrate it with outdoor space. In order to implement a parking lot guidance service of car navigation, for example, two datasets covering indoor and outdoor spaces, respectively, have to be integrated in a proper way.
Second, several standards for indoor spatial information have been developed, such as IFC, CityGML, KML and IndoorGML, each of which has its strengths and weaknesses. IndoorGML was designed to avoid duplication and conflicts with other standards and provides only a minimum set of functions of indoor spatial data that are missing in other standards. It is therefore recommended to integrate IndoorGML data with other standards to compensate for its weaknesses and take advantage of the strengths of other standards. Third, it is often necessary to interpret and configure a single indoor space from multiple viewpoints, which is similar to the multiple layer structure of the conventional geospatial data configuration for outdoor space. For example, the layout of an indoor space is given as a topographic map, while the map of CCTV coverage is also useful for security purposes. In general, there are two approaches for the integration. The first approach is a physical integration of multiple datasets of different standards into a single dataset. For example, CityGML provides a mechanism called ADE (Application Domain Extension) [3], which extends CityGML to include additional information. A spatial data model in another standard may be re-defined as an ADE of CityGML, and a conversion process from a dataset to the CityGML ADE is required. The second approach is to link multiple datasets in different standards via external references without physical integration. For example, each feature in a dataset DA of a standard data model has an external reference or foreign key to a feature in another dataset DB of a different standard data model, and vice versa. This approach is simple and practical when the correspondence between features in DA and DB is one-to-one. Thus, the standard data model should support integration via an extension mechanism or an external reference. Basic Concepts of IndoorGML In this section, we study the basic concepts of OGC IndoorGML, developed to meet the requirements of a standard indoor spatial data model listed in Section 3. Note that most of this section is a summary of the IndoorGML standard specification found in [4]. Cellular Space Model IndoorGML is an application schema of OGC GML (Geography Markup Language) 3.2.1 [18], which is an XML grammar for expressing geographical features, based on the spatial data model in ISO 19107 [5]. As mentioned in Section 3.3, indoor cell-awareness is a basic requirement of the indoor spatial data model. For this reason, the key concept of IndoorGML is based on the cellular space model. Cellular space is defined as follows: Definition 1 (Cellular space). A cellular space C of a space U is defined as a set of cells such that: (1) ci ⊆ U for every cell ci ∈ C; (2) ci ∩ cj = ∅ for any pair of distinct cells ci, cj ∈ C; and (3) each cell c has its cell identifier c.id. No overlapping between cells is allowed, and the union of all cells is a subset of the given indoor space. This means that there may be shadow areas, which are not covered by any cell, and not every position necessarily belongs to a cell. Based on the cellular space model, IndoorGML provides four main concepts to satisfy the requirements listed in Section 3: cell geometry, topology between cells, cell semantics and the multi-layered space model. The cellular space is given as a UML class diagram depicted in Figure 3, which shows the core module of IndoorGML. In the following subsections, we will see how each of these concepts is implemented in IndoorGML.
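As a rough two-dimensional illustration of the cellular space model, and of the cell-awareness requirement of Section 3.3, the following sketch looks up the cell whose closed polygonal boundary contains a measured position; a position outside every cell falls in a shadow area. The cell identifiers and coordinates are illustrative assumptions, and a full indoor map matching implementation would additionally use the indoor network and cell semantics to correct noisy positions.

# Minimal sketch of indoor cell-awareness in 2D: each cell has a closed
# polygonal boundary, and we look up the cell that contains a measured
# position. Positions not covered by any cell lie in a shadow area.

def point_in_polygon(x, y, polygon):
    """Ray-casting test; polygon is a list of (x, y) vertices of a closed ring."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def locate_cell(x, y, cells):
    """cells: {cell_id: polygon}; returns the id of the cell containing (x, y)."""
    for cell_id, polygon in cells.items():
        if point_in_polygon(x, y, polygon):
            return cell_id
    return None   # shadow area: the position is not covered by any cell

cells = {"R1": [(0, 0), (5, 0), (5, 4), (0, 4)],
         "R2": [(5, 0), (9, 0), (9, 4), (5, 4)]}
print(locate_cell(6.2, 1.5, cells))   # -> "R2"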
Cell Geometry The cell is the basic unit of the cellular space model, and its geometry is defined as a two- or three-dimensional spatial object. Since IndoorGML is based on the geometry model of ISO 19107 [5], the cell geometry can be either a surface or a solid defined in ISO 19107, but there are three options to represent cell geometry in practice: • Option 1, no geometry: The first option is to exclude any geometric properties from IndoorGML data and to include only topological relationships between cells, which will be explained in the next subsection. • Option 2, geometry in IndoorGML: The second option is to represent the geometry within IndoorGML data by geometric types defined in ISO 19107. For example, the three-dimensional geometry of a cell is defined as a solid of ISO 19107. Note that the geometry of the cell is an open primitive as defined in ISO 19107, which means that the boundary of the cell geometry does not belong to the cell. This definition is consistent with the non-overlapping condition of the cellular space defined in Section 4.1. • Option 3, external reference: The third option is to include external references to the object in another dataset that contains the geometric data. For example, a cell in IndoorGML data only points, via the GML identifier, to an object in CityGML that contains the geometric properties. These options are not exclusive and may be combined. For example, while no geometry is included in an IndoorGML dataset (Option 1), it may contain external references to objects in another dataset (Option 3). Furthermore, the geometry defined in IndoorGML is not necessarily identical to the geometry of the corresponding object in another dataset. Topology between Cells Once cells are determined with their identifiers and geometric properties, we need to describe the topological relationships between cells, which are essential to most indoor navigation applications. The topology between cells in IndoorGML is basically derived from the topographic layout of indoor space by Poincaré duality [27]. As illustrated in Figure 4, a k-dimensional object in the N-dimensional topographic space is mapped to an (N − k)-dimensional object in the dual space. This means that a three-dimensional cell in topographic indoor space is transformed to a zero-dimensional node, and a two-dimensional boundary surface shared by two cells is transformed to a one-dimensional edge in the corresponding dual space. The set of nodes and edges transformed from the topographic space by Poincaré duality results in a topological graph connecting adjacent cells in indoor space. Figure 5 shows an example of an adjacency graph derived from a topographic indoor space. From the adjacency graph, we can also derive a connectivity graph by considering the type of edges in the adjacency graph. If an edge indicates the boundary of a door, then the two end nodes of this edge are connected via the door. We may define more attributes on the edge to represent additional information, such as distances, directions and types of doors. There are two options to represent the graph derived from topographic space in IndoorGML. The first option is to include the geometries of the nodes and edges as points and curves, respectively. We call the graph with geometric properties the geometric graph. The second option is to represent the graph without any geometric properties, which is called the logical graph. However, in most applications of indoor navigation, we need geometric data to calculate indoor distance.
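The derivation of the adjacency and connectivity graphs described above can be illustrated with a small sketch: cells become nodes, every boundary surface shared by two cells becomes an adjacency edge, and only the edges whose shared boundary is a door are kept in the connectivity graph. The cell and boundary identifiers are illustrative assumptions, not taken from Figure 5.

# Minimal sketch of the dual graphs described above: each cell becomes a
# node, every boundary surface shared by two cells becomes an adjacency
# edge, and the connectivity graph keeps only edges whose shared boundary
# is a door.

from itertools import combinations

# Which boundary surfaces enclose each cell, and which boundaries are doors.
cell_boundaries = {
    "R1": {"w1", "w2", "d1"},
    "R2": {"w2", "w3", "d1", "d2"},
    "Corridor": {"w4", "d2"},
}
door_boundaries = {"d1", "d2"}

adjacency, connectivity = [], []
for (c1, b1), (c2, b2) in combinations(cell_boundaries.items(), 2):
    for shared in sorted(b1 & b2):               # shared boundary surface
        adjacency.append((c1, c2, shared))
        if shared in door_boundaries:            # adjacency via a door
            connectivity.append((c1, c2, shared))

print(adjacency)      # [('R1', 'R2', 'd1'), ('R1', 'R2', 'w2'), ('R2', 'Corridor', 'd2')]
print(connectivity)   # [('R1', 'R2', 'd1'), ('R2', 'Corridor', 'd2')]

Annotating the connectivity edges with door lengths, as mentioned above, would yield a weighted graph of the kind used as the accessibility graph for distance computation in Section 3.1.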
There are several studies [12,13] on how to compute the optimal path and its distance between two points in indoor space, and we will discuss indoor distance in Section 6. Cell Semantics Since every cell in indoor space has its proper function and usage, we need to specify the semantics of cells. In the current version of IndoorGML, we classify the types of cells in terms of indoor navigation and expect that other classifications would be necessary for different applications, such as indoor facility management. Figure 6 shows the current classification of cells in IndoorGML for indoor navigation. In particular, the anchor node in Figure 6 represents a gate or entrance of a building and allows connecting indoor and outdoor spaces. Besides the classification of cells shown in Figure 6, more detailed classifications of cells and cell boundaries are also defined as attributes. For example, the code list given by OmniClass [28] can be used, which defines the hierarchical classification of features in a building. Multi-Layered Space Model As briefly mentioned in Section 3.4, the same indoor space can be interpreted and represented in different ways. In IndoorGML, a mechanism called the multi-layered space model is offered to represent an indoor space by overlaying different interpretations [4,29]. Each interpretation corresponds to a cellular space layer with its own geometric and topological properties. For example, there are two different layer configurations of an indoor space due to a step in Room 3, as shown in Figure 7: the walkable layer and the wheelchair layer. Since each layer forms a cellular space, it includes the geometries of cells, the topologies between cells given as a graph and the cell semantics. In addition to the simple aggregation of cellular space layers, a special type of edge, called an inter-layer connection, is also offered in IndoorGML to represent the relationships between nodes in different layers. In Figure 7, Room 3 in the walkable space layer corresponds to Room 3a and Room 3b in the wheelchair layer, because Room 3 is partitioned into Room 3a and Room 3b due to the step. The multi-layered space model is also useful for many applications, such as describing the hierarchical structure of indoor space or tracking moving objects from sensor data. Further detailed discussion is found in [29]. Modular Structure of IndoorGML For the sake of extensibility, IndoorGML has a modular structure, as shown in Figure 8. The core module of IndoorGML contains the data model for cell geometry, topology and the multi-layered space model. The indoor navigation extension module, which provides the semantic extension model for indoor navigation, is so far the only extension module defined on top of the core module. Many other extension modules may be defined depending on applications. For example, the indoor cadastral model is a candidate extension of IndoorGML [30]. Implementation of IndoorGML Core Module Since IndoorGML provides not only a standard indoor spatial data model, but also a format for data exchange, it defines an application XML schema based on GML 3.2.1.
CellSpace defines a basic unit type of the cellular indoor space model, such as a room, corridor or hall. It basically contains a GML identifier as a gml:AbstractFeature object [18] with attributes. As defined in Figure 9, it may also contain a reference to an external object, which may provide additional information. The geometric type of CellSpace may be either a solid or a surface, depending on the dimensionality of the space, or may not contain any geometry, as discussed in Section 4.2. While CellSpace represents a cell in indoor space, CellSpaceBoundary defines the boundary geometry of a CellSpace object, and its geometry may be a surface or a curve depending on the dimensionality. Since CellSpace and CellSpaceBoundary represent the two basic unit types in primal space, further extensions to these basic types can be defined as subclasses with a given semantic context. For example, the feature types for indoor navigation can be defined as subclasses of CellSpace and CellSpaceBoundary in an extension. State and Transition define the feature types of dual space corresponding to CellSpace and CellSpaceBoundary in terms of connectivity topology. They are useful in indoor navigation applications such as optimal route computation or indoor distance computation. Additional information may be given as attributes of these feature types. Cell Determination in IndoorGML One of the fundamental issues in utilizing IndoorGML, which is however not explicitly addressed in the standard document of IndoorGML, is how to determine the granularity of cells and partition a given indoor space into cells. In this section, we discuss several issues about cell determination and subspacing. Cell Determination and Subspacing From the topographic viewpoint, it is relatively easy to determine cells and the granularity of cells. However, in addition to the topographic viewpoint, the following cases should be considered to determine the granularity of cells and partition a space into subspaces. More detailed criteria are found in [30][31][32][33][34]: • different properties: if a space has different properties, such as a kitchen area and a living room, it is desirable to partition it into two cells with virtual boundaries. • big space as a cell: if a space is too big, like a long hallway or a big convention hall, it is recommended to split it into smaller subspaces. • obstacles: obstacles in indoor space may limit movements and result in partitioning of an indoor space. Interesting works on subspacing and path finding in indoor space with obstacles are found in [33,34]. • sensor coverages: it is also possible to divide a space in terms of sensor coverage, such as CCTV viewsheds or WiFi and RFID coverages [29]. • cell without spatial extent: while cells have spatial extents in most cases, there are also cases where no spatial extent is required except a point. For example, each image spot in a panoramic image service, shown in Figure 13, is represented as a cell without spatial extent except a point. Note that the panorama spot image layer is defined as a separate space layer of IndoorGML, and we define inter-layer connections with the cells in the topographic layer. We also assume that each navigation arrow connecting two image spots is considered as an edge in the connectivity graph for the panorama spot image layer, as in Figure 13. Thick-Wall Model vs.
Thin-Wall Model An indoor space is represented differently depending on how the thickness of walls is handled. If we ignore the thickness, a wall is represented simply either as a surface in three-dimensional space or a polyline in two-dimensional space, as shown in Figure 5 in Section 4.3. The adjacency graph then contains only the edges between R1, R2 and the external space in Figure 5. We call this representation the thin-wall model. On the contrary, if we take the thickness into consideration, the walls and doors are also considered as cells, and the adjacency between rooms and walls, such as W7 and R1, is included in the adjacency graph, as shown in Figure 14. By extracting only the navigable parts of the graph, we can derive the accessibility graph of indoor space, removing the non-navigable cells, such as walls, and the non-navigable links, such as the edge connecting W7 and R1, depicted as dotted lines in Figure 14. Representing Hierarchical Structures Most indoor spaces have hierarchical structures. For example, a building complex consists of several buildings, each of which is divided into an east wing and a west wing, each wing is composed of multiple floors, and so on. An efficient way to represent hierarchical structures of indoor space was introduced in [16]. The authors proposed a simplified form of hierarchical graph for indoor space, in which a base graph G is partitioned into subgraphs. Each of these subgraphs, in turn, corresponds to a node in the hierarchical graph H. Edges in H correspond to edges in G between nodes ni and nj of two different subgraphs subi(G) and subj(G). It is possible to redefine the hierarchical graph for indoor space by means of the multi-layered space model of IndoorGML [35]. First, we define the single-layered graph and the multi-layered graph using the multi-layered space model as follows: Definition 4 (Multi-layered graph). A multi-layered graph is a graph GM = (NM, EM) that consists of multiple single-layered graphs and a set of inter-layer connections EI, which are a special type of edge connecting two layers. Then, the hierarchical graph is considered a specific type of multi-layered graph connecting the i-th and (i + 1)-th layers, as defined below: Definition 5 (Hierarchical graph of IndoorGML). A hierarchical graph GH = (NH, EH) is a multi-layered graph whose inter-layer connections only join a node ni in the i-th layer and a node nj in the (i + 1)-th layer that satisfy the topological relationship R(ni, nj) between the two cells ni and nj (in the example below, aggregation).
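The following is a minimal sketch, not a normative IndoorGML encoding, of a multi-layered graph whose inter-layer connections join only consecutive levels, as in the hierarchical graph defined above; the layer and node names are illustrative assumptions.

# Minimal sketch of a multi-layered graph with inter-layer connections
# restricted to consecutive levels, as in the hierarchical graph above.

class MultiLayeredGraph:
    def __init__(self):
        self.layers = {}            # level -> {node: set(neighbors)}
        self.inter_layer = []       # (level_i, node_i, level_i + 1, node_j)

    def add_layer(self, level, edges):
        graph = {}
        for a, b in edges:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
        self.layers[level] = graph

    def add_inter_layer_connection(self, level_i, node_i, node_j):
        # Hierarchical case: a connection joins level i and level i + 1,
        # e.g. "node_j in the upper layer aggregates node_i in the lower one".
        self.inter_layer.append((level_i, node_i, level_i + 1, node_j))

h = MultiLayeredGraph()
h.add_layer(0, [("R1", "C1"), ("C1", "R2"), ("R2", "R3")])   # base graph G0
h.add_layer(1, [("S1", "S2")])                               # aggregated graph G1
for room, section in [("R1", "S1"), ("C1", "S1"), ("R2", "S2"), ("R3", "S2")]:
    h.add_inter_layer_connection(0, room, section)
print(h.inter_layer[0])   # (0, 'R1', 1, 'S1')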
Note that each level of the hierarchical graph indicates a layer of a multi-layered graph; therefore, G 0 is the single-layered graph of the bottom level.The highest level G h , namely the root graph of the hierarchical graph, contains only a node without edge, where h is the height of the hierarchical graphs.Figure 15 shows an example of the hierarchical structure for an indoor space by the multi-layered space model of IndoorGML.S 1 and S 2 in Level 1 are the aggregations of {R 1 , R 2 , C 5 , R 6 } and {C 3 , C 4 , R 7 , R 8 } in Level 0, respectively.T 1 is the entire indoor space as the aggregation of {S 1 , S 2 }.Then, the G 0 layer indicates the base graph of the indoor space; G 1 is the next level layer of the hierarchy; and G 2 is the layer for the root level.The relationships between layers are given via inter-layer connections.Note that the topological properties of inter-layer connections in Figure 15 Computing Indoor Distance Using IndoorGML As mentioned in Section 3.1, the distance is one of the fundamental spatial properties.However, the computation of distance in indoor space differs from outdoor space due to the complex structures of indoor space as discussed in Section 3.2.In this section, we discuss how to compute indoor distance using IndoorGML. Horizontal Distance First, we discuss the horizontal distance, which is the distance between two points on the same floor.The distance between two points p and q in Figure 16 is divided into point-to-door distance and door-to-door distance.In order to compute the point-to-door and door-to-door distances, we prepare two space layers: the topographic layer and the door-to-door layer of IndoorGML, as the right part of Figure 16.Note that the door-to-door graph is a weighted graph so that the weight of the edge represents the distance between two doors.Then, we compute the horizontal distances using the topographic layer, door-to-door layer graph and inter-layer connections of IndoorGML data.The algorithm is given as below.In Line 1 and Line 2 of the algorithm, we find the cells containing p and q, which are R6 and R8, respectively, in Figure 16, using the cell geometry data in the topographic layer.The doors of R6 and R8 are found as D p = {d 3 , d 4 } and D q = {d 6 } from the inter-layer connections of the multi-layered space model in IndoorGML in Lines 3 and 4.Then, we compute the door-to-point distances from p and q to each door in D p and D q respectively using the cell geometry of IndoorGML for Lines 5 and 6.The door-to-point distance within a cell can be computed by the shortest path in a polygon algorithm in O(n log n) computation time where n is the number of vertices in the cell [36].Then the results are DIST(p, d 3 ) = 2 and DIST(p, d 4 ) = 6.We also compute the door-to-door shortest paths connecting d p ∈ D p and d q ∈ D q using the door-to-door layer graph in Line 7. Two shortest paths are obtained from the door-to-door layer graph: p where length(p 3,6 ) = 10 and length(p 4,6 ) = 8.Then, the horizontal distance is the sum of the door-to-point distance and door-to-door distances in Line 8, and the route with the minimum distance is p → d 3 → v 7 → d 5 → d 6 → q; and its distance dist H = 15 while the distance of the alternate route p → d 4 → d 5 → d 6 → q is 17.We can also compute the horizontal indoor distance with the thick-door model in the same way. 
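A minimal sketch of the horizontal-distance computation (Algorithm 1 of Section 6.1) over the topographic and door-to-door layers is given below. Here cell_of, doors_of and dist_in_cell stand for the point-in-polygon test, the inter-layer-connection lookup and the shortest-path-in-polygon routine, respectively; they are assumed helpers of this sketch rather than IndoorGML features, and networkx is used only for the door-to-door shortest paths.

import networkx as nx

def indoor_distance(cell_of, doors_of, dist_in_cell, d2d_graph, p, q):
    # cell_of(point)        : id of the cell containing the point (Lines 1-2, point-in-polygon)
    # doors_of(cell)        : doors of the cell, looked up via inter-layer connections (Lines 3-4)
    # dist_in_cell(c, a, b) : shortest distance between a and b inside cell c (Lines 5-6)
    # d2d_graph             : weighted networkx graph of the door-to-door layer
    c_p, c_q = cell_of(p), cell_of(q)
    best = float("inf")
    for d_p in doors_of(c_p):
        for d_q in doors_of(c_q):
            via = nx.dijkstra_path_length(d2d_graph, d_p, d_q, weight="weight")   # Line 7
            best = min(best, dist_in_cell(c_p, p, d_p) + via + dist_in_cell(c_q, q, d_q))
    return best                                                                    # Line 8

With the Figure 16 layout, minimizing over the door pairs (d_3, d_6) and (d_4, d_6) selects the route p → d_3 → v_7 → d_5 → d_6 → q, as in the walkthrough above.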
Vertical Distance We have to consider an additional factor in computing the distance between different floors, called vertical distance.Vertical distance, for example through stairs, is not an isotropic measure with the horizontal distance due to different speeds, energy consumptions, transportation modalities, and so on.A rigorous survey on vertical distance is found at [32]. Figure 17 However, the weight of each vertical edge in the door-to-door graph given at the right part of Figure 17 must be assigned differently from horizontal edges.Furthermore, the door-to-door graph may be directed since up-speed differs from down-speed.Additionally, the weight of the edge also depends on the vertical transportation modality, whether elevators, stairs, escalator or ladders.When we prepare the door-to-door graph layer of IndoorGML data, all of these factors must be reflected to compute the correct vertical indoor distance.Once the topographic layer and door-to-door layer are prepared, we apply Algorithm 1 given in Section 6.1 to compute the indoor distance, no matter whether it is vertical, horizontal or hybrid distance. In this paper, we do not consider more complicated indoor structures, such as the Mercedes-Benz Museum in Stuttgart or Guggenheim Museum in Manhattan, New York City, which have spiral structures, where vertical distance is not clearly separable from horizontal distance.We expect that the indoor distance for these special cases could be computed by Algorithm 1 in Section 6.1 with proper configurations of topographic and door-to-door layers. Multi-Modal Distance The transportation modality is an additional factor to consider in computing indoor distance.For example, when we compute an indoor distance between two points in different terminals of an airport, connected via the inter-terminal railway, we have to integrate multi-modal transportation that comprises horizontal movements, vertical movement via escalators and the inter-terminal railway.In this case, the traveling time is a more proper metric of indoor distance than the physical distance between two terminals.First, we provide an additional layer for indoor transportation, as well as the terminal topographic layer, as shown in Figure 18 in a similar way with door-to-door layer.Note that the weight of the edge in the indoor transportation layer graph is given as the traveling time between two nodes.In order to compute the traveling time from point p in the Terminal 1 lounge to q in the Terminal 3 lounge, we integrate the travel time of three paths, Path 1, Path 2 and Path 3 in Figure 18.The indoor distance is computed in a similar way given in the previous subsection by replacing the door-to-door layer graph with the indoor transportation layer graph and applying Algorithm 1. 
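One possible way to encode the asymmetric vertical weights in a directed door-to-door graph before reusing Algorithm 1 is sketched below. All speeds, waiting times and node names are assumed values chosen only for illustration; the paper itself does not prescribe them.

import networkx as nx

H_SPEED, UP_SPEED, DOWN_SPEED = 1.4, 0.5, 0.7   # m/s, assumed walking/climbing speeds

def add_horizontal(g, a, b, length_m):
    t = length_m / H_SPEED
    g.add_edge(a, b, weight=t)
    g.add_edge(b, a, weight=t)                  # symmetric in both directions

def add_vertical(g, lower, upper, rise_m, mode="stairs"):
    if mode == "stairs":                        # directed: going up costs more than going down
        g.add_edge(lower, upper, weight=rise_m / UP_SPEED)
        g.add_edge(upper, lower, weight=rise_m / DOWN_SPEED)
    else:                                       # "elevator": assumed waiting time plus travel time
        t = 30.0 + rise_m / 1.0
        g.add_edge(lower, upper, weight=t)
        g.add_edge(upper, lower, weight=t)

g = nx.DiGraph()
add_horizontal(g, "d1_f1", "d_elev_f1", 20.0)
add_vertical(g, "d_elev_f1", "d_elev_f2", 4.0, mode="elevator")
add_horizontal(g, "d_elev_f2", "d5_f2", 12.0)

The distance routine of Section 6.1 then runs unchanged on this directed, time-weighted graph.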
Context-Awareness by IndoorGML As addressed in [1], the context-awareness is one of the major requirements for indoor spatial services.In this section, we investigate how to implement the context-awareness with IndoorGML.A first observation on the context-awareness in indoor space is the fact that the context of a pedestrian is mainly determined by: (1) the cell where she/he stays; (2) the time interval of staying in each cell; and (3) the sequence of cell visits; which are summarized as the following process: • Step 1: indoor map matching: Step 2: context reasoning from staying interval: F ST (c, I) = ct • Step 3: context reasoning from visit sequence: where p is a point collected from any indoor positioning device, c is a cell, I is a time interval, v * is a sequence of visit v = (c, I) and ct is a context.F I MM , F ST and F VS are the functions for indoor map matching, context reasoning function from staying interval and context reasoning function from visit sequence, respectively.This process is summarized by Figure 19.In the subsequent sections, we will discuss each step. Indoor Map Matching by IndoorGML The first step of indoor context-awareness is to identify the cell where a pedestrian is staying.The indoor position is given as either a point (x, y, z) or a sensor coverage depending on the type of indoor positioning methods.In any case, the position acquired from indoor positioning has inevitably a certain level of errors as mentioned in Section 3.3, which may yield incorrect results of indoor map matching.With the information in IndoorGML, we can improve the accuracy of indoor map matching as the following process: Note that this algorithm presents only the key idea at the conceptual level but not at the implementation level.First for Line 1 of the algorithm above, we can find the cell containing the current position of pedestrian by point-in-polygon or point-in-polyhedron algorithm with the cell geometry information given by IndoorGML.Second, we find the most probable cell by analyzing the indoor accessibility graph given in IndoorGML and the past trajectory.For example in Figure 20, p 0 is the current position and p −i is the position at time t −i where they are collected by an indoor positioning method.Then, the probability that the current cell is in c 3 is very low, since the indoor distance from p −1 to p 0 exceeds the maximum distance that a normal pedestrian could move within the time interval [t −1 , t 0 ].The probability that the current cell is c 8 is higher than c 3 , and the most probable path is c 6 → c 7 → c 8 , depicted in white circles in the figure.In this paper, we do not discuss the implementation detail for Line 2, but several approaches such as the hidden Markov model may be able to estimate the probability of each route [37].While the algorithm is for indoor map matching from a point, we can modify it for the case that the current position is given as a coverage.In this case, the current cell (c candidate in the algorithm) is given as multiple cells that overlap with the sensor coverage.Then, Step 2 should be modified to process multiple candidate cells. 
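The trajectory-based correction described above can be sketched as follows, in the spirit of Algorithm 2; the speed bound, the reachability test and the fallback choices are assumptions of this sketch rather than the paper's implementation.

import networkx as nx

MAX_SPEED = 2.0   # m/s, an assumed upper bound on normal pedestrian speed

def map_match(p0, t0, candidates, prev_cell, prev_point, prev_t, access_graph, indoor_dist):
    # candidates  : cells whose geometry contains p0 (one cell for a point fix,
    #               several cells when the position is given as a sensor coverage)
    # access_graph: IndoorGML accessibility graph (navigable cells and edges)
    # indoor_dist : function returning the indoor distance between two points
    if prev_cell is None:
        return candidates[0] if candidates else None
    plausible = []
    for c in candidates:
        reachable = nx.has_path(access_graph, prev_cell, c)
        fast_enough = indoor_dist(prev_point, p0) <= MAX_SPEED * (t0 - prev_t)
        if reachable and fast_enough:
            plausible.append(c)
    # A further step (not shown) would re-rank `plausible` using other sensor
    # readings, e.g. accelerometer speed and compass heading.
    return plausible[0] if plausible else (candidates[0] if candidates else None)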
In addition to past trajectory, we can also improve the accuracy by analyzing other sensor readings from smartphone such as accelerometer and digital compass for Line 3 of the algorithm.For example, if the speed of a pedestrian is high, she/he is probably running in a hallway rather than a washing room.Additionally, the direction data are also useful to estimate the current cell.If the direction of the movement at p −1 is given as the arrow in Figure 20, then the probability that the current cell is in c 3 is even smaller because it is impossible to move along the arrow direction due to the wall between c 2 and c 3 . Context Reasoning from the Staying and Visit Sequence Once we find the cell where a pedestrian is staying, more useful context on the pedestrian can be also derived from staying and visiting sequences.For example, if 40 students are staying in a classroom for an hour with a professor, they are most probably attending a class.If a person is staying in a washing room for an hour, she/he may have a serious problem.We can deduce the context of a pedestrian from the information about the current cell and the staying time interval.The framework of the context reasoning can be given as a tuple: where S is a set of staying tuples (c, t s , t e ), C Cell is a cell classification, F ST is a context reasoning function from staying data and C CT is the target context classification.IndoorGML provides the semantics for cell classification based on OmniClass [28], which defines a comprehensive classification of building components in terms of functions and spaces. For the same reason, the visiting sequence is also useful data to derive the context of pedestrians.For example, we may expect that if a pedestrian visits only shops for baby clothes in a shopping mall, it is probable that she/he intends to buy something related to baby clothes.If she/he returns to the first shop of the visit, it is highly probable that she/he would buy something at this shop.The framework for the context reasoning with a visiting sequence can be also specified as a tuple: where V is a set of visiting sequence s * (s = (c, t s , t e )) of pedestrian.F VS is a context reasoning function from the visiting sequence.Since the implementation of context reasoning functions and the classification of pedestrian context are dependent on application domains and beyond the scope of this paper, we leave these issues as future works. Conclusions With the increasing demands for indoor spatial information, the standard indoor spatial data model becomes a fundamental component for indoor spatial technologies.For this reason, IndoorGML was recently published by OGC (Open Geospatial Consortium) as a standard indoor spatial data model and exchange format in XML.However, IndoorGML includes only a minimum part of the indoor spatial data model, and limited works have been done for studying its potential and exploring how to apply it in practice. In this paper, we investigated several aspects of IndoorGML and suggested the basic concepts for its applications.In particular, we studied: (1) the determination of cells, space partitioning and cell structuring; (2) the computation of indoor distance considering vertical and horizontal distances and multi-modal transportation; and (3) the implementation of indoor context-awareness based on the cellular space model and multi-layered space model of IndoorGML.We hope that this paper would serve as also a technical document to understand IndoorGML. 
Since IndoorGML is still at its first stage, we expect that many additional concepts and features will be added as future work from the following viewpoints. First, more use-case studies on IndoorGML are required, ranging from indoor routing services to indoor context-awareness, indoor IoT and indoor big-data analysis; second, additional extensions of IndoorGML may be developed for common application domains; third, improvements to the core part of IndoorGML are also expected throughout these case studies and extensions.
In addition to the UML class diagram, the core model is expressed as an XML schema, which defines the structure of XML data for IndoorGML. The XML schema for the IndoorGML core module includes four basic types: State, Transition, CellSpace and CellSpaceBoundary. While CellSpace and CellSpaceBoundary are the basic units of the indoor primal space, State and Transition correspond to CellSpace and CellSpaceBoundary in the dual space for the connectivity topology of indoor cellular space. The schemas are given in Figures 9 to 12, respectively.
Figure 15. Hierarchical structure and multi-layered space model of IndoorGML (d_i and v_j indicate the connections via the i-th door and the j-th virtual boundary, respectively); the topological properties of the inter-layer connections in this figure are INSIDE.
Algorithm 1. Indoor_Distance(C, G_D, G_M, p, q)
Input: topographic layer C (set of indoor cells with cell geometry); door-to-door (D2D) layer graph G_D = (V_D, E_D); multi-layered space model graph G_M = (V_M, E_M); starting and ending points p, q
Output: horizontal indoor distance dist_H
Begin
1. c_p ← the cell containing point p;
2. c_q ← the cell containing point q;
3. D_p ← {d_pi | d_pi ∈ V_D, (c_p, d_pi) ∈ L}, where L is the set of inter-layer connections;
4. D_q ← {d_qj | d_qj ∈ V_D, (c_q, d_qj) ∈ L};
5. DIST_p ← {dist(p, d_pi) | d_pi ∈ D_p};
6. DIST_q ← {dist(d_qj, q) | d_qj ∈ D_q};
7. P_i,j ← {p_i,j | p_i,j is the shortest path from d_pi ∈ D_p to d_qj ∈ D_q} from the D2D layer graph;
8. dist_H = min_i,j {dist(p, d_pi) + length(p_i,j) + dist(d_qj, q)}, where d_pi ∈ D_p, d_qj ∈ D_q and p_i,j ∈ P_i,j;
9. return dist_H;
End
In the above algorithm, d_pi and d_qj denote the i-th and j-th doors belonging to the cells containing p and q, respectively; dist(p, d_pi) is the distance between p and d_pi within the cell c_p; and length(p) is the length of path p. In the example of Figure 16, the two shortest door-to-door paths are p_3,6: d_3 → v_7 → d_5 → d_6 and p_4,6: d_4 → d_5 → d_6.
Figure 17 shows an example of the multi-layer configuration with topographic and door-to-door layers to compute vertical distance via the elevator. The topographic layer and door-to-door layer configurations are shown in the left part and the right part of the figure, respectively. The red dotted lines represent the inter-layer connections between the cell at the i-th floor and the door connecting the i-th floor to the elevator shaft cell, while the blue dotted lines indicate the inter-layer connections between the door at the i-th floor and the elevator shaft cell. The topological relationships of all inter-layer connections in the figure are MEET in this case.
Figure 18. Multi-modal transportation in an airport (dt_i indicates a screen door at the train platform).
Algorithm 2. Indoor_Map_Matching_From_Point(C, G_A, p_0, P(k), S)
Input: topographic layer C (set of indoor cells with cell geometry); accessibility graph G_A = (V_A, E_A); current point p_0 and past trajectory P(k) = {p_-k, p_-(k-1), ..., p_-2, p_-1}; sensor readings S
Output: current cell c
Begin
1. c_candidate ← find the cell c (∈ C) containing p_0;
2. c_correct ← correct c_candidate by analyzing the past trajectory P(k) and the accessibility graph G_A;
3. c ← improve c_correct by analyzing other sensor readings;
4. return c;
End
Figure 20. Indoor map matching and indoor accessibility.
12,321.4
2017-04-12T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Hallucination Mitigation in Natural Language Generation from Large-Scale Open-Domain Knowledge Graphs , Introduction The task of graph-to-text generation aims to automatically produce natural language descriptions of knowledge graphs.A knowledge graph G stores factual information as subject-predicateobject triples, where each triple (s, p, o) corresponds to an edge from the subject entity s to the object entity o.The graph-to-text generation task entails, given a subgraph G⊂G, generating a token sequence (y 1 , ..., y n ) to describe G.This task can be accomplished by constructing machine learning models (Clive et al., 2021;Castro Ferreira et al., 2019;Trisedya et al., 2018).The input to such a model is a graph itself-a small fragment of triples from a knowledge graph, as the outcome of some upstream operation, e.g., search, query and data mining.The output is a textual sequence that describes the fragment of triples. Verbalizing triples from knowledge graphs is crucial in a variety of tasks and applications, including systems created for querying knowledge graphs (Liang et al., 2021;Jayaram et al., 2016) as well as systems backed by knowledge graphs for question-answering (Zhou and Small, 2019;Ma et al., 2018) and fact discovery (Xian et al., 2019;Zhang et al., 2018).In these places, knowledge graph fragments must be conveyed to users in various forms, such as query results and discovered facts.Though a tiny part of a whole knowledge graph, such graph fragments can still be complex and thus challenging to comprehend.Instead, presenting them in natural language can help end users understand them better. In graph-to-text generation, the preciseness and naturalness of the textual narration of graph fragments is important.Generating high-quality text can be particularly challenging for large-scale and open-domain knowledge graphs.Specifically, benchmark datasets in this line of research either are hand-crafted and monotonous, e.g., WebNLG (Gardent et al., 2017a) or only include simple, special formations in narrated input fragments, e.g., EventNarrative (Colas et al., 2021) and TEKGEN (Agarwal et al., 2021).Existing graphto-text models, being trained and evaluated on these datasets, are largely not validated for more realistic large-scale, open-domain settings.Section 2 presents this analysis in detail. This paper introduces GraphNarrative, a new dataset that fills the aforementioned gap between graph-to-text models and real-world needs. GraphNarrative consists of around 8.7 million (input graph, output text) pairs.The text in each pair is a Wikipedia sentence, whereas the corresponding graph comprises Freebase (Bollacker et al., 2008) entities and relationships described in the sentence.The large-scale of both Wikipedia and Freebase, the linguistic variation in Wikipedia, and the complexity of sentences and corresponding graph structures make this dataset more aligned with real-world scenarios.For instance, GraphNarrative's 8.7 million input graphs are in 7,920 distinct topological shapes and 22% of the 8.7 million are star graphs, in contrast to 94% and 96% in EventNarrative and TEKGEN, respectively.Section 3 articulates the details of GraphNarrative's creation. 
Given the demonstrated efficacy of fine-tuning pre-trained language models (PLMs) in producing state-of-the-art results on graph-to-text (more details in Section 4), we adopt the same approach.As pointed out in (Agarwal et al., 2021;Dušek et al., 2018), though, this approach may suffer from information hallucination, i.e., the output texts may contain fabricated facts not present in input graphs.For example, given a two-triple input graph {(Neff Maiava, date of birth, 01 May 1924), (Neff Maiava, date of death, 21 April 2018)}, (Agarwal et al., 2021) reported their model generates "Neff Maiava (1 May 1924 -21 April 2018) was an Albanian actor."Not only the input does not mention Maiava's profession or citizenship, but also in the real-world he was an American Samoan wrestler instead. Very few have considered how to mitigate hallucination in graph-to-text generation, except for (Agarwal et al., 2021;Wang et al., 2021;Ma et al., 2022).The first two studies attempted to address hallucination by further fine-tuning PLMs on WebNLG after fine-tuning on noisier automaticallyextracted datasets.(Ma et al., 2022) adopted a different approach, by filtering out training instances when the ROUGE-1 (Lin, 2004) scores between the input and the output fall below a certain threshold.However, these studies did not quantify the prevalence of hallucination in their models' outputs.Nor did they provide direct experiment results or other evidence to verify the approach in reducing hallucination.We are the first to quantitatively measure the prevalence of hallucination in graphto-text.We also developed a novel approach to mitigating hallucination by aiming at the problem's root-mismatch between graph and text in training data.Given a graph-text pair in GraphNarrative, the approach trims the text, i.e., a Wikipedia sentence, by eliminating portions not represented in the graph.This process, named sentence trimming, is accomplished by analyzing the shortest paths between graph entities within the sentence's dependency parse tree (details in Section 5). We conducted comprehensive automatic and human assessments of text descriptions generated by fine-tuned PLMs, specifically BART (Lewis et al., 2020) and T5 (Raffel et al., 2020).The automatic evaluation results consistently demonstrated that models performed better with the use of sentence trimming, across the datasets of GraphNarrative, TEKGEN, WebNLG, and DART (Nan et al., 2021).The approach led to the increment of 12 and 7 points in BLEU score (Papineni et al., 2002) for GraphNarrative and TEKGEN, respectively.A T5large model fine-tuned on GraphNarrative with sentence trimming achieved state-of-the-art results on the WebNLG benchmark.Furthermore, human evaluation results showed that sentence trimming on average reduced 1.4 entity hallucinations and 1 relationship hallucination per text description. The contributions of this paper are as follows. • A new dataset, GraphNarrative, that fills the gap between existing datasets and large-scale realworld settings. • The first to quantify hallucinations produced by graph-to-text models. • A novel approach, sentence trimming, to hallucination mitigation. • Comprehensive experiments and evaluations that verify the quality and utility of GraphNarrative, as well as the effectiveness of sentence trimming. 
Limitations of Existing Datasets First, most previous models were trained on small hand-crafted datasets that contain limited entity types and relations.For instance, WebNLG includes 2,730 distinct entities and 354 distinct relations.In contrast, real-world knowledge graphs can be much larger.For example, according to (Heist et al., 2020), Wikidata (Vrandečić and Krötzsch, 2014) has 52,252,549 entities, 2,356,259 classes, 6,236 relations, and 732,420,508 triples.The handcrafted approach cannot scale to these massive knowledge graph, as it is impossible to manually write training graph-text pairs for so many different entity types, relations, and topic domains. Second, the text descriptions in hand-crafted datasets such as WebNLG tend to follow monotonous templates, plausibly because the examples were written by a small number of human contributors.This limits the capability of trained models to use diverse expressions in narrating graph fragments.This lack of linguistic variation can hamper the usability of a text generation system. Third, the graph fragments in existing datasets are largely limited to simple star graphs (each graph consisting of a center entity and some of its one-hop neighbors) or more general acyclic graphs (i.e., one or more trees).The graphs in WebNLG have 41 distinct topological shapes (Appendix D), out of which 32 are acyclic graphs.The cycles are all 2-edge loops or self-loops.In DART, 83% of the graphs are star graphs.In automaticallygenerated datasets EventNarrative and TEKGEN, 94% and 96% of the graphs are star graphs, respectively.Another automatically-collected dataset, AGENDA (Koncel-Kedziorski et al., 2019), has only 2% star graphs.But it only contains 7 distinct relations in the special domain of scientific research.On the contrary, in practical scenarios the input fragments can be of complex, general rather than simple, special formations.While direct measurement is lacking, we used the graphs described in Wikipedia sentences as a proxy for gauging the shape diversity of graphs that need to be narrated.We manually analyzed the formations of graphs presented in 100 random Wikipedia sentences, and we found only 39 of the 100 graphs are star graphs.Similar but automatic analysis of the complete Wikipedia corpus (more details in Section 3, Figure 2) show that only 2 of the 10 most frequent graph formations1 are star graphs, and 3 are cyclic graphs. The GraphNarrative Dataset This section explains how we generated our new dataset GraphNarrative by aligning Wikipedia texts with Freebase.Note that the methodology could be applicable to text corpora beyond Wikipedia and knowledge graphs beyond Freebase.This section also contrasts GraphNarrative with existing benchmark datasets to demonstrate how it addresses current datasets' limitations. Dataset Creation: Graph-Text Alignment For each applicable Wikipedia sentence W , we create the corresponding subgraph G in Freebase, to form a graph-sentence pair (G, W ) as one example instance in the dataset.See Figure 1 for an example.This is achieved by an entity linking step followed by an edge detection step. 
Entity linking.It maps a span of tokens in the Wikipedia sentence W to an entity e in Freebase.1995), wikification (Csomai and Mihalcea, 2008), and Wikipedia-to-Freebase entity mapping.The entity mapping (more details in Section B.3) created 4,408,115 one-to-one mappings between English Wikipedia entities (i.e., articles) and Freebase entities, through a combination of three engineering methods-by using existing mapping in Freebase, by using Wikidata as the midpoint connecting Wikipedia and Freebase entities, and similarly by using DBpedia (Auer et al., 2007) as the midpoint.For wikification, our simple approach maps a span of tokens in a Wikipedia article D to a Wikipedia entity, if the tokens exactly match either the entity's full title or any of the entity's wikilink anchor text in the same article D. For coreference resolution, we applied the implementation (Lee et al., 2017) in AllenNLP (Gardner et al., 2017) on Wikipedia articles to replace pronouns and aliases with corresponding entities.The results of aforementioned processes were put together-a Wikipedia entity appearance in a Wikipedia sentence, either originally as a wikilink or detected through wikification upon coreference resolution, leads to the detection of the corresponding Freebase entity via the mapping results. Edge detection.Given the Freebase entities detected from a Wikipedia sentence W , it identifies Freebase edges between the entities such that the corresponding relations are described in W .Given a pair of such entities, if Freebase contains only one edge between them, our simple method assumes the corresponding relationship is described in W .If Freebase has multiple edges between them, we include the edge whose label tokens overlap with W .If there are still multiple such edges, we include the edge that is most frequent in Freebase.All these detected edges form the graph G that pairs with W as an instance (G, W ) in the dataset.Note that the simple assumptions in this approach may lead to both false positives and false negatives.In practice, the resulting dataset has solid quality (detailed assessment in Section 6.2).Nevertheless, our workflow of dataset creation allows for more advanced and accurate methods in each component. Characteristics of GraphNarrative This section qualitatively and quantitatively analyzes how GraphNarrative bridges the gap between graph-to-text models and real-world settings. Scale and variety of entities and relations. GraphNarrative contains 8,769,634 graph-sentence pairs, 1,853,752 entities, 15,472,249 triples, and 1,724 relations from 84 Freebase domains (see Appendix B.1).As Table 1 shows, most other datasets are significantly smaller in these aspects.Linguistic variation.Using Wikipedia as the corpus, the graph-text pairs in GraphNarrative allow a model to learn from many Wikipedia authors' diverse narrations.On the contrary, text in handcrafted datasets such as WebNLG and DART tend to follow monotonous templates from a small number of human contributors. Graph structure complexity. The graphs in GraphNarrative contain 1-15 triples and 2-20 entities, in 7,920 distinct topological shapes based on graph isomorphism.(Detailed distributions of graph instances and shapes are in Appendix B.2.) Figure 2 displays the 10 most frequent shapes along with their instance counts.Furthermore, only 22% of the instance graphs are star graphs.On the contrary, EventNarrative and TEKGEN are dominated by star graphs, as Table 1 shows. 
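A minimal sketch of the edge-detection heuristic described in Section 3.1 above is given below. The dictionary-style inputs and the tie-breaking used when no relation label overlaps the sentence are assumptions of this sketch; the paper does not specify the no-overlap case.

def detect_edges(entity_pairs, freebase_edges, sentence, edge_freq):
    # freebase_edges: (e1, e2) -> list of relation labels between the two entities (assumed precomputed)
    # edge_freq:      relation label -> global frequency in Freebase (assumed precomputed)
    sent_tokens = set(sentence.lower().split())
    graph = []
    for e1, e2 in entity_pairs:
        relations = freebase_edges.get((e1, e2), [])
        if not relations:
            continue
        if len(relations) == 1:                    # a single edge is assumed to be described in W
            graph.append((e1, relations[0], e2))
            continue
        # multiple edges: prefer relations whose label tokens overlap with the sentence
        overlapping = [r for r in relations
                       if set(r.replace("_", " ").lower().split()) & sent_tokens]
        pool = overlapping or relations            # fallback when nothing overlaps (sketch's choice)
        best = max(pool, key=lambda r: edge_freq.get(r, 0))   # then take the most frequent edge
        graph.append((e1, best, e2))
    return graph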
Following the state-of-the-art approach, we also fine-tuned T5 and BART on GraphNarrative and other datasets in comparison.In training and applying a graph-to-text model, an instance graph is linearized into a token sequence.Following the method in (Ribeiro et al., 2021), the graph in Figure 1 would be linearized as "<H> John Douglas <R> place of birth <T> Morgantown, West Virginia <H> John Douglas <R> education institution <T> Tates Creek High School <H> Tates Creek High School <R> location <T> Lexington, Kentucky" where the special tokens <H>, <R> and <T> denote subjects, relations and objects, respectively. Mitigation of Hallucination The culprit of the hallucination problem discussed in Section 1 is fabrication in training data-textual descriptions containing information not found in input graphs.This is evidenced by that, while graph-to-text models frequently produce hallucination when trained on TEKGEN, it rarely happens on WebNLG.Hallucinated facts are seldom found in the clean, manually-crafted WebNLG but are present in automatically extracted graph-text pairs in TEKGEN due to extraction errors. There could be two plausible directions in tackling graph-to-text hallucination.One is to improve our graph-text alignment method (Section 3.1).The graph extracted from a piece of text during alignment may miss certain entities or relationships due to either extraction errors or disparities between the text corpus and the knowledge graph.The resulting graph-text pair may misguide the trained model to hallucinate facts.A more accurate alignment method can reduce such erroneous pairs and thereby reduce hallucination.However, this method has an inherent limitation-since a knowledge graph in real-world is often far from complete, there will be facts in text that cannot be mapped to the knowledge graph.Nevertheless, in principle, a way to combine this approach with the other approach discussed below is open for investigation. This study explores a different direction in mitigating hallucination.Given a (Freebase subgraph G, Wikipedia sentence W ) pair produced by alignment, we introduce a sentence trimming algorithm (Algorithm 1 in Appendix A) to turn W into a trimmed sentence W trim by eliminating portions that are not present in G while preserving the sentence's main idea.Below we provide a sketch of the algorithm, while keeping its pseudo code and description in Appendix A. First, the algorithm parses W and generates its dependency parse tree (DPT) W tree , using spaCy (Honnibal et al., 2020).Then, for each triple t i = (s i , p i , o i ) ∈ G, it identifies the shortest dependency path (SDP) between s i and o i , i.e., the shortest path between the two entities' tokens in W tree .It then finds the leftmost position index min_pos in sentence W among all tokens on all triples' SDPs, and similarly the rightmost position index max_pos.This process results in the trimmed sentence W trim , a sub-sequence of W spanning from min_pos to max_pos. An example is in Figure 3 which illustrates the Note that, a regular DPT will break up entities such as Backup Software into individual tokens, each for a node in the DPT.To avoid that, we used a modified concept of DPT-we preprocessed entity names and tokenized each entity's name into a single token.Speficially, the two tokens Backup and Software were combined into token BackupSoftware. 
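A minimal sketch of the trimming step just described is shown below, using spaCy (which the paper also uses for dependency parsing). The BFS path search and the assumption that each entity name has already been merged into a single token are implementation choices of this sketch, not necessarily the authors' exact code.

import spacy
from collections import deque

nlp = spacy.load("en_core_web_sm")

def sdp_indices(doc, i, j):
    # Shortest path between token positions i and j in the (undirected) dependency tree.
    adj = {t.i: set() for t in doc}
    for t in doc:
        if t.head.i != t.i:                      # the root token points to itself in spaCy
            adj[t.i].add(t.head.i)
            adj[t.head.i].add(t.i)
    prev, seen, queue = {}, {i}, deque([i])
    while queue:
        u = queue.popleft()
        if u == j:
            break
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                prev[v] = u
                queue.append(v)
    path, u = [j], j
    while u != i:
        u = prev[u]
        path.append(u)
    return path

def trim(sentence, entity_pairs):
    # entity_pairs: one (subject, object) surface-form pair per triple of the aligned graph.
    # Assumes entity names were pre-merged into single tokens, as noted above.
    doc = nlp(sentence)
    index_of = {t.text: t.i for t in doc}
    positions = []
    for s, o in entity_pairs:
        if s in index_of and o in index_of:
            positions.extend(sdp_indices(doc, index_of[s], index_of[o]))
    if not positions:
        return sentence
    lo, hi = min(positions), max(positions)      # min_pos and max_pos of the algorithm
    return doc[lo:hi + 1].text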
Datasets We performed experiments on four datasets: GraphNarrative, TEKGEN (the large-scale, opendomain graph-to-text dataset that resembles ours the most), and WebNLG and DART (two humanannotated datasets).Detailed statistics about these and other datasets can be found in Table 1. • GraphNarrative is partitioned into training, development and test sets in accordance with the process elaborated below.Each edge in Freebase belongs to a topic domain.Every instance in GraphNarrative, i.e., a (graph, sentence) pair, is assigned a domain, using the most frequent domain among the graph's edges.We then divided GraphNarrative into seen and unseen partitions according to the numbers of instance pairs in different domains.Domains with very few (less than 2,000) pairs were designated as unseen domains, while the remaining domains are seen.A full list of the seen and unseen domains is in Appendix B.1.All instances in unseen domains go to test set.In the seen partition, 90%, 5% and 5% of the instances are allocated for training, development and test, respectively.This resulted in 7,880,214 instances in the training set, 437,514 in the development set, and 451,906 in the test set, including 13,453 instances from unseen domains.Having unseen instances in the test set helps us evaluate models' generalization ability.Choosing domains with limited discussion ensures that the model has encountered only a few such instances during pre-training of PLMs. • In TEKGEN, each instance pair contains a Wikipedia sentence and a Wikidata subgraph extracted from the sentence.We used the original training, development and test set partitions from (Agarwal et al., 2021).We could not use all instances due to lack of mappings between entity names and their surface texts.Without such information sentence trimming cannot be applied.To maximize the utility of available instances, we used aliases sourced from TEKGEN and leveraged regular expressions to identify time and people's names.Consequently, we obtained 3,811,288 instances for training, 476,439 for development, and 484,958 for test, out of the original 6,310,061, 788,746, and 796,982 instances, respectively. • In the standard WebNLG 2017 challenge dataset, each instance is composed of a graph from DBpedia and one or multiple sentences written by human annotations to describe the graph's content.Its test set is divided into the seen partition, which contains 10 DBpedia categories present in the training and development sets, and the unseen partition, which covers 5 categories absent from the training and development sets.We used the same partitioning as in the dataset. • DART is a data-to-text dataset that comprises pairs of (triple-set, sentence) gathered from a variety of sources, including WebNLG, E2E (Novikova et al., 2017), and sentences collected through crowdsourcing and paired with tables extracted from WikiSQL (Zhong et al., 2017) and WikiTable-Questions (Pasupat and Liang, 2015).We used the original partitioning of training, development and test sets in DART. 
Human & Automatic Evaluation Metrics Human evaluation metrics.We evaluated the quality of both the GraphNarrative dataset and the sentences generated by models, focusing on whether sentences in the dataset or produced by models fabricate facts that are not in the corresponding graphs narrated by the sentences.To the best of our knowledge, no prior study has quantitatively evaluated the quality of graph-to-text datasets or models with regard to hallucination.Specifically, we define the following four metrics: numbers of hallucinated entities (entities not present in the graph but mentioned in the sentence), missed entities (entities present in the graph but not mentioned in the sentence), hallucinated relations (relations not present in the graph but mentioned in the sentence), and missed relations (relations present in the graph but not mentioned in the sentence). In addition, we also evaluated the quality of sentences using average grammar errors per sentence, on a scale of 1-5: 5 (no errors), 4 (one error), 3 (two to three errors), 2 (four to five errors), and 1 (more than five errors). Experiment and Evaluation Results 1) GraphNarrative dataset quality.Three human annotators evaluated the quality of the graphsentence pairs in GraphNarrative.We randomly chose 100 pairs, where each sentence has the original version and the trimmed version using the algorithm in Section 5.The total 200 pairs were then shuffled so that annotators cannot tell whether a sentence is original or not.Each human annotator scored all 200 pairs using the metrics in Section 6.2, and their scores were averaged.graph-sentence pair are 1.163 and 1.340, respectively.This reflects the challenges in graph-to-text alignment and the source of hallucination, as explained in Section 5. Applying sentence trimming reduced these numbers to 0.306 entities and 0.453 relations, clearly showing its effectiveness in enhancing graph-text alignment.On the other hand, when graphs were extracted from corresponding sentences to form GraphNarrative, information not present in the sentences was seldom introduced into the graphs, as reflected in the small missed entities and relations, both less than 0.1.Sentence trimming only slightly increased missed relations from 0.040 to 0.083, showing insignificant side effect of removing from sentences information covered in corresponding extracted graphs.With regard to grammar, while sentence trimming led to a slight decline in the grammar score, the difference (4.793 vs. 4.613) is not substantial. 2) Model performance on GraphNarrative.We fine-tuned various T5 (small: 60M parameters, base: 220M parameters, and large: 770M parameters) and BART (base: 140M parameters, large: 400M parameters) models on GraphNarrative for 10 6 steps with a batch size of 8 using the Adam optimizer (Kingma and Ba, 2014) and an initial learning rate of 3 × 10 −5 .We employed a linearly decreasing learning rate schedule without warm-up and set the maximum target text length to 384 tokens.Our implementation was based on (Ribeiro dataset quality-a hallucinated entity refers to an entity from the original sentence that is missed in the corresponding extracted graph!We decided to tolerate this potential confusion for the sake of consistent metric definition.et al., 2021), which adapted PLMs from Hugging Face (Wolf et al., 2019) for graph-to-text.The automatic evaluation results of different models on GraphNarrative are in Table 3. 
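A sketch of the fine-tuning setup with the hyperparameters stated above (Adam, learning rate 3e-5, linear decay without warm-up, maximum target length 384, batch size 8, 10^6 steps) is given below, using Hugging Face transformers. Treating <H>/<R>/<T> as added tokens and the details of the training loop are assumptions of this sketch; the authors' implementation builds on (Ribeiro et al., 2021).

import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast, get_linear_schedule_with_warmup

def linearize(triples):
    # "<H> s <R> p <T> o ..." serialization, as described in Section 4
    return " ".join(f"<H> {s} <R> {p} <T> {o}" for s, p, o in triples)

tokenizer = T5TokenizerFast.from_pretrained("t5-large")
tokenizer.add_tokens(["<H>", "<R>", "<T>"])          # structural markers for the input graph
model = T5ForConditionalGeneration.from_pretrained("t5-large")
model.resize_token_embeddings(len(tokenizer))

optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)
total_steps = 1_000_000                               # 10^6 steps, batch size 8 (Section 6.3)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0,
                                            num_training_steps=total_steps)

def training_step(batch_graphs, batch_sentences):
    inputs = tokenizer([linearize(g) for g in batch_graphs],
                       return_tensors="pt", padding=True, truncation=True)
    targets = tokenizer(batch_sentences, return_tensors="pt", padding=True,
                        truncation=True, max_length=384)
    labels = targets.input_ids.masked_fill(targets.input_ids == tokenizer.pad_token_id, -100)
    loss = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask,
                 labels=labels).loss
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
    return loss.item()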
Fine-tuning the T5large model attained the best performance across most metrics, consistent with findings on WebNLG in (Ribeiro et al., 2021;Wang et al., 2021). 3) GraphNarrative in enhancing generalization ability.To assess if GraphNarrative may enhance PLMs' generalization ability, we conducted both zero-shot learning and fine-tuning experiments employing GN-T5 and GNST-T5 on WebNLG and DART, where GNST-T5 denotes the fine-tuned T5large model on GraphNarrative with sentence trimming, and GN-T5 denotes the counterpart without sentence trimming.They are also compared with the original T5-large model as a point of reference. Zero-shot results.For zero-shot learning, we directly applied the above-mentioned three models on the test sets of WebNLG and DART.The results are in Table 4.The results reveal that fine-tuning PLM on GraphNarrative substantially improves its generalization capabilities. Fine-tuning results.We subjected the three models to further fine-tuning on WebNLG and DART for 100 epochs with an early stopping patience of 20 epochs, while keeping other hyperparameters consistent with those in Part 2, Section 6.3.No trimming was performed on WebNLG and DART, as their sentences were authored by human annotators, with very few hallucinated or missed entities and relations.Table 5 compares the performance of different graph-to-text models on WebNLG test set, including the reprint of the results from seven prior studies.GNST-T5 fine-tuned on WebNLG outperformed others on most metrics, particularly in the unseen category.This improvement suggests that GraphNarrative enhances the generalization ability of PLMs.Table 7 shows the fine-tuning results on DART test set.The model performance improvement by sentence trimming is not obvious.This is further discussed in Part 4, Section 6.3. BLEU METEOR chrF++ all seen unseen all seen unseen all seen unseen (Gardent et al., 2017b) 33.24 52.39 6.13 23.00 37.00 7.00 --- (Marcheggiani and Perez-Beltrachini, 2018) 4) Ablation study of sentence trimming.We demonstrate the effectiveness of sentence trimming in improving model performance on GraphNarrative, TEKGEN, WebNLG, and DART by fine-tuning PLMs with and without sentence trimming, respectively.(1) For GraphNarrative, we fine-tuned T5 and BART models using the setup described in Part 2, Section 6.3.(2) For TEKGEN, we fine-tuned the T5-large and BART-large models using the serialized triples from (Agarwal et al., 2021), with the same hyperparameters as in Part 2, Section 6.3.(3) For WebNLG and DART, we conducted zero-shot learning and fine-tuning experiments as described in Part 3, Section 6.3.(4) Additionally, on the WebNLG dataset, we carried out further fine-tuning of the T5-large model fine-tuned on TEKGEN in (2), applying the same hyperparameters as in Part 3, Section 6.3. 
The results of ( 1) and ( 2) are in Tables 3 and 8.The metrics (BLEU, METEOR, chrF++) consistently improve with sentence trimming, further verifying the efficacy of sentence trimming.The results of (3) are in Tables 4, 6 and 7, and Ta-ble 6 also shows the results of (4).In these results, the fine-tuned PLMs on GraphNarrative and TEKGEN with sentence trimming consistently outperformed their non-trimming counterparts.These findings underscore the effectiveness of sentence trimming in enhancing PLM performance.It is worth noting that, as Tables 6 and 7 show, on human-annotated WebNLG and DART the models did not gain much from sentence trimming after they are fine-tuned on these datasets.The main reason is that human-annotated datasets generally have well-aligned graph-text pairs and thus cannot be substantially improved by trimming. 5) Sentence trimming in mitigating hallucination. We randomly sampled 100 graphs from GraphNarrative test set, along with the corresponding sentences generated by GNST-T5 and GN-T5.We shuffled the 200 pairs and used three human evaluators to score the pairs, in the same fashion as in Part 1, Section 6.3.The results are in Table 9, which shows a reduction of 1.4 hallucinated entities and 1.0 hallucinated relations per instance from GN-T5 to GNST-T5, suggesting that sentence trimming effectively mitigates hallucinations.Furthermore, sentences generated by both models exhibit on average less than 0.07 missed entities and 0.38 missed relations per instance.Regarding grammar, sentences generated by GNST-T5 received (Goldie Gets Along, film directed by, Malcolm St. Clair filmmaker) (Goldie Gets Along, film performance actor, Lili Damita) (Goldie Gets Along, film performance actor, Charles Morton actor) Goldie Gets Along is a 1951 American comedy film directed by Malcolm St. Clair (filmmaker) and starring Lili Damita and Charles Morton (actor). Goldie Gets Along was directed by Malcolm St. Clair (filmmaker) and starred Lili Damita and Charles Morton (actor). Table 10: Comparison of generated sentences with and without sentence trimming for sample input graphs slightly lower scores than GN-T5.Nevertheless, these scores remain acceptable, with on average less than one grammar error per instance. Table 10 illustrates the sentences generated by GNST-T5 and GN-T5 for a few input graphs.GN-T5 tends to fabricate facts that are incorrect or nonexistent in the real world (e.g., Arthur Morry's age of death, the renaming of the US Naval Academy, and Goldie Gets Along's year of release) or not present in input graphs (e.g., Goldie Gets Along's genre).In contrast, GNST-T5 generated fluent sentences without fabricating facts, barring a phrase instead of a complete sentence for the second example. 6) Limitations of star graph datasets.As explained in Section 1, existing large-scale datasets such as TEKGEN contain predominantly star graphs.We used GraphNarrative to investigate the limitations of star graph datasets. 
More specifically, we separated the graphsentence pairs in GraphNarrative into star instances (with star graphs) and non-star instances (without star graphs).We excluded instances with two or three entities, as they could be considered as both paths and stars.Table 11 provides the distributions of these two types of instances.The number of nonstar instances in all three sets is approximately 3.5 times as many as the star instances.To help ensure a fair comparison, we randomly selected an equal number of non-star instances as the star instances for each of the three sets, e.g., there are 290,047 star graphs and the same number of non-star graphs in the training set of our prepared dataset. Using the dataset prepared this way, we finetuned T5-large model with and without sentence trimming for 10 epochs under early stopping patience 5, using the same other hyperparameters as in Part 2, Section 6.3.The results are in Table 12.Across the board, models trained using star instances exhibited the highest performance when tested using star instances too, and similarly regarding non-star instances.Furthermore, models trained on non-star instances and tested on star instances tended to outperform models trained on star instances and tested on non-star instances.These results indicate that a PLM fine-tuned on a dataset consisting solely of star graphs performs poorly when applied to general graph shapes, which are commonly encountered in real-world applications.Fine-tuning PLMs on diverse graph shapes enhances their generalization capability. Conclusion In this paper, we proposed a novel approach to mitigating hallucination in natural language generation from large-scale, open-domain knowledge graphs.We released a large graph-to-text dataset with diverse graph shapes that fills the gap between existing datasets and real-world settings.The experiment results show the effectiveness of our hallucination mitigation approach as well as the usefulness of the dataset. Limitations 1) The creation of GraphNarrative and the sentence trimming method leverage an existing mapping between the knowledge graph entities and Wikipedia entities.Given other text corpora and knowledge graphs, creating such a mapping is a non-trivial undertaking that often requires named entity recognition and disambiguation techniques. 2) The sentence trimming approach may introduce grammatical errors into generated sentences. 3) The method focuses on describing the content of an input graph only, without considering context information such as neighboring entities in the knowledge graph.Such extra information may be preferred by a user given certain application contexts or may make the input graph's narration more natural.4) The creation of GraphNarrative does not consider multiary relationships in knowledge graphs.More specifically, the Freebase used in our work is a version in which multiary relationships were converted into binary relationships (Shirvani-Mahdavi et al., 2023).In general, there is a lack of inquiry into multiary relationships in graph-to-text models. 
To the best of our knowledge, the only work in this area that discusses such multiary relationships is (Agarwal et al., 2021) and they also converted multiary relationships into binary ones.5) A couple of studies (Agarwal et al., 2021;Wang et al., 2021) attempted to address hallucination by further fine-tuning PLMs on WebNLG after fine-tuning on noisier automatically-extracted datasets.It will be informative to conduct a human evaluation comparison between their approaches and the sentence trimming method proposed in our work.Similarly, our future work includes a human evaluation comparison with the filtering-based method (Ma et al., 2022) which we empirically compared with in Appendix C.1.6) The sentence trimming algorithm only removes irrelevant portions from the beginning and the end of a sentence, leaving the token sequence in the middle intact.It is possible the middle portion also contains tokens irrelevant to the input graph. Ethics Statement In the course of conducting our research, we have striven to remain aware and attentive to potential ethical implications and challenges.Our work was informed by the following ethical considerations.Ethical use of generated content.Given that our research focuses on producing natural language de-scriptions of knowledge graphs, we are particularly aware of the potential misuse of our method for the generation of false, deceptive, biased or unfair contents.Particularly, our sentence trimming method aims to minimize such potential misuse by aiding in reducing hallucinations. We also recognize that natural language descriptions generated using our dataset and algorithm can be repurposed in various ways.We firmly urge users and developers to use this content responsibly, particularly with respect to intellectual property rights.Furthermore, we recommend users clearly label AI-generated content, promoting transparency and trust. Data privacy and bias.Our GraphNarrative dataset uses publicly available data, particularly Freebase and Wikipedia, which do not contain information that violates anyone's privacy to the best of our knowledge. Our reliance on Wikipedia may inadvertently introduce bias, as Wikipedia content can reflect the views of its contributors.We are also aware this potential bias could be more intense in less commonly spoken languages, where the number of contributors might be limited.verse pairs.If the input graph triples to a graphto-text model containing such reverse edges, we only need to simply retain one edge out of each redundant pair.Hence, we did exactly that in preprocessing the whole Freebase dump so that our input graphs have no reverse edges.Furthermore, our pre-processing also removed the mediator (CVT) nodes (Bollacker et al., 2008) by concatenating edges connected through mediator nodes. B.3.2 Graph-text alignment Wikipedia-to-Freebase entity mapping.We collected a Wikipedia-to-Freebase entity mapping between 4,408,115 English Wikipedia titles and their corresponding Freebase entities.The mapping was created by employing three methods, as follows.1) Parsing the Freebase data dump to obtain a Wikipedia-to-Freebase entity mapping using https://github.com/saleiro/Freebase-to-Wikipedia.2) Inferring from a Wikipedia-to-Wikidata mapping in wikimapper (Klie, 2022) core-i18n/en/. The overall Wikipedia-to-Freebase entity mapping is obtained by combining all three methods and eliminating conflicting entity mappings.The mapping file link can be found at https://github.com/idirlab/graphnarrator. 
Coreference resolution.To produce more graphtext pairs for GraphNarrative, we used AllenNLP's coreference resolution (Gardner et al., 2017;Lee et al., 2017) in default settings to replace Wikipedia token spans with the entities they refer to.We conducted human evaluation to assess the quality of the coreference resolution results on 20 randomly selected Wikipedia articles.The assessment yielded a precision of 91.4% (630 of the 689 resolved entity coreferences were correct) and a recall of 98.3% (11 entity coreferences were missed). #Triples BLEU (GN-T5) BLEU (GNST-T5) We compared sentence trimming with a similar but different filtering method proposed in (Ma et al., 2022).Their method also aimed to reduce disparities in datasets as a way of mitigating hallucination.However, different from our approach which aligns sentences better with input graphs by trimming away portions of sentences, the filtering method removes graph-text pairs from the DART dataset where the ROUGE-1 similarity score between the graph and the text is below 0.8. We applied the same filtering method on GraphNarrative.Table 16 provides a breakdown of the remaining instances after filtering using different thresholds.A relatively low threshold of 0.3 removed 43.32% of the instances in GraphNarrative. When we raised the threshold to 0.8, almost all instances were eliminated.In comparison, the threshold of 0.8 applied on DART allowed for retaining 88% of its instances.This is because the humanannotated DART has well-aligned graph-text pairs.We compared sentence trimming with filtering using the 269,541 instances left in the training set and 71,565 in the development set, under threshold 0.5.We fine-tuned the T5-large model for 10 epochs with early stopping patience 5, using the same other hyperparameters as on the full dataset.The number of training steps is different from the full dataset because this subset is about 30 times smaller.We used early stopping to avoid overfitting.Then we used the resulting model, which we call Filter-T5, for zero-shot prediction on WebNLG and DART test sets.The results are shown in Table 4. GNST-T5 slightly outperformed Filter-T5.To understand this, we compared the statistics of the filtered dataset and the full dataset (and thus the dataset after sentence trimming since trimming does not alter the graphs in the dataset), as in Table 17.The filtered dataset exhibits a significant reduction in size and diversity in terms of number of distinct entities, relations, triples and shapes.We conjecture that this contributes to its performance degeneration in comparison with GNST-T5. C.2 Performance of GNST-T5 and GN-T5 by input size Table 18 shows the performance of GNST-T5 and GN-T5 in BLEU scores on graphs of varying sizes, i.e., number of triples.The results help gauge whether the models generalize well for long inputs.Notably, the performance of both models on extended inputs is better than or on par with their performance on shorter inputs. 
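Returning to the filtering baseline of Appendix C.1, the sketch below shows how (graph, sentence) pairs can be filtered by unigram overlap. The simplified ROUGE-1 F-score and the whitespace tokenization are approximations made for illustration, not the exact scorer used by (Ma et al., 2022).

def rouge1_f(reference_tokens, candidate_tokens):
    # Simple unigram-overlap F1; real ROUGE implementations differ in details (counts, stemming)
    ref, cand = set(reference_tokens), set(candidate_tokens)
    overlap = len(ref & cand)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(cand), overlap / len(ref)
    return 2 * p * r / (p + r)

def filter_pairs(pairs, threshold=0.5):
    # Keep (graph, sentence) pairs whose graph/text unigram overlap reaches the threshold
    kept = []
    for graph, sentence in pairs:
        graph_tokens = " ".join(" ".join(t) for t in graph).lower().split()
        if rouge1_f(sentence.lower().split(), graph_tokens) >= threshold:
            kept.append((graph, sentence))
    return kept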
Figure 1 : Figure 1: A graph-sentence pair in GraphNarrativeOur customized entity linking solution consists of coreference resolution(McCarthy and Lehnert, 1995), wikification(Csomai and Mihalcea, 2008), and Wikipedia-to-Freebase entity mapping.The entity mapping (more details in Section B.3) created 4,408,115 one-to-one mappings between English Wikipedia entities (i.e., articles) and Freebase entities, through a combination of three engineering methods-by using existing mapping in Freebase, by using Wikidata as the midpoint connecting Wikipedia and Freebase entities, and similarly by using DBpedia(Auer et al., 2007) as the midpoint.For wikification, our simple approach maps a span of tokens in a Wikipedia article D to a Wikipedia entity, if the tokens exactly match either the entity's full title or any of the entity's wikilink anchor text in the same article D. For coreference resolution, we applied the implementation(Lee et al., 2017) in AllenNLP(Gardner et al., 2017) on Wikipedia articles to replace pronouns and aliases with corresponding entities.The results of aforementioned processes were put together-a Wikipedia entity appearance in a Wikipedia sentence, either originally as a wikilink or detected through wikification upon coreference resolution, leads to the detection of the corresponding Freebase entity via the mapping results.Edge detection.Given the Freebase entities detected from a Wikipedia sentence W , it identifies Freebase edges between the entities such that the corresponding relations are described in W .Given a pair of such entities, if Freebase contains only one edge between them, our simple method assumes the corresponding relationship is described in W .If Freebase has multiple edges between them, we include the edge whose label tokens overlap with W .If there are still multiple such edges, we include the edge that is most frequent in Freebase.All these detected edges form the graph G that pairs with W as an instance (G, W ) in the dataset.Note that the simple assumptions in this approach may lead to both false positives and false negatives.In practice, the resulting dataset has solid quality Figure 2 : Figure 2: 10 most frequent graph shapes in GraphNarrative, with instance counts Figure 3 : Figure 3: Dependency parse tree of sentence "FlyBack is an open-source Backup Software for Linux based on Git and modeled loosely after Apple's Time Machine." DPT of the sentence W in its caption.The corresponding graph G from the graph-text alignment process is {(FlyBack, software_genre, Backup Software), (FlyBack, operating_system, Linux), (FlyBack, basis, Git)}.Note that entities Apple and Time Machine in W are missing from G. The SDPs for the three triples are (①, ②), (①, ②, ③, ④), and (①, ②, ⑤, ⑥, ⑦), respectively.Given the SDPs, min_pos is attained by FlyBack and max_pos is attained by Git.Hence, W trim is "FlyBack is an open-source Backup Software for Linux based on Git".The sequence "and modeled loosely after Apple's Time Machine.", related to the missing entities Apple and Time Machine, is trimmed from W .Note that, a regular DPT will break up entities such as Backup Software into individual tokens, each for a node in the DPT.To avoid that, we used a modified concept of DPT-we preprocessed entity names and tokenized each entity's name into a single token.Speficially, the two tokens Backup and Software were combined into token BackupSoftware. 
Figure 4: Distribution of GraphNarrative instances by number of triples in graphs. Figure 5: Distribution of GraphNarrative instances by number of entities in graphs. The average sentence length is 20.66 tokens for trimmed sentences; Table 14 provides a detailed distribution of sentence lengths. Table 15 presents the average sentence token counts by number of triples in the graphs. It underscores that our model was trained using a diverse set of examples, including those with lengthy sentences and a substantial number of triples. Figure 6 displays the distinct graph shapes in the WebNLG dataset, in descending order by number of instances. Table 2 presents the results. In this and subsequent tables, sentence trimming is denoted ST. The average hallucinated entities and relations per ... Remaining table captions: Table 1: Comparison of graph-to-text datasets. Table 2: Human evaluation of GraphNarrative quality. Table 3: Model performance on GraphNarrative. Table 5: Performance comparison of different graph-to-text models on WebNLG test set. Table 6: Models' performance on WebNLG test set, when fine-tuned with TEKGEN or GraphNarrative and further fine-tuned with WebNLG. Table 7: Fine-tuning results on DART test set. Table 8: Performance of fine-tuning BART-large and T5-large on the TEKGEN dataset. Table 9: Human evaluation of sentences generated by T5-large with and without sentence trimming. Example model output: "During World War II, the US Naval Academy in Annapolis, Maryland was renamed the US Naval Academy in Annapolis, Maryland, and the US Naval Academy in Annapolis, Maryland was renamed the US Naval Academy in Annapolis, Maryland." Table 11: Number of star and non-star instances in ... Table 12: Model performance, star vs. non-star graphs. Table 13: Distribution of distinct GraphNarrative graph shapes by number of entities. Table 15: Average GraphNarrative sentence length by number of triples in graphs. Table 16: Number of remaining instances after filtering using different thresholds of ROUGE-1 similarity scores. Table 17: Statistics of GraphNarrative and its filtered dataset. Table 18: Distribution of GNST-T5 and GN-T5 model performance in BLEU scores on GraphNarrative test set.
9,118.2
2023-01-01T00:00:00.000
[ "Computer Science", "Linguistics" ]
Capacitance effect on the oscillation and switching characteristics of spin torque oscillators We have studied the capacitance effect on the oscillation characteristics and the switching characteristics of the spin torque oscillators (STOs). We found that when the external field is applied, the STO oscillation frequency exhibits various dependences on the capacitance for injected current ranging from 8 to 20 mA. The switching characteristic is featured with the emerging of the canted region; the canted region increases with the capacitance. When the external field is absent, the STO free-layer switching time exhibits different dependences on the capacitance for different injected current. These results help to establish the foundation for capacitance-involved STO modeling. Background The conventional way of changing the magnetization of a thin film is usually realized through applying an external magnetic field. In recent years, it has been found both theoretically [1][2][3] and experimentally [4,5] that a spin-polarized current which carries more spin up or spin down electrons can also change the magnetization when passing through the thin film. This effect helps to generate steady precession of the free-layer magnetization in a spin valve structure by an injected spin-polarized current, which results in a periodic variation of the device resistance and forms spin-torque oscillators (STOs) [6][7][8][9][10][11][12]. The advantages of the STO are its capability of generating microwave with ultra-wide bandwidth (from 100 MHz to 60 GHz) and its easy modulation at very high frequency. Its potential application as microwave generator has received unprecedented attention. Among the many unrevealed problems remained in the STO area, much research effort focuses on the STO authentic modeling. However, the capacitance effect is not considered at all in most previous studies [13][14][15]. Capacitance effect [13][14][15] being introduced by intrinsic sources (parasitic capacitance due to the interaction between the multilayer thin films in STOs) and extrinsic sources (lead capacitance due to the connection between the external IC and STOs) is inevitable during the preparation process of spin-torque oscillators (typically GMR multilayers). Therefore, in order to accurately reflect the characteristics of prepared spintorque oscillator devices, it is highly essential to explore the capacitance effect on oscillation characteristics and switching characteristics. Meanwhile, this research not only helps to establish the foundation for capacitanceinvolved STO modeling but also helps to reveal the origin of capacitance effect in nanodevices. Since our findings could be applied in the modeling of authentic STO, which is highly beneficial for supporting and guiding the fabrication process in nanotechnology and nanoscience industry. In this paper, a circuit model where a capacitor connected in parallel with a STO is proposed. The marcospin model is adopted to explore how the magnetodynamics of the STO is influenced by the capacitor. The oscillation characteristics and the switching characteristics are both fully studied. Methods As shown in Figure 1, a giant magnetoresistance (GMR)based STO consisting of a fixed layer, a nonmagnetic layer, and a free layer is modeled with a capacitor connected in parallel. An ideal current source I dc is applied. 
The time evolution of the free-layer magnetization is described by the Landau-Lifshiz-Gilbert equation with Slonczewski spin torque term [2] where m stands for the free-layer magnetization unit vector, γ stands for the gyromagnetic ratio, and α is the Gilbert damping parameter. H eff is the effective magnetic field acting on the free layer, and it consists of the contributions from the uniaxial magnetic anisotropy field H k , the demagnetization field H d , and the external inplane applied magnetic field H app . We obtain the effective field as: where e x and e z are the unit vectors along x (in-plane easy axis) and z (out-of-plane), respectively. In this study, the field-like spin torque term is considered [16,17]. Thus, the spin transfer torque (STT) term in Equation (1) can be written in general as: where a J and b J are the in-plane and perpendicular (or field-like) spin torque components, respectively. A linear relation between a J and b J is established [18] as: where μ 0 is the magnetic vacuum permeability, η is the spin transfer efficiency, M S is the free-layer saturation magnetization, and V f is the volume of the free layer. In this study, the free layer is composed of a typical CoFeB thin film with a circular shape with a dimension of 250 nm and thickness of 3 nm. The b J term in metallic spin valve structures is small. We define |β| = 10% in this study. Other parameter values are presented as follows [13]: Meanwhile, the continuity of the total dc current and the equal voltage drop across the two parallel branches result in the following equations: where R AP and R P stand for the 'anti-parallel' and the 'parallel' resistance of the STO, respectively, θ(t) is the angle between the magnetization of the fixed layer and that of the free layer, V C (t) is the instantaneous voltage across the capacitor, and I STO (t) is the current flowing through the STO branch, as shown in Figure 1c. The merging of (5) and (6) results in: The magnetic dynamics can then be numerically solved using (1) and (7). To further elaborate how to numerically solve Equations (1) and (7), we transform Equation (1) into the following set of differential equations in a spherical coordinate system: where H theta and H phi stand for the effective field in a spherical coordinate system, Is theta and Is phi stand for the current injected into the STO in a spherical coordinate system. Equation (7) can be transformed into the following set of differential equations in a spherical coordinate system: where I amper stands for the total current injected into the STO and the capacitor, Cap stands for the value of capacitance. By solving Equations (8), (9), and (10) using runge-kutta method [19], the time-varying θ, and I amper are identified, where the magnetic dynamics are then obtained. Results and discussion A. Oscillation characteristics with external field The oscillation characteristics are studied when external field is applied along the easy axis (x-axis) with the value H app =0.05 T. When I dc is applied, the free layer of the STO is in a steady precessional state where a stable frequency is induced. The presence of a parallel connected capacitor shares the injected dc current with the STO, which changes the free-layer magnetization precessional state to a new orbit. The STO oscillation frequencies are presented under different capacitance values in Figure 2. 'Opposite sign', 'Same sign', and 'GMR type' refer to frequency vs capacitance curves when β = −10%, β =10%, and β =0%, respectively. 
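The bodies of Equations (1) through (7) referenced above did not survive extraction. As a hedged point of reference only, the standard macrospin relations they describe usually take the following form; signs, prefactors, and the demagnetization convention vary between papers, so this should be read as a generic sketch rather than the authors' exact expressions.

```latex
% Generic macrospin relations (assumed standard forms, not the paper's exact equations)
\begin{aligned}
\frac{d\mathbf{m}}{dt} &= -\gamma\,\mathbf{m}\times\mathbf{H}_{\mathrm{eff}}
  + \alpha\,\mathbf{m}\times\frac{d\mathbf{m}}{dt} + \boldsymbol{\tau}_{\mathrm{STT}},
  &&\text{(cf. Eq. (1))}\\
\mathbf{H}_{\mathrm{eff}} &= \left(H_k m_x + H_{\mathrm{app}}\right)\mathbf{e}_x
  - H_d\, m_z\,\mathbf{e}_z,
  &&\text{(cf. Eq. (2))}\\
\boldsymbol{\tau}_{\mathrm{STT}} &= \gamma\, a_J\,\mathbf{m}\times(\mathbf{m}\times\mathbf{p})
  + \gamma\, b_J\,\mathbf{m}\times\mathbf{p},\qquad b_J = \beta\, a_J,
  &&\text{(cf. Eqs. (3)--(4))}\\
I_{\mathrm{dc}} &= I_{\mathrm{STO}}(t) + C\,\frac{dV_C(t)}{dt},\qquad
V_C(t) = I_{\mathrm{STO}}(t)\, R\!\left(\theta(t)\right),
  &&\text{(cf. Eqs. (5)--(7))}
\end{aligned}
```

Here p denotes the unit vector along the fixed-layer magnetization and R(θ) interpolates between the parallel and anti-parallel resistances, for example R(θ) = R_P + (R_AP − R_P)(1 − cos θ)/2; these conventions are assumptions made for illustration.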
When the injected current I dc is relatively small (8 and 9 mA) and the field-like term is ignored (β =0%), the increase of the capacitance leads to the general decrease of the oscillation frequency, as shown in Figure 2a,b. When the capacitance is in the range of 0.01 to 0.1 pF, this negative correlation is enhanced for β = −10% whereas it changes to positive correlation for β =10%. Meanwhile, when the capacitance is in the range of 1 to 100 pF, this negative correlation is enhanced for β =10% whereas it changes to positive correlation for β = −10%. This phenomenon is due to the fact that the field-like term is dependent on the applied bias voltage [12,16,17]. Either relatively small capacitance value or relatively large capacitance value would result in a large change of the bias voltage, which also induce a large change of the field-like term. When I dc reaches the value of 12 mA, a 'V-shape' trend formed between the frequency and the capacitance. Compared with the minimum peak for β =0%, the minimum peak for β = −10% occurs at lower capacitance value while it occurs at higher capacitance value for β =10%, as shown in Figure 2c. When I dc reaches the value of 20 mA, general positive correlation between the capacitance and the frequency is exhibited for β = −10%, β =10%, and β =0%. The field-like term (either β = −10% or β =10%) can result in higher oscillation frequency in this case. B. Switching characteristics with external field In Part A, it is discussed how the oscillation frequency behaves under different capacitance values. However, it is worth noting that as the injected current I dc increases to a critical value, the balance between the injected spin torque and damping cannot be maintained. The injected spin torque overwhelms the damping, resulting in the reversal of the free-layer magnetization from parallel state to antiparallel. However, in the case where the capacitance value is set to 0.1 pF (Figure 3), when the injected current I dc increases gradually to 114 mA, the magnetization switches from parallel state to a canted state instead of anti-parallel state. The magnetization trajectory in Figure 3c suggests that the magnetization finally stays in a static state with a canted angle. When I dc continues to increase to 246 mA, complete magnetization reversal is achieved from parallel state to anti-parallel state, as shown in Figure 3b. This concludes that the existence of the capacitance realizes a canted region (from 114 to 246 mA in this case) as a transition between parallel state and anti-parallel state. It has also been verified that without the capacitance, no canted region is observed in this system. The capacitance value is varied, and how the canted region evolves is explored in Figure 4. J c1 is defined as the current boundary separating oscillation state with canted state. As shown in Figure 4a, J c1 drastically decreases with capacitance in the range of 0.1 to 1 pF and tends to be stable with capacitance greater than 1 pF. J c2 is defined as the current boundary separating canted state and normal complete switching from parallel to antiparallel. As shown in Figure 4b, J c2 increases with capacitance in a quasi-exponential tendency from 0.1 to 1 pF. This tendency is repeated for capacitance in the range of 1 to 10 pF. The difference between J c2 and J c1 results in the canted region as shown in Figure 4c. Obviously, the canted region maintains a positive correlation with the capacitance. C. 
Switching characteristics without external field Part A and Part B investigate the situation where the external field H app is applied along the easy axis. In fact, for an in-plane magnetized STO in our system, the premises for a stable oscillation are the injected current and the external field. When the external field is absent, the injected current can only drive complete magnetization reversals from the parallel state to the anti-parallel state. When a relatively small current (7 mA) is injected, the variation of the STO resistance with the simulation time is shown in Figure 5a. It is found that the capacitance can influence the free-layer magnetization switching time. Meanwhile, the trajectory in Figure 5b demonstrates that the existence of the capacitance produces many unstable oscillation cycles before the final switching. More oscillation cycles are required before the final switching as the capacitance increases. When a relatively large current (30 mA) is injected, the variation of the STO resistance with simulation time is shown in Figure 6. In this situation, the influence of the capacitance on the switching time is not obvious. The switching time is presented in Figure 7 for different capacitance values. The STO free-layer switching time actually depends on three main factors: the damping constant, the in-plane spin torque component, and the critical spin torque that triggers the switching. In our study, we picked a damping constant of 0.008, which is close to the optimal value of 0.013 for thin-film switching. Thus we only consider the contributions from the in-plane spin torque component and from the critical spin torque that triggers the switching. Based on a previous investigation [20], the switching time can be reasonably fitted by: where a J (c) represents the in-plane spin torque component with the capacitance considered and a crit (c) represents the critical spin torque that triggers the switching with the capacitance considered. The reason the STO exhibits different dependences on the capacitance for different injected currents is that when I dc is relatively small (7 mA), the switching time is very long, since the in-plane spin torque component a J (c) (500 Oe in this case) has only just exceeded the critical spin torque that triggers the switching, a crit (c) (450 Oe in this case). However, when I dc is relatively large (30 mA), the switching time is very short, since the in-plane spin torque component a J (c) has increased to the level of 10,000 Oe, which far exceeds a crit (c). Thus the switching time in Figure 7b is much smaller than the switching time in Figure 7a. On the other hand, when I dc is relatively small (7 mA), the influence of the capacitance on a J (c) is smaller than the influence of the capacitance on a crit (c). When I dc is relatively large (30 mA), the influence of the capacitance on a J (c) is larger than the influence of the capacitance on a crit (c). When I dc is relatively small (7 mA), the switching time is mainly determined by a crit (c). However, the a crit (c) value is negatively correlated with the capacitance (calculation not presented here). Thus, for capacitance in the range of 0.01 to 1 pF, the a crit (c) value gradually decreases. For capacitance in the range of 1 to 100 pF, the a crit (c) value gradually increases. This explains the switching time tendency in Figure 7a. When I dc is relatively large (30 mA), the switching time is mainly determined by a J (c).
Since the a J (c) is very large and not influenced by the capacitance, the switching time only changes slightly (7.7%) as the capacitance increases. Conclusions In summary, we have shown that with the external field applied, the STO oscillation frequency demonstrates a general negative correlation with the capacitance for injected current ranges from 8 to 12 mA while a general positive correlation with capacitance for injected current 20 mA. Canted regions are revealed for injected current higher than critical value. The free-layer magnetization switches from parallel state to canted state instead of from parallel state to anti-parallel state. When the external field is absent, the STO free-layer magnetization switching time exhibits two stages of variation with the capacitance for both small injected current value (7 mA) and large injected current value (30 mA). However, the variation trends are opposite for small injected current value (decrease in first stage and increase in second stage) and large injected current value (increase in first stage and decrease in second stage).
3,403.4
2014-11-03T00:00:00.000
[ "Physics" ]
Deep fine-KNN classification of ovarian cancer subtypes using efficientNet-B0 extracted features: a comprehensive analysis This study presents a robust approach for the classification of ovarian cancer subtypes through the integration of deep learning and k-nearest neighbor (KNN) methods. The proposed model leverages the powerful feature extraction capabilities of EfficientNet-B0, utilizing its deep features for subsequent fine-grained classification using the fine-KNN approach. The UBC-OCEAN dataset, encompassing histopathological images of five distinct ovarian cancer subtypes, namely, high-grade serous carcinoma (HGSC), clear-cell ovarian carcinoma (CC), endometrioid carcinoma (EC), low-grade serous carcinoma (LGSC), and mucinous carcinoma (MC), served as the foundation for our investigation. With a dataset comprising 725 images, divided into 80% for training and 20% for testing, our model exhibits exceptional performance. Both the validation and testing phases achieved 100% accuracy, underscoring the efficacy of the proposed methodology. In addition, the area under the curve (AUC), a key metric for evaluating the model's discriminative ability, demonstrated high performance across various subtypes, with AUC values of 0.94 for CC, 0.78 for EC, 0.69 for HGSC, 0.92 for LGSC, and 0.94 for MC. Furthermore, the positive likelihood ratios (LR+) were indicative of the model's diagnostic utility, with notable values for each subtype: CC (27.294), EC (9.441), HGSC (12.588), LGSC (17.942), and MC (17.942). These findings demonstrate the effectiveness of the model in distinguishing between ovarian cancer subtypes, positioning it as a promising tool for diagnostic applications. The demonstrated accuracy, AUC values, and LR+ values underscore the potential of the model as a valuable diagnostic tool, contributing to the advancement of precision medicine in the field of ovarian cancer research. Introduction Ovarian cancer is a formidable adversary within the spectrum of cancers of the female reproductive system, marked by its ominous distinction as the most lethal among its counterparts. The complexity of this disease is accentuated by its diverse subtypes, each of which is characterized by unique cellular morphologies, etiologies, molecular and genetic profiles, and clinical attributes. Despite being the eighth most common cancer in women worldwide and the fourth most common cancer in Indian women, ovarian cancer poses a significant challenge due to its asymptomatic nature in the early stages, leading to delayed detection and diagnosis. The World Health Organization (WHO) reported approximately 313,959 new cases and 207,252 deaths globally in 2020, underscoring the urgent need for effective diagnostic strategies (Chhikara et al. 2022).
Early diagnosis and treatment of ovarian cancer are pivotal for improving patient outcomes and enhancing the efficacy of therapeutic interventions.However, the asymptomatic nature of the disease in its initial stages often results in delayed detection, making it more challenging to treat at advanced stages, and is associated with lower survival rates.The five common subtypes, high-grade serous carcinoma, clear-cell ovarian carcinoma, endometrioid, low-grade serous, and mucinous carcinoma, together with various rare subtypes, collectively contribute to the intricate development of this formidable disease.The emergence of subtypespecific treatment approaches holds promise in the ongoing battle against ovarian cancer.Nevertheless, accurate subtype identification, which is crucial for unlocking the full potential of targeted therapies, currently relies on traditional diagnostic methods fraught with challenges, including interobserver disagreements and issues related to diagnostic reproducibility. Efforts to combat ovarian cancer are gaining momentum, particularly with the integration of data science and deep learning.The analysis of histopathological images, a cornerstone in the diagnostic process, can be significantly enhanced through the application of deep learning models.Despite this potential, challenges persist, such as the necessity for substantial training data ideally sourced from a single diverse dataset.Overcoming technical, ethical, financial, and confidentiality constraints is paramount for unleashing the full potential of deep learning in revolutionizing ovarian cancer diagnosis. In this rapidly evolving landscape, the convergence of data science and deep learning offers a promising avenue for improving the diagnosis and treatment of ovarian cancer.The power of advanced technology, particularly in the analysis of histopathological images, is key to more accurate and efficient identification of ovarian cancer subtypes.However, addressing the training data challenges, ethical considerations, financial constraints, and confidentiality issues is essential for harnessing the full potential of deep learning in this critical context.By overcoming these hurdles, we can pave the way for a future where early diagnosis becomes more accessible and targeted therapies can be optimized, ultimately improving patient outcomes and advancing the fight against ovarian cancer. The major contributions of this study are as follows. • Deep learning methodologies were incorporated to harness the powerful feature extraction capabilities of EfficientNet-B0, enhancing the ability of the model to capture intricate patterns in histopathological images.The structure of the article is outlined as follows.Sect."Literature review" contains the literature review.In Sect."Materials and methodology", materials and methodology are elaborated.Sect."Results and discussion" presents the results and discussion.Finally, the article concludes in Sect."Conclusion". 
Literature review Numerous research endeavors have significantly advanced our understanding of ovarian cancer by exploring diverse methodologies, ranging from deep learning applications and innovative imaging techniques to multimodal analyses and novel algorithmic architectures.This literature review synthesizes collective contributions, each offering a unique perspective and transformative insights that collectively propel the field forward.The amalgamation of these diverse studies provides a comprehensive picture of the evolving landscape of ovarian cancer research, showcasing the ongoing quest for more accurate diagnostics and effective treatment strategies. The groundbreaking work conducted by Hu et al. (2023) demonstrated promising results, showing that deep learning methods can effectively segment EOC.The performances of different algorithms, including U-Net, DeepLabv3, U-Net + + , PSPNet, TransUnet, and Swin-Unet, were evaluated using metrics such as the Dice similarity coefficient (DSC), Hausdorff distance (HD), average symmetric surface distance (ASSD), precision, and recall (Hu et al. 2023).Ziyambe et al. (2023) introduced an innovative convolutional neural network (CNN) algorithm to address these limitations and enhance the prediction and diagnosis of ovarian cancer.The CNN was trained on a histopathological image dataset, which was partitioned into training and validation subsets and underwent augmentation before the training process.Remarkably, the model achieved an accuracy of 94%, correctly identifying 95.12% of cancerous cases and accurately classifying 93.02% of healthy cells (Ziyambe et al. 2023).Gajjela et al. (2023) introduced a novel technique, optical photothermal infrared (O-PTIR) imaging, as a label-free and automated method for the histological recognition of ovarian tissue subtypes.Mid-infrared spectroscopic imaging (MIRSI) was used, offering a 10 × improvement in spatial resolution compared to previous instruments.This approach has been used for traditional histopathological identification of ovarian cancer via time-consuming staining and subjective pattern recognition.This technique allows subcellular spectroscopic analysis at crucial fingerprint wavelengths, thus enhancing the identification of ovarian cell subtypes with a notable classification accuracy of 0.98.This study included a robust analysis of 78 patient samples comprising over 60 million data points.Significantly, subcellular resolution using only five wavenumbers surpasses diffraction-limited techniques employing up to 235 wavenumbers (Gajjela et al. 2023).Wang et al. (2023) introduced MMDAE-HGSOC, a multimodal deep autoencoder learning approach.It integrates miRNA expression, DNA methylation, copy number variation (CNV), and mRNA expression data to construct a comprehensive multiomics feature space.A multimodal deep autoencoder network was employed to learn high-level feature representations, and a novel superposition LASSO (S-LASSO) regression algorithm was proposed for the precise identification of genes associated with HGSOC molecular subtypes.The experimental results demonstrate the superiority of MMDAE-HGSOC over existing classification methods.Additionally, this study investigated the enrichment of gene ontology (GO) terms and biological pathways associated with the selected significant genes, offering valuable insights into the underlying mechanisms of HGSOC (Wang et al. 2023).Kodipalli et al. 
(2023) focused on innovating a novel convolutional neural network (CNN) architecture and compared its performance against established models, including those recognized in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC).This study utilized high-quality ovarian CT images, employing cloud services such as the Google Cloud Platform for training and evaluation.The proposed CNN variant achieved an impressive accuracy of 97.53%, surpassing the performance of existing architectures and demonstrating its efficacy in classifying ovarian tumors (Kodipalli et al. 2023).Wu et al. (2023) studied 1142 ultrasound images from 328 patients (from January 2019 to June 2021) to assess the performance of deep convolutional neural networks (DCNNs) in distinguishing various histologic types of ovarian tumors.Task 1 involved classifying benign and high-grade serous carcinomas in the original images, while Task 2 focused on segmented images.Transfer learning was applied to six pretrained DCNNs.The ResNext50 model demonstrated superior performance, achieving 95.2% accuracy in classifying seven ovarian tumor types with notable sensitivity and specificity.Overall, the DCNN has emerged as a promising tool for detailed ovarian tumor classification in ultrasound images, offering valuable computer-aided diagnostic insights (Wu et al. 2023).Bergstrom et al. (2023) proposed a novel approach that addresses the challenge of detecting homologous recombination deficiencies (HRDs) in breast and ovarian cancers, which is crucial for guiding treatment decisions involving platinum-based therapies and PARP inhibitors.By utilizing deep learning on routinely obtained hematoxylin and eosin-stained histopathological slides, this method accurately predicted genomically derived HRD scores.External validation of breast cancer cohorts demonstrated its efficacy in predicting patient responses to platinum treatment, whereas transfer learning extended its clinical utility to high-grade ovarian tumors.Notably, this deep learning model surpasses existing genomic HRD biomarkers, offering a valuable alternative for HRD detection, particularly in medically underserved populations (Bergstrom et al. 2023).Zhang et al. (2019) proposed an innovative image diagnosis system for ovarian cyst classification using color ultrasound images, addressing the challenge of accurately distinguishing between benign and malignant nodules.Our approach combines high-level features from a fine-tuned GoogLeNet neural network with low-level rotation-invariant uniform local binary-pattern (ULBP) features.After enhancing the ultrasound images, we extracted ULBP features to capture texture descriptors and normalized and concatenated them with deep features to form fusion features.These fusion features were then input into a costsensitive random forest classifier for accurate classification.The integration of high-level semantic context and low-level texture patterns effectively discerns the differences between malignant and benign ovarian cysts, thereby reducing unnecessary medical procedures and associated costs (Zhang et al. 2019).The details of the methodology adapted to the performance are presented in Table 1. According to a comprehensive literature review, only Ziyambe et al. 
(2023) specifically studied ovarian cancer by utilizing histopathological images, albeit with limited two-way classification. Despite the significant advancements in ovarian cancer research through various deep learning applications and multimodal analyses, a research gap persists in the integration of advanced feature extraction methods and traditional classification techniques for enhanced diagnostic accuracy. While studies have demonstrated promising results using individual methodologies, there is a lack of comprehensive approaches that combine the powerful feature extraction capabilities of deep learning models like EfficientNet-B0 with traditional classifiers such as k-nearest neighbor (KNN). This integration could potentially yield more precise and reliable diagnostic tools, addressing the current limitations of interobserver variability and diagnostic reproducibility in ovarian cancer subtype classification. This observation underscores the significant gap in research on the more nuanced exploration of subtype classification in ovarian cancer. Consequently, the need for further investigation into subtype classification has emerged as a promising and largely unexplored area within the field, presenting a valuable avenue for future research. Materials and methodology This section presents detailed information on the datasets used and the methodologies adopted in this study. Dataset The UBC-OCEAN (Bashashati et al. 2023) dataset is a valuable resource in the field of histopathological research, providing a diverse collection of high-resolution ovarian cancer images in two distinct categories: whole slide images (WSIs) and tissue microarrays (TMAs). This dataset contains 505 images. The images were resized to 512 × 512 pixels. Details of the dataset are presented in Table 2. Histopathological images of the ovarian subtypes are shown in Fig. 1. Methodology The methodology employed in this research centers on leveraging the deep features of EfficientNet-B0 for the classification of ovarian cancer subtypes. EfficientNet-B0 is characterized by a streamlined architecture that incorporates MobileNetV2-like bottleneck blocks, specifically MBConv1 and MBConv6. These blocks play a crucial role in feature extraction and offer an effective balance between model complexity and accuracy. To tailor the model for subtype classification, a fine-KNN approach was integrated, involving replacement of the last layer, the fully connected layer, with the fine-KNN mechanism. This modification facilitates the nuanced classification of histopathological images into five distinct ovarian cancer subtypes: high-grade serous carcinoma (HGSC), clear-cell ovarian carcinoma (CC), endometrioid carcinoma (EC), low-grade serous carcinoma (LGSC), and mucinous carcinoma (MC). The architecture of EfficientNet-B0 warrants closer examination, particularly regarding the characteristics of its MBConv1 and MBConv6 blocks. MBConv1 employs a 3 × 3 depthwise separable convolution, followed by batch normalization and a swish activation function. In contrast, MBConv6 incorporates a 3 × 3 depthwise separable convolution with a larger expansion ratio, enhancing the representational power of the model. The architectural details are shown in Fig. 2. The performance evaluation of the model involves key metrics, such as accuracy, which provide an overarching measure of correct classification. Additionally, the area under the curve (AUC) values were computed to assess the discriminative ability of the model across different subtypes, providing a more comprehensive understanding of its diagnostic efficacy. For a more nuanced evaluation, the true positive rate (TPR) and false negative rate (FNR) were calculated for each subtype. This in-depth analysis provides insights into the model's ability to correctly identify instances of each subtype and its potential to minimize false negatives.
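As a concrete illustration of the feature-extraction-plus-classifier pipeline described above, the sketch below pairs a pretrained EfficientNet-B0 backbone with a k-nearest-neighbor classifier. The paper's experiments were run in MATLAB 2022a, so this PyTorch/scikit-learn version is only a hedged stand-in; the image-loading details, the label arrays, and the use of k = 1 (the setting commonly behind a "fine" KNN) are assumptions introduced for the example.

```python
# Hedged sketch: EfficientNet-B0 deep features + 1-nearest-neighbor classifier.
import torch
from torchvision import models
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# 1) EfficientNet-B0 as a frozen feature extractor (1280-d embeddings).
weights = models.EfficientNet_B0_Weights.IMAGENET1K_V1
backbone = models.efficientnet_b0(weights=weights)
backbone.classifier = torch.nn.Identity()   # drop the fully connected head
backbone.eval()
preprocess = weights.transforms()           # resize/normalize to the expected input

@torch.no_grad()
def extract_features(pil_images):
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch).numpy()

# 2) Fine-grained KNN on the deep features (k = 1 assumed here).
def fit_and_score(train_imgs, train_labels, test_imgs, test_labels):
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(extract_features(train_imgs), train_labels)
    preds = knn.predict(extract_features(test_imgs))
    return accuracy_score(test_labels, preds)
```

In practice the feature extractor is run once over the whole dataset and the cached embeddings are reused, since the KNN step itself is cheap compared with the forward passes.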
The diagnostic reliability of the model was further evaluated by calculating the positive likelihood ratios (LRs). This metric offers insights into the ability of the model to provide diagnostic certainty for each subtype, adding a layer of assurance to the classification results. In summary, this methodology integrates the robust feature extraction capabilities of EfficientNet-B0 with the fine-KNN mechanism, presenting a comprehensive approach for the subtype classification of ovarian cancer histopathological images. The subsequent evaluation metrics ensured a thorough analysis of the model's diagnostic process and its potential contribution to precision medicine in the realm of ovarian cancer research. Results and discussion In our study, utilizing the UBC-OCEAN dataset of 725 histopathological images representing five ovarian cancer subtypes, our model showed robust performance. The dataset was thoughtfully split, with 80% for training and 20% for testing, ensuring a thorough evaluation. Our model was implemented on an HP laptop with an 11th generation processor, 16 GB of RAM, and an NVIDIA 3070 GPU, and runs efficiently in MATLAB 2022a. With an initial learning rate of 0.001, a mini-batch size of 32, and the ADAM optimizer, our model leverages the fine-KNN approach for nuanced subtype classification. The results revealed high accuracy during testing, confirming the model's ability to distinguish between HGSC, CC, EC, LGSC, and MC. The individual subtype metrics provide deeper insights. The performances of the models on the validation and test data are shown in Figs. 3 and 4, respectively. Based on the insights gleaned from Figs. 3 and 4, our proposed methodology achieves remarkable results. Both the validation and test accuracies reached 100%, confirming the effectiveness of our approach. Furthermore, the area under the curve (AUC) is a crucial metric for assessing a model's discriminative ability, which reinforces its high performance. Notably, during testing, the AUC values were noteworthy for various subtypes, with values of 0.94 for CC, 0.78 for EC, 0.69 for HGSC, 0.92 for LGSC, and 0.94 for MC. It is worth noting the discrepancy between the original sample sizes and the predictive performance for the EC and HGSC subtypes, even after the classes were balanced using augmentation. While EC and HGSC indeed had the highest sample sizes, several factors could explain the lower predictive performance. Firstly, the inherent biological heterogeneity and overlapping histopathological features of these subtypes might have contributed to the classification challenges, causing the model to struggle in distinguishing them from other subtypes. Additionally, the variability within the EC and HGSC subtypes themselves, which can exhibit a broad spectrum of morphological patterns, may have led to a reduced ability of the model to generalize well to the test data. Furthermore, despite the large sample sizes, there may still be an imbalance in the representation of certain histopathological characteristics within these subtypes, impacting the model's training process. To address these issues, future work could focus on incorporating advanced data augmentation techniques and exploring additional features or integrating multimodal data to enhance the model's ability to capture the distinctive characteristics of the EC and HGSC subtypes. This will help improve the overall predictive performance and robustness of the model. Furthermore, the model was evaluated for each class of ovarian cancer, as shown in Tables 3, 4, and 5.
As shown in Table 3, the validation dataset demonstrated outstanding performance across all subtypes, with a perfect true positive rate (TPR) of 100% for clear cell (CC), endometrioid (EC), high-grade serous (HGSC), low-grade serous (LGSC), and mucinous (MC) ovarian cancers.The false negative rate (FNR) was also consistently zero, highlighting the ability of the model to correctly identify instances of each subtype.Furthermore, the positive predictive value (PPV) and false discovery rate (FDR) both reached 100%, underscoring the precision and reliability of the classification. From Table 4, it can be observed that while the performance in the test dataset remained strong, some variations were observed.Notably, the sensitivity (TPR) for the EC, HGSC, and LGSC subtypes decreased, with the lowest value observed in HGSC at 42.9%.The corresponding increase in the FNR suggests potential challenges in correctly identifying these subtypes.Despite this, the overall performance remained robust, with TPR values exceeding 90% for CC, LGSC, and MC.The PPV and FDR values provide insights into the precision of the model in the test dataset.The PPVs for EC and HGSC indicated the potential for false positives, with values of 69.2 and 75.0%, respectively.However, the FDR is generally low across all subtypes, demonstrating reliable control over false positives.Table 5 shows the values of the likelihood ratio positive (LR +), which reinforce the overall diagnostic performance.A higher LR + indicates a more reliable positive test result.Notably, the CC, LGSC, and MC subtypes exhibited particularly high LR + values, suggesting their strong ability to correctly identify these ovarian cancer subtypes. The validation dataset shows the exemplary performance of the classification model, indicating its ability to accurately identify ovarian cancer subtypes.The minor discrepancies observed in the test dataset may be attributed to variations in the data distribution, emphasizing the importance of robust model validation.The LR + values provide additional context, indicating the strength of the model in providing reliable positive results.The higher LR + values for the CC, LGSC, and MC subtypes suggested that the model accurately identified these specific ovarian cancer subtypes.Our model, which leverages the UBC-OCEAN dataset and combines EfficientNet-B0 with the fine-KNN approach, demonstrated robust performance with a 100% accuracy rate during both validation and testing phases, indicating its efficacy in accurately classifying ovarian cancer subtypes.The AUC values, a critical metric for assessing the model's discriminative ability, were particularly high for CC, LGSC, and MC subtypes, underscoring the model's strong performance.The validation dataset showed a perfect true positive rate (TPR) across all subtypes, while the test dataset revealed some variability, especially for EC and HGSC, highlighting areas for further refinement.Despite these variations, the overall performance remained robust, with high PPV and low FDR values.The likelihood ratio positive (LR +) values further confirmed the model's reliability in providing accurate positive results, particularly for CC, LGSC, and MC subtypes.These findings collectively demonstrate the model's potential as a highly accurate and reliable diagnostic tool, contributing significantly to the advancement of precision medicine in ovarian cancer diagnosis. 
Conclusion This research introduces a robust methodology for ovarian cancer subtype classification that integrates deep learning and k-nearest neighbor (KNN) techniques. The model, built on EfficientNet-B0 with fine-KNN, demonstrated an outstanding accuracy of 100% across the clear cell (CC), endometrioid (EC), high-grade serous (HGSC), low-grade serous (LGSC), and mucinous (MC) subtypes during both the validation and test phases. The high area under the curve (AUC) values during testing further underscore the model's discriminative ability. The positive likelihood ratio (LR+) values emphasize its diagnostic utility, particularly for CC, LGSC, and MC. Despite these achievements, future research should explore data augmentation, multimodal data integration, and interpretability to enhance generalizability, transparency, and clinical applicability. External validation and prospective clinical studies are crucial steps toward validating the model's real-world performance and facilitating its integration into diagnostic workflows, thereby contributing to the advancement of precision medicine in ovarian cancer research. Figure and table captions: Fig. 1 Samples of ovarian cancer subtype histopathological images: a CC, b EC, c HGSC, d LGSC, e MC. Fig. 2 Architecture diagram: integration of EfficientNet-B0 and fine-KNN for ovarian cancer subtype classification: a main architecture, b MBConv1, c MBConv6. Fig. 3 Validation results of EfficientNet-B0 with fine KNN: a confusion matrix, b AUC. Table 1 Summary of key research contributions in ovarian cancer: methods, datasets, and achievements. Table 2 Details of the dataset before and after augmentation. Table 3 TPR, FNR, PPV, and FDR of each subtype of Ovarian Can-
4,715.2
2024-07-25T00:00:00.000
[ "Medicine", "Computer Science" ]
Investment Modelling Using Value at Risk Bayesian Mixture Modelling Approach and Backtesting to Assess Stock Risk Background: Stock investment has been gaining momentum in the past years due to the development of technology. During the pandemic lockdown, people have invested more. One the one hand, stock investment has high potential profitability, but on the other, it is equally risky. Therefore, a value at risk (VaR) analysis is needed. One approach to calculate VaR is by using the Bayesian mixture model, which has been proven to be able to overcome heavy-tailed cases. Then, the VaR’s accuracy needs to be tested, and one of the ways is by using backtesting, such as the Kupiec test. Objective: This study aims to determine the VaR model of PT NFC Indonesia Tbk (NFCX) return data using Bayesian mixture modelling and backtesting. On a practical level, this study can provide information about the potential risks of investing that is grounded in empirical evidence. Methods: The data used was NFCX data retrieved from Yahoo Finance, which was then modelled with a mixture model based on the normal and Laplace distributions. After that, the VaR accuracy was calculated and then tested by using backtesting. Results: The test results showed that the VaR with the mixture Laplace autoregressive (MLAR) approach (2;[2],[4]) was accurate at 5% and 1% quantiles while mixture normal autoregressive MNAR (2;[2],[2,4]) was only accurate at 5% quantiles. Conclusion: The better performing NFCX VaR model for this study based on backtesting using Kupiec test is MLAR(2;[2],[4]). INTRODUCTION Economic uncertainty in the current pandemic has changed some aspects of consumer behaviour in Indonesia. Stock investment is on the rise among the Indonesians people today. This is shown by a DBS bank survey, which states that people will choose to save and invest their money rather than spending it on unnecessary consumer products or services [1]. In doing investment, people certainly hope to get a return after a certain period of time, but they also face a risk of loss. Risk assessment is needed to minimize such risk of loss. As most people were in lockdown and shopping was considered more convenient online, sales on e-commerce soared to 66% during the COVID-19 pandemic [1]. This makes risk analysis on e-commerce stocks interesting to research. In 2020, three e-commerce companies took the floor on the Indonesia Stock Exchange (IDX), one of which is PT NFC Indonesia Tbk (NFCX) [2]. This company is the newest e-commerce company joining the IDX and there has been no research on the company related to the risk analysis [2]. Therefore, this study chose NFCX as the subject. Value at Risk (VaR) is a tool commonly used to measure financial risks when the distribution of potential loss positions is not generally known [3]. VaR is generally estimated by assuming returns that have a normal distribution. However, this may not be appropriate because, in reality, return distribution is not always normal [4]. Some results of stock return visualization indicate that the data distribution is leptokurtic, which tends to resemble Laplace distribution. That being said, this does not mean that Laplace distribution is the one suitable for determining VaR. It is partially suitable because of the long-tail and asymmetric conditions that may occur in the stock return data. This kind of financial data could be handled by the asymmetrical distribution of Laplace [5]. 
Research shows that a parametric method based on skewed and fat-tails is the best method for determining VaR, especially when time variations are considered and independent and identical return distributions are ignored [6]. A method with a mixture model approach has been used in several studies to solve problems that occur in unimodal data [7] [8]. By considering the skewed and the heterogeneity, the mixture model method can improve the accuracy 12 of predictive cases [7] [8]. It has been proven to be better than the separate one [9] [10] and suitable to be applied to the autoregressive (AR) model, a time series model. Meanwhile, a mixture normal autoregressive (MNAR) has been developed both to consider the mixture of normal distribution and to analyse the time series that shows regimeswitching behaviour. The method considers the probability and weight of mixing based on past values, so that a) the stationarity and ergodicity of the underlying stochastic process could be easily established; and b) the explicit expression of the low-order of stationary marginal distribution is known. This is in contrast with other majority nonlinear autoregressive models [11]. The ability of MNAR to overcome the heavy-tailed problems that occur in unimodal data makes it applicable for VaR modelling [12] [13] [14] [15]. As for the pattern of actual conditions, Laplace distribution is able to capture it in mixture autoregressive models [16]. It has been proven to be more robust than the normal distribution in linear mixture models [17]. Since the distribution of returns tends to be peak similar to that of the asymmetric Laplace distribution, the current research uses MLAR for the VaR modelling [18]. Research [18] has shown how Bayesian MLAR approaches model the VaR of Islamic stock; the performance was then compared with the Bayesian MNAR analysis. The results showed that the Bayesian MLAR model performed better than the Bayesian MNAR model. In practice, the results of VaR modelling such as this should be evaluated using a backtesting method in order to determine the best model. Kupiec Test developed by [19] is a backtesting procedure commonly used for VaR modelling studies [6] [20] [21]. All things considered, this study aims to determine the NFCX VaR model using Bayesian mixture modelling and backtesting. II. METHODS The analysis was carried out using stock closing data retrieved from https://finance.yahoo.com/quote/NFCX.JK/history?p=NFCX.JK from 12 June 2018 to 4 August 2020. Generally, there are four steps in doing the risk-return analysis: component identification of the mixture autoregressive models; analysis of the Bayesian mixture autoregressive models; VaR modelling; and evaluation of VaR model using Kupiec test. Before conducting the risk-return analysis, the return of NFCX stock must be calculated by (1). = + (1) where is the stock price return on the day, is the stock price on the day, and is the stock price on the ( − 1) day [22]. The first step of conducting a return-risk analysis is the component identification of the mixture autoregressive models. At this stage, the numbers of autoregressive mixture components and the autoregressive AR(p) components are determined by analysing the histogram of the return and autoregressive modelling respectively. The following is the AR(p) model [23] as seen on (2). The is the return on the ( − ) time, is the autoregressive order, is the parameter of autoregressive, and is white noise. 
The order in AR(p) model is determined by ensuring that the return data is a qualified stationery in mean and variance. The parameters of the AR(p) model that have been specified are then estimated and tested to find out its significance. Only the significant ones are qualified to be mixed. The AR(p) models that will be mixed in MNAR and MLAR modelling are the same. The difference is that the MNAR model uses normal distribution and MLAR model uses Laplace distribution. In MNAR( ; ( )) model, the conditional distribution of | ; is a normal mixture with conditional density function of (3). where is the proportion of mixed components, and ; , , is the density function of , , . Let = ( , , … , , , , … , , , , … , ) denotes the vector of autoregressive parameters [15]. While in MLAR( ; ( )) model, the conditional distribution of | ; is a Laplace mixture with conditional density function of (4). where Equation (5) is Laplace density function with mean , variance , and (6) is the parameter vector of mixed models. The residual of the MLAR model is also considered to have a Laplace distribution [17]. Markov Chain Monte Carlo (MCMC) is an algorithm of the Bayesian inference commonly used to estimate parameter by generating samples from a given distribution. The subsequent sample is chosen based on the sample taken previously. This is done by determining the initialization at the start of the sampling. As a result, the sample taken forms the MCMC , , … , chain. The distribution of the given depends only on all the preceding θ at the most recent value, which is . The generated samples are not independent, but still identically distributed if the Markov chain is stationary [24]. This parameter is then corrected to obtain a value that is closer to the target of the posterior distribution of ( | ). Gibbs sampler is one of the MCMC methods that can solve multidimensional problems. In normal and Laplace case, = ( , , ) and posterior is ( , , | ). The Gibbs sampler will help estimate , , and iteratively following the sampling scheme. Repeat step two T times, → ∞ In estimating the mixture parameters, step 2 must estimate as much as K of the mixture components of both , , and . The samples generated using the above algorithm will have a convergent and stationary data pattern and will be proportional to their respective distributions [18] [25]. The confirmation of an ergodic Markov chain must be made to identify the existence of limiting distribution in this chain. It can be divided into three sections, namely irreducible, periodicity, and recurrent and transient states [26]. The parameter significance test is used to select the suitable ones for the model. Testing the parameters resulting from the estimation with Bayesian MCMC assumes that the null hypothesis of = 0 and the alternative hypothesis of ≠ 0. The null hypothesis is rejected if in confidence interval (1 − ) of posterior, the credible interval does not contain null [27]. After obtaining the MNAR and MLAR model, deviance information criterion (DIC) for each MLAR model is calculated and the model with the smallest DIC is selected. The DIC formula is shown on (8). is the posterior mean of the deviance that is defined as −2 ( | ) . is the effective number of parameters and is given by [29]. The best model of MNAR and MLAR that has been obtained is used to determine the VaR of each mixture model. It is calculated on (9). where for MNAR best model is shown on (10), and for MLAR best model is shown on (11). 
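Several of the expressions referenced above, in particular the AR(p) model and the mixture densities of Equations (2) through (5), lost their symbols during extraction. For orientation, a generic K-component mixture autoregressive model is usually written as follows; this is a hedged reconstruction in standard notation, not the paper's exact formulas.

```latex
% Generic mixture-autoregressive forms (standard notation, assumed; not the paper's exact equations)
\begin{aligned}
\text{AR}(p):\quad & r_t = \phi_0 + \sum_{i=1}^{p} \phi_i\, r_{t-i} + \varepsilon_t,\\
\text{MNAR/MLAR}:\quad & f\!\left(r_t \mid \mathcal{F}_{t-1}\right)
  = \sum_{k=1}^{K} \pi_k\, g_k\!\left(r_t;\ \mu_{k,t},\ \sigma_k^{2}\right),
  \qquad \mu_{k,t} = \phi_{k,0} + \sum_{i=1}^{p_k} \phi_{k,i}\, r_{t-i},\\
\text{Laplace density}:\quad & g\!\left(r;\ \mu,\ b\right)
  = \frac{1}{2b}\exp\!\left(-\frac{|r-\mu|}{b}\right),\\
\text{VaR at level } \alpha:\quad & \Pr\!\left(r_t \le \mathrm{VaR}_{\alpha}\mid \mathcal{F}_{t-1}\right) = \alpha .
\end{aligned}
```

Here the mixing weights satisfy pi_k >= 0 and sum to one, g_k is a normal density for the MNAR model and a Laplace density for the MLAR model, and the alpha-quantile of the fitted conditional distribution gives the VaR referenced in Equation (9).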
The last stage to evaluate the VaR model is by using backtesting, which is a statistical procedure to systematically compare the actual gains and losses with the estimated VaR. The most widely used backtesting, the Kupiec test, which is also known as the POF (proportion of failure) test, measures whether the number of exceptions is consistent with the quantile [30], which follows the binomial distribution. In other words, the information needed to perform the Kupiec test is the number of observations ( ), the number of exceptions ( ), and the quantile [2]. The null hypothesis of this test is = ̂ and the alternative hypothesis is ≠ . The test statistic used is the likelihood ratio ( ) [19] equated on (12). where is the probability of failure in the quantile. is asymptotic with a chi-square distribution ( ) and a degree of freedom is 1. The null hypothesis is rejected if is greater than . Accordingly, the VaR model is declared valid if the null hypothesis is accepted. Finally, the last result of this analysis will show the VaR that the investors of PT. NFC Indonesia Tbk (NFCX) will face. III. RESULTS This section presents the results of the VaR modelling process, which consists of four steps-the identification, the analysis, the modelling and the backtesting. The best mixture model determined based on a normal and Laplace distribution is derived from the comparison of several mixture VaR models. A. Component Identification of Mixture Autoregressive Model The identification determines the number of components and which components of autoregressive and AR(p) to be mixed. The number of components of mixture autoregressive is detected from the histogram, whereas the mixable autoregressive components AR(p) are those with significant parameters. The histogram of the NCFX return data, as presented in Fig. 1, shows that the data has outliers. The outliers of the data are indicated in the histogram as right-skewed or positively-skewed patterns. Besides, Fig. 1 also identifies the shapes of its frequency distribution, namely leptokurtic, platykurtic, and mesokurtic. If the peak of the curve is higher than the normal distribution, it is considered leptokurtic; if it is lower, it is platykurtic; and if it is the same, it is mesokurtic [31]. The histogram shows that the data have a higher peak than the normal distribution, so they are leptokurtic, which tends to be more similar to the Laplace distribution (blue line) than the normal distribution (red line). However, the outliers and the high variance result in a mismatched Laplace distribution. Variability in a histogram is higher when the taller bars are spread away from the mean; and lower when they are closer to the mean. The solution to these cases is to form a combination for each distribution. This results in two components for each distribution. One component is leptokurtic and the other is platykurtic. The two components in the Laplace distribution are shown by the black and yellow dash line, while the normal distribution is presented by the green and magenta dash line. Platykurtic conditions are expected to overcome the high variation. Journal of Information Systems Engineering and Business Intelligence, 2021, 7 (1), 11-21 15 Fig. 1 The Histogram of NFCX Return After obtaining the number of components, the AR(p) was selected using parameter significance. The order of AR(p) was determined by ensuring that the data was stationary in mean and variance. Stationarity of mean was detected from time series plot and augmented dickey fuller (ADF) test. 
The data in the time series plot, as presented in Fig.2, fluctuate around the mean. The plot indicates that the data are stationary in mean. This result was confirmed by the ADF test. To recall, the null hypothesis is when the data are not stationary in mean, whereas the alternative hypothesis is when the data are stationary in mean. Because the P-value (0.01) is less than the significant level (0.05), the null hypothesis is rejected. Furthermore, the data must also be stationary in variance. Stationarity in variance was detected by rounded value ( ) where the rounded value of the data equals 1. However, the NFCX return data did not fulfill the assumption since = −1; thus, the data must be Box-Cox transformed until = 1. After = 1, the order of AR(p) can be detected. 16 AR(p) order is determined by autocorrelation (ACF) and partial autocorrelation (PACF) plot, namely those which had the same cut off p-lag. The cut off is decided based on the lag that exits the blue interval limit. ACF and PACF cut off lag 1, 2, 4, and 9, respectively. The order of AR(p) models can be seen in Table 1. Parameters were estimated using the Bayesian method with the function of ~ + + ⋯ + , . The prior distribution for autoregressive parameters ( ) was conjugate prior and was noninformative prior. This is defined by limiting the model to a relatively simple likelihood function with a suitable formula for the previous distribution [27]. The noninformative prior is a distribution that has a greater range of uncertainty than the reasonable parameter value [32]. The prior autoregressive parameter is normal distribution, while the standard deviation ( ) is inverse Gamma. The estimation and significance of the parameters results can be seen in Table 1. A 95% credible interval indicates the 2.5 th percentile and the 97.5 th percentile since this has been used by some authors and software [27]. Table 1 shows that all parameters of autoregressive models that contain p = 1 are not significant because the credible interval contains 0. However, all autoregressive models that do not contain p = 1 are significant. B. Analysis of Bayesian Mixture Autoregressive Model The mixture models are created by mixing two components of significant AR(p) as presented in Table 1. The number of mixed models is 42 consisting of 21 MNAR and 21 MLAR. In this study, only six models-three MNAR, and three MLAR-are shown. Parameter estimation began by creating a directed acyclic graph (DAG) for each model. DAG for MNAR and MLAR can be seen in Fig.4. The models are assumed as follows. for MNAR models. The prior distribution for the autoregressive parameters ( ) is the conjugate prior for MNAR models; whereas the prior distribution for MLAR model is the pseudo priors, whose value is determined based on the frequentist estimation [33]. The pseudo prior of the MLAR models is based on the parameter estimation of the AR(p) models. The prior distribution for other parameters ( , , ) is noninformative prior. The results of parameter estimation using DAG structure-as presented in Fig. 4-for each mixture autoregressive models are shown in Table 2. It shows that all MLAR and MNAR models are significant because the 95% credible interval does not contain 0. The selection of MNAR and MLAR models was measured using DIC with those being the smallest DIC is considered the best. Table 3 presents the best model of MNAR, namely MNAR (2; [2], [2,4]); and MLAR, namely MLAR(2; [2], [4]). The DIC of those models are -3293.10 and -3698.88 respectively. 
The DIC of the MLAR(2; [2], [4]) model is smaller than that of the MNAR(2; [2], [2,4]) model; however, DIC alone cannot establish the best model for estimating VaR, so both models are used to calculate VaR. IV. DISCUSSION This research is limited with respect to the accuracy of the VaR because it is tested only for a one-day investment horizon, following several earlier studies [34], [21]. Although it is known that the longer the investment, the greater the risk, this information is not enough. The amount of risk faced over the longer investment horizons considered in this study (the five-day and twenty-day periods) also needs to be tested for accuracy so that investors can be more confident when investing. In addition to knowing the accuracy of the risks faced during a specific investment period, a reanalysis is needed to corroborate the findings of this study. The results denote that VaR estimation using the Bayesian MLAR approach is more accurate than the Bayesian MNAR approach in terms of backtesting. This is in line with the comparison of the mixture models based on DIC, where the best mixture model also produces an accurate VaR model. In general, however, the best mixture model does not necessarily produce an accurate VaR, as can be seen in the research done by [2]. Hence, the backtesting should be performed on all of the fitted mixture models. Also, the use of a single backtesting method limits the interpretation of the study because the Kupiec method is not always correct; backtesting methods other than Kupiec can be added to increase the level of accuracy. Lastly, the analysis is based only on stock data from the past period. Future research should also consider the factors that influence changes in share prices. Several factors are believed to affect stock prices, including oil and gold market prices and their volatilities [35]. V. CONCLUSIONS This research was conducted to obtain a VaR model for NFCX stock investment (the NFCX VaR model) using the Bayesian mixture model approach. The best VaR model is accurate at the 5% and 1% quantiles based on the backtesting results using the Kupiec test. The research shows that the best VaR model results from the MLAR(2; [2], [4]) approach, which consists of a two-component mixed Laplace distribution with one leptokurtic and one platykurtic component. The model is accurate at the 5% and 1% quantiles, with the accuracy testing limited to one day. Based on the discussion, several further studies can be developed, one of which is to test the accuracy of the VaR not only for one day but over a predetermined horizon. The backtesting method used should not be limited to the Kupiec test, in order to improve the assessment of model accuracy. Modelling should also be done with historical stock-price data together with data on the factors that affect stock prices.
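To make the final step concrete, the sketch below estimates a one-day VaR by simulation from a two-component Laplace mixture of the kind selected above; the mixing weights, locations, and scales are hypothetical placeholders rather than the posterior estimates reported in this study, and the autoregressive structure of the fitted MLAR model is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-component Laplace mixture (one leptokurtic, one platykurtic component)
weights = np.array([0.7, 0.3])
locs    = np.array([0.0000, 0.0010])
scales  = np.array([0.0060, 0.0200])   # smaller scale -> sharper (leptokurtic) peak

def simulate_mixture_returns(n):
    """Draw n one-day returns from the Laplace mixture."""
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.laplace(loc=locs[comp], scale=scales[comp])

sims = simulate_mixture_returns(100_000)
for q in (0.05, 0.01):
    var_q = np.quantile(sims, q)           # return quantile (a negative number)
    print(f"{int(q*100)}% one-day VaR: {-var_q:.4%} of the position value")
```

The simulated quantiles at 5% and 1% are the loss levels that would then be submitted to the Kupiec backtest described earlier.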
4,761.4
2021-04-27T00:00:00.000
[ "Business", "Economics", "Mathematics" ]
A non-aqueous procedure to synthesize amino group bearing nanostructured organic–inorganic hybrid materials Amino-functionalized organic–inorganic hybrid materials with a narrowly distributed nanostructure of 2–4 nm in size were obtained by means of a template-free and non-aqueous procedure. Simultaneous twin polymerization of novel amino group containing twin monomers with 2,2′-spirobi[4H-1,3,2-benzodioxasiline] has been applied for this purpose. The amino groups of the organic–inorganic hybrid material are useful for post-derivatization. Organic-inorganic hybrid materials which contain amino groups are of great importance for a variety of applications such as supports for catalysts, adsorption of metal ions, or as key precursors for post-derivatization. 1,2 It is an important task to obtain nanostructured materials with accessible amino groups due to their great potential for post-functionalization reactions with electrophilic reagents. [3][4][5] Amino-functionalized silica materials have been successfully used for CO2, 6 water 7,8 or dye absorption 7 and for the catalysis of Knoevenagel condensation reactions 9,10 and the synthesis of nitrostyrenes. 11 Furthermore, primary and secondary amino groups can be modified in many ways to introduce carboxyl groups from cyclic anhydrides or imine groups from aldehydes. These materials can be synthesized by different strategies which use amino-functionalized reagents. 1,2,8,12,13 Suitable reagents are 3-aminopropylalkoxysilanes or amino group bearing water-soluble polymers such as polyvinylamine. 13 The synthetic procedure employing these reagents is based on water as a co-reagent or a solvent. Therefore, co-reagents which contain hydrolytically sensitive groups such as imines or isocyanates cannot be applied when water-based procedures are used. To avoid these limitations, a synthetic method is needed which does not require water as a solvent or a reagent. Twin polymerization can be carried out either in organic solvents or even in the melt without the use of any solvent. 14,15 This advantage overcomes all problems resulting from water chemistry. Furthermore, amino groups undergo acid-base reactions with water, and ammonium ions are formed which can have a negative effect on post-reactions of amino groups. In this communication a specific application of twin polymerization is presented which uses a non-aqueous procedure involving various amino group bearing twin monomers to produce nanostructured organic-inorganic hybrid materials related to amino group containing sol-gel materials. During the last six years the so-called twin polymerization has been developed to fabricate organic-inorganic hybrid materials in solely one procedure. [14][15][16][17][18] The principle of twin polymerization is based on specific twin monomers (TMs), which contain two covalently bonded polymerizable monomeric fragments, i.e. A and B for each polymer. The formation of the two polymers from the twin monomers during polymerization is mechanistically coupled. 17 Thus, polymer -(A)n- can only be formed when polymer -(B)n- is also formed. This is a crucial difference in the polymerization behavior from that of hetero-bifunctional monomers, whose two different polymer strands are mechanistically independent of each other during the polymerization procedure. 4,5,13 Simultaneous twin polymerization (STP) of two different twin monomers in one process can yield up to four different polymers (see also Fig. S1, ESI†). 18
A benefit of this type of STP is that the formation of polymer -(A)n- mediates the covalent connection of fragments B and C during polymerization within the organic-inorganic hybrid material. The synthetic procedure for the amino group containing TMs (2-5) starts from salicylic alcohol and 3-amino-n-propyldimethoxymethylsilanes. The transesterification reaction was catalyzed by tetra-n-butylammonium fluoride (TBAF) (see Fig. 2). Details of the synthetic procedures are given in the ESI.† The objective of this basic study is to establish the synthetic feasibility of new types of TMs and to study their polymerization by STP to produce hybrid materials bearing accessible amino groups. The STP of 2,2′-spirobi[4H-1,3,2-benzodioxasiline] (1) with amino group functionalized twin monomers has been studied. Pure 1 undergoes twin polymerization by either an acid- or a base-catalyzed reaction in melt or in solution. 19,20 However, it is also possible to polymerize 1 without the use of any catalyst at 230 °C. 19 Advantageously, the amino functionalized monomers (2-5) used for the STP can serve as both a component and a basic catalyst. This is an elegant way of synthesis, because impurities resulting from the use of additional catalysts can be avoided. Thus, monomer 1 was simultaneously polymerized with 2, 3, 4 or 5 at a much lower temperature (120 °C) using a stoichiometric ratio of 1:1. It is apparent from silicon chemistry that silica and OAMS moieties can undergo Si-O-Si bond formation to form a class II hybrid structure within the phenolic resin/SiO2/OAMS hybrid compound. [21][22][23] This feature can be readily evidenced by solid state 29Si- and 13C-NMR spectroscopy, shown in Fig. 2 and the ESI.† Altogether, the 29Si-CP-MAS-NMR spectra of the hybrid materials (P2-P5) show the expected Q- and D-signals originating from monomer 1 and the corresponding amino functionalized monomers (2-5). In each case Q4 (−110 ppm) is the most intense signal in the silica region of the spectrum, although the cross polarization technique used overrates the Q2 (−90 ppm) and Q3 (−100 ppm) signals due to polarization transfer from 1H to 29Si. This result indicates that the silica network is highly condensed. The same applies to the OAMS part, as no signals from the monomer are detectable. Instead, a rather broad signal at −17.6 ppm is found, which can be assigned as D(Q). 24 Parts of the OAMS are covalently bonded to silica and form a copolymer, related to Co-STP as shown in Fig. 1, ESI.† The broad line widths of the D(Q)-signals are typical for a high dispersion of chemical shifts and a reduced flexibility of the OAMS moiety. The formation of phenolic resin and of OAMS can be clearly evidenced by means of 13C-CP-MAS-NMR spectroscopy. The signal-structure assignments are depicted in Fig. 2. The solid state NMR spectra again evidence the formation of the phenolic resin, silica and OAMS (Fig. S3, ESI†). The different monomer ratios are reflected in the NMR spectra, and the signal intensities of phenolic resin/OAMS change accordingly. The 13C-NMR spectra show no differences in the chemical shifts. An increasing content of monomer 2 also leads to more intense D-signals, as shown in the 29Si-NMR spectra (Fig. 3). A low ratio of monomer 2 results in D(Q) species. The corresponding signal arises at −17 ppm. Because of the formation of the silica-OAMS copolymer, the extractable content of sample P2_1 (0.5 wt%) is low. Longer chains (or rings) of OAMS are formed upon increasing content of monomer 2.
The signal is shifted to higher field (−21 ppm), related to D2 structures. 24 Also, a larger quantity of OAMS could be extracted (P2_4: 31.2 wt%) because fewer covalent bonds were formed between the silica network and the OAMS. Due to the absence of monomer 1, the material P2_5 shows only D- and no Q-signals. The intensities of the Q-signals of P2_1 to P2_4 are affected by the monomer ratio used. A reduction of monomer 2 leads to increased Q3 signals because the concentration of the basic catalyst (2) is decreased. High angle annular dark field (HAADF) scanning transmission electron microscopy (STEM) images of selected hybrid materials show domain sizes of 2-4 nm (Fig. 4a). This is much lower than for comparable phenolic resin/silica composites synthesized by a sol-gel process. 25,26 The materials are transparent and no macroscopic agglomeration was observed in any of the cases (Fig. 4b and ESI†). Thermogravimetric analysis (TGA) of the organic-inorganic hybrid materials shows a slight weight loss of 2.11 wt% (P2_1) to 8.66 wt% (P2_5) between 30 and 200 °C. Further decomposition of the hybrid material is comparable to that of aminopropyl-modified hybrid materials known from the literature. 12 In order to check the accessibility of the amino groups in the hybrid materials after STP, the reaction with different aldehydes was studied (Fig. 5). After washing and drying procedures, the formation of the Schiff base was examined by infrared (IR) and ultraviolet-visible (UV/Vis) spectroscopy. The treated materials show additional IR bands at 2230, 1643 and 696 cm−1 belonging to C≡N and C=N vibrations and at 1346 cm−1 assigned to the N-O vibration of the nitro group (see ESI†). The conversion of the amino groups within the hybrid material was 14.4-66.7%, determined by quantitative elemental analysis of the N-content with respect to 100% theoretical conversion (see ESI†). The functionalized hybrid materials SB1-3 obtained have yellow to orange color shades. UV/Vis absorption spectra of the untreated composite P2_3 and of SB1-3 have been recorded by means of diffuse reflectance UV/Vis spectroscopy. The UV/Vis absorption bands are non-symmetric and show several UV/Vis absorption maxima. A bathochromic shift of the UV/Vis absorption band can be observed for the Schiff bases SB1-3 in comparison to the untreated composite material P2_3 (λmax = 294 nm) (Δν̃(P2_3 − SB3) = 2268 cm−1), which confirms the accessibility of the amino groups and the formation of Schiff bases (Fig. 6). In addition, the UV/Vis absorption maximum of the Schiff bases shifts bathochromically from 302 nm (SB1) to 312 nm (SB2) and 315 nm (SB3). This is caused by an increase in the strength of the push-pull π-electron system with the increasing accepting capacity of the corresponding substituent (CN → NO2). Application of this specific type of twin monomer combination in heat-induced coating, particle fabrication or other purposes will be demonstrated soon. This work was performed within the Federal Cluster of Excellence EXC 1075 "MERGE Technologies for Multifunctional Lightweight Structures" and DFG SP 392/34-1, supported by the German Research Foundation (DFG). Financial support is gratefully acknowledged.
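The bathochromic shift quoted above can be checked by converting the absorption maxima from wavelength to wavenumber; the short sketch below reproduces the reported value of roughly 2268 cm−1 from the 294 nm (P2_3) and 315 nm (SB3) maxima.

```python
def wavenumber_cm1(wavelength_nm: float) -> float:
    """Convert a wavelength in nm to a wavenumber in cm^-1 (1 cm = 1e7 nm)."""
    return 1e7 / wavelength_nm

lam_P2_3, lam_SB3 = 294.0, 315.0  # UV/Vis absorption maxima in nm
shift = wavenumber_cm1(lam_P2_3) - wavenumber_cm1(lam_SB3)
print(f"Bathochromic shift: {shift:.0f} cm^-1")   # ~2268 cm^-1
```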
2,212
2014-07-29T00:00:00.000
[ "Chemistry", "Materials Science" ]
Microstructure and Properties of Ultrafine Cemented Carbides Prepared by Microwave Sintering of Nanocomposites: Ultrafine cemented carbides were prepared by microwave sintering, using WC-V8C7-Cr3C2-Co nanocomposites as a raw material. The effects of sintering temperature and holding time on the microstructure and mechanical properties of the cemented carbides were studied. The results show that the ultrafine cemented carbides prepared at 1300 °C for 60 min have good mechanical properties and a good microstructure. The relative density, Vickers hardness, and fracture toughness of the specimen reach the maximum values of 99.79%, 1842 kg/mm2 and 12.6 MPa·m1/2, respectively. Tungsten carbide (WC) grains are fine and uniformly distributed, with an average grain size of 300-500 nm. The combination of nanocomposites, secondary pressing, and microwave sintering can significantly reduce the sintering temperature and inhibit the growth of WC grains, thus producing superfine cemented carbides with good microstructure and mechanical properties. Introduction Cemented carbides are one of the most widespread powder metallurgy products worldwide. The reason for this is that, compared with other cutting materials (such as diamond or high-speed steel), they have an excellent combination of hardness and toughness [1,2]. With rapid industrial development, the design and performance requirements imposed on cemented carbide tools are constantly becoming more onerous, and the performance of ordinary grain-size cemented carbide cannot fully meet modern industrial demands. Ultrafine (nanocrystalline) cemented carbides have, in theory, superior mechanical properties to conventional coarse-grained cemented carbides [3,4], such as hardness, wear resistance, and flexural strength. Some studies have shown that when the content of the Co-phase is constant and the WC grain size reaches the ultrafine or nanocrystalline range, the hardness and toughness of cemented carbides are improved [5,6]. When the WC grain size is less than 500 nm, the hardness and toughness of the cemented carbides are greatly improved. Therefore, the development of ultrafine or nanocrystalline cemented carbides with high hardness and high strength has become a key research issue among those working with cemented carbide materials [7]. In recent years, scholars have adopted many new technologies and methods to improve the mechanical properties of cemented carbides, prolong the service life of cemented carbide tools, and reduce the production costs thereof [8][9][10]. The key to the preparation of ultrafine cemented carbides is the effective inhibition of WC grain growth during the preparation and sintering of the powdered raw materials [11,12]. Therefore, grain growth inhibitors (GGIs) and rapid sintering methods have been used in ultrafine cemented carbides to mitigate and suppress the growth of grains at elevated sintering temperatures [13]. Materials and Methods Nano-WC (purity >99.9%, average particle size <200 nm, Shanghai Shuitian Material Technology Co., Ltd., Shanghai, China), nano-V8C7 (purity >99.9%, average particle size <200 nm, Shanghai Shuitian Material Technology Co., Ltd.), nano-Cr3C2 (purity >99.9%, average particle size <100 nm, Shanghai Shuitian Material Technology Co., Ltd.) and nano-Co (purity >99.9%, average particle size <50 nm, Shanghai Shuitian Material Technology Co., Ltd.) powders were used as raw materials.
According to a certain ratio (WC:V8C7:Cr3C2:Co = 89.5%:0.25%:0.25%:10%), the raw materials were mixed in a planetary ball mill (QM-3SP2, Nanjing Laibe Industrial Co., Ltd., Nanjing, China) for ball milling (150 rpm), at a ball-to-powder weight ratio of 5:1, wherein the ball milling medium was anhydrous ethanol (liquid-solid ratio 350 mL/kg). After being milled for 8 h, the mixture was dried at 90 °C for 24 h. An appropriate amount of paraffin was added to these materials and mixed evenly. The mixed powders (after stuffing) were pressed into 6 mm × 6 mm × 28 mm cuboid specimens using a hydraulic press. They were then further pressed using a cold isostatic press, to increase the density of the green body. The pressed specimens were vacuum-dried at 100 °C for 24 h. Finally, the dried specimens were sintered in a microwave sintering furnace (RWS-3, Hunan Zhongsheng Thermal Energy Technology Co., Ltd.) at sintering temperatures of 1100, 1200, 1300, and 1400 °C, for holding times of 20, 40, 60, and 80 min. Before sintering, the sintering furnace was pumped out to a vacuum of 1 × 10−2 Pa, and then argon gas was introduced to form a protective atmosphere at a flow rate of 20 mL/min. The heating and cooling rates were 10-50 °C/min and 8-30 °C/min, respectively, and subsequent dewaxing was conducted at 610 °C for 30 min. The phase composition of the specimens was measured using a D8 AA25 X-ray single crystal diffractometer (Bruker, Germany) with Cu-Kα radiation in the range 20° ≤ 2θ ≤ 90°. The microstructure and grain size of the specimens were observed by INSPECT F50 scanning electron microscopy (FEI, Hillsboro, OR, USA). The sintered specimens were polished with diamond paste before scanning electron microscopy. The specimen density was measured by a digital solid densitometer using the Archimedes method. The hardness of the specimens was measured using an FM-700 Vickers microhardness tester (Toshiba Teli, Tokyo, Japan) under a load of 1 kgf and a loading (displacement-controlled) rate of 50 µm/s. The indentation fracture toughness K_IC was estimated by applying the Palmqvist model to the cracks generated by indentation, using the Shetty equation [20]. Results and Discussion Figure 1 shows the X-ray diffraction (XRD) patterns of the raw materials and the specimens sintered at different temperatures for 40 min. As shown in Figure 1a, the diffraction peaks of the composite powders are mainly composed of WC and Co-phases, and there are no diffraction peaks representing either V8C7 or Cr3C2. This is mainly because the amount of grain inhibitors added in this experiment was small (a mass percentage of 0.5%) and not within the detection range of the X-ray diffractometer (mass percentage >1%). When the sintering temperature is 1100 °C, the product is mainly composed of WC, and the diffraction peak intensity of the Co-phase is very weak (Figure 1b). This indicates that a liquid phase has appeared in the specimen sintered at 1100 °C, and some WC begins to dissolve in the liquid Co, resulting in the reduction in Co content. Compared with traditional sintering methods [21,22], the liquid phase can appear at a lower sintering temperature. This is mainly because microwave sintering is rapid and relies on a unique heating mechanism.
This method is a new sintering method, which uses the dielectric loss of materials to absorb microwave energy directly through the interaction between the microwave and material particles (molecules or ions). In addition, the microwave field can enhance the ionic conductivity. The high-frequency electric field can promote the migration of charged vacancies in the grain surface, thus resulting in plastic deformation, similar to diffusion creep, and promoting sintering [23]. With the increase in sintering temperature, an η-phase W3Co3C occurs when the sintering temperature is 1200 °C (Figure 1c). This is mainly because, with the increase in sintering temperature, a large amount of WC dissolves in the Co solution and a small amount of the W3Co3C phase dissolves and precipitates. When the sintering temperature is 1300 °C, the reaction products mainly consist of WC, and the W3Co3C phase disappears (Figure 1d). Moreover, the diffraction peak representing WC has a higher intensity than that at 1200 °C, mainly due to the decomposition of the W3Co3C phase at this temperature, and the high solubility of the nano-inhibitors V8C7 and Cr3C2 in the Co solution. This hinders the dissolution of WC in Co, so that the diffraction peak of WC has a high intensity [24]. When the sintering temperature is 1400 °C, the composition of the reaction product remains unchanged, but the diffraction peak of WC becomes sharper (Figure 1e). This is mainly because, when the temperature is too high, the solubility of the nano-inhibitor in the Co liquid reaches saturation, and it is difficult to inhibit the dissolution and growth of the WC particles in the Co liquid. This results in the increase in WC grain size and the narrowing of its diffraction peak. Figure 2 shows the XRD patterns of the specimens sintered at 1300 °C for different holding times. As shown in Figure 2, the specimens are mainly composed of WC and contain a small amount of Co-phase. The diffraction peaks of WC gradually shift to smaller angles with increased holding time. This is mainly due to the increase in the interplanar spacing and the decrease of the diffraction angles of WC with increased holding time [25]. To observe the surface morphology and pore distribution of the sintered specimens, scanning electron microscopy (SEM) measurements were conducted on specimens prepared at different sintering temperatures and holding times (Figure 3; Figure 4). As shown in Figure 3a, when the sintering temperature is 1100 °C, there are many pores on the surface of the specimen (the residual porosity is 8.36%), most of which are 5-50 µm in size. With the increase in the sintering temperature, the number and size of the voids on the surface of the specimens decrease gradually. When the sintering temperature is 1300 °C, the specimen contains fewer pores and these are smaller (the residual porosity is 0.93%), as shown in Figure 3c.
This indicates that the grain growth of WC can be effectively inhibited at the sintering temperature of 1300 °C, which is beneficial to the rearrangement and densification of the particles, thus reducing the number and size of the pores, and improving the mechanical properties of the specimen. It can be seen in Figure 4 that the trend in the surface structure of the specimens under different holding time conditions is similar to that seen in Figure 3. When the holding time is 60 min, the number of pores on the surface of the specimen is lower (the residual porosity is 0.18%), the pores are smaller, and the microstructure of the specimen is relatively uniform, as shown in Figure 4c. Figure 5 shows the SEM images of the composite powders and specimens sintered at different temperatures for 40 min. As shown in Figure 5a, the particles are spherical or quasi-spherical, and the average particle size is about 200 nm; however, a small amount of agglomeration occurs after ball-milling. This is mainly due to the small grain size of the raw materials, their high specific surface area, and high activity. After high-energy ball-milling, a small amount of soft agglomeration readily occurs [26]. As shown in Figure 5b,c, the distribution of the binder phase Co is not uniform at 1100 and 1200 °C. This shows that WC is dissolving and becoming rearranged in the Co-phase, which leads to the uneven distribution of the Co-phase. When the sintering temperature is 1300 °C, WC grains are polygonal or spherical, with an average size of about 500 nm, and these are uniformly distributed in the Co-phase, as shown in Figure 5d. When the sintering temperature is 1400 °C, the WC grains increase in size, showing quadrilateral or polygonal forms, and the average grain size is about 1 µm (Figure 5e). This is mainly due to the dissolution, growth, and re-precipitation of the WC particles with increasing temperature, resulting in the increase in grain size. Figure 6 shows the backscattered electron (BSE) images of specimens sintered at 1300 °C for different holding times, and the EDS spectrum of the specimen sintered for 60 min. The trend in Figure 6 is similar to that in Figure 5. When the holding time is 60 min, the structure of the specimen is more uniform, as shown in Figure 6c. The WC grains are polygonal or quasi-spherical, and the grain size is 300-500 nm. The sintering temperature (1300 °C) is about 100 °C lower than that used in traditional sintering processes [21,22], mainly because microwave sintering can reduce the activation energy, accelerate diffusion, and increase the rate of densification [27]. In addition, nanocomposite GGIs have a high solubility and mobility in liquid cobalt, which can prevent the dissolution and growth of the WC particles and further optimise the microstructure of the alloy [28]. Another mechanism causing the aforementioned inhibition is that nanocomposite GGIs segregate to the WC-Co grain boundaries and their triple junctions, which hinders WC grain growth by exerting a pinning force (Zener drag) on the moving grain boundaries [13]. As shown in Figure 6e, the main components of the selected area are C, W, and Co, with a small amount of V, Cr, and O. The reason for the presence of a small amount of O may be that nano-Co powders have high activity and are easily oxidised in air; this may lead to a small amount of Co being oxidised during mixing, moulding, and the like.
Figure 7 and Table 1 show the relative density, Vickers hardness, and fracture toughness of the ultrafine cemented carbides sintered at different temperatures (1100, 1200, 1300, and 1400 °C) for 40 min. They show that the relative density, Vickers hardness, and fracture toughness of the cemented carbide specimens first increase, then decrease with increasing sintering temperature or prolonged holding time [29]. When the sintering temperature is 1300 °C, the relative density, Vickers hardness, and fracture toughness of the specimen reach the maximum values of 98.38%, 1759 kg/mm2, and 12.2 MPa·m1/2, respectively. The fracture toughness as a function of sintering temperature is 9.6 MPa·m1/2 at 1100 °C, 10.3 MPa·m1/2 at 1200 °C, 12.2 MPa·m1/2 at 1300 °C, and 11.5 MPa·m1/2 at 1400 °C. The hardness and fracture toughness of the WC-V8C7-Cr3C2-Co ultrafine cemented carbides sintered at 1400 °C decrease (albeit slightly). The main reason for this is that, when the sintering temperature reaches 1400 °C, an excess amount of liquid phase is produced, too much WC dissolution and precipitation occurs, and WC grains grow abnormally, resulting in the decrease in hardness and fracture toughness of the specimen. In addition, the relative density of the WC-V8C7-Cr3C2-Co ultrafine cemented carbide sintered at 1400 °C is decreased. This is mainly attributed to the over-burning of the specimen caused by too high a sintering temperature, resulting in swelling and a reduction in the density of the specimen. Figure 8 and Table 2 show the relative density, Vickers hardness, and fracture toughness of the ultrafine cemented carbides sintered at 1300 °C for different holding times (20, 40, 60, and 80 min); the trend is similar to the effect of sintering temperature on the relative density, Vickers hardness, and fracture toughness of the specimens (Figure 7 and Table 1). When the holding time is 60 min, the relative density, Vickers hardness, and fracture toughness of the specimens reach the maximum values of 99.79%, 1842 kg/mm2, and 12.6 MPa·m1/2, respectively. This is mainly because the power of the microwave field between the particles is almost 30 times that of the external field, which leads to the enhancement of surface ionisation. Consequently, the rate of ionic diffusion and the ion kinetic energy increase across the entire area, particularly at grain boundaries. This results in the formation of more uniformly distributed grain sizes and a denser body [17,30]. When the holding time is prolonged to 80 min, the relative density, Vickers hardness, and fracture toughness of the specimen begin to decrease, which is similar to the decrease in the properties of the specimen caused by the increase in sintering temperature (i.e., to 1400 °C). Conclusions Ultrafine cemented carbides were prepared by microwave sintering using nanocomposites as raw materials. Cemented carbides with an average WC grain size of about 300-500 nm can be obtained after processing at 1300 °C for 60 min. The relative density, Vickers hardness, and fracture toughness of the specimen reach the maximum values of 99.79%, 1842 kg/mm2, and 12.6 MPa·m1/2, respectively. With increasing sintering temperature or prolonged holding time, the number and size of voids on the surface of the specimens decrease gradually, and the mechanical properties of the specimens first increase, and then decrease. Microwave sintering can reduce the activation energy, accelerate diffusion, and increase the degree of densification of the alloy. Nanocomposite GGIs can prevent the dissolution and growth of the WC particles, further optimise the microstructure, and improve the mechanical properties of the alloy. Another mechanism causing this inhibition is that nanocomposite GGIs segregate to the WC-Co grain boundaries and their triple junctions, which hinders WC grain growth by exerting a pinning force (Zener drag) on the moving grain boundaries.
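As a worked illustration of the density evaluation mentioned in the experimental section, the sketch below combines an Archimedes measurement with a rule-of-mixtures theoretical density for the WC-V8C7-Cr3C2-Co composition; the sample masses are hypothetical and the component densities are nominal handbook-level values, not data from this work.

```python
# Archimedes relative density for a WC-V8C7-Cr3C2-Co specimen (illustrative values only)
RHO_WATER = 0.9982  # g/cm^3 near room temperature

def archimedes_density(mass_in_air_g, mass_in_water_g, rho_water=RHO_WATER):
    """Bulk density from the dry mass and the apparent mass when suspended in water."""
    return mass_in_air_g * rho_water / (mass_in_air_g - mass_in_water_g)

def theoretical_density(weight_fractions, densities):
    """Rule-of-mixtures density: 1/rho = sum(w_i / rho_i)."""
    return 1.0 / sum(w / rho for w, rho in zip(weight_fractions, densities))

# Nominal component densities in g/cm^3 for WC, V8C7, Cr3C2, Co (handbook-level values)
densities = [15.6, 5.7, 6.7, 8.9]
weights   = [0.895, 0.0025, 0.0025, 0.10]   # mass fractions from the stated mixing ratio

rho_th   = theoretical_density(weights, densities)
rho_meas = archimedes_density(mass_in_air_g=9.000, mass_in_water_g=8.375)  # hypothetical weighing
print(f"theoretical density ~ {rho_th:.2f} g/cm^3")
print(f"measured density    ~ {rho_meas:.2f} g/cm^3")
print(f"relative density    ~ {100 * rho_meas / rho_th:.1f} %")
```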
6,293
2020-06-13T00:00:00.000
[ "Materials Science" ]
Fermi Energy-Incorporated Generalized BCS Equations for the Temperature-Dependent Critical Current Density and the Related Parameters of a Superconductor for All T ≤ Tc and Their Application to Aluminium Strips Presented here are the Generalized BCS Equations incorporating Fermi Energy for the study of the {Δ, Tc, jc(T)} values of both elemental and composite superconductors (SCs) for all T ≤ Tc, where Δ, Tc and jc(T) denote, respectively, one of the gap values, the critical temperature and the T-dependent critical current density. This framework, which extends our earlier study that dealt with the {Δ0, Tc, jc(0)} values of an SC, is also shown to lead to T-dependent values of several other related parameters such as the effective mass of electrons, their number density, critical velocity, Fermi velocity (VF), coherence length and the London penetration depth. The extended framework is applied to the jc(T) data reported by Romijn et al. for superconducting Aluminium strips and is shown not only to provide an alternative to the explanation given by them, but also to reveal some novel features, such as the role of the Sommerfeld coefficient γ(T) in the context of jc(T) and the role of VF(T) in the context of a recent finding by Plumb et al. about the superconductivity of Bi-2212. Introduction Adopting the framework of the Fermi energy (EF)-incorporated generalized BCS equations (GBCSEs), we deal here with the calculation of the critical current density jc(T), for all T between 0 and Tc, of a superconductor (SC) which is not subjected to any external magnetic field. Specifically, we address here the data obtained by Romijn et al. [1] for superconducting aluminium strips, for which it suffices to apply GBCSEs in the scenario where Cooper pairs (CPs) are formed via the one-phonon exchange mechanism (1 PEM). However, with high-Tc SCs in mind, also given here are GBCSEs that enable one to deal in a unified manner with the gaps (Δs), Tc and jc(T) of a composite, multi-gapped SC requiring more than 1 PEM. The paper is organized as follows. In order to provide a perspective of the conceptual basis of the conventional, multi-band approach (MBA) to the study of the set {Δ, Tc, jc(T)} of a composite SC vis-à-vis that of the GBCSEs-based approach, we include in this section an overview of both approaches. Since the data in [1] are explicable in the conventional approach via both the phenomenological Bardeen equation [2] and the Kupriyanov and Lukichev (KL) [3] theory discussed below, the purpose of this paper is to show that the GBCSEs-based approach provides a valuable alternative explanation of the same data. In the next section are given the EF-incorporated GBCSEs in the scenario when a two-phonon exchange mechanism (2 PEM) is operative. These equations provide a unified framework for the description of the set {Δ2, Tc, j0}, where Δ2 is the larger of the two gaps of the SC. Application of the GBCSEs to the jc(T) data in [1] is taken up in Section 3, where they are also shown to provide the values of several other related T-dependent parameters such as the Sommerfeld coefficient, the effective mass of electrons, their number density, critical velocity, Fermi velocity, coherence length and the London penetration depth. Sections 4 and 5 are devoted, respectively, to a Discussion of our approach and the Conclusions following from it. Insofar as jc(T) is concerned, it is empirically known that it has its maximum value at T = 0 and that jc(Tc) = 0.
It took considerable time for theoretical attempts to evolve before the observed variation of jc(T) between these limits could be explained. Perhaps the earliest such attempt was due to London, who gave an equation for jc(T) valid at T = Tc, but which failed at lower temperatures because, as was later realized, it did not take into account the effect of the change in the order parameter with current/temperature. The equation for jc(T) given by the phenomenological Ginzburg-Landau (GL) theory marks the next stage in the said evolution. This equation works well close to Tc, but not for much lower temperatures. As is well known, the GL theory reduces to the London theory when the concentration of superconducting electrons is uniformly distributed and, as was shown by Gor'kov, the microscopic theory in which the energy gap is taken as an order parameter leads to the GL theory near Tc. These brief considerations suggest the need to appeal to the microscopic theory in order to explain the observed variation of jc(T) for all T ≤ Tc. Before we do so, it is relevant to draw attention to a phenomenological equation for jc(T) given by Bardeen [2] post-BCS. This equation, valid for all T between 0 and Tc and obtained by treating the gap as a variational parameter and minimizing the free energy for a given current, is jc(T) = jc(0) [1 − (T/Tc)²]^(3/2). (1) Equation (1) is applicable to SCs for which changes in the energy gap with position can be neglected. Since the thin samples of Romijn et al. [1] satisfied the condition(s) of validity of (1), they applied it to one of their samples and found that it indeed fits their data well. Nonetheless, (1) does not address the core issue of the problem, i.e., to identify the parameters on which jc(T) depends. The knowledge of these is essential because it provides a handle to control jc(T). To unravel what lies beneath the "blanket" of (1), one needs a microscopic theory, viz. the theory given by Eilenberger, which is derived from the original Gor'kov theory under certain simplifying assumptions. Overview of the GBCSEs-Based Approach in Dealing with the Set {Δ, Tc, jc(T)} of a Composite SC Complementing MBA, and presented in a recent monograph [7], is an approach based on the GBCSEs, which too has been applied to a significant number of SCs. One of the premises of this approach is that Fermi energy (EF) plays a fundamental role in determining the superconducting properties of an SC. We recall that the GBCSEs follow from a Bethe-Salpeter equation the kernel of which is a super propagator. The latter feature leads to the characterization of a composite SC by CPs with multiple binding energies (|W|s). A salient feature of this approach is that it invariably invokes a λ for each of the ion-species that may cause pairing, whence one has the same λs in the equations for any Δ and the corresponding Tc of the SC, as is the case for elemental SCs. Multiple gaps arise in this approach because different combinations of λs operate on different parts of the Fermi surface due to its undulations. Each of the |W|s so obtained is identified with a Δ of the SC. Thus, as shown in [7], with the input of the values of any two gaps of an SC and a value of its Tc, this approach goes on to shed light on several other values of these parameters. This is not so for MBA, another feature of which is that, even when it is employed to deal with the same SC by different authors, the number of bands invoked is not always the same.
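For orientation, the temperature dependence of Bardeen's phenomenological expression, Equation (1), can be tabulated in a few lines; the jc(0) value used below is an arbitrary placeholder, not a measured quantity from [1].

```python
import numpy as np

def bardeen_jc(t, jc0=1.0):
    """Bardeen's phenomenological critical current density,
    jc(T) = jc(0) * (1 - t^2)^(3/2), with t = T/Tc the reduced temperature."""
    t = np.asarray(t, dtype=float)
    return jc0 * (1.0 - t**2) ** 1.5

for t in (0.0, 0.25, 0.5, 0.75, 0.9, 1.0):
    print(f"t = {t:4.2f}   jc(t)/jc(0) = {bardeen_jc(t):.3f}")
```

The expression falls from its maximum at T = 0 to zero at Tc, in line with the empirical limits noted above.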
EF-Incorporated GBCSEs for the Tc, Δs and jc of a Hetero-Structured SC. Equations for |W20| (to be identified with Δ20, the larger of the two gaps at T = 0), W2(t = T/Tc) and Tc [7]. The equation for |W20| is Equation (2), the equation for |W2(t)| for 0 < t < 1 is Equation (3), and the equation for Tc is Equation (4). In these equations, θ1 and θ2 are the Debye temperatures of the ion-species that cause pairing and are obtained from the Debye temperature θ of the SC as detailed in [7] and [8]; θm is the greater of the temperatures θ1 and θ2, and the variation of the chemical potential with temperature has been ignored in order to avoid an under-determined set of equations. Thus, we have assumed that the chemical potential μ(T) ≈ μ(0) = EF. Equation for a Dimensionless Construct y0 at T = 0 Which Enables One to Calculate j0 and Several Other Superconducting Parameters. The construct y0 is defined in terms of m*(0), the effective mass of superconducting electrons at T = 0, and P0, their critical momentum. The exercise carried out above leads to a multitude of values for the set S = {EF, λ1, λ2}, each of which is consistent with the |W20| and Tc values of the SC. In order to find the unique set of values from among them that also leads to the empirical value of j0 of the SC, we solve Equation (6) for y0 [9] for each triplet of {EF, λ1, λ2} values. The operator Re in that equation ensures that the integrals yield real values even when the expressions under the radical signs are negative (as happens for the heavy-fermion SCs). Corresponding to each value of y0 obtained by solving (6) with the input of {θ, θ1, θ2, EF, λ1, λ2}, we can calculate several superconducting parameters in terms of θ, EF, y0, the gram-atomic volume vg and the electronic heat constant/Sommerfeld coefficient γ0 at T = 0 by employing the relations given in [8] (and, for a correction, [10]), in which me is the free electron mass and e the electronic charge, and where Ns0 denotes the number density of CPs and Vc0 their critical velocity at T = 0. Comparison of the j0-values so obtained with the experimental value of j0 then leads to the desired unique set of {EF, λ1, λ2}-values that is consistent with the empirical values of |W20|, Tc, and j0 of the SC. Equation (11) then yields j0 in terms of y0 and the above parameters. Equation for the Dimensionless Construct y(t). Each value of y(t) obtained by solving Equation (13) for different values of the set S = {EF, λ1, λ2} leads to the corresponding value of jc(t) via (11), and to the values of the associated parameters via (7)-(10). Explanation of the Empirical jc(t) Values of Aluminium Strips Via GBCSEs. By adding to the RHS of each of the Equations (2), (3), and (4) a term in which λ2 and θ2 are replaced by λ3 and θ3, respectively, the above framework is easily extended to a 3 PEM scenario where pairing is caused by three ion-species. On the other hand, in order to deal with the data reported for Aluminium strips in [1], we need the reduced framework of 1 PEM. This is obtained by putting λ2 = 0 in each of the three equations just noted. The j0 Data. Reported in [1] are various superconducting properties for six samples of aluminium strips at T = 0. Among these, Tc and j0 are determined experimentally, whereas values of some other parameters such as the coherence length ξ and the London penetration depth λL are model-dependent derived properties.
In order to show how the GBCSEs-based approach works in such a situation, we give below the sequential steps that are followed for Sample 1 in [1]. 1) The Tc of the sample is 1.196 K. We take its Debye temperature θ to be 428 K [11]. Employing these values and putting λ2 = 0 (as is appropriate for 1 PEM), solutions of (4) for some typical values of EF are: λ1 = 0.1665 for any value of EF = 10-100 kθ; λ1 = 0.1666 for EF = 5kθ; λ1 = 0.1670 for EF = 2kθ. 2) For some select pairs of {EF, λ1} values obtained above, we now solve (6) for y0; the results are given in Table 1, which also gives similar results for the remaining five samples dealt with in [1]. The jc(t) Data. Insofar as jc(t) is concerned, Romijn et al. [1] report empirical values along with their counterparts as obtained via the phenomenological Bardeen Equation (1) and the KL theory [3]. We recall that j0 of Sample 5 in our approach was calculated via the following values of the associated parameters: θ = 428 K, Tc = 1.356 K, λ1 = 0.1701, EF = 12.6kθ, y0 = 132.0, γ0 = 1.36 mJ mol−1 K−2, and vg = 10 cm3/gram-atom. In order now to calculate jc(t) for this sample, we need to take into account the T-dependence of all these parameters. It seems reasonable to assume that, among them, θ, EF and vg retain the values employed for them at T = 0. This assumption enables us to calculate y(t) for any "t" via (13) (with λ2 = 0); the resulting values are given in Table 2. We could now calculate jc(t) if we knew γ(t), about which, however, we have no information. We do know that the heat capacity of an SC has a marked non-linear dependence on t, as discussed, e.g., in general in [12] and for superconducting Ga in ([13], p. 411). We are hence led to calculate γ(t) with the input of jc(t), rather than the other way around, via Equation (18), the LHS of which for any "t" is taken to be given by (17) and the RHS of which is calculated via (11) with the input of y(t) obtained by solving (13). These values of γ(T) are included in Table 2. Considered together with the values of y(t), they will be shown below to provide a microscopic justification of Bardeen's phenomenological Equation (1). It is also remarkable that y(t) and γ(t) enable one to obtain quantitative estimates of several other t-dependent superconducting parameters, viz., s(t) = m*(t)/me, ns(t) = Ns(t)/Ns0, vF(t) = VF(t)/VF0, vc(t) = Vc(t)/V0, ξr(t) = ξ(t)/ξ0, and λLr(t) = λL(t)/λL0. The plots of these are discussed below. Discussion 1) The T-dependence of γ(T) in the context of jc(t) is a new feature of the approach followed here. We recall that, as is well known, γ is usually defined via the low-temperature relation Cv/T = γ + (464.6/θ³) T², (19) where Cp (Cv) is the heat capacity of the SC at constant pressure (volume) at very low temperatures (<10 K). The experimental data are usually plotted in the form Cv/T vs. T², which yields an intercept equal to γ and a slope equal to 464.6/θ³. The generally reported values of γ in the literature, e.g. in [11], obtained in this manner correspond to T = 0. It should also be noted that the simple relation (19) is invalid when magnetic and nuclear contributions are significant and, importantly, that γ is directly proportional to N(EF), the density of states of electrons at the Fermi level. The latter of these features implies that we are taking into account the T-dependence of N(EF) via γ(T).
2) It was assumed above that, among the five parameters that are required for the calculation of jc(t) via (11), we need to take into account the T-dependence of only two of them, viz., y and γ, which are obtained by solving (13) and (18), respectively. Since the T-dependence of jc(t) is then governed by y(t) and γ(t), the resulting plot of jc(t)/jc(0) is seen to be almost indistinguishable from the plot of R(t). It therefore follows that our approach based on the microscopic BSE provides a detailed theoretical justification of the phenomenological Bardeen Equation (1) for jc(t). 3) In the approach followed in [1], while both jc(0) and jc(t ≠ 0) depend on e, Tc, VF, ρF and ℏ, the expression for the latter requires additional parameters, as is seen from the Appendix in [1]. In the approach followed in this paper, no such additional parameters are required to deal with jc(t ≠ 0); one simply invokes, where applicable, the T-dependence of the parameters on which jc(0) depends, viz. e, θ, y0, γ0, vg and EF. The two approaches may therefore be said to complement each other. 4) The expression for jc(0) in [1] depends on VF, which is not so in the GBCSEs-based approach, where jc(0) depends on Vc(0). Nonetheless, it is interesting to note that the value of VF in [1] is assumed to be 1.36 × 10^8 cm/s for all samples and is not T-dependent whereas, in the approach followed here, it differs from sample to sample and is T-dependent. For Sample 5, it varies between (2.07-6.95) × 10^7 cm/s for 0 ≤ t ≤ 1. For this sample, the values of ξ0 and λL0 too differ in the two approaches: while reported in [1] for these parameters are, respectively, the values 1.32 × 10^−4 cm and 1.10 × 10^−4 cm, the corresponding values determined by us are 2.39 × 10^−5 cm and 1.00 × 10^−5 cm. 5) Shown in Figure 2 are the plots of w1(t), s(t) and ns(t) vs. t for Sample 5 in [1]. Among these, even though the plot of w1(t) is obtained via an EF-incorporated GBCSE with EF = 12.6kθ, it is very similar to the plot one obtains for Δ(T)/Δ0 for an elemental SC via the usual BCS equation sans EF. While we could not find any experimental data for the parameter s(t) ≡ m*(T)/me for the SC under consideration, we draw attention to a plot of this parameter for Pb and Ta given in Figure 7 of [12]. This plot covers temperatures up to about 120 K and therefore does not specifically shed light on the behavior of s(t) in the superconducting state. Nonetheless, it is notable that it displays a parabolic decrease over a major part of the range of temperatures over which it is plotted. As for ns(t), as shown in Figure 2, we find that a good analytic fit to the calculated values is provided by a power of (1 − t²). However, it is also well known that, factually, the T-dependences of superconducting parameters often differ from those following from the simple two-fluid model ([11], p. 48). 6) In the context of Figure 3, which is the plot of the reduced Fermi velocity vF that our approach has led to, we draw attention to a paper by Plumb et al. [14], who have reported that "Associated with this feature (a kink-like feature observed at extremely low energy along the superconducting node in Bi-2212), the Fermi velocity scales substantially, increasing by roughly 30% from 70 to 110 K. The temperature dependence of the feature suggests a possible role in superconductivity, although it is unclear at this time what mechanism(s) may lead to this low-energy renormalization".
We are hence led to suggest that Figure 3 provides both: a plausible explanation that Plumb et al. sought for their result and a validation of our approach. (7) and (8). For values of W 10 , s(0) and N s0 , see Table 1 As concerns the rather accurate numerical fits that we have obtained for the values of various empirical parameters associated with j c , it is remarkable that each of them is found to vary as some power of (1 − t 2 ). Viewed in conjunction with (1), it provides another example of the deep physical insight that Bardeen had without the benefit of a detailed microscopic theory governing j c . Conclusions It has been shown above that the GBCSEs-based approach provides a valuable alternative to the explanation of the Romijn et al.'s j c (T) empirical data for superconducting Al strips based on the KL [3] approach derived from the Eilenberger equations which, in turn, follow from the microscopic Gor'kov theory when certain simplifying assumptions are made. Unique features of the GBCSEs-based approach are: 1) by appealing to the j c (0) value of an SC, it leads to a unique value of E F that enables one to deal with its {Δ, T c , j c (T)} values in a unified framework, 2) with E F thus fixed, appeal to the j c (t) values of the SC leads to a new finding about how γ(t) varies with t, which is then shown to lead to (3 quantitative estimates of several Tand E F -dependent superconducting parameters, viz., s(t), n s (t), v c (t), v F (t), ξ r (t) and λ Lr (t). It is remarkable that one can obtain these results by remaining within the ambit of the mean-field approximation, i.e. by employing the model (constant) BCS interaction "−V", which for (6) (2), (3) and (4), the corresponding constraints on V are obtained by putting 0 = P in these inequalities and are identical with those in the usual BCS theory. As was mentioned above, a plethora of formulae is known in the literature for calculating j c of an SC, depending upon its type (I or II), size, shape and the manner of preparation. The application of E F -incorporated GBCSEs herein, and to a variety of other SCs in [8] (with a correction in [10]), suggests that the E F of an SC subsumes most of these properties. We conclude by noting that work is in progress to further generalize the GBCSEs given here to deal with the pragmatic situation where the SC is in a heat bath in an external magnetic field, i.e. when both T and H are non-zero-a procedure for which has been given in [15].
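As an illustration of the power-law behaviour just noted, the following sketch fits a reduced parameter to the form (1 − t²)ⁿ. The data points are synthetic placeholders, and the exponent 3/2 used to generate them corresponds to the form commonly quoted for Bardeen's phenomenological Equation (1), j_c(t) = j_c(0)(1 − t²)^{3/2}; neither should be read as the fitted values reported in this work.

```python
# Minimal sketch: fitting a reduced superconducting parameter y(t) to (1 - t^2)^n,
# the kind of power-law dependence noted above. Synthetic data for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, n):
    return (1.0 - t**2) ** n

t = np.linspace(0.05, 0.95, 10)
y_demo = power_law(t, 1.5) * (1 + 0.02 * np.random.default_rng(1).standard_normal(t.size))

n_fit, cov = curve_fit(power_law, t, y_demo, p0=[1.0])
print(f"fitted exponent n = {n_fit[0]:.2f} +/- {np.sqrt(cov[0, 0]):.2f}")
# In practice y_demo would be replaced by the tabulated values of j_c(t)/j_c(0),
# gamma(t)/gamma_0, v_F(t)/v_F0, etc., and the fitted exponents compared.
```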
5,390
2019-07-23T00:00:00.000
[ "Physics" ]
Morphology of Khorgo Volcano Crater in the Khangai Mountains in Central Mongolia Cenozoic basalt, which is widespread in Mongolia, has been attracting the attention of Central Asian researchers since the beginning of the last century. This study identified the geomorphological shape of the Khorgo volcano. The main purpose of the study is to determine the origin and morphological form of Khorgo volcano, a key representative of Cenozoic volcanism. In general, there are several types of morphological forms associated with lava overflow, and it is important to determine which types are the most common and also to establish a link between them. Geomorphological studies in this area have not been conducted in Mongolia. Spatial improvement and morphometric methods satellite imagery had identified Khorgo volcanic faults. Khangai magmatism had thinned its crust to 45 km during the Tariat-Chuluut volcanic activity. It can be concluded that this was due to the thinning of the continental crust in the Khangai Mountains because of mantle plume. During this time, tectonic faults formed were formed, which had broken through the earth's crust. Part of this fault was formed in the vicinity of Khorgo Mountain from northwest to southeast, and lava flowed with the basic composition, which led to the formation of the current morphological form of Khorgo volcano. The lava flow was less than 45% silica and potassium-dominated, which blocked the Suman River valley and formed the present-day Terkhiin Tsagaan Lake. The morphometric analysis compared the morphology of a typical volcano, which showed that the mouth of the crater of the Khorgo volcano has a slope slanting about 45 degrees, it is about 100 meters in depth, with a diameter of about 500 meters. By comparing the basalt composition of the Khorgo volcano and its morphometric characteristics with other standard volcanoes, it has been determined that it is in the form of a lava dome. INTRODUCTION The Khangai Region in Central Mongolia is a mountainous area covering about 200,000 sq. km. with numerous peaks over 3,500 meters and is one of the important 'domed' structure within the basement blocks of Mongolia [1]. The Khangai region consists mainly of intensively deformed Carboniferous-Devonian and minor Permian-Triassic sedimentary rocks, which were deposited on basement blocks and intruded by huge bodies of granite and granodiorite plutons [2] appertaining to Late Paleozoic to Early Mesozoic periods. The geological interpretation of the isotopic data implies that blocks of the consolidated Precambrian crust were over thrusted onto younger crustal complexes of sea basins between these blocks during the accretionarycollision formation [3] of the fold belts. Numerous high potassium alkaline basaltic provinces of the Late Cenozoic Era, which are covered by unconsolidated Quaternary sediments, are distributed throughout the Khangai Region. Therefore, the stress from the India-Asian collision from the southwest (Altai transgressional belt) and Lake Baikal extensional structures from the north, are playing an important role in neotectonics faulting and perhaps Cenozoic magmatic activation in the Khangai dome [4][5]. There are a number of NE and NW-trending normal faults within the Khangai mountains region (Fig.1). Figure 1. The geographic location of Khorgo volcano in Central Mongolia. 
Simplified digital elevation map shows the position of major faults of Mongolia Khangai doming began in the middle Oligocene Era and was contemporary with alkaline volcanism throughout the Khangai Mountains. The total amount of surface uplift is about 3 km, with the most active phase of uplift between 3-4 Ma and the present day. The young, normal fault systems in the Khangai are perhaps a response to crustal uplift and doming in the range. In addition, the faults with the clearest evidence for Holocene activity within the Khangai occur at relatively high elevations, suggesting that these areas are extending most actively [3].This activity is related to the peculiar position of Mongolia, situated between the extensional structure of the Baikal rift system and the transgressional mountain belt of Proceedings of the Mongolian Academy of Sciences PMAS Central Asia (the collision zone between India and Asia [6][7]. Late Cenozoic Volcanic fields in the Khangai Region . Based on the geochronological study of volcanic basalt, 17 zones are distinct [8]. The Khangai Range includes basalt zone of the Khangai center, Tariat-Chuluut, Hanuin, Orkhon-Selenge and Ugii Nuur lakes. The Khorgo volcano is located in the Tariat-Chuluut zone (Fig. 2). The Khangai mountain system is one of the largest elements of the Inner Asian mountain belt. Its Late Cenozoic history was marked by numerous volcanic eruptions, which produced morphologically different lava flows that resulted in the forming of several basaltic fields, such as Orkhon-Selenge, Tariat-Chuluut, Khanui and Ugii Nuur (Fig. 2). Late Cenozoic volcanism occurred in the region as eruptions of highly mobile subalkaline basalt and basanite lavas, which spread over tens of kilometers as horizontal lava fields or extended valley flows. Based on existing geochronological data, several stages of volcanic activity with different structural positions and morphology of lava flows are recognized during the last 10 Ma [9]. The Late Miocene-Pliocene stage (10-2 Ma) was characterized by several volcanic episodes [10][11]. They also occurred at the lower reaches of the Chuluut River near the eastern termination of the Tariat Graben and produced a large (24 × 15 km) lava plateau in this area [12]. The Pleistocene-Holocene stage (<1.25 Ma) is reflected in the development of valley lava flows or "lavarivers." The Khorgo Volcano. The Khorgo Lake volcano is a dormant volcano located on the eastern shore of Terkhiin Tsagaan Lake in the The crater walls are nearly vertical at the top (Fig. 3). A loose fan of pumice-like cinder is formed near the eastern and northeastern base of the cone with the inclusion of volcanic bombs as large as 1 m across. A lateral crater that has cut into the southwestern edge of the Khorgo volcanic cone is partially filled with lava. A few large bombs have rolled down into the lateral crater from the slope of the central volcanic cone. Near the lateral crater is a lava dome, some of which has propagated onto the crater slope [13]. The central slope, the lateral crater, and the lava dome had a common feeding conduit striking north-northeast. The Khorgo lava flows are highly porous and have an irregular blocky surface produced by flowing volatile-rich lava breaking through and collapsing its top (Fig. 4a). .a. Outcrop of the porous basalt b. 
Fresh basalt samples with numerous olivine phenocrysts and olivine bearing xenoliths from Khorgo Volcano The Khorgo lava flow has phonolithic tephrite to alkali basalt-basanite composition and contains olivine-bearing mantle xenoliths and metacysts of anorthoclase (Fig. 4b). The erupted Khorgo volcano lavas form a natural dam on the Suman River, causing the formation of the Terkhiin Tsagaan Lake. There are several types of geomorphological forms associated with lava overflow, hence the choice of Khorgo volcano is related to the fact that Khorgo volcano is a novel study that has never been done before in our country. The purpose of determining the geomorphological shape in relation to the origin of the volcano is to consolidate the theoretical results, to determine the line of lava overflow, to map the direction and consequences of the Proceedings of the Mongolian Academy of Sciences PMAS lava flow, and to determine the geomorphological shape. The Terkhiin Tsagaan Lake. The freshwater Terkhiin Tsagaan Lake is located near the Khorgo volcano (N48°10'15'', E99°43'20'', 2060 m a.s.l.) [14][15]. The area of the lake is 61.4 km 2 , with a length of 16 km, a maximum width of 4.5 km, an average depth of about 6 m and a maximum depth of 19.3 m [15]. Upon formation of the volcano, the valley of the Terkh River was dammed by lava flows [16][17][18][19]. The lake water outflows via the Suman River. The flow of basalt was pushed into Suman River which is believed to be the origin of the Terkhiin Tsagaan Lake. Studies conducted on the Terkhiin Tsagaan Lake sediments have dated organic matter overlying the lavas to between 8.7 and 7.7 Ka using С 14 techniques [20]. Also, the lacustrine sediments of the Terkhiin Tsagaan Lake provide a record back to ca. 8780-year B.P. The basin is filled with approximately 3.5-6 m of lacustrine sediment [18]. A С14 isotopic survey of the essence taken from a depth of 6-10 meter of the lake indicates that it's age is approximately 7.0 Ka [21]. The bottom sediment of the Terkhiin Tsagaan Lake is composed of dark gray mass or fine laminated organic rich mud, in the upper part and medium level there are coarse grained sandy layers, and in the lower part are rare basalt pebbles. The thin gravel-sand layer separates these deposits from the underlying basalt lava bedrock. On the satellite image below (Fig. 5), lava flow from Khorgo volcano is marked as a yellow arrow, the area covered by lava is within the boundary marked in red, and the height result (Lava Plato) is shown as well. Morphometric method Tectonic movements cause linear deformation of the land surface [23]. This is the main sign of fault on the topographic map [7,[24][25]. The morphometric method was applied using topography mapping. For defining the fault of the Khorgo volcano, we used a topographic map of a scale of 1:100 000. Morphometric method, which is used for identifying tectonic fault on a scale at 1:100 000 on topographic maps, was used to identify the location of tectonic fault by making comparisons with satellite and aerial photo images. Spatial improvement method of Remote Sensing A Digital Globe Satellite map of 0.67 m resolution was obtained with remote sensing directional filter method and each pixel was changed by every other pixel in spatial development. In order to do so, it was required to choose various windows called a kernel. 
Those windows run along the image's row and column and whenever it reaches a particular pixel, it defines the kernel's central value by using values of other pixels also contained in it. This is the instructive method to improve artificial and natural objectives by changing each pixel's radiometric values [29][30]. Sobel operator: When the weight at the central pixels, for both Prewitt templates, is doubled, it gives the famous Sobel edgedetection operator which, again, consists of two masks to determine the edge in vector form. The Sobel operator was the most popular edgedetection operator until the development of edge-detection technique with a theoretical basis. It proved to be popular in as much, on the overall, it gave a better performance than other contemporaneous edge-detection operators, such as the Prewitt operator [31]. PMAS Where (j, k) are the coordinates of each pixel Fjk in the satellite images. This is equivalent to a convolution using the following masks: Components of lava and origin of Khorgo volcano There are following models of the origin of Cenozoic magmatism: 1. Mantle plume and hot spot [1][2]; 2. The mantle plume and hot spot situated between the extensional structure of the Baikal rift system [9,32]; 3. Collision between India and Asia during the Oligocene Period and combined effect of second activation of mantle plume [12,33]; 4. Kindle resulted in continent's plate collision [34]. There is not enough evidence regarding the Plateau of Mongolia, which has however gone through tremendous expansion, and most models link magmatism with plume. Although lesser expansions coincide with magmatism in some areas, the thin feature is not considered as a sufficient evidence as a reason for magmatism [35]. According to Kepezhinskas (1979) the content of SiO2 accounts for 45-50% of magmatism in the Khorgo volcano. Some scientists agree on the tectonic origin of the Khorgo volcano. They measured the volcano of Tariat-Chuluut and the thickness of the basin crust was around 45 km during its active period [37][38][39]. Harris (2009), basing on his xenolith research, proved that Tariat's websterite with garnet and lherzolite with garnet, were formed under P=18-20 kbar pressure at Т=1070-1090°С, and linked the Khangai dome and magmatism to inland's weakening due to deep mantle plume [40]. 2000) made the following depth structure model map [41][42] of Mongolian crust (Fig. 6). Figure 6. Structure of the deep crust in Mongolia (Genshaft and Saltykovsky, 1979; 2000) The Khentii elevation is at 80 km, and the Dariganga area is 110 km [43,45]. On the Khangai elevation level, there are several hot flows and also water springs that follow the Khangai ridge, for example [36]. According to the study and the deep crust structure, Khangai ridge's thin crusts are one of the biggest reasons for Quaternary Era volcano's overflow following the fault line. Zhelubovsky (1945) had maintained that the Tariat-Chuluut basin active volcano in volcanic area has a structure of lava flowing along the fault line [44]. The location of volcanic craters near Khorgo volcano almost in one line proves the existence of tectonic fault [46]. Faults would occur in the Khorgo volcano topographic map because they formed abrupt changes on the surface. Considering the comparative topographic maps of fault, lava base overflowed from north western to south eastern along the northeast behind the Khorgo volcano (Fig. 7). 
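As an indication of how the Sobel operator described above can be applied in practice, the following sketch convolves a single-band raster with the two Sobel masks and thresholds the gradient magnitude to pick out candidate lineaments. The input array, the percentile threshold, and the use of Python/SciPy are assumptions for illustration and do not reproduce the exact workflow applied to the Digital Globe imagery.

```python
# Minimal sketch of Sobel edge detection for lineament/fault mapping on a raster.
# The random array stands in for the real satellite band or DEM-derived raster.
import numpy as np
from scipy import ndimage

image = np.random.default_rng(0).random((512, 512))  # placeholder raster

# Sobel masks: Prewitt masks with the central weights doubled, as described above.
kx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
ky = kx.T

gx = ndimage.convolve(image, kx, mode="nearest")
gy = ndimage.convolve(image, ky, mode="nearest")
edge_magnitude = np.hypot(gx, gy)                   # gradient magnitude
edge_direction = np.degrees(np.arctan2(gy, gx))     # gradient direction, degrees

# A simple percentile threshold keeps the strongest linear features (candidate faults).
edges = edge_magnitude > np.percentile(edge_magnitude, 95)
```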
Background of Khorgo volcano's origin The volcanic surface age in the Tariat-Chuluut basin hasn't been estimated yet. Okinova (1940), and Selivyanov (1967, 1972 noted that basalt rocks are from Neogene, early stage of the Quaternary Era, while Murzaeiv (1952) maintained that the Tariat-Chuluut basin belongs to the earlier period of the Quaternary Era. Kozhevnikov (1970) and other scientists proved that it is from the middle and later stages of the Quaternary Era [18,36,42,46]. The sediment of the volcano overflowed through different periods of times in the beginning of the Late Pleistocene or Oligocene Eras to the late periods of the Quaternary Era due to the difference between various habits. The proof is that lava overflow filled some of the river basins during the earlier stage of Quaternary Era and basalt's rocky sheets belong to the Miocene and Pliocene's earlier periods that resulted in the Khangai depression, and the dumping outer surface proves that basalt of this volcano belongs to the Pliocene's late period to the earlier period of the Quaternary Era. The extinction period of volcano with its outer surface creation is estimated by comparing it with younger terraces [18][19]. The geomorphological shape of Khorgo volcano The shape of Khorgo volcano was determined by comparing Mauna Kea's volcanic morphometric results, such as chemical compound of basalt, topography, photographic and satellite images. The direction of overflowed magma that belongs to the Quaternary Era was estimated or identified along the fault, using satellite image. Several types of basic-lava volcanoes have been identified: lava shields, lava domes, lava cones, lava mounds, and lava discs [47][48]. Classic examples of lava shields are found on the Hawaiian Islands. Mauna Loa and Mauna Kea rise nearly 9 km from the Pacific floor. Lava domes are smaller than, and often occur on, lava shields. Individual peaks on Hawaii, such as Mauna Kea, are lava domes. Lava cones are even smaller (Fig. 12). Mount Hamilton, Victoria, Australia, is an example. Lava mounds bear no signs of craters. Lava discs are aberrant forms, examples of which are found in Victoria, Australia [47]. The following (Fig. 13) shows the comparisons of Khorgo and Mauna Kea volcanoes with their morphometric results. Bayankhongor volcanic basins [17-18, 36, 45]. It is also possible to determine the geomorphological shape of volcanoes in Mongolia according to their origin. CONCLUSIONS This significance of this study becomes more important as it determines the geomorphological shape of the Khorgo volcano in relation to its origin. The location of the volcanic crater around Tariat is almost in a straight line, which is directly related to tectonic faults. In this study, we identified Khorgo volcanic faults using remote sensing spatial enhancement and geomorphological morphometric methods. The thickness of the crust during the Tariat volcanic activity is 45 km, and the tectonic movement in the crust is due to the intensity of magmatic fissures. The base lava overflowing from the Khorgo volcano formed a large lava platform, closing the river valley and forming the Terkhiin Tsagaan Lake. By comparing the basalt composition of the Khorgo volcano and its morphometric characteristics with other resembling volcanoes, it was determined that it is a form of lava dome. Furthermore, it is possible to classify geomorphological forms by determining the shape of volcanoes in Mongolia according to their origin.
3,780
2020-05-18T00:00:00.000
[ "Geology" ]
The role of antihydrogen formation in the radial transport of antiprotons in positron plasmas Simulations have been performed of the radial transport of antiprotons in positron plasmas under ambient conditions typical of those used in antihydrogen formation experiments. The parameter range explored includes several positron densities and temperatures, as well as two different magnetic fields (1 and 3 T). Computations were also performed in which the antihydrogen formation process was artificially suppressed in order to isolate its role from other collisional sources of transport. The results show that, at the lowest positron plasma temperatures, repeated cycles of antihydrogen formation and destruction are the dominant source of radial (cross magnetic field) transport, and that the phenomenon is an example of anomalous diffusion. Introduction Recent years have seen many advances in studies with antihydrogen, H. These have included: the first formation experiments involving the controlled mixing of antiprotons and positrons [1,2]; the successful trapping of small samples of the anti-atoms in magnetic minimum neutral atom traps [3][4][5][6]; the demonstration of beam-like propagation of antihydrogen [7,8] and the first explorations of its properties [9][10][11][12]. This work was undertaken at the unique Antiproton Decelerator facility located at CERN [13,14] which supplies pulses of 5.3 MeV antiprotons (p s) every 100 s or so for subsequent experimentation. One of the basic instruments used for almost all antihydrogen experiments to date is the Penning, or Penning-Malmberg, trap. These are examples of charged particle traps (see e.g., [15][16][17]) which are used to collect, control and manipulate p and positron (e + ) clouds and plasmas, and to mix them to form the anti-atoms. Such traps employ strong magnetic fields (typically of tesla strength) for radial confinement of the charged species, as the field is directed along the axis of a series of electrodes, with the latter suitably electrically biased to provide the axial confinement. To date, almost all antihydrogen experiments have involved mixing antiprotons and positrons in a so-called nested Penning trap environment [18] in which typical e + cloud/plasma temperatures, T e , have been below 100 K (though this parameter was not always directly measured) and with densities in the range from n e =10 13 -10 15 m −3 . It has been found that antihydrogen is typically formed via the three-body interaction given by It has been well-documented how the nascent anti-atoms are very weakly bound (by of the order of k T B e for reaction (1) [19][20][21], where their excited nature is denoted by the doublestar superscript), and are thus susceptible to influence from the local fields, and in particular the trap electric field, and the e + plasma self-field. The influence of these has been observed in several experiments in which field ionisation of the antiatoms has resulted in p separation from the e + plasma [22,23], and has also been exploited as a means of probing antihydrogen formation and to gain insight into binding energies [2,7,8,24]. 
The weakly bound newly formed antihydrogen atoms may also be affected by collisions in the e + plasma: indeed, it has been emphasised elsewhere, and in particular in [21], how the antihydrogen that is detected is a result of a detailed sequence of processes which involve repeated cycles of antihydrogen formation according to reaction (1), and breakup in collision as This process is likely to have a very large cross section, probably in excess of geometric (∼10 −12 m −2 ), leading to collision frequencies for n e =10 14 m −3 of around 10 7 s −1 , which is much faster than the inverse of the time taken for an antihydrogen atom to travel 1 mm (about 1 μs). This reaction will also be in competition with de-excitation of the ** H , which may lead to a state stable against subsequent ionisation. The overall equilibrium rate for the production of stable antiatoms through three-body recombination (1) and subsequent collisional stabilisation (3) is proportional to n T e 2 e 4.5 (see e.g., [19,25,26] for further discussion). (Note that radiative recombination is an alternative route to H formation and is expected to preferentially produce low-lying, deeply bound states at low rate: see e.g., [25]. This process will deplete the p cloud, but will not otherwise disturb it, nor the positron plasma, and it is therefore not included in the discussion here.) Following the initial ATHENA and ATRAP experiments [1,2,22,24,[27][28][29][30] a number of authors undertook simulations and theoretical analyses of various aspects of antihydrogen formation, as applied to the experimental situations [31][32][33][34][35][36][37][38][39][40][41], and as summarised by Robicheaux [20]. In this spirit, a detailed examination of antihydrogen formation was undertaken by Jonsell et al [21] under simulated conditions appropriate to those of the ATHENA experiment, and the present work is in part based upon some of their observations. They found that the fraction of time that an antiproton spends bound as antihydrogen is usually small, typically less than 1% (at T e =15 K and n e up to 10 15 m −3 ). This was interpreted as being due to the rate of the two-body destruction process, reaction (2), greatly exceeding that for the three-body formation rate of reaction (1), due to, as mentioned above, the very large cross section for break-up. Furthermore, they found that the radial distributions of the antiprotons were time-dependent. This was attributed to cross-magnetic field drift of the p s while neutralised as antihydrogen, before break-up, but with the latter occurring at a larger (on average) radius, r (with r=0 the z-and B-field axis of the system) than the formation event. Thus, as time proceeds during a e + -p mixing experiment, the antiprotons progressively move towards the outer edge of the positron plasma. The importance of this is that here the combination of the plasma self electric field  = = n e E E r r 2 e 0 (which is radial,= r r r, in nature with e the elementary charge and  0 the permittivity of free space) and the uniform axial magnetic field,= B B z, provided in the experiment by a solenoid results in an antiproton tangential speed given by T e 0 , proportional to the radial position of the p . As an example, take r=1mm, B=1 T and = n 10 e 14 m −3 , to find v T ∼900 ms −1 , which corresponds to an effective temperature/equivalent kinetic energy of around 33 K. This is to be compared to typical antihydrogen trap depths, which are around 0.5 K. 
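A minimal sketch of the numerical estimate just given is shown below: it evaluates the plasma self-field, the resulting tangential (E × B) speed, and the equivalent temperature obtained by equating (1/2)mv² to (3/2)k_B T. The physical constants are standard values; the example parameters are those quoted above (r = 1 mm, B = 1 T, n_e = 10¹⁴ m⁻³).

```python
# Minimal sketch of the estimate above: the E x B rotation speed acquired by an
# antiproton at radius r inside a uniform-density positron plasma, and the
# equivalent temperature from (1/2) m v^2 = (3/2) k_B T.
import numpy as np

e    = 1.602176634e-19    # C
eps0 = 8.8541878128e-12   # F/m
kB   = 1.380649e-23       # J/K
m_p  = 1.67262192e-27     # kg

def tangential_speed(r, n_e, B):
    """Plasma self-field E_r = n_e*e*r/(2*eps0); drift speed v_T = E_r/B."""
    return n_e * e * r / (2.0 * eps0 * B)

r, B, n_e = 1e-3, 1.0, 1e14          # m, T, m^-3 (example quoted in the text)
v_T = tangential_speed(r, n_e, B)
T_eq = m_p * v_T**2 / (3.0 * kB)
print(f"v_T = {v_T:.0f} m/s, equivalent temperature = {T_eq:.0f} K")
# -> roughly 900 m/s and ~33 K, to be compared with ~0.5 K trap depths.
```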
Thus, it is clear that if antihydrogen is formed in dense positron plasmas the radial position of creation will have a direct bearing on the ability to trap the anti-atom for further study. Motivated by the results of the earlier simulations [21], and the importance to the aforementioned experiments involving antihydrogen trapping and also to the creation of beams of (ground state) anti-atoms for hyperfine spectroscopy, we have performed a more detailed study of p radial transport in dense, cold e + plasmas during antihydrogen formation. We have explored the positron density range from 10 13 -10 15 m −3 , with positron temperatures between 10 and 50 K, and for applied magnetic fields of 1 and 3 T. The higher n e and B are typical of early experiments (e.g,. ATHENA [1,22,30]) for which there are data to which semi-quantitative comparisons can be made. More recently lower B fields have been used to help promote H capture in magnetic minimum traps, typically alongside lower n e which has been found to be helpful in the quest for lower H temperatures. The methodology we have adopted is described in section 2, with our results and discussion presented in section 3. We draw our conclusions in section 4. Simulation methodology The simulations were performed using the methodology developed, and described fully, by Jonsell et al [21]: thus, we need only provide a brief summary here. The p trajectories were computed using classical equations of motion, along with those of any e + s within a cylinder of radius and height n e 1 3 . The use of classical equations is valid since quantum mechanical effects are only relevant on much smaller length scales than those of the typical e + -p interactions under the plasma conditions considered here. The particle trajectories were found by integrating Newton's equations of motion for the Lorentz force ( ) =  +é F E v B , where, as above, B is a uniform solenoid field (i.e., we do not attempt to model the situation in which an additional non-uniform magnetic field is applied to confine some of the anti-atoms, though this will typically be a minor perturbation due to the small plasma size relative to the atom trap). In this work we focus on the motion of antiprotons inside the positron plasma, where the electric field is the plasma self electric field (E, as defined above) and has no axial component. The number of trajectories was usually 20 000 for each parameter setting. Since this study does not seek to simulate experimental procedures, which typically involve various means of injecting the antiprotons into the positron plasma, the antiproton trajectories were initialised at zero radius, with a thermal velocity distribution (set by T e ) in the radial and axial directions. Our simulation distinguishes free antiprotons from those bound inside an antihydrogen atom. In the former case the charged particle is subject to a drag force from the plasma [42], as well as a diffusive force related to the drag by the fluctuation dissipation theorem. While bound as a neutral antihydrogen atom the antiproton is not subject to these forces, and instead the interaction with the one or more positrons present is calculated explicitly. This procedure necessitates a clear distinction between bound states and continuum states of any positrons in the vicinity of the antiproton. In the presence of strong magnetic and electric fields typical of antihydrogen experiments, this separation is not well defined. 
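For orientation, the sketch below shows the kind of classical Lorentz-force integration described in the preceding paragraphs, using a simple Boris-type pusher for a single antiproton in the radial plasma self-field and a uniform axial magnetic field. It is a deliberately stripped-down illustration, not the simulation code of [21]: the positron drag and diffusion forces, the explicit positron dynamics, and the bound/continuum bookkeeping are all omitted, and the time step and initial conditions are arbitrary.

```python
# Illustrative Boris-type integration of an antiproton in the radial self-field
# E_r = n_e*e*r/(2*eps0) of a positron plasma and a uniform axial field B.
import numpy as np

e, eps0 = 1.602176634e-19, 8.8541878128e-12
m_pbar  = 1.67262192e-27
q       = -e                        # antiproton charge

n_e, Bz = 1e14, 1.0                 # m^-3, T
B = np.array([0.0, 0.0, Bz])

def E_field(x):
    r_vec = np.array([x[0], x[1], 0.0])
    return n_e * e / (2.0 * eps0) * r_vec     # outward self-field of the e+ plasma

def boris_step(x, v, dt):
    v_minus = v + (q / m_pbar) * E_field(x) * dt / 2.0
    t = (q / m_pbar) * B * dt / 2.0
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus  = v_minus + np.cross(v_prime, s)
    v_new   = v_plus + (q / m_pbar) * E_field(x) * dt / 2.0
    return x + v_new * dt, v_new

x = np.array([1e-4, 0.0, 0.0])      # start 0.1 mm off axis
v = np.array([0.0, 100.0, 100.0])   # a nominal thermal velocity, m/s
dt = 1e-10                          # resolves the ~65 ns antiproton cyclotron period
for _ in range(10000):              # ~1 microsecond of motion
    x, v = boris_step(x, v, dt)
print("final radius:", np.hypot(x[0], x[1]), "m")
```

The criterion adopted in the simulations for deciding when a positron counts as bound, given the ambiguity noted above, is described next.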
Thus, in our simulations we calculate large numbers of manypositron antiproton collisions and any collision ending with a positron bound to an antiproton by more than k T B e (the binding energy calculated without including external electric and magnetic fields) is defined as formation of an antihydrogen atom. The rate of such events is evaluated (see table 2), and the positron state after the collision is saved as the initial condition for an antihydrogen atom. Our main observable is the radial position of the antiproton, as a function of time. We can also track other quantities such as the change in radial position, Δr, of each jump, the binding energy of the antihydrogen formed etc. We can vary external conditions such as plasma temperature, density (and hence electric field) and magnetic field. Results and discussion Though our simulations assume that the positrons and antiprotons are in thermal equilibrium, the nature of the H formation and break-up cycles (reactions (1) and (2)) mean that the rate of stable (against break-up, say via reaction (3), and which might thus be observed by experiment) H production and the transient rate of H formation in the positron plasma, are not the same. It is expected that the latter will be much higher than the former, and they will not necessarily have the same dependence upon T e . In the analysis presented after the discussion of the simulation results, we attempt to estimate the overall rates of H production and break-up in the positron plasma, as it is these that, under circumstances elucidated by the present study, can govern the cross-field transport of the antiprotons. When a neutral antihydrogen atom is formed its centreof-mass motion will cease to be influenced by the electric and magnetic fields in the plasma, though its momentum vector will be dominated by that of the antiproton at the moment of antihydrogen formation. The antihydrogen will continue largely in the same direction until it is again ionised, and the p resumes its circular motion around the axis of the trap. It will, however, now circulate at a different trap radius because of the motion it made while bound as antihydrogen. Thus, an antiproton will make a radial 'jump' Dr H every time it is bound transiently as an antihydrogen atom. The size of these 'jumps' will follow some distribution . Hence, the combination of these two effects gives a distribution with both negative and positive Dr H , but with a bias towards positive jumps. This will be discussed in more detail below. Considering radial motion due to antihydrogen formation only, starting from r=0, the radial position at some later time, t N , will be the sum of all N jumps, ( ) The time required to make N jumps is on average given by N divided by the antihydrogen formation rate l H increased by the time the p spends as an H, Hence, the average radial p speed due to antihydrogen formation is given by is the fraction of time the p spends as H. The average Dr H can also be written as where r 0 is the radial position before the jump. It is therefore interesting to study how the distribution ( ) r Dr H falls off for long Dr H (see the inset of figure 1). The large Dr H tail of the distribution will, through the influence of Δt, have a very complicated dependence on the binding energies of the Hs formed, and how the binding energy evolves as the H undergoes further collisions. 
We find this behaviour by fitting to simulated data, such that the tail is well described by a power law ( ) 2for different parameters (see table 1). Our data indicate that the distribution falls off more slowly at small densities, which can be expected, as at higher densities the collision frequency is higher, and thus Δt shorter. In fact, due to the slow fall off the integral in (5) diverges, as is characteristic for anomalous diffusion influenced by Lévy flights [43]. This necessitates a cut-off in the Dr H distribution, which we will discuss further. In addition to the change in radial position when neutral as H, there is also, as explained in section 2, diffusive drift of the bare p s. Considering this drift alone, we can write the diffusion as a series of small jumps in the x-and y-directions: Note, that these displacements are relative the bulk motion of the plasma, i.e. take place in a reference frame rotating with theÉ B drift velocity of the positron plasma. Looking at the mean square radial displacement where the average of ( ) Since we here consider only the thermal drift, which is made up of much more frequent and smaller jumps (as compared to the radial jumps induced by antihydrogen formation), we consider this as a continuous process, relating the number of jumps to time t, through N=λ ep t, where λ ep is the positron-antiproton collision frequency. We can also define the usual diffusion coefficient D through . We will return to this below. The magnetic field dependence of the collision frequency, as well as of the jump size distribution, is not dramatic, leading to a similar rate of radial transport for different field strengths. The antihydrogen formation rate is, however, reduced by a factor of around 20 when the temperature is doubled from 15 to 30 K: see table 2. This, coupled to a not too different jump-size distribution, gives a dramatically reduced rate of radial transport as the temperature is increased. In fact, at higher temperatures normal thermal drift dominates. The slow fall off for large Dr H in figure 1 makes the radial position of the antiprotons, when averaged over many trajectories, very sensitive to a small number of trajectories involving very large radial jumps. This makes the average difficult to calculate, or even undefined. We therefore need to introduce a cut-off radius r c =1 mm, removing any antiprotons which cross this radius from the average. This is physically motivated by the finite radius of any real positron plasma (assuming that the antihydrogen is field-ionised outside the plasma, or that any remaining radial transport outside the plasma is not interesting for our purposes). As a consequence, when radial transport is significant, the average will approach, but never cross r c . At the same time the average will be taken over a decreasing number of trajectories (since some are removed because they crossed the cut-off radius), leading to increasing statistical fluctuations. This cut-off changes the upper limit of the integral in (5) tor r c 0 making the average áD ñ r H and the attendant speed v H well defined. An example of the time evolution of the radial distribution of antiprotons is shown in figure 2. It can be seen that at short times most antiprotons are located well within the plasma (i.e. at r<r c =1 mm). As time progresses the peak of the distribution grows outwards. The distribution is sharply cut off at r=r c , representing the outer radius of the positron plasma. 
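The anomalous, Lévy-flight-like character of this transport can be illustrated with a toy calculation: radial jumps are drawn from a heavy-tailed (Pareto-type) distribution and accumulated until a trajectory crosses the cut-off radius r_c, at which point it is removed, mimicking the finite plasma radius. The tail exponent, the outward bias, and the jump scale below are arbitrary placeholders rather than the fitted values of Table 1.

```python
# Toy model of heavy-tailed radial jump statistics with a cut-off radius r_c.
import numpy as np

rng = np.random.default_rng(42)
alpha   = 2.5        # assumed tail exponent of rho(dr) ~ dr^(-alpha), alpha > 1
dr_min  = 1e-6       # m, smallest jump considered (placeholder)
r_c     = 1e-3       # m, cut-off radius (outer edge of the positron plasma)
n_pbar  = 20000
n_jumps = 200

def sample_jumps(size):
    """Inverse-transform sampling of a Pareto-type jump-size distribution."""
    u = rng.random(size)
    return dr_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

r = np.zeros(n_pbar)
alive = np.ones(n_pbar, dtype=bool)
for _ in range(n_jumps):
    dr = sample_jumps(n_pbar)
    sign = rng.choice([-1.0, 1.0], size=n_pbar, p=[0.4, 0.6])  # slight outward bias
    r[alive] = np.abs(r[alive] + sign[alive] * dr[alive])
    alive &= r < r_c                 # remove antiprotons that left the plasma

print("fraction remaining inside r_c:", alive.mean())
```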
The tail extending beyond r c is removed from the simulation, with the effect that the integral of the distribution with time diminishes from its initial value 1.0, until in the end all antiprotons would have left the plasma (at 0.2 ms the integral is 0.7, while at 1.0 ms it is 0.12). The results of our simulations for various densities, temperatures and magnetic fields are shown in figure 3, and can be contrasted with those shown in figure 4, where antihydrogen formation was artificially turned off. The radial drift is then exclusively due to thermal Brownian motion, which is confirmed by good fits to ( ( ) ( )) á -ñµ r t r t 0 2 , when thermal diffusion is slow (we attribute the deviation from the linear dependence when the diffusion is fast to the velocity- dependence of the friction coefficients from [42], and the difference at large positron densities between the drift velocity of the e + s and theps [21]). Here, the effect of the magnetic field is more important than the temperature, since it pins the charged particles to field lines, thus inhibiting radial motion. The temperature dependence of the radial drift is examined in figure 5. As expected the radial drift is much slower at higher temperatures. We find that the drift varies sharply with the temperature of the positrons; at 10 K all simulated trajectories have crossed r c within 1 ms. Increasing the temperature by only 5 K there are still antiprotons remaining up to 2 ms, but a significant fraction have left the plasma, as is visible by the trend towards saturation at r c . At 50 K more than 90% of the antiprotons remain after 2 ms, but a small fraction has jumped by large Δr and either left the plasma or contributed to raising the average ( ( ) ( )) r t r 0 2 significantly compared to thermal-only diffusion. An experimentally relevant measure of the rate of radial antiproton drift is the fraction of antiprotons reaching the outer radius of the plasma, within a certain time. We plot this fraction after 2 ms as a function of density and magnetic field for a 15 K plasma in figure 6(a), which shows that it rises rapidly above about n e =10 14 m −3 due to the influence of H formation by reaction (1). Again, the dependence on magnetic field is weak. In a 30 K plasma the time scale for the antiprotons to reach r c is longer. For this temperature, and in the density range investigated, we have to wait between 0.01-0.1 s, to find a significant fraction of antiprotons at this radius, as can be seen in figure 6(b). At 15 K the antiproton radial drift contributes little, confirming that a mechanism involving the formation of antihydrogen is the dominant reason for radial transport. In what follows, an attempt is made to develop a simple model of the transport of antiprotons in positron plasmas in an effort to underpin the results of the simulations. Combining the ballistic diffusion with speed á ñ v H from (4), with the normal diffusion during the time ( ) -F t 1 H the p is free, we find where l ep is the positron-antiproton collision frequency and in which we have assumed that áD ñ = áD ñ r r H 2 H 2 : see below. The average time required for the p to diffuse out to some radius r is then The rate of cross-field transport (as an antiproton, distinct from antihydrogen) is proportional to l Dr p ep 2 , which can be written from (10) , as is characteristic for thermal Brownian motion. This naive estimate contains no plasma effects. 
Comparing to the full treatment in [42] we find, within the relevant parameter range, an approximate agreement ifDr p is multiplied by the dimensionless parameter , which is roughly comparable to the result from the simulation (see figure 4). Further, the scalings with T e and B also roughly agree with the results in figure 4. The next step is to estimate the size of the jumps made as antihydrogen, which is taken to be the thermal H speed (which is assumed to be that of the antiproton), multiplied by its time of flight before destruction in collision. Thus figure 1 (when, as discussed above, only D < r 1 H mm are included in the average). This shows that our estimate for s H is reasonable. However, the thermal velocity will be isotropic, and thus give rise to a Dr H which is symmetric around zero, i.e. áD ñ = r 0 H . This gives only a contribution to diffusion, which we neglect compared to the diffusion of theps in their bare state, that is, we ignore the fluctuations in the H transport, and hence approximate áD ñ = áD ñ r r H 2 H 2 . The net outward drift is instead given by the rotationaĺ E B drift velocity of thep (introduced in section 1), which upon formation is transferred to the H as a tangential velocity T , where we takeŷ to be the direction tangential to thep motion at the time t=t 0 of formation (here r 0 =r(t 0 )). This means that ( )= t r r x 0 0 . The total velocity is thenˆ( where v i th is the thermal velocity in the i-direction, which averages to zero. As such, the change in radial position is given by we find that ‴ »´-C 5.6 10 10 T −1 m 4 -K 5.3 . Using B=3 T, n e =10 15 m −3 and T=15 K, we find that R∼9.7×10 4 , i.e. the radial transport is totally dominated by antihydrogen formation. It is interesting to note that at very early times (small r) thermal diffusion will dominate; that is, if thep originate on the trap axis there will initially be no transport due to the antihydrogen formation process, because v T =0. For the parameters above thermal transport dominates while r5 μm. In a real experiment, unless antiproton injection is restricted to such a narrow radius, transport due to antihydrogen formation will dominate at all times. Varying the parameters, the roles may be reversed. For instance still at B=3 T and n e =10 15 m −3 thermal transport will dominate for T e 130 K. Reducing the density to n e =10 13 m −3 and the magnetic field to B=1 T, thermal diffusion dominates for T e 35 K. According to our simulations, at this density thermal diffusion dominates already at T e =15 K. However, given the crudeness of our estimate, we conclude that it is in fair agreement with our numerical results. In particular, we expect the very sharp dependence on temperature to be correct. Concluding remarks We have identified a mechanism for radial antiproton transport in magnetised positron plasmas: namely the repeated formation and ionisation of antihydrogen atoms. We show through simulations that this is the dominant radial transport process within most of the parameter range relevant for current experiments. We also provide a simple model for the relative rate of antiproton transport both through antihydrogen formation, and through thermal diffusion. This model predicts a sharp dependence on temperature of the relative rates, which is consistent with the results from our simulations. 
The most recent experiments [3][4][5][6][7][8][9][10][11][12], which typically employ positron densities close to the lower end of the parameter range explored in our simulations, may be operating in a region where the thermal and H-formation transport mechanisms are comparable in magnitude. If this is the case, then as T e is reduced further, which is a major goal of the current experiments, radial antiproton transport is likely to increase sharply. Further work will include investigating this transport at higher densities, such as those suggested in [45], where we expect qualitatively new features arising from the mismatch between the drift velocities of the antiprotons and the positrons.

5,873
2017-11-24T00:00:00.000
[ "Physics" ]
Incorporating Diblock Copolymer Nanoparticles into Calcite Crystals: Do Anionic Carboxylate Groups Alone Ensure Efficient Occlusion? New spherical diblock copolymer nanoparticles were synthesized via RAFT aqueous dispersion polymerization of 2-hydroxypropyl methacrylate (HPMA) at 70 °C and 20% w/w solids using either poly(carboxybetaine methacrylate) or poly(proline methacrylate) as the steric stabilizer block. Both of these stabilizers contain carboxylic acid groups, but poly(proline methacrylate) is anionic above pH 9.2, whereas poly(carboxybetaine methacrylate) has zwitterionic character at this pH. When calcite crystals are grown at an initial pH of 9.5 in the presence of these two types of nanoparticles, it is found that the anionic poly(proline methacrylate)-stabilized particles are occluded uniformly throughout the crystals (up to 6.8% by mass, 14.0% by volume). In contrast, the zwitterionic poly(carboxybetaine methacrylate)-stabilized particles show no signs of occlusion into calcite crystals grown under identical conditions. The presence of carboxylic acid groups alone therefore does not guarantee efficient occlusion: overall anionic character is an additional prerequisite. T he occlusion of water-soluble organic molecules into inorganic crystals has been intensely studied in order to modify crystal morphologies, understand occlusion mechanisms, and achieve enhanced mechanical properties such as toughness. 1−11 Recently, it has been shown that various nanoparticles ranging from 20 to 250 nm diameter can be incorporated within calcite crystals grown from aqueous solution via the ammonium carbonate diffusion method. 12−15 Nanoparticle occlusion within the host crystal has been confirmed by electron microscopy studies. 12−17 The resulting nanocomposite crystals can exhibit greater hardness compared to calcite of geological origin. 12,13 Based on studies to date, it seems that carboxylate functionality at the nanoparticle surface promotes efficient occlusion within calcite. However, the design rules for occlusion are not yet understood. This lack of detailed molecular level understanding is a significant barrier to optimizing the occlusion efficiency for calcite and also for extending occlusion to include alternative inorganic host crystals. Ultimately, this is the key to producing new copolymer/crystal nanocomposites that exhibit a range of tailored properties. In the present study, we examine the "carboxylate surface functionality" design rule in more detail. The synthesis of bespoke organic nanoparticles of controllable size, shape, and surface chemistry is a formidable technical challenge. 12,18−20 However, we and others have shown that reversible addition−fragmentation chain-transfer (RAFT)-mediated polymerization-induced self-assembly (PISA) provides a versatile and efficient route for the synthesis of diblock copolymer spheres, worms, or vesicles. 21−25 In particular, PISA syntheses can be conducted in concentrated aqueous solution, 26−40 and the size and morphology of the resulting diblock copolymer nano-objects can be readily adjusted by systematically varying the DP of the core-forming hydrophobic block. 26,41 Moreover, the surface chemistry of such nanoobjects can be readily controlled by using nonionic, 26,42−44 anionic, 45,46 cationic, 47,48 or zwitterionic 49−51 blocks as the steric stabilizer for the PISA formulation. 
ACS Macro Letters Letter that will be tested herein is the following: is the presence of carboxylate groups alone suf f icient to promote eff icient nanoparticle occlusion within calcite or is overall anionic character also required? Base titration of the carboxylic acid group in CBMA monomer indicated a pK a of ∼2.3 (see Figure S1a, Supporting Information). The quaternary ammonium group in CBMA confers permanent cationic charge, so this molecule becomes zwitterionic after deprotonation of its carboxylic acid group (see Scheme 1a). 52−54 O-Methacryloyl-trans-4-hydroxy-L-proline (ProMA) was synthesized according to a literature protocol (see Scheme S1, Supporting Information). 55 This monomer exhibits two pK a values (pK a1 = 1.5, pK a2 = 9.0, see Figure S1b) owing to its secondary amine and carboxylic acid groups. PCBMA 52 and PProMA 50 were synthesized via RAFT polymerization in aqueous solution (Scheme 1). This was achieved using a water-soluble (below pH 4.5) trithiocarbonate-based RAFT CTA (MPETTC) containing a morpholine group, which was prepared via a two-step synthesis as recently described by Penfold and co-workers (see Scheme S2). 56 Kinetic studies of the RAFT homopolymerization of CBMA and ProMA using MPETTC at 70°C confirmed that high conversions (>90%) were obtained within 3 h and there was a linear evolution of molecular weight with conversion in each case, as expected for well-controlled RAFT polymerizations (see Figures S2 and S3). Aqueous GPC studies indicated relatively low polydispersities (M w /M n < 1.2) for both PCBMA 52 and PProMA 50 macro-CTAs. Self-blocking experiments were conducted by addition of a further charge of the corresponding monomer (i.e., CBMA to the PCBMA 52 macro-CTA or ProMA to the PProMA 50 macro-CTA). In both cases a relatively high blocking efficiency was achieved, suggesting that the majority of trithiocarbonate RAFT chain-ends remained intact (see Figure S4). Sterically-stabilized diblock copolymer nanoparticles were readily synthesized by chain extension of each macro-CTA in turn with HPMA using a RAFT aqueous dispersion polymerization formulation. PCBMA 52 macro-CTA and PProMA 50 macro-CTA have similar degrees of polymerization, so the stabilizer layer thicknesses of the resulting copolymer nanoparticles are comparable. PCBMA 52 -PHPMA 250 and PProMA 50 -PHPMA 300 were targeted since preliminary experiments indicated that such diblock copolymer compositions gave almost identical mean particle diameters. Indeed, transmission electron microscopy (TEM) analysis indicated that both types of nanoparticles possessed narrow particle size distributions with a mean diameter of 34.5 ± 3.4 nm for PCBMA 52 -PHPMA 250 and 33.6 ± 4.4 nm for PProMA 50 -PHPMA 300 . Dynamic light scattering studies confirmed that both types of nanoparticles exhibited essentially unchanged hydrodynamic diameters in the absence and presence of 1.5 mM [Ca 2+ ], which indicated good colloidal stability under the conditions typically used for calcium carbonate formation (see Figure S5). 12−15 Aqueous electrophoresis measurements revealed that both types of nanoparticles were cationic at low pH but became anionic at high pH, with PCBMA 52 -PHPMA 250 and PProMA 50 -PHPMA 300 exhibiting isoelectric points (IEPs) at around pH 6.6 and 4.1, respectively (Figure 1c). The effect of addition of [Ca 2+ ] on nanoparticle zeta potential was also examined at pH 9.5 (Figure 1d). 
In both cases, the initial highly anionic character observed in the absence of any salt was significantly reduced, suggesting extensive Ca 2+ binding to the steric stabilizer chains. However, the PProMA 50 -PHPMA 300 nano-particles retained a relatively high net negative zeta potential of −25 mV at [Ca 2+ ] = 1.5 mM, whereas the zeta potential for the PCBMA 52 -PHPMA 250 was reduced to just −3 mV under the same conditions. This difference appears to be decisive in dictating the nanoparticle occlusion efficiency in each case (see later). Calcium carbonate crystals were precipitated at an initial pH of 9.5 by exposing an aqueous solution of 1.5 mM [Ca 2+ ] containing 0.01% w/w PCBMA 52 -PHPMA 250 or PProMA 50 -PHPMA 300 nanoparticles to ammonium carbonate vapor at 20°C for 24 h. As expected, experiments conducted in the absence of any nanoparticles, or in the presence of non-ionic nanoparticles, resulted in the formation of 30−50 μm rhombohedral crystals, which is typically characteristic of calcite (see Figures S6 and S7). Similarly, precipitation in the presence of the PCBMA 52 -PHPMA 250 nanoparticles also yielded rhombohedral morphology, but with a minor population of a second crystal phase (see Figures S8a and S8b). Crystals grown in the presence of PProMA 50 -PHPMA 300 nanoparticles were also rhombohedral but had smaller dimensions of 10−30 μm (see Figures S8c and S8d). The internal crystal morphology was evaluated by examining cross-sections of deliberately fractured crystals. There was no evidence of any nanoparticle occlusion within crystals grown in the presence of PCBMA 52 -PHPMA 250 nanoparticles ( Figure 1e). However, when PProMA 50 -PHPMA 300 nanoparticles were used as an additive, the crystals had roughened surfaces and some truncation of the edges was observed, as indicated in the inset of Figure 1f. SEM images of the internal crystal structure confirmed that PProMA 50 -PHPMA 300 nanoparticles were uniformly distributed throughout the whole crystal ( Figure 1f). Further, the apparent voids/occluded nanoparticles were comparable in diameter to the PProMA 50 -PHPMA 300 nanoparticles prior to occlusion. Raman spectroscopy studies (Figure 2a) indicated that crystals containing PProMA 50 -PHPMA 300 nanoparticles possessed various spectral features that are known to be characteristic of calcite; bands at 154 and 280 cm −1 are lattice modes, while bands at 712 cm −1 (υ 4 ) and 1086 cm −1 (υ 1 ) have been assigned to the in-plane bending and symmetric stretching of carbonate, respectively. 57,58 Bulk crystal structures were confirmed by powder XRD studies (Figure 2b). In particular, calcium carbonate precipitated in the presence of PCBMA 52 -PHPMA 250 nanoparticles results in a mixture of calcite and vaterite phases. This is probably because the PCBMA 52 -PHPMA 250 nanoparticles can act as an "impurity" that slightly perturbs normal calcite growth. In contrast, only calcite was detected for calcium carbonate prepared in the presence of PProMA 50 -PHPMA 300 nanoparticles. Thermogravimetric analysis (TGA; Figure 2c) studies confirmed that there was no detectable occlusion of PCBMA 52 -PHPMA 250 while the calcite/PProMA 50 -PHPMA 300 crystals comprised 6.8% w/w nanoparticles. Assuming a copolymer density of 1.22 g cm −3 , this corresponds to 14% v/v (see calculation in the Supporting Information). Nanoparticle occlusion was further confirmed by FT-IR spectroscopy (see Figure S9). 
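The conversion from the TGA mass fraction to the quoted volume fraction can be reproduced with the short calculation below. The copolymer density of 1.22 g cm⁻³ is the value assumed above; the calcite density of 2.71 g cm⁻³ is a standard literature value adopted here for illustration and is not taken from the Supporting Information.

```python
# Minimal sketch of the weight-to-volume fraction conversion quoted above
# (6.8% w/w copolymer at 1.22 g cm^-3 in calcite, assumed here to be 2.71 g cm^-3).
def mass_to_volume_fraction(w_polymer, rho_polymer=1.22, rho_calcite=2.71):
    v_polymer = w_polymer / rho_polymer
    v_calcite = (1.0 - w_polymer) / rho_calcite
    return v_polymer / (v_polymer + v_calcite)

print(f"{mass_to_volume_fraction(0.068):.1%}")   # ~14% v/v, matching the reported value
```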
Previously, we reported that anionic nanoparticles containing surface carboxylate groups could be occluded within calcite.12−15 Furthermore, it was suggested that this motif played a key role in promoting occlusion. In the present study, both PCBMA52-PHPMA250 and PProMA50-PHPMA300 nanoparticles also possess surface carboxylate groups. However, the former zwitterionic nanoparticles exhibit no signs of occlusion, while the overall anionic PProMA50-PHPMA300 copolymer nanoparticles are homogeneously incorporated into calcite crystals at approximately 6.8% w/w. These observations indicate that both the presence of carboxylic acid groups and overall anionic character are required for successful nanoparticle occlusion. A reasonable explanation for these observations is as follows. Ca2+ ions interact strongly with the anionic carboxylate groups on both the zwitterionic PCBMA52 and the anionic PProMA50 stabilizer chains at pH 9.5. However, in the former case the overall zeta potential is reduced to around −3 mV in the presence of 1.5 mM [Ca2+] (Figure 1d), which is insufficient to ensure strong electrostatic adsorption of the PCBMA52-PHPMA250 nanoparticles onto the growing crystal surface.4 In contrast, the PProMA50-PHPMA300 nanoparticles retain an anionic zeta potential of −25 mV under the same conditions, which enables their strong electrostatic binding onto the growing crystal surface.13,14,59 Thus, the subtle structural differences between these two types of sterically stabilized nanoparticles have a dramatic effect on their interactions with growing calcite crystals. In summary, this study demonstrates that surface carboxylate functionality is a necessary but not sufficient condition for efficient nanoparticle occlusion within calcite. Overall anionic character appears to be an additional prerequisite, because essentially no occlusion is observed when zwitterionic polycarboxybetaine-stabilized nanoparticles are employed. This work provides a deeper understanding of the design rules for efficient nanoparticle occlusion within this particular inorganic host crystal.
2,560.6
2016-02-12T00:00:00.000
[ "Materials Science" ]
Application of the similarity theory for vortex chamber superchargers On the basis of similarity in vortex chamber superchargers criteria of similarity are defined and their check by numerical researches is made. Vortex chamber superchargers have the big power efficiency of pumping, in comparison with classical jet pumps, at the expense of association of advantages of centrifugal and jet pumps on the basis of the vortex chamber. The concept of vortex chamber superchargers is based on a new principle of energy transfer in jet superchargers at the expense of hydrodynamic effects of rotating flows use. In consequence of pressure decrease near to an axis of the vortex chamber occurs intaking particles of a pumped over flow. Having got to the vortex chamber, particles get tangential velocity interact with a rotating flow at the expense of a momentum exchange. The received criteria take into account of three major factors of similarity: the geometrical sizes, pressure of an active stream on an input in the device, working medium density. Validity of the received criteria is confirmed by a finding of pressure-flow rate characteristics of similar operating modes of a supercharger and their further combination on one characteristic. Introduction Application of jet superchargers in adverse service conditions, and also at pumping liquids and gases with the big solid abrasive particles content is dictated by rapid wear of mechanical mobile operative parts of dynamic and displacement pumps [1]. Jet devices, unlike classical superchargers possess low power efficiency, but high indicators of reliability and durability [2]. At work of direct-flow jet devices in hydraulic and pneumatic transport of bulk solids their use is limited to low indicators of ejection factor and high pressure values of an active stream [3]. In some way solve these problems in special service conditions [4,5] can vortex chamber superchargers, possessing all advantages of jet technics: high reliability and durability, work possibility on any structure and concentration of phases, small overall dimensions. Besides, vortex chamber superchargers (VCS), owing to use of centrifugal force in pumping process, possess the best, in comparison with jet direct-flow pumps, characteristics at bulk solids transportation [6]. Works on studying of working capacity and power characteristics of vortex chamber superchargers are spent throughout last fifteen years [4][5][6]. But, for today, there are no the publications devoted to possibility of preliminary definition of design parameters of the projected device. Besides, there is no possibility of recalculation of characteristics of a supercharger on any standard size. To solve this problem it is possible by use of the theory of similarity and dimension [7]. Designing of vortex chamber superchargers, as well as any another, is based on preliminary extensive researches. Among them experimental researches [4] have importance. In the dimension and similarity theory it is necessary to define conditions which should be observed in experiences with models, and then allocate the characteristic and convenient parameters defining the main effects and process modes [7]. Complicated movement character of a pumped over liquid in vortex chamber superchargers leads to that the problem of new devices designing decides along with design-theoretical engineering of a design of their flowing part by experimental researches in vitro, and also with the help of CFD-models and in natural conditions [8,9]. 
Preliminary determination of the design parameters of a projected supercharger, investigation of the working processes on models, and extrapolation of the results to full-scale superchargers require the fulfillment of geometrical, kinematic and dynamic similarity conditions [10]. The purpose of this work is therefore to define the similarity criteria of vortex chamber superchargers and to verify them by numerical studies. Results of research The similarity of vortex chamber superchargers was investigated on the model shown in Figure 1. The VCS [6] works as follows: the supply stream with flow rate Q_s and pressure p_s enters the vortex mixing chamber through the inlet tangential channel and leaves it through the outlet tangential channel. The working stream mixes with the pumped stream, which arrives through two inlet axial channels. The supercharger can operate in two modes [6], characterized by two different working processes. In the first working process, losses occur in the drainage channel (channel in1 in Figure 1), which makes it possible to provide a relatively high pressure at a small flow rate of the pumped medium. In the second working process, the pumped stream is drawn in through both axial channels, i.e. the supercharger operates in a drainless configuration that gives a relatively low pressure at a high flow rate at the device outlet. Both working processes share energy transfer in the field of centrifugal force, but in the first case the transfer occurs with conservation of the angular momentum (circulation of the tangential velocity), whereas in the second case it takes place through momentum exchange between the interacting streams in an eddy motion. The concept of the vortex chamber supercharger is based on a new principle of energy transfer in jet superchargers that exploits the hydrodynamic effects of rotating flows: a vacuum near the rotation axis and an overpressure at the periphery of the vortex chamber. The pressure drop near the chamber axis draws in particles of the pumped flow. Once inside the vortex chamber, the particles acquire tangential velocity and interact with the rotating flow through momentum exchange (Figure 2). Then, under the action of surface forces (pressure) and body forces (dominated by gravity) in a potential field, they move towards the chamber periphery. Near the chamber walls the particles of the pumped liquid enter the outlet tangential channel of the supercharger, acquiring kinetic energy from the overpressure at the chamber periphery. In classical jet superchargers the only method of energy transfer is momentum exchange between interacting colliding flows, which is accompanied by substantial dissipation [11]. In vortex chamber superchargers, unlike classical jet devices, the particles gain most of their energy in a conservative field, by moving towards the periphery of the vortex chamber under the action of centrifugal force. The active flow at the periphery has a high potential energy, which can reach 90 % of the energy spent by the supply flow. In physical modelling of hydromechanical processes it is customary to distinguish geometrical, kinematic and dynamic similarity [10]. Use is made of analogous points, i.e. points of geometrically similar figures that occupy the same position relative to the boundaries of these figures. 
Hydromechanical systems are called geometrically similar if their corresponding dimensions are proportional and the angles at analogous points are identical [7]. Geometrical similarity of superchargers is defined by the constancy of the linear similarity factor, i.e. the ratio of corresponding linear dimensions of the model and the full-scale device. Strictly speaking, full geometrical similarity also requires that the roughness of the vortex chamber surfaces be similar, but this condition can rarely be satisfied. Kinematic similarity of superchargers means that the dimensionless velocity fields in them must be equal; in addition, the trajectories of the liquid particles must be geometrically similar. The conditions of kinematic similarity are expressed in the form of velocity ratios. Since, at outflow from the supercharger into the atmosphere, the total pressure is defined by the corresponding formula, relations (3) and (4) can be used to express the similarity parameters for the VCS. Wide experience in hydraulic and turbomachinery modelling shows that, when a pump operates in the self-similar region, a change in Reynolds number has no noticeable influence on hydraulic efficiency [10]. To confirm the correctness of the chosen similarity criteria for vortex chamber superchargers over a wide range of pressures, densities and geometrical sizes, a numerical study was carried out for an incompressible liquid by solving the RANS (Reynolds-Averaged Navier-Stokes) equations with the SST turbulence model including a rotation-curvature correction. The set of equations of the SST model is given in [12], where ρ is the density, k is the turbulence kinetic energy, and CD_kω is the cross-diffusion term of the SST model. The constants and the exact statement of equations (9)-(10) can be found in [13]. The rotation-curvature correction in the SST model is implemented by multiplying the production of turbulence kinetic energy in equations (9)-(10) by a correction function [13], expressed through the components of the mean strain-rate tensor S_ij = (1/2)(∂u_i/∂x_j + ∂u_j/∂x_i) and of the vorticity tensor Ω_ij = (1/2)(∂u_i/∂x_j − ∂u_j/∂x_i). The adequacy of the calculations with the chosen model was confirmed by comparison with experimental data on integral parameters (pressures and flow rates in all channels) and on the pressure distribution along the radius of the vortex chamber, using Student's and Fisher's tests [8]. Results of the verification and of the selection of the mathematical model with the smallest errors are presented in [8,14]. Experimental studies of the vortex device (Figure 3) were carried out to identify the mathematical model with the smallest errors. The error of the pressure distribution along the radius of the vortex chamber was estimated with the outlet tangential channel of the supercharger closed; for this purpose the top end cover of the vortex chamber was provided with sixteen apertures. The experimental facility included the vortex device, a blower, a receiver and measuring equipment. Standard instruments and equipment were used at the different stages of the experimental studies of the vortex chamber supercharger characteristics; their relative error did not exceed 1 %. Pressures in the channels were measured with cup manometers, the ambient temperature with mercury thermometers, and the flow rates in the channels with flowmeters. A supercharger model with a vortex chamber diameter of 60 mm was used. Pressure differences across the flowmeter devices were measured with micromanometers with a maximum error of 8 Pa. The total discrepancy between computed and experimental pressure values did not exceed 12 % in the central zone of the vortex chamber. 
The calculations were performed in the OpenFOAM software package (OpenCFD Ltd) [15,16] with the following boundary conditions: no-slip on the walls, a prescribed stagnation pressure at the inlet of the supply channel, and zero static pressure at the outflow channels. The mesh contained about 5 million elements in order to satisfy the requirement Y+ < 2. Figure 4 presents the pressure-flow rate characteristics of the VCS for the three cases considered. For all three supercharger variants, the operating parameters were chosen so that the conditions of geometrical, kinematic and dynamic similarity were observed. Observance of dynamic similarity can be checked by collapsing the pressure-flow rate characteristics of all three cases onto one dimensionless characteristic, with the recalculation based on relation (6). That is, to combine, for example, the first and second variants, the characteristic must first be converted using the linear similarity factor 0.05/0.03 ≈ 1.67; the supercharger pressure and flow rate values are recalculated in a similar way (a minimal numerical sketch of this recalculation is given after the conclusion below). The result of the characteristic combination is presented in Figure 5, where the VCS characteristics are rated to the maximum pressure and maximum flow rate of the pressure-flow rate characteristic of the second variant. The points corresponding to different operating modes and to the transport of different media lie close to the pressure curve of the second variant, which confirms the similarity of the operating modes of all three variants. This is especially evident in the drainless mode of the supercharger, in which the deviations of most points do not exceed 10 %. Fulfillment of the kinematic similarity requirements is illustrated in Figure 6, which shows that kinematic similarity, i.e. equality of the dimensionless velocity fields in devices operating at similar regimes, is indeed achieved. For the mutual comparison of the characteristics of hydrodynamically similar pumps, so-called reduced values are often used: dimensionless or dimensional complexes, formed on the basis of the mathematical expressions of the similarity rules, that are defined from the physical parameters of the hydraulic machine (flow rate, head, power and others); their equality ensures at least partial dynamic similarity of the working processes of the model and full-scale pump. If all similarity conditions based on equations (6)-(8) are observed, equality of efficiency follows for similar regimes. However, the influence of roughness on the hydraulic losses in a supercharger will lead to some difference in the efficiency values. Conclusion Similarity parameters of vortex chamber superchargers have been defined and verified numerically. 1. Criteria of geometrical, kinematic and dynamic similarity have been obtained on the basis of ratios of the three main similarity factors: the geometrical dimensions, the inlet pressure of the actuating medium, and the density of the actuating medium. 2. The validity of the obtained criteria is confirmed by the pressure-flow rate characteristics of three similar operating modes of a supercharger and their combination onto one characteristic. In the drainless operating mode the deviation of most points is no more than 10 %.
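To make the recalculation procedure concrete, the sketch below rescales a model pressure-flow curve to a geometrically similar variant and then rates the result to the reference maxima for comparison. It is only an illustration under stated assumptions: Euler-type scaling is assumed (pressure proportional to the supply pressure, velocity proportional to the square root of pressure over density, flow rate proportional to area times velocity), which is consistent with the three similarity factors named in the text but does not reproduce the paper's relations (6)-(8); the numerical data points are hypothetical.

```python
import numpy as np

def rescale_characteristic(q_model, p_model, k_L, k_p, k_rho):
    """Rescale a model pressure-flow curve to a geometrically similar variant.

    Assumed Euler-type similarity (a sketch, not the paper's relations (6)-(8)):
      pressure scales with the supply pressure:     p ~ k_p
      velocity scales as sqrt(pressure / density):  v ~ sqrt(k_p / k_rho)
      flow rate scales as area * velocity:          Q ~ k_L**2 * sqrt(k_p / k_rho)
    """
    p_scaled = p_model * k_p
    q_scaled = q_model * k_L ** 2 * np.sqrt(k_p / k_rho)
    return q_scaled, p_scaled

def normalize(q, p, q_ref_max, p_ref_max):
    """Rate a curve to reference maxima, as done for Figure 5 in the paper."""
    return q / q_ref_max, p / p_ref_max

# Hypothetical pressure-flow points of the model supercharger (variant 1)
q1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # flow rate, arbitrary units
p1 = np.array([4.0, 3.6, 2.9, 1.8, 0.5])   # pressure, arbitrary units

# Scale factors between variants 1 and 2: chamber diameters 0.03 m and 0.05 m
k_L = 0.05 / 0.03        # about 1.67, the linear similarity factor quoted in the text
k_p, k_rho = 1.0, 1.0    # same supply pressure and working medium (assumption)

q2, p2 = rescale_characteristic(q1, p1, k_L, k_p, k_rho)  # recalculated to variant-2 scale
qn, pn = normalize(q2, p2, q2.max(), p2.max())            # dimensionless characteristic
print(np.round(qn, 3), np.round(pn, 3))
```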
2,914.8
2017-08-01T00:00:00.000
[ "Engineering", "Physics" ]
SUBSTRATE TEMPERATURE AND ANNEALING EFFECTS ON THE STRUCTURAL AND OPTICAL PROPERTIES OF NANO-CdS FILMS DEPOSITED BY VACUUM FLASH EVAPORATION TECHNIQUE which is assigned to the (002) plane of the hexagonal structure. It was found that the intensity of the (002) diffraction peak increases as the substrate temperature decreases. The average grain size calculated from the XRD results is about 30 nm and depends only weakly on the substrate temperature. After annealing for 30 min at 400 °C the intensity of the (002) peaks increased considerably, while the average grain size grew by only 5-10%. Profiler and AFM results show that the roughness and morphology of the CdS thin-film surfaces vary insignificantly with substrate and annealing temperature. The transmission and reflection spectra of the CdS thin films were measured in the range 400-1000 nm. Optical measurements show that the films deposited at substrate temperatures of 200 °C and 300 °C have a bandgap of 2.42 eV, which corresponds to the bandgap of bulk CdS crystal. After annealing the bandgap values changed negligibly. Introduction Cadmium sulfide (CdS) is a common material used in the formation of solar cells based on cadmium telluride (CdTe) and CuInGaSe2 (CIGS). In such solar devices the n-type CdS thin film acts as an optical window due to its high bandgap energy Eg = 2.42 eV. For solar-cell applications CdS films must have relatively high conductivity to reduce the electrical losses of the cell, a small thickness to provide high transmission, and good uniformity to prevent electrical short-circuit effects. The structural, optical and electrical properties of CdS thin films strongly depend on the deposition technique and the substrate temperature [1]. Different techniques have been reported for the deposition of CdS thin films, namely vacuum thermal evaporation [2][3][4][5][6], spray pyrolysis [7], electrodeposition [8] and chemical bath deposition [9][10][11]. Among these, the vacuum evaporation technique is very simple, inexpensive and well established for the preparation of large-area uniform films. In contrast to many other techniques, the vacuum evaporation process itself does not heat the substrate, so the substrate temperature can be controlled accurately by external heating. Owing to this feature, the vacuum evaporation technique is applicable to the formation of solar cells on flexible polyimide substrates. Replacing the glass substrate with a flexible substrate reduces the weight of thin-film solar cells by 98%. Such solar cells are very promising for application on spacecraft, since they have a high ratio of electric power to weight. Flexible solar cells are also of interest on the market for terrestrial applications, as they can easily be mounted on surfaces of various shapes. For the fabrication of solar cells on flexible substrates it is necessary to use low-temperature film deposition techniques, because polyimide substrates do not allow heating above 450 °C. The most promising such techniques are magnetron sputtering and vacuum thermal evaporation. For the evaporation of multicomponent compounds whose constituents have different vapor pressures, the more suitable variant of the vacuum thermal evaporation method is vacuum flash evaporation [12][13][14][15][16][17]. 
A distinguishing feature of this method is that the film is deposited by continuous vacuum evaporation of small portions of the multicomponent compound. Small particles of the crushed multicomponent alloy, whose constituents have different vapor pressures (for instance Cd and S in CdS), fall randomly into the preheated flash exchanger. Since the particles have different sizes and fall randomly, at any moment the flash exchanger contains particles at different temperatures and at different stages of evaporation of the compound constituents. The simultaneous presence of such particles in the flash exchanger provides, on average, equal fluxes of the evaporated constituents and hence a stoichiometric composition of the deposited film with pinpoint accuracy. Unfortunately, only a very limited number of research works have been devoted to studying the characteristics of CdS films deposited by the vacuum flash evaporation technique. This technique is also called the method of discrete evaporation. The aim of this work was to study the structural, morphological and optical characteristics of nano-structured CdS thin films deposited on glass substrates by the flash evaporation technique, for use as a transmitting hetero layer in chalcogenide solar cells. Experimental details Thin films of CdS were fabricated by the flash evaporation technique on commercial glass slides used as substrates, with a thickness of 1 mm and a diameter of 20 mm. The substrates were thoroughly cleaned ultrasonically with a soap-free detergent and then repeatedly rinsed in distilled water to remove traces of detergent. Thereafter the substrates were cleaned ultrasonically in ethanol and dried by blowing air. Before evaporation the vacuum chamber was pumped to a base pressure of 1.5×10⁻⁵ Torr. A schematic view of the flash evaporation setup is presented in Fig. 1. The starting material was a pure bulk CdS crystal, which was crushed to a powder with particle dimensions of 50-150 μm and then placed inside a batcher. The CdS particles were dispensed from the batcher onto the molybdenum flash exchanger by opening and closing the batcher outlet by means of an electro-vibrator (Fig. 1). The amount of CdS particles falling into the flash exchanger was controlled by the opening time of the outlet. The outlet opens when the electro-vibrator is attracted upwards by the electromagnet. In our experiments the opening time was varied from 0.2 to 0.5 s. The outlet is opened again, to evaporate the next portion of CdS particles, only after all particles contained in the flash exchanger have evaporated. The evaporation process was monitored through the chamber window. Dispensing of the CdS particles was started after the flash exchanger had been heated to 1200-1300 °C, a temperature suitable for evaporating the CdS powder. The distance between the flash exchanger and the substrate was about 12 cm. The film thickness and the deposition time were typically 200-300 nm and 20-40 min. The CdS films were deposited at substrate temperatures of 100 °C, 200 °C and 300 °C. As-deposited films were annealed at 400 °C for 30 min in vacuum. The structural properties of the samples were studied with an URD-6 X-ray diffractometer in the θ-2θ mode using Cu Kα radiation (λ = 1.5405 Å). The surface morphology and roughness of the CdS films were investigated with a ZYGO profiler and by atomic force microscopy (AFM) using a NEXT instrument supplied by NT-MDT Inc. 
Transmittance and reflectance over the wavelength range from 400 to 1000 nm were measured using a double-beam Filmetrics F20 spectrophotometer. Structural properties of as-deposited and annealed CdS thin films Fig. 2 shows the X-ray diffraction patterns of as-deposited (red curves) and annealed (blue curves) CdS thin films deposited by the flash evaporation technique on a glass substrate. The diffraction patterns were recorded over the 2θ range from 15° to 60° with a step of 0.1°. XRD analysis showed that the CdS films were polycrystalline. Both as-deposited and annealed CdS films exhibit a predominant sharp peak at 2θ around 26.5°, which can be assigned either to the (002) plane of the hexagonal structure [JCPDS card no. 41-1049] or to the (111) plane of the cubic structure [JCPDS card no. 10-0454]. An exact interpretation of the XRD patterns is quite difficult, because most peaks of the cubic and hexagonal CdS structures differ only negligibly, within a very small angle. A summary of the XRD data of as-deposited CdS thin films at different substrate temperatures is presented in Table 1; Table 2 summarizes the XRD data after annealing. After annealing, as can be seen from Table 2, the structure of the deposited films clearly changed: the peaks around 2θ = 51° (for CdS films as-deposited at 200 °C and 300 °C) disappeared and the predominant sharp peaks at 2θ around 26.5° increased drastically (Fig. 2, blue curves). The structural parameters of the as-deposited and annealed CdS thin films, such as the average grain size (D), dislocation density (δ), number of crystallites per unit area (N) and microstrain (ε), are compared in Table 3. The average grain size (D) of the films was calculated using the Scherrer equation [18], D = Kλ/(β cos θ) with shape factor K ≈ 0.9, where λ is the wavelength of the X-rays used (λ = 1.5405 Å), β is the full width at half maximum (FWHM) of the (002) peak, which has the maximum intensity, and θ is the Bragg angle. The variation of the grain size with substrate temperature is presented in Table 3. It can be seen that the grain size varies insignificantly. The largest grain size was 30.43 nm, observed for the CdS film as-deposited at 200 °C, and 34.74 nm for the CdS film deposited at 100 °C after annealing. The dislocation density δ, defined as the length of dislocation lines per unit volume, was calculated using the relation δ = 1/D² [19]. The results are presented in Table 3; the small values of δ confirm the good crystallinity of the CdS films. The number of crystallites per unit area (N) and the microstrain (ε) of the films were estimated using the equations N = t/D³ and ε = β cos(θ)/4 [20]. As seen from Table 3, annealing led to a decrease in the microstrain and in the number of crystallites per unit area of all as-deposited films. The lowest values of microstrain and number of crystallites per unit area were observed for the CdS film as-deposited at 200 °C and for the CdS film deposited at 100 °C after annealing. 
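As a quick numerical illustration of the structural-parameter calculations just described, the sketch below evaluates D, δ, N and ε from a peak position and FWHM. The FWHM and film thickness used here are hypothetical example values, not data from the tables.

```python
import math

# Constants and assumed example inputs (not values from the paper)
WAVELENGTH_NM = 0.15405   # Cu Kα wavelength in nm (1.5405 Å)
K_SHAPE = 0.9             # Scherrer shape factor (common assumption)

two_theta_deg = 26.5      # (002) peak position, degrees
fwhm_deg = 0.28           # hypothetical FWHM of the (002) peak, degrees
thickness_nm = 250.0      # hypothetical film thickness t

theta = math.radians(two_theta_deg / 2.0)
beta = math.radians(fwhm_deg)          # FWHM must be converted to radians

# Scherrer equation: average grain size D = K*lambda / (beta*cos(theta))
D = K_SHAPE * WAVELENGTH_NM / (beta * math.cos(theta))

# Dislocation density, crystallites per unit area, microstrain (relations used in the paper)
delta = 1.0 / D ** 2                    # lines per nm^2
N = thickness_nm / D ** 3               # crystallites per nm^2 of film area
epsilon = beta * math.cos(theta) / 4.0  # dimensionless microstrain

print(f"D = {D:.1f} nm, delta = {delta:.2e} nm^-2, N = {N:.2e} nm^-2, eps = {epsilon:.2e}")
```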
Optical properties of as-deposited and annealed CdS thin films Using the transmittance (T) and reflectance (R) spectra, the absorption coefficient α was calculated taking into account the CdS film thickness t. Above the fundamental absorption edge the dependence of the absorption coefficient on the incident photon energy is given by Tauc's model [21], αħω = B(ħω − Eg)^n, where ħω is the photon energy, Eg is the optical bandgap, B is a constant and n is an exponent that depends on the type of optical transitions; for direct allowed transitions n = ½. The variation of (αħω)² for CdS thin films deposited at different substrate temperatures as a function of photon energy is shown in Fig. 3. Fig. 4 presents the variation of (αħω)² versus photon energy for the CdS thin films after annealing. Using the linear extrapolation method, the optical bandgap of the as-deposited CdS thin films was determined to be 2.39, 2.42 and 2.42 eV for 100 °C, 200 °C and 300 °C, respectively. After annealing the optical bandgap changed insignificantly, becoming 2.38, 2.40 and 2.40 eV, respectively. The measurements show that the CdS thin films deposited at 200 °C and 300 °C have the same optical bandgap as bulk CdS crystal, Eg = 2.42 eV. Morphological properties of as-deposited and annealed CdS thin films Fig. 5 shows typical 1.5 μm × 1.5 μm AFM images of CdS films deposited on a glass substrate at different temperatures by the flash evaporation technique. For statistics, several AFM images of different sites on the surface of each sample were investigated. The profiler results and AFM images show that the morphology and roughness of the surface depend only weakly on the substrate temperature. Typical AFM images of as-deposited and annealed CdS thin films are presented in Fig. 5 and Fig. 6, respectively. Conclusion The structural, morphological and optical properties of CdS thin films deposited on glass substrates at different temperatures (100 °C, 200 °C and 300 °C) by the vacuum flash evaporation technique have been investigated, together with the effect of annealing on these properties. XRD results showed that both the films as-deposited at different temperatures and the annealed CdS thin films were polycrystalline, have a hexagonal structure and exhibit a predominant sharp (002) peak at 2θ around 26.5°. The intensity of the (002) diffraction peak increases as the substrate temperature decreases. After annealing, the peak at 2θ = 51° (for CdS films as-deposited at 200 °C and 300 °C) disappeared and the predominant sharp peaks at 2θ = 26.5° increased. Using the XRD results, the structural parameters of the CdS thin films, such as the average grain size (D), dislocation density (δ), number of crystallites per unit area (N) and microstrain (ε), were calculated. These results show that the best crystalline structure is obtained for CdS films as-deposited at 200 °C and for CdS films deposited at 100 °C after annealing. The optical measurements show that the CdS thin films deposited at 100 °C have a bandgap of 2.39 eV, while the films deposited at 200 °C and 300 °C have a bandgap of 2.42 eV, equal to that of bulk CdS crystal. After annealing these bandgap values decreased slightly, from 2.39 eV to 2.38 eV and from 2.42 eV to 2.40 eV. The AFM images and profiler measurements show that the roughness of the CdS films is 5-8 nm and depends only weakly on both the substrate temperature and annealing. 
Fig. 5. Typical AFM images and grain-size distribution of as-deposited CdS thin films on a glass substrate at 100 °C. Fig. 6. Typical AFM images and grain-size distribution of CdS thin films deposited on a glass substrate at 100 °C after annealing. Table 1. XRD data of as-deposited CdS thin films at different substrate temperatures. Table 2. XRD data of CdS thin films deposited at different substrate temperatures after annealing. Table 3. Structural parameters of CdS films as-deposited at substrate temperatures of 100 °C, 200 °C and 300 °C and after annealing at 400 °C for 30 min.
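Returning to the optical analysis above, the sketch below shows one way to carry out the linear (Tauc) extrapolation numerically: compute (αħω)² from transmittance and reflectance, fit a straight line over the steep region above the absorption edge, and take the intercept with the energy axis as Eg. The absorption-coefficient expression α = (1/t)·ln[(1−R)²/T] and the synthetic spectra are assumptions made for illustration only; the paper's own data and exact formula are not reproduced.

```python
import numpy as np

def tauc_bandgap(energy_eV, T, R, thickness_cm, fit_window=(2.45, 2.60)):
    """Estimate a direct optical bandgap by linear extrapolation of (alpha*E)^2.

    Assumed relation (a common approximation, not necessarily the paper's):
        alpha = (1/t) * ln((1 - R)^2 / T)
    """
    alpha = np.log((1.0 - R) ** 2 / T) / thickness_cm          # cm^-1
    y = (alpha * energy_eV) ** 2                                # Tauc ordinate for n = 1/2
    lo, hi = fit_window
    mask = (energy_eV >= lo) & (energy_eV <= hi)                # steep region above the edge
    slope, intercept = np.polyfit(energy_eV[mask], y[mask], 1)  # straight-line fit
    return -intercept / slope                                   # intercept with (alpha*E)^2 = 0

# Synthetic example spectra for a hypothetical 250 nm film with Eg close to 2.42 eV
E = np.linspace(2.0, 3.0, 200)
Eg_true, t = 2.42, 250e-7                                       # thickness in cm
alpha_true = 1.5e5 * np.sqrt(np.clip(E - Eg_true, 0, None))     # direct-gap behaviour
R = np.full_like(E, 0.15)
T = (1 - R) ** 2 * np.exp(-alpha_true * t)                      # invert the assumed relation

print(f"Estimated Eg = {tauc_bandgap(E, T, R, t):.2f} eV")      # close to 2.42 eV
```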
3,110.8
2016-01-01T00:00:00.000
[ "Materials Science", "Physics" ]
Editing of the ethylene biosynthesis gene in carnation using CRISPR-Cas9 ribonucleoprotein complex The study aimed to edit ethylene (ET) biosynthesis genes [1-aminocyclopropane-1-carboxylic acid (ACC) synthetase 1 (ACS1) and ACC oxidase 1 (ACO1)] in carnation using the CRISPR/Cas9 ribonucleoprotein (RNP) complex system. Initially, the conserved regions of the target genes (ACS1 and ACO1) were validated for the generation of different single guide RNAs (sgRNAs), followed by the use of an in vitro cleavage assay to confirm the ability of the sgRNAs to cleave the target genes specifically. The in vitro cleavage assay revealed that the sgRNAs were highly effective in cleaving their respective target regions. The complex of sgRNA: Cas9 was directly delivered into the carnation protoplast, and the target genes in the protoplast were deep-sequenced. The results revealed that the sgRNAs were applicable for editing the ET biosynthesis genes, as the mutation frequency ranged from 8.8 to 10.8% for ACO1 and 0.2–58.5% for ACS1. When sequencing the target genes in the callus derived from the protoplasts transformed with sgRNA: Cas9, different indel patterns (+ 1, − 1, and − 8 bp) in ACO1 and (− 1, + 1, and + 11) in ACS1 were identified. This study highlighted the potential application of CRISPR/Cas9 RNP complex system in facilitating precise gene editing for ET biosynthesis in carnation. Supplementary Information The online version contains supplementary material available at 10.1186/s13007-024-01143-0. Introduction Carnations (Dianthus caryophyllus) possess fragile petals and vibrant hues, which has contributed to their widespread popularity in the floricultural industry.These flowers often take centre stage during momentous occasions due to their symbolism of affection, admiration, and memorial sentiments.The cultivation of carnations has undergone remarkable expansion in both its global market value and production volume.The global market value of carnations has already attained an impressive 2.47 billion USD and is anticipated to rise to 2.9 billion USD by 2024 [1,2].However, the delicate nature of these floral arrangements often poses a challenge in maintaining their freshness and longevity.Ethylene (ET) is a naturally occurring plant hormone synthesized from the transcription of specific ET biosynthesis genes aminocyclopropane-1-carboxylic synthase (ACS) and aminocyclopropane-1-carboxylic oxidase (ACO).ET plays a crucial role in determining the vase life of the flowers as it acts as a positive regulator of flower senescence [3][4][5][6][7][8][9]. The flower market continues to exhibit a growing demand for carnations with an extended vase life.Although ET inhibitors such as silver thiosulfate complex (STS) [5,10], aminoethoxyvinylglycine (AVG) [11], nanosilver [5,8,12], and sodium nitroprusside [13] can be employed to delay the senescence of carnation flowers, major concerns have arisen regarding potential human health risks due to chemical exposure and potential environmental contamination resulting from the disposal of these chemical compounds.Therefore, the application of ET inhibitors does not appear to be a promising approach for extending the longevity of carnation flowers.CRISPR/Cas9 system has emerged as the technology of choice for editing unwanted genes due to its simplicity, specificity, efficiency, and versatility.Petal senescencerelated genes have been successfully edited in morning glory, rose and petunia using CRISPR/Cas9 system to extend flower longevity [9,[14][15][16].Xu et al. 
[9,16] reported that editing of ET biosynthesis genes (ACO1, ACO3, and ACO4) in petunia significantly reduced ET production and improved flower longevity, and the edited genes are also stably transmitted to subsequent generations.In carnation, ET biosynthesis genes encoding ACS and ACO enzymes have been identified as DcACS1, DcACS2, DcACS3, and ACO1 [17][18][19].whereas DCACS1 and DcACO1 were most abundant in petals and gynoecium, notably inducing ET production and flower senescence.Previous studies also have indicated that petal senescence in carnation is regulated by transcriptional regulation of DcACS1 and DcACO1, and the application of ET inhibitors delayed petal senescence by suppressing the expression levels of DcACS1 and DcACO1, as well as ET production [8,12,13].The editing of the ET biosynthesis genes (DcACS1 and DcACO1) in carnation using the CRISPR/Cas9 system will be a promising approach for achieving a permanent reduction of ET production and improvement of flower longevity.Generally, gene-edited mutants were generated through the application of Agrobacterium-mediated transformation; however, this approach often leads to an off-target mutation, a major challenge in genome editing.An alternative approach involves the direct delivery of preassembled CRISPR/Cas9 ribonucleoproteins (RNPs) into protoplasts.As Svitashev et al. [20] demonstrated, this strategy significantly successfully reduced off-target mutations.These RNPs rapidly initiate cleavage at chromosomal target sites upon transfection and are only transiently present in plant cells prior to degradation by proteases and nucleases [21,22].This swift breakdown mechanism within cells can potentially diminish mosaicism and the occurrence of offtarget effects during the regeneration process of entire plants [22].Therefore, in this study, we edited the ET biosynthesis genes (DcACS1 and DcACO1) in carnation by delivering preassembled CRISPR/Cas9 ribonucleoproteins (RNPs) into the protoplast. Plant materials In vitro regenerated carnation plantlets (Dianthus caryophyllus cv.Scarlet) obtained from Gyeongsang National University were subcultured on Murashige & Skoog (MS) medium supplemented with 30 g/L sucrose, 1.0 g/L activated charcoal, and 8.0 g/L plant agar.The culture bottles were placed in a culture room set at a temperature of 25 °C, a photoperiod of 16 h (70 mmol m −2 s −1 ), and a relative humidity (RH) of 70%.Subculturing of the plantlets was performed every 6 weeks using the same fresh medium. Designation of single guide RNA (sgRNA) To edit the ET biosynthesis genes (DcACO1 and DcACS1) in carnation cv.Scarlet, their exon regions were determined based on the complete sequences of DcACO1 (AB042320.1)and DcACS1 (AB605175.1).Five sgRNAs originating from the fourth and the fifth exons of ACS1, as well as two sgRNAs from the second exon of ACO1, were individually designed using the CRISPR RGEN tool (http:// rgeno me.net/) Additional file 1: Fig. S1, following the approach outlined by Park et al. (2015).The exon regions and sequences of the sgRNAs were illustrated in (Fig. 1a, b) and detailed in Additional file 1: Table S1.To ensure precise gene editing, we meticulously selected sgRNAs with no more than two nucleotide mismatches and higher out-of-frame scores.This stringent selection process aimed to guarantee both high specificity and maximum knockout efficiency in the coding regions of the carnation DcACO1 and DcACS1 genes.Readily available recombinant Cas9 protein and sgRNAs were purchased from ToolGen, Inc. 
(Seoul, South Korea). In vitro cleavage assay Genomic DNA was extracted from young carnation leaves using the HiGene Genomic DNA Prep Kit (Biofact, South Korea), following the manufacturer's instructions.The target regions of DcACS1 and DcACO1 were amplified using Phusion polymerase (Thermo Fisher Scientific, Inc. Vilnius, Lithuania) and specific primer pairs (DcACO1 F: 5ʹ AAC ATC TCC GAG GTC CCT GA 3ʹ R: 5ʹ TGA GAT GAG ATG AGA GTG GCG 3, DcACS1 F: 5' ATG ACA TGC AGA TTC CGC GA 3' R: 5ʹ ACC CTT CCA CGG GTT ACA AA 3ʹ).The PCR products were then purified using an Expin PCR SV kit (GeneAll, Seoul, South Korea).Subsequently, the purified PCR products (150 ng) were digested using purified Cas9 protein (1 μg) and sgRNA (0.5 μg) in a 20 μL reaction mixture that included 1 μL of 10 × NEB 3.1 buffer (NEB) (New England Biolabs Inc. USA), 1 μL of 10X bovine serum albumin (BSA) (Elpis Biotech, South Korea.), and nuclease-free water (NFW).The reaction was incubated for 1 h at 37 °C, followed by an additional incubation with RNase A for 15 min at 37 °C.To halt the digestion, 1 μL of STOP solution (30% glycerol, 1.2% SDS, 250 mM EDTA (pH 8.0) was added to the reaction mixture and incubated for a further 15 min at 37 °C before analysis on a 2% agarose gel.The schematic flow chart of the in vitro cleavage assay is depicted in Additional file 1: Fig. S2. Protoplast isolation Protoplasts were isolated from the leaves of in vitro plants, as performed by [1].Briefly, leaf segments (approximately 1.0 g fresh weight) were finely chopped into 0.5 mm pieces using a sharp razor blade.These 0.5 mm pieces were then placed in a falcon tube containing a mixture of 10 ml of cell and protoplast washing (CPW) solution and 0.5 M mannitol.To induce cell wall plasmolysis, the falcon tube was incubated in a shaking incubator at 30 rpm for 1 h at 25 °C.The plasmolyzed cells were then digested in 10 ml of CPW solution comprising 1.2% cellulase, 0.1% macerozyme, 0.1% bovine serum, and 0.5 M mannitol for 6 h at 25 °C.The resulting protoplast was collected by passing through a 100 µm nylon mesh.The collected protoplast was allowed to float on a CPW solution containing 3.0% sucrose, and 2 ml of W5 was pipetted gently onto the sucrose solution.Next, the sucrose solution and washing solution were centrifuged at 180 ×g for 10 min to examine the formation of a viable protoplast floating band at interphases of the sucrose and W5 layers.The viable protoplast solution was collected into a fresh tube using a Pasteur pipette.This solution was then diluted 10 times with W5 solution and centrifuged for 5 min at 125 ×g.The centrifugation was repeated three times.Finally, the protoplasts were resuspended in W5 solution and allowed to rest for 30 min at 4 °C prior to the transfection process. 
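The expected band pattern from an in vitro cleavage assay like the one described above can be predicted from the amplicon length and the position of the protospacer, since SpCas9 typically cuts about 3 bp upstream of the PAM. The sketch below is a generic illustration with hypothetical coordinates; it does not use the actual DcACO1 or DcACS1 sequences.

```python
def expected_fragments(amplicon_len, protospacer_start, strand="+"):
    """Predict the two fragment sizes produced by SpCas9 cleavage of a PCR amplicon.

    Assumptions (general SpCas9 behaviour, not taken from the paper):
      - the 20-nt protospacer is immediately followed by an NGG PAM,
      - the blunt cut falls about 3 bp upstream of the PAM (between positions 17 and 18
        of the protospacer on the protospacer strand).
    Coordinates are 0-based; protospacer_start is the index of the first protospacer base
    in amplicon (top-strand) coordinates.
    """
    if strand == "+":
        cut = protospacer_start + 17   # 3 bp before the PAM on the top strand
    else:                              # protospacer annotated on the bottom strand
        cut = protospacer_start + 3    # 3 bp into the site from its 5' (PAM) side
    return cut, amplicon_len - cut

# Hypothetical example: an 852 bp amplicon with a protospacer starting at position 531
left, right = expected_fragments(852, 531)
print(f"Expected fragments: {left} bp and {right} bp")   # 548 bp and 304 bp
```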
Delivery of sgRNA: CRISPR/Cas9 RNP complex into protoplast The sgRNAs (25 µg) of ACS1 and ACO1 were separately mixed with 25 µg of cas9 protein in 2 µl of NEB 3.1 and sterile distilled water to obtain a ribonuclease protein (RNP) complex with a final volume of 20 µl.Control samples were established with only Cas9 protein without sgRNA.The RNP complex was then incubated for 15 min at 25 °C in the dark.Following this, 100 µl (2 × 10 5 /ml) protoplasts were gently mixed with the RNP complex, and 120 µl of PEG solution (25% PEG 4000, 0.1 M CaCl 2 , and 0.4 M mannitol) was added to the solution (RNP and protoplasts) for delivery of RNP complex into protoplasts.The mixture was then incubated for 15 min at 25 °C in the dark and washed three times at 5 min intervals using 1 ml, 2 ml, and 5 ml of washing solution.The mixture was then centrifuged at 125 ×g for 3 min.Finally, the protoplasts were resuspended in 1 ml of W5 solution in the dark for 24 h.The protoplast transfection was performed following the optimal protocol of Adedeji et al. [1]. DNA Extraction and targeted deep sequencing Genomic DNA was extracted from both protoplasts transfected with either Cas9 alone or with RNPs, using a DNeasy plant mini kit (Qiagen, Hilden, Germany), following the manufacturer's instructions.The gene loci of DcACO1 and DcACS1 were amplified using a Phusion High-Fidelity PCR kit (Thermo Fisher Scientific, Inc. Vilnius, Lithuania) along with the primer pairs indicated in Additional file 1: Table S2.The amplified PCR products were sequenced using an Illumina MiSeq (Macrogen, Seoul, South Korea).The resulting data were analysed using the Cas analyzer (http:// www.rgeno me.net/ casanaly zer/#!) as performed by [23].The sgRNAs with the highest indel frequency for each gene were selected for further investigation. Delivery of selected sgRNA: CRISPR/Cas9 RNP complex into protoplast and callus induction sgRNA1 (for DcACS1) and sgRNA2 (for DcACO1) were selected for editing the respective genes in the protoplasts.The sgRNAs were selected based on their high indel frequency and distinctive patterns, as determined from the analysis mentioned above.The protoplasts (at a concentration of 1.0 × 10 5 protoplasts/ ml) that were transfected with the RNP complex (ACS1: sgRNA, and ACO1: sgRNA) were cultured in petri dishes (60 × 15 mm) containing 2.5 ml of MS liquid media with 0.5 M mannitol, 1% sucrose, 0.1 g/L casein hydrolysate (CH), and different concentrations of plant growth regulators (PGRs) (Additional file 1: Table S3).The petri dishes were sealed with parafilm and kept in the dark at 25 °C.After 12 days of culture, 1 ml of fresh culture media was added to the dishes every 7 days.After 25 days of culture, the osmotic potential was gradually increased by reducing the mannitol concentration by 20% each week, and the cultures were subjected to low light conditions.After 6 weeks of culture, the formation of microcalli (0.5 mm in size) was observed, and the microcalli were transferred onto MS solid media containing 2% sucrose, 0.2 mg/L 2,4-D, and 0.25% gelrite for microcalli proliferation.Next, 3 week-old calli were transferred to a regeneration medium for shoot induction. 
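The indel quantification described in the targeted deep sequencing subsection above can be approximated with a simple comparison window around the expected cut site. The sketch below is a toy version of such logic, not the actual algorithm of Cas-Analyzer; the reference sequence and reads are fabricated examples.

```python
from collections import Counter

def classify_indels(reads, reference, cut_pos, flank=15):
    """Toy indel caller for amplicon deep-sequencing reads around a Cas9 cut site.

    Two anchor sequences flanking the expected cut site are located in each read,
    and the length of the segment between them is compared with the reference:
    a longer segment indicates an insertion, a shorter one a deletion.
    """
    left_anchor = reference[cut_pos - flank - 10 : cut_pos - flank]
    right_anchor = reference[cut_pos + flank : cut_pos + flank + 10]
    ref_span = 2 * flank
    counts = Counter()
    for read in reads:
        i = read.find(left_anchor)
        j = read.find(right_anchor)
        if i == -1 or j == -1:           # anchors not found: read not evaluable
            counts["unaligned"] += 1
            continue
        span = j - (i + len(left_anchor))
        counts[span - ref_span] += 1     # >0 insertion, <0 deletion, 0 wild type
    return counts

# Hypothetical reference and reads (edits placed at a fictitious cut site)
ref = "ACGTACGGTCCATTAGCCTGAAATTCGGATCCGTTAGGCAACTGGATCCATGCCTTAAGGCTAGCATCG"
cut = 35
reads = [ref, ref[:cut] + "T" + ref[cut:], ref[:cut - 2] + ref[cut:], ref]
indels = classify_indels(reads, ref, cut)
total = sum(v for k, v in indels.items() if k != "unaligned")
edited = sum(v for k, v in indels.items() if k not in (0, "unaligned"))
print(indels, f"indel frequency = {100 * edited / total:.1f}%")
```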
Analysis of indel patterns and protein sequence in ACS1and ACO1-edited callus lines For identifying indel patterns of ACS1 and ACO1 in the callus derived from the protoplasts transfected with the RNP complex (ACS1: sgRNA and ACO1: sgRNA), genomic DNA was extracted from single green calli (18 single cell lines), and target regions of the DcACO1 and DcASC1 genes were amplified as described above.The resulting amplicons were sequenced using the Sanger sequencing tool (Macrogen, Seoul, South Korea).The indel patterns were then assessed using DECODR V.3.0 software, while the protein sequences were analyzed through the website www.bioin forma tics.org/ sms2/ trans late. Statistical analysis SPSS Statistics software version 25 (SPSS Inc., Chicago, IL, USA) was used to analyze the experimental data.All experiments were replicated three times, and data were presented as means of three experiments.Significant differences among treatments were determined at p < 0.05 based on the least significant difference test. Generation of sgRNAs and validation of their effi cacy in in vitro cleavage First, the exon regions of the DcAC01 and DcACS1 in carnation cv.Scarlet was verified using Sanger sequencing, resulting in five exon regions in DcACS1 and three exon regions in DcACO1 (Fig. 1a, b).Five distinct sgRNAs (sgRNA1-5) were designed from fourth and fifth exon regions of DcACS1, and two different sgRNAs (sgRNA1 and 2) were designed from second exon region of DcACO1.To ensure the selection of the most suitable sgRNAs, we performed a thorough homology search against the Dianthus caryophyllus reference genome using Cas-Designer from RGEN tools [24].Our criteria included confirming no more than two nucleotide mismatches in the selected sgRNAs. To assess the effectiveness of the designed sgRNAs in editing the target genes, we performed an in vitro cleavage assay for each sgRNA.This assay employed Cas9 protein and in vitro transcribed sgRNAs.As expected, both sgRNA 1 and sgRNA 2 cleaved the 852 bp of the DcACO1 PCR amplicon into approximately 548 and 304 bp (Fig. 1c and Table 1).However, in DcACS1, the sgRNAs exhibited differential cleavage patterns on the 1217 bp of the DcACS1 PCR amplicon, depending on the specific sgRNA used.For sgRNA1, the cleavage resulted in approximately 579 and 397 bp fragments; for sgRNA2, it was approximately 728 and 489 bp fragments; for sgRNA3, it was approximately 802 and 415 bp fragments; for sgRNA4, it was approximately 883 and 334 bp fragments, and for sgRNA5 it was approximately 893 and 324 bp fragments (Fig. 1c and Table 1).The results clearly demonstrate that all the designed sgRNAs were highly effective in cleaving their respective target regions. Assessment of indel percentages and patterns in ACS1 and ACO1 To access insertion/deletion (indel) percentages and patterns in ACS1 and ACO1, CRISPR/Cas9 RNP complex (Cas9 and each sgRNA) was introduced into carnation protoplasts using the PEG-mediated transfection method.This targeted deep sequencing was performed to examine the indel percentages and patterns at the target regions.The results revealed that each sgRNA can induce a range of indel percentages and patterns, with variations depending on the specific sgRNA used, as depicted in (Fig. 2).Specifically, sgRNA1 induced + 1 bp, − 1 bp, − 4 bp at target site of ACO1, with the highest indel percentages observed for + 1 bp and −1 bp (4.9% and 3.03%) (Fig. 
2a). Similarly, sgRNA2 also induced diverse indel patterns (+ 1, − 1, − 2, − 4, − 5, and − 25 bp) at the target site of ACO1, with the highest indel percentage (6.89%) observed for − 25 bp (Fig. 2b). Similar diverse indel patterns were also observed in the target region of ACS1 when the different sgRNAs were employed (Fig. 2c-g). Fig. 2 Targeted deep sequencing analysis of transformed carnation protoplasts for the DcACO1 and DcACS1 genes showing the most frequent mutation patterns (panels for DcACO1 sgRNA 2 and DcACS1 sgRNAs 1-5, each listing the sequence, indel reads and indel %). Blue - target sequence, red - PAM sequence, green - insertions. Interestingly, indel patterns were even observed outside the target regions of ACS1 and ACO1; DcACO1 sgRNA 1 induced a − 70 bp indel outside the target region, while DcACS1 sgRNAs 1, 2, 3 and 5 induced − 60, − 21, − 17 and − 9 bp indels outside the target region, respectively. Large insertions of + 29 and + 35 bp were observed for sgRNA4, while + 27 and + 29 bp insertions were also observed for sgRNA5 (Fig. 2 and Additional file 1: Fig. S3). In contrast, no indel was detected in the non-transfected protoplast samples (wild type; WT). These results provide compelling evidence of the direct delivery of the RNP complex to carnation protoplasts. The observed total indel percentages ranged from 8.8 to 10.8% for DcACO1 and from 0.2 to 58.5% for DcACS1 (Table 2). After evaluating multiple sgRNAs, sgRNA 2, which exhibited an indel percentage of 10.8%, was chosen as the most promising candidate for editing DcACO1. Similarly, sgRNA 1, with an indel percentage of 58.5%, was selected for editing DcACS1. Callus induction from the protoplasts transfected with selected sgRNA: CRISPR/Cas9 RNP complex The RNP complexes [Cas9: sgRNA1 (for DcACS1) and Cas9: sgRNA2 (for DcACO1)] were introduced into the carnation protoplasts, and the protoplasts were cultured in MS liquid media with 0.5 M mannitol, 1% sucrose, 0.1 g/L casein hydrolysate (CH), and different concentrations of PGRs. The developmental stages, colony formation, and calli formation from the protoplasts varied significantly depending on the PGRs used (Table 3). The protoplasts cultured in media supplemented with only zeatin or NAA died within 6 days of culture. Those cultured in media supplemented with either 0.5 mg/L or 1.0 mg/L 2,4-D alone formed microcolonies but halted their development in the third week of culture. However, when protoplasts were cultured in media containing a combination of distinct cytokinins paired with auxin or a mix of two different auxins, they progressed into multicellular divisions and eventually developed into calli. In media enriched with 1.0 mg/L zeatin and 1.0 mg/L 2,4-D, or alternatively with 0.5 mg/L NAA and 0.5 mg/L 2,4-D, the initial protoplast cell division became evident within 5-7 days of cultivation (Fig. 3a). Subsequently, multi-cellular division and the formation of microcalli were observed at the 10-day and 14-day marks (Fig. 3b, c). The progression of microcalli development from transfected protoplasts of DcACO1 and DcACS1 can be visualized in Fig. 3d, e. The highest division frequency and number of calli formed were observed in 0.5 mg/L 2,4-D with 0.5 mg/L NAA, as well as in 1.0 mg/L zeatin with 1.0 mg/L 2,4-D. 
The transfected protoplast was cultured in media supplemented with 1.0 mg/L zeatin with 1.0 mg/L 2,4-D and subsequently transferred to MS media with 0.2 mg/L 2,4-D to promote calli proliferation and subsequently on shoot induction media supplemented with different combination of PGRs (Fig. 3f-k). Assessment of indel percentages and patterns in ACS1 and ACO1 of protoplast-derived callus From the above experiment, we randomly selected 18 single calli derived from protoplasts, which were transfected with the RNP complexes [Cas9: sgRNA1 (for DcACS1) and Cas9: sgRNA2 (for DcACO1)].The results of the Sanger sequencing coupled with DECODR online CRISPR analysis software showed that 8 out of 18 samples for DcACO1 and 5 out of 18 samples for DcACS1 were mutants, indicating the indel percentages of 44.4% in DcACO1 and 27.8% in DcACS1 (Table 4).Additionally, in DcACO1, 75% of the callus lines showed monoallelic mutations, while 25% exhibited biallelic mutations.Remarkably, no instances of triallelic mutations were observed in this context.In contrast, among the 18 samples assessed for DcACS1, 60% indicated monoallelic mutations, while the remaining 40% were indicative of triallelic mutations.Biallelic mutations, however, were not detected within this set of samples.Furthermore, different indel patterns were also observed within the mutant samples.Notably, samples 14 (S14), 16 (S16), and 17 (S17) exhibited the highest indel percentages, accounting for 47.8% indel with a + 1 bp (G) insertion, 52.0% with a − 3 bp deletion, and a remarkable 97.5% indel of a + 1 bp (T) insertion combined with a − 8 bp deletion at the designated DcACO1 target site respectively.Similarly, S5 exhibited a + 1 bp insertion, and S8 presented a − 3 bp deletion, with indel frequencies of 46.9% and 31.8%respectively, both at the target site of DcACO1 (Fig. 4).In the case of DcACS1, S6 and S7 exhibited the highest indel percentages, both at 100%, characterized by a 71.3% indel with a − 1 bp (T) deletion, 22.5% with a + 1 bp (T) insertion, and 6.1% with a + 9 bp insertion, as well as 54.8% indel with a -1 bp (T) deletion, 39.3% with a + 1 bp (T) insertion, and 5.1% with + 11 bp insertion, respectively, at the specified target site (Fig. 5). Similarly, S3, S4, and S14 showcased indels of + 1 (A) insertion at 24.7%, + 1 (A) insertion at 20.2%, and + 1 (T) insertion at 44.6%, respectively (Fig. 5).Additionally, we identified frameshift mutations that would disrupt the reading frame and result in the complete loss of function for both genes.Specifically, we found that the alleles with + 1, − 1, and − 8 bp mutations in DcACO1 and − 1, + 1, and + 11 bp mutations in DcACS1 were frameshift mutations.Additionally, we observed in-frame mutations at the target sites, such as a − 3 bp deletion in S7, S8, and S16 (DcACO1) and a + 9 bp insertion in S17 (DcACS1).The in-frame mutations observed in the DcACO1 and DcACS1 genes hold the potential to modify or partially impair their protein function.These predictions were made using the online tool https:// bioin forma tics.org/ sms2/ trans late.html, and the corresponding results can be found in Tables 5, 6. 
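The frameshift versus in-frame classification used in this analysis follows from translating the edited coding sequence and checking whether the net indel length is a multiple of three. The sketch below illustrates this with a toy coding sequence; it does not use the actual DcACO1 or DcACS1 sequences.

```python
# Minimal standard codon table (stop codons shown as '*'); base order TCAG
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*W"
         "LLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRR"
         "VVVVAAAADDEEGGGG")
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def translate(cds):
    """Translate a coding sequence codon by codon until a stop codon or the end."""
    protein = []
    for i in range(0, len(cds) - 2, 3):
        aa = CODON_TABLE[cds[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

def classify_edit(wild_type_cds, edited_cds):
    """Report the net indel size, whether it shifts the reading frame, and the protein."""
    net = len(edited_cds) - len(wild_type_cds)
    kind = "in-frame" if net % 3 == 0 else "frameshift"
    return net, kind, translate(edited_cds)

# Toy wild-type CDS and two edits: a +1 bp insertion (frameshift) and a -3 bp deletion (in-frame)
wt = "ATGGCTCATCGTGAACTGGCTAAAGGTTGA"
plus1 = wt[:12] + "A" + wt[12:]
minus3 = wt[:9] + wt[12:]
for name, seq in [("wild type", wt), ("+1 bp", plus1), ("-3 bp", minus3)]:
    print(name, classify_edit(wt, seq))
```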
Discussion Floral senescence in carnations is positively associated with increased ethylene production, which is caused by the transcriptional activation of ET biosynthesis genes like ACO and ACS [17][18][19].[9,16] successfully delayed floral senescence in petunia by editing ET biosynthesis genes (ACO1, ACO3, and ACO4) using the CRISPR/ Cas9 tool.In this study, we attempted to edit the ET biosynthesis genes (ACS1 and ACO1) in carnation using the CRISPR/Cas9 tool.Although Agrobacteriummediated transformation technique has been employed for editing target genes with CRISPR/Cas9 tool, this method carries the potential for off-target mutations [20,25].In addition, this method uses selection marker genes for screening of mutants, which has raised concerns among consumers.Instead, CRISPR/Cas9 RNP complex tool, which does not involve the use of selection marker genes, has been increasingly favoured in recent years for editing target genes to reduce the potential for offtarget mutation and to address consumers' concerns.Success in CRISPR/Cas9 RNP complex-mediated editing of target genes depends on various factors, including sgRNA design, transient expression efficiency of Cas9 and sgRNA complex in protoplast, and the formation of protoplast-derived callus [26,27]. In this study, we designed two sgRNAs targeting DcACO1 and five sgRNAs targeting DcACS1, and their editing efficiencies were assessed through in vitro DNA cleavage assays and a protoplast transient assay.All employed sgRNAs exhibited editing activity.However, the results of targeted deep sequencing showed that indel frequencies of two sgRNAs targeting DcACO1 ranged from 8.8 to 10.8% at the DcACO1 locus and of five sgRNAs targeting DcACS1 ranged from 0.2 to 58.5% at DcACS1 locus within carnation protoplasts.These findings are consistent with a previous study by [28], who also found varied indel patterns at cleavage sites when using different sgRNAs for editing the nitrate reductase gene in petunia.The variation in the indel frequencies among the tested sgRNAs may be attributed to several factors, including the secondary structure of the sgRNA molecule, a factor that can influence its affinity for Cas9 binding and its overall stability [29].One more reason for this phenomenon was attributed to discernible doublestrand breaks (DSBs) and the subsequent engagement of We observed that the editing efficiency of sgRNA2 and sgRNA3 targeted for DcACS1 was comparatively low, at 2.44% and 0.2%, respectively.This could be attributed to various factors, including the targeted loci's chromatin states, unwanted hairpin structures, or potential unidentified elements [31,32].In a study done by [31], the chromatin state of the target locus was a significant determinant of the editing efficiency, and the presence of hairpin structures could reduce the editing efficiency by up to 50%.We tested multiple sgRNAs for DcACO1 and DcACS1 and selected sgRNA2 and sgRNA1, respectively, based on NGS results for further experiments.In the case of DcACO1, our analysis revealed that sgRNA1 and sgRNA2 had similar indel frequencies, however, sgRNA2 produced a large − 25 bp deletion with 6.89% indel frequency, which could lead to a complete loss of function in the DcACO1 gene loci.Subsequently, sgRNA1 was selected for DcACS1, as it has a high indel frequency (58.5%) and is less likely to produce off-target effects. 
Media composition, PGR supplementation, and culture conditions need to be considered to develop an effective method for protoplast culture [7] because their effects on protoplast culture vary depending on genotypes and plant ages [33][34][35][36].We observed significant differences in the developmental stages, colony formation, and calli formation of protoplasts when exposed to various combinations of PGRs.Protoplasts cultured solely in media containing zeatin or NAA experienced cell death within 6 days.Conversely, when cultured with either 0.5 mg/L or 1.0 mg/L of 2,4-D alone, the protoplasts formed microcolonies, but their development halted by the third week of culture.This cessation may be attributed to inadequate cell wall regeneration, a vital process for sustained mitotic division.Poorly developed cell walls can result in abnormal mitosis and prevent further cell division [7].Moreover, we noticed a delay of 2 days Table 5 Illustration of protein sequence changes in the calli derived from carnation protoplast transfected with sgRNA2: DcACO1 * Underlined-PAM sequences (NGG), Red-target sequence, Blue-inserted nucleotide, minus sign (−)-deleted nucleotide, Green-start of amino acid change, Grey-Inserted amino acid, Orange-deleted amino acid Table 6 Illustration of protein sequence changes in the calli derived from carnation protoplast transfected with sgRNA1: DcACS1 * Underlined-PAM sequences (NGG), Red-target sequence, Blue-inserted nucleotide, minus sign (−)-deleted nucleotide, Green-start of amino acid change, Grey-Inserted amino acid, Orange-deleted amino acid in the cell division stages of transformed protoplasts compared to non-transformed protoplasts, which could be attributed to the stress experienced during the transfection process.Additionally, we observed that the proliferation and regeneration of protoplasts were influenced by the specific type and combination of PGRs used; this is consistent with findings in other plant species [33,37].[38] edited the F3H gene in petunia using CRISPR/ Cas9 RNP complex to modify flower colour, whereas they obtained 11.9% of mutants.Similarly, [39] also edited CCD7 and CCD8 in tomato using CRISPR/ Cas9 RNP complex, with an editing efficiency (26%) for two genes (CCD7 and CCD8) altogether and an editing efficiency (90%) for a single gene.In our study, callus lines originating from individual transfected protoplasts (Cas9: sgRNA2 for DcACO1 and cas9:sgRNA1 for DcACS1) were selected to assess targeted indels after an 8 week culture period.The 75% DcACO1-edited callus exhibited the presence of short indels, predominantly featuring a + 1 bp insertion.In contrast, 25% showed a deletion of-8 bp.Of the detected callus lines, 75% were monoallelic mutants, and 25% were biallelic mutants.Similarly, of the DcACS1-edited callus lines, 60% showed short indels, primarily featuring a + 1 bp insertion, while 40% exhibited larger insertions (+ 9 or + 11 bp).This finding aligns with the observations made by [40], who similarly identified mutations with varying patterns in callus lines derived from protoplast-derived calli of canola.The in-frame mutations identified within the DcACO1 and DcACS1 genes have the potential to induce modifications or partial impairments to the functionality of their respective encoded proteins.The protein sequences resulting from the CRISPR/Cas9 edits within both genes were examined in a subsequent analysis.Our findings showed deviations in these sequences when compared to the wild-type protein sequence.The specific nature of these 
deviations was dependent on the specific type of indel.This observation provides empirical support for the hereditary transfer of mutations within the callus lines, resulting in a complete loss of functionality in the DcACO1 and DcACS1 genes.To the best of our knowledge, the present report provides the first data for precise editing of the carnation genome using CRISPR/Cas9 RNPs. Conclusion Our study has demonstrated the effectiveness of CRISPR/ Cas9 tool in achieving targeted mutagenesis within the DcACO1 and DcACS1 gene loci in carnation protoplasts.Our results revealed that the mutation efficiency rates varied depending on the specific sgRNAs designed for these gene loci.As the CRISPR RNP method facilitated editing of the DcACO1 and DcACS1 without introducing foreign DNA, this approach offers several advantages, particularly in situations where GMO regulations do not apply, thereby enhancing the efficiency and costeffectiveness of CRISPR-Cas9 technology for crop enhancement.It sets the stage for the development of novel floricultural crops with desirable traits, thereby contributing significantly to the advancement of floricultural industry. ( See figure on next page.)Fig. 1 Schematic diagram of the target gene locus.a. Target locus of the two gRNAs designed for the DcACO1 gene.b.Target locus of the five gRNAs designed for the DcACS1 gene.c.Cleavage assay for CRISPR RNPs in DcACO1 and DcACS1 gene.Lanes L, DNA ladder, T, treated with Cas9 and gRNA, C positive control, and N negative control Fig. 4 Fig. 4 Illustration of indel patterns in the calli derived from carnation protoplasts transfected with sgRNA1: DcACO1.These results were generated by analysing the Sanger sequencing results with DECODR software.The top panels display the graphs for the indel distribution rate.The bottom panel shows the list of sequences as alignments, indel patterns, and percentages (%).Insertion (highlighted with purple rectangles) and deletion [minus sign (−)] of mutations are shown in alignments.A 20 bp target and 3 bp PAM site are depicted with green and red lines, respectively Fig. 5 Fig. 5 Illustration of indel patterns in the calli derived from carnation protoplasts transfected with sgRNA1: DcACS1.These results with DECODR software.The top panels display the graphs for the indel distribution rate.The 23 bottom panel shows the list of sequences as alignments, indel patterns, and percentages (%).Insertion (highlighted with purple rectangles) and deletion [minus sign (−)] of mutations are shown in alignments.A 20 bp target and 3 bp PAM site are depicted with green and red lines, respectively Table 1 In vitro cleavage efficiency of sgRNAs targeting the DcACO1 and DcACS1 gene loci Table 2 Mutation rates in the transformed protoplasts, quantified by targeted deep sequencing Table 3 Effect of different PGR combinations and concentrations on protoplast culture *The data represent the mean of three replicates per treatment.Means with the same letter are not significantly different by Duncan's multiple range test (DMRT, p < 0.05).Division frequency, colony formation (no. of cells that divided more than three times), and the number of calli were calculated in 8 days, 12 days, and 9 weeks after protoplast culture, respectively.-Not observed, + Poor, + + Moderate, + + + Good, + + + + Excellent No PGR ( Table 4 Percentages of callus lines found with different mutation types in the target sequence
7,200.2
2024-02-02T00:00:00.000
[ "Biology", "Environmental Science", "Engineering" ]
Progress in Antimonide Based III-V Compound Semiconductors and Devices In recent years, narrow bandgap antimonide based compound semiconductors (ABCS) have been widely regarded as the first-candidate materials for fabricating third generation infrared photon detectors and integrated circuits with ultra-high speed and ultra-low power consumption. Their unique bandgap structure and physical properties open a vast space for developing various novel devices, and they have become a hot research area in developed countries such as the USA, Japan, Germany and Israel. Research progress in the preparation and application of ABCS materials, existing problems and some of the latest results are briefly introduced. Introduction Antimonide based compound semiconductors (ABCS) mainly refer to the antimonide based binary, ternary and quaternary compound semiconductor materials consisting of group-III elements (Ga, In, Al, etc.) and Sb, As and other group-V elements, such as GaSb, InSb, AlGaSb, InAsSb, AlGaAsSb, InGaAsSb and so on. Their lattice constants are around 6.1 Å, and together with the InAs-based materials they have been routinely called the "6.1 Å III-V family materials". The basic feature of antimonide based semiconductors is their narrow bandgap; when lattice matched, or nearly matched with strain, to GaSb, InAs, InP and other commonly used substrates, their bandgap can be adjusted over a wide range covering from the near-infrared wavelength of 0.78 µm (AlSb) to the far-infrared spectral region of 12 µm (InAsSb). The heterojunctions formed between them can have type-I, type-II staggered and type-II misaligned band lineups. The unique band structure and excellent physical properties of ABCS based materials provide great freedom and flexibility for band engineering and structural design of materials and create a broad space for the development of high-performance microelectronic and opto-electronic devices and integrated circuits. Applications could include active-array space-based radar, satellite communications, ultra-high-speed and ultra-low power integrated circuits, portable mobile devices, gas environmental monitoring, chemical detection, bio-medical diagnosis, drug analysis and other fields [1][2][3][4][5][6][7][8]. The Physical Properties and Preparation Technology of ABCS Based Materials In-depth study of antimonide based semiconductor materials and device applications has developed rapidly over the past ten years. Especially after the antimonide based compound semiconductors program (ABCS program) [9] was launched by the Defense Advanced Research Projects Agency (DARPA) of the USA in 2001, a series of important developments and breakthroughs have been made in the study of antimonide based microstructure materials and device applications worldwide. The narrow bandgap antimonide based compound semiconductors are widely regarded as the first-candidate materials for fabricating third generation infrared photon detectors and integrated circuits with ultra-high speed and ultra-low power consumption, and also as important materials for mid- and far-infrared quantum cascade lasers and for thermophotovoltaic cells suitable for medium and low temperature heat sources. A comparison of the physical properties of III-V compound semiconductors (at RT) is shown in Table 1. We can see that ABCS have excellent physical properties.
For example, InSb has the smallest bandgap, the smallest carrier effective mass, and the largest electron saturation drift velocity and mobility of any III-V compound semiconductor material. The relationship between energy gap and spectral wavelength versus lattice constant is shown in Figure 1, which also shows the evolution of HEMT and HBT transistors toward higher frequencies and lower power operation. The relative position between the energy gap and band offset of III-V semiconductors is shown in Figure 2. Thus it can be seen that there is a considerable band offset and a rich structure of energy band alignments in ABCS heterojunctions. By regulating the compositions of ABCS multinary compounds, it is convenient to carry out bandgap engineering of novel devices under lattice-matched or strain-matched conditions. Antimonide based compound semiconductors can generally be divided into bulk crystals and film materials. The most common bulk crystals are GaSb, InSb and InAs. Due to the relatively low melting points of GaSb and InSb (712 ℃ and 525 ℃, respectively), the absence of dissociation near the melting point and their small saturation vapor pressures, they can be prepared using horizontal Bridgman zone-melting growth or the vertical pulling (VP) method, similar to the growth of Ge bulk crystals. InAs bulk crystals (melting point 943 ℃) can be grown using the liquid-encapsulated Czochralski (LEC) pulling method or the vertical gradient freeze (VGF) method, similar to the growth of GaAs bulk crystals. Because of their small bandgaps, the intrinsic carrier concentrations of ABCS at room temperature are too high to obtain high-resistivity (semi-insulating) substrate materials, which is a serious impediment to ABCS applications in the field of microelectronic devices. At present, the carrier concentration of ultra-high-purity InSb bulk crystals can be less than 10^13/cm^3 and the residual hole concentration of GaSb bulk crystals is about 2 × 10^16/cm^3. Because the growth process is very immature and there are miscibility gaps in multi-element antimonides, ternary and quaternary antimonide bulk crystal materials are rarely used. The commonly used methods for the preparation of antimonide film materials are liquid phase epitaxy (LPE), molecular beam epitaxy (MBE) and metal organic chemical vapor deposition (MOCVD or OMVPE). The LPE method has the advantages of a relatively simple process, less expensive epitaxial equipment, a high utilization rate of the source material, high crystalline quality of the epitaxial films, fast growth, and particular suitability for the preparation of thick-film materials. LPE is a near-thermodynamic-equilibrium growth technology and therefore cannot be used for the growth of metastable ternary and quaternary antimonide materials whose compositions lie within the miscibility gap. Its growth rate is generally higher than those of MOCVD and MBE and varies with the substrate crystalline phase, with typical growth rates from 100 nm/min to a few µm/min. The weakness of LPE is that it cannot be used for precisely controlled growth of very thin films at the nanoscale; that is to say, it is not applicable to the growth of superlattices, quantum-well devices and other complex microstructure materials. In addition, the morphology of materials grown by LPE is usually worse than that of materials grown by MOCVD or MBE.
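As a quick aid to reading the energy gap versus wavelength relationship shown in Figure 1 above, the short sketch below converts a bandgap to its corresponding cutoff wavelength via λ(µm) ≈ 1.2398/Eg(eV). The bandgap values are approximate room-temperature literature figures quoted only for illustration, not data taken from this paper.

```python
# Convert a semiconductor bandgap to its optical cutoff wavelength: lambda = h*c / Eg.
HC_EV_UM = 1.23984  # eV*um

def cutoff_wavelength_um(bandgap_ev):
    return HC_EV_UM / bandgap_ev

# Approximate room-temperature bandgaps (eV), for illustration only.
for name, eg in {"AlSb": 1.6, "GaSb": 0.73, "InAs": 0.35, "InSb": 0.17}.items():
    print(f"{name}: Eg ~ {eg} eV -> cutoff ~ {cutoff_wavelength_um(eg):.2f} um")
```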
In recent years, a new method combining LPE with Zn diffusion technology has been developed for low-cost, highly efficient GaSb based InGaAsSb homogeneous p-n junction thermophotovoltaic (TPV) cells [8]. This method first grows a lattice matched n-In0.15Ga0.85As0.17Sb0.83 (0.55 eV) epitaxial layer on a Te-doped n-type GaSb substrate by LPE and then forms the p-n homojunction in the InGaAsSb layer using the Zn diffusion method. The external quantum efficiency of the TPV cell is as high as 90% at a radiation wavelength of 2 µm and the cut-off wavelength is 2.3 µm, very close to the technical parameters of materials grown by MOCVD or MBE. In addition, the LPE method is also used to grow materials for mid-infrared InGaAsSb and InSb-based infrared detectors, LEDs and LDs. It is a relatively mature, high-efficiency, low-cost growth technology that is easy to industrialize. Both MOCVD and MBE are low-temperature, non-thermodynamic-equilibrium epitaxial growth technologies. They can grow almost all compositions of multi-element compound thin films, including metastable ternary and quaternary antimonides whose compositions lie within the miscibility gap. Both of them can be used for the growth of complex microstructural materials with ultrathin layers and are very suitable for the development of new optoelectronic devices and circuits. Antimonide based materials grown by either MOCVD or MBE have their own characteristics. For a specific device structure, it is still hard to judge which growth method is better. In general, MOCVD is suitable for mass production of epitaxial materials whose device structure is relatively mature, and it is easy to expand the size and production capacity. MBE is more suitable for research and development of novel epitaxial materials with hyperfine and complex structures. Although production-scale MBE equipment has been developed, it is still not economical to use MBE for mass production when considering the cost. The first epitaxial growth of antimonide thin film materials using MOCVD was done by Manasevit and Simpson in 1969, who used TMGa and SbH3 (stibine) sources to grow GaSb films [4]. Unlike epitaxial materials grown by MBE, the types of metalorganic sources have a critical influence on the quality of epitaxial materials grown by MOCVD. The commonly used group-III metalorganic sources for MOCVD of antimonide based compounds are the trimethyl and triethyl compounds, such as TMGa, TMIn, TMAl, TEGa, TEIn, etc. The commonly used group-V sources are TMSb, AsH3, PH3, TMBi and RF-N2, etc. Antimonides are generally low-melting-point materials, and the epitaxial substrate temperature is generally about 500 ℃. Apart from TMIn, which has a lower decomposition temperature (250-300 ℃), the majority of group-III metalorganic sources cannot be completely decomposed below 500 ℃. Therefore, to grow InSb, whose melting point is only 525 ℃, new organic source materials with lower decomposition temperatures must be adopted. At present, the new organic sources that have been successfully applied to growing antimonides by MOCVD are TDMASb (trisdimethylaminoantimony, decomposition temperature < 300 ℃), TBDMSb, TASb (triallylantimony), TMAA (trimethylamine alane), TTBAl (tritertiarybutylaluminum), EDMAA (ethyldimethylamine alane) and so on.
In addition, because of the lack of room temperature chemical stabilized antimony hydride (SbH 3 ), when growing Al-containing antimonide materials (such as: AlSb, AlGaSb, AlGaAsSb, etc.), it is easy to appear carbon and oxygen contamination problem. This phenomenon may be related to the lack of active hydrogen atoms on the surface of epitaxial materials in which C is general for p-type doping. Even if the Al content in the alloy is only 20%, the doping concentrations of C and O can reach more than 1 × 10 18 /cm 3 in the epitaxial film. This causes certain difficulties in growing n-type doping Al-containing antimonide films. The presence of high concentration of O impurity in Al-containing antimonide materials will make these materials have the semi-insulating properties and difficult to measure their electrical properties. The origin of O impurity is very complex, and both the purity of the metal organic sources and the epitaxial environment and process conditions are closely related. The development of new organic aluminum source such as TMAA, TTBAl, EDMAA etc. is precisely in order to inhibit the serious C contamination problem [4][5]. Thus, growing AlSb and their multielement materials using MOCVD is the most challenging work in all the III-V epitaxial materials technologies. The epitaxial growth of antimonide materials using MBE was following earlier pioneering work of the IBM group of L.L Chang and L. Esak, first on InAs/GaSb and InAs/AlSb films [3]. Different from MOCVD process, MBE uses ultra-high vacuum epitaxial environment with single-element materials for molecular beam sources and is easy to implement epitaxy of atomic layer and in situ real-time monitoring, avoiding the C-pollution problem which exits in Al-containing materials growing by MOCVD and greatly reducing the concentration of O doping. In fact most of the prototype devices having complex fine structures and low-dimensional structures (quantum wells, quantum wires and quantum dots) were first achieved using materials grown by MBE. It is noteworthy that, no matter MOCVD or MBE method, the use of substrates whose surface orientation have a small angle offset (i.e., low-density atomic step on the surface of the substrate) seem to be more accessible high-quality epitaxial layers. Experiments confirmed that the use of GaSb (100) substrates miscut 2° towards (110) or 6° towards (1ī1) B may get higher crystal quality of InGaAsSb and AlGaAsSb epitaxial layers [5]. To overcome the difficulty that antimonides have no semi-insulating substrate materials, the use of GaAs, Si and other heterogeneous substrate materials for epitaxy of ABCS films have also attracted great attention. H. Toyota, etc. [10] reported that they grown high-quality GaSb/AlGaSb multi-quantum well (MQW) structures with a 5nm AlSb initiation layer and a relatively thick GaSb buffer layer (0.5-2.0 µm) grown on Si (001) substrates by molecular beam epitaxy. The photoluminescence (PL) emission around 1.55 µm wavelength was observed for GaSb/ AlGaSb MQW structure at room temperature. Low dislocation density, high-quality GaSb epitaxial films on GaAs (001) substrates stripe-patterned with SiO 2 is also prepared by MOCVD with low temperature epitaxial lateral overgrowth (ELO) method [11]. 
Apart from the common binary, ternary and quaternary antimonides composed of Al, Ga, In, As and Sb, in order to extend the applications of antimonide-based materials in the far-infrared band (beyond 5 µm), to make it easier to adjust the material lattice constant to match the lattice constants of GaSb, InAs and other substrates, and to develop new functional materials, some ternary and quaternary antimonides containing a small fraction of N (~2%), P or Bi (~2%), as well as five-element antimonides such as AlGaInAsSb and GaInNAsSb, have recently also aroused concern and research interest [12][13][14]. T. Ashley et al. [12] found that the addition of a small percentage of nitrogen (~2%) to GaSb, InSb, and GaInSb significantly changes their energy band structures (the bandgap becomes smaller), which is very conducive to developing multi-band infrared detectors. Application of ABCS Materials The early interest in antimonide based compound semiconductors came from their application prospects in mid- and far-infrared (photon) detectors, but the first products to enter the market and reach large-scale industrial production were high-sensitivity InSb magnetoresistive Hall sensors. In 2004, Asahi Kasei Electronic (AKE) of Japan, which accounts for 70% of the global market share of Hall sensors, announced that its InSb Hall sensor output had reached more than 100 million units per month. These products are widely used in small brushless DC motors, automotive electronics, consumer electronics and other fields. InSb-based infrared detector arrays have gained a dominant market position in ground-based infrared applications and space instrumentation. In addition to these more mature products, antimonide materials have made great progress in recent years in third-generation infrared focal plane array detectors, mid- and far-infrared quantum cascade lasers, quantum dot lasers, ultra-high-speed, ultra-low-power and low-noise amplifiers, thermophotovoltaic devices and so on. The following describes some of the latest results and development trends in the application of ABCS materials. Microelectronic Devices and Integrated Circuits HEMT and HBT devices and circuits used in millimeter-wave radar and high-frequency digital communications have so far gone through a first generation based on GaAs-based materials and a second generation based on InP-based materials, and are currently developing into a third generation of HEMT and HBT devices and circuits based on antimonide based compound materials with ultra-high speed, ultra-low power consumption and low noise figure. After DARPA launched the ABCS program in 2001, Rockwell Scientific Company (RSC), starting in 2003, developed Ka-band (34-36 GHz), W-band (92-102 GHz) and X-band (8-12 GHz) low noise amplifier microwave monolithic integrated circuits (MMICs) and transmit/receive (T/R) integrated modules based on InAs/AlSb mHEMTs through its mature GaAs pHEMT technology platform. Currently, ABCS integrated circuits are regarded by DARPA as a core and key technology whose development is to be accelerated, and the short-term goal is to develop practical ABCS IC products with more than 5000 integrated transistors and a working voltage of about 0.5 V. The five-stage W-band MMIC LNA chip is shown in Figure 3 [15]. The compact 1.2 mm² five-stage W-band LNA using 0.2-µm gate length InAs/AlSb metamorphic HEMTs demonstrated a 3.9 dB noise figure at 94 GHz with an associated gain of 20.5 dB, f_T = 142 GHz, and f_max = 178 GHz.
The measured dc power dissipation of the ABCS LNA was only 6.0 mW which is less than onetenth the dc power dissipation of a typical equivalent InGaAs/AlGaAs/GaAs HEMT LNA. The ABCS HEMT structure [15] is grown using MBE on semi-insulating GaAs substrates using an AlSb buffer to accommodate the lattice mismatch and a strained InAlAs cap layer to provide a chemically stable surface layer and minimize gate leakage. Hall measurements show 2DEG of InAs channel concentration and mobility to be 3.7 × 10 12 cm -2 and 19,000 cm 2 /Vs at 295K. Growing the Sb-based HEMTs on Si substrate can combine the high mobility of antimonide based compound materials and excellent features of Si substrate with broad application prospects. M.K. Kwang et al. [16] reported their research results of growing AlGaSb/InAs HEMT structure on Si substrates. By using an AlGaSb buffer layer containing InSb quantum dots for dislocation termination, they can effectively terminate the propagation of micro-twin-induced structural defects into overlying layers, resulting in the low defect material grown on a largely mismatched substrate with a relatively thin buffer layer. Figure 4 shows the schematic of the Al-GaSb/InAs HEMT grown on Si substrate. The high quality AlGaSb/InAs HEMT materials grown on Si (001) substrate with the electron mobility of higher than 16000 cm 2 V −1 s −1 at room-temperature and a sheet density of 2.5 × 10 12 cm −2 were obtained by using this technique. It seems to provide a new way of integrating Sb-based devices and circuits on Si substrate. Infrared Detectors There has been more than 60 years in the study of the infrared photon detectors. The development of the first generation of infrared detectors began in the late forties of the last century, using one-dimensional linear arrays which were made of lead salt such as PbSe, and PbTe to detect the mid-infrared (MWIR) (3-5 m). The second generation infrared detector materials were mainly InSb and HgCdTe (MCT) for the two atmospheric IR windows of the mid-infrared band and the far-infrared band (LWIR) respectively [17]. The devices with the focal plane array structures of one dimension and two dimensions are currently very widely used and more mature products. In recent years, the third generation infrared detectors were researched and developed in many countries, their main features are multi-band infrared detection, high-resolution (high pixels and high frame rate), high operating temperatures, high spatial uniformity, high stability and low cost [18]. As it is difficult for the MCT to achieve large area uniformity and stability, the ABCS superlattice materials is generally considered as the preferred materials of the third-generation infrared detectors [6][7]. In principle, the bandgap of the ABCS superlattice materials can be tailored to cover the entire spectrum area of infrared detection by adjusting the thickness and composition of the ABCS materials [19]. In 2007, C.J. Hill et al. of the Jet Propulsion Laboratory [20] reported the GaSb/InAs type-II superlattice detectors grown on unintentional doped p-type GaSb (100) substrate designed for 2-5 μm and 8-12 μm bands infrared absorption. The LWIR detectors have detectivities as high as 8 × 10 10 Jones (cm.H 1/2 /W) with a differential resistance-area product (RoA) greater than 6 Ohm cm 2 at 80 K with a cutoff wavelength of approximately 12 μm. The measured internal quantum efficiency (QE i ) of these front-side illuminated devices is close to 30% in the 10-11 μm range. 
The MWIR detectors have detectivities as high as 8 × 10 13 Jones with a differential resistance-area product greater than 3 × 10 7 Ohm cm 2 at 80 K with a cutoff wavelength of approximately 3.7 μm. The measured internal quantum efficiency of these front-side illuminated MWIR devices is close to 40% in the 2-3 μm range at low temperature and increases to over 60% near room temperature. From the RoA and QE i indicators, we can see that the ABCS II-type superlattice mid-infrared detector will have a great potential for application of mid-infrared focal plane array devices of non-low-temperature environment. In addition, InAs/InGaSb type-II superlattice materials have also been widely concerned and in-depth research and they are considered as candidate materials for the third-generation infrared detectors. Two-color or dual-band infrared detectors have the ability of inhibiting the complex background and improving the target detection efficiency and can significantly improve the system performances. Dual-band LWIR/VLWIR type-II superlattice infrared detectors was reported by E. H. Aifer et al. [21]. The cut-off wavelengths of the two bands are 11.4μm and 17 μm respectively. But the quantum efficiency of the dual-band infrared detectors is too low (only 4-5%) compared to the single-band type II superlattice infrared detectors and the device structure needs to be further optimized. High quality GaSb based two-color 288 × 384 MWIR InAs/ GaSb type-II SLS FPAs was reported by M. Münzberg et al. [22] of the Fraunhofer Institute in Freiburg. First, the "blue channel" consisting of 330 periods of p-type of a 7.5 ML InAs/10 ML GaSb was deposited on the GaSb substrate for spectral selective detection in the 3.0-4.1 μm wavelength range. Next, the "red channel" consisting of 150 periods of a 9.5 ML InAs/10 ML GaSb superlattices was deposited for spectral selective detection in the 4.1-5.0 μm wavelength range. The thickness of the entire vertical pixel structure is only 4.5 μm, which significantly reduces the technological challenge compared to dual-band HgCdTe FPAs with a typical total layer thickness around 15 μm. Excellent thermal resolution with Noise Equivalent Temperature Difference (NETD) < 17 mK for the "red channel" and NETD < 30 mK for the "blue channel" has been achieved. Infrared Lasers Solid-infrared laser has important applications in gaseous environmental monitoring, chemical detection, bio-medical diagnosis, satellite remote sensing technology and so on. Antimonide based compound semiconductor with bandgap corresponding to just 2-5 m mid-infrared atomspheric window is an important material of mid-infrared lasers. Research and development of new highperformance antimonide-based infrared laser are very active research subjects in recent years and researchers have made a series of important research results such as AlGaAsSb/InGaAsSb multi-quantum well lasers [23], AlSb/InAs/InGaSb type-II quantum cascade lasers [24], "W"-shaped mid-infrared laser [25], InGaSb quantum dot lasers [26]. Antimonide-based interband cascade laser combining the advantages of quantum cascade (QC) laser and type-II quantum well interband laser has potential to achieve continuous output of high-power infrared laser at room temperature and is an international hot subject of research and development. Mid-infrared interband cascade laser made from InAs/Ga(In)Sb/AlSb muti-quantum wells was reported by C. J. Hill et al of Jet Propulsion Laboratory [27]. 
This laser structure was grown on p-GaSb(001) substrate by MBE as follows sequence: 0.3 μm GaSb buffer layer, 2-3 μm InAs/AlSb superlattice bottom claddings, multi-quantum well InAs/Ga(In)Sb /AlSb active layers ( be repeated 12-35 times), InAs/ AlSb superlattice top claddings and finally an n-type InAs cap layer. The total thickness of epitaxial layers was more than 8μm. A 15 μm × 1.5 mm laser made from sample J377 lased in cw mode up to 212 K with an emission wavelength near 3.3 m. Significant output power (over 30 mW/facet at 140K) has been obtained from the laser with relatively low injection currents and the laser was able to operate in pulsed mode up to 325 K. A 15 μm×1 mm laser made from sample J435 lased in cw mode at temperatures up to 165 K with a lasing wavelength of 5.43 μm at a current of 70.5 mA. The threshold current density increased from 43 A/cm 2 at 80 K to 470 A/cm 2 at 165 K. The laser was able to operate in pulsed mode up to 325 K with an emission wavelength of 5.7 μm. However, at temperatures higher than 230 K, the spectral linewidth is relatively broad with operation voltages higher than 10 V. GaInSb quantum dot surface-emitting laser (QD-VCSEL) operating in optical communication wavelength band of 1.3-1.55 μm with continuous emission at room temperature by either optical pumping or current injection was reported by researchers of Japan's National Institute of Information and Communication Technology [26]. This laser mainly consists of an antimonide-based quantum dot active layer and two AlAs/GaAs superlattice distributed Bragg reflectors (DBRs). With the development of antimonide-based quantum dots, they have overcome the technical difficulty of preparing a material that emits light in the entire optical communication wavelength bands of 1.3 to 1.55 μm on a GaAs substrate through conventional technologies. In particular, the obtained wavelength of 1.55 μm represents the world's longest emission wavelength of existent surface-emitting laser structures based on GaAs substrate. It has great significance for mass production of low-cost surfaceemitting lasers used in next-generation ultra-high-speed optical communication technology. High-power optically pumped semiconductor vertical external cavity surface emitting laser (VECSEL) operating at 2-μm wavelength was reported by A. Härkönen et al. [28]. The device material was grown on GaSb substrate by MBE and consisted of 15 Ga 0.78 In 0.22 Sb quantum-wells placed within a three-lambda GaSb cavity and grown on the top of an 18-pairs AlAsSb/GaSb Bragg reflector. When cooled down to 5℃ and using 790-nm diode laser for optical pumping, this laser emitted up to 1 W of optical power in a nearly diffraction-limited Gaussian beam demonstrating the high potential of antimonide material for VECSEL fabrication. LED devices based on InGaAsSb/AlGaAsSb multi-quantum well active region sandwiched between two AlAsSb/GaSb n-and pdoped Bragg mirrors structure has realized operation in continuous wave mode under electrical injection at room temperature and exhibited a bright emitting peak near 2.3 m with an external quantum efficiency of 0.16% at 34 A/cm 2 [29]. It shows that antimonides have enormous potential in the development of new high-power, electrical injection and continuous-wave emission mid-infrared optoelectronic devices. Thermophotovoltaic Cells Thermophotovoltaic cells are similar to the solar cells that utilize the thermal infrared radiation of a heated source to directly generate electric power. 
The current trend in TPV development is toward high-efficiency, low-cost, narrow-bandgap (0.6 eV or less) thermophotovoltaic materials and components applicable to mid- and low-temperature radiation sources (< 1500 ℃). Antimonide based compounds appear to be one of the leading material systems for thermophotovoltaic device applications, and the most studied TPV devices are GaSb-based InGaAsSb p-n cells fabricated by LPE, MOCVD, MBE and other methods. TPV cells based on InAsSbP grown on InAs substrates can have spectral responses in the 2.5-3.4 µm wavelength range, and this is a promising research direction with great potential. For further details, please refer to M. G. Mauk's review paper [8]. Conclusions As the first-candidate materials for fabricating third-generation large-scale focal-plane-array infrared (photon) detectors, integrated circuits with ultra-high speed and ultra-low power consumption, and new high-efficiency thermophotovoltaic devices, antimonide based compound semiconductor materials and their device applications are the subject of research and development that is in the ascendant, attracting increasingly widespread attention and research interest from researchers and institutions around the world. Compared with the currently more mature GaAs-based and InP-based material growth and device manufacturing processes, the growth technology of antimonide based microstructure materials such as heterojunctions, superlattice quantum wells and self-assembled quantum dots continues to face considerable difficulties and technical challenges, and the manufacturing processes of various antimonide devices are far from mature. Therefore, there are tremendous opportunities for R&D and innovation in this area. With the gradual suppression or elimination of the adverse factors affecting device performance in narrow bandgap antimonide based compounds (such as composition segregation, Auger recombination, surface recombination, carrier absorption, etc.) through continuous optimization of material growth techniques, improved device structure design and manufacturing processes, and other technologies, we believe that in the near future new types of high-performance antimonide devices and integrated circuits will find a wide range of important applications in infrared imaging, atmospheric environmental monitoring, biomedical diagnostics, multi-function digital radar systems, mobile communications, thermophotovoltaic power generation systems, and many other high-tech fields.
6,312.6
2010-09-09T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Experimental Study of the Air Side Performance of Fin-and-Tube Heat Exchanger with Different Fin Material in Dehumidifying Conditions the Air Side Performance of Fin-and-Tube Heat Exchanger with Different Fin Abstract: Under dehumidifying conditions, the condensed water will directly affect the heat transfer and resistance characteristics of a fin-and-tube heat exchanger. The geometrical form of condensed water on fin surfaces of three different fin materials (i.e., copper fin, aluminum fin, and aluminum fin with hydrophilic layer) in a fin-and-circular-tube heat exchanger was experimentally studied in this paper. The effect of the three different fin materials on heat transfer and friction performance of the heat exchanger was researched, too. The results show that the condensation state on the surface of copper fin and aluminum fin are dropwise condensation. The condensation state on the surface of the aluminum fin with the hydrophilic layer is film condensation. For the three different material fins, increasing the air velocity ( u a,in ) and relative humidity ( RH in ) of the inlet air can enhance the heat transfer of the heat exchanger. Friction factor ( f ) of the three different material fins decreases with the increase of u a,in , however, increases with the increase of RH in . At the same u a,in or RH in , Nusselt number ( Nu ) of the copper fin heat exchanger is the largest and Nu of the aluminum fin with hydrophilic layer is the smallest, f of the aluminum fin heat exchanger is the largest and f of the aluminum fin with hydrophilic layer is the smallest. Under the identical pumping power constrain, the comprehensive heat transfer performance of the copper fin heat exchanger is the best for the studied cases. Introduction Fin-and-tube heat exchanger (FTHE) is a common structural form of a heat exchanger. The heat transfer between the cold and hot fluid is carried out, the refrigerant is in the tube, and the air is outside the tube. This type of heat exchanger has the characteristics of simple structure, easy processing, and assembly and is widely used in the petrochemical industry, aviation, vehicle, power machinery, air conditioning, refrigeration, etc. [1][2][3][4]. When the fin surface temperature of the FTHE is lower than the dew point temperature of the airflow through the heat exchanger, the moisture in the air will be condensed on the fin surface. Thus, heat and mass transfer occur simultaneously during the process of dehumidification [5]. The condensed water will not only directly affect the characteristics of heat transfer and pressure drop of the FTHE but also provide a humid environment for the growth of bacteria and microorganisms, which will cause corrosion to the heat exchanger and bring a series of health problems. Therefore, it is necessary to study the heat transfer and resistance performances of the FTHE under the conditions of dehumidification, as well as the condensation and movement characteristics of the condensed water. The variable speed centrifugal fan can provide the circulating air for the experimental test system. The electrode humidifier was used to control the relative humidity of the inlet air. The temperature of the inlet air was monitored using a dry-bulb thermometer and the air was heated by an electric heater. The nozzles chamber based on ASHRAE41.2 standard [23] was used to measure the airflow rate. 
Two pressure difference transducers with a ±3.0 Pa precision were used to measure the pressure differences across the nozzles and the heat exchangers, respectively. The grading ring, which connects the four static pressure holes of the same section through an organic plastic tube and plays a role in stabilizing the pressure of the test section, was connected with the corresponding pressure difference transducer. The precisions of the dry and wet bulb temperature transducer and humidity transducer were ±0.2 °C and ±2%, respectively. The moist air handling system, which supplies air under the given conditions of relative humidity, inlet temperature, and inlet air velocity, comprises 7 parts: variable speed centrifugal fan, flow equalization board, grading ring, mixer, nozzles, electrode humidifier, and electric heater. The test section was the core of the whole experimental system. In order to ensure the uniformity and stability of the airflow, a flow equalization board, a grading ring, and a mixer were, respectively, installed at the inlet and outlet of the test section. The test heat exchangers were made of copper tubes and 3 fin materials (copper fin, aluminum fin, and aluminum fin with hydrophilic layer). The photos of test specimens with the 3 different fin materials are given in Figure 2. Their detailed configurations are tabulated in Table 1. The visual data acquisition system is the brain of the whole experimental test system, collecting all kinds of experimental data. In order to observe the geometry, condensation position, formation, growth, and movement characteristics of condensation water on the fin surface, 2 cameras were, respectively, placed on the side and top of the heat exchanger, as shown in Figure 3.
Two T-type thermocouples were used to measure the water temperature difference between the inlet and outlet of the heat exchanger. A total of 9 T-type thermocouples were arranged on the fin surface of the heat exchanger to measure the fin surface temperature. All the thermocouples were calibrated with an accuracy of ±0.2 °C. The low temperature circulating water system was used to control the inlet temperature of the circulating water entering the test heat exchangers.
Data Processing Method The data processing method used in this paper was the Threlkeld method [24] together with the ASHRAE 41.2 standard [23]. Heat transfer of the air side: Q_a = m_a (i_a,in − i_a,out). Heat transfer of the water side: Q_w = m_w c_p,w (T_w,out − T_w,in). The average heat transfer rate: Q_ave = (Q_a + Q_w)/2. The overall surface efficiency: η_0 = (A_tc + η_f,wet A_af)/A_0, where A_tc is the outside surface area of the tubes, A_af is the fin surface area, η_f,wet is the wet fin efficiency obtained according to the method of Liang et al. [25], and A_0 is the total air side surface area. The air side sensible heat transfer coefficient h_s is then obtained from the wet-surface reduction of the Threlkeld method [24]. The overall average Nu is defined by Nu = h_s d_e/λ. The equivalent diameter of the air side is defined by d_e = 4 A_min L_x/A_0, where d_e is the equivalent diameter of the air side, A_min is the minimum free flow area of the heat exchanger, and L_x is the fin length along the air flow direction. The friction factor f of the air side is evaluated from the measured core pressure drop according to Kays and London [26], with σ = A_min/A_fr and G = m_a/A_min, where A_fr is the frontal area of the heat exchanger and G is the mass flux of air at the minimum free flow area. Under the identical pumping power constraint, JF (i.e., the thermal performance factor) is used to evaluate the comprehensive heat transfer performance of the three different fin material cases (Equation (9)).
Figure 4 shows the formation and movement of condensate droplets on the copper fin surface. It can be seen that with the start-up of the fan, the air flows through the heat exchanger and the vapor in the air gradually condenses into tiny droplets on the fin surface near the circular tubes (as shown in Figure 4a,b). With the increase of ventilation time, the size and quantity of the condensed droplets increase gradually (as shown in Figure 4c). Moreover, with the increase of the volume of the condensation droplets, some adjacent droplets gradually merge into large droplets (as shown in Figure 4d). With continued condensation, the volume of the condensed droplets increases further.
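The data-reduction chain described above can be strung together in a few lines of code. The sketch below is illustrative only: the explicit Kays-London friction-factor expression and the JF = (Nu/Nu_ref)/(f/f_ref)^(1/3) form are assumed from the cited references rather than copied from this paper, h_s is treated as a measured input because the full wet-surface Threlkeld reduction is not reproduced, and every numerical value is a placeholder.

```python
# Minimal sketch of the air-side data reduction. Assumptions: the Kays-London
# friction factor and the JF definition are the commonly used forms, not equations
# copied from this paper; h_s is supplied directly; all numbers are placeholders.

CP_WATER = 4186.0      # J/(kg K), specific heat of water
LAMBDA_AIR = 0.026     # W/(m K), assumed thermal conductivity of moist air

def q_air(m_a, i_in, i_out):
    """Air-side heat transfer rate, W (moist-air enthalpies i in J/kg)."""
    return m_a * (i_in - i_out)

def q_water(m_w, t_w_out, t_w_in):
    """Water-side heat transfer rate, W."""
    return m_w * CP_WATER * (t_w_out - t_w_in)

def equivalent_diameter(a_min, l_x, a_0):
    """d_e = 4 * A_min * L_x / A_0."""
    return 4.0 * a_min * l_x / a_0

def nusselt(h_s, d_e):
    """Nu = h_s * d_e / lambda."""
    return h_s * d_e / LAMBDA_AIR

def friction_factor(a_min, a_0, a_fr, rho_in, rho_out, rho_mean, g_flux, dp):
    """Kays-London core friction factor (assumed reduced form, entrance/exit losses neglected)."""
    sigma = a_min / a_fr
    return (a_min / a_0) * (rho_mean / rho_in) * (
        2.0 * rho_in * dp / g_flux**2 - (1.0 + sigma**2) * (rho_in / rho_out - 1.0)
    )

def jf(nu, f, nu_ref, f_ref):
    """Identical-pumping-power criterion: JF = (Nu/Nu_ref) / (f/f_ref)^(1/3) (assumed form)."""
    return (nu / nu_ref) / (f / f_ref) ** (1.0 / 3.0)

if __name__ == "__main__":
    q_ave = 0.5 * (q_air(0.12, 58e3, 44e3) + q_water(0.10, 14.5, 12.0))
    d_e = equivalent_diameter(a_min=0.018, l_x=0.044, a_0=1.2)
    f = friction_factor(0.018, 1.2, 0.036, 1.19, 1.21, 1.20, g_flux=6.7, dp=45.0)
    print(f"Q_ave = {q_ave:.0f} W, Nu = {nusselt(45.0, d_e):.2f}, f = {f:.4f}")
    print(f"JF (copper vs aluminum reference, made-up inputs) = {jf(40.0, 0.08, 32.0, 0.10):.2f}")
```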
If the adhesion force of the condensate droplet to the fins is larger than the resultant force of the drag force produced by the airflow and the gravity of the condensate droplet, the condensed droplet will still adhere to the fin surface and gradually form a flat hemispherical shape. When the adhesion force of the condensate droplet to the fins is smaller than the resultant force of the drag force produced by the airflow and the gravity of the condensate droplet, the condensate droplet will slide on the fin surface. In the process of sliding, the condensate will merge with some other condensate droplets and discharge from the fin together (as shown in Figure 4e-h). Condensate water will continue to condense on the fin surface, then merge and discharge. The geometry form and movement characteristics of condensate droplets on the aluminum fin surfaces are shown in Figure 5. Comparing Figure 4 with Figure 5, we can see that the condensation processes of condensate droplets on the copper fin surface and aluminum fin surface are similar. They are both dropwise condensation, and they both experience the process of nucleation, growth, coalescence, and discharge from the fin surface. The difference is that the size of the condensate droplets on the copper fin surface is generally larger than that on the aluminum fin surface. Figure 6 shows the geometry form and movement characteristics of condensed water on the surface of the aluminum fin with a hydrophilic layer. It can be seen from Figure 6a-c that as the air flows through the heat exchanger, the condensate water on the surface of the aluminum fin with hydrophilic layer first appears near the circular tubes and presents film condensation. With the increase of ventilation time, the size of the tiny condensate film gradually increases and merges with the adjacent condensate film (as shown in Figure 6d,e). Moreover, the thickness of the condensate film increases gradually. When the resultant force of the drag force produced by the airflow and the gravity of the condensate film is larger than the adhesion force of the condensate film to the fins, the condensate film will slide down the fin surface. In the process of sliding down, it will gradually merge with other condensation films and discharge from the fin together (as shown in Figure 6f-h). Then, the next cycle of the condensation process will be carried out.
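The shedding criterion described above (adhesion versus airflow drag plus droplet weight) can be illustrated with a rough order-of-magnitude sketch. The Furmidge-type retention model, the hemispherical droplet shape, and all numerical values below are assumptions chosen only to show how a higher air velocity tips the balance for larger droplets; they are not measurements or a model taken from this study.

```python
import math

def drag_force(u_air, d_drop, rho_air=1.2, c_d=1.0):
    """Aerodynamic drag on a hemispherical droplet of base diameter d_drop."""
    frontal_area = 0.5 * math.pi * (d_drop / 2.0) ** 2
    return 0.5 * rho_air * u_air ** 2 * c_d * frontal_area

def weight_along_fin(d_drop, rho_water=1000.0, g=9.81):
    """Weight of a hemispherical droplet, taken to act along a vertical fin surface."""
    volume = (2.0 / 3.0) * math.pi * (d_drop / 2.0) ** 3
    return rho_water * g * volume

def retention_force(d_drop, gamma=0.072, cos_receding=0.77, cos_advancing=0.50):
    """Furmidge-type retention force: gamma * width * (cos(theta_r) - cos(theta_a))."""
    return gamma * d_drop * (cos_receding - cos_advancing)

d = 2.5e-3  # assumed droplet base diameter, m
for u in (0.5, 2.0, 4.0):  # representative air inlet velocities, m/s
    sheds = drag_force(u, d) + weight_along_fin(d) > retention_force(d)
    print(f"u_a,in = {u} m/s -> {'droplet sheds' if sheds else 'droplet stays'}")
```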
The Effect of u_a,in and RH_in on the Geometry Form of Condensate Water on the Fin Surfaces Figure 7 shows the geometry form of condensate water on the fin surfaces of the three different materials under different air inlet velocities (u_a,in) at τ = 300 s, RH_in = 50%, T_a,in = 27 °C, T_w,in = 12 °C. It can be found that the condensate droplets on the copper fin surface and the aluminum fin surface are in dropwise condensation, while the condensate water on the aluminum fin with the hydrophilic layer is in film condensation. In addition, we can see that with increasing u_a,in, the diameter of the condensate water distributed on the fin surfaces of the three different materials gradually decreases. Especially when the air inlet velocity reaches 4 m/s, there is almost no large-diameter condensate water on the fin surfaces.
This is because, with the increase of u_a,in, the drag force acting on the surface of the condensate water increases, the combined force of drag and gravity increases, and condensate water with a larger diameter is quickly discharged from the fin surface. The effect of air inlet relative humidity (RH_in) on the geometry form of condensate water on the fin surfaces of the three different materials at τ = 300 s, u_a,in = 2 m/s, T_a,in = 27 °C, T_w,in = 12 °C is given in Figure 8. It can be seen from Figure 8a,b that with the change of RH_in, the condensation and movement characteristics of the condensation droplets on the surfaces of the copper fin and the aluminum fin are similar. When RH_in = 40%, there are many condensate droplets on the surfaces of the copper fin and aluminum fin, and they are evenly distributed on the fin surfaces. When RH_in increases from 50% to 70%, the number of condensate droplets on the surfaces of the copper fin and aluminum fin decreases. Moreover, when RH_in reaches 80%, there are almost no condensate droplets on the surfaces of the copper fin and aluminum fin. This is because when RH_in is very large, the condensation rate is very fast, and the condensate droplets are quickly discharged from the fin surfaces. For the aluminum fin with a hydrophilic layer, as shown in Figure 8c, with the increase of RH_in the condensation film becomes thinner, and the distribution of the condensate film on the fin surface is more uniform. This also shows that with the increase of RH_in the condensation rate is accelerated; the thicker condensate film is quickly discharged from the surface of the aluminum fin with a hydrophilic layer.
The Effect of T_a,in on Nu and f of Heat Exchanger The differences of Nu between T_a,in = 35 °C and 27 °C are larger than those between T_a,in = 27 °C and 20 °C. When u_a,in ranges from 0.5 m/s to 4.0 m/s, compared with the case of T_a,in = 20 °C, Nu of T_a,in = 27 °C increases about 0.06-0.15 times, while Nu of T_a,in = 35 °C increases about 0.30-0.73 times compared with that of T_a,in = 27 °C. As shown in Figure 9b, f decreases with the increase of u_a,in. At the same u_a,in, f of T_a,in = 20 °C is the smallest, and f increases with increasing T_a,in. The differences of f between T_a,in = 35 °C and 27 °C are larger than those between T_a,in = 27 °C and 20 °C.
When u_a,in ranges from 0.5 m/s to 4.0 m/s, compared with the case of T_a,in = 20 °C, f of T_a,in = 27 °C increases about 0.02-0.05 times, and f of T_a,in = 35 °C increases about 0.34-0.54 times compared with that of T_a,in = 27 °C. This shows that increasing the air inlet temperature will not only improve the heat transfer performance of the heat exchanger but also increase the flow resistance. For the aluminum fin case, as shown in Figure 9c,d, and the aluminum fin with hydrophilic layer case, as shown in Figure 9e,f, similar trends of the effect of T_a,in on Nu and f can be observed; only the differences of Nu and f of the aluminum fin with a hydrophilic layer between T_a,in = 27 °C and 20 °C are not obvious.
The Effect of T_w,in on Nu and f of Heat Exchanger The effect of water inlet temperature T_w,in on Nu and f of the three different fin materials at RH_in = 50%, T_a,in = 27 °C and u_a,in = 0.5 m/s-4.0 m/s is shown in Figure 10. It can be seen from Figure 10a,b that Nu of the three different T_w,in (12 °C, 15 °C, and 18 °C) all increase with increasing u_a,in, while f all decrease with increasing u_a,in. Under the same air inlet velocity, Nu of T_w,in = 18 °C is the largest, and Nu gradually decreases with decreasing T_w,in. The differences of Nu between T_w,in = 15 °C and 12 °C are larger than those between T_w,in = 15 °C and 18 °C. When u_a,in ranges from 0.5 m/s to 4.0 m/s, compared with the case of T_w,in = 12 °C, Nu of T_w,in = 15 °C increases about 0.16-0.75 times, while Nu of T_w,in = 18 °C increases about 0.01-0.1 times compared with that of T_w,in = 15 °C. At the same u_a,in, f of T_w,in = 12 °C is the largest and f decreases with increasing T_w,in; however, the differences among the three different T_w,in are not obvious. This shows that the heat transfer performance of the copper fin heat exchanger can be improved by increasing the water inlet temperature. The reason is that when the water inlet temperature is low, the quantity of condensate generated on the fin surface increases; at the same inlet air velocity, the thickness of the condensate increases. The purpose of destroying the airflow boundary layer cannot be achieved, thus the heat transfer capacity is weakened and the pressure drop is increased. For the aluminum fin and aluminum fin with hydrophilic layer cases, similar trends of the effect of T_w,in on Nu and f are observed in Figure 10c-f.
The Effect of u_a,in on Nu and f of Heat Exchanger Figure 11 shows the effect of u_a,in on Nu and f of the three different fin materials at RH_in = 50%, T_a,in = 27 °C, T_w,in = 12 °C.
It can be found in Figure 11a that Nu of the three different fin materials all increase with the increase of u_a,in. This is because the boundary layer becomes thinner with the increase of u_a,in. The discharge speed of condensate water from the fin surface is accelerated, which further enhances the disturbance of the fluid, and thus the heat transfer capacity is enhanced. Compared with 0.5 m/s, Nu of the copper fin, aluminum fin, and aluminum fin with hydrophilic layer increase about 1.99, 1.30, and 2.62 times, respectively, at 4.0 m/s. Under the same air inlet velocity, Nu of the copper fins is larger than that of the aluminum fins, and Nu of the aluminum fins is larger than that of the aluminum fins with a hydrophilic layer. As shown in Figure 11b, f decreases with increasing u_a,in for all three fin materials. The reason is that with the increase of u_a,in, the velocity of the airflow increases, the increase of the kinetic energy of the airflow is greater than that of the pressure drop, and the condensate water can be discharged from the fin surface faster, which makes the air passage smoother and the pressure drop decreases. Compared with 0.5 m/s, f of the copper fin, aluminum fin, and aluminum fin with hydrophilic layer decrease about 34.46%, 38.81%, and 50.55%, respectively, at 4.0 m/s. At the same air inlet velocity, f of the aluminum fin is the largest, and f of the aluminum fin with hydrophilic layer is the smallest. This is because the condensation water produced on the fin surface does not gather on the fin surface after the hydrophilic layer is attached to the fin surfaces; the discharge speed of condensate water from the fin surface is accelerated, which reduces the pressure loss of the air side channel.
Compared with RHin = 40%, f of the copper fin, aluminum fin, and aluminum fin with hydrophilic layer increases by about 1.21, 1.38, and 1.03 times, respectively, at RHin = 80%. At the same RHin, f of the aluminum fin is the largest, and f of the aluminum fin with hydrophilic layer is the smallest.
The Effect of the Fin Material on JF
In order to comprehensively evaluate the heat transfer and resistance characteristics of the three different fin materials under dehumidifying conditions, the thermal performance factor JF shown in Equation (9) is used as the evaluation criterion. As given in Equation (9), Nu and f of the aluminum fin are used as the reference values. Figure 13 shows the effect of ua,in on JF at RHin = 50%, Ta,in = 27 °C, Tw,in = 12 °C. As shown in Figure 13, the range of JFC,f/Al,f was 1.24-1.53 at ua,in = 0.5-4 m/s; the values were all larger than 1.0. This shows that the comprehensive heat transfer performance of the copper-fin-and-circular-tube heat exchanger is better than that of the aluminum fin heat exchanger. Moreover, with the increase of ua,in, the comprehensive heat transfer advantage of the copper fin heat exchanger over the aluminum fin heat exchanger becomes more obvious. However, the range of JFAl,f,h,l/Al,f was 0.48-0.96 at ua,in = 0.5-4 m/s; the values were all smaller than 1.0. That is to say, the comprehensive heat transfer performance of the aluminum fin with hydrophilic layer heat exchanger was worse than that of the aluminum fin heat exchanger. With the increase of ua,in, the difference in comprehensive heat transfer capacity between the aluminum fin with hydrophilic layer heat exchanger and the aluminum fin heat exchanger becomes smaller.
The effect of RHin on JF at ua,in = 2 m/s, Ta,in = 27 °C, Tw,in = 12 °C is given in Figure 14. From Figure 14, we can see that the range of JFC,f/Al,f was 1.14-1.46 when RHin increased from 40% to 80%; JFC,f/Al,f was always larger than 1.0. Moreover, with the increase of RHin, JFC,f/Al,f gradually approached 1.0. The range of JFAl,f,h,l/Al,f was 0.45-0.88 at RHin = 40-80%; JFAl,f,h,l/Al,f was smaller than 1.0. Moreover, with the increase of RHin, JFAl,f,h,l/Al,f increased gradually and tended to be stable. This shows that with increasing RHin, the differences in the comprehensive heat transfer performance of the three heat exchangers with different fin materials become smaller and smaller.
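The paper's Equation (9) is not reproduced in this excerpt; a common definition of such a JF factor, which compares heat transfer at identical pumping power, is JF = (Nu/Nu_ref)/(f/f_ref)^(1/3). The short sketch below assumes that common form purely for illustration; it is not the study's verbatim definition, and the input numbers are invented placeholders, not measured values from this work.

```python
# Hedged sketch of a JF-style thermal performance factor, assuming the common
# definition JF = (Nu / Nu_ref) / (f / f_ref)**(1/3) with the aluminum fin as reference.
# The numbers are placeholders, not measured values from this study.

def jf_factor(nu, f, nu_ref, f_ref):
    """Thermal performance factor relative to a reference fin (identical pumping power)."""
    return (nu / nu_ref) / (f / f_ref) ** (1.0 / 3.0)

# Example: a fin with 40% higher Nu and 10% lower f than the aluminum reference.
print(jf_factor(nu=1.4, f=0.9, nu_ref=1.0, f_ref=1.0))   # ~1.45 > 1: better than the reference
```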
For the three heat exchangers with different fin materials, the above research shows that the comprehensive heat transfer performance of the copper fin heat exchanger was the best, and the comprehensive heat transfer performance of the aluminum fin with hydrophilic layer was the worst under dehumidifying conditions.
Conclusions
For a fin-and-circular-tube heat exchanger, we experimentally studied the effect of three different fin materials (copper fin, aluminum fin, and aluminum fin with hydrophilic layer) on heat transfer and resistance performance, and the geometric form and movement characteristics of the condensate water on the three fin surfaces were studied as well. JF was used to screen the optimum fin material at different ua,in and RHin. The main conclusions can be summarized as follows:
(1) The condensation state on the surfaces of the copper fin and the aluminum fin is dropwise condensation. They all experience the process of nucleation, growth, coalescence, and discharge from the fin surface. The condensation state on the surface of the aluminum fin with the hydrophilic layer is film condensation.
(2) Under the same air inlet velocity, Nu and f of Ta,in = 35 °C are the largest and gradually decrease with the decrease of Ta,in. Nu decreases and f increases with the decrease of Tw,in.
(3) At the same ua,in or RHin, for the three different fin materials, the heat transfer performance of the copper fin heat exchanger is the best, and the heat transfer performance of the aluminum fin with hydrophilic layer is the worst. f of the aluminum fin is the largest, and f of the aluminum fin with hydrophilic layer is the smallest.
(4) Under identical pumping power conditions, the comprehensive heat transfer performance of the copper fin heat exchanger is the best among the three different fin materials studied.
A Review of Basic Energy Reconstruction Techniques in Liquid Xenon and Argon Detectors for Dark Matter and Neutrino Physics Using NEST
Detectors based upon the noble elements, especially liquid xenon as well as liquid argon, as both single- and dual-phase types, require reconstruction of the energies of interacting particles, both in the field of direct detection of dark matter (WIMPs, axions, et al.) and in neutrino physics. Experimentalists, as well as theorists who reanalyze/reinterpret experimental data, have used a few different techniques over the past few decades. In this paper, we review techniques based solely on the primary scintillation channel, on the ionization or secondary channel available at non-zero drift electric fields, and combined techniques that include a simple linear combination and weighted averages, with a brief discussion of the application of profile likelihood, maximum likelihood, and machine learning. Comparing results for electron recoils (beta and gamma interactions) and nuclear recoils (primarily from neutrons) from the NEST simulation to available data, we confirm that combining all available information generates higher-precision means, lower widths (energy resolution), and more symmetric shapes (approximately Gaussian), especially at keV-scale energies, with the symmetry even greater when thresholding is addressed. Near thresholds, bias from upward fluctuations matters. For MeV-GeV scales, if only one channel is utilized, an ionization-only-based energy scale outperforms scintillation; channel combination remains beneficial. We discuss what major collaborations use.
Introduction
The noble elements, especially xenon (Xe) and argon (Ar) as liquids, have been instrumental in the field of dark matter (DM) direct detection, focused on identifying the missing ∼25% of the mass-energy content of the universe. They have also been key for neutrinos. In the former case, Xe [1][2][3] and Ar [4,5] are each used by distinct large collaborations, and used both to search for continuous spectra, such as the approximate falling exponential expected from the traditional WIMP (Weakly Interacting Massive Particle) [6], or monoenergetic peaks expected from dark photons or bosonic super-WIMPs [7]. In the latter case, argon is used in long- and short-baseline oscillation studies [8,9] and xenon in the search for neutrinoless double-beta decay, as either a liquid [10] or a gas [11]. In all of these cases, there is a clear need for high accuracy and high precision in energy reconstruction and good energy resolution in order to identify signals and backgrounds, and calibrate the detectors. Combining the data with high-fidelity Monte Carlo (MC) simulations can aid in this task. Xe and Ar produce scintillation light, and when an external electric field is applied then ionization electrons can be extracted as well. In a dual-phase time projection chamber (TPC) a gas stage converts the ionization into a secondary scintillation pulse [12], while a single-phase TPC reads out the charge directly [13,14]. Energy scales have been based in the past and present on the scintillation, on the ionization, and on their combination. In this work, we will be reviewing each of these methods, contrasting them and enumerating their strengths and weaknesses, in terms of the mean, median, and mode (e.g.
Gaussian peak centroid) of reconstructed energy best matching the true energy (MC truth energies and/or monoenergetic calibration peaks), the width, and the shape (symmetry). Multiple types of particles will be covered, addressing scattering from atomic electrons and nuclei, electron recoil (ER) and nuclear recoil (NR) respectively, and recoil energies from sub-keV up to GeV, from DM-WIMP-induced NR or coherent elastic neutrino nucleus scattering (CEνNS), to neutrino-induced ER. The summaries in each section make idealized recommendations, with detector-dependent caveats, for future DM/neutrino projects. General Examples of usages of each of the possible energy scale definitions are taken from empirical data wherever possible, but also compared to NEST (Noble Element Simulation Technique), which is also used by itself where data are lacking. NEST is a global, experiment-and detector-independent MC framework that allows simulation of scintillation and ionization yield averages and resolutions as functions of incoming or deposited energy, electric field, and interaction type [15]. The values of detector-specific parameters also need to be known in order to permit NEST to simulate the detectors effects for a specific experiment. The most important numbers are g 1 , g 2 , and the magnitudes of the drift and extraction electric fields, if applicable. g 1 and g 2 are respectively defined as the gains of the primary and secondary scintillation channels, the latter from ionization (again, only if applicable). g 1 is always between 0 and 1, and is an efficiency which combines the quantum efficiencies of one's photo-sensors with the geometric light collection efficiency. (It is also known as the photon detection efficiency.) It can include or exclude, depending on choice of units, the probability for certain photon detectors (especially in the vacuum ultraviolet or VUV) to produce more than one photoelectron (phe) for a single incoming photon [16]. Typical values across all experiments using Xe or Ar are ∼0.05-0.20 phe per photon [17][18][19][20][21][22][23]. g 2 is a combination of the electron extraction efficiency for a two-phase TPC with the gas gain, i.e. the number of photons produced per extracted electron times the photon detection efficiency in the gas phase [24]. A typical value is ∼10-30 phe per electron. Some photons, especially in the ultraviolet range, carry sufficient energy in order to generate more than one phe per incident photon, in the photocathodes of certain photon detectors, and stochastically, not consistently. The definition of the unit "photons detected" or "detected photons" or phd for short is simply phe (alternatively called PE by some authors) divided by (1+p 2e ), where p 2e is the probability of producing two (photo)electrons, typically 0.1-0.3, depending on the manufacturer, temperature, and individual phototube: phd = phe / (1 + p 2e ), while a similar translation applies to the g 1 . While taking this effect into account is ideal for the resolving of peaks and achieving the best possible background discrimination for one's final analyses, the convention for how the final results are plotted varies by experimental collaboration, making comparisons more difficult (XENON and DARWIN prefer phe or PE but LUX and LZ prefer phd). PIXeY's 2-phe probability was reported as 17.5% [25], implying that division by 1.175 would convert phe into phd. 
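As a concrete illustration of the unit bookkeeping just described, the short sketch below converts pulse areas from phe to phd and rescales g1 accordingly; the numbers are the PIXeY values quoted above, and the snippet is only a worked example, not NEST code.

```python
# Worked example of the phe -> phd conversion described above, using the PIXeY numbers
# quoted in the text (2-phe probability of 17.5%, g1 ~ 0.097 phe/photon).

P_2E = 0.175          # probability that one VUV photon yields two photoelectrons
G1_PHE = 0.097        # S1 gain in phe per photon

def phe_to_phd(area_phe):
    """Convert a pulse area in photoelectrons (phe) to detected photons (phd)."""
    return area_phe / (1.0 + P_2E)

g1_phd = G1_PHE / (1.0 + P_2E)     # the same rescaling applies to g1
print(phe_to_phd(100.0))           # 100 phe -> ~85.1 phd
print(round(g1_phd, 4))            # ~0.0826 phd per photon
```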
We define N_ph and N_e-, the original numbers of photons and electrons generated from an interaction site, determined by MC truth (integer) or empirical reconstruction (float). N_q is their sum. In data, or in an advanced MC including detector effects like finite detection efficiencies and not just mean yields: N_ph = S1_c/g_1 and N_e- = S2_c/g_2. S1 and S2 are the primary and secondary scintillation pulse areas. Subscript 'c' denotes correction, primarily for XYZ-position effects, as light collection efficiency may depend significantly on 3D position, especially in a large-scale detector [26]. The energy dependence is intrinsic to the element, unrelated to the position inside a particular detector. It is thus useful to also define two additional terms, L_y and Q_y, which respectively refer to the N_ph and N_e- per unit of energy. Our default guiding formula, at least for combined energy-scale reconstruction, is therefore:

E = (W_q / L)(N_ph + N_e-) = (W_q / L)(S1_c/g_1 + S2_c/g_2), (1)

where L is the Lindhard factor, quantifying the energy "lost" (if a detector cannot see phonons) into heat (atomic motion) instead of observable quanta. Also called "quenching" historically, that word is not precise: quanta are not being quenched per se but not being created ab initio, unlike in quenching by impurities or ionization density. It depends on E, making Eqn. 1 circular. Circularity can be avoided by a good MC model like NEST, and by quasi-monoenergetic NR data, as in the LUX D-D analysis [27]. For ER, L is taken to be 1.0, not implying no heat loss but an approximately constant loss as a function of E that can be "rolled into" the definition of the work function, essentially raising it. For neutrino experiments, E depositions are tracks, not point-like, and dE/dx (energy loss per unit of distance) is more relevant. The L quantifies the effectiveness of an initial NR at producing more elastic recoils as secondary interactions, not inelastic ones, i.e., atomic excitations/ionizations. It can be modeled in other/better ways than Lindhard's, which is expected to break down at low E's, as seen in Si/Ge [28]. It should not be confused with L_eff, used for S1 [17,[29][30][31] at zero field, and not accounting for 57Co 122 keV γ-rays not being representative of even ER yields, at different E's [32,33]. (L_eff was the ratio of NR to ER light yields.) L(E) can be thought of as merging the scintillation and ionization efficiencies (whereas L_eff is only the scintillation efficiency, again compared to ER's L_y). However, as with the word quenching, it is better to avoid this terminology; it can be confused with the efficiency set in DAQ and/or data analysis, by for example requiring a certain number of PMTs to fire to count an S1 (2-fold coincidence for example in LUX: see Fig. 4 later, ER, where this, g_1 << 1, and other effects lead to a threshold). The E within the L-factor is the true energy of a nucleus recoiling from a neutron (or ν coherently, or DM hopefully in the future). When quoting imperfect reconstructed energy using Eqn. 1, with 0 < L < 1, the standard unit is keV_nr, to contrast with the unit defined assuming L = 1 for ER earlier: keV_ee. The subscript 'q' indicates that the W value is not based on scintillation or ionization alone. W_q is effectively an average over microphysical processes producing excited/ionized atoms. We do not describe them, as they are within other works [34][35][36], nor establish NEST's accuracy here, using it for convenience where data are lacking. The physical details and how NEST captures them are beyond the scope of this paper.
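A minimal numerical sketch of the combined scale of Eqn. 1 follows; the gains and work function are the PIXeY-like values used later in the text, while the pulse areas are invented inputs chosen so the result lands on the 37Ar line. It is an illustration only, not NEST code.

```python
# Minimal sketch of the combined energy scale, Eqn. 1: E = (W_q / L) * (S1_c/g1 + S2_c/g2).
# Gains and W_q follow the PIXeY-like values used later in the text; the pulse areas are
# made-up example inputs. L = 1 for electron recoils.

W_Q = 13.5e-3     # keV per quantum
G1 = 0.1015       # phe per photon
G2 = 30.65        # phe per electron

def combined_energy_kev(s1c_phe, s2c_phe, lindhard=1.0):
    n_ph = s1c_phe / G1           # reconstructed photons
    n_e = s2c_phe / G2            # reconstructed electrons
    return W_Q * (n_ph + n_e) / lindhard

# 60 photons and 149 electrons (~209 quanta total) correspond to ~2.82 keV:
print(round(combined_energy_kev(60 * G1, 149 * G2), 2))   # 2.82
```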
In this section in particular we summarize established methods [32,37]. At zero field one only has access to the S1, and for specialized searches for (sub-)GeV DM [38,39] to only the S2, due to the low energies involved, as the ionization, i.e. charge yield, is typically larger and easier to detect, even as E goes to zero. The formula correspondingly morphs into one of these:

E = N_ph / L_y = S1_c / (g_1 L_y), (2)
E = N_e- / Q_y = S2_c / (g_2 Q_y), (3)

where L_y = N_ph/E and Q_y = N_e-/E are functions of the energy E and the drift electric field. Each differs for ER and NR; they are not fixed as W_q was, for ER. The challenge of E reconstruction increases via an inherent non-linearity: e.g. 2x the E does not mean 2x the light or charge, as it does with N_q at least for ER. When considering resolution, this causes deviation from the Poisson expectation of 1/√E improvement with higher E, which only applies for the combined scale (1/√N_q). A more general power law then works best [40][41][42]. In LXe (liquid xenon) the fact that L_y is not flat vs. E was first demonstrated by Obodovskii & Ospanov [43] for ER (implying, for fixed W, that ER Q_y is also not constant) and by Aprile et al. for NR Q_y [44,45], but it was not well known in the DM field for the ER background at least until much later publications [33,46]. In the case of experiments like EXO (0νββ decay) and DUNE (long-baseline ν's) using Xe/Ar, only the first half of (3) applies, as they measure Q directly in liquid using wire readout planes, instead of in gas. Also, for combined E (Eqn. 1) especially, S2 can be defined using a subset of photon sensors instead of all. A cylindrical TPC typically has two arrays, one at each end. The bottom one alone (subscript 'b' for differentiation from total S2) may be used for E reconstruction, as in [47], to adjust for light loss created by inoperative phototubes in the gas, or saturation. Lastly, g_1 can be vastly smaller in kilotonne-scale ν experiments, unlike the range quoted earlier for DM: they rely on Q_y. For the purposes of reproducibility we note that all of the work presented here uses the latest stable NEST release at the time of writing: Version 2.2.0 [48], for which the default detector parameter file is designed to mimic the first science run of LUX [49], but which we modify as needed to reproduce other experiments, focusing again on g_1, g_2, and the drift electric field as the three most salient inputs.
NEST-specific Details
The main assumptions utilized in NEST to reproduce efficiencies are briefly explained below.
1. A Fano-like factor sets the variation in total quanta, with a binomial distribution for differentiating excitons/ions (inelastic scattering). In LXe it is not sub-Poissonian as in GXe, as experimentally verified in each phase [50].
2. Recombination fluctuations [51][52][53]: the "slosh" of N_ph vs. N_e- caused by the recombination probability for ionization e-'s, which may either recombine to make more S1, or escape to make S2. These are worse (i.e. larger) than expected naïvely (non-binomial). This is distinct from the Fano factor, and canceled by combined E.
The above are general, but the below depend on the detector, all combining into the final efficiency.
1. g_1 is used to define a binomial distribution for the S1 photon detection efficiency with <S1> = g_1 N_ph.
2. For an S1 to be above the trigger threshold, most experiments require that O(0.1) phe must be observed in N PMTs for N-fold coincidence, where usually N = 2 or 3, within a coincidence window of 50-150 ns, requiring a basic timing model for singlet and triplet states and photon propagation time.
The 2 or 3-fold coincidence prevents triggering on photo-sensor dark counts. Baseline noise O(0.1) phe is also modeled. 3. The pulse areas of single phe are assumed to follow a truncated (negative phe are not possible) Gaussian distribution, with O(10%) resolution differing by photon-sensor, but a detector-wide average is used for NEST, as an approximation. If single phe detection efficiency is reported, it can be used instead of a threshold applied to a Gaussian random number generator, thus taking non-Gaussianity and other detector-specific idiosyncrasies into account. This and others numbers are collected from arXiv, publications, and theses. 4. Drifting, diffusing electrons are removed via an exponential electron lifetime, and are assumed to follow a binomial extraction efficiency, while the number of photons produced per surviving extracted electron depends on the gas density, electroluminescence electric field, and gas gap size, in a 2-phase TPC [54,55]. 5. A special Fano factor, typically also > 1 for S2 accounts for non-Poissonian behavior, due for example to grid wire sagging. 2-4 is normal [56]. S2 photons experience a similar binomial photon detection efficiency as S1 photons, moving along from photons to phe (for S2, from electrons to photons to phe). A raw, total S2 threshold O(100) phe removes the lowest-energy events, to avoid few-electron backgrounds [57]. 6. S1 and S2 XYZ variation is simulated in NEST if provided in analytical form, then realistically corrected back out, based upon finite position resolution, not MC truth positions, thus allowing not only for correct means but correct widths. (Z or drift correction applies only to S1, handled for S2 by the electron lifetime.) 7. N ph falls while N e− rises with drift field in anti-correlated fashion, and fields can be non-uniform. A final step of noise is applied as an empirical smearing to the S1 and S2 pulse areas to match realistic experimental data; however, the above lists capture the vast majority of fluctuations that can shift the low-energy efficiency higher or lower. Additional noise is typically at a level of O(1%) from unknown sources, but likely due to position correction imperfection and other analysis-specific effects, discussed in [24,36,58]. This is uncorrelated noise, as it is applied separately to S1 and S2, while the variation induced by the Fano factor (Step 1 in first list) is correlated "noise" due to raising S1 and S2 together, and the recombination fluctuations constitute anti-correlated noise, as in raising S1 they lower S2, and vice versa. All fluctuations can move events above and/or below nominal thresholds. All steps above are uncertain especially when it comes to a simulation software package such as NEST. It would be infeasible to discuss and address all uncertainties. Thus, we will mention only one that is often largest. The N ex and N i , followed by N ph and N e− , produced at Step (1.) in the first list above, depend on particle, energy, field, density via temperature and pressure, and phase, and at the lowest energies (sub-keV especially, and in particular for NRs) there is a non-negligible systematic uncertainty from the values assumed for the average yields (L y and Q y ) which are always the first step in NEST modeling. For ER Q y , the discrepancy between different data sets and models is as much as 20% below 1 keV, as illustrated by contrasting NEST [58] with PIXeY 37 Ar data [59], x-rays in LUX [60], and 3 H in XENON100 [61]. 
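Before moving on, the first few detector-specific steps above can be made concrete with a toy Monte Carlo of the S1 chain alone (binomial photon detection, truncated-Gaussian single-phe areas, N-fold PMT coincidence). This is a deliberately simplified illustration of the kinds of effects listed, not the actual NEST implementation, and every parameter value is invented.

```python
# Toy MC of the S1 detection chain sketched above: binomial detection with mean g1*N_ph,
# a truncated-Gaussian single-phe area per detected photon, and a 2-fold PMT coincidence
# requirement. Simplified illustration only; parameter values are invented.
import random

G1 = 0.10          # photon detection efficiency
N_PMTS = 60        # number of photosensors (hypothetical)
SPE_SIGMA = 0.35   # fractional width of the single-phe area distribution
COINC = 2          # required N-fold coincidence

def detect_s1(n_photons):
    hit_pmts, area = [], 0.0
    for _ in range(n_photons):
        if random.random() < G1:                            # binomial photon detection
            hit_pmts.append(random.randrange(N_PMTS))       # which PMT fired
            area += max(0.0, random.gauss(1.0, SPE_SIGMA))  # truncated-Gaussian phe area
    if len(set(hit_pmts)) < COINC:
        return None                                         # fails the coincidence cut
    return area

trials = [detect_s1(30) for _ in range(20000)]              # ~30 photons, a few-keV ER
passed = [a for a in trials if a is not None]
print(len(passed) / len(trials))                            # S1 efficiency at this pulse size
print(sum(passed) / len(passed))                            # mean area of survivors (biased high)
```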
L y = 0 in NEST for sub-keV ER but not in XENON models, which thus predict higher efficiency [42]. The 50% efficiency point typically is "E threshold." For the purposes of this review paper, what is of greatest importance is that the assumptions, of the default NEST yields models, are not varied, when comparing different energy reconstruction techniques, so that at the very least a robust comparison can be made among them. That being said, we stress the accuracy of NEST in reproducing efficiency even for publications where the authors have no access to the original data is high (not only LUX [62]). XENON1T is an excellent example [58]. Results Xe is examined first (ER then NR) followed by Ar. For Xe, the present-day relevant experiments seeking DM for whom this review is most pertinent include: LZ, XENON, PandaX. The use of LAr is divided, present and past, across DEAP, CLEAN, ArDM, and DarkSide (first two single-phase and zero field, latter two dual-phase and non-zero electric field) on the DM front, and DUNE, MicroBooNE, ArgoNeuT, ICARUS, plus many others, studying neutrinos. Enriched LXe is used by nEXO, a TPC but only one phase, and NEXT (GXe) for the hunt for 0νββ decays. α's and heavier ions different from the medium, with properties like additional quenching, modify the E reconstruction formulae, but we will only focus on basic ER and NR; other recoil types are already covered elsewhere [15,63]. Low Energy: keV-scale (Dark Matter Background, Signal) Basic Recon of Mono-E Peaks The term electron recoil or ER refers to interactions with the electron cloud, such as from beta emissions and the Compton scattering or photoabsorption of gamma rays. In a WIMP search, ER is the primary background, but in a more general DM (or exotic physics) search a monoenergetic peak or even continuous ER spectrum can be the signal [64]. To illustrate the differences among the S1-only, S2-only or Q-only, and combined-E scales for reconstruction the first example is the lowest-energy ER calibration peak available at time of writing for LXe where we have S1 and S2 still: the electron capture decay of 37 Ar at 2.82 keV. It is also a timely example due to the recent XENON1T ER result [42,58]. While this E is not often in the regime of efficiency drop, we do address it in this section. Our source of data here was the seminal work led by McKinsey et al. at Yale/Berkeley [59] who constructed a small-scale calibration chamber, PIXeY, with g 1 = 0.097 ± 0.007 phe/photon and g 2 = (0.78 ± 0.05) * 36.88 = 28.77 ± 1.85 phe/e − (extraction efficiency times the single-e − pulse area). To replicate PIXeY precisely with NEST in Fig. 1, we use g 1 = 0.1015, g 2 = 30.65, only 0.6-1.0σ higher. We studied only 198 V/cm. W q was taken to be 13.5 eV, between Dahl [36] and neriX [65] measurements. In all plots, keV ee means reconstructed energy (ee standing for electron equivalent, for when NR is translated into this scale) as opposed to MC truth, or a known single E from a peak, reported in keV sans subscript. While lower-E calibrations exist than 2.8 keV, this is the lowest where S1 and S2 are identified separately. Others become S2-only [60]. Fig. 1a demonstrates that the S1-only scale, used primarily on XENON10 [29] and continuing on in a subset of XENON100 analyses [66], performs the poorest, with an energy resolution σ/µ of 38.63% for data (38.59 NEST) in red. (Both XENON10/100 had similar g 1 's to PIXeY, but slightly lower, at ∼0.07.) 
Values in parentheses following values from data are now for NEST, on to subsequent pages, in the style of A (B). An S1 basis is further complicated by non-linearity. Next, in Fig. 1b the S2-only scale is plotted, leading to an 11.65% (12.20) resolution. While we see later that the combined scale, by including all available information, is typically best at higher energies, this is no longer the case at keV-scale energies, as (c) indicates. The combined resolution is 15.84% (15.51). This is caused by data at these E's being comprised of upward S1 fluctuations above the nominal S1 threshold, due to finite g_1. An experimenter essentially measures the right tail of S1s. Nevertheless, it is possible to mitigate S1 effects, still include S1 in the E calculation, and obtain the best possible resolution. One way is to fit a skew Gaussian (parameters explained in [53] and [58]):

f(E) = A exp[-(E - ξ)² / (2ω²)] {1 + erf[α(E - ξ) / (ω√2)]}. (4)

While this has been done for bins in S2 vs. S1 [53] and once for combined E [58], it is most effective for S1: see the improvement in Fig. 1a (blue vs. red). A skew fit, while including an error function erf and similar to the equation used in [59] that should account for triggering, still misses some points, as keV-level S1 becomes non-Gaussian and non-symmetric due to the trigger efficiency dropping below 100%. In plot (c) however, the reduced χ² drops from O(100) for both data and NEST to 2.6 for the 1.5-5 keV_ee range, still too high due to features in the data not captured even by a skew-normal fit, but more sensible. Asymmetries arise from both thresholding bias [27] and microphysics [53,58].
More Advanced Energy Reconstruction Strategies, from keV to MeV Scales, and Resolution
A superior mitigation strategy can be found upon the realization that the optimal weights for the S1 and S2 pulse areas are no longer simply g_1 and g_2 at the O(keV) scale. We can recast this statement in terms of the (n)EXO-style combined-energy scale first developed by Conti [51]: instead of using a g_1 and g_2, it defines what is known as an angle of anti-correlation for summing S1 plus Q or S2. As energy decreases the angle becomes energy-dependent instead of being fixed as tan⁻¹(g_2/g_1) [40], and thus no longer respects "perfect" anti-correlation of quanta, with N_ph and N_e- always summing to N_q = E/W_q. Note there is no evidence of anti-correlation breakdown, at least in LXe above 1 keV: this effect is caused by the inability to reconstruct N_ph well in data due to dropping S1 efficiency, as first suggested by Szydagis (2012) and first publicly applied in the PIXeY 37Ar paper [59]:

E' = w_2(E) W_q [ w_1(E) S1_c/g_1 + S2_c/g_2 ]. (5)

Parameter w_1 decreases the weight assigned to S1 for low E's, countering thresholding; one could increase the S2 weight, but this is equivalent. Multiplying S1 by one weight and S2 by another would be redundant. Instead, one weight is applied to S1, and a second weight w_2 to the formula as a whole, to bring the average of the energy being reconstructed back to the correct mean after the shift caused by adding w_1, while simultaneously correcting for any efficiency bias near the S1 and/or S2 thresholds. It is not technically independent then, thus written in Eqn. 5 as a function of w_1, which itself is a function of energy. To avoid a circular reference, Eqn. 1 can be used to determine its energy dependence, for use in 5, and the process can be iterative, defining E" after E', etc. Knowledge of the proper weights a priori is achievable via MC.
Figure 1. Reconstruction of the 2.8224 keV 37Ar peak in the PIXeY detector [59] compared to NEST.
Real data are always hollow black circles, NEST MC green squares. Gaussian fits are in red, skew Gaussian (better fit) in blue, with the fits to data in long dash and NEST in short (indistinguishable due to NEST's fidelity). The number of events in data is 7.4 × 10^4, while 9.3 × 10^5 in the MC, after all cuts (i.e. all thresholds). (a.) Original, non-linear S1-only E scale used for LXe. Bins with non-zero counts begin very suddenly at the left in both NEST and data due to a cut-off created by triggering only on 3-fold PMT coincidence, and other threshold requirements. The results are highly skewed, driving the asymmetry within the combined-energy fit later. (b.) S2-only, which is quite symmetric, so that Gaussian and skew-Gaussian fits overlap. (c.) The combined-energy scale in common use now for LXe DM detectors. Gaussian fits in red are clearly poorer compared to skew fits, diverging from the histogram in the cases of NEST and data alike. (d.) An optimized combination for energy, as done on PIXeY. Both NEST and data, and Gaussian and skew-Gaussian fits alike, have all become indistinguishable at this stage. The best-fit mean energy has shifted from 3.03 keV in (c) to 2.82 keV for (d). This improvement in precision is also reflected in the sum of the mean quanta from (a) and (b) matching (d), but not (c), which is too high. The skew parameter α decreases from 3 for S1 only (a) to 2 for the combined scale in (c) and 1 in (b,d).
Without an MC like NEST tuned on earlier calibration data, it is possible to empirically determine the two weights by calibrating an experiment with monoenergetic peaks (e- capture, x-ray, gamma-ray). In the case of PIXeY's 37Ar measurement, the values which minimize the width of the Ar peak in reconstructed energy (optimum resolution) are w_1 = 0.19 and w_2 = 1.38 (NEST: 0.23, 1.35). Looking back at Fig. 1d, the very positive effect of applying Eqn. 5 is evident: the resolution is 10.60% (10.68%), close to S2-only (b) but a factor of ∼1.5 improvement over (c), the more traditional "plain" combined scale. Moreover, the Gaussian centroid has dropped from 3.03 keV_ee (again, higher because of triggering on high-S1 fluctuations) to 2.83 (2.81), much closer to the true value of 2.82, while the asymmetry in the histogram has nearly vanished, with the best-fit skew Gaussian possessing a positive (right-hand) skewness parameter α < 1 (compared to ∼2-2.5 in (c), which used Eqn. 1). While it is not possible to remove all of the skew, as some is intrinsic (from the physics of recombination probability) [53], in panel (d) the Gaussian (red) and skew (blue) fits are nearly indistinguishable (unlike in plot 1c). This technique, essentially equivalent to inverse-variance weighting, is not widespread for DM, although found in [25]. From its inception, LUX has relied on Eqn. 1, i.e. plot 1c's method, but a PLR (Profile Likelihood Ratio) analysis effectively takes into account energy bias by relying on NEST to produce non-analytic 2D PDFs for both background and signal, relying on MC truth energy converted to S1 and S2, not reconstructed energy [67]. XENON's own MCs for its PLR perform the same function, taking MC truth and/or calibrations as input, and outputting (S1, S2) PDFs mimicking data [68]. Many possible enhancements exist, like Maximum Likelihood [69] and Machine Learning [70]. These can take more than S1 and S2 into account, e.g.
bypassing calibrated 3D corrections, feeding raw S1 & S2 plus positions into an artificial neural network (ANN) or Boosted Decision Tree (BDT) from which the XYZ dependence emerges, given sufficient training. Ernst and Carrera suggest that it is possible to determine how much is sufficient [71]. Some examples of additional training variables include the E-dependent S1 pulse shape, usable given sufficient statistics [54,72], as well as the breakdown of pulse areas into top and bottom arrays, capitalizing on anti-correlation of top vs. bottom light similar to that of S1 vs. S2, as used early on in LUX [73], good for detectors sans S2 (0 V/cm). However, idiosyncrasies of individual detectors make it difficult to review such methods, which still rely on S1 and S2 as the two most important variables regardless. ANNs/BDTs are best trained by a combination of S1s and S2s from data and MC, per analysis, and have so far never been applied at < 1 MeV. Given these difficulties, this paper focuses only on the energy reconstruction scales which can be formulated purely analytically with relative ease: again, S1-only (Eqn. 2), S2-only (Eqn. 3), combined (Eqn. 1), and the so-called optimized scale E' (Eqn. 5). To broaden applicability to more detectors, we also consider variants. Fig. 2a shows S1-only in red and S2-only in cyan. The dashed red line illustrates how the S1 scale is poorer (the effect propagates into combined energy) when one does not account for the so-called "2-phe effect," mentioned earlier [16]. Accounting for this by dividing it out improves the resolution, as the additional phe do not provide any new information on the original number of photons produced, N_ph, even though they may be useful in lowering the threshold and increasing the sensitivity to lower-mass DM [74]. The solid red NEST line demonstrates the improvement achieved in doing this, plus attempting to reconstruct the integer numbers of photons hitting the photomultiplier tubes (PMTs), instead of only reporting S1 pulse areas. This technique is known as photon counting or spike counting [62] and is easy/feasible only at low E. More importantly than the slight improvement in energy resolution, at only the lowest energies (< 10 keV), this reduces the leakage of background ER events into the WIMP (NR) region in S2 vs. S1 [18]. All points plotted are defined as raw σ/µ, but comparable results can be achieved with Gaussian/skew-fit centroids or medians. Cyan lines represent S2, which as seen before can be better than S1 or even combined-E scales, but only at O(1) keV. It is at least comparable, which is important given the historical use of only S1 for E even in 2-phase TPCs with both channels, and continued usage for gas-less regions like the skin vetoes of LZ and XENONnT.
Figure 2. (a.) Energy resolution versus energy [52]: only combined resolutions are published (black), but S1 (red) and S2 (cyan) scales are included from NEST to show they are mutually comparable at O(10-1000) keV. The lowest-E point is optimized as done in PIXeY, so anomalously good (low). NEST combined scale in blue (below S2-only in cyan except for 37Ar) and optimized in gold. w_1 in the optimal scale varies from 0.26 to 1.0 from 2.8 to 662 keV, while w_2 falls from 1.45 to 1.00. (Lines are guides, not fits.) (b.) Data from neriX (Columbia's small-scale calibration chamber, like PIXeY) as black points vs. measured recoil energies from Compton scatters [65], compared to NEST in blue vs. true energy known from MC, showing a consistent deviation for both in reconstructed energies for a combined but non-optimized scale, due to threshold bias (the weights as used for NEST in gold in (a), which correct for this, are purposely not applied in blue in (b), to show this effect).
For S2, it is critical to understand the limitations, especially in low-mass-DM experiments like LBECA, where it is the only channel used. It can suffer from poor drift e- lifetime (impurities), incomplete extraction at the liquid-gas interface due to fields being too low, or both. The former effect (same as the latter) is shown by the dashed vs. solid cyan. Even when lifetime and extraction are known, along with the single-e- pulse size, low values lead to high S2 area variation, although the effect is muted above 50 keV. The 41.55 keV "hiccups" are 83mKr, a combination of 2 decays, at 9.4 and 32.1 keV [75]. An inverse square root is not a good fit to the S2 or S1 alone and is not included; this is due to incomplete accounting of quanta, and also to linear noise flattening the curves. Such noise impacts higher E's more and is defined in detail in Section 4.3 of [36] and Section III.A of [58]. As seen in the ~straight lines in log-log, a power law (often plus a constant for noise) is reasonable for combined E (blue), but below ∼5 keV even E^(-1/2) breaks down, unless the power is free, preventing extrapolation from 10-100+ keV down to where the behavior may even be non-analytic, and E resolution ill-defined. The NEST combined scale is in solid blue in Fig. 2a, compared to LUX data from its first science run as black circles. LUX used the same combined scale, which again is clearly advantageous compared to single-channel methods, with g_1 = 0.117 ± 0.003 and g_2 = 12.1 ± 0.9 [18,76]. The exception in the plot is the first point (2.8 keV), which should be compared to the optimized NEST in gold. The dashed and dotted blue lines are examples of further resolution improvements. First, by removing linear noise, in MC, as doing so is certainly not as easy in data (noise proportional to the S1 and S2 areas representing, e.g., imperfect position corrections). This is modeled as only 1.4% for LUX. It is typically O(1%) [36,58]. Second, respectively, by simulating a uniform electric field, when the real field varied with position, though not significantly in LUX's first WIMP search run (180 V/cm average); the (much larger) variation was taken into account in the second [77]. The last few highest-energy points in data (black) do not overlap with MC due to not fully accounting for PMT saturation (S2 clipping) above 500 keV. The advantages of the optimal scale (gold) disappear rapidly above 10 keV, comparing gold to blue. There is a benefit to this. It means that at sufficiently high energies the rotating/re-weighting of a peak in S1 and S2 to find the optimal resolution results in a derivation of g_1 and g_2 [49]. This abrogates the need for multiple peaks, arranged in S2 vs. S1 (means) in what is known as the Doke plot. Such a plot is a straight line due to the anti-correlation between N_ph and N_e- for ER [78], shown to work across at least four orders of magnitude in energy, and different fields [36,62,79,80]. Alternatively, if studies of anti-correlation both within peaks and across peaks for a given analysis in a certain experiment are possible, then these two methods for deriving the S1 and S2 gains can serve as cross-checks, on top of NEST comparisons and known-spectrum reproduction such as that from tritium betas [76].
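The re-weighting just described can be illustrated with a brute-force scan: for each trial w_1, w_2 is set to re-center the peak on the known line energy, and the pair giving the narrowest peak is kept. This is only a sketch of the procedure described in the text, with the (s1, s2) calibration pairs assumed to exist already (e.g. from a 37Ar data set or a NEST MC); it is not code from NEST or PIXeY.

```python
# Brute-force sketch of choosing (w1, w2) for the optimized scale of Eqn. 5:
# w1 de-weights S1, w2 re-centers the estimator on the known calibration energy,
# and the pair minimizing sigma/mu is kept. The s1/s2 lists (corrected pulse areas
# in phe) are assumed to come from a monoenergetic calibration or a NEST MC.
import statistics

W_Q, G1, G2 = 13.5e-3, 0.1015, 30.65
E_LINE = 2.82   # keV, e.g. the 37Ar electron-capture peak

def optimal_weights(s1_list, s2_list):
    best = None
    for step in range(1, 101):
        w1 = step / 100.0
        raw = [W_Q * (w1 * s1 / G1 + s2 / G2) for s1, s2 in zip(s1_list, s2_list)]
        w2 = E_LINE / statistics.mean(raw)       # bring the average back to the line energy
        recon = [w2 * e for e in raw]
        res = statistics.stdev(recon) / statistics.mean(recon)
        if best is None or res < best[0]:
            best = (res, w1, w2)
    return best   # (sigma/mu, w1, w2)
```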
As explained in the caption of Fig. 2, the S1 weight w 1 is decreasing toward 0 with decreasing E's while w 2 increases to compensate, but with increasing E's both w's asymptote to 1.0, as expected. In the second plot plane (b) we focus on the mean instead of the width (resolution) demonstrating explicitly with both data from Compton scattering in black [65] and our NEST MCs (despite significant uncertainties) that the thresholding effects raise the reconstructed energy significantly above the true value. This phenomenon becomes most prevalent in the sub-keV regime, however, where resolution becomes ill-defined due to individual photon and electron quanta becoming resolvable, generating a multiple-peak structure [27]. The peaks become not just skewed-Gaussian, but entirely non-Gaussian, or even non-analytic [81]. For this reason, Fig. 2a stops at 2 keV on the x-axis. But Fig. 2b continues below that, focused on mean (i.e. ratio of reconstructed over known energy) not width however. We switch to neriX from LUX here, as LUX does not have a relevant plot published with which we can compare, and did not have direct, quasi-monoenergetic measurements below 1 keV. (Nevertheless, due to similar g 1 's and g 2 's in these and most experiments the results should be quite general.) Data uncertainties are driven in x (E axis) by finite resolution in the Ge detector used for independent energy determination, and in y by uncertainties in neriX's g 1 and g 2 (0.105 ± 0.003 phe/photon, 16.06 +0.9 −1.0 phe/e − ). NEST uncertainties are large only at sub-keV, and are not statistical due to large simulations. Instead they are due to the uncertainty on how to define a central value, using a mean or median or attempted Gaussian fit, due to the multi-peak effect mentioned (photon and e − discretization). At low energy the benefits of not just a combined but optimally-combined (re-weighted) scale are significant: not just a built-in erasure due to w 2 of the growing discrepancy between the reconstructed and real values of energy illustrated effectively in Fig. 2b (see also neriX's Fig. 7 [65]) but a reduction in width that was 50% (relative) for 37 Ar in both PIXeY and LUX. Lastly, as illustrated in Fig. 1d the shape becomes more symmetric at individual energies, with the skew nearly disappearing. While this matters more for monoenergetic ER peak searches (for axion-like particles or ALPs, bosonic WIMPs, et al.) a benefit for a WIMP search, or for any analysis in fact, is better determination of the g 1 and g 2 through tighter windowing around single-energy calibration lines in 2D, in S2 vs. S1, which can occur iteratively, reducing the errors on g 1 and g 2 (5-10% typical) that often drive systematic uncertainties on both yield analyses and final physics results, especially in terms of S1 and S2 thresholds [18,58,76]. The only disadvantage is loss of the field-independence a combined scale usually has, as the yields change with field. As most experiments run at only one electric field however, that is not a true drawback. High Energy: The MeV Scale (Neutrinoless Double-Beta Decay) Far from the hard thresholds, we turn our attention next to 0νββ decay. Searches for this require great resolution for good background discrimination, at Q ββ = 2.458 MeV for 136 Xe specifically [82]. While resolution naturally improves with E due to the greater numbers of quanta produced, effects such as PMT saturation and different noise sources, including position-dependent effects, become more prominent. 
While machine learning can help a great deal, as done on EXO-200, especially with detector-specific idiosyncrasies [70], the analytic optimum scale becomes degenerate with combined E already above 0.1 MeV, as illustrated earlier. Table 1 reviews the resolutions achieved in actual experiments: projections of future performance, e.g. for LZ [83], are not included, in order to showcase only what has been demonstrated, or extrapolated with σ/E ∝ 1/√E (+ optional constant). In EXO, in its references cited below, a richer formulation was adopted: σ² = a + bE + cE². It considers more detector noise sources. For a = c = 0, it simplifies to σ² = bE, or σ/E = √(b/E). The reader must be cautioned not to conclude that one technology (XENON is two-phase, but the others single) is better, as fiducial mass and total exposure time, position resolution, overall background rate in the region of interest and self-shielding, and enrichment in 136Xe come into play. The XENON series of detectors has focused primarily on DM, not 0νββ, and so the detectors were not enriched. Their intrinsically better resolutions are due not necessarily to the addition of a gas stage (converting Q into S2) but to the use of PMTs with single-photon resolution, while EXO used silicon photomultipliers (SiPMs) with poorer single-phe resolution (not needed at MeV energies), required for their lower radioactivity that is superior to even the custom PMTs for LZ/LUX and XENON [93,[102][103][104][105][106]. Lastly, we do not explore GXe, for which there is much data: NEXT has achieved better resolution than reported here, 0.1-0.3% (0.30-0.74 FWHM), due to a lower total-quanta Fano factor and lower recombination fluctuations [107,108], both accounted for in NEST [48]. (The question of high mass vs. superior resolution is beyond our scope.) What we can do is perform detailed MC scans to predict the best potential LXe resolution, for 2458 keV, but under differing conditions; real experiments measure it at a nearby E, e.g. 2615 keV from 208Tl. Validations are not overlaid, but we point to successes in predicting the resolution for XENON1T [41,42,58] and postdicting LUX. To narrow the enormous parameter space, infinite e- lifetime and 100% extraction efficiency (or an all-LXe detector like n/EXO) are assumed, with 0% noise in the Q readout from the grids, but a varying S1 noise level and a wide E-field range for completeness. Second, an assumption is made of a fixed medium g_1 = 0.1, a conservative baseline based on what is possible now, while the NEST systematic uncertainty will be shown, stemming mainly from different assumptions for the Fano factor. Fig. 3a is the resolution dependence on g_1, from a pessimistic scenario of 1% all the way up to 100%. Higher E-field is only better at low g_1, due to NEST's strictly empirical Fano factor F_q increasing with field. It is unphysical, but needed to match data claimed to be de-noised or low in noise [109]. This is important, given the rush to achieve higher field for better resolution [110], similar to the rush in the DM field for lower leakage of ER backgrounds into the NR regime [53,56,111].
Figure 3. (b.) The middle-of-the-road default beta model was selected, with g_1 and field frozen at 0.1 and 500 V/cm, and the resolution as a function of the assumed Fano factor is presented, for different levels of noise in the S1 (primary scintillation) signal. As at left, combined E is used, except for the dashed line (S1) and dotted (S2 or Q), for comparison to more detectors, as close to the maximum (worst) possible.
While L_y and Q_y are changing
with E-field thus changing combined resolution, higher g 1 is naturally better, at least for non-zero field and combined E, due to more photons being collected. XENON1T, with its g 1 ≈ 0.13 and field 120 V/cm, appears to have achieved close to the best possible for those values [41] at 0.8% (Table 1), also best overall. NEST's theory prediction of 0.7% as best possible for XENON's g's and field rises slightly to match at 0.8 when applying XENON's e − lifetime and e − extraction efficiency [58]. In Fig. 3b is resolution's dependence on Fano factor, from a theoretical value [112] (sub-Poissonian, 1) up to the largest experimental one, of Conti et al. [51]. This governs standard deviation: F q N q . While the best-fit (world data) NEST value, for 2.5 MeV and 500 V/cm, is 14 by default, we treat the Fano factor as free in Fig. 3b, extending down to 0.2 due to NEST possibly absorbing detector-specific noises by mistake into the Fano value (even if this is not likely due to matching data across decades [41,109]). All the various EXO-200 results can be explained, as being between ∼2 and 5% noise levels in the detection of scintillation. The dotted cyan line holding steady at 4.4-4.5% explains precisely the seminal result with Q-only resolution of 4.5% in Table 1. It is not affected much by Fano factor, since when one considers only a single channel the recombination fluctuations, which move quanta between the scintillation and ionization signals, dominate [29,36,53]. It is very similar at different fields, since in the minimally ionizing regime (as ER energy approaches and exceeds 1 MeV) Q y and L y asymptote to constants [24,32,113]. The dashed line is too high to explain EXO's S1-only values, but S1 resolution improves with more light at lower E-field (higher E-field increases charge, at expense of light). Regardless of whether it is achieved through ramping up g 1 (not unrealistic for the future given 100% QE devices [114] and high-quality reflectors [115]) or F q dropping to zero (it is not tuneable, at least not without doping of Xe with other materials; only feasible if the intrinsic value is already below Poisson i.e. 1) the best possible value for resolution appears to be 0.4%, a "basement" created by binomial fluctuations in excitation and ionization, combined with non-binomial recombination fluctuations [52,79] if there is no noise (versus fixed 2% example in pane b). Given realistic detector conditions a more reasonable estimate of the minimum possible here is 0.6% (comparable to gas). Even if the total number of quanta is higher than assumed here, due to W being lower, as recently measured by EXO-200, 11.5 ± 0.5 eV [110] as opposed to 13.7 ± 0.2 eV (Dahl [36]) or 13.4 ± 0.4 eV (Goetzke, neriX [65]) or 13.8 ± 0.9 eV (Doke [78]), then this basement is unlikely to change significantly, as even F q = 0 was explored above. This discrepancy, observed in the light not charge channel and thus unlikely to be due to charge amp calibration differences, may be due to SiPMs being more sensitive (relative to PMTs) to wavelengths other than VUV, such as infrared (IR scintillation has been observed in LAr [116]). While at 2 nd order the fluctuation models in NEST would have to be revised, to 1 st order everything discussed here would remain the same, but with g 1 and g 2 estimates decreasing by ∼15%. 
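A quick back-of-the-envelope check of the numbers above: with W_q ≈ 13.7 eV, a 2458 keV deposit makes N_q ≈ 1.8 × 10^5 quanta, and if the combined-scale width is dominated by a Fano-like term, σ = √(F_q N_q), then F_q = 14 gives σ/E ≈ 0.9% while F_q = 1 gives ≈ 0.2%, bracketing the 0.4-0.8% figures discussed. The snippet below just evaluates that estimate; it deliberately ignores recombination fluctuations and all detector noise.

```python
# Back-of-the-envelope Fano-limited resolution at the 136Xe Q-value, ignoring
# recombination fluctuations and detector noise: sigma/E = sqrt(F_q * N_q) / N_q.
import math

W_Q_KEV = 13.7e-3          # keV per quantum
E_KEV = 2458.0             # 136Xe Q-value in keV

n_q = E_KEV / W_Q_KEV      # ~1.8e5 quanta
for fano in (1.0, 14.0):
    res = math.sqrt(fano * n_q) / n_q
    print(f"F_q = {fano:>4}: sigma/E = {100 * res:.2f}%")
# F_q = 1.0 -> ~0.24%;  F_q = 14.0 -> ~0.88%
```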
Energy Reconstruction and Efficiencies for a Continuous Spectrum
In searching for either 0νββ decay or dark matter, a continuous-spectrum background can obscure any potential signal of beta decay or dark matter, respectively, in addition to peaks in the background, or calibration peaks [90,[117][118][119][120]. In our final ER analysis, we return to the optimal combined E, but consider a non-monoenergetic spectrum. A new challenge appears, as cross-contamination, e.g., between bins in a histogram, makes it difficult to separate the upward fluctuations of lower E's from simultaneous downward fluctuations from the higher bins. This difficulty is exacerbated by the fact that the energy resolution is not fixed, so this is not a flat or linear effect with which it is easy to deal analytically. The resolution of course degrades as energy goes to zero. Because the light and charge yields depend on energy, an additional problem is the fact that a spectrum flat in (combined) energy is not flat in S1 nor in S2, and not all background spectra are going to be flat. That being said, this is approximately true at low energies for DM searches in LXe TPCs, after the contributions from all background radioisotopes are summed together, from Compton plateaus, neutrinos, and/or beta spectra, as in [24,42,53,58]. A naïve optimization attempt for a uniform spectrum that allows both w_1 and w_2 to vary distorts it more than normal. Better results are obtained by fixing w_1, unlike before. LUX is the example again: it is NEST's default. A generic flat ER background is simulated from 0-20 keV in real energy (it should not be taken to represent the true backgrounds found within [49,118]). An excellent analytic fit for the detection efficiency vs. E for a continuous spectrum is a modified Gompertz function. Fig. 4a shows how the optimal scale, in gold again, is closer to the correct energies known from the NEST MC in grey, relative to the traditional combined energy in dark blue again. 3H (tritium) betas are not flat in energy, but their LUX trigger efficiency should be similar enough to a flat spectrum, so it is included in black to verify NEST's reasonableness [76]. A 3H beta spectrum terminates (Q_β) at an 18.6 keV endpoint, but finite resolution causes the fluctuations around that energy. In gold, w_1 is held constant at 1.0, but w_2 = 1.025 - e^(-E/0.35), basically adjusting for the growing deviation between reconstructed and real energies as showcased in Fig. 2b, using an S-shaped curve asymptoting close to 1.0 (without a shrinking w_1, this weight w_2 has the opposite trend compared to Fig. 2a). E in keV can stem from either the traditional combined scale or MC truth, which in an actual experiment can be validated with a series of monoenergetic calibration peaks. The χ²/DOF = 2.65 for blue, compared directly bin by bin with no fit to grey at 0.5-17.5 keV, versus 1.68 for gold. Bin widths are 0.1 keV. The 50% fall-off point at high E's (a perfect step function in true E in grey) shifts from 19.5 to 20 keV, and is thus more accurate in gold compared to blue, forcing the endpoint smearing to be symmetric. Fig. 4b reiterates once more how distorted S1/S2-only E scales can be, in red/cyan. While Fig. 1 did show S2-only can be best for low-E peaks, this is not the case for a continuum. Both S1/S2-only are non-uniform, despite the underlying spectrum being flat and despite accounting for non-linearities in the underlying S1 and S2 yields, fitting quadratic not linear functions vs. (true) E.
The flat top should be 0.005 as in the truth spectrum, due to normalization: bin width over range = (0.1 keV)/(20-0 keV) = 0.005. S1 is pulse area not spike, but in units of phd [16,19,62]. Despite this, there are unnatural peaks at both low and high E, caused by threshold and the maximum E simulated (20 keV) respectively. , and re-weighted optimal reconstruction (gold). Gold outperforms blue even visually, correcting underestimation of efficiency sub-keV, plus overestimation near 1.5 keV (see the text for quantitative goodness of fit comparisons). Tritiated methane (CH 3 T) is the black points, for validation against actual data. Its efficiency curve is markedly similar despite a non-flat spectrum. As it is continuous, the E is still only reconstructed, not known to infinite precision as in MCs, although an attempt to empirically account for smearing was made by LUX [76,79]. The solid black line is the Gompertz fit, superior to a more traditional er f , dashed, while the inset zooms on low energies for clarity, with a linear x-axis and log y now. (b.) The true E's repeated in grey, but now compared to S1 (red) and S2-only (cyan) scales (Eqns. 2, 3), with the former possessing unnatural peaks at left and right, and the latter grossly underestimating efficiency at keV scales. The default public β model (v2.2.0) is used here but comparable results occur with γ-rays. Liquid Xenon Nuclear Recoil (Dark Matter Signal, and Boron-8 Background) Pivoting toward nuclear recoil, the first hurdle is that for this type of recoil the total number of quanta per unit energy is not fixed, unlike what was shown for ER (first by Doke et al. at 1 MeV [78], and confirmed for energies of greater interest to DM experiments by Dahl [36]). This would seem to imply there is no anti-correlation between photons and electrons for NR and thus no benefit to using a combined energy scale for them. However, this does not appear to be the case, as the sum of quanta in actual data is well-fit by a power law, when combining all world data ever collected. This power law simply replaces the flat line (or general linear function but with no y-offset, if not dividing by energy: twice the energy means twice the N q ) that works so well for ER, given a fixed work function W q averaged over both flavors of quantum. This fit is related simply to the L(E) from Eqn. 1. The mixing of units is possible, and depending upon whether one uses an S1-only, S2-only, or combined-energy scale, for NR the unit of keV ee can mean the beta, Compton, or photoabsorption event equivalent energy at which NR produces the same amount of S1, amount of S2, or the sum. In none of these three cases however is the conversion a simple constant or linear function, and can differ wildly, from keV ee being 2-10x smaller than keV nr , depending also on energy [15]. The reason it is smaller: it takes less energy for ER to produce the same number of quanta compared to NR for the same energy deposit (L < 1). For the combined-energy scale it is at least field-independent, as L should not depend on field, only the recoil energy, and the E resolution may be best via combination of information from both S1 and S2 again. (For additional clarity: some authors refer to L as f n [35].) Fig. 5 has all data available on N ph + N e− , from which we extract L. The plots suggest combined E may still be beneficial even for NR, due to anti-correlation. 
The evidence is indirect, but strong: >300 data points from >20 experiments across nearly 2 decades were combined, respecting the systematics of each (typically driven by how well g 1 and g 2 were known). Remarkably, within uncertainty, at least at the 2-sigma level, the vast majority of the hundreds of data points lie along the same straight line in log-log space. Publications reporting continuous lines stemming from e.g. a modified NEST version or their own custom MCs are not ignored; instead, a few sample points at discrete energies are plotted. Fig. 5's plot style is similar to that pioneered by Sorensen & Dahl in [35] as well as in later works. Uncovering direct evidence of anti-correlation in NR is challenging: monoenergetic neutron (n) sources exist, but internal monoenergetic NR sources do not. The common sources used, such as AmBe and 252Cf, produce n's at energies of O(1-10) MeV, which lead to Xe recoils of O(1-100) keV for calibrating DM detectors. While n's can be a background [136] they are sub-dominant compared to ER [137,138] and appear more commonly as the stand-in for DM used to calibrate WIMP searches, being neutral particles that primarily scatter elastically, as DM should do [6,139]. The closest one can get to separable recoil energies comes from determination of neutron angle and double scattering, as done in LUX using a D-D neutron generator, but even in this most optimal situation the direct testing of anti-correlation was not possible: Q y and L y were reported at energies that did not match, and the former data included x-axis (E) error bars driven by uncertainty in angle, while the energy error for (single-scatter) L y data stemmed in turn from uncertainty in the Q y used to establish an in situ S2-only E scale [27,140]. As this is not a paper presenting any novel models (in NEST for instance) we do not focus on the breakdown of the total quanta into N ph or L y and N e− or Q y , which numerous papers already discuss at great length [141]. We also pass over the Migdal Effect, which could increase the light and/or charge at keV scales due to additional ER from an initial NR; there is no evidence of its existence at present, but it is predicted to describe the behavior of electrons "left over" after a nucleus recoils [142]. Additional phenomena like it are secondary when combining all individual channels. Returning our attention to Figs. 5a-c, the empirical total number of quanta is described by the following power law (black line):

N q = αE^β, where α = 11.43 ± 0.13 and β = 1.068 ± 0.003, so that N q /E = αE^(β−1) ∼ 11 quanta/keV. (6)

Not only can summed data from [27,30,36,38,68,121,122,125-129,131-135] be described simply with a power-law fit, but the exponent is near 1, implying a nearly fixed number of quanta/keV, ∼11.5 (cf. 73 for ER, coming from 1/W), i.e. a fixed L of ∼0.157. The conversion into L, re-arranging Eqn. 1, is:

L(E) = W q N q /E = W q αE^(β−1) ≈ L(10 keV) + (dL/dE)|_(10 keV) (E − 10 keV), (7)

where W q = 13.7 × 10^−3 keV is assumed here with no uncertainty as just an example, while the last term is the Taylor expansion to first order at 10 keV. In Figs. 5d-e, the fit to N q from data is compared to several models, starting with the traditional Lindhard approach [139,143]:

L(E) = k g(ε) / (1 + k g(ε)), with g(ε) = 3ε^0.15 + 0.7ε^0.6 + ε, k = 0.133 Z^(2/3) A^(−1/2), ε = 11.5 E[keV] Z^(−7/3), (8)

where Z = 54 and A = 131.293 (average) for Xe, and ε is called "reduced energy." It allows dimensionless L comparison across different elements. A Taylor expansion for Eqn. 8 at 10 keV is L(E) = 0.1790 + 0.0028E. At this E, the value of L from the expansion of Eqn. 7 is close, < 5% lower than the one for Lindhard (8). Eqn.
7's linear approximation is lower for both of its terms, but this can be explained by bi-excitonic and/or Penning quenching, which increases with higher dE/dx, which occurs with increasing E's for keV NR. Where the E's did not match up (sometimes even within the same data set), a simple power law was used to spline (only interpolate not extrapolate) the numbers of photons first to add them to electrons. E-field does not cause measurable differences. (a.) Only directly measured yields using angular measurements to determine E [27,30,121,122]. These are handled as more "trustworthy" in the community due to being quasi-monoenergetic analyses, and thus given the most weight in the fit even if there were fewer points. One simple power law appears to describe the data points across over three orders of magnitude in E, depicted as the black line, dashed or solid, within every plot pane. For (a), an uncertainty band was included (2σ for clear visualization in green, not 1σ). (b.) Dahl's thesis data from the Xed detector taken from broad-spectrum shape spline fits ( 252 Cf) [36]. *Corr(ected) on the y-axis refers to correcting the data in our global meta-re-analysis for effects often not known at the time of data-taking, such as the 2-phe effect, or the extraction efficiency being much less than 100% than the data-takers had originally estimated [16,123,124]. The former can lower the L y measurements, depending on the analysis technique, while the latter raises Q y data points typically. The x-axis was corrected in the sense of energy estimates updated with a more modern combined-E scale whenever possible. (c.) More (indirect) measurements from continuous source bands, from XENON, ZEPLIN, and PandaX [35,68,80,[125][126][127][128][129][130]. Errors in data used whenever reported. For PandaX, L y was not provided, only Q y . The former was estimated using their AmBe bands, by the authors of this work. (d. plus e.) A review of models [27,48,[131][132][133][134][135]]. (f.) Low-E zoom of models, with data included, as larger-size points. Colliding pairs of excitons may lead to de-excitation, and thus less S1 [131]. Some fraction of excitation may also be converted into ionization, adding to Q. Using the values in [30] or [133] it is even possible to show that NEST, similar to the data fit, follows Lindhard closely above O(1 keV) as long as additional quenching is added, for photons (see NR analysis note [15]). This is remarkable given that it was not expected that Lindhard would work even that high in energy [28,131,132]. Yet data, once summed, exhibit no significant deviation from the Lindhard model, down to sub-keV even. At higher energy, the work of Hitachi [45,131], who incorporates quenching, may be more appropriate however; it can be approximated using Lindhard but with k = 0.11 (blue). Fig. 5d also includes the fit to LUX D-D n gun data alone (green) which agrees with standard Lindhard at the 1σ level between 0.7-74 keV, given k = 0.174 ± 0.006, after accounting for extra quenching, separately [27,62]. Sorensen 2015 in yellow in Fig. 5e assumes standard-k Lindhard at high E's, and an atomic-physics motivated roll-off below 1 keV, with a free parameter q (in same units as ) we chose to best match all contemporary data, 1.1 × 10 −5 or 10.5 eV. Only the min (blue) and max (green) k (which is uncertain and can range from 0.1-0.2 according to [28,35,134]) easiest to justify are depicted (Fig. 5d). k = 0.166 would be between, as would k = 0.14 from Lenardo et al., best fit to data as of 2014 [144]. 
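For reference, the power-law fit of Eqn. 6 and the traditional Lindhard prediction of Eqn. 8 can be evaluated side by side. The snippet below is only a direct transcription of those two formulas (with k adjustable, defaulting to 0.133 Z^(2/3) A^(−1/2) ≈ 0.166 for xenon); it does not reproduce NEST's additional sigmoidal corrections discussed next.

```python
import numpy as np

W_q  = 13.7e-3          # keV per quantum, as assumed in the text
Z, A = 54, 131.293      # xenon

def lindhard_L(E_keV, k=None):
    """Traditional Lindhard quenching factor (Eqn. 8)."""
    if k is None:
        k = 0.133 * Z ** (2.0 / 3.0) / np.sqrt(A)   # ~0.166 for Xe
    eps = 11.5 * E_keV * Z ** (-7.0 / 3.0)          # "reduced energy"
    g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps
    return k * g / (1.0 + k * g)

def nq_fit(E_keV, alpha=11.43, beta=1.068):
    """Empirical global power-law fit for total NR quanta (Eqn. 6)."""
    return alpha * E_keV ** beta

for E in (0.5, 1.0, 3.0, 10.0, 30.0, 100.0):
    print(f"E = {E:6.1f} keVnr: fit {nq_fit(E):7.1f} quanta "
          f"(L = {W_q * nq_fit(E) / E:.3f}), "
          f"Lindhard {lindhard_L(E) * E / W_q:7.1f} quanta "
          f"(L = {lindhard_L(E):.3f})")
```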
Any NEST version ≥2.0.1 has a power fit of 11E 1.1 (red, Fig. 5d) matching Eqn. 6 with rounding, but more importantly different because of considering not just raw yields but log 10 (S2 c /S1 c ) band means, and giving greater weight to the lowest-E data, of greatest importance to a DM search [139]. (Previously 12.6E 1.05 , before all statistical and systematic uncertainties were properly considered.) The dash purple line of Fig. 5d is not just the power law but the full NEST that also since v2.0.1 to today includes sigmoidal corrections, separate in both L y and Q y , to allow for modeling of NR violating strict macroscopic anti-correlation. These cause the N ph and N e− to realistically drop below the power law, conservatively accounting for non-Lindhard-like behavior below a few keV, and better matching data in this regime such as from the D-D calibration of LUX's second science run [122]. Nevertheless, all models agree very well at low energies, all of them extrapolating at 200 eV to 1-2 quanta (see inset). Below 0.2 keV, NEST conservatively assumes 0 quanta, justifiable from first principles [15,28,48,132]. For greater readability several models have been omitted from Figs. 5d-e, which do not include the work of Sarkis, Aguilar-Arevalo, and D'Olivo [28] nor of Wang and Mei [141]. This is not to say their approaches are not valuable, but the former is markedly similar to Sorensen [132] and to the complete NEST equation with a sub-keV roll-off despite starting with different formulae, while the latter is fit to the LUX data, and thus practically redundant with dashed green. In Fig. 5f we zoom in on energies < 5 keV only, of greatest interest not only for lower-mass WIMPs, but any rest mass-energy WIMP due to the falling-exponential nature of NR from WIMPs, in the Standard Halo Model [6]. The cut-off in quanta may not be as sharp as expected due to the Migdal Effect adding more light at least [145]. Below 3 keV 8 B solar neutrino CEνNS is of great significance as well, interesting in its own right for the first detection of the recently measured coherent scattering [146,147] but from solar neutrinos, and as a background to next-generation LXe-based WIMP experiments [1,148]. In following the ER section, the next step should be a discussion of the energy resolution as a function of energy. This has never been published for the combined scale with NR, however, to the best of our knowledge, except for plotting of the S1-and S2-only scales only once each as far as we know, by Plante [31,149] (but only at zero field, in a dissertation) and Verbus [27,140] respectively. In both situations these were only quasi-monoenergetic reconstructions, tagging neutron scatter angle with a Ge detector in the former case, as typical for all L e f f measurements, and in situ in the same Xe volume (LUX) in the latter. For NEST comparisons to both of these, see Figure 3 bottom in the LZ simulation paper [150]. We also point to potential examples of threshold bias "lifting" E's above correct values (or, Eddington bias, for continuous spectra), as manifesting in light yields [112,151,152] seen much earlier for ER in Fig. 2b but it is handled in later works [140,153] and thus not shown explicitly pre-correction. Figure 6. (a.) Combined (blue) and optimal (yellow) scales for a 50 GeV standard WIMP spectrum, the MC truth for which is grey. Above 2 keV, bins are omitted in log fashion for clarity. LUX detector parameters (i.e. 
Run03, first WIMP search) used as earlier for similar ER plot, again for illustrative purposes (∼180 V/cm). Yellow corrects for sub-keV efficiency underestimation, and an overestimation at several keV, comparing yellow to grey and blue to grey. (These effects can change from detector to detector, with signal shape.) While a distinct functional form vis-à-vis ER, w 2 is again S-shaped. Better results may be possible with lower w 1 , kept fixed at 1.0 again here for simplicity. In actual data, with ER and NR mixed, backgrounds with potential signals, it is impossible to know a priori if applying keV nr or ee is more appropriate by event. Lacking truly monoenergetic peaks, continuous spectra present an opportunity still for contrasting the S1, S2, combined, and optimized-E scales. Those from all known n sources are highly dependent on geometry, however; thus NEST is insufficient: a full-fledged Geant4 MC [154,155] would be required to model detectors. Instead, an example of a 50 GeV/c 2 mass WIMP will be shown with an unrealistically large cross-section of interaction (1 pb, and 1 kg-day exposure). While artificial, this illustrative example is valid given underlying assumptions for not just total quanta but individual photons and electrons, and resolution in both channels, verified by NEST comparison with data elsewhere [32,34,144] and real experiments will be able to test the ideas presented in future NR calibrations, in XENONnT and LZ. Fig. 6 is a repeat of Fig. 4, but is for NR. The WIMP spectrum is more relevant than a uniform-E spectrum would be, as even for a massive (multi-TeV mass-scale) WIMP it is a poor approximation to what is inherently an exponential spectrum, falling as energy increases. The drop off on the left is of course caused as before by a combination of separate threshold effects that remove the lowest-energy S1s and S2s. The optimal scale shows improvement again over combined, but by itself combined E is not dissimilar from S1-or S2-only for NR, because the lower-quantum/area signals are dominated by detector specifics such as (binomial or Poisson) light collection efficiency. S1 only is most common, used in every experiment starting with the seminal XENON10 result [29] due not to a better energy reconstruction, but signal (NR) versus background (ER) discrimination [36,53]. S2 only may be as good for discrimination if not better however, according to the work of Arisaka, Ghag, Beltrame, et al. [156]. Liquid Xenon Summaries The key points of the LXe (ER) section (with detector-specific caveats) are: • A combined scale reconstructs monoenergetic ER peaks best for DM/ν projects, but below 3 keV at least this is not true according to an 37 Ar study with S2-only best (outperforming S1 as well) if e − lifetime is high. A combination can be established with two numbers, S1 and S2 gains, leading to a 1D histogram (XENON/LUX style) or equivalently a 2D rotation angle (Conti/EXO method). • An optimal weighting of S1 and S2 can result in better resolution than simple combined energy, down to O(1) keV even, and mitigation of threshold bias and skew. Higher, the best resolution occurs when the weights applied to the S1 and S2 are 1/g 1 and 1/g 2 , but machine learning is likely to outperform analytic methods, if more parameters (beyond S1, S2) are considered. 
• For neutrinoless double-beta decay, O(1%) resolution has been achieved in the relevant energy range by a multitude of different experiments and technologies, while the best feasible may be 0.4-0.6%, in liquid, which may be limited by a Fano factor (often confused with recombination fluctuations) that is higher than in gas. But no one experiment has yet reached its full potential. • For a continuous ER spectrum, the combined scale is a clear winner over S1-only and S2-only alike, at least for a uniform energy distribution (uniform in neither S1 nor S2, as L y and Q y are functions of energy, not flat). But optimization with re-weighting is still possible, just in a different manner than done for monoenergetic peaks, because of cross-contamination between bins. Next we summarize NR; there is good agreement on total yield from different experimentalists. • While impossible to obtain from truly monoenergetic lines, a summation of separate N ph and N e− data sets results in strong evidence of NR anti-correlation akin to ER's and no statistically significant difference from Lindhard even sub-keV, at least given additional high-E quenching. • Despite the point above, the advantages of a combined scale are not significant compared to the S1-only default (but S2 comparable) as so much E is lost to heat (>80%) decreasing pulse areas. • An optimized combination scale, which corrects for order-of-magnitude discrepancies in efficiency below 1 keV, is still best, but likely requires fine-tuning by energy spectrum. It is also likely to be highly detector-dependent and only important after a WIMP discovery is made, to fit the mass and cross-section the most precisely. A uniform spectrum is a bad approximation in any case. Low Energy: keV-scale (Dark Matter Backgrounds / Calibrations) Monoenergetic Peaks For ER in LAr the best example of a low-energy calibration line is the 83m Kr peak, at 41.5 keV, commonly used to calibrate both LXe and LAr experiments, but in this latter case there exists no evidence of the yields depending on the separation time between the individual 32.1 and 9.4 keV peaks [157], unlike in LXe [75], so the MC comparison is easier. The 2.82 keV electron capture from 37 Ar has also been studied in LAr but most commonly at zero electric field in single-phase (liquid) detectors, making it a less ideal choice for complete NEST comparisons to S1, S2, and combined-E histograms [158]. For a WIMP search ER is again the main background, due to 39 Ar in DarkSide-50 (DS-50) at Gran Sasso [159], DEAP-3600 (and formerly CLEAN precursors) at SNOLab [160,161], and ArDM (at Canfranc). Underground Ar depleted in this isotope reduces the background, but it remains dominant [162]. In neutrino experiments, ER is the signal, via neutral-current, charged-current, and elastic-scattering interactions. In this section, we begin by focusing on dark matter at the keV scale, later moving on to cover the GeV scale more relevant for accelerator neutrino experiments like DUNE [163], with the intermediate scale at MeV also important for supernova or solar neutrinos [164,165]. Predictions of NEST's LAr ER model in comparison to experimental data are shown in Fig. 7. The source of data here is DarkSide (DS), specifically several PhD theses of its students [21][22][23]. For this experiment, g 1 = 0.18 ± 0.01 phe/photon [21], higher than within LXe detectors likely because of conversion of VUV photons into visible light using wavelength shifter followed by detection in high-QE visible-light SiPMs/MPPCs/APDs. 
Another source gives this as g 1 = 0.1856 ± 0.0007 stat ± 0.0008 syst [23]. To most closely center NEST with respect to DS data in Fig. 7, we set g 1 = 0.181 and g2 = 23.8, both well within the above uncertainties. Only one example electric field was studied, of 200 V/cm, comparable to the electric fields studied earlier for LXe. The work function or W q was assumed to be 19.3 eV, a value justified later when discussing reconstruction for neutrino detectors. Reconstructed energy is again keV ee as opposed to keV (without the subscript) for known individual energies. Unlike in LXe, the S1-only energy scale does not necessarily perform the poorest. Our particular 83m Kr example has an energy resolution σ/µ of 6.5%, in Fig. 7a. The S1-based scale is more reliable in LAr due to its greater linearity, wherein the light yield for different betas and gamma rays is quite flat in energy, starting at 40-50 photons/keV and falling as the electric field rises and more energy goes into charge production [78,157,[169][170][171][172][173][174]. Next, in Fig. 7b the S2-only scale is plotted, leading to a resolution of 25%. Reproducing this large width was done artificially by setting the linear noise in the S2 channel to 25.0% (only 2.9% for S1 to match that width, an effectively negligible value). Compare this to 6.0% (S2, so high due to e − trains/tails/bursts [57]) and 1.4% (S1) assumed by default for LUX Run03 within NEST. We interpret the cause of the large value for S2 in LAr as stemming from a lack of full 3D corrections in the initial DS analysis. There is also a clear offset in skew compared to the MC which may come from DS-specific effects difficult to fully capture in a custom MC like NEST that does not account for all detector idiosyncrasies. No (statistical) errors are depicted as they are negligible from this high-statistics calibration. Fig. 7c shows a combined resolution of 7.5%, not improving on the S1-only resolution, but still a marked improvement over S2-only, suggestive of anti-correlation of charge and light in LAr (more robustly established later in this section). The agreement is even worse with MC here, however, than in the S2 plot, but this can be explained by the peak being from a different analysis from (a) and (b), with the Kr calibration sitting on top of a large background, from 39 Ar inside natural Ar, which we did not model. (But this is the only example of a DS combined-energy peak we were able to locate.) In the original source the peak was not centered on 41.5 keV ee , likely due to a systematic offset in g 1 and/or g 2 (see contradictory values above) but we had no difficulty in NEST with this. For ease of comparison for at least the width, 2.5 keV ee was added to data, to force alignment to a 41.5 keV(ee) mean [23]. Fig. 7d is the combined-E resolution vs. E. NEST is in blue. The data have full position corrections from high-statistics 83m Kr calibrations [21][22][23] as in LXe. They are most crucial for S2, in XY (or radius and angle) as Z is already handled by electron lifetime. NEST points for comparison are at semi-log steps in E = 2 − 5000 keV. While retaining the 3% S1 noise term from earlier, they have had the 25% S2 noise from (b,c) removed, so they can be effectively treated as close to the MC-predicted min possible, at least for the given (DS-50) combination of g 1 , g 2 , and E-field. The combined-E resolution drops from the 7.5% in (c) to below 5% in this case at 41.5 keV in (d). 
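The three energy scales compared in Fig. 7 can be illustrated with a small toy calculation using the g 1 , g 2 , and W q values quoted above. The light fraction at 200 V/cm and the treatment of the 2.9%/25% noise terms as purely multiplicative are assumptions, so the printed resolutions only show the qualitative ordering (combined worse than S1-only once a noisy S2 is folded in, but far better than S2-only), not the exact 6.5%, 25%, and 7.5% figures.

```python
import numpy as np

rng = np.random.default_rng(7)

g1, g2, W_q = 0.181, 23.8, 19.3e-3   # phe/photon, phe/e-, keV/quantum (from the text)
E_true, f_light = 41.5, 0.6          # 83mKr; light fraction at 200 V/cm assumed
noise_s1, noise_s2 = 0.029, 0.25     # fractional noise terms quoted in the text

n_ev = 50000
nq   = int(E_true / W_q)
nph  = rng.binomial(nq, f_light, n_ev)   # recombination treated as a binomial split
nel  = nq - nph                          # anti-correlated by construction
s1   = rng.normal(nph * g1, np.sqrt(nph * g1)) * (1 + rng.normal(0, noise_s1, n_ev))
s2   = rng.normal(nel * g2, np.sqrt(nel * g2)) * (1 + rng.normal(0, noise_s2, n_ev))

e_s1   = E_true * s1 / s1.mean()         # peak-normalized single-channel scales
e_s2   = E_true * s2 / s2.mean()
e_comb = W_q * (s1 / g1 + s2 / g2)

for name, e in (("S1-only", e_s1), ("S2-only", e_s2), ("combined", e_comb)):
    print(f"{name:9s}: sigma/mu = {100 * e.std() / e.mean():.1f}%")
```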
The empirical S2 noise term was thus likely accounting for imperfect correction for the (2D) position-dependent S2 light collection. There is no equivalent of Fig. 2b for LAr, as no example of Eddington-like bias could be found. An opposite effect (lower instead of higher reconstructed E) is recorded in Figure 3.11 of [23]. However, this can easily be interpreted as E "loss" into charge when using only S1, which while more linear in LAr compared to LXe, is still not a fixed flat L y at all energies, not even at null electric field [158,175]. High Energies: The MeV and GeV Scales (Neutrino Physics) In moving from the keV scale of DM experiments to the GeV scale of the accelerator neutrino experiments, we study the MeV scale as a boundary case. A 1 MeV beta was chosen as an example of an ER interaction energy just beyond what is considered relevant for a DM search, but on the other hand at the extremely low-energy end for neutrino physics, potentially just barely above threshold in an experiment Figure 7. Reconstruction of the 41.5 keV 83m Kr peak in DS [23] compared to NEST, with data as black dots, NEST grey squares. For clarity, no fits overlaid, although most data are ∼Gaussian. (a.) S1-only E scale most common for LAr-based DM detectors [22]. Our work/conclusions for such a scale should apply not only to TPCs where ionization e − 's are drifted but also to 0 V/cm 1-phase detectors, where these electrons fail to recombine to add to the S1 in LAr for both ER and NR [158,166,167] just as in LXe [32], with the L y having the same shape vs. E as for non-zero fields as field goes to 0 in TPCs (see Doke et al. as well as Wang and Mei for possible reasons [78,141,168,169]). (b.) S2-only, with the slight right-hand asymmetry nearly reproduced by NEST [21]. (c.) Combined-E scale, now standard at least in DS. Optimization as done in LXe could be possible, but not shown. Resolution is poorer than in (a) instead of better due to poor S2 resolution in (b) and that S2 is being combined with S1, on top of 39 Ar. This is explained in the text, as likely due to lack of XY correction creating noise not correlated with S1. For E's below 1 MeV like here this is not likely due to delta rays not being simulated by NEST by itself, as no S2 noise (and little in S1) is needed in the next plot (d) and S1 is not as wide as S2 in (a), while delta rays would affect both. Combined E, canceling them, should still be better in general in Ar as in Xe. (d.) Resolution vs. E for monoenergetic calibrations/backgrounds studied on DS [23]. Only one set of resolutions covering a broad E range was found (black) to compare to combined E from NEST (blue). like DUNE [176]. This is of the same order of magnitude but just slightly above the 39 Ar beta spectrum endpoint (565 keV), and is also near the same energies (976 keV and 1.05 MeV internal-conversion electron and gamma ray, respectively, from 207 Bi) studied by Doke et al. in their seminal 1989-2002 papers [78,169]. Our own re-analysis shows that if one sums L y and Q y vs. electric field, it is a constant number of quanta/keV within experimental uncertainties, and consistent with 1 / W q , given reasonable assumptions on g 1 and g 2 . Table 2. Best (lowest) possible resolution of a 1 MeV electron recoil at 0.5 kV/cm as a function of the S1 photon detection efficiency g 1 and wire noise (in semi-log steps). Each entry is averaged over 10 4 simulations in stand-alone NEST, with the effect of delta rays approximated analytically (based on G4). 
All photons and electrons are included from the interaction, which is thus being treated as a single site. 1 MeV is also associated with a track length at the border of position resolution in an experiment like DUNE, < 1 cm on average, creating "blips" instead of obvious tracks, straddling the point-like interactions observed in DM detectors vs. the track-like nature of most interactions in LAr TPC, or LArTPC for short, neutrino experiments. Its low dE/dx is also at the cusp of the minimally ionizing regime, making a 1 MeV electron a MIP (Minimally Ionizing Particle) like electrons from GeV-scale neutrino interactions prior to showering [177]. In neutrino detectors there is no g 2 however, as electron charge is directly measured, as on EXO, by wire planes instead of S2. They are 1-phase liquid TPCs with unit gain for charge readout. Table 2 scans different levels of energy resolution associated with noise from the wire readout, showing the best (lowest) resolution for different values of g 1 . Combined-energy and even S1-only resolution appear in the table, for high g 1 paired with high wire noise. Resolution in a DUNE-like detector can be halved already at g 1 = 0.02 i.e. 2%, for 10% wire noise. Such noise, starting even at 1%, comprises Q-only resolution almost entirely. For 0% wire noise, the 0.46% resolution is driven by excitation and recombination, both contributing binomial fluctuations (unlike in LXe). F q drives combined-E results. Its theoretical value 0.1 was assumed, given no other data [168]. For S1 only, a g 1 -based binomial drives what is possible. 0% noise was assumed for S1. Unfortunately, large LArTPC neutrino experiments like DUNE will achieve much lower values of g 1 [178], though g 1 = 2% is already much lower than any Xe or Ar DM experiment has achieved. At higher levels of noise, still realistic for future experiments but energy-dependent, even lower g 1 suffices for making combined energy superior to Q-only. When reading this table from the top down, if the value starts changing this means that the minimum resolution being quoted is from the combination of scintillation and ionization, and no longer just the ionization. If reading across: when the values stop changing that means (at high g 1 ) an S1-only scale is best, as at higher levels of wire noise not only does the Q-only energy scale become unreliable, but the benefit in utilizing the anti-correlation of charge and light washes out for the combined scale, leaving the S1-only scale. The small-scale LArTPC R&D detector, LArIAT (Liquid Argon In A Test beam), has investigated the claim that the combined-energy scale, making use of both ionization charge and scintillation light, is more precise [179,180]. While not targeting the removal of the wire noise, it does effectively cancel out the exciton-ion and recombination fluctuations, shown using a sample of Michel electrons (from muon decay) at a scale of tens of MeV. Here, the definition of combination is more appropriately set for neutrino interactions using differential energy loss along the particle track, updating our earlier equations, starting with (1) but with L = 1 assumed as for all ER: where the number of photons N ph and number of electrons N e− are replaced for the first step by L y and Q y , specific yields per unit energy, each multiplied by energy. 
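The trade-off that Table 2 scans, between g 1 and wire noise in a single-phase LArTPC, can be mimicked with a toy calculation before continuing the derivation. This is not NEST: the light fraction, the Fano factor of 0.1, and the purely multiplicative wire-noise model are assumptions, and the combined estimate below is the plain sum rather than an optimal weighting, so only the pattern matters, i.e. Q-only wins at low noise while adding even a modest S1 helps once the wire noise grows.

```python
import numpy as np

rng = np.random.default_rng(3)

W_q, fano   = 19.5e-3, 0.1     # keV/quantum and the Fano factor assumed in the text
E0, f_light = 1000.0, 0.35     # 1 MeV ER; light fraction at 0.5 kV/cm assumed

def resolutions(g1, wire_noise, n_ev=20000):
    nq  = np.round(rng.normal(E0 / W_q, np.sqrt(fano * E0 / W_q), n_ev)).astype(int)
    nph = rng.binomial(nq, f_light)                      # binomial excitation/recombination split
    nel = nq - nph
    s1  = rng.binomial(nph, g1)                          # binomial light collection
    q   = nel * (1 + rng.normal(0, wire_noise, n_ev))    # charge with wire-readout noise
    e_q    = W_q * q / (1 - f_light)                     # Q-only, rescaled to energy
    e_comb = W_q * (s1 / g1 + q)                         # plain combined scale
    return tuple(e.std() / e.mean() for e in (e_q, e_comb))

print("  g1   noise   Q-only  combined")
for g1 in (0.005, 0.02, 0.10):
    for wn in (0.0, 0.01, 0.10):
        rq, rc = resolutions(g1, wn)
        print(f"{g1:5.3f}  {wn:5.2f}   {100*rq:5.2f}%  {100*rc:6.2f}%")
```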
Next, to align our terminology with what is more common in the neutrino field, we rewrite L y as dS/dE and Q y as dQ/dE [181] (instead of N ph /E and N e− /E, with S and Q standing respectively for scintillation and charge, the quantum values). S is used for the scintillation light instead of L to avoid confusion with the Lindhard factor. The resultant formula above is still not what is most commonly used in the field. Instead, that is:

dE/dx = (dQ/dx) / Q y (E, ℰ).

This is equivalent to the S2-only scale for two-phase TPCs in Eqn. (3), where E = N e− / Q y (E, ℰ), except divided by dx and with g 2 = 1, given that there is no need for extraction from liquid to gas, nor any photons created by electrons as a secondary process in gas (S2, i.e. electroluminescence). The electron lifetime must still be high, though, ideally much larger than the full drift time across a TPC, and also well-measured, so that the Q can be corrected in the same way as S2. Q y (N e− /E) is often parameterized with Birks' Law [32,175,182] in terms of dE/dx instead of E:

Q / Q 0 = A / (1 + (k B /ℰ)(dE/dx)), with Q 0 = E/W i , (11)

where W i (sometimes called W e ) is not the same as the work function W q defined much earlier, but is trivially related. Being defined as E/N i , not E/(N ex + N i ), the ionization-W convention is related to the overall or total work function by W i = (1 + N ex /N i )W q (N ex /N i being the exciton-to-ion ratio). For LAr, this means W i = (1 + 0.21)W q = 1.21 × (19.5 eV) = 23.6 eV, approximately [78,183]. Note that LAr's W q has been labeled W max ph [78], as it is related to the maximum possible L y , reached when Q y = 0, but this is not possible even at 0 V/cm (it does become possible as the ionization density from interactions goes to ∞ and forces recombination, in both LXe and LAr [78,184]). The drift electric field is ℰ, k B is known as Birks' constant, and A is the correction factor explained later. Q 0 is effectively the maximum possible charge, at infinite ℰ, defined as E/W i or N i . While Birks is not the only possible parameterization (e.g. there is also the Thomas-Imel box model [185,186]) the focus of this work is on energy reconstruction, not the various microphysics models; we take this as only one representative example here. Another approach, taking into account not just excitation vs. ionization but also e − -ion recombination, begins the same as for LXe [35,36,144] and LAr experiments used for DM instead of neutrinos:

N ph = N ex + r N i , N e− = (1 − r) N i .

Here r refers to the recombination probability and depends not only on energy and electric field but also on the particle or interaction type, N ex is the number of excited atoms initially produced, and N i the number of e − -ion pairs. In this case, N ph + N e− is also N ex + N i . Applying the same revision to Birks' Law suggested in 2011 by Szydagis et al. for Xe [32] to Ar recasts it in terms of first principles:

r = k(ℰ)(dE/dx) / (1 + k(ℰ)(dE/dx)), (13)

with k(ℰ) = k B /ℰ only one possible parameterization of recombination's E-field dependence, and in turn of the charge and light yields, with a more general negative power law possible [32,187]. Only the A from ICARUS [188] is lacking in terms of robust justification, but it is likely a correction needed only when the secondary-particle production range cut in the Geant simulation is set too high to allow for delta-ray formation down to keV-scale energies. Delta rays are lower-energy, higher-dE/dx tracks with greater recombination, hence more light at the expense of charge, explaining why A < 1. The increased simulation time and memory usage associated with lowering the secondary-particle production range cut often lead this cut to be set too high in Geant simulations to capture this effect.
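As a worked example of Eqn. 11, the snippet below evaluates the Birks/ICARUS escape fraction and the resulting charge yield using the A and k B best-fit values quoted from Amoruso et al. and the W i derived above. The unit conventions (dE/dx in MeV/cm divided by the LAr density, drift field in kV/cm) follow common LArSoft usage and are assumptions here rather than something spelled out in the text.

```python
# Birks/ICARUS parameterization of Eqn. 11 (values quoted in the text; units assumed)
A_birks = 0.800      # renormalization constant, ICARUS best fit
k_birks = 0.0486     # Birks constant, (kV/cm)(g/cm^2)/MeV
rho_lar = 1.396      # g/cm^3, liquid-argon density (assumed)
W_i     = 23.6e-6    # MeV per ion pair, = 1.21 * W_q as derived above

def escape_fraction(dEdx_MeV_cm, field_kV_cm):
    """Q/Q0 = A / (1 + (k_B/field) * (dE/dx)/rho): the non-recombined fraction."""
    return A_birks / (1.0 + (k_birks / field_kV_cm) * dEdx_MeV_cm / rho_lar)

def charge_yield(dEdx_MeV_cm, field_kV_cm):
    """Electrons per MeV deposited: Q_y = (Q/Q0) / W_i."""
    return escape_fraction(dEdx_MeV_cm, field_kV_cm) / W_i

for dEdx in (2.1, 5.0, 20.0):          # MeV/cm; ~2.1 corresponds to a MIP in LAr
    print(f"dE/dx = {dEdx:5.1f} MeV/cm at 0.5 kV/cm: "
          f"R = {escape_fraction(dEdx, 0.5):.3f}, "
          f"Q_y = {charge_yield(dEdx, 0.5):7.0f} e-/MeV")
```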
In Fig. 8a the number of ionization electrons produced from 1 MeV primary electrons is plotted versus both the Geant4 secondary production range cut ("length scale factor") as well as the associated energy threshold for delta rays. The ratio between the values of the two plateaus at the left and right extremes in Q is almost exactly equal to the ICARUS best-fit value of the renormalization constant A (0.800 ± 0.003 in Amoruso et al. 2003 [188]). The default secondary production range cut of 0.7 mm used in the LArSoft (Geant4) simulation allows for only ∼270+ keV delta rays to form [189]. [190]. The default threshold is typically too high by >10 times, in length and/or E, in neutrino simulations [189]. (b.) Demonstration of S versus Q anti-correlation from our reanalysis of Doke et al. 1988 and2002 [78,169], confirmed by the recent LArIAT study [180]. 1 MeV ER at different E-fields, as opposed to many E's at one field (as done for a contemporary "Doke plot"). (c.) Noiseless charge-only resolution vs. fixed electron E using Geant4, with G4 secondary production range cuts of 1 µm (blue) and 0.7 mm (black, in LArSoft), compared to data points at isolated E's on recombination fluctuations with delta rays from [186] (red) and [191] (green, no measurement uncertainty provided). Fig. 8b (upper right) shows that the lower plateau in the large left plot (a) in its lower left corner is closer to correct. The zero-electric-field light yield for a 1 MeV beta, or other electron, is approximately 41 photons/keV [169] while the red line in (b) shows a reduction to ∼0.6 of that value by 500 V/cm, to 24.6 photons/keV. If the work function is 19.5 ± 1.0 eV, as reported also by Doke, this implies a total (S and Q summed) of 51.3 +2.8 −2.5 quanta/keV (an S value different than 41, potentially higher, only strengthens our following arguments by lowering Q). The charge yield is total minus light, leading us to 51.3 − 24.6 = 26.7 +2.8 −2.5 e − /keV. The NEST (2012) value is thus very much within the error envelope of actual data for the lower level, but outside it for the higher one, which also never quite flattens (Fig. 8a upper right). Including more, relevant measurement uncertainties does not sufficiently explain this discrepancy. A similar conclusion can be reached by converting relative charge (green curve in Fig. 8b) to absolute, pointing again toward the lower value of Q being the more correct one. Fig. 8c demonstrates fully accounting for delta rays is also important for correctly predicting ionization-only E resolution for primary electrons at these energies (∼1 MeV). The 2012-2013 version of the NEST LAr ER model was a combination of the Birks-Law and Box (Thomas-Imel) models of recombination, with LET < 25 (MeV·cm 2 )/g driven primarily by Birks, as given by Eqn. 13 (with the Thomas-Imel box still called for any accompanying delta rays). There was no need for renormalization as in Eqn. 11 (so A = 1) due to using a significantly smaller secondary production range cut in Geant4, and the Birks constant electric field dependence was k(E ) = 0.07/E 0.85 (c f . Amoruso's k(E ) = (0.0486 ± 0.0006)/E ). The different power on the electric field dependence, less than 1, on top of the different constant within the numerator is likely due to using Birks to model the recombination directly instead of R = 1 − r or Q Q 0 . In addition, it was based on higher-LET dark matter detector data and extrapolated lower. This is also nearly identical to Equation 8 from Obodovskiy's comprehensive report [187]. 
One point of confusion to address is the most common quantity used for data/MC comparison in the neutrino field, the recombination factor, R. It is actually an "escape" factor for electrons from ionized atoms: [183]. See the NEST 2013 comparison to ICARUS data (from both muons and protons mixed) in Fig. 9, where a good match is observed. This is a different NEST version from the dark matter experiment (DarkSide) comparisons earlier at lower energies (higher dE/dx) because at time of writing the ER model for LAr in the latest NEST version has not been recast yet into a dE/dx basis for a robust comparison [192,193] assuming a typical LAr density of 1.4 g/cm 3 . Minor discrepancies are observed at low dE/dx, which will be addressed with planned improvements to the NEST LAr ER model in the near future. The corrections discussed here are important not only for energy reconstruction considerations but also for background discrimination, such as for differentiating neutral pions, photons, and e + e − pair production from single electrons which comprise the signal of interest for accelerator neutrino experiments. This particle identification can be carried out with the use of dE/dx by first measuring dQ/dx [194]. Potentially one can add in dS/dx for a low-enough energy threshold, if the g 1 is high enough, as discussed earlier. The exclusion of delta rays common in typical Geant4 simulations for LArTPC neutrino experiments affects not only the mean yields, which can still be simulated accurately with a 20% correction almost independently of electric field and LET, but also resolution: delta rays will degrade the energy resolution and require more complicated corrections if they are deactivated in Monte Carlo simulations (in Fig. 8c). Incorrectly-modeled energy, and thus dE/dx, resolution can lead to potential biases between data and simulation when using reconstructed dE/dx for particle identification: electron/photon discrimination in LArTPC neutrino experiments for example. Liquid Argon Nuclear Recoil (Dark Matter Signal, and CEvNS) The final topic is low-energy (keV-scale) nuclear recoil or NR within LAr, which is important not only for dark matter in experiments such as DS [195] but also for CEνNS on collaborations like COHERENT [196]. The sum of quanta in data is again fit well by a power law, combining all global data on total yields, related to L(E) from the beginning, Eqn. 1. The fit in Fig. 10 is quite similar to that from LXe earlier in Fig. 5 in both the base and exponent. Here we plot total quanta/keV instead of just quanta for greater clarity: in terms of quanta the power would be Fig. 10a, as most LAr data for DM traditionally were 0 V/cm (e.g. microCLEAN, DEAP) thus lacking N e− to add to N ph . L e f f data exist from many sources, collected in [161,[197][198][199] but these do not directly inform L, being just L y . Lindhard works surprisingly well according to Fig. 10b, staying within 2σ for the power fit to data below 50 keV. Deviation below Lindhard at high E is easy to understand as bi-excitonic quenching again as per Hitachi, while (some) deviation at sub-keV is also understandable, as Lindhard's approach should break down at < 1 keV regardless. The total number of quanta per keV is remarkably flat from 1-300 keV, around 15 quanta/keV. Breaking the total down into light and charge, a similar behavior is observed as in LXe, with L y increasing in energy, while Q y decreases [175,200]. 
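The same comparison made for xenon in Fig. 5 can be repeated for argon in a few lines, using the power law quoted in the Fig. 10 caption below, (11.10 ± 1.10)E^(0.087±0.025) quanta/keV, against standard Lindhard for Z = 18. The W q of 19.5 eV and the default k = 0.133 Z^(2/3) A^(−1/2) ≈ 0.144 are the assumptions made here.

```python
import numpy as np

W_q  = 19.5e-3          # keV per quantum, the Doke value discussed in the text (assumed)
Z, A = 18, 39.948       # argon

def lindhard_L(E_keV, k=None):
    """Standard Lindhard factor, here for Ar."""
    if k is None:
        k = 0.133 * Z ** (2.0 / 3.0) / np.sqrt(A)   # ~0.144
    eps = 11.5 * E_keV * Z ** (-7.0 / 3.0)
    g = 3.0 * eps ** 0.15 + 0.7 * eps ** 0.6 + eps
    return k * g / (1.0 + k * g)

def nq_per_keV_fit(E_keV):
    """Best-fit power law quoted for Fig. 10a: (N_ph + N_e-)/E = 11.10 * E^0.087."""
    return 11.10 * E_keV ** 0.087

for E in (1.0, 5.0, 16.9, 50.0, 200.0):
    print(f"E = {E:6.1f} keVnr: fit {nq_per_keV_fit(E):5.1f} quanta/keV, "
          f"Lindhard {lindhard_L(E) / W_q:5.1f} quanta/keV")
```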
Confusion between L y and total yield has led to past claims Lindhard does not fit well [200,201]. Regardless of whether Lindhard fits well or does not in some energy regime, Fig. 10a is indicative of a combined-E scale being worthwhile (in place of S1 only) as in LXe, as argued both by DS [23] and LArIAT [180]. The final plot, Fig. 11, displays the breakdown into the S1 and S2 pulse areas for one example energy coming from the same quasi-monoenergetic technique cited earlier for table-top LXe yield measurements (for both ER and NR) relying upon coincidence between a noble-element detector and a second detector, typically Ge [31,65,175]. For NEST MC, the means of their values for the gains have been assumed: g 1 = 0.104 ± 0.006 PE/photon and g 2 = 3.1 ± 0.3 PE/e − . The drift electric field was 193 V/cm and sample energy 16.9 keV, selected due to being the lowest value for which the S1 and S2 histograms are displayed at an identical energy anywhere within the existing literature. NEST seems to overestimate the L y , but a larger issue is at play: contradictory data exist for NR from over the past ∼two decades: some experiments showed the amount of scintillation staying flat, others decreasing, and several even increasing with decreasing recoil energy [161,197,198], physically possible if one adapts the logic from Bezrukov, Kahlhoefer, and Lindner from LXe on screening [133]. Those showing increase cannot be easily explained as caused by threshold bias, as several distinct sets of results agreed upon the increase [195,200]. Other hypotheses include different collection efficiencies, especially as wavelength shifters are generally used in LAr, and different ones [204], some of which may have efficiencies higher than 100% effectively due to creation of multiple visible photons per individual input photon in the extreme UV, though the data on the existence of this are also contradictory [205]. A related effect may be something in photo-sensors exposed to LAr analagous to the 2-PE effect seen in LXe. A compounding problem is the discrepancy for the zero-field L y for 57 Co (or 83m Kr) used as in LXe to set L e f f . Historically, it is stated as near 40 photons/keV [169] but more recent work, with g 1 known by Doke plot, suggests closer to 50, nearer the max (1/W q ) if Q y → 0, sensible for E → 0 [166] (though as explained earlier, unrecombined e − 's remain). Systematic uncertainty in L y at ∼20 keV may be as high as 40% (L e f f = 0.35, or 0.25) making the 20% shift (x0.8) needed in Fig. 11a reasonable. At time of writing, NEST's compromise solution to the contradiction is to follow the predictions of not just Lindhard but all existing first-principles models, as well as the most recent data (ARIS) suggesting a flat-ish L y building into a mild increase with increasing NR energy, but at a higher level than the traditional ∼10 photons/keV (0.25 L e f f x40), better matching the claims of high L y at low E [175]. (Earlier versions would switch between differing, mutually exclusive solutions.) Fig. 11b suggests an S2-only scale may ironically be more reliable for (NR-based) DM searches with LAr. Contradictions are fewer between data and different models, including NEST, even at other energies, and multi-ms e − lifetime may be easier to achieve through high-level purification [159,206]; however, a note of caution that this may be due to the paucity of non-zero-field data, for measuring Q y , at multiple E-fields. 
Columnar recombination, not currently simulated, which changes the yield depending on electric field orientation, is a further complication [200]. The combined-E scale essentially removes (fit to all data) Figure 10. (a.) The (E-field-independent) total number of quanta N q per keV for NR in liquid argon: L y + Q y , or (N ph + N e− )/E. The best-fit power law, used in the current NEST's NR model for LAr, is (11.10 ± 1.10)E 0.087±0.025 , surprisingly similar to LXe, with 1σ/2σ error bands in green/yellow. Kimura data [167] are given as fit to data in original paper; SCENE and ARIS points are taken from [200] and [175]. (b.) A model review, collected from [132,143,201] of Lindhard/Lindhard-like approaches. Solid lines assume 45 photons/keV for the (0 V/cm) L y of ER in LAr, with upper and lower dashes covering 50 and 40 respectively to span the uncertainty in light, before addition with Q y . (c.) As in (a, b) NEST repeated here, with two additional model comparisons, from PARIS (used by DS [202]) and [203]. Figure 11. (a.) S1 peak and (b.) S2 peak for 16.9 keV NR in LAr from Cao et al. [200]. Data are in black with errors, with original MC in red (fit borders for it as vertical blue lines) and NEST overlaid in grey over original paper plots: defaults solid, while altered to match data in dash. Large noise levels needed in NEST are comparable to those assumed by SCENE (R 1 and R 2 in legends imply S1 30%, S2 25%). the effect of these and any other recombination fluctuations, which cause the variance in the original numbers of photons and electrons to be identical "at birth" prior to any fluctuations due to propagation and/or detection [36,51]. Such a scale will also remove the effect of delta rays (ER) on the ultimate energy resolution achievable, if driven by anti-correlated fluctuations (recombination). S2 is generally easier to measure, however, than the S1 is: electrons drift upward in a TPC along nearly-straight lines (slight diffusion occurs) from the liquid to the wires or to the wires followed by gas, where many photons are produced per electron. Losses due to the impurities along the drift length can be quantified simply, with an exponential. On the other hand, scintillation photons are produced in all directions and are affected by not only attenuation (not necessarily exponential, but difficult to quantify analytically) but geometric collection efficiency driven by reflection and refraction, and QE. As energy decreases, Q y increases for both NR and ER, for both LAr and LXe, at least down to the keV level before turning around, while L y appears to decrease toward 0. For this reason, NEST models are created using the total yield first, then charge yield, and light reconstructed by subtraction in the code. (The same is true for LXe.) Liquid Argon Summaries In conclusion, the key points of the LAr ER section, coupled to reasonable/common detector parameter values including/especially g 1 and E-field E , are: • A combined S1 + S2 scale continues to reconstruct ER energies best for DM/neutrino experiments, due to anti-correlation between channels, but not if g 1 is very low ( 1%) or g 2 very high (e.g., 2-phase TPC). An additional challenge is created by sitting on top of a continuous background like the beta decay of 39 Ar for combined energies, but noise in Q can make S1 more favorable. • dE/dx is more important than just E at the GeV scales of greatest relevance to neutrino projects and it is most commonly reconstructed utilizing dQ/dx (ignoring dS/dx). 
• A correction (∼0.8) must be inserted into the simulation of charge yields for use in the traditional Q-only scale, lowering the Q that is output, if the delta-ray production threshold is set above the e − -ion thermalization radius O(1 µm) in MC. Energy resolution may also be affected, not just mean yields, and high-energy, low-dE/dx (MIP) interactions are not immune to this problem due to secondary particle production, handled with e.g. Geant4. • Due to differences in delta rays and other secondaries, an analytical fit may be impossible across all particle types, leading to different recombination probabilities even if you consider only the averages versus dE/dx or energy. • Either escape probability or recombination can be modeled as a function of the dE/dx (or the LET, which includes the effects of density). Next, we summarize the key points of the LAr NR section. We note that because yields change slowly with increasing field especially for the dense tracks of NR that our S1 examples from two-phase TPCs (DS, ARIS, SCENE) should be relevant/applicable to 0 V/cm single-phase detectors as well. • While possible to measure for only approximately monoenergetic peaks, a summation of the few available N ph plus N e− data sets results in evidence for NR anti-correlation (akin to ER's) and modest agreement with Lindhard. This is important for both DM and CEνNS. • Due to uncertainty in the scintillation yield, an S2-only scale may be beneficial, but exploration of combined E may still be interesting in the future (as stated above). Non-zero-field measurements are not as plentiful for charge yields as zero-field light-only ones for NR in liquid argon. Discussion and General Conclusions We have reviewed mainly LXe and LAr, in 2-phase TPCs, and manage to extract insights spanning dark matter and neutrinos, proving once again the remarkable consistency obtained across numerous data sets, and the utility of NEST to probe at least the simple reconstruction methods reviewed. From an historical perspective, it is intriguing that the usage of noble elements in the DM field began with scintillation-only E measurements, with charge primarily used for position reconstruction, while for neutrinos the opposite occurred: ionization-only E scale, with initial scintillation used as a trigger for event activity of interest. Moving forward in both fields, it will be interesting to see how charge and light are combined together to improve E measurements further than what has already been achieved. The new insights we have gleaned from our review and meta-analyses with our own simulations, based on the global, cross-experiment framework of NEST, first for liquid xenon, include: 6. A comprehensive compilation of all existing data and models for NR in terms of total yield not just light, beyond 0 V/cm. Funding: The research of Prof. Levy and Szydagis at the University at Albany SUNY (the State University of New York) was funded by the U.S. Department of Energy (DOE) under grant number DE-SC0015535. Prof. Mooney and his students, Alex Flesher and Justin Mueller, were funded through university start-up funds. Ms. Kozlova was funded through the Russian Science Foundation (contract number 18-12-00135) and Russian Foundation for Basic Research (projs. 20-02-00670a).
Risk of fraud classification In this article, we define consumers’ profiles of electricity who commit fraud. We also compare these profiles with users’ profiles not classified as fraudsters in order to determine which of these clients should receive an inspection. We present a statistically consistent method to classify clients/users as fraudsters or not, according to the profiles of previously identified fraudsters. We show that it is possible to use several characteristics to inspect the classification of fraud; those aspects are represented by the coding performed in the observed series of clients/users. In this way, several encodings can be used, and the client risk can be constructed to integrate complementary aspects. We show that the classification method has success rates that exceed 77%, which allows us to infer confidence in the methodology. Introduction This article is oriented to the solution of a real problem through stochastic processes techniques. Institutions/ companies collect information from users/customers to determine their profiles on consumption practices, preferences, and socio-economic features, among other aspects. That is, in general terms, they seek to establish behavioral profiles. This knowledge can facilitate the placement of products or the rapid adaptation of an institution to meet the needs of its users. The coding of the information allows defining these profiles, which constitute representations of the behavior. Such representations provide information to institutions and companies to form teams that can dedicate themselves to optimizing the relationship with these groups characterized by specific profiles. Those profiles are defined using the knowledge about the performance of certain sequences (user history coding). The problem of determining groups and profiles can be approached from discrete stochastic processes tools, since, in this area, there are powerful tools to deal with the problem, see [1], [2] and [3]. The sequences resulting from the coding of user/customer data can be identified as samples coming from discrete stochastic processes. In this article, we develop a method to classify sequences, according to k I previously determined profiles. Then the k I profiles are compared with the performance of other unclassified sequences (or group of). For this purpose, it is necessary to have some tools, (A) a tool that is capable of (i) discriminating between processes by samples from them, (ii) determining whether the processes represented by their samples are from the same stochastic law, (B) a tool that allows drawing a stochastic profile of the behavior of a process based on a series of sequences (group) that are judged to come from the same process. As a consequence of addressing this issue, in this article, we address the problem of classifying clients as fraudsters. We employ a set of real data on electricity consumption. The proposal is to attribute to each classified client, a risk related to the similarity that its series of consumption shows with some group of fraudulent clients, identified through (A)-(B). This article is organized as follows. Section 2 addresses the theoretical foundations and classification strategy. Section 3 describes the data, the coding, and the calculation of the risk to customers. Also, in this section, the notion of fraud customers and the groups found in the database are discussed. The conclusions and considerations are given in Section 4. 
Theoretical background We begin this section by introducing the notation used in the formalization of the stochastic tools. Let (Z t ) t be a discrete-time Markov chain of order o (o < ∞) with finite alphabet A. Let us call S = A^o the state space and denote the string a m a m+1 . . . a n by a_m^n, where a i ∈ A, m ≤ i ≤ n. For each a ∈ A and s ∈ S define the transition probability P(a|s) = Prob(Z t = a | Z_{t−o}^{t−1} = s). In a given sample z_1^n coming from the stochastic process, the number of occurrences of s is denoted by N n (s) and the number of occurrences of s followed by a is denoted by N n (s, a). In this way, N n (s, a)/N n (s) gives the empirical version of P(a|s). Consider now two Markov chains (Z 1,t ) t and (Z 2,t ) t , of order o, defined on the finite alphabet A with state space S. Given s ∈ S denote by {P(a|s)}_{a∈A} and {Q(a|s)}_{a∈A} the sets of transition probabilities of (Z 1,t ) t and (Z 2,t ) t , respectively. Consider now the local metric d s introduced by [1]; note that d s is a metric (non-negative, symmetric, and satisfying the triangle inequality) and it allows defining a global notion (over S) of similarity between sequences. Definition 1. Consider two Markov chains (Z 1,t ) t and (Z 2,t ) t of order o, with finite alphabet A, state space S = A^o and independent samples z_{1,1}^{n_1}, z_{2,1}^{n_2} respectively. Then, set (i) for each s ∈ S, the local distance d s (z_{1,1}^{n_1}, z_{2,1}^{n_2}), as introduced in [1]; (ii) dmax(z_{1,1}^{n_1}, z_{2,1}^{n_2}) = max_{s∈S} { d s (z_{1,1}^{n_1}, z_{2,1}^{n_2}) }, with N_{n_1+n_2}(s, a) = N_{n_1}(s, a) + N_{n_2}(s, a), N_{n_1+n_2}(s) = N_{n_1}(s) + N_{n_2}(s), where N_{n_1} and N_{n_2} are given as usual, computed from the samples z_{1,1}^{n_1} and z_{2,1}^{n_2} respectively. Moreover, α is a real and positive value. Definition 1 introduces two notions of proximity between sequences: (i) is local, (ii) is global; both are statistically consistent since, by increasing min{n_1, n_2}, their capacity grows to detect discrepancies (when the underlying laws are different) and similarities (when the underlying laws are the same). To decide if the sequences follow the same law, it is only necessary to check whether d s < 1. This threshold is derived from the Bayesian Information Criterion (BIC), see [1]. In the application we use α = 2; with this value we recover the usual expression of the BIC, given by [4]. The next notion (Partition Markov Model, PMM) allows postulating a parsimonious model for a Markov process, aiming at the identification of states of the state space which share their transition probabilities. Through this model we build the stochastic profiles. Definition 2 (see [2]) declares a partition L = {L 1 , L 2 , . . ., L_{|L|}} of S to define a Partition Markov Model for (Z t ) t if this partition is the one defined by the equivalence introduced by item (i), that is, if states s, r lie in the same part exactly when P(·|s) = P(·|r). The model given by Definition 2 was introduced in reference [2], as was the strategy for its consistent estimation, which is also based on a metric defined on the state space and on the BIC. The parameters to be estimated are (a) the partition L, (b) the transition probabilities from each part L to any element of A, P(·|L) = P(·|s) for s ∈ L. Given a sample z_1^n of (Z t ) t , according to [2] the partition is estimated by means of d L , given by Definition 3. Definition 3. Let (Z t ) t be a Markov chain of order o, with finite alphabet A and state space S = A^o , let z_1^n be a sample of the process and let L = {L 1 , L 2 , . . ., L_{|L|}} be a partition of S such that for all s, r ∈ L, P(·|s) = P(·|r). Then, set d L (i, j) between parts L i and L j as in [2], computed from the occurrences N n (L i ), N n (L i , a), N n (L j ), N n (L j , a) for a ∈ A, with α a real and positive value.
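To fix ideas on the counting quantities N n (s) and N n (s, a) used throughout Definitions 1-3, here is a small helper that tabulates them, and the empirical transition probabilities, for a sequence over the alphabet used later in the paper (A = {1, 2, 3, 4}; order o = 1 is just the default chosen for the example). The exact expression of d s from [1] is not reproduced in the text above, so nothing beyond the counts is claimed here.

```python
from collections import Counter
from itertools import product

def transition_counts(z, order=1):
    """N_n(s) and N_n(s, a) for a symbol sequence z, with states s of length `order`."""
    N_s, N_sa = Counter(), Counter()
    for t in range(order, len(z)):
        s = tuple(z[t - order:t])
        N_s[s] += 1
        N_sa[(s, z[t])] += 1
    return N_s, N_sa

def transition_probs(z, order=1, alphabet=(1, 2, 3, 4)):
    """Empirical P(a|s) = N_n(s, a) / N_n(s) over the state space S = A^order."""
    N_s, N_sa = transition_counts(z, order)
    return {(s, a): N_sa[(s, a)] / N_s[s]
            for s in product(alphabet, repeat=order) if N_s[s] > 0
            for a in alphabet}

z = [1, 2, 2, 3, 4, 4, 2, 1, 1, 2, 3, 3, 4, 1]    # a toy discretized series
print(transition_probs(z))
```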
The metric d_L is designed to build a structure in the state space by identifying equivalent states. It is applied, for example, to an initial set consisting of the entire state space S, and whenever d_L(i, j) < 1 the elements L_i and L_j must be placed in the same part (see the properties of d_L in [2]). For each part L of the estimated partition, the transition probability is estimated by P̂(a|L) = N_n(L, a) / N_n(L). Note that all equivalent states are used to estimate these probabilities; in this way, an economy is produced in the total number of probabilities to be estimated. In the next subsection we show how to integrate the tools presented here to build groups (clusters) of sequences following the same stochastic law. We also explain how to define the stochastic profile of each cluster.

Clusters of sequences and partition by cluster

Given a collection of p sequences C = {z_{i,1}^{n_i}}_{i=1}^{p}, under the assumptions of Definition 1, the notion dmax (item ii of Definition 1) is used to define clusters in C. We introduce an algorithm that shows how this is done.

Algorithm 1. 1. Set M = {m_1, ..., m_p}, where each unit m_i consists of the single sequence z_{i,1}^{n_i} of C. 2. Compute dmax between every pair of units in M. 3. If the smallest value, say dmax(m_{i*}, m_{j*}), is smaller than 1, replace m_{i*} and m_{j*} by the merged unit m_{i*j*} and go back to 2; otherwise the procedure ends. Output (clusters of C): M = {C_1, ..., C_k}.

That is, the initial M is composed of all the separate sequences and the final M corresponds to the groups of sequences, or clusters. Note that, given two different sequences z_{i*,1}^{n_{i*}}, z_{j*,1}^{n_{j*}} ∈ C, the occurrences of each s ∈ S are recorded by N_{n_{i*}}(s) and N_{n_{j*}}(s) respectively, and the occurrences of s followed by a ∈ A are computed by N_{n_{i*}}(s, a) and N_{n_{j*}}(s, a). When the new unit m_{i*j*} is defined (because dmax(z_{i*,1}^{n_{i*}}, z_{j*,1}^{n_{j*}}) < 1), the count of the occurrences of s is given by N_{n_{i*}}(s) + N_{n_{j*}}(s) and, for a ∈ A, the count of the occurrences of s followed by a is N_{n_{i*}}(s, a) + N_{n_{j*}}(s, a). That is, in the case of m_{i*j*}, both sequences z_{i*,1}^{n_{i*}} and z_{j*,1}^{n_{j*}} contribute to the count attributed to m_{i*j*}.

Once the proximity between sequences has been determined in order to build the clusters {C_1, ..., C_k}, for each cluster we can build a PMM representing the cluster. In addition, it is possible to quantify the dissimilarity between clusters using the notion dmax. Suppose cluster i is C_i and it is composed of m_i independent sequences; the sample size related to C_i is the sum of n_{im} for m = 1, ..., m_i. For each s ∈ S, compute the occurrences of s in C_i as N(C_i, s) = Σ_{m=1}^{m_i} N_{n_{im}}(s) (Eq. (1)) and the occurrences of s followed by a ∈ A as N(C_i, (s, a)) = Σ_{m=1}^{m_i} N_{n_{im}}(s, a) (Eq. (2)).

Remark 1. If we replace in Definition 3 the sample size n by Σ_{m=1}^{m_i} n_{im} and apply Algorithm 1, substituting (i) C = {z_{i,1}^{n_i}}_{i=1}^{p} by S = A^o and (ii) dmax by d_L, the output of the algorithm will be the partition L̂_i of S related to the cluster C_i.

The following remark shows how to measure the similarity between two clusters.

Remark 2. To establish the dissimilarity between the clusters C_1 and C_2 (which, by construction, are different) we use Definition 1(ii). In the calculation of item (i) of Definition 1, we replace N_{n_k}(s) (respectively N_{n_k}(s, a)) by Equation (1) (respectively (2)), with i = k. We also replace N_{n_1+n_2}(s) by N(C_1, s) + N(C_2, s) and N_{n_1+n_2}(s, a) by N(C_1, (s, a)) + N(C_2, (s, a)). Using those occurrences we can compute the dissimilarity between the clusters.

The next section applies the concepts and strategies presented here to real data.

Risk through Discretized Information

Data and structure of the analyses

In Table 1, we describe the data inspected in this paper. The data correspond to serial records of energy consumption of clients of a power supply company (CPFL) during the period January 2011 to June 2019.
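A compact sketch of the agglomerative procedure of Algorithm 1 is given below; it assumes a function dmax implementing Definition 1(ii) following [1] (not reproduced here), represents a unit as a list of member sequences, and is illustrative rather than the authors' code.

```python
def cluster_sequences(sequences, dmax, threshold=1.0):
    """Agglomerative clustering in the spirit of Algorithm 1.

    sequences : list of discretized samples (each a list of symbols)
    dmax      : callable taking two pooled units (lists of samples) and returning
                the global distance of Definition 1(ii); assumed to follow [1]
    threshold : merging threshold (1 corresponds to the BIC-based cutoff)
    Returns a list of clusters, each a list of indices into `sequences`.
    """
    units = [[i] for i in range(len(sequences))]   # start: one unit per sequence
    while len(units) > 1:
        best = None
        for i in range(len(units)):
            for j in range(i + 1, len(units)):
                d = dmax([sequences[k] for k in units[i]],
                         [sequences[k] for k in units[j]])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d >= threshold:          # no pair is indistinguishable any more
            break
        units[i] = units[i] + units[j]   # merge: the new unit pools both members
        del units[j]
    return units
```

Merging a pair simply pools the member sequences, so the counts of the new unit are the sums of the members' counts, exactly as described above.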
We have two types of records: Irregular, classified by specialists in fraud, and Other, which are the records not classified as Irregular. That is, the irregular cases have already been classified, since they were identified by the fraud-detection system of the company. The other cases appear to be normal but could have been missed by the system used for fraud detection. The monthly energy consumption sequence of each client i, x_{i,1}^{q_i}, is discretized in order to identify it with a sample of a Markov stochastic process (Z_{i,t})_t of finite order o on the discrete and finite alphabet A, for i = 1, ..., 8381.

The first inspection seeks to identify clusters among the Irregular clients; this classification could point to specific fraud practices. So we determine {I_1, ..., I_{k_I}}, the clusters of irregular clients, by applying Algorithm 1 to the set of irregular clients. For the group Other we also determine the clusters, say {O_1, ..., O_{k_O}} (by applying Algorithm 1 to Other). In this way we can classify customers into consumption practices. Once the Irregular clusters have been constructed, it is possible to quantify the dissimilarity between them; this is done by means of dmax as described in the previous section (see Remark 2), computing dmax(I_i, I_j), i ≠ j, i, j ∈ {1, 2, ..., k_I}. In a second stage, we compare the behavior of the clusters O_l, l = 1, ..., k_O, with the irregular ones, computing dmax(I_i, O_l). We make this comparison in order to identify which clusters of Other could be considered indistinguishable from some irregular cluster, which happens when dmax(I_i, O_l) < 1. Such a comparison generates a risk index in the class {O_1, ..., O_{k_O}} that guides the inspections of the company in that class. For each client t ∈ O_{i_t} we define, through equation (3), a risk a_t built from the values dmax(I_i, O_{i_t}), i = 1, ..., k_I. In this way, the values obtained from equation (3) for the clients in the class Other (Tab. 1) are {a_v}_{v=1}^{7828}. By construction, for each client t in Other there exists a unique i_t ∈ {1, ..., k_O} such that t ∈ O_{i_t}, which makes equation (3) well defined. Denote by a_v(1), a_v(2), ..., a_v(7828) the values ordered in an increasing way. Thus, the client that receives the value a_v(1) is the one with the highest risk and the one that receives the value a_v(7828) is the client with the lowest risk, taking into account that the threshold equal to 1 allows us to pay attention only to clients whose values fall in [0, 1). Figure 1 illustrates the situation.

Results

We compare the consumption series through a discretization that considers four possible states and reflects the behavior of the series in relation to the magnitude of the consumption in the last measurement (at time t) when compared with the two previous measurements (times t−1 and t−2). For each client i with consumption series x_{i,1}^{q_i}, equation (4) defines the sample z_{i,1}^{n_i} of the discrete process (Z_{i,t})_t in terms of these comparisons. Then, A = {1, 2, 3, 4} and |A| = 4. In the set of sequences, the smallest sample size is 39, and the order adopted is o = 2, so that each state concatenates two symbols of A. After identifying 12 Irregular clusters, we explore the dissimilarity between them by computing the values of dmax between the clusters (see Remark 2). Table 3 shows the results. In a stochastic way, the table quantifies the differences between the fraud practices classified by the company. With the purpose of exploring the dynamics of each irregular cluster, we fit a PMM to each cluster. This leads us to describe the meaning of each possible state of the (Z_{i,t})_t process. In Table 4, we report the relation of the states s ∈ S with the consumption.
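A sketch of how the risk of equation (3) can be assigned and ordered is shown below; since the explicit expression of equation (3) is not reproduced above, the sketch assumes the risk of a client is the smallest dmax between its cluster O_{i_t} and the irregular clusters. This is consistent with the statement that values of dmax close to zero indicate the highest risk, but it remains an assumption.

```python
def client_risk(other_clusters, irregular_clusters, dmax, client_to_cluster):
    """Assign a risk value to each client of the class Other.

    other_clusters     : list of clusters O_1, ..., O_{k_O}, each a list of sequences
    irregular_clusters : list of clusters I_1, ..., I_{k_I}, each a list of sequences
    dmax               : callable on two clusters, as in the earlier sketches
    client_to_cluster  : dict client id -> index i_t of the cluster O_{i_t} containing it
    """
    risk = {}
    for client, o_idx in client_to_cluster.items():
        o_cluster = other_clusters[o_idx]
        # Assumed form of equation (3): smallest dmax against the irregular clusters.
        risk[client] = min(dmax(i_cluster, o_cluster)
                           for i_cluster in irregular_clusters)
    return risk

def clients_to_inspect(risk, threshold=1.0):
    """Order the flagged clients by increasing risk value (most risky first)."""
    flagged = [c for c, r in risk.items() if r < threshold]
    return sorted(flagged, key=lambda c: risk[c])
```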
Each possible state is composed of the concatenation of a and b in A, so the state is ab. By construction, the states relate the magnitudes of the energy consumption at times t−3, t−2, t−1 and t; thus, for example, the state ab = 13 means {X_t ≥ X_{t−1} & X_t ≥ X_{t−2}} (associated with b = 3) and {X_{t−1} < X_{t−2} & X_{t−1} < X_{t−3}} (associated with a = 1). Note that some combinations are not allowed by construction; those are 12, 22, 33, 43 (see Tab. 5).

Table 3. dmax between the irregular clusters {I_1, ..., I_{k_I}} (see Eq. (4) and Remark 2). In bold type, the three highest values.

Table 4. States s ∈ S and consumption behavior, coding Z_{i,t}; see equation (4).

Note that the states described in Table 4 indicate a decreasing or increasing ending of the trajectory, and this behavior is reflected in the partitions generated for each irregular cluster (see Tabs. 6 and 7), with only two possible exceptions, for states that allow maintenance of consumption. Consider two large groups: that of increasing final trajectories, (a) 13, 14, 23, 24, 34, 44 (including increase/maintenance), and that of decreasing final trajectories, (b) 11, 21, 31, 32, 41, 42. We see that all models (except in two situations) have separated the states into these two large classes. That is, in each part of each model we only find states of one type. For example, let us take I_11: it is composed of 5 parts. From the magnitudes of the estimated probabilities (Tabs. 6 and 7) we see that the clusters show two preferences (in bold): for state 1 in the cases I_1, I_2, I_3, I_4, I_7, I_9, I_12, and for state 4 in the remaining cases, I_5, I_6, I_8, I_10, I_11. State 1 indicates a decrease in consumption at time t in relation to the previous instants t−1 and t−2, and state 4 indicates increase/maintenance of consumption at time t in relation to the previous instants t−1 and t−2. Moreover, for all cases I_i, i = 1, ..., 12, the first two choices (the two highest probabilities) fall in states 1 or 4. Note that when the preference is state 1, the past states (elements of the parts) end in 1 or 2; that is to say, according to the classification given in Table 4, the process was already in a decreasing final trajectory (except for I_3). When the preference is state 4, the past states (elements of the parts) end in 3 or 4; that is, according to the classification (Tab. 4), the process was in a maintenance or increasing trajectory.

The group Other is divided by Algorithm 1 (coding Z_{i,t}, equation (4)) into 391 clusters, so k_O = 391. As the purpose of this paper is to identify those customers in the Other category that resemble an irregular cluster, we proceed to measure this similarity. For each I_i we calculate dmax between that irregular cluster and the clusters O_j, j = 1, ..., k_O. Table 8 summarizes the obtained values. In Table 9, we report which O_j clusters behave as irregular. The lower the value of dmax on the right, the higher the risk of the group O_j, as it becomes indistinguishable from an irregular cluster. We note that there are 63 clients that deserve a detailed inspection, since their risks are pronounced (dmax values below 0.7). And certainly, the priority is for the 36 with dmax approximately zero.
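The grouping of states by their final trajectory, used above to read the estimated partitions, can be expressed as a one-line rule on the last symbol of the state; the snippet below is only a restatement of that rule.

```python
# A state ab belongs to the "increasing/maintenance" group when its last symbol b
# is 3 or 4, and to the "decreasing" group when b is 1 or 2 (combinations 12, 22,
# 33, 43 do not occur by construction).
def final_trajectory(state):
    """Classify a state ab of the Z coding by its final movement."""
    b = state[-1]
    return "increasing/maintenance" if b in ("3", "4") else "decreasing"

increasing_states = ["13", "14", "23", "24", "34", "44"]
assert all(final_trajectory(s) == "increasing/maintenance" for s in increasing_states)
```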
As set forth in Table 4, the Irregular processes define their minimal units (the parts of the partitions) according to the type of final trajectory, (a) increasing/maintenance final trajectories and (b) decreasing final trajectories, which leads us to inspect the consumption series via a representation showing that trend. The following subsection is intended for this purpose.

Increasing and decreasing movements

Based on these findings we introduce a complementary coding that gives us another perspective on the study. For that, we consider two movements in the consumption series. For each client i with series x_{i,1}^{q_i}, equation (5) defines the sample y_{i,1}^{n_i} of the discrete process (Y_{i,t})_t in terms of the movement (increase/maintenance or decrease) of the consumption from one measurement to the next.

Table 7. PMM for the irregular clusters I_i, i = 7, ..., 12; see equation (4) and Remark 1. In bold type, the highest probability for the cluster.

Table 10. States s ∈ S and consumption behavior, coding Y_{i,t}; see equation (5).

Then, A = {0, 1} and |A| = 2. We adopt the memory o = 2 in order to facilitate the interpretation, in concordance with the previous inspection. See the meaning of the states of the process (Y_{i,t})_t in Table 10. In Table 11 we show the k_I = 22 clusters defined by Algorithm 1 in the Irregular class (Tab. 1), I_i^{0-1}, i = 1, ..., 22, the clusters derived from the coding Y_{i,t} (see equation (5)). We see that, in relation to the irregular clusters obtained via the Z_{i,t} encoding, the Y_{i,t} encoding almost doubles the number of irregularity modalities. While Table 2 reports only 3 of 12 (25%) cases with d* < 0.5, Table 11 reports 12 of 22 (55%) cases with d* < 0.5, which explains the increase in the number of groups reported in Table 11. In the Appendix, Tables 16 and 17, we report the PMM for each Irregular cluster derived from the coding (Y_{i,t})_t. We report 14 models with only two parts, 7 with 3 parts and 1 model with 4 parts. The states 00 and 11, which reiterate a trend of consecutive decrease in energy consumption or of consecutive increase/maintenance, are found in separate parts in all models except in four of the I_i^{0-1} clusters. Table 12 shows the results. According to the 0-1 coding, the only cluster I_i^{0-1} that is not associated with any element of the class Other (see Tab. 1) is I_20^{0-1}, which has 68 clients. We must not lose sight of the fact that the risk increases when the values of dmax are close to zero, and only those cases need to be identified. All criteria are asymptotic, so they should be considered with caution; that is to say, cases with dmax near the threshold 1 can wrongly point to cases that are in fact regular. From the results reported in Table 13, we see that, in relation to the meaning given by the Y_{i,t} coding (see Tab. 10), the total number of cases in the class Other that can be identified with irregular clusters increases considerably. These results could indicate the relevance of the memory of the process, since coding 1-4 reaches further into the past than coding 0-1 (compare Tabs. 4 and 10), being able to separate in a more realistic way the class Other from the class Irregular. As the discretizations given by equations (4) and (5) lead to simplifications of the original information, we proceed to consider both for the classification of clients. In the next subsection we take both codings into account and propose a strategy for the inspection of potentially fraudulent customers.
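A sketch of the binary movement coding is given below; equation (5) itself is not reproduced above, so the assignment Y_t = 1 for increase/maintenance and Y_t = 0 for decrease is an assumption consistent with Table 10, where 11 reads as consecutive increase/maintenance and 00 as consecutive decrease.

```python
def binary_coding(x):
    """Binary movement coding in the spirit of equation (5).

    Assumption: Y_t = 1 when consumption increases or stays the same
    (X_t >= X_{t-1}) and Y_t = 0 when it decreases (X_t < X_{t-1}).
    """
    return [1 if x[t] >= x[t - 1] else 0 for t in range(1, len(x))]

# Usage: with memory o = 2, the states of (Y_t) are the pairs 00, 01, 10, 11.
consumption = [120.0, 118.5, 118.5, 131.2, 125.0]
y = binary_coding(consumption)   # -> [0, 1, 1, 0]
```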
Risk of clients from two codifications

It is always wise to keep in mind that the representations given by the processes (Z_{i,t})_t and (Y_{i,t})_t (see Eqs. (4) and (5)) only capture certain aspects of the original consumption series. As they reveal complementary information, in this subsection we consider both to guide the decision-making process in the search for undetected frauds. We introduce a function that allows a risk classification integrating both codings, generated by equation (3). Using the process (Z_{i,t})_t (see equation (4)) we obtain the marginal risk a_t of equation (6), and using the process (Y_{i,t})_t (see equation (5)) we obtain the marginal risk b_t of equation (7); in both cases the values are reported from left to right in increasing order. a_t and b_t represent marginal risks of client t, since a_t depends on Z_{i,t} (see equation (4)) and b_t depends on Y_{i,t} (see equation (5)). Moreover, we can include other representations in the risk-definition process, according to the information provided by the inspection. In Table 14 we report the number of cases in the class Other (see Tab. 1) by risk bands. If we consider as low risk those clients with dmax near 1 or above, there are 79 risky cases that should be inspected, the cases inside the set [0, 0.9] × [0, 0.9] (in bold, Tab. 14). As exemplified in Table 14, various representations of the original information can be integrated into the definition of a client's risk; in this case we have adopted two, which have revealed the need to first inspect 79 clients, according to both representations. If the cases indicated for inspection are many, given the availability of the company, customer selection criteria such as the one described in [5] may be applied. Reference [5] shows that, through a robust criterion, it is possible to select a representative client of the cluster that could be inspected first. In the following subsection, we analyze the ability of the proposed strategy (see Eq. (3)) to detect fraud under each type of discretization, (4) and (5).

Assertiveness of classification

We reserve this subsection to identify the predictive capacity of the classification given by Algorithm 1. What interests us is the quality of the classification of Irregular customers, as these have gone through rigorous inspection processes and have been confirmed as fraud. The database of our inspection is given in Table 1; we proceed as follows. Consider the clusters defined by Algorithm 1 in the Irregular class, I_1, I_2, ..., I_{k_I}; (i) randomly select s% of the irregular customers, say t_{i_1}, ..., t_{i_{s k_I/100}}; (ii) apply Algorithm 1 to the Irregular class (without the clients selected in (i)) and denote the resulting clusters by I'_1, I'_2, ..., I'_{k'_I}; (iii) for each selected client t_{i_j}, belonging to the original cluster I_{t_{i_j}}, find the cluster I'_{t_{i_j}} such that |I'_{t_{i_j}} ∩ I_{t_{i_j}}| ≥ |I'_i ∩ I_{t_{i_j}}| for all i ∈ {1, ..., k'_I}; (iv.a) compute d(t_{i_j}) = dmax(t_{i_j}, I'_{t_{i_j}}) and (iv.b) record |{j ∈ {i_1, ..., i_{s k_I/100}} : d(t_j) < 1}|. Note that the cluster I'_{t_{i_j}} verifying (iii) can be considered the most indicated cluster for the client t_{i_j}, since the client t_{i_j} is a member of I_{t_{i_j}} and the sets I'_{t_{i_j}} and I_{t_{i_j}} share the largest number of customers. Note that, under both discretizations, the average percentage of successes is greater than 77% and the minimum percentages exceed 65% in the three settings per discretization; see Table 15.

Conclusion

In practical terms, this article deals with the capacity that discretizations have to extract relevant information contained in series of observations.
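The hold-out check of steps (i)-(iv) can be sketched as follows; the code reuses the cluster_sequences function and the assumed dmax callable from the earlier sketches, and the data structures chosen (index sets for clusters) are illustrative only.

```python
import random

def assertiveness(sequences, original_clusters, dmax, s=10, threshold=1.0):
    """Sketch of the hold-out procedure (i)-(iv) for the Irregular class.

    sequences         : list of discretized sequences, one per irregular client
    original_clusters : list of sets of client indices (the clusters I_1, ..., I_{k_I})
    dmax              : as in the earlier sketches
    """
    clients = list(range(len(sequences)))
    n_hold = max(1, len(clients) * s // 100)
    held_out = set(random.sample(clients, n_hold))                    # step (i)
    remaining = [c for c in clients if c not in held_out]

    # Step (ii): re-cluster without the held-out clients.
    new_units = cluster_sequences([sequences[c] for c in remaining], dmax, threshold)
    new_clusters = [set(remaining[k] for k in unit) for unit in new_units]

    hits = 0
    for c in held_out:
        original = next(cl for cl in original_clusters if c in cl)
        # Step (iii): new cluster sharing the most clients with the original one.
        assigned = max(new_clusters, key=lambda cl: len(cl & original))
        # Step (iv): the held-out client should be indistinguishable from it.
        if dmax([sequences[c]], [sequences[k] for k in assigned]) < threshold:
            hits += 1
    return hits / n_hold
```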
Such discretizations make it possible to use and adapt tools from discrete stochastic processes. Through the metric of Definition 1 (see [1]), it is possible to measure the similarity/discrepancy between samples of discrete stochastic processes, and this metric is statistically consistent for establishing such similarity/discrepancy. Based on the metric, this article proposes Algorithm 1, which defines clusters of samples, where each cluster contains the sequences that respond to the same stochastic law. From the previously demonstrated properties, see [1], the clusters are assembled consistently and represent the different profiles associated with the sequences inside them. To identify how these profiles operate, we use the Partition Markov Models (Definition 2, [2]), which, by means of a metric (Definition 3), are consistently estimated using the sequences located in the cluster. We generate a model for each cluster, which gives the minimal representation of the state space (the partition) and the transition probabilities to each element of the alphabet. Based on all these elements, we deal with a real problem in which there are sequences of observations of the energy consumption of two groups, (i) Irregular and (ii) Other (see Table 1). We define two types of discretization ((4) and (5)); through them we proceed to identify the clusters of (i) that group similar consumption practices, and we do the same with (ii). Through Partition Markov Models, we represent the stochastic profile of each cluster of (i) (see Remark 1). We identify which clusters of (ii) are confused with the clusters of (i) (see Remark 2), which allows us to point out the cases in (ii) that deserve inspection. The success rates of the classification given by the procedure are high, as shown in the study of Section 3.5, and on average they exceed 77%. This whole procedure allows us to establish risk indicators for (ii) and also an ordering that indicates the most and least serious cases. We see then that, by means of two discretizations, it is possible to point out cases to be reviewed, according to the magnitudes of the notion in equation (3); our results (Table 14) indicate that 79 cases should go through revision, according to (6) and (7). For additional details, see [6].

Table 14. Number of clients by interval, a_t given by equation (6) and b_t given by equation (7). In bold, the number of cases with high risk.
7,054.6
2020-01-01T00:00:00.000
[ "Computer Science" ]
Investigating the Binding Efficacy of Snake Venom Proteins as GLP-1 Analogs for Diabetes mellitus Management: An In silico Study

Objective: Diabetes mellitus (DM) is a metabolic condition defined by hyperglycemia driven by insulin deficiency or decreased insulin activity. GLP-1, a gut hormone, stimulates insulin production and reduces hepatic glucose synthesis to regulate diabetes. GLP-1 agonists enhance insulin sensitivity and decrease blood glucose to relieve symptoms of DM. These medications represent a novel paradigm for managing diabetes, as they improve glycaemic control in type 2 diabetic patients. Snake venom proteins have been investigated as a potential medicinal strategy for diabetes treatment. These proteins contain a multitude of bioactive constituents, such as insulinotropic cytotoxins, which have been found to influence insulin secretion and glucose homeostasis.

Methods: In the present study, the snake venom proteins long neurotoxin 1, Cytotoxin 7, Cytotoxin 2a, and Cytotoxin 10 were modeled and their therapeutic efficacy as GLP-1 analogs was determined by employing molecular docking techniques. The binding of the snake venom proteins toward the GLP-1 receptor was compared against the positive controls (Exenatide, Liraglutide, Semaglutide, and Lixisenatide).

Results: The results demonstrated that the cytotoxins (Cytotoxin 2a, Cytotoxin 7, and Cytotoxin 10) exhibited binding comparable to the positive controls and interacted mainly with the hydrophobic amino acids in the binding pocket of the GLP-1 receptor. The modeled snake venom toxins demonstrated beneficial physicochemical properties, supporting them as novel contenders for the development of GLP-1 analogs.

Conclusion: Despite these beneficial outcomes, the utilization of snake venom proteins as therapeutic agents for diabetes is still in its initial stages, and additional research is required to assess their effectiveness and safety in patients.
Introduction

Diabetes mellitus (DM) is a chronic metabolic illness characterized by increased blood glucose levels and a wide range of comorbidities 1. It primarily manifests in two forms: type 1 and type 2 diabetes. In both cases, there are abnormalities in insulin signaling and glucose regulation. Type 1 diabetes arises from the pancreas's inability to synthesize adequate insulin to control glucose levels, whereas type 2 diabetes occurs when cells become resistant to the actions of insulin. Glucagon-like peptide-1 (GLP-1) is a hormone secreted by intestinal L-cells in response to food consumption. It plays a vital role in glucose management and exhibits various physiological functions, including the regulation of insulin and glucagon secretion, as well as the slowing of gastric emptying 2,3. These actions contribute to the maintenance of glucose levels and the enhancement of insulin sensitivity. GLP-1 exerts its effects on glucose regulation by stimulating insulin production from the pancreas [4][5]. It achieves this by binding to GLP-1 receptors on beta cells, which results in an increase in intracellular cyclic adenosine monophosphate (cAMP) levels. The elevation in cAMP activates the insulin secretion pathway, leading to the release of insulin into the bloodstream 6. Increased insulin secretion helps lower blood glucose levels and improves the uptake of glucose by cells. Consequently, this controlled and sustained response to glucose following meals facilitates the regulation of blood sugar levels. Beyond its insulinotropic effects, GLP-1 also inhibits the production of glucagon from pancreatic alpha cells. Glucagon is a hormone that stimulates the liver to produce glucose. By suppressing glucagon secretion, GLP-1 further contributes to the reduction of blood sugar levels. This dual action of GLP-1, stimulating insulin release and inhibiting glucagon secretion, helps maintain glucose homeostasis in the body. GLP-1 also plays a role in delaying gastric emptying. By slowing down the rate at which the stomach empties its contents, GLP-1 ensures a more gradual increase in blood glucose levels after a meal [7][8]. This delayed gastric emptying allows for better control of postprandial glucose levels and helps prevent sudden spikes in blood sugar. By modulating the rate of nutrient absorption, GLP-1 aids in achieving more stable and controlled blood glucose responses 9. The involvement of GLP-1 in glucose management is crucial. Its stimulation of insulin secretion, inhibition of glucagon production, and delay of gastric emptying collectively contribute to the regulation of blood glucose levels. Medications known as GLP-1 receptor agonists, which mimic or enhance the effects of GLP-1, have been developed as an effective treatment option for diabetes [11][12][13].
GLP-1 also has a multitude of other significant benefits in DM management. GLP-1, for instance, has been demonstrated to reduce blood pressure, decrease body mass, and improve insulin sensitivity 14. The propensity of GLP-1 to regulate glucagon release, its insulinotropic actions, and its corresponding impact on the hypothalamus, where it stimulates pathways that govern energy balance and food intake, are all perceived to contribute to these ramifications 15. Several GLP-1-based medicines for type 2 diabetes mellitus (T2DM) have been developed, including GLP-1 receptor agonists and dipeptidyl-peptidase 4 (DPP-4) inhibitors, which augment endogenous GLP-1 bioavailability by blocking its disintegration in individuals with T2DM, subsequently lowering HbA1c levels and minimizing the risk of cardiovascular events [16][17][18][19][20]. They work by prolonging the activity of endogenous GLP-1, resulting in better glucose control and insulin sensitivity. DPP-4 inhibitors enhance the quantities of active GLP-1 in circulation by preventing GLP-1 breakdown, resulting in better insulin production and glucose management 2. Traditionally, GLP-1 receptor agonists such as exenatide [21][22][23], lixisenatide 24, semaglutide 25,26, liraglutide 27, etc., are administered to emulate the properties of GLP-1 by binding to and activating GLP-1 receptors. They function by promoting insulin production, blocking glucagon release, and delaying gastric emptying, which leads to improved glucose regulation and insulin responsiveness 28.

Among the prevalent therapies, the administration of exogenous insulin is associated with several limitations, including the risk of hypoglycemia, dose optimization, insulin resistance, scar development, and the requirement of multiple doses, which directly influence efficacy and safety. These limitations emphasize the necessity of complementary therapies that can enhance glycaemic control and the quality of life of individuals with DM. There is significant interest in researching alternative medicines such as oral drugs, non-insulin injectables, and glucose-responsive insulin to effectively treat DM. Alternative therapies intend to grant more practical and efficient therapeutic interventions whilst reducing the probability of long-term consequences related to insulin therapy 29.
Currently, the pharmaceutical efficacy of venom-derived therapeutics is apparent, as there are notable FDA-approved products, while a multitude of venom-derived products are under clinical trials to establish their therapeutic efficacy. Proteins and peptides constitute the majority of the dry weight of snake venom and are of particular importance for biomedical studies. Snake venom contains enzymatic and non-enzymatic proteins and peptides that are classified into various groups depending on their structural architecture and function 30. Representatives of a single family possess considerable resemblance in their primary, secondary, and tertiary architectures, despite having diverse pharmacological functionalities and bioactivities. The application of snake venom in numerous pathophysiological circumstances has been referenced in Ayurveda, homeopathy, and traditional medicine. Snake venom comprises a variety of neurotoxins, cardiotoxins, cytotoxins, nerve growth factors, lectins, disintegrins, hemorrhagins, and enzymes 31,32. These proteins can be used to address thrombosis, rheumatism, carcinoma, diabetes, and other ailments, in addition to inflicting death. When compared to synthetically produced molecules, the effectiveness of venom-derived substances can be attributed to their higher bioactivity, selectivity, and stability 33.

GLP-1 is a propitious target for the management of DM because it plays an important function in glucose metabolism in the body. Peptide-based medications offer an optimistic strategy for increasing GLP-1 secretion in diabetics and represent a novel therapeutic and management option. Snake venom proteins have been investigated for their putative relevance in the management of diabetes and its comorbidities, and the promising role of exendin-4 in mimicking naturally occurring GLP-1 has motivated the present research to model comparable peptides that could help alleviate DM.

Methodology

Ligand preparation

The hypoglycaemic GLP-1 analogs belonging to the insulin secretagogue drug class were appraised as positive controls in the present study. The drugs exenatide (PubChem CID: 45588096), liraglutide (PubChem CID: 16134956), semaglutide (PubChem CID: 56843331), and lixisenatide (PubChem CID: 90472060) were downloaded from the PubChem database (https://pubchem.ncbi.nlm.nih.gov/) as 2D SDF files 34. The structures of the positive controls were preliminarily analyzed and prepared through protonation in MarvinSketch. The Minkowski matrix was implemented for MMFF94 energy minimization of the ligands using the divide-and-conquer algorithm 35.

Retrieval of protein structures

Comprehending the structural architecture of proteins is crucial in determining their biochemical properties and cellular functions. In the present study, cytotoxins and neurotoxins with fewer than 100 amino acids were selected to build the GLP-1 analogs. The FASTA sequences of the venom toxins from Naja naja (Indian cobra), including long neurotoxin 1 (UniProt ID: P25668), cytotoxin 7 (UniProt ID: P86382), cytotoxin 2a (UniProt ID: P86538), and cytotoxin 10 (UniProt ID: P86541), were retrieved from UniProt 36 (https://www.uniprot.org/) for homology modeling, as the 3D structures were unavailable in the PDB. The 3D structure of the GLP-1 receptor protein (PDB ID: 5VEW) was retrieved from the PDB databank (https://www.rcsb.org/). The structure was resolved at 2.70 Å using the X-ray diffraction technique 37.
Homology modeling

Homology modeling enables the construction of a stable 3D architecture for a protein by identifying suitable templates to model the structure. In the present study, SWISS-MODEL (https://swissmodel.expasy.org/interactive) 38 was used to construct the 3D structures of the venom proteins, wherein 6zfm.1.A (alpha-cobratoxin), 4om4.1.A (cytotoxin 2), 1h0j.2.A (cardiotoxin 3), and 2bhi.2.A (cardiotoxin A3) were taken as templates to construct the 3D structures of long neurotoxin 1, cytotoxin 2a, cytotoxin 7 and cytotoxin 10, respectively. All the structures demonstrated more than 80% structural similarity with their templates. The modeled structures were downloaded in PDB format and the quality of the structures was analyzed based on the GMQE, QMEANDisCo, and Z-score parameters.

Structural analysis of the modeled peptides

The physicochemical properties of the modeled long neurotoxin 1, cytotoxin 2a, cytotoxin 7, and cytotoxin 10 were examined with the ProtParam webserver (https://web.expasy.org/protparam/) 39. The physicochemical variables, such as hydropathicity (GRAVY), molecular weight, amino acid composition, and aliphatic index, which determine protein structure and stability, were computed using the webserver. The GRAVY score is computed by adding up the hydropathy values of all of the amino acids and dividing that total by the number of residues in the sequence.

Molecular docking and visualization

The modeled proteins long neurotoxin 1, cytotoxin 2a, cytotoxin 7, and cytotoxin 10 and the GLP-1 receptor protein (PDB ID: 5VEW) were prepared before docking in BIOVIA Discovery Studio 40. The non-structural water molecules and unwanted hetero atoms were eliminated from the protein structures; this was followed by the addition of polar hydrogen atoms. The prepared structures were saved in PDB format for further investigations. Molecular docking is a computational method used to predict the binding mode of a ligand to a target protein. HDOCK (http://hdock.phys.hust.edu.cn/) is a powerful tool for the molecular docking of proteins, allowing researchers to predict the binding modes of ligands to proteins and providing crucial information for drug discovery and design. HDOCK is a high-performance molecular docking program that utilizes a hybrid molecular dynamics and genetic algorithm approach to predict the binding of a ligand to its target protein. The program starts by generating a pool of initial ligand conformations and using molecular dynamics simulations to explore their binding energies and stabilities. The genetic algorithm then takes over, using this information to evolve the best-performing ligand conformations and refine their positions. Finally, the docking results are ranked based on their binding energies and other relevant parameters 41. The best models were downloaded based on the docking score and the structures were visualized for the various molecular interactions at the binding site of the receptor protein using BIOVIA Discovery Studio 42.
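Because the GRAVY value described above is a plain average of residue hydropathies, it is easy to reproduce; the sketch below uses the Kyte-Doolittle hydropathy scale, which is the scale ProtParam relies on, and the example sequence is illustrative rather than one of the modeled toxins.

```python
# Kyte-Doolittle hydropathy values, the scale used by ProtParam for GRAVY.
KD = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def gravy(sequence):
    """GRAVY = sum of hydropathy values of all residues / number of residues."""
    sequence = sequence.upper()
    return sum(KD[aa] for aa in sequence) / len(sequence)

# Illustrative peptide fragment, not one of the modeled toxins:
print(round(gravy("LKCNKLVPLFYKTCPAGKNLCYKMFM"), 3))
```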
Molecular dynamic simulation

From the results of molecular docking, it is evident that the protein P86538 demonstrated better binding among the venom proteins and had results comparable to the positive controls. Therefore, the docked complex P86538-GLP-1 was subjected to molecular dynamic simulation (MDS) to examine the stability of the complex. The MDS was performed by employing the GROMOS96 54a7 43 force field with the GROMACS package 44 in the Linux operating system, with a graphical interface and the default variables. The protein topology files were prepared and the simulation was carried out in a cubic box. The complex was subjected to energy minimization followed by equilibration, employing the steepest-descent approach. In the initial phase of the heating period, the system was maintained at 300 K in the NVT ensemble. The complex was then constrained for 5 ns while gradually allowing the solvent to settle around it, followed by another 10 ns of NPT equilibration with gradual removal of the restraints. A Berendsen barostat was used to keep the pressure constant while maintaining the average temperature and pressure level 45,46. The equilibrated systems were then subjected to a 100 ns production run, with MM-PBSA analysis over the last 50 ns, while maintaining a pressure of 1 bar and a temperature of 300 K. The MD simulation was used to determine the fluctuations of the complex's RMSD, RMSF (root mean square fluctuation), radius of gyration (Rg), SASA (solvent accessible surface area), and hydrogen bond count.

Ligand retrieval

The drugs administered as GLP-1 agonists, including exenatide (PubChem CID: 45588096), liraglutide (PubChem CID: 16134956), semaglutide (PubChem CID: 56843331) and lixisenatide (PubChem CID: 90472060), were taken as positive controls in the present study. The SDF structures of the approved drugs were downloaded from the PubChem database and prepared in MarvinSketch. These drugs belong to the class known as glucagon-like peptide-1 (GLP-1) receptor agonists, which mimic the action of naturally occurring hormones called incretins in the body. In individuals with type 2 diabetes, the body does not produce enough insulin or is unable to use insulin effectively, leading to high blood sugar levels. These drugs work by increasing the release of insulin from the pancreas and reducing the production of glucose by the liver. They also suppress appetite, leading to a reduction in food intake and weight loss, which can be beneficial in managing T2DM. These drugs are generally administered as a subcutaneous injection and are typically used in combination with other diabetes medications, such as metformin or a sulfonylurea.

Structural analysis of modeled proteins

Assessing the physicochemical parameters is critical in determining the efficacy and bioavailability of drugs. The hydropathicity (GRAVY) of a peptide-based drug can greatly affect its efficacy and bioavailability 47. Hydrophobic peptides tend to be more effective when delivered through non-oral routes, as they tend to aggregate and form insoluble structures, making it difficult for them to dissolve and be absorbed in the gut when taken orally, while hydrophilic peptides tend to be more easily absorbed when taken orally but are more susceptible to degradation and clearance. The optimal hydropathy of a peptide-based drug will depend on the desired route of administration and the desired efficacy and bioavailability 48.
The aliphatic index of a peptide can affect its efficacy as a drug by influencing its hydrophobic character, stability, and bioavailability. Peptides with a high aliphatic index tend to be more hydrophobic and resistant to degradation but may have limited oral bioavailability, while peptides with a low aliphatic index tend to be more hydrophilic and have improved oral bioavailability but may be more susceptible to degradation and rapid clearance from the body. The optimal aliphatic index for a peptide-based drug will depend on the desired route of administration and the desired efficacy and bioavailability 49.

The half-life of a peptide-based drug can affect its efficacy by influencing the duration and stability of its effects in the body. Peptides with a long half-life tend to provide a sustained and stable effect but may be more susceptible to accumulation and adverse effects, while peptides with a short half-life tend to provide a more rapid and transient effect but may have a higher risk of rapid clearance and limited efficacy. The optimal half-life for a peptide-based drug will depend on the desired duration and stability of its effects in the body and the risk of adverse effects. The physicochemical properties of the modeled proteins are documented in Table 1.

Molecular docking

The molecular docking analysis was performed to ascertain the binding of the modeled GLP-1 analogs (venom proteins) in the binding pocket of the GLP-1 receptor. From the results of molecular docking, it is evident that the positive controls (exenatide, liraglutide, semaglutide, and lixisenatide) demonstrated significantly better binding than the modeled proteins (P25668, P86541, P86538, and P86382) (Table 2). Amongst the modeled proteins, P86538, P86541, and P86382 displayed binding comparable to the positive controls.

Visualization

From the docking analysis (Table 2) it is evident that the positive controls exenatide and liraglutide exhibited better docking scores, and the venom proteins P86541 (cytotoxin 10) and P86538 (cytotoxin 2a) had results comparable to the positive controls. These associations were visualized at the molecular level to identify the interactions at the binding pocket. It was noticed that the protein P86541 predominantly interacted with hydrophobic amino acids like LEU, ALA, and PHE (Fig. 5), while the protein P86538 interacted with ILE, VAL, LEU, ALA, and PHE (Figure 6).

Molecular dynamic simulation

The molecular dynamic simulation was performed on the P86538-GLP-1 complex to demonstrate its stability. The RMSD graph is a plot of RMSD values over time, which provides insights into the stability and conformational changes of the simulated system. For the P86538-GLP-1 complex it was noticed that the system remained stable from ~35 ns to ~65 ns; following a minimal transition, the system remained stable from ~70 ns to 100 ns at an RMSD of 0.5. The RMSF graph is used to identify the flexible regions of the protein, which are important for function or for interaction with other molecules. The peaks in the RMSF graph represent regions that are highly flexible. In general, secondary structure elements like loops and terminals are highly flexible and exhibit high RMSF; conversely, helices and beta sheets are more rigid (Fig. 8).
The compactness of the macromolecule is determined with radius of gyration (Rg) plots, which represent the average distance of all the atoms in the molecule from its center of mass. It was observed that the Rg remained relatively constant over the period of the simulation, suggesting that the protein maintained its overall structure and shape (Fig. 9). The SASA values remained relatively constant from ~30 ns to ~100 ns, indicating that the molecule was stable during the simulation (Figure 10).

Diabetes mellitus is a chronic metabolic disorder characterized by elevated blood glucose levels and a wide range of associated complications. GLP-1 is an interesting target for the management of DM because it plays an important function in the control of blood sugar levels in the body. GLP-1 improves glucose management and lowers the risk of related problems by influencing insulin secretion, glucagon secretion, and stomach emptying. Peptide-based medications offer a promising strategy for increasing GLP-1 secretion in diabetics and represent a novel therapeutic and management option for this complex and prevalent condition. The potential significance of snake venom proteins in the management of diabetes and related comorbidities, as well as the promising role of exendin-4 in replicating naturally occurring GLP-1, has inspired the current study to create comparable peptides that could aid in the treatment of diabetes. Insulin therapy is a standard treatment for diabetics that involves the administration of exogenous insulin to assist in regulating blood glucose levels. While insulin therapy can be beneficial in regulating glucose levels, it has various limitations that can impair its efficacy and safety. Dose dependence, the development of resistance, and the risk of hypoglycemia have prompted researchers to look for alternative treatments.

Nature has long been a fascinating source for pharmacological drug design. Insects and reptiles are significant repositories for discovering substances with promising therapeutic benefits 50. Several bioactive proteins and peptides have been isolated from the venom of numerous serpents, beetles, millipedes, lizards, scorpions, mollusks, etc. Snake venom proteins are the subject of current research, as several pharmaceutically valuable proteins have already been identified and described from snake venoms 51,52.
Peptides are an intriguing element of snake venom. Although otherwise deadly, some venom proteins can be employed directly as medications or as drug candidates when administered in the appropriate dosage. These peptides are extremely valuable due to their diverse and unique therapeutic potential, as well as their binding ability and specificity toward their targets. Snake venom toxins, like alpha-neurotoxins, have proven to be incredibly valuable in evaluating the composition and function of receptor proteins. Snake venom proteins are mainly stable molecules that can withstand the proteolytic conditions of the venom glands. Furthermore, the innate stability of these proteins enables them to reach their target receptors within the prey. Snake venom cytotoxins are attracting attention as possible therapeutic agents because of their unique properties and mechanisms of action. Originally evolved to immobilize prey and defend against predators, these cytotoxic proteins found in snake venom can also have therapeutic effects in certain circumstances. Promising results have been shown in the treatment of cardiovascular diseases, where snake venom cytotoxins have potent vasodilatory and anti-inflammatory effects, making them valuable for managing hypertension, heart failure, and angina. Additionally, some cytotoxins have anticoagulant properties, making them useful for preventing blood clots and treating thromboembolic disorders. Furthermore, snake venom cytotoxins have potent analgesic effects and could be promising candidates for managing conditions such as chronic pain, neuropathic pain, and diabetes. Snake venom neurotoxins are beneficial in managing neurological disorders such as Parkinson's and Alzheimer's diseases, epilepsy, stroke, and traumatic brain injury. Snake venom neurotoxins also have potent analgesic effects, and they are being investigated for their potential role in drug development. The modeled venom proteins have favorable physicochemical properties, making them promising contenders for the development of insulinotropic cytotoxin- and neurotoxin-based drug candidates that could mimic GLP-1. This area of investigation is ongoing, and scientists are exploring the potential of snake venom proteins to improve glycemic control and offer new hope to patients living with diabetes.
CONCLUSION

One of the key hormones involved in glucose regulation is GLP-1, which is indispensable for insulin secretion, the inhibition of glucagon secretion and the slowing of gastric emptying; these actions help to regulate glucose levels and improve insulin sensitivity, making GLP-1 an important target for the treatment of diabetes. GLP-1-based therapies have provided a new tool for the management of type 2 diabetes, but more research is needed to fully understand their mechanisms of action, safety, and efficacy, particularly in populations with comorbidities such as cardiovascular disease and obesity. Snake venom cytotoxins have garnered attention as potential therapeutic agents due to their unique properties and mechanisms of action. In the present study, the snake venom cytotoxins (cytotoxin 2a, cytotoxin 7, and cytotoxin 10) demonstrated results comparable to the positive controls. It is crucial to recognize that, while the therapeutic potential of snake venom cytotoxins is intriguing, more research is necessary to fully understand their mechanism of action and safety profile. While some cytotoxins have demonstrated promise in preclinical studies, they are still a long way from being considered a viable therapy for any condition. Further research is needed to determine the safety and efficacy of snake venom cytotoxins and to develop appropriate dosing regimens and delivery methods.

Fig. 1. 3D structure of long neurotoxin 1 (P25668): (a) the protein has 71 amino acids; the QMEANDisCo global value for the protein was 0.69 ± 0.05; (b) the modeled protein exhibited 94.24% of amino acids in the Ramachandran-favoured region with 3.46% rotamer outliers; (c) the Z-score for the modeled protein was less than 1, indicating that the modeled structure was close to the native structure
5,369
2023-06-30T00:00:00.000
[ "Biology", "Medicine" ]
Rational Conditions of Fatty Acids Obtaining by Soapstock Treatment with Sulfuric Acid

As a result of the alkaline neutralization of oils, a significant amount of soapstock is formed, the utilization of which creates an environmental and economic problem. The production of fatty acids from soapstock using sulfuric acid decomposition is investigated in this work. The peculiarity of the work is the determination of regression dependences of the yield and neutralization number of fatty acids on the soapstock processing conditions: temperature and duration. Soapstock obtained after the neutralization of sunflower oil was used as raw material. Soapstock indicators: mass fraction of moisture – 15.4 %, total fat – 71.9 %, fatty acids – 64.5 %, neutral fat – 7.4 %. Rational conditions of soapstock processing are determined: temperature (90–95) °С, duration 40 min. Under these conditions, the fatty acid yield is 79.0 % and the neutralization number is 180.0 mg KOH/g. Quality indicators of the obtained fatty acids: mass fraction of moisture and volatile substances – 1.8 %, mass fraction of total fat – 97.0 %, cleavage depth – 64.5 % of oleic acid, presence of mineral acids – none. The fatty acids correspond to first-grade fatty acids according to DSTU 4860 (CAS 61788-66-7). An increase in the temperature and duration of soapstock contact with sulfuric acid increases the yield and the neutralization number of the fatty acids. This is due to a decrease in the viscosity of the reaction medium, an increase in the depth of cleavage of soapstock soaps by sulfuric acid, and an increase in the intensity and duration of mass transfer. The developed rational conditions allow obtaining fatty acids from soapstock that correspond in composition to fatty acids from refined deodorized sunflower oil. The results allow solving a number of economic and environmental problems associated with soapstock utilization and can be implemented in oil refineries and fatty acid production.

Developments in reducing the negative impact of exhaust emissions on the environment are important. In [11], it was shown that even a minimum addition of biodiesel fuel in the amount of (2-5) % to petroleum fuel decreases the toxicity of diesel engine emissions. Thus, biodiesel is widely used in world practice to improve the ecological state of the environment. But an important aspect of biodiesel production is the issue of raw materials. In [12], it was noted that the cost of sunflower oil reaches 700 $/t, while the cost of soapstock (in terms of fats) is 130 $/t. Therefore, oils and fats are not a cost-effective type of raw material. The authors of [13] showed that a promising raw material for biodiesel fuel is fatty acids from the waste of alkaline neutralization of oils. In their studies, the yield of the obtained fatty acid methyl esters was (52-97) %. But there are still unresolved issues related to measures aimed at steadily increasing the ester yield. The ester yield significantly depends on the concentration and moisture content of the fatty acids used as raw material in biodiesel production. Therefore, it is necessary to develop rational conditions and a technology for obtaining high-quality fatty acids, which will ensure a high yield of high-quality fuel. Modern technologies for obtaining fatty acids from soapstock are based on decomposition by mineral acid. In [7], it was noted that sulfuric acid is most commonly used.
The use of hydrochloric acid for decomposition is irrational because its cost is higher than that of sulfuric acid. The use of nitric acid has shown that 20 % of the acid is spent not on the direct reaction of soap decomposition but on its reduction to nitrogen oxides. This causes increased acid consumption. Nitrate waters containing organic compounds (fatty acids) are also formed, which creates a risk of their accumulation in the soil. Therefore, this method is also not used. Thus, developments aimed at the efficient production of fatty acids from soapstock by treatment with sulfuric acid are relevant. In [14], the research results on the isolation of fatty acids by enzymatic and chemical methods are presented. Enzymatic isolation of fatty acids was performed in the presence of a culture of Yarrowia lipolytica and glycerol at 28 °C for 48 hours. The chemical method was to treat the soapstock with sulfuric, hydrochloric or orthophosphoric acid. But the disadvantage of the enzymatic method is the significant duration of the process, which makes it impractical. The variation of the parameters of the fatty acid isolation process by these two methods, as well as the qualitative indicators of the obtained fatty acids, in particular the neutralization number or their concentration in the obtained product, are not considered. The authors of [15] investigated the conditions of cotton soapstock decomposition with sulfuric acid with prior hydrolysis of the soapstock. The influence of the decomposition temperature and the acid solution concentration on the fatty acid yield and the gossypol yield is considered. But it is not shown how the process parameters affect the quality of the fatty acids. In addition, this technology is quite complex and multi-stage. In [16], the scientific results on the production of biodiesel, which is fatty acid butyl esters, are shown. Three methods of obtaining fatty acids from soapstock in order to obtain esters are considered: treatment with sulfuric acid; saponification and acid treatment; saponification, washing with sodium chloride solution and acid treatment. The disadvantage of the study is the lack of data on the influence of soapstock processing parameters on the yield and quality of fatty acids. This is important because the fatty acid indicators affect not only the quality and yield of esters, but also the profitability of biodiesel production from fatty acids derived from soapstock.

Soapstock is formed during the alkaline neutralization of oils and fats. Soapstock creates a problem with its processing, storage and disposal. Soapstocks contain moisture, soap, free fatty acids, neutral triacylglycerols, alkali and other substances. Disposal of soapstocks as household waste causes the problem of environmental pollution (soil, water, air of the waste discharge zone) [2]. Soapstocks contain a significant amount of fat (up to 70 %). In [3], it was noted that fats are capable of oxidation with the formation of toxic products and the release of large amounts of heat. But soapstock contains components that can be used in various industries. Soapstocks are used in soap making, in the production of surfactants and textile auxiliaries, as a defoamer, and for feed purposes [4]. Soapstock usually requires additional pre-treatment, concentration, and so on. The economic feasibility of using this type of waste must be calculated in each case. It is advisable to use soapstock to obtain fatty acids. Fatty acids are a valuable raw material for many industries. One of the priority issues in the world is to preserve natural resources and reduce the negative impact of industrial waste on the environment. To this end, new production technologies and methods of wastewater and air emission treatment are being developed, and new types of equipment are being used [5].
In [6], it is noted that the methods of industrial waste management can be classified into three options: reducing the source of pollution by modifying the technology, waste recovery, and waste processing to obtain valuable products or to neutralize unwanted components. A particularly important area is the rational use of waste that cannot be avoided. In addition to improving the environmental situation, this will increase the profitability of production through the sale not only of basic products but also of products derived from waste. Thus, the processing of soapstock to obtain fatty acids is an important area of research that helps to improve the environmental state and provide valuable raw materials for production processes in various areas.

Literature review and problem statement

Soapstock is a large-tonnage waste of the oil and fat industry. Upon obtaining 1 ton of refined oil, (10-20) % of soapstock by weight of oil is formed. The value of soapstock is due to the presence of fatty substances in the form of soaps, high-molecular-weight carboxylic acids and triacylglycerols. The most important component of soapstock is fatty acids. In [7], it was shown that fatty acids are used for the production of higher fatty alcohols, esters and amides, which are used in the production of surfactants, detergents and cosmetics. However, the profitability of production, the industry's need for these substances and the quality indicators of fatty acids used as raw materials remain open questions. The thermodynamic properties of fatty acid esters as fuel components are presented in [8]. The thermophysical properties of biodiesel fuel in the high-temperature gas phase have been studied for thermodynamic calculations of piston engine operating processes. In [9], the production and use of hydrocarbon-enriched fuel from soapstock were investigated. The high quality of the obtained product and the possibility of using it as an alternative to other fuels are noted. The authors of [10] noted that compliance with the toxicity standards for modern vehicles and special equipment is a topical issue. Various measures are applied: the effect on the engine working process (mixing and combustion processes) and on the exhaust gases (neutralization or purification using special systems).

2. Procedure for determining the quality indicators of the industrial soapstock sample

Organoleptic characteristics of soapstock are determined by the standard method according to DSTU 5033:2008 (method for determining color, consistency and odor). International methods for determining the organoleptic parameters: color – ISO 15305, consistency – AOCS Method Cc 16-60, odor – AOCS Cg 2-83. The mass fraction of moisture is determined by the standard method according to DSTU 4603:2006 (ISO 662). The mass fraction of total fat and the mass fraction of fatty acids are determined by standard methods according to DSTU 5033:2008 (ISO 17189, IDF 194).

3. Procedure of soapstock treatment with sulfuric acid solution

In this work, the treatment of soapstock with sulfuric acid was performed as follows. A portion of the soapstock was placed in a heat-resistant conical flask, and water with a temperature of 60 °C in the amount of 50 % by weight of the soapstock was added. The flask was mounted on an electric stove, and a stirrer was placed in the flask.
While stirring, a solution of sulfuric acid with a concentration of 40 % was added to the flask. The amount of sulfuric acid was adjusted so that excess sulfuric acid was maintained throughout the treatment process (monitoring was performed using the methyl orange indicator or litmus paper). After adding the sulfuric acid solution, the mass was stirred at a given temperature for a given time. The resulting mass was settled for 4-5 hours. The obtained upper layer of fatty acids was washed with hot water until complete removal of sulfuric acid, which was controlled by methyl orange. The absence of sulfate ions was checked using the calcium chloride solution with a concentration of 10 %. Procedure for determining the quality indicators and composition of fatty acids The neutralization number of fatty acids is determined as follows. A portion of fatty acids about 2.0 g is dissolved in (40-60) cm 3 of ethyl alcohol. Then, 0.5 cm 3 of phenolphthalein solution is added and titrated with 0.5 N aqueous or alcoholic solution of potassium hydroxide to a pink color that does not disappear within 30 seconds. The neutralization number (NN) is calculated by the formula: where V -the amount of cm 3 0.5 N potassium hydroxide solution, used for titration; 28.05 -titer of exactly 0.5 N KOH solution multiplied by 1000; K -correction to a titer of 0.5 N potassium hydroxide solution; P -portion of fatty acids, g. Organoleptic parameters of fatty acids are determined by standard methods according to DSTU 4860:2007. processing parameters on the yield and quality of fatty acids. This is important because the fatty acids indicators affect not only the quality and yield of esters, but also the profitability of biodiesel production from fatty acids derived from soapstock. Thus, existing studies have shown that an effective modern method of fatty acids extracting from soapstock is sulfuric acid treatment. The influence of some technological parameters of soapstock processing on the fatty acid yield and the efficiency of biodiesel production from them has been studied. But there are still unresolved issues related to the quality of extracted fatty acids, their composition. These data are important because the quality of fatty acids depends on the quality and economic feasibility of production based on them. The characteristics of fatty acids are influenced by the type and indicators of the oil that was subjected to alkaline neutralization; technological parameters of soapstock processing; type and concentration of acid. High-quality fatty acids will increase the profitability of the enterprise through the sale and processing of acids for various purposes. Therefore, the unresolved issue in processing alkaline neutralization waste is to determine the influence of soapstock treatment conditions on the yield and quality indicators of fatty acids. The aim and objectives of the study The aim of the study was to determine the dependence of the yield and neutralization number of fatty acids on the treatment conditions of oil alkaline neutralization waste (soapstock) with a sulfuric acid solution: temperature and duration of the process. This will make it possible to obtain high-quality fatty acids in industrial conditions and predict the yield and neutralization number of fatty acids. 
To achieve the aim, the following objectives were set: -to determine the quality indicators of the industrial sample of soapstock obtained by sunflower oil neutralization; -to identify the dependence of the yield and neutralization number of fatty acids on the soapstock treatment conditions with sulfuric acid and to determine the rational parameters of soapstock treatment; -to study the quality indicators and composition of fatty acids obtained under the established rational conditions. Materials and methods to study the rational conditions for soapstock processing 1. Examined materials and equipment used in the experiment The following reagents and materials were used in this study: -rectified ethyl alcohol, according to DSTU 4221:2003 (CAS 64-17-5); -distilled water, according to acting normative documentation; -potassium hydroxide, grade "clean for analysis", according to acting normative documentation; -sodium hydroxide, grade "clean for analysis", according to acting normative documentation; -phenolphthalein, according to acting normative documentation; The mass fraction of moisture is determined by the standard method according to DSTU 4603:2006. The mass fraction of total fat, the depth of cleavage, the presence of mineral acids are determined by standard methods according to DSTU 4860:2007. The composition of the obtained fatty acids and the fatty acid composition of sunflower oil are determined by the standard method according to DSTU ISO 5508-2001 and DSTU ISO 5509-2002. 5. Planning of experimental research and processing results In order to plan our study and process the results obtained, a complete second-order factor experiment was used, the calculations of which were performed in the Microsoft Office Excel 2003 (USA) and Stat Soft Statistica v6.0 (USA) software packages. The experiments were repeated twice. Results of studying the influence of soapstock processing conditions on the efficiency of fatty acid extraction 1. Determining the quality indicators of the experimental soapstock sample The quality indicators of the experimental sample of soapstock obtained during the alkaline neutralization of sunflower oil were previously determined. Qualitative indicators of soapstock are presented in Table 1. Therefore, the experimental soapstock sample has a high value of the mass fraction of total fat and in terms of quality indicators corresponds to DSTU 5033 (CAS 68952-95-4). 2. Identification of the dependence of the yield and neutralization number of fatty acids on the soapstock processing conditions The influence of soapstock treatment conditions with sulfuric acid on the efficiency of fatty acids obtaining has been determined. The aqueous solution of sulfuric acid with a concentration of 40 % was used in the work. The full factorial experiment was used to conduct research and process the results: the number of factors -2, the number of experiments -9, the number of levels -3 [17,18]. Factors and variation intervals: х 1 -temperature of soapstock treatment with sulfuric acid: from 55 to 95 °C; х 2 -duration of soapstock treatment with sulfuric acid: from 40 to 120 minutes. The response functions were the yield ( % of the available fatty acid content in soapstock) and neutralization number of fatty acids. Table 2 shows the matrix of experiment planning with the actual values of the factors, as well as the experimentally determined values of the response functions. 
As a result of experimental data processing in the Stat Soft Statistica v6.0 (USA) package, mathematical models were obtained that reflect the dependences of response functions on the conditions of soapstock processing. Response functions are marked as follows: у 1 -yield of fatty acids, %; у 2 -neutralization number of fatty acids, mg KOH/g. In normalized form, the regression dependence of the fatty acid yield on the soapstock processing conditions has the form: In normalized form, the regression dependence of the neutralization number of fatty acids on the soapstock processing conditions has the form: No loss of consistency was established (level of significance of regression dependences coefficients p>0.05). The values of the determination coefficients for the yield and neutralization number of fatty acids were 0.86866 and 0.96439, respectively (the values are close to unity). Thus, the obtained models adequately describe the response functions. In dependences (2)-(5): х 1 -temperature of soapstock processing, °C; х 2 -duration of soapstock processing, min. In the equations in normalized form, the values of x 1 , x 2 are substituted in coded form (for example, the minimum value of the parameter is denoted by -1, and the maximum +1). In equations (3)-(5) with real variables, the values of x 1 , x 2 in actual dimensions are substituted for calculations. Table 3 shows the calculated values of the response functions -yield and neutralization number of fatty acids, calculated by equations (3) and (5), respectively. Designations of experiments 1-9 correspond to the planning matrix (Table 2). Fig. 1, 2 show the response surface projection and the response surface, which is the dependence of the fatty acid yield on the temperature and duration of soapstock processing. By analyzing equation (3) and Fig. 1, 2, it is found that the soapstock processing temperature has a more significant effect on the fatty acid yield than the duration. Thus, it is advisable to treat the soapstock at the temperature of (90-95) °C. The processing time of 40 minutes can be used, as under selected temperature conditions the duration of up to 90 minutes has practically no effect on the fatty acid yield. Further insignificant increase in the response function occurs only with a simultaneous increase in the duration and temperature of processing. Under the duration of 40 minutes and the temperature of 90 °C, the fatty acid yield was 79.0 %. Fig. 3, 4 show the response surface projection and the response surface, which is the dependence of the neutralization number of fatty acids on the temperature and duration of soapstock processing with sulfuric acid. By equation (5) and Fig. 3, 4, it is found that the temperature of soapstock processing also has a more significant effect on the neutralization number of fatty acids, i.e. their quality, than the process duration. Therefore, it is advisable to carry out the treatment process at the temperature of (90-95) °C. The duration has a negligible effect on the value of the neutralization number. Therefore, the following conditions can be considered rational: process temperature (90-95) °С, duration 40 min. Under the duration of 40 minutes and the temperature of 90 °C, the neutralization number of fatty acids was 180 mg KOH/g. 3. Research of quality indicators and composition of fatty acids Qualitative indicators of fatty acids obtained under the established rational conditions are determined. The corresponding data are given in Table 4. 
The obtained fatty acids are of high quality and correspond to the characteristics of fatty acids of light oils and modified fats, obtained without saponification, first grade according to DSTU 4860 (CAS 61788-66-7). For comparative analysis, the composition of the obtained fatty acids and the experimental sample of sunflower oil was determined. The corresponding data are shown in Table 5. The composition of fatty acids obtained from soapstock after alkaline neutralization of sunflower oil has differences in comparison with the fatty acid composition of the sunflower oil sample. There is a correlation for fatty acids, the mass fractions of which are the largest (stearic, oleic, linoleic, behenic). Discussion of the results of studying the dependence of the influence of soapstock processing conditions on the efficiency of fatty acid production Rational conditions for processing soapstock obtained after alkaline neutralization of sunflower oil with sulfuric acid have been determined: process temperature (90-95) °С, duration 40 min. The experimentally determined values of the response functions under rational conditions were: fatty acid yield -79.0 %, neutralization number -180.0 mg KOH/g. Using the real values of the variation factors (temperature and duration of the process) according to equations (3) and (5), it is possible to predict both the yield and neutralization number, which characterizes the fatty acid quality with an error of not more than 6 %. Qualitative indicators of the obtained fatty acids correspond to the characteristics of fatty acids of light oils and modified fats, obtained without saponification, of the first grade according to DSTU 4860 (CAS 61788-66-7). The results of the study indicate the high quality of the isolated fatty acids. This work differs from the existing [7,9,[14][15][16] scientific researches on the extraction of fatty acids from soapstocks by the method of acid action by considering the most important technological parameters. The influence of soapstock processing conditions not only on the yield, but also on the quality of fatty acids, characterized by the neutralization number, was studied. The neutralization number characterizes the purity of the product and is a constant for individual fatty acids. This value is used in technological calculations, when determining the amount of reagents, materials for obtaining products based on fatty acids. The neutralization number controls the conversion degree of fatty acids in further processing. The developed mathematical models make it possible to predict the neutralization number of fatty acids, which is necessary when calculating the required amount of fatty acids for further processing. These data are relevant for oil and fat enterprises and industries, which use fatty acids as the raw material. With increasing temperature of soapstock treatment with sulfuric acid in the experimental range from 55 to 95 °C with the reaction time of 40 minutes, the fatty acid yield increases 1.6 times, with the duration of 80 minutes -1.5 times (Table 2). This indicator characterizes the intensity of soapstock decomposition. It is found that it is advisable to treat the soapstock at the temperature of (90-95) °C, because the maximum values of the response functions are achieved (fatty acid yield -up to 93.9 %, neutralization number -up to 192.9 mg KOH/g), which is confirmed by the data in Tables 2, 3 and Fig. 1-4. The processing duration affects the efficiency of fatty acid extraction as follows. 
With increasing duration of processing in the experimental range from 40 to 120 minutes, only at the temperature of 55 °C, there is a significant increase in yield (1.75 times). But at this temperature, the yield values are the lowest in the experiment (up to 78.9 %). Under conditions of temperature increase to 95 °С (within the limits of the experiment), prolongation of the process duration practically does not affect the yield (the growth of this indicator no more than 1.2 times). The corresponding data are shown in Table 2. Therefore, the rational duration of the process is 40 minutes (minimum within the experiment). The composition of the obtained fatty acids is compared with the fatty acid composition of the sample of sunflower oil (Table 5). There is a correlation for fatty acids, the mass fractions of which are the largest (stearic, oleic, linoleic, behenic). Thus, fatty acids derived from soapstock contain 37.4 % oleic acid, 51.2 % linoleic acid; fatty acids from sunflower oil contain 30.9 % oleic acid, 62.3 % linoleic acid. During the implementation of the research results in production, it is necessary to use the developed rational conditions (temperature of soapstock treatment with sulfuric acid (90-95) °С, duration 40 min.). Changing the tempera-ture from 95 to 55 °C reduces the efficiency of the process, which is expressed in a decrease in yield by 1.6 times, neutralization number -by 1.5 times, as shown in Table 2. The resulting fatty acids must be washed from sulfuric acid residues and checked for the presence of sulfate ions using 10 % calcium chloride solution. Residual sulfuric acid content can distort the results of determining the neutralization number, composition and other quality indicators of fatty acids. Also, the presence of sulfuric acid can lead to the formation of unwanted reaction products and poor-quality products during the subsequent use of fatty acids. The disadvantage of the study can be considered the use of sulfuric acid, which is dangerous from the safety point of view. But now this method of obtaining fatty acids from soapstocks is rational [7]. And, therefore, scientific developments in this field are relevant, as they contribute to solving the problem of utilization of soapstocks produced by oil and fat enterprises. Research aimed at increasing the efficiency and profitability of processing soapstocks into fatty acids is important. Because there is a wide range of applications of these compounds, including biodiesel production. Promising areas of research are to determine the influence of the conditions of soapstock treatment with sulfuric acid on other indicators of fatty acids: saponification number, ether number. These data will be useful for companies that use fatty acids in the production of soap, esters, etc. Also of interest is the further use of fatty acids obtained under different conditions in the processes of formation of biodiesel fuel and the study of its indicators. This will allow determining rational technological parameters in the fatty acid production from soapstocks in order to obtain biodiesel. Conclusions 1. On the basis of experimental researches, the quality indicators of the experimental sample of soapstock, obtained as a result of alkaline neutralization of sunflower oil, are determined. Soapstock has the following indicators: mass fraction of moisture -15.4 %, mass fraction of total fat -71.9 %, mass fraction of fatty acids -64.5 %, mass fraction of neutral fat -7.4 %. 
The experimental sample of soapstock complies with DSTU 5033 (CAS 68952-95-4). 2. The dependence of the yield and neutralization number of fatty acids on the conditions of soapstock processing with sulfuric acid is identified. Relevant mathematical models are obtained. Rational parameters of soapstock processing are determined: process temperature (90-95) °С, duration 40 min. The fatty acid yield under these conditions was 79.0 %, the neutralization number was 180.0 mg KOH/g. 3. The quality indicators and composition of fatty acids obtained under the established rational conditions are investigated. Fatty acids have the following indicators: mass fraction of moisture and volatile substances -1.8 %, mass fraction of total fat -97.0 %, cleavage depth -64.5 % oleic acid, the presence of mineral acids -no. Fatty acids correspond to fatty acids of the first grade, obtained without saponification, in accordance with DSTU 4860 (CAS 61788-66-7). The comparative analysis of the composition of fatty acids from soapstock and fatty acid composition of sunflower oil was performed. There is a correlation for fatty acids with the largest mass fractions. Thus, fatty acids derived from soapstock contain 37.4 % oleic acid and 51.2 % linoleic acid; fatty acids from sunflower oil contain 30.9 % oleic acid and 62.3 % linoleic acid. The obtained mathematical models allow carrying out the process of soapstock processing to obtain fatty acids under rational conditions. Data on the neutralization number make it possible to predict the quality of fatty acids obtained at various technological parameters, as well as to use these data in further processes of fatty acid application. Recycling large-tonnage waste -soapstock -will help reduce the negative impact of the oil and fat industry on the environment.
6,440
2021-08-31T00:00:00.000
[ "Chemistry", "Environmental Science" ]
E-I balance emerges naturally from continuous Hebbian learning in autonomous neural networks Spontaneous brain activity is characterized in part by a balanced asynchronous chaotic state. Cortical recordings show that excitatory (E) and inhibitory (I) drivings in the E-I balanced state are substantially larger than the overall input. We show that such a state arises naturally in fully adapting networks which are deterministic, autonomously active and not subject to stochastic external or internal drivings. Temporary imbalances between excitatory and inhibitory inputs lead to large but short-lived activity bursts that stabilize irregular dynamics. We simulate autonomous networks of rate-encoding neurons for which all synaptic weights are plastic and subject to a Hebbian plasticity rule, the flux rule, that can be derived from the stationarity principle of statistical learning. Moreover, the average firing rate is regulated individually via a standard homeostatic adaption of the bias of each neuron’s input-output non-linear function. Additionally, networks with and without short-term plasticity are considered. E-I balance may arise only when the mean excitatory and inhibitory weights are themselves balanced, modulo the overall activity level. We show that synaptic weight balance, which has been considered hitherto as given, naturally arises in autonomous neural networks when the here considered self-limiting Hebbian synaptic plasticity rule is continuously active. It is well established that a balance between excitation and inhibition, usually denoted as E-I balance, arises during spontaneous cortical activity, both in vitro [1][2][3][4] and in the intact and spontaneously active cortex [4][5][6][7] . This balance, which refers to a relatively constant ratio between excitatory and inhibitory inputs to a neuron, has been theoretically predicted as way to explain how cortical networks are able to sustain stable though temporally irregular, and even chaotic, dynamics [8][9][10] . Since then, the ramifications of such a balanced state in terms of both dynamics and computation have been widely studied, showing how E-I balance results in critical-state dynamics of avalanches and oscillations 11 , with direct implications for the dynamic range 12 , storage of information 13 , and computational power 14 of networks. Recurrent neural networks can use E-I balance to generate asynchronous states even in the presence of strongly shared inputs 15 . Indeed, nearby cortical neurons with similar orientation tuning show low correlated variability, potentially simplifying the decoding of information by a population of such neurons 16 . Balanced networks have also been shown to work potentially in at least two different regimes, linking richness of the internal dynamics, connectivity strength, and functionality: a weak coupling state favoring information transmission, and a strongly coupled state, characterized by complex internal dynamics which could be employed for information processing 17 . Modulating the ratio between excitation and inhibition it is furthermore possible to selectively switch information gating and rerouting between different circuits on and off 18 . The direct link between E-I balance and information transmission, together with observations of an atypical ratio of excitation/inhibition in neurobehavioral syndromes such as autism, has led to the hypothesis that an abnormal degree of E-I balance might be behind a series of psychiatric disorders 19 . 
Indeed, later causal experimental studies in mice have shown how further elevation of E-I balance, above typical physiological levels, produce a strong impairment of information processing and result in social deficits consistent with those of humans suffering from these conditions 20 . It has been shown that networks of supralinear excitatory and inhibitory neurons, namely of neurons whose non-linearities are purely expansive (no saturation) and which would therefore tend to exhibit unstable behavior, can be stabilized choosing the right type of connectivity matrices, resulting in stabilized loosely balanced Methods We consider autonomous Erdös-Rényi networks containing N neurons characterized by a linking probability p. The membrane potential x i of the rate-encoding neurons obeys where y i is the firing rate, b i the threshold and w ij are the internal synaptic weights. There is no external input. In particular, no external source of noise is present in the main analysis of the system (we show in the Supplementary Material how these results are robust to the addition of a finite amount of external noise). The membrane time constant τ is set to 10 ms for inhibitory and respectively to 20 ms for excitatory neurons. The neural model we employ is described by a non-linear relation between membrane potentials and firing rates and has been used in previous work 32 to derive the Hebbian plasticity rules we will later employ. This transformation is expansive for low firing rates and saturates for very high rates. While a saturation of this type is unavoidable for any realistic biological system, cortical neurons have always been observed to behave in the low firing rate regime, where this saturation is not visible, and the transfer function is typically described by a threshold-powerlaw ∝ ⌊ ⌋ y x n with exponent n between 1 and 5 [33][34][35] . We show however in Fig. 1 how, for low firing rates (encouraged by the intrinsic plasticity rule we employ) both functions are virtually indistinguishable. Adaption of the synaptic weights. The recurrent synaptic weights are continuously adapted using the multiplicative self-limiting Hebbian rule 32 where the membrane potential x i and the activity y i of the postysynaptic neuron are related in this model via (1) by a deterministic sigmoidal transfer function. This allows us to write functions G and H as functions of x i only, where y i is then simply shorthand for y i (x i ). This update rule may be derived from an information theoretical principle, the stationarity principle for statistical learning 36 , which states that the distribution function of the postsynaptic neural activity continuously evolves during the weight adaption process, becoming stationary only once learning is completed. Being autonomous the network considered here is however not confronted with an explicit learning task. Learning denotes in our context therefore the unsupervised process of weight adaption, which minimizes in our case the the Fisher information of the activity of the postsynaptic neuron 32 . The limiting term G(x) in (2) changes sign when the postsynaptic activity y i is either too large or too small in comparison with x 0 , reversing hence the Hebbian learning regulated in turn by H(x). This property of G(x) is useful for the learning rule as it prevents runaway synaptic growth, operating as an effective homeostatic synaptic plasticity mechanism, mounted on top of the Hebbian part of the rule 37 . 
Our adaption rule, which is also denoted flux rule 32 , is robust with respect to the actual value selected for the references scale x 0 of the membrane potential, as we checked performing test runs with x 0 = 1 and x 0 = 8. For the simulations presented here we used x 0 = 4. We note that Hebbian learning rules like (2) are normally formulated not with respect to the bare presynaptic activities, but with respect to the deviation δy j = y j − 〈y j 〉 of the presynaptic activity y j with respect to its time-averaged mean〈y j 〉. The adaption rule (2) performs in that case a principal component analysis for which the signal-to-noise ratio increases with increasing x 0 32 , being otherwise sensible to input directions y j characterized by a negative excess kurtosis. For the study presented further below we use the same adaption rule for all synapses, namely (2), whose self-limiting behavior stabilizes firing rates, rather than trying to reproduce a particular instance of the wide variety of experimentally observed phenomenological spike time dependent synaptic plasticity (STDP) rules for inhibitory connections 25 . This route would involve therefore the introduction of not well-constrained parameters, transcending in addition the central aims of our investigation. We are interested here to investigate if ongoing Hebbian plasticity and balanced asynchronous dynamics are compatible. The threshold b i = b i (t) entering the transfer function in (1) sets, as usual, the average firing rates. Here we use for the adaption rule for the threshold, which reduces, for y ≈ y t = 0.2, to the somewhat extended expressions one may derive from homeostatic principles for neural activity [38][39][40] . For the adaption rates we used 1/ε b = 10 and 1/ε w = 100 (in seconds). Synaptic pruning. Dale's law states that neurons are either excitatory or inhibitory, namely that w lj w kj ≥ 0 for all l and k. For a Hebbian plasticity rule like (2) to respect Dale's law one needs to prune a synaptic connection whenever the respective w ij changes sign. We do this every 1000 ms of mathematical simulation time, reinserting the pruned link with a weight corresponding to 10% of the correspondingly average excitatory or inhibitory links. Performing test runs where the pruned links were reinserted with a strength of 1% of the average mean yielded nearly identical results. For the reinsertion process the postsynaptic neuron i is connected to a random and previously unconnected presynaptic neuron m, with the sign of the new link w im respecting Dale's law. There are two possible versions. Annealed pruning. Links may change sign when the new presynaptic neuron m is selected freely. The overall number of excitatory and inhibitory links may then drift over the course of the simulation, with only the total connectivity remaining constant. Frozen pruning. Links do not change in character when the new presynaptic neuron m is selected only among those neurons which are of the same type as j. Frozen pruning would correspond from a biological perspective to a separate reshuffling of Gaba and Glutamate receptors. For the results presented here we considered frozen pruning. Short-term synaptic plasticity. We also included short-term plasticity (STSP), a mostly presynaptically induced modulation of the synaptic efficacy lasting hundreds of milliseconds to seconds 41 . 
STSP may lead both to synaptic potentiation and depression, resulting respectively from an influx of Ca 2+ ions into the presynaptic bulb and from a depletion of the available reservoir of neurotransmitters. These effects are captured within the Tsodyks-Markram model 42 by two variables, u(t) and ϕ(t), encoding respectively the presynaptic Ca 2+ -concentration and the number of vesicles with neurotransmitters. The transient plasticity rules then describe the time evolution of the effective synaptic weight ∼ w ij which is proportional to the bare synaptic weight w ij , to the number of available vesicles ϕ j and to the vesicle's release probability u j . In simulations where STSP is present, ∼ w ij replaces w ij in (1). STSP is transient in the sense that both u j and ϕ j relax to unity in the absence of presynaptic activity y j → 0. Typical time evolution curves for the synaptic efficiency multiplier ϕ j (t)u j (t) are presented in Fig. 1. With the introduction of STSP and making an explicit distinction between E and I inputs, the driving current where {exc} and {inh} denote respectively the set of excitatory and inhibitory neurons. One can define analogously with the average excitatory and inhibitory effective synaptic weights. We note that the original Tsodyks-Markram model 42 describes STSP for the case of spiking neurons and that one can derive (4) by assuming α = β = 0.01 and that a maximal neural activity of y j → 1 corresponds to a firing rate of 40 Hz. Typical values for the time scales entering (4) are T u = 500 ms and T ϕ = 200 ms for excitatory synapses in the medial prefrontal cortex of ferrets 43 and T u = 20 ms and T ϕ = 700 ms for inhibitory layer 2-4 neurons of the somatosensory cortex of Wistar rats 44 . It has been pointed out, that these time scales are also relevant for behavioral control tasks 45 . For our simulations we used U max = 4, α = β = 0.01, T u = 500 ms and T ϕ = 200 ms for all synapses. We did also run control runs involving 500/200 and 20/700 T u /T ϕ pairs respectively for excitatory and inhibitory synapses, which led however only to minor quantitative changes. Results We are interested in investigating under which conditions an autonomous neural network, whose dynamics is described by (1), (2), (3) and (4), evolves towards a stable, irregular and balanced state (SOPBN). The results here presented correspond to networks of both excitatory and inhibitory neurons, where 80% of neurons are excitatory and 20% are inhibitory, and whose connections respect Dale's principle, even when plasticity mechanisms are at play. We have taken membrane time constants of 20 and 10 ms for excitatory and inhibitory cells, respectively. As checks, we have also repeated the simulations with networks consisting of 50% excitatory and 50% inhibitory neurons and with equal membrane time constants, observing no qualitative differences. Unless otherwise stated, we will present results with a total number of neurons N = 400, a fixed 80% fraction of excitatory cells, a link probability p = 0.2 and a target average activity of y t = 0.2. The initial synaptic weights are drawn from Gaussians with means 7.5 (−30.0) and standard deviations 0.375 (1.5) for excitatory and inhibitory synapses, respectively. Our simulations were performed in all cases with a C++ code running on a standard desktop computer. Rate encoding neurons with asynchronous activity spikes. We find that the SOPBN tends to evolve to an irregularly bursting state characterized by time scales of the order of 100-200 ms. 
The data presented in Fig. 2 illustrates typical two second intervals of activity, as obtained directly at initialization and after one hour of mathematical simulation time. It shows the following: • The system state is very different at the beginning and after one hour: While some neurons are constantly quiet or active directly after initialization, the network exhibits pervading bursts after evolving for one hour. • The mean excitatory 〈 〉 x i exc ( ) and inhibitory 〈 〉 x i inh ( ) inputs a neuron receives are both large in magnitude. The substantially smaller value for the overall mean input expresses E-I balance. This E-I balance is present for arbitrary timeframes within the systems evolution. Averaged over time we have for the system at different times where the brackets denote now averages over the network and over time. We also examined the E-I balance ( ) for individual neurons, obtaining results very close to the network averages shown in Fig. 2. A detailed analysis of the corresponding cross correlations is presented further below. In 21 the authors compare the degree of cancellation (or tightness of the balance) between the van Vreeswijk and Sompolinsky balanced networks, and the SSN, showing that while the first kind requires a very high degree of cancellation, the SSN can operate in a regime of loose balance. These networks have however constant synaptic weights and intrinsic parameters. We observe in SOPBNs, where several parameters are plastic, that while most of the time the network follows a high degree of balance (with correlations close to unity as shown in Fig. 7), this tightness is transiently broken to allow for bursts of activity. Autonomous networks with balanced and increasingly large, but otherwise random synaptic weight distributions, are known to produce a chaotic state in the thermodynamic limit 9 . Testing this prediction we considered the non-adapting case with ε b = ε w = 0. By additionally switching off short-term synaptic plasticity, we find that a N = 400 network leads, depending on the initial weight distribution, either to fixpoints, limit-cycles, or to states of highly irregular activity. We however did not try to determine the relative incidence rates of theses three states. The two types of irregular spiking states, which are illustrated in Fig. 3, as resulting from adapting and from non-adapting dynamics, differ with respect to activity bursts (which are observed also in Fig. 2), which are conspicuously absent in our non-adapting networks. As a note, these irregular spiking states show signs of corresponding to a transient chaotic state (see subsection Analysis of the irregular activity, in the Supplementary Material). Fig. 4 the evolution of the network averages (6) of the synaptic weights. We find that the Hebbian plasticity rule (2) renormalizes the synaptic weights while approximately retaining the balance . The second relation in (7) refers to 80/20 networks, which contain four times as many excitatory as inhibitory neurons. Evolution of balanced synaptic weights. We present in • The balance presented in Fig. 4 is not perfect, with the inhibitory weights being slightly dominating on the long run. • We also considered networks for which the initial weight distribution was strongly not balanced, finding that the adaption rule (2) leads to balanced mean synaptic weights. We will discuss the self organization of E-I balance in more detail further below for the case of 50/50 networks. In Fig. 
5 the full distribution of synaptic weights is presented, with the results obtained from a 3600 sec simulation contrasted to the initial weight distribution. It is evident that the redistribution of synaptic weights is substantial, reaching far beyond a simple overall rescaling of the mean, as presented in Fig. 4. The excitatory weights, and to a certain extent also the inhibitory weights, tend to pile up at the pruning threshold, which has been set to zero. Trying exponential and log-normal fits we found that the excitatory weight distribution follows fairly well a log-normal distribution. System size and simulation time effects. The comparison between networks with N = 400 and N = 3200 presented in Fig. 5 shows that the overall functional form of the weight distribution changes qualitatively for the inhibitory weights, but not for the excitatory weights. The small additional peak visible for N = 3200 for the inhibitory links corresponds to the synaptic weights of the links reinserted after pruning. The mean weights, which are also presented in Fig. 5, scale down with increasing systems size. For the data presented in Fig. 5 the connection probability is p = 0.2 for both N = 400 and N = 3200. It is then an interesting question which kind of scaling autonomous Hebbian learning would produce. Our attempts to determine how the synaptic weights scale with respect to the mean number of afferent synapse Z = pN were however not successful. For the data presented in Fig. 5 we note that the ratio of the mean synaptic weights is about a factor two for N = 400 and N = 3200, with the corresponding ratio of Z being 1/8. Comparing weight distributions for a fixed simulation time is not meaningful for systems, as our SOPBN, that do not stop evolving. Average weights continue to drop even for long-term simulations, as evident in part in Fig. 4. We find that the system switches to a new state (characterized either by limit cycles, fixpoints or by very long quiet periods) after extended transients, which are at least of the order of several hours. The irregular state observed, as in Fig. 3, corresponds therefore to a transient state. The transients last however orders of magnitude longer than the time scales relevant for information processing in biological networks, which range typically from milliseconds to seconds. Self-organized balanced synaptic weights. The results presented hitherto in Figs 2, 3, 4 and 5 have been for 80/20 systems where the initial synaptic weights had been drawn from balanced distributions. Going one step further we now examine whether the Hebbian plasticity rule (2) is able to transform a non-balanced weight distribution into a balanced distribution. , as defined by (6). The network contains 320 and 80 excitatory and inhibitory neurons. Also shown is the average balanced weight (red, enlarged in the insets), given by . Left: With shortterm plasticity. Right: Without short-term plasticity, namely for ϕ j ≡ 1 and u j ≡ 1. We present in Fig. 6 the evolution of the synaptic weights for a 50/50 system, for which the initial synaptic weights had been drawn from Gaussians with means 7.5 (−15.0) and standard deviations 0.375 (1.5) for excitatory and inhibitory synapses, respectively. One notices that the autonomous Hebbian learning rule (2) balances the initially unbalanced synaptic weight distribution as fast as possible, that is, on the timescale 1/ε w = 100 s. Equivalent results were obtained for initially unbalanced 80/20 systems. 
The distribution of synaptic weights self-organizes, as evident from the data presented in Fig. 6, becoming fully symmetric within one hour of Hebbian adaption. The same is found for initially non-balanced 80/20 networks (not shown), for which the final synaptic weight is also balanced, albeit non-symmetric. Would any Hebbian learning rule lead to balanced synaptic weights?. A range of distinct synaptic plasticity rules are Hebbian in the sense that they perform a principal component analysis (PCA) whenever a direction in the space of input activities presents a larger variance with respect to all other input directions 32 . Examples are the flux rule (2), which may be derived from the stationarity principle for statistical learning 36 , and Oja's rule 46 , In order to work with average synaptic weight changes 〈 〉  w ij of comparable magnitude, one needs to rescale the adaption rate ε oja with respect to ε w , which enters the flux rule (2). We use ε oja = 10ε w . In Fig. 8 the time evolution of the average excitatory and inhibitory synaptic weights, as produced by Oja's rule (8), are presented. Oja's rule leads to a complete rescaling of the inhibitory weights and hence to a maximally unbalanced synaptic weight distribution, which is furthermore characterized by intermittent periods of abrupt changes. Synaptic weight growth is limited by both Oja's and by the flux rule, namely as a consequence of the additive damping factor for the case of Oja's rule (8) and as the result of the multiplicative limiting factor G(x) = x 0 + x(1 − 2y) for the case of the flux rule (2). For comparison we performed simulations where we replaced G(x) in (2) by a constant. We find in this case that the excitatory weights are rescaled to zero. The synaptic weight distribution is therefore also maximally unbalanced. The runaway growth of the inhibitory synaptic . Left: Using Oja's rule (8). Right: Using the flux rule (2), as for Fig. 6, but this time with the limiting factor G(x) = x 0 + x(1 − 2y) replaced by a constant, G → 10. Both approaches fail to produce a balanced synaptic weight distribution. Figure 7. The E-I cross-correlation between excitatory and inhibitory inputs for a 50/50 system with N = 400 neurons. Shown is |ρ ± | = −ρ ± , as defined in (10), which was measured either after 1 hour (gray bars), or right at the start (green bars). For the time average a period of 10 sec has been used in both cases. The error bars have been evaluated with respect to 100 initial weight configurations drawn each time from Gaussians with means 7.5 (−15.0) and standard deviations 0.375 (1.5) for excitatory and inhibitory synapses, respectively. The initial synaptic weight configuration is therefore not balanced (as for Fig. 6). Shown are the results for distinct scenarios with Hebbian plasticity (Hebb), short-term synaptic plasticity (STSP) and intrinsic plasticity (intrinsic) being either turned on (green checkmark) or off (red cross). weights showing up in Fig. 8, which is due to the removal of the limiting factor G(x) in (2), is accompanied by a respective evolution of the threshold, via (3), such that the average activity remains close to y t = 0.2. The flux rule (2) is manifestly only a function of the membrane potential x i and of the effective presynaptic activity ϕ j u j y j , which is in turn positive. The overall functional form follows closely that of a cubic polynomial 36 , where the x ± denote the roots of G(x) = x 0 + x(1 − 2y). 
Stationarity is achieved when the time average of (9) vanishes, that is when the average membrane potential 〈x i 〉 is on the order of the size of the roots x ± and b/2 of G(x)H(x). • The threshold b, which is determined via the sigmoidal (1) by the target activity y t , is of order unity whenever this is the case for the average membrane potential 〈x i 〉. • It is viceversa true, that the average membrane potential 〈x i 〉 will be of order unity, as long as this is the case for the roots x ± and b/2 of G(x)H(x). These two conditions are mutually compatible. It is from this point not surprising that the flux rule leads on average to small membrane potentials, as evident in Fig. 3, and consequently also to approximately balanced synaptic weight distributions. We note in contrast that Oja's rule (8) is explicitly dependent in addition on the weight w ij of the adapting synapse. We conclude that not every Hebbian learning rule will produce balanced irregular dynamics. While we have pointed out here at some differences between the Flux rule and Oja's rule, which may hint at the conditions for a rule to achieve this state, further work is necessary to determine which families of rules can and cannot perform this task. E-I balance in terms of E-I correlations. To now quantify the degree of balance between excitation and inhibition, we compute for a given neuron the cross-correlation ± C i between the total excitatory incoming synaptic current x i exc ( ) , as defined by (5), and the total inhibitory synaptic current x i inh ( ) , averaged first with respect to time and then across all neurons of the network: In Fig. 7 we present the cross correlation |ρ ± | for the 50/50 system discussed in Fig. 6, for which the initial weight configurations are not balanced. Note that the time scale for Hebbian learning is 1/ε w = 100 sec, which is a order of magnitude larger than the interval of 10 sec used for evaluating ρ ± via (10). Analogous investigations for an 80/20 system can be found in the Supplementary Material in Fig. S3. The cross correlation characterizing the E-I balance of the initial state is only marginally dependent on whether short-term and/or intrinsic plasticity are active. Its surprisingly large overall value, about (45-50)%, reflects the presence of substantial inter-neuronal activity correlations, which we did not investigate further. Comparing with the data presented in Fig. 6 one notices that ρ ± is a somewhat less sensible yardstick for E-I balance than the bare synaptic weight balance, which renormalizes to small values in a balanced state. The data shown in Fig. 7 confirms otherwise that the Hebbian plasticity rule (2) leads to a highly balanced state. We have so far considered here networks without any external noise, which would not be the case in the brain. A state characterized by irregular neural activity is generically expected to be robust against moderate noise levels. Performing simulations with additive input noise, characterized by zero means and a standard deviation of (5-10)%, with respect to the mean of the bare input, we found this expectation to hold. The cross correlation ρ ± barely changes as long as the level of noise present remains moderate. The situation changes gradually with increasing noise strength, with E-I balance breaking down when the noise level reaches about 50% of the bare input strength (cf. Fig. S1 in the Supplementary Material). 
Discussion We have examined here the question of whether it would be plausible for a neural network in which both intrinsic and synaptic (E as well as I connections) parameters are continuously evolving to achieve balance both in terms of weights and activities, in a fully unsupervised way, finding that this is indeed possible. The resulting balanced network (which we have denoted here SOPBN) arises in a self-organized fashion, in analogy to the critical state characterizing possibly certain aspects of cortical dynamics 47 . We studied for this purpose the influence of continuously ongoing Hebbian plasticity within autonomous networks of rate-encoding neurons, finding that the synaptic plasticity rule that follows from the stationarity principle of statistical learning, the flux rule, does indeed induce a balanced synaptic weight distribution, even when the initial distribution is strongly unbalanced. E-I balance induced by Hebbian learning. Comparing the flux rule with and without the self-limiting term and Oja's rule, we have found that Hebbian learning leads to a balanced distribution of synaptic weights, and hence also to a balanced state, whenever the learning rule favors small average membrane potentials. It is not necessary, for this to happen, that the learning rule constrains the overall input to strictly vanish on average, it suffices that the time averaged input remains of the order of the neural parameters, such as the inverse slope of (1). We found that the flux rule, as defined by (2) and (9), fulfills this requirement. An example of a Hebbian rule not leading to a balanced weight distribution is on the other side given by Oja's rule (8). Rate encoding neurons showing spike-like neural activity. An E-I balanced state is characterized in addition to the small average membrane potential by the near cancellation of two large drivings in the form of large excitatory and inhibitory inputs. Such a state is highly sensible to small imbalances resulting either from additional external signals or from internal fluctuations. We find these imbalances to be strong enough in SOPBNs to induce short spike-like bursts in the neural activity, as observed e.g. in Fig. 2. This is quite remarkable, as one could have expected that the rate-encoding neurons used for the present study would be more likely to lead to slowly and hence to smoothly varying dynamical states. Asynchronous neural activity. The near cancellation of large excitatory and inhibitory drivings stabilizes asynchronous neural activity, as illustrated in Fig. 3 in terms of the membrane potential. Using the 0-1 test for chaos 48 we found the asynchronous state in SOPBNs to be at least strongly irregular (cf. Fig. S2 in the Supplementary Material). As indicators for chaos one may have analyzed the time intervals between activity spikes 49 or the Lyapunov exponents of the system. The observation that the synaptic weight distribution changes continuously, as demonstrated in Fig. 6, over time scales of hours, proves in any case that the neural activity is irregular on extended times scales. The limit of infinitely long times is not the focus of this study, as real neural systems are not expected to function for prolonged periods in the absence of stimuli. Absence of a stationary autonomous state. We find, as shown in Fig. 4, that the size of the mean synaptic weights decays slowly but continuously. 
Experimenting with different ensembles of initial weight statistics we found no instance where Hebbian learning retaining E-I balance would lead to a systematic increase in magnitude of the overall mean synaptic weights. We note, however, that this observation holds only for the here considered case of isolated networks, hence without an additional external driving. An adaption rate ε w that would fade out slowing, being only initially large, would also preempt the long term decay of average synaptic weights. Theory vs. experiment. The dynamic balance of excitation and inhibition is observed experimentally within a range of distinct settings 1,5 . Multielectrode recordings in human and monkey neocortex suggests that E-I balance is caused in essence by local recurrent activity 50 , and not by external inputs, with irregular bursting activity showing up on a range of time scales that starts, as for SOPBNs, at a few hundred milliseconds. It is also interesting that the independent adjustment of synapses connecting inhibitory to layer 2/3 pyramidal neurons in the mouse primary visual cortex has been found to be key for E-I balance to occur on a single-neuron level 51 . These findings concur with the results for the single neuron cross correlation presented in Fig. 7, for which the network average has been performed only as a second step. Furthermore we note that both the self organized bursting states observed in SOPBNs, see Fig. 6, and the alternating up and down states observed for in vitro prefrontal and occipital ferret slices are characterized by the asynchronous participation of all neurons 2 . Outlook. Which configuration of synaptic weights results from continuously ongoing internal Hebbian learning? We presented here a first inroad into this subject, focusing in particular on the self-organized emergence of E-I balance in terms of large but nearly canceling excitatory and inhibitory inputs. We find that not all self-limiting Hebbian plasticity rules are able to do the job. There is on the other hand no need for a Hebbian learning rule to enforce E-I balance explicitly. We find that E-I balance already emerges when the Hebbian learning rule favors membrane potentials which are small with respect to the variance of the inputs, being nevertheless large enough to be relevant for the neural transfer function.
7,697.2
2018-06-12T00:00:00.000
[ "Physics", "Computer Science" ]
The flavour of natural SUSY An inverted mass hierarchy in the squark sector, as in so-called “natural supersymmetry”, requires non-universal boundary conditions at the mediation scale of supersymmetry breaking. We propose a formalism to define such boundary conditions in a basis-independent manner and apply it to generic scenarios where the third-generation squarks are light, while the first two-generation squarks are heavy and near-degenerate. We show that not only is our formalism particularly well suited to study such hierarchical squark mass patterns, but in addition the resulting soft terms at the TeV scale are manifestly compatible with the principle of minimal flavour violation, and thus automatically obey constraints from flavour physics. Introduction In supersymmetric extensions of the Standard Model (SM), any particles with sizeable couplings to the Higgs sector are expected to have masses not too far above the electroweak scale. This concerns in particular the squarks of the third generation, which should be lighter than about a TeV in order not to create a severe naturalness problem. By contrast, the squarks of the first two generations could well be much heavier. This possibility is particularly attractive because the bounds from supersymmetry (SUSY) searches at the LHC are strongest by far for the first two generations of squarks, and because flavour constraints are also easier to satisfy when they are very heavy. The scenario of an inverted mass hierarchy in the squark sector, typically combined with a small higgsino mass parameter and a not too heavy gluino (see e.g. [1][2][3][4] and references therein), is commonly dubbed "natural" or "effective" SUSY, and is increasingly becoming the new paradigm of SUSY phenomenology. In the Minimal Supersymmetric Standard Model (MSSM) with boundary conditions at the Grand Unification (GUT) a e-mail<EMAIL_ADDRESS>scale, light stops and sbottoms with otherwise very heavy squarks are especially interesting because they can lead to radiatively induced large stop mixing [5][6][7]. The latter is needed in the MSSM to obtain a 126 GeV Higgs mass while keeping the stops reasonably light. More precisely, if the first two-generation squarks have masses of the order of 10 TeV, and if supersymmetry breaking is mediated at a very high scale such as M GUT ≈ 10 16 GeV, then the stop masses at the low scale receive significant negative contributions from two-loop running (or possibly even from one-loop running if there is a non-vanishing hypercharge D-term). This allows one to realise a sizeable ratio |A t /mt |, where A t is the stop trilinear parameter and mt is the average stop mass, leading to large one-loop corrections to the lightest Higgs mass. However, in precisely this situation where radiative corrections to the spectrum from the first two generations are important, they may also induce a significant misalignment between the squark and quark mass matrices. The resulting flavourchanging neutral currents (FCNCs) are tightly constrained by experiment. The effects of such a split squark spectrum on flavour observables have already been investigated in [8][9][10][11] (see also [12][13][14][15][16][17][18][19][20] for some recent discussions on FCNCs in selected models with light third-generation squarks). Here, we propose to shed light on this issue using a different strategy. 
Firstly, having assumed a very high mediation scale, hierarchical squark soft terms at the low scale have to be obtained from some non-universal boundary conditions through the renormalisation group evolution. But even just prescribing such boundary conditions in a model-independent way is nontrivial, since they depend on the chosen flavour basis. Our first result is to propose a formalism to define general soft-term boundary conditions in a basis-independent manner. Secondly, we apply this formalism to the cases where either a subset or all of the third-generation squarks are light, while the first two-generation squarks are heavy and neardegenerate. It turns out that not only is our formalism par-ticularly well suited to study such squark mass patterns, but in addition the resulting TeV-scale soft terms are in many cases manifestly compatible with the minimal flavour violation principle (MFV), 1 as proposed in Reference [21]. In addition, whenever a departure from MFV is observed, it can be quantified precisely. Clearly, realizing split squark scenarios in this way is of great advantage because it helps ensure that there will be no conflict with bounds on D-D and K -K mixing observables, which one might otherwise expect for generic hierarchical soft terms. In Sect. 2, we briefly recall the essentials of the SUSY flavour problem, the concept of MFV, and present our procedure to define fully generic and non-universal boundary conditions for soft-breaking terms. In Sect. 3 we use this scheme to parametrise the boundary conditions leading to third-generation squarks much lighter than the first two generations, and characterise their flavour properties. Section 4 contains our conclusions. In the "Appendix", we address some technical subtleties regarding the definition and running of the CKM matrix and show that our scheme allows to easily deal with, and correct for, CKM-induced uncertainties in the renormalisation group (RG) running. The SUSY flavour sector We follow the conventions of the SLHA2 [22], which we now briefly recall. The matter fields of the supersymmetric Standard Model transform under a global non-abelian flavour symmetry This symmetry is explicitly broken by the Yukawa superpotential as well as by the soft mass matrices for the squarks and sleptons, and by the soft trilinear terms. In the lepton-slepton sector, Y e can always be diagonalised via a suitable SU(3) L × SU(3) E transformation. We will focus on the quark-squark sector, where at most one of the matrices Y u and Y d can be chosen diagonal in a gauge eigenstate basis. After electroweak symmetry breaking, the Yukawa matrices are diagonalised by The misalignment of left-handed quarks is encoded in the CKM matrix, V CKM = V L † u V L d . Rotating quarks and squarks In terms of the interaction-basis soft masses m 2 Q,U,D and trilinear terms T u,d , Our aim is now to establish a formalism for encoding the squark sector soft-term data without fixing a flavour basis. Such a basis-independent formalism has both conceptual and practical advantages which will be discussed in detail below. In order to find a basis-independent parameterisation of the soft terms, we expand them in powers of the Yukawa matrices, covariantly with respect to the spurious G F flavour symmetry. To this end we define the matrices They transform as bifundamentals under an SU(3) Q rotation which sends Q → V Q Q: Given that m 2 Q also transforms as a bifundamental, where the expansion coefficients a q i and b q i are invariant under G F . 
Likewise, given their respective transformation properties under G F , the right-handed squark mass matrices and the trilinear terms are covariantly expanded as The coefficients a are real because the mass matrices are hermitian, but the c u,d i are generally complex. The parameters m 0 and A 0 are placeholder constants of mass dimension one which could as well be absorbed into the a, b, and c coefficients at one's convenience. Eqs. (7)-(9) define our basis-independent general parameterisation of the squark sector soft terms. Since the matrices appearing on the RHS of Eq. (7) are linearly independent (for generic A and B) [23], there is no loss of generality in this expansion. The same is true for each of Eqs. (8) and (9). Indeed it is a simple exercise in counting to show that the real a q,u,d i and b q,u,d i together with the complex c u,d i coefficients contain exactly the degrees of freedom needed for describing three hermitian 3 × 3 mass matrices and two general complex 3 × 3 trilinear matrices. The bases of flavour-covariant 3 × 3 matrices we are projecting on are not unique, but they are in a sense the simplest choices, being symmetric in Y u and Y d and using the lowest powers of Yukawa matrices possible. These matrix bases turn out to be numerically somewhat peculiar when realistic values for Y u and Y d are inserted. Because of the large hierarchy in the Yukawa couplings, one has B 2 ≈ tr(B)B and A 2 ≈ tr(A)A; that is, some of the basis matrices are nearly parallel in flavour space. In addition, the only non-diagonal structure provided by A and B is the very hierarchical CKM matrix. Therefore, numerically expanding a generic 3 × 3 matrix requires coefficients spanning several orders of magnitude, typically up to the order of m 2 t /m 2 u ∼ 10 10 . The above expansion enables us to adopt a very simple and clear definition of Minimal Flavour Violation (MFV). The basic assumption of MFV is often stated as G F being broken only through powers of Yukawa matrices [21] (see also e.g. [24][25][26]). The usual rationale is that G F could be an exact but spontaneously broken symmetry of some more fundamental theory whose dynamics is responsible for the generation of both the Yukawa couplings and the soft terms. In our framework, we define MFV as follows: all a x i , b x i and c x i coefficients in Eqs. (7)-(9) should be at most O(1) when m 0 and A 0 represent the typical soft mass scale. (In fact the statement "the only sources of G F breaking are powers of Yukawa matrices" is somewhat meaningless when taken on its own, since the above expansion shows that one can parameterise any general soft mass and trilinear matrices in this way. However, if the expansion coefficients are allowed to be arbitrarily large, they could not possibly originate from G F spurions in a weakly coupled theory.) For more details, see also [27,28]. At this point we should emphasise that our approach does not rely on the G F symmetry being in any way fundamental. When we allude to MFV in the following, it is mostly because the MFV condition (in the strict above sense) has certain other appealing properties: Firstly, it is stable and generally even IR-attractive [29,30] under the renormalisation group; secondly, it allows a model to automatically satisfy many stringent bounds from flavour physics. Unification may impose additional relations between the soft terms and hence between the expansion coefficients. 
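The numerical peculiarity noted above — that for realistic Yukawa couplings some basis matrices become nearly parallel — can be checked directly. The short sketch below uses rough, illustrative Yukawa eigenvalues and a lowest-order, CP-conserving Wolfenstein-like CKM matrix (all input values are assumptions, not taken from the paper) to show that A² is almost proportional to A and that the flattened basis has an enormous singular-value spread, which is why expansion coefficients of a generic matrix can span many orders of magnitude and why high numerical precision is required.

```python
import numpy as np

# rough tree-level Yukawa eigenvalues (illustrative, tan(beta)-dependent numbers)
yu2 = np.array([7e-6, 3.6e-3, 0.93]) ** 2
yd2 = np.array([1.4e-4, 2.8e-3, 0.13]) ** 2

# CP-conserving, lowest-order Wolfenstein-like CKM matrix (illustrative)
lam, Aw = 0.225, 0.82
V = np.array([[1 - lam**2 / 2,  lam,             Aw * lam**3],
              [-lam,            1 - lam**2 / 2,  Aw * lam**2],
              [Aw * lam**3,    -Aw * lam**2,     1.0]])

A = np.diag(yu2)                 # Yu Yu^dag in the up-quark mass basis
B = V @ np.diag(yd2) @ V.T       # Yd Yd^dag, rotated by the CKM matrix

# (i) near-parallelism: A^2 is almost tr(A)*A because y_t dominates the trace
print(np.linalg.norm(A @ A - np.trace(A) * A) / np.linalg.norm(A @ A))   # tiny, ~1e-5

# (ii) the flattened basis is extremely ill-conditioned, which is why expansion
# coefficients of a generic matrix can span ~10 orders of magnitude
basis = [np.eye(3), A, B, A @ A, B @ B, A @ B + B @ A]
s = np.linalg.svd(np.array([b.ravel() for b in basis]).T, compute_uv=False)
print(s.max() / s.min())          # enormous ratio -> high numerical precision needed
```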
GUT relations are typically spoiled at the subleading level by higher-dimensional operators involving GUT-breaking VEVs (for instance, the SU(5) relation Y d = Y e should be violated to obtain a valid fermion spectrum). Neglecting such GUT-breaking effects, one may look for simple conditions on the coefficients to ensure that the soft terms are compatible with grand unification, depending on the actual GUT model. For example, standard SU(5) unification requires m 2 Q = m 2 U . Choosing a basis in which Y u is diagonal, it is clear that for this to hold it is sufficient to choose a q 1 = a u 1 , a q 3 = a u 2 , a q 5 = a u 4 with all other a q,u i = 0. More general patterns are of course possible since our parametrisation is fully general, but they will in general not be MFV-like. In this work we are interested in models where the soft terms are neither universal nor necessarily MFV-like at some very high mediation scale M GUT ≈ 10 16 GeV. We will define the soft-term boundary conditions through the expansion Eqs. (7)- (9). Such a procedure has many desirable features (below, we use the short-hand 1. The soft masses and trilinear terms at any scale Q admit expansions of the form (7)-(9), where both soft terms and Yukawa couplings are understood as those at the scale Q. Thus, the running of the soft masses and trilinear terms can be represented by that of the flavour coefficients. Their renormalisation group equations (RGEs) were studied in References [29,30]. Typically, not only are the evolutions of the coefficients from Q = M GUT down to the TeV scale smooth and bounded, but they even exhibit infrared "quasi"-fixed points, whose values mostly depend on the non-flavoured MSSM parameters. 2. The β-functions of the soft masses and trilinear terms are naturally compatible with the expansions (7)-(9), and the running of the various coefficients sum up different physical effects. For example, the leading coefficients a q,u,d 1 [Q], c u,d 3. The phenomenological impact of the flavour mixing induced by the off-diagonal soft-term entries can immediately be assessed. Indeed, the MFV limit is recovered when all the coefficients are O (1). This means that one can directly spot potentially dangerous sources of new FCNCs simply by looking at the relative sizes of the coefficients. For example, if a q 1 [1 TeV] = 1 but a q 3 [1 TeV] = 1000, then one should expect difficulties with FCNC constraints from K and B physics. Indeed, assuming SUSY masses of the order of 1 TeV, such values grossly violate current bounds on mass insertions; see e.g. Reference [31], with for example [ In practice, this is far less demanding than it seems. For O(1) perturbations, not all the 63 coefficients are equally relevant, so varying only the first few in each expansion is sufficient. 7. As analysed in the "Appendix", provided none of the leading coefficients are particularly large, the softterm expansions are largely independent of the precise parametrisation of the CKM matrix. In particular, the coefficients are similar using the full CKM matrix or its CP-conserving limit, no matter how this limit is taken. By contrast, off-diagonal entries of the soft terms can deviate by tens of percent depending on the chosen CKM matrix. 
This observation is useful in practice since it permits one to compute the coefficients under some simplifying assumptions (CP-limit, no threshold corrections, and/or no experimental errors for the CKM parameters), and then to reconstruct with an excellent accuracy the physical soft terms and thereby reliably compute all the flavour observables. 8. Last but not least, it is easy and straightforward to parametrise boundary conditions where the third-generation squarks are split from the first two generations, since Y_u Y_u† and Y_d Y_d† do have precisely such a hierarchy. This possibility will be explored in detail in the next section. To be complete, we should point out that there is one practical issue that needs to be kept in mind. Since the basis matrices span several orders of magnitude and are approximately linearly dependent, it is necessary to maintain a high level of accuracy in the numerical evaluations, otherwise instabilities can easily arise. This is especially true when computing the coefficients of highly suppressed terms such as some of the a^{u,d}_i. For the same reason, a perfectly unitary representation of the CKM matrix must be used, otherwise spuriously large coefficients can arise. Split squarks and MFV The peculiar structure of the MSSM Yukawa couplings should have its origin in some unknown flavoured dynamics at some high scale M_F. If supersymmetry breaking is mediated at a scale greater than M_F, then one can reasonably expect that this flavour dynamics will also generate some non-trivial flavour structures for the soft mass terms and the trilinear couplings. In that sense, expressing the soft terms directly in terms of the Yukawa couplings through the expansions (7)-(9) can be regarded as an attempt at capturing the relationships between them. If this picture is correct, the expansion coefficients at the scale M_F would not be random but would derive from the flavour dynamics at that scale. It is thus quite possible that the various coefficients would actually follow a very definite pattern. With the above idea in mind, our goal is to design flavour structures leading to spectra with light third-generation squarks at the low scale. There are many ways to achieve this. A first possibility is to impose the boundary condition of Eq. (11), where ⟨ · ⟩ denotes the trace in flavour space. More explicitly, when the free parameter α_q is close to one, m²_Q in the basis where Y_u is diagonal has its first two entries nearly degenerate and much larger than the third, which is precisely what we aim for. Note, however, that in this particular case the value of (m²_Q)_33 receives large negative loop corrections from (m²_U)_33. In order to generate a realistic spectrum, the GUT-scale (m²_Q)_33 cannot be chosen too small, and/or sizeable positive corrections from the gaugino masses are needed to overcome this effect. At the low scale, t̃_L and b̃_L then end up much lighter than all the other squarks. It should be remarked that, compared to naively setting the soft masses by hand as in Eq. (12), our procedure requires the same number of free parameters. But, at the same time, setting the initial conditions in our way is entirely independent of the flavour basis, while Eq. (12) in principle requires one to specify also the four mixing matrices V^{L,R}_{u,d}. In addition, the parameter α_q could bear some physical meaning. First, because the combination Y_u Y_u† ⟨Y_u Y_u†⟩⁻¹ is factored out, its RG evolution is very flat over the whole range down to the electroweak scale. Typically α_q changes by about 20 % during the evolution.
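Equation (11) itself is not displayed in the text above; a reading consistent with the surrounding description is m²_Q = m₀²(1 − α_q Y_u Y_u†/⟨Y_u Y_u†⟩). The sketch below implements that assumed form with illustrative numbers, simply to show the resulting mass pattern; it is not the paper's own code.

```python
import numpy as np

def split_mQ2(m0, alpha_q, Yu):
    # One plausible reading of Eq. (11):
    #   m_Q^2 = m0^2 * (1 - alpha_q * Yu Yu^dag / <Yu Yu^dag>),
    # with <.> the flavour-space trace.  Because y_t dominates the trace, only the
    # third diagonal entry is suppressed when alpha_q is close to one.
    A = Yu @ Yu.conj().T
    return m0**2 * (np.eye(3) - alpha_q * A / np.trace(A))

Yu = np.diag([7e-6, 3.6e-3, 0.93])                   # illustrative, Y_u-diagonal basis
m2Q = split_mQ2(m0=10e3, alpha_q=0.99, Yu=Yu)        # m0 = 10 TeV at the GUT scale

print(np.sqrt(np.diag(m2Q)))
# -> approximately [10000, 9999.9, 1000] GeV: the first two generations heavy and
#    nearly degenerate, the third light; alpha_q -> 1 drives it lighter still
```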
(We explicitly show the evolution of α q for a different scenario in the following discussion; see Fig. 2.) Second, it is tempting to imagine that some unknown flavour dynamics sets α q to exactly one at the scale M F . However, since M F = M GUT , one would then have α q [M GUT ] close but not exactly equal to one. Thus, the only phenomenological constraint on this parameter is for it to evolve down to a value smaller than one at the low scale, so as to avoid inducing negative eigenvalues for the stop or sbottom squarks and the ensuing colour symmetry breaking. We are, however, not aware of any specific flavour model which predicts α q = 1, so for the moment we will treat α q ≈ 1 merely as a parameter choice, and study its implications independently of a possible dynamical generation. A very interesting feature of the boundary condition Eq. (11) is that even if left-squark masses are highly hierarchical, it nevertheless respects the MFV principle since at all scales. So, once evolved to the low scale, we can immediately predict that these initial conditions should be compatible with flavour constraints. Other scenarios can be constructed along the same lines. For instance, to also split thet R from the first-and secondgeneration squarks, one can further impose which is also compatible with the MFV principle when α u ≈ 1. As opposed to the above scenario, the condition that both m 2 U and m 2 Q be hierarchical is radiatively stable (provided that the other states which couple strongly to the stop sector, such as the up-type Higgs and the gauginos, are not too heavy). Together with a small μ parameter, this constitutes a way to realise "natural supersymmetry" within MFV. An example for the typical evolution of the leading expansion coefficients for such a natural SUSY-MFV scenario is given in Fig. 1. The RG evolution and computation of the mass spectrum is done with SPheno [32,33] with boundary conditions adapted according to Eqs. (7)-(9). The a 1 coefficients are not shown because they remain very close to unity, with deviations at the level of less than a percent. The evolution of the a q 3 and a u 2 coefficients is much steeper than that of the other a i . The reason for this is that a q 3 and a u 2 are dominated by the running of y t ; when the y t dependence is factored out, the evolution is very flat, see Fig. 2. On the other hand, there is no way to split the right sbottom from the first two generations without moving away from MFV. Indeed, all the non-trivial terms in the expansion of m 2 D are sandwiched between Y † d and Y d , which are small when tan β is not very large. Specifically, the simplest way to lighten all third-generation squarks is to impose with α q,u,d ≈ 1. Clearly, unless tan β is very large, m 2 D significantly deviates from the MFV assumption. One might worry that this setting conflicts with current flavour constraints, which would thus disfavour lightb R squarks. However, this is not the case. First, note that a large a d at the low scale is harmless, since it does not contribute to the δ d R R mass insertions (this is evident in a basis where Y d is diagonal). The impact of a large a d 2 at the high scale is less obvious, since it can drive other coefficients towards large non-MFV values through the RGE evolution. However, as illustrated in Fig. 3, this effect turns out to be quite limited numerically. 
Though some coefficients are indeed initially driven towards large values, the quasifixed point behaviour of the RGE evolution then kicks in and brings them back to MFV-like values at the low scale (see e.g. the coefficient a d 4 in Fig. 3). So, even if the low-scale coefficients are not strictly compatible with the MFV principle, they are sufficiently close to MFV to pass all flavour constraints (we also checked this explicitly by direct computation of the flavour observables, using the SUSY_FLAVOR 2.02 code [34]). There is another scenario worth considering. Imagine that for some reasons, the shift from universality induced by the yet unknown flavour dynamics occurs only in the SU(3) Q space, through the Y u Y † u − Y u Y † u combination. Plugging this structure in the soft-breaking expansion, they can be Again, this input respects the MFV requirement. The only difference with the first scenario is to impose inverted hierarchies in the trilinear terms at the unification scale. Such a pattern does not survive to the evolution, however. Looking at the expansion of the trilinear terms, the leading c u,d 1 and subleading c u,d i =1 coefficients do not evolve at the same speed, especially when the former are driven by the gluino mass. So, the cancellation present at the unification scale does not happen at the low scale, and trilinear terms end up being quite similar to those obtained with the first scenario. In this respect, the difficulty mentioned there to obtain a viable spectrum applies here also; a dedicated numerical analysis would be needed to conclude on the valid parameter space of these scenarios. Beyond these specific examples, it is now straightforward to state a more general sufficient condition for obtaining a GUT-scale split spectrum which is guaranteed to be flavoursafe, using our formalism. This condition is that the GUTscale flavour coefficients should at most be O (1) and should approximately satisfy the relations (generalizing the expressions for m 2 Q in Eq. (11) and m 2 U in Eq. (13)) The MFV condition ensures that there are no flavour problems, while the sum rules Eq. (16) ensure that the top squarks are actually split from the first two-generation uptype squarks (note that only a (1), and similarly for a u 1 , a u 3 and a u 5 and the RH stop mass). While this prescription covers a large class of viable spectra, we note that it is of course also possible to obtain flavour-safe natural SUSY mass patterns in a different manner-for instance, as we have seen above, one may deviate from the MFV prescription by splitting also the right-handed sbottom mass, and rely on the RG evolution to produce an almost MFV spectrum at the low scale. For such scenarios, however, safeness from FCNC constraints is not automatic but must be checked in each case. We also note that the above sum rules are tied to small or moderately large tan β. At very large tan β, where y b is of order one, they should be modified to take into account also the remaining terms in Eqs. (7) and (8), which may now contribute to the third-generation squark masses even if their coefficients are O(1). Conclusions Third-generation squarks below the TeV scale are an essential requirement for supersymmetry to be natural, while the squarks of the first two generations are likely much heavier. Therefore it is important to study the physics of non-universal squark masses, and of inverted squark mass hierarchies in particular. 
In phenomenological approaches which prescribe the soft terms at the TeV scale, such as the pMSSM, this is possible to a limited extent only, since effects arising from the renormalisation group running from the mediation scale are not accounted for. In particular, these effects could lead to radiatively induced flavour-violating squark mass mixings. Given the tight experimental constraints from flavour observables, to fully grasp the implications of non-universal squark masses, one should be careful to account for such effects. In this paper we have studied non-universal squark masses in the case that SUSY breaking is mediated at the GUT scale. We have shown how split squark mass matrices (and trilinears) can be conveniently and generally prescribed in a basis-independent way, and investigated their renormalisation group evolution. When requiring only the top squarks to be light, and the first two generations to be nearly mass degenerate, the most natural prescription automatically respects the principle of minimal flavour violation at the GUT scale. Since MFV is preserved during the RG evolution of the soft terms down to the TeV scale, bounds on FCNCs can easily be evaded. For more general hierarchical soft terms at the GUT scale, the compatibility with flavour observables is not automatic, even though generic soft terms tend to be attracted towards MFV-like structures in the infrared [29,30]. We have confirmed this tendency for the particularly relevant case where all third-generation squarks, including the right-handed sbottom, are light compared to the squarks of the first two generations. While this scenario strongly violates the MFV hypothesis at the GUT scale, the soft terms become increasingly MFV-like during the running, and end up compatible with flavour constraints at the low scale. Our analysis puts the increasingly popular framework of "natural SUSY" on a more solid footing, showing that it is actually possible to obtain a natural SUSY spectrum at the TeV scale from well-motivated GUT-scale boundary conditions without having to worry about RG-induced flavour violation. Furthermore, our formalism for defining non-universal soft terms in a basis-independent way should be very useful for further studies of the supersymmetric flavour problem beyond minimal flavour violation. A full exploration, within our scheme, of the parameter space leading to natural SUSY is left for a subsequent work. Acknowledgements This work benefited from the workshop "Implications of the 125 GeV Higgs boson", which was held 18-22 March 2013 at LPSC Grenoble and which was partially funded by the LabEx ENIGMASS and the Centre de Physique Théorique de Grenoble (CPTG). Appendix: Stability of the expansion coefficients The CKM matrix plays a central role in the description of flavour mixing in the quark and squark sectors. Two numerical approximations are often introduced: the CP-conserving limit and the neglect of threshold corrections. At first sight, it may appear reasonable to use an approximate CKM matrix in the running to and from the unification scale. After all, the error should be small, and one can always plug back the exact CKM matrix for computing flavour observables.
However, while this procedure obviously suffices to bring back the quark mixing to its physical value, this is not the case in the squark sector. Indeed, in many scenarios, the offdiagonal entries in the squark soft terms at the electroweak scale are entirely driven through RG running from the CKM matrix. For example, starting with universal boundary conditions, flavour mixing in the left-squark soft mass term is given by since v u Y T u = M u · V CKM in the down-quark mass eigenstate basis (in Eq. (17), CKM entries are conventionally denoted as V I J , with I = u, c, t and J = d, s, b instead of I, J = 1, 2, 3). Therefore, if a wrong CKM matrix is used throughout the running, the soft terms are also wrong, and so are the estimated supersymmetric contributions to the FCNC processes. In the present section, our goal is to show that these issues can be circumvented if the squark soft mass terms and trilinear terms are defined through their expansion coefficients. Indeed, to a large extent, these do not depend on the precise value of the CKM matrix entries. So, once the expansion coefficients at the low scale have been computed under some approximation, it is a simple matter to reconstruct with an excellent accuracy the physical soft terms by plugging back the physical CKM matrix. Let us illustrate this procedure. CP-conserving limit for the CKM matrix As a first approximation, the MSSM evolution is often computed in the CP-conserving limit. To this end, the CP violating phase of the SM must somehow be disposed of. There is no unique way to achieve this, since there is no unique way to parametrise the CKM matrix itself, and no matter the chosen procedure, the modulus of at least one of the CKM entries is significantly affected. Let us take m 2 Q as an example. If the true, complex CKM matrix is used, and using the same scenario as in Fig. 1 The purely CP-violating coefficients b 1...3 are entirely induced through the RG running. Numerically, their contributions to Im(m 2 Q ) are extremely suppressed because they are tuned by the small Jarlskog invariant Im The bulk of Im(m 2 Q ) actually comes from Im(Y u Y † u ); see Eq. (17). Let us now compare this with the results in the CPconserving limit. The most frequent CP-conserving prescription is to set δ 13 = 0 in the conventional CKM parametrisation. This is the prescription adopted in the RGE codes SPheno [33] and SOFTSUSY [36]. 2 The only CKM entry significantly affected by this is V td , |V td | δ 13 =0 = 0.0058 vs. |V td | = 0.0085. As a consequence of Eq. (17), the (1, 2) and (1, 3) entries of m 2 Q are then significantly reduced, since they are induced by V * tb V td and V * ts V td respectively: If used to compute FCNC observables, this approximation is particularly dangerous for the b → d and s → d transitions. First, the SM and charged Higgs contributions to Z , γ penguins and boxes, dominated by the top quark contributions hence tuned by V * tb V td , are systematically underestimated. This could still be cured by plugging back the correct CKM matrix in the relevant vertices. This procedure fails, however, to cure the also underestimated gaugino-induced FCNC contributions tuned by (M L L d ) 13 and (M L L d ) 12 . On the other hand, it is easy to check that the expansion coefficients discussed above stay very close to the ones obtained in the CP-violating case. 
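The effect of the δ₁₃ → 0 prescription described above is easy to reproduce at the level of the CKM matrix itself. The sketch below uses the standard parametrisation with representative PDG-like angles (the precise input values are an assumption, not taken from the paper); Eq. (17) implies that the RG-induced off-diagonal entries of m²_Q in the down-quark basis scale like y_t² V*_tI V_tJ, so the reduction of |V_td| directly suppresses the (1,3) and (1,2) mixings.

```python
import numpy as np

def ckm(s12, s23, s13, delta):
    # Standard-parametrisation CKM matrix.
    c12, c23, c13 = np.sqrt(1 - s12**2), np.sqrt(1 - s23**2), np.sqrt(1 - s13**2)
    e = np.exp(1j * delta)
    return np.array([
        [c12 * c13,                         s12 * c13,                        s13 / e],
        [-s12 * c23 - c12 * s23 * s13 * e,   c12 * c23 - s12 * s23 * s13 * e,  s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e,   -c12 * s23 - s12 * c23 * s13 * e,  c23 * c13]])

pars = dict(s12=0.2254, s23=0.0414, s13=0.0036)      # representative PDG-like inputs

V_full = ckm(delta=1.20, **pars)                      # physical, CP-violating CKM
V_cp   = ckm(delta=0.0,  **pars)                      # the delta_13 -> 0 prescription

for name, V in [("full CKM", V_full), ("delta13 = 0", V_cp)]:
    Vtd, Vts, Vtb = V[2]
    # RG-induced off-diagonal entries of m_Q^2 in the down basis scale like y_t^2 V_tI^* V_tJ
    print(name, abs(Vtd), abs(np.conj(Vtd) * Vtb), abs(np.conj(Vtd) * Vts))
# with these inputs |V_td| drops from about 0.0087 to 0.0058 (cf. 0.0085 vs 0.0058 quoted
# in the text), and the V_td-driven (1,3) and (1,2) combinations shrink by the same factor
```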
If we project m This remains true for all the other soft terms: the shift in the coefficients is below the percent level for the first five coefficients, and of a few percent for the last four. Thanks to this stability, we can use the coefficients computed in the CPconserving limit together with the true, CP-violating Yukawa couplings to reconstruct the true CP-violating soft-breaking terms with an excellent accuracy. To be precise, this means that if we compute Since such small differences are irrelevant phenomenologically, and since the other soft-breaking terms are equally well reproduced, it is a simple matter to cure at the same time all the contributions to the FCNC from the artefacts of the CP-conserving limit. In practice, it is thus possible to perform the RGE study in the CP-conserving limit, use our prescription on the output file to reconstruct the full-fledged CP-violating flavour structures, and then pass it on to codes like SUSY_FLAVOR [34] to compute reliably the supersymmetric contributions to the FCNC. This is what we actually did to check the compatibility of the scenarios described in the main text with current flavour constraints. This procedure works no matter the CP-conserving prescription. Let us compare, for instance, the δ 13 → 0 limit to the η → 0 limit. In the latter case, |V td | is reduced only by about 7 %, while V ub is suppressed by nearly 60 %, |V ub | η=0 = 0.00132 vs. |V ub | = 0.00349. However, an underestimated V ub does not bear serious consequences because it does not affect the top sector. Loop level FCNC are insensitive to this reduction since d I → d J transitions are dominantly tuned by V * t I V t J . For the same reason, the soft mass terms are closer to the true CP-violating ones, with which nearly matches the real part of the CP-violating result (but stays significantly off for the absolute parts). This can be understood from Eq. (17): the RGE corrections proportional to Y u Y † u depend, to an excellent approximation, only on the third row of the CKM matrix, which stays close to the true one. The η → 0 limit therefore mostly affects tree-level charged-current flavour-changing observables like B → τ ν, and this is easily cured by plugging back the true value for the CKM matrix. In any case, the expansion coefficients extracted in the η = 0 limit are again very close to those obtained in the CP-violating case: From them, the reconstructed soft termm 2 Q matches m 2 Q up to corrections of the order of 10 −7 ×m 2 0 , which is again more than enough phenomenologically. It should be stressed here that our prescription works particularly well when the soft-breaking terms respect the MFV hypothesis, i.e., when none of the leading expansion coefficients are exceedingly large. In that case, their values are extremely resilient to changes in the CKM parameters, and the prescription reproduces the soft-breaking terms with an impressive accuracy. Beyond MFV, the coefficients in the CPconserving and violating cases are not necessarily as close. For example, taking the scenario detailed in Fig. 3, we find that coefficients vary by up to about 20 %. But, crucially, these variations affect mostly the subleading coefficients, whose phenomenological impact is very limited. As a result, the CP-conserving coefficients still permit to reconstruct the full CP-violating soft-breaking terms with an accuracy better than 1 %. 
Thus, even though we have not tested extensively the range of validity of the prescription when moving out Threshold corrections and experimental errors on the CKM matrix It is well known that the CKM matrix runs very slowly. So, for simplicity, when it is evolved using the MSSM beta functions already from the electroweak scale, it is not subsequently corrected for threshold effects. There is, however, a coincidental fact that tends to slightly enhance the error induced by this procedure: the SM and MSSM beta functions for the CKM parameters have opposite signs [37]. As shown in Fig. 4, neglecting the former, the CKM angles are underestimated at all scales. As a result, CKM-driven flavour mixing in the squark soft-breaking terms, i.e. those arising from both the RGE effects and the GUT-scale boundary conditions, are underestimated. Numerically, the effect on the CKM parameters is small but not entirely negligible. Let us use their SM running between M Z and 1 TeV as a measure of their sensitivity to threshold corrections, see Fig. 4. The variations of the Wolfenstein parameters are all much smaller than their corresponding experimental errors, except for the A parameter [38], which increases by about 2 % from M Z to 1 TeV. In view of this, we can estimate the impact of neglecting CKM threshold effects on the soft terms by decreasing the A parameter. As a rough estimate, we send the A parameter to the low end of its 2σ range, A = 0.823 +0.018 −0.042 . Still using the scenario of Fig. 1, this leads to which deviates by up to about 10 % from the values in Eq. (18). This shows that even supposedly negligible shifts in the CKM parameters can build up sizeable effects in the soft-breaking terms. The expansion coefficients, on the other hand, are the same up to completely negligible shifts of the order of 10 −7 . In other words, these coefficients are essentially independent of the threshold corrections even though soft-breaking terms can deviate significantly. So, whenever the threshold effects for the CKM running are not fully taken care of, one can rely on the same strategy as for the CP-limit, i.e., compute the coefficients and then reconstruct accurately the soft-breaking terms by plugging in the physical CKM matrix. As an interesting corollary, the stability of the coefficients offers a very simple procedure to estimate the impact of the CKM experimental errors on the soft-breaking terms. Only one run is needed with the central values of the CKM parameters to get the expansion coefficients, and once known, it suffices to vary the CKM matrix entering the Yukawa couplings used to reconstruct the soft-breaking terms at the low scale. Let us illustrate this procedure. First, we perform the RGE evolution starting with the electroweak-scale CKM matrix obtained by shifting all the Wolfenstein parameters to the extremes of their 2σ ranges [35]: We do not take into account the correlations between these parameters. The ranges of values for the soft-breaking term entries are then This represents sizeable shifts, up to 30 % (40 %) for the real (imaginary) parts. On the other hand, the expansion coefficients do not change significantly: the first five of each expansion being shifted by less than 10 −6 , while the last three of each expansion by less than 10 −4 . They are thus essentially constant over the experimental ranges for the CKM parameters. 
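The procedure sketched in this appendix — a single RGE run to fix the coefficients, followed by a scan of the CKM inputs at the reconstruction stage — can be illustrated schematically as follows. The expansion basis, the coefficient values and the convention used for the down-basis Yukawa spurion are all placeholders (the full expansion of Eqs. (7)-(9) is not reproduced here); only the range of the Wolfenstein A parameter is taken from the text.

```python
import numpy as np

yu = np.array([7e-6, 3.6e-3, 0.93])            # illustrative Yukawa eigenvalues
m0 = 10e3                                       # soft mass unit in GeV (illustrative)

def wolfenstein(lam, A, rho, eta):
    # Lowest-order Wolfenstein CKM matrix, sufficient for this illustration.
    return np.array([
        [1 - lam**2 / 2,                     lam,             A * lam**3 * (rho - 1j * eta)],
        [-lam,                               1 - lam**2 / 2,  A * lam**2],
        [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2,      1.0]])

def reconstruct_m2Q(coeffs, V):
    # Rebuild m_Q^2 from *fixed* expansion coefficients and a given CKM matrix.
    # Placeholder basis {1, Yu Yu^dag, (Yu Yu^dag)^2}; the actual expansion of
    # Eqs. (7)-(9) is richer, so this is purely schematic.
    A_spur = V.conj().T @ np.diag(yu**2) @ V    # Yu Yu^dag in the down basis (one convention)
    basis = [np.eye(3), A_spur, A_spur @ A_spur]
    return m0**2 * sum(c * b for c, b in zip(coeffs, basis))

coeffs = [1.0, -0.6, 0.1]                       # pretend these came from a single RGE run

# scan the Wolfenstein A parameter over the range quoted in the text (0.823 +0.018 -0.042)
for A in (0.781, 0.823, 0.841):
    m2Q = reconstruct_m2Q(coeffs, wolfenstein(0.2253, A, 0.14, 0.35))
    print(A, abs(m2Q[0, 2]))                    # the (1,3) entry tracks the V_td-driven mixing
# only the CKM entering the Yukawa spurion is varied while the coefficients stay fixed,
# so the experimental error on the soft terms follows without re-running the RGEs
```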
This confirms that once the experimental errors on the CKM matrix at a given scale are known, the full RGE analysis needs to be performed only once to derive those on the soft-breaking terms at that scale. Since this rather indirect but nevertheless significant impact of the errors on the CKM matrix elements is in general neglected, this could greatly improve and simplify the study of their effect on the flavour constraints for a given scenario.
9,039.8
2014-09-01T00:00:00.000
[ "Physics" ]
Frequency of Werner helicase 1367 polymorphism and age-related morbidity in an elderly Brazilian population Werner syndrome (WS) is a premature aging disease caused by a mutation in the WRN gene. The gene was identified in 1996 and its product acts as a DNA helicase and exonuclease. Some specific WRN polymorphic variants were associated with increased risk for cardiovascular diseases. The identification of genetic polymorphisms as risk factors for complex diseases affecting older people can improve their prevention, diagnosis and prognosis. We investigated WRN codon 1367 polymorphism in 383 residents in a district of the city of São Paulo, who were enrolled in an Elderly Brazilian Longitudinal Study. Their mean age was 79.70 ± 5.32 years, ranging from 67 to 97. This population was composed of 262 females (68.4%) and 121 males (31.6%) of European (89.2%), Japanese (3.3%), Middle Eastern (1.81%), and mixed and/or other origins (5.7%). There are no studies concerning this polymorphism in Brazilian population. These subjects were evaluated clinically every two years. The major health problems and morbidities affecting this cohort were cardiovascular diseases (21.7%), hypertension (83.7%), diabetes (63.3%), obesity (41.23%), dementia (8.0%), depression (20.0%), and neoplasia (10.8%). Their prevalence is similar to some urban elderly Brazilian samples. DNA was isolated from blood cells, amplified by PCR and digested with PmaCI. Allele frequencies were 0.788 for the cysteine and 0.211 for the arginine. Genotype distributions were within that expected for the Hardy-Weinberg equilibrium. Female gender was associated with hypertension and obesity. Logistic regression analysis did not detect significant association between the polymorphism and morbidity. These findings confirm those from Europeans and differ from Japanese population. Correspondence Introduction Werner syndrome (WS) or Adult Progeria is a rare autosomal recessive disorder characterized as a segmental progeroid syndrome (1).WS patients develop the appearance of advanced aging in middle-age, including age-related disorders usually seen in normal elderly subjects, such as atherosclerosis, diabetes mellitus, osteoporosis, and neoplasias.The major cause of death is myocardial infarction (1)(2)(3)(4)(5). Positional cloning identified WRN as the gene responsible for WS (6).The mutant gene is a member of the RECQ family of helicases together with catalytic and exonuclease activities, and its product probably functions in basic types of DNA transactions, such as replication, repair, recombination, and transcriptional and chromosomal segregation.Recent studies also suggest that RecQ helicases act as genome caretakers (7)(8)(9)(10). Specific WRN polymorphisms have been studied to understand the impact of molecular variants on longevity in WS as well as other age-related disorders and can yield new insights to its biological and pathological role (3,4,8,11).The identification of genetic polymorphisms as risk factors for complex diseases in elderly people can be relevant for their prevention, diagnosis and prognosis. An initial study of WRN:1367 polymorphism carried out in a Japanese population showed that the homozygosity of cysteine predicted a three-fold higher incidence of myocardial infarction than in normal control patients (11), a finding that was subsequently confirmed by Morita et al. 
(12) in the same population.The relatively homogeneous Finnish population shows high rates of coronary atherosclerosis and the putatively protective effect of the 1367 arginine (R) allele has been investigated in centenarians in comparison with newborns.However, no significant differences were observed between the two age groups (3). Population studies of WRN 1074 Leu/ Phe and 1367 Cys/Arg polymorphisms in Finnish, Mexican and North American subjects revealed only a tendency for the 1074 Phe allele to be associated with coronary stenosis and that the 1367 Arg/Arg (RR) genotype tended to protect against coronary artery occlusion, although without statistical significance (4). The involvement of WRN:1367 polymorphism in age-associated disorders was observed by Ogata et al. (13), who reported lower bone density of the lumbar spine in postmenopausal C allele-carrying Japanese women.Long-term hemodialysis patients also showed an association with this polymorphism (14). In the present study, we investigated allele frequencies and the association of WRN:1367 polymorphism with major morbidities that affect the elderly in a São Paulo, Brazil, community.These people participate in a longitudinal study of Brazilian elderly individuals initiated in 1991.This is the first longitudinal study in Brazil analyzing WRN: 1367 polymorphism in elderly people. Population The study population consisted of 383 participants from the Elderly Longitudinal Study (15).This study began in 1991 and originally involved 1667 people older than 66 years living in a district of São Paulo, Brazil.The mean age of this population was 79.80 ± 5.32 years (range: 66-97).Subjects were evaluated clinically every two years and a subsample of 383 in wave 4 (2000-2001) was invited to participate in our study.This population was composed of individuals of European origin (89.2%),Japanese origin (3.3%), Middle Eastern origin (1.81%), and mixed and/or other origin (5.70%). Clinical inquiries were performed to ob-tain information about previous diseases, current medication use, lifestyle, and anthropometric and blood pressure measurements.We informed participants about the study protocol.Physicians performed the physical exam and blood was collected for laboratory procedures.Positivity for cardiovascular disease was considered to be present when individuals self-reported previous myocardial infarction, coronary heart disease, transitory ischemic attack or cerebrovascular disease, and were taking specific medication prescribed by physicians. Those currently using anti-hypertensive drugs or those with systolic blood pressure above 140 or diastolic blood pressure above 95 mmHg were considered to be positive for hypertension (16,17).Those currently taking insulin or oral medication and those with fasting glucose equal to or above 126 mg/dl were considered to be positive for type II diabetes (18). Positivity for neoplasia was considered when individuals self-reported a previous diagnosis with confirmation in their medical record, which presented the results of histological examination among others. Subjects with a body mass index above 27 kg/m 2 (19,20) specifically for 65 years of age and older were considered to be positive for obesity. 
Cognitive function was evaluated by the Mini-Mental State Examination (MMSE) screening instrument ( 21) validated for the Brazilian population (22).An MMSE score lower than 24 (out of 30) has 80-90% sensitivity and 80% specificity for discriminating individuals with low cognition level (roughly classified as dementia) from normal subjects (22,23).Depression was characterized by a score above 5 in a validated Brazilian version of the Older American's Resources and Services questionnaire (24). Although some studies have shown that self-reported past history and medical records are usually concordant for selected medical conditions in the elderly (25), past histories were only accepted when there was also evidence in physical examinations, ECG, CT-scan, or physician reports. The Research Ethics Committee of UNIFESP approved this study and all participants gave written informed consent according to the Helsinki Declaration. DNA extraction Whole blood was collected into tubes containing 0.1% EDTA and genomic DNA was isolated according to Lahiri and Nurnberger Jr. (26). PCR products were incubated with PmaCI restriction endonuclease and digestion was performed for 3 h at 37ºC.Restriction fragment length polymorphism products were analyzed by 4% GTG agarose gel electrophoresis and then stained with ethidium bromide.The C allele (cysteine) produces only a 193-bp fragment, whereas the R allele (arginine) produces two fragments, one of 101 bp and the other of 92 bp. Figure 1 shows the allele and genotype patterns. The DNA markers were from Gibco BRL Products (New York, NY, USA). Statistical analysis Genotype and allele frequencies were calculated as described by Emery (27).The chi-square test was applied to determine if genotype distributions were within Hardy-Weinberg equilibrium. Descriptive statistics and logistic regression analysis were performed considering two allele groups: one with the presence of the R allele (RR + CR genotypes together) and the other with the absence of the R allele (CC genotype) in view of the low frequency of the RR genotype.The presence of morbidity interactions was evaluated and significance was calculated by the chi-square test (α = 0.05).Statistical analysis was performed using SPSS 10.0 software. The mean age of subjects with the presence of the R allele (RR and CR genotypes together) was 80.05 ± 5.14 (range: 66-94 years) and the mean age of subjects with absence of the R allele (CC genotype) was 79.64 ± 5.43 (range: 69-97 years).We used t-test statistics to evaluate the two allele groups in relation to age. The odds ratios (OR) and 95% confidence interval (95% CI) were calculated by logistic regression analysis considering the frequency among people with (presence) the R allele compared to that among people without (absence) the R allele.Morbidity, sex and age were considered to be independent variables in the model.The adjusted OR and 95% CI for morbidity interactions were calculated by computing the exponential value obtained by adding the regression coefficients for variable interactions and the coefficients of the respective variable reference. Results The allele frequencies observed in our population were 0.788 for the C allele and 0.211 for the R allele.The observed genotype frequencies were: CC = 0.608, CR = 0.36 and RR = 0.031.Genotype distributions were within Hardy-Weinberg equilibrium (data not shown).The comparison of each allele group concerning age did not show any significant difference (t = 0.73; d.f.= 38 1; P = ns). 
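For reference, the allele-frequency and Hardy-Weinberg calculations described above can be reproduced from the reported genotype frequencies. The genotype counts used below (CC = 233, CR = 138, RR = 12 out of N = 383) are reconstructed from those frequencies and are therefore approximate, and the logistic-regression coefficient at the end is a hypothetical value used only to show how an odds ratio and its 95% confidence interval are obtained.

```python
from math import exp
from scipy.stats import chi2

# approximate genotype counts reconstructed from the reported frequencies (N = 383)
n_cc, n_cr, n_rr = 233, 138, 12
n = n_cc + n_cr + n_rr

# allele frequencies
p_c = (2 * n_cc + n_cr) / (2 * n)
p_r = 1 - p_c
print(round(p_c, 3), round(p_r, 3))                  # ~0.789 and ~0.211, as reported

# Hardy-Weinberg equilibrium: chi-square test with 1 degree of freedom
observed = [n_cc, n_cr, n_rr]
expected = [n * p_c**2, 2 * n * p_c * p_r, n * p_r**2]
chi_sq = sum((o - e)**2 / e for o, e in zip(observed, expected))
p_value = 1 - chi2.cdf(chi_sq, df=1)
print(round(chi_sq, 2), round(p_value, 3))           # p > 0.05: within Hardy-Weinberg equilibrium

# odds ratio and 95% CI from a logistic-regression coefficient (hypothetical beta and SE)
beta, se = 0.15, 0.25
print(exp(beta), exp(beta - 1.96 * se), exp(beta + 1.96 * se))
```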
Table 2 shows the number of subjects with and without the R allele in relation to sex, age and morbidity as well as the logistic regression results.We did not find any significant association between WRN polymorphism and morbidity, sex or age.The adjusted OR and 95% CI for morbidity interactions were: 0.78 and 0.31-1.97for cardiovascular disease and depression, 0.70 and 0.26-1.85for diabetes and neoplasia, 1.03 and 0.27-3.95for dementia and depression, 0.40 and 0.12-1.35for hypertension and obe-sity, and 0.99 and 0.22-4.43 for diabetes and obesity, respectively.These results did not indicate any significant association between the polymorphism and morbidity. Discussion In the present study, we investigated the WRN:1367 allele and genotype frequencies as well as their association with major morbidities affecting elderly people in a Brazilian population sample ranging from 66 to 97 years of age. Genotype distributions were within Hardy-Weinberg equilibrium and did not show any significant age-related effect sufficient to alter gene frequency.Our elderly population showed allele frequencies similar to those of populations of European origin and different from those of Japanese samples (11,12).In a Finnish population, the R allele frequency did not differ between newborns and centenarians, indicating the absence of a protective age-related effect of this allele.The allele frequencies also did not differ among North American adults, Mexican newborns and Finnish newborns and centenarians (3,4).Some of the diseases of the subjects showed gender association, such as obesity and hypertension in women (Table 1).These findings agree with those observed among all residents over 60 in Bambuí, a community in Minas Gerais State (28).Another epidemiological investigation conducted in the Northeast and Southeast regions of Brazil also showed a higher prevalence of obesity and hypertension among women over 50 years (29). The comparison between the number of subjects with R allele with those without R allele in relation to each morbidity (Table 2) did not differ significantly as determined by the chi-square test (data not shown). Logistic regression analysis did not show a significant association of the polymorphism with morbidity, sex, age, or morbidity interactions.The absence of association of this polymorphism with cardiovascular diseases in our Brazilian cohort study confirmed findings observed in samples with similar allele frequencies and differed significantly from findings obtained for Japanese samples. The occurrence of co-morbidities in elderly people is common.In our population sample we also observed an association of depression with cardiovascular disease, of type II diabetes with neoplasia and with obesity and of depression with dementia.Major depression is known to be related to higher cardiovascular mortality as confirmed in a prospective cohort study of elderly Dutch men and women and in a study of 12,866 men recruited from the Multiple Risk Factor Intervention Trial Study (30,31).The association of type II diabetes with pancreatic carcinoma and breast cancer as well as obesity has been reported (32)(33)(34).There are also reports in the literature on the association between depression and dementia (35). Codon WRN:1367 polymorphism did not show a significant association with non-insulin-dependent diabetes, as had been reported for a Japanese sample (11). 
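The adjusted interaction odds ratios quoted above follow from exponentiating sums of regression coefficients, as described in the Statistical analysis section. The sketch below illustrates that calculation on synthetic stand-in data generated with roughly the reported prevalences; because the individual-level cohort data are not available here, the fitted numbers will not reproduce those of Table 2.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 383

# stand-in variables drawn with roughly the reported prevalences (purely illustrative)
cvd = rng.binomial(1, 0.22, n)            # cardiovascular disease (~21.7 %)
dep = rng.binomial(1, 0.20, n)            # depression (~20 %)
allele_r = rng.binomial(1, 0.39, n)       # R-allele carrier (CR + RR genotypes)

# logistic regression of R-allele presence on the two morbidities and their interaction
X = sm.add_constant(np.column_stack([cvd, dep, cvd * dep]))
fit = sm.Logit(allele_r, X).fit(disp=0)
b, cov = fit.params, fit.cov_params()     # coefficients: [const, cvd, dep, cvd*dep]

# adjusted OR for the joint condition: exponentiate the sum of the main-effect and
# interaction coefficients; its standard error follows from the covariance matrix
c = np.array([0.0, 1.0, 1.0, 1.0])
est, se = c @ b, np.sqrt(c @ cov @ c)
print(np.exp(est), np.exp(est - 1.96 * se), np.exp(est + 1.96 * se))
```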
Investigating WRN polymorphism and haplotype frequencies in different ethnic groups certainly strengthens the power of association and predisposition studies on agerelated morbidities since distinct populations exhibit different and specific gene frequencies (4,36). In addition, the functional significance of Cys/Arg WRN gene polymorphism in agerelated disorders is unknown.It is assumed that the R allele may confer improved WRN protein function in the nuclear localization signal, stability or catalytic activity interaction (7).We are just beginning to understand the molecular basis of the contribution of this polymorphism to protein function and its role in the aging process and pathogenesis of age-related diseases. Our findings, however, do not eliminate the possibility of WRN gene or other polymorphisms and haplotypes playing a role in the etiopathogenesis of these morbidities. Table 2 . Number of subjects with the presence or absence of the R allele in relation to sex, age and morbidity and results of logistic regression analysis. Table 1 . Number and percent of female and male subjects with or without each morbidity.
2,990.4
2005-07-01T00:00:00.000
[ "Biology", "Medicine" ]
Experimental and theoretical study on wax deposition and the application on a heat insulated crude oil pipeline in Northeast China. Received: 21 October 2019 / Accepted: 6 December 2019 Abstract. An experimental loop apparatus of heat insulated waxy crude oil pipeline was established to study the wax deposition behaviors. The effects of flow rate and ambient temperature on the thickness and wax content of the deposition layer were investigated. A kinetic calculation model for the thickness and wax content of the deposition layer in a heat insulated crude oil pipeline was established based on the principles of molecular diffusion, aging and shear energy. The results calculated by the model are in good agreement with the experimental values. The wax deposition thickness of a heat insulated crude oil pipeline in Northeast China was predicted for different seasons and operation times according to the theoretical model, which is anticipated to provide a scientific basis for formulating the wax removal cycle of the pipeline. The predicted results showed that the thickness of the wax deposition layer increases first and then decreases along the pipeline. Introduction Pipeline transportation is a common method for long-distance transport of blended crude oil. Safe operation and reduced energy loss during transportation are frequently discussed topics in current research [1,2]. In the process of waxy crude oil pipeline transportation, the formation of wax deposits reduces the effective conveying area of the pipeline, which decreases the conveying capacity of the pipeline, increases the energy consumption, and can even cause serious safety hazards and economic losses by blocking the pipeline [3]. Therefore, it is of great significance to predict the wax deposition rate, thickness and wax content of the deposit during the transportation of waxy crude oil through pipelines, for the purpose of formulating a wax removal plan [4]. Researchers have conducted many studies on wax deposition in oil pipelines. It has been acknowledged that molecular diffusion, shear dispersion, gravity settling and Brownian diffusion are the main mechanisms governing the wax deposition process [5][6][7]. Hamouda and Viken (1993), and Brown and Niesen (1993) believed that the effect of shear dispersion on the wax deposition process is not significant [8,9]. In addition, the effect of Brownian diffusion on wax deposition is negligible [10]. Singh et al.
considered the influence of aging on wax deposition on the basis of molecular diffusion and constructed a wax deposition prediction model for waxy crude oil pipeline [11]. Hernandez et al. improved it on the basis of the model built by Singh et al., taking into account the effects of molecular diffusion, shear stripping and aging and improved the prediction accuracy of the calculation model [12]. Zheng et al. developed an enhanced wax deposition model considering the non-Newtonian characteristics of waxy oil using the law of the wall method [13]. However, the current studies mentioned above were all conducted on the non-thermal pipeline which temperature field is very different from that on heat insulated pipeline. There have been few reports on the study of wax deposition behaviors for heat insulated pipeline transporting waxy crude oil. In the present study, an indoor heat insulated loop experimental apparatus was established considering the actual situation of a new heat insulated pipeline in Northeast China which transports waxy crude oil. The wax deposition behaviors and the influence factors of wax deposition for various experimental conditions were investigated. A theoretical calculated model of wax thickness and wax content in heat insulated pipeline was established based on the principle of molecular diffusion, aging and shear effects. The results of the loop experiment and the model calculation results were compared and analyzed. The wax deposition thickness for the on-site pipeline was predicted for different seasons and operation time. 2 Experiment study 2.1 Experimental apparatus Figure 1 is a schematic view of the indoor heat insulated loop experimental apparatus. The apparatus includes storage tank, circulation tank, pumps, pipelines, mechanical agitation system, water bath system, and Heat and Temperature Control Systems (HTCS). The circulation tank is used to store and heat crude oil at an operating pressure of 1.5 MPa. The pipelines include a reference section and a test section, both of which have the same casting material and the outer walls covered with heat insulation layers. The heat insulation layers are wrapped by water jackets on the outside. The water baths provide the water jackets constant temperature water to simulate the actual ambient temperature. The sizes of the pipes are U48 Â 4 mm while the length is 1.2 m, and the flow in the pipes could be controlled within the range from 0 to 100 dm 3 /min. Experimental procedure The waxy crude oil was collected from the oil transportation station and injected into the storage tank (Fig. 1). The crude oil was heated to a higher temperature (e.g., 50°C) and stirred to be flowable before the experiment started. Meanwhile, the temperatures of the two water baths were adjusted to the target values. The temperature of water bath on reference section was higher than the wax precipitation point of crude oil for the purpose of that no wax was deposited on the reference pipe wall. Conversely, the temperature of water bath on test section was lower than the wax precipitation point of crude oil to ensure wax could precipitate on the test pipe wall. After that, the feed pump was started and the oil was feed into the whole pipe system. Then the storage tank and feed pump were turned off, and the circulation tank and pump were used to keep oil flowing in the pipe during the experiment. By controlling water temperature, the test pipe wall was coated with wax in the inner surface but the reference pipe was not. 
The effective circulation area in test pipe decreased and the frictional resistance increased, which induced that the pressure drop on test section was higher than that on the reference section. The wax deposition thickness on test section could be inversely calculated through hydrodynamic calculation, i.e., differential pressure method. This method can be performed online without interrupting the experiment and it is the method available that can record the development of the wax thickness over time [14]. After the experiment, the oil was collected to the storage tank by air compressor. The test section was taken off carefully and the wax fraction in deposition layer was measured by Differential Scanning Calorimetry (DSC). The 5~10 mg gel sample was collected from the depositon layer at various sites. Before measurements, the DSC apparatus (Mettler Toledo DSC) was calibrated with ultra-pure indium. The temperature was set at 80°C and keeped at this temperature for 3 min to melt the sample evenly. After that, the gel was cooled with a rate of 5°C/min [15]. The typical thermal spectra curve of the oil used in this study was shown in Figure 2. The heat flow increased from the Wax Appearance Temperature (WAT), which is 43.38°C for the oil used in this study. The first wax precipitation peak occurred at 40.67°C and the second one at 19.63°C. The area (S) enclosed by the thermal spectra curve and baseline from À20°C to WAT was calculated by integral. The average crystallization heat of the wax (Q 0 ) for the present study is about 200 J/g. The wax fraction in gel was calculated by F w = S/(vQ 0 ), in which v is the cooling rate,°C/s. Development of the theoretical calculation model 3.1 Mass balance There is a radial temperature gradient inside the oil pipeline because of the temperature difference between oil and environment. The local solubility of the wax molecule in the oil flow is closely related to the in-site oil temperature. According to the molecular diffusion theory, wax crystals will precipitate from the crude oil and cause a radial wax concentration gradient in the pipe when the temperature of the pipe wall is lower than the WAT. The radial wax concentration gradient could accelerate wax deposition to pipe wall. The mass balance relationship of the process is shown in equation (1) where R is the radius of the pipe, r i is the effective radius for oil flow, F w is the weight fraction of solid wax in the wax deposition layer, L is the length of the pipe, q gel is the density of wax deposition layer, k m is the mass transfer coefficient, C wb is the wax molecular concentration in oil, C ws is the solubility of wax molecules in oil which is a function of the surface temperature of the wax deposition layer T i . In equation (1), r i and F w are both the function of operation time t. Assuming that the wax and the crude oil have the same density, equation (1) can be converted into To facilitate the calculation, the dimensionless thickness of wax deposition layer is set to be d = (R À r i )/R, then equation (2) is reduced to The mass transfer coefficient k m in equation (3) is calculated according to where Sh is the Sherwood number in the mass transfer process, D wo is the wax molecular diffusivity in oil. 
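Although equations (1)-(4) are referenced above rather than displayed, their content can be sketched from the variable definitions given: the layer grows at a rate set by the radial wax flux k_m(C_wb − C_ws(T_i)) divided by ρ_gel F_w R. The minimal sketch below assumes k_m = Sh·D_wo/(2r_i), treats the bulk oil as saturated at its own temperature (so that C_wb = C_ws(T_bulk)), takes the solubility fit quoted later in equation (9), and uses purely illustrative property values and units (solubility treated as kg/m³).

```python
import numpy as np

# illustrative geometry and properties for the test section
R = 0.020            # inner pipe radius, m
rho_gel = 850.0      # density of the deposit, kg/m^3
D_wo = 5.0e-10       # wax molecular diffusivity in oil, m^2/s
Sh = 15.0            # Sherwood number from the relevant laminar correlation
F_w = 0.05           # wax fraction of a young deposit (mostly trapped oil)
T_bulk, T_i = 40.0, 35.0   # bulk oil and deposit-surface temperatures, deg C

def C_ws(T):
    # experimentally fitted solubility of wax in the oil (the cubic quoted below in Eq. (9))
    return -0.0017 * T**3 + 0.0775 * T**2 + 3.8893 * T + 70.765

def d_delta_dt(delta):
    # Growth rate of the dimensionless thickness delta = (R - r_i)/R from a radial
    # molecular-diffusion balance (a thin-layer sketch of Eqs. (1)-(3); the bulk oil
    # is assumed saturated at its own temperature, so C_wb = C_ws(T_bulk)).
    r_i = R * (1.0 - delta)
    k_m = Sh * D_wo / (2.0 * r_i)                        # mass transfer coefficient, m/s
    flux = k_m * max(C_ws(T_bulk) - C_ws(T_i), 0.0)      # wax flux towards the wall
    return flux / (rho_gel * F_w * R)

print(d_delta_dt(0.01) * 3600.0)    # fractional thickness gained per hour (illustrative)
```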
The Hayduk-Minhas empirical formula was optimized using SPSS software to obtain a suitable formula for calculating the diffusivity of wax molecules in the blended waxy crude oil [16], as shown in equation (5): D_wo = 22.9 × 10⁻¹³ × T^2.671 × μ^(10.2/V − 0.791), where T is the absolute temperature of the oil, μ is the dynamic viscosity of the oil, and V is the molar volume of wax. The value of Sh is calculated depending on the flow pattern of the oil in the pipeline. When the crude oil flow in the pipeline is laminar, Sh can be calculated using the Hausen correlation if the distance is long [17] and the Sieder and Tate correlation if the distance is short [18], as shown in equation (6); the choice between the two forms depends on whether the Graetz number Gz_m exceeds 100. Gz_m is the Graetz number in the mass transfer process, which can be calculated using equation (7) [19], where Re is the Reynolds number, calculated by Re = 2Qρ/(πr_iμ), x is the axial position along the pipe, and Sc is the Schmidt number, calculated by Sc = μ/(ρD_wo) [11]. When the crude oil is in a fully developed turbulent state, the local Sherwood number in the pipe can be calculated by the Dittus-Boelter equation [20], as shown in equation (8). The wax molecular solubility at the wax deposition layer changes with the oil temperature, and the wax molecular solubility curve of the crude oil obtained by experiment is shown in equation (9): C_ws(T_i) = −0.0017T_i³ + 0.0775T_i² + 3.8893T_i + 70.765. The derivative of the wax molecular solubility with respect to temperature can then be expressed as equation (10). Due to the temperature gradient between the surface of the wax deposition layer and the inner wall of the tube, a concentration gradient of wax molecules also exists inside the wax deposition layer. Therefore, there is a diffusion flux of wax molecules inside the wax deposition layer as well, which causes the wax content inside the deposition layer to increase gradually. According to the mass balance, the formation process of the wax deposition layer can be expressed as equation (11), which can be simplified to equation (12), where D_e is the effective diffusivity of wax inside the deposition layer, calculated by the Cussler correlation [21] as equation (13), where a is the wax crystal shape factor, which can be obtained by observing the wax crystal form of the deposition layer. The wax crystal morphology was processed and analyzed with the Image J software, and the relationship between a and the volume flow rate of crude oil Q_v was fitted as equation (14). Equations (3) and (12) were then combined to obtain the wax content of the deposition layer as a function of time, as shown in equation (15). Energy balance The schematic diagram of the thermal analysis of the indoor waxing test pipeline is shown in Figure 3. It is assumed that the whole test section is uniformly covered by wax, and the thickness of the wax deposition layer is d_w. The temperature of the outer wall of the insulation layer (T_2) is equal to the temperature of the water bath for the indoor test pipeline, or to the soil temperature for the actual buried pipeline. The center temperature of the oil, T_0, is usually known. The heat flow balance relationship at each radius of the heat insulated crude oil pipeline can be obtained according to the theory of thermal resistance, as shown in equation (16).
where h_i is the heat transfer coefficient, T_0 is the center temperature of the oil, T_w is the temperature of the inner wall of the pipe, T_1 is the temperature of the outer wall of the pipe, k_w is the effective thermal conductivity of the wax layer, k_steel is the thermal conductivity of the pipe wall, k_ins is the thermal conductivity of the insulation layer, R_1 is the outer radius of the pipe, and R_2 is the outer radius of the insulation layer. In equation (16), three thermal resistances per unit length (R_T) can be defined as follows. Equation (16) can then be simplified to give the expressions for T_w and T_i shown below. The temperature gradient at the surface of the deposited layer can be obtained after further derivation. The heat transfer coefficient h_i can be calculated by the following equation [11], where Nu is the Nusselt number, whose calculation method is similar to that of the Sherwood number mentioned above. When the oil in the pipeline is in laminar flow, it can be calculated using the Hausen correlation [17] (with the branch depending on whether Gz_h > 100), where Gz_h is the Graetz number in the energy transfer process, which can be calculated using equation (23). Pr is the Prandtl number, whose calculation is as follows, where c_p is the constant-pressure specific heat capacity of the crude oil and k_0 is the thermal conductivity of the crude oil. When the oil in the pipeline is in a fully developed turbulent state, the Nusselt number in the pipe can be calculated using the Colburn equation [20]. The effective thermal conductivity of the deposition layer, k_w, can be calculated using the EMT model [22], where k_wax and k_oil are the thermal conductivities of wax and oil, respectively. For a long-distance pipeline, the temperature of the crude oil at different axial positions can be calculated using the temperature drop formula [23], where T_z is the temperature of the oil at a distance z from the starting point of the pipe, T_s is the temperature of the oil at the starting point of the pipe, T_e is the environment temperature, k_total is the total heat transfer coefficient, and G is the mass flow of oil. The total heat transfer coefficient for a buried pipeline is calculated by [24], where D is the calculated diameter, taken as the average of the inner and outer diameters of the insulation layer for the insulated pipe, D_1 is the effective diameter of the pipe, D_i and D_i+1 are the inner and outer diameters of the pipe and the insulation layer, k_i is the corresponding thermal conductivity of the pipe and the insulation layer, and a_2 is the convective heat transfer coefficient from the outer wall of the insulation layer to the soil. Kinetic calculation model for wax deposition In the process of waxy crude oil transportation, wax deposition is caused by the interaction of molecular diffusion, aging and shearing. Due to the concentration gradient of wax molecules in the oil flow, wax molecules continuously diffuse from the higher-concentration center of the oil to the lower-concentration wax deposition layer. Additionally, the shear stress generated during transportation causes wax molecules to peel off at the interface of the wax deposition layer. In Section 3.1, only molecular diffusion and the aging effect were taken into account. Next, shear thinning is added through a flux analysis. The kinetic principle of this process is shown in Figure 4.
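The series thermal-resistance balance described around equation (16) can be sketched as follows. The grouping into R_T terms in equations (17)-(19) and the exact expressions for T_w and T_i are not reproduced above, so the sketch simply chains an inner film resistance with cylindrical conduction resistances; all symbol names and numbers are illustrative assumptions, not the authors' values.

```python
import math

def deposit_interface_temperatures(t0, t2, h_i, r_i, r_w, r1, r2,
                                   k_wax, k_steel, k_ins):
    """Per-unit-length series-resistance sketch for one insulated pipe cross-section.

    t0 : oil center temperature            t2 : outer-insulation (bath or soil) temperature
    h_i: inner film coefficient (W/m^2 K)  r_i: radius of the flowing oil core (m)
    r_w: inner pipe radius = outer radius of the wax layer (m)
    r1 : outer pipe radius (m)             r2 : outer insulation radius (m)
    Returns (heat flow per metre, T_i at the wax surface, T_w at the inner pipe wall).
    """
    r_film  = 1.0 / (2.0 * math.pi * r_i * h_i)
    r_wax   = math.log(r_w / r_i) / (2.0 * math.pi * k_wax)
    r_steel = math.log(r1 / r_w) / (2.0 * math.pi * k_steel)
    r_ins   = math.log(r2 / r1) / (2.0 * math.pi * k_ins)
    q = (t0 - t2) / (r_film + r_wax + r_steel + r_ins)    # W per metre of pipe length
    t_i = t0 - q * r_film                                  # wax-deposit surface temperature
    t_w = t_i - q * r_wax                                  # inner pipe-wall temperature
    return q, t_i, t_w

print(deposit_interface_temperatures(t0=45.0, t2=28.0, h_i=250.0,
                                     r_i=0.018, r_w=0.020, r1=0.022, r2=0.042,
                                     k_wax=0.25, k_steel=45.0, k_ins=0.035))
```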
The mass balance of the unidirectional-flow wax deposition process can be described as follows, taking into account these quantities: the cumulative total weight of the wax deposition layer, "T"; the mass of the wax diffused into the deposition layer, "A"; the mass of the crude oil diffused out of the deposition layer, "B"; the wax increment of the newly formed deposition layer, "C"; the mass of the stripped wax caused by shearing, "D"; and the mass of the crude oil in the newly formed deposit, "E". This process is described using the mass balance equation shown below [12]. A and B are of equal mass and therefore cancel each other out in equation (29). T, C, D and E are described below. Among them, J_c represents the convective flux of wax molecules carried by the oil to the interface of the deposition layer; its expression is shown in equation (31). J_d represents the diffusion flux of wax molecules that have diffused into the deposition layer; its expression is shown in equation (32). J_s represents the shear flux of wax molecules in the deposition layer. Based on the principle of shear energy, the shear flux relationship describing the single-phase flow wax deposition process is established as shown in equation (33) [25], where k_s is the shear coefficient, which can be determined by regression of laboratory data, and (dE_s/dL)|_(r_i) is the unit shear energy at a distance L from the initial position of the pipe, evaluated at the distance r_i from the center of the pipe flow to the surface of the deposit. By replacing r_i with δ and substituting equations (30a)-(33) into equation (12), the wax deposition calculation model can be expressed as equation (34). Solution of the calculation model In equations (15) and (34) there are two unknown variables in total, i.e., δ and F_w. The equation set is closed and theoretically solvable. However, it is a system of differential equations and cannot be solved directly. In this study, the Euler method was used to solve the equation set numerically. The kinetic calculation model of wax deposition in a waxy crude oil pipe was solved, and the variation of the wax deposition thickness and the wax content in the deposition layer with time was obtained. It is assumed that a very thin wax deposit is already present on the wall at the initial stage, and that the initial wax content in the deposition layer is the same as that in the crude oil. The pipe is divided into N segments along its axial direction, and the time is divided into M steps. The wax deposition law at different positions and times in the pipe is then analyzed. The block diagram of the wax deposition calculation model is shown in Figure 5. Results and discussion The wax deposition thickness in the experimental pipe was obtained by solving the wax deposition calculation model proposed in this paper. The calculated time histories of the thickness and wax content of the wax deposition layer in the pipe were compared with the experimental data. Figure 6 shows the comparison of the experimental and calculated values of the deposition layer thickness and wax content at three different flow rates at a constant ambient temperature. The water bath temperature of the experimental loop was set at 28°C. The flow rates in these runs were 1.2, 1.5 and 1.8 m³/h, respectively. The shear coefficient k_s in equation (33) was fitted inversely by regression of the laboratory data and is about 4.5 × 10^−12 for the oil and conditions in this study. The remaining parameters and oil properties are shown in Table 1.
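The numerical scheme described above (forward Euler in time, with the pipe split into N axial segments) can be sketched generically. The right-hand sides of equations (15) and (34) are not reproduced in the text, so the rate functions below are placeholders with made-up constants whose only purpose is to show the integration loop; they are not the authors' model.

```python
def euler_wax_model(d_delta_dt, d_fw_dt, delta0, fw0, t_end_h, n_steps):
    """Forward-Euler sketch for the coupled unknowns delta(t) and F_w(t).

    d_delta_dt(delta, fw, t) and d_fw_dt(delta, fw, t) stand in for the right-hand
    sides of equations (34) and (15); in the full model they would be evaluated
    separately for each axial segment of the pipe.
    """
    dt = t_end_h / n_steps
    delta, fw, t = delta0, fw0, 0.0
    history = [(t, delta, fw)]
    for _ in range(n_steps):
        # Both rates are evaluated with the old state before updating (explicit Euler).
        delta, fw = delta + dt * d_delta_dt(delta, fw, t), fw + dt * d_fw_dt(delta, fw, t)
        t += dt
        history.append((t, delta, fw))
    return history

# Placeholder rates: growth that saturates as shear removal balances diffusion,
# and slow aging of the wax content toward an asymptote (illustrative only).
growth = lambda d, fw, t: 0.02 * (1.0 - d / 0.15) - 0.05 * d
aging  = lambda d, fw, t: 0.01 * (0.6 - fw)

for t, d, fw in euler_wax_model(growth, aging, delta0=1e-3, fw0=0.05,
                                t_end_h=48.0, n_steps=480)[::120]:
    print(f"t = {t:5.1f} h   delta = {d:.4f}   F_w = {fw:.3f}")
```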
Figure 6 shows that the thickness of the wax deposition layer increases with the experimental time and finally stabilizes at a specific value. This is mainly because the shear thinning effect is enhanced as the velocity increases (the velocity being inversely proportional to the effective radius, since the flow rate is fixed), as shown in equation (33). In addition, the stabilized thickness decreases as the flow rate increases. The wax content increases with the experimental time, and with the flow rate at a given time, indicating that aging occurs during the wax deposition process. When the temperature of the water bath in the experimental loop is constant, the theoretical values at the three different flow rates agree well with the experimental values. Effect of ambient temperature on wax deposition The experimental loop flow rate Q_v was set to 1.5 m³/h. The water bath temperature of the experimental loop, T_2, was kept at 24, 28 and 32°C, respectively. The related parameters and oil physical properties are shown in Table 1. As shown in Figure 7, the thickness of the wax deposition layer decreases with increasing water bath temperature. This is attributed to the fact that a lower bath temperature induces a lower pipe-wall temperature and facilitates the diffusion of wax crystals toward the pipe wall. The wax content increases with the ambient temperature for the same experiment duration. When the flow rate in the pipe is constant, the higher the ambient temperature, the smaller the thickness of the deposit layer. The thinner deposit layer increases the temperature gradient within the deposit layer, which increases the diffusion flux inside it, leads to the continuous diffusion of wax molecules into the deposit layer, and thus increases the wax content of the deposit layer. Prediction of wax deposition thickness for the on-site pipeline In this study, a crude oil heat-insulated pipeline in Northeast China was selected as the research object. The calculation model proposed above was used to predict the wax deposition thickness of this pipeline. The operating parameters and oil physical properties are shown in Table 2. The distribution of the wax deposition thickness along the pipe and its variation with time were predicted for winter and for summer, as shown in Figure 8. The prediction results show that the wax deposition thickness in the crude oil heat-insulated pipeline tends to increase first and then decrease along the pipe, in both winter and summer. This phenomenon has also been observed in practice. The maximum wax deposition thickness appears at 6 km along the axial direction in winter and at 7.5 km in summer. The waxing process in winter was taken as an example to analyze this phenomenon. The oil temperature drops rapidly from the starting point of the pipe to the 6 km position. Given the low wall temperature, there is a large temperature gradient between the center of the pipe and the interface of the deposition layer, driving the diffusion of wax molecules. This is considered to be the reason why the wax deposition thickness grows rapidly in the front section of the pipe. After a turning distance (i.e., 6 km in winter), the temperature at the center of the oil has dropped to a lower value, which causes the temperature gradient between the center of the oil and the deposition layer to decrease, so the driving force for wax deposition decreases. This results in a gradual decrease in the thickness of the wax deposition layer.
The thickness of the wax deposition layer in the pipeline in winter is greater than that in summer. This is because the ambient temperature in winter is lower than in summer, resulting in a larger temperature gradient between the center of the oil and the wall of the pipe, which accelerates the deposition rate of wax molecules. Conclusion 1. In view of the wax deposition conditions of waxy crude oil heat-insulated pipelines, an experimental wax deposition loop for a crude oil heat-insulated pipeline was established based on the differential pressure measurement principle. A theoretical kinetic calculation model of wax deposition for crude oil heat-insulated pipelines was established, considering the influence of molecular diffusion, aging and shearing on the wax deposition process. 2. The effects of flow rate and ambient temperature on the wax deposition process were studied by both experimental and theoretical methods. The experimental results show that the wax deposition thickness decreases and the wax content increases as the flow rate increases. Likewise, the wax deposition thickness decreases and the wax content increases as the ambient temperature increases. The calculated results of the model are in good agreement with the experimental values. 3. The wax deposition kinetic calculation model was used to predict the wax deposition law of the on-site heat-insulated pipeline in different seasons. The results showed that the thickness of the wax deposition layer increases first and then decreases along the pipeline.
5,989.2
2020-01-01T00:00:00.000
[ "Physics" ]
Reward Enhances Online Participants’ Engagement With a Demanding Auditory Task Online recruitment platforms are increasingly used for experimental research. Crowdsourcing is associated with numerous benefits but also notable constraints, including lack of control over participants’ environment and engagement. In the context of auditory experiments, these limitations may be particularly detrimental to threshold-based tasks that require effortful listening. Here, we ask whether incorporating a performance-based monetary bonus improves speech reception performance of online participants. In two experiments, participants performed an adaptive matrix-type speech-in-noise task (where listeners select two key words out of closed sets). In Experiment 1, our results revealed worse performance in online (N = 49) compared with in-lab (N = 81) groups. Specifically, relative to the in-lab cohort, significantly fewer participants in the online group achieved very low thresholds. In Experiment 2 (N = 200), we show that a monetary reward improved listeners’ thresholds to levels similar to those observed in the lab setting. Overall, the results suggest that providing a small performance-based bonus increases participants’ task engagement, facilitating a more accurate estimation of auditory ability under challenging listening conditions. There is a growing interest in remote testing, both in the context of basic research (Anwyl-Irvine et al., 2020;Backx et al., 2020;Hartshorne et al., 2019;Shapiro et al., 2020) and clinical screening (Paglialonga et al., 2020;Sevier et al., 2019;Shafiro et al., 2020;Sheikh Rashid et al., 2017;Swanepoel & Clark, 2019;Swanepoel et al., 2019;Watson et al., 2012).The ability to conduct experiments online facilitates rapid data acquisition and provides access to a larger and more diverse subject pool than that available for lab-based investigations (Casey et al., 2017).However, in contrast to the lab setting, online experiments are associated with a lack of control over participants' equipment, environment, and engagement (Chandler & Paolacci, 2017;Clifford & Jerit, 2014).These limitations may be particularly detrimental to auditory assessments that often rely on highly controlled stimulus delivery and necessitate focused engagement from the participant (e.g., Harrison & Mu¨llensiefen, 2018). Tasks that require effortful listening (e.g., when trying to estimate performance at threshold, or the just noticeable difference in a particular acoustic feature) may be particularly susceptible to issues related to task engagement (including attention, motivation, and commitment).In laboratories or clinics, engagement is controlled by creating a "sterile environment" that isolates the participants from potential sources of distraction (e.g., their mobile phone, software notifications, doorbell, housemates, etc.).Compliance and motivation are promoted through face-to-face interaction with the experimenter (Gu eguen & Pascual, 2000;Karakostas & Zizzo, 2016).To understand how these factors affect data obtained from online participants, in this series of experiments, we investigated how performance on one version of widely used auditory speech-in-noise perception tasks differs between in-lab and online settings and whether monetary reward may be used as a mean to encourage participant engagement. We used an adaptive speech-in-noise task based on target materials similar to the Coordinate Response Measure (CRM) corpus of Bolia et al. 
(2000).The CRM measures the ability to identify two keywords (color and number words) in a spoken target sentence always cued by a so-called call sign.Participants are instructed to attend to the target sentence while ignoring a masker.The CRM is part of a family of adaptive speech reception in noise tests (see also digit-in-noise test commonly used in audiology practice; De Sousa et al., 2019).These paradigms have been shown to be powerful tests of listening in complex environments because of their sensitivity to small intelligibility changes in highly noisy backgrounds, their applicability to testing with different maskers, and their relative independence from semantic/syntactic cues (Brungart, 2001;De Sousa et al., 2020;Eddins & Liu, 2012;Humes et al., 2017).Accumulating work demonstrates that speech reception thresholds (SRTs) estimated with an adaptive CRM task correlate with audiometric thresholds and with age (de Kerangal et al., 2020;Schoof & Rosen, 2014;Venezia et al., 2020), rendering it a potentially efficient proxy of hearing ability (Semeraro et al., 2017).An additional advantage is that the task relies on manipulating the relative intensity of the target and the masker, and performance is largely independent of overall level over a reasonable range.Outcomes are therefore less affected by calibration of equipment compared with other tasks that rely on absolute sound level.These considerations make the CRM, as well as other similar speech-in-noise tasks (De Sousa et al., 2019, 2020), particularly attractive for estimating auditory abilities in online settings. We first asked whether performance among young listeners recruited "blindly" online is consistent with that observed in the highly controlled laboratory setting.Results suggested poorer performance by online listeners.We hypothesized that reduced performance in the online compared with the in-lab sample may reflect a lack of task engagement or motivation among the online cohort.Therefore, building on existing evidence that monetary reward can improve performance in tasks that involve executive or perceptual functions (Libera & Chelazzi, 2006;Plain et al., 2020;Shen & Chun, 2011), we asked whether incorporating a performance-based monetary bonus in a group of online participants could improve speech reception performance relative to an online group that does not receive a bonus.Our results revealed that a monetary bonus improved listeners' threshold and that the resulting SRT distribution was similar to that observed in the lab setting.Overall, the results confirm that providing a small performancebased bonus increases participant task engagement (i.e., the readiness to exert effort and/or allocate sufficient attention to the task), facilitating a more accurate estimation of auditory ability. 
Methods Participants. Two participant groups ranging in age between 25 and 32 years were tested. An in-lab group (data pooled from de Kerangal et al., 2020 and an additional unpublished study) comprised 81 participants (59 females, mean age 25 ± 3 years) who completed the task as part of a test battery. An age-matched online group of 49 participants (35 females, mean age 26 ± 3 years) was recruited and compensated via the Prolific crowdsourcing platform. All listeners were young, native speakers of British English and reported no known hearing problems. The online sample was not formally tested for hearing problems. We assumed that this cohort of young listeners would exhibit a similar hearing profile to the age-matched in-lab participants. Experimental procedures were approved by the research ethics committee of University College London, and informed consent was obtained from each participant. Stimuli and Procedure. An SRT for each participant was obtained using target sentences introduced by Messaoud-Galusi et al. (2011), the Children's Coordinate Response Measure (CCRM), which is a modified version of the CRM corpus described by Bolia et al. (2000). The modifications were made to embed the materials in the task as a straightforward command, and to use call signs (here the animal name) that would be more appropriate for use with children, without precluding the use of the material in adults or changing the essential properties of the corpus. Note that the CCRM as used here is likely to be at least as difficult as the original CRM (both requiring the identification of a color and a number), but here there are six colors rather than four. On each trial, participants heard a target sentence of the form "show the dog where the [color] [number] is." The number was a digit from 1 to 9, excluding the number 7 (due to its bisyllabic phonetic structure, which would make it easier to identify). The colors were black, white, pink, blue, green, or red. Thus, there were a total of 48 combinations (6 colors × 8 numbers). Participants were instructed to press on the correct combination of color and number on a visual interface showing an image of a dog and a list of the digits in the different colors. The target sentences were spoken by a single female native speaker of Standard Southern British English and were presented simultaneously with a two-male-speaker babble that the participants were instructed to ignore. Each talker in the babble was recorded reading two five- to six-sentence passages that were concatenated together once the passages had been edited to delete pauses of more than 100 ms. The two talkers were then digitally mixed together at equal levels, and a random section of the appropriate duration from this 30-s long masker was chosen for each trial.
The overall level of the mixture (target speaker + babble background) was kept fixed, with only the ratio between the target and masker changing on each trial. The signal-to-noise ratio (SNR) between the babble and the target speaker was initially set to 20 dB and was adjusted using a one-up one-down adaptive procedure, tracking the 50% correct threshold (Levitt, 1971). Initial steps were of 9 dB SNR, decreasing by 2 dB following the first two reversals and then fixed at a step size of 3 dB SNR for all subsequent trials. The procedure terminated after 7 reversals or after a total of 25 trials (the latter was never reached). The SRT for one run was calculated as the mean of the SNRs at the last four reversals. Each participant performed the test in four consecutive runs of approximately 2 min each. To allow a stable measure of a listener's threshold, the SNR was averaged over the last four reversals within each run and then across the last three runs (Run 1 was used as practice). In all individual runs, a stable threshold was achieved within <20 trials. The in-lab data for this experiment are drawn from de Kerangal et al. (2020), and it was therefore important to use the parameters used in that study. de Kerangal et al. demonstrated that this parameter set produces reliable thresholds and yields the expected difference in SRT between young and old adults and a correlation between SRT and audiometric measures. The in-lab test was conducted in a double-walled soundproof booth (IAC, Winchester). The task was implemented in MATLAB using a calibrated sound delivery system. Sounds were presented with a Roland Tri-capture 24-bit 96 kHz soundcard over headphones (Sennheiser HD 595) at a comfortable listening level of 70 dB sound pressure level (SPL). For online testing, the task was implemented in JavaScript, and the Gorilla Experiment Builder platform (www.gorilla.sc) was used to host the experiment (Anwyl-Irvine et al., 2020). Participants were recruited and prescreened by the Prolific platform. Otherwise, the same stimuli and test heuristics were used as in the in-lab setting. As is common practice in online auditory experiments, participants were screened for headphone use. We used a strict version of the approach introduced and validated by Milne et al. (2020), which yields a 7% false positive rate. In brief, this test uses a combination of Huggins pitch stimuli (Cramer & Huggins, 1958), which are only detectable when L and R channels are presented separately to each ear, and a pair of tones (f1 = 1800-2500 Hz; f2 = f1 + 30 Hz) presented binaurally that sound smooth when listening dichotically or to each channel alone but contain a beat when the channels are mixed (Oster, 1973). Together, these probes allow us to identify those participants who are listening dichotically through separate L and R channels (i.e., using headphones) from those listening over a single channel or over speakers. The test was validated in a large group of normal-hearing listeners. For full details, information about validation, and links to experience the task, see Milne et al. (2020). The CCRM task took approximately 10 min to complete. It began with a volume calibration to make sure that stimuli were presented at an appropriate level. A target sentence without a masker was used for this purpose. Participants were instructed to play the sound and adjust the volume to as high a level as possible without it being uncomfortable.
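For readers who want the adaptive track laid out procedurally, here is a minimal sketch of a one-up one-down staircase with the termination rule and reversal-based SRT estimate described above. The exact step-size schedule is one plausible reading of the text (9 dB, reduced by 2 dB at each of the first two reversals, then 3 dB), and the simulated listener is an invented psychometric function; neither is taken from the study's implementation.

```python
import random

def run_staircase(respond, start_snr=20.0, max_trials=25, max_reversals=7):
    """One-up one-down track targeting 50% correct; respond(snr) -> True if correct."""
    snr, step = start_snr, 9.0
    last_dir, reversals = None, []
    for _ in range(max_trials):
        direction = -1 if respond(snr) else +1        # harder after correct, easier after error
        if last_dir is not None and direction != last_dir:
            reversals.append(snr)                     # record the SNR at each reversal
            step = step - 2.0 if len(reversals) <= 2 else 3.0
            if len(reversals) >= max_reversals:
                break
        last_dir = direction
        snr += direction * step
    last_four = reversals[-4:]                        # SRT = mean of the last four reversals
    return sum(last_four) / len(last_four) if last_four else snr

# Simulated listener whose probability of a correct response rises with SNR.
true_srt = -16.0
listener = lambda snr: random.random() < 1.0 / (1.0 + 10 ** (-(snr - true_srt) / 4.0))
print(f"estimated SRT = {run_staircase(listener):.1f} dB")
```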
At the end of the experiment, participants completed a short questionnaire about their listening environment and equipment. We encouraged honest reports by stressing that "your answers will not affect your payment but will help us to get the best quality data." In particular, participants were asked how much background noise they experienced during the experiment (0 = not at all, 10 = a lot). This measure was used as a potential exclusion criterion to make sure that group differences in performance were not explained by mere differences in environmental noise. The experiment was piloted to take about 15 min. We thus set the base-pay rate to £2, corresponding to an hourly wage of £8. We have made our implementation openly available and ready for use via Gorilla (https://gorilla.sc/openmaterials/171870). Statistical Analysis. We used the two-sample Kolmogorov-Smirnov (KS) test (Conover, 1998) to ascertain the existence of a statistically significant difference between the (unknown) distributions of the two groups of interest. The KS test is a commonly used nonparametric test of the equality of continuous unidimensional probability distributions, based on the maximum distance between the cumulative distributions of the two samples. Analyses were conducted in the R environment, Version 0.99.320. Results Figure 1 shows the probability density function (Panel A) and the cumulative distribution function (Panel B) of the SRT obtained from the in-lab (mean SRT = -16.2 dB, SD = 2.08) and online groups (mean SRT = -15.1 dB, SD = 2.21; mean difference in-lab minus online = -1.1 dB). A KS test indicated a significant difference between the two distributions (D = .347, p = .001). The maximal difference occurred at -16.9 dB, which was reached by 47% of the in-lab group and only by 12% of the online group. Despite the low level of background noise reported by the online sample (1.77 ± 2.51 on a scale from 0 to 10), we repeated the analysis excluding those participants who reported a high level of noise (≥ 5; final sample N = 42). The difference between groups was unaltered (D = .350, p = .002). The overall pattern of results demonstrates that, relative to the in-lab cohort, fewer people in the online group achieved very low thresholds, suggesting that online testing may provide a less accurate measure of listeners' speech-in-noise detection performance. The differences between the online and in-lab groups may arise due to poorer control of participants' listening environment and/or motivation. Figure 1. A: Probability density distributions of the in-lab (gray) and online (blue) groups. B: Cumulative distribution of the in-lab and online groups. The black dashed line indicates the SRT at which the greatest distance between the two distributions was observed. Overall, the data pattern is consistent with a rightward shift (toward higher SRTs) of the online distribution. SRT = speech reception threshold. Methods Participants. Two hundred young, normal-hearing listeners ranging in age from 22 to 30 years (128 females, mean age 26 ± 2.5) were recruited online as described in Experiment 1. They were randomly assigned to one of two experimental groups. All participants received a fair base payment for the time spent on the experiment (Prolific recommends £8 per hour; £2 for 15 min). One group (N = 100, 62 females) additionally received a performance-based monetary bonus (up to £5) on top of the base pay (BONUS+). The other group (N = 100, 66 females) received no bonus (BONUS−).
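Before the Experiment 2 procedure, the two-sample KS comparison used for the group analyses above can be sketched in a few lines. The samples below are synthetic draws with roughly the reported Experiment 1 group sizes, means and SDs, generated only to show the call; they are not the study's data, and the original analyses were run in R rather than Python.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic SRT samples (dB) with roughly the reported group sizes, means and SDs.
in_lab = rng.normal(loc=-16.2, scale=2.08, size=81)
online = rng.normal(loc=-15.1, scale=2.21, size=49)

res = ks_2samp(in_lab, online)
print(f"KS D = {res.statistic:.3f}, p = {res.pvalue:.3f}")

# Proportion of each group at or below -16.9 dB, mirroring the comparison reported
# at the point of maximal distance between the two cumulative distributions.
for name, x in (("in-lab", in_lab), ("online", online)):
    print(f"{name}: {np.mean(x <= -16.9):.0%} at or below -16.9 dB")
```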
Stimuli and Procedure. The procedure was similar to that described for Experiment 1. The BONUS+ and BONUS− groups received identical instructions and feedback. Encouraging language was used to maximize participant motivation. After each run, the achieved threshold was displayed, and participants were challenged to try to "beat their score" in the next run. The BONUS+ group was additionally informed that each threshold was linked to a monetary bonus. They were told that at the end of the experiment, they would receive the bonus (up to £5) associated with the best threshold reached (e.g., if they reached thresholds -10, -15, -17, -10 over the runs, they were paid the bonus linked to the threshold of -17). At the end of each run, participants were shown the current threshold and the bonus, but also the bonus they could receive if they improved their threshold in the following run. The bonus was preassigned to SNR values from -1 to -28 (in steps of 1) through an exponential function, so that improvements at lower, more difficult thresholds were rewarded more than improvements at levels expected to be easily reached by young normal-hearing listeners. As in Experiment 1, following the main task, participants answered a set of questions about their listening environment. They were also asked to answer, on a scale from 0 to 10 (0 = not at all, 10 = a lot), how motivated they were in performing the task and how engaging they found the task to be. The base pay was set to £2 (for 15 min) for all participants. The average obtainable bonus for the BONUS+ group was £2 (range £0-5), therefore allowing them to double their pay. The BONUS+ group was only informed of the bonus at the instructions stage. To avoid bias in the selection process, participants were unaware of the possibility of being assigned to one or the other group when they signed up to the study. Results Both the BONUS+ and BONUS− groups reported a similar level of environmental noise (BONUS+ = 1.87 ± 2.75; BONUS− = 1.44 ± 2.27; t-test: t(2,198) = 1.203, p = .230). However, to focus on the effect of the bonus on performance, we excluded those participants who reported a level of noise ≥ 5 (on a scale from 0 to 10), resulting in the exclusion of ~15 participants from each group (final numbers: BONUS+ N = 84; BONUS− N = 90). Figure 2 shows the probability density function (Panel A) and the cumulative distribution function (Panel B) of the SRT obtained for the BONUS+ (mean SRT = -16.1 dB, SD = 2.54) and BONUS− (mean SRT = -15.1 dB, SD = 2.33) groups. Data from the in-lab group (see Experiment 1) are also provided as a benchmark. KS tests indicated a significant difference between the BONUS+ and BONUS− distributions (D = .276, p = .003), revealing better performance in the BONUS+ compared with the BONUS− group. The maximum difference occurred at -16.6 dB, which was reached by 47% of the BONUS+ and only by 21% of the BONUS− group. A comparison of these two distributions with the in-lab distribution showed that the BONUS+ performance was similar to the in-lab one (D = .127, p = .519), whilst the BONUS− distribution was different (D = .304, p = .001). The results thus indicate that the provision of a bonus increased the proportion of high-performing participants in the online group to the levels exhibited by the in-lab cohort.
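The exponential threshold-to-bonus mapping described in the Stimuli and Procedure section above can be illustrated with a small sketch. The study's actual lookup table and growth rate are not reported in the text, so the function below, its growth parameter and the printed values are assumptions chosen only to show the intended property: a 1 dB improvement at a difficult (low) SNR earns more than a 1 dB improvement at an easy one.

```python
import math

def bonus_for_threshold(best_snr_db, snr_easy=-1.0, snr_hard=-28.0,
                        max_bonus_gbp=5.0, growth=0.15):
    """Map the best SRT reached (dB SNR) to a bonus on an exponential curve."""
    snr = min(max(best_snr_db, snr_hard), snr_easy)        # clamp to the rewarded range
    improvement = snr_easy - snr                            # dB below the easiest rewarded SNR
    full_range = snr_easy - snr_hard
    scale = (math.exp(growth * improvement) - 1.0) / (math.exp(growth * full_range) - 1.0)
    return round(max_bonus_gbp * scale, 2)                  # bonus in GBP

for thr in (-5, -10, -15, -17, -20, -28):
    print(f"best threshold {thr:4d} dB  ->  bonus £{bonus_for_threshold(thr):.2f}")
```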
In an additional analysis, we compared the in-lab group with the online data pooled from the online group of Experiment 1 and the BONUS− group of Experiment 2 (for a total of N = 132, excluding participants who reported a level of background noise ≥ 5; note that all results hold even without excluding participants based on noise reports). A KS test confirmed that online performance in the absence of an additional bonus is worse than that obtained in-lab (D = .318, p < .001), in line with what was observed in Experiment 1. Discussion We report two main findings. First, we showed that the SRT of blindly recruited online participants was poorer than that observed among an age-matched control group in the lab setting. Second, we demonstrated that the provision of a small performance-based monetary bonus improved online listeners' speech-in-noise performance to levels similar to those observed in the lab setting. The results from Experiment 1 revealed that the distribution of the SRT in the online group differed from that obtained from the in-lab cohort: in the lab, 47% of listeners achieved an SRT below approximately -17 dB. In contrast, within the online cohort, only 12% of participants reached that threshold. This discrepancy is relevant to consider when using remote testing to build normative data, or to accurately estimate hearing loss across the population. In Experiment 2, we showed that a performance-based monetary bonus increased the proportion of highly performing participants up to levels similar to those observed in the lab. This suggests that the difference in performance between the online and in-lab groups observed in Experiment 1 is not mainly driven by constraints of the sound environment but is rather associated with reduced task engagement among the online participants. With the blooming of online experiments, it is important to understand how we can improve the quality of data obtained in remote auditory assessments (Leensen et al., 2011; Milne et al., 2020; Slote & Strand, 2016). Our finding that reward increased the proportion of participants who achieved low SRTs demonstrates that participant attention, motivation, and commitment are important factors to consider when auditory tests involving effortful listening are conducted online.
Higher task engagement in the in-lab than in the online population probably results from several factors that characterize the laboratory experience: the authority of the experimenter, the absence of temptations/distractions, the effort taken to come to the lab, and so forth. All these factors are likely to make in-lab participants already quite motivated. Similar considerations may apply to certain online testing situations. For example, participants in remote clinical assessments are likely to be highly intrinsically motivated to do their best, as revealed by studies reporting similar results between testing in the clinic and at home (de Graaff et al., 2018; Whitton et al., 2016). However, in many cases, online participants are unsupervised and anonymous, and often mainly motivated by financial incentives (Buhrmester et al., 2011; Litman et al., 2015). In a recent in-house survey conducted by Prolific.co, approximately 50% of the surveyed users stated that the amount of pay is the factor that most motivates them to take part in a study (https://prolific2.typeform.com/report/PoUZHEmk/ttebnlTEllbRvdcg). Therefore, a monetary bonus is an efficient method for increasing task engagement. This consideration is also supported by the fact that the BONUS+ group in Experiment 2 reported higher ratings of task engagement and motivation compared with the BONUS− group. Previous studies suggest that performance on many crowdsourcing tasks does not differ from, and sometimes even exceeds, that measured in the lab (Hauser & Schwarz, 2016, but see Harrison & Müllensiefen, 2018; Slote & Strand, 2016). In addition, a monetary incentive (amount of pay) does not always affect performance (van den Berg et al., 2019): for example, previous studies reported no modulatory effects of the amount of monetary incentive on the quality of online performance in tasks such as speech transcription (Marge et al., 2010). Internal consistency in psychological surveys and attention in following instructions were also unaffected by different levels of payment (Buhrmester et al., 2011, but see Litman et al., 2015). However, the impact of an incentive may depend on the kind of task under investigation. Financial incentives may have little effect on performance when the task is too easy or when the return on effort is low, for example, when it is hard to improve performance (Camerer & Hogarth, 1999). Our finding that reward influences performance in the CCRM task is possibly linked to the fact that the return on effort is high: the task relies on attention to fine perceptual details, and increasing effort has the potential to lead to a notable improvement in performance.
The effect of incentives on performance may also be nuanced by how the reward is operationalized, in particular whether it is fixed or adaptive. For example, recent studies using demanding auditory tasks in which reward was fixed at a high or low value have reported no effect of reward on behavioral measures such as accuracy or response time (Koelewijn et al., 2018, 2021; Richter, 2016; see also Carolan et al., 2021). In contrast, Shen and Chun (2011), using a range of executive and perceptual tasks, demonstrated that reward can encourage participants to perform better when it is progressively increased from trial to trial, but not when the same high reward level is maintained. Furthermore, the effect of reward in Shen and Chun (2011) appeared to persist even when the ultimate outcome was success in a competition (e.g., a monetary reward assigned to the top 10% of participants based on performance) rather than money itself (e.g., performance-based earning with no competition). Therefore, particularly in experiments that require many trials, a competitive setting may be a more effective incentive than a small (a few cents) reward per trial. The present study relied on the data from Experiment 1 to adjust the bonus growth rate. It is important to acknowledge, however, that paying a bonus based on performance may disadvantage certain participants (e.g., hearing-impaired individuals in the present case) in that the maximum bonus amount will not be equally achievable by all participants despite comparable effort to perform the task. Online settings, where the researcher has no contact with the participants, make it particularly difficult to determine whether poor performance (associated with a low bonus) is due to an inability to perform well (e.g., due to hearing impairment), poor understanding of instructions, or lack of engagement with the task. To mitigate this ethical concern, a fair base pay for the time spent on participation in the experiment is therefore critical. Conclusions How reward might motivate performance is an empirical question and a long-standing object of debate. Accumulating evidence suggests that reward does seem to matter particularly in tasks where performance depends on effortful engagement (Camerer & Hogarth, 1999). The CCRM task used here is analogous to many threshold-based tasks commonly used in auditory research. The observed effect of the bonus on performance should thus generalize to other auditory tasks, helping to motivate participants to exert the extra bit of effort that is needed when the task becomes just doable. Declaration of Conflicting Interests The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Figure 2. A: Probability density distributions (relative proportion) of the online BONUS+ (pink) versus the BONUS− (blue) groups. The insets show the probability density distributions of the BONUS+ (top) and the BONUS− (bottom) groups against the in-lab sample. B: Cumulative distributions of the BONUS+ and BONUS− groups. The data from the in-lab group (gray) are plotted as a benchmark. The black dashed line indicates the SRT at which the greatest distance between the BONUS+ and BONUS− distributions is observed. Overall, the data pattern is consistent with a leftward shift (toward better SRTs) of the BONUS+ relative to the BONUS− group. SRT = speech reception threshold.
5,884.2
2021-01-01T00:00:00.000
[ "Psychology", "Computer Science" ]
Worrying Impact of Artificial Intelligence and Big Data Through the Prism of Recommender Systems: The transfer from the social to the semantic web has brought us to the era of the algorithmic society, placing issues such as privacy, big data and AI in the spotlight. Although neutral by their nature, the power of big data algorithms to impact societies became a major concern, culminating in fines issued to Facebook in the US. These events were initiated by alleged breaches of data privacy connected to recommender system technology, which can provide individualized content to internet users. This paper seeks to explain recommender systems while elaborating on their social effects, and concludes that their overall impacts might include an increase in retail sales, the democratization of advertising, an increase in internet addictions, social polarization (the echo chamber issue), and the improvement of political communication. Also, more research should be directed at low-intensity addictions as a potential outcome of recommender systems, and it should be explored how they affect political participation and democracy. Introduction Big changes in societies around the world have been caused by the development of the internet. The new era of big data is characterized by "high volume, velocity, variety, exhaustivity, resolution, indexicality, relationality and flexibility" (Kitchin 2013, 262). Technology companies have been taking up places on the list of the most valuable companies in the world (Statista 2019), accompanied by increases in internet use across the globe, from 0.4% in 1995 to 59.6% in 2020 (Internetworldstats 2020). "The speed of development in big data and associated phenomena, such as social media, has surpassed the capacity of the average consumer to understand his or her actions and their knock-on effects", writes Zwitter (2014, 1). Businesses have been moving and extending online during that period, while new internet services have been developing. Instant connectivity between people increased with the appearance of social media, which took the leading role in these developments (Sutikno et al. 2016). The era of the social web started in the early 2000s and came to be called web 2.0; it was followed by web 3.0, or the semantic web, the period in the development of the internet in which we now live (Patel and Jain 2019). This era relies on organizations' access to user information to conduct either AI-based or non-AI algorithmic analysis, ultimately producing recommended ads and content. Econ (2016) calls big data the new oil, because it is needed by algorithms and AI to function and to run the algorithmic society. However, advances in both hardware and software give corporations an opportunity to handle big data, providing their users with instant recommendations, but without transparency and firm regulations for data analyses. This topic gained international importance after the US elections and the Brexit vote, which had unexpected outcomes following targeted political marketing that included personality detection to deliver promotional content of different emotional intensity (Howard et al. 2019). Donald Trump became president of the USA on one side, and Great Britain decided to opt out of the European Union on the other. Again, although algorithms are neutral by their nature, these events made both politicians and ordinary citizens believe in the power of big data to change governments through in-depth analysis of the electorate.
"Big data and predictive analytics all of a sudden became very concrete for the public-and people came to realize that personal information is in fact a commodity that is sold and traded among information empires and data brokers", writes Mai (2016, 193). Fear of social power holders of being out of control concerning political marketing and its outcomes may have been the leading triggers for imposing new regulations about protection of privacy. On the other hand, the United States fined Facebook for data breaches. Facebook's CEO Mark Zuckerberg promised he would improve privacy of its users, but without a clear plan how his company would do that (Hern and Pegg 2018). Kitchin and McArdle (2016) explore what makes constitutes big data. Laney (2001) writes that the concept of big data is defined by: volume, consisting of massive quantities of data, velocity, created in real time and variety . The new concept of algorithmic society is explained by Balkin (2017), who sees big technology corporations taking place between governments and society members. He calls it a pluralist model of a nation state, in which individual is controlled by both corporations, operating in multiple jurisdictions and governments. Balkin envisions struggle for power between these corporate multinational entities and governments. Algorithms have become the main mediator through which power is enacted in our society, claim Schuilenburg and Peeters (2021) and add, "Governments are increasingly turning towards algorithms to predict criminality, deliver public services, allocate resources, and calculate recidivism rates. Mind-boggling amounts of data regarding our daily actions are analysed to make decisions that manage, control, and nudge our behaviour in everyday life". Milano, Taddeo and Floridi (2020) provide definition of recommender systems in regard to e-commerce as the products offered in the catalogue versus ones that ultimately result in purchases. Other definition configured by Floridi (2008) speaks of good news recommendation, as the one that is clicked on and thus relevant to the user. Another view comes from Abdollahpouri, Burke, and Mobasher (2017) defining recommender system as multi-stakeholder environments where multiple parties can derive different utilities from recommendations. More technical definition explains recommender system as decision making strategy for users under complex information environments (Rashid 2002). Finally, the one by Resnick and Varian (1997) explains recommender system as a means of assisting and augmenting the social process of using recommendations of others to make choices when there is no sufficient personal knowledge or experience of the alternatives. Going into rather different direction, we shall provide our working definition of recommender system, in context of society. We consider recommender system as a process in which user data is analyzed to choose between certain options for each of the users, then show selected options back to them individually, to achieve some or all of the following goals: to get attention, extend usage of online content, assist users get content they are interested in, spark thinking and discussion, change opinions and initiate actions, including purchasing and voting behavior. Data processing can be done in straightforward algorithmic way, but also involving artificial intelligence technologies, out of which machine learning may be the most common one applied in recommender systems (Helberger, Karppinen and D'Acunto 2016). 
This kind of algorithm learns from large amounts of data and then makes predictions and recommendations. The main issue around AI is the black box problem: it is almost impossible to understand why the algorithm made a particular decision, since the decision is based on a combination of many small correlations. However, it is possible to measure whether an AI algorithm is effective. The capabilities of AI technologies are indicated by many research studies, such as the one by Kosinski, Stillwell and Graepel (2013), which found that AI can accurately predict various personal attributes of social media users, such as sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. Another paper indicates that AI technology can be accurate at detecting sexual orientation and personality traits from facial images (Wang and Kosinski 2018). These are just examples of how widespread the use of AI technologies can be, spanning from implementation in recommender systems to integration into numerous other fields and processes. Technology companies have the highest market values compared to companies from all other sectors. Out of the top 10 most valuable companies in the world, 7 are technology corporations, including Microsoft, Apple, Amazon, Alphabet, Facebook, Alibaba and Tencent (Statista 2020). When it comes to profitability, 4 of them are among the top 10 most profitable corporations, competing with those in the oil, financial and automobile manufacturing sectors. One of the main businesses of all the noted companies concerns online services. These rankings indicate the importance and influence held by technology companies that handle user data. The main influence of recommender systems, and thus of the companies which handle them, may lie in their overall social effects. In this sense, recommender systems transcend the profitability or market value of the companies that control them. In other words, they may impact economic, political and other aspects of societies around the world. "The basic idea is that as big data becomes mainstream and businesses and state agencies apply predictive analysis to generate new information and knowledge about customers and citizens, a shift in focus from data collection to data processing is needed" (Mai 2016, 192). In their effort to analyze the ethical challenges of recommender systems, Milano, Taddeo and Floridi (2020) note a gap in the literature: the providers of recommender systems and society at large are mainly omitted from consideration, while the main focus is directed towards the receivers of the recommendations. Based on the presented potential of AI technologies and the concerns about their impact that have been arising, the following question emerges: what are the potential social effects of recommender systems? The article explains recommender systems in more detail and reviews the empirical evidence presented in around 50 research inquiries into the social effects of recommender systems. How Recommender Systems Work Recommender systems make personalized recommendations to users of any online application. Recommendations may be ads, trending content, posts, friends or comments. With many potential purposes, recommender systems are implemented as integral parts of different online services, including social media, messaging apps, e-commerce websites, email, search and various related apps.
The main segment of any recommender system is an algorithm, which may be a set of simple, straightforward rules dictating how the content is processed, or it can involve artificial intelligence. The following are the three most common everyday uses of recommender systems: advertising platforms recommend ads to internet users; trending content is proposed to users of social media; and posts of friends are selected to be shown to social media users. The main purpose of recommender systems is to keep users engaged by presenting personalized content to them. The way this is done is by harvesting data from users, analyzing them, and then delivering content based on the outputs of the analysis (Aggarwal 2016). Ads The most common recommender system is ad recommendation. The main purpose of this kind of system is to deliver ads that will get as many views, clicks and purchases as possible. "The effectiveness of advertisement distribution highly relies on well understanding the preference information of the targeted users" (Li and Shiu 2012, 9). As mentioned before, all recommender systems work with user data in order to make recommendations in the first place. To obtain the data, the Terms and Conditions of most online apps must define the provision of services in return for data from the user. For example, Google provides email, search and many other services free of charge, but it gets user data in return. These can be used by the company, with or without the possibility of forwarding the data to third parties. The best way to understand the complexity of recommender systems is to focus on the ads business, as its algorithms take multiple parameters into account in their calculations, such as keywords (interests), location and other available demographics, to deliver personalized ads. This kind of marketing analysis is called psychometrics. Social Media On the other hand, trending content on the trending page is the part of social media that provides content based on a combination of personal interests and content that is popular at that particular moment. An example of this is when a social media user looks for new interesting content to consume while getting options from profiles that he or she does not necessarily follow. This recommender system will show similar content, or "you may be interested in" content, that is consumed by similar users (Ricci, Rokach and Shapira 2010). A different kind of recommender system is the one used by social media to show the posts of friends or connections. In some cases, because of the high number of connected profiles, not all posts can be shown to the user, and a selection then has to come into play. Usually, this selection is based on the most interactions, so the user gets content from the people that he or she interacts with the most. One of the most common recommender systems experienced by social media users is the "friends you may know" feature. These are profiles proposed to the user by the social media platform. The criterion used to produce these recommendations is the level of closeness between the profile that receives the recommendation and the profile that is recommended; that means they should have common friends. Another criterion can be physical distance as well. Although there are many different recommender systems all over the internet, there are two main principles for choosing the content that is recommended to users.
The first principle is that, if a user has consumed a lot of the same content as another user, the two users are considered similar, so content will be shared between them. The second principle is proposing similar content, by looking at what other content is consumed by those who saw the content consumed by the user who receives the recommendation. This is explained by Konow et al. (2010), who illustrate the two main principles on which a recommender system may be based: similar content and similar users (a toy sketch of both principles is given below). An example of the first principle is that other content consumed by people viewing or reading the same things as the user is offered to that user. An example of the second principle is that content with the same words in the title, or the same keywords, is suggested as the next item to be consumed. Applications The most prominent social media platforms, such as Facebook, Twitter, Instagram, YouTube and TikTok, use various recommender systems. The main business model in these cases is keeping users engaged on the social media platform while gathering data from them to deliver personalized ads. Messaging apps include Messenger, Viber, WhatsApp, WeChat, etc. The main purposes of recommender systems within messaging apps are advertising and motivating users to send new messages. For example, we can see ads in Viber after a successful call, within Communities, in the sticker market, or on the chats list. By contrast, on WhatsApp, ads are integrated into WhatsApp statuses, which show pictures, videos, texts and other multimedia that users share with their contacts. Facebook Messenger offers interest-based ads used to initiate text conversations with businesses. Finally, WeChat displays promotional messages on the user's timeline or at the bottom of WeChat articles (Sutikno et al. 2016). The most important function of recommender systems for music and video services such as YouTube, Netflix and Spotify is recommending content. This keeps users on the platform and extends the minutes of their use, as it keeps suggesting content that is interesting to consume. Finally, online shopping platforms, including Amazon and Alibaba, use product recommenders to offer items that have the greatest probability of attracting interest and ultimately being purchased by the users. Google, Yahoo, Bing, Hotmail and others provide various services such as search and email. They use a variety of recommender systems to keep their users engaged and offer them the most appealing ads. On the other hand, indirect data handlers are companies that collect data for ad targeting. They handle data provided by consent from numerous websites visited every day by billions of people. These data are analyzed in a way that gives companies and advertising agencies the opportunity to show ads to the people who will most probably be interested in them. Most people avoid reading cookie consent notices while giving permission for their data to be further distributed or sold. Finally, app developers are indirect data handlers that may use data for recommender systems. Although the most common uses of recommender systems are noted here, there are many more implementations of these technologies across fields and disciplines. Social Impact Iliadis and Russo (2016, 1) introduce critical data studies as the "concept that helps capture the multitude of ways that already-composed data structures inflect and interact with society, its organization and functioning and the resulting impact on individuals' daily lives".
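Returning to the two recommendation principles described above (similar users and similar content), here is a toy sketch of how each could be implemented. The data, item names and scoring rules are invented for illustration and do not correspond to any platform's actual algorithm.

```python
from collections import Counter

# Toy interaction data: which items each user has consumed (illustrative only).
history = {
    "u1": {"a", "b", "c"},
    "u2": {"a", "c", "d"},
    "u3": {"b", "c", "e"},
    "u4": {"a", "d"},
}

# Item keywords for the content-based principle (illustrative only).
keywords = {
    "a": {"football", "league"},
    "b": {"cooking", "pasta"},
    "c": {"football", "transfer"},
    "d": {"pasta", "recipe"},
    "e": {"league", "tickets"},
}

def recommend_by_similar_users(user, k=2):
    """Principle 1: unseen items consumed by users whose histories overlap with this user."""
    seen = history[user]
    scores = Counter()
    for other, items in history.items():
        if other == user:
            continue
        overlap = len(seen & items)               # how similar the two users are
        for item in items - seen:
            scores[item] += overlap               # weight candidate items by that similarity
    return [item for item, _ in scores.most_common(k)]

def recommend_by_similar_content(user, k=2):
    """Principle 2: unseen items sharing the most keywords with what the user consumed."""
    seen = history[user]
    profile = set().union(*(keywords[i] for i in seen))
    scores = {i: len(keywords[i] & profile) for i in keywords if i not in seen}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend_by_similar_users("u1"))     # items liked by users similar to u1
print(recommend_by_similar_content("u1"))   # items whose keywords match u1's history
```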
Hilbert (2013) wrote that a technology-initiated shift happened from information to knowledge societies, while elaborating on different kinds of big data: words, locations, nature, behavior, economic activity and others. He considers improvements in the medical sphere achieved by using AI algorithms to detect diseases, but also provides examples from the economic sphere in stock market trading, including "black box" recommender systems that can give advice about buying and selling stocks. This is called algorithmic trading. Hilbert concludes that big data holds both promises and dangers for the development of societies. The biggest threats he sees are the potential for state and corporate control and manipulation and blind trust in algorithms. Michael and Miller (2013) note that the new world is turning into a "camera", with large amounts of image and video data. These are captured and then processed by companies, law enforcement agencies and individuals. The issue here is that this kind of new world also captures people who have not given consent to be filmed, while AI technology often comes up with surprising findings and conclusions. The knowledge that a person is being constantly supervised and filmed at the company where he or she works may result in various psychological conditions. On the other hand, the goal is to measure the results of work and give recommendations to be included into the work process, thus helping companies improve and make progress in their fields. Michael and Miller envision advanced market segmentation, in terms of psychometrics, giving a deep look into the personalities of internet users, even to the extent that the main question becomes, "Who are you?" As the boundaries between public and private blur, Michael and Miller think the volume of data will increase, which gives more space for analysis and even better predictions in various aspects of societies. Internet Advertising The beginning of online advertising seemed like an experiment in the 1990s, when the widespread use of the internet began. Since then, online advertising has grown into a $112.64 billion industry (eMarketer 2018). In the early days of the internet, only banner ads could be seen on different websites. These ads were not personalized or delivered only to those internet users most likely to be interested in what they had to offer. Google ads revolutionized the ad market by introducing recommender systems. They enable ads to be delivered exactly to those individuals who expressed interest in the products or services provided by advertisers. On the other hand, Google ads enable creators of content to embed ads into the websites they run and thereby earn money per click. When we look at the advertising market in Germany between 2000 and 2015, it can be seen that the total advertising market decreased from EUR 23.4 bn to EUR 21.9 bn, while the share of spending on online advertising within these sums increased from 1 to 22 percent (IAB Europe 2011). Klapdor (2013) concludes that the increase in online advertising, both in the German and other markets across the globe, has been considerable. On the other hand, the online share of total advertising varied across the world in 2010. The figures provided by eMarketer (2010) include the UK (32%), the Netherlands (21%), China (19%), France (18%), Germany (17%), Spain (14%), the USA (11%) and Italy (5%). As for the population aged 14 and above in Germany, media consumption increased from 466 minutes per day in 2000 to 563 minutes per day in 2010.
Out of these minutes, internet use increased from only 3% in 2000 to 14% in 2010, while the use of all other media, including TV, radio, newspapers, books and magazines, decreased in the same period (Ridder and Engel 2010). The capabilities of the internet in terms of retail sales can be seen in findings about purchase decisions. Yahoo (2010) presented the results of a survey about the information used for purchase decisions that examined products from different fields, including apparel, games, electronics, furniture, foods and DIY. They found that the internet was the most important source of information among others, such as retail stores, flyers, catalogues, newspapers, friends and magazines. Adamopoulos, Ghose and Todri (2018) write that technology has revolutionized how companies deliver ads and communicate with consumers. Ghose and Todri (2016) write that companies monitor the digital footprints of consumers, so they can pay per impression or per click to the advertising platforms. Thus, these innovations in the world of advertising have been affecting economies on a grand scale. The first disturbances were felt by offline content providers, such as newspapers, which had to cut their circulation volumes and redirect their efforts towards the online sphere. The same happened to TV stations, which had to make their websites attractive to online users and change the way they do journalism. A new type of journalism was introduced. Multimedia or online journalism has to compete with citizen journalism and various content creators for the attention of news consumers in order to achieve prolonged use, so that online ads can be sold. Of course, the online ad business has been affecting other spheres of life, but the most disturbed at first were the traditional media. However, the main idea that emerges is that people consume online services more than before, which opens up space for more content creators, but it also makes a difference on the side of advertisers by enabling them to be more effective (Rosenkrans 2009). New trends in advertising create possibilities for small businesses to get involved in the advertising process, thus reaching their target groups with small amounts of money. Goldfarb (2013) argues that the fundamental economic difference between online and offline advertising is a substantial reduction in the cost of targeting, which boosts economic development and empowers small business owners. Internet Addiction An addict is a person whose normal functioning is endangered because of use directed towards the substance or object that is the matter of the addiction (Young 1998). That means internet or gaming addicts can be so absorbed in playing computer games that they even avoid eating or attending to their physiological needs. Drug addicts may fail to perform their family roles. They can be aggressive, thereby endangering the lives of others and themselves. Gambling addicts may lose their basic assets, such as the home they essentially need for daily functioning. There are two main criteria for becoming a media addict. The first is extended media use, beyond what the person wants, and the second is that this use affects the normal daily functioning of that person. Young concludes that internet addiction has a lasting impact on brain processes. She compares internet addiction to addictions to drugs and alcohol. Statistics show that 1 to 8 percent of people in the US are internet addicts, depending on the research standards (Weinstein and Lejoyeux 2010). Durkee et al.
(2012) note the results of a study of European countries with internet addiction levels ranging from 1.2% to 11.8%. Other inquiries from the US and Canada report as many as 20.6% of internet addicts (Błachnio et al. 2019). On the other side of the globe, in South Korea, more than 30% of teenagers have been found to be at risk of internet addiction, while in Japan 23.7% of teenagers were found to be internet addicted (Kawabe et al. 2016). Another study found that Hong Kong teenagers are at great risk of becoming internet addicts, as Lam (2015) measures an internet addiction rate of 24% among them. Overall, internet addiction is a growing trend (Chi, Hong, and Chen 2020). The issue of addictiveness is discussed by many researchers who examine the impacts of recommender systems, and it requires serious research attention (Burr et al. 2018; de Vries 2010; Koene et al. 2015; Taddeo and Floridi 2018; Mančić 2010). Some statistics are even worse. For example, Cheng and Li (2014) write that 420 million people were affected by internet addiction in 2014, which is a global average prevalence rate of 6.0%. However, because of increasing internet use, current levels of global internet addiction prevalence might be much higher. Montag et al. (2017) write that between 1996 and 2017 a total of 1,572 papers examined the issue of internet addiction, with rising numbers each year. Echo Chambers A report from the office of President Obama on big data concludes that new technologies can cause social harms on a grand scale, beyond the damage to privacy (White House 2014). As recommender systems show content consumed by similar people to the same group of individuals, they make bubbles of beliefs and attitudes stronger and bigger, thus reinforcing polarization in societies. Such a state of affairs works against different views on social issues, public debate and the democratic way of functioning, because it can cause unrest, low tolerance, public anger and violent protests (Harambam et al. 2018; Helberger et al. 2016; Koene et al. 2015; Reviglio 2017; Zook et al. 2017). Some researchers call for changes in recommender systems so that they can promote diversity of thoughts and attitudes, especially in the news segment (Bozdag and van den Hoven 2015). Reviglio (2017) calls for a diversified approach in recommender systems, which means they should include standpoints different from those of the internet user in order to provide variety and foster democratic processes. The same is requested by Harambam et al. (2018), who propose adding an option for end users to configure recommender systems themselves. This would provide novelty, diversity and relevance. Being in private ownership, without any firm regulatory obligations concerning transparency, big data algorithms are closed circuits. This is identified by D'Ignazio and Bhargava (2015), and it can be a good reason for initiating well-prepared digital literacy courses. The authors propose a definition of big data literacy, because the field of internet communication involves much more than the technical skills one should have to become a capable user of new technologies. Thus, big data literacy should be mainly about emancipation, according to D'Ignazio and Bhargava. They also identify various target groups for the big data literacy process, which include NGOs, those whose work includes the use of technologies, and ordinary citizens. Discussion Based on the review of the literature presented in the previous sections, we can now elaborate on the social impacts of recommender systems.
Internet Advertising The typical situation experienced by numerous internet users is that they get recommended ads for exactly the items they intend to buy. They get more options concerning the items they intend to buy, their purchases may be faster than they would be without recommended ads, and finally they may buy more products overall. The fact that internet users get ads based on their interests may be increasing overall retail sales in societies, which is stimulating for economies. Although there are no firm statistics to confirm this notion, some figures indicate it. eMarketer (2019) research shows an increase in retail ecommerce sales worldwide from 6.3 to 12.8 percent between 2014 and 2019 (Figure 2). An additional economic consequence of recommender systems may be democratized advertising. Before online advertising, there were only expensive and ineffective ways to promote products in local communities. Word of mouth was the most effective way to promote products for small companies and individuals making handmade products. Now, with the effectiveness of recommender systems built into today's advertising platforms, one can reach potential customers quickly, easily and for small amounts of money. This business model can work even at a rate of one euro per day invested by the smallest business owners and startups, and it can still produce excellent results (Liu-Thompkins 2019). Whether the post is sponsored or there is a specific ad, the right people get the ad: those with the highest probability of noticing it, considering it, interacting with the content and finally making a purchase. Thus, both potential effects of recommender systems in advertising, namely increases in overall retail sales and the democratization of advertising, may be supportive of the level of employment and of the current consumption-based economic system. Internet Addiction We identified the time management issue as a precondition to internet addiction. Almost every recommender system, by its very definition, can increase the time people spend online consuming various services. For example, if a social media user gets friend recommendations for profiles he or she is really interested in, this will potentially increase the online time of that user, because that person may scroll through the list of recommended friends, send requests to them and ultimately interact with them. Recommended trending content has the same purpose. Video platforms such as YouTube offer their users suggested content. These are the videos that the user is most likely to be interested in. This may be tempting, as users will watch more videos than they would without this feature. This is also an issue of time management. The main question may be whether it is better for that person to do something else than to consume online content. There are two directions this kind of overuse could take. A person could do the same kind of activity online, for example communicating indirectly through Viber, as is done offline, sharing thoughts and ideas. The question is whether direct communication differs in quality from indirect online communication. Is it more rewarding and fulfilling to employ the senses of touch, sight and smell than to enjoy only the audio and texting experience of an instant messaging app? Or is it better to clean the house than to communicate? The substitution of one activity in the offline world with another activity in the online sphere may be a challenging time management issue for those exposed to recommender systems.
Spending extended time online may lead to internet addiction, because people are tempted to use the internet more and more. On the other hand, besides the noticeable high-intensity internet addiction, Bojic, Marie and Brankovic (2013) recognize low-intensity internet addiction, which may be at the core of modern mass societies. In another paper, Bojic and Marie (2013) introduce a universal methodology for measuring TV, radio, newspaper and internet addiction that also calculates the level (intensity) of an addiction. They conclude that most of the survey participants show a low level of media addiction. The effects of this kind of addiction have not yet been examined by scientific inquiries, while possibly having a major role at the social level with respect to democracy, political participation and other aspects of public sphere activities. Low-level addictions may play a role in individual feelings of happiness and wellbeing, especially if they are joined with other low-level addictions, such as shopping, alcohol and similar ones. In other words, if members of society spend a lot of time online and then go shopping and do other activities which support the consumption society, if they are occupied by various stimuli, how much time is left for thinking and reflection about various personal and common topics, including social and political matters? According to proponents of natural law, political justice applies to citizens who are free, equal and governed by law (Duke 2019). Are our societies free if they predominantly consist of citizens mildly addicted to their smartphones, brands and other consumption-related activities? Echo Chambers Another issue that we acknowledged concerns echo chambers. These are primarily produced by trending recommender systems, as they reinforce the same thoughts and opinions in the digital sphere. As previously noted, recommender systems deliver similar content or content consumed by similar users. This is a barrier to social dialogue, because the same thoughts and opinions are confirmed all the time. We are witnessing growing polarization in societies across the world, with the main divide being between globalists and nationalists. "High degree of polarization has been connected to unfavorable consequences such as extremism," write Prasetya and Murata (2020, 3), who add that, apart from the polarization of cascading news, they found that important contributors to polarization were slow opinion updates and low tolerance for opinion differences. These trends have made fruitful ground for biased content with sensational headlines, much of it being "fake news." Baumann et al. (2020) propose an echo chamber research model based on three main assumptions: aggregated social influence, heterogeneous activity and homophily in interactions. In their research paper, Guo, Rohde and Wu (2018) analyze the echo chambers created during the 2016 US election campaigns. They conclude that Twitter communities discussing Trump showed a higher level of heterogeneity than those discussing Clinton. As people want to confirm their views, this kind of content has the greatest probability of being liked or shared further on social media. At the same time, the opinions of those who consume this content may radicalize even more, resulting in hate speech and in cutting people out of social networks because of politics, thus creating divided societies of echo chambers, drifting towards differences and further polarization.
Another potential consequence of recommender systems is the use of micro-targeting for personalized ads in order to influence the opinions of individuals and groups, leading towards certain election outcomes. Allegedly, the research results of an inquiry by Lambiotte and Kosinski (2014) were used in a number of political campaigns. The main point was to use Facebook likes to determine personality types and then show individual ads with a different tone, those most likely to be positively accepted by the recipients of the messages (Isaak and Hanna 2018). In other words, if a potential ad recipient is recognized as neurotic, he or she would receive an "angry ad" packed with negative emotions, while another person, if recognized as scoring high on Openness on the OCEAN personality scale, would be shown a dynamic and positive advertisement. Of course, this could be categorized as a recommender system, because it delivers personalized ads, depending on user data, based on psychometric analysis. If one side in the political process uses this kind of technology, it may have a competitive advantage. However, if every political party engages in this kind of political marketing, the conclusion may be that it improves communication, as neurotics will get the messages of every political option in a way that suits them and with better potential outcomes. In that case, every political party would be equal. The value-based conclusion would be that utilizing such psychometric technology would ultimately mean an overall improvement in political and other kinds of communication, especially in the future, when ordinary people have the knowledge of how recommender systems work and of what kinds of technologies they are exposed to. Conclusions Bearing in mind the potential effects of recommender systems and related technologies, including those related to internet advertising, such as increases in overall retail sales and democratized advertising, those related to internet addiction, such as the time management issue and increases in internet addictions, and finally those related to democracy, such as the echo chamber issue and the improvement of political communication, some concluding remarks arise. As discussed in the previous section, although it may be challenging to measure and quantify the effects of recommender systems in any way, they might be causing increases in overall retail sales. This may be stimulating for the consumption society and for economies across the globe. If people spend more, then there will be jobs for more people. Although the time management issue, internet addictions and social polarization may be consequences of recommender systems, it is clear that current social systems depend on consumption and spending (Fisk 1959). It could easily be seen during the ongoing Covid-19 pandemic what happens to the economy and to levels of employment when retail sales decrease in some sectors (Baker et al. 2020). More research should be directed towards measuring addictions, with a special focus on low-intensity internet addictions. Despite the fact that internet use is growing worldwide, there is no standardized way to measure internet addiction, and internet addiction has not been measured in any worldwide study so far. Additionally, a special focus of research should be directed towards different addiction levels, meaning the intensity of addiction and the elements leading towards internet addiction.
Therefore, according to the previous arguments, we may formulate a question that might be explored in future inquiries: if people buy more, does it mean that they are more addicted, or, in other words, is there a correlation between the volume of retail sales and addictions? The pandemic intensified the digital media habits of citizens, such as working and studying from home and e-commerce. Digital media became more oversaturated than ever with disinformation, fake news and various suspicious content. Today, there is a great level of social connectivity, especially through various groups on social media that foster activism. Moreover, in the last two years "cancel culture" has become widely accepted as the social behavior of unfollowing and boycotting a person, brand, company or organization for stressing a different or offensive opinion or for withholding support for a cause. In such a demanding, converged media environment, it is necessary to develop and enhance citizens' media knowledge and media literacy skills, especially critical thinking, which help strengthen the digital immunity of each individual as well as of society. Digital immunity may help citizens to counter fake news, deal with disinformation and potentially harmful content, critically evaluate sources, deconstruct media messages and recognize diverse forms of advertising. It also promotes freedom of expression, breaks stereotypes and fosters intercultural dialogue. Looking into the media literacy issue, Couldry and Powell (2014) recommend in their commentary that highlighting not just the risks of creating and sharing data but also the opportunities would be the way to go. A critical approach to different content could also be a significant pillar of media literacy and media education, especially bearing in mind the outbreak of echo chambers and fake news across the globe (Vosoughi, Roy and Aral 2018). Special attention should be given to the issue of echo chambers, so that citizens gain a full understanding of how challenging these kinds of recommender systems might be for democracy. While echo chambers are a real challenge and a widely discussed concept, it should be noted that the empirical evidence for their existence in Europe is mixed. Individuals usually consume information by using a variety of sources, among traditional and nonlinear media, and do not rely solely on their social media feed or internet searches. The fact that Facebook was fined by the US regulatory authorities for data and privacy breaches brought the spotlight to the issue, leading to increased awareness and discussion of the topic. As there is no immediate value to be acquired by social elites in initiating the topic of recommender systems, this issue stays in the background of the more "important" ones. Online data will be accessible to companies and political parties in one way or another. Ultimately, the question is not about data access, but rather about data processing and about bringing into focus the social effects created by businesses, political parties and other organizations taking part. Further scientific inquiries may be directed towards potential points of manipulation and towards examining how new technologies function and process data.
8,927.8
2021-11-16T00:00:00.000
[ "Computer Science", "Sociology" ]
Trail (TNF-related apoptosis-inducing ligand) induces an inflammatory response in human adipocytes High serum concentrations of TNF-related apoptosis-inducing ligand (TRAIL), a member of the tumor necrosis factor protein family, are found in patients with increased BMI and serum lipid levels. In a model of murine obesity, both the expression of TRAIL and its receptor (TRAIL-R) is elevated in adipose tissue. Accordingly, TRAIL has been proposed as an important mediator of adipose tissue inflammation and obesity-associated diseases. The aim of this study was to investigate if TRAIL regulates inflammatory processes at the level of the adipocyte. Using human Simpson-Golabi-Behmel syndrome (SGBS) cells as a model system, we found that TRAIL induces an inflammatory response in both preadipocytes and adipocytes. It stimulates the expression of interleukin 6 (IL-6), interleukin 8 (IL-8) as well as the chemokines monocyte chemoattractant protein-1 (MCP-1) and chemokine C-C motif ligand 20 (CCL-20) in a time- and dose-dependent manner. By using small molecule inhibitors, we found that both the NFκB and the ERK1/2 pathway are crucial for mediating the effect of TRAIL. Taken together, we identified a novel pro-inflammatory function of TRAIL in human adipocytes. Our findings suggest that targeting the TRAIL/TRAIL-R system might be a useful strategy to tackle obesity-associated adipose tissue inflammation. receptor type 1-associated death domain (TRADD) and the TNF receptor-associated factor 2 (TRAF2). The secondary complex is involved in the activation of kinases such as the protein kinase AKT, the classical MAP kinases extracellular signal-regulated kinases 1/2 (ERK1/2), p38 and c-Jun N-terminal kinase (JNK) as well as the nuclear factor kappa B (NFκB) pathway 17 that can lead to transcription of anti-apoptotic and pro-proliferative genes. Indeed, TRAIL was shown to be a potent inducer of preadipocyte proliferation 18 . Furthermore, TRAIL has a significant impact on adipocyte metabolism and appears to contribute to diet-induced insulin resistance and hepatic steatosis 19 . On the molecular level, TRAIL inhibits insulin-stimulated glucose uptake and lipid formation by caspase-mediated cleavage of PPARγ 12 , hence underlining the important role of TRAIL in systemic metabolism. Interestingly, TRAIL receptor (DR5) knockout mice fed a diet high in saturated fat, cholesterol and fructose (FFC) have a reduced expression of inflammatory genes in white adipose tissue when compared to wild-type littermates 19 . Based on the overall data, we hypothesized that TRAIL might contribute to obesity-induced adipose tissue inflammation by triggering kinase pathways that lead to cytokine and chemokine expression. However, so far it has not been investigated whether and to which extent TRAIL promotes an inflammatory response in human adipocytes. We therefore studied the impact of TRAIL on the production of inflammatory cytokines and chemokines as well as the signaling pathways underlying this effect in human preadipocytes and adipocytes. TRAIL induces a pro-inflammatory response in preadipocytes and adipocytes. In this study, we used the human Simpson-Golabi-Behmel syndrome (SGBS) cell strain as a model system. The cells are neither transformed nor immortalized and represent a well-characterized model system to study human adipocyte biology 20 . SGBS preadipocytes and differentiated adipocytes were treated with 30 ng/ml TRAIL. 
After 12 hours, RNA was isolated and subjected to an Affymetrix-based (GeneChip Human Gene 1.0 ST Array) mRNA array analysis. In SGBS preadipocytes, 38 genes showed a differential expression profile upon TRAIL treatment when compared to the control. Of these, 3 genes were down-regulated and 35 genes were up-regulated (Supplementary Table 1). Figure 1A displays the heatmap for the 12 genes differentially regulated by TRAIL in preadipocytes, which were related to inflammation (Fig. 1A). Interestingly, all inflammation-related genes regulated by TRAIL were up-regulated. In SGBS adipocytes, a total of 71 genes were regulated by TRAIL compared to the control. Of these, 36 genes were up-regulated and 35 genes were down-regulated (Supplementary Table 2). Figure 1B displays the heatmap of inflammation-related genes differentially regulated upon TRAIL treatment in adipocytes. We next performed a STRING (Search Tool for the Retrieval of Interacting Genes/Proteins) analysis to predict physical and functional protein interactions. It turned out that 5 genes were jointly up-regulated in preadipocytes as well as in adipocytes (tumor necrosis factor alpha-induced protein 3 (TNFAIP3), chemokine C-C motif ligand 20 (CCL-20), interleukin 6 (IL-6), interleukin 8 (IL-8) and monocyte chemoattractant protein 1 (MCP-1)). The 4 secreted factors are known to interact with each other (visualized in Fig. 1C). In addition to the jointly up-regulated factors, TRAIL induced other cytokines and chemokines, e.g. interferon-γ (IFNγ), chemokine C-X-C motif ligand 1 (CXCL-1), chemokine C-X-C motif ligand 6 (CXCL-6), interleukin 1α (IL-1α) and interleukin 1β (IL-1β) specifically in preadipocytes. In adipocytes some cytokine receptors such as TNF-related weak inducer of apoptosis receptor (TWEAKR) and interleukin 7 receptor (IL-7R) were specifically up-regulated. Together, our expression array results indicate that TRAIL triggers a pro-inflammatory response in preadipocytes and adipocytes. SGBS preadipocytes and adipocytes on day 14 of adipogenic differentiation were treated with TRAIL (+)(30 ng/ml) or vehicle (−) as indicated. After 12 hours, RNA was harvested and subjected to mRNA array analysis (GeneChip Human Gene 1.0 ST Array; Affymetrix). Heatmaps display the TRAIL-regulated inflammatory genes in preadipocytes (A) and adipocytes (B). An evidence-based STRING 10 analysis was performed to visualize the network of TRAIL-regulated genes in preadipocytes and in adipocytes (C). In order to understand the common mechanisms underlying TRAIL's effects, our following experiments focused on those secreted factors, which were jointly up-regulated in both preadipocytes and adipocytes and studies were performed with adipocytes. To identify the kinetics of cytokine and chemokine expression and to validate the array data we incubated SGBS adipocytes with 30 ng/ml TRAIL for different periods of time (6, 12 and 24 hours). Overall, TRAIL induced a transient increase of IL-6, IL-8, MCP-1 and CCL-20 mRNA expression. For IL-6 and MCP-1, maximal levels of about 6-fold and 4-fold were reached between 6 and 12 hours after TRAIL treatment, respectively ( Fig. 2A and C). IL-8 and CCL-20 were maximally induced by about 50-fold and 30-fold 12 hours after TRAIL treatment, respectively ( Fig. 2B and D). Thereafter, the expression levels of all these factors declined again. 
In addition, TRAIL increased the mRNA expression of IL-6, IL-8, MCP-1 and CCL-20 in a dose-dependent manner with the most potent effect seen at a concentration of 100 ng/ml TRAIL (Fig. 2E-H). Importantly, ELISA measurements revealed that this induction of mRNA also resulted in protein production; TRAIL significantly enhanced the secretion of IL-6, IL-8 and CCL-20 after 24 hours of treatment ( Fig. 2I-L). For MCP-1, we could not detect a significant increase, which was likely due to the already high secretion of MCP-1 (>500 pg/ml) in control cells. TRAIL induces IL-6, IL-8 and CCL-20 expression in human primary adipocytes. To exclude cell strain-specific effects of SGBS cells we also studied human primary adipocytes, which were differentiated in vitro from adipose stromal cells isolated from subcutaneous adipose tissue. Upon TRAIL treatment, we observed a significant up-regulation of IL-8 and CCL-20 mRNA expression ( Fig. 3A and D). Of note, there was a high inter-patient variability. Out of 7 samples, 4 displayed an up-regulation of IL-6 expression upon TRAIL stimulation (Fig. 3A). The analysis of MCP-1 mRNA expression data ( Fig. 3C) displayed no consistent tendency upon TRAIL treatment, 2 samples showed a clear up-regulation, 1 sample a weak up-regulation and 4 samples remained equal or showed a tendency towards a weak down-regulation. Caspases are not involved in mediating the effect of TRAIL on cytokine expression. Next, we sought to identify the molecular events causing the TRAIL-mediated pro-inflammatory gene expression. In a first attempt, we focused on the canonical, usually apoptosis-related pathway that can be activated by TRAIL. TRAIL stimulation in adipocytes led to a rapid cleavage of caspase-8 and caspase-3 (Fig. 4A). In line with previous results 12, 21 we did not observe any induction of cell death. We next treated the cells with the pan-caspase inhibitor zVAD.fmk at 20 μM, a concentration known to block the TRAIL-induced formation of active caspase-8 and caspase-3 fragments 18,22 . The TRAIL-mediated up-regulation of IL-8 and MCP-1 was not affected by caspase inhibition (Fig. 4C and D), whereas the up-regulation of IL-6 and CCL-20 seemed to be partially inhibited ( Fig. 4B and E). Interestingly, caspase inhibition alone resulted in a down-regulation of IL-6 and CCL-20 mRNA suggesting that the expression of those factors requires some basal caspase activity. The fold-induction of cytokine and chemokine mRNA expression by TRAIL treatment was similar between zVAD.fmk and vehicle-treated cultures. Therefore, we concluded that the observed caspase activation is not responsible for the effects of TRAIL on inflammatory gene expression. TRAIL induces a phosphorylation of IκBα. We next asked whether the nuclear factor kappa B (NFκB) pathway might be responsible for TRAIL's effects on inflammatory gene expression. This was based on recent findings indicating that IL-6, IL-8 and MCP-1 are all NFκB targets 23,24 and that TRAIL can induce the NFκB activation in other cell types 25,26 . We therefore treated SGBS adipocytes with TRAIL and analyzed the phosphorylation of a central NFκB upstream regulator, the inhibitor of NFκB alpha (IκBα) protein. The phosphorylation of IκBα induces its degradation, thus activating the NFκB pathway. Treatment of adipocytes with TRAIL resulted in a phosphorylation and very weak degradation of IκBα after 6 hours ( Fig. 5A and Supplementary Figure S1 for Image J analysis) suggesting mild NFκB activation. 
We then decided to perform an electrophoretic mobility shift assay (EMSA) to assess NFκB DNA binding activity in our cells. While nuclear extracts from adipocytes treated with TNF-α, a known inducer of NFκB in SGBS cells, displayed a clearly visible signal on the gel, representing NFκB DNA binding activity, there was no increase in signal in nuclear extracts obtained from TRAIL-treated adipocytes (Fig. 5B). Since the IκBα Western blots had shown a phosphorylation and weak degradation of IκBα, we concluded that the EMSA might not be sensitive enough to detect a mild activation of NFκB taking place. In order to verify the activation of NFκB we decided to apply a reporter gene assay to measure NFκB transcriptional activity. We found that TRAIL significantly enhanced luciferase activity, although this was much lower than the activity gained upon TNF-α addition (Fig. 5C). To further clarify an involvement of NFκB, we used the small molecule inhibitor SC-514 (100 μM) to inhibit NFκB activation. SC-514 blocks the signal upstream of IκBα, the protein kinase IκB kinase-β (IKK-β). The TRAIL-induced phosphorylation of IκBα was effectively blocked by SC-514 (Fig. 5D), demonstrating that this compound is indeed suitable for blocking the canonical NFκB pathway. The addition of SC-514 abolished the effect of TRAIL on IL-6, IL-8, MCP-1 and CCL-20 mRNA expression ( Fig. 5E-H). Also BAY 11-7082, a well-characterized inhibitor of cytokine-induced IκBα phosphorylation displayed a comparable effect (Supplementary Figure S2). We furthermore performed siRNA-mediated knockdowns of IKK-α and IKK-β to curb NFκB function. Unfortunately, we did not achieve a successful knockdown of both IKK forms in adipocytes (data not shown). However, in preadipocytes we were able to achieve a satisfactory knockdown (Supplementary Figure S3) that partially inhibited the TRAIL-induced up-regulation of the studied chemokines and cytokines (Supplementary Figure S3). All in all, these data demonstrate that the canonical NFκB pathway participates in the TRAIL-induced up-regulation of IL-6, IL-8, CCL-20 and MCP-1 expression. TRAIL induces a phosphorylation of ERK1/2. TRAIL is able to activate kinases such as AKT, ERK1/2 and JNK via a non-canonical signaling route. To address this possibility, we treated SGBS adipocytes with TRAIL and analyzed for phosphorylation of the above mentioned kinases at different time points. After 4 and 6 hours, we observed a TRAIL-dependent phosphorylation of ERK1/2 in adipocytes while the kinases AKT and JNK were not activated (Fig. 6A). To elucidate an involvement of ERK1/2 in mediating TRAIL's effects, we used the small molecule inhibitor PD-0325901 (100 nM) to specifically block the MAPK/ERK kinases 1/2 (MEK1/2), the specific upstream kinases of ERK1/2. PD-0325901 completely abolished the TRAIL-induced phosphorylation of ERK1/2 (Fig. 6B). Importantly, inhibition of the ERK1/2 pathway also abolished the TRAIL-induced up-regulation of IL-6, IL-8, MCP1 and CCL-20 mRNA expression ( Fig. 6C-F). These data demonstrate that the ERK1/2 pathway is a key player in the TRAIL-induced cytokine and chemokine expression. Finally, we wanted to know whether TRAIL interlinks the ERK1/2 and the NFκB pathways. Interestingly, inhibition of the ERK1/2 pathway by PD-0325901 did not interfere with TRAIL-mediated IκBα phosphorylation (Fig. 6H). Vice versa, inhibition of the NFκB pathway by SC-514 did not interfere with ERK1/2 phosphorylation (Fig. 6G). 
Together, these data indicate that in adipocytes TRAIL activates the ERK1/2 and NFκB pathways concurrently and without significant interactions. Overall, we have demonstrated that TRAIL induces a pro-inflammatory response in human adipocytes in an ERK1/2-and NFκB-dependent manner. Discussion Adipose tissue is an important endocrine organ that is in permanent crosstalk with other organ systems of the body 27 . The inflammatory state of adipose tissue in obesity is supposed to play a causal role in the pathogenesis of its comorbidities 3,28,29 . The current study unraveled two entirely new aspects with respect to the inflammatory process in adipose tissue. First, our data demonstrate for the first time that TRAIL induces the production of pro-inflammatory cytokines and chemokines in human preadipocytes and adipocytes. Our microarray analyses revealed four TRAIL-regulated inflammation-related secreted factors that were shared by both preadipocytes and adipocytes. Second, by showing that this induction of cytokine and chemokine expression occurs in an ERK1/2-and NFκB-dependent manner, the current study unveiled the underlying mechanisms by which TRAIL can induce an inflammatory response in human adipocytes. To our current knowledge, the effect of TRAIL on the secretion profile of adipocytes and preadipocytes has not been investigated yet. TNF-α, another member of the TNF superfamily, is known to have an impact on the secretion products of adipose tissue. TNF-α also stimulates the production of pro-inflammatory cytokines and chemokines such as IL-6, IL-8 and MCP-1 in adipocytes 30,31 . The induction of these cytokines and chemokines by TNF-α is mediated by a strong activation of the NFκB pathway alone 32,33 . Furthermore, TNF-α knockout mice are protected against obesity-induced insulin resistance 7 . For TRAIL and with respect to adipose tissue, the mediating pathways were not clear yet. However, in other cell types such as glioblastoma cells, NFκB but also MAP kinases such as ERK1/2, p38 and JNK as well as canonical TRAIL signaling involving caspase activation were shown to be responsible for TRAIL's effects on cytokine and chemokine production 17,25,34,35 . Our experiments clearly demonstrate that TRAIL induces a cleavage of caspase-8 and caspase-3 under the chosen conditions. While the activation of caspases is a prerequisite for TRAIL's effects on adipocyte metabolism and adipogenic differentiation, the current data indicate that caspase cleavage does not seem to be involved in mediating its effects on cytokine and chemokine expression 12,22 . This is in line with earlier studies showing that TRAIL is a potent mitogen in human preadipocytes and that even though a robust cleavage of caspase-8 and caspase-3 was observed, this proliferative effect was similarly independent of caspase activity 18 . We therefore propose that TRAIL activates several pathways at the same time, but the single activated pathways are responsible for very specific functions within the cell. For example, caspases seem to play a role for adipocyte differentiation and metabolic regulation involving PPARγ 12,22 , whereas activation of ERK1/2 appears to be crucial for preadipocyte proliferation 18 . However, we also show here that a simultaneous engagement of both the canonical NFκB and the ERK1/2 pathway mediate the effect of TRAIL on cytokine and chemokine production. Both pathways were previously described as important regulators of cytokine and chemokine production 23,24,36 . 
It was reported that IKK-β is able to activate ERK1/2 37 and, conversely, that ERK1/2 is able to activate NFκB 38,39 . Furthermore, in vascular smooth muscle cells it was reported that activation of ERK1/2 is required for a persistent activation of the NFκB pathway by degradation of IκBβ 40 . The current study did not hint at a crosstalk between the NFκB and ERK1/2 pathways in adipocytes, as our data obtained with small molecule inhibitors of both pathways did not show any interconnection. However, an interconnection can also not be fully ruled out, since inhibition of the ERK1/2 as well as the NFκB axis completely inhibited the effect of TRAIL. If both pathways simply ran in parallel in response to a common upstream activator, we would rather expect only a partial effect, e.g. the TRAIL-induced NFκB route should still be active upon ERK1/2 inhibition. However, this was not the case, and taking this into account it appears more likely that both pathways converge on the activation of a common downstream signal. Its identification requires further investigation and is beyond the scope of this study. Since the discovery of TRAIL and its receptors, their function has been mostly studied in the context of malignant and inflammatory diseases 41,42 . A potential role in obesity and metabolic diseases has only recently been proposed, but previous findings have been rather conflicting. On the one hand, genetically obese mice showed an increased expression of the mouse TRAIL receptor DR5 in adipose tissue, suggesting a role in adipose tissue function 12 . In support of this, DR5 −/− mice fed an FFC diet are protected from weight gain, insulin resistance and hepatic steatosis 19 . Furthermore, DR5 −/− mice had lower adipose tissue inflammation on an FFC diet. The mRNA levels of MCP-1 and TNF-α were significantly reduced in knockout animals compared to wild-type littermates, supporting a pro-inflammatory role of DR5 19 . On the other hand, when mice deficient in both TRAIL and apolipoprotein E (TRAIL −/− ApoE −/− ) were fed a high-fat diet, the development of diabetic features, such as increased weight and impaired glucose tolerance, was observed 43 . The knockout animals also had higher numbers of CD11b + leukocytes and higher levels of IL-6 in the circulation 43 . In line with these findings is a study showing that in mice fed a high-fat diet, weekly injections of recombinant TRAIL resulted in reduced weight gain and improved hyperglycemia as well as hyperinsulinemia 44 . Furthermore, it caused not only a reduction of IL-6 and TNFα mRNA expression in white adipose tissue, but also decreased circulating IL-6 and TNFα 44 . Our findings suggest that an anti-inflammatory action of TRAIL is probably not mediated at the level of the adipocyte. However, our findings are solely based on in vitro experiments, which are limited since the crosstalk with other cell and organ systems is not taken into account. Our results support a pro-inflammatory role for TRAIL in human adipocytes. TRAIL stimulated the secretion of several pro-inflammatory factors such as IL-6, IL-8 and MCP-1, which were previously shown to be associated with obesity and insulin resistance 45,46 . CCL-20 has been described as an adipokine and its expression in adipocytes increases with elevated BMI 47 .
All these factors might act locally in an auto- and/or paracrine manner and further fuel the inflammatory process within adipose tissue, but they might also spill over into the circulation and thus impact other organ systems. (Statistical significance in the figures was tested by one-way ANOVA with Dunnett's multiple comparison; *p < 0.05, **p < 0.01, ***p < 0.001.) For example, IL-6 is known to induce insulin resistance in skeletal muscle and liver cells 48,49 . The importance of our findings with respect to the pro-inflammatory role of TRAIL and its receptor in adipose tissue is further underlined by a study in mice with an adipocyte-specific knockout of CD95, a related member of the TNF receptor superfamily. When these mice were fed a high-fat diet, they did not display glucose intolerance, adipose tissue inflammation and liver steatosis the way their wild-type littermates did 8 . The reasons underlying these partially conflicting data are currently not known. Different genetic models, diet-related, tissue-specific and species-specific effects may, alone or in combination, account for them. Despite any conflict, all current studies underline that the TNF receptor superfamily and its ligands have an impact on adipose tissue homeostasis. This is of particular importance because recombinant TRAIL or TRAIL receptor agonists are currently used in clinical trials for the treatment of cancer. In one phase I trial a maximum serum concentration of 259 μg/ml TRAIL was reached, which is far above the highest concentrations used in this study 50 . Although no severe side effects of TRAIL treatment except fatigue, nausea, vomiting, fever, anemia and constipation 50 have been reported yet, the possible pro-inflammatory effect on adipose tissue should be taken into account. Altogether, our study adds a new piece of information to the biological functions of TRAIL. We describe a novel pro-inflammatory function of TRAIL in human adipocytes, which is mediated by ERK1/2 and NFκB. Based on our findings, blocking the function of TRAIL in adipose tissue might be beneficial to either prevent or ameliorate adipose tissue inflammation. Materials and Methods Recombinant proteins, small molecule inhibitors and cell culture material. Recombinant human TRAIL (375-TEC) was purchased from Bio-Techne (Wiesbaden-Nordenstadt, Germany), TNF-α from Merck (Darmstadt, Germany), zVAD.fmk from Bachem (Bubendorf, Switzerland), SC-514 from Bio-Techne (Wiesbaden-Nordenstadt, Germany), PD-0325901 from Selleckchem (Houston, Texas, USA) and Bay 11-7082 from Invivogen (San Diego, California, USA). Cell culture media and buffers were from Thermo Fisher Scientific (Darmstadt, Germany). Ethical Note. All procedures were performed according to the Declaration of Helsinki guidelines and authorized by the ethics committee of Ulm University (entry number 368/13). Written informed consent was obtained from all subjects in advance. Subjects and primary human adipose stromal cell isolation. Primary human adipose stromal cells were isolated from subcutaneous white adipose tissue obtained from seven subjects undergoing plastic surgery. Collagenase (Sigma-Aldrich, Munich, Germany) digestion was performed as described elsewhere 50 . The mean age was 31.2 ± 12.3 years and the mean BMI was 24.9 ± 5.2 kg/m 2 . Cells were subjected to adipogenic differentiation and treated with TRAIL as described for SGBS cells. Cell culture. Human Simpson-Golabi-Behmel syndrome (SGBS) cells were used as a model system of adipocyte biology 20 .
Cells were passaged in DMEM/F-12 (1:1), 100 U/ml penicillin, 100 μg/ml streptomycin (Life Technologies, Darmstadt, Germany), 17 μM D-pantothenic acid and 33 μM biotin (Sigma-Aldrich, Munich, Germany) (referred to as basal medium) supplemented with 10% fetal bovine serum. For the induction of adipogenic differentiation, subconfluent cell cultures were washed with PBS, and adipogenic medium (serum-free basal medium with 20 nM human recombinant insulin, 100 nM cortisol, 200 pM triiodothyronine and 10 μg/ml transferrin) supplemented with 2 μM rosiglitazone, 25 nM dexamethasone and 250 μM isobutylmethylxanthine was added. After four days, the medium was changed to adipogenic basal medium alone. On day 14 of adipogenesis, TRAIL or an equal amount of vehicle (PBS + 0.01% bovine serum albumin) was added directly to the media. RNA isolation and cDNA preparation. Isolation of total RNA was performed using the peqGOLD HP total RNA kit (peqlab, Erlangen, Germany) or the Direct-zol RNA Mini Prep kit (Zymo Research Corporation, Irvine, California, USA) according to the manufacturers' instructions. cDNA synthesis was conducted using SuperScript II Reverse Transcriptase (Thermo Fisher Scientific, Darmstadt, Germany) according to the manufacturer's instructions. mRNA Expression Arrays. Subconfluent cultures of SGBS preadipocytes were treated with TRAIL or an equal amount of vehicle in basal medium supplemented with 10% fetal bovine serum. Adipocytes were treated with TRAIL or an equal amount of vehicle as described above. RNA was isolated after 12 hours. Triplicate experiments were performed. The integrity of the RNA was analyzed using a Bioanalyzer (Agilent, Santa Clara, California, USA). The RNA array analysis was performed by the Core Facility Genomics of the University Medical Center Ulm. In brief, 200 ng total RNA as starting material and 5.5 μg ssDNA were used per hybridization (GeneChip Fluidics Station 450; Affymetrix, Santa Clara, California, USA). The total RNAs were amplified and labeled following the Whole Transcript (WT) Sense Target Labeling Assay (http://www.affymetrix.com). Labeled ssDNA was hybridized to Human Gene 1.0 ST Affymetrix GeneChip arrays. The chips were scanned with an Affymetrix GeneChip Scanner 3000 and the subsequent images were analyzed using the Affymetrix Expression Console Software. A transcriptome analysis was performed using BRB-ArrayTools developed by Dr. Richard Simon and the BRB-ArrayTools Development Team (http://linus.nci.nih.gov/BRB-ArrayTools.html). Raw feature data were normalized, and log2 intensity expression summary values for each probe set were calculated using the robust multiarray average 51 . Filtering. Genes showing minimal variation across the set of arrays were excluded from the analysis. Genes whose expression differed by at least 1.5-fold from the median in at least 20% of the arrays were retained. Class comparison. We identified genes that were differentially expressed between the two classes using a two-sample t-test. Genes were considered statistically significant if their p value was less than 0.05 and they displayed a fold change between the two groups of at least 1.5-fold. We used the Benjamini and Hochberg correction to provide 90% confidence that the false discovery rate was less than 10% 52 . The false discovery rate is the proportion of the list of genes claimed to be differentially expressed that are false positives. Functional protein association networks between mutually differentially expressed genes were identified using the STRING 10 program (http://string-db.org/).
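As an illustration of the filtering and class-comparison procedure just described (two-sample t-test, 1.5-fold-change cutoff and Benjamini-Hochberg correction), the following Python sketch applies the same steps to a simulated log2 expression matrix; all values are invented and BRB-ArrayTools itself is not reproduced here.

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_genes = 1000
# Simulated log2 expression values: 3 control and 3 TRAIL-treated arrays per gene.
control = rng.normal(8.0, 0.5, size=(n_genes, 3))
treated = rng.normal(8.0, 0.5, size=(n_genes, 3))
treated[:50] += 1.2                      # pretend the first 50 genes are up-regulated

t_stat, p = stats.ttest_ind(treated, control, axis=1)
log2_fc = treated.mean(axis=1) - control.mean(axis=1)   # difference of log2 means
reject, p_adj, _, _ = multipletests(p, alpha=0.10, method="fdr_bh")

significant = (p < 0.05) & (np.abs(log2_fc) >= np.log2(1.5)) & reject
print(f"{significant.sum()} genes pass p < 0.05, |fold change| >= 1.5 and FDR < 10%")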
Quantitative real-time PCR. qPCR was performed on a LightCycler 2.0 instrument using the My-Budget 5x EvaGreen qPCR Mix (Bio-Budget, Krefeld, Germany). The mRNA levels of the genes of interest were first normalized to HPRT (hypoxanthine-guanine phosphoribosyltransferase, ΔCT value) and then to the corresponding control condition (ΔΔCT value) 53 . Protein extraction and Western blot. Whole protein extracts were obtained by washing the cells with ice-cold PBS and adding lysis buffer (10 mM Tris-HCl pH 7.5, 150 mM NaCl, 2 mM EDTA, 1% Triton X-100, 10% glycerol) supplemented with 1X cOmplete Proteinase Inhibitor Cocktail and 1X PhosSTOP Phosphatase Inhibitor Cocktail (Roche Diagnostics, Mannheim, Germany). Cells were detached by scraping. The lysates were incubated for 20 minutes at 4 °C and afterwards centrifuged at 14000 rpm for 30 minutes at 4 °C. Western blot analysis was performed as described elsewhere 18 . The following antibodies were used: rabbit anti-phospho AKT (S473), rabbit anti-AKT, mouse anti-phospho ERK1/2 (T202/Y204), rabbit anti-phospho JNK (T183/Y185), mouse anti-phospho IκBα (S32/S36), rabbit anti-IκBα, rabbit anti-caspase-3 (Cell Signaling, Cambridge, UK), mouse anti-JNK (BD, Heidelberg, Germany), mouse anti-caspase-8 (Alexis, Grünberg, Germany), rabbit anti-ERK1/2, mouse anti-β-actin (Sigma-Aldrich, Munich, Germany), mouse anti-caspase-8 (Enzo LifeSciences, Lörrach, Germany) and mouse anti-α-tubulin (Calbiochem/EMD Millipore, Darmstadt, Germany). HRP-conjugated goat anti-mouse IgG and goat anti-rabbit IgG were from Santa Cruz Biotechnology (Heidelberg, Germany). Full-length blots are provided in the Supplementary Appendix. ELISA. SGBS adipocytes were treated with 30 ng/ml TRAIL as described above. Media supernatants were harvested after 24 hours and cleared by centrifugation. The ELISAs for IL-6, IL-8 and MCP-1 were performed according to the manufacturer's instructions using Ready-SET-Go! ELISA kits (eBioscience, Vienna, Austria). The ELISA for CCL-20 was performed using the CCL20/MIP-3 alpha ELISA kit from Novus Biologicals (Littleton, Colorado, USA) following the manufacturer's instructions. Absorbance was measured on a microplate spectrophotometer reader (ELx800 Absorbance Microplate Reader, BioTek, Bad Friedrichshall, Germany). Electrophoretic Mobility Shift Assay (EMSA). SGBS adipocytes were treated for 2 hours with TRAIL (30 ng/ml), TNF-α (30 ng/ml) as a positive control or vehicle as a negative control. SGBS cells were collected from 10 cm dishes by scraping and centrifugation (10,000 g for 5 min at 4 °C). The preparation of nuclear extracts and the EMSAs were performed as previously described 54 . The oligonucleotides used were purchased from Biomers.net (Ulm, Germany). The sequences of the NFκB sense and antisense oligonucleotides were 5′-AGT TGA GGG GAC TTT CCC AGG C-3′ and 5′-GCC TGG GAA AGT CCC CTC AAC T-3′, respectively. The sense oligonucleotide was labeled with γ-32P-ATP (Hartmann Analytics, Braunschweig, Germany) using T4 polynucleotide kinase (Thermo Fisher Scientific, Darmstadt, Germany). Reporter gene assay. On day 7 of differentiation, SGBS adipocytes were nucleofected with an NFκB Firefly luciferase reporter vector containing five copies of an NFκB response element that drives transcription of the Firefly luciferase reporter gene, and a Renilla luciferase control reporter vector, using the Neon Transfection System (Thermo Fisher Scientific, Darmstadt, Germany).
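For readers unfamiliar with the ΔCT/ΔΔCT normalisation mentioned in the qPCR paragraph above, the short Python sketch below shows the calculation with invented CT values (HPRT as the housekeeping gene); it only illustrates the formula and does not contain any measured data from this study.

ct = {
    "control": {"IL6": 28.0, "HPRT": 24.0},   # hypothetical CT values
    "TRAIL":   {"IL6": 25.5, "HPRT": 24.1},
}

def fold_change(gene, treated="TRAIL", control="control", housekeeping="HPRT"):
    d_ct_treated = ct[treated][gene] - ct[treated][housekeeping]    # ΔCT, treated
    d_ct_control = ct[control][gene] - ct[control][housekeeping]    # ΔCT, control
    dd_ct = d_ct_treated - d_ct_control                             # ΔΔCT
    return 2 ** (-dd_ct)                                            # relative expression

print(round(fold_change("IL6"), 2))   # about a 6-fold induction with these example CTs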
First, adipocytes were trypsinized, counted and a cell solution of 600000 cells was centrifuged and resuspended in 105 μl Resuspension Buffer B of the Neon Transfection Kit (100 μl). The nucleofection was performed with 7.5 μg of the NFκB Firefly luciferase reporter vector 55,56 and with 0.75 μg of the Renilla luciferase control reporter vector (pRL-TK; Promega, Heidelberg) at 1400 Volt with three electric shocks for 10 ms each. Then, 75000 cells were seeded into one well of a 12-well plate. After 24 hours the adipocytes were treated for additional 24 hours with either TRAIL (30 ng/ml), TNF-α (30 ng/ml) as a positive control or vehicle as a negative control. The cell lysates were harvested by scraping and the luciferase activities were measured with the Dual-Glo Luciferase Reporter Kit (Promega, Madison, Wisconsin, USA) according to the manufacturer's instructions. Luminescence was recorded in a Multimode Microplate Reader (Mithras LB 940, Berthold, Bad Wildbad, Germany).
6,736.8
2017-07-18T00:00:00.000
[ "Biology" ]
ANALYSIS AND COMPARISON OF LONG SHORT-TERM MEMORY NETWORKS SHORT-TERM TRAFFIC PREDICTION PERFORMANCE Summary. Long short-term memory networks (LSTM) produce promising results in the prediction of traffic flows. However, LSTM needs large amounts of data to produce satisfactory results. Therefore, the effect of LSTM training set size on performance and the optimum training set size for short-term traffic flow prediction problems were investigated in this study. To achieve this, the number of data points in the training set was set between 480 and 2800, and the prediction performance of the LSTMs trained using these adjusted training sets was measured. In addition, LSTM prediction results were compared with nonlinear autoregressive neural networks (NAR) trained using the same training sets. Consequently, it was seen that increasing the LSTM training set size increased performance up to a certain point. However, after this point, the performance decreased. Three main results emerged from this study: First, the optimum training set size for LSTM significantly improves the prediction performance of the model. Second, LSTM performs short-term traffic forecasting better than NAR. Third, LSTM predictions fluctuate less than those of the NAR model following instantaneous traffic flow changes. INTRODUCTION Nowadays, the number of vehicles and travel demands are increasing rapidly. This increase is responsible for delays, fuel loss and high emissions globally. For this reason, the efficiency of road capacities should be increased by directing and controlling road traffic with intelligent transport systems (ITS). However, ITS needs information about the current status of traffic variables and future estimates of this information (for example, volume, speed, travel time, etc.). For ITS to be more efficient, it is important that traffic parameters be accurately estimated, especially in the short term. Thus, ITS can make fast and accurate decisions for future traffic situations. For this reason, studies on predicting the short-term future state of traffic have become important. Researchers are working to make these predictions more accurate by developing new methods. Especially as deep learning has proven itself in many areas, the use of deep learning in short-term traffic prediction has accelerated. Therefore, there is a need for research that better demonstrates the potential of deep learning in this regard. The first study on short-term traffic flow estimation was performed using the Box-Jenkins method [1]. Time series methods were used to estimate traffic flow in other studies [2-8]. However, when artificial intelligence approaches and time series methods were compared, it was observed that artificial intelligence predicted short-term traffic flow better [9]. Therefore, in this study, traffic flow estimation models were developed using artificial intelligence and deep network approaches, and the size of the training sets was discussed [10]. Before deep learning approaches, short-term traffic flow estimation was performed with ANNs. For instance, a dynamic wavelet ANN model was used to estimate traffic flow [11]. Dynamic traffic flow modelling is another approach to determining the amount of traffic flow [12]. ANN and K-NN were used together to estimate traffic flow [13]. In another study, multiscale analysis-based intelligent ensemble modelling was used to predict airway traffic [14].
The traffic flow was modelled for the city of Istanbul using different time resolutions and the results were accurate despite the limited data [15], as in several other studies [16][17][18]. Deep learning has recently gained interest in the prediction of various traffic parameters. Long short-term memory (LSTM) is a sub-branch of deep learning. Previous studies on LSTM provide evidence in which the performance of deep learning and other methods were compared. For example, LSTM was compared with regression models [19]. As a result, LSTM generally made better predictions, except in some cases. Researchers developed a model using LSTM to predict the short-term traffic flow in exceptional traffic conditions. In addition, the authors studied the characteristics of traffic data [20]. In another study, LSTM and recurrent ANN models were compared with ARIMA models [21]. As a result, researchers mentioned that artificial intelligence models work better. In another study, LSTM, recurrent ANN and regression models were compared, with LSTM obtaining better results [22]. LSTM and short-term traffic flow have been reviewed in the literature, but so far there has been no study on the effect of training set size on LSTM performance. Therefore, in this study, LSTM and nonlinear autoregressive neural networks (NAR) were trained with different training set sizes and the optimum size was determined for the problem. In addition, the two models were compared and their results discussed. Thus, the results of this study will help to determine the size of the training set for future studies. This article consists of introduction, methodology and conclusion. The subject and importance of this study are discussed, and the related literature is summarised in the introduction section of this article. The data used and the estimation of missing data are presented in the methodology section. Then, LSTM and NAR approaches are briefly explained, and the parameters of the models used in the study are introduced. Thereafter, NAR and LSTM estimates were tested by hypothesis testing and the results were discussed. Finally, the conclusions of the study are recalled in the conclusion section and recommendations are made for future studies. Data collection and missing data Traffic flow data were collected from the D200 highway. This highway connects the major cities of Turkey. The main road traffic is not interrupted 20 km forward and backward from the counted section. Therefore, there are uninterrupted conditions in the counting section. Data collection was performed with NC-350 traffic counters [23]. The counting was conducted with traffic counting devices placed separately for the left and right lanes. The devices were set to record data every 15 minutes. Counting was carried out for 47 days and 4,512 traffic flow records were collected. In the counting process, data cannot be recorded in some time intervals, and this is very common. These records are called missing data. This is often the result of faults in the device or the limitations of the counting device. After the counting operations, it was found that approximately 1% of the total data was missing. Autocorrelation reveals the degree of relationship of time series points with each other. The points with high autocorrelation are used in making future predictions. To complete the missing data, traffic data with high autocorrelation to the missing points were used. The results of the autocorrelation calculation are given in Fig. 1. Autocorrelation was high at point 672.
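To make the lag-selection step concrete, the following is a minimal sketch of computing the sample autocorrelation of the 15-min counts and locating the strongest lag (expected near 672); the array and file names are illustrative and not part of the original study.

```python
import numpy as np

def autocorrelation(series, max_lag):
    """Sample autocorrelation of a 1-D series for lags 1..max_lag, ignoring missing (NaN) records."""
    x = np.asarray(series, dtype=float)
    x = x - np.nanmean(x)
    var = np.nanvar(x)
    acf = []
    for lag in range(1, max_lag + 1):
        a, b = x[:-lag], x[lag:]
        mask = ~np.isnan(a) & ~np.isnan(b)
        acf.append(np.mean(a[mask] * b[mask]) / var)
    return np.array(acf)

# Example (hypothetical file of the 4,512 counts):
# flows = np.loadtxt("d200_flows.csv")
# acf = autocorrelation(flows, max_lag=800)
# best_lag = int(np.argmax(acf)) + 1   # expected to be around 672
```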
Each counting operation has 15-min intervals. In other words, every point in the time series is related to the point 7 days (672 = 7 × 24 × 4 intervals) earlier. This is a very common pattern in traffic flows. In this case, the missing data can be completed with the value 672 intervals before the missing point. Let X ∊ ℤ be traffic data with missing values and xt ∊ X indicate the traffic flow data at time t. Also, let x̂t denote a missing value in the series at time t. According to these definitions, the missing data is completed as in Equation 1: x̂t = xt−672 (1) After completing the missing data, the data set was standardised with Equation 2 before training the models: xstd = (x − x̄) / sx (2) where xstd is the standardised data, x the raw data, x̄ the mean of the dataset, and sx the standard deviation of the dataset. Long short-term memory Long short-term memory network (LSTM) is an advanced type of recurrent neural network (RNN) that can overcome the long-term dependence problem. RNNs produced successful results in sequence prediction tasks. However, it is often difficult for RNNs to learn long-term patterns [24]. LSTM can capture short- or long-term dependencies with the help of units that learn when to forget and when to update the information. Let xt be the input vector, ht be the output of the LSTM unit and Ct be the cell state at time t. In the first step, how much of the information in Ct−1 will be forgotten is determined by the forget gate. The forget gate is a layer that uses the sigmoid function and uses ht−1 and xt to generate values between "0" and "1". Therefore, ft in Fig. 2 can be written as: ft = σ(Wf · [ht−1, xt] + bf) (3) The next step is to identify new information that will be stored in the cell state. This step consists of two sub-steps: The first is the input gate, which determines what information to update. The second determines the vector containing the candidate values. In Fig. 2, the output value of the input gate is represented by it, while the output value of the second section is indicated by C̃t. The it and C̃t can be written as: it = σ(Wi · [ht−1, xt] + bi) and C̃t = tanh(WC · [ht−1, xt] + bC) (4) After these steps, the old state vector (Ct−1) is updated to reveal the new state vector (Ct). The update process can be written as: Ct = ft ⊙ Ct−1 + it ⊙ C̃t (5) Fig. 2. Long short-term memory network unit The last step is to determine the hidden state (ht). The output gate (ot) determines which parts of the cell state will be in the output, and the two can be written as: ot = σ(Wo · [ht−1, xt] + bo) (6) ht = ot ⊙ tanh(Ct) (7) where σ(·) is the sigmoid function, the W(f,i,C,o) matrices are the network weight parameters, the b(f,i,C,o) are the bias vectors, and ⊙ denotes the element-wise product. LSTM can successfully overcome the exploding/vanishing gradients problem with these processes and gates [25]. Nonlinear auto-regressive neural networks Nonlinear autoregressive neural networks (NAR) are a customised neural network (ANN) model for time series. NAR predicts the future value by using the past data of the time series. NAR needs a training set like other ANNs. Let X ∊ ℤ be the traffic flow data and xt ∊ X denote the traffic flow value at time t. In this case, the future traffic flow value will be: x(t+1) = f(xt, x(t−1), …, x(t−d)), where x(t+1) is the prediction value of the NAR, f(x) expresses the NAR black-box function and d is the delay value. The backpropagation algorithm [26] and the Levenberg-Marquardt method [28,29] were used for training. The connections of the NAR with the hidden and the output layers are shown in Fig. 3. The model uses a delay parameter to estimate the traffic flow at time t + 1.
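As an illustration of the delayed-input formulation x(t+1) = f(xt, …, x(t−d)) above, the following is a minimal sketch of building lagged input/target pairs for a NAR-type model; the delay value in the example is a placeholder, since the value adopted in the study is not stated.

```python
import numpy as np

def make_lagged_pairs(series, d):
    """Build (inputs, targets) where each input row is [x_t, x_(t-1), ..., x_(t-d)]
    and the target is x_(t+1), following the NAR formulation."""
    x = np.asarray(series, dtype=float)
    inputs, targets = [], []
    for t in range(d, len(x) - 1):
        inputs.append(x[t - d:t + 1][::-1])   # most recent value first
        targets.append(x[t + 1])
    return np.array(inputs), np.array(targets)

# Example (delay d = 4 is a placeholder; flows_standardised is the series after Eq. 1 and Eq. 2):
# X_train, y_train = make_lagged_pairs(flows_standardised, d=4)
```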
In this study, the tangent hyperbolic function was used in the hidden layer and the linear function in the output layer as activation functions. To determine the appropriate NAR architecture, the number of hidden layer neurons was tested from 5 to 35. Then, the RMSE values of the different NAR architectures were analysed and it was decided that 3-10-1 was the appropriate NAR architecture. In this section, we first introduce the creation of the training and test sets. Then, the effect of the size of the training sets on the predictions of NAR and LSTM is examined and finally, the prediction results of the two methods are evaluated by statistical tests. The pseudo-code for the creation of training and test sets with these representations is as follows: 1. Start 2. Let n := |X|, r := |tm|, p := |ej| 3. m = 1 4. j = 1 5. ej = {xt | t > n − (j·r + p) ⋀ t ≤ n − j·r} 6. tm = {xt | t > n − j·r ⋀ t ≤ n − (j − 1)·r} 7. If j < p and m < r, then j = j + 1 and turn back to Step 4; if j = p and m < r, then m = m + 1 and turn back to Step 4; if j = p and m = r, then stop. The delay parameter or lag value was kept equal in the LSTM and NAR models. Thus, regardless of parameter d, the effect of data set size on performance was compared. The NAR and LSTM models with training set size 480 were named NAR5 and LSTM5 and the test results are given in Fig. 4 using box plots. In Figs. 4 and 5, the outliers are shown with the (+) sign. If these (+) signs are counted in Figs. 4 and 5, it is understood that while LSTM produces ten outliers, NAR has four outliers. This result indicates that LSTM predictions, in rare cases, deviate more than expected. When the median values were examined, it was seen that the value of NAR5 was higher than the value of LSTM5. In addition, it was observed that the range of LSTM5 was smaller than that of NAR5 upon examination of the upper/lower whiskers. Simply put, the LSTM approach was able to produce better results than the NAR with the smallest training set size examined. It can be seen from Fig. 5 that the LSTM produces lower RMSE for all training set sizes. It was observed that the NAR error values oscillated as the size of the training set increased, but no clear decrease was observed. Furthermore, it is understood from Fig. 5 that the LSTM error values tend to decrease clearly for the same training set size increase. Thus, following examination of the average RMSE values of the models, it was found that the lowest error occurred at the 25-day training set for both NAR and LSTM. Based on this, the error values of the models trained with this training set size were examined more closely. Fig. 5 shows that the maximum RMSE value of NAR25 is 17 veh. However, the maximum prediction error value of LSTM25 was about 13 veh. To observe the predictions of the models in more detail, the 17th test day was examined in Fig. 6. The results of the remaining test days are presented in Appendix 1 for the reader's review. The coefficients of determination of the two models were calculated and it was determined that both models produced high R² values. The calculated R² values for the remaining days can be examined in Appendices 2 and 3. Like the RMSE values examined in the previous figures, LSTM predictions produced higher R² values than NAR predictions for all test days.
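Since the comparison above relies on per-day RMSE and R² values, a minimal sketch of those two metrics is given here, assuming aligned arrays of observed and predicted 15-min flows.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

def evaluate_day(y_true, y_pred):
    """RMSE (vehicles per 15-min interval) and R² for one test day."""
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    return rmse, r2_score(y_true, y_pred)
```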
A remarkable situation was seen during the comparison of the models on the line graph. In Fig. 6, the prediction line of NAR makes high fluctuations to approach the actual value. On the other hand, the fluctuation of LSTM was less than that of NAR. The same examination was performed for the other test days and the same result was reached. In the light of these results, it was concluded that LSTM was less affected by instant traffic flow changes than the NAR model. Although the LSTM was found to be more accurate than NAR, the statistical significance of this result was tested by t-test. The hypothesis statements were established as follows: H0: If LSTM is used instead of NAR, the mean RMSE does not change. (μLSTM = μNAR) H1: If LSTM is used instead of NAR, the mean RMSE is decreased. (μNAR > μLSTM) where μNAR and μLSTM represent the mean of the estimation errors of NAR and LSTM, respectively. Acceptance of H1 indicates that LSTM has a lower mean prediction error than NAR. The confidence level of the hypothesis test was 95% (α = 0.05). The p-values were examined from the table and it was seen that p < 0.05 for all training set sizes except the 5-day training set. In the light of these results, except for the 5-day training set, H0 was rejected and H1 was accepted. The statistical analysis confirmed that the LSTM model usually predicted traffic flow more accurately than the NAR model for 15-min data. In addition, an improvement in the predictive performance of the NAR model was not observed by increasing the size of the training set. However, an improvement in the predictive performance of the LSTM model was clearly observed by increasing the size of the training set. However, it was determined that the increase in the size of the training set should be kept at a certain level. For this study, it was found that this size should be 2400 data points (25 days) for both models. CONCLUSION Accurate short-term traffic forecasts will improve the decision-making capabilities of traffic control systems. Thus, traffic flow and traffic safety will reach better levels. In this study, training sets of different sizes were created. Then, the effects of these sets on the predictive performance of LSTM and NAR models were examined. In terms of short-term traffic estimation, it was understood from the analysis results and statistical tests that LSTM models make better predictions than NAR models. The conclusions of this study are as follows: – This study showed that a larger training set does not always increase performance. For this reason, the optimum training set size of new deep learning approaches should be determined. – A larger training set size does not always mean better performance for LSTM and NAR. – Improvement in LSTM estimation performance is observed up to the optimum training set size. However, the same cannot be said for NAR. – LSTM is less affected by instant traffic flow changes than the NAR model. Therefore, LSTM produces more stable results than NAR for short-term traffic prediction. – Statistically, the LSTM approach performs better than NAR when the training set size is greater than 480. – It was observed that LSTM produced more outliers than NAR. Therefore, in rare cases, LSTM is likely to make large errors. – In this study, the size of the LSTM training set was discussed in the context of the prediction of traffic flow. The effects of other parameters of LSTM will be investigated in future studies. For this study, tests were performed for a time interval of 15 minutes, which is commonly used in the literature. In addition, smaller time intervals can be investigated in future studies. Another limitation of this study is the use of only one data set.
Future studies will be enriched with different data sets from different regions. ITS will be an indispensable tool in the future traffic control of cities. This will make future traffic flow forecasts much more important. Therefore, it can easily be foreseen that the studies will continue for more effective use of deep learning in road traffic prediction.
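As a supplement to the hypothesis test reported in the results, the following is a minimal sketch of a one-sided t-test on the per-day RMSE values; whether the original study used a paired or an independent test is not stated, so the paired form below is an assumption.

```python
from scipy import stats

def lstm_better_than_nar(rmse_nar, rmse_lstm, alpha=0.05):
    """One-sided paired t-test of H1: mean RMSE of LSTM < mean RMSE of NAR."""
    t_stat, p_two_sided = stats.ttest_rel(rmse_nar, rmse_lstm)
    p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
    return p_one_sided < alpha, p_one_sided

# Example (per-day RMSE arrays are illustrative placeholders):
# reject_h0, p = lstm_better_than_nar(rmse_per_day_nar, rmse_per_day_lstm)
```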
4,194.4
2020-06-30T00:00:00.000
[ "Computer Science", "Business" ]
Fully Depleted, Trench-Pinned Photo Gate for CMOS Image Sensor Applications Tackling issues of implantation-caused defects and contamination, this paper presents a new complementary metal–oxide–semiconductor (CMOS) image sensor (CIS) pixel design concept based on a native epitaxial layer for photon detection, charge storage, and charge transfer to the sensing node. To prove this concept, a backside illumination (BSI), p-type, 2-µm-pitch pixel was designed. It integrates a vertical pinned photo gate (PPG), a buried vertical transfer gate (TG), sidewall capacitive deep trench isolation (CDTI), and backside oxide–nitride–oxide (ONO) stack. The designed pixel was fabricated with variations of key parameters for optimization. Testing results showed the following achievements: 13,000 h+ full-well capacity with no lag for charge transfer, 80% quantum efficiency (QE) at 550-nm wavelength, 5 h+/s dark current at 60 °C, 2 h+ temporal noise floor, and 75 dB dynamic range. In comparison with conventional pixel design, the proposed concept could improve CIS performance. Introduction Conventional pixel design for complementary metal-oxide-semiconductor (CMOS) image sensors (CIS) commonly uses fully depleted photodiodes as a photo-sensing element. With rapid CIS developments, the photodiode structure evolved from the early planar pinned photodiode (PPD) [1,2] to the current deep PPD [3], proposed to address the backside illuminated (BSI), high-resolution image sensor market. Even so, this technology is limited by the need to employ ion implantation that can cause crystal damage and metal contamination [4,5]. Indeed, to meet high-resolution imaging requirements, shrinking the pixel size led to space search in silicon depth for the storage of photo-generated charges. As charges are stored in deep PPD, it becomes more difficult to drain them via a surface transfer gate (TG) for the reading. To help vertical charge draining toward the surface TG, three-dimensional (3D) gradual doping through careful implantation control can create an electric field. Moreover, high-energy ion implant is necessary for the definition of deep PPD. However, such implantation processes cause defects that may not be entirely suppressed due to limited annealing temperatures, and there are risks of introducing undesirable impurities. This induces deep-level traps in silicon, leading to dark current degradation. We propose in this paper a novel concept for pixel implementation, introducing a vertical pinned photogate (PPG) to replace deep PPD using epitaxial silicon as the active layer. In continuation of the former planar charge-coupled device (CCD) and implanted trench photo MOS concept [6][7][8][9], the proposed integration requires no implantation [10], thus avoiding issues of defect creation and contamination. Secondly, to efficiently drain charges stored in the PPG, the development of a buried vertical TG in pipe form between the charge storage region and surface sensing node (SN, also called floating diffusion or FD) is proposed. This makes the transfer path direct in comparison with the conventional implementation of gradual doping and surface TG. Moreover, the proposed pixel structure takes benefit from capacitive deep trench isolation (CDTI), formerly developed for dark current reduction [11] and more recently for fully depleted memories for global shutter applications [7,12]. Its use as a pixel sidewall allows active lateral surface passivation with surface potential pinning for the PPG. 
Finally, for backside surface passivation, an electrostatic approach based on the integration of an oxide-nitride-oxide (ONO) stack is proposed. The ONO stack holds a certain quantity of charges to establish, by field effect, a sufficient density of oppositely charged carriers at the silicon (backside) surface. It also plays the role of a photon transmission layer through its optical low-absorption and antireflection properties in the visible spectrum. To assess the proposed concept, a BSI, p-type, 2-µm-pitch pixel (in square layout) with vertical TG (in square layout) was designed (see illustration in Figure 1a) to operate at a supply voltage range of [0 V, 3.3 V], using 90-nm-node CMOS facilities. The pixel was fabricated (shown in Figure 1b) in test structures with variations of key parameters for optimizing performance. The achieved performance was evaluated by measuring the test structures. Figure 1a illustrates the designed BSI pixel with integration of a vertical PPG, a buried vertical TG beneath a surface SN, a sidewall CDTI, front-side readout transistors, and a back-side ONO stack. The pixel structure is implemented from an epitaxial silicon layer of a few micrometers. Vertical Pinned Photo Gate (PPG) A commonly used starting material for the digital CMOS core process is an epitaxial p-type substrate. The vertical PPG was implemented in this epitaxial silicon layer with a uniform doping concentration NA. This substrate specification implies that the signal charges are holes to be collected and stored in the PPG.
The silicon volume of the pixel is limited by the sidewall CDTI, and the PPG takes a large part of it with a good fill factor in volume. Through surface passivation by the sidewall CDTI, the PPG forms a potential well (shown in Figure 2a,b) to collect and store photo-generated holes. The potential well can be determined by solving the Poisson equation [1]. Issued from a one-dimensional solution, the X-axis (horizontal, passing through the PPG's center) potential profile is given by V(x) = VS − (q·NA/(2·εSi))·x·(Wd − x), for 0 ≤ x ≤ Wd (1) From the obtained profile, the minimum potential at the PPG's center, called the depletion voltage VD, and the potential well's depth, ∆VD, can be extracted. They have the following relationship: VD = VS − ∆VD (2a) with ∆VD = q·NA·Wd²/(8·εSi) (2b) where q is the elementary charge, εSi the silicon permittivity, VS the lateral surface potential, and Wd the silicon width between face-to-face CDTI gate electrodes. VS is pinned when the lateral Si-SiO2 interface is set in inversion mode via CDTI bias control. Such an operation mode cancels the thermal generation of dark current from this interface [11,13]. The parameters involved in relationship (2b) are the key to pixel operation with optimal performance because of the tradeoff between full depletion voltage, transfer efficiency, and full well. In particular, increasing ∆VD improves full well capacity, but also lowers VD, thus reducing the potential difference (versus SN) for charge transfer. For design optimization and determination of key parameters, two-dimensional (2D) and 3D process and device simulations were carried out.
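To illustrate how relationship (2b) ties the key design parameters together, a short numerical sketch follows; the doping concentration and silicon width below are placeholders, not the values used in the fabricated devices.

```python
# Illustrative evaluation of the well depth from relationship (2b): ΔVD = q·NA·Wd² / (8·εSi).
q = 1.602e-19               # elementary charge, C
eps_si = 11.7 * 8.854e-12   # silicon permittivity, F/m
N_A = 5e21                  # acceptor concentration, m^-3 (5e15 cm^-3, assumed)
W_d = 1.0e-6                # CDTI-to-CDTI silicon width, m (assumed)

delta_VD = q * N_A * W_d ** 2 / (8 * eps_si)
print(f"Potential well depth ΔVD ≈ {delta_VD:.2f} V")   # ≈ 1 V for these placeholder values
```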
Vertical Transfer Gate There was a preliminary investigation into lateral deep transfer gates [14]. However, for this pixel structure, we preferred a vertical shallow trench transfer gate, placed in the central surface area of the pixel (see Figure 3a), in front of the position of maximum fully depleted photo gate voltage to maximize the charge transfer efficiency at low operating voltage. The vertical TG in pipe form surrounds a wire-shaped channel for charge transfer from the PPG to the surface SN. It needs to ensure both charge holding and transfer operations. In the off state, a potential barrier for stored holes as high as ∆VD is desirable, while, in the on state, the Y-axis (vertical, passing through the PPG's center) potential profile from the backside surface of the PPG to the front-side surface SN should decrease monotonically for holes, so as to completely transfer the stored charges without lag. Figure 3b depicts the vertical potential profile for different TG voltages. The vertical TG is buried with its controlling electrode below the silicon surface, thus leaving its surface area to the SN. This allows the SN to be implemented with a smaller capacitance, so as to enhance the charge-to-voltage factor (CVF). Front-Side Readout Transistors For the pixel readout, the conventional 2T5 pixel architecture was adopted with the use of PMOS transistors. This architecture is compatible with the correlated double sampling (CDS) readout mode. The pixel makes use of front-side planar readout transistors implemented in a shallow n-type well surrounding the buried TG (shown in Figure 1a,b). The well region also works as a top pinning layer of the PPG, and as a source of carriers (electrons) for the lateral and bottom surfaces to reach the inversion mode. Backside Positively Charged ONO Stack The ONO stack can be integrated by successive deposition of three dielectric layers on the backside silicon surface, using plasma-enhanced chemical vapor deposition (PECVD). The first deposited layer, oxide, plays the following roles: (1) chemical surface passivation, with lower interface state density at the Si-SiO2 interface than in the case of nitride deposition on silicon [15]; (2) retaining charges of the second deposited layer, with reproducible parameters.
The second layer is hydrogenated silicon nitride SiN_x:H, and the thin film can be positively charged thanks to its K centers [16], thereby playing the role of a field-effect passivation and antireflection layer. The third layer is oxide, deposited mainly to complete the optical antireflective effect. With antireflective consideration for a nearly 100% light transmission ratio at 550-nm wavelength, the three stacked layers were chosen to have the following thicknesses: 20 nm for the first oxide, 60 nm for the nitride, and 170 nm for the last oxide. Based on Sentaurus's electrical simulations (see Figure 4), the ONO stack needs to hold a surface density of positive charges over 5 × 10 11 cm −2 at the Si-SiO 2 interface to ensure backside surface passivation with the surface potential pinned to the value imposed by the n-type well of PMOS transistors. Experimentally, the ONO stack was deposited onto silicon wafers in different configurations, and the COCOS characterization technique (corona oxidation for characterization of semiconductors) was employed to estimate the Si-SiO 2 interface states and fixed charge distribution [17]. Those dielectric parameters were correlated to dark current measurements [15]. The best configuration possessed a density of fixed charges of 6.9 × 10 11 cm −2 and a density of interface states of 4.5 × 10 9 eV −1 ·cm −2 . It was chosen and integrated in the pixel fabrication process. Results and Discussion The designed p-type, 2-µm-pitch pixel was fabricated using 90-nm MOS technology with wafer backside thinning and ONO stack deposition. It was integrated in test structures with variations of key parameters such as the epitaxial layer thickness (from 2.7 to 4.8 µm) and doping concentration N A (two levels: A and B). The test structures were measured to evaluate the pixel characteristics, including full well capacity, quantum efficiency (QE), and dark current. Full Well Capacity and QE The pixel under test operated properly with no lag observation during charge transfer. Figure 5a plots the full well capacity against epitaxial layer thickness for two doping levels A and B. One can observe a 1.7-µm equivalent zone without any charge storage. This zone comes from top and bottom space charge extension and vertical transfer gate depth. Beyond this zone, the full well capacity went up linearly with the epitaxial layer thickness. In addition, it went up more quickly for a higher doping level B, reaching 13,000 h+ for a 4.8-µm silicon thickness. Figure 5b represents QE as a function of wavelength for three values of epitaxial layer thickness. QE was improved when increasing the thickness, which can be explained by the fact that PPG had more silicon space (in depletion) to absorb incoming photons and collect photo-generated holes. For the 4.8-µm epi layer thickness, the maximum QE reached 80% at 550 nm (pixel without color filter and micro lens). It is noted that QE for large wavelengths was enhanced mainly because there were reflecting layers and components on the front side of the pixel, which roughly doubled the silicon absorption thickness.
Dark Current Dark current of the p-type pixel was measured and compared with an n-type counterpart: a 1.75-µm-pitch, planar pinned photodiode, BSI pixel with CDTI isolation. Figure 6a compares statistical distributions of dark current measured at 60 °C from the test pixel arrays. There were two types of pixels: p-type PPG and n-type PPD. The p-type exhibited the intrinsic peak (representing normally operated pixels) at 4.5 h+/s with a much smaller dispersed population. In contrast, the n-type counterpart peaked at 17 e−/s with a larger tail, resulting in an average value of 23 e−/s. The large tail distribution of the n-type may be due to the high-energy implantation process steps needed for its fabrication. Measuring the dark current's temperature dependence permits extraction of the activation energy, which indicates the predominant mechanism of dark current generation [18]. We performed dark current measurements on the p-type pixel array at and above 60 °C, below which signals were too weak to be practically measurable. The intrinsic pixels with dark current centered at 4.5 h+/s had their activation energies around 1.1 eV, corresponding to the silicon bandgap. This means that dark currents in the intrinsic pixels were dominated by a diffusion mechanism. It also means that the silicon surface surrounding the PPG was effectively passivated. The remaining pixels with higher values of dark current (peaks at around 100 h+/s and above) had their extracted Ea near 0.8 eV. Figure 6b compares standard deviations of dark currents between the two types of pixel arrays, i.e., p-type and n-type. The differences were significant, e.g., at 100 °C, the value was ~200 h+/s for the p-type compared with ~1000 e−/s for the n-type. The smaller standard deviation of the p-type indicates that the pixel structure is more robust to temperature elevation. Main Characteristics Other measured results included a conversion voltage factor (CVF) of 90 µV/h+, a photo-response non-uniformity (PRNU) of 0.5%, and a temporal noise floor of 2 h+. Thanks to the high full well capacity of the thick structure, a dynamic range of 75 dB was estimated. Table 1 summarizes the main performances of the designed pixel. Conclusions We designed and realized a 2T5, rolling-shutter, p-type, 2-µm-pitch image sensor pixel to evaluate and validate our proposed concept based on a native epitaxial layer for photon detection, charge storage, and charge transfer to the sensing node. It integrates a vertical PPG, buried vertical TG, sidewall CDTI, and backside ONO stack. Testing results show a high QE, high full well capacity with no lag, and low dark current.
In comparison with the conventional pixel design, the proposed concept can bring better performance. This new structure featuring a silicon trench etching process may be deployed for either large or small pixel sizes, as well as thin or deep silicon thicknesses, targeting a photo response ranging from the ultraviolet (UV) to near-infrared (NIR) wavelength.
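As a supplement to the dark-current analysis in the Results section, the following is a minimal sketch of how an activation energy can be extracted from temperature-dependent dark-current data under an Arrhenius assumption; the temperatures and rates passed in would be the measured values, and the example call is illustrative only.

```python
import numpy as np

def activation_energy(temps_c, dark_rates):
    """Fit ln(I_dark) against 1/(kT) and return the activation energy Ea in eV,
    assuming I_dark ∝ exp(-Ea / kT)."""
    k_ev = 8.617e-5                                              # Boltzmann constant, eV/K
    inv_kt = 1.0 / (k_ev * (np.asarray(temps_c, dtype=float) + 273.15))
    slope, _ = np.polyfit(inv_kt, np.log(dark_rates), 1)
    return -slope

# Example (measured_rates would hold h+/s values at each temperature):
# ea = activation_energy([60, 70, 80, 90, 100], measured_rates)
```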
4,639.6
2020-01-28T00:00:00.000
[ "Engineering" ]
Detection Anomaly in Video Based on Deep Support Vector Data Description Video surveillance systems have been widely deployed in public places such as shopping malls, hospitals, banks, and streets to improve the safety of public life and assets. In most cases, how to detect video abnormal events in a timely and accurate manner is the main goal of social public safety risk prevention and control. Due to the ambiguity of anomaly definition, the scarcity of anomalous data, as well as the complex environmental background and human behavior, video anomaly detection is a major problem in the field of computer vision. Existing anomaly detection methods based on deep learning often use trained networks to extract features. These methods are based on existing network structures, instead of designing networks for the goal of anomaly detection. This paper proposes a method based on Deep Support Vector Data Description (DSVDD). By learning a deep neural network, the input normal sample space can be mapped to the smallest hypersphere. Through DSVDD, not only can the smallest size data hypersphere be found to establish SVDD but also useful data feature representations and normal models can be learned. In the test, the samples mapped inside the hypersphere are judged as normal, while the samples mapped outside the hypersphere are judged as abnormal. The proposed method achieves 86.84% and 73.2% frame-level AUC on the CUHK Avenue and ShanghaiTech Campus datasets, respectively. By comparison, the detection results achieved by the proposed method are better than those achieved by the existing state-of-the-art methods. Introduction In order to improve the safety of public life and assets, video surveillance systems have been widely deployed in public places such as shopping malls, hospitals, banks, and streets. In most cases, how to detect video abnormal events in a timely and accurate manner is the main goal of social public safety risk prevention and control. Video abnormal events are defined as abnormal or irregular patterns in the video that do not conform to normal patterns. These incidents often include fights, riots, violations of traffic rules, trampling, holding arms, and abandoning luggage. However, due to the ambiguity of anomaly definitions, the scarcity of anomalous data, and the complex environmental background and human behavior, video anomaly detection is a major problem in the field of computer vision. In a nutshell, most of the current research work on video anomaly detection can be divided into two steps, namely feature extraction and normal model training [1]. Feature extraction can be achieved by manual technology or automatic feature extraction technology (representation learning or features based on deep learning). In normal model training, normal samples are used for learning, and then samples that do not conform to the learned model are judged as abnormal events. Then, the classification according to features can be divided into three different methods [2]. The first type is the trajectory-based methods [3]. This type of method obtains trajectory features by tracking the target. However, in dense scenes, target tracking is a big problem. The second type of method is based on global features [4,5]. This type of method takes the video frame as a whole and extracts some low-level or middle-level features such as spatiotemporal gradients and optical flow. In a moderately crowded and dense environment, these methods can remain effective. The third type is the grid feature-based methods [6].
This type of method often divides the video frame into multiple small grids through dense sampling and then extracts the underlying features of each single grid, because each grid can be individually evaluated. According to different normal model training methods, the present methods can also be divided into three different types. The first type is the cluster-based method [7]. This type of method is often based on the assumption that a normal sample belongs to a category and lies relatively close to a cluster center, while abnormal samples do not belong to any category or are far away from the cluster centers; the normal samples are then clustered to build the model. The second is the method based on sparse reconstruction [8,9]. This type of method assumes that a sparse linear combination of patterns can represent normal activities with the smallest reconstruction error. Because there is no abnormal activity in the training data set, abnormal patterns are represented with a large reconstruction error. The third type is probabilistic model-based methods. This type of method considers that normal samples conform to a certain probability distribution, while abnormal samples do not conform to this distribution. Recently, the latest progress of deep learning has proved the obvious advantages of deep learning-based methods in many computer vision applications [10]. As one of the tasks in computer vision, video anomaly detection is no exception. Different from traditional manual feature-based methods, deep learning methods often use pretrained networks to extract high-level features from videos or use existing network structures to establish end-to-end anomaly detection models based on normal models. For the former idea [11,12], there is not much difference from the two steps of traditional abnormal event detection. For the latter idea [13][14][15][16], the two steps of feature extraction and model building are often jointly optimized in a deep network. In the framework of deep learning, this paper proposes a new anomaly detection method based on Deep Support Vector Data Description (DSVDD). Through DSVDD, not only can the smallest size data hypersphere be found to establish SVDD but also useful data feature representations and normal models can be learned. To this end, DSVDD uses a jointly trained deep neural network to map normal sample data to the smallest volume hypersphere. Then, in the test, the samples mapped inside the hypersphere are judged as normal, while the samples mapped outside the hypersphere are judged as abnormal. The RGB image and the optical flow map are combined into 6-channel data and directly input into a DSVDD model; that is, it can detect appearance anomalies and motion anomalies at the same time. The experimental results on the two public data sets of Avenue [9] and ShanghaiTech [17] show that the detection results of the method proposed in this paper are excellent and exceed the state of the art. Principle of Algorithm The overall process of the method proposed in this paper is shown in Figure 1. In the training phase, the RGB images and optical flow diagrams of the training samples are densely sampled and then merged into 6-channel data to train the DSVDD model. In the testing phase, the RGB image and optical flow diagram of the video frame to be tested are likewise combined into 6-channel data; after inputting it into the learned DSVDD model, it is determined whether each area is abnormal.
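A minimal sketch of forming the 6-channel input described above from an RGB frame and its optical-flow map follows; the text does not specify how the two-component flow is expanded, so the magnitude channel below is an assumption.

```python
import numpy as np

def make_six_channel(rgb, flow):
    """Stack an RGB frame (H, W, 3) with an optical-flow map along the channel axis.
    A 2-channel (u, v) flow is padded with its magnitude to reach 3 channels."""
    rgb = np.asarray(rgb, dtype=np.float32)
    flow = np.asarray(flow, dtype=np.float32)
    if flow.shape[-1] == 2:
        magnitude = np.linalg.norm(flow, axis=-1, keepdims=True)
        flow = np.concatenate([flow, magnitude], axis=-1)
    return np.concatenate([rgb, flow], axis=-1)      # shape (H, W, 6)
```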
In this section, the principle of SVDD is first briefly introduced, and then the training and testing process for video abnormal events based on DSVDD is described. SVDD. SVDD is a description method based on boundary data (support vectors). Its goal is to find a hypersphere that contains all or almost all training samples and has the smallest volume (the center is c ∈ Fk, and the radius is R > 0). In fact, the SVDD optimization problem can be formulated as follows: min over R, c, ξ of R² + (1/(νn)) Σi ξi, subject to ‖ϕk(xi) − c‖²Fk ≤ R² + ξi and ξi ≥ 0 for all i (1) In (1), the slack variables ξi ≥ 0 allow a soft boundary, and ν ∈ (0, 1] is a hyperparameter that controls the balance between the penalty term and the volume of the hypersphere. Therefore, a point that falls outside the hypersphere, i.e., ‖ϕk(xi) − c‖²Fk > R², is decided to be abnormal. SVDD has been widely used in fields such as anomaly detection, face recognition, speech recognition, image restoration, and medical imaging [21]. DSVDD. DSVDD learns a deep neural network ϕ(·; W) with weights W, so that the input normal sample space is mapped to a hypersphere of minimum volume with center c and radius R. Normal samples are mapped inside the hypersphere, and abnormal samples are mapped outside it. Specifically, for the input space X ⊆ R^d of sample areas and the output space F ⊆ R^p, a neural network with L ∈ N hidden layers projects the input space to the output space, X → F, where W = {W^1, W^2, ..., W^L} are the weights of the hidden layers ℓ ∈ {1, 2, ..., L}, respectively. Therefore, ϕ(x; W) ∈ F is the feature representation of the input sample x ∈ X. The goal of the DSVDD method is to jointly optimize the network weights W and the output space to satisfy the minimum-hypersphere constraints with center c and radius R. Then, given the training samples Dn = {x1, x2, ..., xn}, the soft-boundary objective function of DSVDD is as follows: min over R, W of R² + (1/(νn)) Σi max{0, ‖ϕ(xi; W) − c‖² − R²} + (λ/2) Σℓ ‖W^ℓ‖²F (2) In (2), as in the SVDD method, the minimization of R² means minimizing the volume of the hypersphere. The second term is a penalty for points that are mapped outside the hypersphere by the neural network, i.e., those whose distance from the center ‖ϕ(xi; W) − c‖ is greater than the radius R. The hyperparameter ν ∈ (0, 1] controls the balance between the volume of the hypersphere and boundary violations, which allows certain points to be mapped outside the sphere. The last term is a weight-decay regularization term on the network parameters W, where λ > 0 and ‖·‖F represents the Frobenius norm. The optimization of (2) enables the network to learn weights W such that the data points are projected closely around the center of the hypersphere c. For this reason, the deep network must extract the common factors of variation of the data. In fact, normal samples can often be mapped closer to the center of the hypersphere c, while abnormal samples are mapped farther from the center or outside the hypersphere. In this way, a compact description of the normal model is obtained. In actual tasks, it is often assumed that the training samples are all normal samples, so the objective function can be simplified to a one-class classification problem as follows: min over W of (1/n) Σi ‖ϕ(xi; W) − c‖² + (λ/2) Σℓ ‖W^ℓ‖²F (3) DSVDD simply uses a quadratic loss to penalize the distance between each deep network representation ϕ(xi; W) and c. The second term is the weight-decay regularization term on the network parameters W, with λ > 0. Equation (3) can also be regarded as finding a hypersphere of minimum volume centered at c.
However, unlike Equation (2), which uses a soft boundary, Equation (3) shrinks the sphere by minimizing the average distance of all data representations from the center, instead of directly penalizing the radius and the data representations that fall outside the sphere. Similarly, in order to map the samples as close to the center of the hypersphere as possible, the deep neural network must extract the common factors of variation. The weights W of the neural network in DSVDD can be optimized by common backpropagation methods (such as stochastic gradient descent). Because the network weights W and the hypersphere radius R have different scales, it is impossible to optimize DSVDD with one learning rate. Therefore, it is necessary to alternately optimize the network weights W and the hypersphere radius R by the alternating minimization/block coordinate descent method. Test Phase. Given a test sample area x′ ∈ X, the anomaly score can be calculated as follows: s(x′) = ‖ϕ(x′; W*) − c‖² (4) where W* are the trained network model parameters. It is worth noting that the network parameters can fully describe the DSVDD model, and predictions can be made without storing any data, so DSVDD has a very low storage complexity. Therefore, the computational complexity during testing is small. In order to infer whether the test sample area is an abnormal sample, a threshold can be set on s(x′) to make judgments as follows: the area is judged abnormal if s(x′) > θ, and normal otherwise (5) where θ is the threshold that determines the sensitivity of the detection method in this paper. Dataset. This paper evaluates the performance of the DSVDD method on two publicly available data sets, i.e., the Avenue data set [9] and the ShanghaiTech data set [17]. The Avenue data set is one of the most widely used benchmarks for video anomaly detection. It contains 16 training video clips and 21 test video clips, including 47 abnormal incidents that occurred on the streets of the Chinese University of Hong Kong. Each video is about 1 minute long and has a resolution of 640 × 360. Normal events are walking on the street, and abnormal events include running, loitering, and throwing. The ShanghaiTech data set [17] is one of the largest newly proposed datasets for video anomaly detection. Unlike other data sets, the video clips in this data set come from 13 different cameras with different lighting conditions and camera angles. It has 330 training video clips and 107 test video clips containing 130 abnormal events. The resolution of the video frames is 856 × 480. Abnormal events in this data set include chasing and noise. Evaluation Index. According to previous work [14], this paper calculates the frame-level receiver operating characteristic (ROC) curve and uses the area under the curve (AUC) score as an evaluation indicator. A higher AUC score indicates better anomaly detection performance. If an area in the video frame is judged to be abnormal, the frame is judged to be abnormal. We first obtain the anomaly scores of all video frames and then calculate the frame-level AUC scores. Supplementary Details. For the two data sets, each frame is adjusted to a size of 320 × 240, and the optical flow image is calculated by the RAFT optical flow method provided in [22] through a network pretrained on the Things data set. The original video frame and the calculated optical flow map are combined into 6-channel data, then cropped into a 16 × 12 grid of images of size 20 × 20, and then input into DSVDD for training and prediction.
The deep neural network part of DSVDD follows the structure Conv (16, 3 × 3)-Leaky ReLU-ConvTran (32, 3 × 3)-BN-Leaky ReLU-ConvTran (64, 3 × 3)-BN-Leaky ReLU-FullyConnected (64). In the training phase, the batch size is set to 128, the initial learning rate is 0.0003, the weight decay is 0.0001, and the training is performed for 1000 iterations. On the Avenue dataset, the DSVDD method proposed in this paper is superior to the results obtained by other methods, with an AUC score of 87.4%, which is 2.3% higher than the baseline method proposed in 2018 [24]. As far as we know, in terms of the frame-level AUC scores of all test videos in this data set, the DSVDD proposed in this paper has achieved the best results. It is worth noting that the Object-centric auto-encoder [20] method achieved 89.3% frame-level AUC in its paper, but this was calculated with a different indicator; computed in the same way as here, the frame-level AUC score obtained by the Object-centric auto-encoder [20] method should be 86.5%, which is 0.9% lower than the method proposed in this paper. On the ShanghaiTech dataset, the DSVDD method proposed in this paper achieves a frame-level AUC score of 74.5%, which is 1.7% higher than the baseline method proposed in 2018 [24] and second only to the Object-centric auto-encoder [20], which achieved 78.5%. The Object-centric [20] method uses an object-detection-based approach for anomaly detection, and its performance largely depends on the output of its object detection algorithm. Therefore, detection-based methods cannot determine abnormal events that have not occurred before, which is common in anomaly detection. Similarly, the MemAE method [23] requires the help of a pretrained pose estimator to achieve better results, so it is limited to detecting abnormal events related to people. In contrast, the DSVDD method proposed in this article does not have this limitation and is very reliable when applied to various scenarios. Obviously, apart from these two specially limited methods, the DSVDD method proposed in this paper is at least 1.7% ahead of the other methods in frame-level AUC. In Figure 2, some examples of anomaly score curves produced by the method proposed in this paper are shown, and some key frames with normal or abnormal events are given. The abscissa is the video frame number, and the ordinate is the anomaly score, normalized to 1. It can be seen that in the two data sets, the method proposed in this paper can correctly distinguish between normal and abnormal events. If an abnormal event occurs suddenly, such as running as shown in Figure 2(a), the anomaly score increases sharply. If the abnormal event occurs slowly, as shown in Figure 2(b), the anomaly score increases gradually. If the object that caused the abnormality disappears from the camera's field of view, the anomaly score quickly decreases to close to 0. Conclusion In this paper, a video anomaly detection method based on DSVDD is proposed. DSVDD can be seen as a combination of deep learning and SVDD. It uses a jointly trained deep neural network to map normal sample data to the smallest volume hypersphere. Then, in the test, the samples mapped inside the hypersphere are judged as normal, while the samples mapped outside the hypersphere are judged as abnormal.
A large number of experimental results on two public data sets show that the proposed method is significantly better than the existing methods, which proves the effectiveness of the anomaly detection method proposed in this paper. In the future, we will reduce the computational complexity while ensuring the accuracy of the algorithm, and focus on improving the real-time performance of the algorithm to better apply it to actual scenarios [25][26][27]. Data Availability The datasets used in this paper can be accessed upon request.
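To make the objective in Equation (3) and the scoring rule in Equations (4) and (5) concrete, a minimal PyTorch sketch follows; the small encoder, the random stand-in data, and all hyper-parameters are placeholders rather than the network and settings described in the implementation details.

```python
import torch
import torch.nn as nn

# Placeholder encoder mapping 6-channel 20x20 patches to a 64-dimensional representation.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(6 * 20 * 20, 128), nn.LeakyReLU(), nn.Linear(128, 64))
optimizer = torch.optim.Adam(encoder.parameters(), lr=3e-4, weight_decay=1e-4)  # weight decay plays the role of the λ term

# The center c is commonly fixed to the mean representation of an initial pass over normal data.
with torch.no_grad():
    c = encoder(torch.randn(256, 6, 20, 20)).mean(dim=0)   # random tensors stand in for real training patches

def train_step(batch):
    """One optimization step of the one-class objective (Eq. 3): mean squared distance to c."""
    optimizer.zero_grad()
    loss = torch.sum((encoder(batch) - c) ** 2, dim=1).mean()
    loss.backward()
    optimizer.step()
    return loss.item()

def anomaly_score(patch_batch):
    """Squared distance to the center (Eq. 4); a patch is flagged abnormal if the score exceeds θ (Eq. 5)."""
    with torch.no_grad():
        return torch.sum((encoder(patch_batch) - c) ** 2, dim=1)
```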
4,177.6
2022-05-04T00:00:00.000
[ "Computer Science" ]
Nonlinear Analysis for a Type-1 Diabetes Model with Focus on T-Cells and Pancreatic β -Cells Behavior : Type-1 diabetes mellitus (T1DM) is an autoimmune disease that has an impact on mortality due to the destruction of insulin-producing pancreatic β -cells in the islets of Langerhans. Over the past few years, the interest in analyzing this type of disease, either in a biological or mathematical sense, has relied on the search for a treatment that guarantees full control of glucose levels. Mathematical models inspired by natural phenomena, are proposed under the prey–predator scheme. T1DM fits in this scheme due to the complicated relationship between pancreatic β -cell population growth and leukocyte population growth via the immune response. In this scenario, β -cells represent the prey, and leukocytes the predator. This paper studies the global dynamics of T1DM reported by Magombedze et al. in 2010. This model describes the interaction of resting macrophages, activated macrophages, antigen cells, autolytic T-cells, and β -cells. Therefore, the localization of compact invariant sets is applied to provide a bounded positive invariant domain in which one can ensure that once the dynamics of the T1DM enter into this domain, they will remain bounded with a maximum and minimum value. Furthermore, we analyzed this model in a closed-loop scenario based on nonlinear control theory, and proposed bases for possible control inputs, complementing the model with them. These entries are based on the existing relationship between cell–cell interaction and the role that they play in the unchaining of a diabetic condition. The closed-loop analysis aims to give a deeper understanding of the impact of autolytic T-cells and the nature of the β -cell population interaction with the innate immune system response. This analysis strengthens the proposal, providing a system free of this illness—that is, a condition wherein the pancreatic β -cell population holds and there are no antigen cells labeled by the activated macrophages. Introduction Type-1 diabetes mellitus (T1DM) is an autoimmune disease in which, mainly, the production of pancreatic β-cells decays at a rate proportional to leukocyte cell growth [1]. This disease is primarily mediated by lymphocyte T-cells, which recognize pancreatic β-cells as antigens. The progressive destruction of β-cells takes place, leading to a complete loss of insulin production and dysregulation of glucose metabolism. A peak in the global population of individuals under the age of 20 with T1DM or type-2 diabetes is estimated for the year 2045 [2,3]. The development of new treatment options is hampered by our limited understanding of the human pancreas' organogenesis due to the restricted access to primary tissues. However, diabetes is an immunological disease that impacts people worldwide without discrimination of race, sex, nationality, or social status [4]. In recent decades, substantial increases in diabetes prevalence have demonstrated latency and consistency, with 415 million people worldwide living with diabetes [5]. The International Diabetes Federation estimated that 425 million people in 2017 had T1DM, and as a projection for 2045, that number could reach up to 629 million, which is a global increase of 48% in only 28 years [6]. The average age of prevalence is in the range 20-64 years, which represents 327 million people worldwide; this is a 72% increase over the past 65 years. 
The projected global increase from 2017 to 2045 is overwhelming: in North America and the Caribbean, cases of T1DM that can escalate to type-2 diabetes are projected to increase by 35%. In South and Central America, there is a projected increase of 62%; meanwhile, European expectations are for a 16% increase in the population with T1DM. The Western Pacific region has the lowest estimated increase, projected at 15%. Southeast Asia, Africa, and the Middle East report higher projections, ranging from 72% up to 156%, which indicates a state of alarm that calls for further research and policy strategies. Since the projections of how T1DM is increasing over time are globally overwhelming, robust strategies based on mathematical modeling can help improve future health outcomes [7]. Models help to explain a system, study the effects of its different components, and give predictions about its behavior. Analysis of models via computational and applied mathematical methods is a way to deduce the consequences of the interactions. Moreover, mathematical models allow one to formalize the cause-and-effect process and relate it to the biological observations. Furthermore, they yield insights into why a system behaves the way it does, thereby providing links between network structure and behavior [8]. At present, there are many studies related to this topic; for instance, mathematical modeling of the glucose-insulin relation was well studied through the years 1989-2012 based on a comparative analysis between clinical and nonclinical schemes [9], and continues to be studied in more experimental settings [10]. However, the task of designing therapies [11] that can provide a reliable path to minimal suppression of the β-cell population continues to be a primary research issue. The authors of [12] reported a mathematical approach based on two-day fitting of glucose behavior and discussed how the parameters play a critical role in computing the standard quantities used in functional insulin therapy, namely the basal insulin rate and the insulin sensitivity factor. Recently, the authors of [13] reported the first combined model of the interaction between glucose, insulin, free fatty acids (FFAs), and growth hormone as a set of nonlinear ordinary differential equations. Nonlinear control theory has proven to be a reliable strategy for establishing results related to a positive bounded invariant domain under the existence of upper bounds, providing a broad understanding, with biological implications, of suitable treatment options [14,15]. The study of biological models with nonlinear characteristics is an open invitation to design control schemes in which control inputs contribute to treatment feasibility. Since most biological models have prey-predator characteristics, a robust control design enhances the possibility of implementation through an embedded system that can handle real-time data [16,17]. The lack of intrinsic robustness of biological systems against various perturbations is a crucial factor that hinders stable behavior [18]. The blood glucose control of diabetics can be categorized into open-loop and closed-loop systems, which can feasibly be analyzed by nonlinear theory. Typical diabetic treatment is based on open-loop control, in which the patient measures their glucose level during the day and tries to regulate it within a healthy range by injecting an appropriate dose of insulin.
This method is not accurate because of the time duration between measurements. Closed-loop control systems present the opposite case because a feedback scheme continually compares the processing data against a reference value, repeating the cycle while searching for the ideal value, as reported in [19][20][21][22] and the observer-based nonlinear control design in [23]. Those scientific studies have in common an ordinary differential equation base system that involves the insulin parameter as an input control, where mathematical approaches or computational strategies are applied. However, this type of control analysis is useful when T1DM is part of personal daily care. Nevertheless, transferring the considerations obtained in these mathematical analyses as a possible treatment of diabetes remains a challenge [8]. Recent research suggests a more in-depth development of insulin proliferation due to β-cells' behavior. In [24], the authors conclude that researchers around the world must continue to monitor trends in type-1 diabetes incidence while working in the areas of prevention, early detection, and improving treatment. Furthermore, in [25], the authors tackled the use of protein biomarkers associated with risk factors in developing cardiovascular diseases when diabetes family antecedents prevail and pass in offspring from the gestational diabetes stage. They conclude that a deeper understanding of the leading causes of diabetes development could improve this topic of research. Therefore, the contribution of this work lies in the mathematical analysis of the complex cellular relation between pancreatic cells when an element of the immune response labels a specific population of cells. Nonlinear theories such as bounded positive invariant domains, localizing compact invariant set, and the Lyapunov method provide a better understanding of how cell-cell relations take place. Thus, we can hopefully define critical parameters that are responsible for the reduction of the pancreatic cell populations, avoiding the high glucose trigger level that may need an insulin control input in the future. The presented results identify some main parameters that could reduce, delay, or avoid levels of non-desired cells, suggesting this as a possible immunotherapy treatment in terms of the variables and parameters of the discussed model. This paper is organized as follows: The Section 1 is addressed to the literature related to controllers applied in the different models of T1DM, likewise the mathematical theory that is associated with nonlinear models. The Section 2 presents the mathematical model described by a set of fifth-order nonlinear differential equations and the mathematical background necessary to solve the problem of a bounded positive invariant domain existence. Moreover, this section presents the analysis of necessary conditions for stability due to the presence of an invariant plane, and preliminary simulations are shown. The Section 3 presents the nonlinear model with three control inputs, in order to establish asymptotic stability. Finally, Section 4 provides discussion and Section 5 presents conclusions. Mathematical Model of T1DM In 2010, Magombedze et al. proposed a model of the T1DM behavior that consists of five ordinary differential equations. It describes the dynamical correlation between diabetes and immune system response. 
That is, the interaction of the population sets of resting macrophages (x_1), activated macrophages (x_2), antigens (x_3), autolytic T-cells (x_4), and β-cells (x_5), giving the five-equation system (1) [26]; an explicit reconstruction of these equations from the description below is sketched after the mathematical preliminaries. According to Magombedze et al., the dynamics of each cell population set are as follows: (1) Resting macrophages have a constant supply rate a and a natural death rate c, and they increase due to the recruitment of activated macrophages with a maximum rate b + k, while g is the rate at which resting macrophages become active due to interaction with antigenic cells. (2) Activated macrophages are supplied at rate g through the interaction between resting macrophages and antigenic cells and have a natural death rate k. (3) Antigen cells increase due to the release of antigenic peptides by activated macrophages and of β-cell antigenic peptides by β-cells killed through the interaction between β-cells and T-cells, with rates l and q, respectively; m is the rate at which antigenic cells are cleared from their population. (4) Autolytic T-cells have a constant supply rate s_T and a natural death rate μ_T, with proliferation rate s due to a profile of cytokines and chemokines induced by activated macrophages. (5) β-cells have a constant supply rate s_B and a natural death rate μ_B, while q is the depletion rate of the β-cell population due to interactions between β-cells and T-cells. The description of each parameter and its estimation are taken from [26] and summarized in Table 1. Parameters marked with (*) are critical values in the islet regeneration region in the pancreas [27]. Mathematical Preliminaries The general method of LCIS is applied to determine the location of a domain including all compact invariant sets of a system of differential equations. This method is useful in cases where it is necessary to understand the long-term behavior of a dynamical system. Now, consider a nonlinear system represented as ẋ = f(x), x ∈ R^n (2), and let h : R^n → R be a function such that h is not a first integral of the system (2). The function h is exploited in the solution of the localization problem of compact invariant sets and is called a localizing function. h|_U denotes the restriction of h to a set U ⊂ R^n. S(h) denotes the set {x ∈ R^n | L_f h(x) = 0}, where L_f h(x) is the Lie derivative of h along the vector field f(x). In order to determine the localizing set, it is necessary to compute h_inf = inf{h(x) | x ∈ S(h)} and h_sup = sup{h(x) | x ∈ S(h)}; all compact invariant sets of (2) are then contained in K(h) = {x ∈ R^n | h_inf ≤ h(x) ≤ h_sup}, and intersecting the sets obtained from several localizing functions (the iterative theorem) refines the localization. Mathematical Development In this section we compute the domain of attraction containing all compact invariant sets of the system (1), making it possible to determine lower and upper bounds. Bounds are defined by inequalities depending on the system's parameters, giving global insight into the ultimate densities of each cell population over long time intervals; thus, in the biological sense, these mathematical assumptions define the minimum and maximum carrying capacity of each cell population. The bounded positive invariant domain (BPID) is obtained when all upper bounds of (1) intersect one another as a result of applying the LCIS method. The BPID establishes that if all trajectories of a system enter the positive invariant domain, they remain within it for all time. Notice that obtaining a BPID for the system (1) defined on the R^5_{0,+} orthant, which contains all state variables of the system under analysis, cannot be achieved due to the complexity of the system, even when the system satisfies positiveness, that is, even if all the state variables are considered positive (x_n > 0).
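Based on items (1)-(5) above, system (1) can be written out explicitly. The following is a reconstruction from that verbal description and should be checked against the original equations in [26] before being relied upon:

\[
\begin{aligned}
\dot x_1 &= a + (b + k)\,x_2 - c\,x_1 - g\,x_1 x_3,\\
\dot x_2 &= g\,x_1 x_3 - k\,x_2,\\
\dot x_3 &= l\,x_2 + q\,x_4 x_5 - m\,x_3,\\
\dot x_4 &= s_T + s\,x_2 x_4 - \mu_T\,x_4,\\
\dot x_5 &= s_B - q\,x_4 x_5 - \mu_B\,x_5.
\end{aligned}
\]

Under this reading, on the hypothetical invariant plane x_4 = 0 used in the next subsection (strictly, invariance of that plane also requires the T-cell supply s_T to be suppressed by the hypothesized control input), the x_4-dependent terms drop out of the antigen and β-cell equations, so the β-cell population decouples and relaxes to s_B/μ_B, consistent with the later observation that β-cell levels remain optimal when there is no autolytic T-cell response.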
In this particular case, BPID is possible if the planes x 1 and x 4 are equal to zero. Nevertheless, this consideration is biologically meaningless since x 1 represents resting macrophages, an essential population set in the immunological response against any foreign or internal antigen. On the other hand, an analysis considering the invariant plane x 4 = 0 implies a low response of the autolytic T-cells; therefore, the activation of lymphocyte cytotoxic T-cells is null. Autolytic T-cells represent a subset of helper T-cells, and in the scheme of proposing a suppression of the immunological response, it implies the possibility of a feasible treatment. Thereby, the mathematical implications lead to the idea that control input is necessary. LCIS for the Invariant Plane It is essential to mention that x 4 represents a critical variable that leads to the activation of lymphocyte cytotoxic T-cells with the help of autolytic T-cells, which pursue the elimination of an antigen. In this scenario, the antigen represents β-cells; once activated, macrophages label them. Hence, in this work we assumed that the invariant plane can hypothetically be considered as a control input representing a feasible scheme to pose a treatment parameter that leads to the non-activation of this cell population. Under this consideration, the system represented by (1) becomes: if and only if the conditions (4)- (6) are satisfied under the restriction of R 5 0,+ due to biological implications. Proof. The following localizing function then, the set K(h 1 ) exists in the positive orthant that contains the upper bounds for the variables In order to establish an upper bound for x 3 , we propose the localizing function h 2 = x 3 ; therefore, the set K(h 2 ) is obtained by applying the iterative theorem [29]. Hence, the set that contains all compact invariant sets of model (3) is represented as: The BPID defined by the invariant plane x 4 = 0 denoted by (7) establishes a leading path to the hypothesis in which a control input may guarantee the non-activation of the autolytic T-cells despite a population set of activated macrophages that are demanding a high response from autolytic T-cells to activate lymphocyte T-cytotoxic cells as an innate response. As a consequence, the β-cell population that is labeled as an antigen by activated macrophages will have less volume compared with the population set of β-cells that continue the function of producing insulin. Therefore, the mathematical analysis of the invariant plane exhibits the potential existence of immunotherapy that can counteract the growth rate in which β-cells come labeled as antigens. 2.3. LCIS for R 5 + ∩ {x 4 > 0} In this subsection, the mathematical analysis tackles the scheme when autolytic T-cells are activated and cannot deactivate their primary function, the activation of the lymphocyte cytotoxic T-cells. In [26], the authors reported a proliferation of T-cells due to the profile of cytokines and chemokines induced by activated macrophages (sx 2 x 4 ), and also estimated the population for each set of cells. Therefore, the method of LCIS contributes to obtain the maximum carrying capacity that all cell sets will have in a long time period, independently of how they behave. Hence, let us propose some localizing functions in a way that can mathematically express the maximum and lower carrying capacity. 
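As a concrete preview of how the localizing sets below are obtained, consider the β-cell coordinate h = x_5 (introduced shortly as h_4) together with the reconstructed dynamics sketched above. Its Lie derivative along the vector field is

\[
L_f h_4 = \dot x_5 = s_B - \mu_B x_5 - q\,x_4 x_5, \qquad
S(h_4) = \{\,\mu_B x_5 = s_B - q\,x_4 x_5\,\},
\]

and on S(h_4) ∩ R^5_{0,+} the term q x_4 x_5 is non-negative, so μ_B x_5 ≤ s_B and hence

\[
K(h_4) = \Big\{\, x_5 \le \frac{s_B}{\mu_B} \,\Big\},
\]

i.e., the ultimate β-cell density is bounded above by the ratio of the β-cell supply rate to its natural death rate. The remaining localizing functions in this section are treated in the same way, with the free parameter β and the auxiliary conditions determining when the corresponding suprema exist.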
In order to define the maximum or minimum density of resting macrophages, activated macrophages, and antigen cells, consider the localizing function h 3 = x 1 + x 2 − βx 3 , wherein β is a free parameter, while its Lie derivative is contained in the set S(h 3 ) expressed in the form where the β domain is while the upper bound for the set K 3 (h 3 ) is as follows: Hence, the upper bounds for resting and activated macrophages are given by The ultimate density of the β-cell population is determined by a second localizing function h 4 = x 5 , where the set S(h 4 ) is defined as S(h 4 ) = {µ B x 5 = s B − qx 5 x 4 } , leading to a set K(h 4 ) which defines the maximum cell population by A third localizing function h 5 = x 3 + x 5 is used to determine the maximum population set of antigen cells, leading to the set S(h 5 ) defined as S( under the condition m ≥ µ B , implying that there exists an upper bound for the population of antigen cells. The set K(h 5 ) is achieved under the restriction of the set K(h 3 ) as Furthermore, a localizing function h 6 = x 4 is defined to obtain the carrying capacity of T-cells, given a set S(h 6 ) defined by wherein an upper bound of T-cell exists if β fulfils the condition (9), (11), and x 2 < µ T s is satisfied, as long as so that the free parameter β is held under the intersection Hence, the set K(h 6 ) is defined as follows: Additionally, throughout the localizing function h 6 the lower bound of these cells can be obtained. In this case, the function S(h 6 ) may also be presented as thus, the set K(h 6 ) is defined as Therefore, if conditions (9), (15), and (16) are satisfied, then the BPID existence for system (1) is achieved and contained inside of the following domain: A summary of lower and upper bounds for each cell population can seen in Table 2, complementing by quantitative cell sets those qualitative results presented in [26]. Table 2. Upper and lower bounds. Localizing Functions Conditions Localizing Set The case of a BPID with x 4 > 0 includes all trajectories of the system and tends to the only non-negative and non-zero equilibrium point, satisfying the biological sense. That is, the cell populations of resting macrophages and active macrophages depend on the decay rate of β-cell protein and the macrophages' death rate values, respectively. However, the decay rate of β-cell protein must be greater than or equal to the natural death rate of β-cell. Therefore, these conditions exploit a closed-loop strategy to prove that the hypothesis is valid based on nonlinear control. This condition reinforces the hypothesis made when invariant plane x 4 holds. Conditions (16) and m ≥ µ B establish the potential feasibility of suppressing at least two main variables of the model, as part of the immunological response, by control inputs. Nevertheless, these inputs in a closed-loop design aim at suppressing the innate response by a combined population of both activated macrophages and autolytic T-cells, avoiding the creation of antigen cells by β-cells. Nonlinear Controller Design The β-cell population triggers insulin markers to maintain normal blood glucose levels by regulating carbohydrate lipid and protein metabolism through its mitogenic effects via blood vessels. The only test that can provide a direct measurement of the β-cell population set is the C-peptide test [30], which has proven to be a reliable clinical test, despite measurement risk factors that depend on the level of expertise of the clinician. 
This type of analysis depends on human estimation and numerical assumptions [31]. The C-peptide test is a useful indicator of β-cell function that allows direct discrimination between insulin sufficiency and insulin deficiency in individuals with T1DM. In this case, we propose three control inputs for the variables related to activated macrophages, β-cells presented as antigens, and autolytic T-cells. As previously discussed, this follows from the biological implication that a population of resting macrophages (x_1) labels a population of β-cells (x_5) as antigens (x_3); x_1 thereby becomes activated macrophages (x_2), which are directly responsible for autolytic T-cell (x_4) stimulation, whose primary function is to trigger the cytotoxic T-lymphocyte response. The following control hypothesis aims to prevent diabetes by reinforcing the immunological response; hence, inhibiting the x_2 response would contribute to a decrease in the amount of x_5 labeled as x_3, and, as a consequence, x_5 will not be eliminated. Therefore, it is necessary to control the populations of these undesired cells in order to avoid both the destruction of x_5 once the antigen tag is made and the activation of lymphocyte cytotoxic T-cells called for by the immune response. In accordance with the aforementioned, system (1) can be expressed in closed-loop control form, denoted system (21), where u_1, u_2, and u_3 are control inputs that, in a biological sense, have the objective of preserving the β-cell population in the pancreatic islets. Then, to determine the conditions for each control input, we consider the quadratic Lyapunov candidate function V = (1/2) ∑_{i=1}^{5} β_i x_i², denoted Equation (22), whose derivative is V̇ = ∑_{i=1}^{5} β_i x_i ẋ_i, where the β_i, i ∈ {1, ..., 5}, are free positive parameters; after substituting each ẋ_i of system (21) into the derivative of (22), V̇ takes the form of Equation (23). By inspection of Equation (23), it can be seen that three control inputs may not be necessary and that the stability conditions could be satisfied by fewer than three of them. Nevertheless, in [26] it is established that decreasing the magnitudes of the parameters {a, g, q, s, s_T, s_B} could be a positive result towards the control of diabetes in early diagnostics. On the other hand, if the parametrization of the control inputs contains some of the parameters defined by {c, k, m, μ_B, μ_T}, it means that a diabetic condition is evident and the disease needs to be treated. Therefore, using fewer than three control inputs implies the necessity of a parameter combination drawn from both sets, meaning that the patient could already be under a diabetic condition. Thereby, the following proposition aims to define the entries based on the parameters that have a direct impact on preventing diabetes. Hence, the control inputs are given by (24)-(26). Substituting the control inputs (24)-(26) into (23) and completing the quadratic form for the variables x_2, x_4, and x_5 gives the expression for V̇ in (27). Now, completing the quadratic form for the positive terms that contain x_1 and factorizing the common term −β_1, Equation (27) can be rewritten in a form where the following condition on β_2 must be satisfied to guarantee the positiveness of A. Hence, since all variables present nonlinear dynamics in the positive orthant due to their biological implications, and as the analysis in the previous section demonstrates, asymptotic stability in the sense of Lyapunov is guaranteed if the stability condition (30) is also satisfied. The proposed control inputs are biologically sound.
That is, these involve the parameters that trigger the progression of diabetes and the cell populations that influence the diagnosis of this disease, as supported by the stability analysis. However, a physical implementation is still a challenge because it is not possible to modify cell populations such as resting macrophages directly, since they are produced naturally by the immune system. Hence, if a treatment or a form of intervention can reduce such parameters, it could give positive results in diabetes prevention. Numerical Simulations This section presents numerical simulations. The simulation setting follows the parameter values given in Table 1. Figure 1 shows the BPID constructed with the invariant-plane localizing set K_1. As can be seen, when there is no T-cell concentration in the system, the remaining cell populations, regardless of their initial values inside the domain, tend to their equilibrium levels, which corresponds to a stable condition of the disease with no progression. Therefore, it can be concluded that the population of β-cells in the pancreas is optimal. In this case, this simulation is considered valuable due to the feasibility of proposing at least one control input for the system (1) that could help to ensure a stable population of T-cells, or at least prevent their indiscriminate spread. Moreover, the invariant plane x_4 = 0 implies that there is no immunological response; therefore, all cell populations tend to their optimal concentrations. Figure 2a,b presents the upper bound domain of the localizing set K(h_3) for both resting and activated macrophages; Figure 2c presents the upper bound domain of the localizing set K(h_5) for the antigen cell concentration, demonstrating that the variables associated with activated macrophages respond immediately once the β-cells are presented as antigens. It is a natural response, since they influence the lymphocyte cells by activating the autolytic T-cells. Figure 2d presents the upper and lower bounds for the T-cell population resulting from the localizing set K(h_6); as can be seen from this localizing set, it is important to highlight that there exists a minimum level of this cell population, which has a direct impact on triggering cytotoxic T-cells. Figure 2e presents the upper bound for the β-cell population resulting from the localizing set K(h_4). Therefore, this analysis exposes the complex interaction between these cell populations, demonstrating that when there are no activated macrophages labeling a population of β-cells as antigens, no autolytic response is required. This leads to the development of a mathematical proposal, supported by nonlinear control theory, to treat this disease. Figures 3 and 4 present the convergence of the cell populations to the desired concentrations, with Figure 4 showing T-cells and β-cells, due to the control actions (24), (25), and (26). In both figures, the solid line, dotted line, and dashed line represent the open-loop natural response, the closed-loop system behavior, and the equilibrium of each state variable, respectively. According to these figures, it can be seen that if there is no control action, the cell populations do not reach their state of equilibrium, and the system is susceptible to parametric variation that could generate diabetes.
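For readers who want to reproduce this kind of qualitative open-loop behavior, the following is a minimal simulation sketch. It relies on the hedged reconstruction of system (1) given earlier and uses purely illustrative parameter values and initial conditions (they are not the Table 1 estimates of Magombedze et al.), so it indicates the structure of such an experiment rather than the authors' exact setup. The closed-loop runs of Figures 3 and 4 would additionally subtract the control inputs (24)-(26) from the corresponding right-hand sides.

# Minimal open-loop simulation of the reconstructed T1DM model (illustrative only).
# State: x1 resting macrophages, x2 activated macrophages, x3 antigen cells,
#        x4 autolytic T-cells, x5 beta-cells.
from scipy.integrate import solve_ivp

# Hypothetical parameter values chosen only to produce bounded trajectories;
# they are NOT the estimates of Magombedze et al. (Table 1).
p = dict(a=1.0, b=0.09, k=0.4, c=0.1, g=2e-5,
         l=0.2, q=1e-5, m=0.5,
         s_T=0.1, s=1e-4, mu_T=0.3,
         s_B=1.0, mu_B=0.01)

def t1dm(t, x, p):
    x1, x2, x3, x4, x5 = x
    dx1 = p['a'] + (p['b'] + p['k']) * x2 - p['c'] * x1 - p['g'] * x1 * x3
    dx2 = p['g'] * x1 * x3 - p['k'] * x2
    dx3 = p['l'] * x2 + p['q'] * x4 * x5 - p['m'] * x3
    dx4 = p['s_T'] + p['s'] * x2 * x4 - p['mu_T'] * x4
    dx5 = p['s_B'] - p['q'] * x4 * x5 - p['mu_B'] * x5
    return [dx1, dx2, dx3, dx4, dx5]

x0 = [10.0, 0.0, 0.0, 1.0, 100.0]        # illustrative initial populations
sol = solve_ivp(t1dm, (0.0, 500.0), x0, args=(p,))
print(sol.y[:, -1])                      # final (long-time) population levels

Plotting sol.t against each row of sol.y gives trajectories analogous to the open-loop (solid-line) curves described above, which can then be checked against the upper and lower bounds of Table 2.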
Discussion A mathematical analysis of a nonlinear model aimed at understanding the complex relation between β-cells with the immunological response through autolytic T-cells and macrophages demands a deeper understanding of biological concepts related to T1DM. The existence of a BPID, considering all upper bounds for the system state variables, gives conditions to establish a better understanding of the correlation between cells and the evolution of T1DM. According to the literature, in T1DM there is an inverse correlation between β-cells and autolytic T-cells, implying that when the autolytic T-cell population increases, the β-cell population decreases, activating a diabetic condition. Through the LCIS obtained in this study, cell populations can increase or vice versa. However, they stay limited within minimum and maximum population levels. At this point, we note that in a steady-state condition, cell populations never reach their state of equilibrium (see Figure 2), some of them even tend to their upper or lower bounds, such as the case of resting macrophages, activated macrophages, and autolytic T-cells, while an increase of β-cells is due to T-cell population decreases. Nevertheless, as can be seen in Figure 2c,d, the β-cells will not be greater in number than the autolytic T-cells because β-cells' upper bound becomes the autolytic T-cells' lower bound; at most, these populations would be equal. Therefore, it is vital to notice that autolytic T-cells cannot be zero in any case, making it so that a diabetic condition could appear at any time. The above emphasize the importance of a closed-loop control system able to compensate the proliferation of autolytic T-cells against the death rate of β-cells. Additionally, in this study we propose a closed-loop control system, and as can be seen in Figures 3 and 4, the proliferation of these cells is stabilized to their state of equilibrium, decreasing the rate in which macrophages become activated due to the interaction with antigenic proteins, diminishing the death rate of β-cells by the cell-cell interaction with autolytic T-cells as well as the proliferation of T-cells due to the profile of cytokines and chemokines induced by activated macrophages. We have proven that the control inputs, in accordance with the mathematical analysis, can keep cell populations at a stable level towards diabetes control in early diagnostics. However, nonlinear controller design for biological models entails more complex analysis compared with other nonlinear systems such as electric, mechanical, or chaotic. It is complex to analyze because the cell population sets cannot be directly controlled; this is only possible by manipulating some of their parameters. The feasibility of immunotherapy treatment requires a stable protocol based on clinical experimentation that supports mathematical hypotheses or suggestions, in which this research contributes to giving a basis for nonlinear control, with the corresponding biological implications, that may provide theoretical support for some clinical experiments related to this topic. Conclusions The assumption of considering autolytic T-cells as an invariant plane implies the existence of an input treatment that delays the proliferation of these cells due to activated macrophages, reducing the antigen population of β-cells; as a consequence, all dynamics can converge to the equilibrium point asymptotically. 
The mathematical analysis suggests three control inputs that are directly related to the state variables: activated macrophages (24), antigens (25), and autolytic T-cells (26). In this case, control input (24) implies the existence of a counterpart that has a direct impact on avoiding any activation of macrophages, by assuming that there is no antigen identified as a threat. Control input (25) aims at counteracting the antigen factor associated with β-cells, based on the direct relation between the activated macrophages and the autolytic response. The last control input, (26), is associated with holding back the autolytic T-cell population, which has a direct effect on reducing the β-cell population owing to the influence of the higher cytotoxic cells of the immune response. Therefore, a mathematical analysis considering closed-loop control inputs provides a theoretical basis that motivates a more in-depth search for an immunotherapy treatment that can reinforce current procedures. It is essential to acknowledge that the stimulation and propagation of diabetes result from a combination of events, and there is no single event that is responsible for it. Simulations suggest that the mathematical search for upper bounds, as presented in Table 2, shows that a free parameter such as the one in (9) leads to different assumptions under which upper and lower bounds for the autolytic T-cells are achieved (16). In the nonlinear controller design, the stability condition (30) involves a free parameter for which high values of the macrophage death rate and the macrophage deactivation rate help to maintain the stability of the cell populations. Hence, these parameters represent a suppression condition in which macrophages interact with the autolytic T-cells, giving a leading path for research to deepen our understanding towards establishing an immunotherapy treatment. Therefore, the existence of a strategy that decreases the values of the parameters g, l, q, and s, while increasing c and k, could give positive results towards the control of diabetes in early diagnostics. Funding: This research received funding under the project titled Diseño de controladores y observadores no lineales en modelos relacionados a diabetes Mellitus Insulinodependiente (Design of nonlinear controllers and observers for models related to insulin-dependent diabetes mellitus), from Tecnológico Nacional de México/Instituto Tecnológico de Tijuana. Conflicts of Interest: The authors declare no conflicts of interest.
7,481.6
2020-04-24T00:00:00.000
[ "Mathematics", "Medicine" ]
Classes of Labour in comparative perspective Classes of Labour is a monumental piece of work that will no doubt set the scene for scholarly debate on the nature of work and life in India for a long time to come. While this monograph draws on some previously published work, it contains a great deal of ethnography and analysis that will be new to the reader, even to those of us already familiar with Parry’s writings on the classes of labour in and around the Bhilai Steel Plant (BSP). The explicit framing of class around the process of structuration, the location of Bhilai within the Nehruvian development project, the implications for substantive citizenships, and the comparative angle of the conclusion are but a few of the newly expanded themes in the book. Moreover, while certainly lengthy — close to 700 pages — Classes of Labour is an extremely accessible and jargon-free monograph that does not necessarily need to be read cover to cover. Each chapter stands on its own and excellent end-of-chapter summaries highlight the specific contribution each chapter makes to the overall argument of the book. I second all the appreciations of the book that have been expressed to date, and will not reiterate them here, apart from stating that both the depth and breadth of the material covered are truly exceptional, and the quality of the ethnography and analysis unparalleled. Here, I would like to make one main and two smaller reflections on the book based on my reading of it in light of my own empirical research on labouring classes in Tamil Nadu. My main aim is to identify ways in which future scholarship can build on Classes of Labour and explore further avenues for the study of labour and class across India. My first reflection is that Classes of Labour is the first study in over 20 years that seeks to present a novel guide map to ‘read’ the landscape of labour in India. Since the earlier works of Breman, Holmström and Harriss of the 1970s and 1980s, all carefully reviewed in chapter 2, no scholars have attempted to map the Indian economy and society in as comprehensive a way as Parry does in this monograph. We are presented with a new map to help us make sense not only of how people in India are slotted into labour markets, but also of how their specific insertion is shaping their social lives more broadly — from their experiences of childhood, marital lives, and places of residence, to their outlooks on the future, the nature of their citizenship, 3 and even their propensity to suicide. Organised around the processes of class structuration and inspired by both Weber and Giddens, this book provides us with a novel analytical framework to reconceptualise the Indian world of work. The landscape of labour that emerges in the book clearly replaces the earlier image of a mountain slope, marked by gradations of labouring classes, with that of a more closed-off citadel of naukri (held by the government employees of the steel plant) against which all other kam (work in private firms and the informal economy) looks like a rather bare plain. Or, in Parry's own words: 'there is still much to be said for the citadel, and that a class analysis of the landscape of labour in Bhilai is more revealing … than an analysis in terms of a multiplicity of strata ' (2020: 56). The evidence for this is convincing. In the case of Bhilai at least, there exists a clear labour elite with benefits and rewards that the majority outside of the permanent BSP labour force can only dream of. 
Comparisons with other sites, undertaken in the last chapter of the book, reveal a remarkably similar picture across steel manufacturing towns in terms of a clear class boundary between a permanent labour force and all the rest, even though that boundary might be drawn in slightly different places in different sites. Parry's analysis does raise an important question of comparison, one which Breman too picks up on in his review of Classes of Labour (2021). Breman does not consider Parry's class analysis applicable to, for example, Ahmedabad, where he argues 'labour is not split up in a two-class dichotomy but spreads out over a wider range of employment-cum-livelihood modalities ' (2021: 143). Here, graded inequality, shaped by the class-caste nexus, prevails, marked by different classes of labour that each have their own boundaries, life and work experiences, and limits to mobility. Here, the main naukri-kam fault line seems less salient. So, what explains their differences? In my view, both authors provide ample and convincing evidence of the specific contexts they describe, but they rather under-emphasise the very specificity -and to some extent even exceptionality -of those contexts. What does the landscape of industrial labour -and indeed this particular class divide -look like in India's many cities, towns and villages where no BSP, RSP or DSP can be found at all. In the vast majority of working sites across India, the economy is not only largely informal but also highly dispersed, fragmented and made up of numerous small-scale ventures that offer various degrees of pay, job security and opportunity for collective organising. How then can we apply the above class analysis to places where the landscape of labour is not organised around a domineering (state) enterprise that structures the surrounding economic environment, and where class divisions are not created by interventions from the Nehruvian state in the shape of a major state-owned and run enterprise? Furthermore, what form does the process of class structuration take in the many places of work where a labour elite of the sort documented in Bhilai is largely absent? What are the 'salient fault lines,' to stay with Parry's language, that set different groups of workers apart from one another in such places? Do smaller industrial centres, towns and villages also have a singular fault line, such as the one between naukri and kam, or do we end up with smaller and more graded distinctions between social classes of labour? And, what role do caste, ethnicity and gender play in the shaping of those fault lines? This is not to reject either Parry or Breman's maps -which I think are absolutely apt for the contexts studied -but to set an agenda for future research on classes of labour and processes of class structuration in places that less resemble the large industrial centres mentioned above. How are classes formed and how do they transform into social classes in such sites? Parry himself acknowledges that the processes identified in Bhilai may not easily transfer onto the maps of labour elsewhere and that class structuration is likely to unfold differently according to context. Indeed, class structuration, he writes in his reply to Breman, 'is a continuous process that is never complete and class boundaries are never finally crystallized. When they approach that state, however, the labour elite emerges as a distinct class cut off from other segments of the manual workforce. 
When to the contrary structuration is weak, the barriers are low and the fault line between them may not appear that much more significant than other breaks on the labour hierarchy, which will look more ladder-like than dichotomous ' (2021: 156). Multiple breaks and a ladder-like hierarchy might well be the outcome in the vast majority of India's work environments. Tiruppur, India's leading garment manufacturing and export centre in Tamil Nadu, is a place that I am most familiar with. Tiruppur city and its wider region are dotted with a multitude of private enterprises -both large and small -in which very few workers have anything resembling the perks of the permanent BSP worker. Some are certainly more regularly employed and with more benefits than others, but the multiple fault lines are rarely overlapping. The best earning tailors, for example, may well be employed in the least secure jobs and lack any social benefits. Those enjoying more regular work in large private firms, by contrast, are likely to take home a much smaller pay cheque than the tailors who follow contractors from one subcontracting unit to another. How do we apply the above class analysis to such work environments? Do we only have one labour class there -an underclass resembling Parry's description of the lower rungs of the private and informal economy? Or do we resort to the picture of a slope with gradations of security, permanency, etc. Or does the most salient class divide fall in a different place, perhaps between company managers and its manual labour force? Or, do we need to rethink our class analysis here altogether? The scope for future scholarship remains wide open, to say the least. My second reflection regards Parry's analysis of informal labour in Bhilai, unpacked in great detail across chapters 8 and 9, which zoom in on the private sector and the informal economy that surround the steel plant. Here, Parry sketches a picture of gradually less secure, less well rewarded and more taxing work, without any state benefits or any form of union representation. One individual caught my attention. His name is Kedarnath, who hails from an ex-untouchable caste in rural Bihar. Having worked in various jobs across India, he landed in Bhilai in 1984 where -after ups and downs -he finally established himself as a rather successful construction contractor. By 2003, he employed about 50-60 workers on 6 different sites, and later even obtained large construction contracts in his own right. He owned a pick-up truck, cement mixers, invested in land, and bought a house in a middle-class residential neighbourhood (2020: 366). We don't learn very much more about his personal life, but I wonder whether today -by Parry's own definition -Kedarnath could be called middle-class and I also wonder in what ways he would differ from members of the permanent BSP workforce in terms of social class position? Of course, one can see that his contracts and earnings are probably far less secure than those of a regular BSP worker, but his income may well exceed the latter's monthly take-home pay by a phenomenal amount. Kedarnath's life trajectory also reveals a remarkable example of upward mobility, even though it is of course highly gendered, and could no doubt more easily be reversed. His story nonetheless raises the question of what routes other than that of naukri exist towards the world of the middle-class? 
What alternative pathways -outside of regular BSP employment -may propel one into the social middle class, with its recognisable life experiences and consumption patterns? Could enterprise and self-employment at the fringes of the steel plant economy possibly be one of them? My final reflection relates to something that was quite new to me in Classes of Labour. It regards the political implications of the above class analysis, particularly in terms of citizenship. Class, Parry argues, 'undermines the equal claims of citizens' (2020: 59). While the state is supposed to be the guarantor of the rights of all its citizens, what we see in Bhilai is that through its policies and legislation, the state has paradoxically created a class division that provides substantive citizenship to only a small and shrinking minority. For the many, meaningful citizenship (beyond the formal right to vote) remains a largely unachievable goal. Parry's point, that this divide between citizen and denizen is of the state's own making, and that this is what contributes to the undoing of Nehru's vision for democracy in India, cannot be overstated. In his own words, it is not caste but class that 'robs Indian democracy of its reconstructive energy ' (2020: 34). Whereas in Western countries miners and steelworkers typically acted as the militant vanguard of the working classes, pulling up the rights of less privileged sections of the labour force below them, in Bhilai the labour elite pursued its own interests leaving those below it to fend for themselves and their rights. This insight can usefully be applied beyond Bhilai as few workers across India avail of substantive citizenship rights. Parry asks at the end of the first chapter -what is wrong with Nehru's vision and what possibilities does it still have? The answer to this question appears to remain open. Did the vision -the idea of expanding democracy and citizenship through flagship projects of inclusion -fade because of its impossible implementation? As Parry shows, one weakness of the vision is that it could never be extended to more than just a few million working class people across India during its heydays, and that even the size of this small group of beneficiaries started shrinking quite drastically from the 1970s. Or, was the vision itself misguided, in that from the outset it tied rights, entitlements and citizenship to state employment rather than seeking to guarantee such rights for all irrespective of employment status? Perhaps the recent Right to Food, Right to Work, and Unorganised Workers' Social Security Acts are trying to steer neoliberal India into a direction of guaranteeing at least some basic citizenship rights for all. The verdict on their success is obviously still out, but further comparative work on the citizenship implications of class formation could be highly insightful. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long Classes of Labour in comparative perspective as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. 
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
3,274.6
2023-05-04T00:00:00.000
[ "Economics" ]
Recessive Mutations in TRMT10C Cause Defects in Mitochondrial RNA Processing and Multiple Respiratory Chain Deficiencies Mitochondrial disorders are clinically and genetically diverse, with mutations in mitochondrial or nuclear genes able to cause defects in mitochondrial gene expression. Recently, mutations in several genes encoding factors involved in mt-tRNA processing have been identified to cause mitochondrial disease. Using whole-exome sequencing, we identified mutations in TRMT10C (encoding the mitochondrial RNase P protein 1 [MRPP1]) in two unrelated individuals who presented at birth with lactic acidosis, hypotonia, feeding difficulties, and deafness. Both individuals died at 5 months after respiratory failure. MRPP1, along with MRPP2 and MRPP3, form the mitochondrial ribonuclease P (mt-RNase P) complex that cleaves the 5′ ends of mt-tRNAs from polycistronic precursor transcripts. Additionally, a stable complex of MRPP1 and MRPP2 has m1R9 methyltransferase activity, which methylates mt-tRNAs at position 9 and is vital for folding mt-tRNAs into their correct tertiary structures. Analyses of fibroblasts from affected individuals harboring TRMT10C missense variants revealed decreased protein levels of MRPP1 and an increase in mt-RNA precursors indicative of impaired mt-RNA processing and defective mitochondrial protein synthesis. The pathogenicity of the detected variants—compound heterozygous c.542G>T (p.Arg181Leu) and c.814A>G (p.Thr272Ala) changes in subject 1 and a homozygous c.542G>T (p.Arg181Leu) variant in subject 2—was validated by the functional rescue of mt-RNA processing and mitochondrial protein synthesis defects after lentiviral transduction of wild-type TRMT10C. Our study suggests that these variants affect MRPP1 protein stability and mt-tRNA processing without affecting m1R9 methyltransferase activity, identifying mutations in TRMT10C as a cause of mitochondrial disease and highlighting the importance of RNA processing for correct mitochondrial function. Mitochondrial respiratory chain deficiencies lead to insufficient ATP production from oxidative phosphorylation (OXPHOS), resulting in a wide range of clinical presentations broadly recognized as ''mitochondrial disorders.'' Mitochondrial diseases are genetically diverse, owing to the necessary expression, co-ordination, and activity of factors encoded by both the mitochondrial and nuclear genomes for proper mitochondrial function. The 16.6 kb human mitochondrial DNA (mtDNA) encodes only 22 tRNAs, 2 rRNAs, and 13 polypeptides that are essential components of four of the five OXPHOS complexes. 1 The remaining subunits of the respiratory complexes and all of the factors involved in mtDNA expression and maintenance are encoded by the nuclear genome, synthesized in the cytosol, and imported into mitochondria. Thus, there are a large number of potential genetic causes of mitochondrial disease, which has often complicated attempts to identify the correct genetic diagnosis. The advent of next generation sequencing has greatly expanded the list of known gene mutations associated with mitochondrial disease, 2 including several genes involved in mitochondrial (mt)-tRNA processing and maturation. 3,4 In mammalian mitochondria, all mt-tRNAs required for mitochondrial protein synthesis are encoded by the mitochondrial genome. Transcription of mtDNA produces long polycistronic transcripts that require further processing. 
Most mitochondrial open reading frames are separated by at least one mt-tRNA gene, with the structure of mt-tRNAs acting as ''punctuation'' marks in the transcript 5 prior to mt-tRNAs being excised at the 5 0 end by the RNase P complex and at the 3 0 end by the RNase Z enzyme. The mitochondrial RNase P in animals is composed of three proteins, MRPP1, MRPP2, and MRPP3 6 (encoded by TRMT10C [MIM: 615423], HSD17B10 [MIM: 300256], and KIAA0391 [MIM: 609947], respectively), whereas RNase Z is encoded by a single gene, ELAC2 (MIM: 605367). [7][8][9][10] In addition to cleavage from the polycistronic transcripts, mt-tRNAs undergo many further modifications, with at least 30 different modified residues reported. 3,11 One crucial modification is m 1 R9 methylation, which is probably important for the correct folding of most mt-tRNAs. In the case of mt-tRNA Lys , the unmodified in vitro transcript folds into an extended bulged hairpin, 12 but with the sole modification of N1 methylation of adenosine 9 (m 1 A9), the tRNA adopts the classic cloverleaf structure. 13 It has been demonstrated that MRPP1 and MRPP2 can form a stable sub-complex that is active as a methyltransferase and is uniquely able to methylate both adenosine and guanine nucleotides at position 9. 7,14 19 of the 22 mt-tRNAs contain either A or G at position 9 and it is likely that all of these are subject to m 1 R9 methylation. 7,11,14 We studied two children with suspected mitochondrial disease from unrelated families. Subject 1 (male) was the second child of healthy, non-consanguineous, white British parents, with a healthy older sister. He was born at term by a normal vaginal delivery after a normal pregnancy with a birth weight of 3.7 kg. He did not require resuscitation but was noted to be hypotonic and weak soon after birth. He fed poorly and gained weight slowly, partly due to gastro-esophageal reflux. Neonatal screening revealed significant hearing impairment subsequently confirmed to be sensorineural deafness. He was also found to have a raised plasma alanine transaminase of 439 U/L (normal range 4-45 U/L). Blood lactate levels ranged from 5 to 10 mmol/L (normal range 0.7-2.1 mmol/L) and his CSF lactate level was also elevated at 4.8 mmol/L. Ophthalmological examination, echocardiography, and an MRI were all normal, though the latter was of poor quality. There was a clinical suspicion of craniosynostosis and a lateral skull X-ray appeared to show fused sutures but no further investigations were undertaken. Ultrasound of the kidneys was normal. Blood spot acylcarnitine analysis and plasma biotinidase were normal, plasma amino acid analysis was normal with the exception of a raised alanine concentration, and urine organic acid analysis was normal apart from a raised lactate concentration. He deteriorated rapidly, requiring tube feeding, and at the age of 4 months suffered rhinovirus bronchiolitis requiring ventilatory support with CPAP. It proved impossible to wean him off ventilatory support and he died at 5.5 months of age after withdrawal of this support. Subject 2 (female) was the second child of unrelated parents of Kurdish origin with a healthy older brother. She was born at term by caesarean delivery, after a normal pregnancy, weighing 3.05 kg. Hypotonia, poor sucking, and feeding difficulties were evident from early in the neonatal period and hyperlactatemia (7.4 mmol/L; normal range 0.90-1.70 mmol/L) with a high lactate to pyruvate ratio (170; normal range 30-50) was recorded at 1 month. 
She gained weight poorly and nasogastric feeding was commenced at 3 months. At 3.5 months, echocardiography demonstrated left ventricular hypertrophy and lumbar puncture revealed elevated CSF lactate (3.1 mmol/L; control < 2.2 mmol/L). She also had significantly impaired liver function (AST: 84 UI/L, normal range 15-60; ALT: 52 UI/L, normal range 7-40; γGT: 262 UI/L, normal range 6-25 UI/L). Brain MRI was undertaken at 2 months of age and was of poor quality but showed findings suggestive of bifrontal polymicrogyria. Acoustic oto-emissions were abnormal at 4 months, suggesting deafness; unfortunately, auditory evoked potentials were not performed. She died at 5 months of age from respiratory distress. Informed consent for diagnostic and research studies was obtained for both subjects in accordance with the Declaration of Helsinki protocols and approved by local Institutional Review Boards in Newcastle upon Tyne, UK, and Paris, France. Biochemical analysis of skeletal muscle samples identified clear mitochondrial enzyme defects involving both complex I and IV in both subjects, whereas complex III activity was normal in subject 1 but decreased in subject 2; both cases showed sparing of complex II activity (Table 1). Respiratory chain complex (RCC) activities in Table 1 are expressed as nmol/min/mg protein. Histopathological analysis of muscle from subject 1 revealed evidence of subsarcolemmal mitochondrial accumulation (ragged red fibers) and a mosaic pattern of cytochrome c oxidase (COX) deficiency (Figures 1A-1F). Analysis of muscle DNA from both subjects excluded mtDNA abnormalities (mtDNA rearrangements and point mutations) and mtDNA copy number was shown to be normal in each case (data not shown). Whole-exome sequencing via previously described methodologies and bioinformatic filtering pipelines 2,15 identified biallelic variants in TRMT10C (MIM: 615423; GenBank: NM_017819.3; also known as MRPP1 and RG9MTD1). Compound heterozygous c.542G>T (p.Arg181Leu) (ClinVar: SCV000264779.0) and c.814A>G (p.Thr272Ala) (ClinVar: SCV000264780.0) variants were identified in subject 1, whereas subject 2 was homozygous for the c.542G>T (p.Arg181Leu) variant, also identified in subject 1. Sanger sequencing was undertaken to validate the variants and confirm that these segregated with disease in each family (Figure 1G). Both identified TRMT10C variants are predicted to result in amino acid substitutions affecting evolutionarily conserved residues (Figure 1H) and are rare: the c.542G>T (p.Arg181Leu) variant is present on the ExAC database (10/120,324 alleles) and ESP6500 (1/11,824 alleles), whereas the c.814A>G (p.Thr272Ala) TRMT10C variant is absent from ExAC, ESP6500, and COSMIC. In silico predictions via SIFT, PolyPhen-2, and aGVGD suggest that the biophysical impact of the p.Arg181Leu substitution is relatively benign, but the proximity of the Arg181 residue to the TRM10-type domain (predicted to begin at Met191) could hint at a crucial structural role that only an arginine residue can perform. In silico modeling of the TRMT10C variants via RaptorX and Phyre2 produced disparate predictions of MRPP1 protein structure and thus could not be used to indicate any potential misfolding as a consequence of the variants. To investigate the functional effects of the identified TRMT10C variants, Western blot and mitochondrial protein synthesis assays were performed in fibroblast cell lines derived from both affected individuals and age-matched control subjects.
These data showed that the steady-state levels of MRPP1 were markedly decreased in the subject cell lines, suggesting that the variants affect the stability of the protein (Figure 2A). On the other hand, levels of MRPP2 and MRPP3, the other two subunits of RNase P, were unchanged in fibroblasts from affected individuals (Figure 2A). The loss of MRPP1 protein correlates with decreased steady-state levels of subunits of complex I (NDUFB8) and complex IV (COXI) (Figure 2B), in agreement with the multiple respiratory chain defects observed in muscle. We used blue native PAGE to determine the effects on the assembly and stability of the respiratory chain complexes 16 and show a marked decrease of fully assembled complex I and complex IV, with a slight decrease in complex III levels (Figure 2C). The low steady-state levels of mtDNA-encoded proteins were due to impaired mitochondrial protein synthesis in subject fibroblasts, as demonstrated by reduced incorporation of 35S-labeled methionine and cysteine (Figure 2D). (From the legend to Figure 2: (A-C) Samples (25 μg protein) were fractionated through 4%-16% native gels (Thermo Fisher Scientific), transferred onto PVDF membranes, and subjected to western immunoblotting; subunits from individual OXPHOS complexes were detected using specific antibodies: complex I (NDUFA13), complex II (SDHA), complex III (UQCRC2), complex IV (COXI), and complex V (ATP5B). (C) Blue-native PAGE analysis of OXPHOS complex assembly using mitochondrial extracts in 1% DDM from control and subject fibroblasts (as described previously 16) demonstrated decreased assembly of OXPHOS complexes I and IV and, to a lesser extent, complex III in subject fibroblasts. (D) In vitro labeling of de novo synthesized mitochondrial translation products with EasyTag EXPRESS 35S Protein Labeling Mix (Perkin Elmer) followed by fractionation through a 17% SDS-polyacrylamide gel and autoradiography (as described previously 32); Coomassie brilliant blue staining of the gels was used to demonstrate equal loading.) Because MRPP1 is known to be an essential subunit of the mitochondrial RNase P, 6 which is responsible for 5′ cleavage of mt-tRNAs from the polycistronic mitochondrial transcripts, we investigated whether fibroblasts from affected individuals showed evidence of impaired mitochondrial RNA processing. Northern blot analyses showed an increase in the RNA precursor RNA19 when detected with either an MT-ND1 or MT-RNR2 probe (Figure 3A). However, the steady-state levels of the mature mRNAs were not significantly affected. No increase in precursors of MT-CO2 or MT-CO3 was observed, although the steady-state levels of mature MT-CO3 appeared to be slightly decreased in subject 2 (Figure 3A). Processing of mt-tRNAs at the 3′ end is carried out by ELAC2. 7,8,10 Because both subjects had functional copies of ELAC2, it would be expected that the mt-tRNAs would be processed at the 3′ end, but not at the 5′ end, resulting in mt-mRNAs with an uncleaved mt-tRNA at the 3′ end. The resolution of the Northern blots for mt-mRNAs was not sufficient to distinguish between mature mRNAs and these pre-processed transcripts; thus, high-resolution Northern blot experiments were performed to assess the levels of mature mt-tRNAs (Figure 3B). Surprisingly, the steady-state levels of mt-tRNAs were not significantly altered in the affected individuals relative to controls, suggesting that the severe mitochondrial translation defect was not due to an absence of cleaved mt-tRNAs. However, mt-tRNA Phe and mt-tRNA Leu(UUR) appeared to have slightly lower steady-state levels in subject fibroblasts relative to controls. To further investigate precursor processing, we carried out RNA-seq analysis of mitochondrial RNA isolated from control and affected individuals. Differential analyses of mt-mRNA and mt-tRNA gene expression in TruSeq library datasets and small RNA library datasets, respectively, revealed no significant differences in mitochondrially encoded mt-mRNA, mt-rRNA, and mt-tRNA levels between the samples (not shown). However, when we investigated the changes in the abundance of reads across the entire mitochondrial transcriptome, we found an increase in the regions that span gene boundaries, where RNA processing is required to release individual mitochondrial RNAs from the precursor transcripts (Figure S1). Together, these data confirm an impairment of mt-tRNA processing efficiency without severe effects on mature mt-mRNA or mt-tRNA steady-state levels. It is possible that the cleavage of mt-tRNAs by mt-RNase P is less efficient in cells harboring TRMT10C/MRPP1 variants, but that the mt-tRNAs that are cleaved are very stable, thus keeping steady-state mt-tRNA levels at approximately wild-type levels. All tRNAs undergo post-transcriptional modification at numerous sites to promote their correct function. 11,17 The mt-tRNAs are not exceptions, and cleavage from the polycistronic mt-RNA transcripts is just one step in their maturation. In addition to their role in RNase P activity, MRPP1 and MRPP2 act as an m1R9 methyltransferase. 14 Methylation of either G or A at position 9 is vital for the correct structure and function of mt-tRNAs. 12,13,18 Thus, we sought to investigate the impact of the TRMT10C variants on m1R9 methyltransferase activity in subject fibroblasts. To this aim, we utilized two experimental approaches: (1) primer extension analysis of individual mt-tRNAs, during which the reverse transcriptase-mediated extension of a radiolabelled primer is inhibited by the presence of the m1R9 modification, 19 and (2) RNA-seq analysis, because m1 methylation at position 9 has been shown to increase the sequencing error rate at this position. 7 Primer extension analysis of mt-tRNA Leu(UUR) revealed no difference between control subjects and affected individuals (Figure 3C). Similarly, there was no change in the sequencing error rates between subject and control samples (Figure 3D). These data indicate that the m1R9 methyltransferase activity is not affected by the p.Arg181Leu and p.Thr272Ala MRPP1 variants. (From the legend to Figure 3: (B) High-resolution Northern blot analyses of mt-tRNAs using radiolabelled oligonucleotide probes against mt-tRNAs, with 5S rRNA as a loading control (performed as described previously 33). (C) Primer extension analysis of m1G9 in tRNA Leu(UUR) using a radiolabeled primer (5′-TTATGCGATTACCGGGCTCTGC-3′) annealing 1 base downstream of the modified residue. Primer and 3 μg RNA were denatured at 95°C for 5 min and cooled on ice. Primer extensions were carried out using AMV reverse transcriptase (Thermo Fisher Scientific) at 45°C for 1 hr and stopped by heating at 85°C for 15 min. After ethanol precipitation, the samples were analyzed by fractionation through a 12% polyacrylamide-urea gel and autoradiography.
Extension of the primer is partially inhibited by the presence of methylated G9 in mt-tRNA Leu(UUR) leading to the accumulation of a single-base extension product (labeled m 1 G) that is detectable in both control and in case subject RNA at similar levels. (D) Sequencing error rates at position 9 in mt-tRNA Phe , mt-tRNA Val , and mt-tRNA Glu determined by RNA-seq analysis of mitochondrial RNAs extracted from control and subjects' fibroblasts. The relative abundance of individual nucleotides and indels generated by the presence of m 1 R9 was analyzed as described previously. 7 We have demonstrated that tRNA 5 0 processing is affected in fibroblasts from affected individuals with mutant MRPP1, which is consistent with current knowledge of the function of MRPP1 as a component of mt-RNase P. However, to definitively prove that the mitochondrial OXPHOS defect is a consequence of the TRMT10C variants, lentiviral rescue experiments were performed to complement the respiratory phenotype expressed in cultured cells. Fibroblast cell lines from affected individuals were transfected with a lentiviral vector carrying a copy of the wild-type TRMT10C gene encoding MRPP1. The complemented cell lines displayed increased expression of MRPP1 protein level ( Figure 4A), leading to a restoration of mitochondrial translation (Figures 4A and 4B) and normal levels of fully assembled respiratory chain complexes ( Figure 4C). Furthermore, the level of mt-RNA precursors, elevated in subject fibroblasts, normalized after lentiviral transduction with wild-type TRMT10C ( Figure 4D). These data verify the pathogenicity of the c.542G>T (p.Arg181Leu) and c.814A>G (p.Thr272Ala) TRMT10C variants, establishing these variants as causative of mitochondrial disease associated with multiple respiratory chain abnormalities. Prenatal diagnosis has subsequently been offered to both families in subsequent pregnancies after the identification and validation of pathogenic TRMT10C variants. In the first family harboring compound heterozygous changes, both TRMT10C variants were identified in a later pregnancy after chorionic villus biopsy, leading to termination. In family 2 (homozygous variant), the fetus was heterozygous for the c.542G>T (p.Arg181Leu) TRMT10C variant and mitochondrial respiratory chain activities were normal in the chorionic villus biopsy sample (data not shown), supportive of an unaffected clinical status. Of interest, we noted that both affected individuals showed decreased MRPP1 steady-state protein levels in fibroblasts, although the levels in subject 1 (compound heterozygote variants) had lower levels than subject 2 (homozygous variant), suggesting that the p.Thr272Ala mutant MRPP1 protein is less stable than the p.Arg181Leu mutant. However, despite having higher residual levels of MRPP1, cells from subject 2 exhibited a more severe impairment of mitochondrial protein synthesis resulting in lower steady-state levels of respiratory chain complexes I and IV, implying the p.Arg181Leu mutant protein is more stable but less active than the p.Thr272Ala mutant protein. The impairment of mt-RNA processing observed in fibroblasts from the affected individuals was not as severe as we anticipated, with steady-state levels of mature mt-mRNAs and mt-tRNAs largely unaffected. 
However, these data fit well with reports of mutations in other proteins involved with RNA processing, given that mutations in ELAC2 20 and HSD17B10 21 have both been shown to lead to an accumulation of mt-RNA precursors without effects on the levels of mature mt-mRNA and mt-tRNAs. Furthermore, it has been shown that loss of MRPP2 levels leads to a reduction in steady-state levels of MRPP1. 21,22 Given that the increase in RNA precursors (RNA19) we report was similar to those seen in subjects with deleterious HSD17B10 (MRPP2) variants (MIM: 300438), 21 it is possible that decreased MRPP1 protein levels are particularly important for RNA processing because MRPP2 levels . Lentiviral Expression of Wild-Type TRMT10C Restores RNA Processing, Expression of mtDNA-Encoded Proteins, and OXPHOS Assembly in Subjects' Fibroblasts (A) Western blot analysis of fibroblast extracts transduced with lentiviral particles expressing wild-type TRMT10C. MRPP1, COXI, and SDHA were detected with specific antisera. SDHA was used as a loading control. (B) In vitro mitochondrial translation in subjects' fibroblasts transduced with control lentiviral particles or particles expressing wild-type TRMT10C. Analyses were performed as in Figure 2C. (C) Blue-native PAGE analysis of OXPHOS assembly in rescued subjects' fibroblasts performed as in Figure 2B, showing restoration of complex assembly. (D) Northern blot analysis of RNA processing in complemented fibroblasts analyzed as in Figure 3A. For MT-RNR2 (16S mt-rRNA), short (S) exposure was used with a long (L) exposure shown to highlight the weaker band corresponding to RNA19. Representative images from three independent lentiviral rescue experiments are shown for each panel of this figure. were not shown to be diminished in either of the cases documented here ( Figure 2B). Conversely, we did not find any evidence of altered m 1 R9 methyltransferase activity in fibroblasts from either affected individual, implying that the observed defect in mitochondrial protein synthesis is due to a decrease in the efficiency of mt-RNA processing rather than any effects on mt-tRNA modification. The m 1 R9 methyltransferase activity is carried out by a stable protein complex of MRPP1 and MRPP2, 14 whereas the RNA processing requires MRPP3, 6 which contains the active site of the nuclease activity of RNase P. 23,24 Therefore, one attractive hypothesis is to suggest that the p.Arg181Leu and p.Thr272Ala TRM10C variants disrupt the interaction between MRPP1 and MRPP3 without affecting the complex with MRPP2, although this requires future investigation. Knowledge of defects in mt-tRNA processing or modification leading to disease has expanded in recent years including the aforementioned variants in HSD17B10 (MRPP2) 21 Mutations in TRMT10C (MRPP1) can now be added to this growing list as we show that the introduction of wild-type MRPP1 into fibroblasts from affected individuals is sufficient to rescue their mitochondrial defects, confirming these TRMT10C variants as pathogenic in mitochondrial disease associated with impaired mitochondrial translation. Accession Numbers The data reported in this paper have been deposited in GEO at GSE79120 and in ClinVar at SCV000264779.0 and SCV000264780.0 for c.542G>T and c.814A>G, respectively. Supplemental Data Supplemental Data include one figure and can be found with this article online at http://dx.doi.org/10.1016/j.ajhg.2016.03.010.
4,957.4
2016-04-28T00:00:00.000
[ "Biology", "Medicine" ]
Smart disaster prediction application using flood risk analytics towards sustainable climate action Disaster prediction devices for early warning system are used by many countries for disaster awareness. This study developed smart disaster prediction application using microcontrollers and sensors to analyze the river water level for flood using flood risk analytics. Specifically, it monitors the river water level, water pressure and rainfall using microcontroller, applying statistical modeling algorithms for river flood prediction, and monitor flood in a web-based system with SMS notification and alarm to the community as an early warning. The researchers used the system development method to measure the prototype feasibility study. The researchers applied the statistical modeling algorithm as the data can be observed from time to time or on a daily basis for the predictive analytics. Based on the 7-days observation result, rainfall resulted in precipitation average of 10.96 mm, water pressure with an average of 40.92 pound per square inch (psi) and water level averaged 138.78 cm. The tropical depression during the 7 days’ observation reflected the average data result from the sensors as the target of the study. The result of the prototype device used the City Disaster Risk and Reduction management office (CDRRMO) as history logs for a flood risk and it was proven accurate which makes a good use for disaster prediction. Introduction Flash flood becomes one of the major problem in a natural disaster that can cause damages to property that may affect human living as well.Based on the study of Tingsanchali (2012), flood impact is one of the most significant disasters in the world.More than half of global flood damages occur in Asia.Causes of floods are due to natural factors such as heavy rainfall, high floods and high tides, etc. Problems become more critical due to more severe and frequent flooding likely caused by climate change.Flood loss prevention and mitigation includes structural flood control measures and non-structural measures such as flood forecasting and warning, flood hazard and risk management [1].Lagmay et al. (2017) stated that the project NOAH (Nationwide Operational Assessment of Hazards) in 2012 to raise the Filipinos' awareness of natural hazards.The Philippine Government launched this responsive program for disaster prediction specifically for government warning agencies to be able to provide a 6 hr.lead-time warning to vulnerable communities against impending floods and to use advanced technology to enhance current geo-hazard vulnerability maps.To disseminate such critical information, a Web-GIS (Geographic Info.System) tool is used for the flood-prone hazard areas [2]. As of nowadays, many flood early warning systems are created to promote community awareness in a flood disaster.One of the related study by Krzhizhanovskaya et al. (2011) presented a prototype of the flood early warning system (EWS) developed within the UrbanFlood FP7 project.The system monitors sensor networks installed in flood defense, detects sensor signal abnormalities, calculates dike failure probability, and simulates possible scenarios of dike breaching and flood propagation [3]. Many flood monitoring and disaster prediction devices nowadays are used by many countries for disaster awareness.Disaster prediction devices may solve the difficulty in gathering information about the flood risk using data parameters that can be used. 
This study developed smart disaster prediction application using flood risk analytics towards sustainable climate action.Specifically, it aims to monitor the river water level, water pressure and rainfall using Raspberry Pi microcontroller and sensors, design a notification management system for community flood warning, apply statistical modeling algorithms for river flood prediction and flood risk analytics, and deploy the flood monitoring in a web-based system.This will help for accurate disaster forecast based on the parameters used in thr study to predict possible flood that will affect the community. The study encompassed the monitoring of the flood prone area in Biñan Laguna.Using the Internet of things with sensor technology to monitor the flood risk, the study will focus on the following areas.First, the river water level is included since the disaster risk office uses four (4) water alert levels.Second, rain is also included wherein the study will measure the precipitation of the rainfall.Third, the water pressure as the new contribution in the study wherein it will measure the water pressure of the river water current.The monitoring of the flood will undergo predictive analytics using prediction equation for statistical modeling algorithm to observe the behavior of the river and rainfall for possible flood risk. Related literature/ studies According to Blaikie et al. (2014), the past decade had been a very significant period in relation to flood around the world, for several reasons.Some of the most extensive, damaging and costly floods have occurred in developed, wealthy countries.Flooding in less developed countries (LDCs) has appeared to be increasingly frequent and serious.Such floods have become increasingly associated with climate change: the popular and media perception has been of an increased frequency of floods and storms supposedly resulting from global warming [4].Pati et al. (2017) mentioned in their research "Flood Vulnerability of the Town of Tanay, Rizal, Philippines" that the social vulnerability of the flood-prone barangays in the town was also determined using proxy indices.The model successfully predicted the flood depths and delineated the spatial extent of flooding in the different barangays of the town.Barangay Tabing Ilog had the highest overall vulnerability index as the most vulnerable to flood and needs a comprehensive flood risk preparedness.This means that most of the people who lived near river needs a flood risk preparedness to become aware for a flash flood [5]. To solve the risk of a flash flood, another paper from Santillan et al. (2013) presented how geospatial technologies using Geographic Information System which can be used in near-real time monitoring and flood forecasting.They developed and parameterized a nearreal time flood extent monitoring numerical model for Marikina River, Philippines using River Analysis System (HEC RAS) program.They developed a forecasting system for Marikina River that provides water level forecasts for the next 48 hours.Forecasts are results of model simulation of basin hydrology as well as river and floodplain hydraulics, using recorded data of rainfall events 3 days ago to present time as primary input of the models [6]. Flash flood prediction is a good thing that every community must have especially in a riverbank.De Castro et al. 
( 2013) proposed a paper to develop a technology on Flash Flood Warning System Using SMS with advanced warning information based on prediction algorithm regarding increasing water level and water speed.These two factors were considered as triggers to the flashflood, thus become components of the regression/ statistical modeling algorithm devised by the researchers.Based on the training data captured for seven days, the regression equation was developed while the actual/real time data were input to the regression model.Prediction of the current and forthcoming risk on flood is computed by the system based on the model and is sent through SMS to registered users for early warning purposes [7]. To provide accurate analysis to the data that will be gather for a possible flood, Kitagami et al. ( 2016) established an effective application of IoT (Internet of Things) for disaster prediction.Applications for flood disaster prediction, an early warning of the flash flood caused by locally heavy rain is required as well as a flood impact analysis based on water level and rainfall monitoring in the whole drainage basin.To evaluate the method, they developed a prototype system and conducted a field trial in Quang Nam Province, Vietnam.As the result of our evaluation, the proposed method can reduce the network load for flood monitoring, and can issue the flood warning at proper timing [8]. Some flood risk analysis uses a statistical modeling algorithm that plays an important role for the sensor networks.Likewise, the study of Basha et al. (2008) presented a sensor network with a statistical modeling algorithm for river flood prediction.It is based on a regression model that performs significantly better than current hydrology research versions at 1 hour predictions For prototyping and validation purposes, we tested this model using 7 years of data from the Blue River in Oklahoma [9]. Raspberry Pi is new in the Philippines when it comes to Alert System.In relation, Kayte et al. (2017) developed a project to monitor the water level in dam using the advanced concept of IOT employing Raspberry Pi sensor is placed in the dam to serve the same purpose automatically and forward the status to raspberry pi and upload status on web.By this project each and every variation of water level is informed to control room through internet and nearby people can be informed in time thus saving lots of lives avoiding the unpleasant scenarios [10]. To synthesize all the information, most flood prediction nowadays used microcontroller and sensor technology which monitor the river flash flood.Flood risk analytics provides accurate data for online flood forecasting.Recent Studies uses wireless sensor devices such as water level, water pressure and rainfall separately.Likewise, the current study aims organize and combine all the existing technology today using embedded system.Similarly, the study of Kate et al., the proposed study will apply the same technology.Live monitoring of the water pressure, water level and rainfall sensors as combined from the previous study of Basha et al. and De Castro et al.This study differs from the previous studies since web-based application provides data observation represented as chart with flood analytics for early prediction.For the data analysis, a statistical modeling algorithm for river flood prediction was used with the same algorithm based on the regression model. 
Methodology The researchers used the system development design for the study.System plan using system architecture was applied.Unit testing was conducted to validate the data coming from the system project as controlled by the microcontroller with sensors.After the data collection, these were analyzed by the system project using regression analysis under statistical modelling.All the data are interpreted by a web-based system using flood risk analytics and the result will be generated using the predictive analytics to calculate the flood history. Under system development methodology, the researcher applied the waterfall of the system development life cycle.In requirements analysis, data requirements table, data gathering procedures, and hardware specification are used in the project.In system design, the use-case diagram, sequence diagram, system flow, and data flow diagram were used in the study.In implementation phase, functional testing was used for sensor modules.In integration and testing, all the logs from the sensor devices (rain gauge, water flow meter, water level) will be used as history logs with the help of statistical modelling algorithm for flood prediction.In deployment phase, the microcontroller and sensors were deployed in the river bank while the flood monitoring website are handled by the Disaster Management team.Figure 2 represents the system flow of the Smart Disaster Prediction Application.Using the Raspberry Pi as the microcontroller, it can receive data from sensors and transfer sensor logs to the web and display history logs and live reports.The website will be able to display flood risk analytics report using flood prediction based on history logs.If the monitored data reaches the critical level, controlled SMS (short messaging service) notification and alarm will be triggered as early warning.The researcher used the regression model using statistical modeling algorithm for the study.Likewise, the statistical modeling of Basha et al. (2008) for flood prediction as presented in Figure 3, regression analysis uses a set of statistical processes for estimating the relationships among variables.Regression models involve the following parameters and variables: The unknown parameters, denoted as ß, which may represent a scalar or a vector, the independent variables, X, and the dependent variable, Y.A regression model relates Y to a function of X and ß. (Y = ƒ (X, ß)) as sample result presented in figure 3.0. To obtain the analytical model of the flow discharge, the regression model must be fit with an application.The obtained river flow regression models used in the real-life validation of the river flood prediction to provide results from the following parameters such as precipitation in millimeters for the rain, pounds PSI (per square inch) for the river water pressure and length in centimeters for the water level.The proposed methods are applicable for the solution of tasks.The prediction model used by De Castro et al. was based on Multiple Regression Analysis is an example which was used in this study.In their study, the water level, and velocity was recorded during the 7-days observation based on the result above [11]. 
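The prediction model carried over from De Castro et al. is an ordinary least-squares regression, so its coefficients can be estimated directly from the logged sensor readings. The short sketch below fits the two-variable form Y = a + b1*x1 + b2*x2 with NumPy and reports the deviations between predicted and observed values; the seven-row data arrays are made-up placeholders standing in for the actual 7-day training logs, not values measured in this study.

```python
# Sketch of the multiple-regression prediction model Y = a + b1*x1 + b2*x2,
# fitted by ordinary least squares on 7 days of training data.
# All numeric values below are placeholders, not the paper's sensor logs.
import numpy as np

x1 = np.array([32.0, 35.5, 77.2, 60.1, 41.0, 30.2, 28.5])      # e.g. water pressure (psi), placeholder
x2 = np.array([ 8.3, 19.2, 15.6, 14.1,  9.8,  5.4,  4.3])      # e.g. rainfall (mm), placeholder
y  = np.array([95.0, 180.0, 248.8, 210.0, 120.0, 70.0, 47.0])  # observed water level (cm), placeholder

# Design matrix with an intercept column; lstsq returns the coefficients [a, b1, b2].
X = np.column_stack([np.ones_like(x1), x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a, b1, b2 = coef

y_pred = X @ coef
deviation = y - y_pred
print(f"model: Y = {a:.2f} + {b1:.2f}*x1 + {b2:.2f}*x2")
print("average absolute deviation:", round(float(np.mean(np.abs(deviation))), 2))

# The same fitted coefficients can then be evaluated on a new reading to produce the
# forecast that drives the early-warning notification.
new_x1, new_x2 = 45.0, 12.0
print("forecast water level (cm):", round(a + b1 * new_x1 + b2 * new_x2, 1))
```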
The prediction model uses the training data from the 7 trials as input to the regression equation Y = aX + b (1). Equation (1) is the basic regression equation for the water-level risk, where Y is the predicted value, a is the slope coefficient, and b is the intercept. In the multiple-regression form of the model, the factors b1 and b2 are computed from the input values, with x1 the water velocity and x2 the water level, so that the model forecasts the probability of a rise in water level. The model was computed with the training data collected over seven days. For the single-variable case, a and b are obtained from the least-squares normal equations (Eqs. 3a and 3b); for the two-variable case, b1 and b2 are obtained from the corresponding multiple-regression normal equations (Eqs. 3c and 3d). In this study, the fitted coefficients are used to forecast the water level, water pressure, and rainfall. Once the values of a, b1, and b2 are computed, these coefficients are entered into the multiple regression equation, which gives the final model for predicting the water level, water pressure, rainfall, and the relative risk. The researchers applied the statistical modeling algorithm because the data can be observed from time to time, or on a day-to-day basis, for river flood prediction. Table 2 presents the data generated by the sensor devices: for the rain gauge, rainfall is measured in millimeters; for the water-flow sensor, water pressure is measured in psi; and the water-level sensor measures in centimeters. Data from the main device are transferred wirelessly to the reports server, where the collected data are examined. Results and discussion Prototype testing was conducted to assess the accuracy of the results generated by the rain gauge (rainfall precipitation) and by the water-level and water-pressure sensors. The 7-day test was carried out from December 14 to December 20, 2017, during tropical depression Urduja. Table 3. Rainfall result - precipitation. Table 3 presents the actual rainfall result measured in millimeters (mm). Based on 16 hours of observation per day over 7 days, the average rainfall was recorded as 10.96 mm, well below the 50 mm regarded as critical rainfall in the Philippines. Based on the final prediction, the average computed deviation was 3.71 and the risk level was evaluated as "average". Over the same observation period, the average water level was 137.78 cm, well below the critical water level of 390 cm; for the final prediction the average computed deviation was 8.94 and the risk level was again evaluated as "average". For comparison, Chang et al. analyzed 132 typhoon or heavy-rainfall events in Taiwan, comprising 8640 hourly data sets; the observed water level and one- to three-step-ahead water-level predictions of the Shihmen reservoir in the testing phase were used to provide accurate and reliable water-level prediction, with an average predicted level of 244 m against an observed 238 m, including the 3-hour-step prediction [12]. Likewise, the paper from De Castro et al.
uses water level and velocity from the 7 days' trial test of their flood warning system.The same with the current study, their data results recorded per hour with computed average velocity of 10.97 m 3 /s (cubic metres per second) and water level of 17.28 CI (cubic inch).The overall average of 7 days' observation resulted to 15.04 m 3 /s velocity and 17.28 CI water level.Based on the final Prediction using Prediction model.The highest deviation calculated is 3.57 and the lowest is -3.44. Conclusions In summary, using the prototype of the Smart Disaster Prediction Application, the three important sensor devices attached to microcontroller accurately responds after calibration to measure the rainfall precipitation measured in millimeter, water pressure measured in pound PSI and water level measured in length centimeter. Rainfall resulted to 10.96 mm since on that particular day remarked as moderate rain since the storm surge did not land the Laguna area directly on that test date.The effect of precipitation caused the water level in the river to rise from its lowest 40 centimeters to 138.78 cm as the average water level.The water pressure responds on the increased of the water level with an average of 40.92 pounds' psi.54.7 psi of water flows in 12 inches per second as resulted to slow moving of the water pressure far from the critical water level of 125 psi.The three (3) results gathered from the sensor devices was observed during 7 days.The observation results reflected that day 3 and day 4 had the highest amount of average rainfall, water pressure and water level that resulted to increase of water level in the river after 2 days of tropical depression. This prototype test during the tropical depression Urduja in the Philippines had shown the accuracy of the prototype in revealing the information regarding the rainfall, water pressure and water level.The result given by the prototype device shows that it can be used by the City Disaster Risk and Reduction Management Office for disaster prediction. Figure 1 Figure1represents the prototype architecture design of the study.Sensors data logs will receive by the microcontroller and sends data wirelessly to another microcontroller to analyze data and responds if the data reaches the critical level and will be used as a sign for emergency response through alarm and SMS (short messaging service). Figure 4 Figure 4 represents the monitoring chart from the web-based project displaying the 7 days (per hour) monitoring of the rainfall precipitation wherein day 2-Thursday had the highest rainfall average 19.19 mm. Table 4 . Water pressure measured in pound PSI.Table4represents the actual result of the water pressure measured in pounds PSI.Based on the 16 hours a day observation result within 7 days, the average pressure computation resulted to 40.92 psi far from 125 psi as the critical water pressure of the river.As final prediction, the computed deviation average was 4.53 while the level of risk evaluated as "average". Figure 5 Figure 5 represents the monitoring chart from the web-based project displaying the 7 days (per hour) monitoring of the water flow pressure wherein day 4 had the highest average result of 77.19 psi. Figure 6 8 MATEC Figure 6 represents the monitoring chart from the web-based project displaying the 7 days (per hour) monitoring the water level wherein day 3 had the highest average of 248.8 cm water level. Table 5 . Water level measured in cm length. 
Table 5 presents the actual result of the water level, measured in centimeters (cm).
4,377.8
2018-01-01T00:00:00.000
[ "Engineering" ]
Micro-Structure , Ac Conductivity and Spectroscopic Studies of Cupric Sulphate Doped PVA / PVP Polymer Composites A series of polyvinyl alcohol/polyvinyl pyrrolidone polymer composite films doped with different amount of cupric sulphate (CuSO4) were prepared by means of solution casting technique. These films were characterized by X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), Ultraviolet-Visible absorbance spectroscopy (UV-Vis) and Ac conductivity measurement studies. XRD patterns of these films recorded at room temperature show the increase in amorphousity of the matrix with the increase in the concentration of CuSO4 in polymer composites. Microstructural parameters were computed using an in-house program employing XRD data. Recorded FT-IR spectra give information about the stretching and bending of the characteristic absorption bands in these films. The variation in the transmittance has been studied with the help of recorded UV-Vis spectra and hence the optical band gap present in the samples is also calculated. The measured Ac conductivity shows how the conductivity varies in these films with the presence of different amount of CuSO4 in these films. Introduction Extensive investigations on conductive polymers have been taken place in recent years in view of their important applications in electronic, electrochemical and optical devices [1] [2].Electrical conductivity can be ob-tained in insulating polymer either by modifying the electronic structure of the polymer chain through doping with metallic ions or by filling the material with electrically conducting particles [3].However, the properties of polymer composites depend upon the nature of the host polymer and different characteristics of inorganic fillers like their chemical nature, size, crystallinity, concentration and distribution in polymer matrix [4].Different additives are usually added to polymer in order to modify and improve its properties [5]- [9]. Polyvinyl alcohol is a well-known semicrystalline, water-soluble and biodegradable polymer used in practical applications because of its easy preparation, excellent chemical resistance and physical properties [10]- [12].It has a carbon chain backbone with hydroxyl groups attached to methane carbons.These OH-groups providing the bridging between adjacent chains lead to photoluminescence, mechanical strength and endure various dopings.Due to this property, the polymer allows homogeneous dispersion and good environmental stability to embedded metal particles [13] [14]. Polyvinyl pyrrolidone is a vinyl polymer possessing planar and highly polar side groups due to the peptide bond in the lactam ring.It deserves a special attention among the conjugated polymers because of the high environmental stability, easy processability and moderate thermal conductivity [15] [16]. When these two polymers are mixed, the interaction between PVA and PVP is expected to occur through interchain hydrogen bonding between the hydroxyl group of PVA and the carbonyl group of PVP [17]- [20]. In this work different amount of cupric sulphate is added to PVA/PVP polymer blend and the resulting films are characterized using various techniques like X-ray, FT-IR, and UV-Vis spectroscopy.The obtained results have been quantified in terms of microstructural parameters derived from XRD studies. 
Theory Normally, XRD patterns from polymers consist of broadened Bragg reflections. This broadening arises from several factors: (i) instrumental broadening; (ii) crystallite size, i.e., the number of unit cells scattering X-rays in phase; (iii) lattice strain, which is due to the paracrystalline matrix of these polymers; and (iv) stacking faults and others. To correct for instrumental broadening, XRD data were collected from a well-drilled iron sample and the Stokes method was employed; the entire XRD patterns of the polymers were corrected for instrumental broadening in this way. The intensity of a Bragg profile can be expanded in terms of Fourier coefficients A_n(hkl), which contain both crystallite-size and strain contributions [21] [22]. Fourier analysis of a Bragg reflection profile must always be performed [23] over the complete cycle of the fundamental form; we carry out this analysis with the available truncated range by introducing a truncation correction [24]. For a paracrystalline material with a Gaussian strain distribution, the strain Fourier coefficient A_n(hkl) [22]-[27] takes the form A_n(hkl) = exp(-2π² m² n g²), where m is the order of the reflection and g = Δd/d is the lattice strain. One also defines the mean-square strain ⟨ε_n²⟩ = g²/n; this mean-square strain depends on n (or on the column length L = n d_hkl), whereas g does not. For the column lengths an exponential distribution function is used, characterized by the width of the distribution and by p, the smallest number of unit cells in a column. The whole powder pattern of each sample was simulated by summing the individual Bragg reflections represented by the above equations, I_calc(s) = Σ_hkl ω_hkl I_hkl(s) + BG, where ω_hkl are the appropriate weight functions for the (hkl) Bragg reflections, s spans the whole recorded range (2θ ≈ 6° to 80°), and BG is an error parameter introduced to correct the background estimate [24] [28]-[31]. The whole XRD pattern is thus simulated using Equations (1) to (5). Preparation of Cupric Sulphate Doped PVA/PVP Composite Films Samples were prepared by the solution casting method. A stock solution of PVA was prepared by dissolving 5 wt% PVA in distilled water, stirring for 6 hours at room temperature with a magnetic stirrer, double filtering, and allowing the solution to settle for a day. A stock solution of 3 wt% PVP was similarly prepared in distilled water. PVA/PVP solutions of different concentrations were cast on petri dishes and allowed to dry at room temperature; after complete drying the films were peeled from the dishes. The 50/50 film was found to blend well, so this composition was used to prepare the cupric-sulphate-doped PVA/PVP films of different concentrations. Cupric sulphate was added to the 50/50 (PVA/PVP) stock solution at weight concentrations of 0.2%, 0.4%, 0.6%, 0.8% and 1.0%, stirred at room temperature with a magnetic stirrer for half an hour to obtain a more homogeneous solution, poured into a petri dish placed on a flat surface, and allowed to dry completely at room temperature. The dried composite films were peeled out, cut to suitable sizes, and used in these studies.
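Before turning to the measurements, the profile model summarised in the Theory section can be illustrated numerically. The sketch below builds a single size- and strain-broadened Bragg reflection from Fourier coefficients, taking a simple exponentially decaying size coefficient (mean column length of N unit cells) and the Gaussian-strain coefficient exp(-2π²m²ng²). The values of d, N, and g are illustrative assumptions, not parameters refined from the measured patterns, and the plain exponential decay stands in for the full column-length distribution used in the whole-powder-pattern fit.

```python
# Schematic simulation of one size/strain-broadened Bragg profile from Fourier coefficients,
# assuming A_n = A_n(size) * A_n(strain) with an exponentially decaying size coefficient
# and a Gaussian-strain coefficient exp(-2*pi^2*m^2*n*g^2). Parameter values are illustrative.
import numpy as np

d_hkl = 4.4e-10           # interplanar spacing (m), roughly the PVA/PVP peak near 2-theta ~ 20 deg
wavelength = 1.5406e-10   # Cu K-alpha wavelength (m)
N_mean = 30               # assumed mean number of unit cells in a column
g = 0.01                  # assumed lattice strain Delta d / d
m = 1                     # order of the reflection
n = np.arange(1, 200)

A_size = np.exp(-n / N_mean)                         # simple exponential size coefficient
A_strain = np.exp(-2 * np.pi**2 * m**2 * n * g**2)   # Gaussian-strain coefficient
A_n = A_size * A_strain

# Profile as a Fourier cosine series in the reciprocal variable s = 2*sin(theta)/lambda.
two_theta = np.radians(np.linspace(14, 26, 600))
s = 2 * np.sin(two_theta / 2) / wavelength
s0 = m / d_hkl                                       # peak position in reciprocal space
intensity = 1 + 2 * np.sum(
    A_n[:, None] * np.cos(2 * np.pi * n[:, None] * d_hkl * (s[None, :] - s0)), axis=0
)
intensity /= intensity.max()

peak = intensity.argmax()
half = intensity >= 0.5
print("peak at 2-theta =", round(np.degrees(two_theta[peak]), 2), "deg")
print("approximate FWHM =", round(np.degrees(two_theta[half][-1] - two_theta[half][0]), 2), "deg")
```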
X-Ray Diffraction (XRD) Recording XRD patterns of pure and CuSO 4 doped PVA/PVP polymer composite films of different concentrations were recorded using Rigaku Miniflex II Desktop X-ray Diffractometer equipped with CuK α radiation (wavelength = 1.5406Å) and a graphite monochromator.The samples were scanned in the 2θ range 6˚ -80˚ and the specifications used for the recording are 30 kV and 15 mA with the scanning speed of 5˚/min.After correcting for instrumental broadening the microstructural parameters of these polymer composites were computed by employing whole powder pattern fitting method.Analysis of X-ray data is given in theory section.Fitted XRD pattern for the samples is as shown in Figure 1. Recording of Ultraviolet and Visible (UV-Vis) Spectra The absorbance of pure and CuSO 4 doped PVA/PVP polymer composites were recorded at room temperature in the UV-visible wavelength range using Labtronics MODEL LT-2800 double beam UV-Visible spectrophotometer.The recorded UV-Visible spectra of pure and CuSO 4 doped PVA/PVP films is as shown in Figure 2. Recording of Fourier Transform Infrared (FT-IR) Spectra The Infrared transmission spectra of these polymer samples were recorded at room temperature in the wave number range of 4000 -500 cm −1 using Perkin Elmer Spectrum.The recorded FT-IR spectra of pure and CuSO 4 doped PVA/PVP polymer composite films are given in Figure 3. Ac Conductivity Measurement Ac conductance measurements for these films were made using Hioki LCR 3532 Hi-tester in the frequency range from 50 Hz to 5 MHz at room temperature. X-Ray Diffraction Studies Experimental and simulated XRD pattern of pure and CuSO 4 doped PVA/PVP polymer composite films are shown in Figure 1.From Figure 1 a broad peak at 20˚ and a less intense peak at 41˚ observed for pure PVA/ PVP film shows semicrystalline nature of the film.As the concentration of the dopant increases the peaks becomes more broad and less intense which are due to the disruption of the PVA/PVP crystalline structure by the added CuSO 4 .Using this XRD data microstructural parameters were computed from line profile analysis [32].The obtained data from the line profile analysis were further used in the refinement by whole powder pattern fitting method.These values are listed in Table 1. Figure 1 shows the goodness of the fit between the experimental and simulated XRD profile by whole powder pattern fitting method.From Table 1, it is seen that the average crystallite size varies with the concentration of dopant.Also the average lattice strain in these polymer composites are found to be vary between 0% and 1.5%, further the broadness of the peak which is generally the measure of FWHM is also given in Table 1. The crystalline shape ellipsoids are obtained by plotting experimentally obtained size values and it is given in Figure 5. Area under these ellipsoids gives the crystalline area and it changes with the concentration of cupric sulphate.Maximum crystallite area was obtained for pure sample and minimum for 0.8% cupric sulphate in PVA/PVP.This is only a graphical illustration of shape ellipsoids with percentage. 
UV-Visible Spectroscopy Analysis Figure 2 shows that the pure PVA/PVP film has high transmission and that the transmission decreases as the concentration of CuSO4 in the doped films increases. This is due to the formation of intermolecular hydrogen bonding between the dopant ions and the OH groups. The decrease in transmission for the doped PVA/PVP films reflects the variation in the optical band gap, which arises from the change in polymer structure. The optical band gap of these polymer composites was evaluated from Tauc plots of (αhν)^1/2 against hν, shown in Figure 6. The calculated band gap values, along with the conductivity, electronic specific heat, and statistical performance index, are given in Table 2. Because several physical parameters were determined experimentally, a simple multivariate analysis technique was used: each determined parameter was assigned a weightage, and the sum of the weightage times the parameter value, normalised by the total weightage, was computed. This quantity is identified as the statistical performance index. In this work, 35% weightage is given to the crystallite size, 25% each to the conductivity and the energy gap, and the remaining 15% to the electronic specific heat. From Table 2 it is found that the statistical performance index is high over an intermediate range of CuSO4 concentration in PVA/PVP. Fourier-Transform Infrared (FT-IR) Spectroscopy Analysis The FT-IR spectra recorded in the wavenumber range 4000-500 cm−1 exhibit bands characteristic of the stretching and bending vibrations of the films. The FT-IR absorption band positions and their assignments for all prepared samples are listed in Table 3. From Table 3 it is found that the bands corresponding to C-H stretching, C=O stretching, C-O stretching and C=O bending shift towards higher wavenumbers, while the O-H stretching band shifts towards lower wavenumbers. This clearly indicates that the stretching and bending vibrations are affected by the presence of the copper ion. Ac Conductivity Measurements Films of known thickness d were placed between the electrodes of known area A of the LCR meter. The measured conductance G(ω) from 50 Hz to 5 MHz was used to calculate the conductivity σ(ω) using σ(ω) = G(ω) d / A. From Figure 4 it is observed that the conductivity of the 0.2% cupric-sulphate-doped PVA/PVP sample remains essentially the same as that of the pure PVA/PVP sample. As the dopant concentration increases, the conductivity also increases because more ions are available for conduction; for the 1% cupric-sulphate-doped PVA/PVP sample the conductivity reaches 2 μS/m in the high-frequency region. The calculated values of the conductivity at 1 kHz for the various concentrations of CuSO4 in the PVA/PVP blend are also given in Table 2, which shows that the Ac conductivity increases with concentration owing to the doped free ions. In normal metals the Ac conductivity becomes complex at high frequencies; here there is also an inherent DC conductivity contribution in these composites. A plot of the conductivity and the electronic specific heat at 1 kHz as a function of concentration is given in Figure 7; the variation of the conductivity shows an interesting behaviour that is independent of the specific heat.
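Two of the quantities listed in Table 2 follow from simple post-processing of the raw measurements: the conductivity is obtained from the LCR conductance through σ(ω) = G(ω)·d/A, and the optical band gap from the energy-axis intercept of the linear region of the Tauc plot of (αhν)^1/2 versus hν. The sketch below illustrates both steps; the film thickness, electrode area, conductance values, and absorption data are placeholders, not the measured data of this work.

```python
# Sketch: conductivity from measured conductance, and band gap from a Tauc plot.
# Thickness, area, conductance, and absorption values are illustrative placeholders.
import numpy as np

# (1) AC conductivity from the measured conductance: sigma(w) = G(w) * d / A.
d = 120e-6                                            # film thickness in metres (assumed)
A = 1.0e-4                                            # electrode area in m^2 (assumed)
G = np.array([0.1, 0.2, 0.4, 0.8, 1.3, 1.7]) * 1e-6   # conductance in siemens (placeholder readings)
sigma = G * d / A                                     # conductivity in S/m
print("sigma (uS/m):", np.round(sigma * 1e6, 2))

# (2) Optical band gap from the Tauc relation: fit the linear region of (alpha*h*nu)^(1/2)
# versus h*nu and extrapolate to the photon-energy axis.
h_nu = np.linspace(2.0, 5.0, 100)                     # photon energy in eV
tauc = np.clip(h_nu - 3.4, 0.0, None) * 2.0           # synthetic (alpha*h*nu)^(1/2) with a 3.4 eV gap
mask = tauc > 0.5 * tauc.max()                        # points in the linear high-energy region
slope, intercept = np.polyfit(h_nu[mask], tauc[mask], 1)
print("estimated optical band gap (eV):", round(-intercept / slope, 2))
```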
Conclusion Following important results emerge from the present investigation.PVA/PVP doped with CuSO 4 form physically and thermally stable uniform transparent films.X-ray investigations indicate the description of well organized polymer network.FT-IR study clearly indicates that the change in the observed absorption bands in the region 1000 to 1500 cm −1 is due to the presence of copper ions.UV-Vis spectra show the existence of optical energy gap in these films and it has maximum value corresponding to a concentration of 0.2% CuSO 4 in the samples.Conductivity studies also show a maximum corresponding to a particular concentration of 0.2% CuSO 4 in these blends.All these results indicate that the film is stable, photosensitive and reasonable improvement in conductivity. Figure 1 . Figure 1.Experimental and simulated XRD pattern of pure and CuSO 4 doped PVA/PVP polymer composites. Figure 4 . Figure 4. Variation of conductivity with log frequency. Figure 5 . Figure 5. Variation of crystallite size of pure and CuSO 4 doped polymer composite films. Figure 7 . Figure 7. Variation of conductivity and electronic specific heat with concentration. Table 1 . Microstructural parameters of the samples using exponential distribution function. Table 2 . Optical band gap values with stastical performance index.6% CuSO 4 in PVA/PVA.In this range of concentration the values of crystallite size, conductivity, optical band gap and electronic specific heat which are determined experimentally are good. Table 3 . Assignment of IR characteristic bands using FT-IR.
3,127.6
2015-09-30T00:00:00.000
[ "Materials Science", "Physics" ]
Universal optical setup for phase-shifting and spatial-carrier digital speckle pattern interferometry Digital speckle pattern interferometry (DSPI) is a competitive optical tool for full-field deformation measurement. The two main types of DSPI, phase-shifting DSPI (PS-DSPI) and spatial-carrier DSPI (SC-DSPI), are distinguished by their unique optical setups and methods of phase determination. Each DSPI type has its limited ability in practical applications. We designed a universal optical setup that is suitable for both PS-DSPI and SC-DSPI, with the aim of integrating their respective advantages, including PS-DSPI’s precise measurement and SC-DSPI’s synchronous measurement, improving DSPI's measuring capacity in engineering. The proposed setup also has several other advantages, including a simple and robust structure, easy adjustment and operation, and versatility of measuring approach. Background Deformation measurement, especially three-dimensional (3D) deformation measurement, is essential to the quantitative description of object change and the accurate determination of mechanical properties.Traditionally, deformation measurement is carried out by the use of displacement transducers, such as strain gauges [1].However, displacement transducers suffer from the disadvantage of being a spot measurement technique, which leads to low spatial resolution and insufficient information for full-field deformation measurement.Optical techniques such as digital speckle pattern interferometry (DSPI) [2,3], digital image correlation [4], and Moiré method [5] have become preponderant methods in the measurement of deformation for objects with rough surfaces due to their full-field, stand-off, and non-contact measurement nature.Moreover, optical methods, particularly DSPI, are also very precise tools.DSPI is mainly divided, based on their optical setups and interferometric phase extraction methods, into two categories: phase-shifting DSPI (PS-DSPI) and spatialcarrier DSPI (SC-DSPI).The SC-DSPI is also known as digital holographic interferometry [6,7]. 
PS-DSPI utilizes the interference between an object beam from a measuring target and a reference beam from a fixed surface to measure the out-of-plane deformation, and the interference between object and reference beams from the measuring target via different paths to measure in-plane deformations [8,9].3D deformation measurement is then realized by combining one optical setup for out-of-plane deformation measurement and two optical setups for in-plane deformation measurement together.The three channels are enabled in turn when performing the 3D measurement, resulting in asynchronous measurement of the 3D deformations.However, synchronous measurement of 3D deformations is desired in practical applications to enable the change and mechanical model of the measuring object to be characterized properly.Therefore, the inability of PS-DSPI to perform synchronous measurement limits its employment in practical engineering.Furthermore, PS-DSPI is usually unsuitable for dynamic measurement due to the amount of time consumed in the process of obtaining the interferometric phase.The dominant phase extraction method in PS-DSPI is the temporal phase shift, which carries out several phase shifts and requires the measuring target to be stationary during the phase shift [10].Dynamic deformations are not easily measured, even if the time interval between adjacent phase steps is very short.Though other phase extraction methods, such as spatial phase shift [11] and phase of difference phase shift [12], have been used in DSPI to make dynamic deformation measurement possible, these methods are difficult to use, result in a more complicated system structure, and provide less reliable measurement results.Consequently, these fast phase extraction methods are rarely used in commercial PS-DSPI instruments. SC-DSPI also uses a multi-channel optical setup, usually a three-channel setup, to measure 3D deformations [13,14].The three channels work simultaneously, and three speckle interferograms are recorded in an image frame.The information of the three interferograms can be separated in the frequency domain, and their corresponding phase maps can later be calculated if proper spatial carrier frequencies are used [15].The combination of the three phase maps allows the final 3D deformation to be obtained.SC-DSPI's measurement characteristics make synchronous measurement of 3D deformations possible because the three speckle interferograms are recorded together in one frame and the three phase maps are obtained simultaneously.Dynamic measurement of deformations is also possible because only one image frame is used to measure deformations, eliminating the need for a specified time interval [16].The dynamic measurement speed depends on the camera frame rate.Though SC-DSPI outperforms PS-DSPI in terms of synchronous and dynamic measurement, its disadvantages include a lower-quality phase map [17], greater loss of laser energy, and much smaller measuring area, thus limiting its use in practical applications. PS-DSPI and SC-DSPI have their respective characteristics and are employed in different applications.However, their respective defects limit their wide use in engineering.Their area of application could be expanded if both techniques could be combined together.However, this idea is not easy to realize due to their distinct optical setup. 
We have built a universal optical setup for both PS-DSPI and SC-DSPI.3D deformations can be measured by this optical setup using either PS-DSPI or SC-DSPI.Thus the flexibility of deformation measurement in engineering is fulfilled by the use of the proposed optical setup.The optical setup is also very simple, robust, and easy to use. Arrangement of universal optical setup The universal optical setup for PS-DSPI and SC-DSPI adopts a three-channel optical arrangement.Each channel consists of an object and reference beam pair derived from an individual laser.Components in each channel are almost the same, but the laser wavelength can be different.The incident angles, or illumination angles, of the three object beams striking the measuring target are artificially arranged to achieve optimal 3D deformation measurement results.The illumination angles will be discussed later. The optical arrangement of the universal optical setup is depicted in Fig. 1.Considering the similarity of the three channels, the optical arrangement of only one channel is described to show the optical interference process.The laser beam is divided into object and reference beams by a beam splitter.The object beam then strikes the measuring target after being expanded by a negative lens or other optical components or parts with similar function, such as a microscope objective.The scattered light from the target is collected by an imaging lens, such as an aspheric lens, then reaches the image sensor of the camera via an aperture.The aperture works as a regulator of light intensity in PS-DSPI mode and a filter of spatial frequency in SC-DSPI mode.The reference beam is coupled into an optical fiber via a piezoelectric-transducer-driven mirror.The elongation of the piezoelectric transducer (PZT) is automatically controlled by a computer to modulate the optical path of the reference beam, resulting in the phase shift in the PS-DSPI measurement.The emergent light from the fiber strikes the camera sensor at a small angle between it and the optical axis.This angle determines the carrier frequency, a key parameter in the SC-DSPI measurement.The object and reference beams encounter each other on the camera sensor, resulting in optical interference.The generated speckle interferograms are captured by the camera and recorded by the computer for further processing. The other two channels follow the same principle, but have different illumination angles and reference beam incident angles.The differences in the incident angles of the reference beams guarantee the separation of the interferometric signals from the three channels in the frequency domain, when the setup works in the SC-DSPI mode.The illumination angle differences among the three channels result in different displacement sensitivity coefficients.The combination of these displacement sensitivity coefficients forms a displacement sensitivity matrix with which the relationship between the 3D deformations and the interferometric phases obtained by PS-DSPI and SC-DSPI is built.The phase determination and deformation calculation procedures are discussed in the next section.Various illumination angle combinations among the three channels yield different displacement sensitivity matrices.Among these combinations, right-angle distribution and homogeneous distribution, described in Fig. 2, are the two simplest and optimal arrangements.In both types, the magnitudes of the illumination angles are equal, but the directions differ. 
When PS-DSPI is used to measure 3D deformations, the three channels are enabled in turn by opening the shutters in front of each laser. Only one interferogram, generated by the object and reference beam pair of a single channel, is captured by the camera at a time. A round of measurements using the three channels in turn yields three equations expressing the mathematical relationship between the interferometric phases and the image intensities. When SC-DSPI is used, the three shutters are opened together, so that three pairs of object and reference beams reach the camera sensor simultaneously. Each object-reference beam pair generates an interferogram, resulting in the simultaneous recording of three independent interferograms. The three interferograms are later separated in the frequency domain after a Fourier transform is performed on them, and the interferometric phases are extracted from the separated interferograms after an inverse Fourier transform. Phase determination using PS-DSPI The interferogram generated in the PS-DSPI mode can be expressed as

I(x, y) = I_0(x, y) + B(x, y) cos[ϕ(x, y) + 2π(f_x x + f_y y)],    (1)

where I(x, y) is the intensity distribution of the interferogram, I_0(x, y) is the background light, B(x, y) is a coefficient correlating with the contrast, and ϕ(x, y) is the interferometric phase. The interferogram intensity I(x, y) is recorded by the camera, and the carrier frequencies f_x and f_y are determined by the incidence angle of the reference beam, but the three remaining variables in Eq. (1) are unknown, making the equation unsolvable. Additional conditions need to be added to resolve this problem. Typically, the additional condition is a series of artificial phase changes; the method of solving the equation by artificially changing the interferometric phase is known as phase shifting. This method can be further divided into temporal and spatial phase shifting. Temporal phase shifting, which changes the phase over time, is the dominant phase determination method in PS-DSPI due to its ease of use and its ability to produce high-quality phase maps. The number of steps and the phase-change intervals are multifarious [18]. For example, the popular four-step temporal phase shift changes the phase four times with an interval of π/2. As a result, four equations are obtained:

I_k(x, y) = I_0(x, y) + B(x, y) cos[ϕ(x, y) + 2π(f_x x + f_y y) + (k − 1)π/2],  k = 1, 2, 3, 4.    (2)

Solving Eq. (2) for ϕ(x, y) results in the following expression:

ϕ(x, y) = arctan[(I_4(x, y) − I_2(x, y)) / (I_1(x, y) − I_3(x, y))].    (3)

After the measuring target has been deformed, the phase shift is carried out again to determine the interferometric phase for the deformed state. The phase difference is then determined by simply subtracting the phase before deformation from the phase after deformation:

Δϕ_1(x, y) = ϕ_a(x, y) − ϕ_b(x, y),    (4)

where ϕ_a(x, y) and ϕ_b(x, y) are the phase distributions after and before the deformation, respectively. The other two phase differences Δϕ_2(x, y) and Δϕ_3(x, y) are determined by performing the same procedure on the other channels. In the proposed universal optical setup, the phase shift is carried out by the PZT. A PZT elongation of λ/8, where λ is the laser wavelength, causes a phase shift of π/2, which is the amount required by the four-step temporal phase shift. Fine control of a well-calibrated PZT aids in the precise determination of the interferometric phase using PS-DSPI.
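The four-step evaluation of Eqs. (2)-(4) amounts to a per-pixel arctangent of frame differences followed by a wrapped subtraction. The NumPy sketch below exercises exactly those formulas on synthetic fringe data; the test phases are arbitrary and stand in for real speckle interferograms.

```python
# Minimal sketch of four-step (pi/2) temporal phase shifting and the wrapped phase
# difference between deformed and reference states. The fringe data below is synthetic
# and only exercises the formulas of Eqs. (2)-(4).
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four frames shifted by 0, pi/2, pi, 3*pi/2.
    np.arctan2 implements the arctangent of Eq. (3) with quadrant handling."""
    return np.arctan2(i4 - i2, i1 - i3)

def make_frames(phase, i0=120.0, b=80.0):
    """Generate the four phase-shifted frames of Eq. (2) for a given test phase."""
    shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
    return [i0 + b * np.cos(phase + s) for s in shifts]

y, x = np.mgrid[0:256, 0:256]
phase_before = 0.02 * x                                          # arbitrary test phase
phase_after = phase_before + 3e-4 * ((x - 128) ** 2 + (y - 128) ** 2) / 50.0

phi_b = four_step_phase(*make_frames(phase_before))
phi_a = four_step_phase(*make_frames(phase_after))

# Phase difference of Eq. (4), wrapped back into (-pi, pi].
delta_phi = np.angle(np.exp(1j * (phi_a - phi_b)))
print("max |wrapped phase difference| =", round(float(np.abs(delta_phi).max()), 3), "rad")
```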
( 5) can be transformed to where C i (x, y) = B i (x, y)exp[jϕ(x, y)]/2, * denotes the complex conjugate.After a Fourier transform is performed, Eq. ( 6) is transformed to where FT denotes the operation of Fourier transform, (f ξ , f η ) are the coordinates in the frequency domain, and Eq. ( 7) shows there are a total of seven components in the frequency domain, where P i (f ξ − f ix , f η − f iy ) and Q i (f ξ + f ix , f η + f iy ) are three pairs of conjugate components and A(f x , f y ) represents the low-frequency background signal.The locations of P i (f ξ − f ix , f η − f iy ) and Q i (f ξ + f ix , f η + f iy ) are determined by the carrier frequencies f ix and f iy .All seven components can be well separated by fine adjustment of the incidence angles of the reference beams and the aperture in the universal optical setup.To intuitively describe the frequency spectrum obtained by SC-DSPI, Fig. 3 illustrates a distribution of the seven components that was generated by the proposed optical setup in an experiment.More information about the synchronous recording and separation of the multiple interferograms can be found in Refs.[19] and [20]. contain the same interferometric phase, either of them can be used for phase extraction.This is realized by applying an inverse Fourier transform on the selected component and performing further calculations.For example, if P i (f ξ − f ix , f η − f iy ) is chosen, the phase distribution according to the first channel is where IM and RE denote imaginary and real parts of the complex number and where FT -1 is the inverse Fourier transform operation. The phases according to the other two channels, as well as the phases after deformation, are obtained by the same means.Finally, three individual phase difference distributions Δϕ 1 (x, y), Δϕ 2 (x, y) and Δϕ 3 (x, y) are determined by subtracting the phases before deformation from the corresponding phases after deformation. Calculation of 3D deformations The relationship between the deformation and interferometric phase difference in PS-DSPI and SC-DSPI can be expressed by where Δϕ(x, y) is the difference, d !x; y ð Þ is the deformation vector, and s !x; y ð Þ is the displacement sensitivity vector, which is dependent on the illumination angles. If the right-angle-distribution optical arrangement is used, Eq. ( 11) can be transformed to where λ 1 , λ 2 , and λ 3 are the wavelengths of the three lasers; u(x, y), v(x, y), and w(x, y) are the three components of d !x; y ð Þ in three dimensions, and α is the illumination angle. To simplify the calculation, all laser wavelengths are assumed to be the same.This assumption, used with Eq. ( 12), results in the following expressions for the three deformation vector components: For the homogeneous-distribution optical arrangement, Eq. ( 11) becomes If the laser wavelengths are assumed to be the same, the deformation vector components have the following expression: The solutions of v(x, y) and w(x, y) are the same for both the right-angle-distribution and homogeneousdistribution types, but the solutions of u(x, y) are different. Results and Discussion An experimental setup based on Fig. 1 and Fig. 2a was built to verify the validity of the presented universal optical setup.Three single-longitudinal-mode diodepumped-solid-state lasers, all with a wavelength of 532 nm, were used as the light sources.A complementary- metal-oxide-semiconductor (CMOS) camera (CatchBEST Co. 
Ltd., MU3C500M-MRYYO, 500 Mega pixels, 14 fps) and an aspheric lens with a focus length of 100 mm were used to capture images.The location of the aspheric lens was carefully adjusted to obtain clear images.Three PZT chips (Thorlabs, Inc., PA4FE, 150 V, 2.5 μm travel) were used to actuate the phase shifts.The illumination angles were set to around 30°, and the incidence angles of the reference beams were carefully adjusted to guarantee that all components in the frequency domain were well separated.An object with a circular planar surface was used as the measuring target.Out-of-plane deformation w(x, y) was generated by applying a load to the center of the target back, while the in-plane deformations u(x, y) and v(x, y) were generated through rotation of the object surface.All motions were finely controlled using manual micrometer heads.The measuring area in the experiment was 60 mm × 40 mm. With self-developed programs, both the PS-DSPI and SC-DSPI modes were activated to measure the 3D deformations.The obtained phase differences corresponding to each channel are shown in Fig. 4. Figure 4(a1), (a2) and (a3) are the three phase differences obtained by PS-DSPI and Fig. 4(b1), (b2) and (b3) are the phase differences obtained by SC-DSPI.These phase differences are wrapped due to the arc tangent operation expressed in Eqs. ( 3) and ( 9).The real phase differences are finally obtained after image smoothing and phase unwrap operations are performed [21].All of the phase maps in Fig. 4 present clear and regular patterns, illustrating the capability of both PS-DSPI and SC-DSPI to obtain high-quality phase maps.However, differences in image quality between the phase maps obtained by PS-DSPI and SC-DSPI can be found after partial enlargement of the original phase maps is processed.Local regions of the phase maps of the same size, corresponding to the first channel in the PS-DSPI and SC-DSPI modes respectively, are marked by yellow boxes in Fig. 4(a1) and (b1).The enlarged parts corresponding to the marked regions, as depicted in Fig. 5, clearly show that the speckle particles in the phase map obtained by PS-DSPI are much smaller than those in the phase map obtained by SC-DSPI.This means that the noise in the PS-DSPI phase maps can be filtered more easily than the SC-DSPI's phase maps, or, in other words, the phase smoothing process is performed more times in the SC-DSPI mode, leading to larger error being induced in the phase smoothing process.Consequently, PS-DSPI measurement is usually more accurate than SC-DSPI measurement.However, SC-DSPI reflects its value with its ability to perform dynamic and synchronous 3D deformation measurement. The final 3D deformations shown in Fig. 6 were determined after the calculations described by Eq. ( 13) were performed.The horizontal coordinates represent the object surface plane and the vertical coordinates represent the deformation change.Figure 6(a1), (a2) and (a3) show the 3D deformations u, v and w obtained using PS-DSPI, respectively, while Fig. 
6(b1), (b2) and (b3) show the 3D deformations obtained using SC-DSPI.The in-plane deformations u and v are orthogonal and vary linearly along the horizontal direction.These results indicate that the magnitude of the relative displacement of the circular surface caused by the rotation increases gradually and linearly.This is in accord with the results of theoretical analysis.The out-of-plane deformation w presents a distribution that decreases gradually from the periphery to the loading center.Though slight differences can be found between the results obtained by PS-DSPI and SC-DSPI due to the impossibility in duplicating the loading, this deformation data proves that reasonable results can be obtained by both PS-DSPI and SC-DSPI.Consequently, with the proposed universal optical setup, both PS-DSPI and SC-DSPI can be used to measure 3D deformation. Conclusion A universal optical setup, with a simple structure, for both PS-DSPI and SC-DSPI is introduced, aiding in the flexibility of full-field 3D deformation measurements.Experimental results show that clear phase maps with regular pattern and reasonable deformation measurement results can be obtained using this setup, verifying the validity of the presented method.Compared to traditional separate PS-DSPI and SC-DSPI setups, the performance of the proposed setup is not degraded.Moreover, its versatility improves the adaptive capacity relative to measuring target variability.Potential DSPI instruments, based on the proposed universal optical setup, will gain more applications and play an important role in practical engineering. Fig. 1 Fig. 1 Universal optical setup for phase-shifting and spatial-carrier digital speckle pattern interferometry Fig. 2 Fig. 2 Typical illumination layouts for the universal optical setup.a Right angle distribution.b Homogeneous distribution Fig. 3 Fig. 3 Frequency spectrum obtained by the SC-DSPI Fig. 5 Fig. 5 Comparison of phase maps.a Partial enlargement of phase map from PS-DSPI.b Partial enlargement of phase map from SC-DSPI
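To make the phase-determination steps of the two modes concrete, the sketch below implements the four-step temporal phase-shifting calculation used in PS-DSPI and a minimal carrier-lobe filtering step of the kind used in SC-DSPI. The lobe position and crop size are illustrative assumptions (in practice they are read off a spectrum such as Fig. 3), and the image smoothing and phase unwrapping steps [21] are omitted.

```python
import numpy as np

def phase_four_step(i1, i2, i3, i4):
    """Wrapped phase from four frames shifted by 0, pi/2, pi, 3*pi/2 (PS-DSPI)."""
    return np.arctan2(i4 - i2, i1 - i3)

def phase_difference(phase_after, phase_before):
    """Wrapped phase difference; complex subtraction keeps it in (-pi, pi]."""
    return np.angle(np.exp(1j * (phase_after - phase_before)))

def phase_from_carrier(interferogram, lobe_centre, half_width):
    """SC-DSPI-style extraction: isolate one carrier lobe P_i in the spectrum,
    inverse-transform it and take the argument.  lobe_centre (row, col) and
    half_width are assumptions; in practice they are read off the spectrum."""
    spec = np.fft.fftshift(np.fft.fft2(interferogram))
    mask = np.zeros(spec.shape)
    r, c = lobe_centre
    mask[r - half_width:r + half_width, c - half_width:c + half_width] = 1.0
    analytic = np.fft.ifft2(np.fft.ifftshift(spec * mask))
    return np.angle(analytic)   # the carrier ramp cancels in the before/after subtraction

# Self-check of the four-step formula on a synthetic fringe pattern.
yy, xx = np.mgrid[0:128, 0:128]
phi = 0.02 * ((xx - 64.0) ** 2 + (yy - 64.0) ** 2) / 64.0
frames = [100 + 50 * np.cos(phi + k * np.pi / 2) for k in range(4)]
assert np.allclose(phase_four_step(*frames), np.angle(np.exp(1j * phi)), atol=1e-9)
```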
4,425.4
2016-09-20T00:00:00.000
[ "Physics" ]
Influence of Extractive Solvents on Lipid and Fatty Acids Content of Edible Freshwater Algal and Seaweed Products, the Green Microalga Chlorella kessleri and the Cyanobacterium Spirulina platensis Total lipid contents of green (Chlorella pyrenoidosa, C), red (Porphyra tenera, N; Palmaria palmata, D), and brown (Laminaria japonica, K; Eisenia bicyclis, A; Undaria pinnatifida, W, WI; Hizikia fusiformis, H) commercial edible algal and cyanobacterial (Spirulina platensis, S) products, and autotrophically cultivated samples of the green microalga Chlorella kessleri (CK) and the cyanobacterium Spirulina platensis (SP) were determined using a solvent mixture of methanol/chloroform/water (1:2:1, v/v/v, solvent I) and n-hexane (solvent II). Total lipid contents ranged from 0.64% (II) to 18.02% (I) by dry weight and the highest total lipid content was observed in the autotrophically cultivated cyanobacterium Spirulina platensis. Solvent mixture I was found to be more effective than solvent II. Fatty acids were determined by gas chromatography of their methyl esters (% of total FAMEs). Generally, the predominant fatty acids (all results for extractions with solvent mixture I) were saturated palmitic acid (C16:0; 24.64%–65.49%), monounsaturated oleic acid (C18:1(n-9); 2.79%–26.45%), polyunsaturated linoleic acid (C18:2(n-6); 0.71%–36.38%), α-linolenic acid (C18:3(n-3); 0.00%–21.29%), γ-linolenic acid (C18:3(n-6); 1.94%–17.36%), and arachidonic acid (C20:4(n-6); 0.00%–15.37%). The highest content of ω-3 fatty acids (21.29%) was determined in Chlorella pyrenoidosa using solvent I, while conversely, the highest content of ω-6 fatty acids (41.42%) was observed in Chlorella kessleri using the same solvent. Introduction Many recent studies have focused on the chemical composition of seaweeds, their positive contributions to human health and their possible usage as foodstuffs. Seaweed consumption has a long tradition in Asian countries and has increased in European countries in recent years, therefore approximately 20 species of edible algae are now available on the European market. Nowadays, freshwater algae and seaweeds have been extensively studied as good sources of many bioactive substances such as fatty acids, sterols, proteins, amino acids, minerals, polysaccharides or selected halogenated compounds, with extensive health benefit activities [1][2][3][4][5][6]. Freshwater algae and seaweeds, like fruits and vegetables, exhibit antibacterial, anti-inflammatory, anticancer, antiviral, anticoagulant, and other interesting properties [7][8][9]. There are many possibilities for their usage, especially in medicine, pharmacy and the food industry. For instance, seaweeds have been utilized industrially as a source of agar, carrageenans and alginates [1,10,11] and freshwater algae and seaweeds have been evaluated as nutraceutical foods [12,13]. Lipids, including their fatty acids (FAs), are essential human nutrients that can be classified as saturated (SFAs), monounsaturated (MUFAs), and polyunsaturated FAs (PUFAs), according to the absence or presence of unsaturated bonds. Humans are able to synthesize SFAs and MUFAs, but unfortunately, PUFAs with the first double bond on the third or sixth carbon atom (essential fatty acids-EFAs) are essential because they cannot be synthesized by humans [4,14]. The contemporary Western human diet is known for an increased intake of SFAs and ω-6 FAs that results in an imbalance of ω-3 and ω-6 FAs [4,23]. 
Seaweeds contain a higher proportion of ω-3 FAs, which are components of all cell membranes and are precursors of biochemical and physiological reactions in the body, acting against atherosclerosis, hypertension, inflammatory diseases, cystic fibrosis, rheumatoid arthritis and helping prevent mental illnesses [2,4,24]. The chemical composition of seaweed is affected by many factors such as the seaweed species, location and time of harvest, intensity of light, water chemistry at the location, and the part of plants used [2,25]. Studies of the lipid profile in seaweeds have investigated the seasonal variation [26], effect of growth conditions [27,28], and differences among diverse seaweed tissues [29]. Most studies which were focused on the total lipid contents and on the FA profiles have concerned fresh seaweeds [26,28,29]. However, the effect of algae and seaweed processing, i.e., drying, packaging, transporting, and subsequent storage, on the lipid content and FAs composition is rarely studied. The present paper evaluates and compares the lipid profiles of nine commercially available seaweed microalgal and cyanobacterial products, and autotrophically cultivated samples of the green microalga Chlorella kessleri and the cyanobacterium Spirulina platensis. In addition, yield of lipids and FA profiles of seaweeds extracted using different solvents were compared. The new information presented herein should be useful to support more abundant consumption of dry seaweed products as a source of ω-3 and ω-6 FAs, and also for better revaluating the real contribution of freshwater algal and seaweed products to PUFA enrichment of the human food chain. Total Lipid Contents Total lipid contents of analyzed samples were determined in extracts obtained by different solvents (Table 1). Evidently, extraction using a 1:2:1 mixture of methanol/chloroform/water (solvent I) resulted in higher contents of lipids among all determined samples, ranging from 1.32% (P. palmata) to 18.02% (S. platensis), whereas the extraction with hexane (solvent II) was less effective in relation to lipid contents, that ranged from 0.64% (P. palmata) to 13.41% (S. platensis) for the same algal samples as in the previous analysis. Moreover, the total lipid content seemed to be influenced by the cultivation or technology process. Here, the results showed much higher total lipid contents in autotrophically cultivated microalgae compared to freshwater edible products in pill form using both solvent systems. In detail, autotrophically cultivated cyanobacterium S. platensis contained 18.02% lipids (I), in comparison with 10.23% in the spirulina product (I). Similarly, 18.01% was measured in the autotrophically cultivated green microalga C. kessleri compared to 3.70% in the green microalga C. pyrenoidosa product after extraction with solvent I. Further, the content of total lipids obtained from the green microalga C. pyrenoidosa with solvent I (3.70%) was lower than established by Ortega-Calvo et al. in dry C. vulgaris algal product (8.6%) after extraction by dichloromethane/methanol (2:1) [19]. Similarly, D'Oca et al. presented results of lipid contents in C. pyrenoidosa in the range from 1.55% to 20.74%, depending on the different solvents and extraction methods used [18]. A similar result (14.3%) for total lipid content in the cyanobacterium S. platensis after extraction by a mixture of chloroform/methanol (2:1) was published by Babadzhanov et al. [17]. On the other hand, Ortega-Calvo et al. 
reported much lower contents of lipids in the cyanobacterial food products of S. platensis, S. maxima, and Spirulina sp. (6.4%-7.5% in dry weight) using a mixture of dichloromethane/methanol (2:1) [19]. It was obvious that freshwater green microalgae and cyanobacteria contained higher concentrations of lipids than seaweed products. This could be caused by the specific metabolism and growth conditions of these algae. In accordance with our findings, significant discrepancies in the efficiency of various solvent mixtures used for lipid extraction were reported in several studies [15,[17][18][19][20]. That could be caused by the presence of different lipid components in the algal biomass. Generally, a higher amount of polar lipid compounds in algal biomass results in worse lipid extraction yields by nonpolar solvents and vice versa [18]. Significant differences in the FA composition of the nine algal products and the autotrophically cultivated S. platensis and C. kessleri were established. Variations in FA contents are ordinarily attributable both to environmental and genetic differences [20,32]. Nevertheless, the extractive solvents had an influence on the number and amount of identified FAs. The impact of the solvents used on the total number of identified FAs and different amounts of determined SFAs, MUFAs and PUFAs are presented in Figures 1-4. From Figure 1 can be concluded that 9 to 22 or 7 to 21 FAs were identified, respectively, depending on whether solvent I or II was used. The solvent I was found more effective than solvent II in relation to a number of identified FAs for almost all samples, except for autotrophically cultivated S. platensis, where the same efficiency for both solvents was established. The different efficiency of the two solvents used for the extraction of algal lipids in relation to the proportions of the different FAs is evident too from the obtained results shown in Figures 2-4. Higher amounts of SFAs ( Figure 2) were obtained from all samples except for Spirulina genus using solvent II. The solvent I was more effective for PUFAs for almost in all samples, except for both Spirulina genus samples and the product from the brown seaweed H. fusiformis ( Figure 3). Further, solvent II was more effective for MUFA extraction in both samples of the green freshwater algae C. kessleri and C. pyrenoidosa and in the three brown seaweed products L. japonica, and U. pinnatifida (W, WI). Finally, in samples of autotrophically cultivated S. platensis and E. bicyclis, no difference between the two solvents used was observed ( Figure 4). Fatty Acid Profiles FA compositions of cyanobacterial, microalgal and seaweeds products and two samples of autotrophically cultivated green microalga and cyanobacteria obtained by different solvent extractions are presented in Tables 2 and 3 and the results are given in % of total FAMEs. Saturated Fatty Acids Pursuant to published data, the most abundant groups of algal lipids among the total FAMEs are SFAs or PUFAs, depending on the algal species [15,33]. The majority of the investigated samples showed the highest proportions of SFAs in their FAMEs distribution regardless of the solvent used. The highest contents of SFAs obtained with the solvents I and II were established in the red seaweeds P. palmata (86.58%/93.26%) and P. tenera (65.56%/76.56%). Conversely, the cultivated freshwater green microalga C. kessleri had the lowest contents of SFAs (28.87%/29.29%). 
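The class totals and ratios quoted throughout this section (SFA, MUFA, PUFA, omega-3, omega-6, and later the PUFA/SFA ratio) are simple sums over the FAME table. The snippet below shows that bookkeeping on a hypothetical composition; the percentages are placeholders, not values from Tables 2 and 3.

```python
# Illustrative FAME table (% of total FAMEs); the numbers are placeholders.
fames = {
    "C16:0": 40.1, "C18:0": 2.3,             # saturated
    "C18:1(n-9)": 12.4,                       # monounsaturated
    "C18:2(n-6)": 20.5, "C18:3(n-3)": 15.0,   # polyunsaturated
    "C18:3(n-6)": 4.1, "C20:4(n-6)": 5.6,
}

def double_bonds(name):
    """Number of double bonds parsed from a label such as 'C18:2(n-6)'."""
    return int(name.split(":")[1].split("(")[0])

sfa  = sum(v for k, v in fames.items() if double_bonds(k) == 0)
mufa = sum(v for k, v in fames.items() if double_bonds(k) == 1)
pufa = sum(v for k, v in fames.items() if double_bonds(k) >= 2)
omega3 = sum(v for k, v in fames.items() if "(n-3)" in k)
omega6 = sum(v for k, v in fames.items() if "(n-6)" in k)

print(f"SFA {sfa:.1f}%  MUFA {mufa:.1f}%  PUFA {pufa:.1f}%")
print(f"omega-3 {omega3:.1f}%  omega-6 {omega6:.1f}%  PUFA/SFA {pufa / sfa:.2f}")
```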
Monounsaturated Fatty Acids MUFAs were distributed in less amounts than SFAs and their contents ranged from 6.73% (S. platensis) to 30.38% (L. japonica) in the solvent I extract and from 4.41% (P. tenera) to 33.55% (L. japonica) in solvent II extracts. In general, large differences were found in MUFA contents among the analyzed algal species. The highest contents of MUFAs were determined in brown seaweeds, whilst the lowest contents were detected in the samples of cyanobacteria and red seaweeds, depending on the solvents used. In keeping with reports on brown seaweeds, MUFAs with higher numbers of carbons were identified as more abundant than in other algal species. The summed contents of oleic acid C18:1(n-9) in L. japonica gave the highest amount (26.65%/30.69%) of total FAMEs, unlike reported data (8.4%) [36]. In the two products from U. pinnatifida (W, WI), higher contents of oleic acid C18:1(n-9) (13.32%/9.35% in W; 11.91%/15.30% in WI) were determined than reported in published data (6.79%-10.2%) [20,35]. The same situation was observed in the last product from the brown seaweed H. fusiformis, where 10.00% and 11.28% were determined, contrary to a published 7.68% values for total FAMEs expressed as a sum of C18:1 [15]. Polyunsaturated Fatty Acids PUFA contents ranged from 3.04% (P. palmata) to 61.52% (cultivated C. kessleri) using solvent I, whereas solvent II extracts were in the range from 0.00% (P. palmata) to 42.70% (cultivated C. kessleri) of total FAMEs. The extraction of lipids by solvent II seemed to be insufficient for the isolation of PUFAs with higher carbon numbers, as they were not detected in most of the analyzed samples, except for C20:4(n-6) determined in the brown seaweed products. Generally, α-linolenic (ALA) and linoleic acids (LA) are the primary precursors of ω-3 and ω-6 EFAs, respectively. Both are formed by the gradual desaturation of oleic acid in the endoplasmic reticulum and plantae chloroplasts. Importantly, humans cannot synthesize ALA due to the absence of the ∆ 12 and ∆ 15 desaturases required for the synthesis of ALA from stearic acid (18:0) or PUFAs with the first double bond on the C3 (ω-3) and C6 (ω-6) from the methyl-end. Thus, the level of these PUFAs in the human body depends on their intake from the diet [4]. Generally, ω-3 PUFAs play crucial roles in many biochemical pathways which results in various health benefits, especially cardioprotective effects that result from their considerable anthiatherogenic, antithrombotic, anti-inflammatory, antiarrhytmic, hypolipidemic effects, and other health benefits, based on the complex influence of the concentrations of lipoproteins, fluidity of biological membranes, function of membraned enzymes and receptors, modulation of eicosanoids production, blood pressure regulation, and finally on the metabolism of minerals [4,[38][39][40][41][42]. Fish oil is considered as the main source of essential PUFAs. Nevertheless, fish also cannot synthesize these PUFAs because of the absence of crucial enzymes and the high level of essential PUFAs in fish oil is a direct consequence of the presence of marine microorganisms and algae in the fish trophic chain. The highest content of ω-3 FAs was determined in the green microalga C. pyrenoidosa (21.29%, I) and the highest content of ω-6 FAs was observed in the other cultivated freshwater green microalga C. kessleri sample (41.42%, I). Fatty Acid Profiles of Autotrophically Cultivated Cyanobacteria and Microalga Autotrophically cultivated S. 
platensis showed a similar FA composition to the cyanobacterial product of S. platensis, except for a slightly higher content of C16:0 and lower content of C18:3(n-6) in the cultivated alga. In contrast, the other autotrophically cultivated freshwater green alga C. kessleri showed a higher amount of PUFAs than the microalgal product of C. pyrenoidosa, the highest of all analyzed samples. The content of linoleic acid C18:2(n-6), which was the predominant PUFA in cultivated C. kessleri, exceeded the amount of this PUFA in product from C. pyrenoidosa by 51.6%. PUFAs/SFAs Fatty Acids Ratio The PUFAs/SFAs fatty acids ratio (hereinafter referred to as ratio) could be used for a rapid evaluation of FA profiles of analyzed samples; the higher value of this ratio means more health benefits. Ratio in the product from S. platensis (0.57/0.66) and in autotrophically cultivated S. platensis (0.46/0.74) are in accordance with the reported ratios in Spirulina sp., S. platensis, and S. maxima (0.25-0.75) [19] and in the cyanobacterium S. platensis (0.54) [17]. Based on the obtained results and data presented in literature [15,[17][18][19][20], it is evident very significant differences exist within the FA profiles in the same species of algae and seaweeds depending on the used solvents and methods of analysis. Further, chemical composition of seaweed and microalgae is affected by many factors (species of seaweed, location and time of harvest, light intensity, water chemistry and the used part of plants); therefore, the results obtained from various analyses may differ [2,25]. Samples and Chemicals The study was conducted with eight representative species of dried cyanobacterial, microalgal and seaweed products purchased in a special local store in dried form; they were represented by green microalga (Chlorophyta), cyanobacteria (Cyanophyceae), brown seaweeds (Phaeophyta), red seaweeds (Rhodophyta), and two samples of autotrophically cultivated freshwater green microalga Chlorella kessleri (No. 260) and cyanobacterium Spirulina platensis (No. 27) obtained from the Culture Collection of Autotrophic Organisms, Institute of Botany, Academy of Sciences of the Czech Republic, Centre of Phycology (Trebon, Czech Republic). Both autotrophically cultivated species were harvested in exponential growth phase. Characteristics of all the samples are summarized in Table 4. All product samples were pulverized with a mixer (Vorwerk Thermomix TM 31, Wuppertal, Germany) to obtain a homogenous powder with a particle size of 1 mm and they were stored in airtight plastic bags at room temperature (25 °C). Freshwater green microalga Chlorella kessleri and cyanobacterium Spirulina platensis were cultivated autotrophically in a solar photobioreactor as described in the study by Masojídek et al. [43]. For the cultivation of microalgae, BG11 culture medium was used [44]. After the cultivation, the algal biomass was lyophilized (Alpha 1-4 LSC, Christ, Osterode am Harz, Germany) and stored in airtight plastic bags at room temperature (25 °C). All used chemicals were of analytical grade and were purchased from Merck (Darmstadt, Germany), except for the standard mixture of 37 FAMEs (FAME Mix, Supelco, Bellefonte, PA, USA), and methyl undecanoate purchased from Sigma Aldrich Chemical Co. (St. Louis, MO, USA). Total Lipids Determination Total lipids of the analyzed samples were extracted using two different solvents. Either a mixture of methanol/chloroform/water (1:2:1, v/v/v) according to the modified method of [45] or n-hexane was used. 
Specifically, a portion (2 g) of every dried ground sample was weighed into an extraction thimble and subjected to a Soxhlet extraction for 4 h with 100 mL of the solvent mixture. Subsequently, the solvent was removed on a vacuum rotary evaporator (Laborota 4010 Digital, Heidolph, Schwabach, Germany) and the lipid extracts were dried at 105 °C for 2 h (Venticell 111 Komfort, BMT, Brno, Czech Republic). The amount of total lipid contents of all samples was determined gravimetrically [18]. GC Analysis of FAMEs FAs were determined by gas chromatography (GC) of their methyl esters (FAMEs) in the lipid extracts obtained by above described method, excluding drying. Briefly, 0.5 M sodium hydroxide in methanol (4 mL) was added to the lipid extract (obtained from 2 g of sample) in a 250 mL flask. The flask was closed and heated for 30 min under nitrogen on a heating block (LTHS 250, Brnenska Druteva, Brno, Czech Republic). Then, freshly prepared 15% boron trifluoride in methanol (5 mL), was added to methylate the samples. After 2 min, heptane (5 mL) and sodium chloride (saturated solvent, 2 mL) were added and the sample was removed from the heating block. Next, heptane (15 mL) and sodium chloride (saturated solvent, 40 mL) were added to extract the FAMEs, the mixture was shaken and phases were separated and subsequently washed with sodium chloride (saturated solvent, 40 mL). The heptane phase was separated and anhydrous sodium sulfate was added. Quantitative determinations of FAMEs were conducted using a Shimadzu GC-2010 gas chromatograph (Shimadzu Corporation, Tokyo, Japan) equipped with a flame ionization detector (FID) and a HP-88 (Agilent Technologies, Englewood, CO, USA) capillary column (100 m × 0.25 mm, 88% cyanopropyl-arylpolysiloxane stationary phase with the thickness of 0.25 μm). The injection volume was 1.0 μL, the temperature of injection port was 250 °C with the split ratio of 1:100 and nitrogen was used as a carrier gas, temperature program was 80 °C/5 min, 200 °C/30 min, 250 °C/15 min. Identification of FAMEs was conducted by comparing their retention times with those of a 37 FAME reference standard. For quantification of FAMEs, methyl undecanoate was used as an internal standard. The FA results are expressed as a percentage of total FAMEs. Statistics The results of total lipids were expressed as means with standard deviations (SD) of each sample. Each sample was analyzed in triplicate (n = 3). Statistical differences among the samples were estimated by unpaired t-test and a probability value of p < 0.05 was considered to be statistically significant. Statistical analysis was performed using the StatPlus:mac LE Version 2009 software (AnalystSoft Inc., Atlanta, GA, USA). The analytical FA composition results are expressed as the average of six analyses (n = 6). Conclusions This study has examined nine commercially available edible cyanobacterial, microalgal and seaweed products and, moreover, autotrophically cultivated samples of the green microalga Chlorella kessleri and the cyanobacterium Spirulina platensis. Lipid content and FA profiles were determined using two different solvents, a mixture of methanol/chloroform/water (1:2:1, v/v/v, solvent I) and hexane (solvent II). In addition, yields of lipids and FA profiles after the extraction with different solvents were compared and, furthermore, comparison of data obtained from the determination of microalgal and cyanobacterial products and autotrophically cultivated microalga and cyanobacterium was accomplished. 
Evidently, edible microalgal and cyanobacterial products contained a higher proportion of lipids than edible seaweed products using both solvent systems, and the highest lipid content was observed in autotrophically cultivated C. kessleri and S. platensis. From the lipid content point of view, the cultivated algae appear to be a better source of lipids than analyzed processed algal products. The highest content of PUFAs, especially ω-3 FAs, was determined in the microalgal product of the green alga C. pyrenoidosa and two products of the brown seaweed U. pinnatifida (W, WI). Even though fresh microalgae and unprocessed algae usually contain higher amounts of lipids, the dried edible microalgal product of C. pyrenoidosa examined in this work had a relatively high total lipid content and the highest level of PUFAs, especially ω-3 FAs. This investigation of edible cyanobacterial, microalgal and seaweed products and cultivated algae attested to the presence of health-promoting nutrients, such as PUFAs, especially essential ω-3 FAs, and this fact makes them a useful food supplement.
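For the statistical comparison described in the Experimental section (unpaired t-test on triplicate determinations, significance at p < 0.05), a minimal sketch with placeholder lipid contents might look as follows; the numbers are not measured values.

```python
from scipy import stats

# Triplicate total-lipid determinations (% dry weight) for one sample extracted
# with solvent I and with solvent II; values are placeholders, not data.
solvent_I  = [18.1, 17.9, 18.1]
solvent_II = [13.3, 13.5, 13.4]

t_stat, p_value = stats.ttest_ind(solvent_I, solvent_II)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```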
4,635.4
2014-02-01T00:00:00.000
[ "Chemistry", "Environmental Science" ]
Optimal Kernel ELM and Variational Mode Decomposition for Probabilistic PV Power Prediction : A probabilistic prediction interval (PI) model based on variational mode decomposition (VMD) and a kernel extreme learning machine using the firefly algorithm (FA-KELM) is presented to tackle the problem of photovoltaic (PV) power for intra-day-ahead prediction. Firstly, considering the non-stationary and nonlinear characteristics of a PV power output sequence, the decomposition of the original PV power output series is carried out using VMD. Secondly, to further improve the prediction accuracy, KELM is established for each decomposed component and the firefly algorithm is introduced to optimize the penalty factor and kernel parameter. Finally, the point predicted value is obtained through the summation of predicted results of each component and then using the nonlinear kernel density estimation to fit it. The cubic spline interpolation algorithm is applied to obtain the shortest confidence interval. Results from practical cases show that this probabilistic prediction interval could achieve higher accuracy as compared with other prediction models. Introduction Driven by the global severe condition of fossil fuel depletion and growing environmental pollution, as an environmentally friendly renewable energy, photovoltaic (PV) generation is an exemplar of widely used power generation methods in the renewable energy industry. PV generation is susceptible to surface solar irradiance, and its output is strongly random, which challenges frequency regulation, peak load regulation, and system reserve. With the increase in grid integration capacity, the randomness of PV generation brings more and more risks to power system scheduling and operation. More accurate prediction of PV power can provide a reliable basis for power grid dispatching decisions [1]. It is of significant importance to ensure system security, stability, and optimal operation. There are four main techniques to predict PV power output, namely physical, artificial intelligence (AI), statistical, and hybrid approaches [2][3][4]. The physical method uses numerical weather prediction (NWP) data and measured data. The statistical method establishes a relationship between historical data and forecasted variables based on data-driven formulations such as regression models [5,6], time series [7], and cluster analysis of clearness index [8,9]. For AI methods, there are artificial Table 1. Literature review on recent works. Variational Mode Decomposition Principle Signal decomposition is often used in hybrid prediction methods. The common signal decomposition techniques are wavelet packet decomposition (WPD) [30] and empirical mode decomposition (EMD) [31]. EMD can decompose signals into different frequency characteristics and solve the envelope and instantaneous frequency decomposition based on Hilbert-Huang transform. Although Hilbert-Huang transform is an effective decomposition method, EMD also has some drawbacks, such as unavailability of a rigorous mathematical model, interpolation option, and is sensitive to sampling and noise. The authors in [31] described that EMD exhibits a lack of "sparseness" characteristics, whereas VMD could probably render a slightly higher degree of sparseness than EMD. To overcome these limitations, in 2014, an alternative multiresolution technique named variational mode decomposition (VMD) was presented [32]. VMD is a model of entirely non-recursive variational, and the modes are simultaneously determined. 
The VMD model looks for multiple modes and their center frequencies. The band-limited modes recreate the input signal precisely or in a least squares manner. The instantaneous frequency of each analytic signal has practical physical significance in VMD, whereas this is not the case for EMD. In this paper, the VMD is selected as the signal decomposition method. VMD aims to decompose the original series u into a series of band-limited modes u k , where individual mode compacts a center frequency ω k identified in the decomposition. Individual mode u k for the bandwidth can be calculated with the procedures presented in [33]. Kernel Extreme Learning Machine A single-hidden layer feedforward neural network (SLFN) structure is an extreme learning machine (ELM), whose input weights and hidden layer biases are chosen arbitrarily. Well-known neural network learning methods including the back propagation (BP) algorithm require the user to manually establish a significant number of training parameters, and such procedure gives a prediction output that can be of local optimum. Differently, the ELM requires to establish the number of hidden layer nodes for the model without the task to modify the bias of the hidden layer units and the network's input weights. The Moore-Penrose generalized inverse matrix theory is used to obtain the optimal output weights for ELM. Moreover, to minimize the training error, its output weights can be solved by only one step. It has the characteristic of rapid learning and great generalization capability [34][35][36]. However, the number of its hidden nodes is difficult to be determined and its output is of stochastic volatility. By introducing the kernel function to ELM and comparing with SVM theory, the kernel extreme learning machine (KELM) algorithm was developed. KELM enhances the algorithm's learning accuracy [37]. The detailed proof process can be referred to references [38,39]. For N random sample (x i ,t i ), where t i is the relating target class label of the training sample x i , x i ∈ R n , t i ∈ R n , i = 1, 2, . . . , n. The ELM's output function is [39] is the output, a row vector of the hidden layer concerning the input x, which for the relationship for the samples from the input space to the hidden layer feature space, L is the number of hidden layers, H is the hidden layer output matrix, I is unit sparse, C is penalty coefficient, and T is the output matrix of the SLFN. If the feature mapping h(x) is unknown, then a kernal matrix for ELM needs to be defined. The output of the KELM model can be calculated by Equation (2) below. where Ω ELM = h(x i ) · h x j = k x i , x j is a kernel function. In Equation (2), the hidden layer feature mapping h(x) does not required to be defined and the function does not identify the number of hidden layers L. The kernel k(u, v) that substitutes h(x) and L needs to be defined. The stable kernel function substitutes the ELM's arbitrate mapping and the output weight becomes robust. KELM constitutes an enhanced generalization capability than ELM. Various kernel functions exist such as polynomial kernel, linear kernel, and radial basis function (RBF) kernel. RBF kernel shows great learning capability in practical challenges and the amount of unknown parameters is less compared to polynomial kernel. Therefore, the RBF kernel function is considered in this work. 
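A minimal KELM of the form described above, with an RBF kernel and the regularized output weight beta = (I/C + Omega)^-1 T, can be sketched as below. The exact parameterization of the RBF kernel (here exp(-g * squared distance)) and the data shapes are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(A, B, g):
    """RBF kernel matrix K[i, j] = exp(-g * ||A_i - B_j||^2); one common
    convention for the kernel parameter g, which may differ from the paper's."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-g * d2)

class KELM:
    """Minimal kernel extreme learning machine: beta = (I/C + Omega)^-1 T."""
    def __init__(self, C=100.0, g=1.0):
        self.C, self.g = C, g

    def fit(self, X, T):
        self.X = np.asarray(X, float)
        omega = rbf_kernel(self.X, self.X, self.g)
        n = omega.shape[0]
        self.beta = np.linalg.solve(np.eye(n) / self.C + omega, np.asarray(T, float))
        return self

    def predict(self, X_new):
        return rbf_kernel(np.asarray(X_new, float), self.X, self.g) @ self.beta

# Hypothetical usage: X columns = [solar radiation, temperature], y = PV power.
# model = KELM(C=100.0, g=0.5).fit(X_train, y_train); y_hat = model.predict(X_test)
```

In the paper the penalty coefficient C and kernel parameter g are then tuned with the firefly algorithm; any derivative-free optimizer that evaluates validation error for candidate (C, g) pairs could stand in for it in this sketch.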
The RBF kernel function and output weight of the KELM model is written as where g is the kernel parameter, and β is the output weight between the output and hidden layers. The ELM with the kernel function overcomes the shortcomings of the traditional ELM [38], but its performance is affected by the penalty coefficient C and g. Therefore, the firefly algorithm (FA) is chosen to optimize the parameters. Optimization of KELM's Parameters through FA Since the kernel parameter and penalty coefficient of KELM are the two major factors affecting the prediction performance, this paper selects the firefly algorithm which is characterized with fewer parameters, better stability, and convergence to optimize these two parameters for further enhancing the prediction model's accuracy. FA is a swarm optimization technique developed by simulating the flashing characteristics of fireflies in nature [40]. In the algorithm, each firefly represents a solution of the solution space and is randomly distributed in the searching space. Each firefly individual has its own brightness and neighborhood, known as the radius of the decision domain, r i d 0 < r i d < r s , where r s is the visual range of the firefly. To simplify the problem, the following three hypotheses are considered to describe FA: (1) All fireflies are unisex. A firefly can be attracted to other fireflies of either sex; (2) The firefly's brightness is relevant to the analytical expression of the objective function. To solve the maximization problem, brightness is considered to be a proportion to the objective function's value. Alternative forms of brightness could be established in a similar approach to the fitness function in some optimization techniques; (3) Attractiveness is related to the firefly's brightness. The dimmer firefly will approach the brighter firefly for two flashing fireflies. Attractiveness is related to the brightness of the fireflies and will reduce with increasing distance between fireflies. A firefly will move arbitrarily in the space if there are no fireflies brighter than itself. The details for the FA are given in [41]. The solution to the KELM based on FA optimization is shown in Algorithm 1. Initialize: population of fireflies x i (i = 1, 2, . . . , n), the number of iterations t ← 1 Set light absorption coefficient γ, MaxGeneration, calculation dimension d, the attractiveness β, the randomization parameter α while (t<MaxGeneration ) for i = 1:n all n fireflies for j = 1:n all n fireflies end for j end for i Rank the fireflies and determine the optimal solution end while Output the optimum C, g Influential Factors of PV Output PV output forecasting is a complex nonlinear problem. Therefore, environmental factors and weather conditions should be considered. In this paper, different weather conditions including solar radiation, wind speed, and temperature are studied. The full extent of the impact of these weather conditions on PV power output is analyzed [42][43][44]. PV Output for Different Weather Conditions The PV outputs as shown in Figure 1 under the different weather conditions, such as three main weather conditions, sunny, cloudy, and rainy day in 2015, are selected from the Ashland station in Oregon, USA. We can conclude that, from Figure 1, the PV output is stable: basically, there is normal distribution on sunny days. On cloudy days, the PV output forecasting becomes more challenging with greater randomness and volatility of solar radiation. 
The PV output is changing all the time on rainy days, and the average output of PV power is very small. This situation will affect the safety and stability of PV power plant operation [45]. Therefore, it is essential to classify a large number of PV output historical data according to their weather conditions to enhance the prediction's accuracy. The Influences of Different Meteorological Factors on PV Power Output Different meteorological factors can change the PV power output. If we take all terms into account, the complexity of prediction will be increased. Historical meteorological data encompassing solar radiation, temperature, and wind speed, together with the PV power outputs for the whole year of 2015, are selected, and the Kendall correlation coefficient method is applied to analyze the factors affecting PV power [46,47]. The result of the Kendall rank correlation coefficients is provided in Table 2. As shown in Table 2, the PV power output has a maximum correlation with solar radiation, which is 0.977. The correlation between PV power output and the temperature is 0.265, which is a weak correlation. Further, the correlation between PV power output and wind speed is 0.094, which is negligible. All three factors are positively related to PV power output. As a result, solar radiation and temperature are chosen as the input variables of the FA-KELM model. Prediction Interval Evaluation Indices The prediction interval results are composed of the upper and lower boundaries and correspond to certain expected confidence levels, which are different from point forecasts. The common predictive indices used here are given in Appendix A. Kernel Density Estimation (KDE) In order to make the interval coverage as high as possible and the average interval width as small as possible, the kernel density distribution is estimated from the predicted error between the point predicted value and the actual value of PV power. For a specific predicted error e, its probability density function can be written as [30,48,49] $f(e) = \frac{1}{Nh}\sum_{i=1}^{N} k\left(\frac{e-\hat{e}_i}{h}\right)$, where k(x) is the kernel function (the Gauss kernel function is used), $\hat{e}_i$ represents the point predicted error samples for PV power, N is the forecast sample size, and h is the bandwidth. In this paper, the bandwidth is calculated by the equation given in [50], where σ stands for the sample standard deviation and FIQR is the interquartile sample range. If the kernel density estimation is used for each data point, the computation will increase as the sample number increases. This work therefore allocates the power prediction errors into equal power intervals. Considering that the length of the power section is ∆P and the range of power fluctuation is [P 1 , P h ], the i-th section is $[P_1+(i-1)\Delta P,\; P_1+i\Delta P]$, where i = 1, 2, . . . , l and l is the number of sectors. Moreover, l is calculated as $l=(P_h-P_1)/\Delta P$. According to the PV power point prediction value, the probability density curve of the power prediction error is computed by kernel density estimation, and then the PV power prediction interval is obtained under a certain confidence level.
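The error-density step just described can be sketched as follows. The Gaussian kernel matches the text, while the Silverman-type bandwidth rule (built from the sample standard deviation and interquartile range named above) and the placeholder error sample are assumptions, since the exact bandwidth equation of Ref. [50] is not reproduced here.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def kde_pdf(errors, grid, h):
    """f(e) = 1/(N h) * sum_i k((e - e_i)/h): density of the point-prediction errors."""
    errors = np.asarray(errors, float)
    return gaussian_kernel((grid[:, None] - errors[None, :]) / h).sum(1) / (errors.size * h)

def rule_of_thumb_bandwidth(errors):
    """Common Silverman-type rule using the quantities named in the text;
    the paper's equation from Ref. [50] may differ."""
    errors = np.asarray(errors, float)
    iqr = np.subtract(*np.percentile(errors, [75, 25]))
    return 0.9 * min(errors.std(ddof=1), iqr / 1.34) * errors.size ** (-0.2)

errors = np.random.default_rng(0).normal(0.0, 0.05, 200)   # placeholder error sample
h = rule_of_thumb_bandwidth(errors)
grid = np.linspace(errors.min() - 3 * h, errors.max() + 3 * h, 400)
pdf = kde_pdf(errors, grid, h)
cdf = np.cumsum(pdf); cdf /= cdf[-1]                        # numerical CDF F(eps)
```

Interpolating this numerical CDF at alpha/2 and 1 - alpha/2 (the paper uses cubic spline interpolation) then gives the error quantiles that are added to the point forecast, as detailed in the steps that follow.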
Suppose that the probability distribution function of a PV power prediction error e is F(ε), where ε is a random variable of prediction error, then the probability prediction range of the actual value of PV power under a given confidence level 1 − α is whereP is the point prediction value, and G(α 1 ) and G(α 2 ) are both the inverse functions of the probability distribution function F(ε). Through the sensitivity analysis, it is found that when α 1 = α/2 and α 2 = 1 − α 1 , the confidence interval is minimum. The practical implementation of the prediction interval under a certain confidence level is summarized as follows: Step 1: Determine the corresponding power interval for a predicted value; Step 2: Find the error probability density curve corresponding to the above interval, and solve the corresponding probability distribution function with the integral; Step 3: Adopt the cubic spline interpolation method to fit the probability distribution curve of the prediction error. Then, solve the α/2 and 1 − α/2 intervals of the prediction error for the PV output; Step 4: Calculate the prediction interval for the power output according to Equation (8). Probabilistic Prediction Interval Model A probabilistic prediction interval is formed under a certain confidence level, and the interval predicted value is calculated by determining the lower and upper bounds of the confidence interval, which will provide more comprehensive information for power system decision-makers. The flowchart is shown in Figure 2 below. The procedure can be explained as follows: Step 1: Preprocess the power data of the PV power plant with normalization. where x i represents the original input data, x max = max(x), x min = min(x), u ∈ [0, 1] is the normalized data, and i = 1, 2, . . . , m, m is the total input number; Step 2: Use the clustering algorithm to classify the original PV power data into sunny, cloudy, rainy, and other weather conditions; Step 3: Use the VMD algorithm to decompose different weather conditions data, then the subseries is divided into training set and testing set; Step 4: Add the corresponding meteorological data into the data set from Step 3, and the FA-KELM algorithm is adopted to create the forecasting model of intra-day-ahead. Finally, the subseries will be added to obtain the final point prediction value; Step 5: According to the error between the point prediction value and the actual value used to estimate the kernel density distribution, the confidence interval under a certain confidence level will be obtained by the cubic spline interpolation method. Then, the prediction interval model is built; Step 6: Analyze the predicted results according to the prediction interval evaluation indices. The procedure can be explained as follows: Step 1: Preprocess the power data of the PV power plant with normalization. where represents the original input data, is the normalized data, and = 1, 2, ..., , is the total input number; Step 2: Use the clustering algorithm to classify the original PV power data into sunny, cloudy, rainy, and other weather conditions; Step 3: Use the VMD algorithm to decompose different weather conditions data, then the subseries is divided into training set and testing set; Step 4: Add the corresponding meteorological data into the data set from Step 3, and the FA-KELM algorithm is adopted to create the forecasting model of intra-day-ahead. 
Finally, the subseries will be added to obtain the final point prediction value; Step 5: According to the error between the point prediction value and the actual value used to estimate the kernel density distribution, the confidence interval under a certain confidence level will be obtained by the cubic spline interpolation method. Then, the prediction interval model is built; Step 6: Analyze the predicted results according to the prediction interval evaluation indices. Data Collection The data from the Ashland PV power plant with the capacity of 15 kW per unit in Oregon, USA (latitude: 42.19; longitude: 122.70; altitude: 595 m) are selected for modeling and evaluating the prediction performance. The research duration time is between 6 a.m. and 6 p.m. from 1 January 2015 to 31 December 2015, in which some "incomparable" data of 11 days are found by comparing and analyzing the original data [51]. Then, these data are removed in the prediction. The database from the website contains data for every 5 min, and we derive the average data value for every 15 min. We collect data for 12 h per day, that is, there are 48 data points per day with a sampling rate of 15 min per sample. The corresponding weather information per day was obtained from the historical weather data through the website given in reference [52]. The data which fluctuate excessively and irregularly are eliminated. The data sets are normalized to the interval of [0, 1]. In the meantime, 75% of the data are treated as the training set, while the remaining 25% is regarded as the testing set. The original sample data of the PV power series in April 2015 are shown in Figure 3 below. Cluster Analysis The self-organizing map (SOM) neural network [53,54] consists of an input layer and a competition layer. The main idea is that neurons in the competition layer of the network will compete with each other in order to gain the response opportunity to input variables. Finally, only one neuron will win. Usually, it is mainly used for clustering. Considering the randomness and intermittent nature of PV power, it will generate significant errors if we make predictions directly through the original data. Therefore, according to the already known weather conditions, cluster analysis for the original data sample in the year 2015 is conducted through the SOM neural network.
Its clustering distribution is shown in Figure 4, and the clustering results are shown in Table 4. VMD Decomposition In this paper, VMD is used to decompose the original PV power data into finite subseries, then build the prediction model for each subseries, which can reduce the non-stationary characteristic of the prediction data. VMD transfers signal decomposition to the framework of variational theory, and the optimal solution of the variational model is obtained by iterative calculation to determine the center frequency and bandwidth of each mode. The sum of all modes is the source signal. Empirical mode decomposition (EMD) is another popular decomposition method. It decomposes signals into characteristic modes. Further, it has the advantage of not using any defined functions as a basis, but instead adaptively generating intrinsic mode functions based on the analyzed signals. On sunny days, the data collected from the first 125 days out of 160 days are treated as the training sets, and the data from the last 35 days are regarded as testing sets, which are obtained from the clustering algorithm of Section 5.2. The main parameters to be set for the VMD program include the penalty factor α, discriminant precision e, and mode number K. K has an influence on the decomposition process: the signal will be under-decomposed when K is too small and over-decomposed otherwise. A simple and effective process is adopted to determine the mode number. A sensitivity analysis is conducted on the K value ranging from 3 to 10 with the parameters in the VMD-FA-KELM algorithm presented in Table 4. The analysis shows that when K is greater than 6, the central frequencies of the modes become similar. Therefore, choosing K = 6 is the optimal choice. VMD decomposes the PV power series with the decomposition results for three weather conditions, i.e., sunny, cloudy, and rainy, presented in Figure 5. IMF1 to IMF6 represent the subseries. It is worth noting that only six IMFs have been plotted as determined by the algorithm. Table 3 depicts the number of days for other weather conditions including cloudy, rainy, and fog.
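The decomposition and mode-number check described above can be sketched as follows, assuming the open-source vmdpy implementation of VMD and a hypothetical input file; the call signature and the parameter values other than K = 6 are assumptions, not the settings of Table 4.

```python
import numpy as np
# vmdpy is one open-source VMD implementation (pip install vmdpy); the call
# signature below follows its documented interface and is an assumption here.
from vmdpy import VMD

power = np.loadtxt("pv_power_sunny.txt")   # hypothetical 15-min PV power series
alpha, tau, DC, init, tol = 2000, 0.0, 0, 1, 1e-7

# Sensitivity check on the mode number K: when the highest centre frequencies
# of successive K start to crowd together, the series is being over-decomposed.
for K in range(3, 11):
    u, u_hat, omega = VMD(power, alpha, tau, K, DC, init, tol)
    print(K, np.sort(omega[-1]))           # converged centre frequencies

# With K = 6 chosen as in the paper, each row of u (IMF1..IMF6) is forecast
# separately and the per-mode predictions are summed to give the point forecast.
u, u_hat, omega = VMD(power, alpha, tau, 6, DC, init, tol)
```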
Simulation Results under Different Confidence Levels To validate the proposed approach, the simulation results under different confidence levels are compared in this section. Sample data of 16 sunny days, 12 cloudy days, and 7 rainy days were obtained from the cluster analysis of Section 5.2 to get the corresponding point prediction value. Based on this, the prediction error is obtained, and the prediction interval model based on the kernel density estimation is then produced. The relevant parameter settings of the proposed model are shown in Table 4 below. Based on the VMD-FA-KELM algorithm and kernel density estimation, the prediction interval results of sunny, cloudy, and rainy weather for each two-day period at 90% and 70% confidence levels are shown in Figures 6 and 7, respectively, where the daily time interval is from 6 a.m. to 6 p.m. Further, the corresponding prediction interval indices are given in terms of the prediction interval coverage probability (PICP) and prediction interval normalized averaged width (PINAW). As shown in Figures 6 and 7, we can see that the actual PV power almost always falls within the 90% confidence level, and a small part is outside the 70% confidence level. This shows that the smaller the confidence level is, the narrower the interval width is. In order to satisfy the corresponding confidence level, it is shown that when the confidence level is high, the average width of the interval gets wider. As such, the probability of the real PV power falling into the prediction interval is greater. Moreover, when the confidence level is low, the average width of the interval is narrower. Then, the probability of the real PV power falling into the prediction interval is smaller. As the confidence level decreases gradually, the corresponding prediction interval normalized averaged width will decrease, and the coverage probability will decrease as well. The width of the sunny prediction interval is relatively narrow, which indicates that the sunny data are more stable, and the prediction interval accuracy is higher than in cloudy weather and rainy weather. Further, on the same day, the larger the point predicted error is, the greater the range of error fluctuation is. Moreover, at noon, the error is the largest, and the interval is the widest.
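The two interval indices used above are straightforward to compute in their standard form; the definitions below follow the usual conventions (PINAW normalized by the range of the measured power) and may differ in detail from the definitions in Appendix A.

```python
import numpy as np

def picp(y, lower, upper):
    """Prediction interval coverage probability: share of actual values inside the PI."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    return np.mean((y >= lower) & (y <= upper))

def pinaw(y, lower, upper):
    """Prediction interval normalized averaged width, normalized by the target range."""
    y, lower, upper = map(np.asarray, (y, lower, upper))
    return np.mean(upper - lower) / (np.max(y) - np.min(y))
```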
This shows that the smaller the confidence level is, the narrower the interval width is. In order to satisfy the corresponding confidence level, it is shown that when the confidence level is high, the average width of the interval gets wider. As such, the probability of real PV power falling into the prediction interval is greater. Moreover, when the confidence level is low, the average width of the interval is narrower. Then, the probability of real PV power falling into the prediction interval is smaller. As the confidence level decreases gradually, the corresponding prediction interval normalized averaged width will decrease, and the coverage probability will decrease as well. The width of the sunny prediction interval is relatively narrow, which indicates that the sunny data are more stable, and the prediction interval accuracy is higher than cloudy weather and rainy weather. Further, on the same day, the larger the point predicted error is, the greater the range of error fluctuation is. Moreover, at noon, the error is the largest, and the interval is the widest. The comparisons in Table 6 show that whether it is sunny, cloudy, or rainy, the interval width of the 90% confidence level is wider than that under the 70% confidence level, and the corresponding prediction interval coverage probability is also higher than the 70% confidence level by more than 10%. The width of the prediction interval is minimum in sunny weather, and the prediction interval coverage probability is higher than cloudy and rainy days. As shown in Figures 6 and 7, we can see that the actual PV power almost falls within the 90% confidence level, and a small part is outside the 70% confidence level. This shows that the smaller the confidence level is, the narrower the interval width is. In order to satisfy the corresponding confidence level, it is shown that when the confidence level is high, the average width of the interval gets wider. As such, the probability of real PV power falling into the prediction interval is greater. Moreover, when the confidence level is low, the average width of the interval is narrower. Then, the probability of real PV power falling into the prediction interval is smaller. As the confidence level decreases gradually, the corresponding prediction interval normalized averaged width will decrease, and the coverage probability will decrease as well. The width of the sunny prediction interval is relatively narrow, which indicates that the sunny data are more stable, and the prediction interval accuracy is higher than cloudy weather and rainy weather. Further, on the same day, the larger the point predicted error is, the greater the range of error fluctuation is. Moreover, at noon, the error is the largest, and the interval is the widest. The comparisons in Table 6 show that whether it is sunny, cloudy, or rainy, the interval width of the 90% confidence level is wider than that under the 70% confidence level, and the corresponding prediction interval coverage probability is also higher than the 70% confidence level by more than 10%. The width of the prediction interval is minimum in sunny weather, and the prediction interval coverage probability is higher than cloudy and rainy days. Comparison of the Proposed Model with Other Models The proposed VMD-FA-KELM model is compared with three other models, namely VMD-PSO-KELM, WPD-FA-KELM, and VMD-KELM. In the PSO algorithm, the number of iterations is 200, the group size is 60, and the values of learning parameters c 1 and c 2 are both 2. 
The several case studies of point prediction on 11 August 2015 sunny day, 10 May cloudy day, and 14 September rainy weather are shown in Figures 8-10, respectively. Figures 8-10 show that all models have high prediction accuracy in sunny weather, and the PV power forecasting output can fit the actual output better. In cloudy and rainy weather, the prediction results fluctuate violently because the PV power output has more randomness and uncertainty. As shown in Figure 9, the prediction value obtained by the FA-KELM algorithm is closer to the actual value. This means that FA is more optimal for KELM. 10 show that all models have high prediction accuracy in sunny weather, and the PV power forecasting output can fit the actual output better. In cloudy and rainy weather, the prediction results fluctuate violently because the PV power output has more randomness and uncertainty. As shown in Figure 9, the prediction value obtained by the FA-KELM algorithm is closer to the actual value. This means that FA is more optimal for KELM. In order to further compare the proposed method with other methods, the normalized root mean square error index NRMSE e and normalized mean absolute error index NMAE e are used to measure the point prediction error in Table 7, while the constructed PI at the 90% confidence level is summarized in Table 8 in terms of the reliability index, prediction interval coverage probability (PICP), sharpness index, and prediction interval normalized averaged width (PINAW). In order to further compare the proposed method with other methods, the normalized root mean square error index e NRMSE and normalized mean absolute error index e NMAE are used to measure the point prediction error in Table 6, while the constructed PI at the 90% confidence level is summarized in Table 7 in terms of the reliability index, prediction interval coverage probability (PICP), sharpness index, and prediction interval normalized averaged width (PINAW). From the indices results of Table 7, we can see that the proposed model has the smallest point prediction error, and the values of both e NRMSE and e NMAE are below 10% in all weather conditions. This indicates that the proposed model gives the best performance in all models. From Table 6, the proposed prediction interval model at the 90% confidence level achieves a PICP value of 97.96% and a PINAW value of 9.98% in sunny weather. The PICP of the prediction interval for the proposed method meets the corresponding confidence level, that is, the confidence level is greater than 90%, and the width of the prediction interval is the narrowest, which indicates that the proposed prediction interval model can construct the prediction interval effectively and more practically. From the indices results of Table 6, we can see that the proposed model has the smallest point prediction error, and the values of both e NRMSE and e NMAE are below 10% in all weather conditions. This indicates that the proposed model gives the best performance of all the models. From Table 5, the proposed prediction interval model at the 90% confidence level achieves a PICP value of 97.96% and a PINAW value of 9.98% in sunny weather. The PICP of the prediction interval for the proposed method meets the corresponding confidence level, that is, the confidence level is higher than 90%, and the width of the prediction interval is the narrowest, which indicates that the proposed prediction interval model can construct the prediction interval effectively and more practically. 
Further, from Tables 6 and 7, the forecast errors of sunny days are smaller than those of cloudy days and rainy days. Based on the same VMD decomposition technique, the prediction error of the single KELM model is larger than that from the PSO-KELM model and FA-KELM model. This means that after optimizing the penalty parameter C and kernel parameter g, a better KELM model can be obtained. It effectively avoids the random selection of the two parameters of KELM. The FA-KELM prediction is better than PSO-KELM: this implies that the FA algorithm has a stronger global searching ability and generalization ability than the PSO algorithm. Based on the same FA-KELM algorithm, the prediction error decomposed by VMD is less than WPD. This shows that the VMD algorithm can effectively overcome the disadvantages of the selection of wavelet bases and the number of decomposition layers in WPD decomposition. Compared with the undecomposed KELM prediction method, the prediction effect of the decomposed ones is better, which indicates that the VMD algorithm can effectively reduce the non-stationarity of the sequence. At the same time, compared with the common BP neural network, the prediction error of KELM is smaller. In summary, the proposed FA algorithm produces an overall optimum. The above analysis coincides with results reported from Figures 6-10, which can provide more accurate decision-making and ensure better security of a power supply. Potential Application to Power Systems The statistical analysis and comparative study of a large amount of irradiance data under different weather types show that sunny, cloudy, and rainy days are typical weather types. The irradiance variation of these three types is different and has their distinct characteristics, and the probability of these three weather types is higher and covers the weather states corresponding to most of the dates. Without a loss of generality, it aims to explain the framework in a simplified way by reducing the calculation scale of the interval prediction but to demonstrate the benefit gained from this approach. With the advancement of computers, it can be foreseen that the number of clusters could be increased significantly to get a practical solution within the allowable time constraints. It is possible to compare the probability forecast of normal distribution and the proposed probability prediction of kernel density distribution to get a better prediction interval. As reported in [55], microgrids consist of smart buildings with solar panels. The solar energy can be trade with each other in a peer-to-peer approach. This potential application aims to optimize the energy consumption of PV energy merged with energy storage systems (ESSs), such as electric vehicles for a microgrid community of multiple buildings. Commonly, a community may be equipped with PV systems, heat pumps, and multiple sensors, etc. Energy production prediction based on machine learning and short-term weather forecasts can help identifying possible management and optimal usage of various systems (e.g., heating and cooling) to enhance the system operation. For example, neighbors in the Brooklyn Microgrid project produce, consume, and buy power in their community with a transactive energy platform based on blockchain [56]. The platform facilitates distributed energy supply systems that is highly based on renewable-based sources such as solar energy generation for a more resilient, low-carbon, and customer-driven economy to deploy smart cities [57]. 
Energy harvesting from solar energy will be important to have a good prediction of solar irradiance. Conclusions This paper proposed a novel hybrid model for the day-ahead or intra-day-ahead PV power output prediction interval considering the principle of VMD and FA-KELM. The proposed approach shows promising results as compared to existing methods without PV power series decomposition. VMD decomposition has been used for the first time to decompose the PV power series of different weather conditions, which overcomes the disadvantages of the selection of wavelet bases and the number of decomposition layers in WPD decomposition. The decomposing technique is useful in identifying the complexity of the IMFs series. In addition, the hybrid use of VMD and FA-KELM has shown to be an effective method to construct the optimal PIs. In addition to this, to the best knowledge of the authors, it is the first time in applying this integrated approach to solar irradiance prediction. Study results show that the presented hybrid method can give excellent quality of PIs, with significance for practical applications in system operation, planning, and risk assessment. Solar irradiance prediction is a non-linear and non-deterministic problem and the mathematical model will not be obtained easily. As such in this instance, the authors focus on the artificial intelligence approach. With many simulations done, the authors have confidence that they are getting a good solution as compared with other mainstream methods. Further work will be carried out in a sensitivity study to search for a near global optimal solution. Conflicts of Interest: The authors declare no conflict of interest. Appendix A The prediction interval coverage probability (PICP) is calculated based on the number of occurrences that the output value falls inside the PI for a given confidence level α, is given by [46] where a i is a Boolean value, t i represents the predicted target value, and U i and L i are upper and lower bounds of prediction interval, respectively. Moreover, N is the number of the prediction sample. Based on satisfying the confidence level α, the larger the PICP value is, the greater the confidence level is, where the number of actual PV power falling into the prediction interval is larger. The larger the PICP value is, the greater the confidence level is. Then, to account for the fact that, The PI normalized averaged width (PINAW) considers the output value will easily fall inside the wider PI as follows [58]: R depicts the range of the predicted targets. The PINAW value is used to measure the ability of predicted results for describing uncertain information. When PICP is constant, the smaller the value of PINAW is, the better the predicted results are. Normalized mean absolute error (NMAE) and normalized root mean square error (NRMSE) are considered to evaluate the deterministic prediction [59]: where P rated is the rated power of the PV unit,Ŷ i is the point predicted value, and Y i is the actual point value at time i.
9,363.2
2020-07-13T00:00:00.000
[ "Computer Science", "Engineering" ]
Brevundimonas brasiliensis sp. nov.: a New Multidrug-Resistant Species Isolated from a Patient in Brazil ABSTRACT To increase knowledge on Brevundimonas pathogens, we conducted in-depth genomic and phenotypic characterization of a Brevundimonas strain isolated from the cerebrospinal fluid of a patient admitted in a neonatal intensive care unit. The strain was identified as a member of the genus Brevundimonas based on Vitek 2 system results and 16S rRNA gene sequencing and presented a multidrug resistance profile (MDR). Several molecular and biochemical tests were used to characterize and identify the species for in-depth results. The draft genome assembly of the isolate has a total length of 3,261,074 bp and a G+C of 66.86%, similar to other species of the genus. Multilocus sequence analysis, Type (Strain) Genome Server, digital DNA-DNA hybridization, and average nucleotide identity confirmed that the Brevundimonas sp. studied represents a distinct species, for which we propose the name Brevundimonas brasiliensis sp. nov. In silico analysis detected antimicrobial resistance genes (AMRGs) mediating resistance to β-lactams (penP, blaTEM-16, and blaBKC-1) and aminoglycosides [strA, strB, aac(6′)-Ib, and aac(6′)-Il]. We also found AMRGs encoding the AcrAB efflux pump that confers resistance to a broad spectrum of antibiotics. Colistin and quinolone resistance can be attributed to mutation in qseC and/or phoP and GyrA/GyrB, respectively. The Brevundimonas brasiliensis sp. nov. genome contained copies of type IV secretion system (T4SS)-type integrative and conjugative elements (ICEs); integrative mobilizable elements (IME); and Tn3-type and IS3, IS6, IS5, and IS1380 families, suggesting an important role in the development and dissemination of antibiotic resistance. The isolate presented a range of virulence-associated genes related to biofilm formation, adhesion, and invasion that can be relevant for its pathogenicity. Our findings provide a wealth of data to hinder the transmission of MDR Brevundimonas and highlight the need for monitoring and identifying new bacterial species in hospital environments. IMPORTANCE Brevundimonas species is considered an opportunistic human pathogen that can cause multiple types of invasive and severe infections in patients with underlying pathologies. Treatment of these pathogens has become a major challenge because many isolates are resistant to most antibiotics used in clinical practice. Furthermore, there are no consistent therapeutic results demonstrating the efficacy of antibacterial agents. Although considered a rare pathogen, recent studies have provided evidence of the emergence of Brevundimonas in clinical settings. Hence, we identified a novel pathogenic bacterium, Brevundimonas brasiliensis sp. nov., that presented a multidrug resistance (MDR) profile and carried diverse genes related to drug resistance, virulence, and mobile genetic elements. Such data can serve as a baseline for understanding the genomic diversity, adaptation, evolution, and pathogenicity of MDR Brevundimonas. Multidrug-Resistant Brevundimonas brasiliensis sp. nov. Microbiology Spectrum associated with resistance to antibiotics and toxic compounds (16 copper homeostasis, 2 resistance to fluoroquinolones, 1 beta-lactamase, 1 multidrug resistance efflux pump, 1 copper tolerance), while 14 were related to invasion and intracellular resistance (Fig. 1C). The data of whole-genome sequencing, circular representations, and subsystem category distributions are shown in Fig. 1. 
The distribution of protein-coding genes into the cluster of orthologous groups (COG) functional category showed a total of 2,743 genes (Fig. 1D). The majority of known protein-coding genes were associated with "metabolism" (n = 1,013; 36.93%), followed by those related to "cellular processes and signaling" (n = 666; 24.28%), and "information storage and processing" (n = 508; 18.51%). The number of genes associated with "unknown functions" was 556 (20.26%) and with defense was 34 (1.23%) (Fig. 1D). Phylogenetic tree and biochemical analysis. The genomic sequence of Brevundimonas sp. presented only one 16S rRNA gene sequence, indicating that the genome assembly was not contaminated by other organisms. Therefore, a phylogenetic tree was constructed based on the 16S rRNA gene sequence (1,459 bp) of our strain and all 16S rRNA gene sequences (n = 44) of known Brevundimonas species deposited in GenBank. The 16S rRNA reference sequence of Henriciella pelagia strain LA220 was used as an outgroup. The results confirmed that the Brevundimonas sp. represents a member of the genus Brevundimonas. In this initial taxonomic classification, Brevundimonas sp. was most related to Brevundimonas olei with a sequence identity of 99.71% (with 68.5% bootstrap support), followed by Brevundimonas naejangsanensis (BIO TAS2-2) ( Fig. 2A). To define the characteristics of Brevundimonas sp., biochemical tests were performed and compared with Brevundimonas olei, Brevundimonas naejangsanensis, Brevundimonas diminuta, and Brevundimonas vesicularis (Fig. 2B). Unlike B. olei, our strain was oxidase positive and motile. The results also showed that Brevundimonas sp. had a yellow color, it was catalase positive, and it only assimilated Multidrug-Resistant Brevundimonas brasiliensis sp. nov. Microbiology Spectrum L-arabinose. The nonutilization of D-mannitol is unique to our strain when compared with other Brevundimonas species (Fig. 2B). Genetic relatedness. To further determine the taxonomic affiliation of Brevundimonas sp., a multilocus sequence analysis (MLSA) was performed with five housekeeping genes found in complete genomic and reference sequences of Brevundimonas (see Table S3 in the supplemental material). The phylogenetic trees (Fig. 3A) were generated based on the concatenated sequences in the following order: atpD (1,536 bp), recA (1,080 bp), ileS (2,922 bp), rpoD (1,923 bp), and trpB (1,224 bp), which yielded an alignment of 8,684 bp. The MLSA tree exhibited the close association between our Brevundimonas sp. and Brevundimonas naejangsanensis FS1091 (Fig. 3A), followed by Brevundimonas naejangsanensis DSM 23858. A phylogenetic tree based on 19 reference genome sequences and the Brevundimonas sp. was constructed using Type (Strain) Genome Server (TYGS). The TYGS-based results showed that Brevundimonas sp. are most closely related to Brevundimonas naejangsanensis DSM 23858 (Fig. 3D), with dDDH values (formula d4) of 50.8%, also positioning Brevundimonas sp. as a novel species. Although Brevundimonas olei presented .99% 16S rRNA sequence identity with the Brevundimonas sp., it had no housekeeping genes or genome sequence available in genetic sequence database for comparison. Therefore, Brevundimonas olei was not included in MLSA, ANI, dDDH, or TYGS analysis. Genome properties and comparative functional analysis. 
To investigate general evolutionary patterns of genomes, we constructed two phylogenetic trees based on the set of core and accessory genomes of our strain with 49 reference and complete genomes of Brevundimonas deposited in GenBank. The trees were divided into seven showing the relationship between Brevundimonas sp. strain with Brevundimonas reference sequence strains deposited at NCBI. Tree inferred with FastME 2.1.6.1 (147) from GBDP (BLAST genome distance phylogeny method) distances calculated from genomic sequences. Branch lengths are scaled in terms of the GBDP distance formula d5. The numbers above the branches are GBDP pseudobootstrap support values . 60% of 100 replications, with an average branch support of 75.1%. The tree was rooted at the midpoint (148). main clusters, according to topological structure and evolutionary distance. The relative positions of Brevundimonas brasiliensis sp. nov., Brevundimonas naejangsanensis DSM 23858, Brevundimonas naejangsanensis FS1091, and Brevundimonas naejangsanensis B1 species (clade 5) varied between the two trees. Brevundimonas brasiliensis sp. nov. and Brevundimonas naejangsanensis DSM 23858 were segregated under a common node in the core genome tree, although the strains segregated together under distinct nodes in the accessory genome tree ( Fig. 4A and B). Additionally, we found amino acid alterations in PhoP (Arg81Cis) and qseC (Ile283Leu) that mediate resistance to colistin antibiotics, as well as double amino acid substitution in GyrA (S83L and D87H) and single amino acid substitution in GyrB (Leu-466), which are associated with quinolone resistance (see Fig. S1 in the supplemental material). DISCUSSION The prevalence of certain MDR Gram-negative bacteria is increasing dramatically in patient care settings (25). Here, we reported a phenotypic and a systematic genomic characterization of a Brevundimonas clinical strain isolated from the cerebrospinal fluid of an infant admitted in the neonatal intensive care unit (NICU). Although it is considered a rare human pathogen, there has been an increase of infections caused by Brevundimonas spp. in recent years (26,27), including in hospitalized children. Explanations for this include the following. Babies are more vulnerable to colonization and infection with pathogens due to an immature immune system. Novel molecular and phenotypic methods are providing more accurate and robust identification of these pathogens (28)(29)(30). As Brevundimonas spp. are becoming known for their resistance properties to many different antibiotics (10,31,32), we analyzed the resistance profile to the antibiotics most commonly used to treat infections caused by Gram-negative bacteria. The studied Brevundimonas sp. was classified as MDR, presenting resistance to b-lactams, polymyxin, aminoglycosides, and fluoroquinolones. In contrast, we only observed susceptibility to tigecycline. Although the resistance mechanisms in the Brevundimonas genus remain poorly understood (10), it is known that the resistance profile can be highly varied. For instance, Brevundimonas vesicularis and Brevundimonas diminuta are the main species isolated from human infections (26). Studies have reported that both species may be resistant (17,31,32) or susceptible (9,26,33) to most antibiotics tested in this study. Since species identification of the Brevundimonas isolate was not possible with the Vitek 2 system, WGS was performed on the Brevundimonas sp. for a more accurate identification and characterization of the isolate. 
The genome size and GC content were similar to most of the Brevundimonas spp. deposited in NCBI. The RAST and eggNOG analysis showed that most genes were related to cellular processes, which are essential to the bacteria (34). Notably, the genes related to the defense mechanisms present in eggNOG and disease in RAST analysis were associated with a multidrug resistance profile. Preliminary phylogenetic analysis based on 16S rRNA gene sequences confirmed that our strain belongs to the genus Brevundimonas. It shared the highest similarity to Brevundimonas olei MJ15 (99.71%) followed by Brevundimonas naejangsanensis BIO TAS2-2 (99.37%). Although the 16S rRNA gene is widely used to differentiate strains at the genus level (35)(36)(37), it has poor discriminatory power at the species level since 16S rRNA genes are identical or highly homologous among different species (38). To aid bacterial identification, biochemical tests were performed, revealing that our strain is an oxidase positive and motile bacillus, unlike Brevundimonas olei. Multilocus sequence analysis (MLSA) based on several housekeeping genes has become a high-resolution technique to elucidate taxonomic relationship and phylogenetic analysis of closely related strains and subspecies (39,40). The MLSA scheme based on five housekeeping genes (atpD, recA, ileS, rpoD, and trpB) showed that the Brevundimonas sp. isolate was clearly separated from Brevundimonas naejangsanensis FS1091 and Brevundimonas naejangsanensis DSM 23858, indicating a novel species within the Brevundimonas genus. ANI or dDDH analysis has been most widely used as a gold standard for species delineation (24). Studies have reported that dDDH is considered necessary when strains share more than 97% 16S rRNA gene sequence similarity (41,42), as it was observed for Brevundimonas sp., Brevundimonas olei, and Brevundimonas naejangsanensis BIO TAS2-2. To provide more accurate evidence to support that the Brevundimonas sp. strain is a novel species, ANI and dDDH analyses were performed with the Brevundimonas sp. and complete genomic and reference sequences from the genus Brevundimonas available in GenBank. Our data revealed that values for ANI (,95%) and dDDH (,70%) were lower than those generally accepted for species-level, showing that the isolate Brevundimonas sp. represents a novel species. To further validate our results, a phylogenetic tree inferred with genome BLAST distance phylogeny (GBDP) was constructed with the Type (Strain) Genome Server (TYGS), using the strain Brevundimonas sp. and reference sequences deposited in GenBank. The TYGS results also indicate that the strain Brevundimonas sp. is a novel species. Based on these findings, the name Brevundimonas brasiliensis sp. nov. was proposed. To gain insights into similarity and distance within the genus Brevundimonas, we constructed two phylogenetic trees based on the set of core and accessory genomes (19). Brevundimonas brasiliensis sp. nov., B. naejangsanensis DSM 23858, B. naejangsanensis strain FS1091, and B. naejangsanensis B1 were grouped into the same clade in both trees. However, the phylogenetic trees presented a different topology. Brevundimonas brasiliensis sp. nov. showed evolutionary relatedness to the B. naejangsanensis DSM 23858 on the core gene tree, but they were no longer sisters on the accessory genome tree, suggesting that noncore genes were likely to make them diverged. 
KEGG analysis showed that most important pathways in core, accessory, and unique genes among four Brevundimonas strains are associated with "metabolism." Among these genes, most were related to "amino acid metabolism," "carbohydrate metabolism," and "overview," suggesting important roles in the maintenance of cellular function and survival. Important drug resistance genes were identified in unique gene clusters for human disease. Brevundimonas brasiliensis sp. nov. also harbored the highest number of singletons among the four strains, presenting specific genes associated with resistance and virulence genes. Singleton genes such as species-specific or strain-specific genes are those present in only one genome, which are usually acquired by horizontal gene transfer (43). All of these genomic features suggest a high versatility of Brevundimonas species in adapting to a wide range of environments, including health care environments. We checked if the presence of AMRGs corresponded to phenotypic profiles and observed that the b-lactams in Brevundimonas brasiliensis sp. nov can be associated with penP, bla TEM-16 , and bla BKC-1 genes. The penP gene encodes a narrow-spectrum b-lactamase that displays a more effective hydrolysis only of first-and second-generation penicillins and cephalosporins (44)(45)(46). The bla TEM-116 gene has been reported in a variety of clinical isolates (28,47). Studies have related that TEM-116 b-lactamase can confer resistance to ceftazidime, cefotaxime, and aztreonam (48,49). The bla BKC-1 gene encodes a Brazilian Klebsiella carbapenemase (BKC-1) that can confer resistance to penicillins, broad-spectrum cephalosporins, and aztreonam and decreased susceptibility to carbapenems (50). Interestingly, BKC-1 was described for the first time in Brazil in three Klebsiella pneumoniae strains (51) and more recently in a Citrobacter freundii strain (52), further showing that the bla BKC-1 gene is spreading to other pathogens. Colistin resistance in Gram-negative bacteria can be attributed to mutation in PhoPQ, PmrAB, qseC, and plasmid-borne genes, such as mcr and its variants (59)(60)(61). Our strain displayed amino acid alterations at position 283 in qseC (Ile283Leu) and position 81 in phoP (Arg81Cis). Similar mutations have been reported by Pitt et al. (62) as conferring colistin resistance in K. pneumoniae. Resistance to quinolones is frequently acquired by mutations in the quinolone resistance-determining regions (QRDRs) of the target genes, such as gyrA, gyrB, parC, and parE (28, 63). Our strain displayed a double amino acid substitution in GyrA, serine to leucine at codon 83, and aspartic acid to histidine at 87 (GyrA-S83L-D87H). Studies have reported that Ser83-Leu substitution in GyrA is usual, but an additional mutation in codon 87 is associated with higher levels of quinolone resistance than mutations at other codons within the QRDR (63,64). Although the GyrB subunit is less commonly associated with quinolone resistance (65), B. brasiliensis sp. nov presented amino acid substitutions at position 466 in GyrB (Glu466-Leu). Similar findings have been reported in quinolone-resistant B. diminuta (32). Antimicrobial resistance can also be acquired by altered expression of porins leading to decreased penetration of antibiotic within bacteria or increased efflux of antibiotics from the bacterial cell due to overexpression of efflux pump acting synergistically with the outer membrane mutation (66). The oqxBgb gene present in Brevundimonas brasiliensis sp. nov. 
can encode proteins that are part of multidrug efflux pumps responsible for fluoroquinolone resistance (67,68) The acrA-like, acrB-like, and tolC-like genes found in B. brasiliensis sp. nov. encode a well-studied RND-based tripartite efflux pump (AcrAB-TolC) in Escherichia coli, which is able to export chloramphenicol, fluoroquinolone, tetracycline, rifampin, novobiocin, fusidic acid, nalidixic acid, and b-lactam antibiotics (69)(70)(71). Brevundimonas brasiliensis sp. nov. also carried oprM and mexL genes. OprM is the outer membrane component present in Burkholderia vietnamiensis and Pseudomonas aeruginosa (72,73). This outer membrane protein is a component of MexAB-OprM, MexXY-OprM, MexJK-OprM, and MexVW-OprM efflux systems, and it mediates multidrug resistance in P. aeruginosa (74,75). Although mexAB, mexXY, mexJK, and mexVW genes were not found in our strain, the mexL encoded a TetR family repressor (MexL) that is a negative regulator of MexJK expression that can be associated to tetracycline and erythromycin resistance (76)(77)(78). Mobile genetic elements (MGEs) play an important role in the dissemination of antibiotic resistance and emergence of MDR pathogens worldwide (96). Still, the distribution of mobile genetic elements in the Brevundimonas genus remains scarce. In our study, whole-genome assemblies of B. brasiliensis sp. nov. presented several MGEs that can be associated with antibiotic resistance and/or virulence, including transposons, insertions, putative ICE with T4SS, and putative IME. Antibiotic gene cassettes [strA, strB, aac(6')-Ib, aac (6')-Il, sul1, dfrA21], IS6100, and Tn6001 were located closely at scaffold 38. IS6100 plays a role in strA and strB expression in Xanthomonas campestris pv. vesicatoria (97) and have been identified in many bacteria (98). Although the bla VIM-3 gene was not found in our isolate, studies have shown that Tn6001 can contain a bla VIM-3 -harboring integron In450 and is associated to the dissemination of carbapenem-nonsusceptible Pseudomonas aeruginosa and extensively drug-resistant P. aeruginosa (99,100). Bouallègue-Godet et al. (101) showed that dfrA21, which encodes resistance to trimethoprim, may be located in plasmids and inserted as a single resistance cassette in a class I integron of Salmonella enterica. The aac(6')-Ib gene, responsible for most amikacin-resistant strains, is usually found in integrons, transposons, plasmids, and chromosomes of different bacterial species (102-104). Brevundimonas brasiliensis sp. nov. also presented ISKpn23 and putative ICE with T4SS harboring resistance genes (bla BKC-1 and floR) and virulence genes (tufA and/or virB11). Studies have shown that ISKpn23 plays an important role in expression of bla BKC-1 of K. pneumoniae (51,105). The floR gene has been described for the small plasmid p1807 (106) of Glaesserella parasuis and on the multidrug resistance region of an incomplete Tn4371-like integrative and conjugative element (ICE) in the P. aeruginosa chromosome (107). The virB11 virulence gene in our strain was associated with IME and putative ICE with T4SS. Campylobacter jejuni carries the virB11 gene localized on the pVir plasmid that encodes various genes that are homologous to a type IV secretion system (91,108). In our study, plasmid sequences were not detected using WGS, so it is uncertain whether many virulence and resistance genes are localized on plasmids. Furthermore, we could not correlate all the detected antibiotic resistance or virulence genes to MGEs due to the lack of literature. 
In conclusion, we characterized a novel species of Brevundimonas, which is capable of infecting patients admitted to neonatal intensive care units. Since cases of Brevundimonas infection are being reported with increasing frequency, our report provides valuable information on this novel species that may be useful for surveillance, particularly in health care settings. MATERIALS AND METHODS Bacterial isolate. The Brevundimonas sp. was recovered from the cerebrospinal fluid of an infant hospitalized at the Neonatal Intensive Care Unit (NICU) of the Hospital Geral de Palmas, Palmas, Tocantins, Brazil. This isolate was sent to the Central Laboratory of Public Health of Tocantins-Brazil (LACEN/TO/BR), a health care facility from the Brazilian Ministry of Health that receives samples of antimicrobial resistance for surveillance. The sample was sent for identification and antimicrobial susceptibility testing using the Vitek 2 system (bioMérieux, Marcy l'Etoile, France). However, species identification of Brevundimonas sp. was not possible using the Vitek system. Identification of the bacterial isolate at the genus and species level was further analyzed using whole-genome sequencing (WGS) by our research group. We also used 49 representative and complete sequences of Brevundimonas type strains in this study. Data is available in GenBank as of June 2022 (https://www.ncbi.nlm.nih.gov/genbank/) (see Table S1 in the supplemental material). Antimicrobial susceptibility. The drug susceptibility of the Brevundimonas sp. was performed using the Vitek 2 system (bioMérieux, Inc., Hazelwood, MO, United States) following the Clinical and Laboratory Standards Institute guidelines (Clinical and Laboratory Standards Institute) (109). Phenotypic detection for the production of carbapenemases was carried out by modified Hodge test, synergy test, and the EDTA test under the CLSI guidelines (109) as described elsewhere (110)(111)(112)(113). The MIC values of colistin and tigecycline were determined by the broth microdilution method, and results were interpreted based on the European Committee on Antimicrobial Susceptibility Testing (EUCAST, 2021; https://www.eucast.org/) criteria. The Brevundimonas sp. isolate was tested for susceptibility against 16 antibiotics as follows: amikacin, ampicillin, ampicillin/sulbactam, cefepime, cefoxitin, ceftazidime, ceftriaxone, cefuroxime axetil, ciprofloxacin, colistin, ertapenem, gentamicin, imipenem, meropenem, piperacillin-tazobactam, and tigecycline. Multidrug-resistant (MDR) Brevundimonas sp. isolate was defined by nonsusceptibility to at least one agent in three or more antibiotic categories (114). DNA isolation and library preparation for sequencing. Total DNA extraction was performed using the Wizard Genomic DNA purification kit (Promega, Madison, WI, United States). The quantification of DNA was made using NanoVue Plus (GE Healthcare Life Sciences, Marlborough, MA, United States). The integrity of DNA was verified by electrophoresis analysis. Bacterial DNA concentration was also measured fluorometrically (Qubit 3.0, kit Qubit dsDNA broad-range assay kit; Life Technologies, Carlsbad, CA, United States). Samples were submitted to sequencing reaction using 1 ng of total DNA. Nextera XT DNA library prep kit (Illumina, San Diego, CA, United States) was used for library production. The libraries were amplified using a short cycle PCR program. In the first PCR step, the index 1 (i7) adapters and index 2 (i5) adapters were added for sequencing cluster generation. 
The purification of the library was performed using 0.6Â Agencourt AMPure XP beads (Beckman Coulter). For checking the library quality and DNA fragment size, samples were analyzed by electrophoresis on 1.5% agarose gel. The libraries were quantified with a fluorometric method Qubit 3.0 using Qubit dsDNA broad-range assay kit (Life Technologies, Carlsbad, CA, United States) and normalized to 4 nM by standard dilution method. Libraries were pooled, denatured by addition of 0.2 N NaOH, and diluted to the final concentration of 1.8 pM. A PhiX control reaction was made in the final concentration of 1.5 pM. The run-length was a paired-end run of 75 cycles for each read (2 Â 75), plus up to eight cycles each for two index reads. 16S rRNA phylogeny and biochemical identification. We identified a 16s rRNA gene sequence from our genome annotation. All curated 16S rRNA gene sequences from genus Brevundimonas were searched for in the GenBank database (see Table S2 in the supplemental material). The nucleotide sequences of 16s rRNA were aligned using multiple sequence alignment software (MAFFT) (124) (https://www.ebi.ac.uk/Tools/msa/ mafft/). The construction of the maximum likelihood (ML) phylogenetic tree and the selection of the best assembly model were performed using the PhyML v3.0 program (125) and JModelTest (126), respectively. The Brevundimonas sp. was subjected to biochemical tests using the Bactray I, II, III Systems according to the manufacturer's instructions (LaborClin, Paraná, Brazil). The results were compared with other Brevundimonas species reported in the literature. Multilocus sequence analysis. Multilocus sequence analysis (MLSA) was conducted with five housekeeping genes, atpD (beta subunit of ATP synthase), ileS (isoleucina-tRNA ligase), recA (RecA protein), rpoC (DNA-directed RNA polymerase beta subunit), and trpB (beta chain of tryptophan synthase), which were retrieved from Brevundimonas reference species and the complete genome from the NCBI (National Center for Biotechnology Information) (https://www.ncbi.nlm.nih.gov/) (see Table S3 in the supplemental material). The genes were aligned and concatenated in the following order: atpD, recA, ileS, rpoC, and trpB. The phylogenetic tree was built with the PhyML v3.0 program (125) based on the best model chosen by JModelTest (126). Core and accessory genome comparison. The complete and reference genomes of the genus Brevundimonas (see Table S6 in the supplemental material) were analyzed together with Brevundimonas brasiliensis sp. nov. using the Roary pipeline to infer the core and accessory genome trees (130). Genome analysis with OrthoVenn and KEGG. For these analyses, we used the species closest to our strain according to the genomic core tree. Whole-genome comparison analysis of Brevundimonas brasiliensis sp. nov. against the selected genomes of Brevundimonas was performed using the OrthoVenn2 web server (https://orthovenn2.bioinfotoolkits.net) (131). Annotation of high-level functions and other high-throughput metabolism data was performed by Bacterial Pangenome Analysis Pipeline (BPGA) (132) against the Kyoto Encyclopedia Genomics and Genes Database (KEGG) (133). Thus, detailed identification of core genes, accessory genes, and unique genes was possible. Characterization of resistance and virulence factors. The draft genome was screened for the presence of antimicrobial resistance (AMR) genes with the Rapid Annotation using Subsystem Technology server (RAST) (120) (https://rast.nmpdr.org/). 
BLAST was performed using two databases as follows: the comprehensive antibiotic resistance database (CARD; https://card.mcmaster.ca/) (134) and the antibiotic resistance gene ANNotation (ARG-ANNOT database) (135). Ethics statement. In this work, we did not access the medical records of the patient. The Brevundimonas sp. and the anonymous archival data related to sample type were obtained from the Central Laboratory of Public Health of Tocantins (LACEN/TO, data's owner). The studies involving human participants were reviewed and approved by the Committee of Ethics in Human Research of the Federal University of São Carlos (CEHRFUSC), and the need for informed consent for conducting this study was waived by the committee (no. 1.595.268). Patient consent was not required since the data presented in this study do not relate to any specific person or persons. Written informed consent from the participants or their legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and institutional requirements. Permission to conduct the present study was obtained from the Health Department of the State of Tocantins (Secretaria da Saúde do Estado do Tocantins -SESAU) and LACEN/TO. Data availability. The raw reads are available in the Sequence Read Archive under BioProject accession number PRJNA882454. This strain was deposited at the Bacteria Collection from Environment and Health (CBAS) of the Oswaldo Cruz Foundation (FIOCRUZ) (http://cbas.fiocruz.br/), under the (accession number CBAS 910). SUPPLEMENTAL MATERIAL Supplemental material is available online only. SUPPLEMENTAL FILE 1, PDF file, 0.4 MB.
5,905.2
2023-04-17T00:00:00.000
[ "Biology" ]
Projective Crystalline Representations of \'Etale Fundamental Groups and Twisted Periodic Higgs-de Rham Flow This paper contains three new results. {\bf 1}.We introduce new notions of projective crystalline representations and twisted periodic Higgs-de Rham flows. These new notions generalize crystalline representations of \'etale fundamental groups introduced in [7,10] and periodic Higgs-de Rham flows introduced in [19]. We establish an equivalence between the categories of projective crystalline representations and twisted periodic Higgs-de Rham flows via the category of twisted Fontaine-Faltings module which is also introduced in this paper. {\bf 2.}We study the base change of these objects over very ramified valuation rings and show that a stable periodic Higgs bundle gives rise to a geometrically absolutely irreducible crystalline representation. {\bf 3.} We investigate the dynamic of self-maps induced by the Higgs-de Rham flow on the moduli spaces of rank-2 stable Higgs bundles of degree 1 on $\mathbb{P}^1$ with logarithmic structure on marked points $D:=\{x_1,\,...,x_n\}$ for $n\geq 4$ and construct infinitely many geometrically absolutely irreducible $\mathrm{PGL_2}(\mathbb Z_p^{\mathrm{ur}})$-crystalline representations of $\pi_1^\text{et}(\mathbb{P}^1_{{\mathbb{Q}}_p^\text{ur}}\setminus D)$. We find an explicit formula of the self-map for the case $\{0,\,1,\,\infty,\,\lambda\}$ and conjecture that a Higgs bundle is periodic if and only if the zero of the Higgs field is the image of a torsion point in the associated elliptic curve $\mathcal{C}_\lambda$ defined by $ y^2=x(x-1)(x-\lambda)$ with the order coprime to $p$. The nonabelian Hodge theory established by Hitchin and Simpson associates representations of the topological fundamental group of an algebraic variety X over C to a holomorphic object on X named Higgs bundle. Later, Ogus and Vologodsky established the nonabelian Hodge theory in positive characteristic in their spectacular work [19]. They constructed the Cartier functor and the inverse Cartier functor, which give an equivalence of categories between the category of nilpotent Higgs modules and the category of nilpotent flat modules over a smooth proper and W 2 (k)-liftable variety. This equivalence generalizes the classical Cartier descent theorem. Moreover, it is the starting point of the theory of Higgs-de Rham flows in [12] To attach representations of the fundamental group to these O X -linear objects on X, one still needs an analogue of the classical Riemann-Hilbert correspondence. Unfortunately, there is no direct generalization of Riemann-Hilbert correspondence in the characteristic p case. However, in the p-adic case, a good p-adic analogue of the category of polarized complex variations of Hodge structures, the category MF ∇ [a,b] (X /W ), is introduced by Fontaine and Laffaille [6] for X = Spec W (k) in their study of p-adic Hodge theory. The theory of Fontaine and Laffaile was later generalized by Faltings [4] to the general geometric base case. The objects in this theory are called Fontaine modules and consists of a quadruple (V, ∇, Fil, ϕ), where (V, ∇, Fil) is a filtered de Rham bundle over X and ϕ is a relative Frobenius which is horizontal with respect to ∇ and satisfies the strong p-divisibility condition. The latter condition is a p-adic analogue of the Riemann-Hodge bilinear relations. 
Then the Fontaine-Laffaille-Faltings correspondence gives a fully faithful functor from MF ∇ [0,w] (X /W ) (w ≤ p − 2) to the category of crystalline representations of π 1 (X K ), where X K is the generic fiber of X . This can be regarded as a p-adic version of the Riemann-Hilbert correspondence. Faltings [5] has established an equivalence of categories between the category of generalized representations of the geometric fundamental group and the category of Higgs bundles over a p-adic curve, which has generalized the earlier work of Deninger-Werner [2] on a partial p-adic analogue of Narasimhan-Seshadri theory. In order to establish a p-adic analogue of the Hitchin-Simpson correspondence between the category of representations with small coefficients, namely GL r (W n (F q ))-representations, and the category of Higgs bundles, Lan, Sheng and Zuo introduced the notion of Higgs-de Rham flows, which can be considered as an analogue of the Yang-Mills-Higgs flow attached to a Higgs bundle in the complex analytic situation. The latter is used to solve the Yang-Mills-Higgs equation. The stability of the Higgs bundles guarantees the existence of solutions, the so-called Yang-Mills-Higgs connections. Roughly speaking, a Higgs-de Rham flow is a sequence of graded Higgs bundles and filtered de Rham bundles, connected by the inverse Cartier transform defined by the fundamental work of Ogus and Vologodsky and the grading functor by the attached Hodge filtrations on the de Rham bundles (for details see Section 3 in [12] or Section 3.1 in this paper). The following diagram presents a Higgs-de Rham flow over X 1 ( X 1 is the special fiber of X ): A Higgs-de Rham flow is said to be periodic (of period f ∈ N) if f is the smallest integer such that there exists an isomorphism of Higgs bundles φ : (E f , θ f ) ∼ = (E 0 , θ 0 ). For Higgs bundles over X n /W n (k), the key point of defining the Higgs-de Rham flow is to find the suitable lifting C −1 n of C −1 1 over the truncated Witt ring W n (k). This was done by Lan-Sheng-Zuo in section 4 of [12]. Because of the importance of it in lifting the Higgs-de Rham flow to X /W (k), we recall the construction of functor C −1 n briefly in Section 2. Theorem 0.1 (Theorem 1.4 in [12]). Let X be a smooth proper scheme over W . For each integer 0 ≤ w ≤ p−2 and each f ∈ N, there is an equivalence of categories between the category of p-torsion free Fontaine-Faltings modules over X of Hodge-Tate weight ≤ m with endomorphism structure W (F p f ) and the category of periodic Higgs-de Rham flows over X of level ≤ w and whose periods are f . Remark. It is straightforward to generalize Theorem 0.1 to the logarithmic setting. To be more precise, let X be a smooth proper scheme over W and let D ⊂ X be a simple normal crossings divisor relative to W . Then, for each positive integer f , there is an equivalence of categories between the category of strict p n -torsion logarithmic Fontaine modules (with logarithmic structure along D × W n ⊂ X × W n ) with endomorphism structure of W n (F p f ) whose Hodge-Tate weights ≤ p − 2 and the category of periodic logarithmic Higgsde Rham flows over X × W n (with logarithmic structure along D × W n ⊂ X × W n ) whose periods are factors of f and nilpotent exponents are ≤ p − 2. By Theorem 6.6 in [12], a periodic Higgs bundle must have trivial Chern classes. This fact limits the application of the p-adic Hitchin-Simpson correspondence. 
For instance, Simpson constructed a canonical Hodge bundle Ω 1 X ⊕ O X on X in his proof of the Miyaoka-Yau inequality (Proposition 9.8 and Proposition 9.9 in [21]), which has nontrivial Chern classes in general. In fact, the classical nonabelian Hodge theorem tells us that the Yang-Mills-Higgs equation is still solvable for a polystable Higgs bundle with nontrivial Chern classes. Instead of getting a flat connection, one can get a projective flat connection in this case, whose monodromy gives a PGL rrepresentation of the fundamental group. This motivates us to find a p-adic Hitchin-Simpson correspondence for graded Higgs bundles with nontrivial Chern classes. A projective flat connection ∇ on a bundle V over C is a (usual) connection whose curvature has the special form Θ = ω ⊗ Id V , where ω is a rational closed (1, 1)-form representing 1 rk(V ) c 1 (V ). Note that, if [ω] ∈ H 2 (X, Z), then by the Lefschetz theorem on (1, 1)-classes one can actually find a line bundle L with a metric connection ∇ L such that (V, ∇) ⊗ (L, ∇ L ) ∨ becomes a flat bundle. Inspired by this we first introduce the 1-periodic twisted Higgs-de Rham flow over X 1 as follows Here L is called a twisting line bundle on X 1 , and φ L : On the Fontaine module side, we also introduce the twisted Fontaine-Faltings module over X 1 . The latter consists of the following data: a filtered de Rham bundle (V, ∇, Fil) together with an isomorphism between de Rham bundles: . We will refer to the isomorphism ϕ L as the twisted ϕ-structure. The general construction of twisted Fontaine-Faltings modules and twisted periodic Higgs-de Rham flows are given in Section 1.5 and Section 3.2 (over X n /W n (k), and multi-periodic case). Theorem 0.2 (Theorem 3.3). Let X be a smooth proper scheme over W . For each integer 0 ≤ a ≤ p − 2 and each f ∈ N, there is an equivalence of categories between the category of all twisted f -periodic Higgs-de Rham flows over X n of level ≤ a and the category of strict p n -torsion twisted Fontaine-Faltings modules over X n of Hodge-Tate weight ≤ a with an endomorphism structure of W n (F p f ). Theorem 0.2 can be generalized to the logarithmic case. The precise statement is as follows. Theorem 0.3 (Theorem 3.4). Let X be a smooth proper scheme over W with a simple normal crossing divisor D ⊂ X relative to W . Then for each natural number f ∈ N, there is an equivalence of categories between the category of strict p n -torsion twisted logarithmic Fontaine-Faltings modules (with pole along D×W n ⊂ X ×W n ) with endomorphism structure of W n (F p f ) whose Hodge-Tate weight ≤ p − 2 and the category of twisted f -periodic logarithmic Higgs-de Rham flows (with pole along D × W n ⊂ X × W n ) over X × W n whose nilpotent exponents are ≤ p − 2. One of our goals is to associate a PGL n -representation of π 1 to a twisted (logarithmic) Fontaine-Faltings module. To do so, we will need to generalize Faltings's work. Following Faltings [4], we construct a functor D P in section 2.5, which associates to a twisted (logarithmic) Fontaine-Faltings module a PGL n representation of theétale fundamental group. Theorem 0.4 (Theorem 2.10). Let X be a smooth proper geometrically connected scheme over W with a simple normal crossing divisor D ⊂ X relative to W . Suppose F p f ⊂ k. Let M be a twisted logarithmic Fontaine-Faltings module over X (with pole along D) with endomorphism structure of W (F p f ). Applying D P -functor, one gets a projective representation In Section 3.4, we study several properties of this functor D P . 
For instance, we prove that a projective sub-representation of D P (M ) corresponds to a sub-object N ⊂ M such that D P (M/N ) is isomorphic to this subrepresentation. Combining this with Theorem 3.3, we infer that a projective representation coming from a stable twisted periodic Higgs bundle (E, θ) with (rank(E), deg H (E)) = 1 must be irreducible. The next theorem gives a p-adic analogue of the existence of projective flat Yang-Mills-Higgs connection in terms of semistability of Higgs bundles and triviality of discriminant. Theorem 0.5 (Theorem 3.10). A semistable Higgs bundle over X 1 initials a twisted preperiodic Higgs-de Rham flow if and only if it is semistable and has trivial discriminant. Consequently we obtain the following theorem on the existence of nontrivial representations ofétale fundamental group in terms of the existence of semistable graded Higgs bundles. Theorem 0.6 (Theorem 3.14). Let k be a finite field of characteristic p. Let X be a smooth proper geometrically connected scheme over W (k) together with a smooth log structure D/W (k). Assume that there exists a semistable graded logarithmic Higgs bundle Finally we give two applications in Section 4 to show how the general machinery developed in the previous sections works in some concrete situations. Taking the moduli space M of graded stable Higgs bundles of rank-2 and degree 1 over P 1 with logarithmic structure on m(> 3) marked points we show that the self map induced by Higgs-de Rham flow stabilizes the component M (1, 0) of M of maximal dimension (dim = m−3 ) as a rational and dominant map. Hence by Hrushovski's theorem [8] the subset of periodic Higgs bundles is Zariski dense in M (1, 0). In this way we produce infinitely many PGL 2 (F p f )-crystalline representations, which are irreducible in PGL 2 (F p ). By Theorem 3.14, all these representations lift to PGL 2 (Z ur p )crystalline representations. For the case of 4 marked points {0, 1, ∞, λ} we state an explicite formula for the self map and use it to study the dynamic of Higgs-de Rham flows for p = 3 and several values of λ. In the last subsection 4.5, we consider a smooth projective curve X over W (k) of genus g ≥ 2. In the Appendix of [20], de Jong and Osserman have shown that the subset of twisted periodic vector bundles over X 1 in the moduli space of semistable vector bundles over X 1 of any rank and any degree is always Zariski dense. By applying our main theorem for twisted periodic Higgs de Rham flows with zero Higgs fields, which should be regarded as projectiveétale trivializible vector bundles in the projective version of Lange-Stuhe's theorem (see [14]), they all correspond to PGL r (F p f )-representations of π 1 (X 1 ). Once again we show that they all lift to PGL r (Z ur p ) of π 1 (X 1 ). It should be very interesting to make a comparison between the lifting theorem obtained here lifting GL r (F p f )-representations of π 1 (X 1 ) to GL r (Z ur p )-representation of π 1 (X 1F p ) and the lifting theorem developed by Deninger-Werner [2]. In their paper, they have shown that any vector bundle over X /W which isétale trivializible over X 1 lifts to a GL r (C p )representation of π 1 (X K ). Twisted Fontaine-Faltings modules In this section, we will recall the definition of Fontaine-Faltings modules in [4] and generalize it to the twisted version. 1.1. Fontaine-Faltings modules. Let X n be a smooth and proper variety over W n (k). And (V, ∇) is a de Rham sheaf (i.e. a sheaf with an integrable connection) over X n . 
In this paper, a filtration Fil on (V, ∇) will be called a Hodge filtration of level in [a, b] if the following conditions hold: and locally on all open subsets U ⊂ X n , the graded factor Fil -Fil satisfies Griffiths transversality with respect to the connection ∇. In this case, the triple (V, ∇, Fil) is called a filtered de Rham sheaf. One similarly gives the conceptions of (filtered) de Rham modules over a Walgebra. 1.1.1. Fontaine-Faltings modules over a small affine base. Let U = SpecR be a small affine scheme ( which means there exist anétale map W n [T ±1 1 , T ±1 2 , · · · , T ±1 d ] → O Xn (U ), see [4]) over W and Φ : R → R be a lifting of the absolute Frobenius on R/pR, where R is the p-adic completion of R. -ϕ is an R-linear isomorphism V and ∇ on V , i.e. the following diagram commutes: Let M 1 = (V 1 , ∇ 1 , Fil 1 , ϕ 1 ) and M 2 = (V 2 , ∇ 2 , Fil 2 , ϕ 2 ) be two Fontaine-Faltings modules over U of Hodge-Tate weight in [a, b]. The homomorphism set between M 1 and M 2 constitutes by those morphism f : which is parallel with respect to the connection, satisfies the cocycle conditions and induces an equivalent functor of categories Morphisms between Fontaine-Faltings modules are those between sheaves and locally they are morphisms between local Fontaine-Faltings modules. More precisely, for a morphism f of the underlying sheaves of two Fontaine-Faltings modules over X , the map f is called a morphism of Fontaine- 1.2. Inverse Cartier functor. For a Fontaine-Faltings module (V, ∇, Fil, {ϕ i } i∈I ), we call {ϕ i } i the ϕ-structure of the Fontaine-Faltings module. In this section, we first recall a global description of the ϕ-structure via the inverse Cartier functor over truncated Witt rings constructed by Lan, Sheng and Zuo [12]. Note that the inverse Cartier functor C −1 1 (the characteristic p case) is introduced in the seminal work of Ogus-Vologodsky [19]. Here we sketch an explicit construction of C −1 1 presented in [12]. Let (E, θ) be a nilpotent Higgs bundle over X 1 . Locally we have is the homomorphism given by the Deligne-Illusie's Lemma [1]. Those local data (V i , ∇ i )'s can be glued into a global sheave H with integrable connection ∇ via the transition maps {G ij } (Theorem 3 in [13]). The inverse Cartier functor on (E, θ) is . Remark. Note that the inverse Cartier transform C −1 1 also has the logarithmic version. When the log structure is given by a simple normal crossing divisor, an explicit construction of the log inverse Cartier functor is given in the Appendix of [11]. As mentioned in the introduction, we need to generalize C −1 1 to the invers Cartier transform over the truncated Witt ring for Higgs bundles over X n /W m (k). We briefly recall the construction in section 4 of [12]. 1.2.1. Inverse Cartier functor over truncated Witt ring. Let S = Spec(W(k)) and F S be the Frobenius map on S. Let X n+1 ⊃ X n be a W n+1 -lifting of smooth proper varieties. Recall that the functor C −1 n is defined as the composition of C −1 n and the base change F S : X ′ n = X n × F S S → X n (by abusing notation, we still denote it by F S ). The functor C −1 n is defined as the composition of two functors T n and F n . In general, we have the following diagram and its commutativity follows easily from the construction of those functors. These categories appeared in the diagram are explained as following: • MCF a (X n ) is the category of filtered de Rham sheaves over X n of level in [0, a]. • H(X n ) (resp. 
H(X ′ n )) is the category of tuples (E, θ,V ,∇, F il, ψ), where -(E, θ) is a graded Higgs module over X n (resp. X ′ n = X n ⊗ σ W ) of exponent ≤ p − 2; -(V ,∇, F il) is a filtered de Rham sheaf over X n−1 (resp. over X ′ n−1 ); -and ψ : GrF il (V ,∇) ≃ (E, θ) ⊗ Z/p n−1 Z is an isomorphism of Higgs sheaves over X n (resp. X ′ n ). • MIC(X n ) (resp. MIC(X ′ n )) is the category of sheaves over X n (resp. X ′ n ) with integrable p-connection . • MIC(X n ) (resp. MIC(X ′ n )) is the category of de Rham sheaves over X n (resp. X ′ n ). Functor Gr. For an object (V, ∇, Fil) in MCF p−2 (X n ), the functor Gr is given by where (E, θ) = Gr(V, ∇, Fil) is the graded sheaf with Higgs field, (V , ∇, F il) is the modulo p n−1 -reduction of (V, ∇, Fil) and ψ is the identifying map Faltings tilde functor (·). For an object (V, ∇, Fil) in MCF p−2 (X n ), the (V, ∇, Fil) will be denoted as the quotient Fil i / ∼ with x ∼ py for any The construction of functor T n . Let (E, θ,V ,∇, F il, ψ) be an object in H(X n ) (resp. H(X ′ n )). Locally on an affine open subset U ⊂ X (resp. U ⊂ X ′ ), there exists (V U , ∇ U , Fil U ) (Lemma 4.6 in [12]), a filtered de Rham sheaf, such that The tilde functor associates (V U , ∇ U , Fil U ) to a sheaf with p-connection over U . By gluing those sheaves with p-connections over all U 's (Lemma 4.10 in [12]), one gets a global sheaf with p-connection over X n (resp. X ′ n ). Denote it by T n (E, θ,V ,∇, F il, ψ). the construction of functor F n . For small affine open subset U of X , there exists endomorphism F U on U which lifts the absolute Frobenius on U k and is compatible with the Frobenius map F S on S = Spec(W (k)). Thus there Locally on U , applying functor F * U /S , we get a de Rham sheaf over U . By Taylor formula, up to a canonical isomorphism, it does not depends on the choice of F U . In particular, on the overlap of two small affine open subsets, there is an canonical isomorphism of two de Rham sheaves. By gluing those isomorphisms, one gets a de Rham sheaf over X n , we denote it by 1.3. Global description of the ϕ-structure in Fontaine-Faltings modules (via the inverse Cartier functor). Let (V, ∇, Fil) ∈ MFC p−2 (X n ) be a filtered de Rham sheaf over X n of level in [0, p − 2]. From the commutativity of diagram (1.3), for any i ∈ I, one has (1.5) As the F n is glued by using the Taylor formula, for any i, j ∈ I, one has the following commutative diagram To give a system of compatible ϕ-structures (for all i ∈ I) . In particular, we have the following results , for some positive integer n; is an isomorphism of de Rham sheaves. 1.4. Fontaine-Faltings modules with endomorphism structure. Let f be a positive integer. We call (V, ∇, Fil, ϕ, ι) a Fontaine-Faltings module over X with endomorphism structure of W (F p f ) whose Hodge-Tate weights is a continuous ring homomorphism. We call ι an endomorphism structure (for some n ∈ N) together with isomorphisms of de Rham sheaves and Comparing σ i+1 (ξ)-eigenspaces of ι(ξ) on both side of . Conversely, we can construct the Fontaine-Faltings module with endomorphism structure in an obvious way. 1.5. Twisted Fontaine-Faltings modules with endomorphism structure. Let L n be a line bundle over X n . Then there is a natural connection ∇ can on L p n n by 5.1.1 in [10]. Tensoring with (L p n n , ∇ can ) induces a self equivalence functor on the category of de Rham bundles over X n . Definition 1.3. 
An L n -twisted Fontaine-Faltings module over X n with endomorphism structure of W n (F p f ) whose Hodge-Tate weights lie in [a, b] is a tuple consisting the following data: to denote the category of all twisted Fontaine-Faltings modules over X n with endomorphism structure of W n (F p f ) whose Hodge-Tate weights lie in [a, b]. A morphism between two objects (V is equivalent to give a strict p n -torsion Fontaine-Faltings module over X n with endomorphism structure of W n (F p f ) and whose Hodge- It induces a trivialization of flat bundle τ p n j : be an L n -twisted Fontaine-Faltings module over X n with endomorphism structure of W n (F p f ) whose Hodge-Tate weights lie in [a, b]. Then one gets a local Fontaine-Faltings module over R j with endomorphism structure of W n (F p f ) whose Hodge-Tate weights lie in [a, b] We call M (τ j ) the trivialization of of M on U j via τ j . Logarithmic version. Finally, let us mention that everything in this section extends to the logarithmic context. Let X be a smooth and proper scheme over W and X o is the complement of a simple normal crossing divisor D ⊂ X relative to W . Similarly, one constructs the category T MF ∇ [a,b],f (X o n+1 /W n+1 ) of strict p n -torsion twisted logarithmic Fontaine modules (with pole along D × W n ⊂ X × W n ) with endomorphism structure of W n (F p f ) whose Hodge-Tate weights lie in [a, b]. Projective Fontaine-Laffie-Faltings functor The functor D Φ . Let R be a small affine algebra over W = W (k) with a σ-linear map Φ : R → R which lifts the absolute Frobenius of R/pR. If Φ happens to beétale in characteristic 0, Faltings (page 36 of [4]) constructed a map κ Φ : R → B + ( R) which respects Frobenius-lifts. Thus the following diagram commutes which is equipped with the natural ϕ-structure and filtration. where the homomorphisms are B + ( R)-linear and respect filtrations and the is defined via the connection on V , which commutes with the ϕ's and hence induces an For each i ∈ I, the functor D Φ i associates to any Fontaine-Faltings module over X a compatible system ofétale sheaves on U i,K (the generic fiber of U i ). By gluing and using the results in EGA3, one obtains a locally constant sheaf on X K and a globally defined functor D. In the following, we give a slightly different way to construct the functor D. Let J be a finite subset of the index set I, such that {U j } j∈J forms a covering of X . Let (V, ∇, Fil, {ϕ i } i∈I ) be a Fontaine-Faltings module over X . For each j ∈ J, the functor D Φ j gives us a finite Z p -representation of π 1 ( U j , x). Recall that the functor D Φ does not depends on the choice of Φ, up to a canonical isomorphism. In particular, for all j 1 , j 2 ∈ J, there is a natural isomorphism of By Theorem 2.6, all representations D(V (U j ), ∇, Fil, ϕ j )'s descend to a Z prepresentations of π 1 (X K , x). Up to a canonical isomorphism, this representation does not depend on the choice of J and s. This representation is just D(V, ∇, Fil, {ϕ i } i∈I ) and we construct the Fontaine-Laffaille-Faltings' D-functor in this way. Theorem 2.1 (Faltings). The functor D induces an equivalence of the ]modules whose objects are dual-crystalline representations. This subcategory is closed under sub-objects and quotients. 
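Since the twisted ϕ-structure above rests on the pair (L_n^{p^n}, ∇_can) introduced in Section 1.5, it may help to recall how that canonical connection arises. The following sketch is the standard gluing argument and is meant only as a reminder, not as a quotation of 5.1.1 in [10].

Choose local trivializations of L_n with transition functions g_{ij} ∈ O_{X_n}^×. Then L_n^{p^n} has transition functions g_{ij}^{p^n}, and

\[
\operatorname{dlog}\bigl(g_{ij}^{p^{n}}\bigr) \;=\; p^{n}\,\operatorname{dlog}(g_{ij}) \;\equiv\; 0 \pmod{p^{n}},
\]

so the trivial connections d on the local trivializations agree on overlaps and glue to an integrable connection ∇_can on L_n^{p^n} over X_n. Tensoring with (L_n^{p^n}, ∇_can) is then a self-equivalence of the category of de Rham bundles over X_n, with quasi-inverse given by tensoring with the dual pair; this is the self-equivalence used in Definition 1.3 and in the trivializations M(τ_j) described above.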
In this case, we call the pair Thus, there is a natural restriction functor res from the category of π 1 (S, s)sets to the category of π 1 (U, s)-sets, which is given by The second one follows the very definition, one can find the proof in 5.1 of [18] As a consequence, one has the following result, which should be well-known for the experts. We still give a proof for the reader's convenience. Proof. According to Proposition 2.4, we have the following commutative diagram, with two bijective horizontal maps F s . The fully faithful of the restriction functor follows from Corollary 2.3. Under a fully faithful functor, a morphism is an isomorphism if and only if its image under this functor is an isomorphism. So the corollary 2.5 follows. In the following, we fix a finite index set J and an open covering {U j } j∈J of S with s ∈ j U j . Then for any j ∈ J, the inclusion map U j → S induces a surjective group morphism of fundamental groups τ j : π 1 (U j , s) ։ π 1 (S, s). Theorem 2.6. Let (Σ j , ρ j ) be a finite π 1 (U j , s)-set for each j ∈ J. Suppose for each pair i, j ∈ J, there exists an isomorphism of π 1 (U ij , s)-sets η ij : Σ i ≃ Σ j . Then every Σ j descends to a π 1 (S, s)-set (Σ j , ρ j ) uniquely. Moreover, the image of ρ j equals that of ρ j . Proof. Fix j 0 ∈ J. One has an isomorphism of π 1 (U jj 0 , s)-sets η jj 0 : Σ j ≃ Σ j 0 . As F s is an equivalent functor, there is a covering of U j of U j with isomorphism η j : F s ( U j ) → Σ j of π 1 (U j , s)-sets for each j ∈ J. Denote The η j and η jj 0 are π 1 (U J , s)-isomorphisms, so do f j . Denote The equivalence of F s over U J induces the following commutative diagram of finiteétale coverings of U J . , , In particular, for all j 1 , j 2 , j 3 ∈ J, By Corollary 2.5, there is a unique isomorphism of finiteétale coverings of Using Corollary 2.5 once again, one has So one can glue { U j } j∈J by isomorphisms {f ij } i,j∈J into a finiteétale covering S of S. Applying the fiber functor F s on the structure isomorphisms The bijections F s (f j ) and η j give us isomorphisms of permutation groups Since the F s (f j ) and η j are isomorphisms of π 1 (U j , s)-sets, the following diagram commutes Let ρ j denote the composition The commutativity of diagram (2.4) means that ρ i descends to ρ j . Other statements can be easily deduced from the surjectivity of τ j and τ ij . 2.4. Comparing representations associated to local Fantaine-Faltings modules underlying isomorphic filtered de Rham sheaves. In this section we compare several representations associated to local Fontaine-Faltings modules underlying isomorphic filtered de Rham sheaves. To do so, we first introduce a local Fontaine-Faltings module, which corresponds to a W n (F p f )-character of the local fundamental group. We will then use this character to measure the difference of the associated representations. Let R be a small affine algebra over W (k) and denote R n = R/p n R for all n ≥ 1. Fix a lifting Φ : R → R of the absolute Frobenius on R/pR. Recall that κ Φ : Under such a lifting, the Frobenius Φ B on B + ( R) extends to Φ on R. Element a n,r . Let f be an positive integer. For any r ∈ R × , we construct a Fontaine-Faltings module of rank f as following. Let be a free R n -module of rank f . The integrable connection ∇ on V is defined by formula ∇(e i ) = 0, and the filtration Fil on V is the trivial one. Applying the tilde functor and twisting by the map Φ, one gets The ϕ is parallel due to d(r p n ) ≡ 0 (mod p n ). 
By lemma 1.1, the tuple (V, ∇, Fil, ϕ) forms a Fontaine-Faltings module. Applying Fontaine-Laffaille-Faltings' functor D Φ , one gets a finite Z p -representation of Gal( R/ R), which is a free Z/p n Z-module of rank f . Lemma 2.7. Let n and f be two positive integers and let r be an invertible element in R. Then there exists an a n,r ∈ B + ( R) × such that Φ f B (a n,r ) ≡ κ Φ (r) p n · a n,r (mod p n ). (2.5) Proof. Since D Φ (V, ∇, Fil, ϕ) is free over Z/p n Z of rank f . one can find an element g with order p n . Recall that D Φ (V, ∇, Fil, ϕ) is the sub-Z p -module of Hom B + ( R) (V ⊗ κ Φ B + ( R), D) consisted by elements respecting the filtration and ϕ. In particular, the following diagram commutes Since the image of g is p n -torsion, Im(g) is contained in D[p n ] = 1 p n B + ( R)/B + ( R), the p n -torsion part of D. Choose a lifting a n,r of g(e 0 ⊗ κ Φ 1) under the sur- Then the equation (2.5) follows. Similarly, one can define a n,r −1 for r −1 . By equation (2.5), we have Φ f (a n,r · a n,r −1 ) = a n,r · a n,r −1 . Thus a n,r · a n,r −1 ∈ W (F p f ). Since both a n,r and a n,r −1 are not divided by p (by the choice of g), we know that a n,r · a n,r −1 ∈ W (F p f ) × . The invertibility of a n,r follows. Comparing representations. Let n and f be two positive integers. For all be isomorphisms of de Rham R-modules. Let r be an element in R × . Since d(r p n ) = 0 (mod p n ), the map r p n ϕ f −1 is also an isomorphism of de Rham ii). The multiplication of a n,r on Hom B + ( R) V ⊗ κ Φ B + ( R), D induces a W n (F p f )-linear map between these two submodules Proof. i). We only give the W n (F p f )-linear structure on D Φ (M ). Let g : One checks that a.g is also contained in D Φ j (M (τ j )). Let δ be an element in Gal( R/ R). Then In this way, D Φ j (M (τ j )) forms a W n (F p f )-module with a continuous semilinear action of π 1 (U K ). ii). Recall that D Φ (M ) (resp. D Φ (M ′ )) is defined to be the set of all morphisms in Hom B + ( R) V ⊗ κ Φ B + ( R), D compatible with the filtration and ϕ (resp. ϕ ′ ). Comparing the rank of D Φ (M ) and D Φ (M ′ ), we only need to show that a n, , which means that f satisfies the following two conditions: 1). f is strict for the filtrations. i.e. is an isomorphism of projective W n (F p f )-representations of Gal( R/ R). In particular, we have an bijection of Gal( R/ R)-sets 2.5. The functor D P . In this section, we assume f to be a positive integer with F p f ⊂ k. Let {U j } j∈J be a finite small affine open covering of X . Let U j = (U j ) K . For every j ∈ J, fix Φ j as a lifting of the absolute Frobenius on U j ⊗ W k. Fix x as a geometric point in U J = j∈J U j and fix j 0 an element in J. Let (V, ∇, Fil, ϕ, ι) be a Fontaine-Faltings module over X n with an endomorphism structure of W (F p f ) whose Hodge-Tate weights lie in [0, p−2]. Locally, Applying Fontaine-Laffaille-Faltings' functor D Φ j , one gets a finite W n (F p f )representation ̺ j of π 1 (U j , x). Faltings shows that there is an isomorphism ̺ j 1 ≃ ̺ j 2 of Z/p n Z-representations of π 1 (U j 1 j 2 , x). By Lan-Sheng-Zuo [12], this isomorphism is W n (F p f )-linear. By Theorem 2.6, these ̺ j 's uniquely descend to a W n (F p f )-representation of π 1 (X K , x). Thus one reconstructs the W n (F p f )-representation D(V, ∇, Fil, ϕ, ι) in this way. Now we construct functor D P for twisted Fontaine-Faltings modules, in a similar way. 
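Before turning to the construction of D^P for twisted modules, it is convenient to record the congruence of Lemma 2.7 in display form, together with the invertibility argument just given; this is only a restatement of what was proved above.

\[
\Phi_B^{\,f}(a_{n,r}) \;\equiv\; \kappa_{\Phi}(r)^{p^{n}}\cdot a_{n,r} \pmod{p^{n}}, \qquad a_{n,r} \in B^{+}(\widehat{R}), \tag{2.5}
\]

and applying the same construction to r^{-1} yields a_{n,r^{-1}} with

\[
\Phi^{f}\bigl(a_{n,r}\cdot a_{n,r^{-1}}\bigr) \;=\; a_{n,r}\cdot a_{n,r^{-1}},
\]

so the product is fixed by Φ^f and lies in W(F_{p^f}); since neither factor is divisible by p (by the choice of the generator g), the product lies in W(F_{p^f})^×, and a_{n,r} is invertible. It is this unit a_{n,r} that measures the difference between the representations attached to trivializations differing by a unit r ∈ R^×.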
Let ,f (X n+1 /W n+1 ) be an L n -twisted Fontaine-Faltings module over X n with endomorphism structure of W n (F p f ) whose Hodge-Tate weights lie in [0, p−2]. For each j ∈ J, choosing a trivialization M (τ j ) and applying Fontaine-Laffaille-Faltings' functor D Φ j , we get a W n (F p f )-module together with a linear action of π 1 (U j , x). Denote its projectification by ̺ j . By Corollary 2.9, there is an isomorphism ̺ j 1 ≃ ̺ j 2 as projective W n (F p f )-representations of π 1 (U j 1 j 2 , x). In what follows, we will show that these ̺ j 's uniquely descend to a projective W n (F p f )-representation of π 1 (X K , x) by using Theorem 2.6. In order to use Theorem 2.6, set Σ j to be the quotient π 1 (U j , x)-set Obviously the kernel of the canonical group morphism Let's denote by ρ j the composition of ̺ j and GL D Φ j (M (τ j )) → Aut(Σ j ) for all j ∈ J. By Corollary 2.9, the restrictions of (Σ j 1 , ρ j 1 ) and (Σ j 2 , ρ j 2 ) on π 1 (U j 1 j 2 , x) are isomorphic for all j 1 , j 2 ∈ J. Hence by Theorem 2.6, the map ρ j 0 descends to some ρ j 0 and the image of Up to a canonical isomorphism, this projective representation does not depends on the choices of the covering {U j } j∈J , the liftings Φ j 's and j 0 . And we denote this projective W n (F p f )-representation of π 1 (X K , x) by Similarly as Faltings's functor D in [4], our construction of the D P functor can also be extended to the logarithmic version. More precisely, let X be a smooth and proper scheme over W and let X o be the complement of a simple normal crossing divisor D ⊂ X relative to W . Similarly, by replacing X K and U j with X o K = X K and U o j , we construct the functor from the category of strict p n -torsion twisted logarithmic Fontaine modules (with pole along D × W n ⊂ X × W n ) with endomorphism structure of W n (F p f ) whose Hodge-Tate weights lie in [0, p − 2] to the category of free W n (F p f )-modules with projective actions of π 1 (X o K ). Summarizing this section, we get the following result. Theorem 2.10. Let M be a twisted logarithmic Fontaine-Faltings module over X (with pole along D) with endomorphism structure of W (F p f ). Te D P -functor associates to M and its endomorphism structure n a projective representation ρ : Twisted periodic Higgs-de Rham flows In this section, we will recall the definition of periodic Higgs-de Rham flows and generalize it to the twisted version. 3.1. Higgs-de Rham flow over X n ⊂ X n+1 . Recall [12] that a Higgsde Rham flow over X n ⊂ X n+1 is a sequence consisting of infinitely many alternating terms of filtered de Rham bundles and Higgs bundles which are related to each other by the following diagram inductively where -(V, ∇, Fil) Remark. In case n = 1, the data of (V , ∇, Fil) (n−1) −1 is empty. The Higgs-de Rham flow can be rewritten in the following form In this way, the diagram becomes In the rest of this section, we will give the definition of twisted periodic Higgs-de Rham flow (section 3.2), which generalizes the periodic Higgs-de Rham flow in [12]. 3.2. Twisted periodic Higgs-de Rham flow and equivalent categories. Let L n be a line bundle over X n . For all 1 ≤ ℓ < n, denote L ℓ = L n ⊗ O Xn O X ℓ the reduction of L n on X ℓ . In this subsection, let a ≤ p − 2 be a positive integer. We will give the definition of L n -twisted Higgs-de Rham flow of level in [0, a]. Definition 3.1. Let f be a positive integer. 
An f -periodic L 1 -twisted Higgsde Rham flow over X 1 ⊂ X 2 of level in [0, a], is a Higgs-de Rham flow over And for any i ≥ 0 the isomorphism strictly respects filtrations Fil (1) f +i and Fil (1) i . Those φ (1) f +i 's are relative to each other by formula f +i ). Denote the category of all twisted f -periodic Higgs-de Rham flow over X 1 of level in [0, a] by HDF a,f (X 2 /W 2 ). 3.2.2. Twisted periodic Higgs-de Rham flow X n ⊂ X n+1 . Let n ≥ 2 be an integer and f be a positive integer. And L n is a line bundle over X n . Denote by L ℓ the reduction of L n modulo p ℓ . We define the category T HDF a,f (X n+1 /W n+1 ) of all f -periodic twisted Higgs-de Rham flow over X n ⊂ X n+1 of level in [0, a] in the following inductive way. Definition 3.2. An L n -twisted f -periodic Higgs-de Rham flow over X n ⊂ X n+1 is a Higgs-de Rham flow which is a lifting of an L n−1 -twisted f -periodic Higgs-de Rham flow It is constructed by the following diagram for 2 ≤ ℓ ≤ n, inductively ? ℓ−1 is a lifting of the Hodge filtration Fil • Repeating the process above, one gets the data Fil i+f . And these morphisms are related to each other by formula φ i+f ). Denote the twisted periodic Higgs-de Rham flow by The category of all periodic twisted Higgs-de Rham flow over X n ⊂ X n+1 of level in [0, a] is denoted by T HDF a,f (X n+1 /W n+1 ). Remark. For the trivial line bundle L n , the definition above is equivalent to the original definition of periodic Higgs-de Rham flow in [12] by using the identification φ : (E, θ) 0 = (E, θ) f . Note that we can also define the logarithmic version of the twisted periodic Higgs-de Rham flow, since we already have the log version of inverse Cartier transform. X is a smooth proper scheme over W and X o is the complement of a simple normal crossing divisor D ⊂ X relative to W . Similarly, one constructs the category T HDF a,f (X o n+1 /W n+1 ) of twisted f -periodic logarithmic Higgs-de Rham flows (with pole along D × W n ⊂ X × W n ) over X × W n whose nilpotent exponents are ≤ p − 2 . Equivalence of categories. We establish an equivalence of categories between T HDF a,f (X n+1 /W n+1 ) and T MF ∇ [0,a],f (X n+1 /W n+1 ). Theorem 3.3. Let a ≤ p − 1 be a natural number and f be an positive integer. Then there is an equivalence of categories between T HDF a,f (X n+1 /W n+1 ) and be an f -periodic L n -twisted Higgs-de Rham flow over X n with level in [0, a]. Taking out f terms of filtered de Rham bundles together with f − 1 terms of identities maps , one gets a tuple This tuple forms an L n -twisted Fontaine-Faltings module by definition. It gives us the functor IC from T HDF a, We construct the corresponding flow by induction on n. In case n = 1, we already have following diagram By this isomorphism, we identify (V 0 , ∇ 0 ) with C −1 1 (E 0 , θ 0 ). Under this isomorphism, the Hodge filtration Fil 0 induces a Hodge filtration Fil f on (V f , ∇ f ). Take Grading and denote Inductively, for i > f , we denote (V i , ∇ i ) = C −1 1 (E i , θ i ). By the isomorphism the Hodge filtration Fil i−f induces a Hodge filtration Fil i on (V i , ∇ i ). Denote Then we extend above diagram into the following twisted periodic Higgs-de Rham flow over X 1 For n ≥ 2, denote This gives us a L n−1 -twisted Fontaine-Faltings module over X n−1 By induction, we have a twisted periodic Higgs-de Rham flow over X n−1 where the first f -terms of filtered de Rham bundles over X n−1 are those appeared in the twisted Fontaine-Faltings module over X n−1 . 
Based on this flow over X n−1 , we extend the diagram similarly as the n = 1 case, Now it is a twisted periodic Higgs-de Rham flow over X n . Denote this flow by It is straightforward to verify GR • IC ≃ id and IC • GR ≃ id. This Theorem can be straightforwardly generalized to the logarithmic case and the proof is similar as that of Theorem 3.3. Theorem 3.4. Let X be a smooth proper scheme over W with a simple normal crossings divisor D ⊂ X relative to W . Then for each natural number f ∈ N, there is an equivalence of categories between T HDF a,f (X o n+1 /W n+1 ) and T MF ∇ [0,a],f (X o n+1 /W n+1 ) 3.2.4. A sufficient condition for lifting the twisted periodic Higgs-de Rham flow. We suppose that the field k is finite in this section. Let X be a smooth proper variety over W (k) and denote X n = X × W (k) W n (k). Let D 1 ⊂ X 1 be a W (k)-liftable normal crossing divisor over k. Let D ⊂ X be a lifting of D 1 . Proposition 3.5. Let n be an positive integer and let L n+1 be a line bundle over X n+1 . Denote by L ℓ the reduction of L n+1 on X ℓ . Let be an L n -twisted periodic Higgs-de Rham flow over X n ⊂ X n+1 . Suppose -Lifting of the graded Higgs bundle (E, θ) (n) i is unobstructed. i.e. there exist a logarithmic graded Higgs bundle (E, θ) (n+1) i over X n+1 , whose reduction on X n is isomorphic to (E, θ) (n) i . PROJECTIVE REPRESENTATIONS AND TWISTED HIGGS-DE RHAM FLOWS 31 -Lifting of the Hodge filtration Fil (n) i is unobstructed. i.e. for any lift- , whose reduction on X n is Fil Then every twisted periodic Higgs-de Rham flow over X n can be lifted to a twisted periodic Higgs-de Rham flow over X n+1 . Proof. By assumption, we choose (E ′ , θ ′ ) which is a lifting of (V, ∇) (n) i . By assumption, we choose a lifting Fil which is a lifting of (E, θ) i+1 . From the φ-structure of the Higgs-de Rham flow, for all m ≥ 0 there is an isomorphism , one gets a lifting of (E, θ) By deformation theory, the lifting space of (E, θ) n is a torsor space modeled by H 1 Hig X 1 , End (E, θ) (1) n . Therefore, the torsor space of lifting n as a graded Higgs bundle should be modeled by a subspace of H 1 Hig . We give a description of this subspace as follows. For simplicity of notations, we shall replace (E, θ) (1) n by (E, θ) in this paragraph. The decomposition of E = p+q=n E p,q induces a decomposition of End(E): (End(E)) k,−k := p+q=n (E p,q ) ∨ ⊗ E p+k,q−k Furthermore, it also induces a decomposition of the Higgs complex End(E, θ). One can prove that the hypercohomology of the following Higgs subcomplex gives the subspace corresponding to the lifting space of graded Higgs bundles. Thus by the finiteness of the torsor space, there are two integers m > m ′ ≥ 0, such that By twisting suitable power of the line bundle L n+1 we may assume m ′ = 0. By replacing the period f with mf , we may assume m = 1. For integer i ∈ [n, n + f − 1] we denote Then (3.5) can be rewritten as and φ n+1 i+1 as follows. According to the isomorphism the Hodge filtration Fil Taking grading on equation (3.7), one gets a lifting of φ and a twisted Higgs-de Rham flow over X n+1 ⊂ X n+2 which lifts the given twisted periodic flow over X n ⊂ X n+1 . Remark. In the proof we see that one needs to enlarge the period for lifting the twisted periodic Higgs-de Rham flow. 3.3. The choice of the twisting line bundle and semi-stable Higgs bundle with trivial discriminant. Let X 1 be a smooth proper W 2liftable variety over k, with dim X 1 = n. Let H be a polarization of X 1 . 
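For the statements that follow it is convenient to have the polarized slope and the discriminant written out explicitly. The text does not display its normalizations, so the conventions below are the standard ones and should be read as our assumption.

\[
\mu_H(E) \;=\; \frac{\deg_H(E)}{\operatorname{rk}(E)} \;=\; \frac{c_1(E)\cdot H^{\,n-1}}{\operatorname{rk}(E)},
\qquad
\Delta_H(E) \;=\; \bigl(2r\,c_2(E) - (r-1)\,c_1(E)^{2}\bigr)\cdot H^{\,n-2},\quad r=\operatorname{rk}(E),
\]

a Higgs bundle (E, θ) being semistable if μ_H(F) ≤ μ_H(E) for every θ-invariant subsheaf 0 ≠ F ⊊ E, and "trivial discriminant" meaning Δ_H(E) = 0.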
Let r < p be a positive integer and (E, θ) 0 be a nilpotent semistable Higgs bundle over X 1 of rank r. Recall the main result in the Appendix of [12]: There is a Higgs-de Rham flow over X 1 with initial term (E, θ) 0 . In the construction of the Higgs-de Rham flow given by Theorem 3.6, the key step is to prove the existence of Simpson's graded semistable Hodge filtration Fil (Theorem A.4 in [12] and Theorem 5.12 in [16]), which is the most coarse Griffiths transverse filtration on a semi-stable de Rham module such that the associated graded Higgs module is still semi-stable. Denote (V, ∇) 0 := C −1 1 (E 0 , θ 0 ) and Fil 0 the Simpson's graded semistable Hodge filtration on (V, ∇) 0 . Denote (V, ∇) 1 := C −1 1 (E 1 , θ 1 ) and Fil 1 the Simpson's graded semistable Hodge filtration on (V, ∇) 1 . Repeating this process, we construct a Higgs-de Rham flow over X 1 with initial term (E, θ) Since the Simpson's graded semistable Hodge filtration is unique, this flow is also uniquely determined by (E, θ) 0 . The purpose of this subsection is to find a canonical choice of the twisting line bundle L such that this Higgs-de Rham flow is twisted preperiodic. Firstly, we want to find a positive integer f 1 and a suitable twisting line bundle Under these two condition, both (E, θ) 0 and (E, θ) f 1 are contained in the moduli scheme M ss Hig (X 1 /k, r, a 1 , a 2 ), which is constructed by Langer in [15] and classifies all the semistable Higgs bundles over X 1 with some fixed topological invariants (which will be explained later). Following [15], we introduce S ′ X 1 /k (d; r, a 1 , a 2 , µ max ) the family of Higgs sheaves over X 1 such that (E, θ) is a member of the family if E is reflexive of dimension d, µ max (E, θ) ≤ µ max ,a 0 (E) = r,a 1 (E) = a 1 and a 2 (E) ≥ a 2 . Here µ max (E, θ) is the slope of the maximal destabilizing sub sheaf of (E, θ), and a i (E) are defined by By the results of Langer, the family S ′ X 1 /k (d; r, a 1 , a 2 , µ max ) is bounded (see Theorem 4.4 of [15]). So M ss Hig (X 1 /k, r, a 1 , a 2 ) is the moduli scheme which corepresents this family. Note that a i (E) = χ(E| j≤d−i H j ) where H 1 , . . . , H d ∈ |O(H)| is an E-regular sequence (see [9]). Using Hirzebruch-Riemann-Roch theorem, one finds that a 1 (E),a 2 (E) will be fixed if c 1 (E) and c 2 (E) · [H] n−2 are fixed. Proof. Since c 1 (C −1 1 (E 0 , θ 0 )) = pc 1 (E 0 ) and c 1 (L 1 ) = 1−p f 1 r ·c 1 (E 0 ), we have Theorem 3.10. A semistable Higgs bundle over X 1 with trivial discriminant is preperiodic after twisting. Conversely, a twisted preperiodic Higgs bundle is semistable with trivial discriminant. Proof. For a Higgs bundle (E, θ) in M ss Hig (X 1 /k, r, a 1 , a 2 ), we consider the iteration of the self-map Υ. Since M ss Hig (X 1 /k, r, a 1 , a 2 ) is of finite type over k and has only finitely many k-points, there must exist a pair of integers (e, f 2 ) such that Υ e (E, θ) ∼ = Υ e+f 2 (E, θ). By Proposition 3.9, we know that (E, θ) is preperiodic after twisting. Conversely, let (E, θ) be the initial term of a twisted f -perperiodic Higgs-de Rham flows. We show that it is semistable. Let (F, θ) ⊂ (E, θ) be a proper sub bundle. Denote (F (1) i , θ So µ(F e ) ≤ µ(E e ) (otherwise there are subsheaves of E e with unbounded slopes, but this is impossible). So we have This shows that (E, θ) is semistable. The discriminant equals zero follows from the fact that ∆(C −1 1 (E, θ)) = p 2 ∆(E). Corollary 3.11. 
Let (E, θ) ⊃ (F, θ) be the initial terms of a twisted periodic Higgs-de Rham flow and a sub twisted periodic Higgs-de Rham flow. Then µ(F ) = µ(E). Sub-representations and sub periodic Higgs-de Rham flows. In this section, we assume F p f is contained in k. Recall that the functor D P is contravariant and sends quotient object to subobject, i.e. for any sub twisted Fontaine-Faltings module N ⊂ M with endomorphism structure, the projective representation D P (M/N ) is a sub-projective representation of D P (M ). Conversely, we will show that every sub-projective representation comes from this way. By the equivalence of the category of twisted Fontaine-Faltings modules and the category of twisted periodic Higgs-de Rham flows, we construct a twisted periodic sub Higgs-de Rham flow for each sub-projective representation. Let X be a smooth proper W (k)-variety. Denote by X n the reduction of X on W n (k) . Let {U i } i∈I be a finite covering of small affine open subsets and we choose a geometric point x in i∈I U i,K . Proof. Recall that the functor D P is defined by gluing representations of ∆ i = π 1 (U i,K , x) into a projective representation of ∆ = π 1 (X K , x). Firstly, we show that the sub-projective representation V is actually corresponding to some local sub-representations. Secondly, since the Fontaine-Laffaille-Faltings' functor D is fully faithful, there exists local Fontaine-Faltings modules corresponding to those sub-representations. Thirdly, we glue those local Fontaine-Faltings modules into a global twisted Fontaine-Faltings module. For i ∈ I, we choose a trivialization M i = M (τ i ) of M on U i , which gives a local Fontaine-Faltings module with endomorphism structure on U i . By definition of D P , those representations D U i (M i ) of ∆ i are glued into the projective representation D P (M ). In other words, we have the following On the other hand, one has D(M j /N j ) = V j = a 1,r V i by diagram (3.12). Thus one has D(M j /N ′ i ) = D(M j /N j ). Since D is fully faithful and contravariant, N ′ i = N j . In particular, on the overlap U i ∩ U j the local Fontaine-Faltings modules N i and N j have the same underlying subbundle. By gluing those local subbundle together, we get a subbundle of the underlying bundle M . The connection, filtration and the ϕ-structure can be restricted locally on this subbundle, so does it globally. And we get the desired sub-Fontaine-Faltings module. Let E be a twisted f -periodic Higgs-de Rham flow. Denote by M = IC(E) the Fontaine module with the endomorphism structure corresponding to E . By the equivalence of the category of twisted Fontaine-Faltings modules and the category of periodic Higgs-de Rham flow, one get the following result. Finally we arrive at the main theorem of our paper: Theorem 3.14. Let k be a finite field of characteristic p. Let X be a smooth proper scheme over W (k) together with a smooth log structure D/W (k). Assume that there exists a semistable graded logarithmic Higgs bundle (E, θ)/(X , D) 1 with discriminant ∆ H (E) = 0, rank(E) < p and (rank(E), deg H (E)) = 1. Then there exists a positive integer f and an absolutely irreducible projective We only show the result for D = ∅, as the proof of the general case is similar. By Theorem 3.10, there is a twisted preperiodic Higgs-de Rham flow with initial term (E, θ). Removing finitely many terms if necessary, we may assume that it is twisted f -periodic, for some positive integer f . 
By using Theorem 3.3 and applying the functor D^P, one gets a PGL_{rank(E)}(F_{p^f})-representation ρ of π_1(X^o_{K'}). Since (rank(E), deg_H(E)) = 1, the semistable bundle E is actually stable. According to Corollary 3.11, there is no non-trivial sub twisted periodic Higgs-de Rham flow. By Corollary 3.13, there is no non-trivial sub projective representation of ρ, so that ρ is irreducible.

Remark. For simplicity, we only consider results on X_1. Actually, all results in this section can be extended to truncated level.

Constructing crystalline representations of étale fundamental groups of p-adic curves via Higgs bundles

As an application of the main theorem (Theorem 3.14), we construct irreducible PGL_2 crystalline representations of π_1 of the projective line with m (m ≥ 4) marked points removed. Let M be the moduli space of semistable graded Higgs bundles of rank 2 and degree 1 over P^1/W(k), with logarithmic Higgs fields which have m poles {x_1, x_2, . . . , x_m} (actually stable, since the rank and degree are coprime to each other). The main object of this section is to study the self-map Υ (Corollary-Definition 3.
If we consider the lifting problem over an extension k' of k which contains Σ, then there are exactly p liftings of the twisted 1-periodic Higgs-de Rham flow over P^1_{W_2(k')}.

4.4. Examples of dynamics of the Higgs-de Rham flow on P^1 with four marked points. In the following, we give some examples in the case k = F_{3^4}. For any λ ∈ k \ {0, 1}, the map ϕ_{λ,3} is a self k-morphism of P^1_k. So it can be restricted to a self-map on the set of all k-points, ϕ_{λ,3} : k ∪ {∞} → k ∪ {∞}. In the following diagrams, the arrow β → γ means γ = ϕ_{λ,3}(β), and an m-length loop in the following diagrams stands for a twisted m-periodic Higgs-de Rham flow, which corresponds to a PGL_2(F_{3^m})-representation by Theorem 3.4 and Theorem 3.14.

4.5. Projective F-units crystalline on smooth projective curves. Let X be a smooth proper scheme over W(k). In [12] an equivalence between the category of f-periodic vector bundles (E, 0) of rank r over X_n (i.e., (E, 0) initiates an f-periodic Higgs-de Rham flow with zero Higgs fields in all Higgs terms) and the category of GL_r(W_n(F_{p^f}))-representations of π_1(X_1) has been established. This result generalizes Katz's original theorem for X an affine variety. As an application of our main theorem, we show the following.

Theorem 4.6. The D^P functor is faithful from the category of rank-r twisted f-periodic vector bundles (E, 0) over X_n to the category of projective W_n(F_{p^f})-representations of π_1(X_{1,k'}) of rank r, where k' is the minimal extension of k containing F_{p^f}.

Remark. For n = 1 the above theorem is just a projective version of Lange-Stuhler's theorem.

Theorem 4.7 (lifting twisted periodic vector bundles). Let (E, 0)/X_1 be an f-periodic vector bundle after twisting by a line bundle. Assume H^2(X_1, End(E)) = 0. Then for any n ∈ N there exists some positive integer f_n with f | f_n such that (E, 0) lifts to a twisted f_n-periodic vector bundle over X_n.

Translating the above theorem in terms of representations:

Theorem 4.8 (lifting projective representations of π_1(X_1)). Let ρ be a projective F_{p^f}-representation of π_1(X_1). Assume H^2(X_1, End(ρ)) = 0; then there exists a positive integer f_n divisible by f such that ρ lifts to a projective W_n(F_{p^{f_n}})-representation of π_1(X_{1,k'}) for any n ∈ N, where k' is the minimal extension of k containing F_{p^{f_n}}.
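For orientation, we recall the shape of the Lange-Stuhler correspondence of which these statements are projective analogues; this is our paraphrase of [14], not a quotation.

For a vector bundle E on X_1 proper over a finite field, Lange-Stuhler's theorem (in the form used here) says that

\[
\bigl(F_{X_1}^{\,f}\bigr)^{*}E \;\cong\; E
\quad\Longleftrightarrow\quad
E \text{ is trivialized by a finite étale cover},
\]

and such a periodicity isomorphism gives rise to a GL_r(F_{p^f})-representation of π_1(X_1). In the twisted setting the periodicity isomorphism only holds up to tensoring with a line bundle, so the construction naturally produces a projective, that is PGL_r(F_{p^f})-valued, representation; this is the case covered by Theorem 4.6.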
Assume X is a smooth proper curve over W(k). De Jong and Osserman (see Appendix A in [20]) have shown that the subset of periodic vector bundles over X_{1,k} is Zariski dense in the moduli space of semistable vector bundles over X_1 (Laszlo and Pauly have also studied some special cases, see [17]). Hence by Lange-Stuhler's theorem (see [14]) every periodic vector bundle corresponds to a (P)GL_r(F_{p^f})-representation of π_1(X_{1,k'}), where f is the period and k' is a definition field of the periodic vector bundle containing F_{p^f}.

Corollary 4.9. Every (P)GL_r(F_{p^f})-representation of π_1(X_{1,k'}) lifts to a (P)GL_r(W_n(F_{p^{f_n}}))-representation of π_1(X_{1,k''}) for some positive integer f_n divisible by f, where k'' is a definition field of the periodic vector bundle containing F_{p^{f_n}}.

Remark. It should be very interesting to compare this result with Deninger-Werner's theorem (see [2]): they have shown that any vector bundle over X which is preperiodic over X_1 lifts to a GL_r(C_p)-representation of π_1(X_K).
14,806.4
2017-09-05T00:00:00.000
[ "Mathematics" ]
Effects of hyperbaric oxygen preconditioning on cardiac stress markers after simulated diving Hyperbaric oxygen preconditioning (HBO-PC) can protect the heart from injury during subsequent ischemia. The presence of high loads of venous gas emboli (VGE) induced by a rapid ambient pressure reduction on ascent from diving may cause ischemia and acute heart failure. The aim of this study was to investigate the effect of diving-induced VGE formation on cardiac stress marker levels and the cardioprotective effect of HBO-PC. To induce high loads of VGE, 63 female Sprague–Dawley rats were subjected to a rapid ambient pressure reduction from a simulated saturation dive (50 min at 709 kPa) in a pressure chamber. VGE loads were measured for 60 min in anesthetized animals by the use of ultrasonography. The animals were divided into five groups. Three groups were exposed to either diving or to HBO-PC (100% oxygen, 38 min at 303 kPa) with a 45 or 180 min interval between HBO-PC and diving. Two additional groups were used as baseline controls for the measurements; one group was exposed to equal handling except for HBO-PC and diving, and the other group was completely unexposed. Diving caused high loads of VGE, as well as elevated levels of the cardiac stress markers, cardiac troponin T (cTnT), natriuretic peptide precursor B (Nppb), and αB-crystallin, in blood and cardiac tissue. There were strong positive correlations between VGE loads and stress marker levels after diving, and HBO-PC appeared to have a cardioprotective effect, as indicated by the lower levels of stress marker expression after diving-induced VGE formation. Introduction The formation of gas emboli as a result of a reduction in ambient pressure (decompression) is a major cause of injury associated with diving (Vann et al. 2011). High loads of decompression-induced venous gas emboli (VGE) may result in cardiorespiratory decompression illness (DCI) with cough, dyspnea, pulmonary edema, shock and in the most severe cases, fatal outcome. Circulating VGE is effectively trapped in the lungs, and may cause increased pulmonary artery pressure, cardiac overload, and heart failure (Muth and Shank 2000). Moreover, blood perfusion and pulmonary gas exchange are impaired, and the arterial partial pressure of oxygen decreases relative to the increased number of VGE, resulting in hypoxia with subsequent cardiac ischemia and cell death (Butler and Hills 1985;Vik et al. 1990). The phenomenon of preconditioning, in which a period of sublethal cardiac stress can protect the heart against injury during a subsequent ischemic insult, has been the subject of intense research over the last two decades (Yellon and Downey 2003). Hyperbaric oxygen (HBO), which has been used as a preconditioning stimulus prior to ischemia, has been shown to provide wide-scale cardioprotective effects (Cabigas et al. 2006;Yogaratnam et al. 2010). HBO preconditioning (HBO-PC) in rats exposed to simulated diving has recently shown promising results in reducing the incidence, severity, and complications of DCI (Martin and Thom 2002;Butler et al. 2006;Katsenelson et al. 2009;Fan et al. 2010;Ni et al. 2013). However, the potential cardioprotective effects of HBO-PC in relationship with gas emboli formation have not yet been investigated. In this study, rats were exposed to a simulated dive followed by severe decompression stress inducing high loads of VGE. 
First, we aimed to investigate whether a simulated dive with subsequent VGE formation would lead to increased levels of cardiac stress markers indicating cardiac stress and injury, and second, investigate the effect of HBO-PC on these markers. Three different cardiac stress markers in rat serum and cardiac tissue were selected; serum cardiac troponin T (cTnT), a biomarker of cardiac injury (Thygesen et al. 2012); cardiac gene expression of the natriuretic peptide precursor B (Nppb), which is a biomarker of acute heart failure (Nakagawa et al. 1995;Braunwald 2008); and the cardiac gene and protein expression of aB-crystallin, a small heat shock protein with a key role in protecting the heart from injury (Latchman 2001;Whittaker et al. 2009;Christians et al. 2012). We hypothesized that high loads of VGE from simulated diving would result in an elevation of these cardiac stress markers (cTnT, Nppb, and aB-crystallin). We further hypothesized that HBO-PC would protect the heart from diving-induced VGE formation, resulting in lower levels of these stress markers. Ethical approval The experimental protocols were approved by the Norwegian Committee for Animal Experiments, and were performed according to the Guide for the Care and Use of Laboratory Animals published by the Directive 2010/63/ EU of the European Parliament. Groups I to IV were observed in anesthesia for 60 min after diving (gr. I-III) or no diving (gr. IV). Groups IV and V served as two different control groups. Group IV assessed the potential effect of anesthesia and handling without diving, and Group V was not exposed to anything (i.e., diving, chamber exposure, handling, or anesthesia). All the animals were housed in groups of three per cage in an animal facility. Light was controlled on a 12:12-h lightdark cycle at a room temperature of 21.0 AE 0.9°C (SD) and humidity 51 AE 9% (SD). The animals had free access to water and were placed on a pellet rodent diet. HBO preconditioning Animals in Groups II (HBO45) and III (HBO180) were exposed to 100% oxygen for 5 min at normobaric pressure (101 kPa) in a pressure chamber, followed by an increase in ambient pressure (compression) at a rate of 200 kPa min À1 -303 kPa. The animals were kept at that pressure for 38 min while breathing 100% oxygen. Because HBO exposure results in elimination of nitrogen gas (N 2 ) from tissues (Foster and Butler 2009), the animals were exposed to air at the same ambient pressure (303 kPa) for 7 min immediately after the HBO exposure. According to the exponential model proposed by Foster et al. (1998), and using a critical tissue half-time (whole rat) of 10 min (Lillo and Parker 2000), this would cause N 2 tissue tensions to differ ≤0.7 kPa between the groups prior to the dive. The rats were then decompressed at a rate of 200 kPa min À1 back to 101 kPa. The animals in the HBO45 and HBO180 groups were allowed to rest in their cages, breathing normobaric air, for 45 and 180 min, respectively, before simulated diving. The diving (gr. I) and nondiving (gr. IV) groups were exposed to normobaric air in a similar chamber at the same time, whereas the HBO45 and HBO180 animals were exposed to HBO. Simulated diving and VGE detection The animals were compressed with air in a pressure chamber at a rate of 200 kPa min À1 from 101 to 709 kPa, breathing hyperbaric air for 50 min to obtain tissue saturation (Lillo and Parker 2000), and then decompressed linearly back to 101 kPa at a rate of 50 kPa min À1 . 
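The nitrogen bookkeeping behind the "≤0.7 kPa" statement above can be illustrated with a minimal single-compartment calculation in Python. This is only a sketch of the kind of exponential model referred to (Foster et al. 1998), under assumptions of our own: a single well-mixed compartment with a 10 min half-time, air treated as 79% N2, and compression/decompression transients ignored. It is not the authors' computation.

import math

HALF_TIME_MIN = 10.0                      # assumed critical whole-rat tissue half-time (Lillo and Parker 2000)
K = math.log(2.0) / HALF_TIME_MIN         # rate constant of the single exponential compartment
F_N2_AIR = 0.79                           # assumed N2 fraction of air; water vapour and CO2 corrections ignored

def step(p_tissue_n2, p_ambient, f_n2, minutes):
    # Exponential approach of tissue N2 tension (kPa) toward the inspired N2 tension.
    p_inspired = p_ambient * f_n2
    return p_inspired + (p_tissue_n2 - p_inspired) * math.exp(-K * minutes)

def hbo_pc_then_rest(rest_min):
    p = 101.0 * F_N2_AIR                  # start saturated on normobaric air
    p = step(p, 101.0, 0.0, 5.0)          # 5 min of 100% O2 at 101 kPa
    p = step(p, 303.0, 0.0, 38.0)         # 38 min of 100% O2 at 303 kPa (compression transient ignored)
    p = step(p, 303.0, F_N2_AIR, 7.0)     # 7 min of air at 303 kPa to reload N2
    p = step(p, 101.0, F_N2_AIR, rest_min)  # rest on normobaric air before the simulated dive
    return p

p_control = 101.0 * F_N2_AIR              # controls breathe normobaric air throughout
for rest in (45.0, 180.0):
    diff = hbo_pc_then_rest(rest) - p_control
    print(f"rest {rest:5.0f} min: tissue N2 differs from controls by {diff:+.2f} kPa")

With these illustrative assumptions the predicted tissue N2 tension of the HBO45 group ends up on the order of 0.6-0.7 kPa above the controls at the start of the dive, and essentially identical for HBO180, which is consistent with the figure quoted in the text.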
Immediately after diving, the animals were anesthetized with a mixture of; midazolam 0.5 mg 100 g À1 , fentanyl 5 lg 100 g À1 , and haloperidol 0.33 mg 100 g À1 , which was administered as one bolus subcutaneous injection. The pulmonary artery and ascending aorta were insonated for 60 min using a 10 MHz transducer connected to a GE Vingmed Vivid 5 scanner. Gas emboli appeared in the pulmonary artery and aorta as bright spots and were recorded for 1 min at discrete time points (15, 30, and 60 min). The data were stored and played back in slow motion for analysis, in which the images were then graded (scan grade 0-5) according to a previously described method by an observer blinded to the experimental condition of the rats (Eftedal and Brubakk 1997). Scan grades were converted to the number of emboliÁcm 2 Áheart cycle À1 as previously described by Nishi et al. (2003). Animals that did not survive the 60 min postdive observation period due to severe DCI were excluded from further analysis. Animals in Groups I-IV were handled equally except for the differences in pressure profiles and breathing gas compositions. Serum cTnT analysis After the 60 min postdive observation period, the abdomen was opened and blood from the abdominal aorta was collected into serum tubes. The serum used for the cTnT measurements was prepared by centrifugation at 10,000 rpm at 4°C after blood collection. A high-sensitivity cTnT assay (hs-cTnT; Roche Modular System E, Roche Diagnostics GmbH, Mannheim, Germany) was used to detect an elevation in cTnT (Giannitsis et al. 2010). This assay permitted the measurement of concentrations ≥10 ng L À1 (Omland et al. 2009). Preparation of myocardial tissue for mRNA and protein analysis Immediately after blood sampling, the thoracic cavity was opened and approximately 50 mg of myocardial tissue sections of the right and left ventricle were rapidly excised and rinsed in RNAlater buffer solution (Ambion Inc., Austin, TX). The tissue was then transferred to 1.5 mL fresh RNAlater and kept at room temperature for up to 4 h before storage at À80°C. To prepare the lysates for mRNA and protein analysis, the myocardial tissue was thawed at room temperature, weighed and then transferred into 5 mL round-bottom polystyrene tubes containing 10 volumes per tissue weight (lL mg À1 ) of RNeasy Fibrous Tissue lysis buffer (Qiagen, Valencia, CA) and was mechanically disrupted using an UltraTurrax rotor/stator (IKA Werke GmbH & Co., Staufen, Germany) until completely homogenized. The lysate was split into two equal volumes: one part was used for realtime reverse transcription polymerase chain reaction (qRT-PCR) analysis, where the total RNA was extracted on a Qiacube nucleic acid extractor using the RNeasy Fibrous Tissue mini kit (Qiagen) according to the manufacturer's recommendations; and the other part was used for western blotting analysis, where 1% protease inhibitor solution (Qiagen) was added into the lysate, and the sample was precipitated by adding an equal volume of 10% ice-cold TCA followed by incubation on ice for 20 min. After centrifugation (16000 g), the protein pellet was washed with 100% ethanol and then resuspended in 150 lL loading buffer (Invitrogen, Carlsbad, CA). Nppb and aB-crystallin gene expression Prior to the analysis, the total RNA concentration and purity was determined using a NanoDrop 2000 spectrophotometer (NanoDrop Technologies, Wilmington, DE) (Schroeder et al. 2006). 
The mRNA expression levels of the genes encoding the natriuretic peptide precursor B (Nppb) and the small heat-shock protein αB-crystallin were analyzed in the left and right cardiac ventricles for each of the five treatment groups (n = 6-8 from each group by random selection) with qRT-PCR using Qiagen QuantiFast FAM-labeled target probe assays with the QuantiFast Probe RT-PCR Plus kit (Qiagen) in a one-step qRT-PCR normalized against MAX-labeled Hprt. The PCR was run on a C1000 thermal cycler (Bio-Rad, Pleasanton, CA) with a CFX96 Optical Reaction Module and analyzed with the CFX Manager Software version 2.0 using the ΔΔCT method, in which the relative quantity of the target genes was normalized against the relative quantity of the control across the samples (Livak and Schmittgen 2001).

αB-crystallin protein expression

Resuspended protein lysates from each of the five treatment groups (n = 5 from each group by random selection) were analyzed using 1D polyacrylamide gel electrophoresis in 10% NuPAGE Novex Bis-Tris gels (Invitrogen) in 1× MOPS (3-(N-morpholino)propanesulfonic acid) electrophoresis buffer. The gels were run at the same time before they were electroblotted onto nitrocellulose membranes and blocked for 1 h in 5% fat-free dry milk diluted in phosphate-buffered saline (PBS) + 0.1% Tween (PBST). The membranes were first incubated with a primary antibody against αB-crystallin (ADI-SPA-222; Enzo Life Sciences, Farmingdale, NY) for 1 h in blocking buffer, washed for 3 × 10 min in PBST, further incubated for 1 h with a secondary IRDye-conjugated antibody in PBST, and finally washed 3 × 10 min in PBST and 1 × 10 min in PBS. As a loading control, the membranes were then treated with an antibody against β-tubulin (AB6046; Abcam, Cambridge, MA). All membranes were probed with the same batch of β-tubulin control to ensure that the protein quantification was performed equally across all samples and gels. The fluorescence signals were detected using an Odyssey scanner (Li-Cor Biosciences, Lincoln, NE), and the signal intensities and relative protein quantification were calculated using Image Studio 2.0 (Li-Cor Biosciences).

Statistical analysis

The data were expressed as the median with ranges or as the mean ± SEM. We employed nonparametric tests due to the limited number of rats. The Mann-Whitney U-test and Kruskal-Wallis test were used to evaluate the differences in VGE loads and the cardiac stress markers, cTnT, Nppb, and αB-crystallin, between the groups. Fisher's exact test was used to evaluate the ratio of animals in each group, and the ratio of animals with low (grades 0-3) or high (grades 4-5) bubble grades, that had cTnT values above the detection limit. Selected bivariate relationships were examined using Spearman's rank correlation test. P < 0.05 was considered statistically significant. On the basis of the estimates obtained from previous studies (Wisløff and Brubakk 2001; Wisløff et al. 2004), 12 rats in each of the three diving groups would provide a power of 0.86.

VGE loads detected by ultrasonography

The diving protocol resulted in a 25% mortality rate in all the animal groups during the first hour after diving. The animals that died had massive amounts of VGE with a scan grade of 5 (on a scale from 0 to 5), or ~10 emboli·cm−2·heart cycle−1. HBO-PC had no effect on VGE formation, which was measured as the maximum amount of emboli·cm−2·heart cycle−1 (Dive: 3.9, HBO45: 4.1, HBO180: 4.4, P = 0.92, n = 12 in each group, Fig. 1).
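The relative-quantification step described above (the ΔΔCT method of Livak and Schmittgen 2001) amounts to a few lines of arithmetic. The sketch below uses made-up Ct values and assumes, for illustration only, that the unexposed group serves as the calibrator; neither the numbers nor the choice of calibrator are taken from the study.

import statistics

def fold_change(ct_target, ct_reference, calibrator_delta_ct):
    # Livak and Schmittgen: 2^(-ΔΔCt), with ΔCt = Ct(target) - Ct(reference gene)
    delta_ct = ct_target - ct_reference
    return 2.0 ** (-(delta_ct - calibrator_delta_ct))

# Made-up (Ct target, Ct Hprt) pairs; the study's real Ct values are not given in the text.
unexposed = [(24.1, 20.0), (24.3, 20.1), (23.9, 19.8)]   # assumed calibrator group
diving = [(22.0, 20.2), (21.7, 19.9), (22.3, 20.1)]

calibrator_delta_ct = statistics.mean(t - r for t, r in unexposed)
nppb_fold_changes = [fold_change(t, r, calibrator_delta_ct) for t, r in diving]
print("Nppb fold change vs. unexposed:", [round(x, 2) for x in nppb_fold_changes])

Each fold change is 2 raised to minus the sample's ΔΔCt, i.e. the target Ct normalized first to Hprt within the sample and then to the calibrator group.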
No gas emboli were detected in the ascending aorta of any of the animals that survived the observation period.

Figure 1. The maximum number of venous gas emboli (VGE) in the pulmonary artery after diving, measured as emboli·cm−2·heart cycle−1, did not differ between the three groups of diving rats (n = 12 in each group). HBO45/HBO180: hyperbaric oxygen preconditioning (HBO-PC) followed by a 45 or 180 min rest interval between HBO-PC and diving. The data are presented as the means ± SEM.

Figure 2. Hyperbaric oxygen preconditioning resulted in reduced postdiving serum cardiac troponin T levels. Serum cardiac troponin T (cTnT) levels (ng L−1) were higher in the diving (n = 10) compared to the HBO180 (n = 7), nondiving (n = 18) and unexposed (n = 9) animals, but were not different from the HBO45 (n = 9) animals. The cTnT level in the animals with values below the detection limit (10 ng L−1) was established at 9 ng L−1. The data are presented as the median ± interquartile range. All the animals were handled equally and differed only in the pressure and breathing gas exposures, except for the unexposed animals, which were kept shielded in their cages until further blood sampling. HBO45/HBO180: hyperbaric oxygen preconditioning (HBO-PC) followed by a 45 or 180 min rest interval between HBO-PC and diving. *P < 0.05, **P < 0.001 significantly different from the diving animals.

Preconditioned animals appeared to tolerate higher VGE loads compared to non-preconditioned animals. For example, in animals with a scan grade of less than 4 (n = 18), elevated cTnT levels were found in only 1/12 of the preconditioned animals in contrast to 4/6 of the non-preconditioned animals (P = 0.02). All the animals with a scan grade ≥4 (n = 9) demonstrated elevated levels of cTnT.

αB-crystallin gene and protein expression

None of the groups showed altered levels of αB-crystallin mRNA expression in cardiac tissue after simulated diving. However, the relative protein level of αB-crystallin in the non-preconditioned diving animals (gr. I) was increased by 4.0-fold, 6.9-fold, and 12.6-fold in the right ventricle compared to the HBO180 preconditioning (gr. III, P = 0.02), nondiving (gr. IV, P < 0.01) and unexposed animals (gr. V, P < 0.01, Fig. 4). In addition, αB-crystallin in the right ventricle was positively correlated with cTnT (r_s = 0.72, P = 0.00005).

Discussion

The primary findings of this study were that strenuous simulated diving with subsequent VGE formation resulted in increased levels of cardiac stress marker expression (cTnT, Nppb, and αB-crystallin) in rat serum and cardiac tissue. Moreover, HBO-PC prior to the dive appeared to provide cardioprotection, as indicated by the lower expression levels of these stress markers. The HBO-PC effect was more pronounced when there was a longer (180 min compared to 45 min) interval between HBO-PC and diving. In addition, a strong positive correlation was found between the amount of VGE and stress marker levels in the serum and cardiac tissue.

Figure 3. Cardiac tissue level of brain natriuretic peptide precursor (Nppb) was increased after simulated diving. Nppb mRNA expression was increased in the left cardiac ventricle in all the diving animal groups compared to unexposed animals. Differences were shown as the relative fold expression compared to the control gene Hprt.
All the animals were handled equally and differed only in the pressure and breathing gas exposures, except for the unexposed animals, which were kept shielded in their cages until further tissue sampling. HBO45/HBO180: hyperbaric oxygen preconditioning (HBO-PC) followed by a 45 or 180 min rest interval between HBO-PC and diving. Values were expressed as the means AE SEM, n = 6-8 in all groups. *P < 0.05 significantly different from the unexposed animals. Relative protein levels of aB-crystallin in the right cardiac ventricle of diving animals were increased compared to HBO180, nondiving and unexposed animals. The differences were shown as the relative protein quantity compared to the b-tubulin control. For each group, a representative western blot from one animal is shown. All the animals were handled equally and differed only in the pressure and breathing gas exposures, except for the unexposed animals, which were kept shielded in their cages until further tissue sampling. HBO45/HBO180: HBO-PC followed by a 45 or 180 min rest interval between HBO-PC and diving. Values were expressed as the means AE SEM, n = 5 in all groups. *P < 0.05, **P < 0.01 significantly different from the diving group. Elevated serum levels of cTnT induced by simulated diving were positively correlated with VGE loads. Cardiac troponins are components of the contractile apparatus in cardiomyocytes and demonstrate nearly absolute cardiac tissue specificity and high clinical sensitivity (Omland et al. 2009;Thygesen et al. 2012), and are the preferred biomarkers for the diagnosis of cardiac injury. Thus, our findings indicated that VGE formation after diving induced cardiac injury. The diving protocol resulted in a 25% mortality rate due to the massive amounts of VGE (scan grade 5), and all the surviving animals with a scan grade ≥4 showed elevated cTnT levels. Experiments by Butler and Hills (1985) and Vik et al. (1990) demonstrated a proportional relationship between VGE loads and impeded gas exchange with systemic hypoxia and cardiac overload. These previous findings and the increase in cardiac stress markers in this study, indicate that the diving protocol induced severe decompression stress with cardiorespiratory DCI. However, in the rats that died, there may of course have been injuries to other organ systems (e.g., brain and spinal cord) that contributed to the cause of death. However, in surviving rats no gas emboli were detected in the systemic circulation; injuries to other organ systems than the cardiopulmonary are therefore less likely. Cardiorespiratory manifestations of DCI are rare and have been reported to occur in approximately 2-6% of recreational diving accidents (Francis and Mitchell 2003a;Vann et al. 2011). Such manifestations only occur after highly provocative dives and represent a lethal form of DCI. If not treated immediately, acute heart failure may progress into cardiorespiratory collapse and death (Francis and Mitchell 2003a). Currently, there have been no published controlled diving studies of troponin release in animals or humans, but elevated troponins due to diving has been previously described in a case report (Chenaitia et al. 2010). Thus, this is the first study to demonstrate an association between gas emboli formation and troponin release after diving. HBO-PC 180 min prior to simulated diving resulted in lower cTnT levels compared to non-preconditioned diving animals (Fig. 2) despite no differences observed in the VGE loads (Fig. 1). 
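The nonparametric comparisons and correlations used throughout these results (Mann-Whitney U between groups, Fisher's exact test on the 1/12 versus 4/6 proportions, Spearman's rank correlation between VGE load and cTnT) can be reproduced with scipy.stats along the following lines. The arrays are placeholders chosen only to make the script runnable; they are not the study's measurements.

import numpy as np
from scipy import stats

# Placeholder per-animal values (NOT the study's data), only to make the analysis runnable.
vge_dive = np.array([2.1, 3.5, 4.0, 4.4, 3.0, 5.1, 2.8, 3.9, 4.6, 3.3])              # max VGE load
ctnt_dive = np.array([12.0, 35.0, 60.0, 80.0, 25.0, 150.0, 18.0, 55.0, 95.0, 30.0])  # ng/L
ctnt_hbo180 = np.array([9.0, 9.0, 12.0, 10.0, 9.0, 14.0, 11.0])

# Spearman rank correlation between VGE load and cTnT within the diving group.
r_s, p_corr = stats.spearmanr(vge_dive, ctnt_dive)

# Mann-Whitney U-test comparing cTnT between the diving and HBO180 groups.
u_stat, p_mwu = stats.mannwhitneyu(ctnt_dive, ctnt_hbo180, alternative="two-sided")

# Fisher's exact test on elevated cTnT among low bubble grades: 1/12 preconditioned vs. 4/6 non-preconditioned.
odds, p_fisher = stats.fisher_exact([[1, 11], [4, 2]])

print(f"Spearman r_s = {r_s:.2f} (P = {p_corr:.3g})")
print(f"Mann-Whitney U = {u_stat:.1f} (P = {p_mwu:.3g})")
print(f"Fisher's exact P = {p_fisher:.3g}")

None of this changes the finding reported above: preconditioned animals showed lower cTnT levels despite comparable VGE loads.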
Thus, it appears that HBO-PC protects the heart against injury from decompression-induced VGE. This novel observation is consistent with the findings of Martin and Thom (2002) and Butler et al. (2006), who demonstrated that similar HBO-PC protocols prior to simulated diving protected rats against severe decompression stress without reducing gas emboli formation. Martin and Thom found that HBO-PC reduced DCI manifestations from the central nervous system, and Butler et al. (2006) showed that HBO-PC resulted in fewer overall signs of DCI compared to control animals, as well as lower levels of inflammatory markers in the blood, lungs, and urine after the dive. In this study, two of the animals in the HBO45 group exhibited very high VGE loads (scan grades 4 and 5) throughout the entire 60 min postdiving observation period. These two rats had the highest cTnT levels measured, which may explain why the HBO45 group did not show statistically significantly lower serum cTnT levels than the diving group. Among the animals with low-to-moderate VGE loads (scan grades 0-3), significantly more animals showed elevated cTnT levels in the non-preconditioned group (67%) than in the preconditioned groups (8%). However, all the animals with high VGE loads (scan grade 4 or 5) exhibited elevated cTnT levels. Thus, HBO exposure appeared to protect the heart against low-to-moderate loads of VGE; however, this protective effect was not evident when the VGE loads were high. Nppb expression in the left ventricle was increased in all the diving groups compared to control animals (Fig. 3), and this increase was associated with increased cTnT and VGE levels. Nppb mRNA encodes BNP, a well-established biomarker of acute heart failure (Braunwald 2008). This natriuretic peptide is synthesized and released by cardiomyocytes in response to hemodynamic stress when the ventricles are subject to increased wall tension, and it acts to oppose the physiological abnormalities that occur during cardiac overload and acute heart failure. Nppb expression has been shown to increase within 1 h in response to cardiac overload (Nakagawa et al. 1995). Thus, the increase in Nppb expression in the left cardiac wall after simulated diving in this study indicated that the heart was exposed to increased wall tension and cardiac overload. During and after decompression, VGE are thought to form or grow on the endothelial surface in peripheral tissues until they are swept away by the bloodstream and trapped in the small vessels of the pulmonary circulation (Stepanek and Webb 2008). In animal studies, VGE trapped in the lungs caused increased pulmonary artery pressure, which may result in cardiac overload and heart failure (Bove et al. 1974; Vik et al. 1990). Although this phenomenon has not been shown in human studies (Valic et al. 2005), observations after highly provocative dives with severe cardiorespiratory DCI indicate that heart failure and cardiac arrest may occur due to a massive VGE load in the pulmonary vascular bed (Muth and Shank 2000; Francis and Mitchell 2003b). The elevation of cardiac stress markers in this study reflected VGE-induced cardiac stress and injury but provided no indication of the mechanisms behind this elevation. However, it is well known that cardiac ischemia, pulmonary embolism, and acute heart failure may all result in elevated cTnT and BNP levels (Giannitsis et al. 2000, 2010; Thygesen et al. 2012).
Additional measurements of cardiopulmonary hemodynamics (e.g., pulmonary artery pressure and arterial oxygenation) in this study could have added further information about these mechanisms. The concept of cardiac preconditioning has been extensively studied (Hausenloy and Yellon 2011), and HBO-PC is one strategy used to induce cardiac protection. Exposure to HBO is thought to induce a cardiac stress response of reduced intensity that initiates cardioprotective responses, thereby reducing the damage caused by subsequent, more severe stress. The functional basis for the protective effects of HBO preconditioning is only partially understood. Protective mechanisms of HBO involve increased oxidative stress, which induces heat-shock proteins, nitric oxide production, and antioxidant enzymes, and modulates inflammatory responses (Nishizawa et al. 1999; Martin and Thom 2002; Cabigas et al. 2006; Thom 2009; Christians et al. 2012). In this study, we investigated the effect of HBO preconditioning on the small heat-shock protein αB-crystallin, which interacts with cardiac proteins and is likely to play a key role in protecting the heart from cardiac overload and ischemia (Martin et al. 1997; Latchman 2001; Kumarapeli et al. 2008; Christians et al. 2012). αB-crystallin can protect the heart against various stresses by binding to and stabilizing cytoskeletal structures. In addition, αB-crystallin exhibits antiapoptotic and immunomodulatory properties, and administration of αB-crystallin is likely to diminish the extent and severity of ischemic lesions, including cardiac infarction, stroke, and arterial occlusion (Ousman et al. 2007; Arac et al. 2011). We found that αB-crystallin protein levels were increased in cardiac tissue after diving, and that HBO-PC rats displayed lower αB-crystallin levels than non-preconditioned rats. Thus, the high αB-crystallin levels in the heart after diving may reflect that the heart had been exposed to a high level of diving-induced stress. Furthermore, the significantly lower αB-crystallin levels after HBO-PC and diving may reflect that HBO-PC induced a mild stress to the heart, activating the cardioprotective properties of αB-crystallin. Activation of αB-crystallin prior to the dive could have prevented a subsequent larger increase in αB-crystallin levels after diving, because the heart was already preconditioned against the stress from diving. However, αB-crystallin mRNA expression levels were not significantly affected by HBO-PC or simulated diving. Therefore, the increased levels of αB-crystallin in response to diving are most likely due to protein stabilization and/or activation rather than de novo transcription. On the basis of previous and present findings, it is likely that αB-crystallin plays a central role in protecting the heart against injury from diving-induced VGE. A limitation of this study was that the αB-crystallin protein samples from the different groups were run on different membranes, and caution should therefore be used when interpreting these data. In this study, cardiac stress markers were positively correlated with VGE. However, a recent study showed that even presumably safe dives with few VGE and no DCI symptoms were associated with significant cardiac strain and increased levels of BNP (Marinovic et al. 2010). Grassi et al. (2009) showed that a water dive considered safe resulted in increased plasma BNP levels, whereas the same dive simulated in a dry hyperbaric chamber did not affect BNP levels.
While underwater, divers are exposed to immersion, resulting in significant hemodynamic changes (Pendergast and Lundgren 2009). When investigating how decompression-induced VGE formation affects the cardiovascular system, factors known to affect hemodynamics must be controlled. Simulated diving in a dry pressure chamber, such as the one used in this study, eliminates the hemodynamic effects of immersion, enabling differentiation between the effects of immersion and of VGE on the cardiovascular system. However, a limitation of chamber diving is that dry diving is not equivalent to water diving with regard to cardiorespiratory stress. HBO exposure is the main treatment for gas embolism and DCI after diving (Vann et al. 2011); however, vascular gas embolization can also result from a reduction in ambient pressure in caisson work, aviation, extravehicular activity during spaceflight, or escape from pressurized vessels (Gennser and Blogg 2008; Vann et al. 2011), as well as from gas entry into the vasculature during in-hospital procedures, for example, cardiac surgery with extracorporeal bypass and through central venous and hemodialysis catheters (Tibbles and Edelsberg 1996; Muth and Shank 2000). Thus, the implication of this study is that HBO may be used not only in the treatment of DCI after diving but also prophylactically to prevent DCI and/or injury due to vascular gas embolization during in-hospital procedures. However, whether these novel findings can be translated to humans requires further investigation. A better understanding of HBO-PC mechanisms is important because it will facilitate preventive measures that increase the safety of persons at risk of vascular gas embolization. In conclusion, we found that the cardiac stress markers cTnT, Nppb, and αB-crystallin are elevated in rat serum and cardiac tissue after high loads of VGE induced by simulated diving, and that there is a strong positive correlation between stress marker levels and postdive VGE loads. We have further shown that HBO-PC may prevent cardiac injury induced by gas embolism, as indicated by the reduced levels of these cardiac stress markers.
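As a brief methodological aside, the sketch below illustrates, with purely hypothetical numbers rather than the study's data, how two of the cTnT-related computations reported above could be reproduced: substituting a fixed value (9 ng L−1) for measurements below the assay detection limit (10 ng L−1), and computing a Spearman correlation such as the one reported between right-ventricular αB-crystallin and cTnT. This is not the authors' code.

```python
# Hypothetical sketch (not the authors' code); all values are placeholders.
import numpy as np
from scipy.stats import spearmanr

DETECTION_LIMIT = 10.0   # ng/L, assay detection limit stated in the text
SUBSTITUTE = 9.0         # fixed value assigned to censored (below-limit) samples

# NaN marks serum cTnT measurements that fell below the detection limit
ctnt_raw = np.array([np.nan, 14.0, 32.5, np.nan, 21.0, 55.0, 12.0])
ctnt = np.where(np.isnan(ctnt_raw), SUBSTITUTE, ctnt_raw)

# Hypothetical relative aB-crystallin protein levels for the same animals
ab_crystallin = np.array([0.8, 1.1, 4.2, 0.9, 2.5, 6.3, 1.0])

rho, p = spearmanr(ab_crystallin, ctnt)
print(f"Spearman r_s = {rho:.2f}, P = {p:.5f}")
```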
6,608
2013-11-01T00:00:00.000
[ "Biology" ]
Factors affecting Malaysian university students' purchase intention in social networking sites: This study applied the unified theory of acceptance and use of technology 2 to examine the acceptance and use of social networking sites in a marketing setting. A quantitative method was used, with 370 regular higher education students in Malaysia as respondents. The findings revealed that performance expectancy (PE) and hedonic motivation were the main factors influencing users' online purchase intention (PI) through social networking sites (SNSs) in Malaysia. As for the moderating influences of gender and age, the results showed that gender significantly moderated the proposed associations between these four elements and online PI, while the moderating effect of age was recognized only for PE. The findings of this research offer practitioners better insights that will aid them in developing effective online marketing strategies to attract online purchasing users through SNSs.

PUBLIC INTEREST STATEMENT
This study was carried out to raise marketers' awareness of the status of the social network marketing concept. The research specifically looks into the student market segment's preferences regarding the use of social networks to obtain marketing information. The current research can also help researchers gain a better understanding of the unified theory of acceptance and use of technology 2 (UTAUT2) model as a confirmed and appropriate model for measuring students' acceptance and use of social networking sites for online purchase intention. This research found that the UTAUT2 can be generalized to other e-commerce and social media contexts. At the end of this study, we expect to identify several benefits for marketers and student clienteles that can be achieved through the use of social network marketing. The public and administration segments would both profit from this study by gaining a better understanding of the purchase decision-making behavior of university students in Malaysia.

Introduction
Social networking sites (SNSs) are now a developing phenomenon in marketing. Traders increasingly utilize social media to target teenagers and youths, and SNSs are a main venue in that trend (Dervan, 2015). Dealers are starting to appreciate the use of SNSs as part of their selling tactics to reach clients (Tanuri, 2010). With changing consumer behavior, purchase behavior through and acceptance of social network commerce are emerging. Currently, many e-commerce sites use SNSs as marketing tools to retain customers, to gain an effective understanding of their online purchase intentions (PIs), and to support more up-to-date and precise purchase choices. They utilize SNSs as useful tools for consumers to exchange shopping information and experiences. SNSs are web-based, individual-centered services, platforms, or sites that offer businesses an opportunity to engage and interact with potential clients, inspire an increased feeling of friendliness with clients, and build vital relationships with potential clients (Davis Mersey, Malthouse, & Calder, 2010). There are numerous SNSs available in the market, such as Facebook, Twitter, MySpace, and Google Plus. Individuals and businesses acquire benefits from these media, which act as a platform to offer goods and services or to stay in contact with their acquaintances or clients.
For example, Facebook is a novel shape of e-commerce in the twenty-first century as it delivers novel worth of facilities to web operators to express themselves and network with others (Laudon & Traver, 2015). By employing Facebook, enterprises and persons can upload the picture of their goods or amenities with a full account of it, and customers can buy the goods that they want by only commenting on the comment inbox. This is the way where buyer and sellers use Facebook to lead their e-commerce. Twitter is a different SNS employed by most individuals at present. It created a space where enterprises do direct e-commerce, share and send information to clients and offer services and goods for clients (McIntyre, 2009). It comprises comments, remarks, views of the spectators, and a search engine that mines those tweet outlines. Through Twitter, dealers can rapidly respond to the clients' demands. The e-commerce change via SNSs has also caused countless chances for Malaysians. The average age for Malaysians is about 25-26 years old, which shows that Malaysia has youth who may simply adopt information technology in doing small business (Zaremohzzabieh et al., 2016). Furthermore, the majority of Malaysian people are "techno savvy" and depend intensely on social media for different reasons comprising online shopping (Valentine & Powers, 2013). Previous study has confirmed that SNSs in purchasing goods online play the most significant role on the purchasing preferences for the Malaysian young Muslim consumers (Haque, Sarwar, Yasmin, Tarofder, & Hossain, 2015). This indicates that the Malaysian youth has the ability to accept e-commerce in the framework of SNSs. Hence, it is necessary for consumer behavior investigators and e-retailers to have a greater identification on the issues that influence the online PI of the Malaysian youths, mainly students. Until now, there are numerous technology acceptance models that have been established to investigate factors that affect consumers' online PI via social media (Escobar-Rodríguez, Carvajal-Trujillo, & Monge-Lozano, 2014;Nunkoo, Juwaheer, & Rambhunjun, 2013;Sin, Nor, & Al-Agaga, 2012). In consumer context, the unified theory of acceptance and use of technology 2 (UTAUT2), which was advanced by Venkatesh, Thong, and Xu (2012), is usually applied to incorporate several advances from the original UTAUT model to explain the preparation of online PIs and actual online purchases. This research applies UTAUT2 model as there have been no previous researches that apply the UTAUT2 model in SNSs related studies, especially in developing countries like Malaysia. Therefore, this study puts forward the UTAUT2 model to explain online PI by means of SNSs that includes the four confirmatory factors of the UTAUT2 model (i.e. performance expectancy (PE), effort expectancy (EE), social influence (SI), and hedonic motivation (HM)) and the other two moderator factors (i.e. age and experience). Theoretical background and hypotheses testing Originally, UTAUT2 was derived from UTAUT model, which was suggested by Venkatesh and Davis (2000). The UTAUT2 offers a description for the acceptance and use of information communication technologies (ICTs) by clients (Venkatesh et al., 2012) since the UTAUT was originally devised to clarify the causes that affect the acceptance and adoption of ICTs by workforces. In numerous studies, it has been applied in a consumer context. 
Examples of applications of the UTAUT2 in consumer contexts include the consumer adoption of low-cost carrier web pages and social commerce technologies (Abed, Dwivedi, & Williams, 2015). Compared to the initial UTAUT model, which the results of some earlier studies clearly show the model accounts for nearly 25% of the variance in behavioral intention (BI; Zaremohzzabieh, Samah, Omar, Bolong, & Shaffril, 2014), the expansions suggested in UTAUT2 made a large development in the variance described in BI (56-74%) and information technology use (Venkatesh et al., 2012). The UTAUT2 model integrates three new constructs and fresh associations (Venkatesh et al., 2012) and redefines the seven variables from the perspective of the consumer instead of defining them from the perspective of the employees of an organization (Venkatesh et al., 2012). It considers that the discrete consumer's intention to use ICTs is influenced by PE, EE, SI and facilitating conditions (FC), HM, price value (PV), and habit (H). Furthermore, PE, EE, SI, and PV constructs affect PI in adopting and using a technology, while FC and H constructs are antecedents of actual technology use. The UTAUT2 model posits individual difference variables, such as age, gender, and experience (exclude voluntariness, which is part of the initial UTAUT) to moderate several UTAUT2 associations. According to the last paragraph, the research pattern of this paper is advanced in which PI to use SNSs as a dependent variable. It is defined as customer's motivation with intention to purchase through SNSs. PE, EE, SI, and HM, which are four constructs of the UTAUT2 model, are independent variables of this research model. In this study, H and FC constructs are not theorized to influence PI in our model while these two constructs only determine technology use. Furthermore, SNSs are introduced as an affluent-learning curve, unrestricted-to-use form regardless of through PC or cell phones, and needing less non-stop time and energy (Xu, 2014). These structures permit the SNS users to have extra support for erudition, device, setting, and time to endure purchasing via SNSs. For this reason, this study assumes the impact of FC can be low in the recent research perspectives and we exclude this variable from our research model. In contrast, the existing practical fact on the effect of FC on the acceptance of ICTs is contradictory. The construct PV also has not been adapted to the research model as marketers can get to their intended consumers at a cheaper price, occasionally even with zero cost through SNSs, and online purchasing through SNSs does not represent a monetary cost for the consumer. However, the role of age and gender of participants as moderator variables are included in our model, we have not considered experience as a moderator variable given that our analysis is carried out on participants' own experience in online purchasing. Performance expectancy (PE) PE is pronounced as the level to which a singular considers that the services which online purchase via SNSs provide will satisfy his or her needs. In UTAUT, PE shares similar definition with perceived usefulness (PU) in the model of Technology acceptance model (TAM; Davis, 1989). It is the greatest influencing factor of intention in customer context as stated by Venkatesh et al. (2012). In addition, PE has been empirically validated in online consumer behavior. 
Using structural equation modeling (SEM), Sun, Cao, and You (2010) in China established that PU (or PE) has a positive correlation with BI to use e-commerce. This finding was supported by Amaro and Duarte (2013), who identified PE as an influential element in predicting online-travel purchasing activities. In another study, San Martín and Herrero (2012) also employed UTAUT as a research framework and showed that PE has a significant correlation with the adoption of online-travel purchasing. Moreover, Nawi, Nasir, and Al Mamun (2016) studied the relationships among UTAUT constructs and how they were affected by social media as a small business platform among students in Malaysia; they confirmed that students with high PE have high BIs to use social media in their small businesses. Based on earlier research results, there is a significant correlation between PE and BI in the Malaysian context (Amin, 2007). Thus, this study suggests the first hypothesis:

H1: PE has a significant positive correlation with the university students' PIs through SNSs.

Effort expectancy (EE)
EE is identified as the degree of ease associated with the use of online purchase services via SNSs. Due to the simplicity of SNSs, EE was expected to play a noteworthy role in the immediate use of SNSs by young consumers. Venkatesh, Morris, Davis, and Davis (2003) drew on the perceived ease of use (PEU) construct to define EE as the degree of ease associated with technology use. An earlier empirical study (Amaro & Duarte, 2013) supported PEU (EE) as having a positive correlation with online-travel PIs. Based on the UTAUT model, Mandal and McQueen (2012) claimed that EE has a significant correlation with BI to adopt social media among microbusiness owners. Furthermore, Hong, Sin, Lun, and Zhou (2015) carried out a survey among Malaysian university students and, using a Statistical Analysis System (SAS) approach, found that EE significantly influences university students' PIs to adopt Facebook-Commerce. Following these previous findings rooted in the UTAUT/UTAUT2 model, this study states the second hypothesis:

H2: EE has a significant positive correlation with the university students' PIs through SNSs.

Social influence (SI)
SI is described as the degree to which a person is affected by other people (e.g. family and friends) around him or her in deciding whether or not to accept and use online purchase services via SNSs. This study attempts to improve understanding of SI on PI within the context of SNSs, as SNSs can be a significant source of SI on PI. SI is another construct that has appeared in various forms in the TAM model. According to Venkatesh et al. (2003), subjective norm (SN) in the TAM model is a comparable construct that captures a similar concept to SI. Furthermore, Cheung and Lee (2010) found that SN is an important factor in determining BI to use SNSs for social connections and relations. Brocke, Richter, and Riemer (2009) claimed that students' social reasons for connecting with their friends and peers are certainly a factor in SNS acceptance. In addition, Litvin, Goldsmith, and Pan (2008) claimed that social media are regarded as a vital information source when customers are making purchasing choices. Lee, Qu, and Kim (2007) in Korea discovered that SN (SI) had a significant influence on customers' intentions to purchase online tickets. In Malaysia, Hong et al.
(2015) indicated that SI has a significant correlation with university students' PI to accept Facebook-Commerce. Accordingly, the following hypothesis is suggested:

H3: SI has a significant correlation with the university students' PIs through SNSs.

Hedonic motivation (HM)
HM is identified as the pleasure derived from using the online purchase services via SNSs. In incorporating the HM (intrinsic motivation) construct into the UTAUT2 model, Venkatesh et al. (2012) claimed that the purpose is to complete the UTAUT model, which accounts only for extrinsic motivation, via PE. In the context of consumer use of information technology, they also stated that both the PE and HM constructs are crucial factors of information technology use. According to Brown and Venkatesh (2005), HM has been included as an important factor in many consumer behavior studies and previous IS research. In an earlier study, the HM construct was also found to be an important factor in information technology use in the context of consumer behavior (Childers, Carr, Peck, & Carson, 2002). Furthermore, To, Liao, and Lin (2007) emphasize that the HM construct is increasingly significant due to the recognizable motivations attracting customers to visit sellers' online websites. In Taiwan, Liao, Fei, and Chen (2007) found that adults' hedonic online shopping motives not only affect search intention but also indirectly affect PI. Chiu, Wang, Fang, and Huang (2014) demonstrated that HM is positively correlated with consumers' PIs in online shopping. From these past findings, the following hypothesis is proposed:

H4: HM has a significant correlation with the university students' PI through SNSs.

Moderator effects: gender and age
Venkatesh et al. (2012) suggested that the effects of PE, EE, and SI on information technology use can differ according to age and gender in the UTAUT and UTAUT2 models. In Taiwan, Liu, Chang, Huang, and Chang (2016) conducted an investigation to identify the moderating effects of gender and age on the UTAUT model; the results showed a significant relationship between gender and age and the model. Concerning gender, Cha (2011) found that females' attitudes toward SNSs and toward shopping activities on SNSs are more positive than males'. Using gender and age as moderating variables in an adapted UTAUT model, Rahman, Jamaludin, and Mahmud (2011) found that the correlation between EE and the usage of a digital library was moderated by gender and age. Moreover, a study by Lee and Kim (2013) suggests that age moderates the effects of EE and SI on PI for the App-Book. Meanwhile, a study by Tai and Ku (2013) specified that gender was a moderator between SI and BI in the context of m-stock trading. The moderating roles of age and gender therefore provide important novelty for the current study. Thus, the last hypothesis is suggested:

H5: Gender differences and age are able to moderate the correlation between the independent variables and the university students' PI through SNSs.

Participants
Four hundred and thirty-four university students from four Malaysian higher education institutions joined the research. Of the 380 questionnaires returned, only 370 were completely answered and could be used for the analysis. Among the participants, 39.2% were male and 60.8% were female. In terms of age categories, the majority of respondents were between 18 and 24 years old (55.7%), while 44.3% were aged above 24.
Matched with conventional innovation diffusion analyses (Rogers, 2003) that uncover early users of technological novelties as characteristically younger in age, research results in the context of consumer are consistent. With regard to ethnicity, respondents were Malay (54.3%, n = 201), Chinese (29.2%, n = 108), Indian (7.0%, n = 26), and others (9.5%, n = 35). Of the 370 participants, 49.5% (n = 183) hold a bachelor's degree, 34.3% (n = 127) have a master's degree, and 3.2% (n = 12) hold PhD degree. According to the level of income, the respondents with the monthly income higher than 4000 MYR per month comprised one-half of the income group (50.0%), followed by those with monthly income within the range lower than 1000 MYR per month (17.6%). The results of the study also showed the most prevalent SNS among the respondents is Facebook where 89.2% are using it, followed by Twitter (25.1%), LinkedIn (5.1%), Instagram (2.4%), and others (27%). Almost, majority of the respondents (60.3%) use SNSs less than 3 h a day. Only 1.6% does over 16 h. In addition, all respondents have at least purchased online once (during the last six months at the time of the survey), however, it appears as most of them have experience shopping online as respondents purchased variety of products online in which purchasing cloths was more prevalent (56.2%) category from the seven products researched. The online purchase of flight ticket was the second prevalent kind of products (39.7%) followed by movie ticket (35.9%), respectively. The respondents' favorite buying was contrast with the account from Marketing Interative.com (2014) which stated that the most often bought item by online purchasing in Asia-Pacific, included airline tickets and reservations 59%, clothing 57%, tours and hotel reservations 53%, and event tickets 50%. Other than the items listed above, the participants also bought accessories, cosmetics goods, electronic gadgets, and books using SNSs. The results of the study confirm the earlier research conducted by Sin et al. (2012) in Malaysia which included clothing and accessories 52.5%, travels/hotel arrangement 37.7%, books, magazine, newspapers 27.6%, concert or movie tickets 25.6%, computer software and hardware 20.2%. The findings of this study revealed that clothes and e-tickets have been marked as a needed product or service and movie ticket revealed as an interesting service. Procedure A quantitative study was employed to collect information precisely from students in academies placed in the Selangor, Klang Valley. According to previous studies, university students account for a major part of the population of social network users (Smith & Caruso, 2010) which implies they can understand the content in SNSs well and its capacities. Besides that, several researchers have suggested that student samples are a major target market for online marketers (e.g. Ha & Stoel, 2009). In other words, the likelihood of students being significant online consumers is high indicating the selected sample will provide sufficient information about how online consumers behave in SNSs. Additionally, along with the growth of educational services in Malaysia, it is noteworthy for traders and consumer behavior investigator to distinguish Malaysian students' populace viewpoint toward online marketing because of their function in online trading activities in Malaysia (Sabri et al., 2008). As a result, choosing students as our sample was suitable. 
A multistage cluster sampling technique was utilized for this study, with the information obtained from four selected universities located in the Klang Valley in Malaysia. Altogether, the four universities sampled in this study were: Universiti of Malaya, Universiti Putra Malaysia, Universiti Multimedia, and Sunway University. The sample was filtered through a qualifying question at the beginning of the survey, asking whether the respondent had purchased a product using SNSs, as this study targets respondents who have experience in online purchasing. Accordingly, respondents with experience in purchasing products using SNSs were directed to move to the next sections. Otherwise, a response of having no prior online product purchase experience was coded with the number two. The data-set with responses of the number two on the qualifying question was eliminated. As such, the data from 370 respondents who had purchased products online were kept for further analysis. Furthermore, the sample size of the main study was 434, as a minimum sample size of 200 has been suggested as a goal for SEM research to prevent frequent convergence failures and improper solutions (Kline, 2011). As such, the sampling method used provided acceptable access to an appropriate sample size given the purpose of the study.

Measures
This study adopted a set of measurement items according to the original UTAUT model, the modified UTAUT model (UTAUT2), and other research related to e-commerce, ICT acceptance, and the consumer literature (Kim, Chung, & Lee, 2011; Venkatesh et al., 2003, 2012; Wen, 2012). Following the procedure explained above, the questionnaire consists of 37 items rated on a 7-point Likert scale ranging from strongly agree to strongly disagree. The questionnaire items were validated based on the opinions of a team of scholars, who were asked whether the questions were appropriate for analyzing consumers' acceptance and usage of SNSs to purchase goods and services. Consistent with the panel's opinions, some modifications were made to make the statements more meaningful. A pilot study was then performed on a sample of 50 university students who had previous experience in online purchasing via SNSs, while subjects who had not previously purchased anything online were removed. Based on the findings of this pilot study, only small alterations were made to the phrasing of some items to enhance simplicity and clarity.

Data analysis
We applied SEM analysis in this study. The benefits of this method are that it corrects statistical estimation by incorporating measurement error in the estimation procedure, it allows several associations to be examined concurrently, and it can examine far more complicated models, such as those involving mediation and moderation. Lastly, it confirms the validity and reliability of the constructs by applying the average variance extracted (AVE) and construct reliability (CR).

Data preparation
The data were normally distributed, as indicated by skewness values from −1.418 to 0.189 and kurtosis values from −1.116 to 3.027 for all constructs. Byrne (2010) asserted that if the skewness value is between −2 and +2, and the kurtosis value is between −7 and +7, the data are suitable for assuming multivariate normality.
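To make this screening step concrete, the following minimal sketch (with hypothetical Likert-item responses rather than the study's data) checks each item's skewness and kurtosis against the thresholds cited from Byrne (2010):

```python
# Hypothetical sketch of the univariate normality screen described above.
import pandas as pd
from scipy.stats import skew, kurtosis

# Hypothetical 7-point Likert responses for a few items
responses = pd.DataFrame({
    "PE1": [7, 6, 5, 7, 4, 6, 5, 6],
    "PE2": [6, 6, 5, 7, 5, 6, 4, 7],
    "HM1": [7, 5, 6, 7, 6, 5, 6, 6],
    "PI1": [6, 7, 5, 6, 6, 7, 5, 6],
})

for col in responses.columns:
    s = skew(responses[col])
    k = kurtosis(responses[col])  # Fisher (excess) kurtosis; normal distribution = 0
    within = abs(s) <= 2 and abs(k) <= 7
    print(f"{col}: skewness = {s:.3f}, kurtosis = {k:.3f}, within thresholds: {within}")
```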
For model fit, Kline (2010) recommended applying model fit indices, including the chi-square/degrees-of-freedom ratio (CMIN/DF), the comparative fit index (CFI), the goodness-of-fit index (GFI), and the Tucker-Lewis index (TLI). A rule of thumb for these fit indices is that values equal to or greater than 0.90 indicate adequate fit (Kline, 2010). In addition, the model may be categorized as satisfactory if the root mean squared error of approximation (RMSEA) is between 0.03 and 0.08. This model presented good fit indices: CMIN/DF = 1.775, p < 0.01, CFI = 0.914, GFI = 0.914, TLI = 0.944, RMSEA = 0.046. As stated by Kline (2010), these values indicate a suitable fit for the model. AMOS 22 software was employed to examine the data. In the reliability analysis, the factor loadings of all items were higher than the general standard of 0.50, and composite reliability ranged from 0.802 to 0.867; thus, all factors in the measurement model had adequate reliability. In the validity analysis, all constructs displayed sufficient convergent and discriminant validity: the AVE values of the constructs ranged from 0.50 to 0.608, and the square root of the AVE of each dimension was larger than the correlation coefficients between pairs of dimensions.

Structural model
The model comprises PE, EE, SI, and HM as exogenous variables, with PI acting as an endogenous variable. As seen in Figure 1, PE is significant in explaining PI through SNSs (β = 0.20; p-value = 0.019). Thus, H1 is supported. This result is consistent with previous studies (Escobar-Rodríguez & Carvajal-Trujillo, 2013; Pascual-Miguel, Agudo-Peregrina, & Chaparro-Peláez, 2015; Venkatesh et al., 2012), in which PE was found to have a significant impact on the intention to use technology. Accordingly, this result could imply that consumers who expect to gain benefits from using SNSs as a marketing tool are more likely to have the intention to use SNSs, which benefits them in the purchasing process. The findings of this study also indicate that PI among university students in Malaysia is significantly influenced by HM (β = 0.644, p-value = 0.000). Hence, H4 is supported. Above all, the results of this research suggest that online purchasing is influenced more by an intrinsic or hedonic purpose than by an extrinsic or utilitarian motivation (Pascual-Miguel et al., 2015; Venkatesh et al., 2012). Given that the nature of online consumption through SNSs is hedonic, consumers may be more entertainment-oriented when searching for and purchasing products using SNSs. Furthermore, the findings support motivational theory (Deci & Ryan, 1975). According to the theory of motivation, user acceptance is determined by two factors, extrinsic motivation and intrinsic motivation. When extrinsically driven, benefits associated with using a product external to the system motivate use; intrinsically driven use of a system is likely when enjoyment is attained. Motivational theory ties together the PE and HM constructs of the UTAUT2, since the extrinsic motivational factor closely mirrors PE while the intrinsic driver complements HM (Slade, Williams, Dwivedi, & Piercy, 2015; Venkatesh et al., 2012). For this reason, it was expected that consumers who viewed SNSs as beneficial and perceived them as pleasurable would be more likely to use them in their personal lives. The data in Figure 1 show that EE (β = −0.071; p-value = 0.278) and SI (β = −0.012; p-value = 0.849) are negatively, and non-significantly, associated with PI.
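As an aside on the measurement-model checks reported above, the sketch below shows how composite reliability and AVE are typically computed from standardized factor loadings, along with a simple check of the fit-index thresholds cited earlier. The loading values are hypothetical placeholders, and this is not the authors' AMOS output.

```python
# Hypothetical sketch; loading values are placeholders, not the study's estimates.
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam**2                     # error variances for standardized loadings
    return lam.sum()**2 / (lam.sum()**2 + errors.sum())

def average_variance_extracted(loadings):
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam**2)

pe_loadings = [0.72, 0.78, 0.81, 0.69]        # hypothetical standardized loadings for PE items
cr = composite_reliability(pe_loadings)
ave = average_variance_extracted(pe_loadings)
print(f"CR = {cr:.3f}, AVE = {ave:.3f}, sqrt(AVE) = {ave**0.5:.3f}")
# Discriminant validity (Fornell-Larcker) compares sqrt(AVE) of each construct
# against its correlations with the other constructs.

fit = {"CMIN/DF": 1.775, "CFI": 0.914, "GFI": 0.914, "TLI": 0.944, "RMSEA": 0.046}
adequate = (min(fit["CFI"], fit["GFI"], fit["TLI"]) >= 0.90) and (0.03 <= fit["RMSEA"] <= 0.08)
print("Fit indices meet the cited thresholds:", adequate)
```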
These results therefore do not support H2 and H3. This result is consistent with an earlier study (Yang, 2010), in which EE did not significantly influence the attitudes of US consumers toward using m-shopping services, based on an online survey of a purchased consumer panel validated using SEM. Due to the simplicity of SNS tools, EE was likely to play no significant role in the immediate use of SNSs by university students in Malaysia. In addition, many Malaysians are technologically savvy and able to speak more than one language. These skills permit them to communicate with many people around the world with ease. All of this indicates that Malaysians have the potential to embrace e-commerce easily in the context of SNSs. The results of this study also show that SI does not influence PI, which is supported by Pascual-Miguel et al. (2015) and contrasts with findings from previous research that support the significance of SI in influencing consumers' online PI (Venkatesh et al., 2012). The fact that SNSs generally serve as a source of information about goods and services may reduce the usual concern about the social setting (whether positive or negative) with respect to the acceptance of new tools in purchase behavior. Furthermore, the instrumental character of the online purchase of goods and services can cause utility factors such as performance to take precedence over social influences in the development of online PI. The PE, EE, SI, and HM variables explained 44.0% of the variance in PI through SNSs among university students in Malaysia. According to Venkatesh et al. (2012), the UTAUT2 model increases the percentage of variance explained in the intention to use ICTs by 18%, and in the actual use of ICTs by 12%.

Moderation test of gender
A comparison between "the unconstrained model" and "the measurement residuals model" revealed that both the unconstrained model (χ2 = 886.695, df = 484, p = 0.000) and the measurement residuals model (χ2 = 924.569, df = 542, p = 0.000) were significant; the unconstrained model was better than the measurement residuals model because its chi-square was smaller (Hair, Black, Babin, Anderson, & Tatham, 2010). Together with the measurement residuals model comparison (Δχ2 = 37.874, Δdf = 58, p < 0.05) under "assuming that the unconstrained model is correct," the results indicated that the impact of possible differences across gender was significant. These results are supported by previous findings showing that SNSs are dominated by females and that gender differences in SNS use are rapidly increasing (Comscore, 2010). Dittmar, Long, and Meek (2004) noted that males and females have been shown to vary in their attitudes toward online buying. In other research, Cha (2011) stated that purchasing activities via SNSs can differ between women and men, because the genders have different motivations for purchasing. In this regard, females emphasize emotional attachment in the buying activity. Females consider shopping an exciting activity and thus tend to enjoy it. Females also look for relationships when shopping online, and SNSs enable female users to interact with their families and friends. For example, the Facebook shopping application assists female users: with this application, female users can discuss goods and products they want to buy with (notes to Figure 1: performance expectancy (PE), effort expectancy (EE), social influence (SI), hedonic motivation (HM), and online purchase intention (PI);
for all estimates, *p < 0.05, **p < 0.01, ***p < 0.001) their family and friends. In contrast, males are relatively more motivated by functional factors such as convenience and suitability when making purchases. Thus, the gender of customers matters in determining attitudes toward purchasing with SNSs. The findings also showed that there was a significant relationship between PE and PI for both male students (β = 0.197; Table 1) and female students (β = 0.170; Table 1). Thus, the moderating impact of gender on the path relationship between PE and PI was not supported. As portrayed in Table 1, the moderating effects of gender on the path relationships between EE and PI and between SI and PI were also not supported. Moreover, the findings showed that the association between HM and PI was significant for female students (β = 0.652; Table 1), and the path hypothesis for male students was also significant (β = 0.644; Table 1). Consequently, the moderating effect of gender on the path relationship between HM and PI was not supported, although there were differences in the standardized regression weights for male and female students. Venkatesh et al. (2003, 2012) incorporated age into the model, and it was hypothesized to moderate the effects of four constructs (PE, EE, SI, and HM). The respondents of the survey were categorized into two groups: the first group ranged in age from 18 to 24 years, considered young students, and the second group ranged in age from 25 to above 40, considered young adults. The results indicated that the differences were significant (p < 0.05) and that the unconstrained model was better than the measurement residuals model (Δχ2 = 121.293 (862.4 − 741.107); Δdf = 58 (542 − 484); p = 0.000); we can therefore conclude that there is some form of moderation effect of age on the overall model. The results in Table 2 indicated that age significantly moderates only the path relation between PE and PI through SNSs. These findings are consistent with an earlier study (Zaremohzzabieh et al., 2014).

Conclusion
Drawing upon the UTAUT2, this study examines the role of social networking sites (SNSs) in the online PI of university students in Malaysia by proposing a set of four factors: PE, effort expectancy, social influence, and HM. Individual differences, namely age and gender, are then posited to moderate the influences of these elements on online PI. The results from this study point to four main conclusions. First, our findings indicate that online PI to use SNSs is influenced by HM and PE in the domain of social media. These findings support hypotheses one and four of this study. Secondly, contrary to hypotheses two and three, EE and SI do not have an effect on online PI through SNSs. Thirdly, these four factors of the UTAUT2 explained about 44 percent of the variance in the data. Fourthly, this study found that gender moderates the effects of the model's factors on online PI, while age moderates only the association between PE and online PI through SNSs. The results of this study showed that UTAUT2 is a robust tool for predicting user acceptance of technology across different cultures. From a practical standpoint, the results also provide important insights and implications for researchers, advertisers, and marketers in the context of social networks.
In accordance with our findings, practitioners can leverage particular factors such as PE and HM to support consumers' intentions to purchase online, because this will lead to more actual usage of SNSs for making purchases. Furthermore, this study concentrated on only four constructs of the UTAUT2 model. Therefore, it is suggested that future investigations examine the effects of the other UTAUT2 constructs (i.e. FC, PV, and the moderating influence of experience) on BI and use behavior in different countries and with different technologies. Finally, since the empirical research underlying the UTAUT2 model and the investigation of social media adoption and usage are relatively new across a wide range of consumer technology use contexts, conducting meta-analysis studies on social media adoption is necessary in order to compare the findings with other UTAUT2 findings.
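As a brief technical footnote to the multigroup moderation analyses reported above, the chi-square difference test that compares an unconstrained model with a constrained (measurement residuals) model can be sketched as follows. The numbers echo the age-moderation comparison reported in the text; this is a sketch rather than the authors' AMOS procedure.

```python
# Sketch of a chi-square difference (nested model) test with scipy.
from scipy.stats import chi2

def chi_square_difference(chi2_constrained, df_constrained, chi2_unconstrained, df_unconstrained):
    d_chi2 = chi2_constrained - chi2_unconstrained
    d_df = df_constrained - df_unconstrained
    p_value = chi2.sf(d_chi2, d_df)   # survival function = 1 - CDF
    return d_chi2, d_df, p_value

# Values taken from the age-moderation comparison reported in the text
d_chi2, d_df, p = chi_square_difference(862.4, 542, 741.107, 484)
print(f"delta chi2 = {d_chi2:.3f}, delta df = {d_df}, p = {p:.3g}")
# A significant p (< 0.05) indicates that constraining paths to be equal across
# groups worsens model fit, i.e., evidence of a moderation effect.
```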
7,758.8
2016-05-17T00:00:00.000
[ "Business", "Computer Science" ]
LINC00955 suppresses colorectal cancer growth by acting as a molecular scaffold of TRIM25 and Sp1 to inhibit DNMT3B-mediated methylation of the PHIP promoter

Background: Long non-coding RNAs play an important role in the development of colorectal cancer (CRC), yet many CRC-related lncRNAs have not been identified. Methods: The relationship between the expression of LINC00955 (Long Intergenic Non-protein Coding RNA 955) and the prognosis of colorectal cancer patients was analyzed using the sequencing results of the TCGA database. LINC00955 expression levels were measured using qRT-PCR. The anti-proliferative activity of LINC00955 was evaluated using CRC cell lines in vitro and xenograft models in nude mice in vivo. The TRIM25-Sp1-DNMT3B-PHIP-CDK2 interaction was analyzed by western blotting, protein degradation experiments, luciferase, RNA-IP and RNA pull-down assays, and immunohistochemical analysis. The biological roles of LINC00955, tripartite motif containing 25 (TRIM25), Sp1 transcription factor (Sp1), DNA methyltransferase 3 beta (DNMT3B), pleckstrin homology domain interacting protein (PHIP), and cyclin dependent kinase 2 (CDK2) in colorectal cancer cells were analyzed using ATP assays, soft agar experiments, and EdU assays. Results: The present study showed that LINC00955 is downregulated in CRC tissues, and such downregulation is associated with poor prognosis of CRC patients. We found that LINC00955 can inhibit CRC cell growth both in vitro and in vivo. Evaluation of its mechanism of action showed that LINC00955 acts as a scaffold molecule that directly promotes the binding of TRIM25 to Sp1 and promotes ubiquitination and degradation of Sp1, thereby attenuating transcription and expression of DNMT3B. DNMT3B inhibition results in hypomethylation of the PHIP promoter, in turn increasing PHIP transcription and promoting ubiquitination and degradation of CDK2, ultimately leading to G0/G1 growth arrest and inhibition of CRC cell growth. Conclusions: These findings indicate that downregulation of LINC00955 in CRC cells promotes tumor growth through the TRIM25/Sp1/DNMT3B/PHIP/CDK2 regulatory axis, suggesting that LINC00955 may be a potential target for the therapy of CRC. Supplementary Information: The online version contains supplementary material available at 10.1186/s12885-023-11403-2.

Introduction
Colorectal cancer (CRC) is the third most prevalent cancer type and the second major cause of cancer-related deaths worldwide [1]. Surgical removal of CRCs is the main therapy for patients with CRC, and is frequently combined with other treatment modalities such as neoadjuvant and adjuvant chemotherapy, radiotherapy, and treatment with targeted agents [2]. These strategies, however, have not significantly improved survival rates in patients with CRC. Identifying new molecular markers and therapeutic targets, and clarifying the mechanisms underlying their effects, may provide greater understanding of the occurrence, development, and therapy of CRC.
Long non-coding RNAs (lncRNAs) are a new type of transcript encoded by the genome but mostly not translated into protein [3]. LncRNAs are involved in a variety of cellular biological processes, including gene regulation and chromatin dynamics [4]. Aberrant expression and mutation of lncRNAs are widely associated with a variety of disease processes and cell functional behaviors, including tumor proliferation [5], invasion, and metastasis [6]. LncRNAs are expected to serve as biomarkers for cancer prognosis, diagnosis, and efficacy prediction, and as therapeutic targets [7, 8]. In the past decade, a variety of lncRNAs have been found to be involved in the occurrence and development of CRC [9]. For example, lncRNA HIF1α-AS2 upregulates hypoxia-inducible factor 1α (HIF1α) expression through a ceRNA mechanism, promoting interaction of HIF1α with the RMRP promoter to activate the IGF2 signaling pathway and promote CRC progression [10]. LncRNA GLCC1 stabilizes c-myc by binding to HSP90, which in turn upregulates transcription of lactate dehydrogenase and supports survival and proliferation of CRC cells by enhancing glycolysis [11]. Although many lncRNAs are closely related to the malignant progression of CRC and have application prospects in tumor screening and detection, many CRC-related lncRNAs have not yet been identified. Therefore, additional functional lncRNAs and their regulatory mechanisms in CRC development still need to be explored. LINC00955 (Long Intergenic Non-protein Coding RNA 955) is an intergenic lncRNA located on chromosome 4p16.3 with a full length of about 2483 nucleotides. Its biological function has not been reported. The present study was designed to assess the role of LINC00955 in the development of CRC. Evaluation of The Cancer Genome Atlas (TCGA) revealed that LINC00955 was downregulated in CRC tissues, and lower levels of LINC00955 were associated with worse survival. Overexpression of LINC00955 significantly inhibited proliferation of CRC cells in vitro and in vivo. Mechanistically, LINC00955 bound to Sp1 transcription factor (Sp1) protein to modulate the Sp1 protein level post-translationally by regulating the binding of the E3 ubiquitin ligase tripartite motif containing 25 (TRIM25) to Sp1. Downregulation of Sp1 inhibited the cell cycle and malignant proliferation of CRC cells through the DNA methyltransferase 3 beta (DNMT3B)/pleckstrin homology domain-interacting protein (PHIP)/cyclin-dependent kinase 2 (CDK2) axis. This study revealed a novel mechanism by which LINC00955 inhibits the development of CRC and provided a theoretical basis for a potential targeted therapy for CRC.

Human samples and cell lines
A total of 75 pairs of CRC and adjacent normal human tissues were provided by the First Affiliated Hospital of Wenzhou Medical University. Each specimen was divided into three parts when sampled: one part was confirmed as CRC by pathological examination, the second part was used to extract RNA and synthesize cDNA, and the third part was fixed in formalin, embedded in paraffin, and stored at room temperature. The human CRC cell lines HCT116 (CBP60028 COBIOER, Nanjing, China), HT29 (CBP60011 COBIOER), RKO (CBP60006 COBIOER), SW480 (CCL-228; ATCC, Manassas, VA, USA), CCD 841 CoN (CRL-1790, ATCC), and CCD-18Co (CRL-1459, ATCC) were cultured in 1640 medium (Gibco, 11,875-093), McCoy's 5A medium, or minimum essential medium supplemented with 10% fetal bovine serum (FBS) at 37 °C in a humidified incubator with 5% CO2.
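To illustrate the kind of survival comparison described for the TCGA data above, the sketch below uses the third-party lifelines package with entirely hypothetical follow-up data; it is not the authors' analysis pipeline.

```python
# Hypothetical sketch: Kaplan-Meier curves and a log-rank test for patients
# stratified by LINC00955 expression (all numbers are placeholders).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

time_low = np.array([12, 20, 25, 30, 41, 48])     # months, LINC00955-low patients
event_low = np.array([1, 1, 1, 0, 1, 0])          # 1 = death observed
time_high = np.array([18, 35, 44, 52, 60, 72])    # months, LINC00955-high patients
event_high = np.array([1, 0, 0, 1, 0, 0])

kmf_low = KaplanMeierFitter().fit(time_low, event_low, label="LINC00955 low")
kmf_high = KaplanMeierFitter().fit(time_high, event_high, label="LINC00955 high")

result = logrank_test(time_low, time_high,
                      event_observed_A=event_low, event_observed_B=event_high)
print(f"median survival (low):  {kmf_low.median_survival_time_}")
print(f"median survival (high): {kmf_high.median_survival_time_}")  # may be inf if median not reached
print(f"log-rank p-value: {result.p_value:.3f}")
```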
Western blotting
Tissues or cells were sonicated, their protein concentrations were measured, and equal aliquots were loaded onto SDS-PAGE gels, which were electrophoresed. Protein samples were transferred to nylon membranes, which were then incubated in 5% non-fat milk for 1 h to avoid non-specific binding. The appropriately diluted primary antibody was applied to the membranes, followed by three washes with TBS, incubation with a tagged secondary antibody for 3 h at 4 °C, and three more washes with TBS. Finally, the membranes were exposed to film. Antibodies against Sp1, HA, FOXO1, FOXO3A, FOXC1, TRIM25, and DNMT1 were diluted 1:1000. Antibodies against cyclin D1, GFP, CDK2, CDK4, CDK6, Sp2, Sp3, cyclin E2, and DNMT3B were diluted 1:500. Antibodies against β-actin, α-tubulin, and DNMT3A were diluted 1:10,000. The anti-KLHL6 antibody was diluted 1:1500. The anti-PHIP antibody was diluted 1:2000.

Experimental animals
Female BALB/c-nu nude mice weighing 15 ± 0.5 g obtained from GemPharmatech (license number: SCXK [SU] 2018-0008; Nanjing, Jiangsu, China) were raised in the SPF facility of Wenzhou Medical University for experimental animals. The Wenzhou Medical University Experimental Animal Ethics Committee approved all animal research. Twelve female BALB/c-nu nude mice were randomly split into two groups of six. Each mouse was subcutaneously injected with 5 × 10⁶ HCT116 (Vector) or HCT116 (LINC00955) cells in 100 μL of media, and the injection was performed slowly and at a constant speed. Three weeks later, the mice were euthanized by injecting excessive pentobarbital sodium. The tumors were dissected out, photographed, and weighed.

Cell cycle analysis
Cells in logarithmic growth phase were trypsinized, resuspended, and added to the wells of 6-well plates. The cells were grown in medium with 0.1% FBS for 12 h after adhering to the plates, and subsequently in the matching complete medium with 10% FBS for 12 h. After being digested into EP tubes, the cells were fixed with 70% pre-cooled alcohol and stored in a 4 °C refrigerator for 12-24 h. The cell pellet was obtained by centrifugation at 1000 × g for 5 min and washed with precooled PBS, and the cells were stained with 30 μL RNase A and 120 μL PI staining solution. The cell suspensions were then assayed by flow cytometry.

RNA pull-down assay and mass spectrometry
RNA pull-down kits were used to perform RNA pull-down experiments (Bes5102; BersinBio, China). Secondary structures of the corresponding mass of biotin-labeled target RNA probes and NC probes were formed. Two RNase-free centrifuge tubes each received 40 μL of streptavidin magnetic beads. RNA probes (about 100 μL) were added to form secondary structures with the magnetic beads, the tubes were centrifuged and incubated at 25 °C, and the supernatants were discarded. Aliquots containing 2 × 10⁷ cells were washed, with 100 μL of supernatant considered the input group. The probe-magnetic bead complex was mixed with the cell magnetic bead complex and the cell lysate, followed by incubation with rotation for 2 h. The beads were collected and washed for 5 min. The magnetic beads were subsequently mixed with 60 μL of protein elution buffer and incubated at 37 °C for 2 h. The supernatants were transferred to new centrifuge tubes. A 15 μL aliquot of each protein sample was loaded onto SDS-PAGE gels for western blotting. The gels were stained for 2 h in Coomassie brilliant blue staining solution, washed with ddH2O, and subjected to mass spectrometry.
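As a simplified illustration of how the flow-cytometry data from the cell cycle analysis above can be quantified, the sketch below gates hypothetical per-cell propidium iodide (PI) intensities into G0/G1, S, and G2/M fractions by DNA content. Real analyses typically use dedicated cell-cycle models (e.g., Dean-Jett-Fox) rather than fixed thresholds; the thresholds and simulated values here are assumptions for illustration only.

```python
# Hypothetical sketch: threshold-based cell cycle fractions from PI fluorescence.
import numpy as np

def cell_cycle_fractions(pi_intensity, g1_peak):
    """Classify cells by DNA content relative to the G0/G1 (2N) peak position."""
    pi = np.asarray(pi_intensity, dtype=float)
    g1 = pi < 1.25 * g1_peak          # roughly 2N DNA content
    g2m = pi > 1.75 * g1_peak         # roughly 4N DNA content
    s = ~g1 & ~g2m                    # intermediate DNA content (S phase)
    n = pi.size
    return {"G0/G1": g1.sum() / n, "S": s.sum() / n, "G2/M": g2m.sum() / n}

rng = np.random.default_rng(0)
# Simulated intensities: a 2N peak, a 4N peak, and some S-phase cells in between
pi_values = np.concatenate([
    rng.normal(100, 8, 700),          # G0/G1 cells
    rng.normal(200, 12, 200),         # G2/M cells
    rng.uniform(130, 170, 100),       # S-phase cells
])
print(cell_cycle_fractions(pi_values, g1_peak=100))
```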
RNA-IP assay
HCT116 (LINC00955) cells cultured in a 10 cm dish to 70-80% confluence were lysed using the buffer provided in the RNA Immunoprecipitation Kit (Bes5101; BersinBio, China). Each cell lysate sample was divided into three aliquots, with 0.8 mL used for IP, 0.8 mL for IgG assays, and 0.1 mL as input. IP and IgG samples were supplemented with specific antibodies. Each sample received 20 μL of carefully balanced protein A/G beads. The beads were recovered by centrifugation and then incubated at 55 °C for 1 h with polysome elution buffer. RNA was eluted and reverse-transcribed, and qPCR was performed.

IP
A Myc-Ub IP assay was performed. Briefly, PHIP-knockdown HCT116 (LINC00955) and RKO (LINC00955) cells and TRIM25-knockdown HCT116 (LINC00955) and RKO (LINC00955) cells were grown in 10 cm plates to 70-80% confluence, followed by co-transfection with Myc-Ub and HA-CDK2 or Myc-Ub and GFP-Sp1. Eight hours later, the medium was changed, followed by incubation for another 12 h. MG132 (10 µM) was added to the cells for 8 h. Each protein sample received an anti-Myc antibody before being incubated at 4 °C for 12 h. The samples were mixed with agarose beads (sc2003; Santa Cruz, USA) and incubated for 3 h at 4 °C. The agarose beads were collected and washed 5-6 times. The protein samples were analyzed by western blotting after addition of 60 µL of elution buffer.

Prediction of Sp1 and TRIM25 binding regions within LINC00955
The genomic sequence of human LINC00955 was obtained from the nucleotide database of the National Library of Medicine (sequence ID: NR_040045.1). The secondary structure of LINC00955 was predicted using RNAfold, using the options of minimum free energy and partition function, while avoiding isolated base pairs. The RNA binding domains in the transcription factor Sp1 (entry ID: P08047) and the E3 ubiquitin/ISG15 ligase TRIM25 (entry ID: Q14258) were predicted from their three-dimensional structures modeled by AlphaFold. Specifically, folds similar to two structural domains in Sp1 (i.e., amino acids 429-558 and 626-714) and three domains in TRIM25 (i.e., amino acids 1-84, 105-190, and 431-630) were searched in the Protein Data Bank using the Dali web server. LINC00955 was aligned with the nucleotides in 1MEY using Clustal Omega.

Statistical analysis
All experimental data are presented as the mean ± standard deviation (SD) of three separate experiments and were compared using t-tests. p < 0.05 was deemed statistically significant.

Downregulation of LINC00955 in CRC tissues and cells, and LINC00955 suppression of CRC cell growth in vitro and in vivo
The potential role of LINC00955 in the development of CRC was investigated by analyzing its expression in samples from the TCGA database. Expression of LINC00955 was markedly lower in CRC tumor tissue than in normal colon tissue (Fig. 1A). Kaplan-Meier analysis revealed that downregulation of LINC00955 was associated with poor prognosis in patients with CRC (Fig. 1B). Downregulation of LINC00955 was confirmed by qPCR assays of 75 clinical samples (Fig. 1C). LINC00955 levels were lower in human CRC cell lines, including HCT116, HT29, RKO, SW480, and LOVO cells, than in normal human colorectal cell lines (Fig. 1D). The possible role of LINC00955 in CRC was further explored by constructing stable CRC cell lines bearing LINC00955, including HCT116 (Vector, LINC00955) and RKO (Vector, LINC00955) cells (Fig.
1E, F), and testing the effects of LINC00955 on proliferation of these cells.LINC00955 significantly reduced the growth rates of monolayers of HCT116 and RKO cells (Fig. 1G, H), and reduced their anchorage-independent growth (Fig. 1I, J).EdU assays showed that LINC00955 significantly inhibited DNA replication in CRC cells (Fig. 1K, L).To better assess the effects of LINC00955 on CRC cell proliferation, HCT116 cells were injected under the skin of nude mice, and tumor growth was monitored.Overexpression of LINC00955 led to a significant reduction in the size and weight of subcutaneous tumors in nude mice (Fig. 1M-P).These findings demonstrate that LINC00955 prevents CRC cells from proliferating malignantly both in vivo and in vitro. LINC00955 induces cell cycle arrest of CRC cells by inhibiting CDK2 The cell cycle is the main physiological process that drives cell growth, as well as a crucial factor in the uncontrolled proliferation of tumor cells.The ability of LINC00955 to affect the cell cycle of CRC cells was therefore evaluated by flow cytometry.Overexpression of LINC00955 induced G0/G1 phase arrest of HCT116 and RKO cells (Fig. 2A, B).These results imply that the ability of LINC00955 to mediate CRC cell proliferation is due to its effect on cell cycle progression.To clarify the mechanism by which LINC00955 induces G0/G1 phase arrest, G0/G1 phase-related proteins in these cells were examined using western blotting.LINC00955 substantially downregulated expression of CDK2 in HCT116 and RKO cells (Fig. 2C), suggesting that LINC00955 may inhibit malignant proliferation of CRC cells by inhibiting CDK2 expression.To verify this hypothesis, HCT116 (LINC00955) and RKO (LINC00955) cells were stably transfected with the HA-labeled CDK2 plasmid (Fig. 2D, E).ATP, soft agar, and EdU assays showed that, compared with control cells, overexpression of CDK2 rescued the growth ability of CRC cells (Fig. 2F-K).Additionally, flow cytometry demonstrated that overexpression of CDK2 drastically reduced the ability of LINC00955 to cause cell cycle arrest in HCT116 and RKO cells (Fig. 2L, M).Collectively, these findings indicate that CDK2 is an important downstream effector of LINC00955. LINC00955 mediates degradation of CDK2 through E3 ligase PHIP To further investigate the specific molecular mechanisms by which LINC00955 regulates CDK2, the ability of LINC00955 to affect CDK2 mRNA levels was assessed by qPCR.LINC00955 upregulated the levels of CDK2 mRNA expression in HCT116 and RKO cells, indicating that LINC00955 did not downregulate CDK2 protein expression at the mRNA level (Fig. 3A).Next, we assessed whether LINC00955 downregulated CDK2 expression by promoting the ubiquitinated degradation.We already knew that the half-life of CDK2 protein was about 3 h [12].Therefore, we treated HCT116 (Vector, LINC00955) and RKO (Vector, LINC00955) cells with MG132 and cycloheximide (CHX) and then identified expression of CDK2 by western blotting.The rate of CDK2 protein degradation in HCT116 (LINC00955) and RKO (LINC00955) cells was significantly higher than in HCT116 (Vector) and RKO (Vector) (Fig. 3B, C).These results reveal that LINC00955 downregulates CDK2 by inducing protein degradation. Next, to explore whether LINC00955 regulates ubiquitination and degradation of CDK2 by regulating E3 ligase, we used western blotting to detect the important E3 ligase KLHL6, which recognizes CDK2 [12].Expression of KLHL6 did not change significantly in HCT116 (Vector, LINC00955) and RKO (Vector, LINC00955) cells (Fig. 
3E).Next, we predicted E3 ligases that are involved in CDK2 degradation using the UbiBrowser database.We also identified an E3 ligase regulated by LINC00955 by quantitative proteomic analysis.We then combined the site prediction data with the proteomic analysis results.The findings showed that PHIP may be involved in regulation of degradation of ubiquitinated CDK2 by LINC00955 (Fig. 3D).PHIP expression was detected using western blotting, and the results indicated that it was elevated in HCT116 (LINC00955) and RKO (LINC00955) cells (Fig. 3E).To further explore whether LINC00955 regulates ubiquitination and degradation of CDK2 by regulating the E3 ligase PHIP that specifically recognizes CDK2, PHIP expression was knocked down in HCT116 (LINC00955) and RKO (LINC00955) cells (Fig. 3F, G), and CDK2 expression was evaluated by western blotting.Knockdown of PHIP significantly upregulated CDK2 expression (Fig. 3H, I).Moreover, the effect of LINC00955 on degradation of CDK2 protein was assessed after knocking down PHIP in HCT116 (LINC00955) and RKO (LINC00955) cells.The results showed that knocking down PHIP significantly reduced the effect of LINC00955 on degradation of CDK2 protein (Fig. 3J, K), indicating that LINC00955 regulates ubiquitination and degradation of CDK2 by regulating the E3 ligase PHIP.To determine whether PHIP binds to CDK2, these mixtures were incubated with HA-beads, which pull-down the HA protein by immunoprecipitation (IP).The results showed that PHIP was present in the immune complex (Fig. 3L), indicating that PHIP binds to CDK2 and triggers its degradation.To further explore whether PHIP is involved in ubiquitination of the CDK2 protein, Myc-Ub and HA-CDK2 were co-transfected into PHIP-knockdown HCT116 (LINC00955) and RKO (LINC00955) cells, followed by ubiquitin-IP.The experiment found that knockdown of PHIP substantially decreased the ubiquitination level of CDK2 (Fig. 3M, N).Knocking down PHIP reduced the ability of LINC00955 to suppress the growth of CRC cells, as demonstrated by ATP, soft agar, and EdU assays (Fig. 3O-T).Flow cytometry also showed that knocking down PHIP significantly weakened the ability of LINC00955 to arrest the cell cycle of HCT116 and RKO cells (Fig. 3U, V).Collectively, these findings suggest that LINC00955 promotes ubiquitination and degradation of CDK2 by upregulating expression of PHIP, thereby inhibiting proliferation of CRC cells. LINC00955 downregulates the DNA methylation level of the PHIP promoter, thereby increasing promoter activity and PHIP expression Analysis of the effects of LINC00955 on the regulation of PHIP mRNA demonstrated that PHIP mRNA levels were considerably higher in HCT116 (LINC00955) and RKO (LINC00955) cells than in HCT116 (Vector) and RKO (Vector) cells (Fig. 4A).A PHIP promoter-driven luciferase reporter was transfected into HCT116 (Vector, LINC00955) and RKO (Vector, LINC00955) cells to determine whether LINC00955 regulates PHIP expression at the transcriptional level.LINC00955 significantly enhanced PHIP promoter activity in CRC cells (Fig. 4B).JASPAR analysis to predict transcription factors that regulate PHIP identified multiple probable transcription factor binding sites in the promoter region of PHIP, including sites for Sp1, Sp2, Sp3, and FOXO1 (Fig. 
4C).A truncated PHIP promoter-driven luciferase reporter was constructed (Figure S1A), with the activity of the PHIP promoter region being altered by dual-luciferase reporter assays.This experiment revealed that PHIP-2 is the critical region responsible for the change of promoter activity (Figure S1B, C).Western blotting to assess expression of transcription factors in CRC cells showed that expression of FOXO1 was drastically upregulated, whereas the expression of Sp1 was drastically downregulated in HCT116 (LINC00955) and RKO (LINC00955) cells (Fig. 4D).These findings suggest that FOXO1 and Sp1 may be transcription factors that regulate the PHIP promoter. To determine whether PHIP upregulation is caused directly by the transcription factors Sp1 and FOXO1, the mutation binding sites of Sp1 and FOXO1 were determined [13,14], and a series of PHIP promoter-driven luciferase reporters harboring Sp1 and FOXO1 mutants was constructed (Fig. 4E, F).Wild-type and mutant-type PHIP promoter-driven luciferase reporters were transferred into HCT116 (Vector, LINC00955) and RKO (Vector, LINC00955) cells, and their promoter activity was measured.The activity of wild-type and mutant-type PHIP promoters did not differ significantly (Fig. 4G-J), indicating that LINC00955 does not promote transcription of PHIP by influencing transcription factor expression. DNA methylation is an epigenetic modification that can play an important role in control of gene expression in mammalian cells, with gene silencing caused by abnormal promoter hypermethylation being one of the mechanisms leading to downregulation of tumor suppressor genes, and occurrence and progression of cancers [15,16].Bioinformatics software predicted that CpG islands are present in the PHIP promoter region, which was confirmed in this study (Fig. 4K).To detect alterations in the methylation level of the PHIP promoter region, methylationspecific PCR (MSP) experiments were performed using primers constructed based on the location of the CpG islands.After overexpression of LINC00955 in HCT116 cells, the DNA methylation level in the PHIP promoter region was significantly reduced (Fig. 4L), suggesting that LINC00955 promotes PHIP promoter activity by affecting DNA methylation in the PHIP promoter region.To test this hypothesis, MSP was performed after treating HCT116 (Vector, LINC00955) cells with the DNA methylation inhibitor 5-Aza, with findings showing that addition of 5-Aza suppressed the level of methylation of the PHIP promoter region (Fig. 4L).Similarly, dual-luciferase reporter, qPCR, and western blotting assays showed that 5-Aza treatment increased PHIP promoter, mRNA, and protein levels (Fig. 4M-O).LINC00955 may promote PHIP promoter activity by affecting DNA methylation in the PHIP promoter region.To determine the specific mechanism by which LINC00955 regulates DNA methylation of the PHIP promoter, expression of the related DNMT1, DNMT3A, and DNMT3B was measured by western blotting.DNMT3B levels were lower in HCT116 (LINC00955) and RKO (LINC00955) cells than in their respective control cells (Fig. 4P).DNMT3B was overexpressed in HCT116 (LINC00955) and RKO (LINC00955) cells (Fig. 4Q, R), and MSP assays detected changes in PHIP promoter methylation after overexpression of DNMT3B, suggesting that overexpression of DNMT3B can significantly increase the level of PHIP promoter methylation (Fig. 4S, T).Overexpression of DNMT3B significantly reduced PHIP promoter activity and mRNA and protein levels (Fig. 
4U, V, Q, R). Additionally, ATP, soft agar, and EdU assays demonstrated that overexpression of DNMT3B restored the proliferation of HCT116 (LINC00955) and RKO (LINC00955) cells (Fig. 4W-AB).

LINC00955 inhibits transcription of DNMT3B by downregulating expression of Sp1 To clarify the possible mechanism by which LINC00955 downregulates DNMT3B in CRC cells, the levels of DNMT3B mRNA were evaluated in HCT116 and RKO cells. DNMT3B mRNA levels were markedly lower in HCT116 (LINC00955) and RKO (LINC00955) cells than in HCT116 (Vector) and RKO (Vector) cells (Fig. 5A). The ability of LINC00955 to regulate DNMT3B promoter activity was assessed using dual-luciferase reporter assays. Overexpression of LINC00955 significantly inhibited DNMT3B promoter activity (Fig. 5B), indicating that LINC00955 downregulates expression of DNMT3B at the transcriptional level. Transcription factors regulate expression of target genes by binding to the promoter region. The DNMT3B promoter is regulated by the transcription factors Sp1, Sp3, FOXO3A, and FOXC1 (Fig. 5C) [17-19]. Because western blotting showed that Sp1 was noticeably downregulated in HCT116 (LINC00955) and RKO (LINC00955) cells (Fig. 5D), Sp1 was stably overexpressed in HCT116 (LINC00955) and RKO (LINC00955) cells (Fig. 5E, F), and alterations in DNMT3B promoter activity and mRNA levels were determined. Sp1 overexpression significantly increased DNMT3B promoter activity and mRNA levels (Fig. 5G, H). To determine whether Sp1 was directly responsible for the downregulation of DNMT3B, the Sp1 binding sites were mutated, DNMT3B promoter-driven luciferase reporters harboring the mutated sites were constructed (Fig. 5I), and DNMT3B promoter activity was measured. The reduction in activity of the mutant promoter was significantly smaller than that of the wild-type promoter (Fig. 5J, K), demonstrating that LINC00955 inhibits transcription of DNMT3B by downregulating expression of Sp1. ATP, soft agar, and EdU assays showed that overexpression of Sp1 restored the proliferation of HCT116 (LINC00955) and RKO (LINC00955) cells (Fig. 5L-Q). According to flow cytometry, overexpression of Sp1 drastically reduced the capacity of LINC00955 to arrest the cell cycle of HCT116 and RKO cells (Fig. 5R, S). Taken together, these results indicate that LINC00955 reduces expression of Sp1 to prevent CRC cell proliferation.

LINC00955 recruits TRIM25 to degrade Sp1 via ubiquitination To elucidate the molecular mechanism by which LINC00955 regulates Sp1 expression, we first evaluated the ability of LINC00955 to regulate Sp1 mRNA levels through qPCR. Sp1 mRNA levels in HCT116 (LINC00955) and RKO (LINC00955) cells, however, did not differ significantly from levels in their respective control cells (Fig. 6A). To establish whether LINC00955 regulates expression of Sp1 through ubiquitination and degradation, cells were treated with MG132 and CHX. Sp1 was degraded more rapidly in HCT116 (LINC00955) and RKO (LINC00955) cells than in their respective control cells (Fig. 6B, C), suggesting that LINC00955 was responsible for the faster degradation of Sp1. In vitro ubiquitination assays revealed that the amount of ubiquitinated Sp1 protein was markedly higher in HCT116 (LINC00955) and RKO (LINC00955) cells than in their respective controls (Fig. 6D, E). To assess direct involvement of LINC00955 in regulation of Sp1 expression, biotinylated LINC00955 was used as an RNA probe in RNA pull-down assays and mass spectrometry analyses.

Fig. 4 LINC00955 inhibits PHIP promoter methylation by downregulating expression of the DNA methyltransferase DNMT3B. A PHIP mRNA levels in HCT116 (Vector, LINC00955) and RKO (Vector, LINC00955) cells, as determined by qPCR. B PHIP promoter activity in HCT116 (Vector, LINC00955) and RKO (Vector, LINC00955) cells, as determined by dual-luciferase reporter assays. C Possible transcription factors in the promoter region of PHIP, predicted from the JASPAR database. D Western blotting was performed to examine expression of transcription factors in HCT116 and RKO (Vector, LINC00955) cells. E Construction of an Sp1 mutant PHIP promoter-driven luciferase reporter. F Construction of a FOXO1 mutant PHIP promoter-driven luciferase reporter. G, H Wild-type and Sp1 mutant PHIP promoter-driven luciferase reporters were transferred into (G) HCT116 (Vector, LINC00955) and (H) RKO (Vector, LINC00955) cells, and their promoter activity was determined. I, J Wild-type and FOXO1 mutant PHIP promoter-driven luciferase reporters were transferred into (I) HCT116 (Vector, LINC00955) and (J) RKO (Vector, LINC00955) cells, and their promoter activity was determined. K The MethPrimer website predicted the CpG island in the PHIP promoter. L-O HCT116 (Vector, LINC00955) cells were pretreated with the methylation inhibitor 5-Aza, and the effects of LINC00955 were tested on (L) the methylation and non-methylation levels of the PHIP promoter region (determined by MSP), (M) PHIP promoter activity (determined by dual-luciferase reporter assays), (N) PHIP mRNA levels (determined by qPCR), and (O) PHIP protein levels (analyzed using western blotting). P Expression of DNA methyltransferases, as determined by western blotting. Q, R The transfection efficiency of HA-DNMT3B and its control plasmids was analyzed by western blotting in (Q) HCT116 (LINC00955) and (R) RKO (LINC00955) cells. S, T Effects of DNMT3B overexpression on PHIP promoter methylation in (S) HCT116 (LINC00955) and (T) RKO (LINC00955) cells, as determined by MSP assays. U Effects of DNMT3B overexpression on PHIP promoter activity in HCT116 (LINC00955) and RKO (LINC00955) cells, as determined by dual-luciferase reporter assays. V Effects of DNMT3B overexpression on PHIP mRNA levels in HCT116 (LINC00955) and RKO (LINC00955) cells, as determined by qPCR assays. W, X Effects of DNMT3B overexpression on proliferation of (W) HCT116 (LINC00955) and (X) RKO (LINC00955) cells, as determined by ATP assays. Y, Z Soft agar tests were performed to investigate the effects of DNMT3B overexpression on the growth of (Y) HCT116 (LINC00955) and (Z) RKO (LINC00955) cells. AA, AB Effects of DNMT3B overexpression on the DNA replication activity of (AA) HCT116 (LINC00955) and (AB) RKO (LINC00955) cells, as determined by EdU assays. AC, AD Effects of DNMT3B overexpression on the cell cycle in (AC) HCT116 (LINC00955) and (AD) RKO (LINC00955) cells, as determined by flow cytometry. An asterisk (*) indicates a significant difference (p < 0.05)

The amount of Sp1 was much higher in biotinylated LINC00955 precipitates than in biotinylated antisense LINC00955 precipitates, with the former also being enriched in TRIM25, an E3 ubiquitin ligase that is predicted to mediate Sp1 ubiquitination and degradation (Fig.
6F-H).TRIM25 promotes ubiquitination of Sp1 at K610 in gastric cancer [20].LncRNAs can act as scaffolds that participate in protein-protein interactions.LINC00955 may therefore serve as a scaffold to recruit E3 ligase to Sp1 and promote Sp1 ubiquitination and degradation.To test this hypothesis, RNA pull-down and RIP assays were performed.Both assays showed that LINC00955 interacts with Sp1 and TRIM25 (Fig. 6I-N). Overexpression of LINC00955, however, did not significantly alter expression of TRIM25 protein in CRC cells (Fig. 6O).The effect of LINC00955 on the binding of Sp1 to its ubiquitin E3 ligase TRIM25 was further tested by Co-IP experiments, which showed that LINC00955 overexpression increased the interaction between Sp1 and TRIM25 (Fig. 6P).To determine whether TRIM25 mediates Sp1 degradation, TRIM25 was knocked down in HCT116 (LINC00955) and RKO (LINC00955) cells (Fig. 6Q, R), and degradation rate of Sp1 was determined.TRIM25 knockdown significantly reduced the degradation of Sp1 (Fig. 6S, T), as well as its ubiquitination (Fig. 6U, V), indicating that TRIM25 is an E3 ligase that regulates Sp1 ubiquitination in CRC cells.In summary, these findings show that LINC00955 promotes interaction between Sp1 and TRIM25, thereby promoting Sp1 degradation. To further determine whether the interactions of LINC00955 with Sp1 and TRIM25 proteins inhibit the proliferation of CRC cells, plasmids overexpressing LINC00955 lacking nucleotides 984-1135 and/or 2073-2204 were synthesized and used to construct stable transfected cell lines (Figure S2A, B).Only full-length LINC00955 accelerated Sp1 degradation and downregulated Sp1 expression, whereas plasmids expressing LINC00955 lacking nucleotides 984-1135 and/or 2073-2204 did not (Fig. 7J-M).ATP and soft agar experiments showed that LINC00955 fragments lacking the Sp1binding domain, the TRIM25-binding domain, and both regions did not inhibit CRC cell proliferation (Fig. 7N-Q), which supported the conclusion that the function of LINC00955 depends on its binding to Sp1 and TRIM25. Correlations among Sp1, DNMT3B, PHIP, and CDK2 protein levels in clinical CRC tissues To verify the correlations between LINC00955 and its downstream genes in vivo, we performed immunohistochemistry (IHC) to detect expression of Sp1, DNMT3B, PHIP, and CDK2 in 75 pairs of CRC and normal tissues.The protein levels of Sp1, DNMT3B, and CDK2 were significantly higher in CRC tissues than in normal tissues, while the protein levels of PHIP were significantly lower in CRC tissues than in normal tissues (Fig. 8A-E), which correlated with the downregulation of LINC00955 in CRC.Further correlation analysis showed that Sp1 protein levels were positively correlated with DNMT3B and CDK2 protein levels, but negatively correlated with PHIP protein levels (Fig. 8F-H), which is consistent with the results of in vitro studies.These results indicate that the LINC00955-Sp1-DNMT3B-PHIP-CDK2 regulatory pathway has clinical significance in of CRC progression.Finally, the molecular mechanism of LINC00955 inhibiting the malignant proliferation of CRC cells is mapped (Fig. 8I). 
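As a worked illustration of the statistics used throughout (means ± SD of three experiments compared by t-tests, and pairwise correlations of IHC scores across the 75 clinical samples), the sketch below shows both computations in Python. All numerical values are hypothetical placeholders, not data from this study, and Spearman correlation is shown only as one reasonable choice for ordinal IHC scores.

```python
import numpy as np
from scipy import stats

# Two-sample t-test on triplicate measurements (placeholder values, not the paper's data),
# mirroring the "mean ± SD of three separate experiments, compared using t-tests" convention.
vector_group = np.array([1.00, 0.95, 1.05])        # e.g. relative viability, HCT116 (Vector)
linc_group   = np.array([0.62, 0.58, 0.66])        # e.g. relative viability, HCT116 (LINC00955)
t_stat, p_value = stats.ttest_ind(vector_group, linc_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")

# Pairwise correlation of per-patient IHC scores (hypothetical scores for illustration);
# Spearman is used here because IHC scoring is ordinal.
rng = np.random.default_rng(1)
sp1    = rng.integers(0, 12, size=75)              # 75 CRC samples, as in the cohort above
dnmt3b = sp1 + rng.integers(-2, 3, size=75)        # a positively related marker
phip   = 11 - sp1 + rng.integers(-2, 3, size=75)   # a negatively related marker
for name, marker in [("DNMT3B", dnmt3b), ("PHIP", phip)]:
    rho, p = stats.spearmanr(sp1, marker)
    print(f"Sp1 vs {name}: rho = {rho:.2f}, p = {p:.3g}")
```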
Discussion LncRNAs have significant roles in the pathophysiology and development of a number of malignancies [22], indicating that combined targeting of dysregulated lncRNAs may become a potential strategy for cancer treatment in the future [23].To date, however, only the roles of a few lncRNAs have been thoroughly characterized.The discovery and identification of additional lncRNAs, and their underlying mechanisms, are of great significance for cancer treatment.The present study, using both bioinformatics and experimental analyses, demonstrated that LINC00955 expression was substantially downregulated in CRC tissues.Moreover, lower expression of LINC00955 was linked to a poorer prognosis in patients with CRC, suggesting that LINC00955 may be a prognostic biomarker.LINC00955 inhibited CRC cell proliferation in vitro and in vivo.When considered together, these results demonstrate that LINC00955 may be a significant tumor suppressor in CRC. The cell cycle is both important and strictly controlled, as abnormal cell cycle processes can lead to genome instability and cancer progression [24].Little is known about the molecular function of lncRNAs in cell cycle regulation in comparison with the diverse proteins involved in cell cycle regulation and linked with cancercausing mutations [25].Several lncRNAs regulate the cell cycle and cell proliferation by directly regulating DNA replication or indirectly controlling expression of vital cell cycle-regulating genes [26].For example, lncRNA MIR31HG binds to HIF1A, targets p21, and promotes cell proliferation by enhancing cell cycle progression in HNSCC [27].The results presented herein showed that LINC00955 blocked the G0/G1 phase of the cell cycle and inhibited proliferation of CRC cells by reducing expression of CDK2.CDK2 is activated by complexing with a cyclin, and is active from G1 phase progression and throughout S phase [28].CDK2 expression is upregulated in cancers [29], and CDK2 is crucial for anchorage-independent proliferation mediated by oncogenes [30].Cancer therapy is thought to target CDK2 in some cases, and several small-molecule CDK2 inhibitors are currently undergoing clinical trials.One example is Alvocidib, a purine analogue and combination inhibitor [31]; however, this agent shows unsatisfactory efficacy, high toxicity, and non-specificity.Additional key regulators of CDK2 may suppress cell proliferation during oncogenesis.For example, the present study identified a new lncRNA that regulates CDK2 in CRC, suggesting a new direction for CDK2 inhibitors and suppressors of CRC cell proliferation. 
Current research on CDK2 is mainly concerned with its kinase activity [32].Fewer studies, however, have evaluated the regulation of CDK2 expression, especially its stability.The current investigation discovered that LINC00955 mediates the ubiquitination-degradation of CDK2 through the E3 ligase PHIP.Ubiquitination is a significant post-translational change involved in controlling a number of biological functions.The ubiquitination cascade includes activating enzymes (E1s), conjugating enzymes (E2s), and ligases (E3s) [33], with E3 ligase playing a crucial role in specifically determining the ubiquitination and degradation of target proteins.PHIP is a cytosolic protein encoded on chromosome 6q14.1,which participates in insulin and IGF-1 signaling and interacts only with the PH domain of IRS-1 [34].To our knowledge, PHIP has not been reported to function as a substrate protein for E3 ligase.The present study is therefore the first to show that CDK2 is an ubiquitinated substrate of PHIP.Evaluation of clinical tissue samples demonstrated that PHIP expression was lower in CRC tissues than in normal colon tissues.Moreover, functional experiments showed that PHIP plays a role as a tumor suppressor gene during development of CRC. Additionally, the current study discovered that LINC00955 markedly increased PHIP promoter activity.PHIP inhibited CRC tumor cell proliferation.Several tumor suppressor genes can be rendered inactive by aberrant CpG island methylation in their promoter regions, highlighting the significance of epigenetic changes during carcinogenesis [35].DNA methylation patterns are generally regulated by DNA methyltransferases (DNMTs) [36], with DNMT3B acting as a de novo methyltransferase [37].In HCT116 colon cancer cells, disruption of DNMT1 and DNMT3B decreases the 5-mC concentration by 95% and delays cell proliferation [38].Few studies to date, however, have focused on the specific molecular mechanisms by which DNMT3B participates in the process of CRC proliferation.The present study proposes a novel mechanism by which LINC00955 inhibits CRC cell proliferation by downregulating DNMT3B to inhibit methylation of the PHIP promoter.The PHIP gene, however, is not likely to be the only target of DNMT3B during cell proliferation.Alterations in expression of intracellular DNA methyltransferases must therefore affect other candidate genes and related pathways, indicating a need for additional studies.Assessment of the mechanism by which LINC00955 regulates DNMT3B found that LINC00955 inhibits DNMT3B transcription by downregulating transcription factor Sp1. 
The molecular processes by which lncRNAs function in the growth of tumors are intricate.LncRNAs typically exert their biological functions through physical interactions with regulatory proteins, miRNAs, or other cellular factors [39], although evidence suggests that it may be more important for lncRNAs to exert their biological functions through their target proteins.Some lncR-NAs remain connected to their transcription sites, and interact with proteins to regulate expression of cis genes [40].Some of them serve as molecular spies and bind to particular transcription factors, preventing them from attaching to DNA [41].LncRNAs can also participate in protein-protein interactions.The transcriptional regulator Sp1 belongs to the family of transcription factors [42].Sp1 was identified as a promoter-specific binding factor involved in a number of biological processes in mammalian cells [43].Sp1 plays an important role in CRC by regulating genes involved in all cancer-related processes, including growth factor-independent proliferation, immortality, evasion of apoptosis, angiogenesis, tissue invasion, and metastasis [44,45].The transcriptional activity, DNA-binding affinity, and protein stability of Sp1 can all be changed post-translationally [46].Sp1 is frequently post-translationally modified through phosphorylation, glycosylation, acetylation, ubiquitination, and sumoylation [47], with ubiquitination being an important post-translational modification [48].Several E3 ligases specifically recognize Sp1 and mediate its ubiquitination and subsequent degradation [49,50].Less is known, however, about the biological roles of lncRNAs during E3 ligase-mediated ubiquitination and degradation of Sp1. The present study found that LINC00955 post-translationally regulates Sp1 ubiquitination and degradation by promoting the binding of E3 ligase to Sp1.The TRIM family of proteins, which is distinguished by the presence of three conserved N-terminal domains, a RING domain, one or two B-Boxes (B1/B2), and a coiled-coil domain, includes the 17 beta-estradiol and type I IFN-inducible E3 ligase known as TRIM25 [51].TRIM25 acts as an E3 ubiquitin ligase that promotes ubiquitination of Sp1 at K610 [20].The present study found that LINC00955 promotes degradation of Sp1 by enhancing binding of the E3 ligase TRIM25 to Sp1, with subsequent degradation of Sp1 protein.LINC00955 nucleotides 2073-2204 interact with Sp1 protein, and nucleotides 984-1135 interact with TRIM25 protein.LINC00955 serves as a scaffold for protein-protein interactions that inhibit proliferation of CRC, indicating that LINC00955 plays a direct role in proliferation of CRC.In recent years, intracellular protein degradation pathways and the development of proteintargeted degradation technology have become of interest to researchers in the field of drug research and development [52].The current research found that LINC00955 can act as a scaffold molecule that participates in the ubiquitination and degradation process, providing new ideas and directions for research on ubiquitin-proteasome systems. Conclusions In conclusion, LINC00955 is downregulated in CRC and inhibits CRC cell proliferation by acting as a molecular scaffold of TRIM25 and Sp1 to inhibit the DNMT3B/ PHIP/CDK2 axis. Fig. 1 Fig. 
1 Expression of LINC00955 is considerably lower in CRC tissues, and it inhibits growth of CRC cells in vitro and in vivo.A Expression of LINC00955 in clinical samples from the TCGA database.B Correlation between expression of LINC00955 in the TCGA database and the survival rate of CRC patients.C, D qPCR assays of the expression of LINC00955 in (C) primary CRC and adjacent normal colorectal tissue samples, and (D) in normal colorectal and CRC cell lines.E, F LINC00955 expression in HCT116 and RKO cells stably transfected with LINC00955, as determined by qPCR.G, H Effects of LINC00955 on proliferation of CRC cells, as determined by ATP assays.I, J Soft agar experiments were performed to analyze the effects of LINC00955 on the growth of CRC cells.K, L Effects of LINC00955 on the DNA replication activity of HCT116 and RKO cells, as determined by EdU assays.M HCT116 cells were injected into nude mice, which were then imaged after developing tumors.N Photographs of excised tumors.O Comparison of tumor weight in two sets of nude mice.P Volumes of tumors excised from the two groups of nude mice over 19 days.An asterisk (*) indicates a significant difference (p < 0.05) Fig. 2 Fig. 2 LINC00955 inhibits the proliferation of CRC cells by downregulating expression of CDK2 and inducing cell arrest at G0/G1 phase.A, B Flow cytometric investigation of the impact of LINC00955 on the cell cycle of CRC cells.C Western blotting, showing expression of important proteins associated with the G0-G1 phase.D, E The efficiency of transfecting HA-CDK2 and its control plasmids into HCT116 (LINC00955) and RKO (LINC00955) cells was evaluated using western blotting.F, G Effects of CDK2 overexpression on cell proliferation, as measured by ATP assays, in (F) HCT116 (LINC00955) and (G) RKO (LINC00955) cells.H, I Effects of CDK2 overexpression on growth of (H) HCT116 (LINC00955) and (I) RKO (LINC00955) cells, as determined by soft agar assays.(J, K) Effects of CDK2 overexpression on the DNA replication activity of (J) HCT116 (LINC00955) and (K) RKO (LINC00955) cells, as determined by EdU assays.L, M Flow cytometric analysis of the effects of CDK2 overexpression on the cell cycle in (L) HCT116 (LINC00955) and (M) RKO (LINC00955) cells.An asterisk (*) indicates a significant difference (p < 0.05) Fig. 3 Fig. 
3 LINC00955 promotes ubiquitination and degradation of CDK2, and inhibits proliferation of CRC cells by promoting expression of PHIP.A qPCR assay of CDK2 mRNA levels in HCT116 (Vector, LINC00955) and RKO (Vector, LINC00955) cells.B, C Degradation of CDK2 protein in (B) HCT116 (Vector, LINC00955) and (C) RKO (Vector, LINC00955) cells treated with MG132 and CHX, as determined by western blotting.D Venn diagram analysis of the E3 ligase controlling CDK2 protein degradation.E Expression of KLHL6 and PHIP in HCT116 (Vector, LINC00955) and RKO (Vector, LINC00955) cells according to western blotting.F, G Transfection efficiency of shPHIP in (F) HCT116 (LINC00955) and (G) RKO (LINC00955) cells according to western blotting.H, I Expression of CDK2 in (H) HCT116 (LINC00955) and (I) RKO (LINC00955) cells after PHIP knockdown, as determined by western blotting.J, K Degradation rate of CDK2 in (J) HCT116 (LINC00955) and (K) RKO (LINC00955) cells after PHIP knockdown, as determined by western blotting.L The connection between PHIP and CDK2, as determined by Co-IP assays.M, N Effect of PHIP knockdown on CDK2 ubiquitination in (M) HCT116 (LINC00955) and (N) RKO (LINC00955) cells, as determined by western blotting after ubiquitin-IP assay.O, P ATP assays were performed to assess the impact of PHIP knockdown on the growth of (O) HCT116 (LINC00955) and (P) RKO (LINC00955) cells.Q, R Effect of PHIP knockdown on growth of (Q) HCT116 (LINC00955) and (R) RKO (LINC00955) cells, as determined by soft agar assays.S, T Effect of PHIP knockdown on the DNA replication activity of (S) HCT116 (LINC00955) and (T) RKO (LINC00955) cells, as determined by EdU assays.U, V Effect of CDK2 knockdown on the cell cycle in (U) HCT116 (LINC00955) and (V) RKO (LINC00955) cells, as determined by flow cytometry.An asterisk (*) indicates a significant difference (p < 0.05) (See figure on next page.) Fig. 5 Fig. 
5 LINC00955 inhibits transcription of DNMT3B by downregulating expression of transcription factor Sp1. A DNMT3B mRNA levels in HCT116 (Vector, LINC00955) and RKO (Vector, LINC00955) cells, as determined by qPCR. B DNMT3B promoter activity in HCT116 (Vector, LINC00955) and RKO (Vector, LINC00955) cells, as determined by dual-luciferase reporter assays. C Transcription factors present in the DNMT3B promoter region. D Expression of transcription factors in HCT116 (Vector, LINC00955) and RKO (Vector, LINC00955) cells was examined using western blotting. E, F GFP-Sp1 and its control plasmids were transfected into (E) HCT116 (LINC00955) and (F) RKO (LINC00955) cells, and stable transfection was verified by western blotting. G Dual-luciferase reporter assays showing the effects of Sp1 overexpression on DNMT3B promoter activity in HCT116 (LINC00955) and RKO (LINC00955) cells. H Effects of Sp1 overexpression on DNMT3B mRNA levels in HCT116 (LINC00955) and RKO (LINC00955) cells, as determined by qPCR. I Construction of Sp1 mutant DNMT3B promoter-driven luciferase reporters. J, K Wild-type and Sp1 mutant DNMT3B promoter-driven luciferase reporters were transferred into (J) HCT116 (Vector, LINC00955) and (K) RKO (Vector, LINC00955) cells, and their promoter activity was measured. L, M Effects of Sp1 overexpression on proliferation of (L) HCT116 (LINC00955) and (M) RKO (LINC00955) cells, as determined by ATP assays. N, O Soft agar tests were performed to explore the effects of Sp1 overexpression on the growth of (N) HCT116 (LINC00955) and (O) RKO (LINC00955) cells. P, Q Effects of Sp1 overexpression on the DNA replication activity of (P) HCT116 (LINC00955) and (Q) RKO (LINC00955) cells, as determined by EdU assays. R, S Effects of Sp1 overexpression on the cell cycle of (R) HCT116 (LINC00955) and (S) RKO (LINC00955) cells, as determined by flow cytometry. An asterisk (*) indicates a significant difference (p < 0.05)

Fig. 8 Correlations between expression of Sp1, DNMT3B, PHIP, and CDK2 proteins. A-E Levels of Sp1, DNMT3B, PHIP, and CDK2 protein in CRC tissues and adjacent normal tissues, as determined by immunohistochemical analysis of 75 pairs of clinical samples. F-H Correlations between expression of the above proteins in 75 pairs of CRC tissues and normal tissue samples. I Mechanistic diagram of inhibition of CRC cell proliferation by LINC00955 through the TRIM25/Sp1/DNMT3B/PHIP/CDK2 axis
9,795.8
2023-09-23T00:00:00.000
[ "Biology" ]
Direct Synthesis of Co-doped Graphene on Dielectric Substrates Using Solid Carbon Sources Direct synthesis of high-quality doped graphene on dielectric substrates without transfer is highly desired for simplified device processing in electronic applications. However, graphene synthesis directly on substrates suitable for device applications, though highly demanded, remains unattainable and challenging. Here, a simple and transfer-free synthesis of high-quality doped graphene on the dielectric substrate has been developed using a thin Cu layer as the top catalyst and polycyclic aromatic hydrocarbons as both carbon precursors and doping sources. N-doped and N, F-co-doped graphene have been achieved using TPB and F16CuPc as solid carbon sources, respectively. The growth conditions were systematically optimized and the as-grown doped graphene were well characterized. The growth strategy provides a controllable transfer-free route for high-quality doped graphene synthesis, which will facilitate the practical applications of graphene. Electronic supplementary material The online version of this article (doi:10.1007/s40820-015-0052-6) contains supplementary material, which is available to authorized users. Introduction Graphene, a one-atom-thick layer of carbon with sp 2 hybrid orbital bonding and two-dimensional structure material, has attracted intense research interests due to its extraordinary physical and chemical characteristics, such as good mechanical strength [1], high carrier mobility [2], excellent electrical conductivity [3], superior thermal conductivity [4], and high transmittance [5]. However, the nature of pristine graphene with zero band gap brings some difficulties for its application in the electronic device field [6]. Among all the approaches to synthesize doped graphene, chemical vapor deposition (CVD) is the most popular method to obtain high-quality doped graphene in large scale by introducing copper or nickel foil as the catalyst [3,14,15] and independent doping source (e.g., NH 3 as N doping source) [16,17]. Recently, carbon sources containing dopant element have been used to directly grow doped graphene by CVD method, avoiding the post-doping treatment or using dopant gases in the growth process. For example, Tour et al. demonstrated a new approach that large area, high-quality N-doped graphene with controllable thickness can be grown from different solid carbon sources such as polymer films or small molecules, deposited on a metal catalyst substrate at 800°C [18]. Liu et al. developed a self-assembly approach that allows the synthesis of single-layer and highly nitrogen-doped graphene domain arrays by self-organization of pyridine molecules on the Cu surface [9]. However, the graphene film obtained by these methods generally requires physical transfer onto the desired substrates for subsequent device processing [19,20], which could introduce the defects and contaminations into the graphene film. Recently, we have developed a new transfer-free approach capable of synthesizing graphene directly on dielectric substrates using polycyclic aromatic hydrocarbons (PAHs) as carbon sources [21]. Significantly, N doping and patterning of graphene can be readily and concurrently achieved by this growth method. In this paper, we systematically investigate the factors that affect the growth quality of the doped graphene and optimized the growth conditions for high-quality doped graphene. 
Furthermore, we demonstrate that N, F-co-doped graphene can be synthesized using only 1,2,3,4,8,9,10,11,15,16,17,18,22,23,24,25-hexadecafluorophthalocyanine copper(II) (F16CuPc) as both the solid carbon source and the N and F doping source. Experimental Section The schematic of the growth process of doped graphene directly on SiO2-layered Si (SiO2/Si) without transfer is shown in Fig. 1. First, the SiO2/Si (SiO2: 300 nm thick) substrate was ultrasonically cleaned in acetone, ethanol, and deionized water for 15 min each. Then PAHs with planar structure (TPB or F16CuPc) were evaporated onto the substrate as solid carbon sources using a thermal evaporation system (Organic Evaporation Coating Machine ZZB-U500SA), followed by deposition of a Cu film layer on the surface of the PAHs as the catalyst using an electron-beam evaporation system (Kurt J. Lesker, PVD750). After annealing in a tube furnace under Ar gas flow at ~1.8 × 10² Pa, doped graphene was synthesized between the Cu layer and the substrate. Finally, the Cu layer was etched away with Marble's reagent (CuSO4:HCl:H2O = 10 g:50 mL:50 mL), and the doped graphene was obtained directly on the SiO2 substrate without any transfer process. The morphology of the doped graphene was characterized by scanning electron microscopy (SEM, FEI Quanta 200F). Raman spectra were recorded at room temperature using a Jobin-Yvon HR800 Raman microscope with laser excitation at 514 nm. Optical images were obtained using a fluorescence optical microscope (DM4000M). HR-TEM images were taken with a transmission electron microscope (TEM, Tecnai G2 F20). The surface state and electronic structure of the samples were studied by X-ray photoelectron spectroscopy (XPS) measurements (Kratos AXIS UltraDLD ultrahigh-vacuum (UHV) surface analysis system), using Al Kα X-rays (1486 eV) as the excitation source. The optical transmittance spectrum and sheet resistance of the as-grown films were also measured, the latter with a four-point probe setup. Results and Discussion Carbon source is an important factor in graphene synthesis. We found that the planar configuration of PAHs might provide a hexagonal honeycomb skeleton for graphene growth, and that the growth mechanism from PAHs may involve a surface-mediated nucleation process of dehydrogenated PAHs catalyzed by Cu rather than segregation or precipitation of small carbon species decomposed from the precursors. Therefore, planar PAHs that contain heteroatoms (e.g., nitrogen, boron, fluorine) were chosen as solid carbon sources for doped graphene growth in our work. In addition to the specific structure of the solid carbon sources, there are other key factors that control the quality of doped graphene, such as the thickness of the solid carbon source, the thickness of the Cu film layer, the annealing time, and the annealing temperature. Hence, in order to achieve high-quality doped graphene, the optimal growth conditions were investigated by systematically varying the above factors. 2,4,6-Triphenylborazine (TPB), which has a planar configuration, was selected as the solid carbon source to evaluate the growth conditions of the doped graphene. The effect of the TPB thickness on graphene quality was investigated first. The Raman spectra shown in Fig. S1a reveal that the optimum thickness of the TPB layer is 5 nm. When the thickness of TPB is less than 5 nm, the carbon source cannot form a continuous film on the substrate, which could result in the formation of discontinuous graphene.
When the thickness of TPB is greater than 5 nm, the excessive amount of carbon source leads to multilayer graphene or amorphous carbon formation due to the extremely low solubility of C in Cu. Different annealing temperatures were also investigated for the growth of TPB-derived doped graphene. In general, the growth temperature required to synthesize good-quality graphene in the conventional CVD method is 1000-1050 °C. Figure S1b shows the Raman spectra of graphene synthesized at different growth temperatures, suggesting that graphene can be obtained above 950 °C. Annealing temperatures below 650 °C result in the deposition of amorphous carbon, as characterized by the broad D and G bands and a very weak 2D band shown in Fig. S1b. When the annealing temperature was increased to 1050 °C, the obtained graphene layer also had a larger D band in the Raman spectra than that grown at 1000 °C, which probably arises from the partial evaporation of the thin Cu film at 1050 °C. Subsequently, different annealing times were studied. As shown in Fig. S1c, higher-quality doped graphene with a lower ID/IG and a higher I2D/IG ratio can be achieved when the annealing time is 60 min. The effect of the Cu film thickness on doped graphene growth was investigated as well. When the Cu film thickness is above 100 nm, a graphene film can be obtained. However, when the thickness of the Cu film was decreased below 100 nm, most of the Cu evaporated during the annealing process at 1000 °C, resulting in discontinuous doped graphene. In addition, graphene formed on the top surface of the Cu was observed when a thin Cu layer was used. When the thickness of the Cu film was increased to 1000 nm, relatively high-quality doped graphene was obtained, as indicated by the G/2D ratio (~0.3), D/G ratio (~1.3), and FWHM of the 2D band (~42 cm⁻¹) in Fig. S1d. Thus, the optimal growth conditions for doped graphene growth from TPB were set at 5 nm TPB as the carbon source, a 1000 nm Cu film on the top surface, and an annealing temperature of 1000 °C for 60 min. Figure 2a shows an optical image of the doped graphene grown on the SiO2/Si substrate under the optimal conditions using TPB as the carbon source. The continuous film with almost no contrast indicates that the graphene is distributed uniformly on the dielectric substrate. The corresponding Raman spectrum in Fig. 2b shows a weak D band, revealing that the graphene film is almost defect-free; the weak D band may arise from the doping effect. The G/2D ratio is ~0.25 and the 2D peak is sharp and symmetric, indicating that the obtained graphene is monolayer [22,23]. The monolayer nature of the graphene is also confirmed by the AFM measurement shown in Fig. S2. A small D' band beside the G band confirms that doped graphene has been obtained. Figure S3 shows micro-Raman mapping of the graphene 2D peak, further indicating that the graphene film is distributed uniformly on the substrate. Figure 2c shows the high-resolution XPS scan of N 1s centered at 400.7 eV, further confirming that N-doped graphene was obtained under this optimal condition. All the results demonstrate that the planar configuration of a PAH precursor containing dopant elements promotes the formation of doped graphene. The atomic concentration of N in the TPB-derived doped graphene is about 1.74% according to the XPS survey scan. No B 1s peak was observed for this sample, which is probably owing to the difficulty of B-C bond formation in the graphene film under the present conditions.
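Since graphene quality above is judged from the ID/IG and G/2D (I2D/IG) ratios and the 2D-band FWHM, a minimal sketch of how these metrics could be extracted from a two-column Raman export is given below. The band centers (~1350, ~1580, ~2700 cm⁻¹), the crude linear background, and the file name are assumptions for illustration, not the fitting procedure used by the authors.

```python
import numpy as np

def band_height(shift, intensity, center, window=50.0):
    """Maximum background-corrected intensity within +/- window cm^-1 of a band center."""
    mask = np.abs(shift - center) <= window
    local = intensity[mask]
    baseline = np.linspace(local[0], local[-1], local.size)   # crude linear background
    return float(np.max(local - baseline))

def fwhm(shift, intensity, center, window=100.0):
    """Rough full width at half maximum, estimated from the points above half maximum."""
    mask = np.abs(shift - center) <= window
    x, y = shift[mask], intensity[mask]
    y = y - min(y[0], y[-1])
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    return float(x[above[-1]] - x[above[0]])

# Assumed two-column export (Raman shift in cm^-1, counts); the file name is hypothetical.
shift, counts = np.loadtxt("raman_spectrum.txt", unpack=True)
d, g, twod = (band_height(shift, counts, c) for c in (1350.0, 1580.0, 2700.0))
print(f"I_D/I_G  = {d / g:.2f}")          # lower -> fewer defects
print(f"I_2D/I_G = {twod / g:.2f}")       # higher (with a narrow 2D band) -> closer to monolayer
print(f"2D FWHM  = {fwhm(shift, counts, 2700.0):.1f} cm^-1")
```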
In order to achieve co-doping in graphene by this method, a 5 nm F16CuPc layer was used as the carbon source to prepare N, F-co-doped graphene on the SiO2/Si substrate at 1000 °C for 60 min. F16CuPc is also a PAH compound with a planar structure. The Raman spectrum of the product (Fig. 3a) shows a large D peak and a small D' peak, which may be induced by the N and F doping atoms. The G/2D ratio is ~1.5 and the 2D peak is lower and broader than that of single-layer graphene, indicating that the obtained doped graphene film is 3-4 layers thick. Figure 3b shows an SEM image of the F16CuPc-derived doped graphene, in which a thin graphene film is seen to be homogeneously distributed on the substrate. Moreover, the HR-TEM image shown in Fig. 3c clearly shows that the doped graphene film is three layers thick, which is consistent with the Raman analysis. XPS investigation further verifies that N and F co-doping has been achieved in the graphene. XPS spectra of the N, F-co-doped graphene are shown in Fig. 4. Figure 4a shows the full XPS spectrum of the F16CuPc-derived doped graphene on the SiO2/Si substrate. There is no signal of Cu, indicating the complete removal of Cu after etching. Both nitrogen- and fluorine-related peaks are clearly found in the survey scan, which confirms the successful co-doping of N and F in the graphene film. The atomic concentrations of N and F in the F16CuPc-derived doped graphene are about 2.98% and 0.66%, respectively. The characteristic XPS C 1s core-level spectrum (Fig. 4b) is assigned to sp2 carbon (284.4 eV), confirming the graphitic structure of the as-grown graphene grains. The shoulders around 285.5 and 286.6 eV can be assigned to C-N and C-F bonding, respectively. Figure 4c shows the high-resolution XPS scan of N 1s, suggesting two types of N-C bonding: "graphitic" N centered at 401.1 eV and "pyridinic" N centered at 399.2 eV [7,9,18]. The ratio of the two types of N indicates that they are mainly bonded to three adjacent carbons, suggesting that the N atoms are uniformly incorporated into the graphene structure. The high-resolution XPS scan of F 1s in Fig. 4d shows a single symmetric peak centered at 689.1 eV, which is assigned to the C-F covalent bond [24]. Figure 5 shows the result of the optical transmittance measurement for the N, F-co-doped graphene grown directly on quartz under the same conditions as on the SiO2/Si substrate, exhibiting a high optical transmittance of ~93% at 550 nm, even though the doped graphene film is 3-4 layers thick. The sheet resistance (Rs) obtained from four-point probe measurement is ~2.5 kΩ sq⁻¹, revealing that the as-grown N, F-co-doped graphene film has high conductivity. Conclusions In summary, a facile method for the synthesis of high-quality doped graphene film on a dielectric substrate has been developed. PAHs containing dopant elements and with planar configuration were used as both carbon feedstocks and doping sources, with a layer of Cu film as the catalyst. The thicknesses of the Cu layer and the PAHs, the annealing time, and the temperature were optimized for high-quality graphene growth. N-doped and N, F-co-doped graphene have been synthesized using TPB and F16CuPc as solid carbon sources, respectively. The properties of the as-grown samples were well characterized, and the N, F-co-doped graphene exhibits a high optical transmittance and low sheet resistance.
The present growth strategy provides a controllable transfer-free route for high-quality doped graphene growth, which will facilitate the practical electronic applications of graphene.
Fig. 5 Optical transmittance spectrum of the F16CuPc-derived doped graphene on a quartz wafer. Inset: sheet resistance measured by the four-point probe method
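Two quick consistency checks for the figures of merit reported above can be sketched as follows, assuming the commonly cited ~2.3% optical absorption per graphene layer and the standard four-point-probe thin-film relation Rs = (π/ln 2)·(V/I); the probe readings in the example are placeholders chosen only to land in the reported range.

```python
import math

def transmittance(n_layers, absorption_per_layer=0.023):
    """Approximate optical transmittance of n stacked graphene layers at 550 nm."""
    return (1.0 - absorption_per_layer) ** n_layers

for n in (1, 3, 4):
    print(f"{n} layer(s): T ~ {transmittance(n) * 100:.1f} %")
# 3-4 layers give roughly 91-93 %, consistent with the ~93 % reported above.

def sheet_resistance(voltage_v, current_a):
    """Four-point-probe sheet resistance of a thin, laterally large film (ohm per square)."""
    return (math.pi / math.log(2.0)) * voltage_v / current_a

# Placeholder probe readings chosen to give a value in the reported ~2.5 kOhm/sq range.
print(f"Rs ~ {sheet_resistance(voltage_v=0.552, current_a=1.0e-3):.0f} ohm/sq")
```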
3,099.6
2015-07-16T00:00:00.000
[ "Materials Science", "Engineering", "Physics", "Chemistry" ]
Functional connectivity in amygdalar‐sensory/(pre)motor networks at rest: new evidence from the Human Connectome Project The word ‘e‐motion’ derives from the Latin word ‘ex‐moveo’ which literally means ‘moving away from something/somebody’. Emotions are thus fundamental to prime action and goal‐directed behavior with obvious implications for individual's survival. However, the brain mechanisms underlying the interactions between emotional and motor cortical systems remain poorly understood. A recent diffusion tensor imaging study in humans has reported the existence of direct anatomical connections between the amygdala and sensory/(pre)motor cortices, corroborating an initial observation in animal research. Nevertheless, the functional significance of these amygdala‐sensory/(pre)motor pathways remain uncertain. More specifically, it is currently unclear whether a distinct amygdala‐sensory/(pre)motor circuit can be identified with resting‐state functional magnetic resonance imaging (rs‐fMRI). This is a key issue, as rs‐fMRI offers an opportunity to simultaneously examine distinct neural circuits that underpin different cognitive, emotional and motor functions, while minimizing task‐related performance confounds. We therefore tested the hypothesis that the amygdala and sensory/(pre)motor cortices could be identified as part of the same resting‐state functional connectivity network. To this end, we examined independent component analysis results in a very large rs‐fMRI data‐set drawn from the Human Connectome Project (n = 820 participants, mean age: 28.5 years). To our knowledge, we report for the first time the existence of a distinct amygdala‐sensory/(pre)motor functional network at rest. rs‐fMRI studies are now warranted to examine potential abnormalities in this circuit in psychiatric and neurological diseases that may be associated with alterations in the amygdala‐sensory/(pre)motor pathways (e.g. conversion disorders, impulse control disorders, amyotrophic lateral sclerosis and multiple sclerosis). Introduction The English word 'e-motion' originates from the Latin word 'ex-moveo' which literally means 'moving away from somebody/ something'. Hence, a fundamental role of emotions is to promote movement and prime goal-directed behavior (Damasio, 2001). The tight link between emotions and action control may also have ancient evolutionary roots as animals/individuals need to quickly adjust their behavior to environmental stimuli with rapidly changing affective value (i.e. threats vs. rewards). This implies that fast and direct interactions between emotional and sensory-motor cortical systems may occur to mediate such adaptive behaviors. Consistent with this hypothesis, data from animal research have provided evidence of a direct link between the amygdala, a key emotional sub-cortical area and a number of sensory-motor cortical regions critically involved in sensory-motor control and action planning. In particular, anterograde and retrograde studies in monkeys, rats and cats (which assessed the axonal projections from and to specific brain regions) have found that the supplementary motor area (SMA) (Jurgens, 1984), cingulate motor area (Ghashghaei et al., 2007), lateral premotor cortex (Avendano et al., 1983;Amaral & Price, 1984;Llamas et al., 1985), primary motor cortex (Macchi et al., 1978;Sripanidkulchai et al., 1984), and primary somato-sensory cortex receive direct inputs from the amygdala (Sripanidkulchai et al., 1984). 
More recently, a diffusion tensor imaging (DTI) study in humans employed probabilistic tractography to demonstrate mono-synaptic connections between the amygdala and sensory-motor areas like the lateral and medial pre-central cortex, motor cingulate, primary motor cortex and post-central gyrus (Grezes et al., 2014). Further analyses revealed that the dorsal amygdala, which serves as the main amygdalar output nucleus, was more strongly connected with the motor than non-motor cortices (i.e. the orbitofrontal cortex, OFC, fusiform gyrus, FG, and superior temporal gyrus, STG) (Grezes et al., 2014). In contrast, the baso-lateral amygdalar complex, which in turn receives the majority of the inputs directed to the amygdala, showed greater structural connectivity with the OFC, FG, and STG relative to the motor cortices (Grezes et al., 2014). Overall, these findings showed that different amygdala sub-nuclei may have distinct patterns of anatomical connectivity with various cortical regions, which is likely to have important functional consequences (Grezes et al., 2014). In particular, the existence of direct connections between an output nucleus of the amygdala and a set of cortical areas involved in motor planning may represent a key mechanism by which the amygdala influences goal-directed behavior over and above its wellknown effects on autonomic and stereotypical motor responses (as those mediated via the hypothalamus and brainstem) (Grezes et al., 2014). On the other hand, the presence of direct connections between the amygdala and sensory cortical regions may also represent the brain basis of 'embodied' emotions (Damasio, 1994(Damasio, , 2001. Consistently with this theory, a study in healthy volunteers has found that the sight of distorted finger postures in others was associated with increased activation in the right primary motor cortex, post-central somatosensory areas, and amygdala (Schurmann et al., 2011). Likewise, the functional patterns of activation in the right somatosensory cortex are linked to distinct, and somato-topically organized, emotional categories as well as self-reported sensory experiences (Kragel & LaBar, 2016). Together, these data provide support to the model positing that emotional feelings may be 'embodied' via specific brain mechanisms, which may also explain why emotions are able to modulate subjective sensorial experiences. However, the functional significance of the putatively direct anatomical pathways between the amygdala and (pre)motor/sensory cortices remain uncertain. In particular, it is unclear whether the existence of such mono-synaptic routes between the amygdala and (pre)motor/sensory cortical areas facilitates synchronous activity between these regions. To address this issue, one can use restingstate functional magnetic resonance imaging (rs-fMRI), which enables to simultaneously examine distinct neural networks underpinning different cognitive, emotional and sensory-motor functions, while minimizing task-related performance confounds. More specifically, rs-fMRI permits to study the coherence in the spontaneous fluctuations of the blood-oxygenation-level-dependent (BOLD) signal between inter-connected brain areas (Raichle, 2015). Hence, given the evidence for putative direct connections between the amygdala and sensory/(pre)motor cortices, it should be possible to identify a separate amygdala-sensory/(pre)motor network using rs-fMRI. 
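The functional connectivity described here amounts to temporal coherence of spontaneous BOLD fluctuations between regions. A minimal seed-based sketch, correlating the mean amygdala time series with that of a sensorimotor region over one 1200-volume run, is given below; the simulated arrays and ROI names are illustrative, temporal filtering and nuisance regression are omitted, and the actual analysis in this study is ICA-based rather than seed-based (see Methods).

```python
import numpy as np

def seed_connectivity(seed_ts, target_ts):
    """Pearson correlation between two regional BOLD time series (one value per volume)."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    target = (target_ts - target_ts.mean()) / target_ts.std()
    return float(np.mean(seed * target))

# Illustrative arrays: 1200 volumes (one ~15-min HCP run), shape = (volumes, voxels-in-ROI).
rng = np.random.default_rng(42)
shared = rng.standard_normal(1200)                       # a common slow fluctuation
amygdala_roi = shared[:, None] * 0.7 + rng.standard_normal((1200, 50))
sma_roi      = shared[:, None] * 0.6 + rng.standard_normal((1200, 80))

r = seed_connectivity(amygdala_roi.mean(axis=1), sma_roi.mean(axis=1))
print(f"amygdala-SMA resting-state correlation: r = {r:.2f}")
```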
Surprisingly, no study thus far has reported the existence of such a distinct amygdala-sensory/(pre)motor network, although a past rs-fMRI study in n = 65 participants found increased functional connectivity between the amygdala and pre-central gyrus within the context of a more extended circuit including other prefrontal areas like the medial frontal gyrus (BA 11), superior frontal gyrus (BA 10), and anterior cingulate cortex (BA 32) (Roy et al., 2009). All in all, the fact that no distinct amygdala-sensory/(pre) motor network at rest has clearly emerged in earlier studies may have depended on a number of factors including poor awareness of the importance of such circuit, reduced statistical power to detect it (e.g. due to small sample sizes), or the use of different analytical approaches across studies that may have obscured the presence of this network. To overcome these limitations and test the hypothesis that a specific circuit including the amygdala and sensory/(pre)motor regions can be identified with rs-fMRI, we employed a large and homogeneous sample of healthy and young participants (n = 820, age-range: 22-37 years, four scans per subject, 1200 volumes per scan). This very rich rs-fMRI data-base was drawn from a public repository of structural and functional neuroimaging measures, which were made available via the Human Connectome Project (HCP) (http://www.humanconnectome.org/), a world-wide collaborative project that aims at exploring the basic aspects of the human brain structure and function (Van Essen et al., 2013). Methods Participants rs-fMRI data were acquired in 820 participants at 3 Tesla in four runs of approximately 15 min each, two runs in one session and two in another session, with eyes open with relaxed fixation on a projected bright cross-hair on a dark background (and presented in a darkened room). All participants were young and healthy adults (age-range: 22-36 years) with no medical or neuro-psychiatric disorders including hypertension, alcohol abuse, anxiety or depressive disorders and behavioral problems during childhood (i.e. conduct disorder) (see Table 1 for further details on demographics and clinical variables). Magnetic resonance imaging (MRI) scanning Within each session, oblique axial acquisitions alternated between phase encoding in a right-to-left (RL) direction in one run and phase encoding in a left-to-right (LR) direction in the other run. Acquisitions parameters were as follows: Gradient-echo echo-planar imaging, TR = 720 ms, TE = 33.1 ms, flip angle = 52°, FOV = 208 9 180 mm, Matrix 104 9 90, Slice thickness = 2.0 mm; 72 slices; 2.0 mm isotropic voxels, Multiband factor = 8, Echo spacing = 0.58 ms, BW = 2290 Hz/Px). This resulted in a total of 4800 rs-fMRI volumes per subject, subdivided into four sessions of 1200 volumes each. Structural (T1-weighted) images as well as field maps were also acquired in order to aid data preprocessing. Further details about data acquisition and processing (summarized below) can be found in the HCP S900 Release reference manual, available at https://www.humanconnectome.org/. Resting-state fMRI (rs-fMRI) data analysis Each 15-min (1200 volume) run of each subject's rs-fMRI data was pre-processed using FSL according to Smith et al. (Jenkinson et al., 2012;Smith et al., 2013); it was minimally pre-processed according to the latest version (3.1) of the HCP minimal pre-processing pipeline, which is especially designed to capitalize on the high data quality offered by HCP (Glasser et al., 2013). 
This included gradient distortion correction, motion correction using FLIRT (also part of FSL), TOPUP-based (also part of FSL) field map pre-processing using a spin echo field map (specific for each scanning day), distortion correction and registration into standard space using a customized boundary-based-registration (BBR) algorithm, one-step spline resampling from the original EPI into MNI space including all transforms, intensity normalization and bias field removal. Artifacts were removed using ICA+FIX (Salimi-Khorshidi et al., 2014). This involves employing an automatic classifier to identify ICA components due to measurement noise, additional motion or physiological artifacts like cardiac pulsation and respiration. Each dataset was then temporally demeaned and had variance normalization applied according to Beckmann et al. (Beckmann & Smith, 2004). Group-PCA output from all 820 subjects was generated by MIGP (MELODIC's Incremental Group-PCA), a technique that approximates full temporal concatenation of all subjects' data. This comprises the top 4500 weighted spatial eigenvectors from a group-averaged PCA. The MIGP output was then fed into group-ICA using FSL's MELODIC tool (Hyvarinen, 1999; Beckmann & Smith, 2004), applying spatial-ICA at dimensionality 15. Spatial-ICA was applied in gray-ordinate space (which includes surface vertices plus subcortical grey matter voxels (Glasser et al., 2013) and was designed by the HCP consortium specifically for storing and processing large amounts of voxel-wise functional and structural data more efficiently). Subsequently, the ICA maps were dual-regressed into each subject's 4D data. This is a two-stage procedure which involves regressing the group-wise spatial maps into each subject's 4D dataset to give a set of time-courses, followed by regressing those time-courses into the same 4D dataset to create a subject-specific set of spatial maps. Finally, the subject-wise coefficients (betas) resulting from dual regression were averaged across subjects in Montreal Neurological Institute (MNI) space. All resulting maps were thresholded at the 99th percentile and visually inspected, after which automatic anatomical localization of global cluster-wise maxima as well as local maxima within each cluster was performed using the Jülich Histological Atlas (Eickhoff et al., 2005) and the Harvard-Oxford cortical and subcortical atlases (Desikan et al., 2006). In order to verify the robustness of our results (and particularly the consistency of the components of identified networks) against the pre-determined number of components, the above analysis was re-examined while imposing a dimensionality of 25 (instead of 15) independent components. Results Figure 1 shows the results of thresholding the 13th ICA component at the 99th percentile along with the anatomical interpretation/localization of clusters surviving the threshold procedure. Table 2 shows the results of anatomical localization of all topologically connected clusters in the 13th ICA component after thresholding. Upon visual inspection, the hypothesized functional amygdala-sensory/(pre)motor network was detected in the 13th component of the 15-component group-ICA analysis (Fig. 1).
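The dual-regression step summarized in the Methods can be sketched in a few lines; the following Python example (toy dimensions, random data, numpy assumed) is a schematic illustration of the two-stage least-squares procedure and the 99th-percentile thresholding, not the FSL implementation used in this study.

import numpy as np

# Toy dimensions: V gray-ordinates, T time points, K group-ICA components.
rng = np.random.default_rng(1)
V, T, K = 5000, 1200, 15
group_maps = rng.standard_normal((V, K))      # group-level spatial ICA maps
subject_data = rng.standard_normal((V, T))    # one subject's data, reshaped to gray-ordinates x time

# Stage 1: regress the group spatial maps into the subject's data
# to obtain subject-specific component time-courses (K x T).
timecourses, *_ = np.linalg.lstsq(group_maps, subject_data, rcond=None)

# Stage 2: regress those time-courses into the same data
# to obtain subject-specific spatial maps (K x V).
spatial_maps, *_ = np.linalg.lstsq(timecourses.T, subject_data.T, rcond=None)

# Threshold one component map at its 99th percentile, as done for the
# group-average maps reported here.
comp = spatial_maps[12]                        # e.g. the 13th component
mask = comp >= np.percentile(comp, 99)
print(mask.sum(), "gray-ordinates survive the threshold")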
Of note, this network consistently contained the bilateral dorsal amygdala and SMA/pre-SMA when using the 15-component group-ICA analysis as well as the analysis with higher ICA dimensionality (25 components), although the primary motor cortex was not present in the 25-component ICA and the sensory cortices were only found in the 15-component analysis (see Table S1). Furthermore, we were able to confirm that the remaining ICA components (i.e. all components from the 1st to 15th apart from the 13th) contained a number of well-known motor, visual, cognitive and emotional networks that have been previously identified in past resting-state studies (e.g. the default-mode network, the frontoparietal-cerebellar attentional networks, the motor-sensory networks, the visual networks, the salience network, etc.) (Raichle, 2015) (see Table S2). Discussion The current data support the existence of a distinct amygdala-sensory/(pre)motor circuit at rest in a homogeneous and large sample of n = 820 participants drawn from the Human Connectome Project public repository (Van Essen et al., 2013). Of note, the finding of this amygdala-sensory/(pre)motor network remained relatively consistent when varying the dimensionality of the main independent component analysis (ICA), that is, the number of components chosen to separate the correlations in the BOLD signal across regions into non-overlapping spatial and time components. The ability to identify such a distinct amygdala-sensory/(pre)motor circuit at rest offers the possibility to assess the function of an important limbic-sensory/motor network without the potential confounds associated with task-related performances. More specifically, rs-fMRI is particularly suited to studying the interaction between emotional and sensory/motor brain systems in clinical conditions in which it is challenging to collect task-based measures (e.g. dementia, severe motor deficits in patients with stroke, multiple sclerosis, or amyotrophic lateral sclerosis). The present findings also provide support for the hypothesis that the amygdala may 'work in tandem with cortical sensory/motor areas to facilitate the preparation of adaptive responses to social and affective signals' (Grezes et al., 2014). This is also consistent with previous task-based fMRI studies that found co-activation of the amygdala and sensory/motor cortices during emotional processing (de Gelder et al., 2004; Ahs et al., 2009; Pichon et al., 2009; Schurmann et al., 2011; Van den Stock et al., 2011; Conty et al., 2012; Grezes et al., 2013; Kragel & LaBar, 2016), and with evidence from transcranial magnetic stimulation studies showing that emotional stimuli may facilitate action readiness (Oliveri et al., 2003; Baumgartner et al., 2007; Hajcak et al., 2007; Toschi et al., 2008, 2009; Coombes et al., 2009; Coelho et al., 2010). In keeping with a recent DTI study (Grezes et al., 2014), we found that the spontaneous activity in the dorsal amygdala, rather than in other amygdala nuclei, was functionally coupled with activity in the sensory and (pre)motor cortices at rest. This suggests that the cortical sensory/motor system may be directly modulated by specific amygdalar output nuclei, which in turn receive highly processed inputs from other amygdalar circuits involved in emotional processing (e.g. the baso-lateral complex). However, just like the DTI results presented by Grezes et al.
(Grezes et al., 2014), our study does not enable us to infer any directionality of the effects, which can only be examined using causal techniques for whole-brain rs-fMRI analysis (Duggento et al., 2016). Nevertheless, given the prevalent presence of anatomical connections from the amygdala to pre-motor cortices rather than vice versa, it may be that the amygdala's influence over the sensory/motor cortical system is stronger than the reverse, although the motor areas are still in a position to significantly affect amygdala function via other indirect and perhaps multi-synaptic pathways. Assessing the amygdala-sensory/(pre)motor networks with rs-fMRI may also shed new light onto the pathophysiological mechanisms underlying a group of psychiatric, psychological, and neurological disorders in which movement control and action planning can be compromised by the presence of emotional dysfunction. Several studies have indeed found evidence that abnormal interactions between the amygdala and (pre)motor cortices may be at the basis of such disorders. First, Voon et al. found increased functional connectivity between the amygdala and pre-motor regions (including SMA) in patients with conversion disorder (CD), relative to controls, while performing an emotional faces task (Voon et al., 2010). Second, during both internally and externally generated movement, CD patients, relative to controls, have been reported to have lower SMA activity, which was associated with higher amygdala response (Voon et al., 2011). Third, a recent meta-analysis of fMRI studies on motor conversion disorders (MCDs) found that MCD patients differed from controls in a series of regions including the amygdala and primary motor cortex (Boeckle et al., 2016). Fourth, convergent neuroimaging findings have suggested alterations in brain circuits mediating emotional processing (e.g. the amygdala) as well as motor control, planning and coordination (e.g. SMA and cerebellum) in patients with psychogenic non-epileptic seizures, an MCD characterized by paroxysmal behaviors resembling epileptic seizures (Labate et al., 2012; Perez et al., 2015). Finally, a study in incarcerated young offenders found that pre-motor cortex functional connectivity was correlated with activity in the default-mode network, a set of brain regions which includes medial temporal lobe areas like the hippocampus and amygdala (Shannon et al., 2011). At the same time, we have demonstrated that disease severity and duration in amyotrophic lateral sclerosis (ALS), a devastating motor neuron disease affecting motor and non-motor brain areas, were associated with progressively more abnormal functional connectivity between the amygdala and (pre)motor regions while processing emotional faces (Passamonti et al., 2013). This is in keeping with previous neuropathological data showing that the amygdala, alongside the motor cortices, can be affected by neurodegeneration in ALS and may be critical in mediating some of the non-motor symptoms that characterize ALS (Takahashi et al., 1997; Tsuchiya et al., 2001, 2002). Furthermore, altered structural 'connectomic' measures, including the shortest path length between the amygdala and (pre)motor cortices, were found in depressed patients with multiple sclerosis (MS), relative to non-depressed MS patients and controls (Nigro et al., 2015).
The existence of direct anatomical connections between the amygdala and sensory/(pre)motor cortices, and the fact that these may have a functional relevance, can thus provide a novel mechanistic explanation for the abnormal interactions between emotional and sensory/motor systems that can be detected in patients with ALS and MS. In conclusion, although our study was performed in a large number of participants, whose data have been previously used to successfully delineate the main amygdalar output nuclei (Tyszka & Pauli, 2016), we cannot exclude a possible loss of spatial specificity due to partial volume effects. In addition, it should be noted that volumetric versions of subject-wise or group-wise Z or t maps from dual regression are not publicly available for analysis and, while single-subject Z-maps are available in gray-ordinate space, this space is not fully dense in subcortical regions, particularly around the amygdala. Therefore, in order not to omit the regions which are part of the key findings of this paper, we chose to work with the single volumetric, whole-brain average (across all subjects) maps. This motivated our choice of adopting a stringent 1% thresholding procedure of the group-wise map in lieu of group-wise statistical inference. Additional rs-fMRI studies in a broad spectrum of neuropsychiatric conditions, as well as at ultra-high fields (7 T), are now warranted to examine, with superior spatial resolution, how the disruption of the normal interactions between brain areas at the interface between the limbic and sensory-motor systems is associated with abnormal emotional behavior in different psychiatric and neurological diseases. Supporting Information Additional supporting information can be found in the online version of this article: Fig. S1. This supplementary figure displays the full extent of the activation clusters reported in Fig. 1. Table S1. Anatomical localization of topologically connected clusters after thresholding the 7th Independent Component Analysis (ICA) component of the 25-component ICA set at the 99th percentile. Table S2. Anatomical localization of topologically connected clusters in each of the 15 Independent Component Analysis (ICA) components (from 1st to 15th, excluding the 13th) after thresholding at the 99th percentile. Conflict of interests The authors have no conflicts of interest to declare.
Colorimetric hand-held sensors and biosensors with a small digital camera as signal recorder, a review Sensors, biosensors, lateral flow immunoassays, portable thin-layer chromatography and similar devices for hand-held assay are tools suitable for field or out-of-laboratory assays of various analytes. The assays frequently exert a limit of detection and sensitivity close to more expensive and elaborate analytical methods. In recent years, huge progress has been made in the field of optical instruments, where digital cameras or light-sensitive chips serve for the measurement of color density. The general availability of cameras, decreasing prices and their integration into a wide spectrum of phones, tablets and computers promise easy application of analytical methods in which such cameras are employed. This review summarizes research on hand-held assays where small cameras like the ones integrated into smartphones are used. A discussion of such assays, their practical applicability and relevant specifications is also provided here. Introduction Currently, gas chromatography, liquid chromatography and high performance liquid chromatography are standard separation analytical methods that can be further combined with simple physical detectors (voltammetric, fluorimetric) or mass spectrometry; these methods are applicable to the measurement of a wide number of analytes, from simple inorganic or organic compounds to large macromolecules of biological origin [1][2][3][4][5][6][7][8][9][10]. These methods represent the direct competition, or the reference standard, for any newly developed assay. In the field of genetic material identification, polymerase chain reaction is the method of first choice; it can be used for the identification of microorganisms, diagnosis of diseases, forensic identification of perpetrators or victims, surveillance of genetically modified organisms, food control and other purposes [11][12][13][14][15][16][17][18]. Many analytes can be determined by immunoassays. The precondition is that an antibody specific to the analyte is available when the analyte is measured, or that a specific antigen is available when antibodies are examined as a marker. The standard enzyme-linked immunosorbent assay (ELISA) and radioimmunoassay are examples of the most common ones [19][20][21][22][23][24][25]. The aforementioned analytical methods are routinely available in hospitals, food and hygienic control, industrial and similar laboratories. Though there are of course other methods and devices, the mentioned standard analytical methods are seen as the main tools in current analytical practice. This review is focused on the recent progress in hand-held assays like sensors, biosensors, lateral flow immunoassays, portable thin-layer chromatography and similar devices using a small digital camera for output signal measurement. Tiny digital cameras, like the ones integrated into smartphones or portable consumer cameras, were considered here because they are cheap and suitable for everyday carrying. Analytical protocols can of course be based on highly sensitive photographic devices, but these are out of the scope of this paper. The hand-held assays are not direct competitors to the aforementioned standard analytical methods, but they can support their findings, verify them, work in field and harsh conditions, and represent a cheap and easy-to-perform type of assay. This review summarizes the recent findings on the issue of hand-held assays with a digital camera as the output.
A discussion of the practical role of hand-held assays, their differences and advantages, and their disadvantages relative to the standard methods is provided here as well. The expected direction of future research in this field is extrapolated from the current literature. Color density and its measurement Measurement of fluorescence and spectral absorbance are the most common methods, and both of them are typically performed in cuvettes, microplates, flow-through cells and so on. Compared to the standard methods, the assays presented in this review use a standard digital camera for output signal measurement. A standard digital camera is a device for making photographs, and it is composed of common parts including the optics (camera lens), lens aperture, shutter and image sensor. There are other optional parts like the control screen, viewfinder and camera flash for supporting the taking of pictures in poorly illuminated spaces. The flash in particular is an important optional part when the camera is intended as part of a hand-held assay, because a defined source of light is necessary to make the results repeatable and reproducible [26]. The most common digital output from a camera is the jpeg format (and its variants like jpg, jpe, jif, etc.). The jpeg contains 8 bit color information, though some specific devices are able to produce jpegs with a higher bit depth. The color information is kept in the RGB color model, which means that every pixel of the photograph contains information about the Red (R), Green (G) and Blue (B) color channels [27]. The final color is achieved by mixing the three mentioned color channels. The 8 bits refer to the color depth, which is a discrete variable. The number of levels is calculated as 2^n, where n is the number of bits. For an 8 bit photograph, it has the value 2^8, which is equal to 256 [28,29]. This means that each channel can take one of 256 values, from 0 to 255. Every pixel of a photograph contains three such numbers, one for the R channel, one for G and one for B. Compared to the standard 8 bit photograph, a photograph with only two bits contains four values for each channel, while a 12 bit photograph contains 4,096 color shades and a 14 bit photograph 16,384 color shades. 8 bit photography is typical for small and cheaper cameras with jpeg as their output. More expensive cameras can provide non-compressed raw data (typically 16 bit for digital single-lens reflex cameras) or non-compressed pictures in formats like tiff. The tiff format is able to keep a high color depth. The so-called true color pictures and movies have a color depth of 24 bit, corresponding to 16,777,216 values. A low color depth can represent a problem when a sample containing an analyte at a concentration slightly above the limit of detection has to be distinguished from controls. Currently, the 8 bit jpeg format is the most typical, so any hand-held analytical method based on the standard cameras integrated into phones, tablets or computers should take this limitation into account when economic competitiveness is a goal. The general principle of digital photography and the shortcomings in its methodology were extensively reviewed in the work by Pohanka, 2017 [26]. The idea of digital image processing was also extensively discussed in the cited papers [30][31][32][33]. Cameras can suffer from various problems with image quality.
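Before turning to these image-quality issues, the following Python sketch shows how the per-channel color information just described is read from an 8-bit RGB photograph and reduced to a mean value per channel (Pillow and numpy assumed; the file name and region coordinates are hypothetical).

import numpy as np
from PIL import Image

# Load an 8-bit RGB photograph and crop the region of interest,
# e.g. the colored spot on a test strip.
img = np.asarray(Image.open("assay_photo.jpg").convert("RGB"))  # shape (H, W, 3), values 0-255
roi = img[100:200, 150:250]                                     # hypothetical spot coordinates

# Each pixel carries three 8-bit numbers (2**8 = 256 levels per channel).
levels = 2 ** 8
mean_r, mean_g, mean_b = roi.reshape(-1, 3).mean(axis=0)
print(f"{levels} levels per channel; ROI means R = {mean_r:.1f}, G = {mean_g:.1f}, B = {mean_b:.1f}")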
Apart from mistakes in assembly or construction shortcomings, there are also problems determined by the physical principle of photography: optical aberrations caused by the lenses, and noise and other negative effects caused by the detector chip. Cheaper cameras, like those integrated in smart devices, can suppress the aberrations and noise by software, resulting in a partial correction but also in data loss. Collecting photographs in a raw format, and not in a standard picture format like jpeg, is one way to avoid this effect. On the other hand, cheaper cameras do not support the collection of raw digital data from the camera chip. Looking at specific shortcomings, the following can be considered: lenses can suffer from distortion (pincushion or barrel distortion), resulting in deformation of the whole image. Edge parts of a picture can suffer from various chromatic aberrations appearing as colored lines. The edge parts of a picture can also darken due to the vignetting effect. The aforementioned effects can be suppressed by choosing a higher aperture number when the light conditions allow it. A negative impact on picture quality can also arise on the chip from the light intensity and coloration at the moment the picture is taken. Insufficient light intensity leads to the setting of a higher ISO value, but a higher ISO causes significant noise in the picture or camera record. There can also be problems with white balance, resulting in an incorrect color temperature in the recorded picture or camera record. A reliable source of light, such as an integrated light diode or flash, is necessary to avoid problems with ISO and white balance. The white balance can also be switched off on most devices, which is desirable in an analysis. The principle of the information kept in a photograph is also explained in figure 1. Digital pictures are a source of data for further analysis by many methods. The hand-held ones are discussed in the next sections, but there are of course other ways to employ digital photography. Image processing has broad applicability in the analysis of specific compounds, of chemical, physical and other processes, or in the diagnosis of diseases. Such applications are of course tailored to the specific conditions and purpose of the assay. Characterization of skin structures [34,35], glaucoma diagnosis [36,37], study of cells in neuronal cultures [38], control of diamonds' optical properties [39], pathological examinations [40], characterization of diffusing fluorescent molecules [41], classification of wheat grains and detection of pest-damaged grain [42,43], quantum steganography [44], improved microscopy and other techniques typical of biology [45][46][47][48][49], automated and robot-controlled processes within the Industry 4.0 strategy [50], fruit and vegetable quality control [51], biochemical analysis [52] and temperature distribution measurement [53] are all applications linked to image processing. The digital camera can of course be linked to a bioassay, as in the following text: the bioassay can be based on various principles where an enzymatic or another chemical reaction provides a colored product, or an affinity interaction captures a fluorescent or colored label. The general principle of a bioassay with a digital camera as the output is depicted in figure 2. This review focuses on colorimetry, since colorimetry is considered the optimal method in connection with a small camera.
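In relation to the white-balance problem mentioned above, one common software correction is the gray-world assumption, in which each channel is rescaled so that all three channels share the same mean; the following Python sketch (numpy assumed, synthetic image) illustrates the idea, although for quantitative assays fixing the white balance on the camera and using a defined light source is preferable.

import numpy as np

def gray_world(img):
    # Scale each channel so that the channel means become equal.
    img = img.astype(float)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# Example with a synthetic, bluish-tinted image.
rng = np.random.default_rng(2)
tinted = (rng.random((100, 100, 3)) * [180, 190, 240]).astype(np.uint8)
balanced = gray_world(tinted)
print(balanced.reshape(-1, 3).mean(axis=0))   # channel means are now approximately equal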
The cameras can also be integrated into fluorimetry, but there are higher demands on technical equipment, optical filters and light sources. There are known applications of fluorimetry using a digital camera, and these techniques can be performed, for example, for the analysis of oil droplets [54], fluorimetry of the eye applicable to diagnostics in ophthalmology [55], and DNA hybridization measurement [56,57]. Colorimetric biosensors with thin surfaces or small volumes and Thin-Layer Chromatography A thin layer where coloration is formed due to an increase in the concentration of a chromogenic analyte is common in thin layer chromatography. Thin layer chromatography can be easily linked with digital photography, and illumination by UV light is optimal for this purpose. For instance, Tie-xin and Hong performed thin layer chromatography for cichoric acid from the plant Echinacea purpurea [58]. The authors used UV light and images recorded by a digital camera. The assay exerted a limit of detection of 0.067 µg, and the calibration plot had quite good linearity with a coefficient of determination equal to 0.9917. In the mentioned study, a standard digital camera with a resolution of 5 megapixels and 10× optical zoom (no further specifications were provided in the paper) was used. Thin layer chromatography with a digital camera as the output was also tested by Simon and coworkers for an investigation of natural extracts [59]. The researchers extracted yellow bedstraw Galium verum into alcohol and performed principal component analysis, including thin layer chromatography, recording photographs under UV light with wavelengths of 254 and 366 nm. A single-lens reflex camera, Nikon D3100 (Nikon, Tokyo, Japan), with a standard Advanced Photo System type-C (APS-C) sized Complementary Metal-Oxide Semiconductor (CMOS) chip was the digital data recording device. As a result, the authors described a specific chromatogram serving as a fingerprint for the extracts. Thin layer chromatography with a digital camera as the output was found to be a reliable tool for various analytes, including assays of amino acids after ninhydrin modification [60], creatinine labelled by iodine [61], leucine and isoleucine [62], amphetamine-like amines [63], and polycyclic aromatic hydrocarbons [64]. A thin layer can serve as a platform for enzymatic reactions; therefore it can be simply combined with color depth measurement by a digital camera. Such a concept was adapted for assays of butyrylcholinesterase as a biochemical marker in blood plasma and serum, where it can help to reveal some types of poisonings or liver malfunctions [65]. The cited assay was based on color changes where indoxylacetate was converted by the enzyme butyrylcholinesterase to blue-colored indigo, and the whole assay was performed on small cuts of filter paper. Color depth was determined from photographs taken by the camera integrated into a Samsung Galaxy S5 smartphone (Samsung, Seoul, Korea). The assay exerted the best sensitivity in the R channel, for which the limit of detection was equal to 3.09×10-6 kat/ml, while the B channel had a limit of detection of 4.67×10-6 kat/ml and the G channel 4.36×10-6 kat/ml. The analytical parameters obtained in this study were quite close to standard spectrophotometry. An unpublished part of the experiment from the cited paper, containing a calibration with diluted serum, is depicted in figure 3 [65].
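To illustrate how a calibration and a limit of detection are typically derived from such channel values, the following Python sketch fits a linear calibration of R-channel color depth versus enzyme activity and estimates the limit of detection as 3.3 times the blank standard deviation divided by the slope (numpy assumed; all numbers are hypothetical, and the 3.3-sigma convention is a common choice, not necessarily the one used in the cited studies).

import numpy as np

# Hypothetical calibration data: enzyme activity (kat/ml) vs. mean R-channel
# color depth of the reaction spot, plus replicate blank readings.
activity = np.array([0.0, 2e-6, 4e-6, 6e-6, 8e-6, 10e-6])          # kat/ml
r_channel = np.array([212.0, 198.5, 186.0, 172.8, 160.1, 147.3])   # mean R values (0-255)
blanks = np.array([212.0, 211.2, 212.9, 211.6, 212.4])

# Linear least-squares calibration (R value falls as blue indigo is formed).
slope, intercept = np.polyfit(activity, r_channel, 1)
pred = slope * activity + intercept
r2 = 1 - np.sum((r_channel - pred) ** 2) / np.sum((r_channel - r_channel.mean()) ** 2)

# Common convention: LOD = 3.3 * SD(blank) / |slope|.
lod = 3.3 * blanks.std(ddof=1) / abs(slope)
print(f"slope = {slope:.3e} per (kat/ml), R^2 = {r2:.4f}, LOD ~ {lod:.2e} kat/ml")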
In figure 3, the upper line contains plasma samples diluted with saline solution on paper containing indoxylacetate; the lower part contains the same samples on a clean cut of paper. The photograph was taken by a smartphone camera and served for color depth determination after image processing. The small cavities of bubble wrap were selected in a study establishing a colorimetric method for the measurement of glucose levels [66]. The bubbles were filled with sol-gel, glucose oxidase, peroxidase and o-phenylenediamine dihydrochloride as a chromogenic substrate. When a sample containing glucose was added, a cascade of reactions was initiated by the two immobilized enzymes, and an orange-brown end product from o-phenylenediamine was formed and photographed by the camera integrated into a Sony Xperia MT27i smartphone (Sony, Tokyo, Japan). The assay exerted a limit of detection of 0.75 mmol/l, which is below the physiological level of glycemia (in healthy people 3.9-11.1 mmol/l); therefore the required functionality of the method was reached. The assay fully correlated with standard spectrophotometry, though the sensitivity was a little lower. Marinho and coworkers developed a colorimetric assay working with a small volume of analyte for the determination of ethanol content in beverages [67]. They chose a smartphone camera as the analytical device and microtubes as measuring cuvettes. The coloration was formed by the addition of phenolphthalein in an alkaline medium to an alcoholic beverage sample. The assay had a limit of detection of 2.1% (v/v) ethanol, which is quite high; on the other hand, it is sufficient for measuring alcoholic beverages, since their ethanol content is significantly higher [68,69]. A smartphone digital camera (Samsung Galaxy S5 by Samsung) served as the measuring device for the detection of neurotoxic carbofuran in an assay consisting of cuts of filter paper with immobilized homogenate from psychrophilic bacteria [70]. The assay was based on a lipase-mediated conversion of indoxylacetate to blue indigo, which was stopped by irreversible enzyme inhibition in the presence of carbofuran. The limit of detection was equal to 3.72×10-8 mol/l for a 5 µl sample. A survey of the mentioned methods is given in table 1. Thin layer stationary immunoassays and Lateral Flow Immunoassays The recognition ability of antibodies is a principle of the immune response to an invading pathogen. The fact that antibodies are highly selective proteins can be used for various assays where an antigen is detected. Assays of prostate-specific antigen [71], infectious diseases [72][73][74], antibiotics [75], and food allergens [76] are relevant examples. The assay can also work in the opposite manner, when the bioassay contains an antigen as a reagent or immobilized part and an antibody is measured as the analyte. Immunosensors for celiac disease based on the detection of anti-transglutaminase antibodies [77] and the detection of antibodies against infectious pathogens like Francisella tularensis [78] can be given as examples. Immunosensors with a digital output work on a similar principle to the other immunoassays, except for the signal measuring technique. The relevance of immunosensors with a digital output can be learned from the following examples. Streptomycin was, for instance, detected by colorimetry with a smartphone as the output [79].
The assay worked on the principle of an interaction between an aptamer and streptomycin; the excess aptamer hybridized with complementary DNA, and the resulting double-stranded DNA interacted with SYBR Green I, a dye providing green fluorescence. The assay exerted a linear range of 0.1-100 µmol/l and a limit of detection of 94 nmol/l. An immunosensor with a digital output was constructed for the measurement of alkaline phosphatase, intended to serve in milk quality control [80]. The assay was based on the separation of alkaline phosphatase from a sample on paper modified with anti-alkaline phosphatase antibodies. After separation, alkaline phosphatase converts the chromogenic substrate 5-bromo-4-chloro-3-indolyl phosphate, providing a blue-green coloration observable via a smartphone camera. The assay had a linear response spanning two orders of magnitude, from 10 to 1,000 U/ml, and exerted a limit of detection of 0.87 U/ml. Another assay of alkaline phosphatase was performed by Yu and coworkers [81]. In this assay, the paper strip contained gold nanoparticles modified with phosphotyrosine and a test line with antibodies specific to phosphotyrosine. Alkaline phosphatase from a sample causes the dephosphorylation of the nanoparticles and affects the interaction between the nanoparticles and the immobilized antibodies. The colored spots were recorded and analyzed by a smartphone. The assay exerted a dynamic range of 0.1-150 U/l and a limit of detection of 0.1 U/l. Immunoassays are well suited to revealing large analytes, because the preparation of antibodies against large antigens is easier and because large antigens can be precipitated by antibodies, since more immunoglobulins can bind to one large structure such as a cell or a large protein. The precipitation is conditioned by the fact that immunoglobulins have multiple antigen-binding paratopes: from two (e.g. immunoglobulin G, one subunit with two paratopes) up to ten (e.g. immunoglobulin M, five subunits, each with two paratopes). The detection of the bacterium Escherichia coli was described in the paper by Sai and coworkers [82]. The assay worked on the principle of a paper with an antibody capturing E. coli from samples; the captured bacteria were then labelled with gold nanoparticles modified with another anti-E. coli antibody. The formed spots can be further highlighted by the addition of silver ions, which are reduced on the gold nanoparticles to metallic silver. The assay was performed on standard filter papers, and a cell phone camera (no further specification provided in the paper) served for image processing. The limit of detection for the assay was equal to 57 CFU/ml. This result is quite close to the standard ELISA method for bacteria detection [83]; therefore, the digital camera assay could substitute for the ELISA. Colorimetric immunoassays with signal scoring by the naked eye can be easily improved by the addition of a digital camera as the output. Lateral flow immunoassays, also known as lateral flow tests, are designed as standard hand-held methods, and the formed spots can be characterized by a digital camera. In principle, the lateral flow immunoassay consists of sample migration by capillary action up to the spot where the analyte is captured by an immobilized antibody; another labelled antibody (fluorescent nanoparticle, fluorescent dye, etc.) completes the formed sandwich and makes the spot visible due to coloration [20,[84][85][86][87][88]. The control zone is represented by an antibody against the labelled antibody.
The control zone should be formed in any case, regardless of the presence of the analyte in the sample. As an alternative, the assay can also be performed on the principle of hybridization of single-stranded DNA sequences, and such a method can also be combined with a digital camera [57,[89][90][91]. Single nucleotide polymorphisms can be scrutinized by the digital-camera-based lateral flow immunoassay [92][93][94][95][96]. Such devices are an alternative to a standard lateral flow immunoassay. The efficacy of an assay where a digital camera is combined with a lateral flow immunoassay is expected to be higher than that of the lateral flow immunoassay alone [97,98]. The general principle of a lateral flow immunoassay in combination with a digital camera is depicted in figure 4. An example of a commercially available lateral flow immunoassay test strip (human chorionic gonadotropin test) is shown in figure 5. The relevance of the combination can be learned from the following examples. The lateral flow immunoassay was, for instance, chosen as a method for the detection of cocaine [99]. In the cited paper, a magnetic lateral flow immunoassay was performed on urine samples for the presence of cocaine. The assay worked on the principle of competition between cocaine and a cocaine-bovine albumin conjugate. Magnetic beads covered with anti-cocaine antibodies reacted with cocaine or with the cocaine-bovine albumin conjugate, forming a gray line on the support pad. A standard smartphone camera served for signal scaling. The assay had a limit of detection of 3 nmol/l and a linear range of 5-500 ng/ml. A lateral flow immunoassay with a digital camera as the output was introduced by Xia and coworkers for a chloramphenicol residue assay [100]. The assay exerted a limit of detection of 0.03 ng/ml. A Samsung Galaxy S7 Edge smartphone camera combined with a lateral flow immunoassay was used for an albumin assay [101]. The assay had a limit of detection of 1.71 pg/ml and a limit of quantification of 26.48 pg/ml. In another experiment, digoxigenin was assayed by lateral flow chromatography [102]. Digoxigenin is a toxic plant steroid forming conjugates with sugars and known for its immunogenicity. The assay was recorded by the CMOS camera of an iPhone 5S smartphone (Apple, Cupertino, California, USA) and used antibody-modified gold nanoparticles. The best limit of detection was 14.4 nmol/l and the limit of quantification 19.1 nmol/l. The smartphone assay was compared with a professional digital colorimeter, BioImager (ChemStudio Plus, Analytik Jena, Jena, Germany). When the two assays were compared, the professional colorimeter proved to have better specifications (sensitivity, limits of detection and quantification), but the advantages were not decisive; considering price and simplicity, the smartphone camera is a viable competitor to standard colorimetry. A lateral flow immunoassay improved by signal measurement with a smartphone camera was also performed for the determination of 8-hydroxy-2'-deoxyguanosine [103], and antibiotic resistance was identified with the combination of a lateral flow assay, polymerase chain reaction and a smartphone [104]. An overview of the aforementioned immunoassays is given in table 2.
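As a complement to the examples above (and to the overview in table 2 below), the following Python sketch shows one simple way to quantify the test and control lines of a photographed lateral flow strip by averaging an intensity profile along the strip and integrating the dips at the line positions (Pillow and numpy assumed; the file name and line positions are hypothetical).

import numpy as np
from PIL import Image

# Average the green channel across the strip width to obtain a 1D profile.
img = np.asarray(Image.open("lfia_strip.jpg").convert("RGB")).astype(float)
profile = img[:, :, 1].mean(axis=1)            # mean G value per row along the strip
background = np.percentile(profile, 90)        # approximate membrane background

def line_signal(center, half_width=8):
    # Darker line -> larger integrated signal relative to the background.
    window = profile[center - half_width:center + half_width]
    return float(np.clip(background - window, 0, None).sum())

t_signal = line_signal(center=240)             # hypothetical test-line row
c_signal = line_signal(center=180)             # hypothetical control-line row
print(f"T/C ratio: {t_signal / c_signal:.2f}") # ratio commonly used for quantification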
Table 2. Overview of the immunoassays discussed (assay principle; camera; analyte; analytical parameters; reference):
- Alkaline phosphatase was separated on papers modified with specific antibodies; in the second step, 5-bromo-4-chloro-3-indolyl phosphate provided a blue-green coloration due to the enzyme activity; an unspecified smartphone camera; alkaline phosphatase; linear response from 10 to 1,000 U/ml, limit of detection 0.87 U/ml [80].
- Strip containing gold nanoparticles modified with phosphotyrosine and a test line with antibodies specific to phosphotyrosine; alkaline phosphatase from a sample caused dephosphorylation and affected the interaction between the nanoparticles and the antibodies; an unspecified smartphone camera; alkaline phosphatase; dynamic range 0.1-150 U/l and limit of detection 0.1 U/l [81].
- Paper with antibodies against Escherichia coli captured the bacterium, gold nanoparticles with antibodies labelled the captured bacterium, and reduction of silver ions made the spots more visible; an unspecified cell phone camera; E. coli; limit of detection 57 CFU/ml [82].
- Lateral flow immunoassay; an unspecified smartphone camera; cocaine; limit of detection 3 nmol/l, linear range 5-500 ng/ml [99].
- Lateral flow immunoassay; an unspecified digital camera; chloramphenicol; limit of detection 0.03 ng/ml [100].
- Lateral flow immunoassay; smartphone camera of a Samsung Galaxy S7 Edge (Samsung, Seoul, South Korea); albumin; limit of detection 1.71 pg/ml, limit of quantification 26.48 pg/ml [101].
- Lateral flow immunoassay; smartphone camera of an iPhone 5S (Apple, Cupertino, CA, USA); digoxigenin; limit of detection 14.4 nmol/l, limit of quantification 19.1 nmol/l [102].
Conclusions Digital-camera-based analytical devices appear to be promising for current practice and would meet demands from various laboratories, including clinical laboratories, control of manufacturing processes, assay of pollutants, hygienic control, etc. It is not expected that camera-based hand-held assays will replace standard instrumental methods like chromatography and mass spectrometry, but they represent a support to the standard devices and can replace them under some specific conditions, such as field assays or assays performed under emergency conditions. The major advantages of the hand-held assays with digital cameras are their overall simplicity, low price and general availability. An increasing role of hand-held assays with digital cameras is expected in the next few years. Further development of analytical methods based on digital cameras can also be accelerated by other techniques like 3D printing [105][106][107]. The combination of a smartphone camera and parts individually manufactured by the 3D printing technique would make analytical techniques available to nearly everyone.
On Budget Deficit under Economic Growth: Towards a Mathematical Model of MMT Recently, a school of thought called Modern Monetary Theory (MMT) has been attracting attention, but it has not received much theoretical or mathematical analysis. In this paper, we examine the theoretical validity of the MMT argument using an overlapping generations (OLG) model that includes economic growth due to population growth, and give a generally positive evaluation of MMT. The basic idea is that a certain level of continuous budget deficit is necessary to maintain full employment when the economy is growing, that inflation occurs when the budget deficit exceeds that level, and that a recession, with involuntary unemployment, occurs when the budget deficit falls below that level. In order to recover from a recession, a budget deficit in excess of that level is required, and that deficit need not be covered by a future budget surplus. The same can be said for growth resulting from technological progress. Introduction Using a simple overlapping generations (OLG) model in which goods are produced solely by labor in a monopolistically competitive industry, this paper shows that maintaining full employment at constant prices in a growing economy with a growing population requires running a continuous budget deficit. Since budget deficits must be sustained, they should be financed by seigniorage rather than debt where institutionally feasible. The need for budget deficits in a growing economy is thought to be due to the fact that older generations have lower total incomes than younger generations and thus less total savings available for consumption. This budget deficit is not a debt and should not be repaid or redeemed. If the budget deficit becomes excessive, inflation will be triggered. Since full employment is maintained by continuous budget deficits, it is only necessary to reduce the excess part of the budget deficit, and there is no need to make up for it later by running a budget surplus or reducing the deficit. Furthermore, if the budget deficit is insufficient, involuntary unemployment will occur, and a larger-than-normal budget deficit will be needed to eliminate it and return to full employment, but there is no need to make up for this later. This paper is an attempt to provide a theoretical basis for the so-called functional finance theory of Lerner (1943, 1944), as well as a theoretical basis for the recently discussed MMT (Modern Monetary Theory; Wray (2015), Mitchell, Wray and Watts (2019)). In particular, this paper provides an argument for the following claims (Kelton (2020)). We refer here to Hogan's summary of Kelton's book (Hogan (2021)). In fact, Hogan argues that "Kelton is wrong" (Note 1), but he summarizes the main points of the argument well. 1. The US Treasury creates new money. Since consumers save with money, and since the money supply equals savings, an increase in the money supply equals an increase in savings. As equation (5) in Section 5.2 shows, the increase in savings equals the budget deficit. Since the rate of increase in savings equals the rate of increase in money, which equals the rate of economic growth, i.e., the rate of increase in the production of goods, the increase in the money supply does not cause inflation. 2. Inflation is caused by federal government deficit spending, not by Fed policy.
As will be shown in Section 5.3, if the actual budget deficit is larger than the budget deficit necessary and sufficient to maintain full employment under economic growth, the price of goods will rise. 3. Federal government spending is not related to taxes or borrowing. As mentioned above, in order to achieve full employment under economic growth, a continuous budget deficit is necessary. In a growing economy, it is not possible to maintain full employment through a balanced budget. Therefore, even if the budget deficit to maintain full employment is financed by government debt, it does not have to be repaid or redeemed, nor does it have to be covered by future budget surpluses. The same is true for the additional budget deficit required to eliminate involuntary unemployment caused by an insufficient budget deficit and to return to full employment. In Appendix B, using a three-generation OLG model, we briefly consider the case where there is a pay-as-you-go pension and consumption in the childhood period, and show that in order to maintain full employment under economic growth, a budget deficit is necessary if the difference between savings excluding pensions and the debt from childhood consumption is positive, and that an excessive budget deficit leads to inflation. In Appendix C, we review the money flows for a model with pensions and consumption in the childhood period. The same conclusion can be reached in the case of economic growth resulting from technological progress rather than from population growth. The model and analysis are almost identical, and the only difference is in the interpretation, which we will briefly discuss in Section 6. In this paper, we assume that labor productivity is affected by the amount of employment but not by population growth itself in the case of increasing or decreasing returns to scale, but in the last section we also briefly discuss the case where productivity changes at a rate greater than or less than the rate of population growth due to increasing or decreasing returns to scale. The Model Following Otaki (2007, 2009, 2015a, 2015b), we use a two-period (two-generation) overlapping generations (OLG) model with production of goods under monopolistic competition. The two periods are Period 1 (younger or working period) and Period 2 (older or retired period). The structure of the model is as follows. 1. Labor is the only factor of production. The goods constitute a continuum [0, 1]. Each good is denoted by an index z ∈ [0, 1]. Good z is monopolistically produced by Firm z with increasing or decreasing returns to scale technology. Under increasing or decreasing returns to scale, employment and output may affect labor productivity. However, we assume that population growth itself does not affect labor productivity, and in the full employment state population and output increase at the same rate. 2. In Period 1 consumers supply labor, consume the goods and save money for consumption in Period 2. They are employed or unemployed. 3. In Period 2 consumers consume the goods using their savings carried over from the previous period. 4. Each consumer determines his/her consumption and labor supply at the beginning of Period 1, corresponding to whether he or she is employed or unemployed. The notation of this paper is as follows. C_i^e: consumption basket of an employed consumer in Period i, i = 1, 2. C_i^u: consumption basket of an unemployed consumer in Period i, i = 1, 2.
: demand for good z of an employed consumer in Period , 1, 2. : demand for good z of an unemployed consumer in Period , 1, 2. ISSN 2327-5510 2022 : the price of the consumption basket in Period , 1, 2. International Journal of Social Science Research : the price of good z in Period , 1, 2. : the nominal wage rate. Π: profit of firms which is equally distributed to the younger generation consumers. : individual labor supply. Γ : disutility of labor which is increasing in and strictly concave. : total employment. : labor population, or employment in the full employment state. It increases at the rate 1 ! 0. If population in Period " is , the population in Period " # 1 is . $: labor productivity which is increasing (increasing returns to scale case) or decreasing (decreasing returns to scale case) in the total employment . It is not affected by population growth (in the last section we consider a case where labor productivity changes by population growth). Utility Maximization of Consumers The utility function of an employed consumer is % , Γ . % ⋅,⋅ is homothetic. Γ is disutility of labor. The utility function of an unemployed consumer is % , . The consumption baskets in Period for employed and unemployed consumers are () * +,- 0 is the elasticity of substitution of the goods. It satisfies 0 ! 1. The price of the consumption basket in Period is 1, 2. The budget constraint for an employed consumers is Then, given , the condition for maximization of @ with respect to is Given and , the labor supply is a function of A. From (1) HI ! O 0, the labor supply is an increasing (decreasing) function of the real wage rate A. We assume that the real wage rate does not significantly affect individual labor supply. The labor productivity may affect employment in some way. We assume that even in this case is an increasing function of . Profit Maximization of Firms Let . This is the total demand or the total savings of the older generation consumers which is determined in their Period 1. The demand for good z of the older generation consumers is ( The government expenditure as well as the consumptions of the younger generation consumers and those of the older generation consumers constitute the national income. The total demand for good is (2) U is the following effective demand. V is the government expenditure. under the constraint Let and be the employment for good z and ''employment \ labor supply". Then, we have ) * . The output of Firm is $. By increasing or decreasing returns to scale, $ is a function of . Since in the equilibrium $ . , we obtain DH > DP; In the case of constant returns to scale, Therefore, we have We define the elasticity of the labor productivity by Then, . ` is constant, and satisfies 1 #`! 0. For technology with increasing (decreasing) returns to From the condition for profit maximization: We obtain . Thus, the real wage rate equals A 1 d 1 #` $. Since all firms are symmetric, for all z. International Journal of Social Science Research By the equilibrium of them, In real terms, is not larger than ( is the labor supply in the full employment state). However, it may be smaller than . Then, we have O , and there exists involuntary unemployment. If the government collects taxes from the younger generation consumers, (3) is written as $ 3 $ g # V # R. Budget Deficit for Full Employment Assume that up to Period " full employment has been maintained under constant price. Then, Superscript " means the value in Period ". 
The savings of the younger generation consumers is To maintain full employment under economic growth by population growth this should equal R h . Therefore, R h is the savings and consumptions of the older generation consumers in Period ". Since this is positive, we have V h ! g h when ! 1. In Period " # 1, we get R h7 R h , V h7 V h and g h7 g h . Thus, under the denotes the labor supply under full employment after population increases. Since this is equivalent to (4), V h7 V h , g h7 g h maintain full employment. Since the budget deficit to maintain full employment must be continuous, it should be financed by seigniorage not government bonds. The reason why budget deficit is necessary in a growing economy is as follows. When economy grows, the life time income of the older generation consumers is lower than that of the younger generation consumers. Therefore, their savings and resulting consumption will be insufficient to achieve full employment. This budget deficit is not a debt and should not be redeemed or repaid. Since consumers save with money, and since the money supply equals savings, an increase in the money supply equals an increase in savings. From (5) we find that an increase in savings equals the budget deficit. Since the rate of increase in savings is equal to the rate of economic growth, the budget deficit in this case will not cause inflation. Summarizing the results, Proposition 1 A continuous budget deficit is necessary to maintain full employment when the economy is growing due to population growth under constant prices. Excessive Budget Deficit and Inflation Assume that up to Period " 1 full employment has been maintained under constant price. However, the government expenditure or tax in Period " is different from the value in the steady state. The steady state means a state where full employment has been maintained under constant price. Let V i h and g i h be the actual values of the government expenditure and the tax, i h be the actual value of the price in Period t. Then, The savings of the younger generation consumers is To maintain full employment in Period " # 1 under the condition that h7 i h ! h we Since i h l h , this is equivalent to (4). Thus, full employment is achieved by V h7 lV h and g h7 lg h . After one period inflation, we can maintain full employment by continuous budget deficit under constant price. Therefore, the excess budget deficit that caused inflation can be reduced only by reducing the excess part, and there is no need to make up for it by creating a surplus later or by making the budget deficit smaller than the steady state value. Summarizing the results, Proposition 2 1. Inflation is caused when the budget deficit becomes larger than the level necessary and sufficient to maintain full employment. 2. An excess part of excessive budget deficits that cause inflation need only be reduced, and there is no need to make up for the excessive deficit by creating a surplus later or by reducing the budget deficit below its steady-state value. Suppose that inflation continues in Period " # 1, and h7 l i h . Then, This is equivalent to (7). As for the process leading to inflation, we can think of a story in which excess demand for goods generates excess demand for labor, which raises the nominal wage rate, which in turn raises the prices of the goods. Assuming that production does not increase above the full employment level, nominal supply and demand will not be balanced unless prices rise. 
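The following stylized Python sketch (hypothetical parameter values; it is not the paper's OLG model and ignores prices, profits and labor supply) illustrates only the accounting point behind the discussion above: when savings are held as money and the population grows at rate n, keeping the price level constant requires a deficit equal to the increase in aggregate desired savings in each period.

# Each young cohort saves a fixed real amount s per head for retirement,
# population grows at rate n, and savings are held as money. The deficit that
# keeps nominal demand equal to nominal full-employment supply then equals the
# growth of aggregate savings, i.e. n/(1+n) times the young cohort's savings.
s = 10.0          # saving per young consumer (hypothetical units)
n = 0.02          # population growth rate
L = 1000.0        # initial young population
for t in range(5):
    savings_young = s * L                # desired savings of the current young
    savings_old = s * L / (1 + n)        # savings carried in by the smaller old cohort
    required_deficit = savings_young - savings_old
    check = n / (1 + n) * savings_young
    print(f"period {t}: required deficit = {required_deficit:.1f} (check: {check:.1f})")
    L *= 1 + n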
Insufficient Budget Deficit and Involuntary Unemployment Suppose that in is different from in (4) , and the government expenditure is different from V h . The price of the goods is constant. Let V i h and g h be the actual values of the government expenditure and the tax. Then, we have Comparing (9) and (4), As clarified in Proposition 1, budget deficit is necessary to maintain full employment when the economy is growing, so involuntary unemployment will occur under balanced budget condition. The savings of the younger generation consumers in Period " is This equals the consumption of the older generation consumers in Period " # 1. Assume that in Period " # 1 full employment is achieved under the condition that h7 h , g h7 g h . Then, V i ih7 is the actual value of the government expenditure in Period " # 1. The savings of the younger generation consumers in Period " # 1 is This must equal R h to achieve full employment. Note R h is the steady state value of the savings of the younger generation consumers in Period ". Therefore, In the steady sate Therefore, the additional budget deficit larger than the steady state value is necessary to restore full employment in Period " # 1. Summarizing the results. Proposition 3 Insufficient government expenditure in Period t causes involuntary unemployment, and an additional budget deficit above the steady-state value is needed to restore full employment in period t+1. Since a continuous budget deficit is required after full employment is restored as shown in Proposition 1, the additional budget deficit created to overcome the recession does not have to be made up by later surpluses. In this section, prices were assumed to be constant. If involuntary unemployment leads to a fall in the nominal wage rate, which in turn leads to a fall in prices, then the real balance effect (so-called Pigou effect) may kick in and increase consumption. However, the use of fiscal policy would be more likely to bring about a rapid return to full employment. Growth by Technological Progress and a Case Where Population Growth Affect the Labor Productivity (1) In a case of growth by technological progress not population growth, is constant and $ increases at the rate 1. Then, by interpreting as being multiplied by y instead of being multiplied by , basically all the equations are still valid. But (labor supply in a state of full employment) in (6), (11), (12), and (B.4), (B.5) in Appendix B below should be written as . (2) We assume that the labor productivity in each period depends on the amount of employment in the case of increasing and decreasing returns to scale, but that population growth itself has no effect on productivity. In the following, I will briefly explain the case in which population growth affects productivity. Assume that and are equal, and denote them by . If full employment is achieved, the labor productivity $ in Period " # 1 and that in Period " $ satisfy Then, replacing by This is equivalent to (4). In (8) Therefore, in a case where population growth affects the labor productivity, let ` be the elasticity of the labor productivity, the rate of economic growth is ′ 1 1 #` 1 1. In the case of increasing returns to scale, the economy grows at a rate greater than the rate of population growth, and in the case of decreasing returns to scale, the economy grows at a rate less than the rate of population growth. In this paper, we did not consider the existence of capital in production and investment by firms. 
In the future, we would like to create a model that includes these factors and study the problem of the interest rate. However, the position that fiscal policy, not monetary policy, should play the main role of achieving and restoring full employment without causing inflation will not change. Concluding Remarks The purpose of this paper was to provide a theoretical analysis of MMT and Functional Finance Theory based on mathematical models, while taking into account microeconomic foundations about the behavior of consumers and firms. The main conclusions are that a certain level of continuous budget deficit is necessary to maintain full employment under economic growth, that a budget deficit above that level causes inflation, and a budget deficit below that level causes a recession including involuntary unemployment. The basic two-generation overlapping model with monopolistic competiton about production of the goods was used, but it was shown that essentially the same conclusions could be reached by using a model that includes a pay-as-you-go pension system and unemployment insurance, or by using a three-generation model that includes the childhood period before the consumers work. Please see Appendix B. Appendix B. About a Model With Consumption in the Childhood Period and Pay-As-You-Go Pensions In this appendix we briefly consider a three-generations OLG model with consumption in the childhood period and pay-as-you-go pensions. Put Period 0 before Period 1. It is the childhood period. In this period consumers do not work, only consume. The consumption could be thought of as education. This can be financed by borrowing from the younger generation or by government scholarships. No decisions are made during the childhood period, and the consumption is a constant common to all. Denote the consumption in the childhood period by D. The funds for consumption in the childhood period of consumers become debt, which must be repaid in their younger period. However, if they become unemployed, they will not be able to repay the debt, and the government will provide unemployment insurance in an amount equivalent to the debt. This is financed by the tax to the younger generation who are working. On the other hand, an older generation consumer will be able to receive a pension. This will be also financed by tax to the younger generation. Decisions about consumption in Periods 1 and 2 are made at the beginning of Period 1, as before, depending on whether the consumer is employed or unemployed. V h and g h represent government expenditure and tax other than pension and unemployment insurance. Let t be the pension per consumer of the older generation, Ψ be the pension tax to an employed consumer of the younger generation. Then, t Ψ. Let t X be the pension received by a younger generation consumer in their older period. The budget constraint for an employed consumer is or # # Π # t X Ψ P Q P t P Q P w. Since the debt is offset by the unemployment insurance, the budget constraint for an unemployed consumer is ) * . # ) * . Π # t X , or # Π # t X . From these results we can obtain the demand functions for consumption baskets and for each good. In this case from (B.1) and (B.2), (4) is written as h $ 3 h $ # t X t w g h # w x # V h # R h (B.3) w x is the consumption of the next generation consumers in their childhood period. It constitutes the effective demand as well as the government expenditure. Since when the economy grows, t X t and w x w. 
The savings of the younger generation consumers (including the pensions to be received in the future) is such that, under economic growth, it equals R h . Then the required relation among V h , g h , R h , t and w determines the budget deficit. Therefore, if the difference between the savings (excluding pensions) and the debt is positive, we need a budget deficit; in other words, an increase in the savings equals the budget deficit. If the government does not collect taxes for pensions or unemployment insurance, the corresponding tax or expenditure terms can be adjusted by that amount. Copyrights Copyright for this article is retained by the author(s), with first publication rights granted to the journal. This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).
5,213.6
2021-12-19T00:00:00.000
[ "Economics" ]
Final rotational state distributions from NO(vi = 11) in collisions with Au(111): the magnitude of vibrational energy transfer depends on orientation in molecule-surface collisions When NO molecules collide at a Au(111) surface, their interaction is controlled by several factors; especially important are the molecules’ orientation with respect to the surface (N-first vs. O-first) and their distance of closest approach. In fact, the former may control the latter as N-first orientations are attractive and O-first orientations are repulsive. In this work, we employ electric fields to control the molecules’ incidence orientation in combination with rotational rainbow scattering detection. Specifically, we report final rotational state distributions of oriented NO(vi = 11) molecules scattered from Au(111) for final vibrational states between vf = 4 and 11. For O-first collisions, the interaction potential is highly repulsive, preventing a close approach, and scattering results in high-J rainbows. By contrast, these rainbows are not seen for the more intimate collisions possible for attractive N-first orientations. In this way, we reveal the influence of orientation and the distance of closest approach on vibrational relaxation of NO(vi = 11) in collisions with a Au(111) surface. We also elucidate the influence of steering forces which cause the O-first oriented molecules to rotate to an N-first orientation during their approach to the surface. The experiments show that when NO collides at the surface with the N-atom first, on average more than half of the initial vibrational energy is lost, whereas O-first oriented collisions lose much less vibrational energy. These observations qualitatively confirm theoretical predictions of electronically non-adiabatic NO interactions at Au(111). Introduction Quantum-state resolved experiments probing molecules colliding at surfaces have revealed important dynamical information about interactions leading to energy transfer. 1 A well-studied example is the scattering of NO molecules from a Au(111) surface. [2][3][4][5][6][7] Here, energy transfer is mediated by electron transfer (ET) events forming a transient NO− that result in the coupling of NO vibration to the metal's electronic degrees of freedom. 2,8 During this process, many quanta of NO vibration can be lost to the surface within a sub-ps scattering time. 2,6 The ET mediated energy transfer process is strongly orientation dependent; the relaxation probability is enhanced for N-first compared to O-first collisions. 3 A subtle dependence of the relaxed vibrational distributions was also reported but remained unexplained. 4 The orientation influence on ET probability is at least partly due to the fact that N-first oriented molecules may more closely approach the gold surface. Evidence for this is found in ab initio calculations 9 as well as in observations of strong rotational rainbows for O-first oriented collisions; 3,5,10,11 rotational rainbows are enhanced by the repulsive O-first Au interaction. NO scattering from Au(111) has also been investigated by the independent electron surface hopping (IESH) method, 9 an algorithm for propagating classical trajectories 12 on an electron transfer (Newns-Anderson) Hamiltonian, hybridized to the metal electronic continuum. 13 IESH gives good agreement with many experimental results, 6 but due to inaccuracies in the interaction potential used for the calculation, it does not always compare favorably with experiment when compared in a one-to-one fashion.
14,15 A good example of this problem concerns the influence of NO orientation on vibrational relaxation, which was studied by IESH for the vibrational relaxation of NO(v i = 15) using classical trajectories. 9 For N-first trajectories with the NO bond perpendicular to the gold surface, strong multi-quantum vibrational relaxation was seen, whereas for O-first trajectories little or no vibrational relaxation was found. O-first trajectories did result in vibrational relaxation, but only due to dynamical steering; that is, re-orientation of the O-first orientation to N-first orientation when the molecules approach the surface. 9 Experiments with oriented molecules do not compare well with IESH theory. One reason is that experiments are not governed by classical mechanics. Instead, the quantum laws of angular momentum enforce that only rather broad initial NO orientation distributions can be produced in the laboratory. Indeed, orientation distributions are so broad that a nominal N-first distribution contains some O-first oriented molecules. 3,16 Beyond this, to find good agreement between experiment and theory, the theory would need to accurately describe the weak forces in the entrance channel that govern dynamical steering, which it cannot yet do. 17 Hence, we seek an alternative experimental approach to testing the qualitative predictions of the IESH theory as they apply to the vibrational relaxation of highly vibrationally excited NO in collisions with a Au(111) surface. These are: (1) N-first collisions result in the loss of many vibrational quanta, (2) O-first collisions result in little or no vibrational energy loss unless (3) they are dynamically steered to an N-first orientation. 9 We accomplish this by combining experimental control of initial NO orientation using externally applied electric fields and detection of rotational scattering rainbows. Specifically, we obtain rotational state distributions accompanying multiquantum relaxation of NO(v i = 11)/Au(111) surface scattering as a function of initial NO orientation. We derive rotational state population distributions for final scattered vibrational states between v f = 4 and 11. We observe strong rotational rainbows when Δv is small, and the rotational rainbow vanishes by the time Δv < −5. For these highly inelastic scattering processes, NO orientation has no detectable influence on the scattering rotational state distribution, a fingerprint of dynamical steering. As the high-J rotational rainbow is caused by O-first collisions that experience the repulsive O-Au interaction, we conclude that O-first scattering leads to less vibrational relaxation than N-first scattering unless dynamical steering reorients the NO on its approach to the surface. This allows us to derive vibrational distributions for N-first and O-first surface collisions and yields results that are qualitatively in agreement with IESH theoretical predictions. Experimental The molecular beam surface scattering apparatus has been described previously. 5,16 Briefly, we expand 10% NO/H 2 into a vacuum chamber through a piezoelectric valve (1 mm diameter nozzle, 10 Hz, 3 atm stagnation pressure). This mixture yields an incidence translational energy of 0.51 eV. After passing through two differential pumping chambers, highly vibrationally excited NO X 2 Π 1/2 (v i = 11 and J i = 0.5) is produced using the Pump-Dump-Sweep approach.
18 The molecules then pass an electrode used to generate a strong electric field (|E| = 33 kV cm−1) normal to the room temperature Au(111) surface, which orients the vibrationally excited NO molecules prior to the collision. The NO orientation can be reversed by switching the polarity of the orientation field. 5 The scattered NO molecules are detected using (1 + 1) resonance enhanced multi-photon ionization (REMPI) spectroscopy via the A 2 Σ + (v = 0-7) state and subsequent detection of the ions with micro-channel plates (Tectra MCP 050 in chevron assembly). The Au(111) crystal is cleaned by Ar-ion sputtering (LK Technologies; NGI3000) and subsequent annealing for 20 min at 950 K. The cleanliness of the surface is verified by means of Auger electron spectroscopy (Staib, ESA 100). For Pump-Dump-Sweep, 18 we require three laser pulses. For the Pump step, the 887 nm output of a frequency doubled Nd:YAG laser (Spectra Physics, Quanta Ray Lab 170-10, 10 Hz, 8-12 ns pulse width (FWHM) of the fundamental) pumped home-built optical parametric oscillator (OPO) 19 is mixed with the fourth harmonic of the Nd:YAG to obtain radiation resonant with the NO A 2 Σ + (v = 2, J = 0.5) ← X 2 Π 1/2 (v = 0, J = 0.5) transition at 204.708 nm. The same frequency doubled Nd:YAG laser pumps a second home-built OPO, whose output is mixed with the residual Nd:YAG output at 532 nm producing laser pulses at 336.10 nm suitable for the DUMP step transferring population from A 2 Σ + (v = 2, J = 0.5) to X 2 Π 1/2 (v i = 11, J i = 0.5). For the Sweep step, the 450.87 nm radiation supplied by a frequency tripled Nd:YAG laser (Spectra Physics, Quanta Ray PRO-270-10) pumped dye laser (Sirah, Precision Scan, PRSCDA-24) removes residual A state population by further excitation to a dissociative state. This prevents the undesired population of various vibrational states in the ground electronic state via fluorescence. We use the 245-315 nm output of a commercially available OPO laser system (Continuum Sunlite Ex, 3 GHz bandwidth, 2 mJ per pulse at 255 nm) to record rovibrationally resolved REMPI spectra of scattered molecules. These spectra contain all necessary information to derive rotational and vibrational distributions of ground electronic state NO with vibrational quantum numbers ranging from 4 to 11. Table 1 lists the employed REMPI transitions. In order to derive the rotational state distributions we analyze the REMPI data by fitting simulated spectra to the experiment as explained in more detail in the ESI.† Fig. 1 shows the key observations of this work: rotational state population distributions for several vibrational states produced after scattering NO X 2 Π 1/2 (v i = 11 and J i = 0.5) with 0.51 eV incidence translational energy from Au(111). An example of recorded REMPI spectra and a spectral fit, from which the population distributions are derived, can be found in the ESI.† Several of the rotational distributions exhibit a peak near J ≈ 35. This peak is a previously reported rotational rainbow that results from the repulsive interaction in collisions of the NO molecule with the Au(111) surface where the O-atom strikes the gold surface. 3,5 We see that the J ≈ 35 rainbows are present when the vibrational energy loss is small (Δv > −5) and when an O-first incidence orientation is employed. Weak high-J rainbows can be seen for N-first orientations; we attribute this to the small fraction of O-first oriented molecules present in the broad orientation distributions.
3,5,16 Particularly striking is the observation that as the vibrational energy loss increases, the J ≈ 35 rainbow diminishes and low J-states become increasingly populated. Furthermore, for a high vibrational energy loss, incidence orientation does not influence the rotational distribution of the scattered molecules, a fingerprint of dynamical steering. 5 These observations suggest a simple interpretation, the key points of which we now emphasize to the reader. Results and discussion When NO(v i = 11) molecules are incident with an O-first orientation, they may: • collide with the surface with the O-atom first and produce a J ≈ 35 rainbow, or • be dynamically steered to an N-first orientation on their approach to the surface, in which case no rainbow is seen. For NO(v i = 11) molecules that are incident with an O-first orientation and do not suffer dynamical steering, the vibrational energy loss is low. For NO(v i = 11) molecules that are incident with or which are dynamically steered to an N-first orientation, the vibrational energy loss is much larger. Previously, we reported a steric influence on the relaxed vibrational distribution for NO(v i = 11) colliding with Au(111) (see Fig. 3b of ref. 4). Information about the data analysis of the vibrational state distributions can also be found there. In light of the rotationally resolved scattering population distributions presented in this work, the explanation for the steric influence is now clear. Indeed, we can use these insights along with the results from ref. 4 to characterize the orientation dependence of the final vibrational state populations. Here, P(v) denotes the probability to find the molecule in a certain final vibrational state after surface scattering. The positive asymmetry parameter for final vibrational states v f > 6 shows that these states are predominantly populated when NO hits the surface with the O-atom first. For final vibrational states v f < 6 the orientation effect vanishes due to dynamical steering, as indicated by an asymmetry parameter close to zero. Earlier observations have shown that NO's incidence orientation strongly influences the probability of vibrational energy transfer. [3][4][5] This has been explained by a facile electron transfer event for N-first collisions that is not as likely for O-first collisions. 9 The results presented here show that, not only is there an orientation dependence of the vibrational relaxation probability, there is also a clear incidence orientation dependence of the final vibrational state population distribution. That is, the dynamics of energy transfer and the magnitude of the energy transferred are dependent on incidence orientation. A possible explanation for this behavior is that the magnitude of electronically non-adiabatic coupling is strongly dependent on the distance of closest approach: an N-first collision may approach more closely due to the attractive bonding interaction.
Fig. 1 Final rotational state population distributions are influenced by incidence orientation. NO X 2 Π 1/2 (v i = 11 and J i = 0.5) approaches the surface with 0.51 eV incidence translational energy. Rotational state distributions of scattered NO molecules in X 2 Π (v s = 11, 9, 8, 7, 6, 5, 4) are shown. Three orientation cases are shown: N-first (blue), O-first (red) and unoriented (green). The peaks near J ≈ 35 are due to a rotational rainbow arising from NO collisions where the O-atom strikes the gold surface. Solid lines are drawn to guide the reader's eye. The errors for v s = 6 are calculated based on two datasets for the populations of the rotational states derived from the γ(0,6) and γ(1,6) bands. We expect similar relative errors for the population of rotational states belonging to other vibrational states.
We also point out that the qualitative aspects of the orientation behavior observed here were previously predicted from IESH theory. 9 Specifically, these predictions are: (1) that collisions with O-first orientation are approximately vibrationally elastic whereas collisions with N-first orientations transfer large amounts of vibrational energy and (2) that dynamical steering of O-first oriented NO to N-first orientations is an important element of the electronically non-adiabatic energy transfer process. Furthermore, for IESH trajectories with N-first collisions multiple ET events are possible (leading to more vibrational energy loss) whereas for O-first trajectories fewer ET events can occur. Conclusions We report final rotational state distributions of oriented NO(v i = 11) molecules scattered from Au(111) for a large number of final vibrational states. A pronounced high-J rotational rainbow is observed in rotational state distributions for final vibrational states v f = 11-7. This rainbow vanishes for relaxation leading to lower final vibrational states. This allows us to extract the dynamics of O-first and N-first collisions while accounting for the influence of dynamical steering.
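For concreteness, the orientation asymmetry parameter referred to above can be defined along the following lines (a sketch of one common convention; the exact definition and normalization used in ref. 4 may differ):
\[
  A(v_f) \;=\; \frac{P_{\mathrm{O\text{-}first}}(v_f) - P_{\mathrm{N\text{-}first}}(v_f)}{P_{\mathrm{O\text{-}first}}(v_f) + P_{\mathrm{N\text{-}first}}(v_f)},
\]
so that A(v_f) > 0 corresponds to final states populated predominantly in O-first collisions, consistent with the sign usage in the discussion of v f > 6 above.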
3,311
2016-06-02T00:00:00.000
[ "Chemistry", "Physics" ]
Analysis of the Position Recognition of the Bucket Tip According to the Motion Measurement Method of Excavator Boom, Stick and Bucket On modern construction sites, guidance and automation systems are increasingly applied to excavators. Recently, studies have been actively conducted comparing bucket-tip estimation results obtained with different motion measurement methods for the boom, stick, and bucket and with different sensor selections. This study selected the method of measuring the cylinder lengths of the boom, stick, and bucket, and the method of directly measuring the motion of the boom, stick, and bucket links, both of which are commonly used in guidance and automation systems. Low-cost sensors that can be attached to and detached from the excavator in modular form were selected to apply the above methods to a commercial excavator. After the sensor selection, hardware and excavator simulation models for sensor measurements were constructed. Finally, the trajectory of the bucket tip was compared and analyzed through graphs and simulation results when the boom, stick, and bucket were rotated independently one by one, or together. The results give a guideline on what kind of sensor is better for machine guidance or excavator control under given external environments. Introduction Recently, construction sites have achieved higher efficiency by virtue of excavators having machine control, machine guidance, and/or additional systems for representing the working environment. The most significant matter in the aforementioned techniques is estimating the end-point of the excavator precisely by using an appropriate sensor device. However, there are many difficulties in estimating the end-point of the excavator precisely through the sensor system because of inherent characteristics such as engine vibration, large gravity forces, or unpredictable disturbances [1]. Thus, the estimation performance depends on the selection of sensor devices measuring kinematic parameters, and this selection should be considered according to the characteristics of the working site. Methods for position estimation of the end-point of an excavator are generally classified as follows: (1) measuring the length of the cylinders; (2) measuring the link motion to estimate the angular displacements indirectly; (3) measuring the angular displacements directly; and (4) considering the hydraulic system. In the case of the cylinder length, the displacements of the cylinder can be measured by using an optional bracket and guide for attaching a linear encoder or stroke sensors. Then, the parameters in the derived mathematical equations are substituted with the sensor output data so that the end-point can be estimated. In the case of the link motion, inertial measurement unit (IMU), tilt, and/or accelerometer sensors are attached to each link for measuring the joint angle indirectly through the rotation of the boom, stick, and bucket links. These measured data are also used for estimating the end-point of the excavator through the derived kinematic model. In the case of the measurement of the revolute joints of the excavator, a rotary encoder, resolver, or potentiometer has been used for measuring the angular displacements of the joints directly. The last case is related to the hydraulic flow rate. Flow rate sensors are inserted into the pipeline and measure the flow rate so that the angular displacement of each joint, which is used for estimating the end-point of the excavator, can be estimated [2].
Case studies on the use of these sensors on excavators can be found easily. Many studies have described applications to machine control, guidance, path planning, collision avoidance, etc. Table 1 shows the most recent studies on sensor applications in excavators. This table also includes additional sensors that are not mentioned above, such as vision, real-time location systems, etc. Examples of applications in cylinder length and link motion measurements form a large majority. A review of the sensor systems of next-generation construction machinery is presented in [39]. The papers cited under "Other" in Table 1 are about sensor development. Figure 1 shows some representative examples of sensor applications. The third and fourth cases require some modifications and renovations to the mechanical system of the excavator for measuring the desired parameters. Additionally, in the fourth case, which is related to the hydraulic system, there are many non-linearities, so complementary filtering is required to obtain clean signals [40,41]. In contrast, the first and second cases minimize modifications to the mechanisms of the excavator if additional brackets are used. These methods preserve the original status of the excavator. Thus, the first two methods are commonly used in practical cases at construction sites since they are economical and do not damage the property (Table 1). Therefore, this paper deals with these two methods: the measurement of the cylinder lengths, and the measurement of the link motion, which yields the angular displacements of each joint indirectly. However, the aforementioned two ways are completely different with respect to the principle of end-point estimation. In the case of the measurement of cylinder length, each cylinder movement has no effect on the other output data because the motion of each cylinder comes from an independent mechanism in the joint space. Then, the end-point of the excavator can be estimated through forward kinematics and the derived formula which converts each cylinder length into the joint angle displacement. In contrast, measuring the motion of each link has an effect on the other output data. In general, tilt sensors or inertial measurement unit (IMU) devices are attached to the links by using an additional bracket. Then, they measure the absolute angle displacements of each joint indirectly with respect to the fixed base frame. Consequently, the motion of each link changes the other links' positions as seen from the global frame. For this reason, some calibration work must be performed, since the data input to the forward kinematics formula must be relative angular displacements. Additionally, the location of the sensor devices to be attached must be considered to make calibration easier. These two methods have one common feature: renovations and modifications of the mechanism and hydraulic system are not required for the installation. This also reduces time and cost. Most importantly, these two methods are already widely adopted by many construction companies in order to help the user build a rapid automation and guidance system without damaging their property. Even though machine guidance and control systems have been developed by many construction companies, studies on the analysis of sensor characteristics regarding the application to excavators have not been made public since they are confidential property.
Moreover, high-cost devices are used in commercial guidance systems. In the case of the papers cited in Table 1, almost all have adopted sensor systems depending on the experimental environment and circumstances. Some studies have adopted various types of sensor systems. This means that the effectiveness of each sensor in measuring excavators is different. Therefore, studies on improving the performance at a low cost are still needed. In this situation, it is worth studying the sensor characteristics in estimating the end-point of the excavator to give a guideline on which type is better or worse in each case [18,42]. As mentioned above, low-cost commercial sensor devices that can be easily attached to and detached from the excavator links were selected to analyze their characteristics and suitability for end-point estimation. The reason low-cost devices were selected is that almost all developments begin at a low cost. The analysis was carried out by comparing the bucket trajectories estimated from the logged sensor output data. The sensor brackets and communication systems for logging were constructed. The CAD model and simulation tool were also used for validation. Section 2 describes the kinematic modeling for estimating the end-point of the excavator. The system construction for the sensor application is explained in Section 3, and the experimental setup and results are described in Section 4. Lastly, a discussion of the results is presented in Section 5. Forward Kinematics: Revolute Joint To estimate the end-point of the excavator using sensor output data, the forward kinematics of this machine must be derived. A typical excavator has three degrees of freedom with respect to the side view (two-dimensional space) if the swing of the cabin is excluded. Figure 2 shows the two-dimensional schematic and frame. The Denavit-Hartenberg table based on Figure 2 is shown in Table 2. The end-point position of the excavator can be expressed as x and y values with respect to the defined base frame in Figure 2, and the results of forward kinematics are as follows. Forward Kinematics: Cylinder Length In the above section, θ 1 , θ 2 , and θ 3 are necessary elements in estimating the position of the end-point. Thus, the cylinder lengths must be converted to θ 1 , θ 2 , and θ 3 through the derived mathematical equations. The most important parameters in converting to revolute joint angles are α b , α a , and α bk , which are illustrated in Figure 3. α b can be derived from Equation (3), α a comes from Equation (4), and α bk can be obtained using Equations (5)-(9). Verification of the Derived Kinematic Equations To evaluate the derived equations, this paper selected and analyzed the real physical excavator Vio-17 made by YANMAR. The 3D CAD model was made for validation of the derived equations through the simulation of end-point estimation by inputting the sensors' output data of cylinder length and joint angle from the link motions. In fact, the specifications of the Vio-17 were not provided as they are a confidential asset of the company, thus the CAD model was built through manual measurement. Therefore, to correct errors in the manual measurements, the produced CAD model must be verified. First, the real α b , α a , and α bk were measured by using both a protractor and imagery (Figure 4). In the case of the image method, the excavator must be photographed from the side without inclination so that the two-dimensional space is well defined.
Then, the image was inserted into the CAD utility for measuring the desired values [43,44]. By virtue of these two methods, the derived equations were checked and the CAD model was also evaluated. Actual measurements gave the following result: the image method is better than the protractor. Thus, the result of the simulation for estimating the end-point of the excavator was compared with the result of the image method with respect to Figure 5. Table 3 shows the result of the comparison between the simulation and the image method. This table validates the CAD model and the forward kinematics by showing an allowable error of 1.3 cm. In the construction field, if the error of the machine guidance is within 2 cm, it is enough to be used practically [17]. For this reason, this CAD model was used for analyzing the sensor characteristics in estimating the end-point of the excavator. Construction of the Sensor System The sensor system construction was divided into two parts, namely those regarding cylinder length and link motion, and both were carried out without modification or renovation of the mechanical system. First, to estimate the end-point of the excavator through the measurement of each cylinder length, a low-cost draw-wire sensor of linear potentiometer type that could be easily attached to the link was selected. The specifications of this selected device are shown in Figure 6. The price of one draw-wire sensor was about 150 USD. Thus, the total price of the sensor devices for measuring the boom, stick, and bucket cylinder displacements was 450 USD. This is somewhat low cost because, in general, the encoder type of the draw-wire sensor is more expensive than the potentiometer type. Figure 7 shows the draw-wire sensors attached parallel to each link with customized brackets. In the case of the boom link, the bracket for fixing the draw-wire sensor was attached near the boom joint, as shown at the top of Figure 7. Finally, in the case of the stick and bucket links, brackets were fixed to the cover of the cylinder so that all draw-wire sensors could measure the absolute length displacements of the cylinders. The transmission cycle of the sensor systems was 10 ms. The embedded system for logging the data was also made. Figure 8 shows the minimum and maximum measurement output data of each cylinder length and their linearity. Second, to estimate the end-point through indirect measurement of each joint's angular displacement by measuring the rotation of the links, a low-cost IMU sensor device named EBIMU-09DOF was selected. This device can transmit the raw data so that the inherent characteristics of the excavator itself can be analyzed, and its specifications are explained in Figure 6. The price of one IMU sensor was about 180 USD. Thus, the total price of the sensor devices for measuring the boom, stick, and bucket link rotations was 540 USD. This is a very low-cost device. The cost range of IMU devices is very large. For example, the attitude/heading reference system (AHRS) devices made by machine guidance companies average over 1000 USD. Brackets were made and installed on each link, as shown in Figure 9. In the case of the boom and stick links, the brackets for fixing the IMU sensors were attached to the boom and stick link, as shown at the top of Figure 9. Finally, in the case of the bucket link, the bracket was fixed to two revolute joints of the four-bar linkage so that all IMU sensors could measure the absolute angle displacements of the joints indirectly through calibration.
The reason calibration is needed in this process is that the slopes of brackets and links are not the same. The angle displacements can be measured correctly through the calibration. The transmission cycle of the sensor systems was 10 ms. When brackets were installed on the link, the butadiene rubber was used for anti-vibration, which prevents the data drift. Figure 10 illustrates drift generation for when rubber was and was not used. This device transmits three kinds of data: roll, pitch, yaw. In general, roll data are the most robust to disturbance and have the widest measurable range [45,46]. Thus, the roll data were chosen for measuring the joint angle. Experiments and Results The CAN (Controller Area Network) communication system was built in the embedded system for logging the output data of both draw-wire and IMU devices at the same time ( Figure 11). These output data were inputted into the derived equations for estimating the end-point. To compare characteristics between two devices in the estimation of the end-point, Simulink of MATLAB was applied to these experiments. The four main poses of the excavator were chosen ( Figure 12). Figure 12A is the middle point in the aspect of the horizontal side view. Figure 12B is the status when all links of the excavator are stretched as much as possible. Figure 12C is performed for making a maximum height. Figure 12D is when all cylinders have a minimum length. The IMU and draw-wire sensors measured the pose information for the poses in Figure 12A-D and then the logged data were calibrated to synchronize the initial and final points by inserting the offset value. This processed data were finally applied to the derived equations to estimate the end-point of the excavator. The entire process is shown schematically in Figure 13. The characteristics of bucket tip were analyzed by measured data from draw-wire and IMU sensor during the motion of the boom, stick, and bucket, separately. In the analysis of the bucket tip, the non-motion part of joints was fixed for measuring the rotating part, and the reference trajectory was generated through the simulation created using Simulink. The length of the cylinder and joint angle displacements of the boom, stick, and bucket were checked in each experiment so that it was possible to know how well the end-point of the bucket followed the reference trajectory. A comparison between the results of the collected data and the reference bucket trajectories is shown in Figure 14. Figure 14A-C shows that the performance of the draw-wire sensor is better than the IMU sensor. By inputting the data from the draw-wire and IMU sensors, the result shows that the draw-wire sensor tracked the reference better than the IMU sensor. Table 4 includes the numerical evaluation with average error (cm). In this case of the simultaneous motion, the draw-wire sensor showed better performance than the IMU device. The area computed by the IMU data was larger than the draw-wire sensor. Figure 15 includes the resulting graph and Table 4 presents the numerical evaluation. In other words, all results illustrated that the performance of the low-cost draw-wire sensor is more accurate than the low-cost IMU device. Although the anti-vibration rubber pad was used for gathering data with little error, the results also show that the MEMS device still has weaknesses in vibrations due to the magnetic materials of the heavy-duty excavator. 
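To make the estimation pipeline described above concrete, the following is a minimal Python sketch of the two conversion paths (cylinder length to joint angle via the law of cosines, and IMU roll to relative joint angle) feeding a planar forward kinematics model. The link lengths, mounting distances, angular offsets, and example sensor readings are hypothetical placeholders, not the Vio-17's actual (confidential) dimensions, and the simplified boom-cylinder triangle stands in for the full Equations (3)-(9).

import numpy as np

# Hypothetical geometry (placeholders): boom, stick, and bucket link lengths [m]
L1, L2, L3 = 1.20, 0.65, 0.35
# Distances from the boom joint to the two boom-cylinder mounting points [m]
BOOM_A, BOOM_B = 0.55, 0.50
# Fixed angular offset of the cylinder-mount triangle relative to the boom link [rad]
BOOM_OFFSET = 0.40

def boom_angle_from_cylinder(cyl_len):
    """Convert a measured boom-cylinder length to the boom joint angle using the
    law of cosines on the triangle (joint, rod mount, barrel mount)."""
    cos_alpha = (BOOM_A**2 + BOOM_B**2 - cyl_len**2) / (2.0 * BOOM_A * BOOM_B)
    return np.arccos(cos_alpha) - BOOM_OFFSET

def joint_angle_from_imu(roll_deg, mount_offset_deg, previous_links_deg):
    """IMU roll gives an absolute link inclination; subtract the bracket mounting
    offset and the inclination of the preceding links to obtain the relative joint angle."""
    return np.deg2rad(roll_deg - mount_offset_deg - previous_links_deg)

def bucket_tip(theta1, theta2, theta3):
    """Planar forward kinematics of the boom-stick-bucket chain (cabin swing excluded)."""
    x = (L1 * np.cos(theta1)
         + L2 * np.cos(theta1 + theta2)
         + L3 * np.cos(theta1 + theta2 + theta3))
    y = (L1 * np.sin(theta1)
         + L2 * np.sin(theta1 + theta2)
         + L3 * np.sin(theta1 + theta2 + theta3))
    return x, y

# Example readings (hypothetical): one draw-wire length for the boom, IMU rolls for the rest.
th1 = boom_angle_from_cylinder(0.72)
th2 = joint_angle_from_imu(-35.0, 2.0, np.rad2deg(th1))
th3 = joint_angle_from_imu(-80.0, 1.5, np.rad2deg(th1 + th2))
print(bucket_tip(th1, th2, th3))

In a real logging setup, the same conversion functions would be applied to each 10 ms sample before feeding the estimated joint angles to the simulation model for trajectory comparison.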
Conclusions We studied the characteristics of the position recognition of the bucket tip according to the motion measurement of excavator boom, stick, and bucket in the excavator guidance and automation. The measurement methods were divided into two ways: one is about the cylinder length and the other is of measurements of the boom, stick, and bucket links rotation. For the characteristics of each method, the wire sensor was selected for measuring the cylinder length, and the IMU sensor was selected for the measurement of the rotation of the boom, stick, and bucket link. The hardware environment was constructed to apply the selected sensor to the experimental excavator and the CAD model was also made. The accuracy of the CAD model was verified by using a protractor and image method. Then, the method to recognize the bucket tip was devised, which is available in inputting the measured data from the sensors into the excavator model. The results of recognition of the bucket tip were analyzed by experiments: first when the boom, stick, and bucket were rotated independently, and second when all of them moved simultaneously for creating a closed trajectory. Experiments using wire sensors in a single rotation confirmed that the tip of the bucket followed the reference trajectory within 1 cm on both the x and y axes. In addition, the data measured by the wire sensor show that there is almost no change in the error of the trajectory created by the bucket tip during repeated operation. In contrast, the experiments with the IMU sensor showed that the error variance of the trajectory created by a bucket tip was larger than the draw-wire sensor. In simultaneous excavator operation, the IMU sensor measured a larger area of the closed trajectory than the wire sensor. At the third point in Figure 15, a time delay was observed to correct the accumulated error value of the IMU sensor. When measuring the cylinder length, the bucket tip trajectory was hardly affected by the disturbance caused by the movement and vibration of the excavator itself. The cylinder lengths of the boom, stick, and bucket were all measured independently by the sensor, and the independent data had almost no cumulative error compared to the IMU sensor. On the other hand, the method of measuring the rotation of the boom, stick, and bucket links was evaluated to be vulnerable to disturbance of the excavator's own movement and vibration because the sensor data are measured based on the Earth's gravity. Since the data of boom, stick, and bucket are all dependent, the bucket tip can be recognized with a cumulative error. However, the draw-wire sensor has weaknesses in the aspect of installation and robustness in the hardware itself. The construction field is not in clean environments. Many debris and fragments exist so that these things can touch the wire during operations. Thus, housing satisfying desired specifications should be considered to prevent the aforementioned problems. In contrast, IMU is very simple to attach and use in the excavator. Additionally, low-cost IMU is much cheaper than draw-wire sensors. Most importantly, IMU is less affected by physical elements. Consequently, it was verified that the cylinder length measurement method is more stable against the effects of a disturbance than the method using the IMU sensor. However, draw-wire type also has weakness in practical use. 
This paper focused on two low-cost sensor devices, which are detachable and preserve the original status of the excavator without damage from mechanical modifications. It gives guidelines on what kind of sensor is better for machine guidance or excavator control under given external environments. Funding: The article processing charge was funded by Hanyang University. Conflicts of Interest: The authors declare no conflict of interest.
4,811.8
2020-05-01T00:00:00.000
[ "Engineering" ]
New physics in $b\to s$ transitions after LHC run 1 We present results of global fits of all relevant experimental data on rare $b \to s$ decays. We observe significant tensions between the Standard Model predictions and the data. After critically reviewing the possible sources of theoretical uncertainties, we find that within the Standard Model, the tensions could be explained if there are unaccounted hadronic effects much larger than our estimates. Assuming hadronic uncertainties are estimated in a sufficiently conservative way, we discuss the implications of the experimental results on new physics, both model independently as well as in the context of the minimal supersymmetric standard model and models with flavour-changing $Z'$ bosons. We discuss in detail the violation of lepton flavour universality as hinted by the current data and make predictions for additional lepton flavour universality tests that can be performed in the future. We find that the ratio of the forward-backward asymmetries in $B \to K^* \mu^+\mu^-$ and $B \to K^* e^+e^-$ at low dilepton invariant mass is a particularly sensitive probe of lepton flavour universality and allows to distinguish between different new physics scenarios that give the best description of the current data. Introduction Rare decays based on the flavour-changing neutral current b → s transition are sensitive probes of physics beyond the Standard Model (SM). In recent years, a plethora of observables, including branching ratios, CP and angular asymmetries in inclusive and exclusive B decay modes, has been measured at the B factories and at LHC experiments. This wealth of data allows to investigate the helicity structure of flavour-changing interactions as well as possible new sources of CP violation. In 2013, the observation by LHCb of a tension with the SM in B → K * µ + µ − angular observables [1] has received considerable attention from theorists and it was shown that the tension could be softened by assuming the presence of new physics (NP) [2][3][4][5]. In 2014, another tension with the SM has been observed by LHCb, namely a suppression of the ratio R K of B → Kµ + µ − and B → Ke + e − branching ratios at low dilepton invariant mass [6]. Assuming new physics in B → Kµ + µ − only, a consistent description of these anomalies seems possible [7][8][9][10]. Finally, also branching ratio measurements of B → K * µ + µ − and B s → φµ + µ − decays published recently [11,12] seem to be too low compared to the SM predictions when using state-of-the art form factors from lattice QCD or light-cone sum rules (LCSR) [13][14][15][16]. While the ratio R K is theoretically extremely clean, predicted to be 1 to an excellent accuracy in the SM [17], the other observables mentioned are plagued by sizable hadronic uncertainties. On the one hand, they require the knowledge of the QCD form factors; on the other hand, even if the form factors were known exactly, there would be uncertainties from contributions of the hadronic weak Hamiltonian that violate quark-hadron duality and/or break QCD factorization. These two sources of theoretical uncertainty have been discussed intensively in the recent literature [16,18,19] (see also the earlier work [20][21][22][23]). Understanding how large these hadronic effects could be is crucial to disentangle potential new physics effects from underestimated non-perturbative QCD effects, if significant tensions from the SM expectations are observed in the data. 
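As a point of reference for the operator language used throughout the analysis below, the radiative and semi-leptonic operators are conventionally defined as follows (this is the standard basis; the overall normalization is a convention and may differ slightly from the one adopted later in the text):
\[
  \mathcal{H}_{\rm eff} \;\supset\; -\frac{4 G_F}{\sqrt{2}}\, V_{tb} V_{ts}^*\, \frac{e^2}{16\pi^2}\sum_i \left( C_i O_i + C_i' O_i' \right) + \text{h.c.},
\]
\[
  O_7 = \frac{m_b}{e}\,(\bar{s}\sigma_{\mu\nu} P_R b)\, F^{\mu\nu}, \qquad
  O_9 = (\bar{s}\gamma_\mu P_L b)(\bar{\ell}\gamma^\mu \ell), \qquad
  O_{10} = (\bar{s}\gamma_\mu P_L b)(\bar{\ell}\gamma^\mu\gamma_5 \ell),
\]
with the chirality-flipped operators O_i' obtained by interchanging the projectors P_L and P_R on the quark current.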
The main aim of our present analysis is thus to perform a global analysis of all relevant experimental data to answer the following questions, 1. Is there a significant tension with SM expectations in the current data on b → s transitions? 2. Assuming the absence of NP, which QCD effects could have been underestimated and how large would they have to be to bring the data into agreement with predictions, assuming they are wholly responsible for an apparent tension? 3. Assuming the QCD uncertainties to be estimated sufficiently conservatively, what do the observations imply for NP, both model-independently and in specific NP models? Our work builds on our previous global analyses of NP in b → s transitions [3,24,25], but we have built up our analysis chain from scratch to incorporate a host of improvements. Compared to our previous analyses and to comparable recent studies in the literature [2,4,5,8,9], the novel features of our analysis are as follows. • In our global fits, we take into account all the correlations of theoretical uncertainties between different observables and between different bins. This has become crucial to assess the global significance of any tension, as the experimental data are performed in more and more observables in finer and finer bins. • We assess the impact of different choices for the estimates of theoretical uncertainties on the preferred values for the Wilson coefficients. • We use the information on B → K * and B s → φ form factors from the most precise LCSR calculation [13,16], taking into account all the correlations between the uncertainties of different form factors and at different q 2 values. This is particularly important to estimate the uncertainties in angular observables that involve ratios of form factors. Our paper is organized as follows. In section 2, we define the effective Hamiltonian and discuss the most important experimental observables, detailing our treatment of theoretical uncertainties. In section 3, we perform the numerical analysis. We start by investigating which sources of theoretical uncertainties, if underestimated, could account for the tension even within the SM. We then proceed with a model-independent analysis beyond the SM, studying the allowed regions for the NP Wilson coefficients. In section 4, we discuss what the model-independent findings imply for the Minimal Supersymmetric Standard Model as well as for models with a new heavy neutral gauge boson. We summarize and conclude in section 5. Several appendices contain all our SM predictions for the observables of interest, details on our treatment of form factors, and plots of constraints on Wilson coefficients. Effective Hamiltonian The effective Hamiltonian for b → s transitions can be written as and we consider NP effects in the following set of dimension-6 operators, Of the complete set of dimension-6 operators invariant under the strong and electromagnetic gauge groups, this set does not include • Four-quark operators (including current-current, QCD penguin, and electroweak penguin operators). These operators only contribute to the observables considered in this analysis through mixing into the operators listed above and through higher order corrections. Moreover, at low energies they are typically dominated by SM contributions. Consequently, we expect the impact of NP contributions to these operators on the observables of interested to be negligible. 1 • Chromomagnetic dipole operators. 
In the radiative and semi-leptonic decays we consider, their Wilson coefficients enter at leading order only through mixing with the electromagnetic dipoles and thus enter in a fixed linear combination, making their discussion redundant. • Tensor operators. Our rationale for not considering these operators is that they do not appear in the dimension-6 operator product expansion of the Standard Model [27][28][29]. Consequently, they are expected to receive only small NP contributions unless the scale of new physics is very close to the electroweak scale, which is in tension with the absence of new light particles at the LHC. • Scalar operators of the form (sP A b)(¯ P B ). The operators with AB = LL or RR do not appear in the dimension-6 operator product expansion of the Standard Model either. While the ones with AB = LR and RL do appear at dimension 6, their effects in semi-leptonic decays are completely negligible once constraints from B s → µ + µ − are imposed [29]. Observables The differential decay distribution of B → Kµ + µ − in terms of the dimuon invariant mass squared q 2 and the angle between the K and µ − gives access to two angular observables, the so-called flat term F H and the forward-backward asymmetry A FB , in addition to the differential decay rate (or branching ratio). The observables A FB and F H only deviate significantly from zero in the presence of scalar or tensor operators [17]. Due to the argument given above, we do not consider NP contributions to these operators in semi-leptonic decays. While the direct CP asymmetry has been measured recently as well [30], we do not include it in our analysis since it is suppressed by small strong phases and therefore does not provide constraints on new physics at the current level of experimental accuracy. Consequently, the only observable we need to consider is the (CP-averaged) differential branching ratio of the charged B decay, and analogously for the neutral B decay. Theoretical uncertainties The theoretical analysis of the B → Kµ + µ − observables is complicated not only by the need to know the B → K form factors, but also by the fact that the "naive" factorization of the amplitude into a hadronic and a leptonic part is violated by contributions from the hadronic weak Hamiltonian, connecting to the lepton pair through a photon. Concretely, in the limit of vanishing lepton mass 2 , the decay rate can be written as where λ(a, b, c) = a 2 + b 2 + c 2 − 2(ab + bc + ac) , Here, f + and f T are the full QCD form factors and h K includes the non-factorizable contributions from the weak effective Hamiltonian. An additional form factor, f 0 , enters terms that are suppressed by the lepton mass. We now discuss our treatment of these quantities, which represent the main source of theoretical uncertainties in the B → Kµ + µ − observables. For the form factors, we perform a combined fit of the recent lattice computation by the HPQCD collaboration [31], valid at large q 2 , and form factor values at q 2 = 0 obtained from light-cone sum rules (LCSR) [32,33], to a simplified series expansion. Details of the fit are discussed in appendix A. The results are 3-parameter (4-parameter) fit expressions for the form factors f +,T (f 0 ) as well as the full 10 × 10 covariance matrix. We retain the correlations among these uncertainties throughout our numerical analysis. Concerning h K (q 2 ), we emphasize the following contributions. • Virtual corrections to the matrix elements of the four-quark operators O 1 and O 2 . 
We include them to NNLL accuracy using the results of ref. [34]. • Contributions from weak annihilation and hard spectator scattering. These have been estimated in QCD factorization to be below a percent [33] and we neglect them. • Soft gluon corrections to the virtual charm quark loop at low q 2 . This effect was computed recently in LCSR with B meson distribution amplitudes in ref. [20] and was found to be "unimportant at least up to q 2 ∼ 5 − 6 GeV 2 ." (See also [22]). • Violation of quark-hadron duality at high q 2 , above the open charm threshold, due to the presence of broad charmonium resonances. Employing an OPE in inverse powers of the dilepton invariant mass, this effect has been found to be under control at a few percent in ref. [21]. Concerning the last two items, the uncertainties due to these effects have to be estimated in a consistent and conservative manner to draw robust conclusions about the compatibility of experimental measurements with the SM predictions. We do this by parametrizing our ignorance of sub-leading corrections to h K in the following way, where we used the leading contribution to the amplitude F V as an overall normalization factor. To obtain the theory uncertainties, we vary the strong phases φ a,b,c within (−π, π]. At low q 2 , since the main contribution is expected to come from the soft gluon correction to the charm loop, we vary a within [0, 0.02] and b within [0, 0.05]. In this way, the central value of the effect discussed in [20,22] is contained within our 1σ error band. Although (10) is just a very crude parametrization of the (unknown) q 2 dependence at low q 2 , we believe it is sufficiently general at the current level of experimental precision. At high q 2 , the presence of broad charmonium resonances means that h K (q 2 ) varies strongly with q 2 , but since we will only consider observables integrated over the whole high-q 2 region, we can ignore this fact and the parameter c simply parametrizes the violation of the OPE result. We estimate it by varying c within [0, 0.05], which corresponds to an uncertainty on the rate more than twice the uncertainty quoted in [21], to be conservative. In section 3.2, we will also discuss the consequences of increasing the ranges for these parameters. The angular decay distribution ofB 0 →K * 0 µ + µ − contains in general 12 angular coefficient functions. In the presence of CP violation, the 12 angular coefficients of the CP-conjugate decay B 0 → K * 0 µ + µ − represent another 12 independent observables [35]. However, since scalar contributions are negligible in our setup and one can neglect the muon mass to a good approximation, there are only 9 independent observables in each decay. Moreover, the absence of large strong phases implies that several of the observables are hardly sensitive to new physics. In practice, the observables that are sensitive to new physics are • the CP-averaged differential branching ratio dBR/dq 2 , • the CP-averaged K * longitudinal polarization fraction F L and forward-backward asymmetry A FB , • the CP-averaged angular observables S 3,4,5 , • the T-odd CP-asymmetries A 7,8,9 . All of these observables can be expressed in terms of angular coefficients and are functions of q 2 . Alternative bases have been considered in the literature (see e.g. [36][37][38][39][40]). Choosing different normalizations can reduce the sensitivity of the observables to the hadronic form factors, at least in the heavy quark limit and for naive factorization. 
In our analysis, the choice of basis is irrelevant for the impact of hadronic uncertainties, as we consistently take into account all the correlations between theoretical uncertainties. The only dependence on the choice of basis is then due to unknown experimental correlations. Where available, we also take correlations of experimental uncertainties into account. In the case of B → K * γ, we consider the following observables: the branching ratio of B ± → K * ± γ, the branching ratio of B 0 → K * 0 γ, the direct CP asymmetry A CP and the mixing-induced CP asymmetry S K * γ in B 0 → K * 0 γ. Since we take all known correlations between the observables into account in our numerical analysis, including the branching ratios of the charged and neutral B decays is to a very good approximation equivalent to including one of these branching ratios and the isospin asymmetry. Theoretical uncertainties Similarly to the B → Kµ + µ − decay, the main challenges of B → K * µ + µ − are the form factors and the contributions of the hadronic weak Hamiltonian. For the form factors, we use the preliminary results of a a combined fit [16] to a LCSR calculation of the full set of seven form factors [13] with correlated uncertainties as well as lattice results for these form factors [14]. This leads to strongly reduced uncertainties in angular observables. The non-factorizable contributions from the hadronic weak Hamiltonian are more involved in B → K * µ + µ − compared to B → Kµ + µ − for several reasons. First, it contributes to three helicity amplitudes instead of just one; Second, the presence of the photon pole at q 2 = 0 enhances several of the contributions at low q 2 ; Third, since we do not only consider branching ratios but also a host of angular observables where form factor uncertainties partly cancel, we require a higher theoretical accuracy in the h λ . Concretely, we include the following contributions. • The NNLL contributions to the matrix elements of O 1,2 as in the case of B → Kµ + µ − . • At low q 2 , weak annihilation beyond the heavy quark limit as obtained from LCSR [43]. • At low q 2 , contributions from the matrix element of the chromomagnetic operator as obtained from LCSR [44]. As in B → Kµ + µ − , there are additional, sub-leading contributions, such as the soft gluon corrections to the charm loop [18,20,22,45]. We parametrize them at low q 2 by a correction relative to the leading contribution to the helicity amplitudes proportional to C eff 7 , The parameters a λ and b λ are allowed to be different for each of the three helicity amplitudes, λ = +, −, 0. We vary the a λ and b λ in the following ranges, Again, with this choice the effect discussed in [20,22] is within our 1σ uncertainty band. Although the normalization of the correction is arbitrary and could have also been written as a relative correction to C 9 , we choose C 7 as normalization in B → K * µ + µ − since the leading contribution proportional to C 9 vanishes at q 2 = 0 and does not contribute to B → K * γ. It is due to this choice that we need to allow for larger a 0 , b 0 since the C eff 7 contribution is not enhanced in the λ = 0 amplitude. At high q 2 , as in the case of B → Kµ + µ − , we do not have to consider a q 2 dependent correction as we are only considering observables integrated over the full high q 2 region. Analogous to B → Kµ + µ − , we parametrize the sub-leading uncertainties by a relative correction to C 9 . 
To be conservative, we allow it to be up to 7.5% in magnitude, independently for the three helicity amplitudes, with an arbitrary strong phase. Direct CP asymmetry in B → K * γ While direct CP asymmetries in the B decays considered by us are suppressed by small strong phases and so typically do not lead to strong constraints on NP, the direct CP asymmetry in B → K * γ is a special case since the measurements by the B factories and LHCb are so precise that this suppression could be overcome. The world average reads 3 Allowing for general NP contributions in C 7 , we find the following central value for the asymmetry, where we have neglected contributions from NP in C 7 and C 8 . We observe that the experimental bound (13) can constrain an imaginary part of the Wilson coefficient C 7 at the m b scale at the level of 0.1, which is still allowed by all other measurements as we will see. The problem with using this observable as a constraint on NP is that it is proportional to a strong phase that appears only at sub-leading order and is afflicted with a considerable uncertainty. With our error treatment described above, we find an overall relative uncertainty of 20% in the presence of a large imaginary C 7 . However, to be conservative, we will not include A CP (B 0 → K * 0 γ) in our global fits, but we will discuss the impact of including it separately in section 3.3. 2.4. The decay B s → φµ + µ − is similar to the B → K * µ + µ − decay and our treatment of theory uncertainties is analogous. In particular, we use the form factors and their correlated uncertainties that have been obtained in [16] from the combined fit of lattice and LCSR results. The sub-leading non-factorizable corrections are parametrized as in the case of B → K * µ + µ − , and the coefficients a λ , b λ and c λ are varied in the same ranges. We assume the uncertainty in these coefficients to be 90% correlated between B s → φµ + µ − and B → K * µ + µ − since we do not see a physical reason why they should be drastically different. An important difference with respect to B → K * µ + µ − is that the B s → φµ + µ − decay is not self-tagging. Therefore, the only observables among the ones mentioned at the beginning of section 2.3.1 that are experimentally accessible in a straightforward way at a hadron collider are • the differential branching ratio dBR/dq 2 , • the CP-averaged angular observables F L and S 4 , • the angular CP asymmetry A 9 . An additional novelty is the impact of the sizable B s width difference. As shown in [16], this effect is small in the SM and we have checked that it is also negligible in the presence of NP at the current level of experimental precision, unless the Wilson coefficients assume extreme values that are already excluded by other constraints. Fit methodology More and more experimental data on b → sµ + µ − transitions becomes available and many observables are measured with a fine binning. Therefore, in order to determine the values of the Wilson coefficients preferred by the data it becomes more and more important to include the correlation of theoretical uncertainties between different observables as well as between different bins of the same observable. One possibility to achieve this is to perform a global Bayesian analysis where all the uncertainties are parametrized by nuisance parameters that are marginalized over by sophisticated numerical tools like Markov Chain Monte Carlos. This approach has been applied recently e.g. in [4]. 
A drawback of this approach is that it is timeconsuming and the computing time increases with the number of parameters. Here, we follow a different approach. We construct a χ 2 function that only depends on the Wilson coefficients and take into account the theoretical and experimental uncertainties in terms of covariance matrices, Here, O exp are the experimentally measured central values of all observables of interest, O th are the corresponding theory predictions that depend on the (NP contributions to the) Wilson coefficients, C exp is the covariance matrix of the experimental measurements and C th is the covariance matrix of the theory predictions that contains the theory uncertainties and their correlations. In writing (15), we have made two main approximations. First, we have assumed all the experimental and theoretical uncertainties to be Gaussian. Second, we have neglected the dependence of the theory uncertainties on the new physics contributions to the Wilson coefficients. This means that the theory uncertainties and their correlations have been evaluated for the Wilson coefficients fixed to their SM values. We believe that this assumption is well justified in view of the fact that no drastic deviations from the SM expectations have been observed so far. The only possible exception are observables that vanish in the SM but could receive NP contributions much larger than the current experimental bounds. As we will discuss below, the only such observable at present is the direct CP asymmetry in B → K * γ. We determine C th by evaluating all observables of interest for a large set of the parameters parametrizing the theory uncertainties, randomly distributed according to the uncertainties and correlations described above. In this way, we retain not only correlated uncertainties between different observables, but also between different bins of the same observable. We find these correlations to have a large impact on our numerical results. Concerning C exp , we symmetrize the experimental error bars and use the experimental correlations when available. Where they are not available, we include a rough guess of the correlations by assuming the statistical uncertainties to be uncorrelated and the systematic uncertainties to be fully correlated for measurements of the same observable by a single experiment. We have checked that the treatment of experimental correlations has only a small impact on the overall fit at the current level of experimental accuracy. We do not include the additional results on b → s transitions from BaBar [58,59] and Belle [60,61], as they are only available as an average of µ + µ − and e + e − modes. As already mentioned in section 2, in the fit we do not explicitly include isospin asymmetries, but instead use results on the charged and neutral modes separately. As we take into account all known error correlations, this approach is essentially equivalent. We would like to stress that for none of the observables, we use low q 2 bins that extend into the region above the perturbative charm threshold q 2 > 6 GeV, where hadronic uncertainties cannot be estimated reliably. This applies in particular to the bin [4.3, 8.68] GeV 2 that has been used in several fits in the past [2,5,9]. For the B 0 → K * 0 µ + µ − observables at low q 2 , we choose the smallest available bins satisfying this constraint, since they are most sensitive to the non-trivial q 2 dependence of the angular observables. 
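The construction of eq. (15) and the Monte Carlo determination of the theory covariance described above can be summarized in a few lines. This is a minimal sketch: the functions predict, predict_with_nuisances and sample_nuisances are placeholders for the actual observable calculations, and the Gaussian treatment mirrors the approximations stated in the text.

```python
import numpy as np

def chi2(wc, obs_exp, predict, cov_exp, cov_th):
    """Eq. (15): Gaussian chi^2 with experimental and theoretical covariances added.
    cov_th is evaluated once at the SM point, so it does not depend on wc."""
    diff = obs_exp - predict(wc)
    return diff @ np.linalg.solve(cov_exp + cov_th, diff)

def theory_covariance(predict_with_nuisances, sample_nuisances, n_samples=5000):
    """Estimate cov_th from many random draws of the nuisance parameters (form
    factors, a_i, b_i, c_i, ...), keeping correlations between observables and
    between different bins of the same observable."""
    draws = np.array([predict_with_nuisances(sample_nuisances())
                      for _ in range(n_samples)])
    return np.cov(draws, rowvar=False)

def experimental_covariance(stat, syst, same_experiment):
    """Rough guess used when correlations are not published: statistical errors
    uncorrelated, systematic errors fully correlated for measurements of the
    same observable by a single experiment."""
    stat, syst = np.asarray(stat), np.asarray(syst)
    mask = np.asarray(same_experiment, dtype=float)
    np.fill_diagonal(mask, 1.0)  # each measurement carries its own systematic
    return np.diag(stat**2) + mask * np.outer(syst, syst)
```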
For B s → φµ + µ − , we use the [1, 6] GeV 2 bin, since the branching ratio does not vary strongly with q 2 and since the statistics is limited. In the high q 2 region, we always consider the largest q 2 bins available that extend to values close to the kinematical end point. All the experimental measurements used in our global fits are listed in appendix B along with their theory predictions. Compatibility of the data with the SM Evaluating (15) with the Wilson coefficients fixed to their SM values, we obtain the total χ 2 of the SM. Including both b → sµ + µ − and b → se + e − observables, we find χ 2 SM ≡ χ 2 (0) = 106.1 for 81 independent measurements. This corresponds to a p-value of 3.6%. Including only b → sµ + µ − observables, we find χ 2 SM = 97.2 for 78 independent measurements, corresponding to a p-value of 6.9%. In table 1, we list the observables with the largest deviation from the SM expectation; for each entry the table gives the decay, the observable, the q 2 bin, the SM prediction, the measurement and the pull. The full list of observables entering the χ 2 , together with the SM predictions and experimental measurements, is given in appendix B. We note that some of these observables have strongly correlated uncertainties and that for two of the observables, A FB and F L , there is some tension between different experiments. Still, there does seem to be a systematic suppression of branching ratios in different decay modes, and we will see in section 3.3 that the quality of the fit can be improved substantially in the presence of new physics. An important question is whether these tensions could be due to underestimated theory uncertainties, and we will investigate this question in the following paragraphs. It should be kept in mind that none of these sources of uncertainties can account for a violation of lepton flavour universality. Underestimated hadronic effects? We will see in section 3.3 that the agreement of the theory predictions with the experimental data is improved considerably assuming non-standard values for the Wilson coefficient C 9 . Table 1: Observables where a single measurement deviates from the SM by 1.8σ or more. The full list of observables is given in appendix B. Since this coefficient corresponds to a left-handed quark current and a leptonic vector current, it is conceivable that a NP effect in C 9 is mimicked by a hadronic SM effect that couples to the lepton current via a virtual photon, e.g. the charm loop effects at low q 2 and the resonance effects at high q 2 as discussed in section 2 (see e.g. [18]). In our numerical analysis, in addition to the known non-factorizable contributions taken into account as described in section 2, sub-leading effects of this type are parametrized by the parameters a i , b i , c i in (10), (11), and analogously for B s → φµ + µ − . Since they parametrize unknown sub-leading uncertainties, the central values of these parameters are 0 in our SM predictions. Any underestimation of a non-perturbative QCD effect (not related to form factors) should then manifest itself as a drastic reduction of the χ 2 for a sizable value of one of the parameters, when treating them as completely free. To investigate this question, we have constructed a χ 2 function analogous to (15), but writing the central values O th as functions of the parameters a i , b i , c i instead of the Wilson coefficients. In fig.
1, we show the reduction of the χ 2 compared to our SM central value under variation of pairs of these parameters, while treating two of them at a time as free parameters and fixing all the others to 0. We show the cases of varying the coefficients entering the B → K + − amplitude at low and high q 2 (top); the coefficients entering the λ = − and λ = 0 B → K * + − helicity amplitudes at low q 2 (bottom left) and high q 2 (bottom right). Corrections to the λ = + helicity amplitude are expected to be suppressed [23] and we checked explicitly that they have a weak impact. On the green dashed contours, the χ 2 is the same as for the central value, so there is no improvement of the fit. In the green shaded area, the fit is improved, with the solid contours showing ∆χ 2 ≡ χ 2 − χ 2 SM = 1, 4, 9, etc. In the unshaded region to the other side of the dashed contour, the fit is worsened compared to the central value. The blue circles show our 1 and 2σ assumptions for the uncertainties on the parameters in question, as discussed in section 2. We stress that these assumptions have not been used as priors to determine the green contours. We make the following observations. • The χ 2 can be reduced by up to 4 when pushing the parameter b K , parametrizing subleading corrections in B → Kµ + µ − at low q 2 , to the border of our estimated uncertainty. The fit does not improve significantly when changing the parameter c K from 0, i.e. when assuming large violations of quark-hadron duality in the global (integrated) high q 2 observables in B → Kµ + µ − , unless b K is shifted at the same time. • A simultaneous positive shift in the sub-leading corrections to the λ = − and 0 helicity amplitudes in B → K * µ + µ − can significantly reduce the χ 2 as well. ∆χ 2 = 9 requires a shift in both parameters that is four times larger than our error estimate. • Corrections to quark-hadron duality in the global high q 2 observables in B → K * µ + µ − do not lead to a reduction of the χ 2 by more than 1. We conclude that the agreement of the data with the predictions cannot be improved by assuming (unexpectedly) large violations of quark-hadron duality in integrated observables at high q 2 alone, while sizable corrections to B → Kµ + µ − and B → K * µ + µ − at low q 2 could improve the agreement with the data. We stress however that fig. 1 should not be misinterpreted as a determination of the size of subleading QCD effects from the data. Indeed, the regions where the χ 2 is significantly reduced correspond to values that are larger than any known hadronic effect. We will see in section 3.3 that a good fit to the data can be obtained assuming a large negative NP contribution to the Wilson coefficient C 9 . We find it instructive to consider the size of the sub-leading parameters that would make them "mimic" a NP effect. Experimentally, it would be difficult to distinguish between the cases i) where C 9 = C SM 9 + ∆ 9 and all as well as and all other a i , b i , c i equal to zero. This pattern of effects is indeed similar to what is seen in fig. 1. Distinguishing such a scenario from a NP effect is straightforward if the NP effect is not lepton-flavour universal. If it is lepton-flavour universal, a correlated analysis of exclusive and inclusive observables, of the q 2 dependence, and of consistency relations among observables valid in the SM (see e.g. [62]) could help to disentangle QCD and NP. Underestimated parametric uncertainties? 
While the angular observables in B → K * µ + µ − are almost free from parametric uncertainties 4 , the apparent systematic suppression of branching ratios could also be due to an underestimated overall parametric uncertainty. The uncertainties of the B u,d,s meson lifetimes quoted by the PDG [63] are well below 1% and are therefore very unlikely to be responsible. The dominant parametric uncertainty is the CKM factor |V tb V * ts | 2 to which all branching ratios are proportional and which itself is dominated by the uncertainty of the measurement of |V cb |. The relative uncertainty of all b → s branching ratios due to |V cb | is twice the relative uncertainty of |V cb |. In our numerical analysis, we use which leads to an uncertainty of 4.9% on the branching ratios. In fact there is a long standing tension between determinations of |V cb | from inclusive and exclusive decays. The PDG [63] quotes which are at a 2.5σ tension with each other. Choosing the inclusive value instead of (19) would increase the central values of all our branching ratios by 6.5% and would worsen the agreement with the data. Choosing the exclusive value instead would lead to a reduction of the branching ratios by 6.7%. To see whether this has an impact on the significance of the tensions, we multiply all branching ratios by a scale factor η BR and fit this scale factor to the data. We find η BR = 0.81 ± 0.08, i.e. a 19% reduction of the branching ratios with respect to our central values is preferred. This central value would correspond to |V cb | = 3.7 × 10 −2 , which is in tension with both the inclusive and exclusive determinations. We conclude that underestimated parametric uncertainties are unlikely to be responsible for the observed tensions in the branching ratio measurements. Needless to say, the angular observables and R K would be unaffected by a shift in |V cb | anyway. Underestimated form factor uncertainties? The tensions between data and SM predictions could also be due to underestimated uncertainties in the form factor predictions from LCSR, lattice, or both. A first relevant observation in this respect is that the tensions in table 1 include observables in decays involving B → K, B → K * , and B s → φ transitions, both at low q 2 (where LCSR calculations are valid) and at high q 2 (where the lattice predictions are valid). Explaining all of them would imply underestimated uncertainties in several completely independent theoretical form factor determinations. In the case of B → Kµ + µ − and B s → φµ + µ − , tensions are present only in branching ratios, which seem to be systematically below the SM predictions. This could be straightforwardly explained if the form factor predictions were systematically too high. The case of B → K * µ + µ − is less trivial due to the tensions in angular observables, which cannot simply be due to an overall rescaling of the form factors. To investigate this case, we have parametrized all seven B → K * form factors by a two-parameter z expansion 5 and constructed a χ 2 function analogous to (15), but writing the central values O th as functions of the 14 z expansion parameters instead of the Wilson coefficients. We have then conducted a global Markov Chain Monte Carlo fit of all 14 parameters to the data and compared the obtained posterior probability distribution to the priors (obtained from a combined fit to LCSR and lattice results). We have found that the most significant shift, i.e. preference for a nonstandard value, occurred in the form factor 6 A 12 . In fig. 
2, we show the improvement in the χ 2 obtained when changing the A 12 form factor, while fixing all the other form factors to their central values; the contours are analogous to those in fig. 1. We observe that an improvement of ∆χ 2 ∼ 4 can be obtained if the value at q 2 = 0 is significantly lower than what is obtained from LCSR. This improvement is quite limited compared to the improvement obtained in the presence of NP discussed below or in the presence of large non-form factor corrections discussed above. Finally, an important observation in the case of B → K * µ + µ − angular observables is that the tensions are only present at low q 2 , where the seven form factors can be expressed in terms of two independent "soft" form factors up to power corrections of naive order Λ QCD /m b . It is then possible to construct angular observables that do not depend on the soft form factors, but only on the power corrections [40]. The tensions can then be seen even without any input from LCSR or lattice, by estimating the power corrections by dimensional analysis [19]. This shows that an explanation of the tensions by underestimated form factor uncertainties would imply a violation of the form factor relations in the heavy quark limit that is much larger than what LCSR calculations predict. New physics in a single Wilson coefficient We now investigate whether new physics could account for the tension of the data with the SM predictions. We start by discussing the preferred ranges for individual Wilson coefficients assuming our nominal size of hadronic uncertainties. We determine the 1σ (2σ) ranges by computing ∆χ 2 = 1 (4) while fixing all the other coefficients to their SM values. We also set the imaginary part of the respective coefficient to 0. In addition to the Wilson coefficients C (′) 7,9,10 , we also consider the case where the NP contributions to C (′) 9 and C (′) 10 are equal up to a sign, since this pattern of effects is generated by SU (2) L -invariant four-fermion operators in the dimension-6 SM effective theory. Our results are shown in table 2, which lists for each coefficient the best-fit value and the 1σ and 2σ ranges. We summarize the most important points. • A negative NP contribution to C 9 , approximately −30% of C SM 9 , leads to a sizable decrease in the χ 2 . The best fit point corresponds to a p-value of 21.5%, compared to 6.9% for the SM. This was already found in fits of low-q 2 angular observables only [2] and in global fits not including data released this year [3,5,19], as well as in a recent fit to a subset of the available data [9]. We find that the significance of this solution has increased substantially. This is due in part to the reduced theory uncertainties, in particular the form factors, as well as due to the new measurements by LHCb. • A significant improvement is also obtained in the SU (2) L invariant direction C NP 9 = −C NP 10 , corresponding to an operator with left-handed muons. • A positive NP contribution to C 10 alone can also improve the fit, although to a lesser extent. • NP contributions to individual right-handed Wilson coefficients hardly lead to improvements of the fit. While table 2 assumed the Wilson coefficients to be real, i.e. aligned in phase with the SM, in general the NP contributions to the Wilson coefficients are complex numbers. Since measurements in semi-leptonic decays are currently restricted to CP-averaged observables or direct CP asymmetries that are suppressed by small strong phases, the constraints on the imaginary parts are generally weaker than on the real parts, since they do not interfere with the SM contribution.
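The 1σ and 2σ ranges quoted in table 2 follow from a one-dimensional profile of the χ 2 in a single real coefficient, with all other coefficients held at their SM values; the same machinery gives the branching-ratio scale factor η BR discussed under "Underestimated parametric uncertainties?" above, if the free parameter instead multiplies all branching-ratio predictions. The sketch below assumes a generic one-parameter function chi2_1d and is not tied to the specific implementation used in the paper; the |V cb | central value in the final check is an assumed round number, since the value actually used in eq. (19) is not reproduced above.

```python
import numpy as np
from scipy.optimize import minimize_scalar, brentq

def one_parameter_ranges(chi2_1d, lo=-3.0, hi=3.0):
    """Best-fit value and 1/2 sigma intervals of a single real parameter,
    defined by delta chi^2 = 1 and 4 around the minimum.
    Assumes the interval endpoints lie inside (lo, hi)."""
    best = minimize_scalar(chi2_1d, bounds=(lo, hi), method="bounded")
    c_best, chi2_min = best.x, best.fun

    def crossing(target, a, b):
        return brentq(lambda c: chi2_1d(c) - chi2_min - target, a, b)

    intervals = {n: (crossing(t, lo, c_best), crossing(t, c_best, hi))
                 for n, t in [(1, 1.0), (2, 4.0)]}
    return c_best, intervals

# Consistency check of the eta_BR discussion: branching ratios scale as |V_cb|^2,
# so eta_BR = 0.81 rescales |V_cb| by sqrt(0.81). With a central value near
# 4.1e-2 (assumed here) this gives roughly the 3.7e-2 quoted in the text.
print(np.sqrt(0.81) * 4.1e-2)
```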
An interesting special case is the direct CP asymmetry in B → K * γ. As discussed in section 2.3.3, this observable is precisely measured and very sensitive to the imaginary part of C 7 , but we do not include it in our default χ 2 since it is proportional to a strong phase that is afflicted with a considerable uncertainty. In fig. 3, we show how the allowed region for the NP contribution to C 7 would change by including this observable. The red (green) contours correspond to the 1 and 2σ regions (∆χ 2 = 2.3 and 6 while fixing all other coefficients to their SM values) allowed by the global fit including A CP (B 0 → K * 0 γ) with a relative uncertainty of 50% (25%), while the blue contours correspond to the fit without the CP asymmetry. We observe that the constraint on the imaginary part of C 7 improves by a factor of ∼ 2 even with our conservative estimate for the theory error. The global constraints in the complex planes of all Wilson coefficients are shown in fig. 11 of appendix C. Constraints on pairs of Wilson coefficients We proceed by analysing the constraints in scenarios where two Wilson coefficients are allowed to differ from their SM values. In this section we exemplarily allow for real NP in either C 9 and C 9 or C 9 and C 10 . With our nominal values for the theory uncertainties, the best fit values for the Wilson coefficients and the corresponding ∆χ 2 read in the two cases The best fit points correspond to p-values of 21.6% and 19.5%, respectively. This is comparable to the 21.5% obtained in section 3.3 in the scenario with new physics only in C 9 . In fig. 4, we show the allowed regions in the Re(C NP 9 )-Re(C 9 ) and Re(C NP 9 )-Re(C NP 10 ) planes. The blue contours correspond to the 1 and 2σ regions (∆χ 2 = 2.3 and 6 while fixing all other coefficients to their SM values) allowed by the global fit. In addition, we also show the 2σ allowed regions for two scenarios with inflated theory uncertainties. For the green short-dashed contours, we have doubled all the form factor uncertainties. For the red short-dashed contours, we have doubled all the hadronic uncertainties not related to form factors, i.e. the ones that are parametrized as in (10) and (11). We observe that the negative value preferred for C NP 9 is above the 2σ level even for these conservative assumptions. We also observe that C 9 and C NP 10 are preferentially positive, although they deviate from 0 less significantly than C NP 9 . The corresponding plots for all interesting combinations of real Wilson coefficients are collected in fig. 12 of appendix C, together with the ∆χ 2 values of the corresponding best fit points. It is also interesting to investigate which observables drive the tensions. In fig. 5, we compare the global constraints in the Re(C NP 9 )-Re(C 9 ) and Re(C NP 9 )-Re(C NP 10 ) planes to the constraints one gets only using branching ratios (green) or only using B → K * µ + µ − angular observables (red). We observe that the angular observables strongly prefer a negative C 9 but are not very sensitive to C 9 or C 10 . The branching ratio constraints have an approximate flat direction C NP 9 ∼ −C 9 and show a preference for C NP 10 > 0 in particular if C NP 9 > 0. In fact, from branching ratios alone, one could get a good fit to the data with SM-like C 9 and C NP 10 > 0. 
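For the two-coefficient scenarios, the regions shown in figs. 3, 4, 5 and 12 are defined by ∆χ 2 = 2.3 and 6. A minimal sketch of how such contours can be produced is given below; chi2_2d is again a placeholder for the χ 2 of eq. (15) with the remaining coefficients fixed, and taking ∆χ 2 relative to the grid minimum is one common convention rather than necessarily the one used in the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_two_coefficient_regions(chi2_2d, x_vals, y_vals,
                                 xlabel="Re(C9_NP)", ylabel="Re(C10_NP)"):
    """Contours of delta chi^2 = 2.3 and 6 on a grid in two real Wilson
    coefficients, with delta chi^2 measured from the grid minimum."""
    grid = np.array([[chi2_2d(x, y) for y in y_vals] for x in x_vals])
    delta = grid - grid.min()
    plt.contour(x_vals, y_vals, delta.T, levels=[2.3, 6.0], colors=["blue", "blue"])
    plt.xlabel(xlabel)
    plt.ylabel(ylabel)
    plt.show()
```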
Testing lepton flavour universality So far, in our numerical analysis we have only considered the muonic b → sµ + µ − modes and the lepton flavour independent radiative b → sγ modes to probe the Wilson coefficients of the semileptonic operators in (4), in which only muons are considered. In this section we will extend our analysis and also include semileptonic operators that contain electrons. In particular, we will allow new physics in the Wilson coefficients C e 9 and C e 10 and confront them with the available data on B → Ke + e − from LHCb [6] and B → X s e + e − from BaBar [57]. As mentioned already in the introduction, the recent measurement of the ratio R K of B → Kµ + µ − and B → Ke + e − branching ratios in the q 2 bin [1, 6] GeV 2 by LHCb [6] shows a 2.6σ tension with the SM prediction. The theoretical error of the SM prediction is completely negligible compared to the current experimental uncertainties. The tension between the SM prediction and the experimental data is driven by the reduced B → Kµ + µ − branching ratio, while the measured B → Ke + e − branching ratio is in good agreement with the SM. In our extended global fit we do not use the R K measurement directly but instead include the B → Kµ + µ − and B → Ke + e − branching ratios separately, taking into account the correlations of their theory uncertainties. As the theory uncertainties of BR(B → Kµ + µ − ) and BR(B → Ke + e − ) are essentially 100% correlated, our approach is to a good approximation equivalent to using R K . In fig. 6 we show the results of two fits that allow for new physics in C µ 9 and C e 9 (left plot) and new physics along the SU (2) L invariant directions C µ 9 = −C µ 10 and C e 9 = −C e 10 (right plot). Recall that in section 3.3 we found that new physics in these scenarios gives by far the best description of the experimental b → sµ + µ − data. As expected, we again find that a C µ 9 significantly smaller than in the SM is clearly preferred by the fits. The best fit regions for C µ 9 and C µ 9 = −C µ 10 approximately coincide with the regions found for C 9 and C 9 = −C 10 in section 3.3. The Wilson coefficients C e 9 and C e 9 = −C e 10 , on the other hand, are perfectly consistent with the SM prediction. Lepton flavour universality, i.e. C µ 9 = C e 9 and C µ 10 = C e 10 as indicated by the diagonal line in the plots, is clearly disfavoured by the data. Our results are consistent with similar findings in recent fits to part of the available experimental data [8,9]. Working under the assumption that the electron modes are indeed SM-like, we can make predictions for ratios of observables that test lepton flavour universality using the best fit regions for the muonic Wilson coefficients from our global fit. We consider ratios of the branching ratios of the muon and electron modes, both at low and high q 2 . Moreover, we also predict ratios of the B → K * angular observables F L , A FB and S 5 at low and high q 2 . The results are shown in table 3. The four columns correspond to the following scenarios: • new physics only in C µ 9 ; • new physics in C µ 9 and C ′µ 9 ; • new physics along the SU (2) L invariant direction C µ 9 = −C µ 10 ; • new physics independently in C µ 9 and C µ 10 .
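The statement above, that including BR(B → Kµ + µ − ) and BR(B → Ke + e − ) separately with essentially 100% correlated theory uncertainties is equivalent to using R K , follows from standard error propagation: in the ratio, a fully correlated relative uncertainty cancels. The snippet below is a generic illustration with made-up numbers, not the uncertainties used in the fit.

```python
import numpy as np

def relative_ratio_uncertainty(rel_mu, rel_el, rho):
    """Relative uncertainty of R = BR_mu / BR_el for given relative theory
    uncertainties of the two modes and a correlation rho between them."""
    return np.sqrt(rel_mu**2 + rel_el**2 - 2.0 * rho * rel_mu * rel_el)

# A common 10% form-factor uncertainty on both modes (illustrative numbers):
print(relative_ratio_uncertainty(0.10, 0.10, rho=1.0))  # -> 0.0: cancels in the ratio
print(relative_ratio_uncertainty(0.10, 0.10, rho=0.0))  # -> ~0.14: would not cancel
```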
The Standard Model prediction for all the shown ratios is 1, with negligible uncertainties. In all scenarios all branching ratio ratios are predicted around 0.75 both at low and high dimuon invariant mass. A similar ratio is seen for S 5 at low q 2 . Only very small deviations from the SM are predicted for S 5 and A FB at high q 2 as well as F L at low and high q 2 . The most interesting observable turns out to be the ratio of the forward-backward asymmetries in B → K * µ + µ − and B → K * e + e − in the q 2 bin [4, 6] GeV [4,6] A FB (B → K * e + e − ) [4,6] . Assuming that the electron mode is SM like, R A FB is extremely sensitive to the value of C µ 9 . For the considered values of C µ 9 it deviates drastically from the SM prediction and a precise measurement would even allow to distinguish between the considered scenarios. Constraints on new physics models The results from the model-independent fit of the Wilson coefficients in the effective Hamiltonian can be interpreted in the context of new physics models. Here we discuss implications for the minimal supersymmetric standard model (MSSM) and models that contain massive Z gauge bosons with flavour-changing couplings. General MSSM Experimental data on flavour-changing neutral current processes generically lead to strong constraints on new sources of flavour violation that can be present in the MSSM [64,65]. In particular, the experimental information on rare b → sµ + µ − decays can be used to put constraints on flavour-violating trilinear couplings in the up squark sector, that are only poorly constrained otherwise [66][67][68][69][70]. In principle, the general MSSM also allows for lepton-flavour non-universality effects and we will comment to which extend the R K measurement can be accommodated. The flavour-changing trilinears give contributions to the effective Hamiltonian in (1) at the one loop level. Contributions can arise from boxes, photon penguins, and Z penguins and example Feynman diagrams are shown in fig. 7. A straightforward flavour spurion analysis shows the following points: • contributions to C 7,8 , are suppressed by m s /m b with respect to contributions to C 7,8 ; • contributions to C 9,10 are suppressed by m s m b /m 2 t with respect to contributions to C 9,10 ; • contributions proportional to A tc are suppressed by m c /m t compared to contributions proportional to A ct . We therefore concentrate on the Wilson coefficients C 7 , C 8 , C 9 , and C 10 in the presence of a non-zero A ct . To illustrate the main parameter dependence, in the following we give simple approximate expressions for the Wilson coefficients that are obtained at leading order in an expansion in m 2 EW /m 2 SUSY . The most important SUSY masses involved are the Wino mass M 2 , the Higgsino mass µ, the left-handed slepton mass m˜ , the stop masses mt L and mt R , as well as the left-handed charm squark mass mc L . The largest effects in b → s transitions can obviously be achieved if the SUSY spectrum is as light as possible. To keep the expressions compact, we set for simplicity M 2 = µ = m˜ ≡ M , mt L = mc L ≡ m L , mt R ≡ m R . We also work in the limit M m R m L which is least constrained by collider searches and therefore allows to maximize the new physics contributions to the Wilson coefficients. Note also that a light Higgsino and light stops are well motivated by naturalness arguments [71][72][73]. 
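The flavour-spurion suppression factors quoted above can be put into numbers with standard quark masses; the precise scheme and scale are irrelevant at this order-of-magnitude level, so the values below are indicative only.

```python
# Indicative sizes of the suppression factors from the spurion analysis above,
# using rough quark masses in GeV (MS-bar for m_s, m_c, m_b; pole mass for m_t).
m_s, m_c, m_b, m_t = 0.093, 1.27, 4.18, 173.0

print("primed dipole / unprimed dipole        ~ m_s/m_b        =", m_s / m_b)          # ~2e-2
print("primed semileptonic / unprimed         ~ m_s*m_b/m_t**2 =", m_s * m_b / m_t**2) # ~1e-5
print("A_tc contributions / A_ct contributions ~ m_c/m_t       =", m_c / m_t)          # ~7e-3
```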
For the dipole coefficients we find in a leading log approximation The contributions to C 7 and C 8 from A ct arise first at the dimension 8 level, i.e. they are suppressed by m 4 EW /m 4 SUSY . The last terms in (26a) and (26b) are the leading irreducible MFV contributions to C 7 and C 8 from Higgsino stop loops. They arise already at dimension 6 and are typically much larger than the contributions proportional to A ct . For the box contributions, C box 9,10 , and the photon penguin contribution, C γ 9 , to the semileptonic operators we find where s W = sin θ W and θ W is the weak mixing angle. Again we find that these contributions arise at the dimension 8 level. For a TeV scale SUSY spectrum, they are completely negligible. In the considered scenario, only the Z penguin contributions, C Z 9,10 , arise already at the dimension 6 level. We find This suggests that there are regions of MSSM parameter space, where a contribution to C Z 10 of O(1) is indeed possible. MSSM contributions to C Z 9 on the other hand are suppressed by the accidentally small vector coupling of the Z boson to leptons, (4s 2 W − 1) ∼ −0.08, and therefore negligible. Recalling the model independent results from section 3, a positive new physics contribution to the Wilson coefficient C NP 10 O(1), can improve the agreement with the current experimental b → sµ + µ − data significantly (albeit to a lesser extent than NP in C 9 ). Negative NP contributions to C 10 on the other hand are strongly disfavoured with the current data. We use these results to probe regions of MSSM parameter space with sizable flavour-changing trilinear couplings. Bounds on flavour-changing trilinear couplings can also be obtained from vacuum stability considerations. As is well known, sizable trilinear couplings can lead to charge and color breaking minima in the MSSM scalar potential [74,75]. Requiring that the electro-weak minimum be the deepest gives upper bounds on the trilinear couplings. Taking into account non-zero expectation values for the left and right handed stops, the left handed charm squark, as well as the up-type Higgs, we find the following necessary condition to ensure absolute stability of the electro-weak vacuum [76,77] (|A t | + |A ct | tan θ) 2 3 + tan 2 θ m 2 This inequality has to hold for all values of θ, that parametrizes the angle in field space between the left handed top and charm squarks. In the limit θ = 0 one recovers a well known bound on A t given e.g. in [74]; for θ = π/2 one recovers the bound on A ct found in [75]. In principle, additional constraints on A ct can be obtained from the experimental bounds on electric dipole moments (EDMs). In particular, if A ct and A t contain a relative phase, a strange quark EDM and chromo EDM will be induced analogous to the new physics contributions to C 7 and C 8 . However, predicting an experimentally accessible EDM of a hadronic system, like the neutron, given a strange quark EDM or chromo EDM involves large theoretical uncertainties [78,79]. Due to these uncertainties, existing EDM bounds do not give appreciable constraints in our setup. Note also that bounds on the charm quark chromo EDM [80] do not constrain the parameter space of our scenario. A sizable charm quark chromo EDM would be generated in the presence of both A ct and A tc couplings, but here we only consider a non-zero A ct . We now describe the SUSY spectrum that we chose to illustrate the bounds on the trilinear couplings from the b → sµ + µ − data. 
The soft masses for the left-handed stop and charm squark are set to a common value mt L = mc L = 1 TeV. The soft mass of the right-handed stop is set to mt R = 500 GeV. All other squarks and sleptons as well as the gluino are assumed to be heavy, with masses of 2 TeV. Concerning the trilinear couplings, we only consider non-zero A t and A ct . Due to these trilinear couplings, the lightest up-squark mass eigenstate can have a mass mt 1 < 500 GeV and is potentially subject to strong bounds from direct stop searches. Higgsinos, Winos and Binos are assumed to have mass parameters mB = 250 GeV, mW = 300 GeV, µ = 350 GeV. In this way, the mass of the lightest neutralino is mχ0 1 ≈ 225 GeV and the mass of the lightest chargino is mχ± 1 ≈ 250 GeV. Such a chargino-neutralino spectrum is heavy enough to avoid the bounds from the direct stop searches [81][82][83][84] as well as bounds from electro-weakino searches [86,87]. Finally, we set tan β = 3 to minimize contributions to the dipole Wilson coefficients. In fig. 8 we show bounds on the trilinear couplings that can be derived from the b → sµ + µ − data in the described scenario. We evaluate all MSSM 1-loop contributions to the Wilson coefficients C (′) 7,8,9,10 and compute the χ 2 as defined in (15) as a function of the trilinear couplings. For the numerical evaluation of the Wilson coefficients in the MSSM, we use an adapted version of the SUSY_FLAVOR code [88][89][90]. The plot on the left-hand side of fig. 8 shows constraints in the A t -A ct plane, assuming real trilinears. The plot on the right-hand side shows constraints in the Re(A ct )-Im(A ct ) plane, for a fixed A t = −1.5 TeV. The red region is excluded by the b → sµ + µ − data by more than 2σ with respect to the SM (χ 2 > χ 2 SM + 6). In the blue region the agreement between the theory predictions and the experimental b → sµ + µ − data is improved by more than 1σ with respect to the SM (χ 2 < χ 2 SM − 2.3). In the black corners, the lightest up-squark mass eigenstate is lighter than the lightest neutralino and would be the LSP. Outside the dashed contours there exist charge and color breaking minima in the MSSM scalar potential that are deeper than the electro-weak minimum. Note that the regions outside of these vacuum stability contours are not necessarily excluded. Even though a deep charge and color breaking minimum exists in these regions, the electro-weak vacuum might be meta-stable with a lifetime longer than the age of the universe. Studies show that requiring only meta-stability relaxes the stability bounds on the trilinear couplings slightly [91][92][93][94][95]. A detailed analysis of vacuum meta-stability is beyond the scope of the present work. Lepton flavour non-universality in the MSSM The Z penguin effects discussed above are lepton flavour universal, i.e. they lead to the same effects in b → se + e − and b → sµ + µ − decays.
Breaking of e-µ universality as hinted by the R K measurement can only come from box contributions as they involve sleptons of different flavours. If there are large mass splittings between the first and second generations of sleptons, or more precisely, if the selectrons are decoupled but smuons are kept light, Wino box diagrams (and to a lesser extent also Bino box diagrams) can contribute to C µ 9 and C µ 10 but not to C e 9 and C e 10 . Box contributions are, however, typically rather modest in size. As discussed above, boxes that are induced by flavour-changing trilinears arise only at the dimension 8 level and are completely negligible. Non-negligible box contributions (at the dimension 6 level) are only possible in the presence of flavour violation in the squark soft masses. However, even allowing for maximal mixing of left-handed bottom and strange squarks, it was found in [3] that Winos and smuons close to the LEP bound of ∼ 100 GeV as well as bottom and strange squarks with masses of few hundred GeV would be required to obtain contributions to C µ 9 and C µ 10 of 0.5, that could give R K ∼ 0.75. A careful collider analysis would be required to ascertain if there are holes in the LHC searches for stops [81][82][83][84], sbottoms [96][97][98], sleptons [86,99,100] and electro-weakinos [86,87] that would allow such an extremely light spectrum. We also note that a sizable splitting between the left-handed smuon and selectron masses required to break e-µ universality is only possible if the slepton mass matrix is exactly diagonal in the same basis as the charged lepton mass matrix, since even a tiny misalignment would lead to an excessive µ → eγ decay rate. Flavour changing Z bosons A massive Z gauge boson with flavour-changing couplings to quarks is an obvious candidate that can lead to large effects in b → s decays [3,[101][102][103][104][105][106]. Instead of discussing a complete model that contains such a Z boson, we will take a bottom up approach and ask which properties a Z has to have in order to explain the discrepancies observed in the b → s data. To this end we treat the mass of the Z as well as its couplings to SM quarks and leptons as free parameters. Following the notation of [107], we parametrize the Z couplings as In the presence of ∆ bs L/R and ∆ µµ L/R couplings, the Z boson will contribute to the Wilson coefficients C ( ) 9 and C ( ) 10 at tree level. As the primed Wilson coefficients hardly improve the agreement of the experimental b → sµ + µ − data with the theory predictions, we will not consider them here and set the right-handed bs couplings to zero, ∆ bs R = 0. The Z couplings ∆ bs L and ∆ µµ L/R are subject to various constraints that bound the maximal effect a Z prime can have in C 9 and C 10 . In particular, a Z boson with flavour-changing b ↔ s couplings will inevitably also contribute to B s -B s mixing at the tree level. One finds the following modification of the mixing amplitude where v = 246 GeV is the Higgs vev, and the SM loop function is given by S 0 2.3. If we allow for maximally 10% new physics contribution to the mixing amplitude, i.e. |M 12 /M SM 12 −1| < 0.1, we obtain the following stringent bound on the Z mass and the flavour-changing coupling, Concerning the couplings of the Z to leptons, we will start with the least constrained case, where the Z only couples to muons, but not to electrons and consider a coupling to lefthanded muons only. 
Subsequently, we will discuss how our conclusions change if we assume a vector-like coupling to muons or a lepton-flavour universal coupling. Z with coupling to left-handed muons The only non-zero coupling to charged leptons we consider here is ∆ µµ L . Such a Z is very poorly constrained. Over a very broad range of Z masses, the strongest constraint on ∆ µµ L comes from neutrino trident production [105,108], i.e. the production of a muon pair in the scattering of a muon-neutrino in the Coulomb field of a heavy nucleus. The relative correction of the trident cross section in the presence of the considered Z is given by We use the CCFR measurement of the trident cross section, σ CCFR /σ SM = 0.82 ± 0.28 [109], to set bounds on the Z mass and its coupling to muons. At the 2σ level we find Combining this result with the bound on the flavour-changing quark coupling from B s mixing, eq. (32), we can derive an upper bound on the possible size of new physics contributions to the Wilson coefficients C 9 and C 10 that can be achieved in the considered setup. For the Wilson coefficients we have This implies |C NP 9 | = |C NP 10 | < 5.4 . The best fit values in the C NP 9 = −C NP 10 scenario found in section 3.3 are well within this bound. Although the explanation of the tensions in b → sµ + µ − transitions does not require a coupling of the Z to first-generation quarks, it is interesting to investigate what happens in models where such couplings are present, which could lead to Z signals at the LHC. Fixing the Wilson coefficients C 9 and C 10 to their best fit values and assuming the flavour-changing coupling to have its maximal value (32) allowed by B s mixing, we find a lower bound on the muon coupling, eq. (37). Adopting the lower end of this range, ATLAS and CMS searches for quark-lepton contact interactions [110,111] can be used to put an upper bound on the Z coupling to the left-handed first-generation quark doublet. Using the CMS results [111], we find M Z /|∆ qq L | ≳ 11 TeV (7 TeV) (38) for constructive (destructive) interference with the SM q L q̄ L → µ + µ − amplitude. Comparing this to (32), we conclude that models with a rough scaling |∆ bs L | ∼ |V tb V * ts ∆ qq L | are compatible with these bounds. For a Z mass between 200 GeV and 3.5 TeV, LHC searches for resonances in the dimuon mass spectrum [112,113] can also be used to put an upper bound on the Z coupling to first-generation quarks as a function of M Z . In fig. 9 we show the bound on ∆ qq L using the results from the ATLAS search [112] (shaded blue region). For the branching ratios of the Z we assume BR(Z → µ + µ − ) = BR(Z → ν µ ν̄ µ ) = 1/2 , which approximately holds as long as the ∆ µµ L coupling is sufficiently large compared to couplings to other states. The bound from resonance searches could be weaker if the Z has e.g. a sizable branching ratio into a dark sector. In the same plot, we also show the bound from quark-lepton contact interaction searches from CMS [111], assuming (37) (red line); the upper axis of the plot shows the minimal value of the Z coupling to left-handed muons, eq. (37), required to obtain the best fit values for C 9 and C 10 . Below 3.5 TeV, we show this bound as a dashed line, because for such light Z masses the contact interaction approximation becomes invalid.
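The explicit expressions for the Z contributions to C 9 and C 10 and the numerical bounds are not reproduced above, so the following sketch uses a commonly quoted tree-level matching for a boson with couplings ∆ bs L , ∆ µµ L and ∆ µµ R . The overall sign and normalization conventions differ between references and the numerical inputs are rough, so this should be read as an illustration rather than as the formulas used in the paper.

```python
import numpy as np

# Assumed tree-level matching convention:
# C9,10^NP ~ -(pi v^2 / (alpha M^2)) * Delta_L^bs * (Delta_L^mumu +/- Delta_R^mumu) / (V_tb V_ts*)
# Signs and prefactors vary between references; inputs below are indicative only.
v = 246.0              # GeV, Higgs vev
alpha_em = 1.0 / 128.0 # electromagnetic coupling near the electroweak scale
vtb_vts = 0.04         # |V_tb V_ts*|, approximate

def c9_c10_np(delta_bs_L, delta_mu_L, delta_mu_R, m_zp):
    norm = np.pi * v**2 / (alpha_em * vtb_vts * m_zp**2)
    c9 = -norm * delta_bs_L * (delta_mu_L + delta_mu_R)
    c10 = norm * delta_bs_L * (delta_mu_L - delta_mu_R)
    return c9, c10

# A purely left-handed muon coupling gives C9 = -C10, and a purely vector-like
# coupling (delta_mu_L = delta_mu_R) gives C10 = 0, as stated in the text.
print(c9_c10_np(1e-3, 0.5, 0.0, 1000.0))
print(c9_c10_np(1e-3, 0.25, 0.25, 1000.0))
```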
We conclude that, in order to lead to visible effects in b → sµ + µ − transitions, a heavy Z with M Z 3 TeV can have weak-interaction strength couplings to first-generation quarks without being in conflict with the bounds from contact interactions. Such a heavy Z must have strong couplings to muons (∆ µµ L 1). A lighter Z can be weakly coupled to muons, but requires a suppression of the coupling to first-generation quarks by roughly two orders of magnitude to avoid the bounds from direct searches. Z with vector-like coupling to muons If the couplings of the Z to muons are purely vector-like we can define ∆ µµ L = ∆ µµ R ≡ ∆ µµ V /2. In this case, the correction to the neutrino trident cross section reads and we obtain the following bound using the CCFR measurement Now the NP contribution to the Wilson coefficient C 10 vanishes, while for C 9 one has Again, one finds that sizable effects are possible: adopting the maximum allowed values for the couplings (40) and (32), we find |C NP 9 | < 9.3. The bounds on first-generation quark couplings from contact interaction and dimuon resonance searches are qualitatively similar to the left-handed case. Z with universal coupling to leptons If the Z coupling to leptons is flavour-universal, stringent bounds on ∆ can be obtained from LEP2 searches for four lepton contact interactions [114]. Depending on whether the coupling is to left-handed leptons only or is vector-like, we find where, for the last step, the flavour-changing coupling has been assumed to saturate the upper bound in (32) coming from B s mixing. We observe that the effects in b → sµ + µ − transitions are now much more limited, but, in particular for left-handed couplings, can still come close to the best-fit values in section 3.3 (of course, the anomaly in R K cannot be explained in this scenario.) Interestingly, this also implies that the effect in B s mixing is necessarily close to the current experimental bounds. Future improvements of the B s mixing constraints will then allow to test the lepton flavour universal scenario. Concerning collider searches, the new feature of the lepton universal case is that there is an absolute lower bound on the Z mass from LEP2, M Z > 209 GeV. LHC bounds on the coupling to first generation quarks, on the other hand, are qualitatively similar to the non-universal case discussed above. Summary and conclusions Several recent results on rare B decays by the LHCb collaboration show tensions with standard model predictions. Those include discrepancies in angular observables in the B → K * µ + µ − decay, a suppression in the branching ratios of B → K * µ + µ − and B s → φµ + µ − , as well as a hint for the violation of lepton flavour universality in the form of a B → Kµ + µ − branching ratio that is suppressed not only with respect to the SM prediction but also with respect to B → Ke + e − . In this paper we performed global fits of the experimental data within the SM and in the context of new physics. For our SM predictions we use state-of-the-art B → K, B → K * and B s → φ form factors taking into account results from lattice and light cone sum rule calculations. All relevant nonfactorizable corrections to the B → Kµ + µ − , B → K * µ + µ − and B s → φµ + µ − amplitudes that are known are included in our analysis. Additional unknown contributions are parametrized in a conservative manner, such that existing estimates of their size are within the 1σ range of our parametrization. 
We take into account all the correlations of theoretical uncertainties between different observables and between different bins of dilepton invariant mass. As experimental data becomes available for more and more observables in finer and finer bins, the theory error correlations have a strong impact on the result of the fits. Making use of all relevant experimental data on radiative, leptonic and semi-leptonic b → s decays, we find that there is an overall tension between the SM predictions and the experimental results. Assuming the absence of new physics, we investigated to which extent non-perturbative QCD effects can be responsible for the apparent disagreement. We find that large non-factorizable corrections, a factor of 4 above our error estimate, could improve the agreement for the B → K * µµ angular observables and the branching ratios considerably. Alternatively, the branching ratio predictions could also be brought into better agreement with the experimental data if the involved form factors were all systematically below the theoretical determinations from the lattice and from LCSR. On the other hand, we find that non-standard values of the form factors could at most lead to a modest improvement of the B → K * µ + µ − angular observables. In both cases, however, the hint for a violation of lepton flavour universality cannot be explained. Assuming that in our global fits the hadronic uncertainties are estimated in a sufficiently conservative way, we discussed the implications of the experimental results for new physics. Effects from new physics at short distances can be described model-independently by an effective Hamiltonian, and the experimental data can be used to obtain allowed regions for the new physics contributions to the Wilson coefficients. We find that by far the largest decrease in the χ 2 can be obtained either by a negative new physics contribution to C 9 (with C NP 9 ∼ −30% × C SM 9 ), or by new physics in the SU (2) L invariant direction C NP 9 = −C NP 10 (with C NP 9 ∼ −12% × C SM 9 ). A positive NP contribution to C 10 alone would also improve the fit, although to a lesser extent. Concerning the hint for a violation of lepton flavour universality, we observe that new physics exclusively in the muonic decay modes leads to an excellent description of the data. We do not find any preference for new physics in the electron modes. We provide predictions for other lepton flavour universality tests. We find that the ratio R A FB of the forward-backward asymmetries in B → K * µ + µ − and B → K * e + e − at low dilepton invariant mass is a particularly sensitive probe of new physics in C µ 9 . A precise measurement of R A FB would make it possible to distinguish the new physics scenarios that give the best description of the current data. Finally, we also discussed the implications of the model-independent fits for the minimal supersymmetric standard model and for models that contain Z gauge bosons with flavour-changing couplings. In the MSSM, large flavour-changing trilinear couplings in the up-squark sector can give sizable contributions to the Wilson coefficient C 10 , and we identified regions of MSSM parameter space that are favoured or disfavoured by the current experimental data. Heavy Z bosons can have the required properties to explain the discrepancies observed in the b → s data. If the Z couples to muons but not to electrons (as preferred by the data), it is only weakly constrained by indirect probes.
On the other hand, if the Z couplings to leptons are flavour universal, LEP constraints on four-lepton contact interactions imply that an explanation of the b → s discrepancies results in new physics effects in B s mixing of at least ∼ 10%. In all scenarios, the couplings of the Z to first-generation quarks are strongly constrained by ATLAS and CMS measurements of dilepton production. We look forward to the updated experimental results using the full LHCb data set, which will be crucial in helping to establish or to refute the exciting possibility of new physics in b → s transitions. C. Constraints on pairs of Wilson coefficients Figs. 11 and 12 show the constraints in the planes of the complex Wilson coefficients or of various pairs of real Wilson coefficients. The blue contours correspond to the 1 and 2σ regions allowed by the global fit. The green short-dashed and the red short-dashed contours correspond to the 2σ allowed regions for scenarios with doubled form factor uncertainties and doubled uncertainties related to sub-leading non-factorizable corrections, respectively. The ∆χ 2 of the best fit point with respect to the SM is also given in the plots.
Increasing lipid yield in Yarrowia lipolytica through phosphoketolase and phosphotransacetylase expression in a phosphofructokinase deletion strain Background Lipids are important precursors in the biofuel and oleochemical industries. Yarrowia lipolytica is among the most extensively studied oleaginous microorganisms and has been a focus of metabolic engineering to improve lipid production. Yield improvement, through rewiring of the central carbon metabolism of Y. lipolytica from glucose to the lipid precursor acetyl-CoA, is a key strategy for achieving commercial success in this organism. Results Building on YB-392, a Y. lipolytica isolate known for stable non-hyphal growth and low citrate production with demonstrated potential for high lipid accumulation, we assembled a heterologous pathway that redirects carbon flux from glucose through the pentose phosphate pathway (PPP) to acetyl-CoA. We used phosphofructokinase (Pfk) deletion to block glycolysis and expressed two non-native enzymes, phosphoketolase (Xpk) and phosphotransacetylase (Pta), to convert PPP-produced xylulose-5-P to acetyl-CoA. Introduction of the pathway in a pfk deletion strain that is unable to grow and accumulate lipid from glucose in defined media ensured maximal redirection of carbon flux through Xpk/Pta. Expression of Xpk and Pta restored growth and lipid production from glucose. In 1-L bioreactors, the engineered strains recorded improved lipid yield and cell-specific productivity by up to 19 and 78%, respectively. Conclusions Yields and cell-specific productivities are important bioprocess parameters for large-scale lipid fermentations. Improving these parameters by engineering the Xpk/Pta pathway is an important step towards developing Y. lipolytica as an industrially preferred microbial biocatalyst for lipid production. Supplementary Information The online version contains supplementary material available at 10.1186/s13068-021-01962-6. Background Lipids are important precursors in food, cosmetics, biodiesel and biochemical industries [1]. The ever-increasing demands of these industries are largely fulfilled by plant oil feedstocks [1,2]. Plantations for oil are dependent on climatic changes, geopolitics and require large arable lands. Oil plantations are continually displacing large areas of tropical forests worldwide, severely affecting their regional biodiversity [2,3]. These environmental and sustainability concerns have fueled research into microbial oil as an alternative source of lipid production [4,5]. In recent years, Yarrowia lipolytica has emerged as the preferred organism to study and engineer lipid production [6]. It is an oleaginous yeast capable of accumulating more than 20% of its biomass as lipids [7] and with a few genetic modifications can accumulate over 60% lipid [8][9][10]. Y. lipolytica-derived products have received Generally Regarded As Safe (GRAS) status, and the yeast has a well-developed genetic tool kit for manipulating substrate and product pathways [6]. Lipids in Y. lipolytica are mostly stored as triacylglycerol (TAG) molecules (95%), composed mainly of C16 and C18 fatty acids with varying degrees of saturation [11]. Lipid composition alterations to make industrially relevant products like triolein have also been demonstrated in Y. lipolytica [12]. Consequently, Y. lipolytica is often chosen as a model organism to study the glucose-to-lipid pathway, lipid body biogenesis and homeostasis [4,13,14]. 
De novo fatty acid synthesis requires a constant supply of the metabolic precursor cytosolic acetyl-CoA and the reducing cofactor NADPH (nicotinamide adenine dinucleotide phosphate) [4,13]. In Y. lipolytica, glucose is converted to cytosolic acetyl-CoA through the combined reactions of the glycolytic cycle, mitochondrial pyruvate dehydrogenase (PDH) and ATP:citrate lyase (ACL) (Fig. 1) [7]. The pentose phosphate pathway (PPP) has been shown to supply NADPH for lipid production [15]. The lipid synthesis pathway converts cytosolic acetyl-CoA to C16-18 fatty acyl-CoAs through a series of condensation reactions. Two molecules of NADPH are used for every acetyl unit condensed into a growing C16 fatty acyl chain. Elongating C16 fatty acyl-CoAs to C18 fatty acyl-CoAs also uses two NADPH molecules [16]. While fatty acid desaturation also requires reduced cofactors, it is unclear if the preferred cofactor is NADH or NADPH [16,17]. Thus, to make one molecule of triolein (C 57 H 104 O 6 ) using this biochemical pathway, 27 acetyl-CoA and at least 48 NADPH molecules are needed (Additional file 1: Fig. S1a). To produce the required intermediates, the native lipid pathway utilizes 18 glucose molecules (Additional file 1: Fig. S1a, b). We calculate a theoretical triolein yield from glucose at 0.27 g/g (Eq. 1). Another strategy to improve glucose-to-lipid production involves rewiring central carbon metabolism to improve the yield of the biosynthetic lipid precursor acetyl-CoA, the redox cofactor NADPH, or both [22,[27][28][29][30]. This increases the theoretical maximum yield of the pathway and usually involves introducing heterologous enzymatic activities and creating new pathways in the organism [27]. One such pathway is the phosphoketolase (PK or Xpk)/phosphotransacetylase (Pta) pathway, which has been tested in various organisms including Escherichia coli [31], Saccharomyces cerevisiae [32][33][34][35] and Y. lipolytica [28,30,36]. Xpk cleaves the PPP intermediate xylulose 5-phosphate into acetyl phosphate (AcP) and glyceraldehyde 3-phosphate, and Pta catalyzes the reversible conversion of AcP to acetyl-CoA. The combined activities of Xpk and Pta produce cytosolic acetyl-CoA from the PPP instead of glycolysis, and thus link it to NADPH production. The Xpk/Pta route towards acetyl-CoA and NADPH is more efficient than Y. lipolytica's native route via glycolysis and ACL [27] (Additional file 1: Fig. S1c). Overall, the Xpk/Pta pathway (Eq. 2) requires 2.3 fewer moles of glucose to make one mole of triolein compared to the native pathway. This increases the theoretical maximal yield of triolein to 0.31 g/g glucose compared to the native pathway (comparing Eqs. 1 and 2). Xpk/Pta pathway: 15.7 glucose → 27 acetyl-CoA + 48 NADPH → 1 C18:1 TAG (2). Work in Y. lipolytica has shown that expression of Xpk/Pta can lead to improved lipid phenotypes. Coexpression of Aspergillus nidulans phosphoketolase (AnXPK) and Bacillus subtilis phosphotransacetylase (BsPTA) led to improved yield from glucose [30] and increased lipid content from xylose [36]. Expression of Leuconostoc mesenteroides phosphoketolase (LmXPK) and Clostridium kluyveri phosphotransacetylase (CkPTA) led to higher dry cell weight and lipid content with a moderate increase in lipid yield [28]. Enzymatic activity was not measured, but improved lipid metrics suggested active Xpk/Pta pathways in the engineered strains. These studies were performed in Po1 series strains, derived from a set of backcrosses between the French and American Y. lipolytica wild-type strains W29 and CBS6142-2 [37,38].
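The theoretical yields of 0.27 and 0.31 g/g quoted above follow directly from the glucose requirements of the two pathways and the molecular weights of glucose and triolein. The short script below reproduces both numbers; it is only a restatement of the stoichiometry, not a metabolic model.

```python
# Mass yields implied by the stoichiometries of Eqs. 1 and 2:
# triolein (C57H104O6) produced from glucose (C6H12O6).
MW = {"C": 12.011, "H": 1.008, "O": 15.999}
mw_glucose  = 6 * MW["C"] + 12 * MW["H"] + 6 * MW["O"]     # ~180.2 g/mol
mw_triolein = 57 * MW["C"] + 104 * MW["H"] + 6 * MW["O"]   # ~885.4 g/mol

for label, glucose_per_tag in [("native pathway (Eq. 1)", 18.0),
                               ("Xpk/Pta pathway (Eq. 2)", 15.7)]:
    yield_g_per_g = mw_triolein / (glucose_per_tag * mw_glucose)
    print(f"{label}: {yield_g_per_g:.2f} g triolein per g glucose")
# -> 0.27 and 0.31 g/g, matching the values quoted in the text.
```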
Although well characterized, the Po1 strains exhibit certain traits undesirable for commercial production: auxotrophy, significant citrate production and a tendency to grow in the hyphal morphology under certain stresses [37,39]. Wild-type Y. lipolytica strain YB-392 is a prototroph that has demonstrated potential for very high lipid content while growing entirely in the yeast morphology and with minimal citrate production [21]. It has been established that different Y. lipolytica genetic backgrounds respond differently to genetic engineering, even within the Po1 series [36]. Implementation of an Xpk/Pta pathway in YB-392 therefore remained to be investigated. In published implementations of the Xpk/Pta pathway in other species, additional engineering was required for successful pathway utilization. In E. coli, 11 gene overexpressions, 9 gene deletions and over 50 genomic mutations accumulated through evolution were engineered to successfully rescue growth in a glycolytic mutant through the Xpk/Pta pathway [31]. In a farnesene-producing S. cerevisiae strain, the aldehyde dehydrogenase genes ALD2 and ALD6 were deleted to make Xpk/Pta the sole source of acetyl-CoA production. Additionally, the glycerol-3-phosphate phosphatase gene RHR2 was deleted, as this enzyme exhibited acetyl phosphatase activity and competed with PTA for the substrate AcP [35]. The removal of competing reactions ensures that Xpk/Pta is the only available route for growth and/or acetyl-CoA production. In this study, we engineered the Xpk/Pta pathway in a glycolysis-deficient (pfk deletion) Y. lipolytica strain in the YB-392 strain background. Phosphofructokinase (Pfk, EC 2.7.1.11) catalyzes the irreversible production of fructose 1,6-bisphosphate from fructose 6-phosphate (F6P). Carbon flux from glucose to lipids can still move through glycolysis in an otherwise unmodified Xpk/Pta strain, whereas deleting PFK is expected to reroute glucose flux through the PPP [40,41] and in turn through the Xpk/Pta pathway (Fig. 1). Increased flux through the PPP could result in excess NADPH, with negative consequences for growth [42,43], unless sufficient NADPH-oxidizing reactions are present to restore the redox balance. We hypothesized that introduction of the Xpk/Pta pathway into a pfk-deficient Y. lipolytica strain would correct the NADPH imbalance by providing a route towards the NADPH-oxidizing lipid synthesis pathway (Fig. 1). The Xpk/Pta/Δpfk1 strains engineered in the current study overcame the growth and lipid production deficits of the parent Δpfk1 strain. They also exhibited improved lipid yield and cell-specific lipid productivity over the wild-type. Our work is the first to demonstrate phosphoketolase and phosphotransacetylase enzymatic activities in Y. lipolytica, as well as the combination of the Xpk/Pta pathway with a PFK deletion, maximizing the effect of the heterologous pathway towards lipid production. YB-392-derived Y. lipolytica strains using the Xpk/Pta/Δpfk1 pathway would be well suited for industrial applications, as increased yield and cell-specific lipid productivity directly tie into better economics for lipid production. Results and discussion In this study, we chose the wild-type Y. lipolytica strain YB-392 as the starting strain for its desirable biocatalyst qualities such as native lipid accumulation levels, minimal citrate formation, non-hyphal morphology and ease of genetic manipulation [10,39].
This strain has also been used to study and engineer the lipid pathway to improve lipid accumulation and alter lipid composition [10,12]. Identification of functional heterologous phosphotransacetylase (PTA) and phosphoketolase (XPK) in Y. lipolytica Heterologous PTA and XPK genes from various species of bacteria, archaea, algae and fungi were tested for activity in Y. lipolytica. Genes were individually expressed in wild-type strain YB-392 and cell-free extracts were assayed to measure Pta and Xpk activity (lists of all the PTA and XPK genes tested are in Additional file 1: Tables S1.1 and S1.2, respectively). LmXPK, AnXPK, CkPTA and BsPTA were previously expressed in the Y. lipolytica Po1 lineage [28,30,36]. Xpk and Pta activities were not reported in those studies, and LmXPK and AnXPK did not exhibit activity in our wild-type strain YB-392 (Additional file 1: Table S1.2). The wild-type strain contains no endogenous Xpk or Pta and shows background levels of activity. Strains expressing PTA genes from Bacillus subtilis (BsPTA(v1)) and Thermoanaerobacterium saccharolyticum (TsPTA(v1)) and an XPK gene from Clostridium acetobutylicum (CaXPK(v1)) exhibited the highest activity in our screens (Figs. 2 and 3). This marks the first published measurement of Pta and Xpk activity in Y. lipolytica. In the course of identifying active genes in Y. lipolytica, we found that different methods of screening were successful for PTA and XPK. Linear integrating expression cassettes and codon optimization to S. cerevisiae resulted in successful PTA expression. This approach did not yield detectable Xpk activity in our study (data not shown), presumably due to low expression from the candidate genes. Codon optimization of XPK genes to Y. lipolytica and expression from replicating plasmids, a method of expression that yields uniformly high expression, was required to identify one gene candidate with detectable enzymatic activity (Fig. 3). Construction and characterization of the Y. lipolytica Δpfk1 strain (NS1047) To increase glucose flux through the Xpk/Pta pathway, we partially disabled glycolysis by deleting the PFK1 gene (Fig. 1). Y. lipolytica contains only one PFK gene (YALI0D16357) and its deletion leads to loss of growth on minimal media with glucose as the only carbon source [44]. We deleted the PFK1 gene in YB-392 to create the Δpfk1 strain NS1047.

Fig. 2 Heterologous Pta activity in Y. lipolytica. PTA genes from seven different source organisms codon-optimized to S. cerevisiae (GeneArt) were expressed in YB-392 under the control of the Y. lipolytica EXP1 promoter using linear integrating expression cassettes. Cell-free extracts (CFE) from four transformants per test gene were analyzed for Pta activity using a DTNB assay. Data are presented as fold change of the measured specific activity over the averaged specific activity of the parent strain YB-392 (dashed line), which is included as control.

Disruption of glycolysis through PFK or PGI deletions can cause growth defects on glucose [40,42], presumably due to excess NADPH produced by increased glucose flux through the PPP [43,45]. In organisms that can metabolize cytosolic NADPH, glycolytic-deficient strains retain or regain growth on glucose (e.g., through native cytosolic NADPH oxidase in Kluyveromyces lactis [43] or by overexpression of a transhydrogenase enzyme in E. coli [45]). Y. lipolytica lacks a cytosolic NADPH oxidase [46] and additional PFK1 homologues [44]. Furthermore, NS1047 was unable to grow in minimal glucose media (Fig. 4a) and lipid production media (Fig.
5b, black bar) with glucose as the primary carbon source. We therefore hypothesize that Y. lipolytica cannot tolerate the excess NADPH produced when glucose is consumed in a PFK-deleted strain. Growth on YPD was not abolished, but it was slower for NS1047 than YB-392 (Fig. 4b). The reduced severity of the growth defect on this rich medium could be attributed to the ability of Y. lipolytica to utilize amino acids for growth [47] and the ability of a pfk1 deletion mutant to metabolize permissive carbon sources in the presence of glucose [44]. Engineering the Xpk/Pta pathway into a Δpfk1 strain To assemble the Xpk/Pta/Δpfk1 pathway in Y. lipolytica, we expressed CaXPK(v1) and BsPTA(v1) in NS1047 (Δpfk1) (Fig. 5). With each transformation step, we screened for the best transformant using appropriate enzymatic assays and measured growth and lipid accumulation on glucose (Additional file 1: Table S2.1). Noticeable Xpk and Pta activities were obtained in the course of constructing strain NS1352 (Fig. 5a). However, NS1352 containing three copies of CaXPK(v1) and two copies of BsPTA(v1), showed minimal improvement in growth or lipid accumulation on glucose when compared to NS1047 (Fig. 5b). One possibility for the lack of improvement is that Xpk and Pta activities were still too low for a functional pathway. As addition of multiple copies of CaXPK(v1) and BsPTA(v1) genes yielded only incremental improvements in enzymatic activities (Fig. 5a), we decided to revisit codon optimization strategies to improve gene expression. To further improve Xpk and Pta activity in Y. lipolytica, different codon optimization strategies were tested on our top performing genes BsPTA, TsPTA and CaXPK. TsPTA codon-optimized to Y. lipolytica (GeneArt) exhibited the highest activity and is referred to as TsPTA(v2) from hereon. Improving Xpk activity continued to be challenging and no improvements were observed using the strategies that were successful with Pta (data not shown). Through analysis of publicly available Y. lipolytica transcriptomics data (data accessible at NCBI GEO database [48][49][50]), we noted that the highest expressed genes contained few rare codons (Additional file 1: Fig. S2a). Based on this observation, we manually designed and tested three additional versions of CaXPK (Additional file 1: Fig. S2b). The highest Xpk activity was obtained when all codons present at a frequency ≤ 2% were replaced with their higher frequency counterparts and this gene is referred to as CaXPK(v2). To determine whether TsPTA(v2) and CaXPK(v2) increase flux through the Xpk/Pta pathway, we tested whether their expression in NS1352 could restore growth and lipid accumulation on glucose. Addition of TsPTA(v2) quadrupled Pta activity (NS1420, Fig. 5a), but growth and lipid production on glucose remained unchanged (Fig. 5b), suggesting that Xpk activity could still be limiting. Consistent with this hypothesis, addition of CaXPK(v2) improved lipid accumulation to near YB-392 levels (NS1457, Fig. 5b, grey bars). Growth on glucose also improved, but was still deficient compared to YB-392 (NS1457, Fig. 5b, black bars). Addition of a second copy of CaXPK(v2) almost quadrupled growth on glucose compared to its parent strain (NS1457 and NS1475, Fig. 5b, black bars). Lipid accumulation remained similar to the parent strain NS1457 and YB-392 (Fig. 5b, grey bars). 
The observation that lipid accumulation was restored with fewer copies of XPK than were required to restore wild-type growth to a Δpfk1 strain suggests that more Xpk activity is needed for growth than lipid production. To further characterize the Xpk/Pta/Δpfk1 pathway strain NS1475, we carried out batch fermentations in 1-L bioreactors with YB-392 included as control. Two replicate fermentation experiments comparing YB-392 and NS1475 were conducted on two separate occasions to account for any culturing variations. NS1475 was comparable to the wild-type YB-392 in terms of growth and total lipid accumulated (Fig. 6a). NS1475 recorded an improved total lipid yield (+ 16%), cell-specific lipid productivity (+ 41%) and lipid content (+ 16%) over YB-392 ( Fig. 6b-d). Our results confirm that the Xpk/Pta pathway can rescue growth and lipid defects of a PFK deletion while also improving bioprocess parameters like yield and cell-specific productivity. Confirming improved lipid production with Xpk/Pta/Δpfk1 genotype To confirm that the Xpk/Pta pathway was directly responsible for restoring growth and improving lipid production from glucose in Δpfk1, we reconstructed the pathway in NS1047 using only TsPTA(v2) and CaXPK(v2) (Fig. 7a). We wanted to eliminate the possibility that the nine rounds of transformation involved in constructing NS1475 resulted in unintentional contributions to the strain's phenotype. As before, Pta and Xpk activity, growth on glucose, and lipid production were monitored at each engineering step (Additional file 1: Table S2.2). One copy of TsPTA(v2) and three copies of CaXPK(v2) were required to restore growth and lipid accumulation to YB-392 levels reducing the number of engineering steps to five instead of nine. Despite the usage of a more active XPK gene sequence, the pathway was again Xpk-limited and required more copies of XPK than PTA to achieve the desired phenotype. Xpk limitation was also reported by Lin et al. while engineering the pathway in E. coli [31]. An Xpk bottleneck could be increasingly restricting as we engineer strains toward higher lipid content by combining this pathway with native lipid pathway engineering (e.g., overexpression of DGA1). Additional research could identify a higher-expressing XPK gene that may further reduce the number of engineering steps required to introduce a functional Xpk/Pta/Δpfk1 pathway in Y. lipolytica. The two best-performing strains at the end of this engineering strategy (NS1656 and NS1657, Fig. 7a) were further characterized in 1-L batch fermentations alongside YB-392. Under the same fermentation conditions, all three strains attained similar lipid-free biomass (Fig. 7b). As was the case with NS1475, the Xpk/Pta/Δpfk1 strains outperformed YB-392 in lipid yield and cell-specific lipid productivity ( Fig. 7c and d). A slight improvement in lipid content was also observed (Fig. 7e). Overall, all three engineered strains showed an improvement in total lipid yield ranging from 13.3%-19.6% and an improvement in cell-specific productivity ranging from 41%-78% over YB-392. These results confirm that we successfully rewired central carbon metabolism in Y. lipolytica. As expected, redox imbalance created by the PFK deletion led to growth and lipid accumulation defects on glucose media. XPK/PTA expression resulted in confirmed Xpk and Pta activity and corrected this imbalance by providing a route towards the NADPH-oxidizing lipid synthesis pathway, restoring growth and lipid production on glucose. Unlike S. 
cerevisiae (Meadows et al., 2016), we found that Y. lipolytica did not have any native competing AcP-consuming reactions (no AcP degradation detected in the wild-type strain, data not shown). Thus, in engineering the Xpk/Pta pathway in Y. lipolytica we found that Xpk limitation was the main bottleneck, and PFK deletion offers a means to ensure maximal pathway utilization. Conclusions To achieve maximum lipid titer and yield from Y. lipolytica, a combination of native pathway engineering and rewiring of central carbon metabolism may prove to be a successful strategy. In this study, we expressed optimized phosphoketolase and phosphotransacetylase genes in a phosphofructokinase-deficient Δpfk1 strain to demonstrate the use of the Xpk/Pta pathway in improving lipid production in the commercially attractive YB-392 strain background. The engineered strains recorded up to 19% higher total lipid yield and up to 78% higher cell-specific productivity compared to the wild-type strain. Such improvements in bioprocess metrics make lipid production in Y. lipolytica more suitable for industrial applications. Since the Xpk/Pta pathway essentially improves acetyl-CoA production, this pathway can be used to improve bioprocess metrics of other acetyl-CoA derived products including fatty alcohols, sterols, alkenes/alkanes, isoprenoids, etc. [48]. The theoretical improvement in yield makes the Xpk/Pta pathway a compelling technology for large-scale, commodity fermentation in the biofuel and biochemical industries.

Fig. 5 Engineering the Xpk/Pta pathway in a Δpfk1 Y. lipolytica strain. a Pta and Xpk activity. Pta activity in all strains shown was measured using the DTNB assay (black bars). Xpk activity in the control strain YB-392 and all the strains that obtained a copy of CaXPK through transformation (NS1281, NS1292, NS1322, NS1457 and NS1475) was measured using the ferric hydroxamate assay with ribose 5-phosphate as the substrate (grey bars). NS1047, NS1341, NS1352 and NS1420 were excluded from this assay. b Growth and lipid accumulation assays. OD 600 was measured after 2 days of growth in lipid production media (black bars). Lipid accumulation was measured as fluorescence/OD after overnight growth in glycerol followed by seven days of culture in modified Verduyn media (grey bars). Modified Verduyn media contained glucose as the only carbon source and no nitrogen to induce lipid production.

Strains, cultivation and media Wild-type, haploid Yarrowia lipolytica strain YB-392 was obtained from the ARS Culture Collection (NRRL). For routine growth and genetic transformation, strains were cultured in YPD (10 g/L yeast extract, 20 g/L bacto peptone, 20 g/L glucose) or YPD/Et/Gly (YPD as described, plus 20 g/L ethanol and 30 g/L glycerol). The Y. lipolytica PFK1 gene YALI0D16357 was deleted through targeted genomic integration using direct repeats and a combination of positive and negative selection for marker recycling. Using standard molecular biology techniques, a construct was designed comprising the genetic parts listed in Additional file 1: Table S3. A two-fragment deletion cassette was amplified by PCR using a combination of terminal and internal oligonucleotide primers such that the fragments overlapped in the nat marker reading frame, but neither fragment alone contained the entire functional nourseothricin-resistance gene. PCR products were transformed into hydroxyurea-treated cells as described in our previous work [51].
Transformation recovery was in YPD/Et/Gly to provide carbon sources in addition to glucose. Transformed cells were plated on YPD/Et/Gly containing 500 µg/mL nourseothricin. Successful cassette integration replaced the PFK1 locus by a double recombination event at the 47-bp upstream and 621-bp downstream regions. A longer downstream homology region was chosen to increase the likelihood of this recombination event as opposed to recombination between the homologous 450-bp regions in the integration cassette and upstream of PFK1. Nourseothricin-resistant colonies were screened by PCR for the presence of the expected targeted integration product and the absence of the PFK1 gene. The phenotype of resulting deletion strains was confirmed by plating on defined media with glucose as the only carbon source. To eliminate the marker cassette, the deletion strains were grown on YPD/Et/Gly agar plates without selection for 1 day to allow for survival of cells that naturally excised the cassette by recombination of the 450-bp direct repeat formed between the endogenous PFK1 upstream region and the identical sequence introduced in the integration cassette. Subsequent plating of strains on YPD/Et/Gly agar containing 30 µM 5-fluoro-2′-deoxyuridine (FUDR) selected for the absence of the thymidine kinase gene. To identify marker-less PFK1 deletion strains, FUDR-resistant isolates were screened for reversion to nourseothricin sensitivity and loss of the marker cassette from the pfk1 locus was confirmed by PCR. Oligonucleotide primer sequences are included in the Additional file 1: Table S3.1. XPK and PTA gene expression To identify functional XPK and PTA genes, expression cassettes were transformed into the desired Y. lipolytica strains as a part of a linear integrated expression construct [10] or replicating plasmid composed of the genetic parts listed in Additional file 1: Tables S4.1 and S4.2. For replicating plasmids, 100 ng of undigested plasmid was used in the transformation mix. To assemble the Xpk/ Pta/Δpfk1 pathway, NS1047 and subsequent intermediate strains were transformed with linear constructs containing XPK or PTA and positive and negative marker expression cassettes (Additional file 1: Table S4.3). Transformants were selected on antibiotic plates and screened for the highest performance using appropriate enzymatic, lipid and growth assays. Additional file 1: Tables S2.1 and S2.2 describe the screening steps used to construct NS1475 and NS1656-57, respectively. To eliminate the marker cassette in these strains, the chosen isolates were grown on YPD agar plates without selection for one day to allow for survival of cells that naturally excised the cassette by recombination between the identical copies of the Y. lipolytica TEF1 promoter driving expression of thymidine kinase and the gene of interest in the integration cassette. Subsequent plating on YPD agar containing 30 µM 5-fluoro-2′-deoxyuridine (FUDR) counter-selects for the thymidine kinase gene. FUDR-resistant isolates were screened by confirmation of reversion to nourseothricin sensitivity to identify marker-less strains. Growth and lipid assays To evaluate growth on glucose, strains were patched on YNB plates (6.7 g/L Yeast Nitrogen Base without amino acids, 20 g/L agar) or cultured in lipid production media (0.5 g/L urea, 1.5 g/L yeast extract, 0.85 g/L casamino acids, 1.7 g/L Yeast Nitrogen Base without amino acids and ammonium sulfate, 100 g/L glucose, and 5.11 g/L potassium hydrogen phthalate) [10]. 
Growth in lipid production media was tested by growing strains overnight in YPD, washing with sterile water, and inoculating into the lipid production media at a starting OD 600 (optical density measured at 600 nm) of 0.05. OD 600 measurements to monitor growth were taken after culturing for 2 days in shake flasks. To measure lipid accumulation, strains were grown in glycerol (YPG) for 24 h and then switched to a modified Verduyn media [52] (modified to contain no ammonium sulfate and 100 g/L glucose) to induce lipid production. The cells grown in YPG were pelleted, washed with water and resuspended in the modified Verduyn media and cultured for 7 days. These characterizations were carried out in 96-well, 48-well or 24-well deep well plates and 250-mL shake flasks. The lipid Bodipy assay described in our previous work [10] was used with one modification: PBS was used instead of the master mix previously described. Lipid accumulation was measured as fluorescence units normalized to the OD 600 (Fl/OD). Cell-free extract preparation and enzymatic assays Strains were grown in 5 mL YPD or YPG overnight at 30 ºC. The cells were pelleted by centrifugation and after a wash with autoclaved water, were pelleted again. The pellets were resuspended in lysis buffer Y-PER ™ plus (Thermo Scientific) per the manufacturer's instructions. Protease inhibitor cocktail (Sigma Aldrich) was added (5 µL for every 1 mL of the lysis buffer used) and 0.5-mm glass beads were added at an equal volume to the cell pellet. The cells were homogenized in a FastPrep-24 ™ 5G (MP biomedicals) (3 cycles of 5.5 m/s for 30 s, with 5 min resting on ice in between runs). The homogenized cell lysates were centrifuged at 10,000 rpm for 10 min at 4 ºC and the supernatants were stored on ice for immediate use in enzymatic assays. Total protein concentrations were determined by the Pierce ™ Coomassie (Bradford) Protein Assay Kit (Thermo Scientific). Phosphoketolase activity was measured using a ferric hydroxamate assay on crude cell-free extracts [55]. The 200 µL reaction mixture contained 0.5 mM thiamine pyrophosphate (TPP), 1 mM DTT, 5 mM MgCl 2 , 50 mM morpholine ethane sulfonic acid (MES) buffer (pH 5.5 for all kinetic studies), 333 mM sodium phosphate substrate and 333 mM of either fructose 6-phosphate or ribose 5-phosphate as substrate. Ribose 5-phosphate which is converted to X5P by endogenous enzymes in cell-free extract, was used to measure phosphoketolase activity indirectly [56,57]. 20-80 µL of cell-free extract was used to initiate the reaction, and the mixture was incubated at 37 ºC for 15-30 min. 100 µL of 2 M hydroxylamine hydrochloride (pH 7.0) was added and incubated at room temperature for 10 min to stop the reaction. 600 µL of a 1:1 mixture of 2.5% FeCl 3 in 2 N HCl and 10% trichloroacetic acid was added. The final reaction step results in the formation of the ferric-hydroxamate complex, which was measured spectrophotometrically at 540 nm [58]. For specific activity measurements, reactions were stopped at 5-min intervals and ΔAbs/min was calculated. Codon optimizations BsPTA(v1) and TsPTA(v1) genes were codon-optimized to S. cerevisiae using the GeneArt Gene Synthesis service (ThermoFisher Scientific). TsPTA(v2) and CaXPK(v1) were codon-optimized to Y. lipolytica using GeneArt Gene Synthesis service and the open source web application ATGme [59], respectively. 
CaXPK(v2) was codon-optimized using the ATGme web application by manual replacement of all possible codons in the gene present at a frequency ≤ 2% with their higher frequency counterparts. All the gene sequences used in the strain engineering are listed in Additional file 1: Table S4. Glucose batch fermentation in 1-L bioreactors Frozen working stocks of strains were patched onto a YPD plate and grown overnight at 30 °C. A 10-µL loopful of cells was removed from each plate and used to inoculate separate 250-mL baffled Erlenmeyer flasks with 50 mL of lipid production media. Inoculum flasks were cultured overnight at 30 °C with constant agitation of 200 rpm in a New Brunswick I26 incubator shaker, whereupon the OD 600 was measured. A volume of each flask culture required to initiate its corresponding 1-L bioreactor at a T0 cell density of 0.4 OD 600 was transferred to separate sterile conical tubes. Each conical tube was then brought to 50 mL with sterile diH2O and used to inoculate the bioreactors. Process parameters included pH control at 3.5, automatically adjusted with 10 N sodium hydroxide, a temperature of 30 °C, aeration at 0.3 vvm air, and agitation controlled at 1000 rpm. A sample of 10 mL was taken from each culture once per day. The samples were stored at 4 °C after each harvest until analyzed. For all time-points, broth analysis was conducted via HPLC. Total dry cell weight (DCW) and total lipid content were measured gravimetrically by a two-phase solvent extraction. Cell-specific lipid productivities were calculated once the strains reached lipogenesis and their growth had slowed (day 2-day 5). Gravimetric measurement of dry cell weight (DCW) and total lipid content using a two-phase solvent extraction Broth volume from each harvested culture sample was added to a separate pre-weighed 2-mL screw-cap microfuge tube (USA Scientific, 1420-8799) to achieve a dried cell mass between 15 and 20 mg. Samples were washed twice with deionized H2O and centrifuged at 21,130 × g for 2 min. Pelleted cells were then resuspended in 200 µL of deionized H2O, frozen at − 80 °C for 30 min, and freeze-dried overnight. Each tube was weighed to obtain the DCW. To each freeze-dried sample and three blank microfuge tubes, 400 mg of glass beads (Sigma, G8772) and 400 µL of a 1.5:1 CPME:MeOH (cyclopentyl methyl ether:methanol) solution were added. Samples were then bead-beaten under maximum agitation (BioSpec Mini-BeadBeater 8) for 2 min and allowed to cool to room temperature. After having cooled, 640 µL of CPME followed by 640 µL of 10% (w/v) CaCl2·6H2O were added to each sample and vortexed. Samples were then centrifuged for 2 min at 21,130 × g, creating two distinct layers. 660 µL (75% of the calculated volume) of the top layer, containing CPME and lipid, was transferred to a pre-weighed glass vial. Dispensed samples were evaporated under compressed air until no visible solvent remained and then lyophilized overnight for total solvent removal. The remaining lipid was weighed and corrected by subtracting the average residual mass measured in the blank samples. HPLC analysis The extracellular concentrations of glucose, citrate and polyols (erythritol, arabitol and mannitol) were determined by high-performance liquid chromatography analysis. To that end, a 1-mL broth sample was filtered through a 0.2-µm syringe filter and analyzed using an Aminex HPX-87H column (300 mm × 7.8 mm) (Bio-Rad) on an Agilent 1260 Infinity II HPLC equipped with a refractive index detector (Agilent Technologies).
The column was eluted with 5 mM H2SO4 at a flow rate of 0.6 mL min−1 at 45 °C for 25 min. Analytes were identified by comparing peak retention times to those of known standard substances, and their amounts were quantified by comparing the peak area of the analyte to the peak area of the standard substance at known concentrations. Additional file 1. Additional figures and tables.
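The bioprocess metrics reported in this study (lipid content, total lipid yield and cell-specific lipid productivity) follow from the gravimetric and HPLC measurements described above. A minimal sketch, using made-up placeholder numbers; the normalization of productivity to lipid-free biomass over the lipogenic window (day 2 to day 5) is an assumption based on the description in the Methods, not the authors' exact formula:

```python
# Illustrative calculation of the bioprocess metrics used in this study,
# from gravimetric (DCW, lipid) and HPLC (glucose) measurements.
# All numbers below are made-up placeholders, not measured data.

dcw_g_per_l = 30.0               # total dry cell weight at harvest
lipid_g_per_l = 12.0             # total extracted lipid at harvest
glucose_consumed_g_per_l = 90.0  # from HPLC (initial minus residual glucose)

lipid_content = lipid_g_per_l / dcw_g_per_l             # g lipid per g DCW
lipid_yield = lipid_g_per_l / glucose_consumed_g_per_l  # g lipid per g glucose

# Cell-specific productivity over the lipogenic phase (day 2 to day 5),
# expressed per gram of lipid-free biomass per day (assumed definition).
lipid_day2, lipid_day5 = 4.0, 12.0                # g/L lipid at the two time points
lipid_free_biomass = dcw_g_per_l - lipid_g_per_l  # g/L non-lipid biomass
days = 3.0
cell_specific_productivity = (lipid_day5 - lipid_day2) / (lipid_free_biomass * days)

print(f"lipid content: {lipid_content:.2f} g/g DCW")
print(f"lipid yield:   {lipid_yield:.2f} g/g glucose")
print(f"cell-specific productivity: {cell_specific_productivity:.3f} g lipid / (g biomass * day)")
```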
7,376.6
2021-05-04T00:00:00.000
[ "Environmental Science", "Engineering", "Biology" ]
LOSS OF INFORMATION DURING DESIGN & CONSTRUCTION FOR HIGHWAYS ASSET MANAGEMENT: A GEOBIM PERSPECTIVE : Modern cities will have a catalytic role in regulating global economic growth and development, highlighting their role as centers of economic activity. With urbanisation being a consequence of that, the built environment is pressured to withstand the rapid increase in demand of buildings as well as safe, resilient and sustainable transportation infrastructure. Transportation Infrastructure has a unique characteristic: it is interconnected and thus, it is essential for the stakeholders to be able to capture, analyse and visualise these interlinked relationships efficiently and effectively. This requirement is addressed by an Asset Information Management System (AIMS) which enables the capture of such information from the early stages of a transport infrastructure construction project. Building Information Modelling (BIM) and Geographic Information Science/Systems (GIS) are two domains which facilitate the authoring, management and exchange of asset information by providing the location underpinning, both in the short term and through the very long lifespan of the infrastructure. These systems are not interoperable by nature, with extensive Extract/Transform/Load procedures required when developing an integrated location-based Asset Management system, with consequent loss of information. The purpose of this paper is to provide an insight regarding the information lifecycle during Design and Construction on a Highways Project, focusing on identifying the stages in which loss of information can impact decision-making during operational Asset Management: (i) 3D Model to IFC, (ii) IFC to AIM and (iii) IFC to 3DGIS for AIM. The discussion highlights the significance of custom property sets and classification systems to bridge the different data structures as well as the power of 3D in visualizing Asset Information, with future work focusing on the potential of early BIM-GIS integration for operational AM. INTRODUCTION Transportation Infrastructure has a vital role in the social prosperity, economic growth and environmental sustainability of a country (Liu et al., 2019), with the transportation network often being one of the largest and most valuable public infrastructure assets of a country (Sinha et al., 2017;Shah et al., 2017). The expectations from the public are high; ever evolving requirements around safety, reduced journey times and a demand for a well-maintained transportation network for example. National and local authorities aim to address these requirements under onerous financial requirements with the goal being to achieve maximized value from their assets with less resources (Shah et al., 2017). Asset Management (AM) is fundamental to this task, realising and extracting the value of what constitutes an Asset to the organisation with the Asset Management Plan (AMP) being the imperative tool to help the organisation reach its objectives (ISO 55000, 2014). Organizational change and fit for purpose use of the available technology, are fundamental to a successful AMP (Jafari, 2016) with software and data forming a key part of the latter (Shah et al., 2017). From a data perspective, these two interlinked challenges are underpinned by the implementation of Asset Information Management Systems (AIMS), with the common underlying element of any proposition being efficient and effective utilisation of data (Yang et al., 2019). 
Location-enabled data is of fundamental importance to Infrastructure Asset Management, not only to address issues relating to condition assessment, maintenance scheduling, health and safety, and strategic decision making (Garramone et al., 2020) but also to enable democratization of information via the power of 3D Visualisation . Location Data for AIMS In the lifecycle of infrastructure, two broad phases can be identified -Design & Construction and Operation & Maintenance. In the UK a key component of any major construction project is the Asset Information Requirements. These form the foundation for the information required for handover between D&C and O&M, to enable an organisation to maintain its assets during the operational stages of its lifecycle. Location intelligence and the use of Geographic Information Science/Systems (GIS) are typically incorporated as part of the contractual requirements for AIM. In parallel with this, the UK BIM Mandate requires the use of Building Information Modelling for major infrastructure construction projects. BIM is very information rich, and this location data is expensive to capture, update and maintain, particularly over the long term where format and storage and software changes need to be addressed. However, although such location-enabled information is fundamental to infrastructure asset O&M, much of this construction information is currently discarded, with a small fraction handed over to O & M. It is hypothesized that this discarded information -structural detail, construction material detail and more -may also be relevant for long term asset operation and maintenance. Provided it can be integrated into the AIM in a suitable manner and the cost of long-term data curation can be justified, making use of such data could save extensive future data capture costs, and also provide hidden structural detail that can't be captured retrospectively. However, while it might be considered ideal never to throw any information away, in reality it is important to understand, justify and evaluate the cost/benefit issues, relating to both the decision to maintain information long term and that to discard it. Purpose & Research Question This paper focuses on an important component of this wider information management challenge: the understanding, and documentation of information losses when comparing the subset of information required by the AIR to the wealth of construction data available from BIM. Such losses are caused both by the AIR specification itself, but also by the challenges encountered when converting BIM to 2D or 3D GIS for integration with AIMS in one system. BIM and GIS are not interoperable by nature and AIMs adds a third domain into this loss of information challenge. Working within a transportation infrastructure context, the proposed method documents the loss of information at three different levels: (i) BIM Authoring Tools to IFC, (ii) IFC to AIM and (iii) IFC to 3D/2D GIS for AIM. The reason for investigating the IFC-GIS-AIM route is the specifications of the AIRs which require the delivery of GIS datasets with this work exploring the potential of GeoBIM producing this information, rather than recreating using laborious and time consuming digitisation processes. Therefore, the Research Question this paper aims to address is: "What are the information losses when integrating BIM into 3D GIS to provide location underpinning for Asset Management?" 
This research is part of ongoing work investigating the information lifecycle in Transportation Infrastructure projects, and particularly Highways and Rail. The outcomes provide Asset Operators with a clearer picture of information available to them, to allow them to make an informed decision as to whether to invest time, effort and finances in improving existing integration processes. Potential of GeoBIM for AM: The geospatial community has identified Asset Management as a principal application field that can benefit from GIS-BIM integration . Garramone et al. (2020) propose multiple services within AM that may benefit from GeoBIM such as Condition Inspection & Monitoring, Facility Management and Risk Management. However, there are fundamental issues to be addressed focusing on interoperability, data structures and understanding of the GeoBIM concept. Boyes et al. (2017) investigate the integration of GIS-BIM for asset management in the Crossrail infrastructure project in UK, summarizing the key findings into: (i) geometric differentiations between BIM and GIS and (ii) decision making during asset tagging in relation to space reservations within the models. Park et al. (2014) explore BIM-GIS integration to estimate the project cost of a road infrastructure project, including operational and maintenance. A system is proposed, which provides a 3D visualisation of the project simulating different scenarios by linking information related to quantities, earthworks, land acquisition costs and O & M costs, offering to the stakeholders more informed decision making with regards to the optimal scenario. Farghaly et al. (2017) investigate a big data system architecture for Asset Management utilizing BIM. In this research work, BIM requirements are identified, highlighting the stage: "data to information" critical, as the collected data need to be properly stored, managed and exchanged, but also enriched by integrating multiple data sources and primarily GIS data. Kang and Hong, (2015) propose a BIM-GIS solution for facility management in order to address the interoperability challenges of the supported data formats. Utilizing an ETL (Extract-Transform-Load) process they create a workflow that facilitates a unidirectional conversion from BIM (IFC) to 3DGIS (CityGML) and links the generated models with data sources that contain information relevant to facility management. In this particular case study, GIS is utilized as the primary client and visualisation tool to store and exchange this information in 3D and provide access to a detailed BIM Viewer. Integration Challenges: Extensive research efforts are directed towards BIM-GIS interoperability. The "GeoBIM Benchmark 2019" ) initiative commenced in 2017 and concluded in 2019, approaches BIM-GIS integration from two perspectives: (i) technical challenges in data interoperability and (ii) understanding requirements and use cases between the BIM and GEO domains . Biljecki and Tauscher (2019) summarise in detail the most common errors noted in the IFC-CityGML conversion, focusing particularly on geometry, semantics and topology. With regards to geometry, the major challenges involve the conversion of curved surfaces, missing geometric features and poorly geolocated geometries due to the use of local coordinate systems (Biljecki and Tauscher, 2019;Noardo et al., 2019). 
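One of the geolocation issues raised above, models authored on a local site grid ending up poorly georeferenced in GIS, is usually resolved by reprojecting coordinates into a national grid on the GIS side. A minimal sketch with the pyproj library; the coordinates are placeholders and the project-specific site-grid-to-global transformation is not shown:

```python
# Reprojecting coordinates into British National Grid (EPSG:27700) with pyproj.
# Illustrative only: an IFC model on a local site grid first needs the
# project-specific site-grid-to-global transformation, which is not shown here.
from pyproj import Transformer

# WGS 84 longitude/latitude -> OSGB36 / British National Grid easting/northing
to_bng = Transformer.from_crs("EPSG:4326", "EPSG:27700", always_xy=True)

lon, lat = -1.2577, 51.7520  # placeholder coordinates
easting, northing = to_bng.transform(lon, lat)
print(f"E {easting:.1f} m, N {northing:.1f} m")
```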
The "GeoBIM benchmark 2019" (Noardo et al., 2019) investigates the conversion from IFC to 3DGIS, using a variety of 3D models which represent different types of buildings. One of the most important aspects of BIM-GIS integration has been the loss of semantic information during the conversion process. The 3D Models produced by BIM authoring tools are typically richer in terms of representing geometric details, which consequently leads to increased semantic incorporation and notable differences in the semantic classification compared to a 3DGIS model (Biljecki and Tauscher, 2019). Therefore, the semantic differences between BIM and GIS Standards lead to features being either mismapped or classified within non-relevant entities (Floros et al., 2020). Arroyo Ohori et al. (2017) focus on the topological challenges that arise during an IFC to CityGML conversion, which are also interconnected with the geometric issues such as self-intersecting polygons and non-planar surfaces. On top of that, converting adjacent surfaces or surfaces that share the same characteristics from IFC to 3DGIS introduces topological inconsistencies, as the modelling is facilitated using the "Xlinks" functionality (Floros et al., 2018). With specific focus on infrastructure, preliminary work relating to understanding the Asset Information Systems and Requirements for Highways and their relation to openBIM Standards such as IFC has highlighted integration challenges at a data level (Floros et al., 2019), while the maturity of model information from Design to Construction to Handover for Rail Infrastructure creates significant barriers in the downstream BIM-GIS-Asset Management interoperability for O & M (Floros et al., 2020). ETL as a Conversion Tool: One of the most used approaches when converting from BIM to GIS, and particularly IFC to CityGML, is the use of an Extract-Transform-Load (ETL) process. ETL offers the capability of breaking down, or extracting, a model into its constituent elements (IfcWall, IfcSlab, IfcPile) so the user is able to manipulate the data within a graphical interface by using "Transformers" that perform specific actions, such as geometry conversion from solid to b-rep. The modified dataset is written to the desired format, which can then be viewed and interrogated via relevant data viewers. The ETL process is typically semi-automated (Liu et al., 2017), providing significant flexibility tailored to different model requirements, and is able to manipulate and retain semantic information (Floros et al., 2018). There are, however, considerable drawbacks when it comes to the implementation of the ETL process to convert IFC models to 3DGIS. Firstly, the configuration of the process is heavily dependent on the developer's interpretation of the model elements and the mapping to the corresponding 3DGIS entities. Secondly, depending on the size and elements of the model, it can be performance intensive. Additionally, it is heavily dependent on the semantic enrichment of the data source, which leads to the requirement of recreating parts of the process to suit the needs of models with different structure (Noardo et al., 2019; Liu et al., 2017; Floros et al., 2020).
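To make the "Extract" step of such an ETL workflow concrete, the sketch below pulls element classes and their property sets out of an IFC file with the open-source ifcopenshell library. This is an illustrative alternative to graphical ETL tools such as FME, not the workflow used in this project, and the file name is hypothetical:

```python
# Minimal "Extract" step of a BIM-to-GIS ETL workflow using ifcopenshell.
# The file name is a placeholder; install the library with `pip install ifcopenshell`.
import ifcopenshell
from collections import Counter

model = ifcopenshell.open("bridge_scheme_design.ifc")  # hypothetical IFC 2x3 export

# Count the elements per IFC class (cf. the classes listed in Table 1).
counts = Counter(element.is_a() for element in model.by_type("IfcBuildingElement"))
for ifc_class, n in sorted(counts.items()):
    print(f"{ifc_class}: {n} features")

# Collect the property sets attached to each element via IfcRelDefinesByProperties,
# so custom Psets (Material, Length, Volume, Uniclass2015, ...) can be carried forward.
def property_sets(element):
    psets = {}
    for rel in getattr(element, "IsDefinedBy", []):
        if rel.is_a("IfcRelDefinesByProperties"):
            definition = rel.RelatingPropertyDefinition
            if definition.is_a("IfcPropertySet"):
                psets[definition.Name] = {
                    p.Name: (p.NominalValue.wrappedValue if p.NominalValue else None)
                    for p in definition.HasProperties
                    if p.is_a("IfcPropertySingleValue")
                }
    return psets

slabs = model.by_type("IfcSlab")
print(property_sets(slabs[0]) if slabs else "no IfcSlab elements found")
```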
The British Set of Standards PAS 1192 series, specifies the Information Models to be developed during the lifecycle, starting with the Project Information Model (PIM), which consists of Graphical Information, Non-Graphical Information and Documentation and eventually evolves to the Asset Information Model (AIM) post-handover, which is used by the organisation to operate and maintain its assets (PAS 1192-3, 2013). IFC: IFC is an OpenBIM Standard developed by buildingSMART (buildingSMART, 2021) focusing mainly on Buildings with the inclusion of Infrastructure objects such as Bridges being currently under development. Uniclass 2015: Uniclass2015 is a classification system of the built environment for the AECOO industry, with multiple levels of granularity, varying from the description of the generic asset class (Highways, Bridge) up to specific asset elements (Beams, Handrail) (NBS, 2021). Uniclass2015 is introduced typically during the Design stage and is used for multiple purposes including Estimating, CAD Drawing and Layer specification (Gelder, 2015). Data Provider The data for this research is provided by the Skanska 1 UK-Infrastructure BIM team working on the Regional Delivery Partnership (RDP) scheme on behalf of Highways England (HE) 2 . Skanska UK-Infrastructure is the Main Contractor, working with Mott MacDonald as the Designer to deliver the Design and Construction of the above scheme. Datasets The main datasets utilised in this work are: (i) a 3D Model of a Bridge for Highways and (ii) the Asset Data Management Manual of Highways England which defines the scope of the AIMS and its Asset Information Requirements. 3D Model -Bridge: The dataset that the proposed method is developed upon is a Bridge in the stage of Scheme Design, which has been created within Autodesk Revit and exported to IFC 2X3 Version ( fig.1). This Bridge is an example of the Structures involved during the Design and Construction of Highways and it has been selected mainly for two reasons: firstly, it has one of the most asset information rich semantic structures within the ADMM and secondly it is an Infrastructure Asset that can be used to support other types of infrastructure such as Rail, which creates opportunities for the scaling implementation of the proposed method across the two infrastructure sectors (Highways and Rail). The coordinate system used for the design of the Model is HE Local Grid reprojected to British National Grid once transferred to GIS. Asset Data Management Manual (ADMM): The AIR in this case is the ADMM for HE (ADMM, 2021) consists of four parts: (i) Data Principles and Governance, (ii) Requirements and Additional Information, (iii) Data Dictionary and (iv) Asset Reference Catalogue. In this work, the first (1 st ) part sets the tone in understanding HEs AM Vision focusing on linking all Asset Information within a single system so they can operate their Assets efficiently and effectively. The Data Dictionary, alongside the Asset Requirements is the data-oriented version of the ADMM and the one that is being utilised in this paper. The Data Dictionary contains information about the Asset Classes, Asset Names, Asset Attributes as well as the linking to Uniclass2015. The Data dictionary is provided as an extracted Excel spreadsheet that describes the structure of HE's AIMS. The Asset Class describes generic classes such as Structures, Pavement or Geotechnical, with the Asset Name providing the granular detail to a Bridge, a Retaining Wall or a Gantry. 
Attribute Names include the additional information needed to describe the condition of the Asset. Figure 2 provides an example of the ADMM extract in a UML representation. Software Tools Feature Manipulation Engine 2021 (FME) is the Extract-Transform-Load (ETL) software tool that is used to process and transform the 3D Model and facilitate the mapping between the 3D Model and the Asset Data Management Manual (ADMM). Autodesk Revit 2020 is the BIM Authoring tool of the 3D Model, while the GIS outputs are visualised in ESRI ArcGIS Pro 2.7.2. (1 Skanska is a global Construction Company operating in the UK. 2 Highways England is a public body operating and maintaining Highways Assets in the UK.) METHOD The proposed method of this work consists of 5 stages, as illustrated in Figure 3. Phase 1: Extract of IFC Classes The IFC model of the Bridge is produced as an export from Autodesk Revit and is then imported into Feature Manipulation Engine (FME) in order to extract its individual classes, as summarised in Table 1 (Classes of the IFC Bridge): IfcBeam, IfcBuildingElementProxy, IfcColumn, IfcRailing, IfcSlab, IfcStair, IfcWall, IfcWallStandardCase and IfcPropertySet. As described in Section 3, the model contains custom Property Sets that are not natively included within IFC. These Property Sets include information generated during Design and Construction such as Material, Length, and Volume among others. Therefore, during the extract process, the custom property sets are linked with the relevant classes using a "parent-child" id relationship. Extract of the ADMM Structure The ADMM extract is processed within FME in order to group records where Asset Class = 'Structures' and Asset Name = 'BLC', which stands for Bridge and Large Culvert. For each Asset Name, the Uniclass2015 and Attribute Names are extracted in a tabular format pending further processing. Linking IFC with ADMM The next step in the process links the ADMM structure with the IFC schema. The link is facilitated by using Uniclass2015 as the common join attribute between the two schemas within FME. Mapping to 2D/3DGIS Next follows the conversion from a 3D Model in IFC to a 2D and 3DGIS Model, which incorporates the Asset Information Requirements as a data schema whilst also logging the transition and potential loss of information from IFC to GIS. To address the interoperability between BIM and GIS, the workflow within FME performs the conversion in two stages: (i) geometric conversion and (ii) semantic mapping. The Geometric conversion involves the transition of Constructive Solid Geometry (CSG) Solid Geometry to a b-rep Geometry in GIS, while the Semantic Mapping maintains the semantic structure of the IFC, enriched by the ADMM. Figure 4 presents an example of the 3DGIS output and its semantic structure. Documenting Information Loss The final step of the process involves the documentation of information loss for both the 3D and 2D GIS outputs, focusing on two aspects: geometry and semantics (Table 2). The set of criteria selected captures the geometry type of the features (i.e., Polyhedra, Solids, Polygons), as well as the Number of features before and after conversion for each IFC Class. This allows a high-level check to ensure that the input features match the produced output. The second set of criteria focuses on capturing the property sets that have been dropped during the conversion to GIS. These property sets (i.e., Concrete Grade, Location) are stored in a different table to facilitate the semantic comparison with the AIRs.
Lastly, information loss with regards to the 3rd Dimension (i.e., Height, Volume) is parsed to the 2D GIS output as an additional attribute. (Table 2 lists the documentation criteria: geometry type and number of features before and after conversion, the property sets dropped during conversion, and the dimension of the output, 3D or 2D.) Figure 5 summarises the different Object Types that are contained within each IFC Class. IfcBuildingElementProxy stores the most Object Types, followed by IfcSlab, both consisting of elements that differ considerably in nature, leading to a misleading mapping for the end user. This highlights the fact that there is information loss from a mapping perspective from the very early stages of the process, as the IFC Model is generated using the built-in export functionality of the authoring software. To further illustrate how visualisation can potentially impact decision making, Figure 8 highlights an element that is not easily recognisable, a stiffener plate, in the 3D view; the same object in a 2D GIS does not provide an identifiable geometric visualisation, and its 3D information (e.g., volume) is stored as an attribute of the 2D geometry. Figure 9 presents a UML diagram which highlights the association at a conceptual level between the IFC Schema and the Asset Information Requirements. The UML diagram describes the available IFC Classes and the Object Types they enclose, as well as their relationship with the relevant Asset Names as per the ADMM (in blue). The connection of the two is facilitated using Uniclass2015 where available, as there are instances, such as the IfcRailing, in which the Uniclass2015 value assigned during Design does not match the Uniclass2015 value according to the ADMM. This highlights a second stage at which information loss occurs between conversions, at a semantic level. With regards to the Number of Geometry Features in the original IFC Model as well as in the 2D and 3D GIS outputs, all geometric features are maintained during the conversion across all classes. The Geometry Type has changed from CSG Solid to a Polyhedral surface for 3D and a Polygon for 2D. Figure 10 presents a semantic comparison between the property sets that have been dropped (orange colour) and the AIM requirements (blue colour) as per the ADMM for the IfcSlab. DISCUSSION This paper investigates the information loss when transitioning from construction-driven BIM to a 2D/3D GIS underpinned AIM, extending ongoing work with regards to capturing data requirements and understanding the potential of GeoBIM during Operation & Maintenance for Transportation Infrastructure, in particular Highways and Rail. The conversion to GIS is contractually driven, but also adopts the philosophy of generating the information once (in BIM in this scenario) and using it multiple times, with data longevity also being considered. It focuses on a bridge as a Highways Asset during the stage of scheme design and examines the different stages of the asset's information lifecycle until it reaches the handover stage, addressing the question: "What are the information losses when integrating BIM into 3D GIS to provide location underpinning for Asset Management?" Current work addresses this challenge by presenting an approach to integrate BIM, GIS and AIM.
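The core of that approach, joining IFC property sets to the ADMM on Uniclass2015 and logging what is dropped, can be illustrated with a short sketch. Plain Python is used here rather than the FME workflow applied in the project, and every record below is a placeholder, not a value from the ADMM or the bridge model:

```python
# Sketch of the "Linking IFC with ADMM" and "Documenting Information Loss" steps.
# All records below are illustrative placeholders, not project data.

# Property sets extracted per IFC element (simplified).
ifc_elements = [
    {"ifc_class": "IfcSlab", "uniclass2015": "EF_30_10",
     "psets": {"Volume": 12.4, "Material": "C40/50"}},
    {"ifc_class": "IfcRailing", "uniclass2015": "EF_25_14",
     "psets": {"Length": 36.0}},
]

# ADMM data dictionary rows for Asset Name 'BLC' (Bridge and Large Culvert).
admm_rows = {
    "EF_30_10": {"asset_class": "Structures", "asset_name": "BLC",
                 "attributes": ["Material", "Volume"]},
    # no ADMM entry for EF_25_14 -> the railing cannot be mapped automatically
}

for element in ifc_elements:
    admm = admm_rows.get(element["uniclass2015"])
    if admm is None:
        print(f"{element['ifc_class']}: no ADMM match for "
              f"{element['uniclass2015']} (semantic loss)")
        continue
    kept = {k: v for k, v in element["psets"].items() if k in admm["attributes"]}
    dropped = sorted(set(element["psets"]) - set(kept))
    print(f"{element['ifc_class']} -> {admm['asset_name']}: kept {kept}, dropped {dropped}")
```

Logging the unmatched Uniclass2015 codes and the dropped property sets in this way is what allows the before/after comparison described in the documentation criteria.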
Exploring the transition of information at a conceptual level from proprietary data formats to OpenBIM Standards such as Industry Foundation Classes (IFC), their relationship to an AIM, as well as the generation of the required GIS output in both 3D and 2D, aims to provide insight into the technical and non-technical challenges that prevent the uptake of GeoBIM. A method is proposed to break down the structure of an IFC Model and link it with the structure of the prospective Asset Information Model in an effort to streamline the flow of information between the two. This method highlights the importance of custom Property Sets to address the limitations of the IFC schema for Infrastructure and enable the mapping with other systems. In this work, the Uniclass2015 classification system is the common attribute which is used to join the IFC schema with the Asset Information Model. The method continues with the generation of the asset as a GIS output, both in 3D and 2D, in an effort to understand the potential benefits of 3D Visualisation and increased Level of Detail. The method concludes by proposing a set of metadata, aligned with the validation process implemented in the project, to be captured during the conversion process. This aids the documentation and evaluation of potential loss of information that can impact decision-making for Asset Management. As for limitations of the study, the proposed set focuses on geometry and semantics, without taking into consideration information loss that may occur from texture loss or topological errors; therefore, the proposed property sets are not an exhaustive means of capturing information loss. Additionally, this study is based upon the particular design guidelines of one Contractor, leaving room for enriching the results as part of future work. To answer the research question, it is essential to understand the type of information that is being lost and propose a scalable methodology that can capture this information. The benefit of the current method is its automation during the conversion, allowing the batch processing of multiple models. As demonstrated in Section 5, there are three stages during which the loss of information is encountered, and to a considerable extent this is dependent on the Level of Model Information and the custom property sets that enrich it, creating significant challenges to a unified standardised method. However, the use of Uniclass2015 is promising as it can provide a unified approach when describing an asset. This paper considers the ADMM as the guiding document for identifying the Asset Information Requirements for Highways Assets. Three stages during the information lifecycle within Design, Construction and Handover are identified:
• 3D Model to IFC: The export of a 3D Model to IFC introduces mismapping of information, which complicates the integration efforts with the Asset Information Model or 3DGIS Standards to be used for Operation & Maintenance.
• IFC to AIM: The main link between the IFC and the AIM is Uniclass2015. However, the classification values have been specified by different parties: the Contractor assigns the values during the generation of the 3D Model, while the Owner/Operator is responsible for the values within the AIM. Thus, there are instances in which different Uniclass2015 values are assigned to describe the same Asset, which may lead to mismapping when transferring information from one system to another.
• IFC to 2D GIS for AIM: As a 2D GIS dataset is the desired output to facilitate the delivery of Asset Information, the loss of information is focused on the drop of the 3D Geometry and the conversion to a polygon representation of the Asset. The custom property sets are maintained during the conversion process before being mapped to the ADMM requirements, whilst information that describes geometric characteristics, such as Volume, is incorporated as an attribute.
• IFC to 3DGIS for AIM: The 3D GIS dataset is compared with the respective 2D GIS output to identify information loss and discuss the potential of GeoBIM in AM. The geometry is converted to a polyhedral representation, whilst the semantic structure follows the AIM specifications.
To validate the accuracy of the conversion and the maintenance of the information, the following rules have been applied during the ETL conversion process; they focus on: (i) number of features before and after conversion, (ii) geometry type and (iii) semantic coherence to the AIM specification. Whilst the technical challenges of BIM-to-GIS conversion are not yet fully resolved, in this work the stages of producing the IFC model, as well as the mapping to the AIM, are considered more impactful to the downstream BIM-GIS interoperability for O & M. CONCLUSIONS & FUTURE RESEARCH WORK In the United Kingdom, it is estimated that interoperable, accessible and reliable location data can be worth over GBP 4 billion per year for the infrastructure sector, contributing towards the development of a location-enabled Information Model that will improve the resilience of the UK's Infrastructure (UK Geospatial Strategy 2020-2025). Two key sources are BIM and GIS, with their integration being instrumental in providing the multi-scale location framework which is required to underpin all national digital twins. Lifecycle Information Management aims to provide a seamless flow of data across the technical lifecycle, delivering the "right" piece of information to the "right" person at the right time with minimum cost. GeoBIM has a role in this process, with data integration serving the purpose of collecting once and using multiple times. There are, however, challenges around system interoperability and awareness of the GeoBIM potential that need to be addressed. Addressing the contractual requirements in an effective and economic way leads to the development of sustainable processes for the maintenance of information during the full lifecycle of an infrastructure asset, which typically spans 100+ years. Documenting the information that is lost in the early stages of the project but could be valuable later aims to highlight the potential of GeoBIM as the linking mechanism among the different stages of the technical lifecycle. This work explores whether the existence of GeoBIM from the early stages of the project will benefit the capture and maintenance of information in the long term, by integrating existing information produced from the BIM and GEO domains to serve operational AM. This, in turn, will enable data-driven processes which facilitate empowered, informed and cost-effective decision making across the lifecycle. Future research emphasizes communicating the findings of this work by interviewing Asset Managers, as well as enhancing the proposed method of documenting loss of information. The proposed method can be a starting point for initiating these discussions, empowered by 3D visualization of potential information loss.
This will help to raise awareness, highlight the importance of data integration and facilitate the collaboration between the Asset Managers and the Construction industry to showcase the potential of GeoBIM.
6,652.8
2021-10-07T00:00:00.000
[ "Computer Science" ]
Ubiquitin-specific protease-44 inhibits the proliferation and migration of cells via inhibition of JNK pathway in clear cell renal cell carcinoma Background Clear cell renal cell carcinoma (ccRCC) is the most common form of adult kidney cancer. Ubiquitin-specific protease (USP)44 has been reported to be involved in various cancers. We investigated the function, role and molecular mechanism of USP44 in ccRCC. Methods Data obtained from the Cancer Genome Atlas Data Portal and Gene Expression Omnibus database were analyzed to uncover the clinical relevance of USP44 expression and tumor development. USP44 function in the proliferation and migration of tumor cells was assessed by cellular and molecular analyses using ccRCC lines (786-O cells and Caki-1 cells). Results USP44 showed low expression in ccRCC tissues compared with that in normal tissue. USP44 expression was negatively correlated with tumor stage, tumor grade, and patient survival. USP44 overexpression inhibited the proliferation and migration of 786-O cells and Caki-1 cells significantly. USP44 overexpression also inhibited cell proliferation by upregulating expression of P21 and downregulating cyclin-D1 expression, and inhibited cell migration by downregulating expression of matrix metalloproteinase (MMP)2 and MMP9. USP44 knockdown enhanced the proliferation and migration of 786-O cells and Caki-1 cells. USP44 function in inhibiting the proliferation and migration of 786-O cells and Caki-1 cells was associated with phosphorylation of Jun N-terminal kinase (JNK). Conclusion USP44 may be a marker for predicting ccRCC progression. Inhibition by USP44 of the proliferation and migration of 786-O cells and Caki-1 cells is dependent upon the JNK pathway. Background Renal cell carcinoma (RCC) represents 80-90% of adult kidney cancers. RCC incidence varies geographically, with the highest incidence being documented in developed countries [1]. Based on recent guidelines, the most efficacious treatment for early-stage clear cell renal cell carcinoma (ccRCC) is surgery and targeted therapy [2]. Unfortunately, the major cause of death for most ccRCC patients is the metastasis and recurrence of tumor cells [3]. Several new biomarkers have been explored to diagnose and predict the occurrence and development of ccRCC [4-6]. Chromosomal instability, leading to aneuploidy, is one of the hallmarks of human cancers [7]. Ubiquitin-specific protease (USP)44 is located at 12q22 and encodes a protein of 712 amino acids. USP44 is a member of a family of deubiquitinating enzymes and has an important role in human cancers [8]. USP44 regulates the separation and positioning of centrosomes, and the geometry of mitotic spindles [9]. USP44 can stabilize the protein expression of protectin in the cycle of healthy cells until all the chromosomes match correctly with spindle fibers, thereby preventing premature mitosis. By inhibiting USP44 expression in mice, the proportion of aneuploid cells and chromosomal instability can be increased significantly, making the animals more prone to malignant transformation [10,11]. Conversely, Zou and colleagues showed that USP44 overexpression promotes the malignancy of glioma [12]. However, the function and mechanism of action of USP44 in ccRCC have not been clarified, a knowledge gap we aimed to fill in the present study. The JNK inhibitor JNK-IN-8 (HY-13319, MCE, USA) was dissolved in dimethyl sulfoxide (DMSO) and diluted into a 0.5-μM working solution with complete culture medium, and the same amount of DMSO was set as the control.
Bioinformatics analysis Bioinformatics analysis was undertaken in accordance with the work of Jiangqiao and collaborators [13]. Level-3 gene count data and clinical information for ccRCC samples were obtained through the TCGA Data Portal. DESeq2 within the R Project for Statistical Computing (Vienna, Austria) was used to normalize the count data and to identify differentially expressed genes between cancer samples and normal samples. The normalized data were used primarily to analyze USP44 expression and its correlation with stage, grade and survival in ccRCC and adjacent noncancerous tissues. According to USP44 expression, clinical samples of ccRCC were divided into two groups for analyses. Kaplan-Meier survival curves were used to show the differences in overall survival between patients with high and low USP44 expression. Simultaneously, we calculated the correlation between USP44 expression and the age, sex, tumor stage and tumor grade of the patient using the t-test, and the obtained data were visualized with ggplot2 within the R Project for Statistical Computing. Construction and production of the USP44-overexpression lentivirus and short hairpin (sh)RNA USP44 lentivirus An overexpression vector with a Flag tag and shRNA vectors targeting the USP44 (homo) gene were designed and constructed according to the method described by Jiangqiao and colleagues [13]. The gene registration number is NM_001347937.1. pHAGE-3xflag was used as the carrier. The primers were h-USP44-NF (AAACGATTCAGGTGGTCAGG) and h-USP44-NR (AGTGTACCCAGAACCCTCCT). The sequence of pLKO.1-h-USP44-shRNA1 was CGGATGATGAACTTGTGCAAT. The sequence of pLKO.1-h-USP44-shRNA2 was GCACAGGAGAAGGATACTAAT. Wound-healing assay 786-O cells and Caki-1 cells were inoculated into six-well plates (140675; Thermo Scientific) at 3 × 10^5 cells per well and incubated overnight. After that, the original culture medium was replaced with DMEM containing mitomycin (10 μg/mL). Then, cells were cultured for 12 h. Cells were wounded with a pipette tip and photographs taken immediately (0 h) as well as 6 h and 12 h after wounding. The Cell Migration Index was then calculated using the following formula: Cell Migration Index = (wound width at 0 h − wound width at 6 h or 12 h) × 100 / wound width at 0 h. Western blotting Proteins were extracted from 786-O cells and Caki-1 cells according to standard protocols, with protease inhibitors (04693132001; Roche) and phosphatase inhibitors (4906837001; Roche) added. Protein concentrations were determined using a Bicinchoninic Acid Protein Assay kit (23225; Thermo Fisher Scientific). Briefly, we separated protein samples by sodium dodecyl sulfate-polyacrylamide gel electrophoresis on 12.5% gels, and then transferred them to nitrocellulose membranes. We blocked the nitrocellulose membranes using 5% nonfat dry milk in TBS-T buffer and incubated them overnight with primary antibody at 4°C. After rinsing the blots extensively with TBS-T buffer, incubation with secondary antibodies was undertaken for 1 h. We applied a ChemiDoc™ XRS+ gel-imaging system (Bio-Rad Laboratories, Hercules, CA, USA) to detect the target bands. Reverse transcription-polymerase chain reaction (RT-PCR) Total RNA of the 786-O and Caki-1 cell lines was extracted with TRIzol® Reagent (15596-026; Invitrogen, Carlsbad, CA, USA). Then, total RNA was reverse-transcribed into complementary (c)DNA using a Transcriptor First Strand cDNA Synthesis kit (04896866001; Roche) according to manufacturer instructions.
SYBR® Green (04887352001; Roche) was used to quantify the PCR-amplification products. mRNA expression of target genes was normalized to that of β-actin. All the primer information is given in Table 1. Statistical analyses Data are the mean ± standard error. We used SPSS v19.0 (IBM, Armonk, NY, USA) for statistical analyses. The Student's t-test was used to analyze all data. P < 0.05 was considered significant. USP44 expression is decreased in ccRCC tissue and is correlated with tumor stage, tumor grade, and patient survival Analyses of information from the TCGA Data Portal demonstrated that USP44 expression was significantly lower in ccRCC specimens than in normal tissues (Fig. 1a). Data analyses from the Gene Expression Omnibus (GEO) 102101 database confirmed this result (Fig. 1b). The relationship between USP44 expression and clinical characteristics is shown in Table 2. Subsequently, a subgroup analysis was undertaken based on the stage and grade of ccRCC. USP44 expression was closely related to the stage and grade of ccRCC (Fig. 1c, d). With increasing stage and grade, USP44 expression showed a gradual decrease. USP44 expression was also closely related to patient survival (Fig. 1e). Based on these results, USP44 might be a potential marker to predict ccRCC progression and may play an important part in ccRCC progression. USP44 overexpression inhibits proliferation of 786-O cells and Caki-1 cells We wished to explore the effect of USP44 in vitro. 786-O cells and Caki-1 cells show different metastatic and invasive abilities in the ccRCC model, so we chose these two cell lines for the experiments. Stable overexpressing cell lines were obtained by lentiviral infection of USP44 in 786-O cells and Caki-1 cells (Fig. 2a-d). The viability and proliferation potential of the cells were evaluated through the CCK-8 assay and the BrdU experiment. In comparison with negative controls, USP44 overexpression inhibited the proliferation of 786-O cells and Caki-1 cells (Fig. 2g, h), which demonstrated that USP44 can inhibit ccRCC proliferation. Studies have shown that expression of cyclin D1 and P21 is closely related to tumor occurrence, and that they are markers of proliferation of tumor cells [16,17]. The main function of cyclin D1 is to promote cell proliferation by regulating the cell cycle, which is closely related to the occurrence of tumors, and it is a marker of proliferation of tumor cells (including ccRCC) [18]. P21 expression is closely related to inhibition of tumor cells and can coordinate the relationship between the cell cycle, DNA replication and DNA repair by inhibiting the activity of cyclin-dependent kinase complexes [19]. USP44 expression was positively correlated with expression of the gene and protein of P21, and negatively correlated with expression of the gene and protein of cyclin D1 (Fig. 2i-l). Taken together, these results demonstrated that USP44 inhibited proliferation of 786-O cells and Caki-1 cells. USP44 overexpression inhibits migration of 786-O cells and Caki-1 cells We conducted a series of experiments to investigate whether USP44 overexpression inhibited the migration of 786-O cells and Caki-1 cells. First, we used Transwells to evaluate the effect of USP44 overexpression on cell migration. We found that USP44 overexpression slowed the migration of 786-O cells and Caki-1 cells significantly (Fig. 3a, b), which was consistent with our expectation.
Because the two types of tumor cells we used have different migration abilities, USP44 overexpression slowed the migration of 786-O cells at the early stage (2 h, 3 h) and slowed the migration of Caki-1 cells at the late stage (10 h, 24 h). Next, we undertook wound-healing experiments to confirm the effect of USP44 on migration. To avoid the effect of cell proliferation on cell migration, mitomycin was administered before the wound-healing experiments. USP44 overexpression slowed the migration of 786-O cells and Caki-1 cells significantly (Fig. 3c, d). MMP2 and MMP9 are closely related to the blood-vessel formation, growth and metastasis of tumors [20]. MMP2 and MMP9 have been recognized as markers of the migration and metastasis of ccRCC lines [21]. USP44 overexpression downregulated expression of the mRNA and protein of MMP2 and MMP9 in 786-O cells and Caki-1 cells (Fig. 3e-h). Collectively, these results demonstrated that USP44 inhibited the migration of 786-O cells and Caki-1 cells. USP44 knockdown promotes the proliferation and migration of Caki-1 cells We attempted to verify the role of USP44 in tumor cells by silencing USP44 expression with shRNAs. Two shRNAs were constructed to silence USP44 expression in Caki-1 cells (Fig. 4a). Consistent with our expectation, USP44 knockdown promoted cell proliferation significantly according to the CCK-8 assay and BrdU experiments (Fig. 4b, c). USP44 knockdown inhibited P21 expression and upregulated expression of cyclin D1 (Fig. 4d, e). The cell-migration assay showed that USP44 deficiency promoted the migration of Caki-1 cells (Fig. 4f), which was associated with upregulated expression of MMP2 and MMP9 (Fig. 4g, h). These results confirmed that USP44 knockdown enhanced the proliferation and migration of Caki-1 cells. USP44 suppressed the JNK signaling pathway in ccRCC The AKT and mitogen-activated protein kinase (MAPK) signaling pathways have important roles in the occurrence and development of malignant tumors [22]. To explore how USP44 regulates the proliferation and migration of tumor cells, we measured the activation of the AKT, JNK, p38, and ERK signaling pathways in the USP44-overexpression and control groups. USP44 overexpression decreased the level of JNK, but not that of AKT, p38 or ERK, compared with control cells in both cell lines (Fig. 5a, b). JNK expression was promoted when USP44 expression was knocked down, but no effect was observed on expression of AKT, p38 or ERK (Fig. 5c). The results stated above suggest that the JNK signaling pathway participated in the function of USP44 in regulating the proliferation of 786-O cells and Caki-1 cells. The promotional effect of USP44 knockdown on the proliferation and migration of 786-O cells and Caki-1 cells was dependent upon the JNK pathway To verify further whether the role of USP44 in ccRCC progression was dependent upon the JNK pathway, we blocked JNK activation with a JNK inhibitor and examined the proliferation and migration of 786-O cells and Caki-1 cells (Fig. 6a). The results showed that the ability of USP44 knockdown to promote the proliferation and migration of 786-O cells and Caki-1 cells was reduced significantly after treatment with the JNK inhibitor. Hence, USP44 regulated the proliferation and migration of 786-O cells and Caki-1 cells through the JNK signaling pathway (Fig. 6b, c). Discussion Several studies have demonstrated that the molecular mechanism of ccRCC is closely related to apoptosis, autophagy, hypoxia metabolism and immune imbalance [23].
However, the mechanisms of pathogenesis and metastasis of ccRCC have not been elucidated. The spindle assembly checkpoint (SAC) is an important mechanism that ensures accurate mitosis. An abnormality of the SAC is a key step in the development of aneuploidy and even tumors. Holland and colleagues reported that the deubiquitinase USP44, an important regulatory protein of the SAC, is closely associated with tumors [24]. We explored the role of USP44 as a tumor marker based on information from the TCGA Data Portal and the GEO 102010 database. The results showed that USP44 had low expression in tumor tissues and correlated with the pathologic stage and grade of tumors. Patients with high USP44 expression showed better survival. These results suggest that USP44 may be a good biomarker to predict ccRCC progression. Some studies have suggested that USP44 overexpression promotes tumor development, whereas other studies have indicated that USP44 inhibits proliferation of tumor cells [10,11,25,26]. Thus, we examined the effect of USP44 on ccRCC proliferation. Using 786-O cells and Caki-1 cells, we showed that USP44 overexpression inhibited proliferation of these two cell lines. The genes associated with proliferation of these two cell lines were also regulated by USP44 overexpression. The metastatic potential of ccRCC is the main factor leading to the death of affected patients [27]. Treatment of metastatic ccRCC has changed considerably over recent years [28]. The US Food and Drug Administration has approved agents to treat metastatic ccRCC, including immunotherapeutic drugs, antiangiogenic agents, and mammalian target of rapamycin (mTOR) inhibitors [1,29]. Nevertheless, even with these treatments, many patients with metastatic ccRCC have very short survival. We demonstrated that USP44 overexpression inhibited migration of tumor cells through wound-healing and cell-migration experiments; to avoid the effect of cell proliferation on cell migration, mitomycin was administered beforehand. The MMP family is involved in breakdown of the extracellular matrix in health and disease (e.g., metastasis) [20]. MMP2 and MMP9 are closely related to the invasion and metastasis of several types of tumor cells [30]. Our data showed that USP44 overexpression downregulated MMP2 and MMP9 in 786-O cells and Caki-1 cells, a reminder that ccRCC metastasis is related to expression of MMP2 and MMP9. Based on the results from Caki-1 cells with USP44 silenced by shRNAs, we demonstrated, from the reverse direction, that USP44 inhibits ccRCC progression. Whether a deubiquitinating enzyme has a role in promoting or inhibiting cancer is closely related to the function of its substrate protein [31]. Substrate molecules regulate several tumor-associated signaling pathways: p53, nuclear factor-kappa B, Wnt, transforming growth factor-β, and histone epigenetic modifications. These signaling pathways interact with each other. Upregulation of USP expression in tumor cells often suggests that its substrate protein can promote the malignant progression of cancer cells [32]. Downregulated expression of a USP suggests that its substrate is usually a tumor suppressor. Each USP has multiple substrates, and the same substrates may be regulated by multiple USPs [33]. Therefore, the regulatory network of a USP on a tumor-cell signaling pathway is extremely complex. AKT, a serine/threonine protein kinase of the PI3K/AKT pathway, is involved in tumorigenesis (including ccRCC) [34]. If cells are stimulated by extracellular signals, PI3K activates AKT, and the latter further activates its downstream factor mTOR.
The MAPK signaling pathway has crucial roles in the occurrence, development, treatment and prognosis of malignant tumors [35]. Its downstream branches include JNK, ERK and p38, which are associated with the growth and proliferation of tumor cells [36]. AKT-JNK/p38/ERK signaling has been shown to be involved in the progression of lung cancer and pancreatic cancer [34,37]. We measured the protein activity of JNK, AKT, ERK and p38 and found that USP44 inhibited the JNK pathway but not the AKT, ERK or p38 pathways. Rescue experiments showed that the promotion of tumor-cell proliferation and migration caused by silencing USP44 expression could be blocked by a JNK inhibitor. JNK activation upon USP44 knockdown could have been a result of stress-response activation due to chromosome mis-segregation, as reported by Kumar and colleagues [38]. The ubiquitin-proteasome system regulates oncogenic factors post-translationally. Studies have shown that important tumor-related factors, such as the epidermal growth factor receptor, SOX2, c-myc, and Mcl-1, are regulated by USPs. However, little is known about the catalytic substrates of USP44. In one study, overexpression of USP44 enhanced the malignancy of glioma by stabilizing the tumor promoter securin [12]. USP44 can induce the genesis of prostate cancer cells partly by stabilizing EZH2 [39]. Therefore, further studies are needed to ascertain whether USP44 regulates a tumor promoter or a tumor suppressor in ccRCC. Conclusions USP44 was underexpressed in ccRCC. USP44 overexpression inhibited the proliferation and migration of 786-O cells and Caki-1 cells significantly. The JNK pathway is involved in the way that USP44 regulates the proliferation and migration of 786-O cells and Caki-1 cells. Fig. 5 c Western blots of molecules in the MAPK signaling pathway (JNK, AKT, p38, ERK) in short hairpin control (shctrl) and shUSP44 groups of Caki-1 cells (cropping of blots). **p < 0.01 vs. the ctrl group; n.s. not significant vs. the shctrl group. Data shown are the mean ± SD. Fig. 6 USP44 knockdown promotes the proliferation and migration of Caki-1 cells through JNK activity. a Western blotting showed the p-JNK level of Caki-1 cells in the short hairpin control (shctrl), shUSP44#1, and shUSP44#2 groups with or without a JNK inhibitor (cropping of blots). b BrdU experiment showing the relative proliferation index of Caki-1 cells in the shctrl, shUSP44#1, and shUSP44#2 groups with or without a JNK inhibitor. c Image of the Transwell™ result for Caki-1 cells in the shctrl, shUSP44#1, and shUSP44#2 groups with or without a JNK inhibitor; the histogram shows the number of migrated cells. *p < 0.05 vs. the shctrl DMSO group; **p < 0.01 vs. the shctrl DMSO group; $p < 0.05 vs. the sh#1 DMSO group; $$p < 0.01 vs. the sh#1 DMSO group; ##p < 0.01 vs. the sh#2 DMSO group
4,270
2020-02-03T00:00:00.000
[ "Biology", "Medicine" ]
Aspects of HF radio propagation The propagation characteristics of radio signals are important parameters to consider when designing and operating radio systems. From the point of view of Working Group 2 of the COST 296 Action, interest lies with effects associated with propagation via the ionosphere of signals within the HF band. Several aspects are covered in this paper: a) The directions of arrival and times of flight of signals received over a path oriented along the trough have been examined and several types of propagation effects identified. Of particular note, combining the HF observations with satellite measurements has identified the presence of irregularities within the floor of the trough that result in propagation displaced from the great circle direction. An understanding of the propagation effects that result in deviations of the signal path from the great circle direction is of particular relevance to the operation of HF radiolocation systems. b) Inclusion of the results from the above-mentioned measurements into a propagation model of the northerly ionosphere (i.e. those regions of the ionosphere located poleward of, and including, the mid-latitude trough) and the use of this model to predict the coverage expected from transmitters where the signals impinge on the northerly ionosphere. c) Development of inversion techniques enabling backscatter ionograms obtained by an HF radar to be used to estimate the ionospheric electron density profile. This development facilitates the operation of over-the-horizon HF radars by enhancing the frequency-management aspects of the systems. d) Various propagation prediction techniques have been tested against measurements made over the trough path mentioned above, and also over a long-range path between Cyprus and the UK. e) The effect of changes in the levels of ionospheric disturbance on the operational availability at various data throughput rates has been examined for the trough path mentioned earlier. The topics covered in this paper are necessarily brief, and the reader is referred to the full papers referenced herein on individual aspects. Mailing address: Prof. E. Michael Warrington, Department of Engineering, University of Leicester, Leicester, LE1 7RH, U.K. Introduction For terrestrial HF radio systems, the electron density depletion in the trough region reduces the maximum frequency that can be reflected by the ionosphere along the great circle path (GCP). For long paths, the signal is often received via a ground/sea-scatter mechanism to the side of the GCP (Stocker et al., 2003). For shorter paths, gradients in electron density associated with the trough walls and embedded ionospheric irregularities often result in propagation in which the signal path is well displaced from the great circle direction, with directions of arrival at the receiver offset by up to 100° (Rogers et al., 1997). Deviations from the great circle direction impact not only on radiolocation systems, for which estimates of a transmitter location are obtained by triangulation from a number of receiving sites, but also on any radio communications system in which directional antennas are employed. Furthermore, the Doppler and multi-mode delay spread characteristics of the signal are also affected when propagation is via scatter/reflection from irregularities in or close to the north wall of the trough (Warrington and Stocker, 2003).
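Since all of the deviations discussed in this paper are measured against the great circle path, a minimal sketch of the reference geometry is given below, assuming a spherical Earth. The coordinates are the Uppsala and Leicester values quoted later in the paper; the function name and the mean Earth radius are illustrative choices.

```python
import math

def great_circle(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance (km) and initial bearing (deg east of north)
    from point 1 to point 2 on a spherical Earth."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Haversine distance
    a = math.sin((p2 - p1) / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * radius_km * math.asin(math.sqrt(a))
    # Initial bearing from point 1 towards point 2
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = math.degrees(math.atan2(y, x)) % 360.0
    return dist, bearing

# Uppsala (59.92N, 17.63E) -> Leicester (52.63N, 1.08W)
d, brg = great_circle(59.92, 17.63, 52.63, -1.08)
print(f"Ground range ~{d:.0f} km, initial bearing at Uppsala ~{brg:.1f} deg")
```

The distance returned (roughly 1410 km) is consistent with the 1411 km ground range quoted later for this link.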
Recently, measurements have been made of the direction of arrival (DOA) and amplitude as a function of time of flight (TOF) of HF signals received over two paths oriented along the trough, between Uppsala, Sweden (during 2001 and from August 2006 to January 2008) and Helsinki, Finland (since December 2006) and the University of Leicester's field site near Leicester (see fig. 1). The first of these periods was close to sunspot maximum, and the second close to sunspot minimum. Characteristic types of propagation along the trough Examination of the observations along the trough paths from Uppsala and Helsinki has enabled five types of off-great-circle propagation events to be identified (see Stocker et al. (2009) for full details): • Type 1: These are characterised at onset by a large sudden increase in time of flight accompanied by a sudden deviation of the bearing to the north. Subsequently, the time of flight decreases over a period of hours while the azimuth either remains fairly constant or returns slowly to close to the great circle direction. Type 1 events were commonly observed, except in the summer, close to solar maximum (2001) (see Siddle et al., 2004a for a number of examples), when they also tended to be longer-lived and were observed on a wider range of frequencies. Theoretical considerations indicate that these events are consistent with the signal scattering from irregularities in the poleward wall of the trough or from the auroral region (Siddle et al., 2004b). • Type 2: The time of flight gradually increases and is accompanied by a gradual deviation of the azimuth to the north (e.g. see the traces for 10 MHz in fig. 2). The rate of change of time of flight and azimuth tends to be fairly constant until, after a period of up to a few hours, the signal is no longer detected. • Type 3: The time of flight gradually increases and is accompanied by either a gradual southward (Type 3S) or, less frequently, northward (Type 3N) deviation of the azimuth. After a few hours either the signal is no longer detected or the azimuth slowly returns to the great circle direction (this distinguishes Type 3N propagation from Type 2). • Type 4: This type of propagation is characterised by a sudden large increase in the time of flight and a strong deviation to the south (typically the azimuth is around 120°). The signal is usually relatively weak, and is consistent with sidescatter from a ground reflection in the vicinity of the Alps. • Type 5: This type is similar to Type 1, except that although there is a sudden increase in the time of flight, the signal is deviated to the south instead of the north. Over the following few hours the azimuth usually returns to the great circle direction. The occurrence (as a percentage of nights of observation) of Types 1, 2, and 4 as a function of season, frequency and Kp is presented in fig. 3 (Types 3 and 5 are generally rare).
The seasons have been defined with spring including all of March and April, summer including May to August, autumn September and October, and winter November to February (as in ITU, 1997). Type 1 events are most common in spring, with no clear trend with frequency. For 4.64 MHz, this type of propagation is also common in winter, while for most other frequencies it occurs least often in winter. This behaviour may be contrasted with that observed in 2001 (see Siddle et al., 2004a,b), where Type 1 propagation was frequently observed (~40-70% of nights depending on frequency) except in summer. Furthermore, the events observed in 2001 tended to consist of larger changes in time of flight and azimuth and to be of longer duration. Type 2 events tend to occur more frequently in spring and, for frequencies in the range 8.01-11.12 MHz, in the summer. However, for 6.95 MHz, there is little other seasonal variation, while at 4.64 and 14.36 MHz the events are rarely observed. In spring, the occurrence strongly depends on frequency, peaking at 10.39 MHz. Type 4 events are also a springtime phenomenon, being observed on over 30% of nights for 10.39 and 11.12 MHz, but rarely at other frequencies. At frequencies of 8.01 MHz and below, Type 1 propagation events become more frequent as Kp increases, while for frequencies higher than this the percentage of nights on which this type of propagation is observed is roughly independent of Kp (although there is some suggestion of a slight decrease in occurrence with increasing Kp). For Type 2 propagation, there is a clear decrease in the occurrence of observations with increasing Kp for all frequencies except 4.64 MHz, where the opposite is the case. Type 4 propagation is more commonly observed in the middle range of frequencies (8.01-11.12 MHz), and becomes more frequent with increasing Kp. Comparison of satellite measurements with HF measurements The mechanism by which Type 1 off-great-circle-path propagation occurs has been investigated through simulation (Siddle et al., 2004b). However, the modelling has not yet been able to explain why the deviations of different types occur on one day but not on otherwise very similar days. In previous work, the parameters describing electron density irregularities embedded in the background ionosphere used in simulation have not been well defined. In this paper, the HF measurements are examined in conjunction with satellite observations of the electron density structure. Some caution must be exercised in comparing the HF and satellite measurements since the satellite altitude is about 700 km, while the HF propagation is most strongly affected by the ionosphere at heights of about 200-400 km. Since the electron density distribution in the topside ionosphere is the near-Earth signature of the magnetospheric plasmapause, it may be expected that the structure of the irregularities due to precipitation will be well correlated at different altitudes, while the behaviour of the instabilities inside the trough region will depend on height. For reasons of space, only a single example of Type 2 off-great-circle propagation from 18 April 2007 will be presented here.
Based on the measured angles of arrival and the time of flight of the HF signal (fig. 2), and assuming a single, mirror-like reflection (see Siddle et al., 2004b), the reflection point (fig. 4) moves steadily north of the great circle path, reaching a point close to the poleward wall of the trough before the signal is no longer received. It is evident from Figure 4 that the reflection points of the HF signal in this event are located inside the trough. The DEMETER satellite data (fig. 5) indicate that, as well as the expected reduction in electron density, there are strong filamentary electron density structures (i.e. irregularities) inside the main ionospheric trough at this time, and we suggest that scattering from these irregularities is the main mechanism responsible for the formation of Type 2 off-great-circle propagation. Zaalov et al. (2003; 2005) have developed a unique ray-tracing model that accurately reproduces many of the features observed in the experimental measurements referred to earlier, to a level well beyond that which we originally anticipated would be possible. A major outcome of these ray-tracing simulations is that paths other than those that have been the subject of experimental investigation can readily be assessed. Ionospheric model for ray-tracing The simulations make use of a numerical ray tracing code (Jones and Stephenson, 1975) to estimate the ray paths through a model ionosphere comprising two Chapman layers, the main parameters of which (critical frequency, critical height, vertical scale height of each layer) are based on values obtained from the International Reference Ionosphere (IRI) (Bilitza, 1990). The most important causes of off-great-circle propagation in the polar cap are the presence of convecting patches or sun-aligned arcs of enhanced ionisation. Patches are formed in the dayside auroral oval and generally convect in an anti-sunward direction across the polar cap into the nightside auroral oval, whereas arcs occur under different geomagnetic conditions and drift in a duskwards direction. Localised, time-varying perturbations in the electron density are then applied to the background model to represent the convecting patches and arcs of enhanced electron density. Sun-aligned arcs (Carlson et al., 1984): the shape of each sun-aligned arc is defined within the model by a small number of three-dimensional Gaussian perturbations in electron density of different spatial scales (altitude, longitude and latitude) randomly distributed near to the centre of the arc. Several Gaussian perturbations were combined in defining the shape of each modelled arc in order to prevent the shapes of the arcs being too stylised. For all arcs away from close proximity to the dawn or dusk auroral oval, the plasma strands are elongated for several hundreds or thousands of kilometres, with a latitudinal scale which is significantly larger than the longitudinal scale. Evolution of the structures relative to the propagation path is determined by the rotation of the Earth beneath the arcs and by movement of the arcs in the dawn-dusk direction.
Convecting patches (Weber et al., 1984; Buchau et al., 1983): the temporal evolution of the patches relative to the propagation path is simulated by means of a convection flow scheme coupled with the rotation of the Earth beneath the convection pattern, the precise form of which depends upon the components of the IMF. The intensity and spatial scales of the patches can also be varied with simulation time. In practice, the shape, size and number of patches in the convection flow area depend upon many geophysical parameters, not only upon the instantaneous values but also upon their history. By using up to four vortices based on the modelled convection flow patterns associated with the various IMF orientations, many realistic situations may be simulated. Mid-latitude trough: the Halcrow and Nisbet (1977) model was used in the simulations as a basis for the position of the trough walls. In order to add smaller-scale structure to the walls, the following modifications were made: (a) the latitude of the walls was perturbed by two-scaled random functions of longitude, and (b) a landscape of patches elongated in the direction of the trough was added to each wall. Initially, no perturbations were added to the floor of the trough. The depletion of the trough maximum was set according to Kp: typically, 30% for Kp = 2-3 and 60% for Kp = 6. Particle precipitation: the auroral oval is an enhancement of electron density caused by particle precipitation in the E-region and above. As a function of distance along the (near-vertical) field lines, the density enhancement was modelled as starting 100 km from the ground, having one or more peaks of about 10^13 electrons/m^3 around 110 km, and then decaying slowly toward 200 km (Bates and Hunsucker, 1974). Small-scale electron density enhancements can also be added to the model. Area coverage simulations The area coverage to be expected from a transmitter at a given location can be estimated by ray-tracing through the model ionospheres described in Section 2. A large number of rays launched in an azimuth/elevation grid from the transmitter are traced through the model ionosphere, and the signal strength at the receiver is estimated by determining the ray density in the area around the receive antenna. For the example simulations presented here, the dynamic range shown in the figures has been restricted to 20 dB in order to highlight the modal structure of the signal. An example outcome of this process is illustrated in figs. 7 and 8, where the coverage is shown both without and with the patches taken into account. Focussing of the rays at the 1, 2 and 3 hop skip-zone ranges is clearly evident in all cases. It is interesting to note that the presence of the patches severely distorts the background pattern on the ground: coverage is reduced in places, but coverage is also obtained in areas where it was not present without the patches. This latter situation is more evident at higher frequencies, not illustrated here. In considering the effect of the presence of the patches, it is important to remember that the patch distribution shown is only illustrative and that in reality the patches will be distributed differently, and also that as time progresses the patches will move in accordance with the prevailing convection cell pattern (in turn, a function of the geomagnetic conditions).
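To make the "Chapman-layer background plus Gaussian perturbation" construction described above concrete, the short sketch below builds a two-layer background profile and adds a single three-dimensional Gaussian patch. The function names and all numerical values (peak densities, heights, scale heights, patch amplitude and widths) are illustrative assumptions, not the parameters used in the Zaalov et al. model.

```python
import numpy as np

def chapman_layer(h_km, nm, hm_km, H_km):
    """Electron density (el/m^3) of a Chapman layer with peak density nm,
    peak height hm_km and scale height H_km."""
    z = (h_km - hm_km) / H_km
    return nm * np.exp(0.5 * (1.0 - z - np.exp(-z)))

def gaussian_patch(lat, lon, h_km, centre, widths, amplitude):
    """Three-dimensional Gaussian electron density enhancement (el/m^3)."""
    dlat, dlon, dh = lat - centre[0], lon - centre[1], h_km - centre[2]
    return amplitude * np.exp(-0.5 * ((dlat / widths[0]) ** 2 +
                                      (dlon / widths[1]) ** 2 +
                                      (dh / widths[2]) ** 2))

# Background: E and F2 Chapman layers (illustrative parameters only)
h = np.linspace(90, 500, 200)            # height grid, km
ne = chapman_layer(h, 1.5e11, 110, 10) + chapman_layer(h, 5.0e11, 300, 50)

# Vertical profile through the centre of one convecting patch at 75N, 20E, 300 km
ne_patch = ne + gaussian_patch(75.0, 20.0, h,
                               centre=(75.0, 20.0, 300.0),
                               widths=(2.0, 8.0, 60.0),
                               amplitude=2.5e11)
print(f"Background F2 peak ~{ne.max():.2e} el/m^3, with patch ~{ne_patch.max():.2e}")
```

In the actual model several such perturbations are combined per arc or patch and moved with the convection pattern; the sketch only shows how one perturbation modifies the background profile.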
Inversion of HF radar backscatter ionograms HF radars use the refraction by the ionosphere of radio waves emitted by the radar to detect targets at ranges of up to 3000 km. The distance covered by the radio waves depends on several parameters: the ionospheric characteristics, the transmitted frequency and the elevation angle. In order to optimise the radar frequencies for the geographical area of interest and to localize the targets accurately, knowledge of the ionospheric characteristics is required in quasi real time. In order to acquire the necessary ionospheric information, the radar can be used as an oblique backscatter sounder. For this purpose, an inversion method is needed, as the forward problem is strongly non-linear. By scanning the radar beam in elevation for a fixed azimuth at one operating frequency, the group path can be measured to obtain a backscatter ionogram (a 3D image of magnitude, elevation angle β, and group path P'). The Quasi-Parabolic Segment (QPS) model developed by Dyson and Bennett (1988) has been chosen for the inversion. In this model, each layer of the ionosphere, and also the joining segments, can be characterized with a QP segment by only three parameters: the critical frequency (fc), the peak height (hm) and the semi-thickness (ym). The group path P' and the ground range D can then be obtained using analytical propagation equations. Inversion technique The purpose is to recover the model parameters (fc, hm, ym) of each layer of the ionosphere from n points of the backscatter ionogram (n ≥ 3). With these model parameters the calculation of the ground range, D, is possible, so the real position of the target can be determined. The data points used are the coordinates of n group paths (P'meas 1, …, P'meas n) at n fixed elevation angles (βref 1, …, βref n), taken as references for each layer. The method, based on Tarantola (1987) and Landeau et al. (1997), inverts each layer of the ionosphere one by one. For each parameter a value is chosen in the parameter space to create a QPS model of the ionosphere and to simulate the elevation-group path curve by using ray tracing. The coordinates of the n simulated group paths corresponding to the n fixed elevation angles are taken (P'simu 1, …, P'simu n). With the coordinates of the n measured and simulated group paths, the a posteriori probability density, σp, can be calculated. With the measurement errors assumed to be Cauchy distributed and independent of each other, it takes the form σp ∝ ∏ (over i = 1…n) [1 + (P'simu i − P'meas i)² / δP'meas i²]⁻¹, where δP'meas 1, …, δP'meas n represent the variances of the measurement errors over P'meas 1, …, P'meas n. The optimal parameters are those for which σp is a maximum. To find them, every value of the parameter space would have to be evaluated; however, this procedure is costly in time, and the ionosphere changes with time, sometimes on a time scale smaller than 15 minutes. It is thus necessary to use an optimization algorithm to obtain a good approximation of the global optimum. Two different optimization algorithms have been tested: simulated annealing and a genetic algorithm. Simulated annealing: The simulated annealing (SA) algorithm developed by Kirkpatrick et al. (1983) is based on the manner in which metals recrystallize in the process of annealing. For each step of the SA algorithm a random neighbour of the current solution is considered. If the new solution is better, it is chosen. If it is worse, it can still be chosen with a probability that depends on the difference between the corresponding function values and on a parameter called the temperature (T).
The temperature is gradually decreased during the process. At the beginning the current solution changes almost randomly, but the acceptance of bad solutions decreases as T goes to zero. The allowance for bad solutions prevents the method from becoming trapped in local optima. Genetic algorithm: The second optimization method used is a genetic algorithm, developed by Goldberg (1989). Genetic algorithms are based on the mechanics of natural selection and genetics. An initial generation is created in which the individuals are binary-coded strings. By using selection, crossover and mutation, a new generation is created from the old generation. Each generation is better than the previous one, until the optimal solution is found. Validation on synthesized data Synthesized data are created from a model electron density profile given by predictions of the ionosphere (table I). After calculating the group path-elevation angle curve, zero-mean Gaussian noise (7.5 km and 2° standard deviation) is added to simulate a real backscatter ionogram, which is then used to test the inversion method. The results of the inversion method are compared with the initial values in fig. 9 and in table I. In fig. 9 the group path-elevation curve obtained by inversion matches the original curve used as data. The two optimization methods are compared in fig. 10 and in table I. The genetic algorithm converges faster than the simulated annealing (1.72 minutes vs 15 minutes on a PC with an Intel Dual Core processor). Furthermore, the estimated parameters provided by the genetic algorithm are closer to the initial values than those obtained using simulated annealing. In future work, the genetic algorithm will be used. Results on real data The inversion method has also been tested on real data using an elevation-scan backscatter ionogram recorded on 11 June 2007 at 2000 UT. In this particular sounding, only the F layer was present (night-time profile). For validation purposes, vertical ionograms were collected by the Ebre ionosonde, which was selected because it is located on the great circle of the oblique backscatter sounding beam. Figure 11 compares the elevation angle-group path ionograms obtained with the data inversion and with the vertical ionosonde inversion. The corresponding electron density profiles presented in fig. 12 show a general agreement between the two profiles. Time of flight measurements and their application to testing prediction methods that approximate ray tracing Accurate predictions of the main parameters characterising the HF ionospheric channel require a detailed knowledge of the ionospheric conditions that only numerical or analytic ray-tracing techniques can provide. Numerical ray-tracing models (such as Jones and Stephenson, 1975, and Norman et al., 1994) are very accurate but have a high computational cost. Analytic ray-tracing techniques that enable ray tracing through horizontal gradients along and in the direction of the ray path (Norman and Cannon, 1997; 1999) are computationally less intensive than numerical ray tracing. For this reason their use is particularly advantageous in HF applications such as the real-time frequency management of OTH (over-the-horizon) radar systems (Coleman, 1998).
Three different prediction methods that approximate ray-tracing techniques were tested: (a) IRI-95, based on the monthly median electron density profiles provided by the IRI-95 model (Bilitza, 1990; 2001); (b) SIRM&BR_D, based on the SIRM (Zolesi et al., 1993; 1996) in conjunction with the Bradley-Dudeney model (Bradley and Dudeney, 1973); and (c) ICEPAC, based on the Ionospheric Communications Enhanced Profile Analysis and Circuit prediction program (Stewart, undated). For the IRI-95 and SIRM&BR_D prediction methods, the monthly median electron density profiles provided by the models have been used to calculate the vertical plasma frequency profiles at the mid-point of the Uppsala-Leicester radio link (see Section 1). As the effect of the Earth's curvature is important for ground ranges greater than 500 km, the curvature of the Earth and the known ground range (1411 km) of the radio link have been taken into account in calculating the angle of incidence of the ray at the base of the ionosphere. The values of the angle of incidence have been calculated assuming a very simple geometry based on ionospheric reflections taking place from a simple horizontal mirror at the appropriate height. Subsequently, the secant law was applied to calculate the oblique transmission frequencies. The length of the oblique ray path, its corresponding time of flight (TOF) and the take-off angle have been calculated from simple geometry. The version of ICEPAC used in this study utilizes the monthly median electron density profiles (Haydon and Lucas, 1968) at the mid-point of the radio link, derived from CCIR foF2 and M(3000)F2 coefficients (CCIR, 1966), and from these provides predictions of elevation angle, TOF, virtual height, etc. for a given oblique transmission frequency and time of day. In order to test the validity of these methods under quiet ionospheric conditions, comparisons between the predictions and the median TOF measurements for the one-hop propagation modes 1E and 1F were carried out for different frequencies and seasons. For 1E modes (see table II), IRI-95 and SIRM&BR_D provide a similar performance, while ICEPAC is in general somewhat better. This is particularly the case when Es is present. The errors for 1F modes (table III) are larger than for 1E, and the performance is similar for all three methods, although ICEPAC can produce significantly better results in winter. Further details of this aspect of the investigation can be found in Pietrella et al. (2009).
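As a minimal sketch of the simple mirror geometry and secant law applied above: for a single mirror-like reflection at the mid-point of a path over a curved Earth, the angle of incidence and one-hop path length follow directly from the triangle formed by the Earth's centre, the transmitter and the reflection point. The 1411 km ground range is taken from the text, but the 250 km reflection height and the 5 MHz vertical plasma frequency in the example are illustrative assumptions, and the function names are the sketch's own.

```python
import math

C = 299_792.458  # speed of light, km/s
R_E = 6371.0     # mean Earth radius, km

def mirror_geometry(ground_range_km, refl_height_km):
    """Single mirror-like reflection at the path mid-point over a curved Earth.
    Returns (angle of incidence at the reflector in degrees, two-leg path length in km)."""
    theta = 0.5 * ground_range_km / R_E                              # half central angle, rad
    r1, r2 = R_E, R_E + refl_height_km
    leg = math.sqrt(r1**2 + r2**2 - 2 * r1 * r2 * math.cos(theta))   # slant length of one leg
    phi = math.asin(r1 * math.sin(theta) / leg)                      # incidence angle at the mirror
    return math.degrees(phi), 2.0 * leg

def oblique_from_vertical(f_vertical_mhz, incidence_deg):
    """Secant law: equivalent oblique frequency for a given vertical plasma frequency."""
    return f_vertical_mhz / math.cos(math.radians(incidence_deg))

phi_deg, path_km = mirror_geometry(1411.0, 250.0)
print(f"Incidence ~{phi_deg:.1f} deg, TOF ~{1e3 * path_km / C:.2f} ms, "
      f"oblique frequency ~{oblique_from_vertical(5.0, phi_deg):.1f} MHz")
```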
Comparison of oblique sounding measurements and VOACAP predictions on a mid-latitude path between Cyprus and the UK Signals from FMCW sounders located in Cyprus were received in Leicester (a path length of ~3340 km) from 1 February 2008 to 13 July 2008. The monthly median MUF as a function of time of day is presented in fig. 13. Predictions of the MUF made using VOACAP (version 05.0119W) are also presented in fig. 13. VOACAP has been run using a minimum take-off angle of 0.1°, predicted smoothed sunspot numbers ranging from 7 in February to 11 in July (NGDC, 2008), and with the sporadic E model switched off. Although the observations generally have a similar form to the predictions, there are some discrepancies. For example, the predicted median MUF at 0700-0800 UT in February is much higher than the observed value (the largest daily value in the month is 21.9 MHz at 0700 UT and 23.5 MHz at 0800 UT), in June the characteristic reduction in MUF at noon is not observed, whilst in July the observed MUF is higher than the predicted value at all times. In June and July the discrepancy results from the presence of sporadic-E layers with high (on occasions >30 MHz) peak frequencies. The predicted MUF(Es) values (obtained using VOACAP method 11, with an FPROB multiplier of 1.0 for Es) are closer to those observed in the morning and early afternoon, but significantly lower than the observations in the evening. Assessment of HF channel availability under ionospherically disturbed conditions Various workers have conducted investigations on the signalling characteristics of HF channels, and on the influence of the channel scattering characteristics on data communications capabilities. For example, Jodalen et al. (2001) reported on a comparison between modem characterisations and channel measurements from the Doppler and Multipath Sounding Network (DAMSON) (Cannon et al., 2000). This system characterised narrow-band channels by measuring their SNR, Doppler spread, and delay spread parameters. Measurements of these characteristics may then be applied to three-dimensional performance surfaces to assess signalling capability for various types of modem. For the path from Sweden to the UK discussed earlier in this paper, Warrington and Stocker (2003) previously reported the variation in delay and Doppler spreading observed as a function of season and time of day. This previous analysis did not consider variations with ionospheric disturbance, a topic that is addressed here. Joint probability density functions (PDFs) of SNR, Doppler spread, and effective multipath spread versus the disturbance storm time (DST) index (see, for example, fig. 14) were produced for measurements made in 2001 over the path from Uppsala, Sweden (59.92°N, 17.63°E) to Leicester, UK (52.63°N, 1.08°W). It was demonstrated by Sari (2006), by determining the conditional PDFs and using Bayes' theorem, that there were dependencies between DST and the above-mentioned signal parameters.
As examples of modem characterizations, various military standards were considered. In these standards, HF modem performance is specified in terms of effective multipath spread, Doppler spread, SNR, bit error rate, modulation type, and data conversion (long-short interleaver). Modem availability was quantified as the fraction of the time that the modem would function satisfactorily (the availability of a modem can be estimated by determining the difference between the measured SNR and the SNR required to give an acceptable BER for a given delay and Doppler spread; a positive SNR difference indicates that the modem would operate satisfactorily (see fig. 15)). Tables IV and V show the probability of modem availability for different data rates and frequencies according to requirements defined in STANAG 4538. Concluding remarks Various propagation-related topics of direct relevance to the planning and operation of HF radio systems have been investigated during the COST 296 Action, outline details of which are presented in this paper. For mid-latitude paths, these investigations have included comparisons of observed channel MUFs for a path from Cyprus to the UK with predicted values obtained using VOACAP. Significant progress has been made in relation to the operation of HF over-the-horizon radars. Such systems require detailed knowledge of the ionospheric electron density profile. In the work presented here, the radar is operated as a backscatter sounder for a period to measure the backscatter characteristics as a function of frequency and elevation angle. These measurements are then input to a new inversion technique to obtain the required electron density profile. The results presented in the paper illustrate the importance of understanding and taking into account the presence of various structural features in the northerly ionosphere, i.e.
the electron depletion and irregularities associated with the sub-auroral trough, patches and arcs of enhanced electron density within the polar cap, and irregularities within the auroral zone, when planning and operating HF radio links. These features result in radio waves propagating over paths well displaced from the great circle direction and have clear relevance in the operation of HF radiolocation (HF-DF) systems, where deviations from the great circle path may result in significant (sometimes inter-continental) triangulation errors. A full understanding of the prevailing propagation mechanisms will enable appropriate selection of receiver sites to be made in order to optimise the positional accuracy. However, the impact of these propagation effects is much wider than this particular application, extending to almost any HF communications system where the signal impacts on the ionosphere within the region polewards of the sub-auroral trough. This has been illustrated here by employing the northerly ionospheric model to indicate the effect of polar patches on the coverage area of two transmitters located at Arctic sites. Development of this aspect is currently being undertaken with the aim of improving the HF communications forecasts and nowcasts available to the airlines when operating on trans-polar routes (note that over-conservative forecasts of the likelihood of disturbed communications result in flights being rerouted to non-polar routes with consequent increases in flight time and fuel usage). In addition to the directionally related effects referred to above, the channel transfer characteristics are also affected by the presence of the ionospheric structures; in particular, increases in delay and Doppler spread are evident. These increased spreads limit the data throughput available on a particular channel, or alternatively the system availability may be reduced for a particular type of modem. Consideration of this aspect has been given for the trough path. Fig. 1. Maps showing the paths between Uppsala and Leicester and between Helsinki and Leicester, together with a typical position for the trough as estimated by the model of Halcrow and Nisbet (1977) for 0000 UT on 11 March 2006 for Kp values of 0 and 6. The four lines indicate the outer and inner edges of the north and south walls of the trough. Fig. 3. The percentage occurrence of Types 1, 2, and 4 off-great-circle propagation (August 2006-September 2007). Left-hand panels show the variation with season, and the right-hand panels the variation with Kp (averaged 1800-0000 UT). Fig. 5. The electron density and temperature measured on board the DEMETER satellite on 18 April 2007. The model is able to reflect the day-to-day variation of received HF signals: see, for example, the time history of the azimuth of arrival and time of flight of a 10.4 MHz signal propagating along the Uppsala to Leicester path on 18-19 April 2007 given in fig. 2 and the simulation results in fig. 6. Fig. 6. Simulated time history of the azimuth of arrival and time of flight of a 10.4 MHz signal propagating along the Uppsala to Leicester path between noon 18 April and noon 19 April 2007. Azimuth deviation: top panel; time of flight: bottom panel. Fig. 7. Example electron density distribution over the high latitude region. The dayside ionosphere is around the lower left-hand quadrant. Patches of enhanced electron density are also included. Fig. 8.
Calculated signal strength variations for 10.4 MHz transmitters located at Iqaluit (left frames) and Tromsø (right frames) covering the high latitude region, using the modelled ionosphere of fig. 7. The upper frames are without the presence of polar patches, and the lower frames are with the presence of the patches. Fig. 12. Electron density profiles obtained with the inverted parameters and with the ionosonde parameters. Fig. 13. Monthly median MUF on the Cyprus-Leicester path for February to July 2008. The observations are plotted as a solid line, while the dashed line represents predicted values from VOACAP (the predicted MUF(Es) is plotted as a dotted line in the July panel). All month-hours had at least 26 observations with the exception of July, which only had 13 for all hours. Table I. Comparison of initial and estimated QPS model parameters. Table II. The minimum and maximum values of the r.m.s. error of TOF, taking into account all the frequencies which arrive at the receiver following the 1E on-great-circle propagation modes. Table III. The minimum and maximum values of the r.m.s. error of TOF, taking into account all the frequencies which arrive at the receiver following the 1F on-great-circle propagation modes. Table IV. Modem availabilities with DST = 0
7,656.6
2009-04-25T00:00:00.000
[ "Physics" ]
Investigation of external quality factor and coupling coefficient for a novel SIR based microstrip tri-band bandpass filter In this article, a new method is developed to design a three-band miniaturized bandpass filter (BPF) that uses two asymmetrically coupled resonators with one step discontinuity and an open-circuited uniform impedance resonator (UIR) for Worldwide Interoperability for Microwave Access (WiMAX) and Radio Frequency Identification (RFID) applications. First, a pair of asymmetrical step impedance resonators (ASIR) is used to implement a dual-band filter; then a half-wavelength uniform impedance resonator is added below the transmission line to achieve a triple-band response. The proposed filter resonates at frequencies of 3.7 GHz, 6.6 GHz, and 9 GHz with fractional bandwidths of 7.52%, 5.1%, and 4.44%, respectively. By adjusting the physical length ratio (α) and the impedance ratio (R) of the asymmetric SIR, the proposed fundamental frequencies of the triple BPF are obtained. Moreover, the coupling coefficient (Ke) and external quality factor (Qe) between the resonators and the input/output ports of the transmission line are investigated and calculated using the full-wave EM simulator HFSS. In addition, five transmission zeros are introduced near the passbands to increase the filter selectivity. Finally, the proposed filter is designed and fabricated with a size of 13.69 × 25 mm (0.02 λg × 0.03 λg), where λg represents the guided wavelength at the first passband. The simulated and measured results are in good correspondence, thus confirming the design concept. Introduction The field of microwave and RF communication continuously demands compact wireless transceivers for commercial products, especially ones combining IEEE 802.11b/g (GSM), IEEE 802.11a (WLAN), GPS, RFID, 3G, 4G, Bluetooth, and automotive radar systems. These wireless standards require high data rates and large bandwidth in the radio frequency spectrum. Besides mobile phones, RF systems are also necessary for scientific instruments, navigation and even medical applications. Thus, efforts are made to design compact, low-power-consumption components that support multiple wireless standards simultaneously without interference with other RF bands. One of the key components in such a system is a compact, high-selectivity bandpass filter, and its performance dominates that of the entire microwave communication system. For this reason, the design of multiband BPFs with compact size and low insertion loss plays an important role in the multiband wireless transceiver, and it constitutes a great challenge for circuit designers [1][2][3][4][5][6][7]. In the past, several dual- and triple-band bandpass filters were intensively proposed and investigated by combining two or more single BPFs, a stub-loaded resonator (SLR), a step impedance resonator (SIR) with one or more step discontinuities, and multimode resonators (MMRs). For example, in [8][9][10][11][12][13][14] several dual-band filters were designed at different resonance frequencies, each with its own merits and demerits; however, poor selectivity, large circuit dimensions, and high insertion losses were the major drawbacks associated with these designs. The design of filters based on an SIR with one step discontinuity allows better control of the spurious bands compared with the traditional SIR, whose two step discontinuities cause more losses and a larger circuit size.
It has the advantage of enabling higher-order compact BPFs with high selectivity and low insertion loss, such as dual-, tri-, quad- or quintuple-band BPFs, because of its inherently higher-order resonant modes [15]. A tri-band BPF loaded with a pi-section SIR is presented in [16] for GPS (Link-2), WiMAX, and WLAN applications, with the merit of greater bandwidth, but the filter selectivity, insertion loss, and large size still need to be improved. In [17], another triple-band response is achieved using asymmetric stub-loaded resonators for Wireless Medical Telemetry Service (WMTS), WLAN, and WiMAX applications, with the shortcomings of high insertion loss (IL), low fractional bandwidth (FBW), poor selectivity and large circuit dimensions. To improve the isolation between the passbands as well as the passband insertion losses, a high-selectivity triple-band BPF is designed and fabricated on Rogers RO-4003 material in [18], using a novel multimode resonator for WCDMA, WiMAX, and WLAN wireless applications. That filter has the strong merits of greater bandwidth and low IL, but its dimensions still need to be improved, and the circuit complexity also increases when MMRs are used. In [19], the authors utilize dual-mode resonators to design a filter that gives three passbands for GSM and GPS applications. The filter shows a good response in terms of insertion loss as well as high frequency selectivity by exciting six transmission zeros (TZs) between the bands, but its large circuit size is a major drawback. Another tri-band filter is designed in [20] using a composite right/left-handed (CRLH) resonator on Rogers RO-4003C substrate material with dielectric constant 3.38. The presented filter has serious issues with FBW, circuit size, poor isolation between the passbands, and insertion loss, which exceeds 3 dB, especially for the first and third passbands. To overcome the size problem and make the filter suitable for compact wireless transceivers, a high-selectivity dual- and tri-band filter is designed in [21] and implemented on Rogers substrate material using a common-resonator feeding technique. The presented filter has good passband insertion loss; however, the FBW and circuit size need to be improved. In [22], an asymmetric T-shaped SLR-based tri-band filter is designed for 2.48/3.58/4.48 GHz wireless applications with wide FBW and low in-band IL; however, a large circuit area and poor isolation between the passbands were the major drawbacks. The authors of [23,24] designed triple-band BPFs using SIR structures. Both filters have good passband selectivity, but large circuit dimensions were a major drawback of the designs; moreover, the latter has poor IL as well as narrow FBW. Recently, a UIR-based tri-band filter was designed and implemented in [25]. The proposed design has a large circuit area, poor FBW and high insertion losses. To improve the passband IL, a compact tri-band filter is designed using ring MMRs in [26]. The proposed filter has good selectivity, but the large circuit area is still a challenging problem. The authors of [27][28][29] designed tri-band filters using embedded resonators and stub-loaded square-ring resonators for different wireless applications, but high insertion loss, narrow bandwidth, and large circuit area are the major drawbacks of these designs.
Targeting the IEEE 802.16 (WiMAX) and RFID wireless applications, this paper proposes an ultra-compact tri-band BPF using two asymmetrically coupled resonators with one-step discontinuity and a uniform impedance resonator, centered at 3.7 GHz, 6.6 GHz, and 9 GHz with the fractional bandwidth of 7.52%, 5.1% and 4.44%, respectively. The first and second passbands are made by asymmetrically coupled step impedance resonators (SIR), while the third passband is made by a half-wavelength uniform impedance resonator. The resonant frequency of the BPF is determined by adjusting the physical length ratio (α) and the impedance ratio (R) of the asymmetric SIR. Also, the coupling coefficient is determined by the gap between two resonators and a pair of 50 O input/output ports. Finally, the proposed threeband BPF was fabricated on the Rogers substrate and the simulated results are in good agreement with the measured results. This article introduces an easy way to design ultra-compact three passbands filter without complicated design and manufacturing processes. Resonance conditions of the proposed resonator The basic structure of the asymmetric SIR is illustrated in Fig 1. It consists of a low impedance section (Z 1 ) cascaded with a high impedance section (Y 2 ) and are bent in a ring-like shape to reduce the circuit size as shown in Fig 2. The two ring-like shape SIRs are responsible for the generation of the first and second passbands, while the resonator having uniform admittance (Y s ) that is attached to the 50 O input/output ports is responsible for the generation of the third passband, respectively. The proposed configuration has a one-step discontinuity as compared to the conventional SIR which has two-step discontinuity, due to this arrangement the harmonics of the fundamental resonance frequencies can be shifted easily far away without increasing the circuit size or increasing the discontinued step impedance sections. The length and width of the low and high impedance section are denoted by L 1 , W 2 , and L 2 , W 1 with characteristic impedance Z 1 = 1/Y 1 and Z 2 = 1/Y 2 , respectively, whereas θ 1 and θ 2 represent the electrical lengths of the high impedance section and low impedance section of the microstrip line as shown in Fig 1. The physical length ratio (α) and the impedance ratio (R) of the asymmetric SIR can be defined as follows [15]; Here θ t is the total wavelength of the asymmetric step impedance resonator. and In above equations and where f denotes the frequency and v p is the phase velocity of the microstrip line. The characteristic input admittance Y in of the asymmetric SIR seen from the open-end can be found by neglecting the effect of discontinuities and are follows; and The resonance condition occurs when equating below equation to zero i.e. Triband filter geometry An ultra-compact tri-passband filter having 13.69×25 mm (0.02 λ g ×0.03 λ g ) or 0.0006 λ 2 g circuit size where λ g represents the guided wavelength at first passband, consisting of two asymmetric SIRs and one UIR, fabricated on Rogers RO-4350 substrate having relative permittivity 3.66, thickness 0.762 mm, and tested on Agilent E5071C network analyzer is presented in this study. The proposed triband filter is simulated using full-wave electromagnetic software HFSS-13. The proposed filter topology is shown in Fig 2 while the geometrical circuit parameters are listed in Table 1, respectively. 
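Because the closed-form expressions for α, R, and Y_in are not reproduced above, the sketch below illustrates the stated procedure with standard transmission-line relations: compute the input admittance of the open-ended, one-step resonator while neglecting step discontinuities, and find the frequencies where it equals zero. The impedance values, section lengths, and effective permittivity used here are illustrative assumptions, not the fabricated dimensions of Table 1.

```python
import numpy as np

def yin_open_asir(f, z1, z2, l1, l2, eps_eff=2.8):
    """Input admittance of an open-ended two-section (one-step) resonator,
    seen from the open end of the Z2 section, ignoring step and open-end
    parasitics (as in the text above)."""
    c0 = 299_792_458.0
    beta = 2 * np.pi * f * np.sqrt(eps_eff) / c0     # propagation constant
    t1, t2 = np.tan(beta * l1), np.tan(beta * l2)    # tan(theta1), tan(theta2)
    y1, y2 = 1.0 / z1, 1.0 / z2
    # Y_in = j*Y2*(Y1*t1 + Y2*t2) / (Y2 - Y1*t1*t2); reduces to j*Y*tan(theta)
    # for a uniform line, so the open-open resonances come out correctly.
    return 1j * y2 * (y1 * t1 + y2 * t2) / (y2 - y1 * t1 * t2)

def resonances(z1, z2, l1, l2, f_lo=1e9, f_hi=12e9, n=200_001):
    """Frequencies where Im(Y_in) crosses zero (the resonance condition)."""
    f = np.linspace(f_lo, f_hi, n)
    y = np.imag(yin_open_asir(f, z1, z2, l1, l2))
    idx = np.where(np.sign(y[:-1]) != np.sign(y[1:]))[0]
    # keep true zeros of Y_in and discard sign flips caused by poles
    return [f[i] for i in idx if abs(y[i]) < 0.05]

# Illustrative numbers only -- not the dimensions from Table 1.
for fr in resonances(z1=123.0, z2=60.0, l1=7.0e-3, l2=9.0e-3):
    print(f"resonance near {fr / 1e9:.2f} GHz")
```

Sweeping the length split between the two sections and the ratio R = Z2/Z1 in such a sketch reproduces the kind of tuning of the fundamental and spurious resonances described above.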
The first and second passbands, centered at 3.7 GHz and 6.6 GHz, are obtained by the two coupled asymmetric SIRs consisting of the high impedance section (Z1) and the low impedance section (Z2) for WiMAX and RFID applications, while the third passband, centered at 9 GHz, is achieved through the UIR attached to the input/output port transmission line. Coupling coefficient and external quality factor. In this study, a compact tri-passband filter is designed and fabricated using two asymmetric SIRs and one UIR. The two asymmetric SIRs are designed in such a manner that they produce the desired coupling coefficient and quality factor with the input/output port of the microstrip line. Thus, the coupling coefficient and quality factor are determined by the space S1 between the two resonators, which is fixed to 0.7 mm, and the gap W3 between the pair of asymmetric SIRs and the input/output port transmission line. When the gap between the two resonators increases, the external quality factor increases while the coupling coefficient decreases, and vice versa. Q and Ke are determined by performing a parametric analysis for different values of S1 and W3 using a 3D full-wave EM simulation; the quality factor (Q) and the coupling coefficient (Ke) are then evaluated using the expressions given in [30]. In these expressions, fl and fh denote the lower and upper resonance frequencies of the coupled asymmetric SIRs, fc represents the resonant mode frequency, and FBW denotes the fractional bandwidth in percent. Combining these with the design specifications of the filter, the Qi and Ki with respect to the W3 gap are K1 = 0.075 and Q1 = 13.43 for the first passband, K2 = 0.027 and Q2 = 25 for the second passband, and K3 = 0.028 and Q3 = 46.26 for the third passband. Similarly, for the S1 gap, K1 = 0.065 and Q1 = 13.43 for the first passband, K2 = 0.058 and Q2 = 19.65 for the second passband, and K3 = 0.038 and Q3 = 30.26 for the third passband. Tables 2 and 3 summarize these parametric results. Table 6 clearly shows that the external quality factor of the third band increases abruptly when the gap S2 varies from 0.1 mm to 0.25 mm, while the external quality factors of the first and second bands remain unchanged. Moreover, the coupling between the TL and the ASIRs is calculated on the basis of the standard design procedure given in [31]; the coupling matrices Mij at 3.6 GHz and at 6.6 GHz of the proposed filter are derived accordingly, where Mij is the coupling coefficient, J1 = -0.276 and J2 = 0.932 are the admittance inverter constants, and g1 = 1.276 and g2 = 1.3293 are the element values of the filter prototype. From the above discussion, it is verified that when the gap between the two resonators increases, the coupling coefficient decreases and the quality factor increases, according to Eqs 12 and 13.
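The exact expressions from [30] are not shown in the text above; the sketch below uses the relations commonly employed when extracting these quantities from full-wave simulation data (the two split resonant peaks for the inter-resonator coupling, and the 3-dB bandwidth of the singly loaded resonator for Qe). The numerical inputs are made up for illustration, not taken from Tables 2–6.

```python
def coupling_coefficient(f_low, f_high):
    """Inter-resonator coupling from the two split resonance peaks
    (any consistent frequency unit): k = (fh^2 - fl^2) / (fh^2 + fl^2)."""
    return (f_high**2 - f_low**2) / (f_high**2 + f_low**2)

def external_q(f_center, f_3db_low, f_3db_high):
    """External quality factor from the 3-dB bandwidth of the singly loaded
    resonator response: Qe = fc / (fh - fl), i.e. the inverse of the FBW."""
    return f_center / (f_3db_high - f_3db_low)

# Example with invented sweep values (GHz):
print(coupling_coefficient(3.56, 3.83))   # ~0.07, the order of K1 quoted above
print(external_q(3.7, 3.56, 3.84))        # ~13, the order of Q1 quoted above
```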
Results and discussion. In this work, the first two passbands are generated through the asymmetric coupled SIRs by choosing design parameters consisting of the high impedance section (Z1 ≈ 123 Ω) and the low impedance section; the complete dimensions are listed in Table 1. The measured and simulated frequency response, together with a photograph of the fabricated prototype, shows that the fabricated filter resonates at f1 = 3.7 GHz, f2 = 6.6 GHz, and f3 = 9 GHz for WiMAX and RFID wireless applications with 3-dB fractional bandwidths of FBW1 = 7.52%, FBW2 = 5.1%, and FBW3 = 4.44%, respectively. The minimum insertion loss (-20 log|S21|) is 0.99 dB at 3.7 GHz, 1.17 dB at 6.6 GHz, and 1.50 dB at 9 GHz, while the return loss (-20 log|S11|) is greater than 10 dB for all three passbands. The coupling between the two resonators, set by S1, and the gap W3 between the pair of asymmetric SIRs and the input/output ports generate five transmission zeros at 3.19 GHz, 4.71 GHz, 7.72 GHz, 8.3 GHz, and 9.91 GHz between the passbands, and thus high selectivity is obtained. Moreover, the space S1 should be kept as small as possible to achieve lower insertion loss. Furthermore, Table 7 summarizes the comparison of the proposed triple-band BPF with other state-of-the-art filters in the literature, which shows that the presented filter has low insertion loss, wide bandwidth, and compact size, and has the potential to be utilized in WiMAX, RFID, and other tri-band wireless applications [18][19][20][21][22][23][24][25][26][27][28][29]. Conclusions. In this article, a pair of asymmetric step impedance resonators with one step discontinuity and an open-circuited uniform impedance resonator are utilized to achieve a compact tri-band response centered at 3.7 GHz, 6.6 GHz, and 9 GHz for WiMAX and RFID wireless applications with 3-dB FBWs of 7.52%, 5.1%, and 4.44%, respectively. The proposed filter has good in-band insertion loss for all three passbands, i.e., 0.99 dB, 1.17 dB, and 1.50 dB, and the return loss is greater than 10 dB. By choosing the appropriate impedance ratio (R) and physical length ratio (α) of the asymmetric SIR, the filter can be precisely tuned. The simulated results are in good agreement with the measurement results, which show that the proposed three-band bandpass filter has wide bandwidth, low insertion loss, and compact size, and can be widely used in modern high-performance multi-service wireless communication systems.
3,565.8
2021-10-25T00:00:00.000
[ "Physics" ]
An Enhanced Machine Learning Approach for Brain MRI Classification Magnetic Resonance Imaging (MRI) is a noninvasive technique used in medical imaging to diagnose a variety of disorders. The majority of previous systems performed well on MRI datasets with a small number of images, but their performance deteriorated when applied to large MRI datasets. Therefore, the objective is to develop a quick and trustworthy classification system that can sustain the best performance over a comprehensive MRI dataset. This paper presents a robust approach that has the ability to analyze and classify different types of brain diseases using MRI images. In this paper, global histogram equalization is utilized to remove unwanted details from the MRI images. After the picture has been enhanced, a symlet wavelet transform-based technique has been suggested that can extract the best features from the MRI images for feature extraction. On gray scale images, the suggested feature extraction approach is a compactly supported wavelet with the lowest asymmetry and the most vanishing moments for a given support width. Because the symlet wavelet can accommodate the orthogonal, biorthogonal, and reverse biorthogonal features of gray scale images, it delivers higher classification results. Following the extraction of the best feature, the linear discriminant analysis (LDA) is employed to minimize the feature space’s dimensions. The model was trained and evaluated using logistic regression, and it correctly classified several types of brain illnesses based on MRI pictures. To illustrate the importance of the proposed strategy, a standard dataset from Harvard Medical School and the Open Access Series of Imaging Studies (OASIS), which encompasses 24 different brain disorders (including normal), is used. The proposed technique achieved the best classification accuracy of 96.6% when measured against current cutting-edge systems. Introduction The brain, which is the human body's most important structural element, contains 50-100 trillion neurons [1]. It is also known as the human body's core section. Furthermore, it is known as the "processor" or "kernel" of the nervous system, and it plays the most important and critical role in the nervous system [2,3]. To the best of our knowledge, diagnosing brain disease is too difficult and complex due to the presence of the skull around it [4]. Utilizing technology to evaluate individuals with the aim of identifying, tracking, and treating medical issues is known as medical imaging. In medical imaging, magnetic resonance imaging (MRI) is a precise and noninvasive technique that can be used to diagnose a variety of disorders. In the last few decades, many scholars have proposed various state-of-the-art methods for brain MRI classification, and most of them focused on various modules of the MRI systems. A latest convolutional neural network-based MRI method, data expansion, and image processing were proposed by [5] to recognize brain MRI images in various diseases. They compared the significance of their approach with pre-trained VGG-16 in the presence of transfer learning using a small dataset. Another deep learning-based method for detecting • In the preprocessing step, the MRI images have been enhanced through existing well-known techniques like global histogram equalization. • Then, for feature extraction, an accurate and robust technique is proposed that is based on symlet wavelet transform. 
This technique yields better classification outcomes because it can handle the orthogonal, biorthogonal, and reverse biorthogonal features of gray scale images. Our tests support the frequency-based supposition. The wavelet coefficients' statistical reliance was assessed for each frame of grayscale MRI data. A gray scale frame's joint probability is calculated by collecting geometrically aligned MRI images for each wavelet coefficient. In order to determine the wavelet coefficients obtained from these distributions, the mutual information between the two MRI images is used to calculate the statistical dependence's intensity. • Following the extraction of the best feature, a linear discriminant analysis (LDA) was used to minimize the feature space's dimensions. • Following the selection of the best features, the model is trained using logistic regression, which uses the coefficient values to determine which characteristics (i.e., which pixels) are crucial in deciding which class a sample belongs to. The per-class probability for each sample may be computed using the coefficient values, and the conditional probability for each class can be computed using this method. In general, the class with the highest probability might be found to acquire the predicted label. • In order to assess the performance of the proposed approach, a comprehensive set of experiments was performed using the brain MRI dataset, which has 24 various kinds of brain diseases. For this assessment, a comprehensive dataset is collected from Harvard Medical School [14] and Open Access Series of Imaging Studies (OASIS) [15], which has total 24 various kinds of diseases such as The entire paper is organized as follows: Section 2 describes the existing MRI systems along with their respective disadvantages. Section 3 presents the proposed approach, while, the experimental setup is described in Section 4. Based on the experimental setup, the results are shown in Section 5. Finally, Section 6 summarizes the proposed approach along with future directions. Literature Review In the past couple of years, lots of efficient and accurate studies have been done for the classification of numerous types of brain ailments using MRI images. Most of these studies showed the best performances on a small dataset of brain MRI. However, their performances degraded accordingly on larger testing datasets. Therefore, a robust and accurate framework has been designed that showed good classification results on a large brain MRI dataset. A novel method has been proposed by [16] that is based on statistical features coupled with various machine learning techniques. They claimed the best performance on a small MRI dataset. However, computational-wise, this approach is much more expensive. A state-of-the-art framework has been designed by [17], which classified the Alzheimer disease using MRI images. In this framework, the corresponding MRI image has been enhanced in the preprocessing step, while the brain tissues are segmented in the postprocessing step. Then several deep learning techniques (convolutional neural network) are employed to classify the corresponding disease. However, the convolutional neural network has an overfitting problem [18]. Also, this approach has been tested and validated on a small dataset. Similarly, an accurate and robust method was proposed by [19]. They utilized stepwise linear discriminant analysis (SWLDA) for feature extraction and support vector machines for classification on a large brain MRI dataset. 
They achieved the best performance using the MRI dataset. However, SWLDA is a linear method that might be employed in a small subspace of binary classification problems [20]. On the other hand, an integrated approach was designed by [21], where the authors integrated a feature-based classifier and an image-based classifier for brain tumor clas-sification. Further, their proposed architecture was based on deep neural networks and deep convolutional networks. They achieved a comparable classification rate. However, a huge number of training images and the carefully constructed deep networks required for this approach [22]. Similarly, a state-of-the-art framework was designed by [23] in order to classify brain MRI along with gender and age. They utilized deep neural network, convolutional network, LeNet, AlexNet, ResNet, and SVM to classify abnormal and normal MRIs accurately. However, they showed better performance on a small dataset, and most of the experiments were in a static environment. Likewise, the authors of [24] proposed an efficient brain image classification system using an MRI dataset. In their system, they extracted the features by shape and textual method, such as region based active contour, and showed good performance. However, the major limitation of the region-based method is its' sensitivity to the initialization, and because of this, the region of interest does not segment properly [25]. A cutting-edge method for classifying various brain illnesses using MRI images was reported by Nayak et al. [26]. They utilized convolutional neural network-based dense EfficientNet coupled with min-mix normalization for categorization, and they showed better performance using the MRI dataset. However, this approach employs a huge number of operations, which make the model computationally slower [27]. Similarly, an integrated framework was designed by [28], where the authors employed a semantic segmentation network coupled with GoogleNet and a convolutional neural network (CNN) for brain tumor classification using MRI and CT images. They achieved better results using a small dataset of brain MRI and CT. However, in GoogleNet, the connected layers cannot manage various input image sizes [29]. A fully automated brain tumor segmentation approach was developed by [30] that was based on support vector machines and CNN. Moreover, the segmentation was done through the details of various techniques such as structural, morphological, and relaxometry. However, the methodologies utilized in this framework have comparatively lower significance with larger amounts of input MRI images [31]. Because it is a challenging task for these methods to accurately detect the abnormalities in the brain MRI images [31]. Moreover, a modified CNN based model was developed by [32] for the analysis of brain tumors. The authors employed CNN along with parametric optimization techniques such as the sunflower optimization algorithm (SFOA), the forensic-based investigation algorithm (FBIA), and the material generation algorithm (MGA). They claimed the highest accuracy of classification using the MRI dataset. However, SFOA is very sensitive to initializing and premature convergence [33]. Moreover, in MGA, the predictions are made based on single-slice inputs, hypothetically restraining the information available to the network [34]. 
An integrated framework was proposed by [35], which was based on the VGG19 feature extractor along with a progressive growing generative adversarial network (PGGAN) augmentation model for brain tumor classification using MRI images. They achieved good classification results on a publicly available MRI dataset. However, this approach cannot generate high-resolution images via the PGGAN model [36], and it might not generate new examples with objects in the desired condition [37]. Another state-of-the-art scheme was proposed by [38], which contained steps such as preprocessing, segmentation, feature extraction, and classification. The image was enhanced via a Wiener filter followed by edge detection, the tumor was segmented by a mean shift clustering algorithm, the features were extracted from the segmented tumor through the gray level co-occurrence matrix (GLCM), and the classification was done by support vector machines. However, the GLCM method is robust to Gaussian noise, and the extracted features are based on the difference between the corresponding pixels, while the magnitude of the difference is not taken into account [39]. A state-of-the-art fused method was developed by [40] that was based on gray level co-occurrence matrix (GLCM), spatial grey level dependence matrix (SGLDM), and Harris hawks optimization (HHO) techniques followed by support vector machines for brain tumor detection. However, this approach depends on the manual selection of the region of interest, due to which the results depend on the parameter values of the manually extracted region [40]. As a result, in this work, a solid framework was created for the classification of various brain illnesses using an MRI dataset. A symlet wavelet-based feature extraction method was designed and is used in the proposed framework to extract the key features from brain MRI images. Furthermore, the dimensions of the feature space are reduced by LDA, and the classification is done through logistic regression. The proposed approach achieved the best classification results using MRI images compared to the existing publications.
Proposed Feature Extraction Methodology. The overall working diagram of the proposed brain MRI classification approach is presented in Figure 1. Preprocessing. Most images contain extra elements, including background information, lighting effects, and pointless details that could lead to classification errors. To facilitate quick processing and enhance image quality, it is crucial to remove any superfluous parameters. To enhance the quality of the images by extending the intensity dynamic range using the histogram of the entire image, global histogram equalization (GHE) is used in the preprocessing stage. In essence, GHE finds the histogram's cumulative sum, normalizes it, and then multiplies it by the value of the highest gray level; these values are then mapped back onto the original grey levels using a one-to-one correspondence. GHE's transformation function is given in Equation (1), G_k = C(r_k) = ∑_{i=0}^{k} P(g_i) = ∑_{i=0}^{k} n_i / n (1), where k = 0, 1, 2, ..., N − 1, 0 ≤ G_k ≤ 1, n is the total number of pixels in the input image, n_i is the number of pixels with grey level g_i, and P(g_i) is the PDF of the input grey level. To evenly distribute the brightness histogram of an image I in GHE, the image must first be normalized before the PDF can be calculated, as expressed by Equation (2); the cumulative density function (CDF) dependent on the PDF is denoted by C(r_k) in (1).
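A minimal NumPy sketch of the GHE mapping in Equation (1), i.e., the cumulative histogram normalized and scaled to the maximum grey level, is shown below. It is an illustration of the described transform, not the authors' MATLAB implementation.

```python
import numpy as np

def global_histogram_equalization(img, levels=256):
    """Map each grey level through the normalized cumulative histogram
    (Eq. (1)) scaled by the maximum grey level."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=levels)    # n_i
    pdf = hist / img.size                                # P(g_i) = n_i / n
    cdf = np.cumsum(pdf)                                 # C(r_k) = sum P(g_i)
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)  # one-to-one mapping
    return lut[img]

# Example: equalize a synthetic low-contrast 8-bit slice
slice_ = np.random.normal(100, 10, (256, 256)).clip(0, 255).astype(np.uint8)
enhanced = global_histogram_equalization(slice_)
```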
Symlet Wavelet Transform. Following the preprocessing stage of enhancing the MRI images, the symlet wavelet transform has been used to extract a number of standout features from the MRI images. The decomposition method was employed in this procedure, which requires grayscale frames, so the input was converted from RGB to grayscale in order to increase the algorithm's effectiveness. Wavelet decomposition can be understood as the decomposition of the signal into a group of distinct feature vectors, each of which includes smaller sub-vectors, where F represents the 2D feature vector. Assume a 2D MRI image Y that has been divided into orthogonal sub-images for various visualizations. One level of decomposition is depicted in Equation (3), Y = R_1 + P_1 (3), where R_1 and P_1 denote the rough and precise coefficient vectors, respectively, and Y denotes the decomposed image. If the MRI image is divided into multiple levels, then Equation (3) can be expressed as Y = R_j + ∑_{i=1}^{j} P_i (4), where j indicates the decomposition level. Only the rough coefficients were used for feature extraction, because the precise coefficients are typically made up of noise. Each frame is divided into up to four levels of decomposition (j = 4), because beyond this value the image loses a lot of information, making it difficult to discover the useful coefficients and perhaps leading to misclassification. The precise coefficients further consist of three sub-coefficients, so Equation (4) can be written with P_i = {P_v, P_h, P_d}, where P_v, P_h, and P_d represent the vertical, horizontal, and diagonal coefficients, respectively. As can be seen from these relations, all the coefficients are linked to one another in a chain, making it simple to identify the salient features. Figure 3 displays these coefficients graphically. For each stage of the decomposition, the rough and precise coefficient vectors are produced by passing the signal through low-pass and high-pass filters, respectively. The feature vector is produced by averaging all the frequencies present in the MRI images following the decomposition procedure. The frequency of each MRI image within a given time window has been calculated by applying the wavelet transform to the analysis of the relevant frame [41], where ϕ_{f,e} is the wavelet function for estimating frequency and t is the time. In order to obtain a greater level of judgment for frequency estimation, x is the scale of the wavelet between the lower and upper frequency boundaries. Moreover, y represents the wavelet's position within the time frame with respect to the signal sampling period, and the wavelet coefficients with the supplied scale and position parameters are denoted by W(a_i, b_j); their mode frequency conversion uses the wavelet function's average frequency f_a(ϕ_{f,e}) and the sampling period δ of the signal. In order to obtain the feature vector, the frequencies of the entire image for each MRI are averaged, where K denotes the total number of frames for every MRI image, f_K is the last frame of the current disease, and f_avg denotes the average frequency value for every MRI image; it is also the feature vector for that MRI.
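A possible realization of the four-level symlet decomposition described above, using PyWavelets, is sketched below. The specific symlet order ('sym4') and the use of only the coarsest rough sub-band are assumptions, since the text does not fix these details, and the subsequent frequency-averaging step is not reproduced here.

```python
import numpy as np
import pywt

def symlet_rough_features(gray_img, wavelet="sym4", levels=4):
    """Four-level 2-D symlet decomposition of a grayscale MRI frame.
    Returns the level-4 rough (approximation) coefficients flattened into a
    feature vector; the precise detail sub-bands P_v, P_h, P_d are discarded
    as mostly noise, following the description above."""
    coeffs = pywt.wavedec2(np.asarray(gray_img, dtype=float),
                           wavelet=wavelet, level=levels)
    rough = coeffs[0]              # R_4: coarsest low-pass sub-band
    return rough.ravel()

# Example on a dummy 256x256 slice
features = symlet_rough_features(np.random.rand(256, 256))
print(features.shape)
```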
Feature Selection and Dimension Reduction via Linear Discriminant Analysis (LDA). LDA ensures maximum separability by maximizing the ratio of between-class variation to within-class variance in any given data set. LDA is widely used to classify data, for example in speech recognition classification problems. The input is mapped into the classification space, where the samples' class identity is determined by an optimal linear discriminant function produced by LDA. When the within-class frequencies are unequal and performance is evaluated on randomly generated test data, LDA handles the situation with ease. The within-class scatter VAR_W and the between-class scatter VAR_B are compared, where c is the total number of classes (in our case, c represents the total number of MRI diseases within each state), V_i represents a vector in the ith class C_i, m_i represents the mean of the class C_i, m_k represents the vector of a specific class, and m represents the mean of all vectors. The optimal projection matrix for discrimination, D_o, is obtained by maximizing the determinant ratio of the between-class and within-class scatter matrices, D_o = argmax_D |D^T VAR_B D| / |D^T VAR_W D|, where D_o is the collection of discriminant vectors of VAR_W and VAR_B that correspond to the c − 1 highest generalized eigenvalues ω. D_o has a size of t × r (t ≤ r), where r is the dimension of a vector, the upper bound of t is c − 1, and the rank of VAR_B is c − 1 or less. Thus, LDA minimizes the within-class scatter of classes such as the MRI diseases while maximizing the total dispersion of the data. Please refer to [42] for additional information on LDA.
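The reduction step can be sketched with scikit-learn's LDA, which projects onto at most c − 1 discriminant directions as noted above; using scikit-learn rather than the authors' own implementation, and the toy dimensions below, are assumptions made for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def reduce_with_lda(X_train, y_train, X_test, n_classes=24):
    """Project wavelet feature vectors onto at most (c - 1) discriminant
    directions that maximize between-class vs. within-class scatter."""
    lda = LinearDiscriminantAnalysis(n_components=n_classes - 1)
    Z_train = lda.fit_transform(X_train, y_train)
    Z_test = lda.transform(X_test)
    return Z_train, Z_test

# Toy example: 100 feature vectors of length 361 with 24 disease labels
X = np.random.rand(100, 361)
y = np.arange(100) % 24          # every class appears in the training split
Ztr, Zte = reduce_with_lda(X[:80], y[:80], X[80:])
print(Ztr.shape)                 # (80, 23)
```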
Classification via Logistic Regression. A popular linear model that can be used for image categorization is logistic regression. In this model, a logistic function is used to model the probabilities describing the possible outcomes of a single trial. Logistic regression can be binary, e.g., one-vs-rest, or multinomial, with optional ℓ1, ℓ2, or Elastic-Net regularization. As an optimization problem, binary-class ℓ2-regularized logistic regression minimizes the cost function min_{w,c} (1/2) w^T w + C ∑_i log(1 + exp(−y_i (X_i^T w + c))). Similarly, ℓ1-regularized logistic regression minimizes min_{w,c} ||w||_1 + C ∑_i log(1 + exp(−y_i (X_i^T w + c))). Elastic-Net regularization is a combination of ℓ1 and ℓ2 and minimizes min_{w,c} ((1 − ρ)/2) w^T w + ρ ||w||_1 + C ∑_i log(1 + exp(−y_i (X_i^T w + c))), where ρ regulates the relative strength of ℓ1 regularization versus ℓ2 regularization. Note that, in this notation, the target y_i is assumed to take values from the set {−1, 1} at trial i. Additionally, Elastic-Net is identical to ℓ1 when ρ = 1 and to ℓ2 when ρ = 0. Please see [43] for a comprehensive treatment of logistic regression. Designed Approach Evaluation. The proposed technique is evaluated in the following order to show its performance. MRI Images Dataset. A comprehensive and generalized MRI dataset was created that contains actual MRI images from the Harvard Medical School and OASIS MRI databases. The collection contains brain MRI images that have been T1 and T2 weighted. Each input image is 256 × 256 × 3 pixels in size and is accompanied by demographic and clinical data, including the patients' gender, age, clinical dementia rating, mental state observation, and test parameters. The patients are all right-handed. The dataset is separated into two groups: the first comprises eleven diseases (and is used as a benchmark dataset by most existing works), and the second contains 24 diseases, including the eleven from the first group; for large-scale experiments, this second group is more representative. The overall number of brain MRI images in the first group is 255 (220 abnormal and 35 normal), while the total number of images in the second group is 340 (260 abnormal and 80 normal). Experiment Settings. The performance of the developed approach is assessed using the extensive set of experiments below, which are carried out in MATLAB on a machine with 8 GB of RAM and a processor running at 1.7 GHz. • The first experiment is implemented in order to assess the significance of the developed method on a publicly available MRI dataset. The entire experiment is performed against an n-fold cross-validation scheme, where every image is used for both training and testing. • The second experiment assesses the contribution of the proposed feature extraction method by replacing it with existing state-of-the-art feature extraction techniques under the same settings. • Finally, the third experiment prescribes the comparison of the developed approach against the state-of-the-art systems. This experiment was performed against three major measurement rules, namely sensitivity, accuracy, and specificity, which are measured through the values of false positives and false negatives. Experimental Results. The performance of the proposed approach is evaluated through the following comprehensive set of experiments, which are presented in the following order. 1st Experiment. This experiment presents the significance of the developed technique on the brain MRI dataset. An n-fold cross-validation rule was used, where every MRI image has been used accordingly for training and validation. Table 1 contains the performance of the proposed approach and clarifies that the developed technique achieved the best classification rates on a large brain MRI dataset.
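The classification and first-experiment protocol can be sketched as an elastic-net logistic regression (matching the cost function given above) scored with n-fold cross-validation; the solver, regularization strength, scaling step, and fold count below are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_classifier(Z, y, folds=10, l1_ratio=0.5):
    """n-fold cross-validated accuracy of an elastic-net logistic regression
    on the LDA-reduced feature vectors Z with disease labels y."""
    clf = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=l1_ratio, C=1.0, max_iter=5000),
    )
    scores = cross_val_score(clf, Z, y, cv=folds)
    return scores.mean(), scores.std()

# Toy usage on LDA-style outputs (23 discriminant components, 24 classes)
Z = np.random.rand(120, 23)
y = np.arange(120) % 24
mean_acc, std_acc = evaluate_classifier(Z, y, folds=5)
print(f"{mean_acc:.3f} +/- {std_acc:.3f}")
```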
These classification rates follow because the statistical reliance of the wavelet coefficients is measured in the proposed method: the joint probabilities are calculated by collecting geometrically aligned MRI images for each wavelet coefficient, and the mutual information between two MRI images is used to quantify the intensity of the statistical dependence of the wavelet coefficients obtained from these distributions. The execution time for the classification of every class using the proposed approach was 21.5 s on the brain MRI dataset, which shows that the proposed approach is not only accurate but also computationally inexpensive. 2nd Experiment. In the second type of experiment, a number of tests were performed to demonstrate the value of the suggested feature extraction method for the classification of brain MRI images. The existing state-of-the-art feature extraction techniques are employed in the MRI system instead of the proposed feature extraction method, with the same experimental setup as in the first experiment. Speeded Up Robust Features, Gray Texture Features, Fusion Feature, Latent Semantic Analysis, Partial Least Squares, Semidefinite Embedding, and Independent Component Analysis are each employed in the respective MRI system. The entire results are presented in Tables 2-8, which report the per-illness classification rates (unit %); for instance, the method reported in Table 2 achieves an average of 85.04%, Table 3 reports the performance of the Gray Texture Features method, and Table 4 that of the Fusion Feature method. 3rd Experiment. Finally, in this experiment, we have compared the recognition rate of the proposed approach against existing state-of-the-art systems. These systems were implemented using the settings described in their respective articles. For some systems, we have borrowed their respective implementations, while for others we have utilized the results reported in their respective studies, for a fair comparison. Moreover, the proposed approach and the existing state-of-the-art methods are measured through different measurement schemes such as sensitivity, accuracy, and specificity. For every measurement, we utilized the following formulas: Sensitivity = T_p / (T_p + F_n), Accuracy = (T_p + T_n) / (T_p + T_n + F_p + F_n), and Specificity = T_n / (T_n + F_p) (20), where T_p is true positive, T_n is true negative, F_p is false positive, and F_n is false negative. The entire comparisons for the aforementioned measurements are presented in Tables 9-11. Table 9 compares the recognition rates and misclassifications of the state-of-the-art methods (including, e.g., the method of Orouskhani et al.) against the proposed approach on the MRI brain dataset (595 images in total, of which 115 are normal and 480 abnormal), and Table 11 compares the methods in terms of sensitivity, accuracy, and specificity on the same dataset. Table 9 shows that the designed framework achieved remarkable results compared to the state-of-the-art studies.
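A small helper implementing the evaluation formulas above from one-vs-rest confusion counts is shown below; the example counts are illustrative only, not values taken from Tables 9-11.

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Sensitivity, accuracy, and specificity from one-vs-rest confusion
    counts, as defined in the evaluation formulas above."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, accuracy, specificity

# Example: 460 of 480 abnormal images detected and 110 of 115 normal images
# correctly rejected (illustrative counts on a 595-image split)
print(diagnostic_metrics(tp=460, tn=110, fp=5, fn=20))
```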
This is because the proposed framework handles the orthogonal, biorthogonal, and reverse biorthogonal properties of grayscale images, which produces higher classification results. Similarly, Table 10 presents the effectiveness of the proposed approach through a comparison against state-of-the-art methods in terms of true positives, true negatives, false positives, and false negatives. Likewise, Table 11 provides a comparison between the proposed approach and the existing studies in terms of accuracy, sensitivity, and specificity. As can be seen, the proposed approach provides better sensitivity and specificity results compared with existing state-of-the-art methods. Conclusions. In medical imaging, magnetic resonance imaging (MRI) is a precise and noninvasive technique that can be used to diagnose a variety of disorders. Various algorithms for brain MRI categorization have been developed by a number of researchers. The majority of these algorithms performed well and had high identification rates on small MRI datasets; when dealing with larger MRI datasets, however, their performance degrades. As a result, the objective is to create a quick and precise classification system that can sustain a high identification rate across a sizable MRI dataset. To this end, in this study, a well-known enhancement method called global histogram equalization (GHE) is used to reduce undesirable information in MRI images. Furthermore, a reliable and accurate feature extraction technique is suggested for extracting and selecting the most prominent features from an MRI image. The suggested feature extraction method for grayscale images is a compactly supported wavelet that has the greatest number of vanishing moments and the least amount of asymmetry for a given support width. Our study supports the frequency-based hypothesis. The statistical dependence of the wavelet coefficients is assessed for all grayscale MRI images: a grayscale frame's joint probability is calculated by collecting geometrically aligned MRI images for each wavelet coefficient, and the degree of statistical dependence between two MRI images is evaluated using the mutual information of the wavelet coefficients derived from these distributions. Furthermore, linear discriminant analysis is used after feature extraction to choose the best features and lower the dimensions of the feature space, which may improve the performance of the recommended method for generating feature vectors. Finally, logistic regression is used to classify the brain illnesses. To assess and test the suggested method, a large dataset from Harvard Medical School and OASIS is utilized, which comprises a total of 24 distinct types of brain disorders. In the proposed approach, the optimum set of features, which is important for improving the accuracy, is extracted from the MRI images. The rate of convergence is also one of the main factors improving the accuracy of this research, and the number of features in this approach is kept small in order to reduce the computational complexity. Therefore, in the future, the proposed approach will be extended to MRI datasets in various healthcare domains. Moreover, the proposed approach is robust and efficient, which might be useful for real-time diagnostic applications in the future, and the proposed method might play a significant role in helping radiologists and physicians with the initial diagnosis of brain diseases using MRI.
7,525
2022-11-01T00:00:00.000
[ "Computer Science" ]
Sim Card Alarm for Android Smartphone — Since the first Android smartphones were released, many applications have been developed until today. One of them is security application. Security applications consist of several types of applications such as Antivirus, SIM Card Change Alarm, and Applocker. SIM Card Changed Alarm is a security application that has capability to monitor SIM Card change in Android smartphones. Although there are many SIM Card Change Alarm applications which are available in Google Playstore, there are many people who are not satisfied with this applications. It happens due to them only sends the SMS alert to the owner without locking the phone to prevent other people using it. The outcome of this research will have the same feature as SIM Card Change Alarm but with additional features such as locking the smartphone if the SIM Card has been changed, locking and unlocking remotely using SMS, triggering loud alarm, and calling-back to owner when the smartphone has been lost or stolen. The outcome expected for the Prifone Application is a user-friendly application which could be used by many people around the world and also is able to secure Android smartphones. INTRODUCTION Nowadays, the gadget can hardly be separated from human life, especially smartphone.With the smartphones people almost can do everything like reading, taking pictures with better camera, watching HD movies, playing games, chatting, and browsing.Therefore, the existence of smartphones has become important for the people to make their lives easier. Android has become one of the leading smartphone operating systems in comparison to other smartphone operating systems.Many smartphone products use Android as their operating system.Open source, easy to customize, and a support of thousand application by third-party.Those are the reasons why Android becomes the most popular operating system for smartphones. As a gadget that is often used, people usually store private data such as photos, videos, and login to several social media applications like Facebook, Line, Whatsapp, and Twitter.It is really dangerous if the people lose their smartphones or stolen by the thieves.The thief can access private data on that smartphone or using account that is registered on that smartphone for something bad.The owner can do nothing to prevent that from happening.The owner usually only tries to call to the number that is used by their missing phone and it is not working because usually the thief already changes the SIM card.Based on that, this research will create an application that has features to lock the smartphones and callback to the owner if the SIM card has been changed. A. Subscriber Identity Module (SIM) SIM associates a physical card used in smarphones to a subscriber of the Mobile Network Operator.The SIM's storage also includes a unique serial number ICCID (Integrated Circuit Card Identifier) which identifies the SIM globally and unique IMSI (International Mobile Subscriber Identity).SIM card usage can be controlled with two password: PIN and PUK.PUK is used as a remedy if PIN has been entered incorrectly too many times [2]. 
The file system of a SIM is organized in a hierarchical tree structure, it consists of the following three types of elements: Master File, Dedicated File and Elementary File [3] and according to ETSI standards [5] the SIM card provides a possibility of storing files, the ISSN 2085-4579 capacity of the module makes possible to store a considerable amount of keys which are less than 1 Kb in size [6]. B. Integrated Circuit Card Identifier(ICCID) The integrated circuit card identification is a unique numeric identifier for the SIM that can be up to 20 digits long.The ICCID can be read from the SIM without providing a PIN and can be never updated [1]. C. International Mobile Subscriber Identity(IMSI) IMSI (International Mobile Subscriber Identity), is a unique number that is associated with all GSM (Global System for Mobile Communications) and UMTS (Universal Mobile Telecommunications System) network mobile phone users.An International Mobile Subscriber Identity is up to 15 digits long.The first three digits represent the country code, followed by the network code.The remaining digits, up to fifteen represents the unique subscriber number from within the network's customer base [4]. III. SySTEM OvERvIEW The outcome of this research is an application which is named as "Prifone".The Prifone application is one of many security applications that develop in Android operating system.The Prifone application is intended to assist the user to prevent Android smartphone can be used by others when the smartphone has been lost or stolen, keep monitoring the SIM Card change in the Android smartphone, and assist the user to get the smartphone back if the smartphone has been lost. Prifone application allow the user to: • Enable or Disable the SIM Card Alarm. • Register the SIM Card based on ICCID and IMSI number on the SIM Card.• Register the remote number that will be used by another phone user as a remote control.• Set the lock keyword that will be used to lock the smartphone using SMS remotely. Set the unlock keyword that will be used to unlock the smartphone using SMS remotely.• Set the setting keyword that will be used to get the current setting of Prifone using SMS remotely.• Set the password that will be used to access Prifone application.• Uninstall the Prifone application from inside the application. The security objectives of this research: -A personalized device shall only work with a specific subset of SIMs.-Only a user with a valid unlock code shall be able to depersonalize the device. This application will rely on SMS keyword that sent from remote number that already set in the Prifone application.There are several ways which the user can get the result of this application. First, Prifone will lock the smartphone, trigger the alarm and run the callback to the owner, if (a) the phone receives sms that contains lock keyword from remote number, (b) the user changes the SIM Card without disabling the SIM Card alarm first, (c) Prifone detects there is no SIM Card in the smartphone and (d) the user fails three times entering the password to access Prifone application. Second, Prifone will unlock the smartphone and stopped the alarm if the phone received sms that contains unlock keyword from remote number.Third, Prifone will automatically send SMS contains Prifone current setting if the phone received sms that contains setting keyword from remote number.Last, the Prifone will uninstall itself if the user taps uninstall in the Prifone menu. 
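The original Prifone code is written in Java for Android; the Python sketch below captures only the decision logic just described (case-insensitive keyword matching, sender verification against the registered remote number, and the changed/absent-SIM checks at boot). The cfg dictionary and the device object are illustrative stand-ins for Android facilities such as SharedPreferences, TelephonyManager, and the lock/alarm/call actions; all of their names are assumptions.

```python
def normalize_number(number):
    """Rough country-code-insensitive comparison key; the app itself uses
    PhoneNumberUtils.compare plus an exact-match fallback."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return digits[-10:]                     # compare trailing digits only

def handle_sms(body, sender, cfg, device):
    """Dispatch an incoming SMS according to the registered keywords."""
    if normalize_number(sender) != normalize_number(cfg["remote_number"]):
        return                              # ignore unknown senders
    text = body.strip().lower()
    if text == cfg["lock_keyword"].lower():
        device.lock()
        device.start_alarm()
        device.call_back(cfg["remote_number"])
    elif text == cfg["unlock_keyword"].lower():
        device.unlock()
        device.stop_alarm()
    elif text == cfg["setting_keyword"].lower():
        device.send_sms(cfg["remote_number"], str(cfg))

def check_sim_on_boot(current_iccid, current_imsi, cfg, device):
    """Boot-time check: alert and lock when the SIM is missing or changed."""
    if current_iccid is None or current_imsi is None:
        device.lock()                       # absent-SIM lock screen
        device.start_alarm()
    elif (current_iccid, current_imsi) != (cfg["iccid"], cfg["imsi"]):
        device.send_sms(cfg["remote_number"], "SIM card changed")
        device.lock()                       # unauthorized-SIM lock screen
        device.start_alarm()
        device.call_back(cfg["remote_number"])
```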
Whenever the Profine detect different SIM Card placed in the smartphone, the system will detect the SIM Card ICCID and IMSI after boot complete.After that, system will compare new ICCID and IMSI with ICCID and IMSI that already saved in shared preference.The result is not match then the system will send SMS alert to remote number.After that, the system will lock the phone.When the phone is locked the system will trigger the loud alarm and callback to the owner as shown in Figure 1.SMS permission to be able receive sms keyword from remote number, send prifone current setting to remote number and automatic callback to remote number.Furthermore, prifone also need RECEIVE_BOOT_COMPLETED, MODIFY_ AUDIO_SETTINGS and SYSTEM_ALERT_ WINDOW permission to run on startup, change audio setting to turn on and turn of speakerphone and to draw other apps when Prifone lock screen running. B. Register SIM Card in Prifone List Setting Menu The first thing to do is import onpreferenceClick and implements OnPreferenceClickListener in setting activity class to listen preference click.After that in onpreferenceClick method listen register simcard preference key.After application get the preference key initialize sharedpreference as place to store IMSI and ICCID SIM card, to use and edit sharedpreference system need import SharedPreferences and SharedPreferences as shown in Figure 3. In order to detect SIM Card ICCID and IMSI the application require permission READ_ PHONE_STATE that declare in application manifest file as shown in Figure 4.After that call TelephonyManager API to detect SIM Card.If application successfully detect SIM Card then it will get ICCID and IMSI SIM Card using getSimSerialNumber() and getSubscriberId() API and convert it to string.After get ICCID and IMSI system will edit defaultsharedpreference and put ICCID and IMSI into it then save it.Finally system will show the alert "Sim Card Registered Successfully" and change preference summary to "Registration Completed".But, if the system detect there is no SIM Card inserted into the phone then system will show the message alert "Sim Card Not Detected". C. 
Register Lock, Unlock, and Setting Keyword Figure 5 show code how to register lock keyword, unlock keyword and setting keyword.First, system will check which preference key that user change.If preference key equals setlockkeyword then system will get user input from edittextpreference setlockkeyword, then check the input, if the user input not equals ("") means user input something into edittextpreference then it will return true means data will automatically save into defaultsharedpreference and the data will be stored in with the name accordance with the key name.But, if user input equals ("") means user did not input anything in edittextpreference then system will show alert message "field cannot be empty".The system will do the same thing with setunlockkeyword and setsettkeyword.Register Password Figure 6 show code how to register password.First system will get the user input from password edittextpreference.After that, system will call isValidPass method to do the validation the password.Method isValidPass has function to check whether the password input contains lowercase, uppercase, and number with the length 6 -20 character.If isValidPass return true then system automatically save the password into default shared preference.But if isValidPass return false then system will show alertdialog that tell the user if the password must contains lowercase, uppercase, number and the length 6 -20 character. E. Check SIM Card on Startup The first thing to do to make the application run on startup is uses permission RECEIVE_ BOOT_COMPLETED.After that, create Class that extends Broadcastreceiver class then filter the receiver with android.intent.action.BOOT_ COMPLETED and give priority "2147483647" in android manifest file to make application run faster after boot completed as shown in Figure 7. Now start to check the SIM Card, First application will get registered SIM Card ICCID and IMSI from default sharedpreference.After that, application check if there are a SIM Card inserted in smartphone.If application detect there are no SIM Card inserted then application automatically run SimAbsentActivity that lock the smartphone and ring loud alarm.But if application detect SIM Card, application will start compare IMSI and ICCID that already saved in default sharedpreference with the new one.If the IMSI and ICCID matched then application will do nothing.But, if application detect different IMSI and ICCID then application automatically run UnauthorizedSimActivity that will lock the smartphone, run the callback feature and ring loud alarm. F. Send Current Prifone Setting The first thing to do to make application able to receive SMS is add permission RECEIVE_SMS In android manifest.After that get the message body and the sender number.Now, application will compare message body with keyword, to compare it used equalsignorecase(), therefore, it will be case insensitive.To compare number it used two ways, first, it used PhoneNumberUtils.compare() it will ignore country code.For example +6285286636771 is same with 085286636771 but this method did not work on some smartphone.Therefore, prifone used the second way to compare the phone number using equals method.The Code is shown in Figure 8. .9show the code to make phone call, to make application able make phonecall application required CALL_PHONE permission and to turn on speakerphone automatically application also required MODIFY_AUDIO_SETTING. e. 
Set Unlock Keyword. Figure 13 shows the Set Unlock Keyword edit textbox, which is displayed when the user taps Set Unlock Keyword in the Prifone menu. This edit textbox is needed to obtain the unlock keyword entered by the user. If the user has already set an unlock keyword and wants to change it, the user simply taps Set Unlock Keyword again, removes the old keyword, and replaces it with the new one; after tapping OK, the old unlock keyword is automatically replaced with the new one. f. Lock Screen. The lock screen is shown if the application receives an SMS containing the lock keyword sent from the remote number, or if the user fails three times to enter the password on the login screen. When the lock screen is shown, the application also calls back to the remote number and triggers the loud alarm. When this happens, the only way to unlock the phone and stop the alarm is to send an SMS containing the unlock keyword from the remote number already set in the Prifone menu. g. Unauthorized SIM Lock Screen. The Unauthorized SIM lock screen is shown if the application detects that a different SIM card has been inserted on startup. When the Unauthorized SIM lock screen is shown, the application also calls back to the remote number and triggers the loud alarm. When this happens, the only way to unlock the phone and stop the alarm is to send an SMS containing the unlock keyword from the remote number already set in the Prifone menu. h. Absent SIM Card Lock Screen. The Absent SIM Card lock screen is shown if the application detects that no SIM card has been inserted on startup. When the Absent SIM Card lock screen is shown, the loud alarm is triggered. When this happens, the only way to unlock the phone and stop the alarm is to remove the battery, insert a SIM card, turn the smartphone back on, and then send an SMS containing the unlock keyword from the remote number already set in the Prifone menu. VI. CONCLUSIONS. This research aims to develop a security application for Android smartphones that can lock the smartphone if the SIM card has been changed. The owner of the smartphone can also lock the smartphone remotely using SMS if the smartphone has been lost or stolen. The application also has a feature to ring a loud alarm that can panic a thief. Furthermore, the application has a callback-to-owner feature, so that if the smartphone has been lost and someone finds it, that person can tell the owner where the smartphone is. This application runs best when the smartphone has enough balance to make phone calls. However, the application still has weaknesses, such as not supporting dual-SIM devices and a callback feature that only works if the smartphone has enough balance. Figures: Figure 1, Activity Diagram of Unauthorized SIM; Figure 2, SDK and Permission Requirements; Figure 4, Register SIM Card ICCID and IMSI; Figure 5, Register Lock, Unlock, and Setting Keyword Code; Figure 6, Check Password Code; Figure 7, Check SIM Card on Startup; Figure 8, SMS Receiver Lock and Unlock Code; Figure 9, Code to Start and Stop Alarm; Figure 10, Code to Make a Phone Call; Figure 11, Setting Menu Screen; Figure 12, Set Remote Number Interface; Figure 13, Set Unlock Keyword Interface.
SIM Card Change Alarm is a security application that can monitor SIM card changes in Android smartphones. Although many SIM Card Change Alarm applications are available in the Google Play Store, many people are not satisfied with them, because they only send an SMS alert to the owner without locking the phone to prevent other people from using it. The outcome of this research has the same feature as a SIM Card Change Alarm but with additional features such as locking the smartphone if the SIM card has been changed, locking and unlocking remotely using SMS, triggering a loud alarm, and calling back the owner when the smartphone has been lost or stolen. The expected outcome of the Prifone application is a user-friendly application that could be used by many people around the world and that is able to secure Android smartphones.
Index Terms-android, sim card alarm, security application
Sim Card Alarm for Android Smartphone
Rikip Ginanjar, Ridho Utomo
Faculty of Computing, President University, Bekasi, Indonesia <EMAIL_ADDRESS>
I.
3,715.6
2016-09-15T00:00:00.000
[ "Computer Science" ]
Gauge mediated supersymmetry breaking without exotics in orbifold compactification
We suggest SU(5)$'$ in the hidden sector toward a possible gauge mediated supersymmetry breaking scenario for removing the SUSY flavor problem, with an example constructed in $\mathbb{Z}_{12-I}$ with three families. The example we present has the Pati-Salam type classification of particles in the observable sector and has no exotics at low energy. We point out that six or seven very light pairs of ${\bf 5}'$ and $\bar{\bf 5}'$ out of ten vectorlike ${\bf 5}'$ and $\bar{\bf 5}'$ pairs of SU(5)$'$ are achievable, leading to the possibility of an unstable supersymmetry breaking vacuum. The possibility of different compactification radii for the three two-tori, toward achieving the needed coupling strength, is also suggested.
I. INTRODUCTION
Gauge mediated supersymmetry breaking (GMSB) has been proposed for removing the SUSY flavor problem [1]. However, no satisfactory GMSB model satisfying all phenomenological constraints has yet appeared from superstring compactification. GMSB relies on dynamical supersymmetry breaking [2]. The well-known GMSB models are an SO(10)$'$ model with $16'$ or $16' + 10'$ [3], and an SU(5)$'$ model with $10' + 5'$ [4]. If we also consider a metastable vacuum, a SUSY QCD type is possible in SU(5)$'$ with six or seven flavors, satisfying $N_c + 1 \le N_f < \frac{3}{2} N_c$ [5]. Three-family standard models (SMs) with this kind of hidden sector are rare. In this regard, we note that the flipped SU(5) model of Ref. [6] has one $16'$ and one $10'$ of SO(10)$'$, which therefore can lead to a GMSB model. But as it stands, the confining scale of SO(10)$'$ is near the GUT scale, and one has to break the group SO(10)$'$ by vacuum expectation values of $10'$ and/or $16'$. Then, we do not obtain the spectrum needed for a GMSB scenario and go back to the gaugino condensation idea. If the hidden sector gauge group is smaller than SU(5)$'$, it is not known which representation necessarily leads to SUSY breaking. The main problem in realizing a GMSB model is the difficulty of obtaining a supersymmetry (SUSY) breaking confining group with appropriate representations in the hidden sector while obtaining a supersymmetric standard model (SSM) with at least three families of the SM in the observable sector. In this paper, we address the GMSB in the orbifold compactification of the $E_8 \times E_8'$ heterotic string with three families at low energy. A typical recent example for the GMSB is where $Q$ is a hidden sector quark and $f$ is a messenger. Before Intriligator, Seiberg and Shih (ISS) [5], the GMSB problem had been studied in string models [7]. After [5], owing to the opening of new possibilities, the GMSB study has expanded considerably, and it is known that the above idea is easily implementable in ISS type models [8]. Here, we pay attention to the SUSY breaking sector, without discussing the messenger sector explicitly. The messenger sector $\{f, \cdots\}$ can usually be incorporated, using some recent ideas of [8], since many heavy charged particles appear at the GUT scale from string compactifications. The three-family condition works as a strong constraint in the search for the hidden sector representations. In addition, the GUT scale problem, namely that the GUT scale is somewhat lower than the string scale, is analyzed in connection with the GMSB. Toward the GUT scale problem, we attempt to introduce two scales of compactification in the orbifold geometry.
In this setup, we discuss physics related to the hidden sector, in particular the hidden sector confining scale relevant to the GMSB. If the GMSB scale is of order $10^{13}$ GeV, then the SUSY breaking contributions from gravity mediation and gauge mediation are of the same order and the SUSY flavor problem remains unsolved. To solve the SUSY flavor problem by the GMSB, we require two conditions: one is a relatively low hidden sector confining scale ($< 10^{12}$ GeV), and the other is a matter spectrum allowing SUSY breaking. Toward this kind of GMSB, at the GUT scale we naively expect a smaller coupling constant for a relatively big hidden sector nonabelian gauge group (such as SU(5)$'$ or SO(10)$'$) than the coupling constant of the observable sector. But this may not always be needed. The radii of the three two-tori can in principle be different, as depicted in Fig. 1. For simplicity, we assume the same radius $r$ for the (12)- and (56)-tori. A much larger radius $R$ is assumed for the second (34)-torus. For scales much larger than $R$, we have a 4D theory. In this case, we have four distance scales, $R$, $r$, $\alpha' = M_s^{-2}$, and $\kappa = M_P^{-1}$, where $\alpha'$ is the string tension and $M_P$ is the reduced Planck mass. The Planck mass is related to the compactification scales by $M_P^2 \propto M_s^8 r^4 R^2$. Assuming that strings are placed in the compactified volume, we have a hierarchy $1/R < 1/r < M_s < M_P$. The customary definition of the GUT scale, $M_{\rm GUT}$, is the unification scale of the QCD and electroweak couplings. For the 4D calculation of the unification of gauge couplings to make sense, we assume that the GUT scale is below the compactification scale $1/R$, leading to the following hierarchy, where we have not specified the ordering between $M_s$ and $M_P$. In Sec. II, we discuss phenomenological requirements in the GMSB scenario toward the SUSY flavor problem. In Sec. III, we present a $\mathbb{Z}_{12-I}$ example. In Sec. IV, we discuss the hidden sector gauge group SU(5)$'$ where a GMSB spectrum is possible.
II. SUSY FCNC CONDITIONS AND GAUGE MEDIATION
The MSSM spectrum between the SUSY breaking and GUT scales fixes the unification coupling constant $\alpha_{\rm GUT}$ of the observable sector at around $1/25$. If a complete SU(5) multiplet in the observable sector is added, the unification is still achieved, but the unification coupling constant will become larger. Here, we choose the unification coupling constant in the range $\alpha_{\rm GUT} \sim 1/30 - 1/20$. The GMSB scenario has been adopted to hide the gravity mediation below the GMSB effects so that SUSY breaking need not introduce large flavor changing neutral currents (FCNC) [1]: where $M_P$ is the reduced Planck mass, $2.44 \times 10^{18}$ GeV, and $M_X$ is the effective messenger scale. Now the expression (4) is used to give a constraint on $\alpha^h_{\rm GUT}$. Defining the inverse of the unification coupling constants, we express $A'$ in terms of the scale $\Lambda_h$.¹ If $M_{\rm GUT} \simeq 2 \times 10^{16}$ GeV and $\Lambda_h \simeq 2 \times 10^{10}$ GeV, we obtain $A'$ in terms of $-b^h_j$ as shown in Eq. (7).
¹ One can determine $\Lambda_h$ where $\alpha_h = \infty$, for which near $\Lambda_h$ the one-loop estimation is not valid. So we
If we consider a metastable vacuum, a SUSY QCD type is possible in SU(5)$'$ with six or seven flavors, $6({\bf 5}' + \bar{\bf 5}')$ or $7({\bf 5}' + \bar{\bf 5}')$ [5]. The reason that we have this narrow band of $N_f$ is that the theory must be infrared free in a controllable way in the magnetic phase. Three-family models with $\alpha' < 1/25$ are very rare, and we may allow at most up to a 20% deviation from the $\alpha_{\rm GUT}$ value, i.e., $\alpha' > 1/30$.
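Before turning to Fig. 2, it may help to collect the compactification-scale relations quoted earlier in this section in display form. This is only a restatement of the text, with order-one proportionality constants suppressed; no assumption is introduced beyond what is already stated there:

```latex
% Scale relations as quoted in the text (order-one constants suppressed).
\[
  \alpha' = M_s^{-2}, \qquad \kappa = M_P^{-1}, \qquad
  M_P^{2} \propto M_s^{8}\, r^{4} R^{2},
\]
\[
  \frac{1}{R} < \frac{1}{r} < M_s < M_P
  \quad \text{(strings in the compactified volume)},
  \qquad
  M_{\rm GUT} < \frac{1}{R}.
\]
```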
Then, from Fig. 2 we note that it is almost impossible to have an SO(10)$'$ model from the superstring toward the GMSB. The reason is that SO(10) The ISS type models are possible for SO($N_c$) and Sp($N_c$) groups also [5]. In this paper, however, we restrict our study to the SU(5)$'$ hidden sector only. We just point out that SO($N_c$) groups, with the infrared free condition in the magnetic phase for $N_f < \frac{3}{2}(N_c - 2)$, are also very interesting toward the unstable vacua, but the study of the phase structure here is more involved. On the other hand, we do not obtain Sp($N_c$) groups from orbifold compactification of the hidden sector $E_8'$.
III. A $\mathbb{Z}_{12-I}$ MODEL
We illustrate an SSM from $\mathbb{Z}_{12-I}$. The twist vector in the six-dimensional (6d) internal space is The compactification radius of the (12)- and (56)-tori is $r$ and the compactification radius of the (34)-torus is $R$, with a hierarchy of radii $r \ll R$. We obtain the 4D gauge group by considering massless conditions satisfying $P \cdot V = 0$ and $P \cdot a_3 = 0$ in the untwisted sector [9]. This gauge group is also obtained by considering the common intersection of the gauge groups obtained at each fixed point. SU(4): The SU(2)$_V$ is like SU(2)$_R$ in the Pati-Salam (PS) model [11]. The gauge group SU(4) will be broken by the vacuum expectation value (VEV) of the neutral singlet in the PS model. As shown in Table I Certainly, these conditions can be satisfied. At this point, we are content merely with having three SSM families without exotics, and we proceed to discuss SUSY breaking via the GMSB scenario, using the hidden sector SU(5)$'$. Out of the ten SU(5)$'$ quarks, any number of very light ones may result, according to the choice of the vacuum. A complete study is very complicated, and here we just mention that it is possible to have six or seven light SU(5)$'$ quarks out of ten. The point is that we have enough SU(5)$'$ quarks. For example, one may choose the $T_3 T_9$ coupling such that one pair of SU(2)$_W$ doublets (two SU(5)$'$ quarks) becomes heavy with a mass scale of $m_1$. For the sake of a concrete discussion, presumably by fine-tuning at the moment, one may consider the $T_6 T_6$ coupling such that the following ${\bf 5}' \cdot \bar{\bf 5}'$ mass matrix form is not broken by hidden sector squark condensates because their values are vanishing [5].
⁵ Details of the rules for $\mathbb{Z}_{12-I}$ are given in [6,10].
⁶ For $m_{1,2} \ll \Lambda_h$, an unstable minimum is not obtained [5].
Note that the unification of $\alpha_c$ and $\alpha_W$ is not automatically achieved as in GUTs, because the light $(1, 2, 1; 1; {\bf 5}', 1)_{1/10}$ quarks do not form a complete representation of a GUT group such as SU(5). The unification condition must be achieved by the mass parameters of the fields surviving below the GUT scale, and the condition is depicted in Fig. 2. From Fig. 2, we have $\alpha_h \simeq 1/9$ for $\Lambda_h = 10^{12}$ GeV. These values are large.⁷ To introduce this kind of a large value for the hidden sector coupling constant, we can introduce different radii for the three tori. In this way, a relatively small scale, $M_{\rm GUT} \sim 2 \times 10^{16}$ GeV compared to the string scale, can also be introduced via geometry through the ratio $r/R$. Let the first and third tori be small compared to the second torus, as depicted in Fig. 3. If the radius $R$ of the second torus becomes infinite, we treat the second torus as if it is a fixed torus. Then, one might expect a 6D spacetime, expanding our 4D spacetime by including the large (34)-torus. One may guess that the spectrum in the $T_1$, $T_2$, $T_4$, and $T_7$ sectors would be three times what we would obtain in $T_i^0$ ($i = 1, 2, 4, 7$).
For $T_3$ and $T_6$, the spectrum would be the same, since they are not affected by the Wilson line from the beginning. But this naive consideration does not work, which can be checked from the spectrum we presented. If the size of the second torus becomes infinite, we are effectively dealing with a 4d internal space, and hence we must consider an appropriate 4d internal space compactification toward a full 6D Minkowski spacetime spectrum. This needs another set of twisted sector vacuum energies, and the spectrum is not what we commented on above. A more careful study is necessary to fit the hidden sector coupling constant to the needed value.
⁷ A naive expectation for the hidden sector coupling, toward lowering the hidden sector confining scale, is a smaller $\alpha^h_{\rm GUT}$ compared to $1/25$. Because of the many flavors, $\alpha^h_{\rm GUT}$ turns out to be large.
Here we just comment that in our example SU(5)$'$ is not enhanced further by neglecting the Wilson line. Even though SU(5)$'$ is not enhanced between the scales $1/r$ and $1/R$, the SU(5)$'$ gauge coupling can run to become bigger than the observable sector coupling at the GUT scale, since in our case the bigger group SU(5)$'$, compared to our observable sector SU(4) group even without the Wilson line, results between the scales $1/r$ and $1/R$. The example presented in this paper suggests the possibility that the GMSB, with an appropriate hidden sector scale toward a solution of the SUSY flavor problem, is realizable in heterotic strings with three families.
V. CONCLUSION
Toward the SUSY flavor solution, the GMSB from string compactification is sought. We pointed out that the GMSB is possible within a bounded region of the hidden sector gauge coupling. We find that the hidden sector SU(5)$'$ is the handiest group in this direction, by studying the gauge coupling running. We have presented an example in the $\mathbb{Z}_{12-I}$ orbifold construction where there exists a sufficient number of SU(5)$'$ flavors satisfying the most needed SM conditions: three observable sector families without exotics. Toward achieving the needed coupling strength of the hidden sector at the GUT scale, we have suggested different compactification radii for the three tori.
3,326.4
2007-06-04T00:00:00.000
[ "Physics" ]
Reusable Smart Lids for Improving Food Safety at Household Level with Programmable UV-C Technology
The worldwide food industry faces the multiple challenges of providing food security while also reducing environmental and health consequences. This requires transitioning to chemical-free techniques for preserving food with a long shelf life that emphasize human health. Even though millions of people are experiencing hunger, the substantial amount of food that is being wasted is impeding progress toward UN Sustainable Development Goal 12, which aims to reduce food waste by 50% by the year 2030. On the other hand, conventional food preservation techniques still frequently depend on chemical additives, which might give rise to persistent health issues and potentially undermine nutritional quality. This emphasizes the necessity for inventive, non-chemical remedies that prioritize both prolonged storage duration and the safety of food. Consumer storage conditions, the ultimate phase of the food chain, still generate substantial waste because of the proliferation of mold and bacteria on fruits and vegetables, which presents health hazards. Enhancing storage conditions and extending shelf life is therefore important. Low-frequency ultraviolet (UV-C) light technology provides a non-thermal and highly efficient method for fighting foodborne microorganisms such as mold. This method renders pathogens inactive while maintaining product quality, providing a cost-efficient and easily accessible alternative. This study proposes the development of a programmable "Smart Lid" (SLID) storage system that utilizes upcycled home-based glass jars with UV-C light-emitting lids to prevent mold growth on various open food items, including milk- and sugar-based foods, sauces, and possibly dry meals. The research seeks to assess the efficacy and potential influence of the SLID solution and of programmable UV-C light applications in this preservation setting at the household level.
Introduction One of the most significant paradoxes in human history is the prioritization of the shelf life of food over human life in an ultimate consumer society.Additionally, it is also so tragic that packing materials have a much longer lifespan than the products.Packs are designed to safeguard, and for many consumer products, the packaging and preserving methods are as important as the product itself.Despite the use of various artificial conservation measures, a significant proportion of food items, specifically one-third, nonetheless end up being discarded before they can be consumed.According to the FAO's estimates, approximately one-third (1.3 billion tons) of all food produced globally is lost or wasted each year along the various stages of the food supply chain (FSC), from production, handling, and storage to processing, distribution, and consumption, which is generally known as "food lost waste" [1].This remarkable statistic highlights the significant squandering of worldwide resources used in food production as well as the consequent rise in greenhouse gas emissions caused by lost or discarded food.Food losses, which occur at various stages of the food supply chain, are a complex global issue that requires careful consideration on a macro-and microscale [2].Significant food loss occurs during the consumption phase in middle-and high-income countries, primarily due to inadequate storage conditions, excessive purchasing, and unregulated eating habits [1,3].Consequently, people end up discarding food that is still suitable for consumption.In contrast, in low-income countries, food loss occurs primarily during the initial and intermediate stages of the food supply chain, with little waste at the consumer level [4].Countries experience substantial losses at the beginning of their food supply chains.Harvest practices, adverse climate conditions for storage and cooling facilities, infrastructure limitations, and financial, managerial, and technical constraints in packaging and marketing systems are the primary factors contributing to food losses and waste in low-income nations [2].Since many small-scale farmers in developing nations live on the verge of food insecurity, reducing food losses can quickly and significantly affect their means of subsistence.But one of the main points in food loss or the source of food waste is still present, and that can be addressed at the end of the supply chain [3,4].Food loss and food waste (FL/FW) are global problems with significant economic, environmental, and social impacts.This total loss of food on every scale has several negative consequences: • Economic losses: Food losses and waste amount to roughly USD 680 billion in industrialized countries and USD 310 billion in developing countries.FL/W represents a direct economic loss of approximately USD 990 billion per year [2].Around 88 million tons of food are wasted annually in the EU, with associated costs estimated at EUR 143 billion [5]. 
• Environmental impacts: FL/FW also has a significant environmental impact.Food production that ultimately leads to loss or waste consumes significant amounts of water, energy, and land resources.It also contributes to greenhouse gas emissions by releasing methane, a potent greenhouse gas [2], as food decomposes in landfills.Food loss and waste account for about 4.4 gigatons of greenhouse gas (GHG) emissions per year.To put this in perspective, if food loss and waste were its own country, it would be the world's third largest GHG emitter, surpassed only by China and the United States [5,6]. • Social impacts: FL/FW is also a social injustice, as it occurs at a time when millions of people around the world are food insecure.It is estimated that the amount of food that is lost or wasted each year could be used to feed all the world's food-insecure people [7].If we could save just one-fourth of the current global food loss or waste, it would suffice to feed 870 million hungry people [5]. Most of the food loss and waste (FL/FW) occurs at the later stages of the food chain, particularly at the retail and consumer levels.Statistics from the Food and Agriculture Organization (FAO) indicate that roughly 35% of global food production is lost or wasted at the consumer level alone [1,2].This significant figure highlights the need for interventions that target consumer behavior and habits.There are critical factors contributing to whether consumer-level FL/FW best practices are followed: • Retail and wholesale practices: Contracts between farmers and buyers can lead to produce waste.Rejection of food items based solely on appearance (shape, size, or cosmetic imperfections) can contribute to FL/FW [2]. • Consumer behavior: Poor planning, misunderstanding of "use by" dates, and casual attitudes towards food all contribute to consumer-level waste [3,4]. • Infrastructure limitations: Lack of proper storage facilities in homes can negatively impact food safety and freshness, leading to increased losses [3,6]. • Regional differences: Developed nations tend to have considerably higher per capita food waste compared to developing or underdeveloped countries.Consumers in Europe and North America discard an estimated 95-115 kg of food per person annually, while Sub-Saharan Africa and Southeast Asia see significantly lower waste at 6-11 kg per person per year [3,8]. The Aim of the Study This study focuses on the FW outputs of home-based consumption and looks at a possible alternative solution for preserving and extending food life cycles.It investigates the potential consequences of utilizing a novel product/lid system known as the "Smart Lid" (SLID) and the potential consequences of employing low-frequency UV-C light as a micro-level preservation technique at the final stage of the food chain by upcycling home base waste glass jars (HBW).The "Smart Lid" (SLID) is a reusable lid/protection device that improves user awareness and optimizes the environment for storing healthy food by using UV-C lights.UV-C light technology remains a non-thermal method employed for decontaminating the surfaces of food and its environment.It is a healthy alternative approach that effectively inhibits the growth of bacteria and helps control losses during storage and transportation [7].The SLID product provides cost-effective and customizable functionality for programmable UV-C food preservation microcontainers.These containers use low-frequency ultraviolet technology to prolong the freshness of consumer items in domestic settings. 
This study aimed to repurpose and recycle discarded home glass packaging, with a specific emphasis on the jar groups with lids that are 85 mm in radius, with different volumes and heights.It also investigates the advantages and prospective applications of a programmable UV-C microproduct called the SLID ("Smart Lid") to improve the longevity of household items on shelves.This innovative, small-scale solution aligns with sustainability principles and helps reduce food loss and waste across various food categories and scales, particularly at the end point of the supply chain in household settings.It also enhances safe food preservation by offering substitute products of different sizes, which can have major consequences.It is also crucial to evaluate the possible advantages that this healthy, smart solution can offer that also support the Sustainable Development Goals (SDGs), which call for reducing per capita global food waste at the retail and consumer levels and food losses along global production and supply chains by half by 2030.This supports the other SDGs, including the SD Target 2 goal of zero hunger by 2030 [1]. Problem of Food Loss/Waste (FL/FW) From initial agricultural production to final household consumption, food waste occurs throughout the food supply chain (FSC).In medium-and high-income countries, food is largely wasted, meaning that it is thrown away even if it is still suitable for human consumption [2,9].Significant food loss and waste do, however, also occur early in the food supply chain.In low-income countries, food is mainly lost during the early and middle stages of the food supply chain; much less food is wasted at the consumer level.Food loss (FL) and food waste (FW) differ in timing from production to consumption [3].The Food and Agriculture Organization (FAO) of the United Nations defines food loss and waste as a decrease in the quantity or quality of food along the food supply chain.Within this framework, UN agencies distinguish between loss and waste at two different stages in the process [1,6]: • Food loss (FL) occurs along the food supply chain from harvest/slaughter/catch up to but not including the sales level.Food loss (FL) happens early in the supply chain, before consumers purchase it.It can occur during cultivation, postharvest handling, processing, or transportation.According to the FAO, FL is defined as "a reduction in the quantity or quality of food resulting from decisions and actions by food suppliers within the chain, excluding retailers, food service providers, and consumers" [10]. • Food waste (FW) occurs at the retail, storage, and consumption levels.Store shelves, restaurant kitchens, or residences may abandon or render food unfit for ingestion.In contrast to FL, FW refers to a decline in food quantity or quality due to retailer, food service provider, and consumer decisions.According to the FAO, FW is "the decrease in the quantity or quality of food resulting from decisions and actions by retailers, food service providers, and consumers".In affluent countries, per capita food waste is considerable, making FW increasingly common [11]. 
While FL and FW definitions and metrics differ per organization, certain principles apply universally.On the other hand, food safety has increasingly integrated both themes since the outbreak [2,10].Every stage of the food supply chain (FSC), from agricultural production to household consumption, results in food waste.Countries with medium and high incomes waste a significant amount of food, even if it is still suitable for human consumption [4].Nevertheless, there is also a notable occurrence of food loss and waste at the beginning stages of the food supply chain.Low-income nations primarily experience food loss during the initial and intermediate phases of the food supply chain, resulting in significantly less food waste at the consumer level [1,2,11].Food losses take place at the production, postharvest, and processing stages in the food supply chain.Food losses occurring at the end of the food chain (retail and final consumption) are rather called "food waste", which relates to retailers' and consumers' behavior [3].This study only focuses on FW and food-preserving aims and attitudes in microscale applications for home-type customers.There are related issues that have affected the end user and customer food waste production, as follows: Challenges in the transportation and distribution of food, such as inadequate infrastructure, a lack of cold chain facilities, and transportation delays, can lead to food loss and waste, particularly for perishable items.A lack of cold chain infrastructure, particularly in developing regions, can lead to up to 50% spoilage of fruits and vegetables during transportation [6]. Market Dynamics Economic factors such as market volatility, fluctuating prices, and consumer demand can contribute to food waste.Price advantages may occur due to overproduction or market fluctuations, resulting in excessive storage of food.The World Bank's research reveals that market fluctuations and unpredictable prices, particularly in countries with high inflation, lead to food waste by incentivizing the purchase of food products beyond necessity [8]. Sufficient Storage and Handling Issues at Home Improper storage facilities and handling practices during transport and storage can lead to food spoilage and waste.Factors such as inadequate temperature control, humidity, and pest infestations can accelerate food deterioration.The FAO estimates that between 25 and 30% of postharvest losses in developing countries occur due to inadequate storage facilities and handling practices at the at the end of the supply chain [6]. Quality Standards and Aesthetic Preferences Retailers' strict quality standards and consumers' preferences for visually appealing produce contribute to food waste.Retailers or consumers may reject fruits and vegetables that are imperfect or cosmetically blemished, even if they are perfectly edible.A study published in Applied Economic Perspectives and Policy [9] revealed that stringent cosmetic standards for fruits and vegetables can lead to up to 20% rejection rates by retailers, even if the produce is perfectly edible.At the retail level, large quantities of food are wasted due to quality standards that over-emphasize appearance [5]. 
Consumer Behavior Consumer behavior has a significant impact on household food waste.Buying more food than needed, improper storage, and discarding edible food due to confusion over expiration dates or perceptions of freshness contribute to food waste [11].Confusion over "use by" and "best before" labels significantly influence household food waste, resulting in premature disposal of edible food [12]. Food Base Contaminations and FW at Home Scale Improper storage is a major culprit in food waste, turning perfectly good food into unnecessary discards.These are the "storage-related villains", and we will discuss the science underlying their destructive actions. Insufficient Storage Conditions This villain represents a multitude of offenses against proper food storage: • Temperature Control: Improper temperature plays a significant role in spoilage rates. Studies have shown that storing fruits and vegetables at improper temperatures can significantly accelerate enzymatic activities that lead to softening, discoloration, and ultimately spoilage [13,14].• Ventilation: Inadequate ventilation can trap moisture around produce, creating an ideal environment for mold and bacteria growth [3].Research by the USDA indicates that proper ventilation in storage facilities can extend the shelf life of fruits and vegetables [15]. • Light Exposure: Food items have varying light sensitivity.Certain fruits and vegetables exposed to excessive light can experience accelerated ripening or chlorophyll degradation (loss of green color) [16]. Molds and Fungus Despite various measures taken to prevent mold growth on food, molds are ubiquitous in nature and can contaminate products before, during, or after harvest, processing, storage, and sale (Table 1).Aspergillus, Cladosporium, Alternaria, Mucor, and Penicillium are the most-known molds that can easily grow and expand on food surfaces.Fruits are more susceptible to fungal spoilage than bacterial spoilage due to their characteristics such as high water activity, high sugar content, and low pH [17,18].This significantly increases the risk of mold growth on fruits, posing a significant concern for food safety and waste.These unwelcome guests thrive in warm, moist environments.Improper storage practices, like leaving vegetables in sealed plastic bags that trap condensation, create a breeding ground for mold growth.Studies have shown that certain fungal species can produce mycotoxins, harmful substances that render food unsafe for consumption [19,20].Molds degrade food and produce mycotoxins that harm human health, causing economic losses.As a result, foods that have mold can potentially pose a health risk [21].By preventing food spoilage, preventive strategies and approaches effectively reduce economic losses and ensure food safety and preservation.While fungicides are increasingly employed to inhibit mold growth and the development of mycotoxins, the establishment of fungicideresistant pathogenic strains remains an inevitable and favored means of attaining this objective.This method entails the excessive utilization of chemicals, but this method can also lead to the presence of unwanted residues on food surfaces and pose potential health hazards to humans [22].When given the right conditions, the ubiquitous fungi Aspergillus, Cladosporium, Alternaria, Mucor, Rhizopus, Penicillium, and Geotrichum can thrive on a variety of food sources. 
• Aspergillus is a genus of molds that includes a multitude of species, several of which can generate mycotoxins that are detrimental to both people and animals. Aspergillus can thrive on various substrates, such as grains, nuts, dried fruits, and spices. It frequently causes food degradation and can lead to aflatoxin contamination, particularly in improperly stored grains and nuts, and it can cause respiratory problems and allergies [20].
• Penicillium molds have a broad distribution in nature and are frequently present in soil, air, and decomposing plant matter. These fungi can thrive on a diverse range of food sources, such as grains, fruits, vegetables, and cheese. Food manufacturers use certain Penicillium species, like Penicillium roqueforti, in blue cheese. Other Penicillium species, however, can generate mycotoxins and contribute to food rotting under suitable conditions [22].
• Cladosporium is a prevalent genus of mold found in both indoor and outdoor settings. It can thrive on a variety of organic materials, including food. Cladosporium species commonly inhabit fruits, vegetables, cheese, and bread. Although several species are recognized as allergenic, they generally do not produce significant quantities of mycotoxins [23].
• Mucor is a rapidly proliferating fungus commonly found in soil, plant remnants, and decomposing organic material. It can thrive on a wide range of food sources, such as fruits, vegetables, bread, and dairy products. Mucor species are renowned for their swift proliferation and can induce food deterioration, especially in situations with high levels of moisture [23,24].
• Alternaria is one of the most common mold genera and is found in soil, plants, and the air. It can thrive on a diverse array of substrates, encompassing fruits, vegetables, cereals, and dairy products. Alternaria species are recognized for their capacity to produce allergens and mycotoxins, potentially leading to significant health consequences when consumed in excess [20].
• Rhizopus is a genus of common saprophytic fungi on plants and specialized parasites on animals. They are found in a wide variety of organic substances, including "mature fruits and vegetables", jellies, syrups, leather, bread, peanuts, and tobacco [22]. They are multicellular. Some Rhizopus species are opportunistic human pathogens that often cause a fatal disease called mucormycosis. In general, these molds can thrive on various types of food, especially in environments with high levels of moisture and warmth and with inadequate storage methods. Adhering to appropriate methods of handling, storing, and maintaining cleanliness is crucial to preventing the formation of mold and the spoilage of food [20].
Bacterial Growth
Uncooked meats, dairy products, and leftovers become susceptible to rapid bacterial multiplication at room temperature (Table 2). Research highlights the importance of proper food storage temperatures to prevent foodborne illnesses caused by bacterial growth [20,23]. The factors influencing bacterial growth in the food chain are the following:
• Temperature: Bacteria thrive in warm conditions. The "danger zone" for bacterial growth is between 40 °F and 140 °F (4 °C and 60 °C).
• Moisture: Bacteria require moisture for growth. Foods with high moisture content are more susceptible to bacterial growth [18].
• Nutrients: Food provides nutrients for bacteria to grow and reproduce.
• Time: The longer food is stored, the greater the opportunity for bacteria to grow.
• Initial Contamination: Food will spoil more quickly if it already contains bacteria when stored.
a. Pathogenic bacteria: These pose a significant health risk because they can cause foodborne illness. Consuming food contaminated with these bacteria can lead to symptoms like diarrhea, vomiting, fever, and abdominal cramps. The severity of the illness depends on the specific bacteria, the amount consumed, and the individual's health [20].
• Salmonella: This is commonly found in poultry, eggs, meat, and even fruits and vegetables [23,24].
• Bacillus: This is a spore-forming bacterium that survives harsh conditions and contributes to spoilage in canned goods or cooked rice [25].
Misunderstood Expiration Dates
Confusion around "use by" and "best before" dates often leads to premature food waste. Understanding these labels goes a long way: "use by" dates indicate a safety concern, while "best before" refers to quality. Proper storage can significantly extend the shelf life of food items even after the "best before" date [25]. By understanding these storage-related villains and employing proper food handling practices, we can significantly reduce food waste and contribute to a more sustainable food system.
Possible Household-Level Food Storage Practices
Storing food safely and under proper conditions is an important issue for food safety and has the potential to minimize FW production at home [1,4,8]. Healthy storage of food refers to the act of storing food in a specific and appropriate place for future use, following the instructions for use over time [27]. Nowadays, while food safety and storage issues are becoming increasingly important due to inadequate food safety practices at the household level, the regular research of global organizations such as the UN, FAO, and WHO shows that uncontrolled consumption habits and careless storage conditions lead to an increase in food waste on a global scale. Implementing the methods commonly used for food safety, especially at the microscale (home and user), in a meticulous, orderly, and controlled manner has the potential to positively affect the hunger problem that threatens the whole world [4].
• Proper Storage: Utilize airtight containers, zipper-lock bags, or vacuum-sealed bags for storing leftovers, dry goods, and pantry essentials. This prevents moisture loss, contamination, and exposure to air, avoiding potential spoilage.
• Canning and Preserving: Consider canning, pickling, or preserving fruits and vegetables when they are in season. This allows you to enjoy them year-round and reduces food waste. Follow safe canning practices to prevent bacterial contamination [20].
• Store Dry Goods: Store dry foods, including rice, pasta, flour, and cereals, in a cool, dry spot, away from direct sunlight and heat. Use sealed containers or resealable bags to protect them from pests and moisture [21].
• Temperature and Light Control: Pay attention to temperature- and light-sensitive foods and ingredients. To prevent flavor and texture changes, store potatoes, onions, and tomatoes in a cool, dark place outside the refrigerator [18].
• Cold Protection: Refrigeration is one of the most effective ways to store perishable foods such as dairy products, meats, and fresh produce. Keep the refrigerator temperature at or below 40 °F (4 °C) to slow down bacterial growth and extend the shelf life of foods.
• Freezing: Freezing is another excellent method for preserving food.Wrap foods tightly in freezer-safe packaging to prevent freezer burn, and label them with the date to ensure freshness.Freeze items like meat, poultry, fish, bread, fruits, and vegetables for longer-term storage [20].• Anti-bacterial Surface: Chemical treatments on food packaging materials have antimicrobial properties that can help slow bacterial growth.Also, certain spices and herbs possess natural antimicrobial properties.While they may not eliminate bacteria entirely, they can contribute to improved food safety [24]. • Ultraviolet (UV) Light: UV light is a highly efficient and extensively employed industrial technology in the realm of food safety.It offers a range of solutions that can enhance food storage and safety.This is particularly crucial as the global demand for proper food preservation rises, driven by insufficient food safety measures.Nevertheless, the way in which low-frequency UV light in the food industry is used presents a significant barrier to UV technology's overall efficacy.This industrial technique can provide efficient protection not only during application (against actual and potential risks) but also after application (such as inadequate storage, transit, and sales locations).As a result of current practices, industrial UV light's application in the realm of food safety is not a long-lasting and efficient solution that covers every step from production to consumption.Conversely, multiple food safety studies have demonstrated that insufficient storage conditions in households and rising levels of individual consumption are causing an escalation in worldwide food waste at home.This study examines the feasibility of using intermittent and short-term low-frequency UV radiation to provide sustainable food safety.The focus is on providing fundamental protection for food, particularly at the user level and in-home settings.Furthermore, researchers are also investigating the feasibility and convenience of using spot UV protection technology (LED technologies), which have previously demonstrated satisfactory energy efficiency in guaranteeing food safety in residential settings. Food Preservation and Protection Using UV Technology UV light is a powerful industrial technique that can significantly enhance the effectiveness of solution groups, leading to improved food storage and safety.Considering the growing demand for food storage worldwide because of insufficient food safety protocols, this is especially significant.However, low-frequency UV applications pose a significant obstacle to UV technology's widespread efficacy.The technique provides effective protection only while in use, but it is ineffective during transport, exposure, or storage if the application is not ongoing [27].Studies, on the other hand, indicate that inadequate home storage conditions and rising levels of individual consumption contribute to the worsening of global food waste at the household level [6,9,23].This study examines the potential application of intermittent and short-administered low-frequency UV radiation to offer fundamental safeguarding for delicate food items.Furthermore, it explores the practicality and accessibility of using point UV protection as a viable and efficient approach to ensuring food safety in domestic storage scenarios [28,29]. 
Optimization Research indicates that ultraviolet (UV) radiation can eradicate microorganisms on food surfaces, prolong food shelf life, and ensure food safety by preserving freshness for extended periods of time.The industry widely acknowledges UV light systems as more cost-effective, practical, and health-friendly than other high-end protection choices.These techniques are widely used in the food industry to prevent rotting and extend the shelf life of perishable products, particularly dairy products.Furthermore, UV systems, with their exceptional energy efficiency and minimal maintenance needs, can serve as a focal point for cost-effective research in a range of sizes [21]. Usability and Comprehensiveness Compact and cost-effective industrial and commercial premises are well-suited for food production and storage.UV sterilizers and lamps improve food safety at every stage of production, storage, and transportation.UV light technology, specifically LED technology, has the capability to enhance and be effortlessly incorporated into small-scale and specialized applications, such as residential surroundings [29].Nevertheless, we have not yet achieved the successful integration of this technology with reusable product packaging.Utilizing specialized solutions involving long-term reusable containers can improve food safety by inhibiting bacterial proliferation and maintaining food quality and longevity in various storage locations, such as cupboards and freezers.The use of UV technology in residential storage is exceptional because of its long-term effectiveness and adaptability [20,28]. Ensuring Food Safety and Improving Efficiency Ultraviolet (UV) light reduces the number of microorganisms on food surfaces, ensuring food safety.UV light is an excellent option for maintaining food safety at home since it can efficiently eliminate microorganisms, viruses, and fungi without the need for direct contact with the human eye.The capacity to permeate packaging materials and offer protection without affecting the quality or taste of food indicates the potential for additional progress [14,23,26]. Specialization and Programmable Solutions Recent advancements in ultraviolet (UV) technology have enabled the development of customized UV systems specifically designed for food storage purposes.These devices have the capability to function at different levels of strength and frequency, depending on the specific type of food and the desired level of safety.These devices utilize accurate UV wavelengths to specifically target infections, resulting in focused antimicrobial actions.This customization showcases the capacity to create resilient, enduring storage solutions for a wide range of food items.UV protection is an optimal option for storing food at home because it is cost-effective, convenient, and provides advantages in terms of food safety [28]. Studies suggest that utilizing UV light systems can efficiently inhibit the growth of microorganisms in residential food storage areas, prolong the shelf life of highly perishable items, and decrease food spoilage.With the continuous improvement in the duration and effectiveness of food preservation systems, UV protection has become a critical global issue. 
UV Light
Ultraviolet (UV) light means "beyond violet", as violet is the highest-frequency light visible to the human eye. The electromagnetic spectrum of light that the human eye can detect lies between 380 and 700 nanometers (nm) (Figure 1) [30]. UV light has a shorter wavelength and a higher frequency than visible violet light. The German physicist Johann Ritter first discovered UV light in 1801. The use of ultraviolet (UV) light for the treatment of skin conditions dates to the early 1900s. It is well known that sunlight can have therapeutic value, but it can also lead to deleterious effects such as burning and carcinogenesis. Extensive research has expanded our understanding of UV radiation and its effects on human systems and has led to the development of human-made UV sources that are more precise, safer, and more effective for treatment in a wide variety of areas [30]. UV light technology was first used for the disinfection of drinking water in France in 1906. Ultraviolet (UV) light technology has been emerging as a compelling alternative for industrial food preservation for around 60 years. Its ability to inactivate microorganisms and extend shelf life positions it as a valuable tool for both industrial-scale and household-level healthy food storage applications. Notably, current advancements allow UV-A and UV-B to be applied while preventing carcinogenic side effects, and particularly UV-C light, which has the shortest wavelength, is suitable for various applications without harming human life. It minimizes waste production and has almost no environmental impact when used in industrial food processing [30][31][32]. Ultraviolet (UV) lighting can be categorized into four different bands:
1. UV-A (315-400 nm): This is the longest wavelength of UV light, emitted between 315 and 400 nm. UV-A is the least carcinogenic band but still contributes to sunburn and skin cancer [32].
2. UV-B (280-315 nm): Light emitted in the wavelength range of 280 to 315 nm is referred to as UV-B. It is more carcinogenic than UV-A; however, only about 5% of this light reaches the earth's surface.
3. UV-C (200-280 nm): This is the shortest-wavelength germicidal band of UV light, in the range of 200 to 280 nm. The sun emits UV-C light, which the ozone layer completely absorbs, preventing it from ever reaching the Earth. Lamps designed to emit UV-C radiation at 253.7 nm are used in many germicidal applications, such as UV air purification systems, UV water disinfection, and UV sterilization of critical surfaces [33,36].
4. Vacuum-UV (100-200 nm): Ultraviolet light with wavelengths in the 100-200 nm range (known as vacuum ultraviolet; VUV) has applications in nanofabrication, photochemistry, and spectroscopy [34].
Each category of UV lighting has very useful purposes in many industries and daily applications. UV-A and UV-B light are common in many medical phototherapy lamp spectra, and UV-C light from mercury and low-frequency LED lamps demonstrates the strongest antimicrobial effectiveness, making it ideal for ensuring food safety.
Fundamental Approaches to UV-C Application: Pathogen Inactivation and Growth Inhibition Ultraviolet-C (UV-C) light, with its short wavelength (200-280 nm) and germicidal properties, has emerged as a promising technology for food preservation.Its ability to inactivate microorganisms makes it a valuable tool for enhancing food safety and extending shelf life [30].We can broadly categorize the application of UV-C light in food preservation into two primary approaches. Pathogen Inactivation The primary focus of UV-C light application in food preservation is to eliminate or significantly reduce the presence of pathogenic microorganisms, such as bacteria, viruses, and parasites, that can cause foodborne illnesses.This approach aims to prevent these pathogens from contaminating food products in the first place, ensuring their absence and minimizing the risk of foodborne diseases.a. Key objectives of inhibition: 1. Eliminate pathogens: Inactivate a significant proportion of, or all, the present pathogenic microorganisms.Prevent foodborne illnesses; reduce the risk of consumers contracting illnesses caused by contaminated food products. 2. Enhance food safety: Contribute to a safer and healthier food supply. 3. Surface decontamination: Treat food product surfaces, packaging materials, and equipment to eliminate pathogens before or after product contact.4. Liquid food treatment: Apply UV-C light to liquid food products, such as juices, milk, and beverages, to inactivate pathogens while preserving nutrients and sensory qualities. Microbial Growth Inhibition In contrast to pathogen inactivation, which targets the elimination of existing microorganisms, microbial growth inhibition, on the other hand, focuses on preventing or retarding the growth and reproduction of microorganisms that may be present in food products.This approach aims to extend the shelf life of food by controlling microbial populations and delaying spoilage [25].a. Key objectives of inhibition: 1. Slow microbial growth: Reduce the rate at which microorganisms grow and multiply in food products. 2. Extend shelf life: Delay the onset of spoilage and maintain food quality for a longer period. 3. Minimize food waste: Reduce losses due to microbial spoilage and extend the availability of food products.4. Inhibit microbial growth and extend shelf life: To do this, apply UV-C light to fresh fruits and vegetables after harvest. b. In-package treatment: Integrate UV-C light sources into packaging materials to continuously suppress microbial growth within the package.c.Modified atmosphere packaging: Combine UV-C treatment with modified atmosphere packaging to create an environment less conducive to microbial growth. Academic Perspectives and Discussions The application of UV-C light in food preservation has sparked extensive research and discussions among scientists and food industry professionals [26,28,30].Key areas of focus include the following: • Effectiveness: Evaluating the efficacy of UV-C light treatment against various pathogens and microorganisms under different conditions. • Food quality: Assessing the impact of UV-C light exposure on food quality attributes such as nutrient content, sensory properties, and texture. • Safety considerations: These include ensuring the safe and appropriate use of UV-C light technology and addressing potential hazards such as ozone generation and photochemical reactions. 
• Regulatory frameworks: Establishing clear guidelines and regulations for the application of UV-C light in food processing and preservation. UV-C Light Application and Potential Effects in Food Preservation UV-C light is a disinfection method used to extend the shelf life of foods and control microbiological contamination.The effectiveness of this method depends on various factors, including wavelength, dose, and application time [37].The two fundamental effects of UV-C light on foods are described below. Microbial Inactivation UV-C light has two primary effects on foods: microbial inactivation and chemical changes.Both factors can have a significant impact on a food's shelf life and nutritional value.Selecting the appropriate wavelength, dose, and application time is crucial for the effectiveness and safety of UV-C light treatment in food processing.UV-C light has the ability to inactivate microorganisms by damaging their DNA.This effect encompasses bacteria, viruses, fungi, and molds.The microbial inactivation efficacy of UV-C light depends on the type of microorganism, the structure of the cell wall, and the exposure time to UV-C light [38].Researchers have investigated the effectiveness of UV-C light in inactivating pathogens such as Escherichia coli and Salmonella Typhimurium on fruits and vegetables.Their findings demonstrated that UV-C light significantly reduced these pathogens and extended the food's shelf life [39].Researchers have also examined the efficacy of UV-C light in inactivating pathogens like Staphylococcus aureus and Listeria monocytogenes in dairy products.The results showed that UV-C light significantly reduced these pathogens and extended the food's shelf life.Microbial genetic material (DNA or RNA) is particularly susceptible to UV photons within the UV-C range, with a peak absorption wavelength of around 260-265 nm [34].For the past two decades, UV-C radiation at 253.7 nm has been the preferred method for pasteurization and shelf-life extension, particularly for beverages [35]. The primary mechanism of action involves UV-C irradiation causing damage to microbial nucleic acids (Figure 2) [33].This damage, often manifested as the formation of dimers between pyrimidine bases within DNA strands, disrupts microbial replication and ultimately leads to cell death [31]. 
Chemical Effects/Changes
UV-C light can cause chemical changes in food components. This effect can lead to the loss of vitamins, enzymes, and other nutrients. The extent of the chemical changes caused by UV-C light depends on the type of food, the wavelength of the UV-C light, and the dose [40]. Researchers have studied the effect of UV-C light on the vitamin C content of orange juice; their results indicated that UV-C light caused a significant decrease in vitamin C content [40]. The effect of UV-C light on the amino acid content of milk has also been investigated, and the findings revealed that UV-C light caused a significant decrease in the content of certain amino acids. On the other hand, unlike traditional methods reliant on chemical preservatives, UV treatment presents a sustainable and human-friendly approach to food safety. This growing interest in UV technology also stems from the limitations of traditional thermal food processing methods. Thermal processing, while effective at eliminating pathogens, can compromise the nutritional value and sensory characteristics of food [30]. Furthermore, research suggests that UV-C light not only inactivates microbes but also has the potential to enhance the nutritional qualities of fruits and vegetables [33]. Regular UV-C application can reduce microbial proliferation, eliminate harmful organisms, and even suppress their genetic mutation, offering additional benefits beyond simple preservation [31,32]. Researchers are actively exploring low-temperature alternatives like UV irradiation that prioritize the retention of high quality and cause minimal nutritional loss, ultimately delivering safe and delicious food products [34]. Among these non-thermal processing methods, UV light holds significant promise for pathogen reduction while minimizing the drawbacks associated with heat treatment [35]. Similarly, the convenient and problem-free application of UV light to food stocks stored in closed volumes can effectively combat airborne pathogens [36]. Over a century of scientific research has unequivocally demonstrated the efficacy of UV-C disinfection, and no alternative form of disinfection has surpassed its effectiveness [37].
Factors Influencing the Impact of UV-C Light on Foods: A Comprehensive Discussion
UV-C light is a promising technology for food preservation, offering microbial inactivation and extended shelf life. However, various factors beyond the primary parameters of frequency, distance, and intensity can influence the effectiveness of UV-C light treatment. This paper delves into these additional factors that can modulate the impact of UV-C light on foods, including product stability, ambient temperature, target product surface quality (matte vs.
UV-C light, with its short wavelength of 200-280 nm, possesses germicidal properties, making it a valuable tool for food disinfection. While the primary parameters of frequency, distance, and intensity play crucial roles in determining the efficacy of UV-C light treatment, several other factors can significantly influence the impact of this technology on foods. Factors influencing UV-C light treatment include the following:

• Product Stability: The stability of the product during UV-C light exposure affects the treatment's effectiveness. For instance, liquid products may require agitation or continuous movement to ensure uniform exposure and prevent shadowing effects [42].
• Ambient Temperature: The ambient temperature during UV-C light treatment can influence the inactivation rate of microorganisms. Studies have shown that lower temperatures can enhance the effectiveness of UV-C light treatment [38].
• Target Product Surface Quality: The surface quality of the target product, whether matte or glossy, can affect the penetration and absorption of UV-C light. Glossy surfaces tend to reflect UV-C light, potentially reducing its effectiveness in shadowed areas [38,39].
• The Presence of Packaging Materials: Packaging materials can affect UV-C light transmission and efficacy. Some materials, such as transparent plastics, allow UV-C light to pass through, while others, like metalized packaging, may block or attenuate the light [39].
• Food Composition: The composition of the food itself can affect the impact of UV-C light. Factors such as moisture content, fat content, and the presence of natural pigments can influence UV-C light absorption and efficacy [43].
• Shadowing Effects: Shadowing effects can occur due to product geometry or packaging, leading to uneven UV-C light distribution and potentially reducing treatment effectiveness [44].

Beyond the primary parameters of frequency, distance, and intensity, a range of factors thus influence the effectiveness of UV-C light as a food preservation approach. Understanding and considering product stability, ambient temperature, target product surface quality, packaging materials, food composition, and shadowing effects is crucial for optimizing UV-C light treatment and achieving the desired outcomes in food preservation [42].

Optimizing UV-C Light Treatment Dose-Exposure Time for Food Safety

Ultraviolet-C (UV-C) light disinfection is gaining traction as a non-thermal food preservation method. However, optimizing the treatment time for effective microbial inactivation while minimizing negative impacts on food quality presents a significant challenge. The discussion below explores the relationship between UV-C light treatment time and food safety, drawing on established scientific principles and recent research findings. Traditional food preservation methods, such as heat treatment, can compromise sensory quality and nutritional value. UV-C light, with its germicidal properties, offers a promising alternative for enhancing food safety. However, the effectiveness of UV-C light treatment hinges on a crucial factor: treatment time [42,45].
The Dose-Response Relationship and Target Inactivation

The concept of dose, defined as the product of UV-C light intensity (mW/cm²) and exposure time (in seconds), governs the efficacy of UV-C light disinfection [42]. The dose-response relationship dictates that microbial inactivation increases with higher UV-C light doses, typically following a logarithmic trend in which a higher dose produces a greater reduction in the number of viable microorganisms [46,47,49]. The Weibull model, a widely used mathematical model, can be fitted to experimental data to quantify this relationship and predict the level of microbial reduction achieved for a specific UV-C light dose [48]. Two concepts are central when applying the Weibull model: a. Probability of survival: the model estimates the probability that a single microorganism will survive a given UV-C light dose. b. Logarithmic reduction: the model predicts the number of logarithmic units (log CFU/mL, where CFU stands for colony-forming unit, a measure of viable microorganisms) by which a microbial population will be reduced at a specific dose [49].

Daily UV-C Dose Application

The daily UV-C dose required for food products is determined by a variety of factors [42,43], including the following:
• Microbial load: the number and type of microorganisms present in the product.
• Package permeability: the extent to which the packaging allows UV-C light to penetrate.
• Product type: different dairy products, such as milk, yogurt, and cheese, have varying sensitivities to UV-C light.
• Desired shelf life: higher doses may be necessary for a longer shelf life.
• Inactivation factor: the decimal reduction in the microbial population that the UV-C treatment must achieve.
• Safety factor: an additional dose to ensure adequate inactivation and account for potential variations.

Effective Exposure Time Analysis in UV-C Application

Effective exposure time analysis aims to determine the UV-C exposure time necessary to achieve the desired level of microbial inactivation. The required time depends on the target dose and the UV-C light intensity [21,50].
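As a minimal numerical sketch of the dose and exposure-time relationships described above, the following snippet computes the delivered dose, the predicted log reduction under one common parameterization of the Weibull model (log10 reduction = (dose/δ)^p), and the exposure time needed for a target reduction. The intensity value and the Weibull parameters δ and p below are illustrative placeholders, not values taken from the cited studies.

```python
# Minimal sketch of the dose-response calculation described above.
# The Weibull parameters (delta, p) are illustrative placeholders, not fitted values.
def uv_dose(intensity_mw_cm2: float, time_s: float) -> float:
    """Dose (mJ/cm^2) = intensity (mW/cm^2) x exposure time (s)."""
    return intensity_mw_cm2 * time_s

def weibull_log_reduction(dose: float, delta: float, p: float) -> float:
    """Predicted log10 reduction for a given dose under the Weibull model."""
    return (dose / delta) ** p

def required_exposure_time(target_log_reduction: float, intensity_mw_cm2: float,
                           delta: float, p: float) -> float:
    """Exposure time (s) needed to reach a target log10 reduction."""
    required_dose = delta * target_log_reduction ** (1.0 / p)
    return required_dose / intensity_mw_cm2

# Example with assumed values: a 0.5 mW/cm^2 source and hypothetical Weibull parameters.
delta, p = 4.0, 0.8   # scale (mJ/cm^2) and dimensionless shape, assumed for illustration
print(weibull_log_reduction(uv_dose(0.5, 30), delta, p))   # log10 reduction after 30 s
print(required_exposure_time(5.0, 0.5, delta, p))          # seconds needed for a 5-log target
```

In practice δ and p would be fitted to survival data for the specific microorganism and food matrix before such a calculation is used to set treatment times.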
Other Factors Influencing Optimal Treatment Time

Several other factors influence the optimal UV-C light treatment dose and time for a particular food product. These factors include the following:
a. Microbial Target: Different microorganisms exhibit varying degrees of susceptibility to UV-C light. Spores, for instance, are significantly more resistant than vegetative bacterial cells [48]. As a result, the target microorganism dictates the required UV-C light dose, as well as the treatment time [30].
b. Food Product Characteristics: The composition and structure of a food product can have a significant impact on UV-C light penetration and efficacy. Factors like turbidity, fat content, and surface topography can influence light scattering and shadowing effects, potentially requiring longer treatment times for even distribution [41,48].
c. Food Quality Considerations: While UV-C light effectively inactivates microbes, prolonged exposure can lead to undesirable changes in food quality, including vitamin degradation, lipid oxidation, and the development of off-flavors [49]. Striking a balance between achieving the desired level of microbial inactivation and minimizing quality deterioration is crucial when determining the optimal treatment time.

Recent Advancements and Future Directions

Recent research explores strategies for optimizing UV-C light treatment time while mitigating negative impacts on food quality. These strategies include the following:
a. Pulsed UV-C light: Applying UV-C light in short pulses, with rest periods in between, can potentially enhance microbial inactivation while reducing thermal effects [51].
b. Combined technologies: Integrating UV-C light with other preservation methods, such as mild heat or modified-atmosphere packaging, might achieve synergistic effects and allow for shorter treatment times [21,50].
c. Optimizing UV-C light treatment methods: Optimization of UV-C methods for food safety requires careful consideration of various scientific principles and practical factors. If researchers and food processors understand the dose-response relationship, the susceptibility of target microorganisms, and the relationship between treatment time and food quality, they can use UV-C light technology to make food safer while maintaining its quality [50]. Continued research on novel application methods and integration with other technologies holds promise for the further refinement of UV-C light treatment for the food industry [47].

Factors Affecting UV-C Application Efficiency: Critical Parameters for Disinfection

Table 3 summarizes the key parameters that affect the efficacy of UV-C disinfection applications. Comprehending and optimizing these factors is crucial for attaining reliable and uniform elimination of microorganisms in different sectors and environments. Several industries, including food processing, medical device disinfection, air and water purification, and building cleaning, utilize UV-C applications. However, the mere presence of UV-C light does not by itself determine the effectiveness of UV-C disinfection; multiple factors significantly influence the extent of microbial inactivation achieved. These crucial parameters can be roughly classified into two categories: treatment parameters and target parameters [42].

• UV-C Intensity (W/m²): the power of UV-C radiation emitted per unit area; higher intensity leads to faster inactivation of microorganisms.
• UV-C Dose (J/m²): the total amount of UV-C radiation energy delivered to a target area; a higher dose ensures more thorough inactivation of microorganisms.
• Process Time (s): the duration of UV-C exposure; a longer exposure time allows for more effective inactivation of microorganisms.
• Distance (cm): the distance between the UV-C source and the target surface; a shorter distance increases the intensity of UV-C radiation reaching the target.
• Temperature (°C): the ambient temperature surrounding the UV-C source and the target surface; some microorganisms are more susceptible to UV-C radiation at certain temperatures.
• Relative Humidity (%): the amount of moisture in the air; higher humidity can reduce UV-C penetration and decrease its effectiveness.
• Target Surface Characteristics: the material, texture, and topography of the surface being treated; smooth, non-porous surfaces allow for better UV-C penetration.
• Shielding and Shadowing: the presence of obstructions or uneven surfaces that block UV-C radiation; shielding should be eliminated to ensure uniform exposure for optimal disinfection.

Treatment Parameters

Treatment parameters include variables related to the UV-C source and the treatment process: UV-C intensity, dose, exposure period, distance between the UV-C source and the target surface, ambient temperature, relative humidity, and the presence of shielding or shadowing effects. By optimizing these treatment parameters, the target surface can receive an adequate amount of UV-C radiation, effectively neutralizing germs [53].

The Target Parameters

Target parameters refer to the specific characteristics of the treated surface and the microorganisms present. These parameters include the surface's material, texture, and topography; the initial microbial population; the presence of biofilms; and the specific type of microorganisms targeted. Understanding these target factors is critical for selecting appropriate UV-C treatment parameters and forecasting disinfection effectiveness [53], ensuring that UV-C disinfection achieves the desired level of microbial reduction in various settings.

Extending Shelf Life of Dairy Products: Effectiveness and Limitations of UV-C

The demand for minimally processed, extended-shelf-life dairy products has propelled the exploration of non-thermal preservation techniques such as UV-C microproduct applications (air, water, surface, and food). The dairy industry is constantly seeking innovative methods to extend the shelf life of its products while maintaining quality and consumer appeal. Conventional preservation techniques, such as pasteurization and refrigeration, have limitations in terms of product quality and shelf life, and UV-C microproduct applications have emerged as a promising non-thermal approach to address these challenges [52].

Effectiveness in Shelf-Life Extension: Studies have demonstrated the effectiveness of UV-C microproduct applications in extending the shelf life of various dairy products. For instance, studies have shown that UV-C treatment of milk can extend its shelf life by up to 53 days, whereas pasteurized milk alone lasts only 14 days [42]. Similarly, studies have found that UV-C treatment extends the shelf life of cheese, yogurt, and other dairy products (Table 4) [39,53,54]. The potential shelf-life effects of UV-C on dairy foods are presented in Table 4; its limitations are discussed below.
Limitations and Considerations: Despite its promise, UV-C microproduct use has limitations and considerations that need to be addressed:
a. Limited Penetration Depth: UV-C radiation has a limited penetration depth, typically a few millimeters. This restricts its effectiveness when treating bulk products or products with complex structures.
b. Potential Impact on Food Quality: Excessive UV-C exposure may lead to vitamin degradation, off-flavors, and texture changes in dairy products [54].
c. Efficacy Against Spores: UV-C is less effective against bacterial spores, which are dormant forms of bacteria that are more resistant to environmental stresses.

To achieve maximum effectiveness in food products, the UV-C dose, exposure time, and product processing must be optimized. Furthermore, UV-C's limitations, such as its shallow penetration into materials, potential adverse effects on food quality, and limited efficacy against spores, highlight the need for alternative or complementary methods. It is also important to remember that UV-C technology, while effective at deactivating microorganisms and preventing their proliferation in food products, does not offer permanent protection; the potential benefits of a sustainable solution for longer-term protection therefore warrant continued examination [38].

Table 4 provides general guidelines for UV-C light treatment in food preservation. For optimal results, it is crucial to use a UV-C treatment specifically tailored to the food type, packaging, and desired shelf life. The following recommendations are important:
• Follow safety precautions when using UV-C equipment.
• Check local regulations regarding UV-C light usage for food preservation.
• Ensure the food is clean and dry before UV-C treatment.
• Avoid direct exposure to UV-C light.
• Allow the food to cool after UV-C treatment.

The SLID

The global food industry faces a monumental challenge: feeding a growing population while minimizing environmental and health impacts. Each year, roughly one-third of global food production is wasted [7]. Current food preservation methods often rely on refrigeration, which has limitations in accessibility and energy consumption, or on chemical additives, which raise concerns about long-term health effects. In this context, ultraviolet-C (UV-C) light emerges as a promising technology for combating microbial contamination in food, offering a potent and non-toxic approach [1]. However, its efficacy is subject to limitations such as the inability to provide continuous protection, limited surface penetration, and the requirement for precise application parameters [38]. Additionally, concerns exist regarding potential human health risks from direct exposure [55].
These limitations underscore the need for a thorough evaluation of UV-C technology's benefits and limitations in enhancing food safety, improving storage conditions, and reducing household waste levels. The Smart-Lid (SLID) concept addresses these limitations by offering a targeted and controlled approach to UV-C application. By programming a built-in UV-C light source for periodic activation, the SLID potentially reduces the need for continuous exposure (Figure 3a,b). Additionally, the lid design can facilitate deeper light penetration into exposed food surfaces compared to traditional UV-C applications, and its user-friendly controls address the challenge of setting precise application parameters. At the household level, the SLID concept is proposed as a sustainable solution that helps overcome the identified challenges while aligning with the UN 2030 goals for sustainable development and the transition to a circular economy. The idea of "reducing household waste and contributing to food safety by using waste and unused packaging while reducing the potential risk to human health" serves as a promising starting point for an innovative and sustainable solution. Encouraging sustainable reuse alongside recycling at the household level constitutes a novel approach to addressing household food waste, with the potential to significantly reduce carbon emissions.

The Working Principle of the SLID Project

The Smart-Lid (SLID) project (Figure 4) tackles the dual challenges of food waste reduction and household food safety through a novel and sustainable approach. Its working principle, its importance, and its impact on food safety and storage are outlined below:

• UV-C Light Source: A strategically positioned, wide-angle UV-C LED light source is integrated within the SLID. This positioning ensures that the UV-C light reaches all food surfaces within the jar for effective microbial inactivation.
• User-Controlled Treatment: The SLID features a user-friendly interface that allows users to select pre-programmed treatment settings based on the type of food stored. These settings control the duration and intensity of the UV-C light exposure, optimizing the treatment for different food types.
• Controlled Environment: The SLID design prioritizes maintaining consistent internal conditions, including light source height and internal temperature, to ensure optimal UV-C treatment effectiveness.

Working Principles

1. Repurposed Glass Jar: The SLID concept utilizes readily available glass jars as storage containers. These jars are ideal for their durability, transparency, and compatibility with UV-C light (discussed later).
2. Integrated UV-C Light Source: The SLID incorporates a built-in UV-C LED with a 120-degree beam, strategically positioned to illuminate the food contents within the jar without leaving blind spots (a "zero-shadow" effect).
• Enhanced UV-C Penetration: The SLID employs strategically positioned UV-C light sources and carefully selected glass materials to ensure that UV-C radiation penetrates deeply into the food volume. This maximizes the exposure of target microorganisms to the germicidal effects of UV-C, enhancing its inactivation efficacy (Figure 5).
• Minimized Light Loss: The SLID incorporates anti-reflective coatings on the inner lid surface to reduce light reflection at the glass-air interface. This minimizes the loss of UV-C radiation from the jar, allowing more of the energy to reach the food and further enhancing treatment efficiency.
• Optimized Light Distribution: The SLID design considers the geometry of the jar and the placement of the UV-C light sources to achieve uniform light distribution within the food container. This ensures that all areas of the food receive adequate UV-C exposure, preventing localized microbial growth and extending shelf life.
3. Programmable Activation: The SLID features user-friendly controls for programming the UV-C light source's periodic activation, including exposure time, operation period, and UV-C intensity. This ensures targeted and controlled exposure, minimizing the need for continuous irradiation while protecting food quality and extending shelf life toward the "zero-waste" target.
4. Food Preservation: When activated, the UV-C light emits short-wavelength ultraviolet radiation that disrupts the DNA and RNA of microorganisms present on the food surface or suspended in the air within the jar. This effectively inactivates bacteria, mold, and viruses, extending the shelf life of the food and minimizing the risk of spoilage.

The importance of the project lies in the following:
1. Reduced Food Waste: Food spoilage is a significant contributor to global food waste. The SLID project aims to combat this by extending the shelf life of opened food items, minimizing the amount of food discarded.
2. Enhanced Food Safety: Foodborne illnesses caused by microbial contamination are a major public health concern. The SLID project contributes to safer food storage by inactivating harmful microorganisms and reducing the risk of foodborne illnesses, aided by additional air seals and moisture controls (IP68).
3. Sustainability: By repurposing existing glass jars, and through the repeated use of lids and external protective silicone covers, the SLID project promotes a circular economy, minimizing resource consumption and waste generation.

Impact Analysis on Food Safety and Storage

1. Food Safety: Studies have shown the effectiveness of UV-C light in inactivating a wide range of microorganisms, including bacteria, mold, and viruses [1,2]. The SLID project, by incorporating a controlled UV-C source, can significantly reduce microbial contamination on food surfaces, enhancing food safety at the household level.
2. Storage: The SLID project offers a convenient and effective solution for extending the shelf life of opened food items. By inactivating spoilage microbes, the SLID can potentially slow down the deterioration process, allowing for safer storage over longer durations.

Why Glass Jars? Embracing Sustainability in the SLID Project

The Smart-Lid (SLID) project, a revolutionary approach to household food preservation using UV-C light, stands out for its commitment to sustainability. At the heart of this commitment lies the choice to repurpose glass jars as storage containers. This decision not only aligns with the project's environmental goals but also offers a multitude of practical advantages. Repurposing glass jars for the SLID project significantly reduces the environmental impact associated with manufacturing new containers. Glass production is an energy-intensive process, consuming substantial amounts of fossil fuels and generating greenhouse gas emissions [55]. By reusing existing jars, the need for new glass production is minimized, thereby conserving resources and reducing the carbon footprint. As a material, glass embodies sustainability principles in several respects [56]:
a. Durability and Longevity: Glass jars are remarkably durable, withstanding repeated use and harsh environments. This durability extends the lifespan of the jars, reducing the need for frequent replacements and minimizing waste generation.
b. Recyclability: At the end of their useful life, glass jars can be readily recycled into new glass products, creating a closed-loop system that minimizes waste and promotes resource conservation.
c. Transparency: Glass jars offer excellent transparency, allowing users to easily identify and monitor the contents, reducing the likelihood of food spoilage and waste.
d. Cost-Effectiveness: Repurposing glass jars also presents economic advantages. Utilizing readily available glass jars significantly reduces packaging costs compared to purchasing new containers, making the SLID project more affordable for consumers.
e. Standardization: Glass jars come in standardized sizes and shapes, ensuring compatibility with various household storage needs and facilitating easy stacking and organization.
f. Functionality: Glass is an inert material, meaning it does not react with food or release harmful chemicals. This inertness ensures food safety and maintains the integrity of stored items.

The choice of glass as the storage material for the SLID project was not merely driven by sustainability considerations; it also aligns with the project's focus on functionality, since glass exhibits unique optical properties that can enhance the effectiveness of UV-C treatment [54,57]. The decision to repurpose glass jars is thus not merely a matter of convenience; it is a testament to the project's commitment to sustainability and environmental responsibility. By embracing this approach, the solution can minimize ecological footprints, promote resource conservation, and contribute to a more sustainable future.

Optimizing UV-C Light Propagation in Glass Jars for Food Preservation

The effectiveness of ultraviolet-C (UV-C) light for inactivating microorganisms in food preservation applications can be significantly influenced by its propagation and reflection within the storage container. In this context, the use of a thin film placed on the outer surface of the jar lid presents a promising strategy for optimizing the efficacy of UV-C treatment while protecting human health.

Impact of Optical Properties on Light Propagation

The optical properties of this film can manipulate the angle at which UV-C light interacts with the glass surface, thereby influencing the reflection and transmission characteristics [58]. Anti-reflective films, for example, are specifically designed to minimize surface reflection, enabling a greater portion of the incident light to enter the glass jar [54,56]. This translates to enhanced light penetration within the container, reaching deeper into the food volume for more effective microbial inactivation.

Improved Efficiency and Protection

By optimizing light propagation through the film-glass interface, the overall efficiency of UV-C treatment can be significantly improved. The desired level of microbial inactivation can then be achieved with potentially shorter UV-C exposure times or lower energy consumption. Additionally, a well-designed film can offer secondary benefits such as improved user safety by minimizing UV-C leakage from the jar [59].

Considerations for Jar Usage

The effectiveness of UV-C treatment within glass jars is also influenced by several jar-related factors. These include the following:
1. Glass Composition: Different types of glass exhibit varying degrees of UV-C transmittance. Borosilicate glass, for instance, offers superior UV-C transmission compared to standard soda-lime glass [59,60].
2. Jar Geometry: The shape and size of the jar can affect the path length of the UV-C light within the container, impacting the uniformity of microbial inactivation. Cylindrical jars, because of their radial symmetry, can offer more uniform light distribution compared to jars with complex geometries.
3. Food Filling Level: The volume of food present within the jar affects the distance that UV-C light needs to travel to reach target microorganisms. Optimizing the filling level can help ensure adequate light exposure throughout the food volume.

By incorporating a strategically designed reflective film on the inner surface of the protective external part and considering jar-related factors, the effectiveness of UV-C light treatment for food preservation within glass jars can be significantly enhanced. This approach offers a promising avenue for developing safe and efficient methods to minimize food spoilage and promote food safety at the household level.

Enhancing Human Health Safety in the SLID Project: The Role of Anti-Reflective Films

The Smart-Lid (SLID) concept, a novel approach for household food preservation utilizing UV-C light, prioritizes human health safety by incorporating an anti-reflective film on the inner surface of the jar lid. This strategic design choice offers multiple benefits that minimize the risk of UV-C exposure to users [61]:

• Minimizing UV-C Leakage and Protecting Eyes: The primary function of the anti-reflective film is to reduce the reflection of UV-C light from the glass surface, effectively preventing its leakage from the jar. This feature is crucial for ensuring that the UV-C light remains confined within the jar, preventing accidental exposure to users' eyes and skin. The film's ability to enhance light transmission through the glass also contributes to this safety aspect by minimizing the need for excessive UV-C intensity, further reducing the potential for harm [59].
• Optimizing Light Penetration and Efficacy: In addition to its safety benefits, the anti-reflective film also plays a role in optimizing the efficacy of UV-C treatment. By minimizing surface reflection, the film allows more UV-C light to penetrate deeper into the food volume, ensuring more uniform microbial inactivation and reducing the risk of food spoilage. This enhanced light penetration can potentially lead to shorter treatment times or lower UV-C intensity requirements, further minimizing the risk of exposure while maintaining food safety [59,60].
• Considerations for Film Design and Implementation: The design and implementation of the anti-reflective film for the SLID project should carefully consider several factors [60]:
• Film Material: The choice of film material should prioritize high UV-C transmittance, durability, and compatibility with food-contact applications [61].
• Film Thickness: The optimal film thickness should be determined to balance light transmission and anti-reflective properties. A thicker film may enhance reflection reduction but could also decrease light penetration [54].
• Film Application Method: A uniform and consistent application method should be employed to ensure consistent optical performance across the film surface.
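To give a rough quantitative feel for the reflection and thickness considerations above, the sketch below evaluates the normal-incidence Fresnel reflectance of a bare air-glass interface and the quarter-wave thickness of a single anti-reflective layer. The refractive indices and the 253.7 nm design wavelength are illustrative assumptions, not measured values for the SLID materials.

```python
# Minimal sketch: normal-incidence reflectance of a bare air/glass interface and the
# quarter-wave thickness of a single anti-reflective layer. Refractive indices are
# illustrative assumptions at ~254 nm, not measured SLID material data.
def fresnel_reflectance(n1: float, n2: float) -> float:
    """Normal-incidence power reflectance between media with indices n1 and n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def quarter_wave_thickness_nm(wavelength_nm: float, n_film: float) -> float:
    """Physical thickness of a quarter-wave layer: d = lambda / (4 * n_film)."""
    return wavelength_nm / (4.0 * n_film)

n_air, n_glass, n_film = 1.0, 1.50, 1.23   # assumed indices
print(fresnel_reflectance(n_air, n_glass))        # bare interface: ~0.04, i.e. ~4% loss
print(quarter_wave_thickness_nm(253.7, n_film))   # ~52 nm layer thickness
# For an ideal single-layer AR coating, reflection vanishes when n_film = sqrt(n1 * n2);
# sqrt(1.0 * 1.50) ~ 1.22, which is why a film index near 1.23 is assumed here.
```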
The incorporation of an anti-reflective film in the SLID project demonstrates a commitment to prioritizing human health safety while optimizing the effectiveness of UV-C treatment for household food preservation. By minimizing UV-C leakage and enhancing light penetration, the film contributes to a safer and more efficient food preservation solution.

Understanding the SLID Control Panel and Settings

OK button: this button confirms your selections and starts the UV light sanitization process. Cancellation button: this button cancels any changes you made and returns the display to the last saved settings.
3. Using the control panel to adjust for different food types: The user manual provides information about recommended settings for different food types; for specific food items, it may also be necessary to consult the manufacturer for recommended exposure times and intensities. Safety information: UV-C light can be harmful to human skin and eyes. When using a UV light sanitizer, it is crucial to adhere to safety precautions such as carefully placing a protective shield on the jar, wearing appropriate personal protective equipment (PPE), and avoiding exposure to the light. Additional notes: The effectiveness of UV light sanitization on food can vary depending on the wavelength, exposure time, and intensity of the light. UV light sanitization may not be suitable for all food types; some foods may be sensitive to UV light and could lose nutrients or spoil more quickly. It is important to properly clean food surfaces before using a UV light sanitizer.

• Enhancing UV-C Efficacy through Strategic LED Placement and Controlled Exposure: The incorporation of a strategically positioned 120-degree wide-angle UV-C LED light source within the Smart Lid (SLID) ensures uniform and effective irradiation of the food contents. This design maximizes light penetration while minimizing shadowing, guaranteeing comprehensive exposure to UV-C radiation for microbial inactivation. Moreover, controlled exposure time and intensity settings, accessible through the user interface, enable precise tailoring of UV-C treatment to specific food types, considering their susceptibility to microbial contamination.
• User-Friendly Interface for Personalized Food Preservation: The user interface of the Smart Lid (SLID) is characterized by its intuitive design, featuring an economical monochrome display and membrane keypad for seamless data input and user interaction. This interface allows users to navigate through pre-programmed menus tailored to various food categories. Each menu offers a selection of exposure time, period, and intensity settings optimized for the characteristics of specific food types. This personalized approach ensures that UV-C treatment is precisely tailored to individual food items, striking a balance between maximum effectiveness and minimal impact on food quality.
• Creating an Optimal Food Preservation Environment: The Smart-Lid (SLID) design prioritizes the establishment of an ideal food preservation environment by carefully controlling factors such as light source height, internal temperature, air tightness, and moisture protection. These controlled conditions are pivotal in ensuring the consistent and efficient delivery of UV-C treatment, thereby mitigating the influence of external variables on the treatment process. Additionally, the airtight seal and moisture protection mechanisms safeguard food from external contamination and moisture loss, thereby extending its shelf life and preserving its quality.
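To make the description of pre-programmed, user-selectable treatment settings more concrete, the following is a hypothetical sketch of how such settings could be represented and validated in software. The allowed hour and minute values mirror the control-panel options described later for Figure 6; the intensity figures in mW/cm² and the interpretation of the hour setting as the interval between activations are assumptions made purely for illustration and are not SLID specifications.

```python
# Hypothetical sketch of pre-programmed SLID treatment settings. The allowed values
# mirror the control-panel description in the text; the intensity figures and the
# reading of the hour setting as an activation interval are invented for illustration.
from dataclasses import dataclass

ALLOWED_HOURS = {1, 3, 6, 9, 12, 24}     # hour setting, interpreted here as the interval between activations
ALLOWED_MINUTES = {1, 2, 3, 5}           # minutes of UV-C exposure per activation
INTENSITY_MW_CM2 = {"low": 0.1, "medium": 0.3, "high": 0.6}   # assumed values

@dataclass
class TreatmentSetting:
    period_h: int
    exposure_min: int
    intensity: str

    def validate(self) -> None:
        if self.period_h not in ALLOWED_HOURS:
            raise ValueError(f"period must be one of {sorted(ALLOWED_HOURS)} h")
        if self.exposure_min not in ALLOWED_MINUTES:
            raise ValueError(f"exposure must be one of {sorted(ALLOWED_MINUTES)} min")
        if self.intensity not in INTENSITY_MW_CM2:
            raise ValueError("intensity must be low, medium or high")

    def daily_dose_mj_cm2(self) -> float:
        """Approximate UV-C dose delivered per day at the illuminated surface."""
        activations_per_day = 24 / self.period_h
        dose_per_activation = INTENSITY_MW_CM2[self.intensity] * self.exposure_min * 60
        return activations_per_day * dose_per_activation

setting = TreatmentSetting(period_h=6, exposure_min=2, intensity="medium")
setting.validate()
print(f"{setting.daily_dose_mj_cm2():.0f} mJ/cm^2 per day")   # 4 x (0.3 * 120) = 144
```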
First SLID Project Test

The first prototype test started in November 2022, and the initial SLID study was completed over a 12-month period, using a sample set of 6 × 2 samples. The research included six different sample groups consisting of tomato paste (N1-SL1), mayonnaise (N2-SL2), and jams, pasta, legumes, and spices (sumac) (N6-SL6). We used organic and homemade ingredients to ensure the products were free of preservatives, and we took care to ensure there were no signs of live organisms in the dry foods. The maximum recommended usage periods and expiration dates after package opening were considered for each product's shelf life. For storage, the traditional household 85 mm, 1 L mason jar was chosen. The study involved a comprehensive analysis and evaluation, as well as a real-time shelf-life test. In addition, microbiological testing was applied at 2-month intervals to detect living organisms not visible in the product groups; these analyses were conducted in a separate laboratory, away from the experimental setting. Group 1 consisted of samples with classic lids (N1-N6), and Group 2 had samples with Smart Lids (SL1-SL6). The N1-N3 and SL1-SL3 groups were stored under the following conditions:

In the first zone, a cold environment (E1) was established. The temperature was between 0 °C (32 °F) and 5 °C (41 °F) [1], around 3 °C on average, and the humidity was around 50-60%. There was good air circulation in the conditioned environment, but the jars were airtight. These conditions help to slow down bacterial growth and spoilage of foods.

In the second environment, a pantry zone (E2) was established. The ideal temperature for a pantry is between 10 °C and 21 °C (50 °F and 70 °F); the average here was around 17 °C, with humidity between 50% and 60%. There was good air circulation in the conditioned environment, but the jars were airtight. These conditions help to slow down natural bacterial growth and spoilage of the food samples.

One set of samples was thus stored in the cold environment (E1), while the N4-N6 and SL4-SL6 groups were monitored under pantry storage conditions (E2). A daily UV dose adjustment was made for each product group (Table 4). Criteria such as humidity, temperature, and air circulation in the storage environments were kept constant. The lids of the products were positioned to prevent air entry, and they were opened once a week to check for any spoilage, which was then recorded. Samples taken every two months were examined in another laboratory for pathogens and live microorganisms (including checks for caterpillars and egg formation).
The UV-treated samples showed no signs of spoilage at the end of the 1-year period. However, during the second laboratory check (4 months), the first mold formation began in N1, and the mayonnaise and jam groups showed visible spoilage. Sumac N6 (the spice group) showed signs of live organisms and taste loss at the third check (6 months). Based on data from a second literature review on UV-treated products, this study largely confirmed the values in Table 5, although the tomato paste and sauce groups in the experiment spoiled over an even shorter period. It was clearly observed that the shelf life of UV-treated products was extended, considering that no biological external factors were allowed, intensive opening and closing was not performed, and air circulation was reduced. In the real-time shelf-life tests, we did not observe changes in the SLID-treated products other than physical water loss and crust formation in the tomato paste and mayonnaise samples (SL1-SL3), and we determined that no pathogen formation occurred even after six periods (12 months). In the microbiological testing, on the other hand, we observed spoilage and mold formation in the N1 group (tomato paste) during the second period and in the N2 group during the third period (6 months), which supports the view that the SLID effect significantly extended the shelf life of these product groups. Researchers have shown that UV treatment extends the shelf life of various food products by reducing the microbial load and preventing spoilage. The test concluded that this SLID study's focus on the fine line between staleness and spoilage could further enhance its potential, revealing significant improvements in food storage containers. In this 12-month test, we charged the SLID covers twice in the cold (3 °C) environment and once in the cellar (15-18 °C) environment. The preliminary results reveal a tendency to extend the recommended consumption periods for products after opening, a significant finding for perishable foods. For dry foods, this period tended to extend by 30-55%.

Broader Social and Economic Impact of the SLID Project: A Comprehensive Analysis

The SLID project's potential to reduce household food waste extends beyond economic benefits, creating a ripple effect with positive social and environmental implications. By promoting more sustainable food consumption practices, the SLID project contributes to a future where food resources are valued and utilized efficiently.

Quantifying the Impact: Estimating Savings and Waste Reduction

To evaluate the broader social and economic impact of the SLID project, it is crucial to quantify its potential effect on household savings, waste reduction, and overall resource utilization. While precise calculations require comprehensive data and analysis, the following is a proposed approach to estimating these impacts:

• Household Savings: Studies indicate that households discard an average of 25-35% of their food purchases due to spoilage. The SLID project's ability to extend food shelf life can potentially reduce this waste by 50%, saving households an estimated 12.5-17.5% of their annual grocery expenses [1].
• Minimized Food Waste Disposal Costs: Food waste disposal costs vary by region but can range from USD 10 to USD 30 per household per year. By reducing food waste, the SLID project can help households save on these disposal costs.
• Waste Reduction and Resource Conservation: The SLID project has the potential to reduce household food waste by 50%, diverting millions of tons of food from landfills and incinerators. This reduction in food waste translates to significant savings in landfill space, energy consumption associated with waste management, and greenhouse gas emissions.
• Resource Conservation: Food production utilizes significant resources, including land, water, and energy. When food goes to waste, these resources are essentially squandered. The SLID project's contribution to food waste reduction translates to a more efficient utilization of resources across the entire food supply chain.

3.8. Household Food Waste: Formulas, Statistics, and the Impact of the SLID Project

While there is no single, universally applicable formula for determining the exact amount of food waste generated in a household, several approaches can provide estimates. Two common examples follow:
• Household Savings: Consider the average annual household food expenditure (e.g., USD 6000) and the potential waste reduction percentage (e.g., 50%). Estimated savings per household per year: USD 6000 × 0.5 = USD 3000.
• Waste Reduction: Assume an average household generates 200 kg of food waste per year. With a 50% reduction, the estimated waste reduction per household per year would be 200 kg × 0.5 = 100 kg.
• Total Savings and Waste Reduction: Extrapolating these estimates to a larger scale, such as a city or a country, would provide insights into the overall economic and environmental impact of the SLID project.
• Economic Impact of Household Food Waste: The economic cost of household food waste is substantial. In the United States alone, the EPA estimates that food waste costs households between USD 161 and USD 199 billion per year, which translates to an average of USD 1560 to USD 2270 per household per year [35].
• The SLID Project's Potential Impact: The SLID project has the potential to significantly reduce household food waste by extending the shelf life of opened food items. By minimizing food spoilage, the SLID project can help households save money on food purchases and disposal costs, contributing to both economic and environmental benefits.

Household food waste is a significant global issue with far-reaching economic and environmental consequences. The SLID project, by addressing food waste at the household level, offers a promising solution to reduce food loss, conserve resources, and promote more sustainable food consumption practices.
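The two estimation approaches above can be written as a pair of one-line functions; the figures used below are the example values quoted in the text (USD 6000 annual spending, a 50% waste reduction, and 200 kg of household food waste per year), not new data.

```python
# Sketch of the two household-waste estimation approaches quoted above, using the
# example figures from the text (USD 6000 spending, 50% reduction; 200 kg waste/yr).
def savings_from_waste_reduction(annual_food_spend_usd: float,
                                 waste_reduction_fraction: float) -> float:
    """Estimated annual savings = spending x fraction of waste avoided."""
    return annual_food_spend_usd * waste_reduction_fraction

def waste_avoided_kg(annual_waste_kg: float, waste_reduction_fraction: float) -> float:
    """Estimated mass of food diverted from disposal per household per year."""
    return annual_waste_kg * waste_reduction_fraction

print(savings_from_waste_reduction(6000, 0.5))   # USD 3000 per household per year
print(waste_avoided_kg(200, 0.5))                # 100 kg per household per year
```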
3.9. Enhancing Food Safety and Extending Shelf Life through the SLID Project

The SLID project emerges as a valuable tool for mitigating food waste and promoting food safety within households. The project's design incorporates a comprehensive storage-conditions matrix that enables informed decision-making on appropriate storage methods for various dry food categories and home-sourced products. This empowers users to select the optimal storage conditions, including shelf-life recommendations, for their specific food items. By integrating UV-C light technology, the SLID project offers an additional layer of protection against microbial contamination. UV-C light effectively inactivates a broad spectrum of microorganisms, including bacteria, viruses, and mold spores, that can cause food spoilage and pose potential health risks. This targeted approach minimizes the need for chemical preservatives or excessive packaging, contributing to a more sustainable food storage system (Table 5).

• Shelf-Life Extension: The SLID project's storage-conditions matrix provides valuable guidance on maximizing the shelf life of various food items through proper storage techniques. This not only reduces food waste but also ensures that food remains consumable for extended periods.
• Potential of UV-C Microproduct Use: The incorporation of UV-C light technology within the SLID provides a targeted approach to inactivating microorganisms on food surfaces, further extending shelf life and enhancing food safety.

Future Studies: Optimizing and Expanding the SLID Project

The SLID project presents a promising solution for reducing household food waste by utilizing UV-C light technology and improving storage conditions. However, further research is necessary to optimize its effectiveness and explore broader potential applications. Key areas for future studies are discussed in Sections 3.10.1-3.10.5 below; such studies can optimize the SLID project for efficacy, user-friendliness, and environmental sustainability, thereby maximizing its impact on food waste reduction and food safety.

Conclusions

The global landscape is rife with intricate challenges, spanning from food waste and hunger to the burgeoning issue of overflowing landfills. With approximately 17% of available food going to waste each year, totaling nearly 690 million tons, the ramifications are profound, exacerbating greenhouse gas emissions and environmental degradation. Concurrently, millions around the world grapple with food insecurity and hunger, while landfills grow, posing grave environmental and health hazards. Amidst this complexity, the SLID project emerges as a beacon of hope, offering a multifaceted solution poised to address these interconnected challenges. Through the strategic utilization of UV-C light technology and meticulously optimized storage conditions, the SLID stands poised to extend the shelf life of diverse food items, thereby curbing household food waste. This not only fosters environmental sustainability but also conserves vital resources, potentially mitigating issues of food insecurity.
The direct impact of the SLID's capacity to prolong food shelf life cannot be overstated, as it directly mitigates household food waste, a significant contributor to the broader global issue. By curtailing spoilage and extending the period during which food remains edible, the SLID empowers households to economize on food expenditures and reduce their ecological footprint. Furthermore, by diminishing food waste, the SLID indirectly contributes to alleviating hunger and food insecurity by augmenting the availability of edible sustenance. Through shelf-life extension, the SLID endeavors to ensure that a greater proportion of food reaches those most in need, particularly in regions grappling with limited access to nutritional resources. Additionally, by mitigating food waste, the SLID alleviates the strain on landfills teeming with food remnants and other organic refuse. This not only obviates the necessity for additional landfill sites but also mitigates the associated environmental and health hazards arising from landfill overflow.

However, notwithstanding its promising potential, the SLID project is not without its challenges. To ensure widespread adoption and sustained effectiveness, concerted efforts are imperative. Key challenges include the optimization of treatment regimens for diverse food types to maximize microbial inactivation while safeguarding food quality, as well as the comprehensive evaluation of UV-C treatment's long-term efficacy in controlling microbial proliferation and preserving food's nutritional integrity within the SLID system. Moreover, enhancing the user interface of the SLID system and delving into consumer attitudes and perceptions are crucial steps towards fostering its uptake in households. A comprehensive life cycle assessment is imperative to gauge the environmental impact of the SLID project across its entire life cycle, spanning from material procurement to disposal. In essence, the SLID project represents a holistic approach towards tackling the multifaceted issues of food waste, hunger, and landfill overflow. Through the amalgamation of UV-C light technology and optimized storage conditions, the SLID holds the promise of making substantial strides towards environmental sustainability, resource conservation, and food security. Future endeavors should prioritize the refinement of treatment protocols, assurance of long-term food safety and quality, enhancement of usability and consumer acceptance, and rigorous assessment of environmental ramifications. With continued refinement and advancement, the SLID stands poised to emerge as a pivotal tool in the global fight against food waste and its attendant challenges.

Figure 1. Conversion between wavelength, frequency, and energy for the electromagnetic spectrum of light [30-36].

Figure 2. Ultraviolet-C (UV-C) (germicidal) radiation disrupts the DNA and RNA molecules of a pathogen, preventing its replication and rendering it incapable of causing infection, thereby eliminating its threat. Ultraviolet-C (UV-C) radiation renders viruses, bacteria, and fungi nonfunctional.

Figure 3. (a) The components of the SLID; (b) the SLID UV-C product.
Figure 4. (a) The principal section of the SLID (internal UV-C LED protection); (b) the main challenge of retail and household-level storage with external UV-C LED protection (on shelves). Blind spots are often present, and they naturally have a high potential for trapping mold and causing food to deteriorate quickly. These risks increase further when a semi-opened package is kept in storage.

Figure 6 presents the control panel of a UV-C light (SLID) food storage unit used at home. The following breakdown of the user interface is based on the image:
1. Display: The display shows the currently selected values for exposure time (hours and minutes) and UV-C intensity (low, medium, and high).
2. Controls:
a. H (hours): This button allows you to set the exposure time in hours. You can choose between 1, 3, 6, 9, 12, and 24 h.
b. M (minutes): This button allows you to set the exposure time in minutes. You can choose between 1, 2, 3, and 5 min.
c. Up/down arrows for data entry: These arrows allow you to adjust the selected minute value.

Calculation Methodology: To estimate the potential savings and waste reduction on a household and annual basis, the following approach can be used:
• Food Spending and Waste Reduction Percentage: Formula: estimated annual savings = average annual food expenditure × waste reduction percentage. Example: average annual food expenditure = USD 6000, waste reduction percentage = 50%; estimated savings = USD 6000 × 0.5 = USD 3000.
• Food Purchase Data and Waste Rate: Formula: estimated household food waste = total food purchases × waste rate. Example: total food purchases per year = 5000 kg, waste rate = 20%; estimated household food waste = 5000 kg × 0.2 = 1000 kg.

3.10.1. Optimizing Food-Specific Treatment Regimens
a. Tailoring UV-C Doses: Research can be expanded to establish precise UV-C light exposure times and intensities optimized for different food types to ensure microbial inactivation efficacy while minimizing impacts on food quality.
b. Improving Storage Conditions: More research can be done to find out how UV-C treatment works with different storage conditions (such as temperature and humidity) for different types of food. This could lead to personalized storage plans that make the most of the SLID system's shelf-life extension.

3.10.2. Long-Term Food Safety and Quality Evaluation
a. Long-Term Microbial Control: To address recontamination concerns, studies can assess the long-term efficacy of UV-C treatment in controlling microbial growth on food surfaces stored within the SLID system.
b. Nutritional Value Preservation: Research can examine the impact of UV-C light exposure on the nutritional quality of various food items over extended storage periods to ensure both shelf-life extension and the preservation of nutritional quality.

3.10.3. Usability and Consumer Adoption Studies
a. User Interface Optimization: To maximize user adoption and effectiveness, evaluations can focus on user interaction with the SLID system's interface, emphasizing ease of use, clarity of instructions, and customizability of treatment settings for different food types.
b. Consumer Behavior and Acceptance: Social science research can explore consumer attitudes and perceptions towards the SLID project to identify barriers to adoption and inform strategies for promoting household use.

3.10.4. Life Cycle Assessment and Environmental Impact
a. Energy Consumption and Efficiency: Assessments can evaluate the energy consumption of the UV-C light source within the SLID and explore strategies for optimizing energy efficiency while maintaining microbial inactivation effectiveness (a rough numerical sketch of such an estimate is given after this list).
b. Life Cycle Analysis: Comprehensive assessments can evaluate the SLID project's environmental impact throughout its life cycle, identifying areas for environmental-footprint reduction from material sourcing to disposal.

3.10.5. Exploring Broader Applications
a. Commercial Food Storage: Research can investigate the feasibility and effectiveness of implementing the SLID system in commercial food storage settings, such as restaurants or grocery stores.
b. Beyond Food Preservation: Exploration of UV-C light's germicidal properties for disinfecting surfaces or inactivating airborne microorganisms in household settings can expand the SLID technology's applications.
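As a starting point for the energy assessment mentioned in Section 3.10.4.a, the sketch below estimates the annual electrical energy drawn by the lid's UV-C LED from its power rating and duty cycle. The 3 W LED power, 2-minute exposure, and four activations per day are assumed values for illustration only, not SLID specifications.

```python
# Rough energy-consumption sketch for the assessment described in Section 3.10.4.a.
# The LED electrical power and duty cycle are assumed values for illustration.
def annual_energy_kwh(led_power_w: float, exposure_min_per_activation: float,
                      activations_per_day: float) -> float:
    """Annual electrical energy drawn by the UV-C LED, in kWh."""
    hours_per_year = (exposure_min_per_activation / 60.0) * activations_per_day * 365
    return led_power_w * hours_per_year / 1000.0

# Example: a 3 W UV-C LED running for 2 minutes, 4 times a day.
print(f"{annual_energy_kwh(3.0, 2, 4):.2f} kWh/year")   # ~0.15 kWh/year
```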
19,126.8
2024-06-24T00:00:00.000
[ "Environmental Science", "Engineering", "Materials Science" ]
Phase Transition in Frustrated Magnetic Thin Film—Physics at Phase Boundaries In this review, we outline some principal theoretical knowledge of the properties of frustrated spin systems and magnetic thin films. The two points we would like to emphasize: (i) the physics in low dimensions where exact solutions can be obtained; (ii) the physics at phase boundaries where interesting phenomena can occur due to competing interactions of the two phases around the boundary. This competition causes a frustration. We will concentrate our attention on magnetic thin films and phenomena occurring near the boundary of two phases of different symmetries. Two-dimensional (2D) systems are in fact the limiting case of thin films with a monolayer. Naturally, we will treat this case at the beginning. We begin by defining the frustration and giving examples of frustrated 2D Ising systems that we can exactly solve by transforming them into vertex models. We will show that these simple systems already contain most of the striking features of frustrated systems such as the high degeneracy of the ground state (GS), many phases in the GS phase diagram in the space of interaction parameters, the reentrance occurring near the boundaries of these phases, the disorder lines in the paramagnetic phase, and the partial disorder coexisting with the order at equilibrium. Thin films are then presented with different aspects: surface elementary excitations (surface spin waves), surface phase transition, and criticality. Several examples are shown and discussed. New results on skyrmions in thin films and superlattices are also displayed. By the examples presented in this review we show that the frustration when combined with the surface effect in low dimensions gives rise to striking phenomena observed in particular near the phase boundaries. Introduction Extensive investigations on materials have been carried out over the past three decades. This is due to an enormous number of industrial applications which drastically change our lifestyle. The progress in experimental techniques, the advance on theoretical understanding, and the development of high-precision simulation methods together with the rapid increase of computer power have made possible the rapid development in material science. Today, it is difficult to predict what will be discovered in this research area in ten years. The purpose of this review is to look back at early and recent results in the physics of frustrated spin systems at low dimensions: 2D systems and magnetic thin films. We would like to connect these results, published over a large period of time, on a line of thoughts: physics at phase boundaries. A boundary between two phases of different orderings is determined as a compromise between competing interactions, each of which favors one kind of ordering. The frustration is thus minimum on the boundary (see reviews on many aspects of frustrated spin systems in Ref. [1]). When an external parameter varies, this boundary changes and we will see in this review that many interesting phenomena occur in the boundary region. We will concentrate on the search for interesting physics near the phase boundaries in various frustrated spin systems in this review. In the 1970s, statistical physics with Renormalization Group analysis greatly contributed to the understanding of the phase transition from an ordered phase to a disordered phase [2,3]. 
We will show methods to study the phase transition in magnetic thin films where surface effects when combined with frustration effects give rise to many new phenomena. Physical properties of solid surfaces, thin films, and superlattices have been intensively studied due to their many applications [4][5][6][7][8][9][10][11]. A large part of this review, Section 2, is devoted to the definition of the frustration and to models which are exactly solved. We begin in Section 3 with exactly solved models to have all properties defined without approximation. As seen, many striking phenomena are exactly uncovered such as partial disorder, reentrance, disorder lines, and multiple phase transitions. Only exact mathematical techniques can allow us to reveal such beautiful phenomena which occur around the boundary separating two phases of different ground-state orderings. These exact results permit to understand similar behaviors in systems that cannot be solved such as 3D systems. In Section 4, an introduction on surface effects in magnetic thin films is given. To avoid a dispersion of techniques, I introduce only the Green's function method which can be generalized in more complicated cases such as non-collinear spin states. Calculations of the spin-wave spectrum and the surface magnetization are in particular explained. In Section 5 several striking results obtained mainly by the author's group are shown on several frustrated magnetic thin films including helimagnetic films. We show in particular the surface phase transition, quantum fluctuations at low temperature, and the existence of partial phase transition. Results obtained by Monte Carlo simulations are also shown in most cases to compare with the Green's function technique. The question of the criticality in thin films is considered in Section 6. Here, the high-precision multi-histogram techniques are used to show that critical exponents in magnetic thin films are effective exponents with values between those of the 2D and 3D universality classes. Section 7 is devoted to skyrmions, a hot subject at the time being due to their numerous possible applications. Here again, we show only results obtained in the author's group, but we mention a large bibliography. Skyrmions are topological excitations. Skyrmions are shown to result from the competition of different antagonist interactions under an applied magnetic field. We find the existence of a skyrmion crystal, namely a network of periodically arranged skyrmions. Results show that such a skyrmion crystal is stable up to a finite temperature. The relaxation time of skyrmions is shown to follow a stretched exponential law. Concluding remarks are given in Section 8. Frustration Since the 1980s, frustrated spin systems have been subjects of intensive studies [1]. The word "frustration" has been introduced to describe the fact that a spin cannot find an orientation to fully satisfy all interactions with its neighbors, namely the energy of a bond is not the lowest one [12,13]. This will be seen below for Ising spins where at least one among the bond with the neighbors is not satisfied. For vector spins, frustration is shared by all spins so that all bonds are only partially satisfied, i.e., the energy of each bond is not minimum. 
Frustration results either from competing interactions or from the lattice geometry, such as the triangular lattice with antiferromagnetic nearest-neighbor (nn) interaction, the face-centered cubic (fcc) antiferromagnet, and the antiferromagnetic hexagonal-close-packed (hcp) lattice (see [1]). Note that real magnetic materials have complicated interactions and that there are large families of frustrated systems such as the heavy lanthanide metals (holmium, terbium and dysprosium) [14,15], helical MnSi [16], pyrochlore antiferromagnets [17], and spin-ice materials [18]. Exact solutions of simpler systems may help to understand real materials qualitatively. Besides, exact results can be used to validate approximations. We recall in the following some basic arguments leading to the definition of the frustration. The interaction energy of two spins S_i and S_j coupled by J is written as E = −J S_i · S_j. If J is ferromagnetic (J > 0), then the minimum of E is −J, corresponding to S_i parallel to S_j. If J is antiferromagnetic (J < 0), E is minimum when S_i is antiparallel to S_j. One sees that in a crystal with nn ferromagnetic interaction, the ground state (GS) of the system is the configuration where all spins are parallel: the interaction of every pair of spins is "fully" satisfied, namely the bond energy is equal to −J. This is true for any lattice structure. If J is antiferromagnetic, the GS depends on the lattice structure: (i) for lattices containing no elementary triangles, i.e., bipartite lattices (such as the square lattice, the simple cubic lattice, . . . ), in the GS each spin is antiparallel to its neighbors, i.e., every bond is fully satisfied and its energy is equal to −|J|; (ii) for lattices containing elementary triangles, such as the triangular lattice, the fcc lattice, and the hcp lattice, one cannot construct a GS where all bonds are fully satisfied (see Figure 1). The GS does not correspond to the minimum interaction energy of every spin pair: the system is frustrated. Let us formally define the frustration. We consider an elementary lattice cell, which is a polyhedron whose faces are called "plaquettes". For example, the elementary cell of the simple cubic lattice is a cube with six square plaquettes, and the elementary cell of the fcc lattice is a tetrahedron formed by four triangular plaquettes. According to the definition of Toulouse [12], a plaquette is frustrated if the parameter P = ∏ sign(J_{i,j}) is negative, where J_{i,j} is the interaction between two nn spins of the plaquette and the product is performed over all J_{i,j} around the plaquette. We show two examples of frustrated plaquettes in Figure 1: a triangle with three antiferromagnetic bonds and a square with three ferromagnetic bonds and one antiferromagnetic bond. P is negative in both cases. If one tries to put Ising spins on those plaquettes, at least one of the bonds around the plaquette will not be satisfied. For vector spins, we show below that the frustration is equally shared by all bonds so that in the GS each bond is only partially satisfied. In Figure 1, question marks indicate an undetermined spin orientation: choosing an orientation for the spin marked by the question mark will leave one of its bonds unsatisfied (frustrated bond with positive energy). One sees that for the triangular plaquette the degeneracy is three, and for the square plaquette it is four. Therefore, the degeneracy of an infinite lattice in these cases is infinite, unlike the non-frustrated case.
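The plaquette criterion can be evaluated mechanically. The short sketch below (illustrative only) computes the Toulouse parameter P as the product of the signs of the bonds of a plaquette; the specific bond values are arbitrary examples.

```python
import numpy as np

def frustration_parameter(bonds):
    """Toulouse criterion: P = product of sign(J) over the bonds of a plaquette.
    The plaquette is frustrated if P < 0."""
    return np.prod(np.sign(bonds))

# Triangle with three antiferromagnetic bonds (J < 0): frustrated.
triangle = [-1.0, -1.0, -1.0]
# Square with three ferromagnetic bonds and one antiferromagnetic bond: frustrated.
square = [1.0, 1.0, 1.0, -1.0]
# Square with two antiferromagnetic bonds: not frustrated.
square_unfrustrated = [1.0, 1.0, -1.0, -1.0]

for name, plaquette in [("triangle (3 AF)", triangle),
                        ("square (3 F + 1 AF)", square),
                        ("square (2 F + 2 AF)", square_unfrustrated)]:
    P = frustration_parameter(plaquette)
    print(f"{name}: P = {P:+.0f} -> {'frustrated' if P < 0 else 'not frustrated'}")
```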
The frustrated triangular lattice with nn interacting Ising spins was studied in 1950 [19,20]. We emphasize that the frustration may also result from the competition between the Heisenberg exchange interaction, which favors a collinear spin configuration, and the Dzyaloshinskii-Moriya interaction [24,25], which favors a perpendicular configuration. We will return to this interaction in the section on skyrmions later in this paper. We show below how to determine the GS of some frustrated systems and discuss some of their properties. We consider the plaquettes shown in Figure 1 with XY spins. The GS configuration corresponds to the minimum of the energy E of the plaquette. In the case of the triangular plaquette, suppose that spin S_i (i = 1, 2, 3) of amplitude S makes an angle θ_i with the Ox axis. One has
E = −J (S_1 · S_2 + S_2 · S_3 + S_3 · S_1) = −(J/2) [(S_1 + S_2 + S_3)^2 − 3S^2],
where J < 0 (antiferromagnetic). Minimizing E with respect to the three angles θ_i and noting that J is negative, the minimum corresponds to S_1 + S_2 + S_3 = 0, which gives the 120° structure. This is true also for Heisenberg spins. For the frustrated square plaquette, we suppose that the ferromagnetic bonds are J and the antiferromagnetic bond, connecting the spins S_1 and S_4, is −J (see Figure 2). The energy minimization gives θ_21 = θ_32 = θ_43 = π/4 and θ_14 = 3π/4. More generally, if the antiferromagnetic interaction is −ηJ (η > 0), the angles are [26]
cos θ_21 = cos θ_32 = cos θ_43 ≡ cos θ = (1/2) √[(1 + η)/η]  and  |θ_14| = 3|θ|,
where θ_ij ≡ θ_i − θ_j. This solution exists if |cos θ| ≤ 1, namely η > η_c = 1/3. One recovers for η = 1 the angles θ = π/4 and θ_14 = 3π/4. The GS spin configurations of the frustrated triangular and square lattices are displayed in Figure 2 with XY spins. We see that the frustration is shared by all bonds: the energy of each bond is −0.5J for the triangular lattice and −(√2/2)J for the square lattice. Thus, the bond energy in both cases is not fully satisfied, namely not equal to −J, as we said above when defining the frustration. At this stage, we note that the GS found above has a two-fold degeneracy resulting from the equivalence of the clockwise and counter-clockwise turning angles (noted by + and − in Figure 3) between adjacent spins on a plaquette in Figure 2. Therefore the symmetry of these spin systems is of Ising type O(1), in addition to the SO(2) symmetry due to the invariance under a global spin rotation in the plane. From the GS symmetry, one expects the breaking of the O(1) and SO(2) symmetries to behave as the 2D Ising universality class and the Kosterlitz-Thouless transition, respectively [3]. However, whether the two phase transitions occur at the same temperature, and the nature of their universality, remain open questions [26,27]. Let us determine the GS of a helimagnet. Consider the simplest case: a chain of Heisenberg spins with ferromagnetic interaction J_1 (> 0) between nn and antiferromagnetic interaction J_2 (< 0) between nnn. The interaction energy is
E = −J_1 Σ_i S_i · S_{i+1} − J_2 Σ_i S_i · S_{i+2} = −N S^2 (J_1 cos θ + J_2 cos 2θ),
where one has supposed that the angle between nn spins is θ. Minimization with respect to θ gives ∂E/∂θ = N S^2 sin θ (J_1 + 4J_2 cos θ) = 0. The first solution is sin θ = 0 → θ = 0, which is the ferromagnetic solution, and the second one is
cos θ = −J_1/(4J_2) = J_1/(4|J_2|).
This solution is possible when −1 ≤ cos θ ≤ 1, i.e., when J_1/(4|J_2|) ≤ 1, or |J_2|/J_1 ≥ 1/4 ≡ ε_c. An example of such a configuration is shown in Figure 4. Please note that there are two degenerate configurations, of clockwise and counter-clockwise turning angles, as in the other examples above. Please note that two frequently studied frustrated spin systems are the fcc and hcp antiferromagnets. These two magnets are constructed by stacking tetrahedra with four frustrated triangular faces.
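The ground states quoted above can be checked numerically. The sketch below (an illustration under the stated conventions, not code from the reviewed papers) minimizes the XY-spin energy of the antiferromagnetic triangular plaquette and of the square plaquette with one antiferromagnetic bond, and evaluates the helical angle of the J_1-J_2 chain; the optimizer, random seed, and example couplings are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

def triangle_energy(thetas, J=-1.0):
    # E = -J * sum of cos(theta_i - theta_j) over the three bonds (spin amplitude set to 1).
    t1, t2, t3 = thetas
    return -J * (np.cos(t1 - t2) + np.cos(t2 - t3) + np.cos(t3 - t1))

def square_energy(thetas, J=1.0, eta=1.0):
    # Bonds 1-2, 2-3, 3-4 ferromagnetic (J); bond 1-4 antiferromagnetic (-eta*J).
    t1, t2, t3, t4 = thetas
    e = -J * (np.cos(t1 - t2) + np.cos(t2 - t3) + np.cos(t3 - t4))
    e += eta * J * np.cos(t1 - t4)
    return e

rng = np.random.default_rng(0)

res_tri = minimize(triangle_energy, rng.uniform(0, 2 * np.pi, 3))
gaps = np.degrees(np.diff(np.sort(res_tri.x % (2 * np.pi))))
print("triangle: angle gaps ->", np.round(gaps, 1))        # expect ~120 degrees

res_sq = minimize(square_energy, rng.uniform(0, 2 * np.pi, 4))
t = res_sq.x
theta_21 = abs((t[1] - t[0] + np.pi) % (2 * np.pi) - np.pi)
print("square (eta=1): |theta_21| ->", round(np.degrees(theta_21), 1), "deg")  # expect ~45

# Helical angle of the J1-J2 chain: cos(theta) = J1 / (4 |J2|) when |J2|/J1 > 1/4.
J1, J2 = 1.0, -0.5
print("chain helical angle:", round(np.degrees(np.arccos(J1 / (4 * abs(J2)))), 1), "deg")
```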
Frustration by the lattice structure such as these cases are called "geometry frustration". Another 3D popular model which has been extensively studied since 1984 is the system of stacked antiferromagnetic triangular lattices (satl). The phase transition of this system with XY and Heisenberg spins was a controversial subject for more than 20 years. The controversy was ended with our works: the reader is referred to Refs. [28,29] for the history. In short, we found that in known 3D frustrated spin systems (fcc, hcp, satl, helimagnets, . . . ) with Ising, XY, or Heisenberg spins, the transition is of first order [30,31]. Another subject which has been much studied since the 1980s is the phenomenon called "order by disorder": we have seen that the GS of frustrated spin systems is highly degenerate and often infinitely degenerate (entropy not zero at temperature T = 0). However, it has been shown in many cases that when T is turned on the system chooses a state which has the largest entropy, namely the system chooses its order by the largest disorder. We call this phenomenon "order by disorder" or "order by entropic selection" (see references cited in section III. B of Ref. [30]). We will not discuss these subjects in this review which is devoted to low-dimensional frustrated spin systems. Exactly Solved Frustrated Models Any 2D Ising model with non-crossing interactions can be exactly solved. To avoid the calculation of the partition function one can transform the model into a 16-vertex model or a 32-vertex model. The resulting vertex model is exactly solvable. We have applied this method to search for the exact solution of several Ising frustrated 2D models with non-crossing interactions shown in Details have been given in Ref. [32]. We outline below a simplified formulation of a model for illustration. The aim is to discuss the results. As we will see these models possess spectacular phenomena due to the frustration. Example of the Decimation Method We take the case of the centered honeycomb lattice with the following Hamiltonian where σ i = ±1 is an Ising spin at the lattice site i. The first, second, and third sums are performed on the spins interacting via J 1 , J 2 and J 3 bonds, respectively (see Figure 7). The case J 2 = J 3 = 0 corresponds to the honeycomb lattice, and the case J 1 = J 2 = J 3 to the triangular lattice. Let σ be the central spin of the lattice cell shown in Figure 7. Other spins are numbered from σ 1 to σ 6 . The Boltzmann weight of the elementary cell is written as where K i ≡ J i k B T (i = 1, 2, 3). The partition function reads where the sum is taken over all spin configurations and the product over all elementary cells of the lattice. One imposes the periodic boundary conditions. The above model is exactly solvable. To that end, we decimate the central spin of every elementary lattice cell. We finally get a honeycomb Ising model (without centered spins) with multispin interactions. After decimation of the central spin, namely after summing the values of the central spin σ, the Boltzmann weight of an elementary cell reads We show below that this model is in fact a case of the 32-vertex model on the triangular lattice which has an exact solution. We consider the dual triangular lattice of the honeycomb lattice obtained above [33]. The sites of the dual triangular lattice are at the center of each elementary honeycomb cell with bonds perpendicular to the honeycomb ones, as illustrated in Figure 8. 
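The decimation step can be made explicit by summing the central Ising spin out of the cell Boltzmann weight by brute force, as sketched below. The bond assignment used here (central spin coupled to the six outer spins by K_1, hexagon edges carrying K_2 and K_3 alternately) is a placeholder standing in for the exact geometry of Figure 7 and Ref. [34], and the numerical K_i are arbitrary; the point is only to show how an effective weight W(σ_1, . . . , σ_6), and hence a vertex model, emerges.

```python
import itertools
import numpy as np

# Placeholder cell geometry (the true bond assignment is given in Figure 7 / Ref. [34]).
# K_i = J_i / (k_B T).
K1, K2, K3 = 0.4, 0.2, -0.3

def cell_weight(sigma, outer):
    """Boltzmann weight of one cell for a given central spin sigma and six outer spins."""
    e = K1 * sigma * sum(outer)                                 # centre-vertex bonds
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]    # hexagon edges
    for n, (i, j) in enumerate(edges):
        e += (K2 if n % 2 == 0 else K3) * outer[i] * outer[j]
    return np.exp(e)

# Decimation: sum the central spin out, leaving an effective weight W(sigma_1..sigma_6).
effective = {}
for outer in itertools.product([1, -1], repeat=6):
    effective[outer] = sum(cell_weight(s, outer) for s in (1, -1))

# Distinct values of W correspond to the vertex weights of the resulting vertex model.
print("number of outer configurations:", len(effective))
print("number of distinct effective weights:",
      len(set(np.round(list(effective.values()), 12))))
```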
Let us define the conventional arrow configuration for each site of the dual triangular lattice: if all six spins of the honeycomb cell are parallel, then the arrows, called "standard configuration", are shown in Figure 9. From this "conventional" configuration, antiparallel spin pairs on the two sides of a triangular lattice bond will have its corresponding arrow change the direction. As examples, two spin configurations on the honeycomb lattice and their corresponding arrow configurations on the triangular lattice are displayed in Figure 10. Counting all arrow configurations, we obtain 32. To each of these 32 vertices one associates the Boltzmann weight W (σ 1 , σ 2 , σ 3 , σ 4 , σ 5 , σ 6 ) given by Equation (10). Let us give explicitly a few of them: ..... Using the above expressions of the 32-vertex model, one finds the following equation for the critical temperature (see details in Ref. [34]): The solutions of this equation are given in Section 3.3.2 below for some special cases. Following the case studied above, we can study the 2D models shown in Figures 5 and 6: after decimation of the central spin in each square, these models can be transformed into a special case of the 16-vertex model which yields the exact solution for the critical surface (see details in Ref. [32]). Before showing some results in the space of interaction parameters, let us introduce the definitions of disorder line and reentrant phase. Disorder Line, Reentrance It is not the purpose of this review to enter technical details. We would rather like to describe the physical meaning of the disorder line and the reentrance. A full technical review has been given in Ref. [32]. Disorder solutions exist in the paramagnetic region which separate zones of fluctuations of different nature. They are where the short-range pre-ordering correlations change their nature to allow for transitions in the phase diagrams of anisotropic models. They imply constraints on the analytical behavior of the partition function of these models. To obtain the disorder solution one makes a certain local decoupling of the degrees of freedom. This yields a dimension reduction: a 2D system then behaves on the disorder line as a 1D system. This local decoupling is made by a simple local condition imposed on the Boltzmann weights of the elementary cell [35][36][37]. This is very important while interpreting the system behavior: on one side of the disorder line, pre-ordering fluctuations have correlation different from those of the other side. Crossing the line, the system pre-ordering correlation changes. The dimension reduction is often necessary to realize this. Please note that disorder solutions may be used in the study of cellular automata as it has been shown in Ref. [38]. Let us give now a definition for the reentrance. A reentrant phase lies between two ordered phases. For example, at low temperature (T) the system is in an ordered phase I. Increasing T, it undergoes a transition to a paramagnetic phase R, but if one increases further T, the system enters another ordered phase II before becoming disordered at a higher T. Phase R is thus between two ordered phases I and II. It is called "reentrant paramagnetic phase" or "reentrant phase". How physically is it possible? At a first sight, it cannot be possible because the entropy of an ordered phase is smaller than that of a disordered phase so that the disordered phase R cannot exist at lower T than the ordered phase II. 
In reality, as we will see below, phase II has always a partial disorder which compensates for the loss of entropy while going from R to II. The principle that entropy increases with T is thus not violated. Kagomé Lattice The Kagomé lattice shown in Figure 5 has attracted much attention not only by its great interest in statistical physics but also in real materials [17]. The Kagomé Ising model with only nn interaction J 1 has been solved a long time ago [39]. No phase transition at finite T when J 1 is antiferromagnetic. Taking into account the nnn interaction J 2 , we have solved [40] this model by transforming it into a 16-vertex model which satisfies the free-fermion condition. The equation of the critical surface is We are interested in the region near the phase boundary between two phases IV (partially disordered) and I (ferromagnetic) in Figure 11 (left). We show in Figure 11 (right) the small region near the boundary α = J 2 /J 1 = −1 which has the reentrant paramagnetic phase and a disorder line. Figure 11. Left: Each color represents a ground-state configuration in the space (J 1 , J 2 ) where +, −, and x denote up, down, and free spins, respectively. Right: Phase diagram in the space (α = J 2 /J 1 , T) with J 1 > 0. T is in the unit of J 1 /k B . Solid lines are critical lines, dashed line is the disorder line. P, F, and X stand for paramagnetic, ferromagnetic and partially disordered phases, respectively. The inset shows schematically the enlarged region near the critical value J 2 /J 1 = −1. We note that only near the phase boundary such a reentrant phase and a disorder line can exist. If we suppose that all interactions J 1 , J 2 and J 3 in the model shown in Figure 5 are different, the phase diagram becomes very rich [41]. For instance, the reentrance can occur in an infinite region of interaction parameters and several reentrant phases can occur for a given set of interactions when T varies. The Hamiltonian reads where σ i is the Ising spin occupying the lattice site i, and the sums are performed over the spin pairs connected by J 1 , J 2 and J 3 , respectively. The phase diagram at temperature T = 0 is shown in Figure 12 in the space (α = J 2 /J 1 , β = J 3 /J 1 ), supposing J 1 > 0. The spin configuration of each phase is indicated. The three partially disordered phases (I, II, and III) have free central spins. With J 1 < 0 , it suffices to reverse the central spin in the F phase of Figure 12. In addition, the permutation of J 2 and J 3 will not change the system, because it is equivalent to a π/2 rotation of the lattice. We examine now the temperature effect. We have seen above that a partially disordered phase lies next to the ferromagnetic phase in the GS gives rise to the reentrance phenomenon. We expect therefore similar phenomena near the phase boundary in the present model. As it turns out, we find below a new and richer behavior of the phase diagram. We use the decimation of central spins described in Ref. [32], we get then a checkerboard Ising model with multispin interactions. This corresponds to a symmetric 16-vertex model which satisfies the free-fermion condition [42][43][44]. The critical temperature is the solution of the following equation Note the invariance of Equation (18) with respect to changing K 1 → −K 1 and interchanging K 2 and K 3 . Let us show just the solution near the phase boundary in the plane (β = J 3 /J 1 , T) for two values of α = J 2 /J 1 . It is interesting to note that in the interval 0 > α > −1, there exist three critical lines. 
Two of them have a common horizontal asymptote as β tends to infinity. They limit a reentrant paramagnetic phase between the F phase and the partially disordered phase I for β between β 2 and infinite β (see Figure 13). Such an infinite reentrance has never been found before in other models. With decreasing α, β 2 tends to zero and the F phase is reduced (comparing Figure 13a,b) . For α < −1, the F phase and the reentrance no longer exist. The ground-state phase diagram in the space (α = J 2 /J 1 , β = J 3 /J 1 ). Each phase is displayed by a color with up, down, and free spins denoted by +, −, and o, respectively. I, II, III, and F indicate the three partially disordered phases and the ferromagnetic phase, respectively. We note that for −1 < α < 0, the model possesses two disorder lines (see equations in Ref. [41]) starting from a point near the phase boundary β = −1 for α close to zero; this point position moves to β = 0 as α tends to −1 (see Figure 13). Centered Honeycomb Lattice We use the decimation of the central spin of each elementary cell as shown in Section 3.1. After the decimation, we obtain a model equivalent to a special case of the 32-vertex model [45] on a triangular lattice which satisfies the free-fermion condition. The general treatment has been given in Ref. [34]. Here we show the result of the case where K 2 = K 3 . Equation (15) is reduced to When K 2 = 0, Equation (15) gives the critical line When K 3 = 0, we observe a reentrant phase. The critical lines are given by The phase diagram obtained from Equations (21) and (22) near the phase boundary α = −0.5 is displayed in Figure 14. One observes here that the reentrant zone goes down to T = 0 at the boundary α = −0.5 separating the GS phases II and III (see Figure 14b). Please note that phase II has the antiferromagnetic ordering on the hexagon and the central spin free to flip, while phase III is the ordered phase where the central spin is parallel to 4 diagonal spins (see Figure 2 of Ref. [34]). Therefore, if −0.6 < α < −0.5 (reentrant region, see Figure 14b), when one increases T from T = 0, ones goes across successively the ordered phase III, the narrow paramagnetic reentrant phase and the partially disordered phase II. Two remarks are in order: (i) The reentrant phase occurs here between an ordered phase and a partially disordered phase. However, as will be seen below, we discover in the three-center square lattice, reentrance can occur between two partially disordered phase; (ii) In any case, we find reentrance between phases when and only when there are free spins in the GS. The entropy of the high-T partially disordered phase is higher than that of the low-T one. The second thermodynamic principle is not violated. It is noted that the present honeycomb model does not possess a disorder solution with a reduction of dimension as the Kagomé lattice shown earlier. Centered Square Lattices In this paragraph, we study several centered square Ising models by mapping them onto 8-vertex models that satisfy the free-fermion condition. The exact solution is then obtained for each case. Let us anticipate that in some cases, for a given set of parameters, up to five transitions have been observed with varying temperature. In addition, there are two reentrant paramagnetic phases going to infinity in the space of interaction parameters, and there are two additional reentrant phases found, each in a small zone of the phase space [46,47]. We consider the dilute centered square lattices shown in Figure 6. 
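Before writing the Hamiltonian of these models, it is instructive to see how ground-state phase diagrams of the kind shown in Figure 15 below can be mapped out by brute force. The sketch below (illustrative only) enumerates the Ising configurations of a single centered square cell, records the ground-state degeneracy, and detects "free" spins; the way cells are tiled and bonds are shared between cells is deliberately simplified here, so the boundaries it produces are only indicative.

```python
import itertools
import numpy as np

def cell_energy(spins, a, b):
    """One centered square cell: corners s1..s4, centre s0.
    J1 = 1 on the centre-corner 'diagonal' bonds, J2 = a on vertical edges,
    J3 = b on horizontal edges (a = J2/J1, b = J3/J1)."""
    s0, s1, s2, s3, s4 = spins
    e = -1.0 * s0 * (s1 + s2 + s3 + s4)          # diagonal (centre-corner) bonds
    e += -a * (s1 * s4 + s2 * s3)                # vertical edges
    e += -b * (s1 * s2 + s3 * s4)                # horizontal edges
    return e

def ground_state_info(a, b, tol=1e-9):
    configs = list(itertools.product([1, -1], repeat=5))
    energies = np.array([cell_energy(c, a, b) for c in configs])
    gs = {c for c, e in zip(configs, energies) if e < energies.min() + tol}
    # A spin is 'free' if flipping it maps every GS configuration onto another GS one.
    free = []
    for i in range(5):
        if all(tuple(-s if k == i else s for k, s in enumerate(c)) in gs for c in gs):
            free.append(i)
    return len(gs), free

for a, b in [(0.5, 0.5), (-0.5, -0.8), (-2.0, 0.5)]:
    ngs, free = ground_state_info(a, b)
    print(f"a={a:+.1f}, b={b:+.1f}: {ngs} degenerate GS, free spins at sites {free}")
```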
The Hamiltonian of these models reads where σ i is an Ising spin at the lattice site i. The sums are performed over the spin pairs interacting by J 1 , J 2 and J 3 bonds (diagonal, vertical and horizontal bonds, respectively). Figure 15 shows the ground-state phase diagrams of the models displayed in Figure 6a,b,d, where a = J 2 /J 1 and b = J 3 /J 1 . The spin configurations in different phases are also displayed. The model in Figure 15a has six phases (numbered from I to VI), five of which (I, II, IV, V and VI) are partially disordered (at least one centered spin being free), the model in Figure 15b has five phases, three of which (I, IV, and V) are partially disordered, and the model in Figure 15c has seven phases with three partially disordered ones (I, VI, and VII). It is interesting to note that each model shown in Figure 6 possesses the reentrance along most of the phase boundary lines when the temperature is turned on. This striking feature of the centered square Ising lattices has not been observed in other known models. Let us show in Figure 16 the results of the three-center model of Figure 6a, in the space For b < −1, there are two reentrances as seen in Figure 16a for b = −1.25. The phase diagram is shown using the same numbers of corresponding ground-state configurations of Figure 15. Please note that the centered spins disordered at T = 0 in phases I, II and VI (Figure 15a) remain so at all T. Note also that the reentrance occurs always at a phase boundary. This point is emphasized in this paper through various shown models. For −1 < b < −0.5, there are three reentrant paramagnetic phases as shown in Figure 16b, two of them on the positive a are so narrow while a goes to infinity. Please note that the critical lines in these regions have horizontal asymptotes. For a large value of a, one has five transitions with decreasing T: paramagnetic phase-partially disordered phase I-first reentrant paramagnetic phase-partially disordered phase II-second reentrant paramagnetic phase-ferromagnetic phase (see Figure 16b). To our knowledge, a model that exhibits such five phase transitions with two reentrances has never been found before. For −0.5 < b ≤ 0, another reentrance is found for a < −1 as seen in the inset of Figure 16c. With increasing b, the ferromagnetic phase III in the phase diagram becomes large, reducing phases I and II. At b = 0, only the ferromagnetic phase remains. For positive b, we have two reentrances for a < 0, ending at a = −2 and a = −1 when T = 0 as seen in Figure 16d. In conclusion, we summarize that in the three-center square lattice model shown in Figure 6a, we found two reentrant phases occurring on the temperature scale at a given set of interaction parameters. A new feature found here is that a reentrant phase can occur between two partially disordered phases, unlike in other models such as the Kagomé Ising lattice where a reentrant phase occurs between an ordered phase and a partially disordered phase. Summary and Discussion The present section shows spectacular phenomena due to the frustration. What to be retained is the fact that those phenomena occur around the boundary of two phases of different GSs, namely different symmetries. These phenomena include (1) the partial disorder at equilibrium: disorder is not equally shared on all particles as usually the case in unfrustrated systems. (2) the reentrance: this occurs around the phase boundary when T increases → the phase with larger entropy will win at finite T. 
In other words, this is a kind of selection by entropy. (3) the disorder line: this line occurs in the paramagnetic phase. It separates the pre-ordering zones between two nearby ordered phases. In the present section, we looked for interesting effects of the frustration by solving exactly several 2D Ising models with non-crossing interactions. This has been done by the decimation method combined with the mapping to vertex models. We know that vertex models are exactly solvable when the free-fermion conditions are satisfied. This is the case in the 8-, 16-, and 32-vertex models shown above. The striking results mentioned above, namely the partial disorder, the reentrance, the disorder line and the multiple phase transitions, are expected to exist in models other than the Ising model and in three-dimensional lattices, although they cannot be exactly solved. We mention that partial disorder in some 3D highly frustrated Ising systems has been found: for instance, the fully frustrated simple cubic lattice [48,49], a stacked triangular Ising antiferromagnet [50,51] and a body-centered cubic (bcc) crystal [52]. For non-Ising spins such as quantum spins, partial disorder has also been found [53][54][55]. As for the reentrance in 3D, we mention the case of a special lattice which is exactly solved [56]. We believe that reentrance should also exist in the phase space of many other 3D systems. We found for example numerical evidence of a reentrance for the bcc Ising case [52] and a frustrated XY model on stacked 3D checkerboard lattices [55]. Please note that evidence of a reentrance has been found for the q-state Potts model on the 2D frustrated Villain lattice [57,58]. Finally, through the examples shown above, we see that for the reentrance to occur it is necessary to have free spins in the GS. Surface Parameters Surface physics has been rapidly developed in the last 30 years thanks to the progress in the fabrication and the characterization of films of very thin thickness down to a single atomic layer. A lot of industrial applications have been made in memory storage, magnetic sensors, . . . using properties of thin films. Theory and simulation have also been in parallel developed to understand these new properties and to predict further interesting effects. In the following we introduce some useful microscopic mechanisms which help understand macroscopic effects observed in experiments. The existence of a surface on a crystal causes a lot of modifications at the microscopic levels. First, the lack of neighbors of atoms on the surface causes modifications in their electronic structure giving rise to modifications in electron orbital and atomic magnetic moment by for example the spin-orbit coupling and in interaction parameters with neighboring atoms (exchange interaction, for example). In addition, surfaces can have impurities, defects (vacancies, islands, dislocations, . . . ). In short, we expect that the surface parameters are different from the bulk ones. Consequently, we expect physical properties at and near a surface are different from those in the bulk. For the fundamental theory of magnetism and its application to surface physics, the reader is referred to Ref. [6]. In the following we outline some principal microscopic mechanisms which dominate properties of magnetic thin films. Surface Spin Waves: Simple Examples In magnetically ordered systems, spin-wave (SW) excitations dominate thermodynamic properties at low T. The presence of a surface modifies the SW spectrum. 
We show below that it gives rise to SW modes localized near the surface. These modes lie outside the bulk SW spectrum and modify the low-T behavior of thin films. Let us calculate these modes in some simple cases. We give below for pedagogical purpose some technical details. We consider a thin film of N T layers stacked in the z direction. The Hamiltonian is written as where J ij is the exchange interaction between two nn Heisenberg quantum spins, and D ij > 0 denotes an exchange anisotropy. S + j and S − j are the standard spin operators S ± j = S x j ± iS y j . For simplicity, we suppose no crystalline defects and no impurities at the surface and all interactions are identical for surface and bulk spins. It is known that in perfect crystals the spin waves dominate low-temperature properties [6]. In a thin film, there often exist SW modes localized near the surface. Such surface spin waves are at the origin of the low surface magnetization and transition temperature. One can calculate the SW energy using the method of equation of motion, the Holstein-Primakoff method and the Green's function method. Here we use for illustration the Green's function method which the author has developed for thin films (see details in Ref. [59,60]). This method shall be generalized below for helimagnets and other systems with non-collinear spin configurations. Let us define the double-time Green's function by The equation of motion of G i,j (t, t ) is written as where [. . .] denotes the boson commutator and . . . the canonical thermal average given by with β = 1/k B T. When we perform the commutator of Equation (26), we obtain Green's functions of higher orders. These functions can be reduced by the use of the Tyablikov approximation [61] Thus, we get the same kind of Green's function defined in Equation (25). In a thin film, the system is supposed to be infinite in the xy plane, we can therefore use the in-plane Fourier transforms Here, ω denotes the magnon frequency and k xy the wave vector parallel to the surface. The position of the spin at the site i is R i . n and n are respectively the planes to which i and j belong (n = 1 is the index of the surface). Please note that the integration on k xy is performed in the first Brillouin zone of surface ∆ in the xy plane. • Film of body-centered cubic lattice where γ k = cos(k x a/2) cos(k y a/2) Using Equation (30) for n = 1, 2,. . . , N T , we get N T coupled equations which is written in a matrix equation where u is a column matrix whose n-th element is 2δ n,n < S z n >. For each k xy we can calculate the magnon energyhω(k xy ) by solving the secular equation det|M| = 0. This gives N T values ofhω i (i = 1, ..., N T ). We note that ω i depends on all S z n contained in the coefficients A n , B n and C n . The magnetization S z n of the layer n in the case where S = 1 2 is calculated by (see chapter 6 of Ref. [6]): where S − n S + n is given by the following spectral theorem where = 0 + is a very small constant. Equation (38) becomes where the Green's function g n,n is given by the solution of Equation (37) g n,n = |M| n |M| (41) |M| n is the determinant obtained by replacing the n-th column of |M| by u. To simplify we writehω i = E i andhω = E hereafter. We factorize using E i (i = 1, . . . , N T ), the poles of the Green's function. g n,n is rewritten as where f n (E i ) is given by With Equations (40) and (43) and we get where n = 1, ..., N T . 
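The structure of the secular problem det|M| = 0 can be illustrated in the simplest setting: a simple-cubic ferromagnetic film treated in linear spin-wave theory at T = 0, i.e., with all layer magnetizations replaced by S, so that the self-consistency described next is not performed. The sketch below (an illustration, not the calculation of Refs. [59,60]) builds the N_T × N_T matrix for each in-plane wave vector and diagonalizes it; with identical surface and bulk couplings such a film shows no surface branch, consistent with Figure 17a, while the optional modified surface exchange introduced here for illustration splits off surface-localized modes.

```python
import numpy as np

def sw_spectrum(kx, ky, n_layers=8, J=1.0, S=0.5, J_surface=None):
    """Linear spin-wave energies of a simple-cubic ferromagnetic film (lattice constant 1).
    J_surface, if given, replaces the in-plane exchange on the two surface layers."""
    Js = J if J_surface is None else J_surface
    M = np.zeros((n_layers, n_layers))
    for n in range(n_layers):
        surface = n in (0, n_layers - 1)
        J_in = Js if surface else J              # in-plane coupling of layer n
        z_perp = 1 if surface else 2             # number of interlayer neighbours
        M[n, n] = 4 * S * J_in * (2 - np.cos(kx) - np.cos(ky)) + 2 * S * J * z_perp
        if n + 1 < n_layers:
            M[n, n + 1] = M[n + 1, n] = -2 * S * J
    return np.sort(np.linalg.eigvalsh(M))

# Energies along the in-plane diagonal k = (k, k); the lowest branch is acoustic.
for k in (0.5, 1.5, 2.5):
    print(f"k = {k:.1f}:", np.round(sw_spectrum(k, k), 3))
print("with a weakened surface exchange:", np.round(sw_spectrum(1.5, 1.5, J_surface=0.5), 3))
```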
Since < S z n > depends on the neighboring magnetizations, we should solve by iteration the Equation (46) written for n = 1, . . . , N T to get the layer magnetizations at T. The critical temperature T c is calculated self-consistently using the Equation (46), with all < S z n > tending to zero. We show in Figure 17a the SW spectrum of a simple cubic film where there is no surface SW mode. Figure 17b shows the case of a body-centered cubic ferromagnetic case where there are two branches of surface localized modes. Please note that a surface mode has a damping SW amplitude when going from the surface to the interior. The SW amplitudes for each mode are in fact their eigenvectors calculated from Equation (41). Since acoustic surface localized spin waves have low energies, integrands on the right-hand side of Equation (46) are large, making < S z n > to be small and causing a diminution of T c in thin films. We show in Figure 18 the first-and second-layer magnetizations versus T in the films shown above using N T = 4. Calculations for antiferromagnetic thin films and other cases with non-collinear spin configurations can be performed using generalized Green's functions [59,60,62] with the general Hamiltonian defined for two spins S i and S j forming an angle cos θ ij : one can write the Hamiltonian in the local coordinates as follows [63] where an anisotropy (last term) is added for numerical convergence at long-wave lengths. This term is necessary for very thin film thickness since it is known that there is no ordering for isotropic Heisenberg spins in strictly 2D at finite temperatures [64]. The angles between nn spins in the GS are calculated by minimizing the interaction energy with respect to interaction parameters [65,66]. Replacing the angle values in the Hamiltonian, and follow the steps presented above for the collinear case, one then gets a matrix which can be numerically diagonalized to obtain the SW spectrum. Other physical properties can be self-consistently calculated using the SW spectrum as for the collinear spin configuration. Frustrated Thin Films: Surface Phase Transition Having given the background in the previous section, we can show some results here. The reader is referred to the original papers for details. Our aim here is to discuss physical effects due to the conditions of the surface. As said earlier, the combination of the frustration and the surface effect gives rise to drastic effects. This is seen in the examples shown in the following. The effects of surface anisotropies and dipole-dipole interactions have been treated in some of our earlier works. However, to keep the length of the present review reasonable, we do not discuss them here. The reader is referred to Ref. [67] for the re-orientation transition in molecular thin films for the Potts model with dipolar interaction in competition with the film perpendicular anisotropy. The same problem was studied with the Heisenberg spin model in Ref. [68]. Please note that in these works, evidence of the reentrance is found near the GS phase boundary between the in-plane spin configuration and the perpendicular one. Frustrated Surfaces We show here the case of a ferromagnetic film with frustrated surfaces [65], using the analytical Green's function method and extensive Monte Carlo simulations. Effects of frustrated surfaces on the properties of a ferromagnetic thin film are presented. The system is made by stacking triangular layers of Heisenberg spins in the z direction. 
The in-plane surface interaction J s can be antiferromagnetic or ferromagnetic. All other interactions are ferromagnetic. We show that the ground-state spin configuration is non-collinear when J s is lower than a critical value J c s . The film surfaces are then frustrated. We anticipate here that in this case, there are two phase transitions, one for the disordering of the surface and the other for the disordering of the interior layers. As seen below, good agreement between Monte Carlo and Green's function results are achieved. Model We consider a thin film made of N z planes of triangular lattice of L × L sites, stacked in the z direction. We use the following Hamiltonian where S i is the Heisenberg spin at the lattice site i, ∑ i,j indicates the sum over the nearest-neighbor spin pairs S i and S j . The last term, which will be supposed very small, is needed to have a phase transition at a finite temperature for the film with a very small thickness when all exchange interactions J i,j are ferromagnetic. We suppose that the nn interactions on the surface are J s and I s , and all other interactions are ferromagnetic and equal to J and I. The two surfaces of the film are frustrated if J s is antiferromagnetic (J s < 0), due to the triangular lattice structure. Ground State We suppose here that the spins are classical Heisenberg spins. The classical GS can be calculated as shown below. We recall that for antiferromagnetic systems of quantum spins, the quantum GS though not far from the classical one, cannot be exactly determined because of the quantum fluctuations [6]. For J s > 0, the GS is ferromagnetic. When J s is antiferromagnetic, the surface when detached from the bulk has the 120-degree ordering and the interior layers have the ferromagnetic ordering. The interaction between the surface spins and those of the beneath layer causes a competition between the collinear configuration and the 120-degree one. We first determine the ground-state configuration for I = I s = 0.1 by minimizing the energy of each spin starting from a random spin configuration. This is done by iteration until the convergence is reached. The reader is referred to Ref. [65] for the numerical procedure. In doing so, we obtain the ground-state configuration, without metastable states for the present model. The result shows that when J s is smaller than a critical value J c s the magnetic GS is an "umbrella" form with an angle α between nn surface spins and an angle β between a surface spin and its beneath neighbor (see Figure 19). This structure is due to the interaction of the spins on the beneath layer on the surface spins, acting like an external applied field in the z direction. It is obvious that when |J s | is smaller than |J c s | the collinear ferromagnetic GS results in: the frustration is not strong enough to resist the ferromagnetic interaction from the beneath layer. Figure 19. Surface spin configuration: angle between nn spins on layer 1 is equal to α, angle between vertical nn spins is β. Figure 20. The critical value J c s is found between −0.18 and −0.19. This value can be calculated analytically as shown below, by assuming the "umbrella structure". For the ground-state analysis, we consider just a single cell shown in Figure 19. This is justified by the numerical determination presented above. We consider the Hamiltonian given by (48). 
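Before turning to the single-cell analytic treatment that continues below, here is a sketch of the iterative ground-state determination just described for classical spins: each spin is repeatedly aligned with its instantaneous local field, which can only lower the energy. The lattice below is a small film of stacked triangular layers with AA stacking assumed, surface couplings J_s and I_s on the first and last layers, and placeholder sizes and sweep counts; unlike Ref. [65], no systematic check against metastable states is made.

```python
import numpy as np

L, NZ = 12, 4
J, I = 1.0, 0.1           # bulk couplings
Js, Is = -0.5, 0.1        # in-plane surface couplings (frustrated surface for Js < Jc_s)

rng = np.random.default_rng(1)
S = rng.normal(size=(NZ, L, L, 3))
S /= np.linalg.norm(S, axis=-1, keepdims=True)

in_plane = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]   # triangular lattice

def local_field(S, z, x, y):
    surface = z in (0, NZ - 1)
    Jp, Ip = (Js, Is) if surface else (J, I)
    h = np.zeros(3)
    for dx, dy in in_plane:                       # in-plane neighbours (periodic)
        s = S[z, (x + dx) % L, (y + dy) % L]
        h += Jp * s + np.array([0.0, 0.0, Ip * s[2]])
    for dz in (-1, 1):                            # vertical neighbours (AA stacking assumed)
        if 0 <= z + dz < NZ:
            s = S[z + dz, x, y]
            h += J * s + np.array([0.0, 0.0, I * s[2]])
    return h

for sweep in range(300):                          # align each spin with its local field
    for z in range(NZ):
        for x in range(L):
            for y in range(L):
                h = local_field(S, z, x, y)
                S[z, x, y] = h / np.linalg.norm(h)

# Angle between a surface spin and its neighbour underneath (the beta of Figure 19).
beta = np.degrees(np.arccos(np.clip(np.sum(S[0] * S[1], axis=-1), -1, 1))).mean()
print(f"mean surface-to-second-layer angle beta ~ {beta:.1f} degrees")
```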
We take that (J i,j = J s , I i,j = I s ) for nn surface spins and all other (J i,j = J > 0, I i,j = I > 0) for the inside nn spins including interaction between a surface spin and a nn spin on the second layer. Let us show cos(α) and cos(β) versus J s in We number as in Figure 19 S 1 , S 2 and S 3 are on the surface layer, S 1 , S 2 and S 3 on the second layer. The Hamiltonian for the cell reads We next decompose each spin into an xy component, which is a vector, and a z component . We see that only surface spins have xy vector components. The angle between these xy components of nearest-neighbor surface spins is γ i,j which is the projection of α defined above on the xy plane. We have by symmetry The angles of the spin S i and S i with the z axis are by symmetry The total energy of the cell (49), with S i = S i = 1 2 , can be rewritten as We minimize the cell energy which gives the following solution For given values of I s and I, we see that the solution (53) exists if | cos β| ≤ 1, namely J s ≤ J c s where J c s is the critical value. For I = −I s = 0.1, one has J c s ≈ −0.1889J in perfect agreement with the numerical minimization shown in Figure 20. The classical GS determined here will be used as input for the ground-state configuration in the case of quantum spins presented below using the Green's function method. Results from the Green's Function Method We suppose the spins are quantum in this subsection. The details of the formulation for non-collinear spin configurations have been given in Ref. [65]. We just show the results on the surface phase transition and compare with the Monte Carlo results performed on the equivalent classical model. Phase Transition and Phase Diagram of the Quantum Case Let us take J = 1 as the unit of energy. The temperature is in unit of J/k B . We show in Figure 21 the results of the very frustrated case where J s = −0.5J much smaller than J c s = −0.1889J. Some remarks are in order: (i) there is a strong spin contraction at T = 0 [6] for the surface layer which comes from the antiferromagnetic nature of the in-plane surface interaction J s ; (ii) the surface magnetization is much smaller than the second-layer one, the surface becomes disordered at a temperature T 1 0.2557 while the second layer remains ordered up to T 2 1.522. It is interesting to note that the system is partially disordered for temperatures between T 1 and T 2 . This result confirms again the existence of the partial disorder in quantum spin systems observed in the bulk [54,69]. Please note that between T 1 and T 2 , the ordering of the second layer acts as an external field on the first layer, inducing therefore a small value of the surface magnetization. We show now the case of non-frustrated surface in Figure 22 where J s = 0.5, with I = I s = 0.1. Though the surface magnetization is smaller than the second-layer magnetization, the result suggests there is only a single transition temperature. The phase diagram in the space (J s , T) is shown in Figure 23 where phase I denotes the surface and bulk ordered phase with non collinear spin configuration at the surface. Phase II is the phase where the surface is disordered but the bulk is still ordered, phase III is ferromagnetic, and phase IV is paramagnetic. Please note that the surface transition does not exist for J s ≥ J c s . Monte Carlo Results To study the phase transition occurring at a high temperature, one can use the classical spins and Monte Carlo simulations to obtain the phase diagram for comparison. 
This is justified since quantum fluctuations are not important at high T. For Monte Carlo simulations (see methods in Refs. [2,[70][71][72][73][74]), we use the same Hamiltonian (48) but with the classical Heisenberg spin model of magnitude S = 1. We use the film size L × L × N z where N z = 4 is the number of layers, and L = 24, 36, 48, 60 to detect the finite-size effects. To reduce the lateral size effect, periodic boundary conditions are employed in the xy planes. The thermodynamic equilibration is done with 10 6 Monte Carlo steps per spin and the averaging time is taken over 2 × 10 6 Monte Carlo steps per spin. J = 1 is taken as unit of energy in the following. Figure 24 shows the first-and second-layer magnetizations versus T where J s = 0.5 (no frustration). In this case, there is clearly just a single transition for surface and bulk, as in the quantum case. Let us show in Figure 25 the result of a frustrated case where J s = −0.5. As in the quantum case, the surface becomes disordered at a temperature much lower than that for the interior layer. The phase diagram is shown in Figure 26 in the space (J s , T). We see that there is a remarkable similarity to that obtained for the quantum spin model shown in Figure 23. Frustrated Thin Films We have also studied frustration effects in an antiferromagnetic fcc Heisenberg film [66]. In this case, the whole film is frustrated due to the geometry of the lattice. We consider the quantum Heisenberg spins occupying the lattice sites of a film of fcc structure with (001) surfaces. The Hamiltonian reads where S i is the spin at the lattice site i, the first sum runs over the nn spin pairs S i and S j , while the second one runs over all sites. The second terms in the Hamiltonian are Ising-like uniaxial anisotropy terms added to avoid the absence of long-range order of isotropic non-Ising spin model at finite T when the film thickness tends to 1 [64]. Hereafter, let J s be the interaction between two nn surface spins, J = −1 (antiferromagnetic) for all other interactions. The GS depends J s with a critical value J c s = −0.5 at which the ordering of type I coexists with ordering of type II (see Figure 27). The demonstration has been given in Ref. [66]. For J s < J c s , the spins in each yz plane are parallel while spins in adjacent yz planes are antiparallel (Figure 27a). This ordering will be called hereafter "ordering of type I": in the x direction the ferromagnetic planes are antiferromagnetically coupled as shown in this figure. Of course, there is a degenerate configuration where the ferromagnetic planes are antiferromagnetically ordered in the y direction. Please note that the surface layer has an antiferromagnetic ordering for both configurations. The degeneracy of type I is therefore 4 including the reversal of all spins. For J s > J c s , the spins in each xy plane is ferromagnetic. The adjacent xy planes have an antiferromagnetic ordering in the z direction perpendicular to the film surface. This will be called hereafter "ordering of type II". Please note that the surface layer is then ferromagnetic (Figure 27b). The degeneracy of type II is 2 due to the reversal of all spins. Monte Carlo simulations have been used to study the phase transition in this frustrated film. We just show below three typical cases, at and far from J c s . Figure 28 shows the sublattice layer magnetizations at J c s = −0.5 where one sees that the surface layer undergoes a transition at a temperature lower than the interior ones. 
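A minimal version of the Metropolis procedure described above can be written as follows (classical Heisenberg spins of magnitude 1, a much smaller lattice and far shorter runs than quoted in the text, and the small anisotropy I omitted); it measures the layer magnetizations of a four-layer film with a frustrated surface.

```python
import numpy as np

L, NZ = 6, 4
J, Js = 1.0, -0.5                 # bulk and in-plane surface couplings
T = 0.8                           # temperature in units of J/k_B
in_plane = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
rng = np.random.default_rng(2)

def random_spin():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

S = np.array([[[random_spin() for _ in range(L)] for _ in range(L)] for _ in range(NZ)])

def local_field(z, x, y):
    Jp = Js if z in (0, NZ - 1) else J
    h = np.zeros(3)
    for dx, dy in in_plane:
        h += Jp * S[z, (x + dx) % L, (y + dy) % L]
    for dz in (-1, 1):
        if 0 <= z + dz < NZ:
            h += J * S[z + dz, x, y]
    return h

def sweep():
    for _ in range(NZ * L * L):
        z, x, y = rng.integers(NZ), rng.integers(L), rng.integers(L)
        h = local_field(z, x, y)
        new = random_spin()
        dE = -np.dot(new - S[z, x, y], h)          # energy of spin i is E_i = -S_i . h_i
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            S[z, x, y] = new

for _ in range(2000):              # equilibration
    sweep()
m_layers = np.zeros(NZ)
n_meas = 500
for _ in range(n_meas):            # measurement
    sweep()
    m_layers += np.linalg.norm(S.reshape(NZ, -1, 3).sum(axis=1), axis=1) / (L * L)
print("layer magnetizations:", np.round(m_layers / n_meas, 3))
```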
Far from this value there is a single phase transition as seen in Figure 29. However, when J s is negatively stronger, we have a hard surface, namely the surface undergoes a phase transition at a T higher than that for the interior layer transition. This is seen in Figure 30. The phase diagram is shown in Figure 31. Please note that near the phase boundary J c s (−0.5 ≤ J s ≤ −0.43) a reentrant phase is found between phases I and II (not seen with the figure scale). As said in the 2D exactly solved models above, one must be careful while examining the very small region near the phase boundary J c s where unexpected phenomena can occur. This is the case here. We have studied the nature of the phase transition by using the Monte Carlo multi-histogram technique [72][73][74]. Critical exponents are found to have values between 2D and 3D universality classes. The reader is referred to Ref. [66] for details. The criticality of thin films is treated in Section 6 below. Helimagnetic Films Bulk helimagnets have been studied a long time ago [75][76][77]. A simple helimagnetic order resulting from the competition between the nn and nnn interactions is shown in Section 2.2. Helimagnetic films are seen therefore as frustrated films. We have recently used the Green's function method and Monte Carlo simulations to study helimagnetic films in zero field [63,78] and in a perpendicular field [79]. We summarize here some results and emphasize their importance. Consider the following helimagnetic Hamiltonian where J i,j is the interaction between two spins S i and S j occupying the lattice sites i and j and H denotes an external magnetic field applied along the c axis. We suppose the ferromagnetic interaction J 1 between nn everywhere. To generate a helical configuration in the c direction, one must take into account an antiferromagnetic interaction J 2 between nnn in the c direction, in addition to J 1 . Hereafter, we suppose J 2 is the same at the surface and in the bulk for simplicity. Please note that in the bulk in zero field, the helical angle along the c axis is given by cos α = − J 1 4J 2 [see Equation (6)] for a simple cubic lattice when |J 2 | > 0.25J 1 (J 2 < 0). Below this value, the ferromagnetic phase is stable. In zero field the helical angle in a thin film has been shown [63] to be strongly modified near the surface as presented in Figure 32. Some results from the laborious Green's function are shown in Figure 33. To have a long-range ordering at finite T, we added an anisotropic term d S z i S z j in the Hamiltonian where d << J 1 . We observe in Figure 33 the crossover of the layer magnetizations at low T. This is due to quantum fluctuations which are different for each layer, depending on the antiferromagnetic interaction strength (namely the so-called zero-point spin contractions, see Ref. [6]). Without such a theoretical insight, it would be difficult to understand experimental data when one observes this phenomenon at low T. In an applied field [79], we have observed a new phenomenon, namely a partial phase transition in the helimagnetic film. Contrary to what has been shown above (surface phase transition below or above the bulk one), here we have each single interior layer undergoes a separate transition. 
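The strong modification of the helical angles near the surface (Figure 32), which underlies the layer-dependent behavior just described and the explanation given next, can be illustrated by a drastic reduction: each layer n is represented by a single angle θ_n, with ferromagnetic J_1 between adjacent layers and antiferromagnetic J_2 between next-nearest layers along the c axis, and the set {θ_n} is relaxed numerically. This classical, planar sketch only shows the trend; it is not the Green's function calculation of Refs. [63,78].

```python
import numpy as np
from scipy.optimize import minimize

J1, J2, NZ = 1.0, -1.0, 12          # |J2|/J1 = 1 > 1/4, so the bulk is helical

def film_energy(theta):
    e = -J1 * np.sum(np.cos(np.diff(theta)))                 # nn layer pairs
    e += -J2 * np.sum(np.cos(theta[2:] - theta[:-2]))        # nnn layer pairs
    return e

bulk_angle = np.arccos(J1 / (4 * abs(J2)))                   # bulk helical angle
theta0 = bulk_angle * np.arange(NZ) + 0.01 * np.random.default_rng(3).normal(size=NZ)
res = minimize(film_energy, theta0)

print("bulk helical angle:", round(np.degrees(bulk_angle), 2), "deg")
print("layer-to-layer angles across the film:", np.round(np.degrees(np.diff(res.x)), 2))
```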
Theoretically, we can understand this phenomenon by the following fact: under an applied magnetic field, due to the surface effect shown in Figure 32 the spins of each layer in the GS make an angle with the c axis different from those of the other layers of the film (in fact we examine only layers of half of the film, the other half is symmetric because of the symmetry of the two surfaces). When the temperature increases, the layers with large xy spin-components undergo a phase transition where the transverse (in-plane) xy ordering is destroyed. This "in-plane" transition can occur at a temperature because the xy spin-components do not depend on the field. Other layers with small xy spin-components, not large enough to have an xy ordering, do not make a transition. In addition, these layers have large z components, they cannot undergo a transition because the ordering in S z is maintained by the applied field. The transition of several layers with the destruction of the xy ordering, not all layers, is a new phenomenon found in this work with our helimagnetic films in a perpendicular field. Real helimagnetic materials often have interactions more complicated than those in the model studied here, but the important ingredient is the non-uniformity of the spin configuration in an applied field, whatever the interactions are. The clear physical pictures given in our present analysis are believed to be useful in the search for the interpretation of experimental data. Criticality of Thin Films One of the important fundamental questions in surface physics is the criticality of the phase transition in thin films. To clarify this aspect, we studied the critical behavior of magnetic thin films with varying film thickness [80]. In that work, we have studied the ferromagnetic Ising model with the high-resolution multiple histogram Monte Carlo method [72][73][74]. We found that though the 2D behavior remains dominant at small thicknesses, there is a systematic continuous deviation of the critical exponents from their 2D values. We explained these deviations using the concept of "effective" exponents proposed by Capehart and Fisher [81] in a finite-size analysis. The variation the critical temperature with the film thickness obtained by our Monte Carlo simulations is in excellent agreement with their prediction. Let us summarize here this work. We consider a film made from a ferromagnetic simple cubic lattice of size L × L × N z . Periodic boundary conditions (PBC) are used in the xy planes to reduce the lateral size effect. The z direction is limited by the film thickness N z . N z = 1 corresponds to a 2D square lattice. The Hamiltonian is written as where σ i = ±1 is the Ising spin at the lattice site i, and the sum is performed over the nn spin pairs σ i and σ j . J i,j = J = 1 for all spin pairs. Using the high-precision multi-histogram Monte Carlo technique [72][73][74] we have calculated various critical exponents as functions of the film thickness using the finite-size scaling [82] described as follows. In Monte Carlo simulations, one calculates by the multiple histogram technique averaged total energy E , specific heat C v , averaged order parameter M (M: magnetization of the system), susceptibility χ, first-order cumulant of the energy C U , and n th -order cumulant V n of the order parameter, for n = 1 and 2. These quantities are defined as where . . . indicates the thermal average at a given T. Let us summarize the multi-histogram technique [72][73][74]. 
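Before summarizing the histogram technique announced above, note that the quantities just listed can be written as fluctuation estimators over the recorded Monte Carlo samples. The sketch below computes them from arrays of the measured total energy E and order parameter M; the specific forms used for the cumulants, V_n = ⟨M^n E⟩/⟨M^n⟩ − ⟨E⟩ and C_U = 1 − ⟨E^4⟩/(3⟨E^2⟩^2), are standard choices assumed here rather than quoted from Ref. [80].

```python
import numpy as np

def observables(E, M, T, n_spins):
    """Fluctuation estimators from Monte Carlo samples of the total energy E and the
    total order parameter M, at temperature T (k_B = 1)."""
    E, M = np.asarray(E, float), np.abs(np.asarray(M, float))
    return {
        "energy per spin": E.mean() / n_spins,
        "magnetization per spin": M.mean() / n_spins,
        "specific heat": E.var() / (n_spins * T ** 2),
        "susceptibility": M.var() / (n_spins * T),
        # Energy cumulant and order-parameter cumulants (assumed standard forms).
        "C_U": 1.0 - (E ** 4).mean() / (3.0 * (E ** 2).mean() ** 2),
        "V_1": (M * E).mean() / M.mean() - E.mean(),
        "V_2": (M ** 2 * E).mean() / (M ** 2).mean() - E.mean(),
    }

# Usage with toy samples (in practice E and M come from the simulation time series).
rng = np.random.default_rng(4)
E = -1.5 * 400 + 5.0 * rng.normal(size=10000)
M = 0.6 * 400 + 8.0 * rng.normal(size=10000)
for name, value in observables(E, M, T=2.0, n_spins=400).items():
    print(f"{name:22s} {value: .4f}")
```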
With this technique, we calculate the probability distribution of the energy at a temperature T_0 using the energy histogram recorded during the simulation; the probability at temperatures around T_0 can then be deduced. For the multi-histogram technique, we perform many simulations at different T_0, and the combination of these results gives a reliable probability as a continuous function of temperature. Thermal averages of physical quantities are thus obtained as continuous functions of T, and the results are valid over a much wider range of temperature than those from the single-histogram technique. Let H_j(E) be the energy histogram recorded during the j-th simulation, performed at temperature T_j. The overall probability distribution [74] at temperature T obtained from n independent simulations, each with N_j configurations, is given by
P_T(E) ∝ W(E) exp(−E/k_B T),  with  W(E) = [Σ_{j=1}^{n} H_j(E)] / [Σ_{j=1}^{n} N_j Z_j^{−1} exp(−E/k_B T_j)],
where W(E) is the estimated density of states and the partition functions Z_j = Σ_E W(E) exp(−E/k_B T_j) are determined self-consistently. The thermal average of a physical quantity A is then calculated by
⟨A⟩_T = Σ_E A(E) P_T(E) / Σ_E P_T(E),
in which A(E) is the microcanonical average of A accumulated in the energy bin E. In our simulations, the xy lattice sizes L × L with L = 20, 24, 30, . . . , 80 have been used. For N_z = 3, sizes up to 160 × 160 have been used to evaluate corrections to scaling. In each simulation, standard Metropolis MC runs are used first to localize, for each size, the transition temperatures given by the specific heat and by the susceptibility. The equilibrating time is from 200,000 to 400,000 MC steps/spin and the averaging time is from 500,000 to 1,000,000 MC steps/spin. Next, we record histograms at 8 different temperatures T_j(L) around the transition temperatures with 2 million MC steps/spin, after equilibrating the system with 1 million MC steps/spin. Finally, we record again histograms at 8 different temperatures around the new transition temperatures with 2 × 10^6 and 4 × 10^6 MC steps/spin for the equilibrating and averaging times, respectively. Such an iterative procedure gives extremely good results for the systems studied so far. The errors shown in the following have been estimated from statistical errors, which are very small thanks to our multiple-histogram procedure, and from the fitting errors given by the fitting software. Let us discuss first the 3D case, where all dimensions can go to infinity. Consider a system of size L^d, where d is the space dimension. In a simulation at finite L, one can identify pseudo "transition" temperatures by the maxima of C_v, χ, . . . These maxima in general take place at close, but not identical, temperatures. When L tends to infinity, these pseudo "transition" temperatures tend to the "real" transition temperature T_c(∞). Thus, when we examine the maxima of V_n, C_v and χ, we are not at T_c(∞). We have to bear this in mind in the discussion given below. Now, let us define the reduced temperature, which measures the "distance" from T_c(∞), by
t = [T − T_c(∞)] / T_c(∞).
In finite-size scaling theory, the following scaling relations are valid for large values of L at the respective 'transition' temperatures T_c(L) (see details in Ref. [83]):
V_1^max ∝ L^{1/ν},  V_2^max ∝ L^{1/ν},  C_v^max = C_0 + C_1 L^{α/ν},  χ^max = A L^{γ/ν},
M(T_c(∞)) ∝ L^{−β/ν},  T_c(L) = T_c(∞) + C_A L^{−1/ν},
where A, C_0, C_1 and C_A are constants. The exponent ν is calculated independently from V_1^max and V_2^max. Using this value, we calculate γ from the scaling of χ^max, and α from that of C_v^max. The value of T_c(∞) can be calculated from the last expression. Next, with T_c(∞) we can calculate β from M(T_c(∞)). We emphasize that the Rushbrooke scaling law α + 2β + γ = 2 is in principle verified [82]; this provides a check of the obtained critical exponents.
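The reweighting idea behind these formulas is most easily seen in the single-histogram limit: samples generated at T_0 are reweighted by exp[−E(1/T − 1/T_0)] (k_B = 1) to estimate averages at nearby temperatures, and the multiple-histogram expression above combines several such runs. A minimal sketch, with synthetic samples standing in for Monte Carlo data:

```python
import numpy as np

def reweight(E_samples, A_samples, T0, T):
    """Single-histogram reweighting of <A> from samples generated at T0 to temperature T (k_B = 1)."""
    E = np.asarray(E_samples, float)
    A = np.asarray(A_samples, float)
    logw = -(1.0 / T - 1.0 / T0) * E
    logw -= logw.max()                      # guard against overflow
    w = np.exp(logw)
    return np.sum(w * A) / np.sum(w)

# Usage: estimate <E> around T0 from a single run (toy Gaussian samples for illustration).
rng = np.random.default_rng(5)
E = rng.normal(loc=-800.0, scale=30.0, size=50000)
for T in (1.9, 2.0, 2.1):
    print(f"T = {T}: <E> ~ {reweight(E, E, T0=2.0, T=T):.1f}")
```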
Results obtained from the multiple histograms described above are shown in Figure 34 for the susceptibility and the cumulant V_1, calculated as continuous functions of T using the expressions above, at temperatures around their maxima, for several sizes L × L (L = 20-80). The calculation of ν is shown in Figure 35 for N_z = 11 to illustrate the precision of the method: the slope of the nearly perfect straight line through our data points gives 1/ν. The other critical exponents are summarized in Table 1. Our results indicate a very small but systematic deviation of the critical exponents from their 2D values with increasing thickness. Note the precision of the 2D case (N_z = 1) with respect to the exact result: we have T_c(L = ∞, N_z = 1) = 2.2699 ± 0.0005, while the exact value T_c(∞) = 2.26919 is obtained by solving the equation sinh^2(2J/T_c) = 1. The excellent agreement of our result leaves no doubt about the efficiency of the multiple-histogram technique used in our work. We show now the prediction of Capehart and Fisher [81] for the variation of the critical temperature with N_z. Defining the critical-point shift due to the finite thickness as the relative deviation of T_c(L = ∞, N_z) from T_c(3D), they showed that this shift vanishes as N_z^{−1/ν} for large N_z, where ν = 0.6289 (3D value). Using T_c(3D) = 4.51 and fitting this form, with two constants a and b, to the values of T_c(L = ∞, N_z) taken from Table 1, we obtain a = −1.37572 and b = −1.92629. The Monte Carlo results and the fitted curve are shown in Figure 36. The prediction of Capehart and Fisher is thus very well verified. Note finally that PBC in the z direction do not change the result, provided we do not apply finite-size scaling in that direction [80]. We have also shown that, by decreasing the film thickness, a first-order transition in a frustrated fcc Ising thin film can become a second-order transition [84]. Skyrmions in Thin Films and Superlattices Skyrmions are topological excitations in a spin system. They result from the competition between different interactions in an applied magnetic field. We consider in this section a sheet of square lattice of size N × N, occupied by Heisenberg spins interacting via a nn ferromagnetic exchange J and a nn Dzyaloshinskii-Moriya (DM) interaction [24,25]. The Hamiltonian is written as
H = −J Σ_{⟨i,j⟩} S_i · S_j − Σ_{⟨i,j⟩} D_{i,j} · (S_i × S_j) − H Σ_i S_i^z,
where the vectors D_{i,j} of the DM interaction are chosen along the ẑ direction perpendicular to the plane and H is the amplitude of the applied field. In zero field we have studied the spin waves and layer magnetizations at T = 0 and at finite T [96]. The results show that the DM interaction strongly affects the first mode of the SW spectrum. Skyrmions appear only when an external field is applied perpendicular to the film, as seen in the following. With H ≠ 0, we minimize numerically the above Hamiltonian for a given pair (H, D), taking J = 1 as the unit of energy, and we obtain the GS configuration of the system. The phase diagram is shown in Figure 37. Above the blue line is the field-induced ferromagnetic phase. Below the red line is the labyrinth phase with a mixture of skyrmions and rectangular domains. The skyrmion crystal phase is found in a narrow region between these two lines, down to infinitesimal D. We wish to study the effect of temperature on the skyrmion crystal. To that end, we define an order parameter of the crystal as follows: since we want to see the stability of the skyrmions at finite T, we project the actual spin configuration at time t and temperature T onto the GS configuration and average this projection over many Monte Carlo steps per spin. The order parameter M is
M(T) = (1/N^2) Σ_i | ⟨ S_i(T, t) · S_i(T = 0) ⟩_t |,
where ⟨. . .⟩_t denotes the time average over Monte Carlo steps after equilibration, S_i(T, t) is the i-th spin at time t at temperature T, and S_i(T = 0) is its state in the GS.
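Given stored spin configurations, the order parameter just defined is a per-spin projection onto the T = 0 configuration, time-averaged and then site-averaged. The sketch below implements it, together with a crude lattice estimate of the topological charge, which is a common companion diagnostic but is not part of the definition quoted above; the synthetic single-skyrmion texture is only a stand-in for stored Monte Carlo configurations.

```python
import numpy as np

def crystal_order_parameter(trajectory, ground_state):
    """trajectory: array (n_times, N, N, 3) of spin configurations at temperature T;
    ground_state: array (N, N, 3). Per-spin projection, time-averaged, |.|, site-averaged."""
    proj = np.einsum("tijk,ijk->tij", trajectory, ground_state)   # S_i(T,t) . S_i(T=0)
    return np.abs(proj.mean(axis=0)).mean()

def topological_charge(S):
    """Crude estimate Q = (1/4pi) sum S . (dS/dx x dS/dy) using finite differences."""
    dSx = np.roll(S, -1, axis=0) - S
    dSy = np.roll(S, -1, axis=1) - S
    density = np.einsum("ijk,ijk->ij", S, np.cross(dSx, dSy))
    return density.sum() / (4 * np.pi)

# Usage with a synthetic single-skyrmion texture standing in for a stored configuration.
N = 40
x, y = np.meshgrid(np.arange(N) - N / 2, np.arange(N) - N / 2, indexing="ij")
r = np.sqrt(x**2 + y**2) + 1e-9
theta = np.pi * np.clip(1 - r / 10.0, 0, 1)          # spins reverse inside radius 10
S = np.stack([np.sin(theta) * (-y) / r, np.sin(theta) * x / r, -np.cos(theta)], axis=-1)

print("topological charge ~", round(topological_charge(S), 2))
print("self-overlap order parameter:", round(crystal_order_parameter(S[None], S), 3))
```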
By this definition, we see that the order parameter M(T) is close to 1 at very low T where each spin is only weakly deviated from its state in the GS, and M(T) is zero when every spin strongly fluctuates in the paramagnetic state. We show in Figure 39 the dependence of M and M z on T which indicates that the skyrmion crystal remains ordered up to a finite T. This stability at finite T may be important for transport applications. We have carried out a finite-size scaling on the phase transition at T c . We have observed that from the size 800 × 800, all curves coincide within statistical errors. Thus, there is no observable finite-size effects for larger lattice sizes. An important feature of topological systems such as the present system and disordered systems in general (spin glasses, random-field models, . . . ) is the relaxation behavior. In general, they do not follow the simple exponential law [97]. We have studied the relaxation time of the skyrmion crystal and found that it follows a stretched exponential law [98]. The DM interaction has been shown to generate a skyrmion crystal in a 2D lattice. However, skyrmions have been shown to exist in various kinds of lattices [99][100][101][102] and in crystal liquids [87][88][89]. Experimental observations of skyrmion lattices have been realized in MnSi in 2009 [93,94] and in doped semiconductors in 2010 [92]. Also, the existence of skyrmion crystals have been found in thin films [90,91] and direct observation of the skyrmion Hall effect has been realized [103]. In addition, artificial skyrmion lattices have been devised for room temperatures [104]. It is noted that applications of skyrmions in spintronics have been largely discussed and their advantages compared to early magnetic devices such as magnetic bubbles have been pointed out in a recent review by W. Kang et al. [105]. Among the most important applications of skyrmions, let us mention skyrmion-based racetrack memory [106], skyrmion-based logic gates [107,108], skyrmion-based transistor [109][110][111] and skyrmion-based artificial synapse and neuron devices [112,113]. Finally, we mention that we have found skyrmions confined at the interface of a superlattice composed alternately of a ferromagnetic film and a ferroelectric film [114][115][116]. These results may have important applications. Conclusions In this review, we have shown several studied cases on the frustration effects in two dimensions and in magnetic thin films. The main idea of the review is to show some frustrated magnetic systems which present several common interesting features. These features are discovered by solving exactly some 2D Ising frustrated models, they occur near the frontier of two competing phases of different ground-state orderings. Without frustration, such frontiers do not exist. Among the striking features, one can mention the "partial disorder", namely several spins stay disordered in coexistence with ordered spins at equilibrium, the "reentrance", namely a paramagnetic phase exists between two ordered phases in a small region of temperature, and "disorder lines", namely lines on which the system loses one dimension to allow for a symmetry change from one side to the other. Such beautiful phenomena can only be uncovered and understood by means of exact mathematical solutions. We have next studied frustrated magnetic systems close to the 2D solvable systems. We have chosen thin magnetic thin films with Ising or other spin models that are not exactly solvable. 
Guided by the insights of exactly solvable systems, we have introduced ingredients in the Hamiltonian to find some striking phenomena mentioned above: we have seen in thin films partial disorder (surface disorder coexisting with bulk order), reentrance at phase boundaries in fcc antiferromagnetic films. Thin films have their own interest such as surface spin rearrangement (helimagnetic films) and surface effects on their thermodynamic properties. Those points have been reviewed here. The surface effects have been studied by means of the Green's function method for frustrated non-collinear spin systems. Monte Carlo simulations have also been used to elucidate many physical phenomena where analytical methods cannot be used. Surface spin waves, surface magnetization, and surface phase transition have been analyzed as functions of interactions, temperature, and applied field. We have also treated the question of surface criticality. Results of our works show that critical exponents in thin films depend on the film thickness, their values lie between the values of 2D and 3D universality classes. Recent results on skyrmions have also been reviewed in this paper. One of the striking findings is the discovery of a skyrmion crystal in a spin system with DM interaction in competition with an exchange interaction, in a field. This skyrmion crystal is shown to be stable at finite temperature. To conclude, we would like to say that investigations on the subjects discussed above continue intensively today. Please note that there is an enormous number of investigations of other researchers on the above subjects and on other subjects concerning frustrated magnetic thin films. We have mentioned these works in our original papers, but to keep the paper length reasonable we did not present them here. Also, for the same reason, we have cited only a limited number of experiments and applications in this review. Funding: This research received no external funding. Acknowledgments: The author wishes to thank his former doctorate students and collaborators for their close collaborations in the works presented in this review. In particular, he is grateful to Hector Giacomini, Patrick Azaria, Ngo Van Thanh, Sahbi El Hog, Aurélien Bailly-Reyre and Ildus F. Sharafullin who have greatly contributed by their works to the understanding of frustrated magnetic thin films. Conflicts of Interest: The author declares no conflict of interest.
17,460.8
2018-12-11T00:00:00.000
[ "Physics" ]
Printing Smart Designs of Light Emitting Devices with Maintained Textile Properties † To maintain typical textile properties, smart designs of light emitting devices are printed directly onto textile substrates. A first approach shows improved designs for alternating current powder electroluminescence (ACPEL) devices. A configuration with the following build-up, starting from the textile substrate, was applied using the screen printing technique: silver (10 µm)/barium titanate (10 µm)/zinc-oxide (10 µm) and poly(3,4-ethylenedioxythiophene)poly(styrenesulfonate) (10 µm). Textile properties such as flexibility, drapability and air permeability are preserved by implementing a pixel-like design of the printed layers. Another route is the application of organic light emitting devices (OLEDs) fabricated out of following layers, also starting from the textile substrate: polyurethane or acrylate (10–20 µm) as smoothing layer/silver (200 nm)/poly(3,4-ethylenedioxythiophene)poly(styrenesulfonate) (35 nm)/super yellow (80 nm)/calcium/aluminum (12/17 nm). Their very thin nm-range layer thickness, preserving the flexibility and drapability of the substrate, and their low working voltage, makes these devices the possible future in light-emitting wearables. Introduction Smart luminous textiles are of great interest for applications such as clothing, interior design and visual merchandizing. Moreover, luminous textiles are beneficial for protective clothing and sportswear in order to improve safety by a higher visibility and interactive design for non-verbal communication. Additionally, luminous textiles have potentials for healthcare and medicine applications such as phototherapy. At present, such smart textiles are mostly limited to the integration of light emitting devices (LED) or optical fibres [1]. This approach however is limited to small-scale luminous textiles. An optional solution for the implementation of large-scale luminous surfaces on textiles is brought by applying printing technologies. There are a number of research projects that have investigated scenarios to incorporate light-emitting devices on textiles [2,3]. These address alternating current powder electroluminescence (ACPEL) and organic light emitting diodes (OLED) technology. Nevertheless, these are mostly limited to printing luminous structures on non-textile substrates and subsequently integrating them onto textile surfaces. This work addresses two different approaches implemented directly on textiles substrates: improved screen-printed designs of ACPEL devices and direct deposition of OLEDs. Diverse smart designs of the ACPEL devices were suggested to preserve textile properties such as flexibility, drapability and air permeability. The complete layer stack silver (Ag) (10 µm)/barium titanate (BaTiO 3 ) (10 µm)/zinc-oxide (ZnO) (10 µm)/poly (3,4-ethylenedioxythiophene)poly(styrenesulfonate) (PEDOT:PSS) (10 µm) was applied using the screen printing technique. A pixel-like design of the printed layers was selected and different geometries were implemented. In order to comprehend the interaction between the textile substrate, the applied functional layers and the selected design, the samples were tested on flexibility, air permeability and light output and their morphology after mechanical stress was investigated. Organic light-emitting devices (OLED) are more challenging to apply directly on textiles, but promise to be the future in light-emitting wearables. 
These devices were built out of the following layers: polyurethane (PU) or acrylate (10-20 µm) as the smoothing layer/Ag (200 nm)/PEDOT:PSS (35 nm)/super yellow (80 nm)/calcium/aluminum (Ca/Al) (12/17 nm), ending up with a device stack of maximum 0.5 µm and therefore maintaining the flexibility and drapability of the textile substrate. Due to the roughness of the textile substrate, a planarizing layer of polyurethane (PU) or acrylate (10-20 µm) had to be applied directly on the substrate before completing the rest of the stack. OLEDs have a high brightness and a low power consumption. To protect these devices from fast degradation from contact with oxygen or water vapour, an encapsulation layer is necessary. ACPEL Devices Literature shows that ACPEL devices can be printed on a variety of substrates [4]. The thickness of the complete device is about 40 µm and mostly it is applied as full area coverage. However, this will mask the benefits of the textile such as air permeability and drapability. Therefore, in this work, a special design based on a hexagonal cell structure is proposed. The design of the stack can be seen in Figure 1a. Both the bottom layer (Ag) and the top layer (PEDOT:PSS) are screen printed in this honeycomb structure. The line width is 0.5 mm and both layers are deposited in such a way that they are not touching, to prevent electrical shorts. The dielectric layer and the light emitting layer consist of 1.5 to 2.5 mm pixels. They can be printed on each crossing of the hexagon structure or on half of them. By changing the diameter and the number of pixels per hexagon, the light emission, but also the air permeability and the crease recovery, can be adapted. A schematic view of a hexagon cell structure is depicted in Figure 1b. Scanning electron microscopy (SEM) is applied to look at the final printed ACPEL device in detail. In Figure 1c, SEM images of the ACPEL devices printed on the polyester textile are shown. From left to right, the diameter of the pixels is changed from 1.5 mm over 2 mm up to 2.5 mm. This is the case for the first three images from the left, where only three pixels are printed in one hexagon cell. Also, for the last three SEM images, this change of diameter is applied but now six pixels are printed in one hexagon cell. It is clear from these SEM micrographs that the area of uncoated textile changes (dark grey area) when altering the diameter and the number of pixels per cell. In Figure 1d, the light emission can be noted. Due to the design of the ACPEL stack, only the pixels show light emission. Finally, Figure 2 indicates the influence of the design on the properties of the textile substrate after screen printing the ACPEL device. In Figure 2a, the air permeability, as measured by a FX3300 LabAir IV Air Permeability tester (Textest AG, Schwerzenbach, Switzerland), for the different designs is shown. It is clear from this graph that more air can pass if the diameter of the pixels is smaller. This is logical, of course, as less textile surface is covered. The difference between the single pixel structure, where only three pixels are printed per hexagon cell, for the different diameters is however not as big as for the dual pixel structure, with six pixels per hexagon cell.
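The air-permeability trend tracks how much of each honeycomb cell the printed layers actually cover. A minimal sketch of that area-coverage estimate is given below; the hexagon side length is an assumed value (the cell dimensions are not quoted here), and overlaps at the line corners are ignored.

```python
import math

def coverage_fraction(pixel_diameter_mm, pixels_per_cell,
                      hex_side_mm=5.0, line_width_mm=0.5):
    """Rough fraction of one honeycomb cell covered by the printed ACPEL design:
    the printed lines along the cell edges plus the circular pixels."""
    cell_area = 1.5 * math.sqrt(3) * hex_side_mm ** 2   # area of a regular hexagon
    line_area = 3 * hex_side_mm * line_width_mm          # 6 edges, each shared by 2 cells
    pixel_area = pixels_per_cell * math.pi * (pixel_diameter_mm / 2) ** 2
    return (line_area + pixel_area) / cell_area

for pixels, label in [(3, "single pixel"), (6, "dual pixel")]:
    for d in (1.5, 2.0, 2.5):
        print(f"{label}, d = {d} mm: coverage ≈ {coverage_fraction(d, pixels):.0%}")
```

Smaller pixels and fewer pixels per cell leave more uncovered textile, consistent with the higher air permeability reported for those designs.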
It can also be noted that the difference between the dual pixel structure, with a diameter of 1.5 mm, and the single pixel structure, with a pixel diameter of 2.5 mm or 2 mm, is comparable. This graph is in correspondence with the calculated area coverage of the textile substrate. In Figure 2b one can see the crease recovery measurements. In this experiment, the textile is folded and kept as such for 5 min and for 30 min by applying a weight of 1 kg on top of the double folded textile. After these 5 or 30 min, the weight is removed and it is recorded how far the textile will reverse back to its initial state. This is denoted as the crease recovery. From the figure it can be seen that, in comparison to an uncoated polyester substrate (last line of the graph), the crease recovery was smaller for all samples. However, in comparison with ACPEL devices printed as a full covering on the polyester textile, the crease recovery, especially for the single pixel structures, is very good. The light output performances were acquired using a Keithley 2401 (Keithley, Cleveland, OH, USA) source to measure the current and voltage characteristics and an absolute calibrated integrating sphere spectrometer from Avantes to determine the irradiance per wavelength [5]. The light output was obtained by comparing the electrical power to the coated area. In Figure 3, the light output of the different designs is compared to that of a fully covered ACPEL device. This demonstrates that the light output is halved when a dual pixel design is used instead of a fully covered surface. The light output of a device with the single pixel design is even less than one fourth of that of a fully covered device.
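As a hedged illustration of how such a comparison can be carried out, the sketch below integrates a spectral irradiance over wavelength and takes the ratio between a pixelated design and a fully covered reference; the spectra used here are synthetic placeholders, not measured data.

```python
import numpy as np

def integrated_output(wavelength_nm, spectral_irradiance):
    """Collapse a calibrated spectrum (irradiance per wavelength) into a single
    light-output number via the trapezoidal rule over the recorded range."""
    y = np.asarray(spectral_irradiance)
    x = np.asarray(wavelength_nm)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Placeholder spectra (arbitrary units) standing in for integrating-sphere data.
wl = np.linspace(400, 700, 301)
full_coverage = np.exp(-((wl - 505) / 30) ** 2)
dual_pixel = 0.5 * np.exp(-((wl - 505) / 30) ** 2)

ratio = integrated_output(wl, dual_pixel) / integrated_output(wl, full_coverage)
print(f"dual-pixel output relative to full coverage: {ratio:.0%}")
```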
Based on these experiments and on the light emission measurements, which can be found in [5], the single pixel structure with a diameter of 2 mm is seen as the most optimal when preserving the textile properties has priority. When, however, the light output is of high importance, the best option can be found in the dual pixel design with a diameter of 1.5 mm. OLED Devices Applying OLEDs to textiles is not as straightforward as is the case for the screen-printed ACPEL devices discussed above. First of all, the total thickness of the OLED stack is only 0.5 µm, which is even smaller than the roughness of the underlying textile substrate. Further, the deposition techniques to do so are not as standard as the screen printing technique from above. The advantages of using OLEDs, however, are numerous. Since they are made of very thin nm layers, the devices can be applied to flexible substrates. The emitted light has a high brightness, a uniform light output and a wide range of vision. OLEDs require a low power supply (3-5 V), have a low energy consumption and a good efficacy. Important disadvantages or challenges, however, have to be taken into account. The devices degrade very quickly due to water vapour and oxygen. Therefore, water vapour transmission rates (WVTR) and oxygen transmission rates (OTR) must be lower than 10−6 g·m−2 per day and 10−3 cm3·m−2 per day, respectively, indicating that a very high barrier layer is necessary [6]. Some of the applied techniques to deposit the OLED layers are very expensive and not roll-to-roll compatible. However, less expensive and roll-to-roll compatible printing techniques are increasingly emerging. These other deposition techniques and the OLED stack to be applied to textiles will be discussed in more detail in this part of the paper. As mentioned, the surface of the textile substrate is quite rough (µm-range) compared to the nm-range layer thickness of the OLEDs. This roughness can be ruled out by the deposition of a planarizing or covering layer. Printable PU or acrylate is therefore laminated on top of the textile substrates, as is shown in Figure 4, with a thickness between 10 and 20 µm to bring the micrometer roughness of the textile substrate to a nanometer roughness.
As previously stated, OLEDs degrade immediately in ambient conditions, making good encapsulation indispensable. Therefore, a transparent barrier layer is applied using plasma techniques. A first oxygen-free silicon nitride (SiN) base layer serves as a protection for later depositions. This layer is followed by an alternating system of high-barrier inorganic materials (such as silicon oxide (SiOx)) and softer, low-barrier organic materials. This barrier system brings a halt to defect formation and subsequently increases the diffusion length and the barrier properties. More information on the topic of encapsulation can be found in [7]. The bottom electrode or anode is a thermally evaporated silver (Ag) layer of 200 nm. Subsequently, the hole injection/transport layer PEDOT:PSS, a polymer mixture, is spin coated to obtain a 35 nm film. As an active layer, the PPV polymer Super Yellow is used to spin coat a layer of 80 nm inside an inert atmosphere glovebox system. Both the lab-scaled spin coating and thermal evaporation technique can be replaced by inkjet printing and ultrasonic spray coating. Inkjet printing is a contactless printing process where a digital image is recreated by ejecting ink droplets onto a substrate. The large-area deposition technique ultrasonic spray coating forms layers by atomizing the ink at the nozzle of the spray head into a continuous flow of micro-sized spherical droplets. Both techniques are less expensive and roll-to-roll compatible. It was shown in earlier work of the authors [8] that the active light-emitting layer can be ultrasonically spray coated without changing or damaging the polymer side-chain or backbone of the PPV polymer. As the textile substrate is not transparent, a top emitting polymer OLED (TEOLED) is prepared where the photons have to escape the device through the top transparent electrode or cathode. To obtain a transparent cathode, two different methods are tested in this work, i.e., applying printed metal grids or evaporating very thin metal layers.
For comparison, inkjet-printed Ag grids and very thin thermally evaporated gold (Au) layers were assessed by their transparency and sheet resistance. Hexagonal and triangular shaped Ag grids were inkjet-printed on glass substrates with a thickness of 150-250 nm. They showed a low sheet resistance of 0.82-2.7 Ω/□ and a high transparency of 70-90%. Very thin and completely covering Au layers of 1-15 nm were thermally evaporated on glass substrates. Here, a higher sheet resistance of 3.2-123.7 Ω/□ and a lower transparency between 25-70% was found. An overview of these results can be seen in Figure 5. Considering only these two characteristics, the Ag grids score much better on both, as was also found in earlier work of the authors [9]. However, the used commercially available Ag ink has to be sintered at a temperature of 200 °C, which will destroy all underlying layers.
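Transparency and sheet resistance can also be folded into a single ranking number; one common choice (not used in the original comparison, so this is purely an illustrative assumption) is Haacke's figure of merit, Φ_TC = T^10 / R_s. A minimal sketch applied to the reported ranges:

```python
def haacke_fom(transmittance, sheet_resistance_ohm_sq):
    """Haacke figure of merit for a transparent electrode: T^10 / R_s.
    Higher is better; the tenth power strongly penalizes low transparency."""
    return transmittance ** 10 / sheet_resistance_ohm_sq

# Best and worst corners of the reported ranges for the two electrode options.
candidates = {
    "Ag grid (best)":  (0.90, 0.82),
    "Ag grid (worst)": (0.70, 2.7),
    "thin Au (best)":  (0.70, 3.2),
    "thin Au (worst)": (0.25, 123.7),
}
for name, (T, Rs) in candidates.items():
    print(f"{name}: FoM = {haacke_fom(T, Rs):.2e} 1/ohm")
```

Even at the unfavourable end of its range, the printed grid still edges out the best evaporated Au film on this metric, consistent with the qualitative conclusion above.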
New Ag inks, based on precursors rather than on Ag nanoparticles, are now available with a considerably lower sintering temperature [10] and therefore, applying this grid structure is the most promising. A low work function material, such as calcium (Ca), has to be used in between the light emitting layer and the top Ag layer to align the energy levels for the proper functioning of the OLED. This material is usually thermally evaporated. Therefore, at this time, preference was given to a thermally-evaporated Ca/Ag cathode of respectively 12 and 17 nm. In this work, the complete OLED stack was deposited on glass, PET and textile substrates. The encapsulated glass OLED sample had some visual defects and pinholes, as can be seen in Figure 6. After applying the barrier layer, the OLED sample was taken out of the glovebox system to investigate the effects of ambient conditions on the encapsulation. After 19 h the OLED had already lost more than half of its light output and after 43 h only a few luminous pixels were visible. This shows that applying the barrier layer is a promising encapsulation strategy, but more research is needed to improve the OLED's characteristics and lifetime.
Finally, the OLED stack has been deposited onto PET foil and textiles. The OLED on PET had a nice uniform light output and could be bent without any output loss or cracks in the layers, displaying the flexibility of the OLED device as shown in Figure 7. However, for the textile-based OLED, only a few luminous pixels could be distinguished. The reasoning behind this bad light emission for the textile-based OLED is that this device employed printable PU as the planarizing layer. This PU layer was affected by the chlorobenzene used as solvent for the light emitting polymer Super Yellow. Consequently, a lot of defects were introduced into the OLED stack, making a uniform light output impossible. Discussion and Conclusions It has been shown in this work that light emitting devices can be printed on textile substrates applying different designs and different printing and coating techniques. First of all, an ACPEL device is fully screen-printed in an adapted, smart design such that the breathability and the drapability of the textile substrate are enhanced. It was shown that adapting the design (diameter and number of pixels per hexagon cell) can influence the air permeability and the crease recovery. The application of OLEDs on textiles shows several advantages, being very thin and flexible devices with a low power supply, low energy consumption, a good efficacy, a bright and uniform light output and a wide range of vision. Nevertheless, the usage of a high barrier layer is necessary and some applied deposition techniques are not roll-to-roll compatible and quite expensive. At the moment, high barrier layers are still applied using a combination of printing and plasma techniques, but for the actual OLED stack layers alternative techniques have been pushed forward, such as inkjet printing and ultrasonic spray coating.
Adequate research into a proper barrier layer, a planarizing layer, a transparent top electrode and roll-to-roll deposition techniques is ongoing and will bring the OLED technology from the class of PET foils towards textile substrates. The combination of both results presented in this paper can finally lead to a pixelated OLED structure on textile substrates for enhanced light emission without hampering the textile properties. Materials and Methods As mentioned above, two diverse technologies for lighting are examined on their printability on textile substrates. Figure 8 shows the typical layer build-up of an ACPEL device. All of the layers, with a thickness of 10 µm, are deposited on top of each other using the screen printing technique. The textile used in this work was a polyester woven fabric (100% PES, washed and fixated, kw11401 from Concordia Textiles, Valmontheim, Belgium) with a roughness average Ra of 6 µm. The first Ag layer (from Gwent) fulfils a dual purpose, as a bottom electrode and as a planarizing layer. This layer is followed by a dielectric layer (BaTiO3 from Gwent, Pontypool, United Kingdom) and a light emitting layer (Cu-doped ZnS from Gwent, Pontypool, United Kingdom). They are stacked in between two electrodes and therefore, a capacitor build-up is achieved. A transparent top-electrode (PEDOT:PSS EL-P 3145 ink from Orgacon, Mortsel, Belgium) completes the stack. After screen printing each layer, they are subsequently thermally annealed at 130 °C for 10 to 30 min. When an AC voltage of 80 V is applied with a frequency of 400 Hz, light is generated and coupled out through the transparent top-electrode.
For the second approach, organic light-emitting diodes (OLED) are deposited. The TEOLED stack (Figure 9) is produced by implementing different deposition techniques to apply the layers on glass, polyethylene terephthalate (PET) and textile substrates. Due to the relatively high surface roughness of the textile substrates, an additional planarizing/covering layer is required. To equalize the surface, polyurethane (PU) or acrylate is laminated onto the textile substrate with a thickness between 10-20 µm. By applying plasma techniques, a barrier layer, composed of a stack of alternating organic and inorganic layers with a total thickness of 1 µm, was added on top of the substrate or planarizing layer. Afterwards, an Ag anode of 200 nm is thermally evaporated at a base pressure of 10−7 mbar. Under a fume hood, a hole injection/transport layer of PEDOT:PSS (Clevios™ P AI 4083 from Heraeus, Hanau, Germany) of 35 nm is spin coated. As an active layer, the PPV-polymer super yellow (PDY-132 from Merck, Darmstadt, Germany) (Figure 10) was dissolved in chlorobenzene with a mass concentration of 5 mg/mL and stirred overnight at 50 °C. A layer of 80 nm was spin-coated in an inert atmosphere glovebox system (O2/H2O ppm <0.1). Subsequently, a transparent cathode of Ca and Ag was thermally evaporated at a base pressure of 10−7 mbar with a thickness of respectively 12 and 17 nm. The stack is completed with another barrier layer deposited by plasma techniques.
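As a quick sanity check on the "maximum 0.5 µm" quoted for the device stack, the electrically active layers listed above can simply be summed (the µm-scale smoothing and barrier layers are excluded, since they are not counted as part of the OLED stack itself); this is only an illustrative tally of the reported thicknesses.

```python
# Reported layer thicknesses of the TEOLED stack, in nanometres
# (planarizing and barrier layers excluded).
layers_nm = {
    "Ag anode": 200,
    "PEDOT:PSS": 35,
    "super yellow": 80,
    "Ca": 12,
    "Ag cathode": 17,
}
total_nm = sum(layers_nm.values())
print(f"total OLED stack thickness: {total_nm} nm = {total_nm / 1000:.2f} µm")  # ~0.34 µm
```

The tally comes to roughly 0.34 µm, comfortably below the 0.5 µm upper bound stated earlier.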
8,346.8
2018-02-01T00:00:00.000
[ "Engineering", "Materials Science" ]
Modulation of anxiety behavior in gonadectomized animals Anxiety is a complex psychological state which happens after stressful life experiences. Many factors such as daily life events, neurotransmitter systems, and different brain areas could influence anxiety behavior in humans and animals. For example, opioids and androgens decrease anxiety behavior both in humans and animals. Furthermore, removing the testes (gonadectomy) causes higher levels of anxiety‐like behaviors, in which the administration of testosterone and opioid antagonist can reverse some of these behaviors. We review the effects of morphine and androgens on the modulation of anxiety behavior in gonadectomized animals. We begin by highlighting the effects of opioid drugs and androgens on the modulation of anxiety behavior that have been implicated in anxiety behavior. We then discuss the functional consequences of gonadectomy on anxiety behavior. Finally, we consider how the opioids and androgens may contribute to adaptive responses associated with anxiety. INTRODUCTION Anxiety is a complex psychological process that often occurs after stressful life experiences. In a number of cases, it is adaptive since it prepares the organism for future stressful encounters. Nevertheless, if prolonged or exaggerated over time, anxiety induces many abnormal and maladaptive thoughts and behaviors (Leuner and Shors, 2012). Anxiety disorders are the most common of all psychiatric disorders; though, the current human and animal investigation has yet to provide a clear understanding of the neural mechanisms underlying their etiology. Understanding the effect of hormones on the neurobiological systems which modulate anxiety behavior will increase our capacity to develop new drug targets to treat several mental illnesses in humans (McHenry et al., 2014). In fact, animal behavioral profiles are usually employed to evaluate new therapeutic agents to treat anxiety disorders and to evaluate the mechanism of action of anxiolytic drugs (Siepmann and Joraschky, 2007). Several lines of research support the role of opioid receptors on the modulation of anxiety (Perrine et al., 2006;Erbs et al., 2012;Miladi-Gorji et al., 2012). For example, anxiety-like responses in mice are differentially affected by the activation of opioid receptors which the effects depend on the social status of the animals (Kudryavtseva et al., 2004). Several studies indicated that systemic administration of µ-opioid receptor agonists induces anxiolytic-like effect (Zarrindast et al., 2005;Solati et al., 2010;Eslimi et al., 2011), while the opioid receptor antagonists increase anxiety Rezayof et al., 2009;Zarrindast et al., 2010). Moreover, many investigations demonstrate that some androgens possess anxiolytic-like activity both in humans and animal models Giltay et al., 2012;McDermott et al., 2012;Terburg et al., 2016). Testosterone is the main circulating androgen. It interacts with classic androgen receptors, thereby induce anxiolytic-like activity (Fernandez-Guasti and Martinez-Mota, 2005). Investigations from human and rodent studies have revealed that levels of testosterone are inversely correlated with levels of anxiety (Frye and Seliga, 2001;Khera, 2013;Khakpai, 2014;Dossat et al., 2017). Removing the testes (gonadectomy), the main source of testosterone, causes higher levels of behavior indicative of anxiety in a variety of tasks, in male rats (Justel et al., 2012a). 
Testosterone deficiency syndrome, also recognized as late-onset hypogonadism, is a clinical and biochemical syndrome that can happen in men in relation to advancing age. The condition is characterized through deficient testicular production of testosterone. It may influence many organ systems and can result in substantial health consequences (Morales et al., 2015). The appropriate use of testosterone replacement therapy advised the management of testosterone deficiency syndrome (Morales et al., 2015). Also, the relationship between testosterone levels and anxiety disorders in humans and animals is evident with hypogonadism (long-term) and gonadectomy (short-and long-term) in male humans and rodents, respectively. Numerous researches indicated that testosterone-replacement therapy for short-and long-term in hypogonadal men and gonadectomized male rodents critically alleviates anxiety (Fernandez-Guasti and Martinez-Mota, 2005;Zarrouf et al., 2009;McHenry et al., 2014). Interestingly, opioids play a role in the effects of androgen in modulating anxiety behavior. So, investigations show the involvement of testosterone and opioid system in anxiogenic-like behaviors induced by gonadectomy in adult male rats for short-term (10 days) (Khakpai, 2014). Here, we review the effects of morphine and androgens on the modulation of anxiety behavior. We also consider how gonadectomy may induce anxiety behavior, as well as gonadectomy-treatment, may reverse responses associated with anxiety. The effect of opioid system on anxiety behavior Opioid peptides play a role in many functions, including pain perception, respiration, homeothermy, nutrient intake, and the immune response. Moreover, studies have demonstrated the role of opioid receptors in regulating baseline anxiety states and related behaviors (Roeska et al., 2008;Solati 2011;McHugh et al., 2017;Wang et al., 2017). These functions are mediated by three major classes of G protein-coupled receptors, µ, δ and κ, whose activation inhibits adenylyl cyclase (Kahveci et al., 2006). It is well known that systemic injection of µ-opioid receptor agonists including morphine causes the anxiolytic-like effect (Zarrindast et al., 2005;Solati et al., 2010;Eslimi et al., 2011), probably by interacting with the GABAergic system (Le Merrer et al., 2006). In contrast, the opioid receptor antagonists enhance anxiety in various behavioral animal tests such as the elevated plus-maze Rezayof et al., 2009). It has been revealed that both intra-peritoneal (Shin et al., 2003), and intra-cerebral (Zarrindast et al., 2005) injections of morphine potently induced anxiolytic effects. Studies performed in rodents demonstrated that µ-and δ-opioid receptors are involved in the control of emotional responses, including anxiety and depressive-like behaviors (Erbs et al., 2012). Cat odor exposure produced a significant increase in the expression of pro-opiomelanocortin and µ-opioid receptor genes in the brain structures related to anxiety (amygdala) and motivation (mesolimbic area). Anxiety response produced via the odor of a predator is an innate behavioral response and evolutionarily highly conserved. There is a report showing that a cloth impregnated with cat odor placed on the cage of rats caused a robust anxiogenic-like action in rats. 
This is also coherent with the hypothesis that morphine enhances defensiveness in a situation related to the cat odor stimuli and also morphine eliminates ultrasonic vocalizations evoked by cat odor, which supports the assumption that the opioid system mediates behavioral responses associated with anxiety (Areda et al., 2005). Moreover, withdrawal from chronic opiates is related to an increase in anxiogenic-like behaviors, but the anxiety profile in the morphine-dependent animals is not clear (Buckman et al., 2009;Pooriamehr et al., 2017;Kim et al., 2018). Additionally, recent investigations have revealed that voluntary exercise can decrease anxiety levels in rodents. Miladi-Gorji and coworkers (2012) reported that voluntary exercise decreases the severity of the anxiogenic-like behaviors in both morphine-dependent and withdrawn rats. Therefore, voluntary exercise could be a potential natural method to ameliorate a number of the deleterious behavioral consequences of opiate abuse. In addition, anxiety has been described as key comorbidity in patients suffering from chronic pain. It has been reported that rats subjected to neuropathic pain models develop anxiety-like behavior which can be reversed through appropriate analgesic treatment such as morphine and gabapentin (Roeska et al., 2008). Many neurotransmitter systems including cannabinoid, acetylcholine, histamine, dopamine (Zarrindast et al., 2005;Rezayof et al., 2009) in different sites of the central nervous system (CNS) such as the hippocampus and amygdala have been proposed to be involved in the modulation of morphine functions on anxiety behavior (Solati et al., 2010;Kesmati et al., 2014). Collectively, µ-and δ-opioid receptors are involved in the modulation of anxiety-like behaviors (Erbs et al., 2012). So that administration of opioid agonists induced anxiolytic-like effect (Zarrindast et al., 2005;Solati et al., 2010;Eslimi et al., 2011), but the application of opioid antagonists induced anxiogenic-like response Rezayof et al., 2009). In the present review, the possible mechanism(s) between the opioid system and androgens in the modulation of anxiety-like behaviors have been investigated. The effect of androgens on anxiety behavior Many investigations demonstrate that some androgens possess anxiolytic-like activity both in humans and animal models Giltay et al., 2012;McDermott et al., 2012;Terburg et al., 2016). Testosterone is a main circulating androgen. Preliminary researches suggest that testosterone may have anxiety-decreasing and cognitive-increasing properties in animals and people (Frye and Seliga, 2001;Hermans et al., 2006;Miller et al., 2009). It interacts with classic androgen receptors, proposing that its anxiolytic-like activity could be mediated via this mechanism (Fernandez-Guasti and Martinez-Mota, 2005). Testosterone secretion is under the quick pulsatile control of gonadotropin-releasing hormone (GnRH) that in turn activates the production of luteinizing hormone (LH). The brain has receptors for testosterone and is capable of synthesizing and metabolizing testosterone too, for example, estradiol. It has been reported that low salivary testosterone levels are related to both depressive and anxiety disorders (Giltay et al., 2012). Testosterone plays a role in many behaviors associated with sexual and reproductive function as well as fear and anxiety behaviors (King et al., 2005;Carrier and Kabbaj 2012;McDermott et al., 2012). 
A wide body of evidence demonstrates an anxiolytic-like effect of testosterone (Justel et al., 2012a;Kim and Spear 2016;Liang et al., 2018). There is also considerable document for fear-and anxiety-reducing properties of testosterone across a number of species, such as rats, mice, ewes, and humans (van Honk et al., 2005;Lacreuse et al., 2010;Domonkos et al., 2017). Studies revealed that subcutaneous administration of testosterone increases anti-anxiety behavior in the elevated plus-maze, zero mazes, and Vogel task and also enhances motor behavior in the activity monitoring test in aged intact male C57/B6 mice (Frye et al., 2008). In animal models, the anxiolytic-like activity of androgens have been reported after different schedules; so, whereas some found immediate actions , others reported decreased anxiety only after a long chronic administration (Fernandez-Guasti and Martinez-Mota, 2005). It has been demonstrated that the androgen receptors play a role in regulating anxiety-related be-haviors, as well as corticosterone responses and neural stimulation following exposure to a mild stressor in rodents (Zuloaga et al., 2011). In humans, gonadal hormones affect mood disorders such as anxiety and depression. Women are detected with anxiety disorders and depression more often than are men, and these disorders often coincide with a decrease in levels of estrogen during menopause (Arpels, 1996). Moreover, estrogen replacement therapy has been revealed to decline anxiety in postmenopausal women (Yazici et al., 2003). In men, alike but a less abrupt decrease in androgen levels with age is also often accompanied through symptoms of anxiety and depression (Kaminetsky 2005;Eskelinen et al., 2007). Androgen treatment of aging men, or of younger men with reduced testicular production of testosterone, improves some of these symptoms in both aging and younger men (Eskelinen et al., 2007;Amore et al., 2009;Seidman et al., 2009;Zuloaga et al., 2011). Furthermore, disorders of anxiety and fear dysregulation are highly prevalent. These disorders affect women nearly 2 times more than they affect men, occur predominately during a woman's reproductive years, and are particularly prevalent at times of hormonal flux. This suggests that gender differences and sex steroids play a main role in the regulation of anxiety and fear (Toufexis et al., 2006). Gender differences in the age-of-onset and prevalence of psychiatric disorders such as anxiety and depression indicate that sex hormones may modify symptoms of mental illness. Fear-potentiated startle is a translational measure of fear and anxiety as recent investigations have shown fear-potentiated startle in monkeys is reliably decreased by anxiolytics such as diazepam and morphine. Fear-potentiated startle is also changed in people with depression and anxiety (Toufexis et al., 2006;Morris et al., 2010). Correspondingly, boys and girls with low testosterone levels display greater indices of depression and anxiety than those with high testosterone (Edinger and Frye 2005;Zuloaga et al., 2011). Granger et al. (2003) reported that young boys and girls with lower salivary testosterone levels are more likely to experience higher levels of anxiety, depression and attention problems throughout the day compared to boys and girls of the same age with higher salivary testosterone levels. Androgen reduction related to aging is associated with negative mood and increased anxiety in men and women. 
These results suggest that androgens may have organizational and/or activational properties on mood and anxiety in people (Edinger and Frye, 2005;Domonkos et al., 2018). Although the clinical researches of testosterone therapy in women are more limited, some studies support anxiolytic roles for testosterone (Miller et al., 2009). In fact, women with a type of anxi-ety disorder, such as generalized anxiety express lower levels of salivary testosterone, compared to emotionally healthy women (Giltay et al., 2012). Clinical documents suggest that testosterone has anxiolytic benefits, with the potential to promote improved mood in both women and men (McHenry et al., 2014). At least two non-exclusive mechanisms may mediate the behavioral functions of steroid hormones. A classic genomic mechanism contains the coupling of the steroid hormone to intracellular receptors which are translocated to the nucleus and activate protein-synthesis. Furthermore, an alternative mechanism includes the activation of membrane receptors coupled to neurotransmitter receptor systems, as is the case of the GABA A -benzodiazepine receptor complex. There is evidence showing that testosterone exerts its anxiolytic-like activity via its conversion to the reduced metabolites with the consequent activation of the GABA A receptor complex (Aikey et al., 2002;Fernandez-Guasti and Martinez-Mota, 2005). In addition, testosterone activates the hypothalamic-pituitary-adrenal axis, anxiety-related behavior, corticosterone responses, and sensorimotor gating in rodents (Zuloaga et al., 2011). Additionally, injection of either estrogens or androgens generally results in reduced indices of anxiety and depression-related behaviors in rodents (Frye et al., 2008). Studies suggest that anxiolytic functions of estrogens are largely mediated via activation of the estrogen receptor (Lund et al., 2005). Particularly, testosterone treatment declines, whereas estrogen treatment enhances, the release of stress hormones adrenocorticotropic hormone (ACTH) from the pituitary gland, and corticosterone from the adrenal cortex (Zuloaga et al., 2008;2011). There is a wide body of evidence to propose that sexual experience may affect androgen secretion in many species, in turn, androgens may also affect anxiety. Sexual experience may change anxiety behavior and secretion of endogenous androgens. So, sexual experience is related to lower levels of anxiety-like response and higher levels of androgen secretion (Edinger and Frye, 2007b). Endogenous and exogenous testosterone affects some behavioral traits as revealed in human and animal studies. The effects of testosterone can be mediated through androgen or estrogen receptors, but also through rapid non-genomic effects. Endogenous testosterone levels have been revealed to be inversely related to anxiety and depression severity (Hodosy et al., 2012). Additionally to endogenous androgens' effect on anxiety behaviors, exogenous androgens may be used in part for their effects on mood. Men with low endogenous androgen levels due to aging or hypogonadism indicate more anxiety symptoms and declined mood than do their androgen-replete counterparts (Edinger and Frye, 2005;Meyers et al., 2010). Testosterone-replacement to such individuals can decrease some of the negative effects related to androgen decline (Edinger and Frye, 2005). 
Also, short-term (two weeks) gonadectomy in adult male rats decreases open field activity, and supplementation with testosterone propionate in gonadectomized rats restores open field activity (Zhang et al., 2011;McDermott et al., 2012) (Table I). Testosterone is metabolized to neuroactive steroids through diverse pathways: in one pathway, it is converted to androstenedione and further reduced to androsterone; in another, it is converted to dihydrotestosterone, which may be further reduced to 3α-androstanediol. This last pathway has been suggested to be involved in the decreased anxiety produced by androgens in intact male rats, suggesting that the anxiolytic-like action of androgens may require 5α-reduction (Edinger and Frye 2005;Fernandez-Guasti and Martinez-Mota, 2005). In support of this idea, it has been shown that intrahippocampal injection of the 3α-hydroxysteroid dehydrogenase inhibitor indomethacin in dihydrotestosterone-treated rats prevented the anxiolytic-like action induced by this steroid. Physiological levels of testosterone replacement in adult gonadectomized male, but not female, rats show protective properties against the development of anxiety-like behaviors in a model of chronic social isolation (Carrier and Kabbaj, 2012). Likewise, in intact aged male rodents with lower levels of testosterone, the application of testosterone decreases anxiety-like behaviors in the open field test and light-dark box test (Frye et al., 2008). These reports thus support the hypothesis that the activational effects of testosterone can decrease behavioral measures of anxiety in male rodents (McHenry et al., 2014;Domonkos et al., 2018). Altogether, testosterone, by interacting with classic androgen receptors, induces an anxiolytic-like effect (Fernandez-Guasti and Martinez-Mota, 2005). In animal and human studies, gonadal hormones affect anxiety behavior. Gender differences in the age of onset and prevalence of anxiety indicate that sex hormones may modify symptoms of anxiety (Edinger and Frye, 2005;Zuloaga et al., 2011). Androgen reduction related to aging is associated with increased anxiety in men and women. It seems that androgens may have organizational and/or activational effects on anxiety in people (Edinger and Frye, 2005). Testosterone replacement in such persons can reduce some of the negative effects associated with androgen decline (Edinger and Frye, 2005). Similarly, gonadectomy in rats induces an anxiogenic-like effect, which testosterone supplementation in gonadectomized rats can reverse (Zhang et al., 2011;McDermott et al., 2012). The effect of gonadectomy on anxiety behavior Previous evidence indicates that sexual behavior induces an anxiolytic-like effect, decreasing the impact of several types of stressors. In addition, androgens have been described to have anxiolytic effects in other situations, including emotional stress. For example, male rats administered testosterone are less disrupted during punished drinking testing in the Vogel paradigm (Bing et al., 1998) and display reduced signs of anxiety in the elevated plus-maze (Aikey et al., 2002), open field test, defensive burying test (Fernandez-Guasti and Martinez-Mota, 2003), and defensive freezing (Edinger and Frye, 2005) relative to vehicle-treated rats. Additionally, an increase in endogenous androgen release via sexual stimuli also enhances exploratory behavior in the open arms of the elevated plus-maze in male mice (Aikey et al., 2002).
As mentioned previously, short-term (10 days) as well as long-term (70 days) gonadectomy in adult male rats induces higher levels of anxiety behavior (Svensson et al., 2000;Justel et al., 2012a;Khakpai 2014), and testosterone administration can reverse some of the effects of gonadectomy (Justel et al., 2012a;Khakpai, 2014). Toufexis et al. (2005) reported that castration of male rats produced a more consistent light-enhanced startle (anxiogenic response), similar in magnitude to that observed in female rats. Replacement of testosterone, at high physiological doses, significantly attenuated light-enhanced startle in castrated males and further reduced it in intact male rats. This shows that circulating testosterone acts to decrease the response of male rats to the anxiogenic stimulus of bright light (Toufexis et al., 2005). In the open field test, animals treated with anxiolytics display an enhanced tendency to explore the central area of the field (Prut and Belzung, 2003;Yan et al., 2015). Thus, it was expected that exogenous testosterone treatment would increase activity in the central area of the open field. Conversely, short-term gonadectomy in adult male rats reduces activity in the central area of the open field (Justel et al., 2012a). The effects of gonadectomy and short-term hormone replacement in adult rats on various measures of anxiety have also been studied in the elevated plus-maze (Walf and Frye, 2005b), avoidance (Edinger and Frye, 2007a), and acoustic startle response paradigms (Turvin et al., 2007), consequently introducing the further possibility that anxiety in gonadectomized and/or hormone-replaced subjects could affect NOR testing and its outcome (Aubele et al., 2008). Research has demonstrated that testosterone replacement can alleviate anxiety behavior related to gonadectomy from 4 to 6 weeks following surgery. As mentioned previously, testosterone's anti-anxiety effects may be modulated in part by its metabolites. Testosterone can be aromatized to estrogen, which has been shown to reduce anxiety behavior in people and animals. However, testosterone is also metabolized to dihydrotestosterone. Administration of dihydrotestosterone, a nonaromatizable metabolite, can decrease the anxiety behavior of gonadectomized rats similarly to testosterone administration. Dihydrotestosterone can be metabolized to 3α-androstanediol, and systemic injection of 3α-androstanediol can also decrease anxiety behavior in intact or gonadectomized male or female rats (Edinger and Frye, 2005). Studies indicated that blocking dihydrotestosterone metabolism to 3α-androstanediol with indomethacin, a 3α-hydroxysteroid dehydrogenase inhibitor, also increases anxiety behavior in the open field, elevated plus maze, and defensive freezing tasks in intact or dihydrotestosterone-replaced male rats. These findings suggest that testosterone's anti-anxiety properties may be due in part to the actions of its 5α-reduced metabolites, independent of its aromatization to estrogen. In the brain, androgen receptors are expressed by both neurons and glial cells and are predominantly found in the hippocampus, amygdala, thalamus, hypothalamus, and cerebral cortex (Moghadami et al., 2016). The hippocampus is a putative site of action for androgens' anti-anxiety properties. The hippocampus modulates the anxiety process (Bannerman et al., 2002).
Androgens can have actions in the hippocampus. In the rat hippocampus, the androgen receptor is mainly concentrated in the CA1 pyramidal cells. It is therefore reasonable to assume an association between androgen receptors and cognitive activities (Moghadami et al., 2016). Castration decreases neuronal firing, increases vulnerability to cell death, and decreases synapse density in the hippocampus, effects which can be reversed by androgen replacement (Hajszan et al., 2004). The enzymes necessary for testosterone's metabolism, 5α-reductase and 3α-hydroxysteroid dehydrogenase, are also located within the hippocampus (Rhodes and Frye, 2004). As such, testosterone and dihydrotestosterone are readily metabolized to 3α-androstanediol in the hippocampus. These data suggest that testosterone's 5α-reduced metabolites may act in the hippocampus to modulate the anxiety process (Edinger and Frye, 2005). Overall, sexual behavior causes an anxiolytic-like effect. Short-term and long-term gonadectomy induces higher levels of anxiety behavior (Svensson et al., 2000;Justel et al., 2012a;Khakpai 2014), and administration of testosterone can reverse some of these effects (Justel et al., 2012a;Khakpai 2014), showing that circulating testosterone decreases the response of males to anxiogenic stimuli (Toufexis et al., 2005). Testosterone's anti-anxiety effects are produced, at least in part, via its 5α-reduced metabolites. The effects of opioid antagonists and testosterone on the modulation of anxiety behavior Opioids are known to play a role in mediating the effects of androgens. Nonetheless, opioids have many adverse effects, including opioid-induced androgen deficiency (Chrastil et al., 2014). The effect of the opioid system on the modulation of testosterone levels is suggested to be mediated via effects on both the hypothalamus and the testes. Opiates are proposed to affect the release of GnRH from the hypothalamus. In the CNS, endogenous opioids inhibit pulsatile GnRH release, partly mediating the stress response within the central nervous system-pituitary-gonadal axis (Bottcher et al., 2017). This, in turn, causes a decrease in the release of LH from the anterior pituitary gland, which is necessary for the activation of Leydig cells to produce testosterone. In addition, opiates were also shown to increase the sensitivity of the hypothalamus to the negative feedback effects of testosterone, causing a marked suppression of LH release (Lambert et al., 1990;Hofford et al., 2010;Ruka et al., 2016). Opioidergic transmission, in relation to LHRH release, is reduced after long-term castration. Opioid receptor activity (evaluated via responsiveness to an opioid receptor agonist) is maintained in female rats, but lost in male rats, after long-term gonadectomy (Almeida et al., 1988). Studies indicated that naloxone can stimulate LH release when rats gonadectomized for a few weeks were injected with either oestradiol benzoate or testosterone propionate. Masotto and Negro-Vilar (1988) reported that male rats showed no variation in any parameter of pulsatile LH secretion in response to naloxone 8 weeks after castration, whereas a small increase in mean LH level and in LH pulse amplitude was observed 1 to 2 weeks after gonadectomy. In gonadally intact ewes, the opioid antagonist WIN 44,441-3 increased LH pulse frequency and amplitude at selected times during the estrous cycle; however, it had no effect in subjects ovariectomized for 4 months or more.
Ewes retained the ability to show a small increase in LH pulse frequency when given an opioid antagonist 1 week after ovariectomy (Whisnant and Goodman, 1988). In other studies, however, LH responses to opioid manipulation have been observed following both short- and long-term castration. For example, Cicero and coworkers (1982) found a similar LH response to naloxone in male rats tested 3 and 31 days after castration. In female rats, plasma LH levels increased in response to naloxone 24 h, 4 days, and 8 days after ovariectomy (Babu et al., 1988). Also, intraventricular injection of the opioid receptor agonist β-endorphin in female rats that had been ovariectomized for 3 weeks produced significant reductions in LH pulse frequency and amplitude compared to the LH output observed during a comparable saline injection. Female rabbits displayed dramatic rises in LH pulse amplitude and mean LH levels when given an intravenous injection of naloxone 2 weeks after gonadectomy. Although LH responses to naloxone are variable after short-term gonadectomy in the species mentioned, gonadally intact subjects or animals given sex steroids after short-term gonadectomy generally show an increase in LH secretion in response to opioid receptor antagonists (Lambert et al., 1990). In addition, modulation of the sensitivity of the hypothalamic-pituitary axis by opiates was also proposed to result from a decreased sensitivity of the pituitary to GnRH. Beyond its effects on the hypothalamus, the opioid system was shown to inhibit gonadal function through specific opioid receptors within the testes. This was confirmed to be mediated by suppression of testicular steroidogenesis, which results in reductions in plasma testosterone levels (Hofford et al., 2010). Animal studies also support links between anabolic-androgenic steroids and opioids. At the physiologic level, testosterone increases the response to opioids. At the pharmacologic level, intracerebroventricular testosterone self-administration causes autonomic depression similar to an opioid overdose, which is inhibited by the opioid antagonist naltrexone. In other studies, anabolic-androgenic steroids increase morphine-induced hypothermia (Celerier et al., 2003), even as they decrease the analgesic response to morphine (Philipova et al., 2003), and weaken tolerance to morphine's antinociceptive effect (Celerier et al., 2003). This is consistent with anabolic-androgenic steroid-induced opioid receptor binding in the brain (Cooper and Wood, 2014). Castration may also affect nociception, because it enhanced morphine analgesia on the hot-plate test (Ali et al., 1995). Similar to results in female rats, intracerebroventricular morphine infusions in castrated male rats induce analgesia on the tail-flick and jump tests that is decreased in efficacy but not potency (Kepler et al., 1989). This effect may be CNS area-dependent, because morphine potency after infusion into the ventrolateral periaqueductal gray in castrated male rats is slightly increased. Generalizations cannot be made from morphine to other µ-receptor agonists, because gonadectomy in adult male rats for 4 weeks failed to consistently affect analgesia induced by intracerebroventricular infusions of the µ-receptor-selective agonist D-Ala2-MePhe4-Gly-ol5-enkephalin (DAMGO). Gonadectomy in adult male rats for 4 weeks was similarly without effect on the δ-receptor analgesia of [D-Ser2, Leu5]enkephalin-Thr6 (DSLET) (Kepler et al., 1991).
In male mice, morphine analgesia after castration is enhanced on the hot-plate test and against abdominal writhing produced by acetic acid, but decreased on the tail-flick test (Ali et al., 1995). Testosterone reversed the decreased morphine sensitivity of the castrated rat (Kest et al., 2000). Results of nociceptive testing procedures examining the activational roles of gonadal hormones on opioid antinociception are somewhat variable. In male rats, short- and long-term gonadectomy has increased, decreased, or failed to change µ-opioid antinociception. Also, in females, short- and long-term gonadectomy has increased, decreased, or failed to change opioid antinociception. The variability across investigations that have manipulated gonadal hormones in adult rats might be due to the wide array of methodologic differences among them: almost every investigation has used different gonadectomy test intervals (short- and long-term), hormone replacement regimens, opioid injection procedures, and nociceptive testing procedures. Investigations suggest that both testosterone in adult male rats and estradiol in adult female rats contribute to the sex difference in morphine antinociception (Craft et al., 2004). Sex differences in the antinociceptive effects of opioids have been revealed in both non-human primates and rodents, with males usually being more sensitive than females (Terner et al., 2002;Loyd et al., 2008;Bai et al., 2015). There is abundant evidence showing that pharmacokinetic factors cannot fully account for these differences, as opioids are more potent in males following central injection, and systemic injection of morphine produces comparable brain and plasma levels in males and females (Kepler et al., 1991;Kest et al., 1999;Terner et al., 2002). There is also evidence suggesting that pharmacodynamic factors do not play a key role, as sex differences have not been found in opioid binding affinity and receptor density (Terner et al., 2002). In mammals, opioids control food intake and energy balance, and gonadal androgens interact with the opioid system neurochemically and behaviorally (Mateo et al., 1992). Accordingly, pretreatment with the long-acting opioid blocker naltrexone inhibited the physiologic and behavioral symptoms of testosterone injection and blocked the reinforcing effects of testosterone self-administration (Peters and Wood, 2005). Investigations show the involvement of testosterone and the opioidergic system in anxiogenic-like behaviors induced by short- and long-term gonadectomy. Several studies have reported an anxiolytic function for morphine and µ-opioid receptor agonists when injected peripherally, whereas µ-opioid receptor antagonists tend to be anxiogenic (Le Merrer et al., 2006). As mentioned above, the endogenous opioid system could influence testosterone levels via effects on the hypothalamic-pituitary-gonadal axis and the testes (Hofford et al., 2010). Thus, an interaction between testosterone and the opioid system in the control of anxiety behavior seems plausible. Administration of opioids in males causes opioid-induced androgen deficiency, i.e., a significant reduction in plasma testosterone levels. This effect has been reported in humans as well as in experimental animals such as rodents (Khakpai, 2014). This opioid effect is dramatic: a single administration can cause a robust decrease in testosterone levels comparable to castration.
Moreover, tolerance does not develop to this opioid-mediated effect; consequently, the decrease lasts for the entire duration of opioid administration (Aloisi et al., 2005). Collectively, the opioid system can modulate testosterone levels by affecting the release of GnRH from the hypothalamus. Endogenous opioids inhibit pulsatile GnRH release (Bottcher et al., 2017), which causes a decrease in the release of LH (Lambert et al., 1990;Hofford et al., 2010;Ruka et al., 2016). Opioid transmission, in correlation with LHRH release, decreases after long-term castration (Almeida et al., 1988). Naloxone stimulated LH release following short- and long-term castration in adult male rats (Cicero et al., 1982). The interaction between testosterone and the opioidergic system may modulate anxiogenic-like responses induced by short- and long-term gonadectomy (Le Merrer et al., 2006). Treatment of anxiety disorders with androgens alone or in combination with different anxiolytics Many investigations have demonstrated that anxiety-like behaviors are influenced by peripheral and central factors, including hormones and neurotransmitters in diverse regions of the CNS. Many studies using various methods have revealed the anxiolytic effect of androgens. The most cited work exploring the effects of testosterone on anxiety behavior in animals and humans has shown in numerous experiments that testosterone, either endogenous or exogenous, reduces anxiety (Frye and Seliga, 2001;Aikey et al., 2002;Khera, 2013;Khakpai, 2014;Dossat et al., 2017). Furthermore, similar experiments indicated that this anxiolytic response to testosterone is dose-dependent and very probably mediated via 5α-reductase, which reduces testosterone to dihydrotestosterone. Some experiments on gonadectomized rats indicated that the 3α-metabolites of dihydrotestosterone can be the mediators of testosterone's anxiolytic effects (Edinger and Frye, 2005). Furthermore, blockade of the transformation of dihydrotestosterone to 3α-androstanediol by a 3α-hydroxysteroid dehydrogenase inhibitor diminished or prevented the anxiolysis (Celec et al., 2015). The hypothalamic-pituitary-gonadal axis is also regulated by a complex series of outside influences. Opioids are one such influence. Studies suggest that opioids, both endogenous and exogenous, can bind to opioid receptors principally in the hypothalamus, but potentially also in the pituitary and the testis, to regulate gonadal function. Opioids have been shown to decrease the release of GnRH or restrict its normal pulsatility at the level of the hypothalamus, resulting in a reduced release of LH and FSH from the pituitary and a secondary fall in gonadal steroid production, that is, hypogonadism. Direct influences of opioids on the testis, including reduced secretion of testosterone and testicular interstitial fluid, have also been demonstrated. Opioid receptors have also been identified in ovarian tissue cultures, and opioids have been shown to directly suppress ovarian steroid production in vitro. Opioids have also been shown to change the adrenal production of dehydroepiandrosterone, the main precursor of both testosterone in men and estradiol in women (Katz and Mazer, 2009). Therefore, opioids, by influencing hypothalamic-pituitary-gonadal activity as well as GnRH and LH secretion, interact with androgens to modulate anxiety behavior.
CONCLUSION Anxiety can be produced by various endocrine, autoimmune, metabolic, and toxic disorders, as well as by the adverse effects of medication (Kessler et al., 2005). Several studies have reported an anxiolytic function for morphine and androgens. On the other hand, withdrawal of morphine (Buckman et al., 2009;Pooriamehr et al., 2017;Kim et al., 2018) and short- and long-term gonadectomy (Svensson et al., 2000;Justel et al., 2012a;Khakpai 2014) lead to anxiogenic behavior. Interestingly, the opioid system was shown to play a role in gonadal hormone regulation (Hofford et al., 2011). The effects of opioids on testosterone levels have several implications for the short- and long-term health of patients requiring pain management and of drug addicts. Opioid treatment decreases plasma testosterone levels in males (Aloisi et al., 2009;Hofford et al., 2011). This effect is induced via modulation of hypothalamic-pituitary-gonadal axis activity (Hofford et al., 2011;Khakpai, 2014). Research has indicated that opioids modulate the anxiety response induced by androgens. Also, opioids, by modulating plasma testosterone levels, could modulate anxiety behavior in gonadectomized animals (Khakpai, 2014). There are reports showing that injection of the opioid antagonist naloxone produces a rise in testosterone concentrations, and that administration of naloxone at low doses is capable of modifying plasma testosterone concentrations (Gartner, 2001;Khakpai, 2014). Therefore, naloxone may have an effect on the modulation of anxiety behavior in gonadectomized animals (Khakpai, 2014). Moreover, future studies are needed to fully understand the nature and causes of the possible mechanisms linking opioids and androgens in the modulation of anxiety behavior in gonadectomized animals, as such work might simultaneously target the cortical/cognitive as well as subcortical/reflexive characteristics of anxiety while avoiding the apparent side effects of chronic hormone administration or opiate abuse.
Evaluation of Feature Extraction and Recognition for Activity Monitoring and Fall Detection Based on Wearable sEMG Sensors As an essential subfield of context awareness, activity awareness, especially daily activity monitoring and fall detection, plays a significant role for elderly or frail people who need assistance in their daily activities. This study investigates the feature extraction and pattern recognition of surface electromyography (sEMG), with the purpose of determining the best features and classifiers of sEMG for daily living activity monitoring and fall detection. This is done through a series of experiments. In the experiments, four channels of sEMG signal from wireless, wearable sensors located on the lower limbs are recorded from three subjects while they perform seven activities of daily living (ADL). A simulated trip-fall scenario is also considered, with a custom-made device attached to the ankle. With this experimental setting, 15 feature extraction methods of sEMG, covering the time domain, frequency domain, time/frequency domain, and entropy, are analyzed based on class separability and calculation complexity, and five classification methods, each with the 15 features, are evaluated with respect to recognition accuracy rate and calculation complexity for activity monitoring and fall detection. It is shown that a high recognition accuracy rate and a minimal calculation time for daily activity monitoring and fall detection can be achieved in the current experimental setting. Specifically, the Wilson Amplitude (WAMP) feature performs the best, and the classifier Gaussian Kernel Support Vector Machine (GK-SVM) with Permutation Entropy (PE) or WAMP results in the highest accuracy for activity monitoring, with recognition rates of 97.35% and 96.43%. For fall detection, the classifier Fuzzy Min-Max Neural Network (FMMNN) has the best sensitivity and specificity at the cost of the longest calculation time, while the classifier Gaussian Kernel Fisher Linear Discriminant Analysis (GK-FDA) with the feature WAMP guarantees a high sensitivity (98.70%) and specificity (98.59%) with a short calculation time (65.586 ms), making it a possible choice for pre-impact fall detection. The thorough quantitative comparison of the features and classifiers in this study supports the feasibility of a wireless, wearable sEMG sensor system for automatic activity monitoring and fall detection. Introduction As a result of an aging population, the number of elderly or frail people who need help in their daily activities is rapidly increasing [1][2][3]. This leads to a series of problems in caring for older people and people with medical disabilities. Falls are the leading cause of trauma and death among people 65 or older, and the resulting health care costs represent a serious public burden [1]. Helping this group of people is therefore an important goal. Among the EMG features studied previously is sample entropy, which was applied to real uterine EMG signals to distinguish between pregnancy and labor contraction bursts. Another important step in activity monitoring and fall detection is the selection of the classification technique. For systems with a few inputs, the most common algorithm for classification, especially for statistical feature evaluation and classification, is Linear Discriminant Analysis (LDA). Though accurate and fast, its use becomes complicated for multi-input and multi-output systems. To address this problem, the so-called "kernel trick" was taken into account.
For example, Nonparametric Weighted Feature Extraction (NWFE), Principal Component Analysis (PCA), kernel PCA with a Gaussian kernel, and kernel PCA with a polynomial kernel were suggested for classification [38]. Kakoty et al. [36] used a linear-kernel Support Vector Machine (SVM) with the discrete wavelet transform to classify six grasp types, which showed a recognition rate of 84 ± 2.4%. Based on machine learning theory, the SVM is a state-of-the-art classification method, which has significant advantages due to its high accuracy, elegant mathematical tractability, direct geometric interpretation, and lack of a need for a large number of training samples to avoid overfitting [41]. To achieve a higher efficiency, the Fuzzy Min-Max Neural Network (FMMNN), whose learning phase is single-pass and online-adaptive, was studied. This also led to other modified methods such as the multi-level fuzzy min-max (MLF) classifier, which mainly uses a multi-level tree structure to handle the overlapping-area problem [42]. Other widely used unsupervised learning methods are clustering techniques. Fuzzy C-means (FCM) data clustering was used to automate the construction of a simple amplitude-driven inference rule base, which resulted in overall classification rates of lower-limb actions ranging from 94% to 99% [43]. In the literature, a few studies can be found on the quantitative performance comparison of feature extraction and classification of sEMG in the context of controlling prosthetic limbs or gait phase recognition [44,45], but almost no studies can be found on such a comparison for activity monitoring and fall detection. For a system with good performance, EMG features should be selected for maximum class separability, high recognition accuracy, and minimum computational complexity, ensuring as low a misclassification rate as possible in a real-time implementation on reasonable hardware [44]. The current research is aimed at selecting the best sEMG features and classification method from the three approaches mentioned above for the recognition of daily activities and falls. The remainder of this paper is structured as follows: Section 2 outlines daily activities and falls, and data acquisition. Section 3 presents various feature extraction techniques and classification methods. The analysis of the experiments performed is described in Section 4. The conclusions and discussion are presented in Sections 5 and 6, respectively. Activity Monitoring and Data Acquisition In order to achieve daily activity monitoring and fall detection, it is necessary to distinguish daily activities and falls. The three most common activities of daily living (ADL) were selected, i.e., walking, stair-ascending, and stair-descending. Four further ADLs, stand-to-squat, squat-to-stand, stand-to-sit, and sit-to-stand, were selected as well; they are not easily distinguished from falling or from each other. Since the activities mentioned above result from contraction of the muscles in the lower limbs, four surface electrodes were used to measure sEMG signals from the gastrocnemius, rectus femoris, tibialis anterior, and semitendinosus, which are muscles involved in lower limb motions. The sEMG electrodes were placed on muscles of the left lower limb, indicated by small circles in Figure 1 and marked CH1 through CH4. The semitendinosus plays a crucial role in extending the hips, flexing the legs, and rotating the knee joints externally [46]. The gastrocnemius is mainly concerned with standing and walking activities.
The rectus femoris is a powerful knee extensor that also has a role in flexing the hip, and the tibialis anterior muscle's roles mainly concern dorsiflexing the ankle and enabling foot eversion [47]. The sEMG signal was recorded using a Trigno™ Wireless EMG system (Delsys Inc, Natick, MA, USA), which provides a 16-bit resolution, a bandwidth of 20-450 Hz, and a baseline noise <1.25 µV (rms). It has a typical operating range of 40 m and communicates via Bluetooth, and its patented motion artifact suppression allows the wearer to move freely. The sEMG signals were sampled at 1024 Hz using EMGworks 4.0 acquisition software (Delsys Inc.). All sensors were secured to the skin by a double-sided adhesive interface. A reference electrode was attached to the skin near the sEMG electrodes to supply a voltage baseline. Feature Extraction Surface EMG features were computed using 1.5 s epochs (1536 samples), which was the time necessary to complete the longest activity (stand-to-squat) in our experiment. Data for each activity were collected separately and the features were computed. For the purpose of comparison, 15 well-known EMG feature types were considered, as shown in Table 1, where N and x_i denote the number of samples and the i-th raw EMG sample, respectively, and u(x) denotes the unit-step function. (1) Integral of Absolute Value (IAV) For discrete signals, the IAV is the average of the absolute values of the signal samples [4]. (2) Variance (VAR) In stochastic terms, the variance characterizes the average power of a random signal [4]. The Wilson amplitude (WAMP) is the number of times that the difference between two consecutive amplitudes exceeds a certain threshold; in this study, a threshold T of 0.05 V is considered. This feature is an indicator of firing motor unit action potentials (MUAP) and therefore of the muscle contraction level [25]. ZC represents the number of times that the amplitude of the signal passes through zero [48]. NT counts the number of changes in the sign of the slope, in other words, the number of signal peaks [49]. Another feature is the mean of the difference in amplitudes of two consecutive samples [44]. The mean frequency (MF) feature estimates the mean frequency of the signal in a time segment [50], where f_i denotes frequency and h_i denotes the intensity of the frequency spectrum. HIST contains a series of highly unequal vertical stripes or segments representing the data distribution [49]; this study considers the amplitude range −5 V to 5 V and divides it into 21 amplitude slots of equal size. (9) Auto-Regressive Coefficient (AR) In the auto-regressive model, the signal samples are estimated by a linear combination of their earlier samples. This process computes linear regression coefficients. It has been shown that the EMG spectrum changes with muscle contraction, which results in changes in the AR coefficients [51]. Various experimental and theoretical studies have shown that the model order P = 4 is suitable for EMG signals [52]; therefore, it was used in the current research. The ARCU is the AR model estimated from the third-order cumulant of the signal in each time segment. The novel part of this method is that the input of the algorithm is the cumulant rather than an auto-correlation function. Normally, the ARCU can effectively separate cyclostationary signals from stationary signals and, in theory, completely suppress Gaussian colored noise. Here, a fourth-order AR model from the third-order cumulant is used [44].
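As an illustration of how the simplest time-domain features above can be computed, the following Python sketch evaluates IAV, VAR, WAMP, and ZC on one 1.5 s epoch (1536 samples at 1024 Hz) of a single sEMG channel. This is a minimal sketch rather than the authors' MATLAB implementation; the synthetic epoch and the zero-mean form of the variance are assumptions, while the 0.05 V WAMP threshold follows the text.

```python
import numpy as np

def iav(x):
    """Integral of Absolute Value: mean of the absolute sample values."""
    return float(np.mean(np.abs(x)))

def var(x):
    """Variance as average signal power (assumes an approximately zero-mean sEMG epoch)."""
    return float(np.sum(x ** 2) / (len(x) - 1))

def wamp(x, threshold=0.05):
    """Wilson Amplitude: count of consecutive-sample differences exceeding the threshold (0.05 V in the study)."""
    return int(np.sum(np.abs(np.diff(x)) > threshold))

def zc(x):
    """Zero Crossing: number of times the signal changes sign."""
    return int(np.sum(np.abs(np.diff(np.sign(x))) > 0))

# Example: one 1.5 s epoch of a single channel sampled at 1024 Hz (1536 samples).
rng = np.random.default_rng(0)
epoch = rng.normal(scale=0.1, size=1536)   # stand-in for a raw sEMG epoch, in volts
features = {"IAV": iav(epoch), "VAR": var(epoch),
            "WAMP": wamp(epoch), "ZC": zc(epoch)}
print(features)
```

In practice each of the four channels would yield its own feature values, which are then concatenated into the per-epoch feature vector used for classification.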
The EWT feature computes the energy of the wavelet-transformed signal, where F_j is the wavelet energy coefficient, K is the number of coefficients in the j-th decomposition layer, and W_{j,k} is the k-th coefficient of the j-th decomposition layer. A Db8 wavelet and a decomposition depth of 5 layers are used in our study. (12) Energy of Wavelet Packet Coefficient (EWP) This feature computes the energy of the wavelet-packet-transformed signal and is similar to the EWT. Compared with the EWT, the advantage of the EWP is that it can deal with both high- and low-frequency components, but the number of feature components is increased and therefore the computational complexity is also increased [34]; in its formula, W_j denotes the j-th layer of decomposed coefficients. (14) Fuzzy entropy (FE) Fuzzy entropy describes the degree of fuzziness of fuzzy sets and is used to quantify the regularity of a time series. In its formula, N is the number of samples, m defines the dimension of the data, D_ij is the similarity degree of two samples, r is the width of the exponential function in D_ij, and φ^m is called the mean average similarity [54]. (15) Permutation entropy (PE) Permutation entropy is a way of quantifying the relative occurrence of different motifs [55]; it is a complexity measure, applies to non-linear signals, and has a high anti-interference ability and good robustness. The core of PE is choosing n consecutive samples to form an n-dimensional vector. The values are sorted in ascending order, and the resulting ordinal pattern is one of n! possible permutations. The probability of each permutation over the entire time series is then calculated and symbolized as p(π), where π represents the different permutations [56]. Feature Class Separability In order to perform a quantitative evaluation of the extracted features, Fisher's discriminant criterion was used to translate the data samples into a class separability index: the trace of the between-class scatter matrix is divided by the trace of the within-class scatter matrix [57]. The between-class scatter matrix S_B is the covariance matrix of the means of all classes, in which m_m is the mean of all the class means and m_i is the mean of the i-th class. The within-class scatter matrix S_W is the mean of the covariance matrices of all classes, in which m_i is the mean of the i-th class and x is the sample vector. The class separability index is calculated as J = tr(S_B)/tr(S_W). It is obvious that the quality of the feature space improves when the value of this index increases.
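The class separability index just described can be sketched as follows. This is one common reading of the trace-ratio criterion (trace of the between-class scatter divided by the trace of the within-class scatter); the feature matrix and labels below are placeholders rather than the study's data.

```python
import numpy as np

def class_separability(X, y):
    """Fisher-style separability index: trace(S_B) / trace(S_W).

    X : (n_samples, n_features) feature matrix for one feature type
    y : (n_samples,) integer class labels (e.g., the eight activities)
    """
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    n_feat = X.shape[1]
    s_b = np.zeros((n_feat, n_feat))   # between-class scatter: covariance of the class means
    s_w = np.zeros((n_feat, n_feat))   # within-class scatter: mean of the per-class covariances
    for c in classes:
        Xc = X[y == c]
        d = (Xc.mean(axis=0) - overall_mean)[:, None]
        s_b += d @ d.T
        s_w += np.cov(Xc, rowvar=False)
    s_b /= len(classes)
    s_w /= len(classes)
    return np.trace(s_b) / np.trace(s_w)

# Placeholder example: 60 epochs, 4 channels, 3 classes shifted apart so they separate.
rng = np.random.default_rng(1)
y = np.repeat([0, 1, 2], 20)
X = rng.normal(size=(60, 4)) + y[:, None]
print(class_separability(X, y))
```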
Classification Five representative classification techniques (shown in Table 2) were considered, as listed below. (1) Fisher Linear Discriminant Analysis (FDA) The FDA, also known as Fisher's Linear Discriminant Analysis (LDA), finds a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier or for dimensionality reduction before later classification [57]. (2) Fuzzy Min-Max Neural Network (FMMNN) The FMMNN is based on hyperbox fuzzy sets. A hyperbox is defined by its minimum and maximum points, which are created by the input patterns [58]. The membership function is set with respect to the minimum and maximum points of the hyperbox [59]. Its multilayer structure is capable of dealing with nonlinear separability, and it also possesses an adaptive learning capability. (3) Gaussian Kernel Fisher Linear Discriminant Analysis (GK-FDA) Kernel Fisher Linear Discriminant Analysis (KFDA) is an evolution of the FDA that calculates the projection by a kernel function rather than Fisher's algorithm. In practice, most kernel methods solve a linear problem in the kernel feature space [60]. In the current study, the Gaussian kernel, the most pervasive kernel, is used. (4) Gaussian Kernel Support Vector Machine (GK-SVM) This is a nonlinear version of SVM classification. The kernel trick with SVM is the most used kernel classifier among the available kernel methods; it makes the SVM more robust and flexible for any kind of data, irrespective of its linearity, to achieve a highly accurate classification rate [60]. (5) Fuzzy C-means algorithm (FCM) Fuzzy C-means (FCM) is a method of clustering that allows data to belong to two or more clusters [61]. The Fuzzy C-means model aims to obtain the membership degree of each sample point in all classes through optimization of an objective function. This function determines the sample type and fulfills the purpose of automatic sample data classification. The common Fuzzy C-means model is an unsupervised machine learning method that analyzes and models data with fuzzy theory. Experiments and Results Three subjects (two males and one female, age 24-26, height 160-180 cm, weight 48-70 kg) without neural or musculoskeletal deficits were randomly recruited for the experiment. Each subject performed seven activities of daily living (ADLs): stand-to-squat, squat-to-stand, stand-to-sit, sit-to-stand, stair-ascending, stair-descending, and walking. In addition, a few unexpected simulated trip falls, induced by a custom-made device attached to the ankle, were interspersed among the normal walking trials. The custom-made device attached to the ankle is made of a round cushion and a rope. The participants repeated the procedure 10 times on each experiment day, ensuring that each activity and the trip fall were performed at least 30 times in total and that the order of activities stayed the same for each experiment. The experiment scenes are shown in Figure 2: (a) stand-to-squat; (b) squat-to-stand; (c) stand-to-sit; (d) sit-to-stand; (e) walking, stair-ascending and stair-descending; (f) trip-fall. Typical EMG signals recorded from a typical subject are shown in Figure 3, illustrating the raw sEMG signals of the eight typical activities used in this paper. The sEMG signals burst only at the posture transitions. During a posture transition, the sEMG signals have obvious ups and downs, and the magnitude of some of the transitions rises up to 7 mV.
The trip falls have a relatively obvious change in most channels. Squat-to-sit and sit-to-squat had similar EMG with a high magnitude in Channel 1. Others, such as stair-descending and walking, can hardly be recognized from the raw signals. Each activity has its own sEMG patterns in the four channels of signals, reflecting the difference in the signal patterns of the four muscles on the lower limb. Class Separability Results Figure 4 illustrates the class separability index values (refer to Section 3.2) of the 15 types of EMG feature sets (Table 2 and Section 3.1) for each of the three subjects. A high class separability score means that the corresponding feature data are highly separable. The WAMP feature is ranked as the top one, followed by MA, EWT, and EWP. The IAV, ARCU, and FE features are the worst ones. Figure 4 also shows that there is no significant individual difference in the separability values of the EMG features. The average Spearman's rank correlation coefficient between subjects is almost 0.98, indicating that the ranking of feature types hardly varies among individuals. This result indicates that the main results of our study remain intact even for a small number of subjects with a large number of samples for each individual subject. Besides, there is no considerable difference in the inherent characteristics of EMG signals between subjects with disabilities and subjects without disabilities [3]. Calculation complexity is an important factor in online applications, particularly in fall detection. The complexity is normally reflected in the calculation time. In the current study, it was measured on a PC (Intel Core i5-4210U at a 2.4-GHz CPU with 4 GB RAM) using MATLAB R2013. Figure 5 shows the class separability values and calculation times, averaged across subjects, for each individual feature type.
The results illustrate that although some feature types have good separability, some of them, such as the EWT and EWP, which obtain better separability values than many other features, have a very long calculation time. Considering this issue, this paper introduces a performance index to trade off the separability value against the calculation time, in which a denotes the normalized separability of each extraction method, t denotes the inverse normalized calculation time, and w (ranging from 0 to 1) denotes the weight of the computational cost in the algorithm. The fastest feature and the best separability are equalized to 100, and the rest are quantified by their respective proportions. According to this index, a higher value means a better feature. Since separability always plays the more important role, the range of w was selected from 0 to 0.5 with an interval of 0.05. Figure 6 illustrates that, regardless of calculation time (w = 0), WAMP, MA, EWT, and EWP all performed well. As expected, the index of AR was most affected by the calculation time. When the weight of the calculation time is greater than 0.3, AR becomes better than the others, except for WAMP. The figure also shows that, taking time into account, WAMP still ranked first among the feature types.
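The trade-off index can be computed as sketched below. The extracted text does not preserve the exact combination formula, so treating the index as the weighted sum w·t + (1 − w)·a is an assumption consistent with the stated roles of a, t, and w; the normalization to a 0-100 scale follows the text.

```python
import numpy as np

def performance_index(separability, calc_time_ms, w):
    """Trade-off between separability and calculation time for each feature type.

    separability : array of class separability values (one per feature type)
    calc_time_ms : array of calculation times in ms (one per feature type)
    w            : weight of the computational cost, 0 <= w <= 1

    Assumes index = w * t + (1 - w) * a, where a is the separability normalized so the
    best feature scores 100 and t is the inverse-normalized time (fastest scores 100).
    """
    sep = np.asarray(separability, dtype=float)
    tim = np.asarray(calc_time_ms, dtype=float)
    a = 100.0 * sep / sep.max()
    t = 100.0 * tim.min() / tim
    return w * t + (1.0 - w) * a

# Example with made-up numbers for three feature types (e.g., WAMP, EWP, IAV).
print(performance_index([4.2, 3.9, 1.1], [12.0, 250.0, 5.0], w=0.3))
```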
Activities Recognition Results The feature dataset of the seven ADLs and falls was individually input into the five types of classifiers (Table 2). All simulations were performed using fivefold cross validation: the dataset was divided into five equal-sized subsets, one subset was chosen as testing data and the remaining subsets as training data, and this process was repeated for each subset, resulting in five results. The results averaged over the five sub-datasets are shown in Figure 7. The GK-SVM obtains the highest recognition rate for all feature types except ZC, and also the lowest calculation time for all feature types. The best feature is WAMP for all classifiers except the FMMNN, for which EWP is the best feature. Figure 7 also illustrates that the GK-SVM has the minimum variance for all feature types. The average recognition accuracy rates can be seen in Figure 7 and Table 3. The calculation time, which comprises the time of feature extraction and the time of pattern recognition, is shown in Table 3. The GK-SVM using the PE feature ranked first at 97.35%. The classifier GK-SVM with the IAV, MF, AR, FE, and PE features delivered recognition rates above 95%, which is satisfactory for activity monitoring, and with all features it resulted in calculation times below 50 ms.
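A minimal scikit-learn sketch of the evaluation protocol described above, i.e., a Gaussian-kernel SVM scored with fivefold cross-validation on a per-epoch feature matrix, is shown below. The study's own evaluation used MATLAB; the feature matrix, labels, and hyperparameters here are placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature matrix: 240 epochs x 4 values (one feature per channel),
# with balanced labels 0..7 for the seven ADLs plus the trip-fall class.
rng = np.random.default_rng(0)
y = np.tile(np.arange(8), 30)
X = rng.normal(size=(240, 4)) + 0.5 * y[:, None]

# Gaussian (RBF) kernel SVM, evaluated with fivefold cross-validation as in the study.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print("mean recognition rate: %.2f%%" % (100 * scores.mean()))
```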
Fall Detection Results All seven ADL activities are classified as type one and the trip-fall as type two. The recognition method used is the same as that of Section 3.3. Figure 9 and Table 4 show the sensitivity (SEN, falls identified correctly), the specificity (SPE, ADLs identified correctly), and the calculation time. The highest sensitivity is 99.35%, which is achieved by two classifiers: the FMMNN with the WAMP, HIST, AR, ZCWT, and FE features, and the FDA with the VAR, WAMP, MA, and FE features. All classifiers with all feature types have a good specificity of above 95%, except the FDA with all feature types and the FCM with the ZC, MA, ZCWT, and PE features. The FCM is the worst in terms of both sensitivity and specificity. Besides, the performance of the LDA was poor in specificity. It is worth noting that the false positives were mainly caused by stand-to-sit (Channel 1 of Figure 3), whose signal is similar to that of trip-falls. Although the FMMNN classifier, regardless of feature type, has the best performance in both sensitivity and specificity, its calculation time is the longest. The classifier GK-FDA with the WAMP feature delivered a high sensitivity (98.70%) and specificity (98.59%) with a short calculation time (65.586 ms), which is satisfactory for pre-impact fall detection.
The WAMP and MA, which are two feature types with high recognition rates in ADLs-recognition, were chosen and their sensitivity, specificity, the total accuracy recognition rates were analyzed. The results are shown in Figure 10. The WAMP, FMMNN, and GK-FDA features performed well in all three rates. The GK-SVM has a high specificity but its sensitivity drops to 87.5%, meaning that it cannot recognize tip-falls perfectly. Although the GK-SVM for the MA feature has a specificity of 90%, but its sensitivity is even lower than the others'. It indicates that the GK-SVM method is not an appropriate choice for this process. (a) Figure 10. Sensitivity, Specificity, and recognition accurate rate of two specific feature types. (a) Sensitivity, Specificity, and whole recognition rate of WAMP. (b) Sensitivity, Specificity, and Recognition Rate of MA. Discussion The purpose of this study was to find an optimal combination of sEMG feature types and classification methods, thereby providing a practical guideline for designing a sEMG based activity monitoring and fall detection system. The results of this study demonstrate that a system with four sEMG sensors was sufficient for achieving the sensitivity and specificity results in the 90% range, with less than 10% misclassifications. This study provides evidence that automated monitoring of a variety of activities of daily living and fall detection can be achieved using a wireless and wearable surface EMG sensor system with feature extraction and pattern recognition techniques. There are several basic limitations associated with this study that need further development to provide a wearable sEMG-based activity monitoring and fall detection system for the elderly or patients that can be used under real-world conditions. The authors of the current research study monitored "scripted" daily activities and simulated trip falls performed by healthy volunteers in a laboratory environment. It is not known how well this algorithm would work in a real scenario with unscripted free-form activities performed by elderly or real patients. In this study, each individual was trained separately and required multiple repetitions of the task to obtain sufficient data for training and testing purposes. However, it is not clear how different it is from identifying activities Figure 10. Sensitivity, Specificity, and recognition accurate rate of two specific feature types. (a) Sensitivity, Specificity, and whole recognition rate of WAMP. (b) Sensitivity, Specificity, and Recognition Rate of MA. Discussion The purpose of this study was to find an optimal combination of sEMG feature types and classification methods, thereby providing a practical guideline for designing a sEMG based activity monitoring and fall detection system. The results of this study demonstrate that a system with four sEMG sensors was sufficient for achieving the sensitivity and specificity results in the 90% range, with less than 10% misclassifications. This study provides evidence that automated monitoring of a variety of activities of daily living and fall detection can be achieved using a wireless and wearable surface EMG sensor system with feature extraction and pattern recognition techniques. There are several basic limitations associated with this study that need further development to provide a wearable sEMG-based activity monitoring and fall detection system for the elderly or patients that can be used under real-world conditions. 
The authors of the current research study monitored "scripted" daily activities and simulated trip falls performed by healthy volunteers in a laboratory environment. It is not known how well this algorithm would work in a real scenario with unscripted free-form activities performed by elderly or real patients. In this study, each individual was trained separately and required multiple repetitions of the task to obtain sufficient data for training and testing purposes. However, it is not clear how different it is from identifying activities Discussion The purpose of this study was to find an optimal combination of sEMG feature types and classification methods, thereby providing a practical guideline for designing a sEMG based activity monitoring and fall detection system. The results of this study demonstrate that a system with four sEMG sensors was sufficient for achieving the sensitivity and specificity results in the 90% range, with less than 10% misclassifications. This study provides evidence that automated monitoring of a variety of activities of daily living and fall detection can be achieved using a wireless and wearable surface EMG sensor system with feature extraction and pattern recognition techniques. There are several basic limitations associated with this study that need further development to provide a wearable sEMG-based activity monitoring and fall detection system for the elderly or patients that can be used under real-world conditions. The authors of the current research study monitored "scripted" daily activities and simulated trip falls performed by healthy volunteers in a laboratory environment. It is not known how well this algorithm would work in a real scenario with unscripted free-form activities performed by elderly or real patients. In this study, each individual was trained separately and required multiple repetitions of the task to obtain sufficient data for training and testing purposes. However, it is not clear how different it is from identifying activities Discussion The purpose of this study was to find an optimal combination of sEMG feature types and classification methods, thereby providing a practical guideline for designing a sEMG based activity monitoring and fall detection system. The results of this study demonstrate that a system with four sEMG sensors was sufficient for achieving the sensitivity and specificity results in the 90% range, with less than 10% misclassifications. This study provides evidence that automated monitoring of a variety of activities of daily living and fall detection can be achieved using a wireless and wearable surface EMG sensor system with feature extraction and pattern recognition techniques. There are several basic limitations associated with this study that need further development to provide a wearable sEMG-based activity monitoring and fall detection system for the elderly or patients that can be used under real-world conditions. The authors of the current research study monitored "scripted" daily activities and simulated trip falls performed by healthy volunteers in a laboratory environment. It is not known how well this algorithm would work in a real scenario with unscripted free-form activities performed by elderly or real patients. In this study, each individual was trained separately and required multiple repetitions of the task to obtain sufficient data for training and testing purposes. However, it is not clear how different it is from identifying activities 99. 
35 Discussion The purpose of this study was to find an optimal combination of sEMG feature types and classification methods, thereby providing a practical guideline for designing a sEMG based activity monitoring and fall detection system. The results of this study demonstrate that a system with four sEMG sensors was sufficient for achieving the sensitivity and specificity results in the 90% range, with less than 10% misclassifications. This study provides evidence that automated monitoring of a variety of activities of daily living and fall detection can be achieved using a wireless and wearable surface EMG sensor system with feature extraction and pattern recognition techniques. There are several basic limitations associated with this study that need further development to provide a wearable sEMG-based activity monitoring and fall detection system for the elderly or patients that can be used under real-world conditions. The authors of the current research study monitored "scripted" daily activities and simulated trip falls performed by healthy volunteers in a laboratory environment. It is not known how well this algorithm would work in a real scenario with unscripted free-form activities performed by elderly or real patients. In this study, each individual was trained separately and required multiple repetitions of the task to obtain sufficient data for training and testing purposes. However, it is not clear how different it is from identifying activities Discussion The purpose of this study was to find an optimal combination of sEMG feature types and classification methods, thereby providing a practical guideline for designing a sEMG based activity monitoring and fall detection system. The results of this study demonstrate that a system with four sEMG sensors was sufficient for achieving the sensitivity and specificity results in the 90% range, with less than 10% misclassifications. This study provides evidence that automated monitoring of a variety of activities of daily living and fall detection can be achieved using a wireless and wearable surface EMG sensor system with feature extraction and pattern recognition techniques. There are several basic limitations associated with this study that need further development to provide a wearable sEMG-based activity monitoring and fall detection system for the elderly or patients that can be used under real-world conditions. The authors of the current research study monitored "scripted" daily activities and simulated trip falls performed by healthy volunteers in a laboratory environment. It is not known how well this algorithm would work in a real scenario with unscripted free-form activities performed by elderly or real patients. In this study, each individual was trained separately and required multiple repetitions of the task to obtain sufficient data for training and testing purposes. However, it is not clear how different it is from identifying activities and falls in real life with a larger task set. These conditions need to be investigated before using these algorithms for clinical purposes. Conclusions Based on the accuracy of recognition rate and computational complexity, a series of methods of surface EMG feature extraction and recognition were estimated for activity monitoring and fall detection. The statistical analysis of fifteen types of EMG feature sets determined that the WAMP, MA, EWT, and EWP features are highly separable and the IAV, VAR, and AR features have the shortest calculation time. 
The statistical analysis of class separability against calculation time identified the WAMP, AR, and MA features as the most advantageous. For activity monitoring, WAMP is the best feature, the GK-SVM is the best classifier, and the combination of the GK-SVM with the PE feature is the best overall pairing of EMG feature type and classification method. For fall detection, the FMMNN classifier has the best sensitivity and specificity but the longest calculation time. Since the detection time for pre-impact fall detection must be less than 300 ms [1], the best choice is the GK-FDA classifier with the WAMP feature, whose sensitivity and specificity are both above 98% and whose calculation time is 65 ms. The system would further reduce recognition errors if combined with mechanical sensors such as accelerometers or gyroscopes, helping to achieve both a high recognition rate and high reliability for activity monitoring and fall detection systems. These findings also have important implications for other EMG-signal-based devices, such as clinical assistive devices, walking-assist devices, and robotic or prosthetic devices.
Improving the Performance of Actor-Based Programs Using a New Actor to Thread Association Technique Finding the most efficient policy for the association of objects with threads is one of the main challenges in the deployment of concurrently executing objects, including actors. For actor-based programs, libraries, frameworks, and languages provide fine-tuning facilities for associating actors with threads. In practice, programmers use the default policy for the initial deployment of actors and then replace it with other policies based on the runtime behavior of the actors. Although this ad-hoc approach is widely used, it is tedious and time-consuming for large-scale applications. To reduce the time consumption of the ad-hoc approach, a set of heuristics has been proposed with the aim of balancing the computations of actors across threads. This technique yields performance improvements; however, it relies on static analysis of source code and actor behavior and therefore results in inappropriate configurations of systems in distributed environments. In this paper, we illustrate conditions under which the proposed heuristics do not work well and propose a new approach, based on the runtime profiles of actors, for a better association of actors with threads. We also show how this approach can be extended to a fully self-adaptive approach and illustrate its applicability using a set of case studies. Introduction The actor model is a well-known model for the development of highly available and high-performance applications. It is built on universal primitives of concurrent computation [1], called actors. Actors are distributed, autonomous objects that interact by asynchronous message passing. The model was originally introduced by Hewitt [2] as an agent-based language and was later developed by Agha [1] into a mathematical model of concurrent computation. Each actor provides a number of services, and other actors send messages to it to invoke those services. Messages are put in the mailbox of the receiver; the receiver takes a message from its mailbox and executes the corresponding service. A number of programming languages and libraries have been developed for actor-based programming, e.g., Act [3] and Roset [4], which are discontinued, and Erlang [5], Salsa [6], and Akka [7], which are actively supported languages and libraries. In the actor programming model, large-scale distributed applications are developed by spawning many actors that are distributed among computation nodes and work in parallel. With this approach, utilizing the CPUs of the different nodes is crucial and requires a careful mapping of actors to nodes and CPUs. Some actor-based programming languages, including Erlang [8] and Kilim [9], handle the scheduling of actors on different cores at runtime, using a shared pool of threads that are scheduled on CPUs in a round-robin fashion. However, in the majority of JVM-based actor languages, including Akka and Scala [10], it is the duty of the programmer to associate actors with threads. This way, a programmer has to associate actors with threads using the default mapping and then iteratively tune the mapping, which is very hard and sometimes impossible for large-scale applications. Recently, Upadhyaya et al. [11] proposed a set of heuristics for the association of actors with threads. To this end, they defined an Actor Characteristics Vector (cVector) for each actor to approximate its runtime behavior. The details of this approach are presented in Section 2.
Using cVectors, actors are associated with threads using one of the predefined thread-pool, pinned, or monitor policies. The main goal of this approach is to map actors to threads in a way that balances actor computational workloads and reduces communication overheads. The authors implemented the technique for Panini and achieved, on average, a 50% improvement in program running times over the default mappings [12]. Although this approach improves the CPU utilization of nodes significantly, it does not take the runtime behavior of systems into account. This limitation results in inefficiencies in the performance of actor systems, particularly when actors are distributed among different nodes. In this work, we consider both the number of actors spawned from a specific type and the runtime load of the system to propose a better thread-association policy. To this end, we propose a new lightweight technique for capturing the runtime behavior of actors (Section 3). We show how the characteristic vectors of actors have to be modified to make them suitable for representing runtime behavior. We also show how the newly proposed characteristic vector changes over time and how the thread policies of actors have to be adapted to these changes. We develop a set of case studies to illustrate the applicability of this work in Section 4. Static Association of Actors with Threads Actors, as loosely coupled parallel entities, have to be associated with threads in order to pick messages from their mailboxes and execute them. Dedicating one thread to each actor is the simplest approach; however, as actor-based applications usually spawn many actors, this approach does not scale. To resolve this limitation, actor libraries provide different policies that allow programmers to associate a shared thread with multiple actors. With this resolution, finding the appropriate policy for associating a thread with a group of (or a single) actor is the responsibility of the programmer. Generally, three types of actor-to-thread association policies are provided to cover the requirements of applications: the thread-pool, pinned, and monitor policies. The details of these policies are presented below. Policies for the Association of Actors with Threads The default and most widely used policy for thread-to-actor association is the thread-pool policy, which uses a thread pool with a limited number of threads for a set of actors. Usually, the number of actors is larger than the number of threads, and actors compete for threads. This policy works efficiently for actors that are not always busy, so a smaller number of threads can be shared among actors. Under the thread-pool policy there is no thread preemption while an actor is busy executing a message; an actor loses its associated thread only when it finishes serving a message. As the second alternative, the pinned policy dedicates an OS-level thread to an actor. This policy works efficiently for busy actors, since the overhead of frequently changing the thread associated with a pinned actor is eliminated. Finally, the monitor policy is used for actors that perform very light activities. Under the monitor policy, the thread associated with the sender of a message is reused by the receiver actor to serve the newly sent message. When serving the message is finished, the receiver gives the thread back to the sender.
Note that the thread associated with the sender actor can only be reused when both the sender and the receiver are deployed on the same node. These three policies are provided by different actor libraries under different names. Akka provides PinnedDispatcher, BalancingDispatcher, and CallingThreadDispatcher to realize the pinned, thread-pool, and monitor policies, respectively. Akka also provides a default dispatcher, which is a realization of the thread-pool policy configured with a set of general-purpose values. In contrast, the Erlang scheduler only provides the thread-pool policy. Kilim, as a provider of very lightweight Java actors, also offers only the thread-pool policy, which is implemented efficiently enough to handle thousands of actors. Using Characteristics Vectors of Actors In the only prior work that tries to propose appropriate policies for actors, Upadhyaya et al. [11] proposed a heuristic-based technique for setting actor policies (henceforth, the Static-Heuristic approach). In this approach, they defined the notion of an Actor Characteristics Vector (cVector) for each actor to approximate its runtime behavior. They use the Actor Communication Graph (ACG) of a system to generate cVectors. The vertices of the ACG are the actors of the system, and there is an edge between two vertices if and only if there is a possibility of a message being sent from the actor associated with the source vertex to the actor associated with the destination vertex. They also mark actors that have blocking I/O activities, actors that are computationally intensive, and actors that communicate heavily. As a result, cVectors of actors are created as defined below. Definition 1 (Characteristics Vectors). The set CV of characteristics vectors of actors is defined as CV = { ⟨blk, state, par, comm, cpu⟩ | blk ∈ {true, false} ∧ state ∈ {true, false} ∧ par ∈ {low, med, high} ∧ comm ∈ {low, med, high} ∧ cpu ∈ {low, high} }. For a given characteristic vector ⟨blk, state, par, comm, cpu⟩ of an actor ac, the interpretation of the terms is as follows: blk is true if ac exhibits blocking behavior; state is true if at least one of the state variables of ac is accessed by more than one of its methods; par is low if ac sends a synchronous message and waits for the result, high if ac sends asynchronous messages and does not require results, and med otherwise; comm is low if ac does not send messages to other actors, high if ac sends messages to more than one actor, and med otherwise; cpu is high if ac represents a computational workload, i.e., it has recursive calls, loops with unknown bounds, or high-cost library calls. Using this interpretation, the function CV : AC → CV maps a given actor to its corresponding cVector, where AC is the set of actors of the system. Note that [11] does not provide a precise guideline for detecting high-cost library calls or blocking behavior. To map a cVector to a thread policy, a function is defined in Definition 2. This heuristic states that a dedicated thread has to be associated with an actor (pinned policy) that has external blocking behavior; any other policy for such actors would block the executing thread and may lead to actor starvation or deadlocks. In addition, any actor that is non-blocking with high inherent parallelism, high communication, and high computation should be assigned the pinned policy.
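As a concrete illustration of these policies, the sketch below shows how the pinned and thread-pool policies can be expressed with Akka's standard dispatcher configuration; the dispatcher names and pool sizes are illustrative choices of ours, not values taken from [11] or [12].

```scala
// A minimal sketch, assuming Akka classic actors; dispatcher names are illustrative.
// application.conf:
//   pinned-dispatcher {
//     type = PinnedDispatcher
//     executor = "thread-pool-executor"
//   }
//   pool-dispatcher {
//     type = Dispatcher                       # thread-pool policy
//     executor = "fork-join-executor"
//     fork-join-executor { parallelism-max = 8 }
//   }
import akka.actor.{Actor, ActorSystem, Props}

class Worker extends Actor {
  def receive = { case _ => () } // serve the message
}

object Deployment extends App {
  val system = ActorSystem("demo")
  // Busy actor: an OS-level thread of its own (pinned policy).
  val busy = system.actorOf(Props[Worker]().withDispatcher("pinned-dispatcher"), "busy")
  // Ordinary actors: share a bounded pool of threads (thread-pool policy).
  val shared = system.actorOf(Props[Worker]().withDispatcher("pool-dispatcher"), "shared")
  // The monitor policy corresponds to Akka's CallingThreadDispatcher (from the
  // test kit), i.e. messages are processed on the sender's thread.
}
```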
Master actors, which delegate work to slave actors and often wait for the results, are also eligible for the pinned policy. Actors with low CPU consumption and low communication do not need special attention and are hence processed by the calling actor (the actor that sends the messages), i.e., the monitor policy. Actors with any other characteristic vector can share their associated threads, so the thread-pool policy is assigned to them. Note that in this mapping, being stateful or stateless does not matter. Runtime Association of Actors with Threads Although the Static-Heuristic approach for associating actors with threads results in performance improvements, it does not consider the runtime behavior of the system. As a consequence, both over-approximation and under-approximation of the system behavior are inevitable and cause inefficiencies at runtime. In the following, we illustrate this phenomenon and propose a runtime approach (henceforth, the Adaptive-Heuristic approach) to resolve it. In addition, we show that the thread-association policy is strongly influenced by the deployment strategy of the application and by the number of hosts of the actors. Hence, for an efficient thread-association policy, deployment strategies have to be taken into account. Redefinition of Actor Characteristics Through a number of experiments, we found that two terms of the cVector have to be redefined. With their current definitions, these two terms mislead heuristic actor-to-thread association approaches. The first is the term that indicates the level of communication among actors. As mentioned before, in the definition of [11] the value of this term in the cVector of an actor is set to high if the actor sends more than one message to other actors. However, sending a message is a very light operation that is not affected by thread policies. Instead, the level of communication has to be set to high for an actor that receives many messages: a large number of received messages implies a need for much future computational power, which is tightly related to thread policies. To make this difference clear, we use the hub-actor example from [11]. Hub actors are represented by either ⟨false, ∗, high, high, low⟩ or ⟨false, ∗, low/med, high, high⟩ (where ∗ denotes an arbitrary value of state), which shows that they have high communication characteristics. This is because the affinity actors (the actors with which a hub actor communicates most often) send messages to the hub actor, which is in contrast with the metric proposed in [11], i.e., that sending many messages from a hub actor to the others results in a high value for the communication level. The other case that results in high communication is receiving messages from actors that are deployed on other nodes. As we will show later, actors with high communication must not be mapped to the monitor policy, which is essential for high-performance processing of messages sent from actors hosted on other nodes. Note that the new definition addresses the runtime behavior of systems, so it cannot be used in the approach of [11]. The second term that has to be redefined is the needed computational power, captured by cpu. The needed computational power is a runtime metric that cannot be estimated effectively by static analysis. Note that this argument applies in particular to complex actor-based systems, where the needed computational power cannot be estimated by a quick look at the source code.
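To make the Static-Heuristic mapping of Definition 2 concrete, the following sketch encodes the cVector and the policy selection described above; it covers only the rules spelled out in the text, and the type and function names are our own.

```scala
// A minimal sketch of the Static-Heuristic cVector-to-policy mapping.
object Level extends Enumeration { val Low, Med, High = Value }

final case class CVector(
  blk: Boolean,        // external blocking behavior
  state: Boolean,      // shared state across methods (ignored by the mapping)
  par: Level.Value,    // inherent parallelism
  comm: Level.Value,   // communication level
  cpu: Level.Value     // computational workload (Low or High)
)

sealed trait Policy
case object Pinned     extends Policy
case object ThreadPool extends Policy
case object Monitor    extends Policy

def staticPolicy(v: CVector): Policy = v match {
  // Blocking actors always get their own thread.
  case CVector(true, _, _, _, _)                             => Pinned
  // Non-blocking, highly parallel, communicating, CPU-heavy actors.
  case CVector(false, _, Level.High, Level.High, Level.High) => Pinned
  // Very light actors are served on the caller's thread.
  case CVector(false, _, _, Level.Low, Level.Low)            => Monitor
  // Everything else shares a pool of threads.
  case _                                                     => ThreadPool
}
```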
In the new definition, the value of cpu is related to the average processor time consumed by the actor. Note that the new definition sets the needed computational power per actor type, not per actor instance. In addition to modifying the definition of these two terms, we found that the lifetime of actors has a significant influence on their runtime behavior and has to be included in their cVectors. For example, with the Aggregator pattern [13], a task is split into several very simple subtasks that are delegated to newly instantiated actors. The newly instantiated actors complete their subtasks, send the results to the owner actor, and die. Regardless of the values of the other terms of their cVectors, these short-lived actors are very good candidates for the monitor policy: one thread is used to perform all of the simple subtasks, and the overhead of releasing and reclaiming threads for the subtasks is eliminated. Note that in this case we assume that all of the actors are deployed on the same computation node; delegating threads using the monitor policy is impossible when the sender and receiver actors are deployed on different computation nodes. Based on these changes, the runtime characteristics vector (rcVector) of an actor is defined as follows. Since we have no observations on the effect of being stateless or stateful, we eliminate this term from the runtime characteristics vector. For a given rcVector ⟨blk, par, comm, cpu, lt⟩ of an actor ac, the interpretation of blk and par is the same as in the original characteristics vector, and the other three terms are interpreted as follows: comm is low if the number of messages received by ac per unit of time is below the average over all actors, high if it is above the average, and med otherwise; cpu is high if the computational time needed per method of ac is above the average over all actors, or if ac receives messages from actors deployed on other computation nodes of the system, and low otherwise; lt is high if the lifetime of ac is longer than the average lifetime of all existing actors, and low otherwise. Using this interpretation, the function RCV : AC → RCV maps a given actor to its corresponding rcVector. To map an rcVector to a thread policy, a function is defined below. Towards a Self-Adaptive Approach Using the runtime mapping algorithm improves the performance of systems, but an open question remains: how must actors be configured at their instantiation point? Clearly, before a system runs, the communication level, CPU consumption, and lifetime of its actors are unknown, so finding the appropriate mapping is impossible for almost all actors (except those with blocking behavior). Therefore, a default thread policy must be assumed for all actors and changed during the execution of the system. This adaptation is crucial for making the runtime approach feasible. To this aim, we propose the adaptation algorithm presented in Figure 1. Actors initially use the thread-pool policy and change their thread policy upon detecting any permanent change in the communication level, CPU consumption, or lifetime terms of their rcVectors. The labels on the arrows in Figure 1 show which changes trigger each possible adaptation.
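As a sketch of how such rcVectors could be derived from lightweight runtime counters, the code below classifies each term relative to the averages over all actors, as described above; the profiling record and threshold logic are our own illustrative choices, not the authors' implementation.

```scala
// A minimal sketch of deriving rcVectors from runtime profiling data.
// The ActorProfile record and the averaging logic are illustrative only.
final case class ActorProfile(
  blocking: Boolean,         // external blocking behavior
  highParallelism: Boolean,  // simplified stand-in for the par term
  msgsPerSec: Double,        // received messages per unit of time
  cpuPerMsgMs: Double,       // average processor time per handled message
  lifetimeSec: Double,       // time since the actor was spawned
  remoteSenders: Boolean     // receives messages from actors on other nodes
)

final case class RcVector(blk: Boolean, par: Boolean,
                          comm: String, cpu: String, lt: String)

def rcVectors(profiles: Map[String, ActorProfile]): Map[String, RcVector] = {
  def avg(f: ActorProfile => Double) = profiles.values.map(f).sum / profiles.size
  val (avgMsgs, avgCpu, avgLife) = (avg(_.msgsPerSec), avg(_.cpuPerMsgMs), avg(_.lifetimeSec))

  profiles.map { case (name, p) =>
    val comm = if (p.msgsPerSec > avgMsgs) "high"
               else if (p.msgsPerSec < avgMsgs) "low" else "med"
    // cpu is also forced high when messages arrive from other nodes.
    val cpu  = if (p.cpuPerMsgMs > avgCpu || p.remoteSenders) "high" else "low"
    val lt   = if (p.lifetimeSec > avgLife) "high" else "low"
    name -> RcVector(p.blocking, p.highParallelism, comm, cpu, lt)
  }
}
```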
For example, "CPU +" label on arrow between thread-pool and pinned shows that for actors which thread-pool policy increasing the value of CPU results in changing the policy to pinned. Performing this adaptation, after some amount of time the system meets its high-performance steady state. In addition to resolving the initial mapping of actors to thread policies, the adaptation policy helps in resolving inefficiencies, caused by changes in the load profile of systems (e.g. changes in the number of clients, the operational servers, etc.). Runtime changes in the load profile of a system my change the characteristics of an actor during the time. So, some adaptation may needed after such changes to find the new high-performance steady state. The same argument is valid for actors migration, i.e. changing host nodes of actors. Based on the proposed mapping algorithm, actors migration significantly influences association of monitor policy with actors. Experimental Results To illustrate the applicability of this work we prepare some case studies and show how using the Adaptive-Heuristic approach improves the performance of systems. The presented case studies are partitioned in two parts. The first part contains a number of models which are proposed in [9]. The second part contains an example which shows runtime changes in load profile and the number of actors. We illustrate how the new approach adapts policies based on the encountered changes. Models Without Runtime Adaptation We use some of the models proposed in [9] and develop a simulator for pure actor programs. For the design of the simulator we consider both multi-node and multi-processor environments. This way, a number of threads are spread among nodes and each node schedules its own threads using its associated processors. Using this simulator, the models are developed without need for dealing with the complexities of the real-world Java actor programming. In addition, having simulator, we run models in different infrastructure configurations and monitor pure impact of thread association policies to the runtime execution of models. In the following, we present an intuitive description and deployment diagram for each model. We also present a figure which compares the termination time of the model for three cases of using default thread-pool policy, the Static-Heuristic approach, and the Adaptive-Heuristic approach. The best approach has the smallest termination time, as it consumes the provided computation power better that the others. Request Dispatcher. We develop RequestDispatcher example, i.e. message routing among a set of senders and receivers. This model contains three different actors which are Sender, Receiver, and Dispatcher. Sender actors pass messages to the Receiver actors via Dispatcher. The actor model of RequestDispatcher is shown in Figure 2. As presented in [11], based on the characteristics vector of the actors, the Static-Heuristic approach maps Sender and Dispatcher actors to the thread-pool policy, and Receiver to the monitor policy. This mapping only works for single node deployment of actors as upon deploying Dispatcher and Receiver in different nodes, there is no way for sharing Dispatcher threads with receivers. In addition, heavy weighted receivers may block dispatchers and reduce the performance of the system. The Adaptive-Heuristic approach proposes changing the policy Dispatcher as the bottleneck of the model, has to be able available permanently; so, a thread has to be associated with it. 
Also, when the Receivers and the Dispatcher are deployed on different nodes, there is no need to change the policy of the receivers, as they do not reuse the thread associated with the Dispatcher. Varying the number of senders and receivers resulted in the completion times of the model shown in the figure below. Two-Level Hadoop YARN Scheduler. Hadoop is a framework for MapReduce, a programming model for generating and processing large data sets [14]. MapReduce has undergone a complete overhaul in its latest release, called MapReduce 2.0 (MRv2) or YARN [15]. The fundamental idea of YARN is to split the major functionalities of the framework into two modules: a global ResourceManager and a per-application ApplicationMaster. On a Hadoop cluster, there is a single resource manager, and for every job there is a single application master. In this example, we modeled a pipeline of two MapReduce clusters, depicted in Figure 4. Based on the characteristics vectors of the actors, the Static-Heuristic approach maps the ResourceManager actors to the pinned policy and the ApplicationMaster actors to the monitor policy. However, the workload of the second ResourceManager and its ApplicationMasters is shaped by the first ResourceManager and ApplicationMaster. The Adaptive-Heuristic approach proposes the pinned policy at the starting point for the first ResourceManager and changes it to thread-pool in some configurations. Based on the lightweight load of the second ResourceManager, the adaptive policy proposes the monitor policy for this actor. A comparison of the completion times of the model in different configurations is depicted in Figure 5. File Search. The document indexing and searching model [11] is the third case study that we developed. This model contains four different actors: FileCrawler, FileScanner, Indexer, and Searcher. The FileCrawler periodically visits directories whose paths are given at the start and sends a message to the FileScanner upon finding a newly modified file. To increase the variety in the number of actors in this model, we used only one crawler actor. The FileScanner processes the given file and asks one of the available Indexers to index it. The Indexer performs hash-based indexing and stores the extracted information. The Searcher actor serves the search requests sent by users. The actor model of FileSearch is shown in Figure 6(a). As presented in [11], based on the characteristics vectors of the actors, the Static-Heuristic approach maps the FileCrawler and Searcher actors to the pinned policy, the Indexer to the monitor policy, and the FileScanner to the thread-pool policy. As in the previous example, this mapping only works for a single-node deployment of the actors. The Adaptive-Heuristic approach proposes changing the policy of the Indexer to the pinned policy. Also, when the FileCrawler is deployed on the node that contains the FileScanner, it proposes changing the policy of the FileCrawler to the thread-pool policy, as there is no need to associate a dedicated thread with its periodic behavior. Experiments showed only a very slight improvement from using the new approach. Bang Model. The last model we developed is the Bang benchmark, which simulates many-to-one message passing. As shown in Figure 7(a), in this model there is one receiver and multiple senders that flood the receiver with messages. Based on the cVectors of the actors, the Static-Heuristic approach maps the receiver actor to the monitor policy and the senders to the thread-pool policy.
The results of [11] show that the Static-Heuristic approach improves the performance of the system compared to the default policy, but it does not provide the best mapping. Assume that these actors are deployed as shown in Figure 7(b). In this configuration, mapping the receiver to the monitor policy does not result in reusing the senders' threads, as the actors are deployed on two different machines. In this case, the receiver actor has to be mapped to the pinned policy to be able to process requests as soon as their corresponding messages are received, which is what the new approach does. However, experiments showed no difference between the Static-Heuristic approach and the Adaptive-Heuristic one (for the deployment of Figure 7(b)), as there is no thread interference between the senders and the receiver. A Model With Runtime Adaptation In the second part of the experimental results we present the model of a FilmService system, shown in Figure 8(a). In this example, clients want to stream a movie from film servers. A client spawns a FilmRequest actor to search for the movie on the servers. The FilmRequest actor sends messages to all of the servers, and the first server that can provide the movie spawns a Connection actor to start streaming. Besides, there are some Indexer actors that are responsible for indexing the movies on the servers to make searching for movies easier. In contrast to the aforementioned models, the load profile of the actors in the FilmService model may change over time. This change takes place through request migration when a server crashes. As soon as a crashed server is detected, the requests that were sent to that server are distributed among the other servers, and the status of the crashed server is changed to repairing. Servers come back into service after a repairing period. The crash times of servers are generated by a Poisson distribution, and we make sure that there is no point at which all of the servers are in the repairing state. Preparing the cVectors of the FilmService actors for the Static-Heuristic approach results in mapping all actors to the thread-pool policy, except the Client actor. However, the efficient mapping of the Server actors depends strongly on the load profile of the system. Assume that the actors are deployed as shown in Figure 8(b). In this configuration, when there are many film requests, the Server actors need to be mapped to the pinned policy to be able to process them. This mapping reduces the performance of the Indexer on that node but increases the performance of the system in general. When the number of requests decreases, the mapping has to be changed back to thread-pool to allow the Indexer to use more CPUs. To illustrate the applicability of the Adaptive-Heuristic approach, we simulated the model in different configurations. In this case, instead of computing the average completion time of tasks, we simulated the model for a long period of time and measured the utilization of the CPUs under the Adaptive-Heuristic, Static-Heuristic, and default-policy approaches. Since the computation power needed by all tasks is the same for the three approaches, the best policy is the one that utilizes the CPUs most fully; better CPU utilization means completing more tasks in a given period of time. As shown in Figure 9, the Adaptive-Heuristic approach is the only one that shows acceptable behavior when the number of processors is increased.
The figure also shows that the inefficiency of the Static-Heuristic approach grows as the number of CPUs increases, whereas the Adaptive-Heuristic approach incurs only a very small performance penalty. We also examined the behavior of the model in the presence of many servers, depicted in Figure 10. This figure shows that increasing the number of servers results in a slight decrease in the performance of the system when the Static-Heuristic approach is used, in contrast to the slight increase observed for the default policy and the Adaptive-Heuristic approach. Conclusion, Discussion and Future Works In this paper, we proposed a new approach for associating threads with the actors of a system. Applying the previously proposed approaches results in performance improvements; however, they rely on static analysis of source code and actor behavior. In practice, relying on static analysis of the code and ignoring the runtime load profile of the application results in inappropriate configurations of systems in distributed environments. In contrast, the self-adaptive approach tunes the mapping of the actors based on information captured during the execution of the system; the needed information can be gathered using very lightweight processes. Comparing the new approach with the old one on a set of case studies showed that the self-adaptive approach improves the performance of the systems in most cases. Although we showed that the proposed approach results in performance improvements, the results were computed using an actor simulation engine and may change in a real deployment of the actor models. As future work, we therefore plan to develop the adaptation engine in Akka. We also plan to develop more examples to show the effectiveness of the approach in different configurations.
The block copolymer shuffle in size exclusion chromatography: the intrinsic problem with using elugrams to determine chain extension success Is an increase in hydrodynamic volume always expected in block copolymer synthesis? Why SEC is sometimes not the last word. Introduction Owing to its accessibility and ease of operation, size exclusion chromatography (SEC), also known as gel permeation chromatography (GPC), has long become the preferred method for polymer molecular weight determination. SEC provides the full molecular weight distribution of a sample, from which molecular weight averages and dispersity (Đ) can be derived. Molecular weight determination by SEC makes use of the proportionality between the hydrodynamic volume of a (co-)polymer chain in a solvent and the product of the intrinsic viscosity [η] and the molecular weight M: V_h ∝ [η]M (1). Here, [η] captures to what extent the polymer chain is swollen by the solvent. For practical reasons, Equation (1) is usually written as HV = [η]M, where the quantity HV is a measure of the hydrodynamic volume, given in units of volume per mole, as [η] is typically expressed as an inverse mass concentration (volume per gram). The elution volume of a given polymer sample or, equivalently, its retention time on the SEC column, can be accurately correlated to HV as long as the eluent dissolves the polymer well and only negligibly swells or collapses the stationary phase. This correlation forms the basis for what is often referred to as universal calibration, and researchers have grown so used to it that its applicability is sometimes no longer questioned. Since SEC separates strictly based on hydrodynamic volume, it is an indirect method for determining molecular weight. Hence, estimates for the latter can only be obtained if the system has been properly calibrated and if the relation between [η] and M is known. Although readily applicable to most homopolymers, molecular weight determination of block copolymers (BCPs) using SEC is generally much less straightforward than typically considered. Besides issues also known to occur for SEC on homopolymers, e.g., those associated with aggregation 2 or chemical dissimilarity with the calibration standard, [3][4][5] new challenges appear due to the mere fact that the different blocks themselves have different properties and solvation. Consequently, their interaction with both the mobile and stationary phase, as well as their resemblance to the calibration standard, is by definition different. The typical assumption of universal calibration thus frequently does not hold true, despite researchers using this assumption almost without exception in their data interpretation. A step forward is presented by viscometry, which allows for chemically different analytes and standards by measuring the intrinsic viscosity directly rather than estimating it from calibrations. The relation between [η] and M is typically expressed via the empirical Mark-Houwink-Sakurada (MHS) equation, which uses the parameters K and a of an unknown analyte for correlation: [η] = KM^a. However, even if the MHS parameters are known precisely (which by itself is a challenge and is usually only true for a selected number of homopolymers), an important prerequisite for its use is that the analyte has a similar solvation as the calibration standard. Hence, again related to inhomogeneity in both chain length and composition, the outcome of this exercise should be treated with caution when analyzing a BCP.
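To make the role of universal calibration explicit, the relations below combine Equation (1) with the MHS equation; this is the standard textbook conversion from a calibration-standard molecular weight to an analyte molecular weight at equal elution volume, shown here only as a worked illustration (subscripts 1 and 2 denote the calibration standard and the analyte, respectively).

```latex
% Universal calibration: at a given elution volume, species with equal
% hydrodynamic volume co-elute, i.e. [\eta]_1 M_1 = [\eta]_2 M_2.
% Substituting the MHS relation [\eta]_i = K_i M_i^{a_i} gives
\begin{align}
  K_1 M_1^{1+a_1} &= K_2 M_2^{1+a_2} \\
  \log M_2 &= \frac{1}{1+a_2}\log\frac{K_1}{K_2}
             + \frac{1+a_1}{1+a_2}\log M_1
\end{align}
% A PS-equivalent molecular weight M_1 can therefore be converted to the
% analyte molecular weight M_2 only if K_2 and a_2 are known and the analyte
% is solvated similarly to the standard.
```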
As mentioned, the direct measurement of the intrinsic viscosity somewhat improves the situation in that the MHS parameters need not be known. Employing a viscometer, HV can be calibrated directly using standards, and measurement of the intrinsic viscosity of the analyte sample then allows direct determination of its molecular weight. Yet, the inaccuracy associated with the prerequisite of similar solution behavior of the chains remains. A way out of this dilemma is the use of laser light scattering (LS) in a triple-detection SEC system, which allows the molecular weight to be measured directly without relying on knowledge of HV. Regardless, even when combined with an absolute detection technique such as multi-angle laser light scattering (MALLS), SEC analysis of BCPs exhibits reliability issues, since the eluted composition, and hence the scattering contrast (dn/dc), is time-dependent due to the dispersity in block length. In principle, dn/dc would need to be known for each exact block composition to yield an exact measurement of molecular weight, which obviously is not feasible. Concentration determination is hence hampered, which also invalidates light scattering results to a certain degree. Furthermore, LS-based methods usually involve not only high cost, but also require more skilled operators and in-depth analysis, since scattering data are less straightforward to analyze properly compared to the simpler RI, UV, and viscometry detectors. It is fair to assume that for routine analysis, most labs do not perform light scattering, or do not have access to such detectors in the first place. Hence, in the analysis of block copolymers (and other polymer architectures for that matter), researchers typically make basic assumptions and refrain from even determining molecular weights. A typical recommendation is to plot SEC elugrams rather than molecular weight distributions to avoid any ambiguity with calibrations and, generally, to avoid the dilemmas described above. A general assumption thereby is that lower elution volumes correlate with larger molecular weights. Hence, it is widely accepted that a shift to lower elution volumes constitutes proof of a successful extension in a block copolymerization, and that the lack thereof is synonymous with a failed reaction. This does not necessarily hold true though, as it still assumes a universal-type calibration/correlation to be valid for the investigated polymers. As we discuss here, this can be a dangerous and misleading assumption. This mini-review is written partly in tutorial form and gives a concise and accessible discussion of the challenges, approaches, and solutions associated with interpreting molecular weight distributions of BCPs from SEC. We intend to provide practical handles and concepts, rather than presenting a lengthy and exhaustive literature survey, to allow practitioners to derive meaningful conclusions. We start our discussion by describing examples of BCPs for which SEC actually works quite well, namely BCPs comprising flexible blocks that only weakly interact. After that, we discuss a range of examples of BCPs that behave far less ideally because the blocks have quite different chemical or physical properties and hence pose significant challenges for the use of SEC for molecular weight determination, or even for merely proving block extension.
It is particularly worthwhile reviewing such "inhomogeneous" BCPs, since their internal cooperativity or multi-functionality is of strong interest for advanced applications, such as smart coatings, 6-9 drug delivery, 10-13 bioimaging, 14-16 optoelectronics 17-20 and energy harvesting and storage. [21][22][23][24][25] We subdivide these nonclassical BCPs into four "behavioural categories" and conclude with a brief survey of methods to improve the accuracy of molecular weight estimates. These techniques may either be used in conjunction with SEC or replace it altogether as a more viable option. It should be noted that none of the proposed methods represent truly novel concepts; yet SEC is too often used in the literature, without further questioning, either as proof or as dismissal of the success of a reaction, especially in BCP formation, and we wish to highlight the problems arising from such conclusions. Discussion Classical systems: dual detector approach for BCP molecular weight analysis The simplest cases are presented by (block-)copolymers whose monomers form flexible chains and exhibit only weak interactions, i.e., lacking ionic charges, strong dipoles, and/or H-bonding capability. In effect, for block copolymers falling into this category, the covalent link between the different blocks forms the only relevant "hetero-contact". This enables an interpretation as connected homopolymers 26 and therefore allows a molecular weight determination based on a mass-fraction-weighted interpolation of the homopolymer SEC calibration curves, as typically applied: 27 log M_c(V_e) = w_A log M_A(V_e) + w_B log M_B(V_e), where M_c is the molecular weight of the copolymer corresponding to an elution volume V_e, and w_A and w_B are the weight fractions of the comonomers A and B. In this case, the hydrodynamic volume of the block copolymer is trivially related to the hydrodynamic volumes of the separate homopolymers, 26 i.e., assuming no additional contributions stemming from the interaction between the monomers of the different blocks. This is true, for example, for BCPs that consist of blocks from the same or a similar monomer family. For any BCP obeying this prerequisite, the molecular weight can be established reasonably well with SEC alone. 28 However, even for such straightforward systems, one cannot rely on just a single detection method, i.e., typically UV-vis absorption or refractive index (RI) detection. The reason is that for copolymers that are not perfectly alternating, not only the molecular weight but also the composition is distributed. In such cases, use can be made of a dual detection method involving, for example, both the UV-vis and RI detectors. 29 The latter records copolymer concentration, whereas the former records composition owing to its high chemical specificity. 30 Prior to analyzing the copolymer, solutions of the corresponding homopolymers are eluted through both channels in order to predetermine the detector responses, which are comonomer- and instrument-dependent. 31,32 The eluted masses m_A and m_B (e.g., expressed in grams) of comonomers A and B can then be obtained for each "slice" of the SEC chromatogram by solving the following system of equations for the measured intensities: 31 I_UV = k_UV,A m_A + k_UV,B m_B and I_RI = k_RI,A m_A + k_RI,B m_B, with the k values the detector responses. Subsequently, the mole fractions x_A and x_B of the comonomers are determined using the known monomer molecular weights, upon which an estimate M′ for the copolymer molecular weight in each slice is obtained from the predetermined number-average molecular weight of the precursor block. Optionally, one may generate a calibration curve by fitting log(M′) versus elution volume.
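A compact way to summarize the slice-by-slice bookkeeping described above is given below; the symbols follow the generic two-detector treatment (response factors k, eluted masses m), and the explicit matrix inversion is our own illustrative rendering rather than the exact notation of refs. 29-32.

```latex
% For each chromatogram slice, two detector signals are modeled as linear
% combinations of the eluted comonomer masses:
\begin{equation}
  \begin{pmatrix} I_{\mathrm{UV}} \\ I_{\mathrm{RI}} \end{pmatrix}
  =
  \begin{pmatrix}
    k_{\mathrm{UV},A} & k_{\mathrm{UV},B} \\
    k_{\mathrm{RI},A} & k_{\mathrm{RI},B}
  \end{pmatrix}
  \begin{pmatrix} m_A \\ m_B \end{pmatrix}
  \;\Rightarrow\;
  \begin{pmatrix} m_A \\ m_B \end{pmatrix}
  =
  \begin{pmatrix}
    k_{\mathrm{UV},A} & k_{\mathrm{UV},B} \\
    k_{\mathrm{RI},A} & k_{\mathrm{RI},B}
  \end{pmatrix}^{-1}
  \begin{pmatrix} I_{\mathrm{UV}} \\ I_{\mathrm{RI}} \end{pmatrix}
\end{equation}
% The slice composition then follows from the monomer molar masses M_A, M_B:
\begin{equation}
  x_A = \frac{m_A/M_A}{m_A/M_A + m_B/M_B}, \qquad x_B = 1 - x_A
\end{equation}
```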
Since the eluted masses are known, the polymer concentrations are easily obtained to finally yield the number- and weight-average molecular weights of the copolymer sample. As an example, Figure 1 shows this procedure for the BCP poly(styrene)-b-poly(Z-L-lysine) (PS-b-PZL). 31 We note that there is a wide variety of other detector combinations for this dual detection method 26,33 (for example, infrared (IR) 34 and evaporative light scattering (ELSD) 35 detectors), and their applicability depends on the properties of the monomers. Dual detector SEC has been shown to give very reasonable estimates for a range of "classical" BCPs, such as polyisobutylene-b-polystyrene (PIB-co-PS), 32 polystyrene-co-poly(butylene terephthalate) (PS-co-PBD), 36 polystyrene-b-poly(methyl methacrylate) (PS-b-PMMA), 30,31 polystyrene-b-polybutadiene (PS-b-PB) 31 and polystyrene-b-polylysine (PS-b-PZL). 31 However, as mentioned above, if the hydrodynamic behaviour of the block copolymer differs from the "summed" behaviour of the homopolymers, even dual detector SEC cannot produce reliable results. Early identified examples are cases wherein one or both components carry a bulky side group. 32 Besides, a trivial limitation of dual detector SEC is that it does not distinguish between homopolymer contaminants and the copolymer. 32 Examples of "non-classical" block copolymers giving deviant SEC analyses In case SEC is applied to BCPs for which the combination of the blocks gives a behavior that differs strongly from what one would expect based on the linear combination expressed by Equation (3), observations can be counterintuitive. In these situations, care should be taken when using SEC for molecular weight determination or, as we shall see, even for just collecting qualitative evidence for chain extension or successful synthesis of a BCP. Since SEC analysis is based on the general assumption that the retention time decreases as the hydrodynamic radius of the polymer becomes larger, counterintuitive results are obtained if, during coupling or growth of the second block, the polymer coil contracts, collapses, or exhibits an increased interaction with the stationary phase. Below, we discuss a number of representative examples (overview in Table 1) of "non-classical" functional BCPs that behave deviantly in SEC inasmuch as an apparent increase in retention time is observed with increasing molecular weight. Rod-coil transition The first set of examples concerns rod-coil BCPs in which PEG blocks of increasing length are attached to an OHPV homopolymer (Table 1, entries 2-4). 37 The GPC traces for these polymers have been reproduced in Figure 2. Although the retention time consistently decreases with increasing PEG length, an initial increase is observed upon converting the OHPV homopolymer (entry 1) into OHPV-b-(PEG)12. In other words, the (apparent) hydrodynamic volume of the BCP is smaller than that of the precursor. The authors have not explicitly discussed this observation, as it is not the focus of their study. It may, however, portray a difficulty associated with molecular weight determination of rod-coil BCPs, a trend that is consistently observable with conjugated polymers. The underlying reason for this discrepancy might in part be related to the fact that the molecular weight of the OHPV homopolymer is likely overestimated (~50% compared to the value from NMR and MALDI) due to its high rigidity. For a given molecular weight, the associated hydrodynamic volume of poly(p-phenylenevinylene) is hence high in comparison to a more flexible polymer used for SEC calibration (i.e., usually polystyrene). 41
This is expressed by the disagreement between the SEC molecular weight average and the estimate obtained by NMR (entry 1). Upon connecting the flexible PEG block, the hydrodynamic properties become more reminiscent of those of the PS calibration standard, leading to agreement between the SEC and NMR estimates, but perhaps resulting in an apparent reduction in hydrodynamic volume compared to the OHPV homopolymer. An increase in retention time due to a higher affinity of the PEG residues for the stationary phase is not likely in view of the apolar column material used in this work37 (crosslinked styrene/divinylbenzene matrix). We note that an increase in retention time when connecting or growing a flexible block to/from a stiff homopolymer is certainly not always observed,42-44 and we speculate that it may depend on the length of the latter and/or on whether the stiff block is truly in the rod-like limit.

Low solubility of one block

To show that the situation concerning rod-coil BCPs is anything but straightforward, we proceed with an example for which the decrease in hydrodynamic volume was only observed upon further extension of the flexible block, rather than upon its initial attachment. This example concerns the linear semiconducting BCP poly[2,7-(9,9-dihexylfluorene)]-b-poly(2,2,3,3,4,4,4-heptafluorobutyl methacrylate) (PF-b-PHFBMA).38 A series with varying PHFBMA block length (Table 1, entries 5-9) was synthesized by atom transfer radical polymerization (ATRP) using a 2-bromoisobutyryl-endcapped polyfluorene (PF-Br) as a macroinitiator for the polymerization of heptafluorobutyl methacrylate. In contrast to the previous example, the retention time does decrease when converting the PF homopolymer into the BCP (compare Table 1, entries 5 and 6, as well as the SEC traces in Figure 3), which is intuitive. However, an unexpected increase in retention time is observed upon further growth of the PHFBMA block (Table 1, entries 7-9, and Figure 3). As a result, the SEC estimate of the molecular weight follows the opposite trend compared to the value obtained from NMR (see Table 1). The authors do not ascribe the anomaly to the stiff-flexible nature of the BCPs, but rather to the poor solubility of the semifluorinated block in the SEC solvent (THF), resulting in an overall collapse of the BCP at elevated degrees of polymerization.

Interactions between blocks

Similar counterintuitive changes in SEC retention times upon block extension, though likely due to yet another mechanism, have been reported by Thomas et al. during the characterization of a range of tri- and multiblock copolymers based on the temperature-responsive polymer poly(N-isopropylacrylamide) (PNIPAM), prepared via reversible deactivation radical polymerization (RDRP) techniques (entries 10-12 in Table 1).39 For the ABA-type triblock copolymer PNIPAM-b-poly(poly(ethylene glycol) methyl ether acrylate)-b-PNIPAM (PNIPAM-b-PPEGA-b-PNIPAM), a higher retention time was observed in comparison to the PNIPAM homopolymer (see red and green traces in Figure 4), with an associated ~40% apparent reduction in molecular weight, from 23,000 g mol-1 to 14,000 g mol-1. This work39 does not contain comparative values from, for instance, NMR or mass spectrometry (MS). The authors point out that both PNIPAM and PPEGA are known to be strongly susceptible towards hydrogen bonding (although the latter can only accept and not donate H-bonds).
They speculate that mutual H-bonding between the different polymer constituents is responsible for contracting the coil upon block extension, leading to a concomitant reduction in the hydrodynamic volume and an increase in retention time.

Interactions between one block and the stationary phase

A third example of a BCP exhibiting an increase in SEC retention time with increasing block length is the ABA-type tri-BCP with poly(4-vinylpyridine) (P4VP) and poly(ether imide) (PEI) as A and B blocks, respectively, as reported by Liu and coworkers (Table 1, entries 13-16).40 These BCPs, based on the engineering plastic PEI, are of interest for mechanically robust and temperature-stable mesoporous polymer membranes. The elugrams obtained using an RI detector (Figure 5) show that an increase in P4VP block length from 29 to 85 monomeric units results in an increase in retention time, which is counterintuitive. Hence, deriving the molecular weight based on a standard PS calibration results in an apparent decrease in the molecular weight, whereas the opposite trend is observed for the estimates obtained from NMR or from absolute measurement based on GPC-MALLS (solid curves in Figure 5). Furthermore, Figure 5 shows that besides an increase in retention time with P4VP block length, the elugrams become increasingly skewed towards the low molecular weight range. Given the fact that PEI and P4VP are flexible polymers, well soluble in the SEC solvent (THF)45,46 and both incapable of donating hydrogen bonds, the reason for the increase in retention time must be different from the examples discussed above. Indeed, according to the authors, their data carry the signature of (too) strong attractive interaction between the BCP and the stationary phase of the GPC column. Although the authors do not suggest which block is responsible for the effect, the fact that the observed trend relates to the length of the P4VP block seems to identify the culprit. According to the authors, the observation that the molecular weight fraction determined by GPC-MALLS (solid lines in Figure 5) shows a "U-shaped" time dependence is characteristic of a strong interaction between the polymer analyte and the GPC stationary phase. Unfortunately, the particular column material used in this work is not mentioned. We finally note that, although the interpretation by the authors is certainly plausible, interpreting results generated using only the RI detector, or in combination with MALLS, requires some care. The reason is that, during elution of the BCP, compositional changes associated with the block length distributions are not accounted for. As mentioned in the introduction, if the refractive index of the solution, on which both methods rely, varies strongly with time, the shape of the elugrams, as well as the MALLS molecular weight estimates, may be strongly impacted.

Alternative methods for molecular weight determination

The cases discussed above represent a small but, in our view, representative selection of examples from the literature reporting anomalous behavior of functional BCPs during SEC analysis. Although the overview is not exhaustive, it probably covers the most important reasons for which single- or even dual-detection SEC methods may yield counterintuitive results: i) rod-coil BCPs, ii) blocks exhibiting a (strong) difference in compatibility with the carrier solvent, iii) strong mutual interactions between blocks and iv) strong interaction of one of the blocks with the stationary phase.

Improved SEC analysis

Depending on the type of BCP, column material and solvent, the use of an elevated temperature during SEC may be beneficial. A rise in temperature generally reduces the strength of attractive forces, whether block-block, solvent-block or between the BCP analyte and the stationary phase, and may therefore help mitigate the challenges associated with categories ii), iii) and iv). However, the boiling point of the SEC solvent and the stability of the column material often limit the accessible temperature range, as do limitations of the instrumentation used; in practice, most SEC systems are operated below 50 °C. In view of the second example, we note that an elevated temperature is to be avoided if one of the blocks, or the BCP as a whole, exhibits a lower critical solution temperature (LCST). Alternatively, one may use a different solvent47 or a solvent mixture48 to improve the accommodation of both blocks in the mobile phase. In order to predict an optimal solvent composition for a particular chemical structure, one may employ Hansen solubility theory,49 as sketched below.
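As a rough illustration of such a prediction, the sketch below mixes two solvents by a linear volume-fraction rule, computes the Hansen distance of the blend to each block and selects the composition for which the worse-solvated block is accommodated best (relative energy difference, RED, as small as possible). All Hansen parameters and interaction radii shown are illustrative placeholders rather than recommended values for any specific polymer or solvent.

```python
import numpy as np

# Hansen parameters (dispersion, polar, H-bonding) in MPa^0.5.
# All values below are illustrative placeholders.
solvent_1 = np.array([16.8, 5.7, 8.0])          # e.g. a THF-like solvent
solvent_2 = np.array([17.8, 3.1, 5.7])          # e.g. a chloroform-like solvent
block_A   = (np.array([18.5, 4.5, 2.9]), 5.0)   # (delta vector, interaction radius R0)
block_B   = (np.array([15.5, 9.0, 9.5]), 6.0)

def hansen_distance(d_solvent, d_polymer):
    """Hansen distance Ra between a solvent (blend) and a polymer."""
    dd, dp, dh = d_polymer - d_solvent
    return np.sqrt(4 * dd**2 + dp**2 + dh**2)

def worst_RED(phi1):
    """Largest relative energy difference (RED = Ra/R0) of the two blocks for a
    blend with volume fraction phi1 of solvent 1 (linear mixing rule)."""
    d_mix = phi1 * solvent_1 + (1 - phi1) * solvent_2
    return max(hansen_distance(d_mix, d) / r0 for d, r0 in (block_A, block_B))

# Scan blend compositions; RED < 1 means a block is predicted to be soluble.
phis = np.linspace(0.0, 1.0, 101)
best = min(phis, key=worst_RED)
print(f"Best blend: {best:.2f} solvent 1 / {1 - best:.2f} solvent 2, "
      f"worst RED = {worst_RED(best):.2f}")
```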
Besides the mobile-phase composition, the use of additives has also been shown to improve SEC analysis, in particular if the analyte exhibits a strong interaction with the stationary phase (category iv). Depending on the (expected) type of interaction, these may for instance be acidic50 or basic51 and/or ionic species, such as inorganic52,53 and organic54 salts. Furthermore, we note that the shape and modality of the SEC elugrams can provide information beyond merely proving a successful BCP synthesis. Deconvolution may allow the identification of impurities such as homopolymer contamination55 as well as side products, e.g. from branching reactions.56 The fact that such impurities can significantly impact the microphase-separated morphology of a BCP57 emphasizes the importance of an extended evaluation of the modality of SEC elugrams.

If simple remedies such as a change in temperature or mobile-phase composition are ineffective or out of scope, an alternative or additional method is required to determine the molecular weight more accurately. Indeed, most of the examples discussed above present estimates obtained with an alternative approach, such as NMR end-group analysis or MS. One should realize, though, that where most methods only yield a single value, e.g. the number- or weight-average molecular weight, GPC has the essential advantage of being one of the few techniques capable of mapping the full distribution. Hence, besides focusing on independent alternative approaches, the remainder of this section also presents methods that can be used in conjunction with SEC to obtain more reliable molecular weight estimates and/or compositional mapping of a BCP sample. Although each method has its own advantages, we will critically discuss disadvantages as well and, where appropriate, assess to what extent the method in question circumvents the anomalies discussed in the previous section.

GPC-MALLS

As discussed above in relation to the last example of the previous section, using SEC in conjunction with MALLS ("GPC-MALLS") allows for an absolute determination of the full molecular weight distribution, without the need for a mass calibration procedure. In this method, the fractions eluted from the SEC column are analyzed in real time by the MALLS detector, and the molecular weight of each fraction is determined from the absolute intensity and angular dependence of the scattered light, which also yields the radius of gyration of the polymer.58
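To illustrate how the per-slice light-scattering data translate into molecular weights, the sketch below evaluates the standard static light-scattering working equation in the low-angle limit, M_i ≈ R_i(0)/(K* c_i) with K* = 4π²n₀²(dn/dc)²/(N_A λ₀⁴), for a handful of slices. The optical constants, the dn/dc value and the data arrays are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

# Illustrative optical constants (a THF-like eluent, 658 nm laser).
n0    = 1.405          # refractive index of the solvent
dn_dc = 0.185          # refractive index increment of the polymer, mL/g
lam   = 658e-7         # vacuum wavelength of the laser, cm
N_A   = 6.022e23       # Avogadro's number, 1/mol

# Optical constant K* of the static light-scattering working equation.
K_star = 4 * np.pi**2 * n0**2 * dn_dc**2 / (N_A * lam**4)   # mol cm^2 g^-2

# Per-slice data: concentration from the RI detector (g/mL) and the excess
# Rayleigh ratio extrapolated to zero angle (1/cm). Purely illustrative.
c_slice = np.array([0.5e-4, 1.2e-4, 1.5e-4, 0.8e-4])
R0      = np.array([0.9e-6, 2.6e-6, 2.9e-6, 1.1e-6])

# Low-angle limit: form factor P(theta) ~ 1, second virial term neglected.
M_slice = R0 / (K_star * c_slice)                 # g/mol per slice

Mw = (c_slice * M_slice).sum() / c_slice.sum()    # weight-average molecular weight
Mn = c_slice.sum() / (c_slice / M_slice).sum()    # number-average molecular weight
print(f"Mn ≈ {Mn:,.0f} g/mol, Mw ≈ {Mw:,.0f} g/mol")
```

Because each slice is treated as monodisperse and compositionally uniform, a dn/dc that drifts along the elugram, as for a BCP with a block-length distribution, biases M_i directly; this is the caveat returned to below.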
This method is hence principally suited to address issues associated with categories i)-iii). The amount of analyte dissolved in the stock solution needs to be determined accurately in order to account for the change in refractive index with concentration (dn/dc), which is obtained by on-line measurement using the RI detector. The method assumes each fraction to be monodisperse. For completeness, when used in the absence of SEC, light scattering yields the weight-average molecular weight (Mw). Since the MALLS measurement is independent of the retention time on the GPC column, some of the issues discussed in the previous section are in principle avoided. It should be noted, though, that for conjugated polymers, as discussed above, MALLS is inherently associated with an error, as the incident laser light will cause not only scattering but potentially also fluorescence. This again can negatively impact the MALLS signals and lead to misinterpretation of the data. Illustratively, Zhao et al. showed that despite exhibiting similar SEC retention times, GPC-MALLS reveals the expected difference in molecular weight between a poly(oligo(ethylene glycol) monomethyl ether methacrylate) (POEGMA) homopolymer and its equivalent BCP, obtained via a click reaction of the POEGMA block with an azide-functionalized polyfluorene (see Figure 6).59 Although GPC-MALLS is arguably one of the most powerful tools for molecular weight characterization, we emphasize again that it assumes the sample to be compositionally homogeneous, which imparts a risk when analyzing a BCP that is distributed with respect to block length (as practically all BCPs are)60 and whose individual blocks have significantly different refractive indices.

SEC-HPLC

A sophisticated and certainly not commonly applied alternative to dual-detection SEC for mapping inhomogeneity in both chain length and composition is the combination of SEC with high performance liquid chromatography (HPLC).26,61 In this so-called "two-dimensional" chromatography system, SEC discriminates by size and HPLC by polarity under critical solution conditions (that is, conditions under which the HPLC separation is sensitive to polarity differences but not to molecular weight). In other words, in this method deliberate use is made of the fact that different blocks have a different interaction with the stationary phase of the HPLC column. In this respect, the calibration of the latter is based on the elution volumes of the separate homopolymers. SEC-HPLC also allows for discrimination based on polymer topology, i.e. separation of branched versus linear architectures, as well as discrimination between BCP products and homopolymer contaminants. As an example, Figure 7 displays the result of a SEC-HPLC analysis of a very inhomogeneous sample based on polystyrene-b-polybutadiene (PS-b-PBD), containing both linear and branched structures, as well as a distribution in PBD block length. SEC-HPLC identified up to 16 different species in this particular sample.61 Again, as the full analysis is based on an additional parameter or material property (polarity in this case), SEC retention time anomalies can be accounted for, depending on their underlying reason. In view of the challenges discussed above, the example given in Figure 7 represents a system of relatively "low risk": the "classical" BCP PS-b-PBD comprises two flexible apolar blocks, both incapable of exhibiting strong interactions.
Nevertheless, even in case a BCP analyte exhibits non-straightforward behavior in SEC, the combination with HPLC certainly seems very useful to confirm a successful block extension, coupling or grafting.

1H-NMR end group analysis

A popular and readily accessible method for determining a polymer molecular weight average is end group analysis by means of 1H-NMR spectroscopy. The quotient of the integrals of backbone-related protons and those of the end groups, together with the monomer molecular weight, gives an estimate of the number-average molecular weight (Mn). As shown in the examples discussed above, this direct method, for which no calibration is required, is also applicable to block copolymers and in principle applies to all categories mentioned above. A disadvantage compared to SEC, though, is that end group analysis does not yield the full molecular weight distribution. Furthermore, upon block extension the slower molecular tumbling shortens the transverse relaxation times; the resulting signal broadening makes the integration limits ill defined, which increases the uncertainty in the molecular weight estimate. Importantly, end group analysis by definition does not discriminate between a BCP and the equivalent mixture of the separate blocks.

Diffusion-ordered spectroscopy

To prove the actual presence of the BCP by means of NMR, end group analysis is often extended with diffusion-ordered spectroscopy (DOSY).62 This technique discriminates between the various species in a mixture based on their self-diffusion rates in solution by applying the usual radiofrequency pulses, though in the presence of a magnetic-field gradient. The latter provides the spatial information that allows signals in the NMR spectrum to be correlated with the diffusivity measured for a specific component. Hence, in the case of a BCP, signals corresponding to different blocks but correlating to the same diffusivity prove that the blocks are indeed connected.63-66 A decrease in the diffusion coefficient in comparison to the homopolymers or macroinitiator allows for a qualitative evaluation of an increase in molecular weight. Loo et al. exploited DOSY analysis for mapping the actual molecular weight distribution.67 Similar to SEC, they propose a calibration procedure based on low-dispersity polymer standards of known molecular mass, allowing the diffusivities obtained for an analyte to be converted into molecular weights (see Figure 8). Since the self-diffusivity depends on whether, and if so to what extent, the overlap concentration is exceeded, we note that if a BCP collapses upon block extension, as discussed above, the DOSY method may yield inconsistencies similar to those observed for SEC, in particular when applied to BCPs falling into categories ii) and iii).
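Both NMR-based estimates described above reduce to simple arithmetic: an end-group Mn from integral ratios, and a DOSY calibration in which log D is fitted against log M for low-dispersity standards and then inverted for the analyte, assuming a power-law relation D = K·M^(-α). The sketch below illustrates both; all numerical values, and the power-law form itself, are illustrative assumptions rather than data from the cited work.

```python
import numpy as np

def endgroup_Mn(I_backbone, nH_backbone, I_end, nH_end, M_monomer, M_end=0.0):
    """Number-average molecular weight from 1H-NMR end-group analysis.

    Degree of polymerization = (I_backbone / nH_backbone) / (I_end / nH_end);
    M_end optionally adds the mass of the end group(s) themselves.
    """
    DP = (I_backbone / nH_backbone) / (I_end / nH_end)
    return DP * M_monomer + M_end

# Example: backbone integral 250 for 2 protons per repeat unit, end-group
# integral 3.1 for 3 protons, repeat-unit mass 100 g/mol (illustrative).
print(f"End-group Mn ≈ {endgroup_Mn(250, 2, 3.1, 3, 100.0):,.0f} g/mol")

def dosy_calibration(D_std, M_std):
    """Fit log10(D) = log10(K) - alpha*log10(M) to low-dispersity standards."""
    slope, intercept = np.polyfit(np.log10(M_std), np.log10(D_std), 1)
    return -slope, intercept          # (alpha, log10(K))

def dosy_M(D_analyte, alpha, logK):
    """Invert the power law to convert a measured diffusivity into M."""
    return 10 ** ((logK - np.log10(D_analyte)) / alpha)

# Illustrative standards (D in m^2/s) and an analyte diffusivity.
M_std = np.array([5e3, 2e4, 5e4, 1e5])
D_std = np.array([2.0e-10, 8.5e-11, 4.6e-11, 3.1e-11])
alpha, logK = dosy_calibration(D_std, M_std)
print(f"Analyte M ≈ {dosy_M(5.0e-11, alpha, logK):,.0f} g/mol")
```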
Elemental analysis and mass spectrometry

For completeness, elemental analysis (EA)4,68 and MS37,69,70 can also be used to determine BCP molecular weights and are applicable to all four categories mentioned above. EA is analogous to NMR end group analysis in the sense that it i) requires no calibration, ii) provides the number-average molecular weight and iii) is incapable of distinguishing between a BCP and the equivalent mixture of homopolymers. Prerequisites are that the elemental distribution across the copolymer is block-specific and that the molecular weight of the precursor block or macroinitiator is known in advance. In contrast to EA and NMR end-group analysis, MS is well capable of discriminating a BCP from homopolymer contaminants. In fact, MS is perhaps an "ideal" method for molecular weight determination, in the sense that it is principally capable of producing a molecular weight distribution without requiring calibration, or in some cases even without dissolving the polymer. An important prerequisite, however, is that fragmentation of the molecular ions before detection is suppressed. Hence, only soft ionization methods, such as matrix-assisted laser desorption ionization (MALDI) or electrospray ionization (ESI), come into question. Even so, quantitative interpretation of the mass spectrum is ambiguous, since large components are not as easily ionized and detected as small ones. The overall distributions obtained by mass spectrometry are therefore usually misleading, and MS can hence only serve as a method complementary to SEC rather than as a stand-alone technique for molecular weight determination.71 Additionally, copolymers typically show very complex mass spectra that are difficult to assign, even though automatic peak-picking methods have improved significantly in recent years. Moreover, if the molecular weight significantly exceeds 10 kg mol-1, MS becomes largely unsuitable.72,73

Conclusions

This paper highlights cases from recent literature concerning functional, "non-classical" BCPs that give inconsistent results when analyzed with size exclusion chromatography (SEC). Although SEC works relatively well for nonpolar, flexible BCPs, in particular when dual detector approaches are used, the method has regularly been shown to produce deviant results when the blocks have significantly different physical or chemical properties. This is noteworthy, since such inhomogeneous BCPs are relevant to a range of modern applications, either because of new functions arising from cooperativity between the blocks or because of the combination of multiple functionalities within the same polymer. Size exclusion data for such systems are easy to misinterpret, and it is important to stay alert to the various effects that can occur. The common denominator for the examples in this review is an increase in SEC retention time upon block extension, which produces erroneous estimates when relying on standard calibration. We identify a minimum of four scenarios that lead to this counterintuitive behavior: i) rod-coil BCPs, ii) the carrier solvent being a poor solvent for one of the blocks, iii) strong mutual interactions between the different blocks and iv) strong interaction between the stationary phase and one of the blocks. In such cases, extending or replacing SEC analysis with another detection technique is advisable, even for obtaining mere proof of block coupling or extension. We provide a brief but practical overview of alternative methods, but also discuss to what extent they actually circumvent the challenges identified above.

Conflicts of interest

There are no conflicts to declare.
7,303.8
2021-01-01T00:00:00.000
[ "Materials Science" ]
T cell and bacterial microbiota interaction at intestinal and skin epithelial interfaces Graphical Abstract Graphical Abstract Introduction Our bodies are in continuous contact with diverse microorganisms, with most exposure occurring at epithelial sites.The skin and digestive tract epithelia are two of the biggest surfaces that provide this interaction, with even further enlarged contact areas due to epithelial invaginations like villi and microvilli in the intestines.Each barrier site contains an array of different cell types, including epithelial cells directly interacting with the environment, stromal cells supporting tissues, innervating neurons controlling motility and pain, and diverse immune cells.Barrier site T cells reside within or in close interaction with their neighboring epithelial surfaces, which house microbiota populations in their most external layers and continuously present microbiota-derived antigens to T cells [1][2][3][4][5].In a healthy host, a continuous cross-talk among all barrier cells and the microbiota is necessary to achieve several concomitant goals: (i) protect against external pathogens, (ii) establish and maintain tolerance, and (iii) trigger repair programs to re-establish homeostasis when these barriers are compromised. Others have reviewed the skin or intestinal microbiota composition and how they affect health and tissue-resident immune cells [6][7][8][9].This review will focus on the impact of the skin and intestinal-localized bacterial microbiota on T-cell biology, highlighting both commonalities and differences across epithelia given their function, structure, and the microenvironmental conditions they provide for commensal microorganisms and their dwelling T cells.We will then discuss the types of T cells that inhabit the skin and intestinal epithelial barriers, covering how these environments can differentially activate resident T cells and enable them to be at the first line of defense while generating and maintaining tolerance to self and commensal bacterial microbiota. The skin and intestinal epithelia constitute unique niches for T cells Barrier tissue architecture enables local interaction of the microbiota and the immune system The intestinal barrier consists of a continuous epithelial monolayer with interspersed non-epithelial cells, sometimes with additional intra-epithelial immune cells in selected locations.Under this monolayer resides the lamina propria (LP), a connective tissue that houses different cell types, including many of the immune system.In contrast, the skin barrier comprises a stratified squamous epithelium from the external stratum corneum down to the basement membrane zone, which overlies the dermis and subcutaneous fat.The differing epithelial structures are associated with the development of different bacterial consortia and even unique structures like biofilms.These microbiotas integrate to different extents with the local barrier tissue, enabling the generation of specialized host tissue-microbiota systems.Immune and nonimmune cells at both barrier sites scrutinize antigens that can be presented in situ or taken to their respective draining lymph nodes (LN).The intestinal milieu displays several types of organized tertiary lymphoid structures (TLS) like Peyer's patches, isolated lymphoid follicles, and cryptopatches [10] (Fig. 
1).The skin provides less abundant, but also specialized, immune-interaction hubs like the hair follicles (HF) within pilosebaceous units [11] or ectopic lymphoid structures.These niches reproduce some features of their intestinal equivalents, especially in inflamed skin [12].Interestingly, the HF is an immune-privileged site, as it expresses Fas ligands but lacks Major Histocompatibility Complex (MHC) class II expression.Furthermore, HFs are protected from T-cell cytotoxicity while quiescent [13] and are surrounded by an extracellular matrix (ECM) containing immune-suppressive components [14].Apart from presenting antigens, many of these structural barrier cells can also impact T-cell function employing soluble molecules like cytokines, microbiota-driven metabolites, and lipid mediators of inflammation (Fig. 1).T cells can seed and recirculate back and forth from barrier tissues, with the establishment of T-cell residence marking the beginning of a dynamic orchestration of T-cell responses to commensals. T-cell residence and activation at barrier sites T-cell homing and/or residence in the intestine or skin is associated with the expression of chemokine receptors like CCR9 for the gut and CCR4/8 for the skin.While integrin α4β7 and CCR9 are a hallmark of intestinal habitation, skin homing is preferentially bestowed by CLA, α4β1 (LFA-1), and CCR4 [15,16].Importantly, the skin and intestinal barrier tissues share several of the ligands for these T-cell homing surface molecules, although to different degrees (Fig. 2A).The probability of T-cell residence is not unequivocally determined by these surface markers but is manifested as a spectrum pattern, and likely depends on additional signals yet to be discovered. T-cell activation at barrier sites is largely driven by antigens expressed locally or in neighboring lymphoid organs.Barrier sites contain TLS, which promotes adaptive immunity, but are canonically considered less specialized for antigen-driven priming of T cells than secondary lymphoid organs such as the spleen and draining LNs.Microbial antigens are presented to T-cell receptors (TCRs), while antigen-agnostic microberelated signals like microbial-associated molecular patterns (MAMPs) or bacterial metabolites are sensed through tolllike receptors (TLR), NOD-like receptors (NLR), and other microbial metabolite receptors (Fig. 2B).Barrier antigens are offered by antigen-presenting cells (APCs) like epithelial cells, B cells, or monocytic-derived cells, which can become highly specialized barrier antigen-presenting dendritic cells (DC) [4,[16][17][18].Other atypical APCs like goblet cells, M cells, or neutrophils are present in the intestines [17], while the skin contains distinct site-specific APCs like Langerhans cells (Fig. 1).Human skin Langerhans cells can express noncanonical antigen-presenting molecules like CD1a that can control inflammatory responses through induction of IL-17 and IL-22 in local T cells [19], and CD1a-autoreactive T cells can mediate cutaneous allergic and inflammatory responses [20].The immune cells that co-habit barrier sites and have a high degree of interaction with T cells must trigger and maintain responses to microbial threats while simultaneously collaborating in setting up a tolerogenic state to the local commensal microbiota. A variety of microbiota metabolites affect T cells, including short-chain fatty acids (SCFA), vitamin B2 metabolites, deoxycholic and lithocholic acids, secondary bile acids, and polyamines [21] (Fig. 
1).As one example, tryptophan derivatives can be sensed through the aryl hydrocarbon receptor (AhR).AhR is widely expressed in the intestines and has a prominent role during regeneration following tissue injury [22].Tryptophan can be metabolized to indole-3-lactic acid, a ligand for AhR, by the murine intestinal commensal L. reuteri, and this AhR signal is responsible for downregulation of the transcription factor ThPOK in T cells and consequent induction of CD4+CD8αα+ intraepithelial lymphocytes (IELs), although the mechanism for this induction is currently unclear [23].AhR also mediates type 3 innate lymphoid cell (ILC3) and Th17 induction in the murine intestine, and the absence of AhR signaling allows for increased susceptibility to enteric infection [24].The catabolism of tryptophan by intestinal microbiota to indole-3-ethanol, indole-3-pyruvate, and indole-3-aldehyde can prevent increased gut permeability in a mouse model of colitis [25].Likewise, AhR signaling of the skin microbiota in keratinocytes protects them from barrier damage and infection [26].The effects of barrier microbiota on host T cells can be due to the properties of single microbes or community-level properties of the whole barrier microbiota.This becomes especially relevant when considering metabolites generated only by specific microbiota species consortia that partially share metabolic pathways.Further discussion of how the microbiota can influence resident memory T-cell homing appears later in this review in the section entitled "Tissue resident memory T cells (TRMs): the local guys are ready for a fight".Collectively, the emerging data indicate that the gut and skin epithelia provide specialized microenvironments for bacterial microbiota and immune cell interaction and that niche-specific commensal microbiota influence T-cell function at these sites. Types of T cells in the skin and intestinal barriers: a mixture of conventional and unconventional flavors Diverse types of T cells are necessary at barrier sites to establish homeostasis and protection against pathogens in the presence of microbiota As immune-rich organs, barrier sites host T cells belonging to common ontogenically defined types like conventional TCRαβ+ single-positive CD4+ or CD8+ cells and TCRγδ+ cells.The skin and intestinal tissues are also especially rich in unconventional T cells that often exert innate-like and rapid-response properties.Such cells include TCRγδ+, NKT, and iNKT cells, which bear invariant or semi-invariant TCR chains [27].The intestines also house unique populations like mucosal-associated T cells (MAIT), TCRαβ+CD4−CD8−T cells, TCRαβ+CD8αα+, and CD4+CD8αα+ double-positive T cells (the latter being absent in gnotobiotic mice).Some are associated with specific TLS, but most intestinal resident T-cell pools diffusely populate the intestines and skin in the LP or dermis.In the intestine (especially in the small intestine), T cells are also present within the epithelial layer as IELs. 
In mice, the skin generally contains more CD8+ and TCRγδ+ T cells than the intestine.Conversely, the gut houses proportionally more CD4+ and unconventional T cells.This is likely due to the impact of their microenvironment, including the type of co-habiting APCs and the interactivity with the local microbiota.Much evidence has accumulated about the role of the microbiota in T-cell function, with many investigations linked to CD4+ IL17+ (Th17) and regulatory (Treg) T cells [21,28], discussed in further detail below.The relevance of the microbiota in CD8+ T-cell function is less known, but recent studies show that a consortium of 11 bacterial strains from healthy human donor feces can induce IFN-γ-producing CD8+ T cells in the intestine and confer protection against Listeria monocytogenes infection [29].Barrier epithelial cells or APCs are a predominant source of cytokines like IL1β, IL-18, IL-22, IL-23, GM-CSF, and TGFβ, which strongly influence T-cell responses (Fig. 2c).The intestinal and skin epithelia must balance homeostatic regulation to maintain structural harmony with highly proliferative and regenerative capacities when responding to injuries and insults that compromise the barrier.In this context, type-2 T-cell responses in the skin and gut are linked not only to defense against parasites but also to regenerative responses [30].Barrier IL-33, IL-25, and TSLP, which can drive a dominant part of type-2 responses [31], are also part of epithelial reparative programs.Additionally, T-cell surveillance and defense at barrier sites rely on cytotoxic and/or pro-inflammatory programs to battle pathogens, but when dysregulated, this can be a cause of tissue damage, as happens in skin and intestinal autoimmune and inflammatory conditions.Microbiotareactive T cells will hence influence their local barrier environments, including bystander T cells (largely driven by Th2, Th17, and Treg cells) that collectively can mount a memory response to a pathogen insult, trigger targeted or off-target cytotoxic responses, or contribute to tissue homeostasis or repair.Dissecting the mechanisms by which the microbiota elicits optimal T-cell responses to pathogens while maintaining tissue integrity and tolerance remains an active area for current and future investigation. 
Unconventional T cells can sense microbes indirectly by recognizing microbial metabolites [21].Microbial metabolites control different aspects of thymic development of T cells in barrier sites: In mice, thymic development of MAIT cells is governed by bacterial products captured by non-classical Ag-presenting major histocompatibility complex class I-related (MR1) molecules [32], and in the case of the skin, such induction is restricted to a specific early-life window in response to cutaneous riboflavin-synthesizing commensals [33].The microbiota does, however, retain the capacity to modulate barrier T cells in the adult.For example, the intestinal commensal Lactobacillus reuteri can induce CD4+ CD8αα+ IELs, which differentiate from IEL CD4 + T cells sensing tryptophan derivatives through the aryl hydrocarbon receptor (AhR) [23].Interestingly, while AhR expression is paramount for unconventional T cells, it is also critical for Th17 and Treg cells [22].Additionally, β-hexosaminidase, a conserved enzyme across commensals of the Bacteroidetes phylum, was recently identified as a driver of CD4+ IEL T-cell differentiation.Importantly, such β-hexosaminidase-reactive T cells were able to confer protection against intestinal inflammation in a mouse model of colitis [34].TCRγδ+ T cells constitute a different lineage that acts as first responders due to their restricted reactivities and quickness to engage and respond [35].In the intestine, TCRγδ+ T cells preferentially reside in the small intestine, and TCRγδ+ T cells that produce IL-17 are generally protective against pathogens [36].Specific intestinal commensals like Bacteroides species are required for maintaining mouse TCRγδ+ IL-1R1+ cells, which are a potential source of IL-17 that can be activated by IL-23 and IL-1 [37].In murine skin, basal keratinocytes expressing the oxysterol catabolic enzyme cholesterol 25-hydroxylase maintain TCRγδ+IL17+ cells, and oxysterols in the diet can increase psoriatic inflammation driven by these cells [38].Additionally, skin-resident γδ T cells can activate an IL-17A/ HIF-1A-dependent repair response in epithelial cells [39].Notably, TCRγδ cells are present in significant proportions in homeostatic conditions in murine skin but are less common in human skin, where they are also less likely to produce IL-17. 
Th17 cells: barrier guardians with high plasticity and autoimmune potential

T cells capable of IL-17 production (Th17) constitute a critical part of the immune system in barrier tissues [40-42]. IL-17 has five receptor members and six isoforms (A-F), with IL-17A being predominantly expressed and most studied. IL-17F (closest in sequence homology to IL-17A), but not IL-17A, can reduce the presence of Treg-inducing Clostridium cluster XIVa in colonic microbiota, and IL-17F drives intestinal pathology in a T-cell transfer mouse model of colitis [43]. In humans, serum IL-17A may reflect the active phase in ulcerative colitis, but not in Crohn's disease [44], but whether there is such a division of responses between IL-17A and IL-17F in the skin is still not established. IL-25 (IL-17E) can also be induced by microbiota commensals in intestinal epithelial cells. IL-25 can inhibit the expression of macrophage-derived IL-23 and hence limit the expansion of murine Th17 cells in the intestine [45]. Once Th17 responses have been triggered, specific feedback signals can be delivered to the microbiota: for example, IL-22 induces the production of anti-microbial peptides [46], and specific anti-microbial peptides secreted by neutrophils like cathelicidin can also cause the differentiation of Th17 cells in the presence of TGFβ1 [47], which illustrates the complexity of the circuitry involved in Th17 cell responses to bacteria. Many studies have demonstrated the critical relevance of IL-17 cytokines in protecting against microbial insults in the intestines and skin [37,48]. However, how the microbiota regulates the relative contribution of IL-17-producing T cells to infection, pathogenesis, or inflammatory resolution remains to be deciphered. This might be, to a great extent, due to the plastic nature of Th17 cells, which can switch or add cytokines to their repertoire depending on environmental triggers [49]. IL-6, IL-1β, TGFβ, and IL-23 are required for Th17 induction and maintenance to different extents [48,50]. However, in inflammatory environments, while IL-23 is not sufficient for Th17 induction on its own [8], it is a critical regulator of Th17 pathogenicity, which is largely mediated by induction of IFNγ and/or GM-CSF [51,52]. In addition, serum amyloid protein A, produced during acute inflammatory responses in the intestine, can also strongly imprint a pathogenic Th17 program that bypasses TGFβ requirements [53]. On the contrary, the production of IL-17 and IL-22 is critical for protection, especially against fungi or bacteria like C. rodentium [41]. In mice, epithelial MHCII can limit the accumulation of commensal-specific Th17 cells and generate protection against commensal-driven inflammation [54]. However, ILC3 cells also express MHCII and regulate T-cell responses independent of IL-17A, IL-22, or IL-23 [55]. MHCII expression in ILC3s can directly induce cell death
of activated commensal bacteria-specific CD4+ T cells, and microbiota-induced IL-23 can reversibly silence MHCII in ILC3s [3]. These examples also illustrate how cytokines directly induced in barrier cells by the microbiota contribute to the local milieu, increasing the complexity of the signal networks regulating specific T-cell outcomes. In the colon, T-cell reactivity to the intestinal microbiota might be altered in patients with inflammatory bowel disease (IBD), as the presence of T and Th17 cells is increased in intestinal tissue isolated from IBD patients (but not in healthy controls or in IBD patients' PBMCs) [56]. The plasticity of Th17 cells extends even to their capacity to suppress IL-17 while inducing IL-10 [57], or their colonization of Peyer's patches to become Tfh cells, boosting IgA production [58]. The type of bacterial antigen and the corresponding environmental cues are vital in guiding the specific Th17-cell response. Microbiota species-specific responses consistently promote IL-17 production in murine lamina propria CD4+ T cells [59]. Moreover, the TCR repertoire of Th17 cells, but not of other intestinal T cells, is shaped by segmented filamentous bacteria (SFB), illustrating how antigen specificity and T-cell effector functions can be matched [42]. Interestingly, the presence of SFB in the mouse microbiota can induce intestinal bacteria to produce retinoic acid (RA, another non-antigen signal critical for T-cell biology at barrier sites), conferring a protective function against Citrobacter rodentium infection [60]. Conversely, some Clostridia species can impair RA levels by suppressing the expression of retinol dehydrogenase 7 (Rdh7) directly in IECs, which can, in turn, reduce IL-22 antimicrobial responses and enhance resistance to colonization by S. typhimurium [61]. In mice, SFB and Escherichia coli (EHEC) adhesion to intestinal epithelial cells is responsible for specific Th17 responses [40]. Interestingly, the effect of SFB on Th17 cells is linked to several autoimmune manifestations. For example, compared to germ-free mice, mice that harbor SFB are more susceptible to experimental autoimmune encephalomyelitis (EAE) [62], and SFB colonization can also drive arthritis in a murine model [63]. By contrast, the presence of SFB is strongly correlated with a diabetes-free state in non-obese diabetic mice [64]. Such a contrast of pathogenic versus protective responses highlights how sensitive microbiota-T-cell interactions are to context. Such disparities can be due to mouse models that recapitulate disease pathogenesis very differently in different locations, but also to the plasticity of Th17 responses. Several other commensals are also capable of inducing Th17 cells: the human symbiont bacterial species Bifidobacterium adolescentis can alone induce Th17 cells in the murine intestine and exacerbate pathological features of autoimmune arthritis in the murine K/BxN model [65]. In this context, mucosal-adapted E. gallinarum translocates to secondary lymphoid organs and the liver and may promote maladaptive Th17 responses in tissues and blood [66], contributing to extraintestinal autoimmunity. Another illustration of how commensals can promote or prevent disease in a context-dependent manner is the case of H. hepaticus, which can trigger colitis in mice deficient in IL-10 [67] but can also induce RORγt+FOXP3+ regulatory T cells that selectively restrain pro-inflammatory Th17 cells [68].
As noted for the intestine, the skin microbiota can also have divergent outcomes for skin Th17 cell responses, even with signals from the same bacterium.For example, S. aureus toxin A induces Th17 responses [69].Additionally, skin inflammation caused by S. aureus induces the release of alarmins by keratinocytes that induce IL-17 production by T cells [70].In contrast, LTA from S. aureus can induce T-cell anergy [71].Protective Th17 responses are also paralleled in the skin, like during infection with C. acnes, generating a protective pathogen-specific cytotoxic Th17 program that includes the formation of neutrophil extracellular traps (NETs) [72].Another Th17-protective skin response is guided by S. epidermidis, one of the major facultative anaerobe components of the bacterial skin ecosystem, which induces CD8+IL-17A+ T cells that home to the epidermis.Once in the skin, these CD8+IL17A+ cells enhance innate barrier immunity and limit pathogen invasion, with skin DCs mediating this effect in the absence of inflammation [18].One apparent difference between intestinal and skin Th17 responses is that induction of Th17 in the skin depends on IL1R but is independent of IL23R [73].However, local IL-23 is required for the proliferation and retention of skin-resident memory Th17 cells [74].Interestingly, non-classical MHCI-restricted commensal-specific immune responses in the skin can drive antimicrobial T-cell responses together with tissue repair [75].Supporting this, Harrison et al. found that skin commensal T-cell reactivity drives a predominant IL-17 response during homeostasis.However, in the context of tissue injury, alarmins like IL-1, IL-28, and IL-33 trigger a type-2 program that is superimposed, leading to concomitant tissue repair that is T cell IL-13-dependent [76]. Many other lines of evidence illustrate the relevance of IL-17 in human barrier health: Mutations in genes involved in the IL-17/IL-23 axis have been identified as risk factors for psoriasis [77] ankylosing spondylitis [78], and IBD [79].Both IL-17 and IL-23 are targets for drugs used to treat inflammatory conditions, including skin and intestinal diseases [80].Interestingly, targeting IL-17A itself was initially developed for IBD but resulted in exacerbated symptoms, and the approach has since then proven safe and effective for treating psoriasis and psoriatic arthritis [80,81].This emphasizes the delicate balance and role of cytokines depending on context and indicates that each cytokine's protective versus pathogenic roles must be carefully considered for each tissue and disease.While it is unlikely that the differences in response in skin and intestinal tissues to current anti-IL17 family therapies are solely due to their local microbiota composition, the microbiota can strongly influence the IL-17/IL-23 axis and hence the outcome of T-cell responses during pathogenesis in barrier diseases.More research is therefore needed to reconcile and advance our knowledge to enable a safer and more effective transfer of immune-directed therapies into clinical practice, including a better understanding of how the local cytokine networks mediate crosstalk between microbiota and barrier tissues. 
Local Tregs: more than just suppression of undesired T-cell responses Barrier site environments represent a challenge to Treg cells, as Tregs must establish and maintain tolerance to commensals while facilitating standard tissue-specific T-cell effector functions.In the intestine, Tregs are fundamental in controlling pro-inflammatory responses to microbial communities and diet [28,[82][83][84][85][86][87][88], as well as modulating epithelial tissue repair programs [89].Tregs can derive from thymus precursors (tTregs) or be generated in the periphery (pTregs).Tolerance to gut microbes starts to be established during early life, but intestinal commensals are also critical to drive pTreg formation in the adult [1,90,91].Tolerance to skin commensal bacteria is preferentially established in neonatal life with unique waves of activated regulatory T cells entering the skin [85].Hence, most skin Treg cells belong to the tTreg-cell subset, while the intestine is richer in pTregs.Intestinal pTregs can be generated and function in the LP, the mLN, and TLS [90,92].Paradoxically, the microbiota is dispensable for the early stages of peripheral regulatory T-cell induction within murine mesenteric LN [93].In contrast, skin Tregs accumulate preferentially in the vicinity of hair follicles [94].Due to their respective environmental peculiarities discussed above, gut Tregs seem to have a bias for maintaining tolerance, while the role of skin Tregs is geared toward tissue regeneration. Tregs can be assigned to control canonical types of immune responses when they co-express FoxP3 with either Tbet, GATA3, or RORγt, exerting suppressive capacity on their mirroring effector T-cell types [21].In general, while GATA3+ Tregs develop in response to dietary antigens, RORγt+ Tregs are generated in response to the microbiota.In vivo studies reported that animals deficient in the FoxP3 intronic enhancer conserved nucleotide sequence 1 (CNS1) displayed microbiome dysbiosis and elevated type 2 immune response in the colon [95].Mouse colonic GATA3 + Treg cells can express ST2, the receptor of the IEC-derived alarmin IL-33, where it promotes Treg function and adaptation to inflammatory environments.Strikingly, IL-23 can inhibit IL-33 responsiveness in T cells, counteracting its Treg-promoting role [89].The lamina propria is also augmented for Treg cells with high expression of RORγt, with most of such cells displaying constitutive expression of cytotoxic T-lymphocyte-associated protein 4 (CTLA4) and inducible expression of IL-10, IL-35, and TGF-β [88,90,91,96].Several bacterial species can induce the generation of FoxP3 +RORγt+ T cells in gnotobiotic mice, and in murine models of intestinal autoimmunity driven by T-cell transfer, the absence of Th17-like Treg cells can exacerbate pathogenesis, indicating its protective function [96,97].The effects of the microbiota on Tregs can vary from the nichepopulation commensal level to single molecules expressed by specific phyla.For example, in mice, broad-spectrum antibiotics are more potent in the depletion of RORγt+ Treg cells than individual antibiotics, showing that communitylevel functions of microbes rather than individual microbial phyla have a larger effect on Treg cells [97].Additionally, other barrier cells fundamental for antimicrobial defense, like ILC3 cells, can promote Treg RORγt+ cells in a microbiotadependent manner [91].Specific Clostridia lacking prominent toxins and virulence factors found in humans, like clusters IV, XIVa, and XVIII, can induce 
differentiation and expansion of Tregs in murine transfer experiments [98].Clostridium XIVa populations are also increased in mouse models of colitis where there is a simultaneous expansion of colonic Tregs [43].At a molecular level, polysaccharide A from B. fragilis mediates the conversion of CD4+ T cells into Foxp3+ Treg cells that produce IL-10 during commensal colonization, with TLR2-mediated signaling being necessary for both induction of Tregs and IL-10 expression.Furthermore, polysaccharide A can inhibit colitis development in mouse models [83].Tregs are also capable of directly detecting other MAMPs, illustrated by how MYD88-deficient Tregs induce Th17 cell dysregulation and bacterial dysbiosis, which are linked to impaired Tfh generation in Peyer's patches and intestinal IgA [99].Such results also portray the continuous high degree of interconnectivity in the microbiota-immune cross-talk.RA also has a strong influence on Tregs [100], and CD161+ regulatory T cells responding to RA can support wound repair in intestinal mucosa [101].In the skin, sensing of RA through RXRα receptors in keratinocytes regulates hair cycling, proliferation, and differentiation [102].Tregs are also involved in skin epithelial stem cell differentiation [94], but the role of skin commensals in this context has not been addressed. Akin to Th17 cells, Tregs also display some degree of plasticity.Treg malleability can induce loss of suppressive potential, although the relative relevance of this phenomenon is still under debate.Interestingly, upon migration to the intestinal epithelium, murine colonic lamina propria Tregs can lose Foxp3 expression and convert to CD4+ IELs in a microbiotadependent manner [87].The conversion of Treg cells into pathogenic Th17 cells has been described in mouse models of psoriasis, and Th17-like Treg cells have been found in the skin of psoriatic patients [103].However, it remains to be determined whether Treg plasticity is a mechanism to restrict suppression in barrier milieus or if it can be causally linked to autoimmunity.TGFβ is required to generate Tregs and Th17 cells, and synergy between TGFβ and RA induces differentiation of CD4+Foxp3+ Treg cells [84].Some microbiota members, like C. butyricum, can induce TGFβ1 in lamina propria dendritic cells [104].In the skin, TGFβ controls migration of Langerhans cells [105], and activation of TGFβ by keratinocytes mediates antigen-specific circulating memory CD8+ T-cell responses [2].However, how the microbiota impacts the skin to modulate T-cell responses through TGFβ is still largely unknown. 
Sensing of metabolites produced by commensals can also have a significant impact on Tregs: recruitment and location of Helios+ Tregs in the gut is dependent on signals delivered through AhR on the intestinal epithelium [106], and microbiota-derived metabolites of bile acids can modulate colonic RORγt+ Treg cells [86]. SCFAs, which are fermented from dietary fiber by the gut microbiota, strongly influence intestinal Treg cell responses [107-110]. The large intestine shows a considerable enrichment for SCFAs, promoting pTreg cell responses in the gut. In specific-pathogen-free mice, administration of SCFAs increases the number of colonic Treg cells. Although reductions in intestinal inflammation and pathology were observed in mice and humans treated with SCFAs, it is unclear whether this is mediated preferentially by Treg cells [108]. Skin Treg cells might also be substantially affected by the local microbiota, which can ferment SCFAs from skin lipids and generate indoles with potent AhR activity [111]. Regulatory T cells can also mediate specific aspects of tissue reparative programs. In mice, IL-33 induces TGFβ1-mediated differentiation of Treg cells and promotes Treg-cell accumulation and maintenance in inflamed tissues, which they collaborate to repair [89]. Amphiregulin, a molecule involved in inflammatory repair responses [112], is found in murine Tregs, where it increases the suppressive function of a subset of Tregs through EGFR sensing [113]. In the skin, Tregs have been found to promote hair follicle regeneration by augmenting hair follicle stem cell proliferation and differentiation [94]. Murine skin Tregs also express high levels of GATA3, which skews them toward a T helper type 2 (Th2) phenotype and enables them to suppress skin fibrosis [114].

Tissue-resident memory T cells (TRMs): the local guys are ready for a fight

Most of our knowledge of T-cell biology originates from rodent studies and human T cells present in the blood, but a wave of recent research has reinvigorated the interest in T cells within tissues. Tissue-resident memory T cells (TRMs) are non-migratory T cells that persist in the absence of cognate antigen exposure, possess heightened effector functions, and protect against known pathogens in the tissues they inhabit [115,116]. TRMs provide local surveillance and can generate local and systemic responses on pathogen rechallenge [117,118]. Phenotypical markers of residence at the intestinal and skin barrier sites are well established, and while some of these molecules seem to be exclusive for a specific barrier site, others are shared to different degrees (Fig.
2A).While TRMs are defined by their location, TRMs also retain plasticity, and different programs (like Th1 or Th17) can be co-opted at barrier sites to suit the demands in response to re-challenge [119].To form pathogen-specific memory populations, circulating T cells must undergo a primary T-cell response that classically involves activation within secondary lymphoid organs.In contrast, TRM cells act as first responders and are quickly reactivated upon challenge with cognate antigens.This allows them, in certain instances, to egress their niche, enter the circulation, and even repopulate distant lymphoid structures [56,120].Prior studies have suggested that inflammatory signals generated by pathogenic invasion of the host can "license" memory responses to the microbiota [121].Importantly, unlike conventional TRM cells, resident TCRγδ+ T cells and CD8αα+TCRαβ+ cells bearing oligoclonal TCRs can recognize microbial products or host molecules released during stress and inflammatory responses [118].Barrier microbiotas can hence modulate TRM responses to pathogens in different ways: they can compete for microenvironmental nutrients and accordingly limit pathogenic expansion, but they can also contribute to an environment that can control exaggerated anti-pathogen responses.Additionally, barrier bystander microbiota-reactive immune cells (including non-memory T cells) can modulate TRM responses.However, the extent and consequences of the relationship among the local microbiota, different pathogens, and the corresponding T-cell responses to such pathogens are still largely unknown. A feature that many TRMs share is a core transcriptional program that relies heavily on Hobit (in mice), Blimp1, and Runx3 [113,122,123].At a transcriptional level, both skin and intestinal TRM cells have high expression of RORA and AHR, but they display a predominant Th17 functionality program in the small intestine, while skin TRMs show more diversity, with a Th2 and Th17 bias.In allogeneic hematopoietic stem cell transplantation patients, bona-fide skin TRMs display a unique transcriptional signature that includes LGALS3 as a long-term residency marker [124].Intestinal and skin TRMs show the highest degree of clonal expansion of any organ [113], arguing that T-cell pools from these barriers have accumulated and retained much of the organism's reactivity potential to any microbial insult over time.Microbiotareactive CD4+ T cells from healthy individuals range from 400 to 4000/million, are enriched in gut tissues, show a memory phenotype, and express several gut-homing chemokine receptors, indicating that the T-cell repertoire of healthy individuals is reactive to intestinal commensals [56].T cells in skin and intestinal tissues may have a differential dependence on cytokines, with IL-15 being critical for skin CD8+ TRMs maintenance and expansion, while IEL and CD4+TRMs rely more on IL-7 [125,126].Human intestinal CD4+ TRM express CD69 and CD161 and have a potent cytokine production potential, with a majority displaying a polyfunctional Th1-like phenotype [127].Like in the intestinal milieu, TRM cells can also proliferate locally upon antigen encounter without exiting the skin [74,124,128].Moreover, human skin CD4+ TRMs proliferate and secrete IL-17A, IL-22, IFNγ, and TNFβ when stimulated with heatkilled S. aureus or C. 
albicans, but not other skin commensals [128].Interestingly, a subset of skin TRMs can re-circulate in the blood, and patients with graft versus host disease demonstrate circulating Th2/Th17-biased TRMs [129]. Responsiveness to RA, TGFβ, and SCFAs seems especially relevant for TRMs at epithelial interfaces [113,121,130].RA generated by DCs from retinol is critical for inducing α4β7 expression of T cells and their recruitment to the gut [131].Intestinal commensal Clostridia can modulate RA levels by suppressing the expression of the retinol dehydrogenase enzyme in epithelial cells [61], and RA signals can regulate CD8+CD103+ TRM differentiation and commitment to intestinal location during T-cell priming in mLNs [132].Surprisingly, SFB-colonized mice contain intestinal commensals capable of directly generating RA [60].Interestingly, RA decreases expression of CCR4 and other skin-homing molecules on mouse T cells [131], suggesting a parallel restriction of homing to the skin if RA is provided during T-cell activation under certain circumstances.TGFβ is required to retain CD8+ TRMs in the intestines, concomitantly with induction of αEβ7/α1 and CD69 [133], and TRM heterogeneity is driven by TGFβ sourced in the skin [134].Moreover, T-cell autocrine TGFβ is one of the primary local sources in the epidermal niche, promoting antigen-specific TRM-cell accumulation and persistence [2], which can represent a positive feedback loop to amplify microbial-tolerance responses.Additionally, intestinal commensal-derived SCFAs can induce TGFβ in human intestinal epithelial cells [135], and eosinophils recruited during allergen or bacterial challenges in the intestine can also provide TGFβ to local Tregs [136].The differences in local cell populations in skin and intestinal tissues, especially cells that can serve as APCs, will also be determinant factors in establishing a T-cell resident program. Conclusion The commensal microbiome is a fundamental part of many physiological responses, strongly influencing immune cells inhabiting barrier sites that harbor rich and diverse microbial communities.Our microbiota is necessary to maintain barrier tissue homeostasis and establish proper protective and tolerant responses on immune cells, but conditions impairing barrier integrity and/or immune function can also enable the emergence of opportunistic commensal pathobionts.T cells in barrier locations constitute a unique arm of the immune system that can react in innate and adaptive fashions to signals provided by cohabiting microbiota.Barrier T cells can also integrate environmental cues in response to local commensal communities, inducing temporary T-cell states that contribute to protective, inflammatory, tolerant, or regenerative responses (Fig. 2C).Furthermore, dysregulation of T-cell responses at barrier sites is a major driver of skin and intestinal inflammatory and autoimmune disease.A better understanding of the interactivity and influence of our commensals with T cells in the skin and intestinal barriers can help devise new therapeutic approaches to restore normal T-cell function and immune health in these tissues. 
Figure 1: Soluble signals from the microbiota and T cells in the skin and intestinal barriers. Skin and intestinal bacteria present in the outermost layers of the barrier provide antigens and other signals. The skin epidermis and the intestinal epithelial monolayer are the first cellular structures to encounter our microbiota and can generate an array of signals in response. Many other cell types, like mesenchymal cells, APCs, and ILCs, reside in the dermis or lamina propria and can also release soluble mediators affecting T-cell function when sensing microbial triggers. T cells will integrate signals to modulate their function and cytokine profiles (continuous downward arrows). Cytokines produced by barrier-resident T cells can then have feedback effects on their host tissue (discontinuous upward arrows). Indicated are the main soluble signals involved in these microbiota-T-cell circuits, with colors indicating a preference for these signals to be blue = protective/tolerant and red = inflammatory/pathogenic. APC: antigen-presenting cell, RA: retinoic acid, SCFA: short-chain fatty acids, MAMPs: microbial-associated molecular patterns, LTA: lipoteichoic acid, PSA: polysaccharide A, SAA: serum amyloid A proteins, AMP: anti-microbial proteins, ILF: isolated lymphoid follicle.

Figure 2: Activation, residence, and response of T cells at barrier sites. (A) Surface molecules and transcription factors involved in recruitment and retention in either the intestinal or the skin milieu. (B) Recognition of microbiota-derived antigens and other signals that modulate T-cell activation derived from the barrier environment. Depicted are different T-cell-intrinsic and -extrinsic signals sourced by the microbiota, divided into core and accessory mediators of T-cell activation. Integration of all these signals leads to the final T-cell response. Highlighted in blue are receptors that can directly bind microbial components or metabolites. (C) Main transcription factors, surface receptors, and cytokines that influence the spectrum of responses from barrier T cells into tolerogenic, pathogenic, protective, or reparative programs. Placements are not necessarily exclusive, as marker and cytokine patterns can be shared depending on the kind of T cell and the type and time of response. *Expressed only in murine cells.
A peptide family being re-united: the angiotensins coming in from the cold. The renin-angiotensin-aldosterone system is now considered to be far more complex than previously thought when the vasoconstrictor and other physiological effects of the octapeptide angiotensin II in the circulating blood were emphasized. The reasons for this altered viewpoint, involving angiotensins other than the octapeptide in the regulation of blood pressure, water and electrolyte homeostasis, are briefly advanced and discussed.

INTRODUCTION

The initial observation that the kidney contains a highly effective pressor substance was made almost a century ago.1 It was not until 1934, however, when Goldblatt and his colleagues demonstrated that constriction of the renal artery in dogs produced hypertension,2 that there was an explosion of both scientific and clinical interest in the relevant renin-angiotensin (RA) system. The first cure for hypertension by nephrectomy was reported soon after,3 an event that immediately led to widespread surgical management of renovascular hypertension, as evaluated by Homer Smith in the 1950s.4 Present-day therapeutic approaches in such hypertension have lately been extensively reviewed by Hollenberg5 with regard to the use of drugs designed to inhibit, chemically, angiotensin converting enzyme ACE (which converts the decapeptide angiotensin I to the octapeptide angiotensin II) and also with regard to surgical intervention and/or angioplasty, which produce the best results in terms of both effectiveness and risk in patients with fibro-muscular dysplasia; the least favourable outcome occurred in cases of widespread advanced atherosclerosis.

There was almost universal agreement until quite recently that RA activity was, to all intents and purposes, entirely mediated by the octapeptide angiotensin II, after its removal from the bloodstream in the systemic circulation, via its vasopressor and other biological effects. In this earlier scenario the decapeptide angiotensin I was regarded as the inactive precursor of the octapeptide, with ACE activity being essentially confined to the lungs. The estimation of the plasma concentrations of angiotensin II and of the enzyme renin (which acts on its substrate angiotensinogen, a tetradecapeptide, to produce the decapeptide)6 thus became of overriding interest. As a result, other possible modes of action of the RA system received little attention. Such a simplistic viewpoint is, however, no longer tenable, so that the relevant homeostatic mechanisms involved in blood pressure control, and water and electrolyte balance, are undergoing a profound reappraisal7-10 both regarding RA and RAA (that also involves the steroid aldosterone) activity.

MEMBER MAKES A COMEBACK

Until lately most investigators felt that the decapeptide was biologically inactive. However, as suggested in 1987,11 intrarenal effects of angiotensin I could promote an antidiuresis mediated via the peritubular capillaries, this effect involving complex relationships between angiotensins I and II within the kidney; these importantly concern the distribution of pressures throughout the renal vasculature,7,8 and also the intra-renal pattern of sodium chloride levels, which strongly influences the ability of the decapeptide to promote water reabsorption into the circulating blood.11
Decreased [NaCl] also stimulates antidiuretic hormone (ADH, vasopressin) secretion by the pituitary7,8 and this, coupled with the intra-renal effects of angiotensin I,7,8,11 can profoundly affect renal venous plasma [solute] together with the relative plasma content of blood in the renal vein. The increased awareness of the possible effects of angiotensin I in blood pressure regulation, combined with the improved availability and acceptability of radioactive iodinated angiotensin derivatives, and the development of more specific radioimmunoassay techniques, will doubtless be the basis of investigations supplementing the kind of clinical study involving the decapeptide that has recently been reported.

THE APPARENT CLAIMS OF ANGIOTENSIN III

The loss of asparagine and aspartic acid respectively from the amino-terminals of the widely available synthetic asparaginyl1-valyl5-angiotensin II (Hypertensin, Ciba-Geigy) and of the octapeptide occurring in man (aspartyl1-isoleucyl5-angiotensin II) gives rise to heptapeptide derivatives (i.e., angiotensin (2-8)) designated angiotensin III. Despite the heptapeptides having only about one-third the pressor potency of the octapeptides, both the octa- and the hepta-peptides are equally effective (although the effect of angiotensin II on adrenal steroidogenesis is independent of the heptapeptide as an intermediate)13 in stimulating aldosterone release from the adrenal zona glomerulosa.13 (The question of distinct receptors for angiotensins II and III in the zona glomerulosa is of more than academic interest. Goodfriend and Peach14 suggested the possibility of receptors other than those for angiotensin II, and this may provide a possible explanation for Bartter's syndrome, in which the loss of sensitivity to the pressor effects of angiotensin is not paralleled by a comparable failure of aldosterone secretion. This condition is associated with persistently high circulating plasma renin levels in spite of the lack of aldosterone response and may be due to a defect that is restricted to receptors for the octapeptide.)

Infusion of angiotensin II into conscious rabbits,15 anaesthetised16 and conscious17 sheep causes an immediate increase in the plasma concentration of potassium ions with a simultaneous decrease in that of Na+ and Cl-, a pattern of [electrolyte] response that is reasonably well maintained throughout the 6 minute infusions in the sheep, with a rapid return to preinfusion plasma levels after the infusions ceased.16,17 The octapeptide also increases plasma [K+] in humans.15 Comparable findings to those with use of the octapeptide were also obtained when the decapeptide was similarly utilized, either in sheep under general anaesthesia16 or in the conscious animals.
The results clearly demonstrated that the ability of angiotensins I and II to liberate potassium ions into the bloodstream is dose dependent16 and unrelated to pressor sensitivity, since the conscious sheep are much more sensitive to the pressor effects of both peptides than are the anaesthetised ones.16 These findings highlight the possibility that the electrolyte concentration changes are responsible, either partly or wholly, for stimulating adrenal aldosterone secretion (this being associated with increased RA activity), as initially advanced by Healy and his co-workers15 and supported by others. In both rat and dog adrenal cells the lack of any synergistic effect concerning angiotensin II and potassium suggests "that these factors share a common mode of action on steroidogenesis in zona glomerulosa cells".13 Their dual role has been further emphasized by Laragh20 with regard to the direct stimulation of aldosterone secretion. However, Foster, Lobo and Marusic,21 utilizing adrenal tissue from anaesthetised dogs, concluded that, because angiotensin II augmented aldosterone production without concurrent alteration of either intracellular [K+] or Na,K-ATPase activity, the octapeptide does not stimulate secretion of the steroid via potassium release. It was, however, found that angiotensin II did not promote aldosterone synthesis below a threshold K+ concentration in the zona glomerulosa cells, although the basal aldosterone production was, itself, unaffected by low intracellular potassium concentrations per se. Whether infusions of angiotensins II and III produce similar patterns of plasma [electrolyte] changes in experimental animals has apparently not yet been considered. The relevant investigations would especially concern the potassium-releasing potency of the two peptides and would doubtless throw further light on the relationships between them.

ANOTHER HEPTAPEPTIDE ON THE SCENE

The loss of phenylalanine from the carboxy-terminal of angiotensin II gives rise to the non-pressor heptapeptide des-phenylalanyl-angiotensin II (i.e., angiotensin (1-7)), and only recently has this molecule been shown to possess biological activity. Its ability to stimulate vasopressin release equals that of the octapeptide,22 and it also modulates the sensitivity of baroreceptors as readily as does angiotensin II, as indicated.24 This heptapeptide can be formed either by carboxypeptidase action on angiotensin II24 or by endopeptidase action on angiotensin I,25 a metabolic pathway bypassing angiotensin II formation. ACE inhibition markedly increases the plasma concentration of angiotensin (1-7) together with that of angiotensin I.25 This importantly concerns the possibility that the heptapeptide has depressor effects in several possible ways that are augmented by ACE inhibitors, as discussed by Goodfriend,24 who states that such a depressor angiotensin would be consistent with the existence of pressor, depressor and pressure-neutral members in other families of hormones.

WHAT THE FUTURE HOLDS

The recently changed scenario regarding RA operation, coupled with doubts concerning the nature of its link-up with aldosterone secretion as outlined in this paper, will undoubtedly stimulate future developments in the relevant fields. These could well involve angiotensinogen in ways other than as a substrate for renin, as indicated by Goodfriend.24
("1-or that matter, it is probably presumptious to dismiss angiotensinogen itself, and the large fragment of angiotensinogen that remains after removal of Ang I, as having no function beyond delivering its namesake.... Maybe, just maybe, angiotensinogen plus or minus Ang I is a potent protein whose other function presently eludes us"). Other possibilities regarding the tetradecapeptide could also result from the experimental Findings of Poulsen and Jacobseiv' who argued that it was a protease inhibitor restraining renin, resembling serine protease inhibitors such as ari-antitrypsin. Certainly no one would now dispute either Goodfriend s statement24 that ".... it is by no means certain that all the biologically active peptides involved in RAA operation have yet been identified" or the conclusion of Samani" that "We are there entering a most exciting stage in our understanding ol the renin-angiotensin system and m our ability to manipulate it"." CONCLUSIONS The rapid retreat from previous concepts of RA, RAA operation, that has resulted from investigations focussing attention on the cellular activity of the systems, has profound physiological and clinical implications. These especially concern the complex inter-relationships of the various peptides involved, in symbiotic effects influencing blood pressure control and the regulation of water and electrolyte homeostasis.
Optical Amplifiers for Access and Passive Optical Networks: A Tutorial

For many years, passive optical networks (PONs) have received a considerable amount of attention regarding their potential for providing broadband connectivity, especially in remote areas, to enable better life conditions for all citizens. However, it is essential to augment PONs with new features to provide high-quality connectivity without any transmission errors. For these reasons, PONs should exploit technologies for multigigabit transmission speeds and distances of tens of kilometers, which are costly features previously reserved for long-haul backbone networks only. An outline of possible optical amplification methods (2R) and electro/optical methods (3R) is provided with respect to the specific conditions of deployment of PONs. This article provides a detailed explanation of the principles of the 3R methods (reamplification, reshaping, and retiming) used to extend the reach of passive optical networks. The second part of the article focuses on optical amplifiers, their advantages and disadvantages, deployment, and principles. We suggest that PONs can satisfy such new requirements and utilize new backbone optical technologies without major flaws, such as the associated high cost of optical amplifiers.

Introduction

Passive optical network (PON) technologies find their major deployment in access networks [1-7] owing to their low requirements on optical distribution networks (ODNs), such as single and shared optical fibers between customers and the central office (CO). This technique uses point-to-multipoint (P2MP) shared infrastructure, but it should be noted that a shared fiber means some limitations on the customer's side, such as shared bandwidth, and upstream transmission must be secured with another control mechanism [8-13]. Passive optical networks are able to transmit signals from the optical line terminal (OLT) to optical network unit(s) (ONUs) up to 20 km, but in some cases, this distance limitation has to be broken or extended due to extensions of signal transmission in rural areas, remote offices, remote cities, etc. For these purposes, standardization organizations, such as the International Telecommunication Union (ITU) or the Institute of Electrical and Electronics Engineers (IEEE), proposed PONs with longer reach [14-19]. Furthermore, the extended-reach networks require optical amplifiers to extend the distance between the OLT and ONUs [20-32]. In the following sections, the methods for reach extensions are discussed.

Optical fiber amplifiers were invented back in 1964, three years after the first fiber laser was developed by Elias Snitzer and his colleagues. Both the first laser and the first amplifier used neodymium as the active dopant. Signal regeneration and amplification techniques fall into three categories: 1R, 2R, and 3R. While the current research interest is full optical amplifiers, we discuss all three categories due to the potential usage of 3R amplifiers in xPONs [43-49]. The main signal degradation in fiber optic systems arises from amplified spontaneous emission (ASE) due to optical amplifiers, pulse spreading due to group velocity dispersion (GVD), which can be corrected by passive dispersion compensation schemes, and polarization mode dispersion (PMD).
Nonlinear distortions are attributed to Kerr nonlinearity, such as cross-phase modulation, which can be responsible for time jitter in wavelength division multiplexing (WDM), or Raman amplification, which can induce channel average power discrepancies [50].

The 1R category represents the simplest amplification of an optical signal. Only the input signal is amplified and transferred to the output; the signal is not otherwise recovered (the shape, position, and phase are exactly the same as those of the input signal). However, 1R amplifiers are simple, which presents some advantages. For example, a processed signal does not depend on the modulation format, transmission speed, or other parameters of the signal. The basic principle of 1R amplifiers is shown in Figure 1. The input signal is degraded, but the output signal is only amplified because 1R amplifiers do not consider the shape and timing of the input signal; they only perform amplification. All known optical amplifiers can be placed in the 1R category.

The second category (2R) treats an input signal more comprehensively: it builds on the 1R category and adds reshaping of the input signal. The shape of a carried signal degrades with increasing distance from the transmitter side. Considering optical networks, we must take into account the attenuation of optical fibers. We cannot eliminate this attenuation because we are not able to produce clean silica fibers without admixtures and impurities (additional details about optical fiber manufacturing are provided in [51,52]). The standard attenuation values of the fibers are 0.35 and 0.22 dB/km for 1310 and 1550 nm, respectively. Another important factor is dispersion (additional details about dispersion are provided in [53-55]). In general, dispersion causes a carried signal to become deformed in the fiber and spread in the time domain, which restricts the reach by decreasing the signal-to-noise ratio (SNR) and the transmission speed and by causing improper logical 0 or 1 decisions in the receiver. A 2R amplifier is referred to as a regenerator. A regenerator has an optical signal at its input port, which is converted into an electrical signal; decisions are subsequently made. A decision entails recognition of logical 0 and 1 in the input signal. The signal is subsequently transferred to a transmit circuit, which converts the signal back to the optical domain and launches it into the fiber path. Note that the output signal has a recovered shape and a higher power level (it was amplified), but timing recovery does not occur (the positions of the signal samples are unchanged); refer to Figure 1.

The 3R amplifier adds time synchronization to the basic principle of 2R. The 3R amplifier converts the input signal from the optical domain to the electrical domain, amplifies it, and reshapes it. The clock rate is recovered and reconstructed before the time positions are restored (for example, by a comparator). This output signal is equivalent to the original signal that was launched into the fiber. Figure 2 shows the principle of 3R amplifiers, and Figure 3 shows the block scheme of a reach-extended passive optical network (RE-PON). Regeneration with 3R can occur in two ways: inline 3R regeneration and in-node regeneration. Inline 3R regeneration is usually implemented when the physical distance between the end points exceeds the maximal power budget of the optical network.
In-node regeneration can occur in optical cross-connect nodes, where some OEO regenerators are usually deployed [56]. Note that OEO 3R regenerators are dependent on the signal waveform (modulation format). If the waveform is changed, the 3R regenerator must be adapted to it. A second significant limitation of 3R regeneration is the bit rate. The maximal bit rate for OEO 3R regenerators is approximately 40 Gb/s. Both problems are solved in all-optical 3R regenerators.

The standard for a GPON optical reach extension was ratified as ITU-T G.984.6 in 2008. This standard includes the architecture and interface parameters for GPON systems with extended reach using a physical-layer mid-span extension between the OLT and the ONU that uses an active device in the remote node. The GPON reach extender enables operation over a maximum of 60 km of fiber with a maximum split ratio of 1:128 [48]. Two ways to amplify a signal are presented in ITU-T G.984.6. The first method is based on optical amplification of the optical signal, bidirectionally. This principle is based on 1R regeneration. This kind of amplifier can be based on an EDFA, a Raman amplifier or a semiconductor optical amplifier (SOA). The second approach is to use an OEO regenerator, as shown in Figure 2. The regenerator consists of a pair of branches, one per direction, using diplexers. In both branches, the receiver and the transmitter are dimensioned for the respective wavelength band, which is why the optical signal must be converted to an electrical signal. The electrical signal is recovered and converted back to the optical domain. The important function of this part is to recover the clock signal. This is handled by the receiver in continuous mode downstream, whereas burst mode is used upstream. ITU-T G.984.6 also considers the combination of both systems, e.g., an OEO regenerator downstream and an SOA amplifier upstream. All-optical 2R is also possible; however, it is not transparent to the modulation of the input signal [57]. Full optical 3R regeneration is not considered in standardized PONs but is suggested for future networks [58]. Full optical 3R regeneration with a real retiming function requires clock recovery, which can be achieved either electronically or all-optically. The main difference between the two types of retiming is that electronic functions are narrowband compared with broadband optical clock recovery [59]. Full optical 3R regeneration can be realized in two different ways:

1. Data-driven 3R regenerator (nonlinear optical gate). This scheme mainly consists of an optical amplifier and a clock recovery block providing an unjittered short-pulse clock stream, which is then modulated by a data-driven nonlinear optical gate block [50].

2. Synchronous modulation 3R regenerator. This technique is particularly efficient with pure soliton pulses. It consists of combining the effects of a localized "clock-driven" synchronous modulation of data, filtering, and line fiber nonlinearity, which results in both timing jitter reduction and amplitude stabilization (see Figure 4). The high-dispersion fiber first converts the amplified pulse into a pure soliton. The filter blocks the unwanted ASE but also has an important role in stabilizing the amplitude in the regeneration span. Data are then synchronously and sinusoidally modulated through an intensity or phase modulator, driven by the recovered clock [50].
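To make the differences among 1R, 2R, and 3R concrete, the following minimal Python sketch mimics the three stages numerically on a toy sampled NRZ waveform. It is purely illustrative (an electrical-domain analogy, not an optical implementation), and every parameter in it (bit count, samples per bit, noise level, decision threshold) is an arbitrary assumption rather than a value from this article:

```python
import numpy as np

rng = np.random.default_rng(7)

# A toy received waveform: attenuated NRZ bits plus noise (8 samples/bit).
bits = rng.integers(0, 2, 32)
samples_per_bit = 8
clean = np.repeat(bits.astype(float), samples_per_bit)
received = 0.1 * clean + rng.normal(0.0, 0.02, clean.size)  # attenuated + noisy

# 1R: re-amplification only -- noise and shape distortions are amplified too.
r1 = 10.0 * received

# 2R: re-amplification + reshaping -- a per-sample threshold decision
# restores the two logical levels, but decision instants are not realigned.
r2 = np.where(r1 > 0.5 * r1.max(), 1.0, 0.0)

# 3R: re-amplification + reshaping + retiming -- decide once per bit at the
# recovered clock instant (here simply the centre of each bit slot).
centres = np.arange(bits.size) * samples_per_bit + samples_per_bit // 2
r3_bits = (r1[centres] > 0.5 * r1.max()).astype(int)

print("original bits:", bits)
print("2R samples (first bit):", r2[:samples_per_bit])
print("3R decisions :", r3_bits)
print("bit errors   :", int(np.sum(bits != r3_bits)))
```

The sketch shows why 1R is format-transparent (it never interprets the signal), while 2R and 3R must know the waveform: the threshold and, for 3R, the clock both depend on the modulation.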
Optical Amplifiers in Telecommunications Networks

Optical amplifiers are an essential part of any optical transmission system and are not limited to long-haul systems, such as submarine systems. There are excellent books that address optical amplifiers, for example, [60-62], which are used as the basic references for the following paragraphs. For 1 Gb/s and 10 Gb/s transceivers, the maximum fiber distance is usually 80 km; some transceivers can reach 120 km. While 80 km distances can be overcome without any correction control, for longer distances, forward error correction (FEC) mechanisms must be implemented. This situation changed with the emergence of coherent systems in 2008 [63]. In 2019, coherent systems with maximum transmission rates of 200 Gb/s are very common, and those with rates of 400 Gb/s are also available; however, these systems are expensive, with the optical reach limited to a few hundred kilometers. A new generation of silicon electronics for digital signal processing (DSP) will be able to increase this rate to 600 Gb/s, with the potential to extend the optical reach to 400 km compared with the current situation.

The first concepts of optical amplifiers were introduced in the early 1960s, and the first optical amplifier was invented in 1964 by Professor E. Snitzer, who used neodymium and worked in the 1060 nm spectral window. Professor Snitzer also demonstrated the first erbium glass laser. Other experiments with neodymium followed in 1970, but it was too early for real deployment. These principles were also applied to the first single-mode fibers in the early 1980s at Bell Laboratories. Erbium was used for amplification at the University of Southampton and AT&T Bell Laboratories in 1985. The key advantage was the capability of erbium to work at 1550 nm, the most important part of the spectrum in silica fibers [60].

Optical amplifiers are referred to as all-optical (OOO), in contrast with OEO regenerators. We note that optical amplifiers are referred to as "regenerators" in the submarine world, which may be confusing for readers from the terrestrial telco world. The main advantage of optical amplifiers is that one device is able to amplify many optical signals at once. This feature is in sharp contrast with OEO regenerators, where one regenerator can be used for only one signal and expensive multiplexing and demultiplexing techniques are necessary. Optical amplifiers amplify optical signals by stimulated emission; this is the same mechanism used in lasers. Optical amplifiers are sometimes described as lasers without feedback. An optical amplifier is pumped (fed with energy) optically or electrically to achieve population inversion of the dopant elements. Population inversion indicates that some parts of the system (the electrons of the dopant ions, in the case of optical amplifiers) occupy higher-energy, excited states more strongly than would be possible without pumping. These excited states are unstable and revert to normal states with population relaxation times that are approximately in the range 1 ns to 1 ms (other limits are possible and are discussed in more focused resources on optical amplifiers) [60]. Figure 5 shows different configurations of optical amplifiers used in practical applications. A configuration with a booster only is typically used for shorter distances of up to 150 km (see Figure 5a).
A configuration with a preamplifier only is used when we want to avoid the high optical powers produced by boosters; in this configuration, it is often necessary to use an optical filter to suppress noise (see Figure 5b). When distances are longer, for example, 250 km, it is necessary to use a configuration with both a booster and a preamplifier (see Figure 5c). For longer cascaded optical spans, it is necessary to deploy inline amplifiers (see Figure 5d). Optical filters may be necessary for all configurations with preamplifiers to reduce noise; usually, this is not needed for booster and/or inline amplifiers. The last configuration utilizes Raman pumping, and in this configuration, it is possible to achieve a distance of ≈350 km, but Raman pumping must use high optical powers (up to 1 W) due to the weak Raman effect in silica glass, which may necessitate serious eye safety measures. It must be noted that the provided distances are approximate only and are very dependent on the real transmission equipment (the most important parameter is the receiver sensitivity).

General parameters of optical amplifiers [64]:

• gain: the ratio of output to input power,
• gain waveform: should be flat in the ideal case,
• saturation power: the capability to absorb high input power,
• saturation gain: the energetic efficiency of the optical amplifier,
• insertion loss and insertion loss of the switched-off amplifier,
• bandwidth,
• noise figure: the degradation of the signal-to-noise ratio,
• temperature stability.

Erbium-Doped Fiber Amplifiers

The real revolution in optical amplification started in the late 1980s, when amplifiers based on rare earth elements became commercially available. The most significant research was performed by D. Payne and E. Desurvire. A detailed description of EDFAs is available in [65], and a detailed theory of EDFAs is provided in [66]. These doped-fiber amplifiers were investigated in the 1960s; however, fabrication techniques were not sufficiently mature. Many rare earth elements can be used as dopants in fibers, for example, neodymium, holmium, thulium or ytterbium; such amplifiers can operate in the wavelength range from 500 nm to 3500 nm. However, only some combinations of rare earth elements and fibers (the fiber is the host medium only) can be produced at reasonable prices, and some nonsilica fibers are not easily produced and maintained.

Statements about explosive and exponential growth in data traffic and worldwide fiber networks are almost cliché. Twenty years ago, these networks carried telephone traffic and cable television signals; however, the real explosion started with the World Wide Web (WWW). At that time, deployment of optical amplifiers in local networks was expensive; this situation has changed in the last few years. The first EDFA was demonstrated in 1989, and the initial users of these new devices were submarine (or undersea) systems because all-optical amplifiers could replace expensive and unreliable electronic regenerators. Trans-Atlantic transmission (TAT) systems are usually cited as the first long-haul systems to fully utilize the strength of EDFAs, in 1996. Other systems followed (US and Japan); note that the amplifier spacing range is 30 km to 80 km. Terrestrial communication systems followed their aquatic counterparts for the same reason: to replace electronic regenerators.
It is interesting to note that the first transmission systems supported only a single-channel configuration, and even in the early 1990s, top commercial transport systems could transport a maximum of 16 channels on a single fiber (with speeds of 2.5 Gb/s, with the latest step to 10 Gb/s), with predictions to support a maximum of 100 channels in the future [60]. The most important of all rare earth elements for telecommunication fiber networks is erbium because it can amplify signals in the most important frequency spectrum in silica fiber: the third window, or the conventional C-band, near 1550 nm. EDFAs started a new era of optical communication. For example, the usual spacing of EDFAs is 80 km but may be longer; in some links where the total distance is shorter, the spacing can exceed 200 km. This fact is in sharp contrast to the spacing of OEO regenerators, which was typically 10 km; as previously mentioned, a regenerator can be used for one signal only. EDFAs can amplify a maximum of 100 signals in the C-band, which covers 1530-1565 nm. Almost all optical dense wavelength division multiplexing (DWDM) transmission systems operate in the C-band. However, if the capacity is not sufficient, EDFAs can be customized to amplify signals in the long L-band, which covers 1565-1625 nm [60].

EDFAs must be pumped to achieve gain by population inversion (refer to Figure 6). Figure 6a shows the energy levels for erbium atoms. Electrons are pumped from the low energy level to the high energy level, which has a relatively short lifetime of 1 microsecond. On the metastable level, with a lifetime of 10 milliseconds, electrons "wait" for incoming photons and amplify them via a radiative transition process. The levels are described by the well-known Russell-Saunders notation, and a detailed description is beyond the scope of this text. Figure 6b shows a more detailed description of the energy levels with splitting due to both spin-orbit coupling and fine splitting owing to the structure of the host silica glass. From this figure, we can deduce the mechanism of amplification in different spectral areas; in the case of the erbium amplifier, the spectral area is from 1530 nm to 1565 nm. Different pumping schemes are possible, and the most efficient pumping wavelengths are 980 nm and 1480 nm. Two different configurations can be realized. The first configuration has the pump and the signal propagating in opposite directions; this configuration is referred to as backward pumping. The forward pumping configuration is the second configuration, in which the pump and signals propagate in the same direction. Both schemes are frequently used; a combination of backward and forward pumping is employed when a more uniform gain is required.

As with any real device, optical amplifiers have some limiting factors in practical deployment. The most important factor is amplifier noise, which is usually expressed as a noise figure (NF). The cause of this behavior is ASE. ASE originates from random returns from an excited state to a normal energy state. ASE can be used to produce a broadband light source, but it is undesired in optical amplifiers. The ideal theoretical NF for EDFAs is 3 dB; typical NFs vary from 4 dB to 8 dB. Pumps at 980 nm can provide a better NF than 1480 nm pumps. Amplifier noise is a very limiting factor in long-haul transmission because not only the data signals are amplified [52].
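The practical weight of the noise figure can be illustrated with a common rule-of-thumb OSNR estimate for a chain of identical EDFA spans, referenced to a 0.1 nm bandwidth at 1550 nm. The sketch below is a simplified textbook estimate, not a design formula from this article, and the per-channel input power, NF, and span count used in it are illustrative assumptions:

```python
import math

def osnr_db(p_in_dbm: float, nf_db: float, n_spans: int) -> float:
    """Rule-of-thumb OSNR (0.1 nm reference bandwidth) after a chain of
    identical EDFAs: OSNR = 58 + P_in - NF - 10*log10(N_spans), where P_in
    is the per-channel power entering each amplifier and the 58 dB constant
    comes from -10*log10(h * nu * B_ref) at 1550 nm."""
    return 58.0 + p_in_dbm - nf_db - 10.0 * math.log10(n_spans)

# Illustrative values: -18 dBm per channel into each EDFA (e.g., 2 dBm
# launch power minus 20 dB span loss), NF = 5 dB, 10 cascaded spans.
print(f"OSNR ~ {osnr_db(-18.0, 5.0, 10):.1f} dB")   # ~ 25.0 dB
```

With these assumed values, ten 20 dB spans and a 5 dB noise figure leave roughly 25 dB of OSNR, which is why every decibel of NF matters in a long cascade.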
The described EDFAs are referred to as lumped amplifiers, in contrast to the distributed Raman amplification techniques described later in this article. However, even EDFAs can be used as distributed-gain amplifiers when the transmission fiber itself is doped with erbium. These distributed EDFAs were investigated but were never massively deployed in reality. Another rare earth element used for amplification is praseodymium. Praseodymium-doped fluoride fiber amplifiers (PDFFAs), which have sometimes been referred to as PDFAs to make the name more visually similar to EDFAs, can be used to amplify signals in the original O-band, which covers 1260-1360 nm. Compared with EDFAs, these amplifiers for the O-band differ in one important aspect. Pr (and Nd) operate on the four-level principle, which implies slightly worse parameters, such as output powers or noise figures. In contrast to a three-level system, the population inversion in four-level systems is permanently positive. However, this issue is beyond the scope of this paper. When pumping does not occur, for example, after a pump failure, transmitted signals experience neither gain nor attenuation. This behavior is in contrast to the three-level Er system, which becomes a strong absorber, so that in reality no signal is transmitted [52].

People may ask why signals should be amplified in the original lossy area of 1310 nm when all long-haul systems use the C- and L-bands. The answer is chromatic dispersion (CD) and higher speeds. Even for 10 gigabit Ethernet (GE), 1310 nm transceivers were available and substantially less expensive than their 1550 nm counterparts. The Czech Education and Scientific Network (CESNET) performed a few experiments with PDFFAs in the 2000s [67], especially when extending the all-optical reach of 10 GE server adapters or network interface controllers (NICs). Note that NICs with 1550 nm transceivers were not available on the market, partly due to the high prices of 1550 nm transceivers. From the original distances of approximately 10 km, we were able to reach more than 100 km with PDFFAs and almost 200 km with PDFFAs augmented with Raman amplification. The only drawback of PDFFAs is the fluoride fiber, which is difficult to manufacture (fluorine is very hazardous, the fluoride composite glasses are hygroscopic, and the mechanical properties are not as good as those of the silica glasses used for EDFAs); therefore, few vendors can manufacture them. PDFFAs are noisier than EDFAs as well. However, problems with chromatic dispersion and higher speeds occur with pluggable transceivers for 100 Gb/s and 200 Gb/s. The price difference between the shorter-reach 1310 nm and the longer-reach 1550 nm transceivers is significant, and PDFFAs can offer economically profitable solutions. Thulium fiber amplifiers are used in PONs for signals in the 1490 nm spectral window. Ytterbium is frequently used as a codopant in EDFAs to achieve higher optical output powers [52].

Fiber optic amplifiers operate based on the principle of stimulated emission. The principle is similar to that of lasers. An EDFA amplifier consists of a laser pump diode (a laser source of optical radiation) and a special erbium (Er)-doped fiber. Due to the radiation added from the pump to the Er fiber, gain is achieved in the range of C-band wavelengths. A simple schematic is shown in Figure 7. The principle of operation is referred to as "three-level" [64].

• Optical radiation from the laser pump is coupled into an Er3+-doped fiber with a length of tens of meters (10-100 m).
• Due to this process, the atoms of erbium (Er3+ ions) are excited.
• The absorbed energy allows migration to the higher energy level E3.
• Ions remain at this level only for a very short time and relax, via a nonradiative transition, to the metastable level E2, where they remain for a few milliseconds.
• After the state of "population inversion" is achieved, the highest proportion of Er ions is in the excited state, and the stored energy can be released by the transmitted signal.
• The excited ions return to the basic energy level E1. This is accompanied by the stimulated emission of radiation with the same wavelength and phase as the transmitted signal.
• In this way, the energy delivered by the laser pump is temporarily stored and transferred to the signal.

The transmitted signal is amplified in the C-band in the area of 1550 nm. Note that both the useful signal and the noise are amplified in the amplified band. While the use of 980 nm and 1480 nm pumps is possible, only 980 nm pumps are currently used due to the higher degree of population inversion they provide. In addition to the C-band (1530-1565 nm), we can use EDFAs for amplification in the L-band (1570-1625 nm). The difference is primarily in the Er fiber length: for the L-band, the Er fiber must be longer. The gain of EDFA amplifiers is approximately 30-50 dB, depending on the Er fiber length and the power of the pump laser. The greater the number of excited ions, the more frequent the stimulated emission, which increases the gain of the optical amplifier. Amplification is the result of the population-inversion state of the doped ions due to the pump laser. If the power of the optical signal increases or the power of the pump decreases, the inversion state is reduced and the gain decreases. This phenomenon is known as "saturation". EDFA amplifiers are used below the saturation threshold, where spontaneous emission and ASE are reduced; the reduction of gain near saturation is referred to as "gain compression" [64]. EDFAs are the most widely used amplifiers, and their advantages are as follows:

• all-optical operation,
• high gain, 30-50 dB,
• low noise figure (4-8 dB).

Semiconductor Optical Amplifiers (SOAs)

SOAs are another possible solution for data transfer in optical communications. An excellent review is provided in [68]. Note that SOAs were explored in the 1960s, when semiconductor lasers were invented. While the principle of the laser dates to 1958, a solid-state ruby laser was demonstrated in 1960, and the semiconductor laser was subsequently considered. Early SOAs used GaAs/AlGaAs, but more complex InGaAsP/InP materials, which operate in the 1300 nm to 1600 nm wavelength window, were subsequently introduced for use in optical transmission systems. SOAs are important devices in many optoelectronic systems, such as optical recording or high-speed printing. In the reality of telecommunication networks, SOAs were deployed in the 1980s, but they exhibited some drawbacks, such as a rather high noise figure and polarization sensitivity, as well as serious problems when amplifying more than one signal due to effects such as cross-phase modulation. On the other hand, SOAs can be manufactured in specific ways and are able to function in nearly every optical band, covering the almost empty spectral window of 1460-1530 nm, the so-called S-band (no rare-earth-doped amplifier with a silica host glass can operate in this band); additionally, these amplifiers can be integrated on chips.
For this reason, SOAs are used in high-speed 100 GE transceivers, where four SOAs are integrated within the transceiver and each SOA amplifies only one 25 Gb/s optical signal. SOAs can be used as all-optical wavelength converters and even all-optical switches [59]. SOAs have a structure similar to that of Fabry-Perot lasers (see Figure 8). However, this Fabry-Perot configuration is practically unsuitable for data transmission applications because the available bandwidth is very small (less than 10 GHz). To make SOAs suitable for the data world, conversion to traveling wave (TW) devices must be accomplished, which can be performed by suppressing reflections from the end facets of an SOA with antireflection coatings. The reflectivity must be very small (less than 0.1%) to achieve the desired behavior. For this reason, other techniques for suppressing reflections were invented, for example, angled-facet or tilted-stripe structures [59]. SOAs are small and electrically pumped (in contrast to EDFAs/PDFFAs or Raman amplifiers) and can be easily integrated with other semiconductor elements and devices, such as lasers and modulators. Undesirable properties, such as a high noise figure, low output power and polarization sensitivity, restrain SOAs from massive deployment as amplifiers, even though many techniques, such as series, parallel or double-pass configurations, have been introduced and studied. Other novel areas exist where SOAs can find potential use; examples include wavelength conversion, optical demultiplexing of very-high-speed (100 Gb/s) signals into low-speed (10 Gb/s) tributary signals or optical clock recovery units. However, commercial equipment based on these principles is not available.

The gain of semiconductor amplifiers is not generated in a fiber optic material but in the structure of the semiconductor amplifier. Pumping is not performed optically; electrical energy (current injection) must be supplied instead. Typical materials used for SOA amplifiers are GaAs, AlGaAs, InGaAs, InGaAsP, InAlGaAs and InP. These materials have excellent quantum efficiency, which provides a maximum number of generated photons. The principle of SOA operation is similar to that of photon emission in lasers [60]:

• Stimulated absorption and medium excitation. Excitation of the semiconductor medium in the P-N junction is the result of energy pumping and depends on stimulated absorption. The absorbed energy is transferred to an electron in the valence band, which is excited to a higher energy level. The energy of an incident photon must be sufficient to overcome the forbidden band (band gap) of the semiconductor.

• Population inversion. In a forward-biased P-N junction, it is possible to achieve population inversion by carrier excitation to higher energy levels. The state of population inversion means that the number of electrons in the conduction band is higher than the number of electrons in the lower-energy valence band.

SOAs are manufactured as a chip situated in a standard housing with temperature control, which allows wavelength stability and the possibility of achieving maximal gain. A high concentration of carriers in the active area causes an increase in the refractive index, which becomes higher than that of the cladding; this region therefore serves as a waveguide for the newly generated photons [60]. A key limitation of SOAs is gain saturation: saturation of an SOA is caused by a strong input signal, which depletes the free carriers in the active area.
The gain decreases with increasing input power; the saturation output power is defined as the output power at which the gain has dropped by 3 dB from its maximal (small-signal) value (see Figure 9). The influence of carrier depletion can be partially limited by so-called holding beam injection (optical copumping) [69].

Raman Amplifiers

Another principle used for optical amplification is based on stimulated Raman scattering (SRS), an inelastic scattering process. This process differs from stimulated emission, as exhibited by EDFAs and SOAs, where incident photons stimulate the emission of another photon with the same energy (i.e., frequency). In SRS, incident photons create another photon with lower energy (i.e., with lower frequency), and the remaining energy is absorbed by the fiber glass as molecular vibrations (optical phonons). Materials absorb energy, which is subsequently emitted. If the energy of the emitted photons is lower than the energy of the absorbed photons, the effect is referred to as Stokes Raman scattering. If the energy of the emitted photons is higher than the energy of the absorbed photons, the material loses energy, and the effect is referred to as anti-Stokes Raman scattering. This scattering process is spontaneous, i.e., it occurs at random time intervals, unless signal photons (sometimes referred to as Stokes photons) are injected into the material together with pump photons, in which case the scattering becomes stimulated. While Raman amplification in optical communications was demonstrated in the early 1970s, the Raman effect was predicted in the 1920s and published in 1928 [70]. The first demonstration of Raman amplification in optical fibers was performed in the early 1970s, and many research papers indicated the potential of the Raman effect and amplifiers in fiber optic networks. However, as with coherent systems, Raman amplifiers were overtaken by EDFAs. In the 2000s, Raman amplification started to emerge in real transmission systems, especially in long-haul and ultralong-haul systems, but with improved devices. The Raman effect in silica fiber is weak, and much higher pump powers are required than those used with EDFAs. Polarization dependency is also a problem, but it can be solved with the use of two orthogonally polarized pump sources; furthermore, the gain profile is not spectrally constant (refer to Figure 10). The problem of a non-flat gain applies to every amplifier, and solutions to mitigate this effect are known [71].

The principle of the Raman amplifier is based on the interaction between photons that propagate in the optical environment and that environment (the material). The result of the interaction is a frequency shift. Raman amplifiers exploit stimulated Raman scattering (SRS) in the material of the optical fiber. Due to optical pumping at specific wavelengths, interaction between the photons and the phonons of the material is possible, whereby energy is exchanged between the pump photons and the molecular vibrations (refer to Figure 11). Due to this process, a new mode shifted by roughly 100 nm toward longer wavelengths is created. Therefore, if we need to amplify optical signals in the 1550 nm band, 1450 nm pumping sources must be used. Raman scattering is an inelastic scattering mechanism that does not require a population inversion. The maximal gain is approximately 30 dB [64]. As previously mentioned, the amplified band is given by the wavelength of the pumping diode. Due to this capability, Raman amplifiers can function over an extensive range of wavelengths.
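The pump wavelength for a desired signal band can be estimated from the Raman Stokes shift, which in silica glass is roughly 13.2 THz; the commonly quoted shift of "about 100 nm" near 1550 nm follows directly from this frequency offset. A minimal sketch, assuming this textbook shift value (it is not stated numerically in this article):

```python
C = 299_792_458.0  # speed of light, m/s

def raman_pump_nm(signal_nm: float, shift_thz: float = 13.2) -> float:
    """Pump wavelength whose optical frequency lies `shift_thz` above the
    signal frequency (the Raman Stokes shift in silica is ~13.2 THz)."""
    signal_hz = C / (signal_nm * 1e-9)
    pump_hz = signal_hz + shift_thz * 1e12
    return C / pump_hz * 1e9

print(f"{raman_pump_nm(1550.0):.0f} nm")     # ~ 1453 nm for a 1550 nm signal
print(f"{raman_pump_nm(1552.064):.1f} nm")   # ~ 1452.8 nm
```

For the 1552.064 nm signal used in the measurement described below, the computed pump of ≈1453 nm agrees well with the ≈1455 nm pump diode quoted in Table 1.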
Amplification in Raman amplifiers is very different from that in EDFAs, PDFFAs and SOAs: the transmission fiber itself is used as the medium for amplification, and therefore, Raman amplifiers are distributed. Other optical amplifiers may be considered "lumped". Distributed Raman amplification (DRA) uses backward pumping. When the pump is situated at the end of an optical link, the gain is distributed along the whole optical link, and the power loss is continually compensated. DRA amplifiers have a low noise figure, high gain and low nonlinear distortion [71]. It should be noted that lumped Raman amplifiers were also introduced in telecommunication transmission systems, but in a slightly different manner. Raman pumping was combined with a dispersion compensating fiber (DCF). The core diameters of DCFs are smaller than those of standard single-mode fibers, so the interaction in DCFs is stronger, and Raman amplification is more efficient. DCFs are "lumped" because they are periodically inserted into the transmission line. Thus, adding Raman pumps to the already lumped DCFs can create lumped Raman amplifiers. With the deployment of coherent transmission systems, DCFs are being removed because compensation of chromatic dispersion is no longer necessary [59].

The Raman effect is broadband, but the drawbacks are the polarization dependency (where a common solution is to use pump depolarization) and the low gain coefficient in silica glass (see Figure 12). For this reason, high optical powers must be applied. At CESNET, experiments were performed in which the launched powers often exceeded 500 mW. These powerful lasers introduce serious eye safety hazards, even when automatic laser shutdown (ALS) is implemented (in some cases of fiber cuts or angled physical contact (APC) connectors, ALS may encounter difficulties in detecting the fiber failure). For this reason, Raman amplifiers are rarely deployed in common optical systems. They are used in specific cases when very long fiber spans must be bridged, for example, submarine links between the mainland and islands or similar specific conditions. Present DWDM systems transporting coherent signals over long distances need to deploy Raman amplifiers together with EDFAs to cope with lossy segments with high attenuation. This hybrid solution helps to keep the overall optical signal-to-noise ratio (OSNR) acceptable.

It is interesting to note that in some of the literature [60], because of the rather weak interaction in silica glass, the maximum required pump power to achieve a gain of 30 dB is calculated to be 5 W. Experiments performed at CESNET showed that even pump powers of less than 1 W caused very strong distributed Rayleigh scattering (DRS) and could not be used. Some vendors of transmission equipment use pump powers below 500 mW for Raman amplification, which is another peculiarity at the other end of the power scale; both SRS and stimulated Brillouin scattering (SBS) are nonlinear effects, so some threshold optical powers are required to "kick-start" the mechanism. We tested and verified Raman amplification for pump powers of 10, 50, 100, 150, 200, 250 and 300 mW. The central wavelength of the pump was ≈1455 nm, and the signal wavelength was 1552.064 nm, which corresponds to a 97 nm wavelength shift (refer to Table 1). The continuous-wave signal from the laser diode was coupled into a fiber spool with a length of 50 km.
If we consider a fiber attenuation of approximately 4% per kilometer (corresponding to 0.18 dB/km, which is a normal attenuation coefficient for 1550 nm in a standard single-mode optical fiber), then after 50 km, the signal is attenuated by approximately 9 dB (≈87% loss of power!). In the scheme depicted in Figure 13, we can see two important things. First, the amplification is more effective if the pump signal propagates in the direction opposite to that of the data signal. Second, the Raman gain coefficient is highly polarization dependent; therefore, two pump diodes (Pump 1 and Pump 2) are used to generate depolarized light. It is also very important that the fiber length be long enough to generate Raman scattering [72]. As shown in Figure 14, the saturation power of the amplified signal grows quasilinearly with the pump power. The difference between the saturation power for the 10 mW pump power and that for the 300 mW pump power is approximately 2.5 dB.

Disadvantages of Raman amplifiers:

• high pump power requirements,
• lower efficiency for a specific wavelength than EDFAs (for the same pump power),
• sophisticated gain control is needed.

Brillouin Amplification

The last amplification technique discussed is based on SBS. SBS is similar to SRS but with notable exceptions: SBS occurs only in the backward direction, the scattered light is shifted by approximately 9-11 GHz (compared with 13 THz, or 100 nm, for SRS) and the gain spectrum is about 100 MHz wide (for SRS, it is 30 THz). For SBS, the energy absorbed by the fiber glass takes the form of acoustic phonons (in contrast to optical phonons in the case of SRS). Brillouin scattering is a "photon-phonon" interaction that occurs when the annihilation of a pump photon simultaneously creates a Stokes photon and a phonon. The created phonon is a vibrational mode of the atoms, which is also referred to as a propagating density wave or an acoustic phonon/wave. In a silica-based optical fiber, the Brillouin Stokes wave propagates dominantly backward, with only a very small forward component. The frequency shift (9-11 GHz) of a Stokes photon at a wavelength of 1550 nm differs considerably from that of Raman scattering (it is smaller by three orders of magnitude) and is dominantly a downshift due to the Doppler shift associated with the forward movement of the created acoustic phonons [73]. Depending on the frequency offset, the interference of the counterpropagating pump light with the signal light causes a moving density grating. The density grating coherently scatters pump photons into the signal beam, which is amplified. A characteristic of the SBS amplifier is its narrowband gain spectrum of approximately tens of MHz (depending on the optical gain medium). Brillouin amplifiers thus have a built-in narrowband optical filter, which enables amplification of specific signals. In contrast to broadband amplifiers (EDFAs, SOAs, and Raman amplifiers), Brillouin amplifiers enable a maximum single-stage gain of 50 dB or higher [74]. Brillouin amplifiers are not suitable for standard data communication due to their very narrow gain spectrum; however, new applications use different optical signals. We can provide examples of two new applications that have been extensively investigated: accurate time transfer (that is, atomic clock comparison) and ultrastable frequency transfer.
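The 9-11 GHz figure quoted above can be reproduced from the standard Brillouin-shift relation ν_B = 2·n·v_a/λ, where n is the refractive index and v_a the acoustic velocity in the fiber. The following sketch uses typical textbook values for silica, which are assumptions rather than values given in this article:

```python
def brillouin_shift_ghz(wavelength_nm: float,
                        n_eff: float = 1.45,
                        v_acoustic: float = 5960.0) -> float:
    """Brillouin frequency shift nu_B = 2 * n * v_a / lambda, in GHz.
    Defaults are typical textbook values for silica fiber (assumed here)."""
    return 2.0 * n_eff * v_acoustic / (wavelength_nm * 1e-9) / 1e9

print(f"{brillouin_shift_ghz(1550.0):.1f} GHz")  # ~ 11.2 GHz at 1550 nm
```

The result of ≈11 GHz sits at the upper edge of the quoted 9-11 GHz range; lower values arise for fibers with different acoustic velocities or dopant profiles.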
These time- and frequency-transfer signals are very slow (hundreds of MHz for accurate time, continuous wave (CW) for stable frequency), and therefore, their spectra are very narrow, which renders them suitable for Brillouin amplification, especially ultrastable frequency transfer using very narrow laser sources. In this case, SBS can be used for very powerful amplification [75].

Advantages of Brillouin amplifiers:

• high gain and saturation power for narrowband signals,
• wavelength conversion,
• can amplify a very small input signal (a few nanowatts) by more than 50 dB in a single gain step.

Disadvantages of Brillouin amplifiers:

• limited range of use,
• nonlinear phenomena.

Amplifiers for PONs

Long-reach optical access is a promising technology for future access networks. This technology can enable broadband access for a large number of customers in access/metro areas while decreasing capital and operational expenditures for the network operator. Almost all the described optical amplifiers can also be used in passive optical networks (see Table 2), with the notable exception of Brillouin amplifiers (which are not suitable due to their very narrow gain spectrum, as previously described). Prospects for PONs with reaches of 100 km and 10 Gb/s speeds are being investigated, but these devices are not commercially available. A typical PON can reach 20 km with a maximum split ratio of 1:64. For example, the GPON standard established optical budgets of 28 dB with 2.488 Gb/s for downstream transmission and 1.244 Gb/s for upstream transmission (a back-of-the-envelope illustration of this budget/split trade-off is sketched below); GPON is the currently prevailing PON standard. In long-haul systems, optical amplifiers are extensively employed to extend the reach of systems to hundreds or thousands of kilometers. The cost of optical amplifiers is now sufficiently low that we can consider their use in PONs, and the cost of an amplifier can be shared among numerous customers. The GPON protocol can support a logical reach of 60 km and a split ratio of 1:128, and optical amplifiers may be used to extend the physical reach. The transparency of optical amplifiers makes them suitable for both GPONs and gigabit Ethernet PONs (GEPONs). Optical amplifiers are a main technology for next-generation access (NGA) PONs.

Several benefits of extended-reach PONs exist. First, customers located far from the CO can still be connected. Second, where customers are sparsely distributed over a large area, optical amplifiers can be used to ensure good utilization of the shared PON. Third, depending on the end-to-end network design, extending the reach of a PON can enable node consolidation, which entails reducing the number of PON head-end locations that must be managed by the operator [76].

In metro and long-haul networks, EDFAs are extensively employed because they provide high gain, high output power and a low noise figure in the 1530-1565 nm range. Existing PON standards apply EDFAs for analog video broadcast (overlay PON). An alternative to fiber amplifiers is the SOA. While SOAs do not provide gain and noise figures that are comparable to those of EDFAs, their advantage is that they can operate at any wavelength. The gain dynamics of SOAs are also substantially faster than those of EDFAs, so SOAs can be used for burst-mode upstream traffic [76]. While the Raman amplifier can theoretically be used downstream, we have to consider the high price and the necessity of high-power, hazardous pumps. FEC is another important technology for extending the capability of PONs. While FEC is specified in GPONs and GEPONs, an enhanced version of FEC could be used in future PONs.
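The budget/split trade-off referenced above can be illustrated with a short Python sketch. It estimates the maximum fiber reach of an unamplified TDM-PON from the optical budget, an idealized splitter loss of 10·log10(N), and allowances for splitter excess loss and system margin; the two allowance values and the fiber attenuation are illustrative assumptions, not figures taken from the standard:

```python
import math

def pon_reach_km(budget_db: float, split: int,
                 fiber_db_per_km: float = 0.35,
                 splitter_excess_db: float = 2.0,
                 margin_db: float = 3.0) -> float:
    """Back-of-the-envelope maximum fiber reach for a TDM-PON: the optical
    budget minus the ideal splitter loss 10*log10(split), an assumed
    splitter excess loss and a system margin, divided by the per-km
    fiber attenuation (0.35 dB/km assumed for 1310 nm upstream)."""
    splitter_db = 10.0 * math.log10(split) + splitter_excess_db
    return (budget_db - splitter_db - margin_db) / fiber_db_per_km

# A GPON-style 28 dB budget for several split ratios:
for split in (32, 64, 128):
    print(f"1:{split:<4} -> ~{pon_reach_km(28.0, split):.0f} km")
```

With a 28 dB budget, the estimated reach collapses quickly as the split ratio grows (roughly 23, 14, and 6 km for 1:32, 1:64, and 1:128 under these assumptions), which is exactly the gap that the reach extenders and optical amplifiers discussed here are meant to close.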
Early proof-of-concept experiments have been performed using an optical amplifier at an intermediate powered location in combination with FEC, which suggests the feasibility of a 10 Gb/s PON with a split ratio of 1:1024 [77]. Other PON systems use the C-band, for example, coarse WDM (CWDM) wavelengths of 1530 nm and 1550 nm between the OLT and ONUs. If EDFAs are used as power boosters and preamplifiers, the maximum budget increase is reported to be 34 dB [31]. EDFAs are used in the 1550 nm region, where video overlay signals are transmitted. In [78], SOAs with Raman amplification are demonstrated for maximum speeds of 2.5 Gb/s. Raman pumping at 1270 nm is used, with a maximum pumping power of 1 W. The results for extending the reach in rural areas are promising, but 1 W corresponds to laser Class IV, and serious eye safety hazards must be carefully considered. British Telecom has demonstrated its long-reach PON. The system used EDFAs and SOAs. With the appropriate optical technologies, 10 Gb/s transmission was achieved in the downstream and upstream channels across 100 km to 1024 customers using a low-cost optical transceiver in the ONU situated at the customer premises [79]. The ACTS-PLANET project realized the SuperPON in 2000. The implemented system supports a total of 2048 ONUs and achieves a span of 100 km. The 100 km fiber span consists of a maximum feeder length of 90 km and an add/drop section of 10 km. EDFAs and SOAs were also used [79]. The Photonic System Group of University College Cork in Ireland has demonstrated the wavelength- and time-division multiplexing long-reach PON (WDM-TDM LR-PON). The network supports multiple wavelengths, and each wavelength pair can support a PON segment with a long distance (100 km) and a large split ratio (1:256). The LR-PON contains 17 PON segments, each of which supports symmetric 10 Gb/s upstream and downstream channels over a 100 km distance. The system can serve a large number of end-users: 17 × 256 = 4352 users [80]. The authors of [81], in cooperation with British Telecom, Alcatel, and Siemens, introduced the second-stage prototype of a photonic integrated extended metro and access network (PIEMAN) sponsored by Information Society Technologies (IST). PIEMAN consists of a 100 km transmission range with 32 DWDM channels, each of which operates at symmetric 10 Gb/s and serves one of 32 PON segments. The split ratio for each PON segment is 1:512; thus, the maximum number of supported users is 32 × 512 = 16384 end-users. Other long-reach topologies considered by researchers include ring-spur topologies for the long-reach PON. Each PON segment and the OLT are connected by a fiber ring, and each PON segment can exploit a traditional fiber-to-the-x (FTTx) network with a topology that consists of several "spurs" served from the "ring". The ring can cover a maximum metro area of 100 km. The natural advantages of the ring topology are two-way transmission and failure protection [79]. An example of this topology was demonstrated by ETRI, a Korean government-funded research institute, which has developed a hybrid LR-PON named WE-PON (WDM-E-PON). In the WE-PON, 16 wavelengths are transmitted on the ring and can be added and dropped to local PON segments via the remote node (RN) on the ring. The RN can include an optical add-drop multiplexer (OADM) and an optical amplifier. The split ratio of the PON segments is 1:32, and the system can support 512 end-users [82].
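The user counts quoted for these demonstrations follow from simple bookkeeping (wavelengths × split ratio), and the split ratio also sets a lower bound on the optical budget consumed by the splitter. The short sketch below reproduces those numbers; the "ideal splitter loss" is the lossless 10·log10(N) splitting penalty only, so real splitters with excess loss will consume more of the budget.

```python
import math

def total_users(wavelength_pairs: int, split_ratio: int) -> int:
    """Maximum number of end-users when each wavelength (pair) feeds one PON segment."""
    return wavelength_pairs * split_ratio

def ideal_splitter_loss_db(split_ratio: int) -> float:
    """Ideal (lossless) power-splitting penalty of a 1:N splitter, in dB."""
    return 10.0 * math.log10(split_ratio)

# Figures quoted above for the LR-PON demonstrations
for name, wavelengths, split in [("WDM-TDM LR-PON", 17, 256),
                                 ("PIEMAN", 32, 512),
                                 ("WE-PON", 16, 32)]:
    print(f"{name}: {total_users(wavelengths, split)} users, "
          f"1:{split} splitter >= {ideal_splitter_loss_db(split):.1f} dB splitting loss")
```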
Another demonstration of ring-based technology, called scalable advanced ring dense access network architecture (SARDANA), also implements "ring-and-spur" technology. In this system, 32 wavelengths are transmitted on the ring, with a split ratio of 1:32 for each wavelength. More than 1000 end-users are supported. The ONU units are based on a reflective semiconductor optical amplifier (RSOA) [83]. The comparison of LR-PON projects is depicted in Table 3. Many tests of different optical amplifiers in PONs have been conducted. In general, we suggest that use of the Brillouin amplifier is not feasible in this area of optoelectronics due to its specific properties [93][94][95][96][97][98][99][100][101]. EDFAs can be employed for analog radio frequency (RF) overlay video services or WDM-PONs, where the C-band or L-band is used [102][103][104][105][106][107]. Other types of fiber amplifiers can be used for PONs: a thulium-based amplifier downstream and a praseodymium-based amplifier upstream [108][109][110][111][112][113][114]. Raman amplifiers can be used for PONs; however, if we take into account the cost and the hazardous optical power, they are not the best solution for downstream transmission [115][116][117][118][119]. SOAs are among the most suitable candidates for future next-generation long-reach PONs: their low cost, sufficient gain, and small size position them well for future development [21,[120][121][122][123]. Conclusions In this paper we focused on reach extension in passive optical networks and its applications in optical access. Achieving longer distances is not possible without amplifiers or repeaters, so the article explains the basic principles of signal regeneration and amplification as well as the optical fiber amplifiers themselves. The history, general principles of operation, and basic configurations are explained for all types of amplifiers. While many standards for high-speed PONs exist and additional standards are being prepared, there are also new trends that have been barely documented. However, the lack of standards should not hinder the creation of new approaches, for example, the deployment of optical amplifiers in PONs, mainly EDFAs, Raman amplifiers, and SOAs. An evaluation measurement was performed to verify the dependence of the saturation power of the Raman amplifier on the pump power level. The measurements have shown that even relatively small pump diode powers (≈300 mW) allow the Raman amplifier to amplify the transmitted signal. In addition to explaining the basics of amplification and measuring the amplification itself with a Raman amplifier, the article provides a comprehensive overview of the current state of research in the use of optical fiber amplifiers in PON networks. Both simple solutions that would be easily implementable in practice and complex solutions with signal regeneration are presented. New trends of open networking promoted by hyperscale data center companies should be considered in PON deployment to avoid undesirable vendor dependencies and lock-ins. Open networking can ensure that technologies are replaced or migrated to new equipment as needed, especially when deploying out-of-box optical equipment, whether 2R or 3R, in PON ecosystems. These new open trends are not yet standardized in many cases but should not be disregarded, because they are emerging in many parts of the world, especially in North America and Asia.
Additionally, we believe that, with optical amplification, the support of new applications, such as accurate time transfer or distributed fiber sensing, could be important for PON end-users. This new class of applications may not appear appropriate for a PON environment at first, but future user requirements and new open approaches may well make use of it. Acknowledgments: Tomas would like to thank Ales Buksa, in memoriam, for his support at the University; Ales taught and inspired him in many things in his personal life. Acknowledgment is also given to CESNET for technical support and the equipment used for the measurement. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript:
A model of multi-pass absorption of external EC radiation at initial stage of discharge in ITER A model is developed for multi-pass absorption of external electron cyclotron radiation (ECR) in tokamaks, which is used at the initial stage of discharge to overcome the impurity radiation barrier (burn-through). The model is based on a semi-analytical solution of the ECR transport problem in the case of multiple reflection of radiation from the wall of the vacuum chamber. We estimate the efficiency of absorption of injected radiation for typical values of the electron temperature and density at the initial stage of discharge in ITER. Introduction Due to technological issues, the ohmic plasma breakdown in the tokamak-reactor ITER is only possible over a narrow range of plasma pressure and magnetic field errors [1]. In this connection, for reliable plasma start-up in ITER it is planned to use electron cyclotron resonance heating (ECH, ECH-assisted start-up) [2], [3]. ECH is a standard way of plasma start-up in stellarators and has already been shown to be an effective tool for plasma breakdown in tokamaks [4]. The main functions of the ITER ECH&CD system are as follows [3]: • ECH-assisted plasma start-up: assistance to initial breakdown and heating during the current ramp-up, • auxiliary heating to achieve the H mode and the fusion energy gain factor Q = 10, • steady-state on-axis and off-axis current drive, • MHD instability control by localized current drive. In ITER, over a wide range of operating parameters, all the requirements on the ECH&CD system may be fulfilled by using fixed-frequency gyrotrons (170 GHz) launching an elliptically polarized ordinary wave (O-mode) from the low-magnetic-field side. Basic parameters of the ITER EC-heating system are shown in fig. 1. At the initial stage of the discharge, due to the low electron temperature, the absorption of the O-mode is very small. Therefore, to ensure the burn-through, the ordinary wave is launched at an oblique angle to the magnetic field (for ITER, toroidal injection angle Φt ≈ 20°), which leads to the conversion of the ordinary mode to the extraordinary (X-mode) one in the reflections of the EC radiation from the wall of the vacuum chamber [1]. The fundamental harmonic of the X-mode is strongly absorbed even at zero electron temperature, which makes it ideal for use in start-up scenarios [5]. It is assumed that after a few reflections of the O-mode from the wall (about 4 reflections), 75% of the injected power (if only the mode conversion takes place) will be converted to X-mode and absorbed by the plasma [1]. There are alternative scenarios of RF heating at the initial stage of discharge. For example, electron Bernstein waves (EBW) can be used for heating, because there is no density cut-off for these electrostatic waves and they are fully absorbed in the EC resonance zone. EBW cannot exist without plasma, so EBW cannot be launched from an antenna outside the plasma. One can use the X-mode to EBW conversion in the upper hybrid resonance (the XB scheme, namely, launch of the X-waves from the high-magnetic-field side and their conversion to Bernstein waves, and the OXB scheme, namely, launch of the O-wave, its full conversion in the first wall reflection to the X-wave and then conversion of the X-wave to EBW) [6].
Modelling of the initial stage of plasma discharge in ITER with the 0D model [1] showed that, in a wide range of initial conditions and taking into account beryllium impurities, 3 MW of absorbed external EC radiation is needed to achieve the plasma breakdown (for carbon impurities, even 5 MW of absorbed power is not enough). However, in [1] the efficiency of EC absorption was not calculated. Modeling of ECH and ECCD was performed using kinetic codes (solution of the Fokker-Planck (FP) equation): the OGRAY code [7]; or the ray-tracing code GENRAY [8] + FP code CQL3D [9]. Simplified models were also used [10], [11]. In all these calculations the single-pass absorption model has been used. Recent 1D simulations [12] of the ECH start-up in ITER with the help of the OGRAY code [7] for the ECH calculations showed that the planned EC power could be insufficient for plasma breakdown because of the low efficiency of the single-pass ECH power absorption. When the high power of ECH is injected at the initial stage of discharge (low-density plasma), we have to allow for multi-pass absorption. Here we propose a model for calculating the efficiency of the absorption of external EC power in tokamaks at the initial stage of discharge. Basic parameters of ITER: major torus radius R0 = 6.2 m, minor radius (in the equatorial plane) a = 2 m, toroidal magnetic field on the toroidal axis B(R0) ≡ B0 = 5.3 T. The ECRH&ECCD system in ITER [3]: gyrotron frequency ν_inject = 170 GHz, O-mode, injected power of EC waves P_inject = 24 MW, toroidal injection angle Φ = 20°. Single-pass absorption of EC radiation The absorption of the injected EC radiation power, P_inject, can be calculated by the formula P_absorp^(O,X) = P_inject · f_(O,X) · [1 − exp(−τ_(O,X),eff)], (1) where the coefficient f_(O,X) determines the fraction of the radiation in the respective EC wave mode, ordinary (O-mode) or extraordinary (X-mode), and τ_(O,X),eff is the effective optical thickness of the plasma column. The efficiency of the ECH absorption is defined as γ_ECH = P_absorp / P_inject. (2) The authors of the 1D simulations [12] of plasma start-up in ITER geometry (see fig. 1), with the OGRAY code [7] for the ECH calculations, propose a scaling formula (3) for the single-pass ECH absorption, valid for 10 ≤ Te [eV] ≤ 1000 and 10^17 ≤ ne [m^-3] ≤ 10^19. It is worth noting that in the OGRAY calculations [12] the distortion of the electron velocity distribution function by strong EC wave absorption was not taken into account. A rough estimate of γ_ECH can be obtained from equation (1) using the analytical formulas for the effective optical thickness of the plasma column [13] (table XII, p. 1206); in the vacuum limit, for the case of O-mode propagation perpendicular to the magnetic field, this yields formula (4). The comparison of the single-pass models for ECH absorption, (3) and (4), is given in Fig. 2. Figure 2 shows that for plasma parameters at the initial stage of discharge in ITER, ne ≤ 0.1×10^19 m^-3 and Te ≤ 80 eV, the efficiency of ECH in ITER (O-mode) will be less than 1-5%. Formula (4) underestimates the absorption efficiency of EC heating by ~30% in comparison with scaling (3). A model for multi-pass absorption of EC radiation If the single-pass ECH absorption is small, we have to consider multi-pass absorption. We propose a model for multi-pass ECH absorption based on the following assumptions: • multiple reflection of the EC wave from the wall of the vacuum chamber, • isotropy/uniformity of the injected EC radiation intensity in the plasma, • EC mode mixing in wall reflections, • full single-pass X-mode absorption.
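Before turning to the multi-pass model, the single-pass expression (1)-(2) as reconstructed above can be evaluated directly; the minimal sketch below assumes a pure O-mode launch (f_O = 1) and uses placeholder values of the effective optical thickness rather than the OGRAY scaling (3) from [12].

```python
import math

def single_pass_efficiency(f_mode: float, tau_eff: float) -> float:
    """Single-pass ECH absorption efficiency, gamma = f * (1 - exp(-tau_eff)),
    where f_mode is the fraction of injected power in the given mode (O or X)
    and tau_eff is the effective optical thickness of the plasma column."""
    return f_mode * (1.0 - math.exp(-tau_eff))

# Illustrative numbers only: small optical thickness, as expected for the
# low-Te, low-ne plasma at the initial stage of the discharge.
for tau in (0.01, 0.05, 0.1):
    print(f"tau_eff = {tau:>5}: gamma_ECH = {single_pass_efficiency(1.0, tau):.3f}")
```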
The model modifies the approach of the CYTRAN code [14] and the CYNEQ code [15], [16], developed for the transport of plasma-produced EC radiation at moderate and high EC harmonics and verified in the benchmarkings [17]-[19]. We consider two terms in the total absorbed power of the injected ECH: the single-pass absorption of the injected EC wave (O-mode) and the above model for multi-pass absorption after the first reflection of the EC wave from the wall: P_absorp = P_absorp^Single + P_absorp^Multi. (5) For the case of the injected O-mode, the single-pass absorption is calculated by formula (1) with f_X = 0 (eq. (6)). We use the OGRAY scaling formula (3) for the single-pass absorption, so the effective optical thickness in (6) is given by formula (7). The multi-pass absorption is calculated from the intensity of the EC radiation (eq. (8)). For this intensity we use a semi-analytical solution of the radiative transfer problem for the case of multiple reflection of radiation from the wall [14], [15], [16]. In the frame of this approach we assume isotropy/uniformity of the injected EC radiation intensity, so eq. (8) may be rewritten in the form of eq. (9). From the energy balance equation one can obtain eqs. (10)-(13) (see fig. 3), where q_X,O is the power density of the ECR source, S_tot is the area of the inner surface of the vacuum chamber, and R_ςς' is the wall reflection coefficient for the incident ς mode and the reflected ς' mode, cf. fig. 3 (we assume R_ςς' = const). We consider the case when the X-mode that appears in wall reflections is fully absorbed in a single pass through the plasma. In this case, eqs. (5)-(7) and (9)-(13) give formula (14) for the efficiency of the ECH absorption. The comparison of the single-pass and multi-pass models for the ECH absorption efficiency is given in figs. 4-5. The dependence of the ECH efficiency on the reflection coefficients is shown in figs. 6-8. The calculations for the multi-pass model are given for the plasma shape with parameters Lz ≈ 3.6 m, LX ≈ 3 m (see fig. 1), obtained in self-consistent calculations of the ITER start-up scenario in [12]. The absorption of the O1-mode is calculated with formulas from [13] (table XI, p. 1202). Figure 4 illustrates the increase of the efficiency of absorption of the EC waves when the multi-pass EC wave absorption is taken into account (assuming complete single-pass absorption of the X-mode that appears in wall reflections). It turns out that in the multi-pass absorption model, at a fixed electron temperature, a given value of the ECH efficiency is achieved at lower values of the electron density. Figure 5 shows the increase of the ECH efficiency with increasing fraction of the EC radiation converted from the O-wave to the X-wave in wall reflections. It is important to study the dependence of the EC heating efficiency simultaneously on two parameters: the wave reflection coefficient from the wall and the proportion of radiation passing from the O- to the X-mode in wall reflections. In the case of small absorption of the O-mode, eqs. (14)-(15) simplify, as illustrated in fig. 9.
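The sketch below illustrates only the bookkeeping spirit of such a multi-pass model (successive wall reflections with partial O→X conversion and full single-pass X-mode absorption); it is an intentionally simplified illustration, not the paper's eqs. (5)-(15), and the single-pass absorption and reflection coefficients are free parameters chosen in the spirit of figs. 4-5.

```python
def multipass_efficiency(a_single: float, r_oo: float, r_ox: float, n_bounces: int = 200) -> float:
    """Illustrative multi-pass absorption estimate (first pass plus later bounces).

    a_single : single-pass O-mode absorption fraction in the plasma
    r_oo     : O -> O wall reflection coefficient
    r_ox     : O -> X conversion coefficient in a wall reflection
               (the X-mode is assumed to be fully absorbed on its next pass)
    The remainder (1 - r_oo - r_ox) is treated as lost in the wall.
    """
    p_o, absorbed = 1.0, 0.0
    for _ in range(n_bounces):
        absorbed += p_o * a_single        # O-mode absorbed during this pass
        p_o *= (1.0 - a_single)           # O-mode power reaching the wall
        absorbed += p_o * r_ox            # converted X-mode, absorbed on the next pass
        p_o *= r_oo                       # O-mode surviving the wall reflection
    return absorbed

# Weak single-pass O-mode absorption (2 %) still yields ~29 % total absorption here,
# illustrating the enhancement the multi-pass mechanism can provide.
print(multipass_efficiency(a_single=0.02, r_oo=0.6, r_ox=0.1))
```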
Conclusions A model for calculating the efficiency of the multi-pass absorption of the EC heating power in tokamaks at the initial stage of discharge is proposed. The single-pass absorption of injected EC radiation is evaluated with the scaling [12] obtained using the OGRAY code. For the subsequent multi-pass absorption, after the first reflection of the EC wave from the wall of the vacuum chamber, we develop a model based on the assumption of isotropy/uniformity of the respective EC radiation intensity in the plasma (a semi-analytical solution of the radiative transfer problem for the case of multiple reflection of radiation from the wall). The model modifies the approach of the CYNEQ code [15], [18], developed for the transport of plasma-produced EC radiation at high EC harmonics and verified in the benchmarkings [17]-[19]. In the frame of this approach, we consider the following case: (a) multiple reflection of the injected EC wave (O-mode) from the wall; (b) polarization scrambling in wall reflections; (c) full single-pass absorption of the X-mode. Our parametric analysis of the efficiency of multi-pass absorption of injected EC radiation for typical values of the electron temperature and density at the initial stage of discharge in an ITER-like tokamak shows a strong dependence on the O-X conversion in wall reflections. The performed parametric analysis of the efficiency of ECH absorption in ITER at the initial stage of discharge indicates that further investigation of the role of multi-pass absorption effects is needed. The considered case of full single-pass X-mode absorption is an estimate of the most optimistic scenario. Furthermore, the effects of multi-pass absorption of injected EC radiation were studied within a transport model which allows only for the EC waves. Figure captions: Figure 1. Electron cyclotron heating in the geometry of the tokamak-reactor ITER: (a) top view in the R, Y plane; (b) cross-section of the tokamak in the R, Z plane. The direction of propagation of the EC beam is given by the unit vector s(Θ, Φ); k is the wave vector; the index ς = X, O denotes, respectively, the extraordinary (X) and ordinary (O) wave types of the EC radiation intensity and absorption coefficient. Figure 2. Comparison of the level lines of the efficiency of the single-pass ECH absorption in ITER in the (Te [eV], ne [10^19 m^-3]) plane: OGRAY scaling (solid) vs eq. (4) (dashed). Figure 3. Schematic diagram of the boundary conditions for the intensity of the EC radiation, I, for the case of mode-dependent reflection and polarization scrambling; the subscripts 'inc', 'ref' and 'out' denote incoming, reflected and outgoing radiation, respectively. Figure 4. Comparison of the level lines of the ECH power absorption efficiency, γ_ECH: single-pass O-wave absorption scaling obtained in [12] with the OGRAY code (solid); multi-pass absorption model for the coefficient of O-wave reflection from the wall R_OO = 0.6 and the coefficient of O-mode to X-mode conversion in wall reflections R_OX = 0.1 (dashed). Figure 5. Comparison of the efficiency of the single-pass ECH absorption model (OGRAY scaling, solid) and the suggested multi-pass absorption model (dashed) as a function of electron temperature for ne = 0.1×10^19 m^-3; calculations in the multi-pass absorption model are carried out for R_OO = 0.6 and several values of R_OX. Figure 6. Ternary contour plot displaying the level lines of the efficiency of ECH absorption in ITER in the multi-pass absorption model for Te = 10 eV, ne = 0.2×10^19 m^-3; the parameter p is the polarization scrambling parameter (the percentage of radiation converted from one mode to another in a wall reflection). Figure 7. The level lines of the ECH power absorption efficiency, γ_ECH, in the multi-pass absorption model for R_OO = 0.7, R_OX = 0.05. Figure 8. The level lines of the ECH power absorption efficiency, γ_ECH, in the multi-pass absorption model for R_OO = 0.6, R_OX = 0.1. Figure 9. The efficiency of ECH absorption in ITER in the multi-pass absorption model in the case of weak absorption of the O-wave.
Gradient Boosting Prediction of Overlapping Genes From Weighted Co-expression and Differential Gene Expression Analysis of Wnt Pathway: An Artificial Intelligence-Based Bioinformatics Study Introduction The Wnt (wingless-related integration site) signalling pathway is crucial for bone formation and remodelling, regulating the commitment of mesenchymal stem cells (MSCs) to the osteoblastic lineage. It triggers the transcriptional activation of Wnt target genes and promotes osteoblast proliferation and survival. Weighted co-expression network analysis (WGCNA) and differential gene expression analysis help researchers understand gene roles. Gradient boosting, a machine learning technique, enhances understanding of genetic and molecular mechanisms contributing to overlap genes, improving gene regulation and functional genomics. The aim is to predict overlapping genes in the Wnt signalling pathway. Methods Differential gene expression analysis was performed using the National Center for Biotechnology Information (NCBI) geo dataset-GSE251951, focusing on the effect of Wnt signaling on treatment. The WGCNA module was analyzed using the iDEP tool to identify interconnected gene clusters. Hub genes were identified by calculating module eigengenes, correlated with external traits, and ranked based on module membership values. The study utilized gradient boosting, an ensemble learning method, to predict models, evaluate their performance using metrics like accuracy, precision, recall, and F1 score, and adjust predictions based on gradient and learning rate. Results The dendrogram uses the "Dynamic TreeCut" algorithm to analyze gene clusters, aiding researchers in understanding gene modules and biological processes, identifying co-expressed genes, and discovering new pathways. The confusion matrix displays 88 actual and predicted cases. The gradient boosting model achieves 78.9% accuracy in predicting Wnt pathway overlapping genes, with a respectable area under the curve (AUC) and classification accuracy values. It accurately predicts 73.9% of samples, with a high precision ratio and low recall. Conclusion Future research should enhance differential expression analysis and WGCNA to identify key Wnt pathway genes, improve sensitivity, specificity, hyperparameter tuning, and validation experiments, and use larger datasets. Introduction The Wnt (wingless-related integration site) signalling pathway is a crucial part of bone formation and remodelling, regulating the commitment of mesenchymal stem cells (MSCs) to the osteoblastic lineage, osteoblast proliferation, and differentiation.It binds Wnt ligands to cell surface receptors, stabilizing and nuclear translocating β-catenin, a cytoplasmic protein [1].The Wnt pathway also promotes osteoblast proliferation and survival by stimulating the production of growth factors and cytokines and inhibiting apoptosis.It also influences osteoblast function by balancing osteoblast and osteoclast activity in bone remodelling, stimulating osteoprotegerin production, a decoy receptor that inhibits osteoclast differentiation and activity, thereby promoting bone formation.The Wnt pathway is crucial for bone formation, but its dysregulation can lead to pathological conditions.Mutations in LRP5 or β-catenin can cause high bone mass disorders, while loss-of-function mutations can cause low bone mass disorders.The Wnt pathway promotes bone formation, osteoblast proliferation, survival, and function, highlighting its importance in bone biology [2]. 
Mesenchymal stem cells (MSCs) have the potential to differentiate into bone, cartilage, fat, tendon, and muscle tissues.They are harvested from the patient's body, especially from bone marrow, and have therapeutic potential in regenerative medicine.The Wnt signalling pathway plays a crucial role in promoting the osteogenic differentiation of MSCs [3].The pathway inhibits adipogenic differentiation and upregulates osteogenic regulators, contributing to the progression of MSCs into mature osteoblasts.The noncanonical Wnt pathway also induces osteogenic differentiation through a different mechanism.Wnt pathways [4] and other signalling pathways regulate osteogenic differentiation in MSCs.BMPs can enhance or antagonize Wnt-induced differentiation, with BMP2, 6, and 9 major osteogenic growth factors [5].Functional Wnt signalling is required for BMP-induced differentiation, and knocking out BMP receptor type 1 leads to increased bone mass.The inactivating mutation of LRP5 causes osteoporosis pseudo glioma syndrome (OPPG), characterized by early-onset osteoporosis, low bone mineral density, and blindness.In mice, inactivating mutations impair fracture healing, while a gain-of-function missense mutation leads to highbone-mass phenotypes.LRP6 mutations severely affect osteogenic development in humans and mice, leading to osteoporosis, low BMD, neonatal death, and limb abnormalities [6]. Combining weighted gene co-expression network analysis (WGCNA) [7] and differential gene expression (DGE) analysis is a powerful method for understanding complex biological processes, identifying gene overlap, and understanding gene regulation networks.WGCNA and DGE analysis are powerful tools for analysing high-dimensional gene expression data.WGCNA groups genes based on co-expression patterns [8], while DGE analysis identifies differential expression between conditions or phenotypes potentially associated with specific biological functions.WGCNA and DGE analysis can be integrated by identifying gene overlaps between WGCNA-identified modules and differentially expressed genes.These are crucial regulators or functional drivers of biological processes, demonstrating significant changes in expression levels across conditions. WGCNA and DGE analysis enable researchers to understand overlapping genes' functional roles, enabling functional enrichment analysis like gene ontology or pathway analysis, and providing insights into biological processes.Integrating WGCNA and DGE analysis can reveal regulatory relationships between overlapping genes.WGCNA creates co-expression networks, while DGE analysis incorporates differential expression information.This helps identify specific regulatory relationships for conditions or phenotypes, revealing key factors driving biological processes [9].Overlap genes are shared genes in biological processes or molecular pathways.Predicting overlap genes can provide insights into functional relationships.Gradient boosting, a machine learning technique, combines multiple weak predictive models to create a strong predictive model.This approach helps researchers understand genetic and molecular mechanisms contributing to overlap genes, providing valuable insights for complex biological processes.It enhances understanding of functional relationships and regulatory networks among genes, improving gene regulation and functional genomics.So, we aim to predict the overlapping genes in the Wnt signalling pathway from WGCNA and differentially expressed genes using gradient boosting. 
Materials And Methods This computational study was conducted at Saveetha Dental College, Chennai, India between May 1 and May 31, 2024.This study employed a computational approach to investigate the potential of Wnt signaling in osteoporosis treatment. DGE analysis Using the National Center for Biotechnology Information (NCBI) geo dataset GSE251951 [10], DGE was performed using the Gene Expression Omnibus (GEO) tool.The dataset reveals whether Wnt signaling can induce effective osteoporosis treatment, promoting bone formation through aerobic glycolysis in the Mus musculus in a computational model.Datasets were divided into nonexposed to wnt3a and exposed to wnt3a and DGE.The results were analyzed for differentially expressed genes, fold changes, p-values, and adjusted p-values. WGCNA The WGCNA module used the iDEP tool [11], a standardized gene expression dataset, to identify highly interconnected gene clusters, ensuring comparable gene and sample distributions.WGCNA calculates pairwise gene correlations, constructing an adjacency matrix and transforming it into a topological overlap matrix (TOM), measuring gene interconnectedness within a network [12].The TOM generates a hierarchical clustering tree (dendrogram) using the average linkage method, grouping genes with similar expression patterns indicating potential functional relationships.The Dynamic TreeCut algorithm is used to identify distinct modules or clusters within the co-expression network, with a minimum module size parameter of 30 genes.Identifying modules allows for the characterization and analysis of their biological relevance through gene ontology, biological pathways, or functional annotations of genes within each module.The WGCNA analysis in iDEP aids researchers in identifying gene relationships, functionally related groups, regulatory mechanisms, key genes, and modules associated with specific biological processes or diseases. Identification of hub genes In WGCNA, hub genes are highly connected genes within a module that are crucial for the module's functioning and may have key roles in the biological process or disease being studied.To identify hub genes, we have to calculate module eigengenes (MEs) for each module, correlate them with external traits, identify the module with the highest correlation, extract gene membership and module eigengene values, rank genes within the module based on module membership (MM) values, and further characterize hub genes by examining their functional annotations, gene ontology terms, or biological pathways. Identification and prediction of overlap genes Top hub genes from WGCNA and DGE were tabulated, and the prediction model was performed using gradient boosting.The model was trained sequentially on the training data to predict overlap genes using gradient boosting, learning from previous mistakes to improve predictive accuracy.The model's performance could be evaluated using accuracy, precision, recall, and F1 score metrics.The model's importance could be understood by interpreting its feature importance scores.Finally, the model could predict new data, identifying overlap genes based on the selected features and learned importance.The data was divided into training (80%) and test (20%) datasets, respectively.Preprocessing steps such as outliers' removal and data normalization were applied. 
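A minimal sketch of the training and evaluation pipeline described above is given below, using scikit-learn's GradientBoostingClassifier with an 80/20 split and feature normalization; the feature matrix, overlap labels, and hyperparameter values are placeholders and do not correspond to the actual GSE251951-derived features used in the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Placeholder data: rows = genes, columns = features (e.g. log2 fold change,
# adjusted p-value, module membership); y = 1 if the gene is an "overlap" gene.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# 80/20 split and normalization, as described in the Methods.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
print("F1 score :", f1_score(y_te, pred))
print("feature importances:", model.feature_importances_)
```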
Gradient boosting architecture Gradient boosting is an ensemble learning method that combines multiple weak learners, often in the form of decision trees.A loss function is used to quantify the difference between predicted and actual values.Gradient descent optimization minimizes the loss function, and the model adjusts predictions based on the gradient and learning rate.The learning rate controls the contribution of each base model to the ensemble, and regularization techniques like shrinkage or dropout can be applied to avoid overfitting and improve generalization.Feature importance scores indicate the relative importance of each feature in making predictions. FIGURE 1: Volcano plot of the top 250 differential gene expressions X-axis represents log2 fold change and the y-axis represents -log10 p-value. Statistically significant genes have a -log10 p-value greater than 1.3, while upregulated genes have a log2 fold change greater than 1.5. The 249-row dataset exhibited a broad spectrum of up-and down-regulated genes.While statistically significant differentially expressed genes were identified at a conventional threshold, the relatively low number suggests further exploration may be warranted, as depicted in Figure 1.WGCNA provided a hierarchical overview of gene expression patterns, a topological overlap matrix for quantifying gene interrelationships, and a soft thresholding approach to convert raw co-expression values into a weighted adjacency matrix.These outputs facilitate the identification of gene modules and associated biological processes.Dendrograms enabled the analysis of gene clusters, delineated by a Dynamic TreeCut algorithm.The colour-coded bar is segmented into four colours (Figures 2, 3) that represent distinct gene groups based on expression patterns, aiding in the comprehension of functional relationships, co-expressed gene identification, and the potential discovery of novel pathways or regulatory mechanisms. FIGURE 2: Gene dendrogram and module colors This image shows how genes are grouped based on their similarity.Genes with similar characteristics are clustered together.The height of the branches indicates how different the gene groups are.The colored blocks at the bottom represent different groups of genes. FIGURE 3: Graphical representation of relationships between various entities, typically used to visualize interactions within a system. This analysis helps understand functional relationships, identify co-expressed genes, and potentially discover new pathways or regulatory mechanisms in biological systems.Network nodes labelled with identifiers like Gm24045, mt-Tv, mt-Ts1, Gm24399, Mir361, Mir152, and Gm37308 suggest they could represent biological genes or molecular entities.Edges, lines connecting nodes, indicate relationships or interactions.Density indicates strong or multiple interactions, while sparser connections indicate less interaction.Network analysis requires selecting an optimal soft threshold power, typically above 0.8 or 0.9, to construct meaningful biological networks from gene expression data, identifying co-expressed gene modules and key drivers. 
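For illustration, the co-expression steps described above (pairwise correlation, soft-threshold adjacency, topological overlap) can be sketched in a few lines of numpy; this is not the iDEP/WGCNA implementation, the TOM expression is the standard unsigned form, and the expression matrix is a random placeholder.

```python
import numpy as np

def soft_threshold_adjacency(expr: np.ndarray, power: int = 6) -> np.ndarray:
    """expr: samples x genes. Unsigned WGCNA-style adjacency |cor|^power."""
    cor = np.corrcoef(expr, rowvar=False)      # genes x genes correlation
    adj = np.abs(cor) ** power
    np.fill_diagonal(adj, 0.0)                  # no self-connections
    return adj

def topological_overlap(adj: np.ndarray) -> np.ndarray:
    """TOM_ij = (l_ij + a_ij) / (min(k_i, k_j) + 1 - a_ij), with l = shared-neighbour sums."""
    k = adj.sum(axis=1)
    l = adj @ adj
    tom = (l + adj) / (np.minimum.outer(k, k) + 1.0 - adj)
    np.fill_diagonal(tom, 1.0)
    return tom

# Placeholder expression matrix: 20 samples x 100 genes.
rng = np.random.default_rng(1)
expr = rng.normal(size=(20, 100))
dissimilarity = 1.0 - topological_overlap(soft_threshold_adjacency(expr, power=6))
print(dissimilarity.shape)   # input for hierarchical clustering / Dynamic TreeCut
```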
Network nodes, labelled with identifiers such as Gm24045, mt-Tv, mt-Ts1, Gm24399, Mir361, Mir152, and Gm37308, represent biological genes or molecular entities.Edges connecting these nodes signify relationships or interactions.Node density indicates the strength or frequency of interactions.To construct meaningful biological networks from gene expression data, an optimal soft threshold power, typically exceeding 0.8 or 0.9, is essential for identifying co-expressed gene modules and key regulatory elements, as visualized in Figures 4, 5. FIGURE 4: Scale independence Scale Independence plots the scale-free topology model fit (y-axis) against the soft threshold power (x-axis).The x-axis represents the soft threshold power, a parameter used in network analysis, particularly in WGCNA, to highlight stronger correlations between nodes.The y-axis indicates the scale-free topology model fit, with a higher R 2 value indicating better conformity to a scale-free topology, a crucial assumption in network analysis methods.The graph shows that as soft threshold power increases, the scale-free topology model fit initially increases, peaking and then stabilizing or slightly declining. FIGURE 5: Scale-free topology model fit The x-axis, a parameter in network analysis, ranges from 1 to 20, highlighting stronger correlations between nodes and minimizing weaker ones.The graph shows the scale-free topology model fit, with a higher R 2 value indicating better conformity.Data points are labeled with soft threshold power numbers, and the graph helps select an optimal threshold power for the best scale-free properties. The gradient boosting model performed well in predicting the target variable, with an area under the curve (AUC) value of 0.789 and a classification accuracy of 0.739.However, the model's accuracy is limited by the specific domain and context of the problem.The model's F1 score of 0.706 indicates a balanced trade-off between precision and recall, with a precision value of 0.749 indicating a low false positive rate, resulting in 75% accuracy.The model's recall value of 0.739 indicates a reasonable ability to identify positive instances, identifying approximately 74% of the actual positive instances in the dataset.The gradient boosting model, with a specificity value of 0.564, shows moderate accuracy, precision, recall, and F1 score, but struggles with identifying negative instances.Improvement in specificity is needed, potentially through adjusting thresholds or exploring other models.( The confusion matrix displays 88 actual and predicted cases, assessing the model's performance in distinguishing between "non-overlap" and "overlap" categories and identifying strengths and weaknesses in sensitivity and specificity (Figure 6).The true negative cell shows that the model correctly predicted "nonoverlap" 73.0% of the time, about 42 cases out of 57.The model's false positive and false negative cells indicate that it incorrectly predicted "overlap" and "non-overlap" cases, respectively, at 21.4% and 27.0% of the cases. 
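The reported performance measures are all simple functions of the confusion-matrix counts; the helper below makes those relations explicit with purely illustrative counts (88 test cases in total, as in Figure 6), not the exact counts behind Table 1.

```python
def metrics_from_confusion(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Derive the usual binary-classification metrics from raw confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "accuracy": accuracy, "F1": f1}

# Illustrative counts only (they sum to 88 cases).
print(metrics_from_confusion(tp=23, fp=15, tn=42, fn=8))
```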
Discussion Recent research on the canonical Wnt pathway has provided new insights into the regulation of this pathway. Activation of the canonical Wnt pathway leads to the accumulation and movement of β-catenin into the nucleus, activating transcription factors that control specific genes involved in cellular development. The intracellular signaling of Wnt is complex due to the involvement of multiple Fz receptors and the recently established role of LRP5 and LRP6 as co-receptors for Wnt proteins [13]. The Wnt pathway is a promising therapeutic target for bone repair and skeletal homeostasis, with abnormalities in Wnt/β-catenin signaling implicated in osteoarthritis. Sclerostin, a product of the SOST gene, inhibits Wnt signaling and is being investigated for osteoporosis [14]. Dual inhibition of Wnt and sclerostin antibody treatment results in synergistic bone formation, and this treatment is being explored in clinical trials for various medical conditions. In this study, we analyzed weighted co-expression analysis and differential gene expression of the Wnt-based pathway. Limited research exists on the identified hub nodes (Gm24045, mt-Tv, mt-Ts1, Gm24399, Mir361, Mir152, and Gm37308) in the Wnt pathway and bone formation of mice. These non-coding genes may have regulatory functions, but their specific roles remain unclear. mt-Tv and mt-Ts1 are transfer RNAs involved in mitochondrial function and energy production, while Mir361 and Mir152 are microRNAs [15] that regulate gene expression [4]. Further studies are needed to understand their functions and potential contributions, similar to studies that identified predictive hub genes. Six feature genes (AADAT, APOF, GPC3, LPA, MASP1, and NAT2) were identified using machine learning algorithms such as Random Forest and support vector machine-recursive feature elimination [9,16], and another similar study developed a four-gene model for diagnosing sepsis/severe acute respiratory distress syndrome (ARDS), demonstrating high diagnostic and predictive performance through calibration curves and decision curve analyses [8,9,17]. Future research should investigate differentially expressed genes to identify key genes and pathways in the Wnt pathway. WGCNA can provide insights into gene expression patterns and functional relationships, as shown in Figures 2-5. Network analysis can help understand interactions between genes and regulatory mechanisms. Future directions should focus on improving the model's sensitivity and specificity, tuning hyperparameters, and conducting comprehensive validation experiments. However, limitations include the dependence on dataset quality and uncertain generalizability to other biological systems.
The gradient boosting model achieves 78.9% accuracy in predicting Wnt pathway overlapping genes, with respectable AUC and CA values.It accurately predicts 73.9% of samples, with a high precision ratio and low recall, as shown in Figure 6.Its high precision ratio indicates low false positives and a low recall ratio, suggesting accurate predictions.However, the model's lower F1 score suggests it needs improvement in balancing precision and recall.Future improvements could involve improving the recall rate, reducing false negatives, and capturing more true overlapping genes.The quality of training data and gene annotations limits the model's performance.Obtaining a larger dataset and conducting further experimental validations could enhance its performance. Conclusions The present study aimed to elucidate the intricate regulatory mechanisms within the Wnt signaling pathway, a key determinant of osteogenesis.By integrating DGE analysis and WGCNA, the current study identifies pivotal gene modules and their potential roles in bone formation.The application of gradient boosting provided a computational framework for predicting gene interactions within this pathway. While our findings offer preliminary insights into the complex architecture of the Wnt signalling network, further research is imperative to validate these observations in larger and more diverse cohorts.A deeper understanding of the identified gene modules and their functional implications is essential for translating these findings into clinically relevant applications.Ultimately, unravelling the complexities of Wnt signalling holds the potential to inform the development of novel therapeutic strategies for bone-related disorders. TABLE 1 : Classification model performance AUC: area under the curve; CA: classification accuracy; F1: F-score
Omega transition accompanied by mechanically-induced twinned martensite We present here an analysis of the omega transition process during martensitic transformation. Martensitic transformation occurred during deformation at room temperature, avoiding the influence of auto-tempering on the metastable omega phase. Based on the crystallographic relationships of twinned crystals, the twin interface of twinned martensite was characterized by considering the effect of the direction of the incident electron beam on diffraction patterns through pole figures. Omega phase existed only at the boundaries of twinned martensite with a single variant. It is proposed that the lattice-invariant twin shear during the dynamic transformation of twinned martensite promoted the formation of the omega phase. Introduction Omega (ω) phase is present as a metastable hexagonal structure in ordered body-centered cubic (bcc) alloy systems, especially in group IV, V and VI transition metals and their alloys [1,2]. The ω-lattice usually forms on a pair of (111) planes within the bcc lattice, leaving the adjacent (111) planes unaltered [1]. For the {112}⟨111⟩ mechanical twinning system of bcc metals and alloys, a transformation mechanism of the ω phase was proposed in which mechanical twinning shear dominates the formation of the ω phase in Ta-W-based alloys and Ti-Nb-based alloys [3][4][5]. The ω-lattice mechanism was proposed involving the possibility of reverse transformation from ω to bcc in Ti-based alloys [6]. The ω phase was first reported in Fe-C alloys (group VIII), especially in twinned martensite with high carbon content [7,8]. The proposed formation mechanism of the two-phase structure (ω+bcc) assumed that austenite directly transformed to ω+bcc structures [9,10]. Based on the ω-lattice mechanism in bcc alloys, a formation mechanism of twinned and lath martensite was suggested [11]: twinned martensite forms from the unstable ω phase through the ω-lattice mechanism during the quenching process, and subsequently transforms to lath martensite during auto-tempering via the movement of twin boundaries [11][12][13]. Generally, the inevitable auto-tempering during quenching makes the ω phase unstable and leads to microstructure evolution of the martensite; this makes it difficult to characterize the atomic-scale structure of the ω phase. The transformation mechanism of the ω phase has not been analyzed in combination with the martensitic transformation process. In this study, martensitic transformation was induced by deformation at ambient temperature, which excluded the influence of auto-tempering on the metastable ω phase. To characterize the twin interface of twinned martensite, the effect of the direction of the incident electron beam on diffraction patterns was analyzed together with pole figures, based on the crystallographic relationships of twinned crystals. After selecting the right direction of the incident electron beam, the substructure of deformation-induced martensite was characterized by high-resolution transmission electron microscopy (HRTEM). The results indicated that ω phase was embedded in the twin interface of deformation-induced twinned martensite with only one variant along the shear. It is suggested that the ω phase is a by-product of twinned martensite and that the shear of the lattice-invariant twin deformation promotes the ω transition in medium manganese steels. Materials and methods The actual chemical composition of the steel was Fe-0.23C-5.65Mn-1Al (in wt.%).
Cold-rolled sheets were first austenitized at 850 °C for 15 min and then water quenched to room temperature to obtain martensite. The quenched steels were subsequently intercritically annealed at 680 °C for 10 min, followed by quenching to room temperature. Tensile specimens with a gage length of 25 mm were prepared along the rolling direction according to the ASTM E8 standard. Tensile deformation was used to introduce pre-strain (engineering strain of 3%) into the annealed steel at a constant crosshead speed of 6.7×10^-4 s^-1. X-ray diffraction (XRD) studies were carried out to determine the amount of austenite, and the volume fraction was calculated based on the model described in reference [14]. The average carbon concentration in austenite was obtained from the relation given in [14], in which x_C, x_Mn and x_Al are the concentrations (wt%) of carbon, manganese, and aluminum in retained austenite, respectively. The XRD experiments were conducted on a D/max2400 X-ray diffractometer (operated at 56 kV, 182 mA) with Cu Kα radiation at room temperature, and the samples were scanned over a 2θ range from 40° to 100° with a step size of 2°/min that included ferrite and austenite peaks. The microstructure was characterized by scanning electron microscopy (SEM, Zeiss Ultra 55) and transmission electron microscopy (TEM, FEI Tecnai G2 F20). Both TEM and HRTEM images were acquired at an accelerating voltage of 200 kV. Electron backscatter diffraction (EBSD) (Symmetry®, Oxford Instruments, Oxford, UK) maps were obtained at a step size of 0.05 μm and analyzed using the MTEX (version 5.28) texture analysis toolbox for MATLAB (version 2016b). TEM foils were twin-jet polished (Struers TenuPol-5) at a voltage of 20 V in a solution containing 5% perchloric acid. Results and discussion 3.1. Microstructure evolution A complete martensite structure was obtained on quenching to room temperature from 850 °C, as shown in figure 1(a), and also implied by the XRD patterns presented in the inset of figure 1(a). After subsequent annealing at 680 °C for 10 min, a typical dual-phase microstructure of medium Mn steels consisting of retained austenite and ferrite was achieved (figure 1(b)). The XRD patterns in figure 1(b) indicated that the volume fraction of austenite was ∼30% after the annealing process. Figure 2 shows the crystallographic analyses after annealing, including the phase map and the inverse pole figures (IPF) in the Z direction, which is normal to the rolling plane. Grain boundaries were defined where crystallographic misorientations exceeded 3°, the misorientation of martensite laths being 2°-5°. After annealing at 680 °C for 10 min, the original martensite laths became lath-shaped ferritic grains, some of which coalesced to form large grains, as shown in figure 2(a). Both lath-type austenite and globular austenite can be seen in figure 2(b), and the austenite located in the same prior austenite grain had similar orientation. The lath-type austenite was usually located at original martensite lath, sub-block and block boundaries, whereas globular austenite was located at prior austenite grain boundaries and within packets. In figure 2(c), the phase boundaries between austenite and ferrite associated with the K-S or N-W orientation relationship (OR) are highlighted in red or yellow, respectively. The ratio of the lengths of K-S to N-W boundaries was ∼1.75:1. The angular tolerance for this analysis was ±2.5°, which was based on the 5.26° difference between the K-S and N-W OR [15].
Figures 3(a)-(b) show the engineering stress-strain curves of the sample after annealing and the XRD patterns of the sample before and after 3% pre-strain, respectively. A significant amount of austenite transformed to martensite during tensile deformation, and the volume fraction of austenite decreased from 30% to 13.8% after 3% pre-strain. The microstructure evolution after 3% pre-strain was further characterized by TEM. Figures 4(a)-(d) show the typical morphology of retained austenite after 3% pre-strain. In the bright-field image (figure 4(a)), the retained austenite showed weak contrast due to deformation, and streaks appeared in the majority of the retained austenite. The selected area electron diffraction (SAED) pattern shown in figure 4(a) indicated that the streaks were stacking faults, and no twins or epsilon-martensite were observed in the retained austenite. The dark-field image taken with the g=002_γ diffraction spot is shown in figure 4(b). The stacking faults were clearly seen in the austenite outlined by the red broken line; these were the nucleation sites for martensitic transformation. Besides, some austenite partially transformed to dislocation martensite, with the resulting microstructure consisting of austenite and dislocation martensite, as outlined by the white broken line in figure 4. The SAED pattern inserted in figure 4(c) reveals that α′ martensite formed in the deformed austenite and that the OR between austenite and martensite followed the K-S OR. The dark-field image (figure 4(d)) taken using the g=020_γ diffraction spot shows that part of the austenite transformed to α′ martensite. A careful TEM tilting experiment did not find any twin contrast in this α′ martensite. However, when the austenite completely transformed to martensite, twinned martensite formed. The TEM images of twinned martensite after 3% pre-strain are shown in figure 5; the twinning elements are a {112} twinning plane and a 〈111〉 twinning direction. A dark-field image of the twins was obtained using the spot marked c, revealing a high density of twin plates several nanometers thick (figure 5(c)). In addition to the diffraction spots of the matrix and twin crystals, two extra spots were also found along the (211) row in the twinned martensite, as shown in figure 5(b). The dark-field image (figure 5(d)) obtained with the extra spot marked d showed that some bright, ultra-fine nanoscale plates were located between the twinning planes. The crystals in this twinned martensite are most likely ω structures, similar to those reported for twinned martensite [16], and the martensite and the ω structure follow the known bcc-ω orientation relationship. However, the extra spots may also be double-diffraction spots of the {112}〈111〉-type twinning structure. To avoid the effect of double diffraction on the diffraction analysis, the selection of the direction of the incident electron beam during TEM characterization is discussed below. Atomic-scale characterization of the interface of twinned martensite When diffraction patterns of {112}〈111〉-type twinned martensite are analyzed in the bcc system, the commonly used low-order asymmetric diffraction patterns including {112} spots are those with zone axes 〈110〉 and 〈113〉. For these zone axes, certain crystal directions of the matrix are parallel to corresponding directions of the twin; when the direction of the incident electron beam was parallel to such a direction pair, the diffraction patterns contained reflections from both the matrix and the twin crystals, as in figure 5(b). However, the nature of these diffraction patterns was different.
However, only the first two pairs were parallel to the {112} twinning plane. Accordingly, whenever the direction of the incident electron beam is parallel to a 〈110〉 or 〈113〉 zone axis, there is only a one-third probability that it is also parallel to the twinning plane. Therefore, the direction of the incident electron beam should be carefully selected, especially during atomic-scale characterization, to avoid observing the overlapped structure of the twinned crystals. In our study, although the incident electron beam was parallel to the twinning plane, as indicated by the streaking of the diffraction spots in figure 5(b), double diffraction may still occur because of the fine twins and the thickness of the TEM samples [17]. To weaken the effect of thickness on double diffraction, a twinned martensite located near the hole of the TEM sample was selected for characterization, as shown in figure 7. The streaking of the diffraction spots in figure 7(b) indicated that the direction of the incident electron beam, parallel to [131]_m, was also parallel to the {112} twinning plane. This provided an ideal condition to characterize the twin interface of the twinned martensite. Figures 7(c) and (d) are dark-field images obtained from g=011_t and the diffraction spot marked by d, respectively. The twin interface of the twinned martensite in figure 7 was further characterized by high-resolution transmission electron microscopy. Figures 8(a)-(b) are HRTEM images showing the interface structure of the {211}〈111〉-type twinned martensite. According to the analysis of the electron diffraction in figure 7(b), the twin interface was perpendicular to the plane of the paper, and the matrix and twinned crystal were mirror-symmetric with respect to the twin interface. Figure 8(b) shows the corresponding inverse fast Fourier transformed image, which originated from the white broken-line region in figure 8(a). The large interplanar spacings in figure 8(b) suggested that the nanoscale particles embedded in the twin boundaries are most likely ω phase. In twinned martensite, the lattice parameter of α-Fe is ∼2.852 Å, and the ω-Fe lattice parameters are a_ω = 4.033 Å and c_ω = 2.470 Å [18]. Figure 8(d) shows the simulated electron diffraction pattern of the primitive hexagonal lattice, which is in good agreement with the fast Fourier transformed diffraction pattern (figure 8(c)). Furthermore, the interplanar spacings of the (10-10)_ω and (01-11)_ω planes were calculated to be 3.493 Å and 2.017 Å, respectively, which were almost consistent with the HRTEM results in figure 8(b). It was thus confirmed by HRTEM that ω phase forms in mechanically-induced twinned martensite with a single variant. Whether it pre-existed or was a by-product of the martensitic transformation needs discussion. According to the structural difference between the ω lattice and the bcc lattice, if an unstable ω phase reverses to the bcc structure through the ω-lattice mechanism, there is a 50% probability for the pre-existing ω phase to transform to a twinned-variant bcc structure [5]. Thus, if ω phase existed before the martensitic transformation, there would be more twinned martensite plates after 3% pre-strain. However, it can be seen from figures 4 and 5 that only a few austenite islands transformed into twinned martensite, and the majority of the retained austenite transformed to lath martensite after 3% pre-strain. Moreover, the ω phase embedded in the twin boundary had a single variant, which is inconsistent with the four variants of ω that would transform directly from austenite [10].
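As a quick cross-check of the interplanar spacings quoted above, the standard hexagonal d-spacing relation with the ω-Fe lattice parameters from [18] reproduces both values; the short sketch below is illustrative only.

```python
import math

def d_hexagonal(h: int, k: int, l: int, a: float, c: float) -> float:
    """Interplanar spacing in a hexagonal lattice:
    1/d^2 = 4/3 * (h^2 + h*k + k^2) / a^2 + l^2 / c^2"""
    inv_d2 = 4.0 / 3.0 * (h * h + h * k + k * k) / a ** 2 + l * l / c ** 2
    return 1.0 / math.sqrt(inv_d2)

a_omega, c_omega = 4.033, 2.470   # omega-Fe lattice parameters (angstrom) [18]
print(d_hexagonal(1, 0, 0, a_omega, c_omega))   # (10-10): ~3.49 angstrom
print(d_hexagonal(0, 1, 1, a_omega, c_omega))   # (01-11): ~2.02 angstrom
```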
Therefore, it is suggested that the ω phase in twinned martensite is not a phase that existed before the martensitic transformation. Its formation must be closely related to the formation of the internal twins. ω transition induced by {112}〈111〉 shear during twinned martensitic transformation The process involved in the formation of twinned martensite can generate the twinning shear stress that promotes the ω transition. When the martensitic transformation occurs at low temperature, the stress concentration at the tip of the martensite plate grows, giving rise to the lattice-invariant twinning shear in the martensite [19]. It has been proposed that twinned martensite forms by a dynamic process [20]. Figure 9 shows a schematic diagram of the formation of an internally twinned martensite plate. The increasing thickness of an internal twin plate induces an inverse (back) stress. To release this inverse stress, a crystal with the same orientation as the first twin plate is produced in the same martensite plate. By repeating this process, thin internal twin plates are produced throughout the martensite plate. The local high stress between the twin plates would then promote the formation of the ω phase. The lattice correspondence between {112}〈111〉-type bcc twins and ω structures is shown in figure 9. As reported for bcc metals, the 〈111〉 shear on the {112} plane plays a key role in the ω phase transition [3][4][5]21]. In Ti-Nb-based alloys, it was shown that there is a distinct energy barrier for the β (bcc) to ω transition, which can be overcome by {112}〈111〉 shear [4]. Detailed structural characterization at the atomic scale also indicated that a kind of under-developed ω transitional structure formed along the longitudinal twin boundaries, which provided conclusive evidence for a shear-dominated mechanism [5]. Hsiung and Lasilla [3] also interpreted the internal twinning and the stress-induced ω transition through a dislocation mechanism. It was shown that the growth of the ω phase is controlled by the movement of 1/3〈111〉 dislocations and is expected to be slower than the growth of the twin domain, which is controlled by the movement of 1/6〈111〉 dislocations. In our study, the ω phase was much shorter than the twin plates, as shown in figure 8, which is consistent with the dislocation mechanism. In addition, it has been shown that solid-solution alloying can increase the shear stress and promote the formation of twins and ω phase in bcc metals [3,4]. In this study, both the manganese in solid solution and the low transformation temperature (room temperature) would raise the shear stress and decrease the thickness of the twin plates. The average carbon content in the retained austenite had reached ∼0.61 wt.% according to the XRD results. The carbon also raises the twinning shear stress during the formation of twinned martensite. The mechanically induced twin plates are fine, ∼5 nm thick, and carbon could segregate to the twin interfaces during the dynamic transformation of the twinned martensite. The carbon segregation plays two roles in promoting the ω transition. It has been shown that a more fully satisfied orientation relationship benefits the transition to the ω phase [22]. The carbon segregation would decrease the lattice parameter of the twin plates and make the orientation relationship between martensite and ω more fully satisfied. Figure 9. Schematic illustration of the formation of plate-like ω phase, which is promoted by twinning shear during the dynamic transformation of twinned martensite.
Moreover, based on a first-principles study of the ω phase in steel [23], carbon segregation into the ω phase at the twin interface would stabilize the ω phase. By contrast, in pure iron, although nano-twinned martensite could be obtained under high pressure, the ω structure did not form [24]. Conclusions In this study, lath and twinned martensite were obtained at room temperature by deformation. The analysis of the characterization geometry showed that when the twin interface of twinned martensite is characterized using 〈110〉 or 〈113〉 zone axes, there is only a one-third probability that the direction of the incident electron beam is also parallel to the twinning plane. HRTEM results confirmed that the ω phase is embedded at the interface of twinned martensite with a single variant. The formation mechanism of the ω phase was discussed by considering the dynamic transformation process of twinned martensite, which suggests that twin shear is critical in assisting ω phase formation in steels.
3,885
2020-10-30T00:00:00.000
[ "Materials Science" ]
New Insights into Asian Prunus Viruses in the Light of NGS-Based Full Genome Sequencing Double stranded RNAs were purified from five Prunus sources of Asian origin and submitted to 454 pyrosequencing after a random, whole genome amplification. Four complete genomes of Asian prunus virus 1 (APV1), APV2 and APV3 were reconstructed from the sequencing reads, as well as four additional, near-complete genome sequences. Phylogenetic analyses confirmed the close relationships of these three viruses and the taxonomical position previously proposed for APV1, the only APV so far completely sequenced. The genetic distances in the respective polymerase and coat protein genes as well as their gene products suggest that APV2 should be considered as a distinct viral species in the genus Foveavirus, even if the amino acid identity levels in the polymerase are very close to the species demarcation criteria for the family Betaflexiviridae. However, the situation is more complex for APV1 and APV3, for which opposite conclusions are obtained depending on the gene (polymerase or coat protein) analyzed. Phylogenetic and recombination analyses suggest that recombination events may have been involved in the evolution of APV. Moreover, genome comparisons show that the unusually long 3’ non-coding region (3' NCR) is highly variable and a hot spot for indel polymorphisms. In particular, two APV3 variants differing only in their 3’ NCR were identified in a single Prunus source, with 3' NCRs of 214–312 nt, a size similar to that observed in other foveaviruses, but 567–850 nt smaller than in other APV3 isolates. Overall, this study provides critical genome information of these viruses, frequently associated with Prunus materials, even though their precise role as pathogens remains to be elucidated. Introduction The Asian prunus viruses (APV) were initially identified in several Prunus sources of Asian origin showing cross-reactivity to Plum pox virus (PPV), the viral agent causing Sharka disease, the most important virus disease on stone fruit trees [1,2]. They were therefore initially diversely called "Plum pox-like virus", "Prunus latent virus" or "Prunus virus isolates" [2][3][4]. Several polyclonal antisera showed reliably a cross-reactivity with APV Prunus sources, whereas PPV-specific monoclonal antibodies failed to react [2]. There are some indications assembled using the CLC Genomics Workbench 7.0 (http://www.clcbio.com) and annotated by BlastX and BlastN comparison with GenBank, using a 10 −3 e-value cut-off. The scaffolding and ordering of the contigs for each viral isolate were facilitated by mapping the contigs on reference viral genomes. The gaps between the contigs as well as regions of low pyrosequencing coverage were amplified from total nucleic acids (TNA, [9]), extracted from the grafted GF305 leaves, using primers designed from the sequence of the contigs (S1 Table) in a two-step RT-PCR procedure described by Marais et al. [17]. 5' and 3' ends of the viral genomes were determined using either a 5' Random Amplification of cDNA Ends (5' RACE) strategy, or a Smart™ Long Distance-RT-PCR (Takara Bio Europe/Clontech, Saint-Germain-en-Laye, France) for the 3' genomic regions, using internal primers designed from the assembled contigs (S1 Table). The RACE reactions were performed following the kit manufacturer's instructions (Takara Bio Europe/Clontech, Saint-Germain-en-Laye, France) and the 3' genome ends were amplified using the protocol described by Youssef et al. [18]. 
All amplification products were sequenced on both strands (GATC Biotech AG, Mulhouse, France), either directly or after a cloning step into the pGEM-T Easy vector (Promega, Charbonnières-Les Bains, France). The sequences obtained were finally assembled with the 454 contigs to generate the complete genomic sequence of the virus isolates. Sequence and phylogenetic analyses Analysis of 454 pyrosequencing sequence data was performed as described by Candresse et al. [16] using the CLC Genomics Workbench 7.0. Multiple alignments of nucleotide or amino acid sequences were performed using the ClustalW program as implemented in MEGA version 6.0 [19]. Phylogenetic trees were reconstructed using the neighbor-joining technique with strict nucleotide or amino acid distances and randomized bootstrapping for the evaluation of branching validity. Genetic distances (p-distances calculated on nucleotide or amino acid identity) were calculated using MEGA version 6.0. The RDP4 program [20] was used to search for potential recombination events in the APV genomic sequences obtained in this study. Results Pyrosequencing of dsRNAs extracted from the five APV sources All sources were found to be infected with more than one virus with the exception of Bonsai. Whereas APV2 was the sole virus detected in Bonsai source, representing 77.6% of the total reads, six different viruses were found in the Ta Tao 25 source: APV2 (46.8% of the total reads), APV3 (11.7% of reads), APV1 (2.6% of reads) and three well known fruit tree viruses, Plum bark necrosis stem pitting-associated virus (PBNSPaV, 14.7% of reads), Cherry green ring mottle virus (CGRMV, 0.1% of reads), and Apple chlorotic leaf spot virus (ACLSV, 0.08% of reads). In the Ta Tao 23 source, Blast analysis identified contigs belonging to each of the three APV: APV1 (3% of reads), APV2 (1.5%), APV3 (26.1%), while 1.3% of the total reads corresponded to ACLSV sequences. A mixed infection with two APV was also observed in the Bungo source, involving APV2 (66.4% of total reads) and APV1 (13.1%). Finally, as shown in a previous work [14], analysis of the contigs from the Nanjing source showed the presence of APV3 (6.7% of the reads), PPV (52%) and PBNSPaV (34.3%). Further analyses of the low levels of reads observed for CGRMV or ACLSV in some of the samples showed that, in each case, the contigs covered a significant proportion of the viral genome (36 to 69%, not shown), suggesting that these viruses were really present in the samples and that the low level of reads observed did not result from a contamination. For each source, contigs annotated as belonging to the various APV were further manually assembled into scaffolds using the APV1 genome [7] as a reference. The partial genome sequences of APV2 and APV3 [5] were also used as references in this scaffold assembly process. The scaffolds were then further extended using a combination of reads mapping and de novo assembly [16]. From the scaffolds thus obtained, four were selected for completion of the sequence of the corresponding isolate: the APV1 from the Bungo source (four internal gaps and 5' and 3' ends missing), the APV2 isolates from the Bungo and Bonsai sources (both missing one short internal region and both genome ends), and the APV3 isolate from the Nanjing source (two internal gaps and both genome ends missing). 
These four genomic sequences were completed by direct sequencing of RT-PCR products obtained using total nucleic acids of the respective APV sources and specific primers targeting the remaining gaps (S1 Table). The 5' and 3' genome ends were obtained using 5'RACE and Smart™ Long Distance-RT-PCR [18], respectively. The completed sequences have been deposited under accession numbers KT893293-KT893296 in the GenBank database. In addition, the genome sequences of an additional APV2 isolate (Ta Tao 25 source) and of three additional APV3 isolates (two from the Ta Tao 23 source and one from the Ta Tao 25 source) were also obtained during the assembly process. Their 3' genome end was completed as described above but no specific effort was made to complete the 5' genome end, thus, depending on the isolate, between 395 to 745 nucleotides were missing. These sequences have been deposited under accession numbers KT893297 to KT893300 in the GenBank database. Genome organization of APV1, 2, and 3 With the present results, complete genome sequences of two APV1 isolates (including that published by Marini et al [7], FJ824737), two APV2 isolates, and one APV3 isolate are now available. Moreover, near complete sequences, missing only 0.3 to 0.7 kb of 5'-terminal sequence, were also determined for one additional APV2 isolate and three APV3 isolates. Taken together, these sequences show that the genome organizations of APV1, APV2 and APV3 are closely similar to that described for the APV1 reference isolate [7] and are typical of members of the genus Foveavirus (Fig 1). The genome encodes five open reading frames (ORFs), encoding from 5' to 3' the polymerase, the triple gene block proteins (TGB1, 2, and 3) involved in viral movement and finally the coat protein (CP). The genomes of the APV1 to 3 and their isolates are largely colinear. The length of the genome of APV1 Bungo (9,473 nt) is in the same range as that of the reference APV1 isolate (9,409 nt, [7]), the size polymorphism being exclusively limited to the 3' NCR, the other genomic regions being strictly colinear between the two isolates ( Table 1). The genome sizes of the APV2 Bungo and Bonsai isolates are very similar (9,362 and 9,375 nt, respectively) with two regions polymorphic: the 3' NCR and the polymerase gene which displays a 39-nt long (13 amino acids) deletion in the Bungo isolate. At 9,654 nt, the APV3 Nanjing isolate has the longest genome. The sizes of the 5' NCR, the polymerase gene and TGB genes are similar to those of APV1 and APV2. The CP is slightly larger (408 aa as compared to 400 in APV1 and APV2), but the largest difference was once again in the 3' NCR (1,046 nt), which is 160-258 nt longer than those of APV1 and APV2 (Table 1). This long 3' NCR had previously been identified as a salient discriminating feature of APV [5] as compared to other members of the genus Foveavirus, in which this region is much shorter. No additional ORF was identified in this long 3' NCR. Interestingly, the 3' NCR of APV3 appears to be highly polymorphic in size among the four APV3 isolates sequenced in the present work (S1 Fig). The Ta Tao 25 APV3 isolate has a 3' NCR of 879 nt (Table 1), a size comparable to that observed in APV1 and APV2 isolates. The difference in 3' NCR size is mostly explained by a large, ca. 200 nt indel polymorphism (S1 Fig). In addition, in the Ta Tao 23 source, two APV3 variants differing only in their 3' NCR were identified. 
These variants showed 3' NCRs with large internal deletions, resulting in an overall length of 312 or 214 nt, a size similar to the 176–312 nt long 3' NCRs reported for other foveaviruses [21]. The last ca. 150 nt of the 3' NCR were highly conserved among all APV isolates (S1 Fig). Phylogenetic relationships of APV1, 2, and 3 Besides their similarities in genome organization, the close relationships linking APV and foveaviruses are illustrated by a phylogenetic analysis performed on their complete genome sequences, with Poplar mosaic virus (PopMV, Carlavirus) and Apple chlorotic leaf spot virus (ACLSV, Trichovirus) used as representatives of other Betaflexiviridae genera. The phylogenetic neighbor-joining tree, reconstructed using strict nucleotide sequence identity distances (Fig 2), shows that the APVs cluster with high bootstrap support (99%) with Rubus canadensis virus 1 (RuCV-1), a tentative member of the genus Foveavirus, as well as with the other Foveavirus members (66% bootstrap value; Grapevine rupestris stem pitting associated virus, GRSPaV / Apple stem pitting virus, ASPV / Peach chlorotic mottle virus, PCMV / Apricot latent virus, ApLV / Apple green crinkle associated virus, AGCaV). The average pairwise nucleotide divergence among the five APV sequences was 23.5±0.3%, and the isolates of each virus clustered together. However, APV3 appears closer to APV1 and formed a bootstrap-supported cluster with the APV1 isolates (Fig 2). In order to clarify the taxonomical status of APV in the family Betaflexiviridae, sequence comparisons were performed for the polymerase and coat protein genes and for the corresponding proteins (Table 2). The accepted species demarcation molecular criteria for the family Betaflexiviridae are 28% nucleotide divergence or 20% amino acid divergence in the polymerase and coat protein genes [21]. By almost all criteria, APV2 appears to be a distinct species, the exception being its polymerase amino acid divergence level, which is sometimes below the 20% threshold when comparing with some APV1 or APV3 isolates. The situation is less clear for APV1 and APV3. Considering the polymerase gene, these agents show divergence values within the species variation range, irrespective of whether the nucleotide or amino acid sequences are considered (Table 2). However, when the comparisons are performed with the coat protein gene or its product, the divergence values observed between APV1 and APV3 are above the species demarcation thresholds, suggesting two distinct species. To complete these analyses, neighbor-joining trees were reconstructed using the coat protein and polymerase amino acid sequences (Figs 3 and 4). For the polymerase, the near complete sequences of the Ta Tao 25 APV2 isolate and of the Ta Tao 23 and Ta Tao 25 APV3 isolates were included in the analysis. The topology of both trees is similar to that of the tree reconstructed with the complete genome sequences (Fig 2), and the isolates of each agent form a distinct, 100% bootstrap-supported cluster. The close relationship linking APV1 and APV3 is also evident in both trees. Whereas the same tree topology was again obtained when analyzing the TGB1 protein (data not shown), a different pattern emerged with the tree reconstructed using the concatenated TGB2 and TGB3 protein sequences (Fig 5). Indeed, the APV1 Ta Tao 5 reference isolate now clusters together with the APV3 isolates, but away from the other analyzed APV1 isolate (Bungo). Such incongruence might be explained by a recombination event, whose potential occurrence was further evaluated using the RDP4 program.
A single recombination event involving APV3 Ta Tao 25 and APV1 Ta Tao 5 was detected with high statistical support (P values of 10^-14 to 10^-44, depending on the algorithm used). The predicted recombined fragment is approximately 500 nt long, with borders around nucleotide positions 6680 and 7189 in the Ta Tao 5 APV1 genome, corresponding to the region comprised between the end of the TGB1 and the end of the TGB2 genes. Discussion The NGS strategy used here allowed the efficient determination of complete genome sequences of four APV1, 2 or 3 isolates. In addition, near complete genome sequences were also obtained for one additional APV2 and three additional APV3 isolates, confirming the potential of NGS technologies to detect and characterize fruit tree viruses, even in situations of multiple infection, as in the case of the Ta Tao 25 source, where six different viruses were detected. When compared with other foveaviruses, the eight APV isolates characterized in the present work show, in their polymerase and coat protein genes, more than the 45% nucleotide identity currently accepted as the genus demarcation criterion in the family Betaflexiviridae (data not shown). This finding supports the previous suggestion [5,7] that APVs should be regarded as species of the genus Foveavirus. This conclusion is further supported by the similarities in genome organization and by the whole genome phylogenetic analysis reported here. When it comes to the species status of the various APV, the situation is more complex. Taking into account the sequence comparisons between APV1, 2, and 3 in the two taxonomically relevant regions, we propose that APV2 should be considered as a distinct species in the genus Foveavirus, even if the amino acid identity levels in the polymerase are very close to the species demarcation criteria accepted in the family Betaflexiviridae. The situation of APV1 and APV3 is more complex, since sequence comparisons using the polymerase and coat protein genes or their deduced amino acid sequences provide a conflicting picture, with divergence levels suggesting the existence of a single species or of two species, respectively. Although molecular criteria based on identity levels in the polymerase and in the coat protein genes are usually convergent [21], such a conflict between polymerase and coat protein criteria has been observed previously for a few foveaviruses [18,22,23] or for unassigned members of the family Betaflexiviridae [24]. In such cases, additional biological information such as serology, host range, associated symptoms, or vector transmission has been used to reach a decision on the species status. Such additional information is needed to determine whether APV1 and APV3 constitute a single species or two distinct species. Conflicts between polymerase and coat protein identity levels used as taxonomic criteria seem to be particularly frequent in the genus Foveavirus ([18,22,23], present work). This situation appears to be a consequence of the particularly long hypervariable N-terminal region of the coat protein in this genus, which results in high divergence values between isolates and species even though the C-terminal part of the coat protein is highly conserved [18]. Since this appears to be a peculiarity of the genus Foveavirus within the family Betaflexiviridae, a revision of the species discrimination criteria in this genus to take this peculiarity into account may ultimately be required.
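To make the species-demarcation comparison concrete, the sketch below (a hedged illustration using made-up toy sequences, not the actual APV data) computes pairwise p-distances from an alignment in the spirit of the MEGA analysis described above and flags pairs exceeding the 28% nucleotide divergence threshold used in the family Betaflexiviridae; the same logic applies to amino acid alignments with the 20% threshold.

```python
from itertools import combinations

def p_distance(seq1: str, seq2: str) -> float:
    """Proportion of differing positions between two aligned sequences,
    ignoring columns containing a gap (pairwise deletion)."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != '-' and b != '-']
    if not pairs:
        return float('nan')
    return sum(a != b for a, b in pairs) / len(pairs)

# Toy aligned fragments (hypothetical, for illustration only)
alignment = {
    "isolate_A": "ATGGCTAAGCTTGACTACGT",
    "isolate_B": "ATGGCAAAGCTCGACTATGT",
    "isolate_C": "ATGACTCAGTTTGGCTACCT",
}

THRESHOLD = 0.28  # Betaflexiviridae nucleotide species demarcation criterion

for (n1, s1), (n2, s2) in combinations(alignment.items(), 2):
    d = p_distance(s1, s2)
    status = "above demarcation threshold" if d > THRESHOLD else "within species range"
    print(f"{n1} vs {n2}: p-distance = {d:.2f} ({status})")
```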
Phylogenetic analyses of the various APV performed using the amino acid sequences of the various APV proteins revealed that the TGB2-TGB3 tree was not congruent with the trees generated using the other proteins (compare Fig 5 with Figs 3 and 4). This observation, as well as the RDP4 analysis, strongly suggests that the APV1 Ta Tao 5 reference isolate is in fact an APV1-APV3 recombinant in the TGB region. Previous studies have shown that recombination is a relatively common process in RNA plant virus evolution [25,26], even if the rate of recombination differs across virus genera. Indeed, recombination events have previously been reported to be involved in the evolution of some Betaflexiviridae members [27][28][29][30][31][32], and additional cases are likely to be documented in the future as more genus members are characterized through metagenomic studies [33]. Recombination events are similarly the most likely explanation for the very large indel polymorphisms observed in the 3' NCRs of APV3 isolates. The identification of APV3 isolates with small, 214 or 312 nt-long 3' NCRs is interesting in that it shows that functional APV genomes can exist with 3' NCRs of a size similar to those of other genus members. In the absence of any additional coding potential, the selective advantage that might be conferred by the very long, 788–1046 nt-long 3' NCRs observed in other APV isolates remains to be explained. With the extensive metagenomic analyses performed here, it becomes possible to hypothesize the origin of the cross-reactivity with PPV-specific reagents observed in the Prunus sources analyzed here. The comparison of the virome of each Prunus source shows that APV2 is the only virus shared by all sources, with the exception of the Nanjing one, in which PPV cross-reactivity is directly explained by PPV infection. In addition, a similar analysis of two additional PPV cross-reacting sources (Agua and Ting Ting) [1,2,4] provided evidence for the presence of APV2 in co-infection with CGRMV and Peach mosaic virus (Agua source) or with PBNSPaV (Ting Ting source) (data not shown). However, since the APV2 genome coverage was limited in these analyses, no further efforts were made to characterize the APV2 isolates involved more precisely. Taken together, these results would seem to exclude a contribution of APV1 and APV3 to the serological cross-reactions with PPV but make APV2 the likely candidate involved in the cross-reactivity, in particular when considering that it is the only viral agent detected in the Bonsai source. Further investigations are clearly necessary to experimentally validate this hypothesis. Questions also persist concerning the biology and pathogenicity of APV in Prunus materials. Unlike the case for other foveaviruses [21], no APV vector is known. APV are graft-transmissible, and dispersal likely occurs through infected propagation material, raising the question of their prevalence in such Prunus materials. The potential contribution of APV to the symptoms observed in the Prunus sources in which they were detected is also difficult to address. For one thing, these symptoms were very diverse: enlargement and discoloration of veins on old leaves, chlorotic leaf-spotting, fruit deformation and size reduction, delayed maturation [34]. In parallel, most of the sources showed complex mixed infections with a range of other fruit tree-infecting viruses. The situation is a bit different in the case of the Bonsai source, in which a single APV2 infection was detected. The original P.
mume plant, grown as a bonsai, did not display any specific symptoms (J.B. Quiot, personal communication). However, GF305 peach seedlings grafted with that source showed enlarged veins on old leaves, a symptom also observed in GF305 indicators grafted with some of the other sources [34]. Although far from providing a conclusive link between APV and symptomatology, this observation suggests that APV2 could contribute to symptoms, at least in the GF305 peach indicator. Again, further studies are necessary to determine the potential pathogenicity of the various APV on different Prunus hosts.
4,697
2016-01-07T00:00:00.000
[ "Biology", "Environmental Science" ]
Handling Uncertainty in Database: An Introduction and Brief Survey , operations handled. This paper is organized as follows: Section two briefly describes some key concepts of uncertainty in databases. Section three surveys various techniques for dealing with different data management issues on uncertain data. Section four discusses the differences between existing uncertain DBMSs. Section five contains the conclusion and summary. Uncertainty in Database Although uncertainty, vagueness, ambiguity, imprecision, and inconsistency are five terms that are sometimes used interchangeably, each term has its own meaning. In the database context, uncertainty refers to data objects that cannot be asserted with absolute confidence (Motro, 1994). Vagueness refers to a data item that belongs to a range of values without a clear determination of its exact value, for example saying that a fish tail is long without specifying its exact length. Ambiguity means an incomplete description of a data item; for example, it may not be specified whether the fish tail is measured in cm or mm. Imprecision refers to the level of exactness; for example, the fish color is red or orange. Finally, inconsistency arises when there are conflicting items; for example, the fish tail is greater than 10 cm and the fish tail is greater than or equal to 12 cm. The common sources of uncertainty are unreliable information sources, such as faulty reading instruments or incorrectly filled input forms, and system errors, which include transmission noise, imperfections in system software, and delays in processing update transactions (Motro, 1994). In uncertain database systems, uncertainty is handled along two main dimensions: the uncertainty of data and the uncertainty of operations. Uncertainty of data has two levels: the first is the attribute level, where a tuple exists for certain in the database but an attribute value is uncertain; the second is tuple uncertainty, where all attributes in the tuple are known precisely but the existence of the tuple itself in a relation is uncertain. The degree of uncertainty differs according to the form of the information and the number of alternatives for the uncertain data. The highest degree of uncertainty is found when there is doubt about the existence of a true value in the existing data, and it decreases when there is a range of values for an uncertain object. The degree of uncertainty decreases further when the uncertain value comes from a small set of alternatives, and it decreases again when a probability is attached to each alternative indicating its correctness (Motro, 1994; Motro, 1995).
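To illustrate the two levels of data uncertainty just described, the sketch below (a hypothetical schema with made-up numbers, echoing the fish examples above) represents a tuple with an uncertain attribute as a set of probability-weighted alternatives and a tuple whose very existence is uncertain, then enumerates the resulting possible worlds; the probabilities of the worlds sum to one.

```python
from itertools import product

# Attribute-level uncertainty: the tuple exists, but 'length_cm' has alternatives.
fish_tail = {"id": 1, "length_cm": [(10, 0.6), (12, 0.4)]}

# Tuple-level uncertainty: all attributes known, existence probability 0.7.
sighting = {"id": 2, "color": "red", "exists_p": 0.7}

worlds = []
for (length, p_len), present in product(fish_tail["length_cm"], [True, False]):
    p_tuple = sighting["exists_p"] if present else 1 - sighting["exists_p"]
    world = {"fish_tail_length": length, "sighting_present": present}
    worlds.append((world, p_len * p_tuple))

for world, prob in worlds:
    print(f"P = {prob:.2f}: {world}")
print("total probability:", sum(p for _, p in worlds))  # 1.0
```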
The uncertainty of operations includes transformation and modification. Transformation is defined as any operation that derives new data from the stored data; queries are the most frequent type of transformation. Uncertain requests from the user can occur for several reasons, such as lack of knowledge about the information already present, or users not being sure about what they need. After the answer is delivered, the uncertainty level decreases as the user becomes more familiar with the answer obtained (Motro, 1994; Motro, 1995). A modification is any operation that changes the data already present; the user is the one who defines the modification needed. The uncertainty here can arise for several reasons, such as lack of system information, lack of database information, or uncertainty in the data to be modified, and few tools exist for resolving uncertainty in the modification process (Motro, 1994; Motro, 1995). Processing uncertainty arises mainly from uncertainty about the tools used by the system to process the request; so, even if the description and the transformation process are free of uncertainty, processing uncertainty still needs to be checked (Motro, 1994; Motro, 1995). Finally, a probabilistic database is an uncertain database in which the possible worlds have associated probabilities. Each data item, tuple, and value that an attribute can take is associated with a probability in [0, 1], with 0 representing that the data is certainly incorrect and 1 representing that it is certainly correct (Motro, 1995). There is also the research area of fuzziness in database systems, which has resulted in a number of models aimed at representing imperfect information in databases. A fuzzy relational database is an extension of the relational database designed to store, treat, and query imprecise data. This extension introduces fuzzy predicates in the form of linguistic expressions that, during flexible querying, permit a range of answers, offering the user all intermediate variations between completely satisfactory answers and completely unsatisfactory ones (Touzi & Hassine, 2009). Handling Uncertainty in Databases The survey done by Aggrawal et al. in (Aggrwal, 2009) is considered a cornerstone for researchers in the area of managing uncertain data; hence, we took it as a starting point for this paper. In this section, we discuss a number of data management applications on uncertain data. Aggrawal et al. in (Aggrwal, 2009) have included most of the applications and techniques that handle management issues on uncertain data, such as join processing, query processing, data integration, and indexing. In this paper we add other recent techniques that handle the same issues. Moreover, we include two other important management issues, namely security and information leakage, and representation formalisms. This survey covers almost all the management issues on uncertain data and shows how existing techniques handle them.
Indexing Indexing uncertain data is the key technique for efficient query evaluation over uncertain data. The problem of indexing uncertain data is challenging because the diffuse probabilistic nature of the data can reduce the effectiveness of index structures and makes the cost of query execution a concern. Index structures for deterministic data are not appropriate for uncertain data. Determining the suitable index structure for uncertain data depends on two main factors: the nature of the uncertainty in the data, which depends mainly on the application domain, and the type of required queries (Aggarwal, 2009). Index structures and their associated algorithms have been developed to effectively answer Probabilistic Threshold Queries (PTQs). The index scheme called probability threshold indexing (PTI) is based on the idea of augmenting an R-tree with uncertainty information. One-dimensional intervals are mapped to a two-dimensional space to show that the problem of interval indexing with probabilities is significantly harder than plain interval indexing (indexing for interval queries, which are complex queries). A technique called variance-based clustering is used to overcome this limitation, and the resulting index structure can answer queries over various kinds of uncertain information in an almost optimal sense. The problem of range searching was introduced by (Tao, Cheng & Xiao, 2007) and solved by considering a small histogram consisting of one piece. In (Tao, Cheng, Xiao, Ngai, Kao & Prabhakar, 2005; Tao, Cheng & Xiao, 2007) this problem is considered in two and higher dimensions, and index structures based on space-partitioning heuristics are presented. Indexing of categorical uncertain data, where each random object takes a value from a discrete, unordered domain, has been handled using a heuristic solution (Singh, Mayfield, Prabhakar, Shah, & Hambrusch, 2007). (Agarwal, Cheng, Tao, & Yi, 2009) present linear or near-linear size indexing schemes for both the fixed- and variable-threshold versions of the problem, with logarithmic or poly-logarithmic query times; an optimal index is presented for answering queries on uncertain data where the probability threshold is fixed. In (Qi, Singh, Shah, & Prabhakar, 2008) the Probabilistic Nearest Neighbor (PPN) query is studied with a probability threshold (PPNT), returning all uncertain objects with NN probability greater than the threshold. An augmented R-tree index with additional probabilistic information is proposed to facilitate pruning, together with global data structures for maintaining the current pruning status. The indexing algorithms proposed in (Singh, Mayfield, Prabhakar, Shah, & Hambrusch, 2007; Qi, Singh, Shah, & Prabhakar, 2008) are not general indexing algorithms: in (Singh, Mayfield, Prabhakar, Shah, & Hambrusch, 2007) only categorical uncertain data is considered, and (Qi, Singh, Shah, & Prabhakar, 2008) focuses only on indexing nearest neighbor queries. The indexing algorithm of (Tao, Cheng, & Xiao, 2007) is the most effective way to address the indexing challenge for probabilistic queries, as it can provide correct query answers for different kinds of uncertain data.
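As a minimal illustration of the probabilistic threshold queries that such indexes are designed to accelerate, the sketch below (made-up one-dimensional data, assuming a uniform pdf over each uncertainty interval) computes the probability that each uncertain object falls inside a query range and keeps only objects whose probability meets the threshold; a PTI-style index would prune most objects before this exact computation is ever performed.

```python
def prob_in_range(lo: float, hi: float, q_lo: float, q_hi: float) -> float:
    """P(object in [q_lo, q_hi]) for a value uniformly distributed on [lo, hi]."""
    overlap = max(0.0, min(hi, q_hi) - max(lo, q_lo))
    return overlap / (hi - lo) if hi > lo else float(q_lo <= lo <= q_hi)

# Uncertain objects: (name, uncertainty interval)
objects = [("o1", (2.0, 6.0)), ("o2", (5.0, 9.0)), ("o3", (10.0, 12.0))]

query, threshold = (4.0, 7.0), 0.5  # Probabilistic Threshold Query (PTQ)

answer = [(name, prob_in_range(lo, hi, *query))
          for name, (lo, hi) in objects
          if prob_in_range(lo, hi, *query) >= threshold]
print(answer)  # o1 and o2 each qualify with probability 0.5; o3 has probability 0
```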
Security and Information Leakage The problem of security and information leakage is addressed through appropriate data modeling. For a better understanding of the models used, consider the two main security properties in Table 1, the quantitative and the qualitative security properties (Ngo & Huisman, 2013). The quantitative security property is based on the Shannon entropy H(X), which measures the information content of a random variable, so that information leaked = initial uncertainty − remaining uncertainty (McCamant & Ernst, 2008). Shannon entropy proves superior to guessing entropy, which only guarantees the non-negativity of leakage for deterministic programs. (Ngo & Huisman, 2013) propose a novel quantitative analysis model for multi-threaded programs that also takes into account the effect of observables in intermediate states along the trace. A probabilistic data model has mainly been used to deal with information leakage in views and in data exchange. Usually the data is private and only a certain view, from which the private information has been removed, is published by the owner. Many approaches have addressed this problem. One of them models the attacker's background knowledge as a probability space in order to check whether the posterior probability of the secret differs from the prior probability: if the two are the same, perfect security holds; otherwise, practical security is satisfied if they are only close to each other (Re & Suciu, 2007). This process becomes extremely difficult when the input probabilities are not known. Designing security policies in the presence of data uncertainty represents another big challenge. A common practice nowadays is to define access control rules by specifying them in terms of certain credentials offered by a user (Re & Suciu, 2007). Defining the right semantics for such access control policies when credentials are probabilistic is still an open problem. In (Chothia, Kawamoto, Novakovic, & Parker, 2013), an information leakage model is developed that can measure the leakage between arbitrary points in a probabilistic program. Their model does not detect information leakage that occurs between variables that have not been annotated; they argue that detecting leakage at selected points is more practical than attempting to detect all possible leaks. Their framework is based on a simple probabilistic imperative language they call CH-IMP, which is, however, not suitable for some applications, such as multi-threaded programs. Query Processing The existence of data uncertainty in many real-world settings has made uncertain query processing increasingly important. The incorporation of probabilistic information affects the correctness and computability of the query plan. Answering a query over an uncertain database requires computation or aggregation over a large number of possibilities. The answer to a standard SQL query over a probabilistic database is a set of probabilistic tuples: each tuple returned by the system has a probability of being in the query's answer set. Computing these probabilities is difficult and is an open research area (a minimal illustration is sketched below). To process large-scale probabilistic data, specific probabilistic inference techniques are needed that integrate well with SQL query processors and optimizers and that scale to large volumes of data.
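The sketch below (made-up tables, assuming tuple-independence) shows how a result tuple's probability can be computed for a simple join-and-project query: a join result carries the product of its inputs' probabilities, and a projected output tuple with several independent derivations carries 1 − Π(1 − p). Real engines must also cope with correlated derivations, which is exactly what makes general probabilistic query evaluation hard.

```python
# Probabilistic relations: tuple -> probability of being present (independent tuples)
customer = {("c1", "Paris"): 0.9, ("c2", "Paris"): 0.4}
order    = {("c1", "o100"): 0.8, ("c2", "o200"): 0.5}

# Query: SELECT DISTINCT city FROM customer JOIN order ON customer.id = order.cust
join_probs = []
for (cid, city), p_c in customer.items():
    for (ocid, oid), p_o in order.items():
        if cid == ocid:
            join_probs.append((city, p_c * p_o))  # independence: multiply

# Projection on city: combine independent derivations of the same output tuple
result = {}
for city, p in join_probs:
    result[city] = 1 - (1 - result.get(city, 0.0)) * (1 - p)

print(result)  # {'Paris': 0.776}, i.e. 1 - (1 - 0.72) * (1 - 0.20)
```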
There are two broad semantic approaches. The intensional semantics approach is based on modeling the uncertain database with event models or possible worlds and then using tree-like structures of inferences over these event combinations. The tree-like structure makes it possible to enumerate all the possibilities over which the query may be evaluated and subsequently aggregated. Its results are usually correct, but the evaluation time is high, which is its main drawback (Dalvi & Suciu, 2007). The other approach is the extensional semantics approach: instead of performing the whole enumeration over the tree of inferences, it attempts to design a query plan that computes or approximates the result probabilities directly. For simple expressions, extensional semantics is the best choice, but it is not preferred for complex expressions, because the dependencies in the underlying query results cannot be evaluated easily, which is its drawback (Dalvi & Suciu, 2007). Query evaluation is one of the important factors to be taken into consideration when dealing with queries, and it becomes more complicated in the case of uncertain or probabilistic data. One technique for adding probabilistic information into query evaluation is a generalization of the standard relational model discussed in (Fuhr & Rolleke, 1997): probabilistic relations are treated as generalizations of deterministic relations, and the operators of relational algebra are modified to take tuple weights into account during query processing. In (Dalvi & Suciu, 2007) the focus was on the existence of a correct extensional plan; for queries that do not admit a correct extensional plan, two techniques are proposed to construct approximately correct answers: a fast heuristic that can avoid large errors, and a sampling-based Monte Carlo algorithm that is more expensive but can guarantee arbitrarily small errors. In (Dalvi & Suciu, 2007) a solution for the case of uncertain predicates on deterministic data is also extended. We note that this work assumes tuple independence, which is often not the case for a probabilistic database. In (Dalvi & Suciu, 2005), data statistics and explicit probabilities at the data sources are used, and a probabilistic database with complex tuple correlations is used to deal with the imprecision. Tuple correlation is also an important issue in query processing on uncertain data, as is the case in most recent applications, such as sensor data, which is highly correlated in space and time (Deshpande, Guestrin, Madden, Hellerstein, & Hong, 2004). Even when tuples are assumed to be independent, many intermediate query results may contain correlations. Statistical modeling techniques were used in (Sen & Deshpande, 2007) for querying correlated tuples; this method builds a framework that represents uncertainties and correlations through a joint probability distribution.
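For queries without a correct extensional plan, the sampling-based approach mentioned above can be illustrated with a minimal Monte Carlo sketch (made-up data, tuple-independence assumed): sample possible worlds, run the deterministic query in each, and estimate each answer tuple's probability by its frequency across the sampled worlds.

```python
import random

random.seed(0)

# Independent probabilistic relation R(a, b): tuple -> probability
R = {("x", 1): 0.7, ("x", 2): 0.5, ("y", 1): 0.3}

def sample_world(rel):
    """Draw one possible world by keeping each tuple independently."""
    return {t for t, p in rel.items() if random.random() < p}

# Query: SELECT DISTINCT a FROM R  -- estimate P(a appears in the answer)
N = 100_000
counts = {}
for _ in range(N):
    world = sample_world(R)
    for a in {t[0] for t in world}:
        counts[a] = counts.get(a, 0) + 1

estimates = {a: c / N for a, c in counts.items()}
print(estimates)  # expected: P(x) = 1 - 0.3 * 0.5 = 0.85, P(y) = 0.3
```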
Ranked queries are very useful in decision-making applications and data mining tasks (Dhandore & Ragha, 2014). In particular, over a database D a ranking query retrieves the k objects that have the highest scores. Ranked queries on uncertain databases were discussed in (Lian & Chen, 2008), which introduced two effective pruning methods, spatial and probabilistic, to help reduce the ranked-query search space. The inverse ranking query was proposed in (Lian & Chen, 2011) under the name probabilistic inverse ranking (PIR) query, which retrieves the possible ranks of a given query object in an uncertain database with confidence above a probability threshold; effective pruning methods to reduce the search space are also included. In addition, three interesting aggregate PIR queries (max, top-m, avg) were studied, but unfortunately a wider range of aggregates was not covered. The use of possible worlds semantics presents another challenge, as it allows complex correlations among tuples in the database. In (Soliman, Ilyas, & Chang, 2007), generation rules, which are logical formulas that determine the valid worlds, are used to deal with this issue. The interaction between possible worlds semantics and top-k queries requires a careful redefinition of the query semantics. The work in (Re, Dalvi, & Suciu, 2007), (Yi, Li, Kollios, & Srivastava, 2008), and (Hua, Pei, Zhang, & Lin, 2008) studied top-k queries in probabilistic databases. In (Re, Dalvi, & Suciu, 2007) the main focus was on reducing the difficulty of retrieving the k uncertain objects that satisfy the query predicates in all possible worlds with the highest probabilities; the AVG aggregate function is not supported in (Re, Dalvi, & Suciu, 2007). The U-Topk query, proposed in (Soliman, Ilyas, & Chang, 2007), returns the set of k uncertain objects such that this set is the top-k answer set in the possible worlds with the highest probability, and the U-kRanks query finds k objects such that the i-th object (1 ≤ i ≤ k) has the i-th highest rank in the possible worlds with the highest probability. (Yi, Li, Kollios, & Srivastava, 2008) improved the efficiency of the U-Topk and U-kRanks queries by including early stopping conditions. A probabilistic threshold top-k (PT-k) query was proposed in (Hua, Pei, Zhang, & Lin, 2008), which returns the objects whose probability of appearing in the top-k answer across the possible worlds exceeds a given threshold. In (Peng, Diao, & Liu, 2011), threshold query processing for uncertain data was optimized. Cormode et al. (Cormode, Li, & Yi, 2009) used expected ranks as a way to rank objects in a probabilistic database. In (Lian & Chen, 2009), the probabilistic top-k dominating (PTD) query was discussed, and it was later improved in (Lian & Chen, 2013). In (Lian & Chen, 2011), the probabilistic top-k star (PTkS) query was proposed, which retrieves k objects in an uncertain database that are close to a static or dynamic query point, taking both distance and probability aspects into consideration.
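The semantics of these ranking queries can be made concrete with a brute-force sketch (independent tuples, made-up scores, our own illustration): enumerate every possible world, rank each world's tuples by score, and accumulate for each tuple the probability that it appears in the top-k; a PT-k query then keeps the tuples whose accumulated probability exceeds the threshold. Real systems avoid this exponential enumeration, which is exactly why the pruning and early-stopping techniques cited above matter.

```python
from itertools import combinations

# Independent tuples: name -> (score, existence probability)
tuples = {"t1": (95, 0.4), "t2": (90, 0.9), "t3": (80, 0.8), "t4": (70, 0.6)}

def topk_probabilities(tuples, k):
    names = list(tuples)
    prob_topk = {n: 0.0 for n in names}
    # Enumerate every possible world (every subset of present tuples).
    for r in range(len(names) + 1):
        for present in combinations(names, r):
            p_world = 1.0
            for n in names:
                p = tuples[n][1]
                p_world *= p if n in present else (1 - p)
            ranked = sorted(present, key=lambda n: -tuples[n][0])
            for n in ranked[:k]:
                prob_topk[n] += p_world
    return prob_topk

probs = topk_probabilities(tuples, k=2)
threshold = 0.5
print(probs)
print("PT-2 answer:", [n for n, p in probs.items() if p >= threshold])
```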
Representation Formalisms In most cases the probabilistic database is a probability space over all possible instances of the database, called possible worlds. We cannot enumerate these possible instances explicitly; instead, a concise representation formalism that describes all possible worlds and their probabilities is needed. The most common technique uses conditional independence between variables and represents the probability space in terms of a graphical model (Pearl, 1988). For efficient query evaluation, a trade-off is required between the succinctness of the representation formalism and the complexity of evaluating interesting queries (Antova, Jansen, Koch, & Olteanu, 2008). In a probabilistic database, lineage also needs to be represented, in order to explain the origin of the uncertainty. The Trio project discussed the problem of representing both uncertainty and lineage in (Benjelloun, Sarma, Halevy, & Widom, 2006). Lineage is usually expressed in some form of Boolean expression (Afrati & Vasilakopoulos, 2010). In (Parsons & Saffiotti, 1993), a method is proposed that enables systems using different uncertainty-handling formalisms to qualitatively integrate their uncertain information; the authors argue that this makes it possible for distributed intelligent systems to achieve tasks that would otherwise be beyond them. Their approach is grounded in the notion of degrading: given a representation of uncertainty, its information content is degraded to a level that can be shared between all the different formalisms, and this degraded information is then communicated between agents. Join Processing This section surveys current research directions concerning joins on uncertain data and presents the most prominent representatives of the join categories. The problem of join processing is challenging in the context of uncertain data because the join attribute is probabilistic in nature. The approaches mainly differ in the representation of the uncertain data, the distance measure or other type of object comparison, the types of queries and query predicates, and the representation of the result. Join methods can be classified as follows: Confidence-Based Join Methods Most confidence-based join methods rely on reducing the search space based on the confidence values of the input data. For candidate selection, neither the join-relevant attribute of the object nor the join predicate is taken into account. (Agrawal & Widom, 2007) propose an efficient confidence-based join approach for all query types (such as stored and stored-threshold queries). They assume that the stored relations provide efficient sorted access by confidence, that neither join relation fits into main memory, and that the objects are uncertain and independent. This approach can be applied regardless of the join predicate and the type of score function. Probabilistic top-k join queries are also handled, and the same approach covers sorted and sorted-threshold join queries.
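A minimal sketch of the confidence-based pruning idea just described (made-up relations; tuple confidences assumed independent and combined by product): pairs are examined in decreasing order of combined confidence, and as soon as the combined confidence drops below the query threshold the remaining pairs can be skipped, because sorted access by confidence guarantees that no later pair can qualify.

```python
# Relations with per-tuple confidences, already sorted by confidence (descending)
R = [("r1", 0.95), ("r2", 0.80), ("r3", 0.40)]
S = [("s1", 0.90), ("s2", 0.70), ("s3", 0.30)]

def confidence_join(R, S, threshold):
    """Return pairs whose combined confidence (product) meets the threshold."""
    results = []
    for r_name, r_conf in R:
        if r_conf * S[0][1] < threshold:
            break  # no pair with this or any later r-tuple can qualify
        for s_name, s_conf in S:
            combined = r_conf * s_conf
            if combined < threshold:
                break  # S is sorted, so remaining pairs are weaker
            results.append((r_name, s_name, combined))
    return results

print(confidence_join(R, S, threshold=0.6))
# [('r1', 's1', 0.855), ('r1', 's2', 0.665), ('r2', 's1', 0.72)]
```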
Probabilistic Similarity Join Methods A recognized shortcoming of the confidence-based join methods is that knowledge about the attributes relevant to the join predicate is not incorporated: they return pairs of objects regardless of their distance, as long as their combined confidence is sufficient. Similarity joins are very selective queries, where only very few candidates satisfy the query predicate, so an effective pruning technique is needed for efficient similarity join processing. Similarity join applications benefit from pruning those candidates whose attributes are unlikely to satisfy the join predicate, which guarantees that candidates with very low probability are avoided. In (Cheng, Singh, Prabhakar, Shah, Vitter, & Xia, 2006), similarity joins over uncertain data are studied based on the continuous uncertainty model. Each uncertain object attribute is described by an uncertainty interval together with an assigned uncertainty probability distribution function (pdf). For two uncertain objects, each represented by a continuous pdf, their score in turn leads to a continuous pdf representing the similarity probability distribution of the pair. Probabilistic join queries are defined through a probabilistic predicate on the uncertain pairs, and two join queries are proposed: the probabilistic join query (PJQ) and the probabilistic threshold join query (PTJQ). Probabilistic Spatial Join Methods Spatial joins are applied to spatial objects, i.e., objects that have a certain position in space and a spatial extension. Spatial joins depend mainly on spatial predicates, which refer to spatial topological predicates. A probabilistic spatial join is evaluated in two steps: filtering and refinement. In (Ni, Ravishankar, & Bhanu, 2003), the focus was on evaluating probabilistic spatial joins, dealing with object pairs and the intersection probability between them. The probabilistic R-tree (PrR-tree) index was proposed, which supports a probabilistic filter step, and an efficient algorithm was proposed to obtain the intersection probability between two candidate polygons in the refinement step. In (Burdick, Deshpande, Jayram, Ramakrishnan, & Vaithyanthan, 2005), a probabilistic spatial join approach based on an uncertainty model was proposed, in which the uncertain spatial objects are composed of primitive volume elements with confidence values assigned to each of them; a score function is then used to evaluate the join predicate for each pair. Based on this score function, a Probabilistic Threshold Join Query (PTJQ) and a Probabilistic Top-k Join Query (PTopkJQ) were proposed. (Ljosa & Singh, 2008) present algorithms for two kinds of probabilistic spatial join (PSJ) queries: threshold PSJ queries, which return all pairs that score above a given threshold, and top-k PSJ queries, which return the k top-scoring pairs. These algorithms mainly focus on speeding up the queries.
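As a rough sketch of a threshold-style probabilistic similarity join (PTJQ-like) under a sampled uncertainty model (made-up objects; Gaussian pdfs chosen purely for illustration, not drawn from the cited work): each uncertain object is represented by samples from its pdf, P(dist ≤ ε) is estimated as the fraction of sample pairs within ε, and only pairs whose estimated probability meets the threshold are returned.

```python
import random

random.seed(1)

def samples(mu, sigma, n=500):
    """Monte Carlo samples of a 1-D uncertain attribute with a Gaussian pdf."""
    return [random.gauss(mu, sigma) for _ in range(n)]

# Uncertain objects: name -> list of samples from its uncertainty pdf
objects = {"a": samples(10.0, 1.0), "b": samples(10.5, 1.0), "c": samples(20.0, 1.0)}

def p_similar(xs, ys, eps):
    """Estimate P(|x - y| <= eps) from the two sample sets."""
    hits = sum(abs(x - y) <= eps for x, y in zip(xs, ys))
    return hits / len(xs)

eps, tau = 2.0, 0.5  # distance predicate and probability threshold
names = list(objects)
join = [(u, v, p_similar(objects[u], objects[v], eps))
        for i, u in enumerate(names) for v in names[i + 1:]
        if p_similar(objects[u], objects[v], eps) >= tau]
print(join)  # the (a, b) pair should qualify; pairs involving c should not
```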
Data Integration Data integration is an important application in the context of uncertain data. It is the general process of providing a single information source out of several local information sources. The term data integration is often used to refer to information integration applied to structured data (both schema and instances). One of the basic data integration tasks is comparing local data sources to identify matching entities, e.g., two columns named telephone and home-telephone from two company databases, both containing customer telephone numbers. The information about matching entities, usually called a mapping, is then used to merge the input data sources by including, for example, all of the customers' telephone numbers in a single column. Unfortunately, automated tools may still fail to identify all the correct mappings, e.g., because of variations in column names. Data integration systems need to handle uncertainty at several levels (Aggarwal, 2009): -Uncertain mediated schema: a mediated schema is defined as the set of schema terms in which queries are posed. The mediated schema does not include all the attributes present in the sources, but rather the aspects of the domain that the application builder wishes to expose to the users. There are several reasons for uncertainty arising in the mediated schema. First, when the mediated schema is derived directly from the sources, this causes uncertainty in the results. Second, when domains get broad, there will be some uncertainty about how to model the domain. -Uncertain schema mapping: data integration systems depend on schema mappings to define the semantic relationships between the data in the sources and the terms used in the mediated schema, and schema mappings can be inaccurate. In practice, schema mappings are often generated by semi-automatic tools and not necessarily verified by domain experts. -Uncertain data: the reasons for uncertain data are various, such as extraction from unstructured or semi-structured sources by automatic methods; another reason is that data may come from sources that are unreliable or not up to date. -Uncertain queries: in some cases queries are given as keywords rather than as structured queries over a well-defined schema, and the system then needs to transform such a query into a structured one with respect to the data sources. One way to handle the integration is to explicitly represent the uncertainty produced by the data integration system and to consider it an important result of the integration process. In a survey of data integration from 2003, the problem of uncertain data management was not mentioned; it was stated that the main difficulty was the discovery of correct semantic relationships between schema objects (Halevy, 2003). After that, the problem of dealing with imprecise mappings was mentioned in another survey paper (Doan & Halevy, 2005). However, it was noted that we will never be able to find all correct matches and that we should therefore be aware of possible errors and find ways to use partially incorrect results.
Regarding uncertainty management within the data integration process, the goal of uncertain data integration is to use the uncertainty available in the data sources and/or generated during the matching phase to create an uncertain integrated view of the data. There are several ways to represent this uncertainty: quantitative methods, e.g., specifying the probability that a mapping is correct, or qualitative methods, e.g., using fuzzy sets and possibility theory to represent preferences about the correctness of a mapping. Quantitative models are the most frequently used in recent data integration methods; the qualitative approach is used to reduce the complexity of manipulating the uncertainty. In particular, since many mediated databases consistent with the sources are possible, there can also be many alternative query answers. Thus two categories (correct answer and strongest correct answer) are defined to characterize good and best query answers; an answer is good if it is contained in the answers of all mediated databases consistent with the sources. Many approaches have worked on reducing the number of mappings, thus increasing the efficiency of the process. In (Nottelmann & Straccia, 2007), several methods, such as ad hoc thresholds and top-k selection, were used to remove some discovered rules. In (Gal, 2006), it was shown that the analysis of the top-k mappings can be used as a selection criterion (keeping the relationships that are more stable in high-likelihood mappings). Both (Nottelmann & Straccia, 2005) and (Keulen, Keijzer, & Alink, 2005) remove some possibilities using thresholds and constraints; however, checking these constraints may become an additional source of complexity. In (Keijzer & Keulen, 2007), the authors suggest that user feedback can be used to reduce the number of possible worlds. In (Sarma, Dong, & Halevy, 2008), some uncertainty was removed by categorizing all the mappings with a probability greater than a predefined threshold as certain and those with a probability below the threshold as wrong. (Keijzer & Keulen, 2008) use consistency rules that allow part of the possible worlds to be removed. Although this reduces the number of possible worlds, the number of alternative mappings is exponential in the number of pairs of schema objects; therefore, even reducing it by a fixed percentage may not scale to real-world integration tasks, which is considered the drawback of this approach. Uncertain Database Management Systems Over the past years, different releases of DBMSs for dealing with uncertain data have emerged. In this section, we review most of these systems by highlighting their strategies, strengths, and weaknesses. MayBMS MayBMS is a probabilistic database management system developed by Oxford and Cornell universities. The MayBMS system is considered a complete probabilistic database management system that leverages robust relational database technology. It was developed in 2005 as an extension of the open-source PostgreSQL server backend and has undergone several transformations. Its backend is easily accessible through multiple APIs (inherited from PostgreSQL) and has efficient internal operators for processing probabilistic data (Huang, Antova, Koch, & Olteanu, 2009).
MayBMS main features are (Huang, Antova, Koch, & Olteanu, 2009): • A powerful query language for processing and transforming uncertain data • Space-efficient representation and storage • Efficient query evaluation based on mature relational technology • Support for conditioning and data cleaning MayBMS is known for its U-relational database, where it stores its probabilistic data. Queries are expressed in an extension of SQL with specialized constructs for probability computation and what-if analysis. The U-relations are standard relations extended with condition and probability columns that encode the correlations between uncertain values and the probability distribution over the set of possible worlds: variables drawn from a finite set of independent random variables are stored in the condition columns, and the probabilities of the variable assignments occurring in the same tuple are stored in the probability columns. The MayBMS query language extends SQL with uncertainty-aware constructs; extensions of relational algebra or SQL with only limited constructs, such as certain or top-k, are not expressive enough, as they do not allow the convenient construction of new worlds or the use of data correlations across worlds. MayBMS does not support several aspects, such as lineage, and standard SQL aggregates such as sum or count on uncertain relations only support the expectation of the aggregate, which is considered its drawback. Trio Trio was developed at Stanford University in 2010 (Agrawal, Benjelloun, Das, Hayworth, Nabar, Sugihara, & Widom, 2006) for managing uncertain data and data lineage using an extended relational model and a SQL-based query language. Through this project, a new data model named ULDBs was introduced, which adds uncertainty and lineage of the data as first-class concepts. In addition, a SQL-based query language for ULDBs called TriQL was developed, in which the semantics of SQL are modified to take uncertainty and lineage into account, and some new constructs are added to query uncertainty and lineage directly. The first working prototype of the Trio model and language was built on top of a conventional DBMS (Agrawal, Benjelloun, Das, Hayworth, Nabar, Sugihara, & Widom, 2006). The semantics of the Trio data model is based on possible worlds, i.e., a set of possible instances of the database. With discrete uncertainty, the uncertain database represents a finite set of possible instances; with continuous uncertainty, an uncertain attribute value may be an arbitrary probability distribution function (pdf) over a continuous domain, describing the possible values of the attribute. In Trio, the semantics of a standard query is defined naturally: the result of a query Q on an uncertain database U must include the result of applying Q to each possible instance of U (Agrawal & Widom, 2009).
Trio includes lineage in query processing to identify the data from which each result value was derived. Lineage is needed to properly represent uncertainty and to compute result confidence values lazily. Lineage is generated at query time, and for results that involve pdfs it is extended to include the relevant predicates and mappings. Trio deals with expensive queries by using approximate answers, either via a sampling function or via a histogram based on a weight function. When it comes to integration, the Trio data model includes a confidence value for each tuple that represents the probability of the tuple's existence; this confidence feature is very useful for pdf integration.

The main features of Trio are (Widom, 2005):
• Data values may be uncertain, approximate, or incomplete. A record may include a confidence that it actually belongs in the database.
• Queries operate over uncertain data and may return uncertain results.
• Lineage is an integral part of the data model.
• Lineage and accuracy may be queried.
• Lineage can be used to enhance data modifications.

The Trio database management system is considered the most powerful database management system for uncertain data, and plenty of research builds its techniques on the Trio model.

MystiQ

MystiQ is a probabilistic database system developed at the University of Washington. It uses a probabilistic data model to find answers in large numbers of data sources exhibiting various kinds of imprecision (Boulos, Dalvi, Mandhani, Mathur, Chris, & Suciu, 2005). Its features include:
• The ability to return best matches when no tuple satisfies all the predicates.
• Support for complex SQL queries over inconsistent data, global constraint definition, and the definition of soft views in queries.

What makes MystiQ different from other systems is that it provides probabilistic semantics as a middleware layer, while the data is normally stored in a relational database system. Being a middleware enables it to leverage the infrastructure of an existing database engine, e.g., query evaluation, query optimization, and indexes. MystiQ focuses on efficient processing of SQL queries. It combines two query evaluation techniques: the first pushes the computation of the output probability into the DB engine using a technique called "safe plans"; the second runs a Monte Carlo simulation in the middleware, guiding the simulation steps to quickly identify and rank the top-k most probable answers. MystiQ can evaluate select-from-where-group-by queries over large probabilistic databases. MystiQ allows users to define and materialize views over events, which is an important feature when managing probabilistic data. MystiQ also handles sufficient lineage with minimal errors (Re & Suciu, 2008). However, MystiQ does not handle queries with a HAVING clause or queries with self-joins; it treats these as unsafe queries. It also does not support polynomial lineage. These unsupported features are considered shortcomings of MystiQ that need to be covered in other work.
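As a rough illustration of the lineage-based confidence computations that systems such as Trio and MystiQ perform, the following minimal Python sketch (an assumption-laden toy, not the actual algorithms of either system) computes the confidence of a result tuple from its lineage, assuming independent source tuples and derivations that do not share source tuples.

def derivation_probability(source_probs):
    # A single derivation holds only if all of its source tuples exist
    # (independence of source tuples is assumed).
    p = 1.0
    for prob in source_probs:
        p *= prob
    return p

def result_confidence(lineage):
    # lineage: list of derivations, each a list of source-tuple probabilities.
    # The result exists if at least one derivation holds; with independent
    # derivations this is 1 - product of (1 - P(derivation)).
    p_none = 1.0
    for derivation in lineage:
        p_none *= 1.0 - derivation_probability(derivation)
    return 1.0 - p_none

# Example: a result tuple with two alternative derivations.
lineage = [[0.9, 0.8],   # derivation 1: two source tuples
           [0.6]]        # derivation 2: one source tuple
print(result_confidence(lineage))  # 0.888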
Orion

The Orion database system (previously known as U-DBMS) is a state-of-the-art uncertain database management system with built-in support for probabilistic data as a first-class data type. In contrast to other uncertain databases, Orion supports both attribute and tuple uncertainty with arbitrary correlations. This enables the database to handle both continuous and discrete uncertain values. It also provides various indexes for efficient query evaluation. It is implemented in C and PL/pgSQL (Cheng, Singh, & Prabhakar, 2005) and is built on top of PostgreSQL, an object-relational open-source database system.

Orion's main features include (Cheng, Singh, & Prabhakar, 2005):
• An integrated implementation of the "PDF Attributes" data model, which is consistent with possible worlds semantics and supports both continuous and discrete uncertainty.
• Efficient access methods for querying uncertain data, including three index structures based on R-trees, signature trees, and inverted indexes.
• Improved query optimization, join algorithms, and selectivity estimation by gathering and exploiting additional statistics over probabilistic data types.
• Integration with PL/R for graphical visualization of, and statistical inference over, uncertain data.

MCDB: Monte Carlo Database System

MCDB is a prototype system that proposes a new approach to handling enterprise-data uncertainty (Jampani, Xu, Wu, Perez, Jermaine, & Haas, 2008). Within MCDB the uncertainty is not included in the data model, and query processing is performed on the classical relational data model. MCDB enables the user to declare arbitrary variable generation (VG) functions that embody the database uncertainty. These are then used by MCDB to generate random values for the uncertain attributes and to run queries. Its main features include:
• Handling arbitrary joint probability distributions over discrete or continuous attributes.
• Novel query processing techniques, executing a query plan exactly once over tuple bundles instead of ordinary tuples.
However, MCDB has several points that need improvement, such as query optimization, error control, risk assessment, and lineage.

BayesStore: Probabilistic Data Management Architecture

Most recent approaches to developing a probabilistic database management system depend on a simplistic model of uncertainty that can be easily mapped onto existing relational architectures: probabilistic information is associated with individual data tuples. Unfortunately, this introduces a gap between the statistical models used by analysts and the model in the probabilistic DB; this is the case in Trio and MayBMS. The BayesStore project solves this "model mismatch" by supporting statistical models, evidence data, and inference algorithms as first-class citizens in the probabilistic database management system (Wang, Michelakis, Garofalakis, & Hellerstein, 2008). BayesStore is a probabilistic data management architecture built on the principle of handling statistical models and probabilistic inference tools natively. Its main features are:
• Probabilistic inference and statistical model manipulation as part of the standard DBMS.
• Representation of models and evidence data as relational tables.
• Efficient implementation of inference algorithms in SQL.
• Probabilistic relational operators added to the query engine.
• Query optimization over both relational and inference operators.
The BayesStore goals can be summed up as supporting query processing efficiently, supporting an extensible API for plugging in new models and inference algorithms, and scaling up to large datasets.
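BayesStore's idea of storing a statistical model as relational data and running inference with relational operators can be sketched as follows. This is a toy Python illustration with made-up conditional probability tables; BayesStore itself implements such steps in SQL over first-class model tables.

# Toy CPTs stored as relational-style tables (lists of rows); the model
# and its probabilities are assumptions for illustration only.
prior_weather = [                     # P(weather)
    {"weather": "rain", "p": 0.3},
    {"weather": "dry",  "p": 0.7},
]
cpd_traffic = [                       # P(traffic | weather)
    {"weather": "rain", "traffic": "heavy", "p": 0.8},
    {"weather": "rain", "traffic": "light", "p": 0.2},
    {"weather": "dry",  "traffic": "heavy", "p": 0.3},
    {"weather": "dry",  "traffic": "light", "p": 0.7},
]

def marginal_traffic(value):
    # "Join" the two tables on weather, multiply probabilities, then
    # aggregate (sum) over the eliminated variable -- a sum-product step
    # that could equally be phrased as a SQL join plus GROUP BY.
    total = 0.0
    for w in prior_weather:
        for t in cpd_traffic:
            if t["weather"] == w["weather"] and t["traffic"] == value:
                total += w["p"] * t["p"]
    return total

print(marginal_traffic("heavy"))  # 0.3*0.8 + 0.7*0.3 = 0.45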
PrDB: Probabilistic Database

The goal of PrDB is to design a probabilistic database model that can capture the uncertainties and complex correlations that appear in real-world applications, and also capture probabilistic regularities. PrDB unifies ideas from large-scale structured graphical models, such as probabilistic relational models (PRMs), and probabilistic query processing (Sen & Deshpande, 2007). Its framework is based on the notion of "shared factors", which not only allows the expression and manipulation of uncertainties at various levels of abstraction but also supports capturing rich correlations among uncertain data. PrDB supports a declarative SQL-like language for specifying uncertain data and the correlations among them, and it supports exact and approximate evaluation of a wide range of queries, including reference queries, SQL queries, and decision queries.

Finally, these systems can be summed up as follows. The Trio project (Benjelloun, Sarma, Halevy, & Widom, 2006) focused on the study of uncertainty and lineage in incomplete databases. MystiQ (Boulos, Dalvi, Mandhani, Mathur, Chris, & Suciu, 2005) supports various constructs for handling uncertainty, including tuples associated with probabilities; MystiQ is mainly a middleware that leverages the infrastructure of existing DB engines. The MayBMS project (Huang, Antova, Koch, & Olteanu, 2009) focused on representation problems, query language design, and query evaluation on uncertain data. A fundamental design choice that sets MayBMS apart from Trio and MystiQ is that it is an extension of the open-source PostgreSQL server backend, not a front-end application of PostgreSQL. MCDB (Jampani, Xu, Wu, Perez, Jermaine, & Haas, 2008) focused on complex probabilistic models with native Monte Carlo simulation. The Orion project (Cheng, Singh, & Prabhakar, 2005) focused on tuple and attribute uncertainty with attribute correlations given by continuous-valued probability distributions. BayesStore (Wang, Michelakis, Garofalakis, & Hellerstein, 2008) efficiently expresses and reasons about correlations among uncertain data items in a concise and statistical way. PrDB (Sen & Deshpande, 2007) focuses on managing and exploiting rich correlations in probabilistic databases. Other groups have also studied correlations in probabilistic databases (Sen & Deshpande, 2007). Table 2 presents a comparison between these uncertain data management systems.

Conclusion

The field of uncertain data management has become one of the most vital topics in recent years, which has led to many techniques being introduced to handle the different management issues of uncertainty. This paper surveys broad areas of work on uncertainty management issues. We presented the important management techniques along with the key representational issues in uncertain data management. The field of uncertainty management will expand over time, so we hope that this survey will be a good starting point for researchers focusing on the important and emerging issues in this field. In this paper we also gave an overview of the DBMSs that handle uncertain data and showed their features and weaknesses. Uncertain DBMSs can be enhanced by taking the probability of all instances into account in data management. For example, taking the instance probabilities into account in aggregate queries and events can have a great effect on the accuracy of the DBMS. As considering probabilities is indispensable when dealing with uncertain data, this use of probabilities needs to be improved. Enhancing aggregate queries on uncertain data is the main scope of our future work.

Table 1.
Security Property / Drawback: rejects any program that contains leakage, even if this leakage is unavoidable.

Table 2. Uncertain Database Systems Comparison
9,070.6
2015-07-31T00:00:00.000
[ "Computer Science" ]
Instantons to the people: the power of one-form symmetries

We show that the non-perturbative dynamics of $\mathcal{N}=2$ super Yang-Mills theories in a self-dual $\Omega$-background and with an arbitrary simple gauge group is fully determined by studying renormalization group equations of vevs of surface operators generating one-form symmetries. The corresponding system of equations is a {\it non-autonomous} Toda chain, the time being the RG scale. We obtain new recurrence relations which provide a systematic algorithm computing multi-instanton corrections from the tree-level and one-loop prepotential as the asymptotic boundary condition of the RGE. We exemplify by computing the $E_6$ and $G_2$ cases up to two instantons.

In an ideal world the non-perturbative structure of gauge theories should be computed by quantum equations of motion determined by a symmetry principle. The presence of extended operators generating higher-form symmetries in quantum field theory is a powerful tool to concretely realise such a programme. A perturbative analysis in a weakly coupled regime, if any, would supply appropriate asymptotic conditions. In this letter we present a class of theories where the full non-perturbative result is fixed in such a framework. These are N = 2 super Yang-Mills theories in a four-dimensional self-dual Ω-background, which enjoy a one-form symmetry generated by surface operators [1]. We show that the renormalization group equation obeyed by the vacuum expectation value of such surface operators provides a recursion relation which fully determines, from the perturbative one-loop prepotential, all instanton contributions on the self-dual Ω-background or, equivalently, the all-genus topological string amplitudes on the relevant geometric background. Actually, partition functions with surface operators display a very clear resurgent structure led by the summation over the magnetic fluxes [2]. The system of equations we study is a non-autonomous twisted affine Toda chain of type (Ĝ)∨, where (Ĝ)∨ is the Langlands dual of the untwisted affine Kac-Moody algebra Ĝ. Each node of the corresponding affine Dynkin diagram defines a surface operator, the associated τ-function being its vacuum expectation value. The time flow corresponds in the gauge theory to the renormalization group. The resulting recurrence relations constitute a new effective algorithm to determine instanton contributions for all classical groups G. Let us remark that the τ-functions we obtain provide the general solution at the canonical rays for the Jimbo-Miwa-Ueno isomonodromic deformation problem [3,4] on the sphere with two irregular punctures for all classical groups, which to the best of our knowledge was not known in the previous literature. The recursion relations we obtain are different from the blow-up equations of [5], further elaborated in [6]. Indeed, the latter necessarily involve the knowledge of the partition function in different Ω-backgrounds. This makes the recursion relations (and the results) coming from blow-up equations more involved and difficult to handle. However, we expect a relation between the two approaches to follow from blow-up relations in the presence of surface defects. Indeed, the isomonodromic τ-function for the sphere with four regular punctures was obtained in a similar way from SU(2) gauge theory with N_f = 4 in [7]. In this letter we summarise our results and refer to a subsequent longer paper for a fully detailed discussion.
The τ-functions are labeled by the simple roots of the affinization of the Lie algebra of the gauge group, α ∈ ∆, namely {τ_α}_{α∈∆}, and satisfy the equations (1), where t := (Λ/ǫ) and D denotes the logarithmic Hirota derivative. Given a simple root α, its coroot is as usual given by α∨ = 2α/(α, α), where (•, •) is the scalar product defined by the affine Cartan matrix. Eq. (1) is the de-autonomization of the τ-form of the standard Toda integrable system [8,9] governing the classical Seiberg-Witten (SW) theory [10]. The de-autonomization is induced by coupling the theory to a self-dual Ω-background (ǫ_1, ǫ_2) = (ǫ, −ǫ) [11]. In the autonomous limit ǫ → 0, the τ-functions reduce to θ-functions on the classical SW curve [12], which were used to provide recursion relations on the coefficients of the SW prepotential in [13]. The gauge theory interpretation of these τ-functions is the v.e.v. of surface operators associated to the corresponding decomposition of the Lie algebra representation under which these are charged. We expect these equations and their generalizations to describe chiral ring relations in the presence of a surface operator, which deserve further investigation. Higher chiral observables should generate the flows of the full non-autonomous Toda hierarchy. The actual form of equations (1) depends on the Dynkin diagram. For the classical groups A, B and D these reduce to bilinear equations which we solve via general recursion relations. For C, E, F and G the resulting equations are of higher order and we study them case by case. The symmetries of the equations are given by the center Z(G) of the group G. Moreover, the center is isomorphic to the coset of the affine coweight lattice by the affine coroot lattice, and coincides with the automorphism group of the affine Dynkin diagram. By a remark in [14], the coweights, and by extension the lattice cosets, corresponding to these nodes are the minuscule coweights, a representation of g being minuscule if all its weights form a single Weyl orbit. This remark will be crucial while solving the τ-system. The τ-functions corresponding to the affine nodes, that is the ones which can be removed from the Dynkin diagram leaving behind that of an irreducible simple Lie algebra, play a special rôle. Indeed, these are related to simple surface operators associated to elements of the center Z(G), and are bounded by fractional 't Hooft lines. Such surface operators are the generators of the one-form symmetry of the corresponding gauge theory [1]. Since their magnetic charge is defined modulo the magnetic root lattice, a natural Ansatz for their expectation value is (2), where the sum runs over the co-root lattice and (λ∨_aff, α) = δ_{α_aff,α} for any simple root α. The constant κ_g = (−n_g)^{r_{g,s}}, where n_g is the ratio of the squares of long vs. short roots and r_{g,s} is the number of short simple roots. For simply laced algebras, all roots are long and κ_g = 1. We will now show how the coefficient B(σ|t) entering (2) is the full Nekrasov partition function in the self-dual Ω-background upon the identification σ = a/ǫ, where a is the Cartan parameter. In the A_n case, (2) is known as the Kiev Ansatz. In the A_1 case, it was used to give the general solution of the Painlevé III_3 equation in [15] and further analysed in [16]. Let us remark that the τ-function (2) displays a clear resurgent structure, with "instantons" given by the magnetic fluxes in the lattice summed with "resurgent" coefficients B(σ|t) and trans-series parameter e^{2π√−1 η}; see [17] for a similar analysis in the Painlevé III_3 case.
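For concreteness, one commonly adopted normalization of the second logarithmic Hirota derivative appearing in (1) is the following (conventions in the literature, including possibly ours, may differ by overall factors):

$$D^2_{\log t}\, f \cdot g \;=\; \left.\frac{\partial^2}{\partial s^2}\, f(\log t + s)\, g(\log t - s)\right|_{s=0} \;=\; \ddot f\, g \,-\, 2\, \dot f\, \dot g \,+\, f\, \ddot g\,,$$

where dots denote derivatives with respect to $\log t$; in particular, for $f = g$ one has $D^2_{\log t}\, f\cdot f = 2\,(f \ddot f - \dot f^2) = 2 f^2\, \partial^2_{\log t} \log f$.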
The Ansatz (2) is consistent with equations (1). Indeed, after eliminating the τ-functions associated to the non-affine nodes, the resulting equation is bilinear and therefore the Ansatz (2) reduces to a set of recursion relations for the coefficients Z_i(σ). The variables η and σ are the integration constants of the second-order differential equations (1) and correspond to the initial position and velocity of the de-autonomized Toda particle. Let us set more precisely the boundary conditions which we impose on the solutions of equations (1). We consider the asymptotic behaviour of the solutions at t → 0 and σ → ∞ as in (3), up to quadratic and log terms [18]. We will show that the solution of (1) which satisfies the above asymptotic condition is such that B_0(σ) is given by (4), where G(z) is the Barnes G-function and R is the adjoint representation of the group G. The expansion of the above function matches the one-loop gauge theory result upon the appropriate identification of the log branch. This reads, in the gauge theory variables, as ln(√−1 r · a/Λ) ∈ ℝ and matches the canonical Stokes rays obtained in [19]. Let us first focus on the A_n case. All the fundamental weights are minuscule; in the notation used below, (1^p, 0^{n+1−p}) stands for a vector whose first p entries are 1 and the remaining entries vanish. We label the τ-functions as τ_{α_j} ≡ τ_j. The τ-system is given by the closed chain of differential equations (5), with τ_j = τ_{n+1+j}. Since all the nodes in this case are affine we can use the Kiev Ansatz (2). Then all the τ-functions are determined by τ_0 as τ_j(σ|t) = τ_0(σ + λ_j|t). It is therefore enough to solve the single equation (6). Here and in the following we use the notation f(y ± x) ≡ f(y + x) f(y − x). Inserting the Ansatz (2) for τ_0 into (6), one gets after some simplification the relation (7). Now we simply equate the exponents. To fix B_0(σ), we look at the lowest order in t. This produces a quadratic constraint and n + 1 linear constraints on the root lattice variables (n_1, n_2) and (m_1, m_2). Let us fix p, q ∈ {0, ..., n+1}, p ≠ q. Up to Weyl reflections, the only solution to the above-mentioned constraints is given by n_1 = e_p − e_q, n_2 = 0 and m_1 = e_p − e_1, m_2 = −e_q + e_1, leading to (8). This is solved by (4) up to a function periodic on the root lattice, which is set to one by the asymptotic condition (3). The higher-order terms in (7) provide the recursion relations for the coefficients Z_k(σ), where B_0(σ) is given by (4). For the first values of k these are easily solved, and the results coincide with the one- and two-instanton contributions to the SU(n+1) Nekrasov partition function as computed from supersymmetric localization [20,21]. Let us remark that the use of the τ-system (5) provides a completely independent tool to compute all instanton corrections just starting from the asymptotic behaviour (3). This procedure extends to all classical groups. D_n is a simply laced root system, whose root lattice is the checkerboard lattice; its minuscule coweights correspond to the "legs" of the affine diagram. Whichever rank we consider, we always have the consistency conditions among the corresponding τ-functions, which are also equal if n = 4, due to the enhanced symmetry of D_4.
In the B_n case, the two minuscule weights are λ∨_0 = (0^n) and λ∨_1 = (1, 0^{n−1}), corresponding to the "antennae" of the diagram. The τ-system coincides with that of D_{n+1}, with the modifications that (i) there is no τ_{n+1} and (ii) the remaining equations are adjusted accordingly. For n ≥ 3, the analysis proceeds as for D_n except that we may only use the left antennae and consider the first equation in (9). Therefore, we have a unified approach for both D_n and B_n. Explicitly, inserting (2) and τ_1(σ|t) = τ_0(σ + λ_1|t) into the first of (9), we get after some simplification a formula analogous to (7), leading to quadratic and linear constraints on the lattice labels. By repeating the analysis similarly to the previous case, the equation fixing B_0, analogous to (8), follows. The two cases are distinguished by the corresponding different asymptotic conditions (3). The recursion relations are also the same, upon using the appropriate root system R. This result is in line with the contour integral formulae for the relevant Nekrasov partition functions: indeed the poles in the D_n and B_n cases are the same, with different residues. From the above recursion relation we can compute the 1-instanton terms, the 2-instanton terms, and so on. These are easily compared with [22]. We now turn to the analysis of the other classical groups, which is more involved. Indeed, the τ-system reduces to higher-order equations which produce more complicated recurrence relations, to be solved by a case-by-case analysis. We performed explicit checks for C_3, C_4 and C_5 up to two instantons, again in agreement with [22]. For the exceptional group E_6 we obtain the system (11), where we used the notation D^{2n} := D^2 ∘ D^{2n−2}. The equations which specify B_0 can be written as follows. Choose the minuscule weight to be λ = (0^5, (−2/3)^3). Let p_1, ..., p_5 be a permutation of {1, ..., 5} and let δ := ((1/2)^8). Then the lowest order in (11) yields an equation for B_0, and the solution satisfying the asymptotic behaviour (3) follows. We also solved the recurrence relation arising from (11) up to two instantons. For one instanton, our results agree with those of [23], while the two-instanton result is too lengthy a formula to be reported here. We remark that (11) represents a completely novel way of obtaining equivariant volumes of instanton moduli spaces for exceptional groups. Unimodular algebras G_2, F_4 and E_8 have no outer automorphisms and consequently all the τ-functions associated to different nodes are independent. Therefore, the equations for the τ-function associated to the affine node turn out to be more difficult to solve. Let us display them for the G_2 case.
3,085.6
2021-02-02T00:00:00.000
[ "Physics" ]
The Life of Volcanic Rocks During and After an Eruption

Volcanoes are constantly growing and changing. Every time a volcanic eruption occurs, new rock is added to the surrounding area. These eruptions play a big part in the formation and destruction of rocks as well as in shaping the Earth's surface. Yet, we do not know everything about the histories of the volcanoes that previously existed on Earth. Volcanologists—scientists that study volcanoes—can study the types of rocks that volcanoes produce, to gain a better understanding of volcanoes. These rocks vary based on the characteristics of the volcano from which they came. Volcanic rocks are unique because we can study them to accurately discover when and how they were formed. In this article, we explain the processes that make volcanic rocks and formations look different from each other. We also discuss ways that volcanologists can determine how ancient volcanoes were made, by studying the rocks produced during past eruptions.

VOLCANOES AND IGNEOUS ROCKS

While geologists are scientists who study all kinds of rock, volcanologists are geologists who focus on past and present volcanoes, lava, and magma. Volcanologists also study the rocks that volcanoes make, looking for clues to help them figure out how and when the volcanoes were formed.

LAVA: Molten rock that has erupted at the Earth's surface.
MAGMA: Molten rock below or within the Earth's crust.

By figuring out the conditions that created volcanic rocks, volcanologists can learn about the history of a volcano and possibly predict whether a volcano will erupt again, and what will happen to the landscape and the people living nearby if it does. The rocks surrounding volcanoes give us important data to calculate the volcano's age and to help us answer questions about how the Earth was formed, including when volcanoes erupted and how explosive the eruptions were [ ]. The types of rocks that volcanologists spend their time studying are called igneous rocks. Igneous rocks form when molten rock cools and hardens into solid rock. Molten rock is called magma when it is stored in a chamber beneath a volcano, but it is referred to as lava when it reaches the surface. Volcanoes are created whenever there is a break in the Earth's crust that opens a pathway for magma and gas to escape. Every time a volcano erupts, it changes in shape and size because the lava it releases cools and hardens around it. With time, this can make the volcano higher and wider. The largest volcano in the solar system is called Tamu Massif. Large differences in how volcanoes erupt affect the type of lava flows they produce.

LAVA FLOWS

Lava flows in different ways. Understanding flow types can help volcanologists categorize igneous rocks because features of the rocks change as they cool and harden. Even though lava acts like a liquid, it is constantly cooling as it moves. This makes it behave differently depending on its composition and how quickly it is moving. The type of lava flow depends on the lava's viscosity, or resistance to flow. Blocky flows can reach many meters in height, like a wave growing outward as it travels, before they topple over. Pillow lava (Figure d) only occurs underwater. As the hot lava flows into the cold water, the edge of the flow quickly cools to form volcanic glass, but the inside of the flow continues to move and break through the glassy edge, forming a pillow shape.
Sometimes when lava flows on land, the outside layer of the flow cools and hardens first while the inside layer continues to flow, insulated by the outer rock layer. The result is the formation of a lava tube (Figure ). Lava tubes usually only form when the lava is low viscosity and fast moving. As the lava supply to the tube runs out, the tube empties and the hardened outside tunnel remains, leaving behind a long cave stretching from the base of the volcano, with smooth, flat floors like a hallway where the last of the lava flow hardened [ ]. Studying lava flows is interesting because it allows volcanologists to see how a landscape can be formed or dramatically changed by them.

CRYSTALLIZATION OF LAVA

Before magma leaves a volcano, it exists in one of three major forms, distinguished by chemical composition, viscosity, and temperature. They are all extremely hot and too dangerous to measure up close, so they are measured with thermal cameras instead [ ]! The different types of magma determine what the cooled, solidified rock will look like. Some magma is very fluid and forms pahoehoe or a'a lava flows, which travel long distances. Other magma is more viscous and cooler, forming blocky lava flows. Magma can also be highly viscous, relatively cool, and form lava domes or blocky lava flows.

LAVA DOMES: A circular mound that results from the slow eruption of viscous lava that piles up around the vents of some volcanoes.

The type of magma, how long it takes to cool, and the gas present during a volcanic eruption impact what an igneous rock looks like after cooling and solidifying. Solidification of molten rock is called crystallization. Igneous rocks that cool underground cool more slowly, giving mineral crystals time to grow within the rock. In these rocks, the different minerals are large and easy to see with the naked eye. Igneous rocks that crystallize at the Earth's surface tend to cool much more quickly and have much smaller crystals.

CRYSTALLIZATION: A process where molten (liquid or semi-liquid) rock hardens into solid rock.

A volcanic rock made entirely of glass, called obsidian, forms from lava that cools with little to no crystals at all. If lava is flung into the air during an eruption, or if it comes in contact with water, the edges of the lava will also cool to form a glass. When volcanoes violently erupt, many gas bubbles form in the lava, so it crystallizes into rocks filled with holes where the gas used to be. Rocks with holes formed from gas trapped in the lava are called vesicular rocks. Gases can sometimes be trapped in the rocks, allowing volcanologists and geologists studying old vesicular rocks to determine which gases were present in ancient volcanoes [ ]. Lava that contains very little gas tends to have no gas bubbles.

TYPES OF IGNEOUS ROCKS

Volcanologists have names for all the various kinds of rocks that are formed from volcanic eruptions. These rocks are first classified into categories, including felsic, mafic, and intermediate, based on the types of minerals they contain (Figure ). Basalt is the most common type of igneous rock made from lava. Basalt is a dark-colored rock in the mafic category, with very little crystal formation. In contrast, andesite, an intermediate rock, is much lighter in color and has many crystals visible without a magnifying glass. Granite is a type of non-volcanic igneous rock, formed from magma cooling slowly underground.
Granite is often used for kitchen or bathroom countertops and can be found in a variety of colors, all sharing similar crystal size and mineral compositions. Rocks without obvious crystals also vary a lot in color and texture. Pumice is an example of a light-colored, felsic, vesicular rock that can float in water. Scoria is also vesicular but is dark, mafic, and denser, unable to float in water. Obsidian, composed of volcanic glass, is a felsic rock that cooled with no crystals. Although obsidian appears to be dark, like basalt, it is felsic in composition. Its dark appearance is due to impurities like iron or magnesium present within the rock.

REFERENCES: End-Triassic mass extinction started by intrusive CAMP activity. Nat. Commun.

CONFLICT OF INTEREST: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

COPYRIGHT © Brennan, Bhathe, Ellis, Moynes, Cousens and Landsman. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

ALEJANDRO, AGE:
Hi, my name is Alejandro. I am years old and I live in Ecuador. I like rock music, sports, and animals. I also like trekking. My hobbies are playing soccer, exercising, reading, and listening to music. I like science because you can discover new things.

CHRISTINA BRENNAN
Christina Brennan is an undergraduate student at Carleton University in Ottawa, Ontario, Canada, majoring in interdisciplinary science. She is especially interested in pursuing a career in science communication, and in using the skills developed in her undergrad to bridge the gap between disciplines. Since she was years old, Christina has been designing floor plans of houses on graph paper and continues to do so today through 3D modeling. Aside from her academic goals, she hopes to one day design and build her own net zero home.

VISHNU PRITHIV BHATHE
Vishnu Prithiv Bhathe recently graduated from Carleton University in Ottawa, Ontario, Canada, with a Bachelor of Science degree. Ever since he was a child, he has been curious about science and how people use the lessons from science to better the world. He is also interested in entrepreneurship and aims to use scientific research to fuel innovation. In his free time, he likes to ride bikes, paint, read books, and explore new things.

STEPHANIE ELLIS
Stephanie is an undergraduate student in her final year of the Interdisciplinary Science and Practice program in Ottawa, Ontario, Canada. She is passionate about community engagement and figuring out how different systems work, from the human brain to computers. She was inspired by her teachers to continue searching for answers and to follow her passions at a young age.

EMILY MOYNES
Emily is a fifth-year undergraduate student studying Environmental Science at Carleton University in Ottawa, Ontario, Canada. During her schooling, she completed an internship under the supervision of Dr.
Steven Cooke on various fish-related projects, resulting in a publication in Transactions of the American Fisheries Society. She is currently completing her honors thesis with Dr. Thomas Sherratt on insect body toughness. She loves acquiring new knowledge and experiences in her field and hopes to one day obtain a job exploring animal behavior or working toward conservation-related initiatives.

BRIAN COUSENS
Brian grew up in Montreal, where he stumbled across the fantastic world of earth sciences at college and at McGill University in Montreal, Quebec, Canada. His interest in volcanoes grew from studies of seafloor volcanism while at UBC in Vancouver and then ocean island volcanoes at UC Santa Barbara. Brian is a Professor in the Department of Earth Sciences at Carleton University in Ottawa, Ontario, Canada, specializing in the geochemistry of igneous rocks. He teaches field courses in volcanic regions such as eastern California, Nevada, Hawaii and Iceland.

SEAN J. LANDSMAN
Sean is a teaching professor in the Interdisciplinary Science and Practice program at Carleton University in Ottawa, Ontario, Canada, on the unceded territory of the Algonquin First Nation. He is a trained fisheries ecologist and studies how fish move about their environments as well as how people affect them. He is also a passionate science communicator and enjoys sharing his knowledge with anyone that will listen! In fact, it was this love of communicating fisheries science that led him to photography and specifically underwater photography. Sean loves to spend time outdoors, especially fishing and hiking, and enjoys tinkering in his basement making things out of wood. *sean.landsman@carleton.ca
2,788.8
2021-08-18T00:00:00.000
[ "Geology" ]
Improved synthesis of DQ-113, a new quinolone antibacterial agent, utilizing the Reformatsky reaction

A new improved synthetic route for the C-7 substituent in DQ-113, a new quinolone antibacterial agent for infections caused by Gram-positive pathogens, has been developed which does not use low temperatures and avoids the use of expensive fluorinating agents. The key step is a Reformatsky reaction between ethyl bromofluoroacetate and ethyl 1-acetylcyclopropanecarboxylate.

Results and Discussion

The new synthetic route for 6 is shown in Scheme 2. The Reformatsky reaction of ethyl 1-acetylcyclopropanecarboxylate 1 with ethyl bromofluoroacetate proceeded under standard conditions to give the fluoro-diester 8. The crude 8 was then treated with thionyl chloride/pyridine to give a chlorinated product. Treatment of the crude chlorinated product with 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU) gave the α-fluoro-α,β-unsaturated ester 9 as a 1:1 mixture of (E)- and (Z)-isomers in 79% yield from 1. Bromination of the allylic position of the E/Z mixture 9 with N-bromosuccinimide (NBS), catalyzed by 2,2'-azo-bis-isobutyronitrile (AIBN), gave a 1.35:1 mixture of the (E)-bromo-diester (E)-10 and the (Z)-bromo-diester (Z)-10 in 87% yield, which was separated into the two isomers by silica gel column chromatography. The (Z)-isomer (Z)-10, which could not be converted into the unsaturated lactam 11, could be converted into a mixture of (E)-10 and (Z)-10 by heating with 0.1 eq. of NBS and 0.1 eq. of AIBN in benzene. After the isomerization process was repeated three times, the collected (E)-isomer (E)-10 (83% from 9) was treated with (S)-1-phenylethylamine and sodium hydrogencarbonate in ethanol to give the lactam 11 in 81% yield. Finally, hydrogenation of 11 catalyzed by Raney nickel gave a 3:1 mixture of the (3S,4S)-lactam 5 reported before [1] and the unwanted (3R,4R)-lactam 12 in 86% yield, although other conditions using other catalysts lost face-selectivity and/or gave more des-fluoro products. The mixture of 5 and 12 could be separated easily into the two isomers by silica gel column chromatography. The (3S,4S)-lactam 5 obtained by this route could be transformed into 6 in about the same yield as reported before. In order to avoid the chromatographic separation of (Z)-10 and (E)-10, and to obtain 11 more efficiently via shorter routes, we tried to develop two new synthetic routes for 11, shown in Scheme 3. The key step of one route (1→13→14→15→16→11) is a Reformatsky reaction between the amino-ketone 14 and ethyl bromofluoroacetate, and that of the other (1→13→17→16→11) is the intramolecular Reformatsky reaction of 17. However, under standard conditions, both of these Reformatsky reactions gave complex mixtures and did not afford the key intermediates 15 or 16. It is possible that, using other reaction conditions, these new processes might be achieved in the future.

Conclusions

Thus, we have developed a new synthetic route for 6 utilizing the Reformatsky reaction between the 1-acetylcyclopropanecarboxylate 1 and ethyl bromofluoroacetate. In this route, neither expensive fluorinating agents nor operations at very low temperature were needed. The total yield from 1 to 5 (6 steps, plus the three-fold isomerization process) was improved to 41%, from the earlier quoted value of 10% (8 steps).

Experimental Section

General Procedures. Unless otherwise stated, materials from commercial suppliers were used without further purification. Optical rotations were measured in a 0.5-dm cell at 25 °C at 589 nm with a HORIBA SEPA-300 polarimeter.
1H NMR spectra were determined on a JEOL JNM-EX400 spectrometer. Chemical shifts are reported in ppm relative to tetramethylsilane as internal standard. Significant 1H NMR data are tabulated in the order: number of protons, multiplicity (s, singlet; d, doublet; t, triplet; q, quartet; m, multiplet), coupling constant(s) in Hz. High-resolution mass spectra were obtained on a JEOL JMS-700 mass spectrometer under electron impact ionization (EI), electrospray ionization (ESI), or fast-atom bombardment (FAB) conditions. Column chromatography refers to flash column chromatography using Merck silica gel 60, 230-400 mesh ASTM. Thin-layer chromatography (TLC) was performed with Merck silica gel 60 F254 TLC plates, and compound visualization was effected with a 5% solution of molybdophosphoric acid in ethanol, a UV lamp, iodine, or Wako Ninhydrin Spray.

Ethyl (E/Z)-3-(1-ethoxycarbonylcycloprop-1-yl)-2-fluoro-2-butenoate (9). To a solution of ethyl 1-acetylcyclopropanecarboxylate (1) (124.5 g, 0.797 mol) in 1.5 L of PhH was added Zn powder (156.4 g, 2.393 mol) and a catalytic amount of I2. To the mixture, heated under reflux, a solution of ethyl bromofluoroacetate (94.2 mL, 0.797 mol) in 200 mL of PhH was added dropwise over 1 h, and the mixture was heated under reflux for another 1 h. The mixture was cooled on an ice bath, and 1 L of aqueous 1 M HCl solution was added. After stirring for 1 h, the organic layer was separated. The organic solution was washed with aqueous 1 M HCl solution, water, and brine, dried over Na2SO4, and concentrated in vacuo to give crude 8. To the crude 8 in pyridine (387 mL, 4.78 mol) was added thionyl chloride (69.8 mL, 0.957 mol) at -10 °C, and the mixture was stirred for 3 h at the same temperature. The resultant mixture was poured into 2 L of ice-cold aqueous 1 M HCl solution and was extracted with 1.5 L of AcOEt. The organic layer was washed with aqueous 1 M HCl solution, water, and brine, dried over Na2SO4, and concentrated in vacuo to give the crude chlorinated product. To this, in 0.5 L of CH2Cl2, was added DBU (131 mL, 0.877 mol) at 0 °C, and the mixture was stirred for 17 h at ambient temperature. The reaction mixture was partitioned between 1 L of CHCl3 and 2 L of aqueous 1 M HCl solution, and the organic layer was washed with brine, dried over Na2SO4, and concentrated in vacuo to give crude 9. This was purified by silica gel chromatography, eluting with AcOEt:hexane = 1:4, to yield 9 as an E/Z mixture (152.8 g, 79%, E/Z = 1/1 by 1H NMR) as a colorless oil. This E/Z mixture was used for the next reaction without separation of isomers.

To a solution of 9 were added NBS (0.625 mol) and a catalytic amount of AIBN, and the mixture was heated under reflux for 16 h. After cooling to ambient temperature, the mixture was concentrated in vacuo. 0.3 L of benzene was added to the residue, and the resultant suspension was filtered. The filtrate was concentrated in vacuo, and the resultant crude product was separated into the two isomers by silica gel chromatography. (E)-10 (100.5 g, 50%) was obtained as a pale yellow oil, eluting with AcOEt:hexane = 1:4.

Isomerization of (Z)-10 to a mixture of (Z)-10 and (E)-10. A mixture of ethyl (Z)-4-bromo-3-(1-ethoxycarbonylcycloprop-1-yl)-2-fluoro-2-butenoate (Z)-10, 0.1 eq. of NBS, and 0.1 eq.
of AIBN was dissolved in benzene and heated under reflux under nitrogen for 12-16 h. After the solvent was evaporated, the remaining residue was purified and separated by silica gel chromatography, eluting with AcOEt:hexane = 1:20 → 1:10 → 1:4 → 1:2, to give (E)-10 and a mixture of (E)-10 and (Z)-10. Using the mixture of (E)-10 and (Z)-10, the same isomerization procedure was repeated three times to give a total of 89% of (E)-10 from (Z)-10.

Scheme 1. The reported synthetic route for DQ-113 [1].
Scheme 2. The new improved synthetic route for DQ-113 utilizing the Reformatsky reaction with bromofluoroacetate.
Scheme 3. Two possible (unachieved) synthetic routes for 11.
1,638
2003-07-30T00:00:00.000
[ "Chemistry" ]
Sixteenth-Order Optimal Iterative Scheme Based on Inverse Interpolatory Rational Function for Nonlinear Equations

The principal motivation of this paper is to propose a general scheme that is applicable to every existing multi-point optimal eighth-order method/family of methods to produce a further sixteenth-order scheme. By adopting our technique, we can extend all the existing optimal eighth-order schemes whose first sub-step employs Newton's method to sixteenth-order convergence. The developed technique has an optimal convergence order with regard to the classical Kung-Traub conjecture. In addition, we fully investigated the computational and theoretical properties along with a main theorem that demonstrates the convergence order and the asymptotic error constant term. By using Mathematica 11 with its high-precision computability, we checked the efficiency of our methods and compared them with existing robust methods of the same convergence order.

Introduction

The construction of high-order multi-point iterative techniques for the approximate solution of nonlinear equations has always been a crucial problem in computational mathematics and numerical analysis. Such methods provide an effective approximate solution, up to a specified accuracy, of the nonlinear equation Ω(x) = 0, where Ω : C → C is a holomorphic function in a neighborhood of the required zero ξ. A certain recognition has been given to the construction of sixteenth-order iterative methods in the last two decades. There are several reasons behind this; some of them are advanced digital computer arithmetic, symbolic computation, the desired accuracy of the required solution within a small number of iterations, smaller residual errors, lower CPU time, a smaller difference between two consecutive iterations, etc. (for more details please see Traub [1] and Petković et al. [2]). We have a handful of optimal iterative methods of order sixteen [3][4][5][6][7][8][9]. Most of these methods are improvements or extensions of some classical methods, e.g., Newton's method, Newton-like methods, or Ostrowski's method, at the cost of additional function and/or first-order derivative evaluations or extra sub-steps of the native schemes. In addition, to our knowledge we have very few techniques [5,10] that are applicable to every optimal eighth-order method (whose first sub-step employs Newton's method) to further obtain a sixteenth-order optimal scheme. Presently, optimal schemes applicable to every iterative method of a particular order, in order to obtain further high-order methods, are more important than obtaining a high-order version of a single native method. Finding such general schemes is a more attractive and harder task in the area of numerical analysis. Therefore, in this manuscript we pursue the development of a scheme that is suitable for every optimal eighth-order scheme whose first sub-step is the classical Newton's method, in order to obtain further optimal sixteenth-order convergence, rather than applying the technique only to a certain method. The construction of our technique is based on the rational approximation approach. The main advantage of the constructed technique is that it is suitable for every optimal eighth-order scheme whose first sub-step employs Newton's method. Therefore, we can choose any iterative method/family of methods from [5,[11][12][13][14][15][16][17][18][19][20][21][22][23][24][25], etc. to obtain a further sixteenth-order optimal scheme.
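To make the notion of an optimal multi-point method concrete, a classical example of the kind of building block referred to above is Ostrowski's two-step fourth-order method, which uses three functional evaluations per iteration and is therefore optimal in the Kung-Traub sense. The following Python sketch is purely illustrative — it is not the specific scheme constructed in this paper, and the test equation is an arbitrary choice for demonstration:

def ostrowski_step(f, df, x):
    # First sub-step: classical Newton's method.
    fx = f(x)
    dfx = df(x)
    y = x - fx / dfx
    # Second sub-step: Ostrowski's correction, reusing f(x) and f'(x),
    # so only one new evaluation f(y) is needed (3 evaluations in total).
    fy = f(y)
    return y - fy / dfx * fx / (fx - 2.0 * fy)

# Demonstration on an assumed test equation f(x) = x**3 - 2 (root 2**(1/3)).
f = lambda x: x**3 - 2.0
df = lambda x: 3.0 * x**2
x = 1.0
for _ in range(4):
    x = ostrowski_step(f, df, x)
print(x)  # converges rapidly to 2**(1/3) ≈ 1.259921...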
The effectiveness of our technique is illustrated by several numerical examples, and it is found that our methods produce superior results compared with existing optimal methods of the same convergence order.

Construction of the Proposed Optimal Scheme

Here, we present an optimal sixteenth-order general iterative scheme that is the main contribution of this study. In this regard, we consider a general eighth-order scheme, defined as in (2), where φ_4 and ψ_8 are optimal schemes of order four and eight, respectively. We adopt Newton's method as a fourth sub-step to obtain a sixteenth-order scheme, which, however, is non-optimal with regard to the conjecture given by Kung and Traub [5] because it uses six functional evaluations at each step. We can decrease the number of functional evaluations with the help of a third-order rational function γ(x), whose disposable parameters b_i (1 ≤ i ≤ 5) can be found from tangency constraints. Then, the last sub-step iteration is replaced by one that does not require Ω′(t_r). Expressions (2) and (6) yield an optimal sixteenth-order scheme. It is vital to note that the γ(x) in (4) plays a significant role in the construction of an optimal sixteenth-order scheme. In this paper, we adopt a different last sub-step iteration, in which Q_Ω can be considered a correction term, naturally called an "error corrector". A last sub-step of this type is handier for the convergence analysis and also for the study of dynamics through basins of attraction. The easiest way of obtaining such a fourth sub-step iteration with a feasible error corrector is to apply the Inverse Function Theorem [26] to (5). Since ξ is a simple root (i.e., γ′(ξ) ≠ 0), we have a unique map τ(x) satisfying γ(τ(x)) = x in a certain neighborhood of γ(ξ). Hence, we adopt such an inverse map τ(x) to obtain the needed last sub-step of the form (7) instead of using γ(x) in (5). With the help of the Inverse Function Theorem, we obtain the final sub-step iteration (8) from expression (5), where b_i, i = 1, 2, ..., 5, are disposable constants. We can find them by adopting the tangency conditions (9). One should note that the rational function on the right-hand side of (8) is regarded as an error corrector. Indeed, the desired last sub-step iteration (8) is obtained using the inverse interpolatory function approach, meeting the tangency constraints (9). Clearly, the last sub-step iteration (6) is more suitable than (3) for the error analysis. It remains to determine the parameters b_i (1 ≤ i ≤ 5) in (8). Using the first two tangency conditions we obtain (10); adopting the last three tangency constraints together with expression (10), we have three independent relations which further yield b_1, b_2 and b_3. Let us consider that the rational function (8) cuts the x-axis at x = x_{r+1}, in order to obtain the next estimate x_{r+1}. Using the above values of b_1, b_2 and b_3, together with expressions (2) and (14), we finally arrive at the proposed scheme (15), where θ_2 and θ_3 are defined earlier. We show in Theorem 1 that the convergence order reaches the optimal order sixteen without adopting any additional functional evaluations. It is vital to note that only the coefficients A_0 and B_0, from φ_4(x_r, w_r) and ψ_8(x_r, w_r, z_r) respectively, contribute to the development of the needed asymptotic error constant, as can be found in Theorem 1. Theorem 1.
Let Ω : C → C be an analytic function in a region containing the simple zero ξ, and let the initial guess x = x_0 be sufficiently close to ξ for guaranteed convergence. In addition, we consider that φ_4(x_r, w_r) and ψ_8(x_r, w_r, z_r) are any optimal fourth- and eighth-order schemes, respectively. Then, the proposed scheme (15) has optimal sixteenth-order convergence.

Proof. Let e_r = x_r − ξ be the error at the r-th step. With the help of Taylor series expansion, we expand the functions Ω(x_r) and Ω′(x_r) around x = ξ under the assumption Ω′(ξ) ≠ 0, which leads us to the expansions (16) and (17), where c_j = Ω^(j)(ξ)/(j! Ω′(ξ)) for j = 2, 3, ..., 16. By inserting the expressions (16) and (17) into the first sub-step of (15), we obtain an expansion whose coefficients G_k = G_k(c_2, c_3, ..., c_16) are given in terms of c_2, c_3, ..., c_i, with the first two coefficients written explicitly. The expansion of Ω(w_r) about the point x = ξ follows similarly with the help of Taylor series. As stated at the beginning, we consider that φ_4(x_r, w_r) and ψ_8(x_r, w_r, z_r) are optimal schemes of order four and eight, respectively. Then, it is obvious that they satisfy error equations of the corresponding forms, where A_0, B_0 ≠ 0. By using Taylor series expansions we obtain further intermediate expansions, and with the help of expressions (16)-(23) we finally obtain the error equation (25). Expression (25) shows that our scheme (15) reaches sixteenth-order convergence. The scheme (15) is also optimal with regard to the Kung-Traub conjecture, since it uses only five functional evaluations at each step. Hence, this completes the proof.

Remark 1. Generally, one would naturally expect the presented general scheme (15) to contain other terms from A_0, A_1, ..., A_12 and B_0, B_1, ..., B_8. However, it is clear from expression (25) that the asymptotic error constant involves only A_0 and B_0. This simplicity of the asymptotic error constant is due to adopting the inverse interpolatory function with the tangency constraints.

Special Cases

This section is devoted to the discussion of some important cases of the proposed scheme. Therefore, we consider the following:
1. We assume the optimal eighth-order scheme suggested by Cordero et al. [13]. By using this scheme, we obtain a new optimal sixteenth-order scheme, where b_1, b_2, b_3 ∈ R, provided b_2 + b_3 ≠ 0. Let us consider b_1 = b_2 = 1 and b_3 = 2 in the above scheme, which we denote by (OM1).
2. Again, we consider another optimal eighth-order scheme, presented by Behl and Motsa in [11]. In this way, we obtain another new optimal family of sixteenth-order methods, where b ∈ R. We chose b = −1/2 in this expression, which we denote by (OM2).
3. Let us choose one more optimal eighth-order scheme, proposed by Džunić and Petković [15]. We denote the resulting scheme by (OM3).
4. Now, we pick another optimal family of eighth-order iterative methods, given by Bi et al. in [12]. By adopting this scheme, we obtain a further family, where α ∈ R and Ω[·, ·] is a first-order finite difference. Let us consider α = 1 in the above scheme, denoted by (OM4).
In a similar fashion, we can develop several new and interesting optimal sixteenth-order schemes by considering any optimal eighth-order scheme from the literature whose first sub-step employs the classical Newton's method.

Numerical Experiments

This section is dedicated to examining the convergence behavior of the particular methods mentioned in the Special Cases section.
Therefore, we shall consider some standard test functions Ω_1(x)-Ω_8(x). Here, we confirm the theoretical results of the earlier sections on the basis of the computed ratios |x_{r+1} − x_r| / |x_r − x_{r−1}|^16 and the computational convergence order. We display the iteration index (n), the approximated zeros (x_r), the absolute residual error of the corresponding function (|Ω(x_r)|), the error between consecutive iterations |x_{r+1} − x_r|, the ratio |x_{r+1} − x_r| / |x_r − x_{r−1}|^16, the asymptotic error constant η = lim_{n→∞} |x_{r+1} − x_r| / |x_r − x_{r−1}|^16, and the computational convergence order (ρ) in Table 1. To calculate ρ, we adopt the standard estimator based on differences of consecutive iterates (a high-precision sketch of this estimator is given at the end of this section). We calculate ρ, the asymptotic error term, and the other remaining parameters to a high number of significant digits (a minimum of 1000 significant digits) to reduce rounding-off error. However, due to the restricted paper capacity, we depict the values of x_r and ρ to 25 and 5 significant figures, respectively. Additionally, we report |x_{r+1} − x_r| / |x_r − x_{r−1}|^16 and η to 10 significant figures. In addition, the absolute residual error in the function |Ω(x_r)| and the error between consecutive iterations |x_{r+1} − x_r| are depicted to 2 significant digits with the exponent, as can be seen in Tables 1-3. Furthermore, the estimated zeros are given to 25 significant figures in Table 1. Now, we compare our sixteenth-order methods with the optimal sixteenth-order families of iterative schemes that were proposed by Sharma et al. [7], Geum and Kim [3,4] and Ullah et al. [8]. Among these schemes, we pick the iterative methods given by expression (29), expression (Y1) (for more detail please see Table 1 of Geum and Kim [3]), expression (K2) (see Table 1 of Geum and Kim [4] for more details) and expression (9), respectively called SM, GK1, GK2 and MM. The numbering and titles of the methods (used for comparisons) are taken from their original research papers.

Table 1. Convergence behavior of methods OM1, OM2, OM3 and OM4 on Ω_1(x)-Ω_8(x).

It is straightforward to say that our proposed methods not only converge very fast towards the required zero, but also have small asymptotic error constants. We want to demonstrate that our methods perform better than the existing ones. Therefore, instead of manipulating the results by considering self-made examples and/or cherry-picking the starting points, we assume four numerical examples: the first is taken from Sharma et al. [7]; the second from Geum and Kim [3]; the third from Geum and Kim [4]; and the fourth from Ullah et al. [8], with the same starting points as mentioned in their research articles. Additionally, we want to check what the outcomes will be if we assume different numerical examples and starting guesses that are not suggested in their articles. Therefore, we assume another numerical example from Behl et al. [27]. For detailed information on the considered examples or test functions, please see Table 4. We provide two comparison tables for every test function: the first is associated with |Ω(x_r)| and is reported in Table 2; the second is related to |x_{r+1} − x_r|, and the corresponding results are depicted in Table 3. In addition, we assume the estimated zero of the considered functions in the case where the exact zero is not available, i.e., correct to 1000 significant figures, to calculate |x_r − ξ|.
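The computational order of convergence referred to above can be approximated from consecutive iterates without knowing the exact zero. A minimal sketch using Python's mpmath package follows; the paper's computations use Mathematica 11, and the estimator below is the standard approximate computational order of convergence, which may differ in detail from the exact formula used in the paper. The test function and the use of plain Newton's method are assumptions made only to demonstrate the estimator.

from mpmath import mp, mpf, log, fabs

mp.dps = 1000  # high working precision, as in the experiments

def computational_order(x_prev2, x_prev1, x_curr, x_next):
    # Approximate computational order of convergence (ACOC) from the last
    # four iterates, using differences of consecutive iterates in place of
    # the (unknown) exact errors.
    e2 = fabs(x_next - x_curr)
    e1 = fabs(x_curr - x_prev1)
    e0 = fabs(x_prev1 - x_prev2)
    return log(e2 / e1) / log(e1 / e0)

# Example with an assumed test function and plain Newton iterations,
# only to demonstrate the estimator (it should return a value close to 2).
f = lambda x: x**3 - mpf(2)
df = lambda x: 3 * x**2
xs = [mpf("1.1")]
for _ in range(5):
    x = xs[-1]
    xs.append(x - f(x) / df(x))
print(computational_order(*xs[-4:]))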
All the computations have been executed using the programming package Mathematica 11 with multiple-precision arithmetic. Finally, b_1(±b_2) stands for b_1 × 10^(±b_2) in Tables 1-3. For example, one of the test functions is Ω_13(x) = (x − 2)^2 − log x − 33x [27], with initial guess x_0 = 37.5 and zero ξ ≈ 36.98947358294466986534473.

Conclusions

We constructed a general optimal scheme of order sixteen that is suitable for every optimal eighth-order iterative method/family of iterative methods, provided the first sub-step employs the classical Newton's method, unlike earlier studies, where researchers suggested a high-order version or extension of certain existing methods such as Ostrowski's method or King's method [28], etc. This means that we can choose any iterative method/family of methods from [5,[11][12][13][14][15][16][17][18][19][20][21], etc. to obtain a further optimal sixteenth-order scheme. The construction of the presented technique is based on the inverse interpolatory approach. Our scheme also satisfies the conjecture on the optimality of iterative methods given by Kung and Traub. In addition, we compared our methods with existing methods of the same convergence order on several nonlinear scalar problems. The obtained results in Tables 2 and 3 also illustrate the superiority of our methods over the existing methods, despite choosing the same test problems and the same initial guesses. Tables 1-3 confirm that smaller |Ω(x_r)| and |x_{r+1} − x_r| and simple asymptotic error terms are associated with our iterative methods. The superiority of our methods over the existing robust methods may be due to the inherent structure of our technique, with simple asymptotic error constants and the inverse interpolatory approach.
3,770.2
2019-05-19T00:00:00.000
[ "Mathematics" ]
Targeting cyclophilin-D by miR-1281 protects human macrophages from Mycobacterium tuberculosis-induced programmed necrosis and apoptosis Mycobacterium tuberculosis (MTB) infection induces cytotoxicity to host human macrophages. The underlying signaling mechanisms are largely unknown. Here we discovered that MTB infection induced programmed necrosis in human macrophages, causing mitochondrial cyclophilin-D (CypD)-p53-adenine nucleotide translocator type 1 association, mitochondrial depolarization and lactate dehydrogenase medium release. In human macrophages MTB infection-induced programmed necrosis and apoptosis were largely attenuated by CypD inhibition (by cyclosporin A), silencing and knockout, but intensified with ectopic CypD overexpression. Further studies identified microRNA-1281 as a CypD-targeting miRNA. Ectopic overexpression of microRNA-1281 decreased CypD 3’-untranslated region activity and its expression, protecting human macrophages from MTB-induced programmed necrosis and apoptosis. Conversely, microRNA-1281 inhibition in human macrophages, by the anti-sense sequence, increased CypD expression and potentiated MTB-induced cytotoxicity. Importantly, in CypD-KO macrophages miR-1281 overexpression or inhibition was ineffective against MTB infection. Restoring CypD expression, by an untranslated region-depleted CypD construct, reversed miR-1281-induced cytoprotection against MTB in human macrophages. Collectively, these results show that targeting CypD by miR-1281 protects human macrophages from MTB-induced programmed necrosis and apoptosis. AGING This will lead to mitochondrial depolarization, mPTP opening and cytochrome C release. It will eventually promote cell necrosis [7-9, 11, 12, 15, 16]. Other studies proposed that the cascade is also important for initiating cell apoptosis, as cytochrome C releases to the cytosol [17][18][19]. The current study tested whether this pathway participated in MTB-induced death of human macrophages. MicroRNAs (miRNAs) are a large family of endogenous, short (about 22-nt long) and single-strand non-coding RNAs (ncRNAs) [20,21]. By physically binding to the 3′-untranslated region (3′-UTR) of the targeted mRNA, miRNAs will induce degradation of target mRNAs and/or inhibit gene translation [20,21]. Existing literatures have implied that miRNA dysregulation in the host cells (including macrophages) is extremely important in active and latent TB infection [22][23][24][25]. Our previous study has shown that microRNA-579 (miR-579) upregulation mediated MTB-induced macrophage cytotoxicity [26]. Whether CypD is a target of miRNAs and the molecular regulation of CypD in the necrotic machinery of MTB-infected human macrophages remain to be elucidated. The results of the present study will show that microRNA-1281 (miR-1281) is a CypD-targeting miRNA, and miR-1281 protecting human macrophages from MTB-induced programmed necrosis and apoptosis by silencing CypD. MTB infection induces mPTP opening and programmed necrosis in human macrophages Understanding the underlying mechanisms of MTBinduced death of macrophages is vital for the control of MTB infection [6,26]. We tested the possible involvement of mPTP in the process. The mitochondrial immunoprecipitation (Mito-IP) assay results, Figure 1A, demonstrated that with MTB infection, p53 immunoprecipitated with mPTP components CypD and ANT1 [8,27,28]. It is known as the initial step for mPTP opening and programmed necrosis [11,13,14,29,30]. 
The expression levels of CypD, ANT1 and p53 were not significantly changed in human macrophages ( Figure 1A, "Input"). mPTP opening is often followed with mitochondrial depolarization [11,13,14,29,30]. JC-1 assay results, Figure 1B, demonstrated that mitochondrial depolarization occurred in the MTBinfected human macrophages, showing JC-1 green fluorescence accumulation ( Figure 1B). Furthermore, the medium LDH contents were significantly increased in human macrophages with MTB infection ( Figure 1C), indicating programmed necrosis [11,13,14,29,30]. Together, these results suggested that MTB infection induced mPTP opening and programmed necrosis in human macrophages. macrophages were infected with Mycobacterium tuberculosis (MTB) for applied time periods, mitochondrial immunoprecipitation (Mito-IP) assays were carried out to test CypD-ANT1-p53 association in the mitochondria (A, "Mito-IP"), with expression of these proteins examined by Western blotting (A, "Input"); Mitochondrial depolarization was examined by JC-1 dye assay (B); Cell necrosis was tested by medium LDH release assays (C). For JC-1 assays, both JC-1 merged images and JC-1 green fluorescence intensity were presented (same for all Figures). Expression of listed proteins was quantified, normalized to loading controls (A). "C" stands for uninfected control macrophages (same for all Figures). Data were presented as mean ± SD (n=5), and results were normalized to "C". * P <0.05 vs. "C" macrophages. Experiments in this figure were repeated five times with similar results obtained. Bar= 100 μm (B). AGING We further hypothesized that ectopic overexpression of CypD should facilitate MTB-induced cytotoxicity of human macrophages. The lentiviral CypD expression construct was transduced to human macrophages. Via selection by puromycin the stable cells were established, showing over five-folds CypD mRNA expression (vs. vector control cells, Figure 2H). CypD protein levels were significantly increased as well ( Figure 2I). Importantly, ectopic CypD overexpression potentiated MTB-induced mitochondrial depolarization ( Figure 2J), viability reduction ( Figure 2K) and medium LDH release ( Figure 2L). TUNEL staining results demonstrated that CypD overexpression significantly enhanced MTB-induced apoptosis activation ( Figure 2M). MTB infection did not affect CypD expression in human macrophages ( Figure 2A and 2H). Without MTB infection, CypD inhibition, silencing, KO or overexpression did not affect the functions of human macrophages ( Figure 2C-2G, 2J-2M). These results show that inhibition of the CypD-mPTP pathway largely attenuated MTB-induced death of human macrophages. miR-1281 overexpression inhibits MTB-induced programmed necrosis and apoptosis in human macrophages Since miR-1281 targets and downregulates CypD, it would then protect human macrophages from MTBinduced cytotoxicity. The lv-pre-miR-1281-expressing human macrophages (see Figure 3) and control macrophages with non-sense microRNA ("lv-C") were infected with MTB. As shown, miR-1281 overexpression potently inhibited MTB-induced mitochondrial depolarization, or JC-1 green AGING Data were presented as mean ± SD (n=5), and results were normalized to "C". * P <0.05 vs. "C" treatment in "Pare" macrophages. # P <0.05 vs. MTB treatment in "Pare" macrophages. Experiments in this figure were repeated four times with similar results obtained. 
Next, the 3′-UTR-depleted CypD construct was transfected to human macrophages, completely restored CypD mRNA and protein expression in macrophages with lv-pre-miR-1281 ( Figure 6E). As shown lv-pre-miR-128-induced macrophage protection against MTB was completely reversed with re-expression of the 3′-UTR-depleted CypD ( Figure 6F and 6G). Thus, with CypD re-expression MTB-induced viability reduction ( Figure 6F) and cell death ( Figure 6G) were restored even with miR-1281 overexpression. The qPCR assay results, Figure 6H, demonstrated that 3′-UTR-depleted CypD did not alter miR-1281 expression. These results together indicate that CypD should be the important target of miR-1281 in human macrophages. DISCUSSION Necrosis is a common form of cell death characterized by cell swelling, plasma membrane fracture and lysis of the intracellular components and cellular organelles. The traditional concept is that necrosis is a form of accidental, unregulated and passive cell death, while apoptosis is the sole form of "programmed cell death" [10, 37,38]. Yet recent studies have shown that certain necrosis is also programmed and actively regulated [7,10,[37][38][39]. In the present study we show that MTB infection led to programmed necrosis in human macrophages, causing CypD-p53-ANT1 mitochondrial association, mitochondrial depolarization and LDH release (to the medium). Importantly programmed necrosis, together with apoptosis, could be vital for MTB infection-induced cytotoxicity in the human macrophages. CypD is the prolyl isomerase and the key component forming mPTP, along with ANT1 and the voltagedependent anion channel (VDAC) [13,14,29]. Studies have shown that CypD lies in the center to mediate the pore opening. CypD inhibition or depletion will result in inhibition on mPTP formation and opening [13,14,29]. Since mPTP opening is vital for programmed necrosis, CypD is essential in regulating necrotic cell death pathway [13,14,29]. In the present study we show that CypD is vital for MTB-induced cytotoxicity to human macrophages. MTB infection-induced programmed necrosis and apoptosis were largely attenuated with CypD inhibition (by CsA), silencing (by shRNA) and KO (using CRISPR/Cas9 method), but intensified with ectopic overexpression of CypD. Therefore, targeting CypD-mPTP pathway could be a novel strategy to protect human macrophages from MTB infection-induced cytotoxicity. Our results imply that miR-1281 inhibited MTBinduced cytotoxicity to the human macrophages. First, lv-pre-miR-1281 largely attenuated programmed necrosis and apoptosis in MTB-infected macrophages. Conversely, miR-1281 inhibition, by antagomiR-1281, protected human macrophages from MTB-induced cytotoxicity. These results imply that miR-1281 offers cytoprotection against MTB infection in human macrophages. Further analyses show that CypD is the primary target gene of miR-1281 in MTB-infected macrophages. Neither miR-1281 overexpression nor miR-1281 inhibition was able to change MTB-induced cytotoxicity in CypD-KO macrophages. Importantly, restoring CypD expression, by the UTR-depleted CypD construct, reversed miR-1281-induced macrophage protection against MTB infection. Collectively, these results show that targeting CypD by miR-1281 protects human macrophages from MTBinduced programmed necrosis and apoptosis. Chemicals and reagents Puromycin, cyclosporin A (CsA), terminal deoxynucleotidyl transferase (TdT)-mediated Dutp nick-end labeling (TUNEL), DAPI and JC-1 dyes were obtained from Sigma-Aldrich (St. Louis, MO). 
The antibodies were from Cell Signaling Tech (Danvers, MA). From Invitrogen-Thermo Fisher (Shanghai, China) the cell culture reagents, the Trizol reagents and other RNA assay reagents, as well as the cell transfection reagents were obtained. All the sequences, viral constructs and gene products were provided and verified by Shanghai Genechem Co. (Shanghai, China) or otherwise mentioned. Primary human macrophages. As described early [26], from the peripheral blood mononuclear cells (PBMCs) of a written-informed consent donor the primary human macrophages were differentiated [42] and cultured under the described protocol [42]. The primary macrophages were always utilized at passage 3-10. The protocols of the present study were approved by the Ethics Committee of Tongji University School of Medicine. MTB infection As described early [26], at 2×10 5 cells per well the primary human macrophages were cultured into sixwell plates and then infected with MTB (multiplicity of infection/MOI 10). After 4h the infected macrophages were washed and returned back to the fresh medium. Mitochondrial Immunoprecipitation (Mito-IP) As described previously [18], human macrophages with MTB infection were harvested and homogenized by the lysis buffer provided by Dr. Wang at Soochow University [18]. After centrifugation, the supernatants were collected and suspended. The pellets were then re-suspended in the above buffer plus NP-40, forming the mitochondria fraction lysates. The quantified mitochondrial lysates (500 μg per sample) were precleared and incubated with anti-CypD antibody [28,43], with the mitochondrial CypD-p53-ANT1 complex captured by the protein IgG-Sepharose beads (Sigma), and tested by Western blotting. Quantitative real-time PCR (qPCR) Total cellular RNA was extracted by the Trizol reagents from MTB-infected macrophages, with the RNA concentrations determined using the NanoDrop system. From each treatment 100 ng total RNA was utilized for the reverse transcription using the described protocol [26]. The detailed procedures for qPCR were described previously [26], with the melt curve analyses performed. Quantification of targeted genes was through the 2 −ΔΔCt method, using GAPDH as the internal control. miR-1281 expression was normalized to U6. From Shanghai Genechem the primers for U6 and GAPDH were obtained, with other primers for miR-1281, CypD and ANT1 listed in Table 1. Western blotting The detailed procedures for the Western blotting assay were reported early [26]. In brief, with the applied treatments, 30 μg total lysates (of each lane) were separated by sodium dodecyl sulfate-polyacrylamide gels, thereby transferred to the polyvinylidene difluoride (PVDF) blots (Merck-Millipore). After blocking the blots were incubated with the primary and secondary antibodies, and detected using the enhanced chemiluminescence (ECL) kit (Pierce, Rockford, IL). Cell viability Macrophages were plated at 3×10 3 cells per well onto the 96-well tissue-culture plates. Following the indicated treatments the Cell Counting Kit-8 (CCK-8, Dojindo Laboratories, Kumamoto, Japan) reagent (10 μL in each well) was added. After 2h, the CCK-8 absorbance at 450 nm was tested through a spectrophotometer (Thermo Fisher Scientific, Vantaa, Finland). Cell necrosis Cell necrosis was tested through assaying the medium lactate dehydrogenase (LDH) contents by a two-step easy enzymatic reaction LDH kit (Takara, Tokyo, Japan). Medium LDH contents were always normalized to total LDH levels. 
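Referring back to the qPCR quantification described above, the following is a minimal sketch of the 2^−ΔΔCt calculation with GAPDH as the internal control (miR-1281 would be normalized to U6 in the same way). The Ct values are hypothetical, chosen only to illustrate the arithmetic.

```python
# Minimal sketch of the 2^-ddCt relative-quantification method described in the
# qPCR subsection above. Ct values below are hypothetical.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # dCt of treated sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt of control sample
    dd_ct = d_ct_treated - d_ct_control                 # ddCt
    return 2 ** (-dd_ct)                                 # fold change vs. control

# Example: CypD vs. GAPDH in MTB-infected vs. uninfected macrophages (made-up Cts)
fold = relative_expression(ct_target_treated=24.1, ct_ref_treated=17.8,
                           ct_target_control=24.3, ct_ref_control=18.0)
print(f"CypD fold change vs. control: {fold:.2f}")
```

With these made-up values the fold change is close to 1, in line with the observation above that MTB infection itself did not alter CypD expression.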
TUNEL staining Following MTB infection, the human macrophages were co-stained with TUNEL and DAPI dyes (Sigma). The apoptotic nuclei percentage (TUNEL/DAPI×100%) was calculated, from at least 500 cells of five random views (1: 100 magnification). JC-1 assay As described previously [26], the human macrophages with the indicated treatment were stained with JC-1 (5 μg/mL, for 10-15 min) and washed. JC-1 green fluorescence, indicating mitochondrial depolarization, was tested at 550 nm using the RF-5301 PC fluorescence spectrofluorometer (Shimadzu, Tokyo, Japan). Furthermore, the representative JC-1 fluorescence images were taken, merging the green fluorescence image (at 550 nm) and the corresponding red fluorescence image (at 650 nm). CypD short hairpin RNA (shRNA) The CypD shRNA (with the target sequence, CCCG TCCTCTTCCTCCTCCTCCG) lentiviral particles and the control shRNA lentiviral particles were provided by Dr. Xu [46]. Human macrophages were plated onto sixwell plates (in polybrene-containing complete medium), transduced with the applied shRNA lentivirus particles. After 48h, puromycin was added to select stable cells (for 10-12 days), with CypD silencing verified by qPCR and Western blotting assays. CypD knockout (KO) The small guide RNA (sgRNA) against human CypD (target DNA sequence, GGCGACTTCACCAACCA CAA) was selected from Dr. Zhang's laboratory (http://crispr.mit.edu/), and inserted into the lentiCRISPR-green fluorescent protein (GFP) plasmid (from Dr. Zhao at Shanghai Jiao Tong University) with the puromycin selection gene. The construct was transfected to the human macrophages by Lipofectamine 2000, with macrophages subjected to FACS-mediated GFP sorting and selected by puromycin (3.0 μg/mL) to achieved stable cells. CypD KO was verified by qPCR and Western blotting assays. Control cells were transfected with the empty vector. Ectopic CypD over-expression The CypD expression (with no 3′-UTR region) pSuperpuro-Flag vector, provided by Dr. Xu [46], was transfected to human macrophages by the Lipofectamine 2000 protocol (Invitrogen, Suzhou, China). The macrophages were then selected by puromycin for 10 days to achieve stable cells, with CypD overexpression confirmed by qPCR and Western blotting assays. Statistical analyses Data in the present study were shown as mean ± standard deviation (SD). Statistical analyses were carried out by the SPSS 20.0 software (SPSS Co., Chicago, CA), using oneway analysis of variance of post hoc Bonferroni test as comparisons of multiple groups. The Student T Test was utilized for comparison between two groups. Statistically differences were assigned to P < 0.05. AUTHOR CONTRIBUTIONS All authors listed carried out the experiments, participated in the design of the study and performed the statistical analysis, conceived of the study, and participated in its design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript. CONFLICTS OF INTEREST None of the authors has any conflicts of interests to declare.
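The statistical analyses described above were performed in SPSS; the sketch below is only an illustrative Python analogue of the same workflow (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons), using hypothetical normalized LDH-release values.

```python
# Illustrative sketch (not the authors' SPSS workflow) of the comparisons described
# in "Statistical analyses" above. The LDH-release values below are hypothetical.
from itertools import combinations
from scipy import stats

groups = {
    "control":             [1.00, 0.95, 1.05, 0.98, 1.02],
    "MTB":                 [2.10, 2.30, 1.95, 2.20, 2.05],
    "MTB + lv-miR-1281":   [1.30, 1.25, 1.40, 1.35, 1.28],
}

# One-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Bonferroni post hoc: scale each pairwise p-value by the number of comparisons
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_bonf = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: corrected p = {p_bonf:.4f}")
```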
3,271.8
2019-12-28T00:00:00.000
[ "Biology", "Chemistry" ]
Thin Layer Buckling in Perovskite CsPbBr3 Nanobelts Flexible semiconductor materials, where structural fluctuations and transformation are tolerable and have low impact on electronic properties, focus interest for future applications. Two-dimensional thin layer lead halide perovskites are hailed for their unconventional optoelectronic features. We report structural deformations via thin layer buckling in colloidal CsPbBr3 nanobelts adsorbed on carbon substrates. The microstructure of buckled nanobelts is determined using transmission electron microscopy and atomic force microscopy. We measured significant decrease in emission from the buckled nanobelt using cathodoluminescence, marking the influence of such mechanical deformations on electronic properties. By employing plate buckling theory, we approximate adhesion forces between the buckled nanobelt and the substrate to be Fadhesion ∼ 0.12 μN, marking a limit to sustain such deformation. This work highlights detrimental effects of mechanical buckling on electronic properties in halide perovskite nanostructures and points toward the capillary action that should be minimized in fabrication of future devices and heterostructures based on nanoperovskites. L ow dimensional metal lead halide perovskites pose excellent optoelectronic properties such as high quantum yield, narrow and tunable emission, and affordable chemical synthesis that do not require costly inorganic shelling. 1−8 Contrary to typical semiconductors, the halide perovskites lattice is considered soft 9−22 with long chemical bond lengths. Affecting vibrational properties, 9,14,15 electron phonon interactions, 14,23 supporting exciton polaritons, 14 as well as facile ion diffusivity 10,12,13,17,18,20−22 and structural transformation. 10 Understanding structure deformation in low dimensional perovskites and its effects on the optoelectronic properties is of fundamental and technological importance. Studies relating structural transformations and pressure of bulk crystal and thin films have been considered in recent years. 24−33 Some structural deformation can modify the electronic band structure and thereby modulate the optoelectronic properties. 25−28,30−32,34−38 For example, Zhang et al. 31 simulated the change of the band structure of methyl-ammonium lead iodide perovskite as a result of external strains, and Wang et al. 30 reported the effect of pressure on the structure of MAPbBr 3 and its influence on the optic and electronic properties. In a review by Jaff et al., 39 the effects of compression on the structure of the perovskites and hence the optical and electronic properties of the material were presented and changes from absorption and photoluminescence (PL) emission to metallization were discussed. In the case of low dimensional anisotropic nanocrystals, the situation is more complex due to the strong asymmetry in the crystal's dimensions. 40,41 Recent developments in colloidal synthesis of lead halide nanocrystals include growth of nanocubes, 42 nanosheets, 43 nanowires, 44 nanoplates, 45 and nanobelts. 46 In this study, we focus on colloidally synthesized CsPbBr 3 with nanobelt shape, in which the thickness is of a few unit cells and the length is of hundreds of nanometers up to a few micrometers. A colloidal suspension of the nanobelts is deposited onto carbon covered substrates and let dry. Through transmission electron microscopy (TEM) characterization, structural deformation is observed in the form of electron contrast bands across the perovskite nanobelts. 
Analysis of experimental high-resolution TEM (HRTEM) micrographs accompanied by multislice computer simulations were used to confirm that the contrast bands are structural deformations in the form of thin layer buckling and forces leading to these phenomena are extracted. ■ CONTRAST CHARACTERIZATION Colloidal CsPbBr 3 perovskite nanobelts were synthesized and cleaned as detailed in the Supporting Information (SI). In short, CsBr and PbBr 2 were dissolved in acetone with oleic acid and oleyl amine as ligands for a certain amount of time, then centrifuged and redispersed in hexane. This synthesis was based on a nanowire synthesis done by Liu et al. 47 with changes in the salt ratio, centrifuge speed, and redispersion solvent. Optical absorption and photoluminescence spectra of the nanobelts were recorded at room temperature and are shown in Figures S1a,b, respectively. The nanobelts absorption spectrum includes the first and second exciton transitions at 519 and 367 nm, respectively, while the PL spectrum shows an emission peak at 522 nm in agreement with weakly quantum confined excitons. 48−50 Indeed, TEM and atomic force microscopy (AFM) characterizations of the nanobelts (Figure 1a−c, and Figure S4) depict lateral dimensions of a few microns in length, 10− 200 nm in width, and with a thickness of 2.5−10 nm on the scale of the Bohr exciton radius, 51 (Figure 1f). CsPbBr 3 nanobelts present orthorhombic crystal structure (Figure 1d) similar to nanowires 44 as determined by selected area diffraction (SAD) (shown in Figure S2), X-ray diffraction (XRD) ( Figure S3), and aberration corrected HRSTEM as shown in Figure 1g, which is schematically shown in Figure 1e. TEM micrographs of CsPbBr 3 nanostructures shown in Figure 1a−c depict a dark-bright strip feature across the nanobelt. Electron micrograph contrast bands in thin crystal lattices are associated with (1) variations of crystal thickness and (2) thin layer buckling (abrupt change in height due to crystal ripples). 52 Here, we attribute the contrast bands to perovskite nanobelts buckling and not variation in the nanobelts thickness. This deduction is supported by four different electron microscopy methods: (1) selected orientation dark-field (DF) imaging, (2) specimen tilting, (3) a dynamic measurement, and (4) Quantitative TEM simulations, all detailed in this study. In order to study the microstructure of the buckled nanobelts, we used a modified text book characterization technique, 52 which we name selected orientation dark-field imaging (SODFI). In this method, one selects two opposite reflections of a selected area diffraction pattern. Each reflection corresponds to a specific angle of transmitted electrons scattered from the crystal. By selecting opposite reflections, we ensure the angles are equal but opposite in sign. Next, an aperture is moved to capture a dark-field micrograph from each of these orientations. Figure 2a shows a micrograph of a buckled nanobelt. The sketch in Figure 2b demonstrates the principles of the SODFI characterization technique. In the case of a buckled nanobelt, when a reflection is chosen for a DF imaging only electrons scattered from a specific angle are collected by the detector. So, when imaged two opposing sides of the buckled nanobelt present themselves as high contrast regions. This is demonstrated in Figure 2c,d showing higher contrast on opposite sides of the buckled area. 
In order to reassure the bands are indeed buckled areas of the nanobelts and not local variations in the thickness of the crystals, we conducted an experiment where we tilt the sample in respect to the electron beam while monitoring the change in contrast of the bands. This is a text book procedure to differentiate between bend and thickness contrast; if a change in relative tilt will cause movement of the contrast, then the contrast is a pure amplitude contrast and not a result of massthickness contrast. 52 In Figure 2e−i, we detect clear movement of the contrast of the buckled area, as we change the tilt of the sample. In addition, we see a correlation between the direction of the tilt and the direction in which the contrast moves. Another indication of a bend contrast is if one detects movement of the contrast (without tilting the sample). 53−62 The movement of the contrast was examined during a minute long electron imaging exposure. Figure S6 depicts a series of micrographs of the same bend contrast taken approximately a minute apart. The distance between two contrast bends on the same nanobelt was measured to be 51 and 39 nm. While thickness variations typically display static electron micrograph contrast bands, 63 a crystal ripple (buckled structure) could be more dynamic. 53−62 This dynamic, typically referred to as "jittering effect", is a consequence of local charging accumulation caused by the electron beam. 55,56,58,61,62 A schematic illustration of proposed charging in buckled nanobelts is demonstrated in Figure S6b,c. When the electron beam (I 0 ) penetrates the sample, the specimen atoms interact with it, creating elastic (I e ) and inelastic (I i ) collisions. In this process, Auger and secondary electrons (I s ) might be emitted from the specimen causing an accumulation of positive charge on its surface. 64−68 In insulating and semiconducting materials, the conductive amorphous carbon film on the TEM grid prevents charging effects. 64,66,68 However, in the case of a crystal ripple, or a buckled area which does not have direct contact to the conductive substrate, accumulation of charges is possible. Charging effects will modify the formed image and Nano Letters pubs.acs.org/NanoLett Letter their discharge will cause movements, blurring, and focus modifications, which may be displayed in a jittering effect as seen in the buckled nanobelts. 64,65,68 Such jittering was noticed both in TEM and scanning electron microscopy (SEM) experiments ( Figure S5). In order to better understand how lattice orientation of buckled thin layer halide perovskite results in a contrasted band in the nanobelt TEM micrograph, we analyzed numerically modeled high-resolution TEM data. Figure 3a (inset) shows a high-magnification (low-magnification) typical microstructure of the buckled nanobelt depicting symmetric white-black-white contrast pattern. We note in passing that other typical "inverted color patterns" may also be observed (as seen in Figure 1c and Figure S5). Nano Letters pubs.acs.org/NanoLett Letter fringes (0.16 nm) but larger distance separation (0.58 nm). The overall change in fringe density is clearly seen in the FFT filtered images (Figure 3c (1−3)). We now use our two observations as follows: (1) Band contrast is associated with tilted perovskite lattice, and (2) HRTEM micrographs depict variations in lattice fringes density across the bands. 
To simulate electron transmission through the perturbed perovskites lattices, we use Quantitative STEM/TEM (QSTEM) simulation package for electron transmission calculations based on a multislice algorithm 69 (see SI for details). Figure 3d (1− 3) depicts intensity simulations of scattered electrons when transmitted through a CsPbBr 3 crystal. When the angle between the sample and the electron beam (90-α) is varied (α = 0−6°), a general blurring of the pattern occurs. This is evident by the widening of lattice fringes and the reducing of the overall contrast of the image; we will note that the simulation presented in Figure 3d does not consider changes in the focus as a result of the height change in the buckled area. Therefore, to model the extent of buckling in perovskite nanobelts simulations based on an atomistic model of bent perovskite nanobelts via a commercially available numerical simulation software (Samson, see SI for details) were carried out. It should be noted that the simulated buckled nanobelt obtained in Figure 4 is independent of the TEM micrographs received in our experiments. A series of crystals models of a buckled nanobelt with different buckling angles (α) (via Samson software) was used for electron scattering simulation (via QSTEM software). 69 Figure 4a shows simulation of an unbuckled nanobelt, which is clearer than the simulation of a buckled nanobelt due to (1) change in the angle between the electron beam and crystal surface, and (2) change in the height of the buckled area. We note that both of these will influence the focus and contrast. In Figure 4d, an onset in the lattice fringe periodicity is detected with the blurred lattice fringes at the middle of the ripple contrast. The effect of the ripple is pronounced from an angle of 7.5°which shows a resemblance to the contrast patterns seen in measured TEM and HRTEM micrographs (Figures 1−3). The atomistic modeling simulation of a buckled crystal for simulating its electron micrograph achieves two goals: (1) Confirms that the contrast phenomenon is indeed a crystal ripple in perovskite nanobelts. (2) Gives a lower limit for a buckling angle of 7.5°to allow observation of contrast features from buckled perovskite nanobelts under our experimental conditions. ■ BUCKLING-MECHANICAL MODEL Probable forces that may lead to buckling of nanobelts are generated by the drying of colloidal suspensions and resulting capillary forces. At the nanoscale, capillary forces play an important role 70−75 due to high surface-to-volume ratios and the minimization of surface energies upon creation of interfaces. A colloidal suspension of perovskite nanobelts is drop casted on a TEM carbon grid and let dry in open air. Capillary forces of the drying solvent strain nanobelts that are adsorbed on the grid. During the drying process, the capillary action leads to the generation of a compressive force that is applied on the nanobelt. Once a critical force is reached, the nanobelt buckles. Once the solvent completely dries, the buckled configuration of the nanobelts is maintained by adhesion forces between the nanobelt and the substrate ( Figure S7). The adhesion has a normal and tangential components, which adhere the nanobelt to the surface and maintain the buckling, respectively. By employing buckling theory, 76 we can estimate this adhesion force from plate buckling theory (F critical ≅ F adhesion ). 
where υ = 0.3 77 and E = 28 GPa 77 are the Poisson's ratio and the Young's modulus of the nanobelt, respectively, and t, w, and l are the thickness, width, and length of the nanobelt at the buckled segment, respectively. The width and the length are measured from TEM micrographs, while the thickness is estimated to be ∼9 nm (see SI for justification). We calculate F adhesion = 0.12 ± 0.06 μN where the error stems from the uncertainty in the thicknesses of the nanobelts and from the orthorhombic crystallographic structure which is not taken into account in our model. From the above analysis, we find that CsPbBr 3 nanobelts adhere to amorphous carbon surfaces with force at the scale of 0.12 μN, which is in the typical adhesion force scale. 78−84 Xie et al. discussed the influence of the van der Waals forces on the stable buckled state and showed that the balance of the energies leads to a specific height of buckled structures. 38 One has to consider also that organic ligands that cover the nanobelt's surfaces may contribute to these attracting forces. Any future device or application that will require processing of such thin colloidal perovskites cannot ignore these adhesion and capillary forces in order to avoid buckling phenomena. ■ BUCKLING ANALYSIS To characterize the morphology of buckled nanobelts, a correlation between TEM micrograph and an AFM scan was implemented using a finder grid for locating specific buckled Nano Letters pubs.acs.org/NanoLett Letter nanobelt in the TEM and then relocating it with in an AFM scan; such a measurement is presented in Figure 5 and Figure S8. Figure 5a,b shows TEM and AFM of the same buckled nanobelt. The location of the bend contrast is zoomed in Figure 5c,d. AFM cross section of this area demonstrates topographic variations of 1−3.5 nm (see additional scans in Figure S8). Similar ripple amplitude measurements in the AFM were done in (2T) 2 PbI 4 halide perovskite by Shi et al., 34 showing ripples with a height of 40−50 nm, and in WS 2 −WSe 2 heterostructures by Xie et al. 38 showing a ripple height of 1−2 nm with similar scale to our measurements. We note the differences between Shi et al. 34 and this work are probably due to the nature of the stress applied, which is the heterostructure interface in the work of Shi et al. versus capillary action in this work. To measure the change in physical properties of the buckled area in CsPbBr 3 nanobelt SEM cathodoluminescence (CL) was used. 85 CL enables one to excite specific areas on the nanobelt with nanometer resolution and compare spectroscopic data of buckled and unbuckled areas of the same nanobelt. Figure 5e shows such an experiment where the CL was measured at four locations along the nanobelt, as marked with the arrows. The red and blue arrows point to buckled areas, the green arrow is located between the two buckles, and the purple arrow is located further along the nanobelt, far away from the buckled areas. Figure 5f shows a CL micrograph of the nanobelt with clear intensity differences between the buckled areas (marked with the red and blue arrows) and the rest of the unbuckled nanobelt. The PL emission graphs in Figure 5g corresponds to the positions marked with the arrows which show a significant decrease in intensity between the purple marked location and the red, green, and blue marked locations. We carefully approximate 65−92% decrease in CL intensity which is correlated with the buckled areas. 
This observation which shows that mechanical buckling that is caused through capillary action dramatically influences the emissive properties of lead halide perovskite nanobelts. The exact mechanism leading modification of electronic properties of buckled perovskites will be investigated in future experiments and may include trap states due to induced defects in the buckled regions, or local strain of the perovskite crystal structure which modifies the excitonic properties. Our findings emphasize the importance of careful processing of colloidal perovskites with intent to minimize readily occurring buckling effects which reduce fluorescence quality. In conclusion, structural deformations in thin CsPbBr 3 nanobelts are reported. A band contrast pattern observed in electron microscopy micrographs was analyzed and determined to be thin layer buckling of the perovskite nanobelts. Dark-field micrographs of opposing diffraction reflections indicated different contrast band areas and represented tilted perovskite lattices. This statement is additionally supported by a tilting experiment showing the contrast is a pure amplitude contrast. Correlation of TEM and AFM micrographs of the same buckled area demonstrate topographic variations of 1− 3.5 nm. Lead halide perovskite buckled areas showed reduced cathodoluminescence compared to unbuckled areas on the same nanobelt. Since buckling in thin layer CsPbBr 3 nanobelts detrimentally influence their physical properties, measures should be taken when processing them. A standard plate buckling model was used to estimate capillary action and adhesion forces between the nanobelts and the substrate. We report a lower limit of adhesion to sustain the buckling which may serve future processes involving CsPbBr 3 nanoperovskite where buckling effects are to be minimized. The manuscript was written through contributions of all authors. Funding This work is supported by Israel's Alon fellowship program, Technion, Russel Berrie nanotechnology institute, and Technion Helen Diller quantum center. Notes The authors declare no competing financial interest. ■ ACKNOWLEDGMENTS We gratefully thank Prof. Eugen Rabkin for fruitful discussions, Dr. Olga Kleinerman for the SEM examination and analysis, and Dr. Ifat Kaplan-Ashiri from Weizmann Institute of Science for the SEM-CL measurements. Y.B. thanks the Nancy and Stephen Grand Technion Energy Program for generous support.
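Referring back to the plate-buckling estimate in the Buckling-Mechanical Model section: the exact expression used in the paper is not reproduced in this excerpt, so the sketch below assumes a standard Euler-type plate-strip buckling load, F_cr = π²·E·w·t³ / (12·(1 − ν²)·l²). The Young's modulus, Poisson's ratio and thickness are those quoted above; the width and length of the buckled segment are illustrative TEM-scale values, not measured data.

```python
# Rough sketch of the buckling-force estimate discussed above (assumed formula).
import math

E = 28e9      # Young's modulus of CsPbBr3 (Pa), as quoted in the text
nu = 0.3      # Poisson's ratio, as quoted in the text
t = 9e-9      # nanobelt thickness (m), ~9 nm per the text
w = 60e-9     # buckled-segment width (m) -- illustrative
l = 100e-9    # buckled-segment length (m) -- illustrative

flexural_rigidity = E * t**3 / (12 * (1 - nu**2))        # per unit width (N*m)
F_critical = math.pi**2 * flexural_rigidity * w / l**2   # ~ adhesion force (N)

print(f"F_critical ~ {F_critical * 1e6:.2f} uN")
# Order of 0.1 uN for these illustrative dimensions, comparable to the
# 0.12 +/- 0.06 uN estimate reported in the text.
```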
4,236.4
2021-06-28T00:00:00.000
[ "Materials Science", "Engineering" ]
A construction of a conformal Chebyshev chaotic map based authentication protocol for healthcare telemedicine services The outbreak of coronavirus has caused widespread global havoc, and the implementation of lockdown to contain the spread of the virus has caused increased levels of online healthcare services. Upgraded network technology gives birth to a new interface “telecare medicine information systems” in short TMIS. In this system, a user from a remote area and a server located at the hospital can establish a connection to share the necessary information between them. But, it is very clear that all the information is always being transmitted over a public channel. Chaotic map possesses a dynamic structure and it plays a very important role in the construction of a secure and efficient authentication protocols, but they are generally found vulnerable to identity-guess, password-guess, impersonation, and stolen smart-card. We have analyzed (Li et al. in Fut Gen Comput Syst 840:149–159, 2018; Madhusudhan and Nayak Chaitanya in A robust authentication scheme for telecare medical information systems, 2008; Zhang et al in Privacy protection for telecare medicine information systems using a chaotic map-based three-factor authenticated key agreement scheme, 2017; Dharminder and Gupta in Pratik security analysis and application of Chebyshev Chaotic map in the authentication protocols, 2019) and found that Bergamo’s attack (IEEE Trans Circ Syst 52(7):1382–1393, 2005) cannot be resisted by the protocol. Although few of the protocols ensures efficient computations but they cannot ensure an anonymous and secure communication. Therefore, we have proposed a secure and efficient chaotic map based authentication protocol that can be used in telecare medicine information system. This protocol supports verified session keys with only two messages of exchange. Moreover, we have analysed the performance of proposed protocol with relevant protocols and it is being implemented in “Automated Validation of Internet Security Protocols and Applications” respectively. a secure authentication protocol. Security and privacy of "Telemedicne Information System" are the impressive components that are of great interest to the field of health care. Because the Internet is truly an open network with many potential security gaps, close consideration and measures must be required to ensure safe medical facilities and the safety of patient data. Both health care and treatment are two very important factors in the human's life (see data in Fig. 1). Upgraded technology in the field of online health care services such as variety of medical sensors, smart phones, and smart robotics helps the patients to facilitate the health care services in the remote areas. In these days, most of the doctors are employing robots and smart digital sensor in surgeries is an application of computer science in health care services [40,41]. There are other applications such as artificial intelligence and machine learning are used to detect the medical conditions of a patient. Nowadays, a patient possessing smart Patients can be benefited with online health care services via their smart phones, i-pads, and other smart sensors, but their security and privacy are two very important components during communication on public channel. In 2012, Wu et al. [10] designed a secure and anonymous authentication protocol to benefit the patients at their home. In the same year, Wei et al. 
[9] analyses the security of the protocol [10] and it is found vulnerable to two-factor authentication. In order to eradicate the two-factor authentication defect, a fresh design is needed for two-factor authentication. In the same year, Zhu [12] discussed the security attributes such as password guessing in the protocol [9] and invented a password-guess resistant protocol, although he didn't seem to think about communicating anonymously. In 2012, Chen et al. [4] designed an efficient and secure lightweight authentication protocol that preserves an anonymous communication in health care telemedicine services. In 2013, Lin et al. [7] observed that identity can traced in [4] using both dictionary and password guess along with stolen smart card information. He tried to remove most of the existing attacks and he invented an anonymous authentication protocol. In the same year, Cao and Zhai [3] discussed both security and privacy of [4] and they found that the protocol is vulnerable against both identity guess and password guess along with the information stored in the smart card. Three protocols discussed [3,7,12] are found insecure to input ver-ification procedure due to which they cannot differentiate incorrect inputs with in short time interval. The anonymous communication is another important factor that is missing in [9,10,12,32] respectively. In 2013, Guo et al. [14] used the complex dynamic structure of chaotic maps to design a new secure authentication protocol, but Hao et al. [15] discussed the security of the protocol and he found that two important attributes traceability and anonymity are missing, and he tried to fill the gap with a new design [14]. In 2014, Jiang et al. [16] reviewed both security and privacy attributes in [15] and he found the protocol is vulnerable to stolen smart card attack. In the year 2016, Li et al. [21] designed a secure and efficient chaotic map-based authentication protocol to secure the communication in health care services, but in the year 2018, Madhusudhan et al. [20] discussed the attacks in [21] such as password guess, and impersonations, and he tried to remove these attacks as discussed in [20]. In the year 2018, Jiang et al. [28] introduced a secure and efficient protocol to improve the telemedicine services in health care sector, but it is not much efficient and it requires to exchange three messages to establish secure and fresh session key. In the same year 2018, Wu et al. [29] introduced a secure and efficient authentication protocol based on RFID and Radhakrishnan et al. [19] also proposed a new design to secure the health care telemedicine services, but their protocols found susceptible to password guess, identity guess and also for stolen card information too. In the same year 2018, Zhang et al. [25] introduced a lightweight and secure authentication protocol for the mobile devices used in heath care telemedicine services, but it is also susceptible to identity guess, password guess and replay attacks. In 2018, Madhusudhan et al. [20] designed an efficient, and secure, and robust protocol for telecare services, but Dharminder et al. [35] discussed the security of the protocol [20] and they found it susceptible to identity guess, password guess, impersonations, and stolen smart card. In the same year 2020, Dharminder et al. [34] introduced a new design for authentication scheme based on RSA, but it uses the modulo operations that decreases the efficiency of the protocol due to costly modulo exponentiation. 
In the Table 2, we have observed various security attributes achieved by the existing relevant chaotic map based authentication protocols used to secure TMIS system, where the symbol √ denotes "yes", and × denotes "not" respectively. In the Tables 1, 2 one can see that existing protocols in the TMIS environment suffers various vulnerabilities such as password-guess, identity-guess, impersonations, replaying of older messages, and stolen smart card attacks. In the proposed design, we have discussed two important components security and privacy in the form of security attributes such as identity guess, impersonations, password guess, anonymity, replaying of messages, and stolen smart card information in the protocols [1,20,25,35] respectively. In the design [35], we have analyzed that a user U i selects I D i , PW i and calculates A i = h(I D i ||PW i ), then he sends < I D i , A i > to the server. Next, the server chooses a random n i ∈ Z * p , and does the computation Similarly, in the design [1], we have analyzed a vulnerability in the session key established during the communication. In the design [1], an adversary A obtains the information from earlier transmitted information M 1 and M 2 . Moreover, A computes u with knowledge of x, Where k is a positive integer, then A computes T s T u (x) = T s T u (x) = Sk u = Sk s that plays the role of session key during communication. Similarly, in the design [20], we have analyzed a vulnerability in the session key established during the communication. In the design [20], an adversary A obtains the information from earlier transmitted messages that plays the role of session key during communication. To handle the issues in [1,20,25,35], we have an idea to compute x = h(I D i ||s), where I D i is the identity of i th user and "s" is the long term secret key of the server, in this way a user possesses x that results from different I D i concatenated with master key of the server to produce different secret keys x for each of the user. Now, the x will plays the role in place of master secret that is different for each of the user. Therefore, we have designed a new authentication protocol possessing both security and efficiency using the dynamic Chaos theory. The security of the presented scheme have been analyzed in random Oracle with this we also use the tool for authentication called "Automated Validation of Internet Security Protocols and Applications" respectively. Moreover, the presented protocol resists session key violation problem, that is proposed by Bergamo et al. [33] and establishes a session key with just two messages of exchange. Preliminaries In this section, we will discuss some of the basic notations, terminologies and basic properties of conformal Chaos maps used in the proposed protocol. A conformal map is an anglepreserving transformation that preserves local angles. A brief review of some of the useful notations are also given in Table 3. Chebyshev chaotic mapping As seen in Fig. 3, chaotic maps have a complex dynamic structure and it is well known for its pseudo randomness. In this subsection, we have discussed some of basic definitions and dynamic properties [17]. where the values y and x are known to the adversary. Then this problem is know as Discrete Logarithm Problem (DLP). Definition 3 Computational Diffie-Hellman Problem (CDHP) can be stated to find T uv (x), where the values x, T u (x) and T v (x) are known to the adversary. 
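The session-key computation quoted above (T s T u (x) = T u T s (x) = T su (x)) rests on the semigroup property of the extended Chebyshev polynomials over Z*p. The following is a minimal sketch of that property with toy parameters; the prime, the exponents and the public value x (which in the scheme would be derived as h(ID i ||s)) are placeholders, not protocol parameters.

```python
# Minimal sketch (toy parameters) of the extended Chebyshev map over Z_p and the
# semigroup property T_r(T_s(x)) = T_s(T_r(x)) = T_{rs}(x) (mod p) used above.
def chebyshev(n: int, x: int, p: int) -> int:
    """T_n(x) mod p via the recurrence T_k = 2*x*T_{k-1} - T_{k-2}, T_0 = 1, T_1 = x."""
    if n == 0:
        return 1 % p
    t_prev, t_curr = 1, x % p
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, (2 * x * t_curr - t_prev) % p
    return t_curr

p = 1000003          # toy prime; a real deployment would use a large secure prime
x = 123456           # public value (stand-in for h(ID_i || s) in the scheme)
u, s = 947, 653      # toy secret values of user and server

# Each side applies its own secret to the other side's public Chebyshev value
key_user   = chebyshev(u, chebyshev(s, x, p), p)
key_server = chebyshev(s, chebyshev(u, x, p), p)
assert key_user == key_server == chebyshev(u * s, x, p)
print("shared key:", key_user)
```

Deriving a per-user x = h(ID i ||s), as proposed above, means that recovering one user's secret does not expose the shared value used by other users.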
Fuzzy extractor A fuzzy extractor(E f ) [2] is an extraction mechanism that is used to extract a random uniform string from biometric imprints (bm i ). It consists of two algorithms I (.) and R(.). I (.) is a probabilistic algorithm that produced two strings b 1 , b 2 as output after taking bm i an input parameter, where b 1 is private key and b 2 is a helper string. R(.) is an algorithm that is used to regenerate the private key b 1 after taking noisy biometric parameter bm i and helper string b 2 as input, where Proposed authentication protocol under chaotic mapping We have proposed a secure and efficient chaotic map based authentication that can be divided into four phases, (1) registration-phase, (2) login-phase, (3) authentication-phase and (4) password-update-phase. Registration-phase U i registers to the concern server Sr j via a secure channel as written in the following lines. -U i selects I D i , Ps i , and imprints his own biomet- Login phase If U i wants to login to Sr j then: Password update phase To update password U i , executes the following steps: Formal security analysis At first, we have to define a framework P to verify the security of the presented protocol and then, under random oracle, we will implement the presented protocol. Security-model Suppose the i th instance of a user U i is denoted by M i ∈ (U i , Sr j ), and A be an attacker that governs the connection between U i and Sr j . An illustration of A, is therefore stated as follows: Extract: With the help of extract query, A could get the private key of a user U i . Send(m, M i ): With the help of send query, A could be able to send arbitrary message m, to random oracle then in response of m, random oracle have to reply with a computational output. Hash(m): In this query A sends random massage m to H (.), then oracle select s ∈ Z * p randomly and reply with s, after storing it into hash list H i j with m. Initially H i j assumed to be empty. Suppose a bit b is selected from corrupt query phase and Succ(A) correctly estimates the value of b, then the advantage Adv A,P (k), against the protocol retained by adversary is specified as: Mutual authentication is established by security analysis in Random Oracle for the suggested scheme. Chaotic based assumption: Suppose x ∈ Z * p is a secret key of Sr j , p is a prime number with length n, then from generation algorithm Gen(1 n ) = p ∃ a negligible function neg(n) such that: Send-queries: In send query there are three phases discussed as below, first U i request for login to Sr j , then U i sends a message < (Ui 4 , N I D i , T 1 ) > to Sr j , and at last Sr j responds (Si 4 , T 2 ). We will describe this phase by a game between U i and Sr j respectively. 1. A start it with sending a query, in response of that Mo is supposed to reply a login message to A. where r is chosen arbitrarily, then check the equation So, we can observe that the algorithm M o has the advantage, so if A can breach the scheme, then M o can breach the suggested protocol, using subroutine A. Theorem 3 The protocol is secured against chosen massage attack in RO model, if Chaotic discrete logarithm problem (CDLP) holds. 
Proof By contradiction, let us prove this, let us say that there is A, who breaches proposed scheme against chosen massage attack, so we can model model an algorithm mo that violates the discrete logarithm presumption based on the Chaotic map (CDLP), which implies that A breaches the proposed system, then mo breaches the proposed system as well, which means A breaches the proposed scheme with non-negligible advantage. Since CDLP says challenger can obtain s, q ← Z n * by running Gen(.) algorithm and returns (1 n , T y (s) ) to Mo. Now Mo has to return s without having y. Game 3: To guess y correctly, A has three options: Game 1: Suppose that to oracle with a random z, then it calculates C 1 = T z (r ), else RO(z) = y, and reports y = y. However, A have non negligible chance of conducting such a query, so Mo will win with a non-negligible gain. However, it has been concluded that Mo violates the assumption with a marginal chance that implies A has a neg(n) gain. Anonymous If for any protocol, it is impossible for any adversary, to find user's real identity then we says that protocol follows the anonymous property. In our protocol, user's identity is not used over public channel, instead we use N I D i = (Ui 3 ||Ui 5 ||I D i ) ⊕ T y (Si 1 ). Adversary can intercept N I D i and Ui 4 but it is not possible for adversary to extract I D i from N I D i because for that A needs to compute T s (Ui 4 ) that requires long term secret key s of server. Password guessing attack Password guessing attack is not possible on proposed scheme because A needs Si 3 , Ui 2 , I D i , H i simultaneously to implement it successfully. Adversary might get Si 3 , Ui 2 by side channel attack or by power analysis attack but then he also need I D i and H i , which is not possible. Privileged insider attack Since in registration phase user U i send I D i , Ui 1 to the server Sr j , where Ui 1 = h(I D i ||Ps i ||H i ). So it is not possible for any insider to know the password and biometric of any user because these are protected by hash function h(.). It makes proposed scheme secure against privileged insider attack. Impersonation attack If A wants to act like a legitimate user, he needs to send N I D i correctly to the server, where N I Again if A wants to impersonate server then he need to create Si 4 correctly and for this A needs server's long term term secret key s. So above discussion suggest that proposed scheme is secure against impersonation attack. Reply attack Since we use different random number y for every session in proposed scheme where session key depends upon y, with this we also use time stamp, to avoid this type of attacks. For every session, the timestamp is uniquely chosen. The timestamp uniqueness property limits the duplication of log-in messages. This indicates that the proposed system is responsive to replay attacks. Perfect forward secrecy Even if long term secret key s is compromised, suggested scheme is secure. Because to create session key A needs to compute x and for this A need I D i which is not possible because we do not send I D i through public channel. Man in the middle attack If any adversary A wants to implement man in the middle attack then firstly A, intercept the massage Ui 4 , N I D i , T 1 , where Ui 4 = T y (I D i ) and N I D i = (Ui 3 ||Ui 5 ||I D i ) ⊕ T y (Si 1 ) = (Ui 3 ||Ui 5 ||I D i ⊕ T y T s (I D i ). 
Since I D i of U i is hidden in N I D i and it is not practically possible for any adversary to extract I D i from public channel information therefore adversary fails to forge Ui 4 . So proposed scheme is secure against man in the middle attack. Stolen card attack If adversary get access to the smart card of a user and extract P i , Si 3 and Ui 2 from smart card. Even then he could not able to get any meaningful information, that helps A to breach the security of proposed protocol because all of these are secured by hash function. A needs I D i , Ps i and H i simultaneously get useful information which is not possible. Simulation and output using "Automated Validation of Internet Security Protocols and Applications" In this subsection, we simulate the scheme using "Automated Validation of Internet Security Protocols and Applications" in short AVISPA tool to analyse formal security (man in the middle attack, replay attack) [36]. We have provided essential illustration on the basic output in OFMC and ATSE modes in Fig. [6]. Performance analysis In this section, we have analyzed the performance of the proposed protocol and the performance of the proposed protocol has been compared with the related chaotic map based authentication scheme in the Table 4, where the cost of various operations are h t ≈ 0.0005s, s t ≈ 0.0087s, m t ≈ 0.06307s, and c t ≈ 0.02102s denote the time for hashing, message encryption under symmetric key, one ordinary multiplication in Z * p and chaotic based operation respectively. We have analysed the performance of chaotic map based authentication protocols [1,15,16,20,22,24,25] with the proposed protocol. As we know our mobile phones has limited storage and random access memory, and internet connectivity is another problem, that all the telecare medicine services runs on limited bandwidth network that is why we need a secure and efficient authentication protocol. Both computation and communication efficiency are very important and these two costs of computations have been compared with existing protocols in the Table 4. The various operation cost estimated via executing an experiment on intel Pentiums−4 (1024 MB ram) processor as in [6,35] with this computation cost described in Fig. 7. In addition, Liu et al. [22] runs with computation cost 4h t + 2c t at user side, 5h t + 2c t at server side, Jiang et al. [16], at user side runs with computation cost 2h t + s t + c t at server side use 2h t +2s t +3c t , Hao et al. [15] at user side runs with computation cost 2c t + 3h t + 2s t , at sever computation cost is 2c t + 3s t + 2h t , Lee et al. [24], at the user side runs with computation cost 2c t + 7h t , at the server side it takes 2c t +8h t , Zhang et al. [25] runs with computation cost 6h t +2c t on the user side, and at server cost is 4h t +1c t +2s t , Madhusudhan et al. [20] at user side runs with computation cost 7h t + 2c t , at server side 3h t + 2c t , Li et al. [1] runs with computation cost 7h t + 2c t at the user side, and at server side cost is 7h t + 2c t , whereas the suggested protocol runs with computation cost 4c t + 4h t at the user side, 2c t + 4h t at the server side respectively. In this article, we have considered the cost of communication in the form of hashing, chaotic operation, and time-stamp as 160-bits, and symmetric encryption outputs standard 256 bits, whereas total cost of communication is given in the Conclusion This article provides a review on the security of recently proposed chaotic map based authentication protocol. 
Conclusion This article provides a review of the security of a recently proposed chaotic-map-based authentication protocol. The suggested design is free from most of the existing vulnerabilities, such as password guessing, identity guessing, impersonation, replaying of messages, and stolen-smart-card attacks, and it also illustrates how poor verification results in vulnerabilities. Furthermore, we have observed that the proposed design fulfils the requirement of session-key verification in just two message exchanges. In future, the protocol could be implemented in vehicular communications, digital rights management systems, etc. Conflict of interests All the authors have no conflict of interests. Research involving human participants/animals This research does not involve any studies with human participants and/or animals performed by any of the authors. Informed consent All the authors have agreed to this submission. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
5,434.8
2021-06-19T00:00:00.000
[ "Computer Science" ]
Enhanced late-outgrowth circulating endothelial progenitor cell levels in rheumatoid arthritis and correlation with disease activity Introduction Angiogenesis and vasculogenesis are critical in rheumatoid arthritis (RA) as they could be a key issue for chronic synovitis. Contradictory results have been published regarding circulating endothelial progenitor cells (EPCs) in RA. We herein investigated late outgrowth EPC sub-population using recent recommendations in patients with RA and healthy controls. Methods EPCs, defined as Lin-/7AAD-/CD34+/CD133+/VEGFR-2+ cells, were quantified by flow cytometry in peripheral blood mononuclear cells (PBMCs) from 59 RA patients (mean age: 54 ± 15 years, disease duration: 16 ± 11 years) and 36 controls (mean age: 53 ± 19 years) free of cardiovascular events and of cardiovascular risk factors. Concomitantly, late outgrowth endothelial cell colonies derived from culture of PBMCs were analyzed by colony-forming units (CFUs). Results RA patients displayed higher circulating EPC counts than controls (median 112 [27 to 588] vs. 60 [5 to 275]) per million Lin- mononuclear cells; P = 0.0007). The number of circulating EPCs positively correlated with disease activity reflected by DAS-28 score (r = 0.43; P = 0.0028) and lower counts were found in RA patients fulfilling remission criteria (P = 0.0069). Furthermore, late outgrowth CFU number was increased in RA patients compared to controls. In RA, there was no association between the number of EPCs and serum markers of inflammation or endothelial injury or synovitis. Conclusions Our data, based on a well characterized definition of late outgrowth EPCs, demonstrate enhanced levels in RA and relationship with disease activity. This supports the contribution of vasculogenesis in the inflammatory articular process that occurs in RA by mobilization of EPCs. Introduction Rheumatoid arthritis (RA) is a chronic and destructive inflammatory disease affecting the joints. RA is now well known to be associated with striking neovascularization developed in inflammatory joints [1]. Indeed, angiogenesis, leading to an increased number of synovial vessels through local endothelial cells, is a cornerstone of synovial hyperplasia occurring in RA. Disturbances in endothelial cell turnover and apoptosis as well as in angiogenic factors such as vascular endothelial growth factor (VEGF) have been reported in RA synovium [2,3]. However, despite the abundant synovial vasculature, there are areas of synovial hypoxia contributing to synovial and cartilage damage [4,5]. Hypoxia is highly suggested to activate the angiogenic cascade, thereby contributing to the perpetuation of RA synovitis [6]. In addition to angiogenesis issued from resident cells, cells derived from bone marrow and named circulating endothelial progenitor cells (EPCs) are able to promote new blood vessel formation (vasculogenesis) and may therefore contribute to RA synovitis [7]. EPCs were originally identified by (a) the expression of markers shared with hematopoietic stem cells such as CD34 and CD133, (b) specific endothelial cell markers such as KDR (vascular endothelial growth factor receptor-2 or kinase-insert domain receptor), and (c) their capacity to differentiate into functional endothelial cells [8][9][10]. However, there is no consensus on the precise definition of EPCs [11]. 
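In operational terms, the marker combination used here (Lin-/7AAD-/CD34+/CD133+/VEGFR-2+) amounts to a boolean gate over flow-cytometry events. The following sketch is purely illustrative: the column names, the positivity threshold and the synthetic event table are assumptions for demonstration and are not part of the study's analysis pipeline.

```python
# Illustrative EPC gating as a boolean filter over flow-cytometry events.
# Column names and the positivity threshold are assumptions for this sketch;
# real gating would use instrument-specific compensation and gates.
import pandas as pd

events = pd.DataFrame({          # synthetic events, arbitrary intensities
    "Lin":    [5, 300, 2, 8],
    "7AAD":   [3, 10, 900, 4],
    "CD34":   [850, 900, 700, 20],
    "CD133":  [600, 20, 650, 700],
    "VEGFR2": [400, 500, 450, 30],
})

POS = 100                         # assumed positivity cut-off (arbitrary units)
epc_gate = (
    (events["Lin"] < POS) & (events["7AAD"] < POS) &      # Lin- / 7AAD- (viable)
    (events["CD34"] >= POS) & (events["CD133"] >= POS) &  # progenitor markers
    (events["VEGFR2"] >= POS)                              # endothelial marker
)

lin_neg = events["Lin"] < POS
# Counts reported per million Lin- mononuclear cells, as in the study
epc_per_million_lin_neg = 1e6 * epc_gate.sum() / lin_neg.sum()
print(int(epc_gate.sum()), "EPC events;", epc_per_million_lin_neg, "per 10^6 Lin- cells")
```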
Evidence showed that there is more than one endothelial progeny, monocytic versus hemangioblastic, within the circulating blood, and two distinct cell types of EPCs are currently recognized according to their growth characteristics and morphological appearance: early-outgrowth EPCs and late-outgrowth EPCs [7,12]. In the currently available human studies, variations in the level of circulating EPCs were reported in different diseases affecting the vascular system and were suggested to be a biomarker for vascular function and tumor progression [13,14]. In the field of RA, contradictory results have been reported. Indeed, some studies suggested a lower circulating EPC number in RA patients compared with controls [9,10], but conversely, some others reported higher values [15], and finally some other reports did not find any difference [8,16]. Several studies have shown an increase of EPCs within the RA synovial tissue [17,18]. Altogether, these observations underline the difficulty of accurately quantifying EPC populations. The major issue is the identification of the different types of circulating endothelial cells (CECs) issued respectively from the vessel wall or from bone marrow progenitors. The use of accurate methods allowing the detection of rare events by flow cytometry is thus critical. Within this context, our group contributed to recommendations aiming at the improvement of EPC detection and characterization [19]. In line with these latter recommendations and our background in systemic sclerosis [20], we focused on late-outgrowth EPCs, which represent the progenitors with the most genuine endothelial properties. In parallel to EPCs, CECs detached from vessel walls may also be a relevant biomarker of vascular disease. We hypothesized that coupled raised levels of these two populations may reflect the vascular status of the disease and thus represent innovative biomarkers. Therefore, our aims were (a) to enumerate EPCs and late-outgrowth endothelial colony formation in RA patients and controls, (b) to assess correlations between EPC counts, CEC counts, and RA activity, and (c) to correlate EPC and CEC counts with levels of serum markers of synovitis or endothelial injury. Patients The study involved 59 RA patients (54 females; mean age of 54 ± 15 years) fulfilling the American College of Rheumatology criteria for RA [21]. RA patients were consecutively enrolled during a 6-month period regardless of disease activity and underwent a routine clinical examination that included the calculation of the 28-joint disease activity score (DAS-28). The patients' characteristics are summarized in Table 1. Ongoing biologic therapies included tumor necrosis factor (TNF) blockers (etanercept, adalimumab, or infliximab) in 11 patients and anti-CD20 (rituximab) in 12 patients. Thirty-six healthy volunteers (26 females; mean age of 53 ± 19 years) coming from our first study served as controls [20]. Exclusion criteria for all subjects were cardiovascular events and conventional cardiovascular risk factors (diabetes, hypertension, past medical history of coronary artery disease, and smoking), except for three RA patients with controlled systemic hypertension. None of the patients had been treated previously with statins, a drug
known to be associated with increased EPC levels [22,23]. All patients and volunteers gave informed consent for all procedures, which were carried out with local ethics committee approval Comité de Protection des Personnes, Ile de France III (CPPP IDF III). Endothelial progenitor cell quantification by lateoutgrowth colony-forming unit assay In 53 RA patients and 35 controls with FACS quantification, we used a method of culture suitable for isolating lateoutgrowth EPC-derived colonies [20]. The blood mononuclear cell fraction was collected by Ficoll (Pancoll, Dutcher, France) density gradient centrifugation and was resuspended in endothelial growth medium (EGM-2) (Lonza, Verviers, Belgium). Cells were then seeded on collagenprecoated 12-well plates (BD Biosciences) at 2 × 10 7 cells per well and stored at 37°C and 5% CO 2 . After 24 hours of culture, adherent cells were washed once with phosphatebuffered saline 1x and cultured in EGM-2 with daily changes until the quantification. Colonies of endothelial cells appeared between 9 and 26 days of culture and were identified as well-circumscribed monolayers of cells with a cobblestone appearance. EPC colonies were counted visually under an inverted microscope (Olympus, Paris, France). Data analysis All data are presented as median (range) unless otherwise stated. Comparisons were performed by non-parametric Mann-Whitney, Kruskal-Wallis, or Spearman rank correlation (r) tests, when appropriate. The chi-square test was used to compare categorical variables. P values are twotailed, and P values of not more than 0.05 were considered statistically significant. Number of late-outgrowth endothelial progenitor cell colony-forming units in rheumatoid arthritis Endothelial colony formation has previously been used as an alternative method to detect endothelial progenitors in PBMCs [20]. CFU assays were performed in association with FACS quantification in 53 RA patients and 35 controls. EPC-CFUs appeared at the ninth day of PBMC culture at the earliest and were confirmed by a typical morphology of a well-delineated colony of cells with a cobblestone appearance (Figure 4a). (Figure 4b). In the RA population, CFU numbers correlated with none of RA characteristics or treatments. ,451] pg/mL). However, values of EPCs in RA patients were unrelated to any of the above serum markers. Also, CEC levels were not linked with serum markers of synovitis but were significantly higher in RA patients with a high sVCAM level (>1,000 ng/ mL; P = 0.0035). Discussion Our results obtained by using a well-characterized definition of late-outgrowth EPCs in a relatively large number of patients show enhanced levels of this cell population and relationships with RA disease activity. Available data have reported conflicting results about the EPC counts in this inflammatory condition. Several methodological issues could explain such discrepancies. We herein followed recent recommendations and used a previously validated method for late-outgrowth EPC enumeration [20]. The quantification of EPCs by flow cytometry first requires enrichment techniques to select a correct number of this scarce population and a specific marker combination to select the subpopulation of hemangioblastic EPCs. In culture, these 'true' angioblast-like EPCs are represented by cells that enable late outgrowth with higher proliferative potential, while endothelial cell colonies that appear early might more preferentially originate from monocytes or CECs [11,24]. 
In the study herein, we excluded the monocytic EPC subpopulation from the quantification and thus selected hemangioblastic EPCs by the lineage-positive cell depletion including CD14 + cells. We also extended the circulating EPC definition with an additional marker of viability to select non-apoptotic cells. One may suggest that the several steps required by our technique may induce procedural loss of progenitors which are prone to undergo apoptosis. However, the controlled design of our study limits this potential bias but this will need additional work. In parallel, EPC counts were also determined by the selection in culture of the late-outgrowth EPCs according to the delays before their appearance. None of the previous studies that reported EPC levels in RA patients was based on these methods. The previous data quantified, conversely to our study, circulating EPCs in whole blood with only three surface markers (CD34/ CD133/VEGFR-2) [8][9][10]16]. These authors also assessed EPC-CFU numbers, focusing on the 'early outgrowth' subpopulation and finding or not finding results consistent with those of flow cytometry quantification. These methodological differences may account for the discrepancy with regard to our results. Indeed, differences between the various EPC studies may not relate to the characteristics of the RA population enrolled. Our population of RA patients, issued from consecutive inclusions, did not display differences with other studies based on criteria known to modulate EPC levels --such as age (mean of 53 to 59 years) and frequency of use of methotrexate or low doses of corticosteroids --or on the choice to exclude patients with previous cardiovascular events [25][26][27]. In addition, it is noteworthy that EPC counts did not differ according to the use of biologics, although the cross-sectional design limits the analysis of the influence of such therapies in our RA patients. The only specificity of our RA patients may be the relatively long disease duration in comparison with other works. Neverthe-less, disease duration was never reported to be associated with EPC levels and thus may not account for our findings. We excluded RA patients with cardiovascular risk factors in order to rule out the bias of the specific effects of atheroma on EPC counts and thus to focus on relationships between disease activity and EPC counts as this has been done in many previous studies [9,10]. One may suggest that this may have introduced a selection bias and the use of this exclusion criterion may have obscured a negative influence of cardiovascular risk factors on EPC counts. We herein provide the demonstration of the identification of the late-outgrowth subset by the association between circulating cell counts and culture isolations. Indeed, two different subpopulations of EPCs, namely early and late EPCs, can be derived from peripheral blood depending on the different culture methods and times [24,28]. Although both EPCs express endothelial markers, they have different morphologies, patterns of growth, and angiogenic properties and thus might have different roles in neovasculogenesis [24,[29][30][31]. Late-outgrowth EPCs that represent the progenitors with the more genuine endothelial properties such as tube-forming activity in vitro and in vivo need to be better characterized in the context of inflammatory conditions. Previously, the late EPC subpopulation has been studied in systemic sclerosis by our group and their endothelial properties confirmed by angiogenic tests [32]. 
As reported in systemic sclerosis, we observed that the RA patients, having high Lin -7AAD -CD34 + CD133 + VEGFR-2 + , displayed a higher number of EPC-CFUs. However, the size of the sample reduced by the non-systematic achievement of EPC-CFUs could explain the limited increase of EPC-CFU numbers in RA patients as well as the lack of association with disease activity and will need larger studies. We thus assume that our EPC definition allows a wellcharacterized quantification of late-outgrowth EPCs. In RA, our data support the contribution of late-outgrowth EPCs to synovitis according to our finding of a strong link with disease activity. The direct correlation of EPC counts with DAS-28 levels is strengthened by the fact that RA patients in remission displayed EPC levels comparable to those of controls. Together with evidence of CD133/ VEGFR-2 + cells in RA synovial tissue [17], our results emphasize a key role for vasculogenesis and EPC mobilization in RA. Preliminary data on CEC outcome in vascular diseases have suggested a relationship between the detachment of mature CECs and vascular hurting [33]. We concomitantly determined the value of CECs as compared with EPCs. We found a correlation in RA between these two circulating cell levels but CEC levels in RA did not differ for the ones in controls. Using a CD146 immunoselection in whole blood, one study found enhanced CEC levels but failed to identify a specific association with blood inflammatory markers [34]. The best combination of surface markers including exclusion of dead cells by viability marker seems to be required for CEC quantification. One of the pitfalls of EPC quantification in RA is the potential involvement of atherosclerosis in EPC changes [35,36]. However, we excluded from our study individuals with classical cardiovascular risk factors and also those with previous clinical events. Furthermore, we did not find an increase of CEC levels in our RA population which reflects endothelial injury. Indeed, we presume that EPC increase relates to RA disease activity and synovial inflammation, although measurement of infra-clinical atheroma would be necessary to definitely rule out endothelial dysfunction. We then attempted to correlate EPC counts with blood markers reflecting inflammation (ESR), endothelium injury (sVCAM), or synovial involvement (COMP and YKL-40). While we failed in the identification of any link, sVCAM, COMP, and YKL-40 were found to be increased in RA patients, thus confirming the activity of the disease in our sample of RA patients. We assume that, despite the lack of a link with serum markers, the identification of a relationship between EPCs and DAS-28 is highly relevant. EPC count is probably influenced by several factors, including inflammation, vascular injury, and potentially the immune response with bone marrow changes. The multifactorial regulation probably precludes the identification of a correlation with one single serum marker. Therefore, EPCs could represent a new 'integrative' biomarker. Its predictive value on disease outcome, including both articular and cardiovascular issues, will have to be evaluated in upcoming prospective studies. Conclusions We demonstrate enhanced levels of late-outgrowth EPCs in RA and a relationship with disease activity. This supports the implication of vasculogenesis in the perpetuation of the synovitis that occurs in RA. Late-outgrowth EPC isolation also offers a unique opportunity to determine an RA endothelial signature.
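As a closing illustration of the non-parametric analyses described in the Data analysis subsection (group comparison by Mann-Whitney test and Spearman rank correlation between EPC counts and DAS-28), a minimal sketch is given below. All values are synthetic; none of the study's data are reproduced here.

```python
# Minimal sketch of the non-parametric tests named in the Data analysis
# subsection. The EPC counts and DAS-28 scores below are synthetic.
from scipy.stats import mannwhitneyu, spearmanr

epc_ra       = [112, 250, 90, 480, 140, 75, 320]   # EPCs per 10^6 Lin- cells, RA (synthetic)
epc_controls = [60, 35, 80, 55, 120, 20, 95]       # controls (synthetic)
das28_ra     = [4.1, 5.6, 3.2, 6.0, 4.4, 2.9, 5.1] # DAS-28 of the same RA patients (synthetic)

u_stat, p_group = mannwhitneyu(epc_ra, epc_controls, alternative="two-sided")
rho, p_corr = spearmanr(epc_ra, das28_ra)

print(f"RA vs controls: U = {u_stat:.1f}, P = {p_group:.4f}")
print(f"EPC vs DAS-28:  r = {rho:.2f}, P = {p_corr:.4f}")
```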
3,606.4
2010-02-16T00:00:00.000
[ "Biology", "Medicine" ]
Vortex Structure and Kinematics of Encased Axial Turbomachines This paper models the kinematics of the vortex system of an encased axial turbomachine at part load and overload applying analytical methods. Thus far, the influence of the casing and the tip clearance on the kinematics have been solved separately. The vortex system is composed of a hub, bound and tip vortices. For the nominal operating point φ ≈ φ_opt and negligible induction, the tip vortices transform into a screw. For part load operation φ → 0 the tip vortices wind up to a vortex ring, i.e., the pitch of the screw vanishes. The vortex ring itself is generated by bound vortices rotating at the angular frequency Ω. The hub vortex induces a velocity on the vortex ring causing a rotation at the sub-synchronous frequency Ω_ind = 0.5 Ω. Besides, the vortex ring itself induces an axial velocity. Superimposed with the axial main flow, this results in a stagnation point at the tube wall. This stagnation point may wrongly be interpreted as dynamically induced wall stall. For overload operation φ → ∞ the vortex system of the turbomachine forms a horseshoe, i.e., the pitch of the screw becomes infinite. Both hub and tip vortices are semi-infinite, straight vortex filaments. The tip vortices rotate against the rotating direction of the turbomachine due to the induction of the hub vortex, yielding the induced frequency Ω_ind = −0.5 Ω/s with the tip clearance s. INTRODUCTION By now, the common understanding is that rotating stall and the resulting noise and vibration within a turbomachine are dynamic effects. This means that frictional forces lead to boundary layer separation and eventually stall in rotating machines [ , ]. This understanding was recently confirmed by Cloos et al. [ ], both experimentally and analytically, for the most generic machine, a flow through a coaxial rotating circular tube. According to Cloos et al. [ ], "wall stall", a term coined by Greitzer [ ] in contrast to "blade stall", is caused at part load by the interaction of the axial boundary layer and the swirl boundary layer flow, i.e. the influence of the centrifugal force on the axial momentum. For "wall stall", the axial velocity component u_z vanishes at the line z = −z_0, r = a (axial coordinate in mean flow direction z, distance z_0 from the reference point on the line of symmetry r = 0, radial coordinate r, tube radius a; cf. figure ). This paper analyzes the flow situation in encased axial turbomachines for small viscous friction. This work shows that "wall stall", i.e. u_z(−z_0, a) = 0, can also be a result of kinematics only, due to induced velocities of the vortex system superimposed with the axial main flow. The structure of the vortex system, especially of the tip vortices, depends on the operating point of the turbomachine. Furthermore, the tip vortices rotate with a sub-synchronous frequency Ω_ind (Karstadt et al. [ ], Zhu [ ]). The aim of the present investigation is to analyze the influence of the vortex system in encased axial turbomachines and its circulation strength on the observed phenomena, yielding the research questions: 1. Is it possible to explain by means of analytical methods the sub-synchronous frequencies observed for turbomachines? 2. Can "wall stall" be a result of kinematics only? To answer these questions, this work first employs vortex theory for an encased axial turbomachine, followed by the application of fundamental solutions.
For machines without casing, like wind turbines and screw propellers, vortex theory is well described by [ ]. In addition, to enlarge the investigation to the whole operating range of a turbomachine, we investigate the structure and kinematics of the vortex system at heavy overload, applying the theory of functions (complex analysis). For turbomachines, the flow number ϕ := U/(Ωa) defines the operating point (e.g. part load or overload). The flow number is the ratio of the axial free-stream velocity U to the circumferential velocity Ωa, where Ω = 2πn is the rotational speed (the scaling to Ωa and not to Ωb, with the blade tip radius b, is common in the context of turbomachines and therefore used here as well [ ]). For the nominal operating point ϕ ≈ ϕ_opt and negligible induction, the vortex system of an encased axial turbomachine consists of a hub vortex, Z bound vortices and Z tip vortices, with Z the number of blades. The tip vortices transform into helices with a pitch of 2πϕa. For part load operation ϕ → 0, see figure , the Z helices "roll up" and form a vortex ring, i.e. the pitch of the helices vanishes. The vortex ring is continuously generated by the bound vortex system. Hence, the coaxial vortex ring strength is transient. A nice picture for this vortex ring is that of a thread spool rolling up and gaining strength over time. This picture will explain some transient phenomena using kinematic arguments only. The case of heavy overload occurs for infinitely high flow numbers ϕ → ∞; see figure . The hub, the bound and the tip vortices form a horseshoe, i.e. the pitch of the helices becomes infinite. Both hub and tip vortices are semi-infinite, straight vortex filaments. In real turbomachines, the flow number cannot be adjusted to infinity, but is limited to a maximum value φ̂ due to flow rate limitations and geometric restrictions. Nevertheless, the analysis of this limiting case is important for the basic understanding of the vortex system in axial turbomachines. To develop physical understanding of the whole picture in detail, the paper is organized as follows. Section gives a short literature overview. Section uses vortex theory to determine the strength of the vortices. Subsequently, section derives the velocity potential of a coaxial vortex ring within a circular tube at part load and the induced rotating frequency. The flow potential and the induction at overload are introduced in section . The paper closes with a short outlook to potential applications in section and a discussion in section . LITERATURE REVIEW Investigations of vortex systems in fluid dynamics trace back to the work of Helmholtz [ ], who formulated Helmholtz's theorems as a basis for the research concerning rotational fluid motion. Didden [ ] performed measurements of the rolling-up process of vortex rings and compared the results with similarity laws for the rolling-up of vortex sheets. Besides the investigation of vortex kinematics, a broad research field on vortex structures in turbomachines is the experimental and numerical analysis of the acoustic and noise emission of tip vortices [ , , , ]. The noise of a fan is noticeable in a CPU, a car or a rail vehicle cooler; all three examples are met in everyday life. One of the main reasons for the noise is the gap s := (a − b)/a between the housing and the impeller tip. With increasing gap, the noise emission and the energy dissipation increase [ , ]. (Figure caption fragment: the sketch is for Z = 1, i.e. one bound vortex only, to improve clarity; ... from the Z bound vortices.)
The generation of a bound vortex was explained by Prandtl [ ] using arguments of boundary layer theory and Kelvin's circulation theorem. The presence of viscosity is essential for the creation of the bound vortex, but the generation phase is not in the scope of this paper. For vortex generation, we would like to refer the reader to the work of Prandtl [ ]. By vortex theory, each blade 1...Z of length b is represented by its bound vortex of strength Γ. For simplicity, this investigation assumes Γ to be constant in radial direction along the blade from r = 0 to r = b. As a vortex filament cannot end in a fluid due to Helmholtz's vortex theorem, a free, trailing vortex springs from each blade end r = 0 and r = b; see figures and . These vortices are of the same strength as the bound vortex. At the inner end r = 0, a straight semi-infinite vortex line 0 ≤ z < ∞ of strength Z Γ (the so-called hub vortex) attaches to the blade. The tip vortices at the outer end r = b are helices. The axial distance of each helix winding, i.e. the helix pitch, is given by U/n = 2πaϕ. Depending on the load, these helices either "wind up" (ϕ → 0), forming a vortex ring, or stretch to infinity (ϕ → ∞), yielding a straight, semi-infinite vortex line. Regardless of the flow number ϕ, the semi-infinite straight vortex line at r = 0 induces the circumferential velocity Z Γ/(4πb) at z = 0, r = b due to the Biot-Savart law. Hence, the induced rotational speed is Ω_ind = Z Γ/(4πb²). In a next step, this analysis calculates the vortex strength Z Γ, employing the angular momentum equation and the energy equation. On the one hand, the axial component of the angular momentum equation is Z Γ/2π = dM/dṁ. Here, M is the axial torque component and ṁ the mass flux. Multiplying the momentum equation by Ω yields Z Γ n = dP/dṁ. P = MΩ is the power applied to the fluid by means of the rotating bound vortices. On the other hand, the energy equation for an adiabatic flow reads dP/dṁ = △h_t, with △h_t being the difference in total enthalpy experienced by a fluid particle passing the cross-section z = 0. Both arguments result in the relation Z Γ n = △h_t. From turbomachine theory, the expression △h_t = (Ωb)²(1 − ϕ/φ̂) can be derived from the equation mentioned above. The dimensionless design parameter φ̂ equals the tangent of the blade's trailing-edge angle β_2, i.e. φ̂ = tan β_2. Hence, the relation between Z Γ and Ω yields Z Γ = △h_t/n = 2π Ω b²(1 − ϕ/φ̂). As this equation shows, the total change in circulation Z Γ along the plane of the machine is linked to the flow number ϕ by Euler's turbine equation. THE VORTEX SYSTEM AT HEAVY PART LOAD For the limiting case of interest ϕ → 0, the relation between Z Γ and Ω, equation , yields △h_t = Z Γ n = (Ωb)². This results in an induced sub-synchronous frequency Ω_ind = 0.5 Ω. This induced frequency is in surprisingly good agreement with measured sub-synchronous frequencies 0.5 Ω...0.7 Ω of rotating stall of compressors, fans and pumps at part load operation [ ] and may result in a rethinking of rotating stall from a kinematic perspective. This investigation is now set to analyze the kinematics of coaxial vortex rings of radius b and maximal strength Γ_t = Z Γ n t < (Ωb)² t, as sketched in figure . By doing so, Laplace's equation is solved for the velocity potential (series solution not reproduced here), with J_0, J_1 the Bessel functions of orders 0 and 1, respectively, and k_n, n = 1...∞, the zeros of J'_0(k_n) = −J_1(k_n) = 0. The dimensionless velocity potential φ depends on the dimensionless ring radius β := b/a and the dimensionless vortex strength τ := Γ_t/(2bU).
Since Γ_t increases linearly in time, τ can also be interpreted as a parametric time of the process. Stokes' stream function for this flow follows (using the integrability conditions ∂ψ/∂z = −r ∂φ/∂r and ∂ψ/∂r = r ∂φ/∂z [ ]). With the stream function, the radial velocity component and the axial velocity component are given. The velocity field (equations and ) takes the induction of the vortex ring into account. At the stagnation point z = ±z_0, the axial velocity u_z vanishes. Mirrored tip vortices are necessary to fulfill the kinematic boundary condition on the tube wall. These mirrored vortices are located in the housing and on the rotational axis of the turbomachine. Considering the tip vortex and its mirrored conjugates only, i.e. neglecting the hub vortex as a first step, one obtains the system visualized in figure , bottom left. The vortex on the axis and the hub vortex feature identical magnitude but opposed rotating direction. Adding the hub vortex, yielding the complete system, hence results in the annulation of these two vortices; see figure , bottom right. For the considered potential flow, the tip vortex at radial position b = (1 − s)a yields the complex potential F_1; here, s is the dimensionless gap. The Milne-Thomson circle theorem [ ] is applied to derive the complex potential satisfying the kinematic boundary condition at the wall. This theorem postulates a resulting complex potential for a potential F_1 and the mirrored potential at the surrounding wall. Adding the potential of the mirrored tip vortex on the axis of the turbomachine ζ = 0, see figure , bottom right, yields the complex flow potential. The tip vortex at ζ = b with the circulation Γ necessitates a mirrored vortex at ζ = 0 with the same magnitude of circulation and a mirrored vortex in the housing at ζ = a²/b = a/(1 − s) with the same magnitude and inverted direction. Up to now, the hub vortex has been excluded from the considerations. Considering the hub vortex, as visualized in figure , bottom right, yields the complete complex potential. In the following, this analysis shows that an induced movement of the gap vortex occurs against the rotating direction of the turbomachine at heavy overload. A potential vortex induces a velocity on the surrounding flow, and the velocity components of a given potential F(ζ) are calculated from its derivative. A straight vortex filament does not induce a velocity on its own, due to the Biot-Savart law, so the induced velocity at ζ = b is only due to the mirrored tip vortex at ζ = a/(1 − s). The resulting induced velocity at the position of the tip vortex follows accordingly. Assuming a turbomachine with Z impeller blades, the rotating velocity of the tip vortex is directed against the rotating direction of the turbomachine. For symmetry reasons, the rotating trajectory defines a circular path at radius b = a(1 − s). Hence, the induced frequency at ζ = b follows. For a high flow number ϕ → φ̂ and a small gap s ≪ 1, the induced frequency yields Ω_ind = −0.5 Ω/s, so that for heavy overload the induced frequency increases in magnitude with decreasing tip clearance. Furthermore, we expect noise of high frequency due to the small value of s < 1%, which is common for turbomachines. The broadband drop in the sound power for all flow numbers is clearly visible. Müller [ ] applied the continuity and the momentum equations and deduced that sound inside a fluid volume is only emitted if the rotation of the velocity field changes in time. The present study applies a similar approach to analyse the tip clearance noise of a turbomachine.
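A brief numerical check of the two induced-frequency results used above helps to gauge the orders of magnitude involved, including the high tip-clearance frequency expected for s < 1%. The sketch below uses only relations stated in the text (Biot-Savart induction of the hub vortex, Z Γ n = △h_t at part load, and Ω_ind = −0.5 Ω/s at overload); the rotational speed, tip radius and tip-clearance value are assumed example numbers, not data from the paper.

```python
# Numerical check of the induced frequencies derived above.
# Example machine parameters (assumed, not from the paper):
import math

n = 50.0                 # rotational speed in 1/s (3000 rpm)
Omega = 2 * math.pi * n  # angular frequency
b = 0.15                 # blade tip radius in m (assumed)
s = 0.01                 # dimensionless tip clearance, typical s < 1%

# Part load (phi -> 0): Z*Gamma*n = (Omega*b)^2, and the hub vortex induces
# u_theta = Z*Gamma/(4*pi*b) at r = b, hence Omega_ind = Z*Gamma/(4*pi*b^2).
ZGamma = (Omega * b) ** 2 / n
Omega_ind_part_load = ZGamma / (4 * math.pi * b ** 2)
print(Omega_ind_part_load / Omega)         # -> 0.5, i.e. Omega_ind = 0.5 Omega

# Heavy overload: Omega_ind = -0.5 * Omega / s (rotation against the rotor).
Omega_ind_overload = -0.5 * Omega / s
print(Omega_ind_overload / (2 * math.pi))  # induced frequency in Hz, about -2500 Hz
```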
Time-consuming simulations, as performed by Carolus et al. [ ], surely allow a more profound and accurate insight into the acoustics of turbomachines. The development of an analytical model that predicts the main frequencies is nevertheless of interest for generating a deeper understanding of the acoustics of turbomachines. These findings and the analytical model presented in this paper could be an efficient tool for the acoustic design of turbomachines. SUMMARY AND CONCLUSION An interplay between dynamic and kinematic effects explains flow structures and phenomena. Using computational fluid dynamics, a clear distinction of both effects is often impossible. In contrast, analytical methods allow a more focused picture of fluid mechanics, i.e. they allow a clear distinction of effects. Of course, only generic flows are accessible to analytical methods. This paper focused on an analytic model for wall stall, so far explained by dynamics only: boundary layer separation is indeed a dynamic effect. Nevertheless, boundary layer separation is not necessarily the only reason for wall stall. It is shown that kinematics may also explain at least some effects of wall and rotating stall. The picture used for a flow at small flow numbers is a thread spool rolling up the tip vortices resulting from the rotating bound vortices. From the fluid mechanics perspective, the thread spool is a coaxial vortex ring of increasing strength connected to a semi-infinite hub vortex (figure ). So far, the velocity potential of a coaxial vortex ring inside a tube was unknown. The solution of Laplace's equation results in the velocity potential for the vortex filament within a tube [ ]; see equation . This paper gained three main results, which are due to kinematics only. First, at part load operation, the hub vortex induces a sub-synchronous rotation of the vortex ring. The derived rotational speed Ω_ind = 0.5 Ω of the vortex ring is surprisingly consistent with observed sub-synchronous speeds of rotating stall; cf. [ ]. Second, the vortex ring induces an upstream axial velocity at the wall. Together with the undisturbed flow velocity, this results in a stagnation point upstream and downstream at the wall, which may be interpreted as "wall stall" (figure ). Third, at overload operation, the induced rotational direction is inverted with respect to the case at part load. The semi-infinite straight vortex filament at the outer blade end rotates against the rotating direction of the turbomachine, due to the induction of the hub vortex. The induced frequency yields Ω_ind = −0.5 Ω/s. The presented analytical model may give new arguments and improve the understanding of the vortex system in turbomachines, but it is also intended to motivate generic experiments. Hence, a test rig will validate the models presented in this paper in the near future. As a next step, the velocity potential for a coaxial vortex ring filament in a circular tube (equation ) will be extended to a coaxial vortex layer, yielding a transient behavior of the vortex system. This behavior leads to a change in the circulation over time, which is responsible for noise emission [ ].
4,174.8
2017-12-16T00:00:00.000
[ "Engineering", "Physics" ]
Chitosan hydrogel encapsulated with LL-37 peptide promotes deep tissue injury healing in a mouse model Background LL-37 peptide is a member of the human cathelicidin family, and has been shown to promote the healing of pressure ulcers. However, the low stability of this peptide within the wound environment limits its clinical use. Chitosan (CS) hydrogel is commonly used as a base material for wound dressing material. Methods CS hydrogel (2.5% w/v) was encapsulated with LL-37. Cytotoxicity of the product was examined in cultured NIH3T3 fibroblasts. Effects on immune response was examined by measuring tumor necrosis factor-α (TNF-α) release from RAW 264.7 macrophages upon exposure to lipopolysaccharides. Antibacterial activity was assessed using Staphylococcus aureus. Potential effect on pressure ulcers was examined using a mouse model. Briefly, adult male C57BL/6 mice were subjected to skin pressure using magnets under a 12/12 h schedule for 21 days. Mice were randomized to receive naked LL-37 (20 μg), chitosan gel containing 20-μg LL-37 (LL-37/CS hydrogel) or hydrogel alone under the ulcer bed (n = 6). A group of mice receiving no intervention was also included as a control. Results LL-37/CS hydrogel did not affect NIH3T3 cell viability. At a concentration of 1–5 μg/ml, LL-37/CS inhibited TNF-α release from macrophage. At 5 μg/ml, LL-37/CS inhibited the growth of Staphylococcus aureus. The area of the pressure ulcers was significantly lower in mice receiving LL-37/CS hydrogel in comparison to all other 3 groups on days 11 (84.24% ± 0.25%), 13 (56.22% ± 3.91%) and 15 (48.12% ± 0.28%). Histological examination on days 15 and 21 showed increased epithelial thickness and density of newly-formed capillary with naked LL-37 and more so with LL-37/CS. The expression of key macromolecules in the process of angiogenesis (i.e., hypoxia inducible factor-1α (HIF-1α) and vascular endothelial growth factor-A (VEGF-A)) in wound tissue was increased at both the mRNA and protein levels. Conclusion Chitosan hydrogel encapsulated with LL-37 is biocompatible and could promote the healing of pressure ulcers. Background Pressure injuries, also known as pressure ulcers, are common in bed-ridden patients, and associated with poor quality of life as well as high medical care costs [1,2]. Pressure injuries have also been associated with poor prognosis and higher mortality in some patients [3]. Deep tissue injuries can rapidly develop into open ulcers, exposing wounded tissue to the external environment. Therefore, in clinical wound care and nursing specialties, the development of effective treatments for deep tissue pressure injuries is urgently required. Inflammatory responses and reductions in capillary density caused by ischemia/reperfusion are the main factors that affect deep tissue injury healing [4,5]. The main principles of deep tissue injury management are in the inhibition of inflammatory responses in conjunction with the restoration of blood microcirculation within the wound site [4,[6][7][8]. However, there are limited therapeutic drugs that can achieve these desirable effects; hence, there is an urgent requirement for the development of new, effective means to satisfy the unmet clinical need for the treatment of deep tissue injuries. LL-37 is the only antimicrobial peptide of the cathelicidin family that has been identified in humans [9]. Recent studies have highlighted the important roles that antimicrobial peptides play in the regulation of wound healing [10]. 
In addition to bactericidal actions, LL-37 can also bind to Toll-like receptors (TLRs), inhibit TLR signaling pathways and reduce the production of proinflammatory cytokines [11]. LL-37 also contributes to blood vessel formation and has been shown to act as a practical immune adjuvant [12,13]. Local injection of LL-37 significantly increased ischemic hind limb collateral circulation in animal models [14]. In clinical trials, it was found that supplementation with LL-37 was safe and was well tolerated when applied to nonhealing venous leg ulcers (VLUs) [15]. The major challenge associated with LL-37 administration is its rapid degradation within the wound environment; thus, treatments require higher dosage and dosing frequencies to achieve the desired therapeutic effect [16]. Recent research has shown that hydrogel dressings are an attractive option for use as small molecule drug delivery systems [17,18]. Hydrogel dressings not only provide a moist healing environment for in vivo local injury tissues but also function as an extracellular matrix-like scaffold for cellular support while maintaining the biological activity of loaded small molecule polypeptides [19][20][21]. Multiple reports have shown that Chitosan (CS) hydrogels have strong potential for use in both pharmaceutical and medical applications [22,23]. CS hydrogels are also widely used as drug delivery systems and wound dressings due to their mucoadhesive properties [24]. Previously, CS hydrogel wound dressings were prepared and successfully loaded with small molecular peptides for application in the treatment of lower limb ischemic disease [25]. In light of these reports, CS was selected as an appropriate base material for the hydrogels used in this study. Hydrogels were fabricated at a concentration of 2.5% CS (w/v), as it was previously reported that CS hydrogels at this concentration have suitable biocompatibility and fast degradation profiles [26]. In the present study, LL-37/CS hydrogel dressings were prepared using physical blending methods. The therapeutic effects were assessed, and molecular mechanisms of action were evaluated in deep tissue injuries through the use of molecular biology techniques. The results obtained in this study provide a theoretical basis for further investigations into the clinical application of LL-37/CS hydrogel dressings. Encapsulation efficiency The encapsulation efficiency of LL-37 within CS hydrogels was determined by measuring the amount of nonencapsulated LL-37 still present within the supernatant after hydrogel formation and after 1 mg of lyophilized LL-37/CS hydrogel was dissolved in 1 ml of water. The concentration of LL-37 that was encapsulated within CS hydrogels was analyzed by HPLC using conditions reported in the literature [29]. A calibration curve was plotted for the LL-37 peptide concentration range from 1 ng/ml to 25,000 ng/ml. The LL-37 concentration was quantified, and the encapsulation efficiency of LL-37 was calculated using the following equation: Encapsulation efficiency = [Encapsulated LL-37/Initial LL-37] × 100%. Antimicrobial activity Antimicrobial activity was compared among LL-37 (5 μg/ml), LL-37/CS hydrogel (LL-37, 5 μg/ml) and CS hydrogel using Staphylococcus aureus bacterial culture. Briefly, a total of 1 × 10 5 Staphylococcus aureus bacterial cells were added to 1 ml of PBS containing LL-37 or hydrogels and incubated for 6 h at 37°C. 
A series of aliquots (100 μl) were taken and diluted in PBS to yield 1 × 10 3 bacteria per ml solution, and then plated on LB agar and incubated for 24 h at 37°C before colonies were counted. Cell culture NIH3T3 cells were cultured in DMEM (Gibco) supplemented with 10% FBS containing antibiotics (100 U/ml penicillin and 100 μg/ml streptomycin. Cells were maintained at 37°C in a humidified 5% CO 2 atmosphere. Cells were subcultured every 2-3 days or when a confluent monolayer had formed. Cytotoxicity assay Cytotoxicity to NIH3T3 cells was examined using CCK-8 assays. The experiment was grouped into: control (saline), LL-37/CS (containing various concentration of LL-37: 10 ng/ml, 50 ng/mL, 100 ng/ml), CS with 6 replicate wells per group. Briefly, cells were seeded in 96-well plates at a density of 1 × 10 5 cells/ml, grown to 70-80% confluence, and incubated with LL-37/CS hydrogel or free LL-37 at various concentrations for 24 or 48 h. After two washes with PBS, cell viability was assessed using a CCK-8 assay kit (Dojindo). Absorbance was measured at 450 nm. The results are expressed as the percentage of control cultures. In vivo wound healing study Animals Male C57BL/6 mice (6-8 weeks of age, approximately 20 g) were acquired from Beijing Vital River Laboratory Animal Technology (Beijing, China), and maintained in the Qingdao University Veterinary Service Center (Qingdao, China). Experiments were performed in compliance with the guidelines established by the Institutional Animal Care and Use Committee of Qingdao University (Qingdao, China). Deep tissue injury Hair of the right hind limb of the mouse was shaved using hair clippers. An area close to the gluteus superficialis muscle was subjected to pressure using a magnet (12-mm diameter, 5-mm thickness, 2.4 g weight, 1000 G surface magnetic flux) under a 12/ h under pressure and 12 h schedule. During the 12-h period with pressure, the mice had unlimited access to food and water, and allowed to move in the cage freely. One day after the paradigm started, mice randomly received subcutaneous injection of 20-μg LL-37, 20-μg LL-37/CS, or CS hydrogel alone (n = 6) into deep tissue under the magnet. A group of mice received magnet but no other intervention was included as a control. On day 21, the mice were sacrificed by severing the neck to obtain tissue for further analysis. Evaluation of wound healing Wound healing was evaluated by measuring the wound area on days 1, 3, 5, 7, 9, 11, 13, 15, 17, 19 and 21. Wound sites were photographed, and the wound area was measured using ImageJ analysis software (NIH, Bethesda, MD, US). Wound closure was expressed as the percentage of the initial wound area. The healing ratio was calculated using the following equation: Histological analysis of wound healing Three mice were selected from each group, and the ulcer site and the surrounding tissue (approximately 1cm radius) were excised, fixed in 4% paraformaldehyde for 24 h, and then transferred to PBS for storage at 4°C until further use. Samples were embedded in paraffin, cut into three 5-μm-thick sections, and stained with hematoxylin and eosin (H&E). Five random visual fields of each section were selected to conduct histological analysis. Reverse transcription (RT-PCR) and quantitative real timepolymerase chain reaction (qRT-PCR) Total RNA was isolated using Trizol reagent (#RP1001, BioTeke, Beijing, China) and reverse-transcribed into cDNA using the First-Strand cDNA Synthesis System kit with random oligo primers. 
Samples were stored at −20°C until use in qRT-PCR. Primers were designed to be specific for mouse mRNA expression and for use in qRT-PCR analysis (Exicycler 96, Bioneer, South Korea). Nuclease-free water was used in place of the sample as a negative control. β-actin was used as the internal control. Every sample was tested in duplicate. Melting curves were analyzed for each run to assess the presence of non-specific PCR products. The results were analyzed using Step One Software V2.1. The mRNA expression of the interleukin-6 (IL-6), interleukin-10 (IL-10), TNF-α and transforming growth factor-β (TGF-β1) genes was calculated relative to the expression of β-actin according to the ΔΔCt method. Statistical analysis Data are presented as the mean ± standard deviation (SD) and were analyzed using two-way ANOVA followed by Tukey's test for pairwise comparison in GraphPad Prism 7.00 (San Diego, CA, USA). P < 0.05 was considered statistically significant. Results Preparation and characterization of LL-37/CS CS hydrogels (2.5% w/v) containing various final concentrations of LL-37 were prepared: 0.1% w/w, 0.25% w/w and 0.5% w/w. Following gelation, stable hydrogels formed within minutes (Fig. 1a). Scanning electron microscopy (SEM) showed morphological characteristics common to freeze-dried hydrogels (Fig. 1b), indicating that all hydrogels had a porous network structure with homogeneous interconnectivity. The diameter of the porous network structure (range: 50-100 μm) was sufficient to incorporate and release the LL-37 peptide. Encapsulation of LL-37 did not appear to affect the gelation performance of the CS hydrogels. The encapsulation efficiency of LL-37 within CS hydrogels was 86.16% ± 2.61% (Fig. 1c) when LL-37 was loaded at 5 μg per mg CS hydrogel. The total amount of LL-37 within each hydrogel (0.1 ml) was 4.31 ± 0.13 μg. LL-37/CS hydrogels accelerate deep tissue injury wound healing The largest wound closure was identified in the mice that received treatment with LL-37/CS hydrogels (Fig. 3a-b). The wound areas of the control, CS, LL-37 and LL-37/CS groups were reduced to 33.05% ± 0.59%, 38.52% ± 0.53%, 79.78% ± 0.13% and 8.96% ± 0.58% at 21 days, respectively. Wound closure by LL-37/CS hydrogels was significantly different in comparison to all other treatment groups for the vast majority of time points. Notably, the LL-37/CS group showed a wound area rate of 84.24% ± 0.25%, 56.22% ± 3.91%, 48.12% ± 0.28% and 20.14% ± 0.21% after 11, 13, 15 and 17 days, respectively, significantly lower than that of the other groups. (Fig. 3 caption: wound areas measured on days 3, 5, 7, 9, 11, 13, 15, 19 and 21; statistical significance compared with the control group, mean ± SD, n = 3; panel c: LL-37/CS facilitated an increase in the density of new capillaries and re-epithelialization of deep tissue injury; wound sections (n = 3) stained with hematoxylin and eosin (H&E), representative sections for all four groups on days 14 and 21 after treatment, ×10 and ×20; *P < 0.05, **P < 0.01 and ***P < 0.001 compared with the control group. Fig. 4 caption: in vivo quantitative determination of mRNA expression in wound sites; a-f: TGF-β1, VEGF-A, HIF-1α, IL-6, TNF-α and IL-10 expression at the mRNA level in deep tissue injury, mean ± SD, n = 3; error bars indicate SD; *P < 0.05, **P < 0.01, ***P < 0.001 and ****P < 0.0001 compared with the control group.)
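The relative-expression calculation mentioned in the qRT-PCR subsection above follows the standard 2^(-ΔΔCt) scheme. The sketch below is a generic illustration with made-up Ct values; the gene names and the β-actin reference follow the text, but none of the numbers are from the study.

```python
# Generic 2^(-ΔΔCt) relative-expression calculation, as referenced in the
# qRT-PCR subsection. All Ct values below are invented for illustration.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs. the control group, normalized to a
    reference gene (here beta-actin)."""
    delta_ct_sample = ct_target - ct_ref              # ΔCt, treated sample
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt, control sample
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: VEGF-A in an LL-37/CS-treated wound vs. an untreated control wound
fold = relative_expression(ct_target=24.0, ct_ref=17.0,            # treated: VEGF-A, β-actin
                           ct_target_ctrl=26.5, ct_ref_ctrl=17.2)  # control: VEGF-A, β-actin
print(f"VEGF-A fold change vs. control: {fold:.2f}")  # ~4.9-fold in this toy example
```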
On days 15 and 21, mice treated with LL-37/CS hydrogel displayed characteristics of rapid and effective wound healing in comparison to the control group. It was clear that in the LL-37/CS hydrogel treatment group, the epidermal and subepidermal layers were well defined and organized (Fig. 3c-e). At 2 weeks, there were fewer inflammatory cells, and the new epidermis was formed to a greater extent in the LL-37 and LL-37/CS groups. The epithelial thicknesses in the control, LL-37, CS and LL-37/CS groups were 35.43 ± 1.80, 42.90 ± 2.25, 35.03 ± 0.76 and 53.98 ± 2.61 μm, respectively. The numbers of blood capillaries per field in the same four groups were 7.33 ± 1.15, 16 ± 1, 9.33 ± 0.58 and 19.33 ± 0.58, respectively. At 3 weeks, the wound surfaces treated with LL-37/CS were much smoother and the new tissue was more fully grown. The numbers of blood capillaries per field in the control, LL-37, CS and LL-37/CS groups were 13.33 ± 0.58, 26 ± 1.73, 16.33 ± 0.58 and 29.33 ± 0.58, respectively. The wound regions in the LL-37 and LL-37/CS groups had significantly more blood capillaries than those of the control group (P < 0.01). LL-37/CS hydrogels upregulate VEGF-A, TGF-β1 and HIF-1α and downregulate TNF-α and IL-6 expression qRT-PCR revealed significantly higher expression of TGF-β1, VEGF-A and HIF-1α in the LL-37/CS hydrogel-treated group than in the control group on day 14 (Fig. 4a-c, P < 0.0001). However, there was no detectable difference in the mRNA expression of these genes between the untreated and CS hydrogel-treated groups. Higher VEGF-A and HIF-1α expression was also observed in mice receiving free LL-37 in comparison with the control. IL-6 and TNF-α expression was lower on day 14 in the LL-37/CS hydrogel- and LL-37 peptide-treated groups than in the control group (Fig. 4d-e, P < 0.001). No significant differences were observed between the control and CS hydrogel-treated groups. IL-10 was increased in the LL-37/CS hydrogel-treated and LL-37 peptide-treated groups compared with the control group (Fig. 4f, P < 0.01). (Fig. 5 caption: results of Western blotting, showing the changes in the protein expression of HIF-1α, TGF-β and VEGF-A in wound tissue; β-actin was used as a loading control; a-d: relative protein expression of HIF-1α, TGF-β1 and VEGF-A calculated as integrated density values; error bars indicate SD; *P < 0.05, **P < 0.01 and ***P < 0.001 compared with the control group.) Discussion In this study, LL-37/CS hydrogels were injected subcutaneously into the wound site of deep tissue injuries. LL-37/CS hydrogels had an increased capacity to drive wound closure and to improve re-epithelialization by keratinocytes. In addition, the LL-37/CS hydrogel formulation used in the present study displayed excellent cytocompatibility and no cytotoxic effects in vitro (Fig. 2). The pathological changes relating to deep tissue injuries and pressure wounds were evident in the control group and were observed to resolve following treatment with LL-37/CS hydrogels (Fig. 3). Previous research has shown that ischemia-reperfusion injury can promote and aggravate deep tissue injuries and even result in limb gangrene or shedding [5,31], suggesting that tissue hypoxia may play a prominent role in driving the complications associated with deep tissue injuries.
Anti-inflammation and angiogenesis mechanisms were shown to be important factors in limiting damage to deep tissue muscle injuries [32,33]; thus, targeting these fundamental mechanisms of wound healing may help to limit damage resulting from pressure wounds. TNF-α is the initiator of inflammatory responses in ischemia-reperfusion injury, resulting in the enhancement of antibody-dependent cell-mediated cytotoxicity, stimulation of cell degranulation and secretion of myeloperoxidases [34,35]. IL-6 activates B-cells following its secretion by macrophages, lymphocytes and epithelial cells. IL-6 has also been shown to have dual functionality, either causing or inhibiting inflammation, depending on the chronicity of tissue trauma [36,37]. In this study, tissue mRNA expression of the inflammatory factors IL-6 and TNF-α was determined by qRT-PCR, as shown in Fig. 4. In previous studies, it was found that LL-37 exerts a protective effect against inflammatory damage by inhibiting the activation of certain enzymes or activators of inflammatory factors [14,38,39], observations that are in keeping with our findings. New blood vessel formation is also critical for tissue repair, as the provision of oxygen and nutrients to the wound site facilitates tissue regeneration [40]. VEGF is an endothelial-specific angiogenic factor that promotes endothelial cell proliferation, migration, and lumen formation and increases vascular permeability [41]. In this study, we demonstrated that LL-37 accelerated wound healing rates and stimulated the production of VEGF-A within wound tissue (Fig. 4). These results are in keeping with the findings of previous studies, which also demonstrated that LL-37 induced VEGF-A in human keratinocytes. In this study, we show that the mechanism of LL-37 induction of VEGF-A production is regulated by HIF-1α. HIF-1α is an important nuclear transcriptional regulator involved in cellular adaptation to hypoxia and has roles in regulating hypoxic tissue angiogenesis, cell proliferation and cell survival [42]. Previous studies reported that HIF-1α upregulated VEGF expression and promoted angiogenesis, which suggests that it is vital in the rescue of tissue repair following ischemia-reperfusionassociated injuries. In this study, it was found that the expression of VEGF in the treated group was significantly higher than that in control group at day 14 after wounding. The data showed that HIF-1α had a positive correlation with VEGF protein expression, suggesting that HIF-1α is closely related to angiogenesis in deep tissue injuries (Figs. 4 and 5). TGF-β is a multifunctional cytokine that promotes angiogenesis after ischemiareperfusion [43]. In our current work, the expression of TGF-β 1 was detected in the different treatment groups by qRT-PCR on day 14 of wound healing. The results indicated that the mRNA expression of TGF-β within deep tissue injuries significantly increased after LL-37/CS hydrogel treatment. Therefore, LL-37/CS hydrogels may convey therapeutic action through the upregulation of pro-angiogenic growth factor expression, subsequently increasing blood supply while promoting the generation of granulation tissue and protecting against tissue matrix catabolism. The local treatment of deep tissue injuries with LL-37/ CS hydrogels shown in this investigation offers a promising therapy for deep tissue and pressure-associated injuries. It was previously suggested that LL-37 is involved in angiogenesis. 
Indeed, LL-37 has been shown to be associated with wound healing in chronic wounds, namely, diabetic foot ulcers. Our results showed that LL-37 hydrogels directly increased angiogenesis and pro-healing cytokine production in deep tissue injuries, suggesting that the outcomes from LL-37/CS hydrogel treatment in deep tissue injury healing may also be applicable to chronic nonhealing wounds or broader ischemic tissue injuries. Conclusion LL-37-loaded CS hydrogels were successfully fabricated and could improve wound healing in deep tissue pressure injuries. Our results showed that topically injected LL-37/CS hydrogels could enhance anti-inflammatory and pro-angiogenic cytokine production in wounds of deep tissue injuries, overcoming the chronic inflammation and poor microcirculation issues commonly observed in chronic wound environments. The LL-37/CS hydrogel could effectively deliver LL-37 peptide to the wound site and produce antimicrobial and pro-healing activity.
4,638.2
2020-04-22T00:00:00.000
[ "Medicine", "Materials Science" ]
Hybrid anchoring for a color-reflective dual-frequency cholesteric liquid crystal device switched by low voltages : Cholesteric liquid crystal (CLC) materials used in electro-optical (EO) devices are characterized by high operating voltage and slow response speed, which hinders their further development in display applications. Dual-frequency CLCs (DFCLCs) can solve the problem of slow bistable transition, but the operating voltage is still high, especially in color-reflective DFCLC cells. Here we report a simple approach to lowering the switching voltage as well as to shortening the response time. This technique adopts hybrid surface treatment to modulate the structural arrangement of CLC molecules. Both planar-and vertical-alignment layers are employed and coated on one and the other substrates separately to improve the electro-optical properties of DFCLCs. We show that the threshold voltage for switching can be decreased to as low as 5 V and the shortest response time is measured to be 0.8 ms, which renders CLC EO devices including displays more practical for commercial purpose. Introduction Cholesteric liquid crystals (CLCs) are a type of optically active liquid crystalline materials having a helical arrangement of molecular directors from layer to layer.They are usually utilized in the form of a thin layer between two parallel substrates in such a way that the helical axis is perpendicular to the substrate surfaces.Conforming to the definition of a circular polarizer, if such a thin CLC layer is irradiated with a beam of unpolarized light, the component of the light which has the same handedness as the CLC chirality will be reflected, whereas the remainder of the light; i.e., the oppositely handed component, is transmitted.Owing to their intrinsic optical bistability and reflection selectivity governed by the Bragg law, the consequent energy-saving feature of CLC displays is especially alluring.Reflective CLC displays are fabricated without the need of a backlight, polarizers, and color filters, enabling the CLC displays to be more processable [1,2].Unfortunately, a typical electrooptical (EO) CLC device requires high switching voltage and has a long response time.These drawbacks impede expected development of CLC displays [3].Dual-frequency CLC (DFCLC) materials generally possess positive dielectric anisotropy at low frequencies, while exhibit negative dielectric anisotropy above a certain frequency known as the crossover frequency fc.Thus, by switching the frequency of a reasonable voltage from below to above fc, the molecular orientation can be substantially changed [4,5].On the basis of this nature, DFCLCs can switch bidirectionally or reversibly between the planar (P) state and the focal conic (FC) state by means of frequency-modulated voltage pulses, as first demonstrated by Hsiao et al. [6].DFCLCs can also be used for accelerating the switching process.Accordingly, DFCLCs are more promising for display applications except, again, the high operation voltage.Prior studies of CLCs mainly focus on the planar alignment (PA) configuration because of the strong horizontal force rendering the constituent molecules to be assembled in the P state initially.Smalyukh et al. 
distinctively studied the phase diagram of director structures in CLCs in the vertical-alignment (VA) mode; for rubbed VA substrates, only two types of fingerprint textures were observed [7]. Moreover, the VA mode for CLC applications has been proposed in diffraction gratings and beam-steering components [8,9]. Here we use a simple surface-treatment technique to fabricate a hybrid-anchoring (HA) cell [10]. By coating a PA layer on one substrate and a VA layer on the other, we demonstrate that the resulting cell configuration leads not only to faster switching response but also to lower switching voltage. With this HA approach implemented for DFCLC cells, their unique EO properties now make them more attractive for photonic device applications in displays, light modulators, and many others [11][12][13][14][15][16]. Experiment The nematic host material used in this study is MLC2048 (Merck), whose dielectric anisotropy ∆ε at room temperature (20 °C) is +3.2 at 1 kHz and −3.1 at 50 kHz, and the crossover frequency fc (where ∆ε = 0) is 12 kHz. The chiral dopant (CD) R5011 (Merck) was dispersed in the nematic host at concentrations of 2.49, 2.88, and 3.51 wt% in order to fabricate three types of cells with reflection wavelengths centered at 650 nm (R), 548 nm (G), and 449 nm (B) in the unperturbed HA configuration, respectively. The DFCLCs were introduced into empty HA cells by capillary action in the isotropic phase. Each HA cell was made of a pair of 1.1-mm-thick indium-tin-oxide glass substrates separately covered with rubbed PA (SE-8793 from Nissan Chemical) and VA (AL-8395 from Daily Polymer, Taiwan) layers, yielding a cell gap d of 4.0 ± 0.5 μm. Note that the rubbed PA layer allowed unambiguous identification of the initial quasi-P state. For comparison, we prepared conventional cells with planar anchoring as well. An arbitrary function generator (Tektronix AFG-3022B) was used to supply various frequency-modulated voltages to switch the DFCLC states. A He-Ne laser operating at the wavelength of 632.8 nm was employed in the EO measurement. The transmission spectra of the DFCLCs were acquired with a high-speed fiber-optic spectrometer (Ocean Optics HR2000+) in conjunction with a halogen light source (Ocean Optics HL2000). The experimental temperature was fixed at 25 ± 1 °C. Figure 1 shows a schematic of the two optically stable states and the switching between the quasi-P and FC states in a HA DFCLC cell. The low-frequency (1 kHz) voltage VL and high-frequency (100 kHz) voltage VH permit the reversible switching. Results and discussion Figures 2(a) and (b) show the transmission spectra of the DFCLCs (with 2.18-wt% CD and a 4-µm cell gap) in a typical PA cell and a HA counterpart, respectively. The "perfect" Bragg reflection band is revealed in the spectrum of the PA cell as seen in Fig. 2(a). Due to the strong self-assembly and continuum effect, the CLC structure formed in the HA scheme (Fig. 2(b)) also exhibits the Bragg reflection although the reflection band is blunter, with a bandwidth wider than that of the PA cell. Obviously, the CLC molecules are tilted near the VA region, leading to the slight blueshift and broadening of the photonic bandgap [17]. Figure 2(c) shows the transmission spectra of three HA cells with 2.49-, 2.88-, and 3.51-wt% CD dispersed in MLC2048. One can see that the cells reflecting in R, G, and B are demonstrated under the proposed HA configuration (Figs. 2(c) and (d)). It is worth mentioning that Fig.
2(d), depicting the appearance of these cells, has not been retouched or manipulated by any photo-editing software.Next we compare the conventional PA and our proposed HA for DFCLC cells.In order to know how the surface treatment affects the operation voltage, EO measurement was taken. Figure 3(a) illustrates the voltage-dependent transmittance of both the PA and HA cells reflecting in R at the operation frequency of 1 kHz.The transmittance diminishes due to the reflection when the cells are initially in the P or quasi-P state at low voltages.As the lowfrequency voltage increases beyond 5 V, the FC state is induced by the broken and randomly oriented helices, allowing photons to transmit via scattering.When the operation voltage elevates further beyond the critical voltage, both DFCLCs transit to the homeotropic (H) state, reaching the maximum in the transmission spectra.Although the critical voltage (~21 Vrms) is the same for both cells, a 50% reduction in voltage is achieved for P-FC switching in the HA cell-5 Vrms solely as compared with 10 Vrms for the PA cell, thanks to the VA layer in the HA cell to impose the local distortion and, in turn, to facilitate the transition to the FC state.It is worth mentioning that the 5-Vrms operation voltage can be easily managed by current thinfilm transistors (TFTs).By curiosity, we also found that the switching voltage reasonably increases when the cell gap decreases to 3.5 ± 0.5 μm in that the strength of PA is usually stronger than that of VA and the anchoring becomes more significant in thinner cells.Likewise, Fig. 3(b) shows that the FC-to-P transition voltages are 9.5 and 12.4 Vrms in the HA and PA cells, respectively.Here a 23% reduction is achieved by the rectified surface treatment. We acquired the reflection spectra of the HA DFCLC cells with a fiber white-light source 20 s after a 100-ms-wide voltage pulse was removed so as to ascertain the bistability.The angle of incidence was 10° from the cell normal and the detector was set at the specular angle.Figure 4(a) delineates the reflections in cells R, G, and B varying with the applied voltage at 1 kHz.Note that the cholesteric texture corresponding to high voltages is quasi-P state owing to the relaxation from the H state after the removal of the triggering pulses.It is clear that VL =10 Vrms induces the "best" scattering states for all of the cells.Figure 4(b) illustrates the reflections in these cells vs. the applied voltage pulse at 100 kHz.It is obvious that VH = 10 Vrms can generate a strong torque for the HA bulk with negative dielectric anisotropy to be reverted to the quasi-P state.Based on our previous study, the frequency of 100 kHz yields the shortest switching time from the FC to P state [6].From Fig. 4, VL and VH can be used for designing the drive scheme in the TFT addressing technology. 
Figure 5(a) illustrates the transition times between the FC and P states for a PA cell and a HA counterpart, switched with a 10-Vrms voltage pulse at 1 kHz for 5 s (as the pulse width).The P-FC response times (tPF) of 1.9 and 0.8 ms were measured in the PA and HA cells reflecting in R, respectively.This is a 58% reduction in response time.Furthermore, the particular property of DFCLC is the response time of the direct FC-P transition (tFP).Figure 5(b) shows that tFP are 2.2 and 1 ms for the PA and HA cells, respectively.Here again, a 54% reduction is realized as a result of the modified surface treatment.Table 1 presents the response times, tPF and tFP, upon a 10-Vrms pulse for bistable transitions.Since VL, VH, tPF, and tFP can all be significantly reduced by means of the HA treatment, the issue concerning characteristically high switching voltage in typical DFCLCs can be no more a problem.The added value of the HA for DFCLCs is its even faster response of ~1 ms in bistable transitions [6]. Conclusions Interesting EO properties of color-reflective HA DFCLCs have been investigated.The VA layer in a HA cell does not hinder the characteristics of selective reflection and bistability as expected in a typical PA CLC.The optically stable quasi-P state can be rapidly switched to the other stable state-the scattering FC state-by a low-frequency voltage pulse and also rapidly but reversibly back to the quasi-P state by a high-frequency switching voltage.While we previously achieved the fast response in color-reflective CLC display by adopting a DFCLC, enabling the direct and reversible switching between the P state and the FC state [15], here the most prominent improvement by adopting the proposed HA configuration is the much lowered operation voltages for the switching between the bistable states.The fast transition from the quasi-P to FC state is demonstrated with a response time of 0.8 ms.With reasonably low switching voltages and fast response of the order of 1 ms, the HA technique holds great promise for applications in the general CLC device technology.The long-standing problems of the slow response speed and high operation voltages in CLC devices can be simultaneously solved by adopting the HA DFCLC configuration.It thus opens up the possibilities for commercialized DFCLC displays. Fig. 2 . Fig. 2. Transmission spectra of (a) a PA cell and (b) a HA cell containing 2.18-wt% CD in MLC2048 and (c) three HA cells individually composed of CD at 2.49, 2.88, and 3.51 wt%.(d) The appearance of the three HA cells. Fig. 4 . Fig. 4. Reflection intensity of cells R, G, and B vs. the applied voltage pulse at (a) 1 kHz and (b) 100 kHz.
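As a rough back-of-the-envelope check on the R, G, and B cells described in the Experiment section, the center of the Bragg band can be estimated from the chiral-dopant loading via the pitch relation p = 1/(HTP·c) and λ0 ≈ n̄·p. The minimal Python sketch below assumes a helical twisting power of R5011 in MLC2048 of about 99 µm−1, an average refractive index of about 1.57, and a birefringence of about 0.22; none of these values is given in the text, so they are illustrative assumptions only.

```python
# Sketch: estimating the Bragg reflection band of the R/G/B HA cells from the
# chiral-dopant (CD) concentration. The helical twisting power (HTP) of R5011 in
# MLC2048 and the refractive indices below are ASSUMED values for illustration;
# they are not given in the text.

HTP = 99.0          # assumed helical twisting power of R5011, 1/µm per weight fraction
N_AVG = 1.57        # assumed average refractive index of MLC2048
DELTA_N = 0.22      # assumed birefringence, used only for the bandwidth estimate

def bragg_band(cd_weight_fraction):
    """Return (center wavelength, bandwidth) in nm for a given CD weight fraction."""
    pitch_um = 1.0 / (HTP * cd_weight_fraction)   # p = 1 / (HTP * c)
    center_nm = N_AVG * pitch_um * 1e3            # lambda0 = n_avg * p
    width_nm = DELTA_N * pitch_um * 1e3           # delta_lambda = delta_n * p
    return center_nm, width_nm

for label, c in [("R", 0.0249), ("G", 0.0288), ("B", 0.0351)]:
    lam, dlam = bragg_band(c)
    print(f"{label}: ~{lam:.0f} nm center, ~{dlam:.0f} nm band")
```

With these assumed values the sketch returns roughly 637, 551, and 452 nm for the 2.49-, 2.88-, and 3.51-wt% cells, close to the quoted 650, 548, and 449 nm reflection centers.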
2,704.8
2015-11-01T00:00:00.000
[ "Physics" ]
Oligomeric State and Holding Activity of Hsp60 Similar to its bacterial homolog GroEL, Hsp60 in oligomeric conformation is known to work as a folding machine, with the assistance of co-chaperonin Hsp10 and ATP. However, recent results have evidenced that Hsp60 can stabilize aggregation-prone molecules in the absence of Hsp10 and ATP by a different, “holding-like” mechanism. Here, we investigated the relationship between the oligomeric conformation of Hsp60 and its ability to inhibit fibrillization of the Ab40 peptide. The monomeric or tetradecameric form of the protein was isolated, and its effect on beta-amyloid aggregation was separately tested. The structural stability of the two forms of Hsp60 was also investigated using differential scanning calorimetry (DSC), light scattering, and circular dichroism. The results showed that the protein in monomeric form is less stable, but more effective against amyloid fibrillization. This greater functionality is attributed to the disordered nature of the domains involved in subunit contacts. Introduction Hsp60 is a subgroup of the large family of heat-shock proteins. They are normally produced under physiological conditions and overexpressed under stress conditions. As essential components of the Protein Quality Control machinery [1], they control the proper folding of nascent proteins and prevent the aggregation of misfolded protein forms. They are also involved in cell signaling and protein transport across mitochondrial membranes. The Hsp60 subgroup includes 60 kDa proteins that are generally assembled in a double-ring oligomeric structure. Together with co-proteins and ATP, they form macromolecular complexes, called chaperonins 60. These complexes act as true folding machines that recognize and interact with hydrophobic regions exposed by partially unfolded or misfolded proteins and use energy fueled by ATP hydrolysis to stabilize the folded conformations [2][3][4][5][6]. The best-known member of this group is bacterial GroEL, whose structure and functioning mechanism have been elucidated in great detail [3,4]. The homolog of GroEL in mammals is mitochondrial Hsp60, which is encoded by a nuclear gene, translated in the cytosol, and finally translocated into mitochondria after proteolytic cleavage of an N-terminal-target sequence [6]. The main function of mitochondrial Hsp60 is to support proper protein folding/refolding, in cooperation with Hsp10 and ATP. The mechanism of action was initially thought to be similar to that of GroEL. However, a series of experimental observations revealed some relevant differences at the molecular level. While GroEL is stably organized in two back-to-back heptameric rings, the formation of similar oligomeric structures in Hsp60 requires the presence of ATP and the co-chaperonin Hsp10 [7][8][9]. The reduced number of contacts between Hsp60 subunits [9] makes the tetradecameric structure less stable. The substitution of specific amino acids in the regions of inter-ring contacts leads to the lack of the negative inter-ring cooperativity [7,10] that distinguishes the efficiency of GroEL as a folding machine. Hsp60 is known to exert folding activity even in the single-heptamer conformation [7,10,11]. On the other hand, it is recognized that mitochondrial Hsp60 probably exists in equilibrium between monomeric, heptameric, and tetradecameric forms [8,12,13], and its structural dynamics underpin its unique functional versatility [9]. 
In mammalian cell lines, low levels of Hsp60 have been found outside the mitochondria, and it has been hypothesized that it may be involved in physiological activities other than folding assistance [14]. Under pathological conditions, a huge increase in Hsp60 has been observed in the cytosol, where it could exert pro-survival or lethal functions [13,[15][16][17], depending on whether it is in monomeric or oligomeric conformation. An ambivalent role of Hsp60 was found in many cancers [18,19] and in several neurodegenerative diseases [1,5,20,21]. All these findings have generated a great interest in understanding the characteristic keys that regulate the multi-functionality of Hsp60 [22]. An interesting hypothesis is that the oligomerization state of Hsp60 could be related to its pathological or functional role outside the mitochondrion [12,13,23]. However, this information is missing in a large amount of literature data from in vivo and in vitro experimental studies. In previous work [24], we observed that Hsp60 at a sub-micromolar concentration ratio strongly inhibits Aβ 40 amyloid aggregation in the absence of ATP. We used a commercial recombinant Hsp60, whose arrangement in solution consisted of heptamers and tetradecamers in a wide range of protein concentrations [25]. The results suggested a selective action of Hsp60 against small, early aggregates of Aβ 40 , that would trigger the fibrillization process if left free in solution [24]. It was hypothesized that the mechanism might be sequestration of reactive amyloid species through some sort of holding action. The same type of mechanism was observed in evaluating the effect of a commercial recombinant Hsp60 on the fibrillization of Aβ 42 in the absence of ATP [26]. The protective action of Hsp60 against the neurotoxicity of Aβ 42 oligomers was demonstrated by in vitro and ex vivo experiments, and the physical interaction between Hsp60 and preformed oligomers of Aβ 42 was thought to induce a conformational change of Aβ 42 oligomers towards less toxic forms [27]. Additionally, in this case, a commercial recombinant was used. Regarding the conformational state of Hsp60, much attention has been paid to heptameric and tetradecameric functionality in classical folding activity. However, it is becoming clear that issues of stability, structure, and the chaperone mechanism need to be investigated to elucidate the ambivalent role of Hsp60 in various diseases [28,29]. The ability of Hsp60 to bind hydrophobic patches is well-established [29], and both the flexibility and size of the exposed surface of the protein may be important for the interaction with hydrophobic species prone to aggregation. Since the canonical folding mechanism, which requires ATP and a co-chaperone, does not seem to be involved in the protective effect of Hsp60 against amyloid fibrillization, the question arises as to what role the oligomeric structure plays here. Therefore, we wondered whether the same amount of protein in monomeric form could have a comparable effect. In this work, we investigated the relationship between the oligomeric conformation of Hsp60 and its ability to inhibit Aβ 40 fibrillization in the absence of ATP. We produced a recombinant human Hsp60 and isolated the protein in monomeric or tetradecameric form to study and compare its effect on fibrillization of Aβ 40 . To rule out an effect due to different thermal stabilities, we also examined the thermal unfolding of the protein in the two conformational states. 
To our knowledge, the present work is the first experimental study addressing the relationship between the chaperone mechanism and oligomeric conformation using biophysical techniques. The results should encourage researchers in the biomedical field to investigate the oligomeric state of Hsp60 and its impact on cancer or neurodegenerative diseases. Oligomerization State and Stabilizing Activity of Hsp60 We used ThT fluorescence detection to follow the fibrillization kinetics of 50 µM of Aβ40 under destabilizing conditions (37 °C and stirring at 200 rpm), in the absence and the presence of a small amount of Hsp60 in monomeric or oligomeric form. The concentration of the protein in the two forms was expressed as the concentration of monomer units organized in oligomeric complexes or dispersed as single monomers in solution. Results are shown in Figure 1. In the absence of Hsp60, the typical, sigmoidal increase in ThT emission was observed. In the presence of tetradecameric Hsp60, the ThT signal increased similarly, but with a larger time delay. This indicates that Hsp60's main effect is to inhibit the primary nucleation step [30]. Notably, when we added the same amount of protein in monomeric form, we did not observe any change, even after 24 h. The size and morphology of species contained in each sample at the end of the kinetics experiments were analyzed by AFM imaging. Typical images of long fibril bundles were obtained in the case of Aβ40 alone (Figure 2A). Sparse isolated fibers with lower surface heights were detected in samples with tetradecameric Hsp60 (Figure 2B), whereas samples with monomeric Hsp60 displayed even thinner, rare filaments, together with some globular, denser objects (Figure 2C). When Hsp60, either in monomeric or oligomeric form, was added to already formed fibrils, no disaggregating effects were observed. As noted in our previous work [24], this indicates that the inhibiting mechanism acts before fibril formation and might rely on recruitment of aggregation-prone Aβ40 molecules. It should be noted that the final fluorescence intensity of ThT was comparable for samples without and with Hsp60 oligomers, although AFM images clearly showed fewer fibers in the sample with Hsp60. Since ThT is very specific for the cross-beta structure of amyloids, discrepancies between the data can be explained by differences in the structure of the aggregates.
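The lag-time delay described above is often quantified by fitting a sigmoid to the ThT trace. The sketch below, assuming a simple Boltzmann-type curve and hypothetical data, illustrates one common way to extract a half-time and lag time; it is not necessarily the analysis used by the authors.

```python
# Sketch: quantifying the lag time and apparent growth rate of a ThT fibrillization
# trace with a Boltzmann sigmoid. This is a generic analysis, not necessarily the
# procedure used by the authors; the data arrays are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, y0, ymax, t_half, k):
    """ThT intensity vs. time: baseline y0, plateau ymax, midpoint t_half, rate k."""
    return y0 + (ymax - y0) / (1.0 + np.exp(-k * (t - t_half)))

def lag_time(t_half, k):
    """Conventional lag time: intercept of the midpoint tangent with the baseline."""
    return t_half - 2.0 / k

# hypothetical example data (time in hours, arbitrary fluorescence units)
t = np.linspace(0, 24, 100)
y = sigmoid(t, 1.0, 10.0, 8.0, 0.8) + np.random.normal(0, 0.1, t.size)

popt, _ = curve_fit(sigmoid, t, y, p0=[y.min(), y.max(), t[np.argmax(np.gradient(y))], 1.0])
y0, ymax, t_half, k = popt
print(f"t_half = {t_half:.1f} h, lag time = {lag_time(t_half, k):.1f} h")
```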
Oligomerization State and Thermal Stability of Hsp60 The major efficacy of Hsp60 in monomeric form in binding aggregation-prone Aβ40 peptides might be conceivably related to the larger amount of surface available for interactions with aggregation-prone, partially unfolded, or misfolded molecules. On the other hand, the protein in the monomeric state might lose its thermal stability, as is well-known for other multimeric proteins, and then influence Aβ40 fibrillization in a non-native configuration. We applied DLS, DSC, and CD techniques to investigate this aspect. Static and dynamic light scattering (SLS and DLS) measurements were performed on samples of 15 µM of Hsp60 in monomeric or tetradecameric form during a temperature scan. The intensity of scattered light and the mean average hydrodynamic radius of species in solution are shown in Figure 3 as a function of temperature.
A sharp increase of the scattered intensity was observed at about 41 °C for the protein in monomeric form, and at about 54 °C for the protein in oligomeric form. The intensity increase correlated with a sudden growth of the hydrodynamic radius, from a few nanometers (4 nm and 24 nm for monomeric and oligomeric forms, respectively) up to several hundred. The accompanying growth of sample polydispersity and the appearance of multiple scattering, likely due to a rapid formation of large aggregates, made it impractical to continue the measurements. The irreversible formation of aggregates at high temperatures was confirmed by measurements on samples brought back at low temperatures. DSC thermograms for 15 µM of Hsp60 in the monomeric or tetradecameric form are shown in Figure 4. The protein in monomeric form unfolded with a Tm of 42.2 °C. The post-transition baseline was well-defined up to about 55 °C, and after that it slightly decreased, likely due to the formation of aggregates according to the DLS results. The contribution due to the difference between the heat capacity of native and unfolded proteins was calculated over the temperature interval 25–55 °C and subtracted from C_P to obtain the excess heat capacity C_P^EX. The integration of C_P^EX over the same temperature interval yielded ∆Hcal = 318 kJ mol−1, which is 75% of ∆HvH (425 kJ mol−1), calculated by Equation (2) (in Section 4.6). Large deviations of the ratio ∆Hcal/∆HvH from unity evidence the inadequacy of the two-state model to describe the unfolding process [31]. In our case, a ratio slightly less than one could indicate the occurrence of partial aggregation according to the results from the light scattering measurements. Signals of aggregation were more visible in the DSC trace for proteins in tetradecameric form. The unfolding occurred with a Tm of 58.8 °C, but the peak was appreciably made asymmetric by the superposition of the well-visible drop in the post-transition baseline. This indicates that the aggregation process was quite fast in the unfolding temperature range [32]. Data analysis in terms of the simplest version of the two-state irreversible model (Equation (4) of Section 4.6) allowed deriving the unfolding enthalpy and the activation energy for the final step of the transition, from the unfolded to the irreversibly aggregated state. For comparison, the same analysis was applied to data for proteins in monomeric form, even if the fit quality was not as good. Fitting parameters are summarized in Table 1. No peak signal was observed in the downward temperature scan for the protein in either monomeric or tetradecameric form, thus indicating aggregation upon unfolding. A larger thermal stability of the tetradecameric form was observed by Shao et al. [33] through differential scanning fluorimetry. The authors determined a large difference in Tm values (40.7 and 58.2 °C for monomer and oligomer, respectively), in good agreement with our Tm values. In comparison to GroEL [34], the thermal stability of Hsp60 was lower, likely due to the weakness of the inter-ring contacts.
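For readers who want to reproduce the ∆Hcal/∆HvH comparison, the calorimetric enthalpy is simply the area under the excess heat capacity curve between the integration limits. The sketch below uses a hypothetical, Gaussian-shaped excess heat capacity peak and a simplified baseline treatment rather than the paper's Equation (2), so it only illustrates the bookkeeping, not the exact analysis.

```python
# Sketch: obtaining the calorimetric unfolding enthalpy from a DSC trace by
# numerically integrating the excess heat capacity over the transition window,
# and checking the dHcal/dHvH ratio quoted in the text (318/425 kJ/mol ~ 0.75).
# The excess-heat-capacity curve below is an illustrative Gaussian, not real data.
import numpy as np

def delta_h_cal(T, cp_excess):
    """Trapezoidal integration of the excess molar heat capacity (kJ mol^-1 K^-1) over T (K)."""
    return float(np.sum(0.5 * (cp_excess[1:] + cp_excess[:-1]) * np.diff(T)))

# hypothetical excess-heat-capacity peak for the monomer, centered near Tm = 42.2 degC
T = np.linspace(298.15, 328.15, 600)                    # 25-55 degC in kelvin
Tm, width = 315.35, 3.0
cp_ex = 42.3 * np.exp(-0.5 * ((T - Tm) / width) ** 2)   # kJ mol^-1 K^-1, illustrative

dH_cal = delta_h_cal(T, cp_ex)
dH_vH = 425.0                                           # van't Hoff value quoted in the text
print(f"dHcal ~ {dH_cal:.0f} kJ/mol, ratio dHcal/dHvH ~ {dH_cal / dH_vH:.2f}")
```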
Since the presence of monomeric Hsp60 in the cytosol has only recently become known, there is no information on possible differences in the secondary structure of the monomeric and oligomeric forms. In fact, one would expect some changes in the interfaces between the subunits [9]. To investigate this aspect, we collected far-UV CD spectra with increasing temperature for the protein in monomeric or oligomeric form (Figure S1). The signal at a 225 nm wavelength was also followed during the temperature scan. CDPro software [35] was used to determine the percentage contribution of alpha-helix, beta-sheet, turns, and disordered elements to any CD spectrum. As shown in Figure 5, the percent contribution of alpha-helix, beta-sheet, and the unordered structure was similar in both forms, whereas the percent contribution of the disordered structure was slightly higher in the monomeric form. In both protein forms, the percent contribution of alpha-helix decreased with the increasing temperature, while the percent of beta-sheet content increased. However, in the monomeric form, the change was steeper, and occurred at a lower temperature, with a lower final alpha-helix content and a higher final beta-sheet content. The three sets of results from DSC, LS, and CD measurements are plotted together in Figure 6 to look inside the probable sequence of events. Note that the derivative of both light-scattered intensity and the molecular ellipticity CD at a 225 nm wavelength are plotted together with DSC traces to assure the consistency of the data. For the protein in monomeric form, the DSC signal and the derivative of the CD signal almost coincided, thus indicating that conformational transition and unfolding are simultaneous events. When the fraction of unfolded molecules was large enough (well beyond 50%), the scattered intensity began to grow, reflecting the formation of aggregates. A different scenario was observed for the protein in oligomeric form.
In this case, the conformational transition preceded the unfolding signal, thus suggesting that conformational changes were still constrained in the oligomeric structures. When the oligomers finally melted, a concomitant formation of aggregates was observed, as reflected by the growth of the scattered light intensity.
Discussion The different families of heat-shock proteins exert their protective action against misfolding and aggregation of biomolecules through specific mechanisms, generally related to their molecular weight [5]. However, members of each family are also capable of performing tasks other than their specific ones. For instance, small heat-shock proteins (sHsps) are known to exert a passive, ATP-independent "holding" activity, sequestering proteins in unfolded or misfolded form, and preventing their aggregation while awaiting the intervention of a folding chaperone. However, they are characterized by exceptional structural plasticity, which allows them to employ different mechanisms to control the binding/release of different client molecules [36]. In particular, an active conformational change from large oligomers to smaller units is associated with a dynamic low-affinity interaction for amyloid proteins [37]. An unusual activity that does not require ATP energy input is exerted by other ATPdependent Hsps against amyloid proteins. Hsp70, known for its folding activity, prevents alpha-synuclein fibrillization in an ATP-independent manner by interacting with alphasynuclein monomers through a binding site other than the canonical binding site for folding [38]. Hsp70 is also capable of inhibiting beta fibrillization at sub-stoichiometric concentrations without the aid of ATP and the co-chaperon [39] by recruiting small, early oligomers. Hsp90, known for its folding activity, binds to the fibril core region of tau protein, inducing the formation of small oligomers and inhibiting fiber formation [40,41]. Both Hsp90 and Hsp60 are able to exert non canonical, ATP-independent holding activity on proteins in misfolded form by forming multiple, weak, and nonspecific bonds. The target proteins retain their misfolded form but are protected from aggregation [42]. It has been shown that Hsp104, in addition to disassembling protein aggregates fueled by ATP hydrolysis, exerts ATP-independent holding activity on soluble monomers of amylogenic proteins [43]. Indeed, each member of the large family of molecular chaperons can employ different mechanisms to interact appropriately with different client proteins [44]. It is noteworthy that the ability to convert specific agents into appropriate defenders to cope with different events of the cellular life is a more efficient strategy than the production of specific chaperons. The multitasking ability, or moonlighting activity [45], is a property shared by proteins or molecules with disordered regions [46] that has attracted great interest in the search for new therapeutic strategies [45]. Hsp60 is known to function as a folding machine [2] with the assistance of cochaperonin Hsp10 and ATP, in close analogy to the functioning of its bacterial homologue GroEL. Although the oligomeric (tetradecamer or heptamer) structure of Hsp60 is essential for folding activity, it is less stable than that of GroEL, and indeed, monomeric, heptameric, and tetradecameric forms can coexist in an equilibrium regulated by ATP and protein concentration [7,8]. The lower stability of the oligomeric structure of Hsp60 compared to GroEL is due to a smaller interface between subunits with a smaller number of intermolecular contacts [9,10]. Most of the contacts between subunits occur in the equatorial domain, where several amino acid sequences, unique to human Hsp60, make weak sidechain contacts upon ATP binding [9]. 
Interestingly, these specific sequences in the equatorial domain are even responsible for moonlighting functions [47]. Considering that the oligomerization of Hsp60 occurs only in the presence of ATP and nature never does anything by chance, it had been questioned if the protein in its dissociated form may have a physiological role [7,12,13,23]. The interest for what could be the role of a minor presence of Hp60 in the cytosol or other extracellular locations, and if the protein was in monomeric or multimeric state [18,23,48], was fueled by the findings of Chandra and co-authors, who observed that Hsp60 may exert pro-death or pro-vita effects in several apoptotic systems, depending on its localization and oligomeric configuration [13]. In this work, we investigated the mechanism exerted by Hsp60 against the amyloid fibrillization in relation to its oligomeric conformation, and we found that the protein in monomeric form was less stable but more effective in exerting a holding action, which is a task different from that commonly ascribed to Hsp60 in oligomeric form. A similar stabilizing effect at sub-stoichiometric concentrations was observed for Hsp90 on Aβ 40 [49]. Interestingly, the authors found that the dimeric form was more active than the tetrameric form. This result was interpreted to mean that Hsp90 probably solubilizes Aβ 40 monomers through weak interactions, the number of which is greater for the dimeric form, which offers a larger hydrophobic surface area [49]. In studying the effect of Hsp60 on the fibrillization of amyloid proteins, great attention has been reserved to the apical domain, which is the canonical region involved in the capturing of aggregation-prone proteins. Yamamoto et al. [50] had shown that a mutant of Hsp60, in which the apical domain is fixed in open conformation, was more effective than wildtype Hsp60 in inhibiting the fibrillization of a-synuclein. Although the mutant and wildtype forms have similar tetradecameric structures, the larger hydrophobic surface area of the open apical domain favors a stronger interaction with a-synuclein monomers, as confirmed by fluorescence assays with 8-anilino-1-naphthalenesulfonic acid (ANS) [50]. A major functionality combined with a less ordered and stable structure is a common feature of proteins having intrinsically disordered regions (IDRs) [51,52], whose plasticity enables a rapid response to external environmental changes [53]. Regions of structural disorder allow the molecule to interact with different partner molecules and perform moonlighting activity [45,54,55]. Several contacts between the equatorial domain of Hsp60 subunits are disordered regions. Conceivably, their exposure to the solvent confers monomeric Hsp60 the ability to interact with amyloid molecules to a greater extent than that due to the increase of the exposed protein surface. Further work is needed to elucidate the molecular mechanism by which Hsp60 inhibits Aβ 40 fibrillization. Aβ 40 , similar to other amyloidogenic proteins, is an intrinsically disordered protein because it lacks the well-folded structure with minimum free energy that corresponds to the native state of a globular protein [56]. Rather, it can adopt multiple conformations with comparable, relatively low free energy [57]. Through a transient interaction, Hsp60 could select and stabilize specific conformations of Aβ 40 that are resistant to aggregation and better-suited for binding or induce a conformational change that optimizes the interaction between the two molecules. 
These two mechanisms, termed conformational selection and induced fit, respectively, have been primarily hypothesized to describe how disordered proteins can undergo a major, irreversible conformational change after a transient interaction with other molecules [53]. However, the generalization of a precise model of action in multifunctional chaperons has been discouraged because the great versatility likely implies an easily modifiable, client-dependent specific mechanism [44]. Not much information on the possible relation between the oligomerization state and the biological function of Hsp60 or other chaperonins of the same family can easily be found in the literature, albeit there is continuous growth. To the best of our knowledge, the present work is the first in vitro experimental study on this topic. Studies on this relationship could provide insights into the controversial role of Hsp60 in cancers and neurodegenerative diseases [18][19][20]. We believe that our results can help raise awareness among medical and biomedical researchers, as knowledge of the molecular anatomy of the mitochondrial chaperonin Hsp60 is crucial for a better understanding of its physiological and pathophysiological roles in health and disease. Plasmid DNA Construct The gene encoding the wildtype human Hsp60 without the mitochondrial-targeting sequence (HSPD1, GenBank, Accession number NM_002156) was cloned into the pET-15b expression vector (Eurofins, Ebersberg, Germany). The HSPD1 sequence was inserted between the sequences for the restriction enzymes BamHI at the 3 end and NdeI at the 5 end. The resulting plasmid, pET-15b-Entry-HSPD1, encodes the recombinant Hsp60 as an N-terminal hexa-histidine tag protein. Expression and Purification of Recombinant Human Hsp60 Escherichia coli BL21-Gold (DE3) cells (Agilent Technologies, Santa Clara, CA, USA) were transformed with pET-15b-Entry-HSPD1 and grown in Luria-Bertani (LB) broth soil plates (Sigma Aldrich Merck Darmstadt, Germany) containing agar (Sigma-Aldrich-Merck), with 100 µg mL −1 of ampicillin (Sigma), at 37 • C for 16 h. Then, 20 mL of LB broth containing 0.5% glucose (Sigma-Aldrich-Merck) and 100 µg mL −1 of ampicillin was used to prepare a pre-inoculum of a chosen colony. The pre-inoculum was allowed to grow overnight at 37 • C and 230 rpm, before being added to 1 L of LB broth containing 0.5% glucose and 100 µg mL −1 of ampicillin. When the optical density at 600 nm of the culture reached 0.6-0.8, expression was induced by the addition of 1 mM of isopropyl-β-D-thiogalactopyranoside (Sigma Aldrich Merck). After incubation at 37 • C and 230 rpm for 3 h, the cells were harvested by centrifugation at 2800× g for 10 min. The cell pellet was re-suspended in pre-chilled lysis buffer (20 mM sodium phosphate buffer, pH 7.4, 0.5 mM NaCl, 10 mM imidazole, 1 mM dithiothreitol, 5 mM MgCl 2 , 10 µg mL −1 DNAse, cOmplete EDTA-free protease inhibitor-Roche mixture, and 0.4 mg/mL of hen egg white lysozyme). The cells were disrupted on ice by an ultrasonic homogenizer (Bandelin HD 2070) and incubated at 4 • C for 30 min. The lysate was centrifuged at 20,000× g, 4 • C, for 30 min to remove cell debris and suspended particles. The supernatant was filtered through 0.45 µm filters (Sartorius Stedim Biotech, Gottinga, Germany) and loaded onto a 5 mL HisTrap FF column prepacked with Ni-Sepharose (GE Healthcare, CA, USA), equilibrated in the same buffer as the protein sample. 
The His-tagged protein was eluted on a FPLC system (ÄKTA pureTM 25 M, GE Healthcare) with a linear gradient from 10 to 500 mM of imidazole in 10 CV at room temperature. Protein size and purity were verified by 12% (w/v) SDS-PAGE and Coomassie Brilliant Blue staining. Protein concentration was assessed by measuring the optical density at 280 nm using an extinction coefficient of 14,565 M −1 cm −1 , as estimated by the ProtParam tool [58]. The fractions containing Hsp60 were pooled and treated for 2 h on ice, with 20 mM of EDTA to induce the dissociation of Hsp60 oligomers [7,8]. Then, the Hsp60 fractions were transferred in 50 mM of Tris-HCl buffer, pH 7.7, 0.3 M NaCl, and concentrated by using the 10 kDa Amicon ® Ultra-Centrifugal Filters (Millipore, Sigma Aldrich Merck Darmstadt, Germany). After that, aliquots of the protein solution were loaded onto a Superdex 200 increase 10/300 GL (GE Healthcare) column equilibrated with the same buffer and eluted at a 0.75 mL min −1 flow rate. A 500 µL sample loop was used. The peak at the highest elution time (Figure S2A), corresponding to a population of monomers and dimers, was collected, and stored in 50 mM of Tris-HCl pH 7.7, 0.3 M NaCl, 10% (w/v) glycerol, at −80 • C. Human Hsp60 Oligomerization The in vitro assembly reaction was performed according to the procedure described by Viitanen et al. [8], with some modifications. Briefly, the fraction of Hsp60 corresponding to monomers and dimers was thawed, and the concentration of glycerol was reduced to less than 1% by repeated dilutions with glycerol-free buffer and re-concentration by centrifugation at 7500× g with 10 kDa Amicon ® Ultra-Centrifugal Filter Units (Millipore). A portion of the final sample was taken and used for measurements of the protein in monomeric conformation. The chromatographic profile of this sample (Superdex 200 increase 10/300 GL (GE Healthcare) column, 50 µL loop, 0.75 mL min −1 flow rate) confirmed that the protein population was still in equilibrium between dimers and monomers ( Figure S2B). To start the oligomerization procedure, the buffer of the remaining part of the sample was replaced by 50 mM of Tris HCl, pH 7.7, 0.3 M NaCl, 20 mM KCl, 25 mM MgCl 2 , by adding the appropriate quantity of the same buffer with higher KCl and MgCl 2 concentrations. The sample was incubated for 90 min at 30 • C in the presence of 4 mM of ATP (Sigma-Aldrich), and then injected into a Superdex 200 increase 10/300 GL (GE Healthcare) column to analyze the oligomer population ( Figure S2C). Aliquots with the highest percentage of tetradecamers were collected and concentrated for use in subsequent experiments. The entire procedure was applied to an aliquot of a protein sample incubated without ATP, in which case no tetradecamers were observed. Preparation of Aβ 40 Samples The synthetic peptide Aβ 40 (AnaSpec, Fremont, CA, USA) was pretreated following the procedure of Fezoui et al. [59]. Stock aliquots were stored at −80 • C until use. Aβ 40 samples were prepared in a cold room by dissolving the lyophilized peptide in 50 mM of Tris-HCl, 3% glycerol, 0.3 M NaCl, pH 7.7, at a concentration of about 70 µM. The sample was then filtered through 0.20 µm (Millex-LG, Millipore, Darmstadt, Germany) and 0.02 µm (Whatman, Maidston, UK) filters, to eliminate large aggregates. The peptide concentration was measured by tyrosine absorption at 276 nm using an extinction coefficient of 1.39 cm −1 M −1 . 
The sample was then diluted at the working concentration of 50 µM by adding the appropriate amounts of buffer, and concentrated solutions of ThT and Hsp60 when required. Atomic Force Microscopy (AFM) Aliquots of Aβ 40 samples with or without Hsp60, taken at the end of fibrillization kinetics, were deposited onto freshly cleaved mica surfaces (Agar Scientific, Assing, Italy) and incubated for up to 60 min at room temperature. Then, samples were rinsed with deionized water and dried under a low-pressure nitrogen flow. AFM measurements were performed using a Nanowizard III (JPK Instruments, Berlin, Germany) system mounted on an Eclipse Ti (Nikon, Japan) inverted optical microscope. Tapping mode AFM images were acquired in the air using a multimode scanning probe microscope driven by a nano-scope V controller (Digital Instruments, Bruker, Kennewick, WA, USA). Single-beam uncoated silicon cantilevers (type SPM Probe Mikromasch) were used. The drive frequency was between 260 and 325 kHz, and the scan rate was 0.25-0.7 Hz. Differential Scanning Calorimetry (DSC) DSC experiments were performed with a Nano-DSC instrument (TA Instruments, New Castle, DE, USA) equipped with 0.3 mL of capillary platinum cells. The unfolding of monomeric or oligomeric 15 µM Hsp60 was obtained within the temperature range 25-80 • C, at a 30 • C/h scan rate. Both the protein solution and buffer were degassed before loading in the respective cells. The instrumental baseline was obtained by a preliminary temperature scan, with both cells filled with degassed buffer. The so-called chemical baseline, which is the contribution due to the difference in the heat capacities of the native and unfolded states of the protein, was calculated by linearly extrapolating the pre-and post-unfolding baselines into the transition region and merging them in proportion to the unfolding progress [31]. This was subtracted to obtain the excess heat capacity. The calorimetric enthalpy change of the unfolding transitions (∆H cal ) was obtained by integrating C EX P over the temperature interval comprised between T 1 and T 2 values: T m was defined as the temperature value at which excess unfolding heat capacity is maximal. The van't Hoff enthalpy was calculated by the relationship [60]: To deal with DSC profiles biased by the likely occurrence of aggregation upon folding, the simplest form of the Lumry-Eyring model [32] was applied in data analysis. In this model, the reversible transition from the native to the unfolded state is coupled to a conversion of the unfolded molecule into an altered "final" state, from which it cannot fold back. This conversion is a kinetic process governed by an activation energy, E A . If the transition to the final state is fast enough, the population of unfolded molecules is very low, and the transition from the native to the final state can be modeled as: The temperature dependence of the rate constant, k, is given by the Arrhenius equation: where T A is the temperature at which k = 1. By applying the kinetic analysis developed by Sanchez-Ruiz and co-authors [32,61], C P trace was fit to the expression: where X N is the fraction of native molecules, ∆H is the unfolding enthalpy, and C Pre P and C Post P are the temperature-dependent pre-and post-transition baselines. X N and ((dX N )/dt) are expressed as: A downward temperature scan followed this to check the reversibility of the unfolding transition. 
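Since Equations (3)–(6) are not reproduced in the extracted text above, the following sketch shows the standard form of the two-state irreversible (Lumry–Eyring-type) model referred to there: the native state converts to a final state with a first-order rate k(T) that equals 1 at T_A, and the excess heat capacity follows from the depletion of the native fraction during a linear scan. The parameter values are hypothetical and would normally be obtained by fitting the measured trace.

```python
# Sketch: the simplest two-state irreversible (Lumry-Eyring-type) model used to
# describe a DSC peak distorted by aggregation, following the Sanchez-Ruiz analysis
# cited in the text. N -> Final with first-order rate k(T); T_A is the temperature
# at which k = 1 (consistent with the text). Parameter values are hypothetical.
import numpy as np

R = 8.314e-3  # gas constant, kJ mol^-1 K^-1

def excess_cp_irreversible(T, dH, Ea, Ta, scan_rate):
    """Excess heat capacity C_P^EX(T) = dH * (k/v) * X_N for a linear temperature scan."""
    k = np.exp((Ea / R) * (1.0 / Ta - 1.0 / T))        # Arrhenius-type rate, k(Ta) = 1
    # X_N(T) = exp(-(1/v) * integral of k dT), evaluated cumulatively on the grid
    dT = np.diff(T, prepend=T[0])
    X_N = np.exp(-np.cumsum(k * dT) / scan_rate)
    return dH * (k / scan_rate) * X_N

T = np.linspace(298.15, 353.15, 1000)                  # 25-80 degC in kelvin
cp_model = excess_cp_irreversible(T, dH=400.0, Ea=300.0, Ta=332.0, scan_rate=0.5)
Tpeak = T[np.argmax(cp_model)] - 273.15
print(f"model peak temperature ~ {Tpeak:.1f} degC")    # the model curve would be fit to data
```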
Circular Dichroism Spectroscopy (CD) Circular dichroism (CD) spectra of 15 µM of Hsp60 in either monomeric or multimeric form were recorded at increasing temperatures using a CD spectrometer (JASCO J-810, Jasco Europe, Cremella, LC, Italy)), equipped with a Peltier unit for temperature control. The cell path length was 0.2 mm. The CD signal at a 225 nm wavelength (typical choice for following the thermal denaturation of the alpha-helix) was continuously monitored during the temperature scan, from 20 to 90 • C at a 30 • C/h rate. Spectra in the 190-260 nm wavelength interval were collected every 5 • C ( Figure S1). Each CD spectrum was the average of 10 scans subtracted by the solvent contribution. Data are presented in molar absorbance units per residue. The CDPro software package [35] was used to evaluate the secondary structure elements and their changes as a function of temperature. The relative contribution of alphahelix, beta-sheet, turns, and disordered elements was determined by averaging the results from CONTIN, CDSSTR, and SELCOM 3 programs. Thioflavin T Fluorescence Assay for Aβ 40 Fibrillization The change in ThT fluorescence emission during Aβ 40 aggregation kinetics was monitored using a JASCO FP-6500 spectrometer. The excitation and emission wavelengths were 450 and 485 nm, respectively. The concentration of ThT was 12 µM. The concentration of Hsp60, if present, was 2 µM. The sample was placed in the thermostated cell compartment at 37 • C and continuously sheared at 200 rpm using a magnetic stirrer (mod. 300, Rank Brothers Ltd., Cambridge, UK). Dynamic and Static Light Scattering Samples of monomeric or multimeric Hsp60 at a 15 µM concentration were directly filtered into a dust-free quartz cell and placed in the thermostatic cell compartment of a Brookhaven Instruments BI200-SM goniometer. The temperature was controlled within 0.1 • C using a thermostatic recirculating bath. Samples were allowed to equilibrate at 20 • C before beginning a temperature scan from 20 to 80 • C at a scan rate of 8 • C/h. The scattered light intensity and time autocorrelation function were measured at a scattering angle of 90 • using a Brookhaven BI-9000 correlator and a 50 mW He-Ne laser tuned to a 632.8 nm wavelength. In dynamic light scattering (DLS) experiments, the correlator was operated in the multi-τ mode, and the experimental duration was set to 3 min. Static light scattering data were corrected for the background scattering of the solvent and normalized by using toluene as the calibration liquid. The autocorrelation function was analyzed using the cumulant method [62] to derive a z-averaged translational diffusion coefficient, which was converted to an average apparent hydrodynamic radius of an equivalent sphere, R H , through the Stoke-Einstein relationship: D z = kBT/6πηR H .
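As a small worked example of the Stokes–Einstein conversion quoted above, the helper below turns a z-averaged diffusion coefficient from the cumulant fit into an apparent hydrodynamic radius; the water viscosity at 20 °C and the example D_z value are assumptions chosen for illustration.

```python
# Sketch: converting the z-averaged diffusion coefficient from a cumulant fit of the
# DLS autocorrelation function into an apparent hydrodynamic radius with the
# Stokes-Einstein relation quoted above, R_H = k_B*T / (6*pi*eta*D_z). The viscosity
# is assumed to be that of water at 20 degC; the example D_z value is hypothetical.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius_nm(D_z_m2_per_s, temperature_K=293.15, viscosity_Pa_s=1.002e-3):
    """Apparent hydrodynamic radius (nm) of an equivalent sphere."""
    r_m = K_B * temperature_K / (6.0 * math.pi * viscosity_Pa_s * D_z_m2_per_s)
    return r_m * 1e9

# e.g. a diffusion coefficient of ~5.4e-11 m^2/s corresponds to R_H of about 4 nm,
# the size reported for monomeric Hsp60 before aggregation sets in
print(f"R_H ~ {hydrodynamic_radius_nm(5.4e-11):.1f} nm")
```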
9,651.6
2023-04-25T00:00:00.000
[ "Biology", "Chemistry" ]
Solving Verbal Questions in IQ Test by Knowledge-Powered Word Embedding Verbal comprehension questions appear very frequently in Intelligence Quotient (IQ) tests, which measure human’s verbal ability including the understanding of the words with multiple senses, the synonyms and antonyms, and the analogies among words. In this work, we explore whether such tests can be solved automatically by the deep learning technologies for text data. We found that the task was quite challenging, and simply applying existing technologies like word embedding could not achieve a good performance, due to the multiple senses of words and the complex relations among words. To tackle these challenges, we propose a novel framework to automatically solve the verbal IQ questions by leveraging improved word embedding by jointly considering the multi-sense nature of words and the relational information among words. Experimental results have shown that the proposed framework can not only outperform existing methods for solving verbal comprehension questions but also exceed the average performance of the Amazon Mechanical Turk workers involved in the study. Introduction The Intelligence Quotient (IQ) test [Stern, 1914] is a test of intelligence designed to formally study the success of an individual in adapting to a specific situation under certain conditions.Common IQ tests measure various types of abilities such as verbal, mathematical, logical, and reasoning skills.These tests have been widely used in the study of psychology, education, and career development.In the community of artificial intelligence, agents have been invented to fulfill many interesting and challenging tasks like face recognition, speech recognition, handwriting recognition, and question answering.However, as far as we know, there are very limited studies of developing an agent to solve IQ tests, which in some sense is more challenging, since even common human beings could not always succeed in the tests.Considering that IQ test scores have been widely considered as a measure of intelligence, we think it is worth making further investigations whether we can develop an agent that can solve IQ tests. The commonly used IQ tests contain several types of questions like verbal, mathematical, logical, and picture questions, among which a large proportion (near 40%) are verbal questions [Carter, 2005].The recent progress on deep learning for natural language processing (NLP), such as word embedding technologies, has advanced the ability of machines (or AI agents) to understand the meaning of words and the relations among words.This inspires us to solve the verbal questions in IQ tests by leveraging the word embedding technologies.However, our attempts show that a straightforward application of word embedding could not result in satisfactory performances.This is actually understandable.Standard word embedding technologies learn one embedding vector for each word based on the co-occurrence information in a text corpus.However, verbal comprehension questions in IQ tests usually consider the multiple senses of a word (and often focus on the rare senses), and the complex relations among (polysemous) words.This has clearly exceeded the capability of standard word embedding technologies. To tackle the aforementioned challenges, we propose a novel framework that consists of three components. 
First, we build a classifier to recognize the specific type (e.g., analogy, classification, synonym, and antonym) of verbal questions.For different types of questions, different kinds of relationships need to be considered and the solvers could have different forms.Therefore, with an effective question type classifier, we may solve the questions in a divide-andconquer manner. Second, we obtain distributed representations of words and relations by leveraging a novel word embedding method that considers the multi-sense nature of words and the relational knowledge among words (or their senses) contained in dictionaries.In particular, for each polysemous word, we retrieve its number of senses from a dictionary, and conduct clustering on all its context windows in the corpus.Then we attach the example sentences for every sense in the dictionary to the clusters, such that we can tag the polysemous word in each context window with a specific word sense.On top of this, instead of learning one embedding vector for each word, we learn one vector for each pair of word-sense.Furthermore, in addition to learning the embedding vectors for words, we also learn the embedding vectors for relations (e.g., synonym and antonym) at the same time, by incorporating relational knowledge into the objective function of the word embedding learning algorithm.That is, the learning of word-sense representations and relation representations interacts with each other, such that the relational knowledge obtained from dictionaries is effectively incorporated. Third, for each type of questions, we propose a specific solver based on the obtained distributed word-sense representations and relation representations.For example, for analogy questions, we find the answer by minimizing the distance between word-sense pairs in the question and the word-sense pairs in the candidate answers. We have conducted experiments using a combined IQ test set to test the performance of our proposed framework.The experimental results show that our method can outperform several baseline methods for verbal comprehension questions in IQ tests.We further deliver the questions in the test set to human beings through Amazon Mechanical Turk1 .The average performance of the human beings is even a little lower than that of our proposed method. Verbal Questions in IQ Test In common IQ tests, a large proportion of questions are verbal comprehension questions, which play an important role in deciding the final IQ scores.For example, in Wechsler Adult Intelligence Scale [Wechsler, 2008], which is among the most famous IQ test systems, the full-scale IQ is calculated from two IQ scores: Verbal IQ and Performance IQ, and around 40% questions in a typical test are verbal comprehension questions.Verbal questions can test not only the verbal ability (e.g., understanding polysemy of a word), but also the reasoning ability and induction ability of an individual.According to previous studies [Carter, 2005], verbal questions mainly have the types elaborated in Table 1, in which the correct answers are highlighted in bold font. 
Analogy-I questions usually take the form "A is to B as C is to ?". One needs to choose a word D from a given list of candidate words to form an analogical relation between pair (A, B) and pair (C, D). Such questions test the ability of identifying an implicit relation from word pair (A, B) and applying it to compose word pair (C, D). Note that the Analogy-I questions are also used as a major evaluation task for the word2vec models [Mikolov et al., 2013]. Analogy-II questions require two words to be identified from two given lists in order to form an analogical relation like "A is to ? as C is to ?". Such questions are a bit more difficult than the Analogy-I questions since the analogical relation cannot be observed directly from the question, but needs to be searched for among the word pair combinations formed from the candidate answers. Classification questions require one to identify the word that is different (or dissimilar) from the others in a given word list. Such questions are also known as odd-one-out, and have been studied in [Pintér et al., 2012]. Classification questions test the ability of summarizing the majority sense of the words and identifying the outlier. Synonym questions require one to pick one word out of a list of words such that it has the closest meaning to a given word. Synonym questions test the ability of identifying all senses of the candidate words and selecting the correct sense that can form a synonymous relation to the given word. Antonym questions require one to pick one word out of a list of words such that it has the opposite meaning to a given word. Antonym questions test the ability of identifying all senses of the candidate words and selecting the correct sense that can form an antonymous relation to the given word. Although there are some efforts to solve mathematical, logical, and picture questions in IQ tests [Sanghi and Dowe, 2003; Strannegard et al., 2012; Kushmany et al., 2014; Seo et al., 2014; Hosseini et al., 2014; Weston et al., 2015], there have been very few efforts to develop automatic methods to solve verbal questions. The learning of distributed word representations, also known as word embeddings, has attracted increasing attention in the area of machine learning. Different from conventional one-hot representations of words or distributional word representations based on a co-occurrence matrix between words, such as LSA [Dumais et al., 1988] and LDA [Blei et al., 2003], distributed word representations are usually low-dimensional dense vectors trained with neural networks by maximizing the likelihood of a text corpus. Recently, a series of works has applied deep learning techniques to learn high-quality word representations and shown their effectiveness in a variety of text mining tasks [Collobert and Weston, 2008; Mikolov et al., 2013; Pennington et al., 2014].
Nevertheless, since the above works learn word representations mainly based on word co-occurrence information, it is quite difficult to obtain high-quality embeddings for words with very little context information; on the other hand, a large amount of noisy or biased context can also give rise to ineffective word embeddings. Therefore, it is necessary to introduce extra knowledge into the learning process to regularize the quality of the word embedding. Some efforts have focused on learning word embeddings to address knowledge base completion and enhancement [Bordes et al., 2011; Socher et al., 2013; Weston et al., 2013a], and some other efforts have tried to leverage knowledge to enhance word representations [Luong et al., 2013; Weston et al., 2013b; Fried and Duh, 2014; Celikyilmaz et al., 2015]. Moreover, all the above models assume that one word has only one embedding no matter whether the word is polysemous or monosemous, which might bring some confusion for polysemous words. To solve this problem, there are several efforts like [Huang et al.; Tian et al., 2014; Neelakantan et al., 2014]. However, these models do not leverage any extra knowledge (e.g., relational knowledge) to enhance word representations.

Solving Verbal Questions
In this section, we introduce our proposed framework to solve the verbal questions, which consists of the following three components.

Table 1: Types of verbal questions.
Analogy-I: Isotherm is to temperature as isobar is to? (i) atmosphere, (ii) wind, (iii) pressure, (iv) latitude, (v) current.
Analogy-II: Identify two words (one from each set of brackets) that form a connection (analogy) when paired with the words in capitals: CHAPTER (book, verse, read), ACT (stage, audience, play).

Classification of Question Types
The first component of the framework is a question classifier, which identifies the different types of verbal questions. Since different types of questions usually have their unique ways of expression, the classification task is relatively easy, and we therefore take a simple approach to fulfill it. Specifically, we regard each verbal question as a short document and use its TF·IDF features to build its representation. Then we train an SVM classifier with a linear kernel on a portion of labeled question data, and apply it to other questions. The question labels include Analogy-I, Analogy-II, Classification, Synonym, and Antonym. We use the one-vs-rest training strategy to obtain a linear SVM classifier for each question type.
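As an illustration of this step, the sketch below builds a TF-IDF plus one-vs-rest linear SVM classifier with scikit-learn; the toy questions, the labels, and the choice of scikit-learn are assumptions for demonstration rather than the authors' actual training data or code.

```python
# Illustrative sketch (not the authors' code) of the question-type classifier:
# TF-IDF features with a one-vs-rest linear SVM, here built with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training questions and labels, purely for demonstration.
questions = [
    "Isotherm is to temperature as isobar is to ?",
    "Which is the odd one out: apple, pear, carrot, plum?",
    "Which word is closest in meaning to rapid?",
    "Which word is opposite in meaning to scarce?",
]
labels = ["Analogy-I", "Classification", "Synonym", "Antonym"]

# Each verbal question is treated as a short document represented by TF-IDF features.
clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LinearSVC()))
clf.fit(questions, labels)

print(clf.predict(["Identify the word that does not belong: red, green, loud, blue"]))
```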
Embedding of Word-Senses and Relations
The second component of our framework leverages deep learning technologies to learn distributed representations for words (i.e., word embeddings). Note that in the context of verbal question answering, we have some specific requirements on this learning process. Verbal questions in IQ tests usually consider the multiple senses of a word (and often focus on the rare senses), and the complex relations among (polysemous) words, such as synonym and antonym relations. These challenges exceed the capability of standard word embedding technologies. To address this problem, we propose a novel approach that considers the multi-sense nature of words and integrates the relational knowledge among words (or their senses) into the learning process. In particular, our approach consists of two steps. The first step aims at labeling a word in the text corpus with its specific sense, and the second step employs both the labeled text corpus and the relational knowledge contained in dictionaries to simultaneously learn embeddings for both word-sense pairs and relations.

Multi-Sense Identification
First, we learn a single-sense word embedding by using the skip-gram method in word2vec [Mikolov et al., 2013]. Second, we gather the context windows of all occurrences of a word used in the skip-gram model, and represent each context by a weighted average of the pre-learned embedding vectors of the context words. We use TF·IDF to define the weighting function, where we regard each context window of the word as a short document to calculate the document frequency. Specifically, for a word w_0, each of its context windows consists of the surrounding words within the window used by the skip-gram model. We then represent the window by the weighted average of the pre-learned embedding vectors of the context words, where each context word w_i contributes its pre-learned embedding vector v_{w_i} weighted by its TF·IDF score g_{w_i}. After that, for each word, we use spherical k-means to cluster all its context representations, where the cluster number k is set to the number of senses of this word in the online dictionary. Third, we match each cluster to the corresponding sense in the dictionary. On one hand, we represent each cluster by the average embedding vector of all the context windows included in the cluster. For example, suppose word w_0 has k senses and thus k clusters of context windows; we denote the average embedding vectors of these clusters as ξ_1, ..., ξ_k. On the other hand, since the online dictionary uses descriptions and example sentences to interpret each word sense, we can represent each word sense by the average embedding of the associated words, including its description words and the words in the corresponding example sentences. Here, we assume the representation vectors (based on the online dictionary) for the k senses of w_0 are ζ_1, ..., ζ_k. After that, we consecutively match each cluster to its closest word sense in terms of the distance computed in the word embedding space, i.e., (i′, j′) = argmin_{i,j} d(ξ_i, ζ_j), (2) where d(·, ·) calculates the Euclidean distance and (ξ_{i′}, ζ_{j′}) is the first matched pair of window cluster and word sense. Here, we simply take a greedy strategy. That is, we remove ξ_{i′} and ζ_{j′} from the cluster vector set and the sense vector set, and recursively run (2) to find the next matched pair till all the pairs are found. Finally, each word occurrence in the corpus is relabeled by its associated word sense, which will be used to learn the embeddings for word-sense pairs in the next step.
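A minimal sketch of this procedure (TF-IDF-weighted context vectors, clustering, and greedy cluster-to-sense matching) is given below. The helper names, the use of scikit-learn's KMeans on L2-normalized vectors as a stand-in for spherical k-means, and the input formats are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation) of the multi-sense labeling step:
# TF-IDF-weighted context vectors, clustering, and greedy cluster-to-sense matching.
# Spherical k-means is approximated here by k-means on L2-normalized vectors.
import numpy as np
from sklearn.cluster import KMeans

def window_vector(context_words, emb, tfidf):
    """TF-IDF-weighted average of the pre-learned embeddings of the context words."""
    vecs = np.array([emb[w] for w in context_words if w in emb])
    wts = np.array([tfidf.get(w, 1.0) for w in context_words if w in emb])
    return (wts[:, None] * vecs).sum(axis=0) / wts.sum()

def match_clusters_to_senses(cluster_means, sense_vectors):
    """Greedy matching: repeatedly pair the closest (cluster, sense) and remove both."""
    clusters, senses = list(enumerate(cluster_means)), list(enumerate(sense_vectors))
    mapping = {}
    while clusters:
        (ci, cv), (si, sv) = min(
            ((c, s) for c in clusters for s in senses),
            key=lambda pair: np.linalg.norm(pair[0][1] - pair[1][1]),
        )
        mapping[ci] = si
        clusters = [c for c in clusters if c[0] != ci]
        senses = [s for s in senses if s[0] != si]
    return mapping

def label_senses(window_vecs, sense_vectors):
    """Cluster the context windows of a word and map each cluster to a dictionary sense."""
    k = len(sense_vectors)
    X = window_vecs / np.linalg.norm(window_vecs, axis=1, keepdims=True)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    mapping = match_clusters_to_senses(km.cluster_centers_, sense_vectors)
    return [mapping[c] for c in km.labels_]   # a sense index for every window occurrence

rng = np.random.default_rng(0)
windows = rng.normal(size=(40, 8))   # 40 context-window vectors (illustrative)
senses = rng.normal(size=(3, 8))     # dictionary-based vectors for 3 senses (illustrative)
print(label_senses(windows, senses)[:10])
```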
Co-Learning Word-Sense Pair Representations and Relation Representations
After relabeling the text corpus, different occurrences of a polysemous word may correspond to its different senses, or more accurately word-sense pairs. We then learn the embeddings for word-sense pairs and relations (obtained from dictionaries, such as synonym and antonym) simultaneously, by integrating relational knowledge into the objective function of a word embedding learning model like skip-gram. We propose to use a function E_r, described below, to capture the relational knowledge. Specifically, the existing relational knowledge extracted from dictionaries, such as synonym, antonym, etc., can be naturally represented in the form of a triplet (head, relation, tail) (denoted by (h_i, r, t_j) ∈ S, where S is the set of relational knowledge), which consists of two word-sense pairs (i.e., word h with its i-th sense and word t with its j-th sense), h, t ∈ W (W is the set of words), and a relationship r ∈ R (R is the set of relationships). To learn the relation representations, we make the assumption that relationships between words can be interpreted as translation operations and can be represented by vectors. The principle in this model is that if the relationship (h_i, r, t_j) exists, the representation of the word-sense pair t_j should be close to that of h_i plus the representation vector of the relationship r, i.e., h_i + r; otherwise, h_i + r should be far away from t_j. Note that this model learns word-sense pair representations and relation representations in a unified continuous embedding space. According to the above principle, we define E_r as a margin-based regularization function over the set of relational knowledge S:

E_r = Σ_{(h_i, r, t_j) ∈ S} Σ_{(h′, r, t′) ∈ S′_(h,r,t)} [γ + d(h_i + r, t_j) − d(h′ + r, t′)]_+ .

Here [X]_+ denotes the positive part of X, γ > 0 is a margin hyperparameter, and d(·, ·) is the distance measure for the words in the embedding space. For simplicity, we again define d(·, ·) as the Euclidean distance. The set of corrupted triplets S′_(h,r,t) is constructed from S by replacing either the head word-sense pair or the tail word-sense pair with another randomly selected word and one of its randomly selected senses. Note that the optimization process might trivially minimize E_r by simply increasing the norms of the word-sense pair representations and relation representations. To avoid this problem, we use an additional constraint on the norms, which is a commonly used trick in the literature [Bordes et al., 2011]. However, instead of enforcing the L2-norm of the representations to be 1 as in [Bordes et al., 2011], we adopt a soft norm constraint on the relation representations, r_i = 2σ(x_i) − 1, where σ(·) is the sigmoid function σ(x_i) = 1/(1 + e^(−x_i)), r_i is the i-th dimension of the relation vector r, and x_i is a latent variable; this guarantees that every dimension of the relation representation vector lies within the range (−1, 1). By combining the skip-gram objective function and the regularization function derived from relational knowledge, we get the following combined objective J_r that incorporates relational knowledge into the word-sense pair embedding calculation process: J_r = αE_r − L, (5) where α is the combination coefficient. Our goal is to minimize the combined objective J_r, which can be optimized using back-propagation neural networks. By using this model, we can obtain the distributed representations for both word-sense pairs and relations simultaneously.
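The sketch below evaluates one term of this margin-based regularizer for a single knowledge triple and one corrupted triple. The vectors, the margin value, and the exact soft-norm parameterization r_i = 2σ(x_i) − 1 are assumptions consistent with the description above, not the authors' training code.

```python
# Illustrative sketch (not the authors' training code) of the margin-based relational
# regularizer described above, evaluated for one knowledge triple and one corrupted triple.
# The soft-norm parameterization r_i = 2*sigmoid(x_i) - 1 is an assumption consistent with
# the stated (-1, 1) range; in the paper this term enters J_r = alpha * E_r - L.
import numpy as np

def margin_term(h, r, t, h_neg, t_neg, gamma=1.0):
    """[gamma + d(h + r, t) - d(h' + r, t')]_+ with Euclidean distance d."""
    d_pos = np.linalg.norm(h + r - t)           # true triple: t should be close to h + r
    d_neg = np.linalg.norm(h_neg + r - t_neg)   # corrupted triple: should violate the relation
    return max(0.0, gamma + d_pos - d_neg)

def soft_norm(x):
    """Map latent variables to relation-vector entries constrained to (-1, 1)."""
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

rng = np.random.default_rng(0)
h, t = rng.normal(size=300), rng.normal(size=300)            # word-sense pair embeddings (illustrative)
r = soft_norm(rng.normal(size=300))                          # relation vector, e.g. "synonym"
h_neg, t_neg = rng.normal(size=300), rng.normal(size=300)    # randomly corrupted word-sense pair
print(margin_term(h, r, t, h_neg, t_neg))
```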
Solvers for Each Type of Questions
Analogy-I. For the Analogy-I questions like "A is to B as C is to ?", we answer them by maximizing, over all candidates D′ ∈ T and all word-sense combinations, the cosine similarity between v_(B,i_b) − v_(A,i_a) + v_(C,i_c) and v_(D′,i_d′), where T contains all the candidate answers and i_b, i_a, i_c, i_d′ are the indexes for the word senses of B, A, C, D′, respectively. Finally, the maximizing candidate D is selected as the answer.

Analogy-II. As the form of the Analogy-II questions is like "A is to ? as C is to ?" with two lists of candidate answers, we apply a similar optimization to select the best (B, D) pair, where T_1 and T_2 are the two lists of candidate words. Thus we get the answers B and D that form an analogical relation between word pair (A, B) and word pair (C, D) under a certain specific word-sense combination.

Classification. For the Classification questions, we leverage the property that words with similar co-occurrence information are distributed close to each other in the embedding space. As there is one word in the list that does not belong with the others, it does not have similar co-occurrence information to the other words in the training corpus, and thus this word should be far away from the other words in the word embedding space. According to the above discussion, we first calculate a group of mean vectors m_{i_{w_1}, ..., i_{w_N}} of all the candidate words under every possible word-sense combination, where T is the set of candidate words, N is the size of T, w_j is a word in T, i_{w_j} (j = 1, ..., N) is the index of the word sense of w_j, and k_{w_j} is the number of word senses of w_j. Therefore, the number of mean vectors is M = ∏_{j=1}^{N} k_{w_j}. As both N and k_{w_j} are very small, the computation cost is acceptable. Then, we choose as the answer the word whose distance to the corresponding mean vector, under its closest sense, is the largest among the candidate words (Equation (9)).

Synonym. For the Synonym questions, we empirically explored two solvers. The first solver again leverages the property that words with similar co-occurrence information are located close to each other in the word embedding space: given the question word w_q and the candidate words w_i, we find the answer by choosing the candidate whose best-matching sense is closest to w_q in the embedding space (Equation (10)), where T is the set of candidate words. The second solver is based on the minimization objective of the translation distance between entities in the relational knowledge model (3). Specifically, we calculate the offset vector between the embedding of the question word w_q and each word w_j in the candidate list. Then, we set the answer w as the candidate word for which the offset is closest to the representation vector of the synonym relation r_s, i.e., w = argmin_{i_{w_q}, i_{w_j}; w_j ∈ T} ‖v_(w_j, i_{w_j}) − v_(w_q, i_{w_q}) − r_s‖. (11) In practice, we found that the second solver performs better (the results are listed in Section 4).

Antonym. Similar to the Synonym questions, we explored two solvers for the Antonym questions as well. That is, the first solver (12) is based on the small offset distance between semantically close words, whereas the second solver (13) leverages the translation distance between the two words' offset and the embedding vector of the antonym relation. One might doubt the reasonableness of the first solver, given that we aim to find an answer word with the opposite meaning to the target word (i.e., an antonym). The explanation is that an antonym and its original word usually have similar co-occurrence information, from which the embedding vectors are derived; therefore, the embedding vectors of two words in an antonym relation still lie close to each other in the embedding space.
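As a concrete illustration of the solvers described in this section, the sketch below implements the Analogy-I search: it scans the sense combinations of A, B, C and every candidate, and picks the candidate whose best sense maximizes the cosine similarity with v(B) − v(A) + v(C). The function names and input formats are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the authors' code) of the Analogy-I solver: search over the sense
# combinations of A, B, C and each candidate D', and pick the candidate whose best sense
# maximizes the cosine similarity with v(B) - v(A) + v(C).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def solve_analogy(senses_A, senses_B, senses_C, candidates):
    """senses_X: list of sense vectors of word X; candidates: {word: list of sense vectors}."""
    best_word, best_score = None, -np.inf
    for va in senses_A:
        for vb in senses_B:
            for vc in senses_C:
                target = vb - va + vc
                for word, senses_D in candidates.items():
                    score = max(cosine(target, vd) for vd in senses_D)
                    if score > best_score:
                        best_word, best_score = word, score
    return best_word

rng = np.random.default_rng(1)
make = lambda n: [rng.normal(size=50) for _ in range(n)]     # illustrative sense vectors
candidates = {"pressure": make(2), "wind": make(1), "latitude": make(1)}
print(solve_analogy(make(2), make(1), make(1), candidates))
```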
In both solvers, T is the set of candidate words and r_a is the representation vector of the antonym relation. Again we found that the second solver performs better. Similarly, for the skip-gram baseline, only the first solver is applied.

Experiments
We conduct experiments to examine whether our proposed framework can achieve satisfactory results on verbal comprehension questions.

Data Collection
Training Set for Word Embedding. We trained word embeddings on a publicly available text corpus named wiki2014, which is a large text snapshot from Wikipedia. After being pre-processed by removing all the HTML metadata and replacing digit numbers with English words, the final training corpus contains more than 3.4 billion word tokens, and the number of unique words, i.e., the vocabulary size, is about 2 million.

IQ Test Set. According to our study, there is no online dataset specifically released for verbal comprehension questions, although there are many online IQ tests for users to play with. In addition, most of the online tests only calculate the final IQ scores but do not provide the correct answers. Therefore, we only use the online questions to train the verbal question classifier described in Section 3.1. Specifically, we manually collected and labeled 30 verbal questions from online IQ test websites for each of the five types (i.e., Analogy-I, Analogy-II, Classification, Synonym, and Antonym) and trained a one-vs-rest SVM classifier for each type. The total accuracy on the training set itself is 95.0%. The classifier was then applied to the test set below. We collected a set of verbal comprehension questions with correct answers from published IQ test books, such as [Carter, 2005; Carter, 2007; Pape, 1993; Ken Russell, 2002], and we used this collection as the test set to evaluate the effectiveness of our new framework. In total, this test set contains 232 questions with the corresponding answers. The statistics of each question type are listed in Table 2.

Compared Methods
In the following experiments, we compare our new relation knowledge powered model to several baselines.

Random Guess Model (RG). Random guess is the most straightforward way for an agent to solve questions. In our experiments, we used a random guess agent which would select an answer randomly regardless of what the question was. To measure the performance of random guess, we ran each task 5 times and calculated the average accuracy.
Human Performance (HP). Since IQ tests are designed to evaluate human intelligence, it is quite natural to leverage human performance as a baseline. To collect human answers to the test questions, we delivered them to human beings through Amazon Mechanical Turk, a crowd-sourcing Internet marketplace that allows people to participate in Human Intelligence Tasks. In our study, we published five Mechanical Turk jobs, one job corresponding to one specific question type. The jobs were delivered to 200 people. To control the quality of the collected results, we adopted several strategies: (i) we imposed high restrictions on the workers, requiring all of them to be native English speakers in North America and to be Mechanical Turk Masters (who have demonstrated high accuracy on previous Human Intelligence Tasks on the Mechanical Turk marketplace); (ii) we recruited a large number of workers in order to guarantee statistical confidence in their performance; (iii) we tracked their age distribution and education background, which are very similar to those of the overall population in the U.S. While we can continue to improve the design, we believe the current results already make a lot of sense.

Latent Dirichlet Allocation Model (LDA). This baseline leveraged one of the most classical distributional word representations, i.e., Latent Dirichlet Allocation (LDA) [Blei et al., 2003]. In particular, we trained word representations using LDA on wiki2014 with the number of topics set to 1000.

Skip-Gram Model (SG). In this baseline, we applied the word embedding trained by skip-gram [Mikolov et al., 2013] (denoted by SG-1). In particular, when using skip-gram to learn the embedding on wiki2014, we set the window size to 5, the embedding dimension to 500, the negative sampling count to 3, and the epoch number to 3. In addition, we also employed a pre-trained word embedding released by Google with a dimension of 300 (denoted by SG-2).

GloVe. This baseline uses another powerful word embedding model, GloVe [Pennington et al., 2014]. The configurations for running GloVe are the same as those used for SG-1.

Multi-Sense Model (MS). In this baseline, we applied the multi-sense word embedding models proposed in [Huang et al.; Tian et al., 2014; Neelakantan et al., 2014] (denoted by MS-1, MS-2, and MS-3, respectively). For MS-1, we directly used the multi-sense word embedding vectors published by the authors, in which they set 10 senses for the top 5% most frequent words. For MS-2 and MS-3, we obtained the embedding vectors with the code released by the authors, using the same configurations as MS-1.

Relation Knowledge Powered Model (RK). This is our proposed method in Section 3. In particular, when learning the embedding on wiki2014, we set the window size to 5, the embedding dimension to 500, the negative sampling count to 3 (i.e., the number of randomly selected negative triples in S′), and the epoch number to 3. We adopted the online Longman Dictionary as the dictionary used in multi-sense clustering. We used a public relational knowledge set, WordRep [Gao et al., 2014], for relation training.
Accuracy of Question Classifier
We applied the question classifier trained in Section 4.1 to the test set in Table 2 and obtained a total accuracy of 93.1%. For RG and HP, the question classifier was not needed. For the other methods, the wrongly classified questions were still sent to the corresponding (wrong) solver to find an answer. If the solver returned an empty result (usually caused by an invalid input format, e.g., an Analogy-II question wrongly input to the Classification solver), we randomly selected an answer.

Overall Accuracy
Table 3 demonstrates the accuracy of answering verbal questions using all the approaches mentioned in Section 4.2. From this table, we have the following observations: (i) RK achieves the best overall accuracy among all the methods. In particular, RK raises the overall accuracy by about 4.63% over HP. (ii) RK is empirically superior to the skip-gram models SG-1/SG-2 and GloVe. According to our understanding, the improvement of RK over SG-1/SG-2/GloVe comes from two aspects: multi-sense modeling and relational knowledge. Note that the performance difference between MS-1/MS-2/MS-3 and SG-1/SG-2/GloVe is not significant, showing that simply changing single-sense word embeddings to multi-sense word embeddings does not bring much benefit. One reason is that rare word-senses do not have enough training data (contextual information) to produce high-quality word embeddings. By further introducing the relational knowledge among word-senses, the training for rare word-senses becomes linked to the training of their related word-senses. As a result, the embedding quality of the rare word-senses is improved. (iii) RK is empirically superior to the multi-sense algorithms MS-1, MS-2, and MS-3, demonstrating the effectiveness brought by adopting fewer model parameters and using an online dictionary in building the multi-sense embedding model. These results are quite impressive, indicating the potential of using machines to comprehend human knowledge and even achieve a level comparable to human intelligence.

Accuracy in Different Question Types
Table 3 also reports the accuracy of answering the various types of verbal questions by each compared method. From the table, we can observe that the SG and MS models achieve competitive accuracy on certain question types (like Synonym) compared with HP. After incorporating knowledge into learning the word embedding, our RK model improves the accuracy over all question types. Moreover, the table shows that RK results in a big improvement over HP on the Synonym and Classification question types, while its accuracy on the other question types is not as good as on these two. To sum up, the experimental results have demonstrated the effectiveness of the proposed RK model compared with several baseline methods. Although the test set is not large, the generalization of RK to other test sets should not be a concern due to the unsupervised nature of our model.
Conclusions
We investigated how to automatically solve verbal comprehension questions in IQ tests by using word embedding techniques from deep learning. In particular, we proposed a three-step framework: (i) recognize the specific type of a verbal comprehension question with a classifier, (ii) leverage a novel deep learning model to co-learn the representations of both word-sense pairs and relations among words (or their senses), and (iii) design dedicated solvers, based on the obtained word-sense pair representations and relation representations, for addressing each type of question. Experimental results have illustrated that this novel framework can achieve better performance than existing methods for solving verbal comprehension questions and can even exceed the average performance of the Amazon Mechanical Turk workers involved in the experiments.
Table 2: Statistics of the verbal question test set.
Table 3: Accuracy of different methods among different human groups.
7,028
2015-05-29T00:00:00.000
[ "Computer Science" ]
The role of community libraries in the alleviation of information poverty for sustainable development

This literature review focused on the role of rural community libraries in minimizing information poverty. The potential of rural community libraries in promoting sustainable development is discussed in this article. The necessity of information poverty alleviation for sustainable development is also discussed. The study found that information poverty is an obstacle, whereas information is a key to achieving sustainable development. The study also found that a community library is not only a library of a few shelves of books but also a hub of the local community, particularly of rural and disadvantaged communities, offering a continuously changing information resource for the community. It empowers individuals and communities to help them reach their goals. It lays down the foundation stone for sustainable development.

INTRODUCTION
Sustainable development has been a major global agenda in recent years. It is a process of developing society so that it can endure in the long term. According to WCED (1987), sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs. Sustainable development is conceived as a socioeconomic system that ensures sustained advancement of the economy, education, health and every sphere of life (Pearce et al., 1989). In a word, 'development with sustainability' is called sustainable development. Sustainable development is not possible without building a knowledge-based society, of which information is regarded as the life-blood. Information is an essential element for every step of development (Okiy, 2003). Thus, information poverty alleviation is very crucial for sustainable development. Information poverty is defined as having no access to the information needed for survival and development (Marcella and Chowdhury, 2018). Britz (2004) states that information poverty is that situation in which individuals and communities, within a given context, do not have the requisite skills, abilities or material means to obtain efficient access to information, interpret it and apply it appropriately. It is further characterized by a deficit of necessary information and poor information infrastructure. Actually, information poverty is more than just a lack of information or technology; it is also a lack of utilizing, accessing, producing and realizing the value of information (Strand, 2016a). According to Cruz-Cunha et al. (2013), information poverty can be defined as the use of information under depressed circumstances caused by social inequity, users' inability to define their information needs, and a lack of ICT and of the knowledge needed to use such technology and information resources. In general, the information poor can be defined as groups and individuals who do not have adequate and equal access to information of sufficient quality and quantity (Shen, 2013). Moore (2007) stated that the opportunity to access information is not the same for all parts of society. The wealthy city dwellers have many opportunities to access various sources of information.
On the other hand, the poor rural dwellers are often neglected in access to essential information which could improve their lives. This condition is extremely critical in a large number of villages of low and middle income countries otherwise called, developing countries. A lot of people in developing countries live in villages. The village people are not conscious of the role of information for growth and development (Mansoor and Kamba, 2010). They have no proper idea about the application of information for development (Stone, 1993).In fact, many rural communities of developing countries have poor understanding of the value of information for development. It is important to note that they have to be fully conscious of the significance of information to keep pace with the present knowledge-based economy. People in these communities do not realize their demand of information. They also do not know from where and how they can satisfy their information demand (Islam, 2010). They suffer from a lack of useful information, low levels of literacy and skill, meager use of technology or ICT, a low level sense of participation and a low standard living (Ahmed et al., 1997). All these factors conspire to exclude them from the world of information. People in these villages are always the last to receive anything. They are always excluded from update information, technological advances and any plan of economic action or implementation agenda. The rural development rate infects other sectors that strike national development and thereby sustainable development. Sustainable development is impossible keeping aloof the large sections of rural people of developing countries from the world of information. In fact, information poverty is a burning question in developing countries (OECD, 2017). The dilemma of low and middle income countries of the world is not only economic poverty but also information poverty. We must give priority to elucidate the problem . Chester and Neelameghan (2006) states that information experts and their agents should make an effort to help community members to get rid of information poverty which is a major obstacle of achieving sustainable development. By empowering rural people with appropriate information access, skills, infrastructure and understanding, rural community library can assist in the alleviation of information poverty as well as the sustainable development throughout the world. Iqbal (2004) stated information as a means of community development. Though there is much information dissemination centers in the villages, rural people are not always satisfied with the services of these centers because; the services are always not related and suitable for them. It is obvious to state that there are many other information sources and suppliers working in the villages at present but a rural community library is increasingly regarded as the hub of the rural community for delivering essential information that could empower them to achieve sustainable development. Community library means a distinct type of public library or an alternative to public library that is established and governed by local people with or without public funding normally in disadvantaged areas to supply miscellaneous studying facilities and community information services for the growth of the community and raising their quality of life. 
Harrod's Librarian's Glossary and Reference Book (2005) states that it is normally a section or part library (though it may be a central or mobile service) designed to provide advice-centre functions and community information for the whole of its population, rather than only presenting a book stock to learners. According to Feather and Sturges (2003) in the International Encyclopaedia of Information and Library Science, community librarianship is the distribution of library and information services for a particular community. It provides information mainly on social, domestic, health or educational affairs, local cultural activities, clubs and local authority or governmental services. Public libraries also have the same responsibility of providing community information and meeting facilities. However, such services may also be provided via a special unit set up by a local authority, a voluntary agency or an advice group, and this is called a community library. Having made a great contribution to solving the problems of social exclusion, community librarianship has recently become a powerful tool for the advancement of the community or society. The community library is an extension of the existing practices of public libraries that relies heavily on community participation. It empowers the entire community through information services. It is regarded as an important basis for rural development. The rural community library, as a vital information supplier, has a very significant role to play in diminishing information poverty by giving suitable information assistance in village areas. By providing information to rural areas, rural community libraries can make rural dwellers aware of what is happening in the country, what their rights and responsibilities are, and how they can get information services. Thus, rural community libraries may directly contribute to good governance, social progress and economic development by eradicating information poverty, which is necessary for sustainable development. The purposes of the study were to: (1) define the necessity of information poverty alleviation for sustainable development; (2) highlight the role of the rural community library in alleviating information poverty; (3) explore the potential of the rural community library in promoting sustainable development; (4) study the core concepts of the community library, information poverty and sustainable development and the links among them; and (5) draw more attention from stakeholders and user communities to the rural community library.

METHODOLOGY
This study applied a literature review methodology with a narrative and integrative approach. Relevant literature available on sustainable development and on community libraries in diminishing information poverty was consulted. Secondary sources of information such as books, journal articles, conference proceedings and official reports were used for collecting information. Internet websites were also used for collecting data.

A) Information for development
Information can be called the key to development. It is regarded as a prime resource that plays a very significant role in the overall development of a country or nation. Moore (2007) states that information is a necessary precondition for the improvement of a particular person or group. People require information to improve their capability through knowledge and education, to succeed in business, to enrich their culture and civilization and to control their lives.
Information in a well-organized character that can increase desire and expectation, by turning people from fatalism and fear of change to a desire for a better life and the determination to work for it. Information can assist man to observe their present status and to make future development plan. Ideally, information brings about knowledge, and a community can only become knowledgeable by information as tool for development. Information can remove the darkness of ignorance and help to achieve goal in every spheres of community life (Mansoor and Kamba, 2010). According to Muhamed et al. (2010), information within the realm of the 'knowledge-based economy' is essential for the socio-economic and socio-technical development, because it begets knowledge that is essential for sustainable development. In fact, information is a prerequisite for knowledge production or coproduction. It helps in research, innovation, and communication. It helps to make good decision, policy and plan. Information and knowledge protect us for making mistake. It decreases uncertainty but increases efficiency. It is a power of solving problem. Chen et al. (2011) has rightly said, somebody who can obtain more information will occupy the dominant position in the social competition. Babalola et al. (2012) state, information is an important factor for political participation and social inclusion and the foundation for developing at all levels of human life. Ogar et al. (2018) indicates information is needed for effective success at all levels of good governance. According to Harande (2009), information is compared as blood of social life and crucial for governmental and private activities. He also showed information as basic materials for the progress of the society-urban or rural community and the progress of any nation is greatly dependent on information. Hoq (2012) states, rural people applying information and knowledge in agriculture, health, human rights, education, employment, market and finance, disaster management can ensure socioeconomic advancement for sustainable rural development. Kari (2007) states, information are a fundamental need like air, water, food and shelter for human being. Information enables people to utilize the factors of production such as land, labor and capital resources into meaningful and productive use. Actually, every dimension of development has information and knowledge implication. Information scientists and scholars have defined information as an empowering agent, in terms of the ways in which access to and use of information can assist individuals to overcome obstacles, take advantage of the opportunities available to them and improve their lives. Mchombu and Mchombu (2014) stated that information can play stirring role by encouraging and motivating people for the economical and cultural evolution essential for sustainable development. B) Information poverty restrains sustainable development Realizing the value of information for development, it can be said that information poverty restrains sustainable development. Sustainable development is a technique to develop by balancing different, and competing for the needs against an awareness of the environment, social and economic limitations we face as a society. Sustainable development is largely based on the acquisition, dissemination and utilization of knowledge and information (World Bank, 1998;Asian Development Bank, 2011). 
Truly, information is considered the most vital element for enabling the satisfaction of all human needs. Without access to information, people cannot develop and cannot fulfill their demands. So, information poverty is one of the greatest impediments to development. According to Britz (2007, 2010), information poverty is one of the main forms of poverty today. It relates to an individual's or community's inability not only to access essential information but also to benefit from it in order to meet their basic needs for survival and development. Information poverty is closely linked to economic poverty and has a negative effect on every facet of people's lives (Britz, 2007; Strand, 2016b). Joselin and Panneerselvam (2015) indicate that the information poor do not have equal opportunities to access the necessary information. The information poor are victimized by insufficient resources, a lack of essential infrastructure, a lack of the skills needed to access and use information, and financial limitations. So, information poverty can narrow opportunities for access to employment, business, capital, and creative and social networks, and can limit the capacity to develop the skills needed for global citizenship. In addition, information poverty is a problem that polarizes the rich and the poor and widens socioeconomic disparity. As a result of information poverty, societies are at risk of being left behind. Access to information does not in itself give people power over their lives, but a lack of access to information can render a person powerless in the sense of being unable to exercise intelligent life options (Buddy, 1977). Lack of access to information is one of the most serious obstacles to building a better community. Access to information guides people forward toward a decent standard of living. Besides, lack of access to governmental information creates a barrier to participation in the governmental agenda, which hinders the development process of a community and nation.

C) Information poverty alleviation: a matter of urgency
Inequality in access to information causes information poverty. This inequality is caused by various political, cultural, economic, educational, moral, geographical, technological and technical factors (Britz, 2004). Britz (2004) also stated that the most critical issue in the present world is information poverty, which has a harmful impact on socio-economic, political and cultural development. He further stated that information poverty is chronic and long term. In reality, the world still faces the problem of information poverty. So, whatever the causes of information poverty may be, the most urgent point is that information poverty alleviation is a crying need for achieving the sustainable development goals of today's world. Scheeder (2018) stated that sustainable development is totally impossible without access to information. In fact, access to information is an urgent need for the development of society. Access to information helps in the struggle to achieve an inclusive society. As we are living in an age of globalization, equal access to information is the most urgent need for the growing world information economy. According to Vargas and Lee (2018), information poverty is one form of poverty, related to economic and social poverty. With a view to achieving sustainable development, they emphasize addressing information poverty. Information is the pre-condition for any kind of development, and it influences all dimensions of life.
They also say that information poverty is closely linked to communication poverty, because information is the most important component of communication. Sustainable development consists of socio-economic development that saves and enriches the natural environment and ensures social equity (Diesendorf, 2000). It is true that sustainable development comprises various dimensions of development, and all its dimensions are interrelated. Hence, addressing development challenges requires addressing all types of rights: social, economic, cultural, political, civil and informational (Garrido and Wyber, 2017). Chowdhury and Koya (2017) indicate that about 30 information-related matters are included in the UN's Sustainable Development Goals (SDGs) guidelines, identifying the role of information in building stronger economies. According to the Development and Access to Information Report (2017), access to information is a catalyst for development worldwide. It enables us to lay the basic foundations for equality, sustainability and prosperity. The Lyon Declaration (2014) sets out the principles behind access to information and development. These bedrock principles state that access to information empowers people to:
1. Exercise all kinds of their rights.
2. Acquire and apply new skills.
3. Make decisions and participate in an active and engaged civil society.
4. Create community-based solutions to development challenges.
5. Ensure accountability, transparency, good governance and empowerment.
6. Measure progress on public and private commitments to sustainable development.
Therefore, it is very clear that information poverty alleviation is a matter of urgency for sustainable development.

Rural community library's role in the alleviation of information poverty
The rural community library plays various important roles in the reduction of information poverty. A rural community library is established to deliver advice-centre functions and community information services for the people of the community rather than only offering a collection of books and study materials to learners. A rural community library provides information to address the real needs of the people. Different communities have different information needs. A rural community library is actually established on the basis of the real information needs of the community. People's demand for information changes as communities change. A rural community library is always active in meeting the changing needs of the community. Leonard and Ngula (2013) stated that community libraries are very important for providing information and fulfilling the information demands of the people in their respective communities. Community libraries are considered the critical interface between communities and the nation's information services, and are therefore mandated to serve as:
1. Community study centers for promoting education, building awareness among community members of the use of information, and supporting lifelong education.
2. Centers for upgrading the participatory status of living culture.
3. Centers for circulating information on all aspects of life, with particular emphasis on information essential for participating in democratic decision making and for the successful implementation of national development plans.
4. Centers for leisure learning.
Thus, a rural community library is a lifelong learning, information and recreational reading centre.
According to the famous Russian bibliopsychologist and educator Rubakin (1968), a library is not only a bookshop where various books are to be had; it is also an adviser, a guide, a friend. It must go out to the reader and bring him in rather than wait for him to come of his own accord. A rural community library is likewise an adviser, a guide and a friend for the people of a community. It reaches out to community members to solve their information problems. Rural community librarians work as educators, facilitators and advocates for local culture, and they use their expertise to enable local people to make sense of and utilize the increasingly complex and systematic global information environment. The rural community library plays a major role in accumulating, processing, preserving and circulating the community information necessary for the everyday life of people in the respective community. Childers (1975) stated that 'information poverty' is the scarcity of basic survival information experienced by a large number of people. Actually, the information poor do not have equal opportunities to access the necessary information. Distributing community information services through community libraries is an effective mechanism to face the challenges of information poverty. Colemen (in Barnes, 1994:79) describes community library services as highly political in nature, in the sense that every person must have equitable access to information and society's resources. Rural community libraries have huge potential and opportunities to ensure equal access to information for all. The rural community library is the local information centre providing information and opportunities for lifelong learning to everybody in a community, without any discrimination regardless of sex, caste, religion or social status. Mostert and Vermeulen (1998) state that a community library mainly provides two kinds of information:
1. Survival information on health and family affairs, housing, finance, etc.
2. Citizens' action information on social, political and legal rights.
So, a rural community library is an important local gateway to knowledge. It provides a direct link between information creators and information users, especially for government-related information. It is an indispensable tool in the creation of an informed society. Alemna (1995) argued that a community library is better than a public library at meeting the community information needs of rural people. Mostert (1997) stated that community libraries are more popular because of their dedication to the empowerment of the whole community through their information services. He further said that the community library plays a major role by providing information and educational resources to meet community information needs as well as helping in the process of uplifting these communities (Mostert, 1998). In developing countries, huge numbers of the population are disadvantaged by the lack of the latest developments in ICT, of information resources and of the basic skills needed to access the necessary information (Joselin and Panneerselvam, 2015). A community library always provides library services and collections based on the specific needs identified among its information users, particularly those coming from underprivileged sections of society. A community library always responds to the critical community information needs of rural and non-literate communities through its functions, services and collections (Alemna, 1995; Stillwell, 1989; Mostert, 1998).
The rural community library, being a dynamic social institution is capable of serving accurate information at the accurate time to solve the problem of information poverty. The rural community library plays an effective role in the dissemination of information to a wide range of user community through its free facilities of access to ICT and information resources, the delivery of outreach, partnerships with other local information centers, production and preservation of local culture, cultivating reading habit among the villagers, literacy program, information and digital literacy training , active interaction between the librarian and users to answer the user's questions, develop trustworthy community partnerships. All these roles are fundamental to alleviate information poverty in the community broadly in society or country. Rural community library promotes sustainable development Rural community library plays very significant role for sustainable development. The role of rural community libraries can be explained as to qualify people from the underprivileged communities to develop their quality of lives. Namhila and Niskala (2013) stated that community libraries are established with a view to developing the quality of life for the people in their respective community. According to Legoabe (1995), community library provides information that covers every spheres of life to assist all members of a community to overcome the daily problems for better quality of life. Community information services of community libraries are managed with an aim to supply information to the community to help members cope with their designated roles within the community. Leonard and Ngula (2013) stated community library plays a vital role in the community development with regard to education and social well-being. Rural community library is closely related to daily life of community members. Providing knowledge and information, community library creates a space for community people to contribute on a wide range of development initiatives. It also helps to cope with the problems in their daily lives. It teaches the users to become self-dependent and self-sufficient. Community libraries being empowering agents are proactive to fulfill the social progress and sustainability. Dent (2006) states in the African context, community libraries are established to: 1. Assist the villagers to maintain knowledge gained from their education. 2. Assist the rural people to realize the country's social, political, and economic endeavourers and nation building efforts. 3. Help the improvement of wholesome family life, producing materials about social, economic and health care development and No nation can develop without the development of its human resource. Human resources of a country can make a good contribution to the progress of the country (Okiy, 2003). A rural community library can serve meaningfully for the purpose of human resource development and thereby national development. Again, there is no development anywhere in the world without the impact of education. Education is regarded as the tool for development. Without education, there will be no innovation and, without innovation, there will be no transformation. Jubair (2009) states, a community library is not merely a library, It is also a village educative institute, outside the traditional education system in villages, that provides different studying facilities for community improvement and better quality of life. 
A community library, having an important role in the advancement of people's knowledge, is a community education centre. Community libraries develop sound reading habits among local people. They deliver various learning materials to meet the interests of people of all ages. Local people can develop their literacy and skills through easy access to library facilities, technology, and information. This turns villages into viable places to live by creating jobs and access to modern technology, civic engagement and partnership, community networks, and cultural life, thus narrowing the urban-rural gap. Sustainable social improvement depends on a partnership between the state, civil society, and the locality. Rural community libraries are highly promising national resources for developing individuals and groups. The rural community library is therefore a vehicle of development, able to undertake various other developmental initiatives toward sustainable development in the country. The librarian of a rural community library is an advocate for rural community development. The rural community library serves as a focus for local activity and culture and can contribute a great deal to rural economic, social, and cultural development; in effect, it works to improve the socio-economic status of the community. Sultana (2014) states that the advancement of any community is an index, a positive signal, of the development of a nation. The development of a community is thus the most important means of promoting total national development, which in turn stimulates sustainable development. Lahti (2015) states that community libraries are an important tool for national as well as local development. Community libraries contribute effectively to different spheres of community success. Studies by scholars such as Hamilton-Pennell (2008), Jones (2009), Abu et al. (2011), and Strand (2016b) indicate that libraries play a very significant role in education, social policy, information, cultural enrichment, and economic development. In fact, rural community libraries make an outstanding contribution to society, and their impact on social and economic development should not be underestimated (Leonard and Ngula, 2013). The role of the rural community library in the all-round development of society is unquestionable. DISCUSSION The review of the literature confirms, in line with previous findings, that rural community libraries have an active role to play in diminishing information poverty. The analysis suggests that alleviating information poverty is a prerequisite of sustainable development and that the rural community library can promote sustainable development. Although several of the findings discussed in this article concern the community library in general (Stillwell, 1989; Mostert, 1998; Leonard and Ngula, 2013; Lahti, 2015), others focus mainly on the rural community library in particular (Dent, 2006; Jones, 2009; Jubair, 2009). A few studies have also been consulted with the aim of linking information poverty and sustainable development. In addition, a wide range of studies has reported the need to alleviate information poverty for sustainable development (Lyon Declaration on Access to Information and Development, 2014; UN, 2015 Sustainable Development Goals; Development and Access to Information Report, 2017; Scheeder, 2018; Vargas and Lee, 2018).
All these studies conclude that the successful implementation of sustainable development largely depends on an efficient information service to all parts of society and the community, and the community library can serve this purpose very well. Sustainable development is a multidimensional process, and all of its dimensions are interrelated. Addressing development challenges requires addressing all types of rights: social, political, economic, cultural, civil, and informational. The investigation presented in this literature review identifies the need for further research into the other dimensions of sustainable development addressed by community libraries. Although the studies reviewed in this paper identified a few areas of development addressed by community libraries, overall they paid little attention to addressing sustainable rural development challenges through the community library. The present study focused mainly on the rural community library; future research may be undertaken on the community library in general. Conclusion Information is a key enabler of the sustainable development goals, and information poverty is a barrier to promoting sustainable development. Alleviation of information poverty is therefore badly needed for sustainable development. By reducing information poverty, the rural community library can contribute a great deal to sustainable development. Sustainable development is impossible without alleviating information poverty, and effective alleviation of information poverty is nearly impossible without the community library. Rural community libraries play a vital role in reducing social exclusion around the world. The rural community library is truly the hub of the rural and disadvantaged community, providing the necessary information and knowledge that can enable its members to promote sustainable development. Significance of the study The study will encourage stakeholders to place more emphasis on the development of rural community libraries throughout the world. It will also encourage user communities to utilize the potential of rural community libraries for their information needs and for sustainable development. It is expected that the study will contribute to the literature of the Library and Information Science field. Although there are a number of studies on this subject, this is the first of its kind in Bangladesh. ACKNOWLEDGEMENT The author would like to thank all the scholars whose contributions have been used in this article. Without their contributions, this study would not have been possible. He would also like to thank all of his friends and colleagues who encouraged him to undertake this study.
6,857.2
2020-08-31T00:00:00.000
[ "Economics" ]
Development of Photoluminescent and Photochromic Polyester Nanocomposite Reinforced with Electrospun Glass Nanofibers A polyester resin was strengthened with electrospun glass nanofibers to create long-lasting photochromic and photoluminescent products, such as smart windows and concrete, as well as anti-counterfeiting patterns. A transparent glass@polyester (GLS@PET) sheet was created by physically immobilizing lanthanide-doped aluminate (LA) nanoparticles (NPs). The spectral analysis using the CIE Lab and luminescence revealed that the transparent GLS@PET samples turned green under ultraviolet light and greenish-yellow in the dark. The detected photochromism can be quickly reversed in the photoluminescent GLS@PET hybrids at low concentrations of LANPs. Conversely, the GLS@PET substrates with the highest phosphor concentrations exhibited sustained luminosity with slow reversibility. Transmission electron microscopic analysis (TEM) and scanning electron microscopy (SEM) were utilized to examine the morphological features of lanthanide-doped aluminate nanoparticles (LANPs) and glass nanofibers to display diameters of 7–15 nm and 90–140 nm, respectively. SEM, energy-dispersive X-ray spectroscopy (EDXA), and X-ray fluorescence (XRF) were used to analyze the luminous GLS@PET substrates for their morphology and elemental composition. The glass nanofibers were reinforced into the polyester resin as a roughening agent to improve its mechanical properties. Scratch resistance was found to be significantly increased in the created photoluminescent GLS@PET substrates when compared with the LANPs-free substrate. When excited at 368 nm, the observed photoluminescence spectra showed an emission peak at 518 nm. The results demonstrated improved hydrophobicity and UV blocking properties in the luminescent colorless GLS@PET hybrids. Introduction The creation of glow-in-the dark glasses often involves the use of rare earth oxides. Smart glass windows and other types of glass products that produce light have been used in many lighting systems. Cheapness, photostability, non-toxicity, and high-quality photoluminescence are just a few reasons why rare earth oxides are so well suited for usage in a smart window [1,2]. A glass window can be customized to provide a variety of benefits, including protection from sun rays, excess heat, and even visible light. After being exposed to a light source and then having that source turned off, certain materials exhibit a phenomenon called persistent photoluminescence, characterized by sustained emission in the visible, near-infrared, and ultraviolet spectrums [3][4][5]. Controlling the intensity and lifetime of photoluminescence is an active area of scientific inquiry. Therefore, advanced visual smart windows, electronic displays, and photonics have become feasible. Both hosting and doping materials affect on afterglow period and color [6]. The photoluminescence efficiency of lanthanide-activated nanomaterials is found to be very high throughout a wide range of applications. Therefore, many optical instruments, such as light-emitting diodes, use nano-scale materials that have been activated by rare earth elements. The physical properties of smart materials can be altered in response to an external stimulus such as chemical, electromagnetic, or photochemical agents [7,8]. The ability of smart materials to respond to dangerous stimuli such as extremely high temperatures and toxic chemical agents has led to their incorporation into protective devices. 
It is possible for a specific substance to keep emitting light for a considerable amount of time after the light supply has been shut off due to a phenomenon known as persistent emission. When illuminated with ultraviolet light, photochromic materials tolerate a colorimetric switching between two optical states. When the excitation source is switched off, the photochromic agent returns to its original state [9][10][11]. Potential applications of the photochromic effect in the technological realm include sensors, ophthalmic lenses, displays, and sunglasses. Security barcodes, trademark protection, ultraviolet shielding, military camouflage, and smart fabrics are just a few of the many uses that have been reported for our enhanced understanding of light-induced chromic materials [12][13][14]. The majority of commercialized photochromic agents are organic dyestuffs, such as Spiropyrans and diarylethenes [15,16]. However, there are certain downsides to organic photochromic colorants, including high costs and poor photostabilty. For the reason that the photochromism of such organic dyestuffs is based on the molecular switching of their structure, their inclusion in bulk materials limits their molecular switching ability due to steric hindrance effects [17]. This means that the photostability of photochromic organic dyes can be diminished with extended exposure to UV light. However, inorganic photochromic agents are resistant to steric effects as they do not depend on a change in their molecular structural system. Thus, inorganic photochromic agents provide better photochromism and higher photostability as compared with organic agents [18]. Eu 2+ and other activation lanthanide ions are well-known photon emitters, particularly stimulated by UV light [19]. Inorganic compounds with photoluminescent lanthanide activators have been found to exhibit novel optical, electrical, and magnetic characteristics. When a lanthanide ion is exposed to a light photon, it absorbs some of the light and emits other wavelengths because its 4f shell is not completely filled. When a photoluminescent agent such as SrAl 2 O 4 is co-doped with Eu 2+ and Dy 3+ at a certain ratio, the afterglow period is increased by a factor of ten [20]. Adding Dy 3+ to the Eu 2+ -doped strontium aluminate increases its luminescence and extends the afterglow effect to more than 15 h. It has been claimed that rare-earth ions have been added to several types of glasses for possible optical electronic applications. There have been a number of studies [21] conducted to determine whether it is possible to generate bright glass by doping with rare-earth cations. The most studied persistently emitting luminous compounds are lanthanide-doped aluminates (LA; MAl 2 O 4 ; M = Sr, Ca, Ba). Changing the amount of LA present in the supporting medium affects the intensity of the afterglow. The luminous features of strontium aluminates doped with rare earth are noteworthy because of their high quantum yield, chemical stability, safety, prolonged afterglow duration, and dazzling pure hues [22][23][24][25][26]. Due to their exceptional photostability, lanthanide aluminates (LAs) exhibit high reversibility. Inorganic phosphors have shown different emission colors such as SrAl 2 O 4 :Eu 2+ ;Dy 3+ as greenish emitter [27], CaMgSi 2 O 6 :Eu 2+ ;Dy 3+ as bluish emitter [28], and Y 2 O 2 S:Eu 3+ ;Mg 2+ ;Ti 4+ as reddish emitter [29]. 
These photochromic inorganic phosphors are among the most impressive colorants due to their efficient photoluminescence, persistent emissions (>10 h), and resistivity to light, heat and chemicals [30,31]. The benefits of alkaline earth aluminates include their lack of toxicity and radioactivity, which makes them an attractive option. LAs have been recommended for several applications [32][33][34] due to their photochromic and persistent phosphorescence characteristics. It has been observed that a change in the concentration of LAs can alter the photochromic and persistent phosphorescence properties of a bulk polyester resin. Therefore, the incorporation of Eu 2+ and Dy 3+ doped strontium aluminum oxide into a glass material represents a significant step toward the creation of transparent smart glass with long-lasting phosphorescence, energy-saving disposition, tough surface, photochromism, and cheap price [35]. Glass is characterized by its qualities of transparency, weather and rust resistance, waterproofness, and dustproofness. Thus, glass has made it possible to create windows that let in a lot of natural light. However, glass materials are brittle, costly, heat transparent, and unsafe for earthquake-prone areas [36]. On the other hand, a polyester resin can be described as an unsaturated synthetic resin that can be prepared from the interaction of a polyhydric alcohol with a dibasic organic acid. Polyesters have been utilized in various fields such as sheet and bulk molded products, fiberglass, adhesives, and cured-on-site pipes. Polyesters are distinguished by their resistance to aging, chemicals, and water. They are thermally stable up to 80 • C and cheap [37][38][39]. To boost the mechanical characteristics of polymer resins, researchers have produced fiber-reinforced polymeric composites by encasing fibers as fillers within a polymer bulk. Fiber-reinforced polymeric composites have found widespread uses in several industries, including the construction of satellites, vehicles, and aeroplanes. Nanofiber-reinforced polymer nanocomposites [40,41] have attracted a lot of attention as viable materials for many fields. This could be attributed to the improved interfacial binding strength of the filler with the matrix, which is much improved due to the increased surface area of the integrated nanofibers. Electrospinning technology has provided a practical method for producing electrospun nanofibers from a variety of materials, including ceramic, polymer, and carbon. For nanofiber-reinforced polymer nanocomposites, electrospun glass nanofibers based on silicon dioxide have been presented to have excellent mechanical properties [42]. Persistent photoluminescent, energy-saving and photochromic windows and concretes have only been described in limited studies. Herein, we have been inspired to develop new transparent GLS@PET materials with high light transmittance, photochromism, hydrophobic activity, UV protection, and long-persistent photoluminescence (lighting in the dark) to minimize energy usage in buildings. Electrospinning was used to prepare glass nanofibers to be immobilized together with LANPs across a polyester matrix, creating photoluminescent GLS@PET hybrids. TEM was used to examine LANPs. EDXA, SEM, and XRF were used to examine the morphologies of the prepared electrospun glass nanofibers and GLS@PET hybrid substrates at various concentrations of LANPs. The optical characteristics of the GLS@PET samples were investigated by looking at their luminescence spectra. 
Under UV illumination, GLS@PET became green, according to the colorimetric screening conducted with CIE Lab parameters. The scratch resistance of the GLS@PET substrates implanted with LANPs was shown to increase in tandem with the phosphor content. Studying the static contact angle revealed improved hydrophobic characteristics. For various possible uses, including safety warning, anticounterfeiting, and soft illumination, the present luminescent GLS@PET hybrids are photostable and can provide smart windows and concrete with transparent photoluminescence capabilities. Synthesis of LANPs The LA phosphor was created by the high-temperature solid-state technique [43]. A combination of 0.2 mol of Al 2 O 3 , 0.002 mol of Eu 2 O 3 , 0.02 mol of H 3 BO 3 , 0.1 mol of SrCO 3 , and 0.001 mol of Dy 2 O 3 was stirred for 3 h in 300 mL of absolute ethanol. The combination was subjected to ultrasonic treatment for 1 h at 35 kHz, dried for 2 h at 90 °C, and milled for 3 h. The resulting residue was sintered at 1300 °C for three hours to provide the LA microparticles. The LANPs were produced by subjecting the provided phosphor microparticles to the top-down technique [44]. Ten grams of LA microparticles were put into a 20-cm-diameter stainless-steel ball-milling tube mounted on a vibrating disc. A silicon carbide (SiC) milling ball was used to grind the LA powder in the tube by continuous collision between the pigment-charged tube and the vibrating disc for 23 h, yielding LANPs. Electrospinning and Silanization The electrospun glass nanofibers were made using a solution of TEOS (14%) and PVP (14%) in a solvent mixture of dimethyl sulfoxide/dimethylformamide (1/2) and then pyrolyzed at 800 °C, as described in a previous publication [45]. The given glass fibers were suspended in distilled water (5%) for 10 min and sonicated for 10 min. The fibers were then immersed in a solution of silane (15%) in absolute ethanol, heated to 50 °C, and stirred at 125 rpm for an hour to complete the silanization process. Ethanol was used both to homogenize the mixture and to rinse it after it had been allowed to sit for 10 min. To remove moisture, the resulting glass nanofibers were placed in a desiccator under vacuum. Preparation of GLS@PET Polyester resin was mixed with the glass fibers (5% w/w), stirred for 15 min at room temperature and 125 rpm, and then homogenized for 5 min to ensure a uniform distribution of the electrospun glass nanofibers throughout the polyester bulk material. LANPs were charged into the above solution at a range of concentrations (0.5%, 1%, 2%, 4%, 6%, 8%, 10%, and 12% w/w). The symbols LA 1 to LA 8 were used to represent the resultant nanocomposites, respectively, with LA 0 denoting the LANPs-free control. The ingredients were stirred for 15 min at 150 rpm and sonicated for 5 min. MEKP was added at a weight percentage of 1.5%. The combination was stirred for 5 min, drop-cast onto an aluminum mould, and cured for 30 min at room temperature to yield a composite panel measuring 200 mm in length, 5 mm in thickness, and 200 mm in width. Topographical Measurements The LANPs' morphology was examined by a JEOL 1230 TEM (Tokyo, Japan). Ten minutes of ultrasonic (35 kHz) treatment of LANPs in CH 3 CN was followed by placing a drop onto a copper grid for TEM analysis. The GLS@PET morphologies were verified using a Quanta FEG-250 SEM (Czech Republic).
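As a worked illustration of the molar charge quoted above, the short sketch below converts the stated amounts of Al2O3, Eu2O3, H3BO3, SrCO3, and Dy2O3 into weighed masses using standard tabulated molar masses. The recipe quantities come from the text; the conversion itself is only an illustrative helper, not part of the published procedure.

```python
# Illustrative conversion of the reported molar charge for the LA phosphor
# batch into weighed masses. Molar masses are standard tabulated values.

MOLAR_MASS_G_PER_MOL = {
    "Al2O3": 101.96,
    "Eu2O3": 351.93,
    "H3BO3": 61.83,
    "SrCO3": 147.63,
    "Dy2O3": 373.00,
}

BATCH_MOLES = {
    "Al2O3": 0.200,
    "Eu2O3": 0.002,
    "H3BO3": 0.020,
    "SrCO3": 0.100,
    "Dy2O3": 0.001,
}

def batch_masses(moles, molar_masses):
    """Return the mass (g) of each precursor for the stated molar charge."""
    return {name: n * molar_masses[name] for name, n in moles.items()}

if __name__ == "__main__":
    masses = batch_masses(BATCH_MOLES, MOLAR_MASS_G_PER_MOL)
    for name, m in masses.items():
        print(f"{name:6s}: {m:7.2f} g")
    print(f"total : {sum(masses.values()):7.2f} g")
```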
An EDX (TEAM) system connected to a SEM was used to analyze the luminescent GLS@PET substrates for their elemental composition. Additionally, Axios sequential XRF (Axios Instruments, Netherlands) was used to examine the chemical composition of the GLS@PET substrate. Hydrophobicity Measurements OCA15EC (Dataphysics, Filderstadt, Germany) was applied to determine the contact angle of the GLS@PET sheets [46]. Mechanical Testing The LANPs-embedded GLS@PET nanocomposite substrates were tested for their resistance to scratching [47] using HB pencils. They were also examined for their hardness properties [48] using a Shore D hardness tester (Otto Wolpert-Werke, GMBH, Ludwigshafen, Germany). The sample dimensions were a 55 mm diameter and a 20 mm thickness. Ultraviolet Blocking The ultraviolet protection factor (UPF) [49] was used to determine the UV-shielding efficiency of the luminescent GLS@PET hybrid samples. UPF was measured employing a UV-visible spectrophotometer according to the AATCC 183(2010) standardized technique. Photoluminescence Analysis A JASCO FP-8300 (Japan) was employed to run the spectral analysis of photoluminescence. The photoluminescence lifetime of the ready-to-use GLS@PET substrates was measured using a phosphorescence accessory tool. In order to investigate the decay time, the GLS@PET substrate was subjected to 15 min of UV irradiation. The GLS@PET sample was then completely shielded from the UV light source to measure the decay time results. Reversibility Evaluation For five minutes, the photoluminescent GLS@PET sample was exposed to UV light as per the established protocol [50]. After waiting 60 min in a dark wooden box, the GLS@PET sample was returned to its original state. Multiple rounds of UV irradiation and measurement of the resulting fluorescence spectra were performed. Colorimetric Properties UltraScanPro (HunterLab, Reston, VA, USA) assessed the CIE Lab coordinates and color strength (K/S) of GLS@PET substrates before and after UV irradiation. Known in French as the "Commission Internationale de L'éclairage," the CIE Lab is widely considered to be the preeminent authority on issues related to color vision, lighting, and illumination. The CIE Lab system [51] provides a numerical depiction of colors. Before and after being subjected to UV light, the Canon A710IS was utilized to take images of GLS@PET. Preparation of GLS@PET Sheets The solid-state high-temperature synthesis [43] was employed to prepare LA micropowder, which was subsequently treated with the top-down grinding technology [44] to yield LA nanoparticles. Figure 1 displays TEM and selected area electron diffraction (SAED) images. Neither defects nor dislocations were detected in the SAED image [52]. Using TEM, the diameter of LANPs was measured to be between 7 and 15 nm. The nano-scale materials have been essential to achieve the transparency of a matrix, which is critical for a variety of applications [53]. Thus, the LANPs were used to maintain the transparency of the GLS@PET matrix. Several concentrations of LANPs were used to create GLS@PET hybrids. The glass fibers were electrospun using formerly described methods [45]. Different concentrations of LANPs were blended with the given electrospun glass nanofibers and polyester resin. 
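Since the colorimetric assessment described above relies on the CIE Lab coordinates and the color strength K/S, a minimal sketch of the two standard quantities involved may be helpful: the Kubelka-Munk relation converting a measured reflectance into K/S, and the CIE 1976 color difference between coordinates recorded before and after UV exposure. The numerical values in the example are hypothetical and are not measurements from this work.

```python
import math

def kubelka_munk_ks(reflectance):
    """Color strength K/S from a reflectance value R in (0, 1]
    via the Kubelka-Munk relation K/S = (1 - R)^2 / (2 R)."""
    if not 0.0 < reflectance <= 1.0:
        raise ValueError("reflectance must lie in (0, 1]")
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

def delta_e_ab(lab_before, lab_after):
    """CIE 1976 color difference between two (L*, a*, b*) triplets,
    e.g. the same GLS@PET sheet measured before and after UV exposure."""
    return math.dist(lab_before, lab_after)

if __name__ == "__main__":
    # Hypothetical numbers purely for illustration (not measured values).
    print(f"K/S at R = 0.60: {kubelka_munk_ks(0.60):.3f}")
    print(f"dE*ab daylight -> UV: {delta_e_ab((92.0, -1.0, 2.0), (78.0, -18.0, 12.0)):.2f}")
```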
The present persistent phosphorescent, hydrophobic, photochromic, and UV-protective GLS@PET sheets have been shown to be practical for industrial production in a variety of applications, such as directional marks, soft lights, smart windows, smart concrete, and anticounterfeiting materials. Figures 2 and 3 show the topographical characteristics of the glass fibers and the luminous GLS@PET substrates, respectively. Using EDXA, the elemental contents (wt%) of GLS@PET at three dissimilar surface locations are shown in Table 1. SEM analysis of the manufactured electrospun glass nanofibers revealed diameters between 90 and 140 nm. The surface topography of GLS@PET remained unchanged when the LANPs ratio was raised. No phosphor particles were observed on the GLS@PET surface, indicating that the LANPs are entirely incorporated within the GLS@PET bulk. The EDXA analysis verified the presence of the phosphor elemental components in the GLS@PET substrates. The EDXA analysis of the GLS@PET surface was performed at three distinct sites to evaluate the elemental compositions of the fabricated GLS@PET substrates. Similar elemental ratios were detected at the three examined locations, demonstrating that the phosphor particles spread uniformly over the sample matrix. According to Table 1, many elemental components were detected by EDXA analysis. Due to the incorporation of LANPs and glass nanofibers in the polyester bulk, many elements were detected in the GLS@PET substrates, including oxygen, carbon, silicon, strontium, aluminum, dysprosium, and europium. Both oxygen and carbon were assigned to the polyester bulk material. The presence of silicon was assigned to the glass nanofibers. Low levels of strontium, europium, dysprosium, and aluminum were found due to the trace quantities of LANPs employed in the production process of the GLS@PET substrates.
XRF was also applied to determine the chemical compositions of the luminous GLS@PET sheets, as shown in Table 2. The EDXA method provides a more accurate elemental analysis of a material, whereas XRF analysis can only identify elements at quantities higher than 10 ppm [54]. Hence, XRF provides a diagnostic method for the partial determination of a material's elemental composition. Thus, only aluminium and strontium were detected by XRF in the GLS@PET substrates with lower LANPs ratios (LA 0 and LA 1 ), and the minute amounts of Eu and Dy present were not traceable. Strontium, dysprosium, europium, and aluminum were identified by XRF in GLS@PET with higher LANPs ratios (LA 2 -LA 8 ). The EDX and XRF studies indicated that the elemental ratios in LA and in the luminous GLS@PET substrates were quite similar. Photoluminescence Spectra A translucent backdrop of a material substrate is necessary for better visual perception of the color change to green [55]. A phosphor-infused GLS@PET hybrid was tested for photochromism and found to undergo fast, reversible color change. The GLS@PET containing LANPs at ratios of 1% or less showed instant reversibility, indicating fluorescence emission. The sheets of GLS@PET with LANPs content over 1% kept glowing even after being placed in darkness, a phenomenon known as delayed reversibility that indicates afterglow emission. Figure 4 displays the excitation spectra of GLS@PET, illustrating the impact of the LANPs' concentration on the resulting sheets. The emission peak intensity of LA 6 increased as the duration of UV illumination increased from 100 to 400 s. However, no further increase was detected in the emission peak intensity of LA 6 upon increasing the illumination time above 400 s. Figure 5 depicts the LA 6 photoluminescence spectra against the illumination time. The emission peak was monitored at 518 nm after excitation at 368 nm. The strength of the emission intensity band grew with the lengthening of the illumination time. The glass nanofiber-reinforced polyester served as a transparent trapping bulk for the phosphor pigment nanoparticles. By either physical entrapment of LANPs within the GLS@PET medium or coordination binding between the Al 3+ of LANPs and the oxygen atoms of GLS@PET [50,55], the addition of LANPs strengthened the binding between GLS@PET polymer chains. It has been observed that the 4f↔5d transition of Eu 2+ produces a green light emission wavelength of 518 nm. The LANPs usually provide two emission bands of blue (shorter wavelength) and green (longer wavelength) color emissions that originate from two different strontium locations in the SrAl 2 O 4 crystal. However, ambient thermal quenching considerably reduces the intensity of the blue peak [56]. As a result, we could only see emissions in a greenish color. Therefore, we may deduce that only the emissions from Eu(II), and not Eu(III), had an effect on the photoluminescence spectra. The time-dependent exponential decay of the light from the GLS@PET substrate is of second order. The rate of decay was initially rapid but slowed considerably afterwards.
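Afterglow decay curves of the kind described above are commonly summarized by fitting an empirical double-exponential model with a fast and a slow component. The sketch below assumes that functional form and uses synthetic data in place of a measured LA 6 trace; it only shows how the two decay constants could be extracted, not the analysis actually performed in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exponential(t, a1, tau1, a2, tau2, y0):
    """I(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2) + I0, a common empirical
    afterglow decay model with a fast and a slow component."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + y0

def fit_afterglow(time_s, intensity):
    """Fit a decay curve (time in s, arbitrary intensity units) and return
    the fitted parameters (A1, tau1, A2, tau2, I0)."""
    p0 = (intensity[0], 5.0, intensity[0] / 5.0, 60.0, intensity[-1])
    popt, _ = curve_fit(double_exponential, time_s, intensity, p0=p0, maxfev=10000)
    return popt

if __name__ == "__main__":
    # Synthetic data standing in for a measured decay trace.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 300.0, 400)
    noisy = double_exponential(t, 800.0, 4.0, 200.0, 80.0, 5.0) + rng.normal(0.0, 3.0, t.size)
    a1, tau1, a2, tau2, y0 = fit_afterglow(t, noisy)
    print(f"fast component: tau1 = {tau1:.1f} s, slow component: tau2 = {tau2:.1f} s")
```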
Photochromic Study Phosphor nanoparticles were incorporated into the GLS@PET hybrid to create a transparent smart sheet, such as a smart window or concrete. As shown in Figure 6, photos of LA 6 were taken both in the visible spectrum and under UV illumination to analyze the photochromism of the GLS@PET sheets. Significant bright green emissions were observed under UV, but no traces were discernible in daylight. Anticounterfeiting applications, such as packaging, have made use of photochromism induced by exposure to UV light [57][58][59]. Therefore, any product that employs a standard anti-counterfeiting strategy may benefit from the current approach. A rectangular gasket was made using the existing GLS@PET hybrid. The gasket is invisible during the day but becomes fluorescent green when subjected to ultraviolet illumination, making it hard to duplicate.
The current approach is a reliable strategy since it has been used successfully to develop several anti-counterfeiting solutions for a more robust marketplace. The created GLS@PET substrates had their optical transparency examined to verify their promised clarity. The optical transmittance decreased somewhat as the quantity of LANPs increased in the GLS@PET bulk. LA 1 had a transmission rate of 91%, whereas LA 8 only managed 86%. Under UV light, the LA 1 and LA 8 samples, which appeared clear in daylight, took on a distinctly greenish color. Because the present photoluminescent GLS@PET hybrids are transparent in the visible spectrum, it is easier to produce anti-counterfeiting patterns to prevent the forging of commercial items. It has been hypothesized that the 4f 6 5d 1 ↔ 4f 7 transition of Eu(II) is responsible for the emission of the LA phosphor [51]. Due to the absence of a discernible emission peak for Eu 3+ or Dy 3+ , it was concluded that Eu 3+ had been substituted with Eu 2+ . Dy 3+ was also shown to be responsible for the formation of traps that allow for the release of light in the dark and consequently allow Eu 2+ to revert to its ground state. To achieve durability and photostability, reversibility is required for materials with photochromism and persistent luminescence. Several cycles of color change under UV and visible light were used to confirm that LA 6 maintained its excellent reversibility, as shown in Figure 7. Table 3 displays the photo-induced chromic features of the produced GLS@PET hybrid composites. From LA 0 through LA 6 , the GLS@PET substrates were clearly transparent. However, hybrid composites with a higher concentration of phosphor nanoparticles (LA 7 and LA 8 ) had a slightly white color. Only if the LANPs are evenly dispersed throughout the GLS@PET bulk can the transparency of phosphor-containing GLS@PET substrates be guaranteed, as nanomaterials have shown exceptional transparency [53]. The colorless GLS@PET substrates (LA 1 through LA 2 ) with negligible amounts of phosphor nanoparticles emitted an intense green color only under the ultraviolet spectrum, indicating fluorescence emission. The GLS@PET hybrid composites (LA 3 through LA 8 ) with high concentrations of phosphor nanoparticles emitted a greenish color under ultraviolet light and a greenish-yellow glow in darkness, signifying afterglow. As the ratio of LANPs was raised, the GLS@PET hybrid turned from transparent to slightly white. While the LANPs ratio was raised from LA 0 to LA 6 , the colorimetric strength value changed only slightly under visible light. When the LANPs ratio was raised further, to LA 7 and LA 8 , a substantial rise in K/S was seen, signifying a whiter color. When illuminated with ultraviolet light, increasing the LANPs content from LA 1 to LA 8 led to higher K/S. The K/S values were larger under ultraviolet illumination than for the equivalent non-illuminated GLS@PET. This could be attributed to the greenish emission under ultraviolet illumination, in contrast to the colorless look measured under visible daylight. LANPs-free GLS@PET (LA 0 ) showed little variation in the CIE Lab coordinates under either visible or UV illumination. In contrast, phosphor-containing GLS@PET substrates revealed significant variations in the CIE Lab coordinates. Increases in the LANPs ratio resulted in a reduction in light transmission, which in turn led to a marginal fall in L* under visible lighting conditions.
L* was demonstrated to drop considerably under ultraviolet illumination when the LANPs ratio was increased, indicating a deeper green shade. Slight changes in -a* and +b* were detected under daylight as the LANPs ratio increased. Under UV, the magnitudes of -a* and +b* were demonstrated to grow when the LANPs ratio was elevated. The GLS@PET hybrid composites with low concentrations of phosphor nanoparticles (LA 1 and LA 2 ) showed fluorescence emission, whereas the GLS@PET substrates that contained the highest ratios of phosphor nanoparticles (LA 3 to LA 8 ) showed afterglow emission. The values of +b* and -a* decreased when the quantity of LANPs was increased from LA 1 to LA 8 . Higher quantities of phosphor produced a white hue in both the LA 7 and LA 8 substrates. The LA 2 sample, which appeared colorless in visible light, exhibited the greatest photo-induced green fluorescence and photochromic activity.
On the other hand, the LA 6 sample retained its colorless look despite having the maximum phosphorescent greenish emission and the ability to glow in the dark. Table 3. Colorimetric screening of LANPs-free and LANPs-loaded GLS@PET under the visible spectrum (VS) and under UV illumination; L* represents lightness between black (0) and white (100), a* represents the color ratio between red (+a*) and green (−a*), and b* represents the color ratio between yellow (+b*) and blue (−b*). Hydrophobic and UV-Blocking Properties When LANPs were introduced into the GLS@PET hybrid, the contact angle rose from 136.5° in the control sample (LA 0 ) to 137.6° (LA 1 ). When the phosphor ratio was raised from LA 1 to LA 6 , the LA nano-scaled particles roughened the surface, which increased the contact angle from 137.6° to 146.2°, respectively. The roughness and contact angle of LA 7 and LA 8 were marginally reduced when the quantity of LA nanoparticles was increased further. Smart windows that can filter ultraviolet rays are a convenient way to protect humans from sunburn, erythema, and skin cancer [28,29]. Figure 8 displays the results of testing the ultraviolet protection properties of GLS@PET. The integrated LANPs in LA 1 have a significant ultraviolet absorption capacity, providing strong UV protection. Therefore, the LA 1 performance in terms of protecting against UV rays was much better than that of LA 0 . Increasing the LANPs concentration further improved the UV blocking properties of the luminous colorless GLS@PET hybrids. The transparent, light-emitting GLS@PET hybrid can be used in smart windows that reduce energy costs. During the day, a lot of UV light enters via the photochromic window; it therefore generates a greenish hue that blocks the sun's rays from entering the building. The photochromic GLS@PET hybrid reverts to its colorless state in low light, allowing more light to enter the building interior. Mechanical Properties The hardness performance of the prepared GLS@PET samples is a critical factor in identifying their durability and extent of deformation. Therefore, hardness is an important characteristic and a valuable parameter for evaluating a composite's performance [48,55]. The aim of this study is to develop a method for making transparent GLS@PET with a smooth exterior. Consequently, a series of scratch and hardness tests were performed to evaluate the mechanical features of GLS@PET. As a quick and easy method, the scratch resistance property was evaluated using pencils [47]. Scratch pencils (6B to 9H) were used to create scratch patterns on the GLS@PET hybrid composites. The LANPs-free GLS@PET sample (LA 0 ) was easily scratched using the HB pencil. For the samples from LA 1 to LA 8 , the scratch resistance values were monitored at H, H, H, H, 2H, 2H, 3H, and 3H, respectively. Thus, increasing the LANPs ratio improved the scratch resistance of GLS@PET. Figure 9 shows the relationship between the LANPs ratio and the hardness properties of GLS@PET. The hardness of the prepared GLS@PET samples was found to decrease from 12.96 kg/mm² to 10.21 kg/mm² with increasing LANPs filler ratio from 0.5% (LA 1 ) to 4% (LA 4 ), respectively. The hardness then increased up to 12.39 kg/mm² at a LANPs ratio of 12% (LA 8 ).
Likewise, the impact strength (per unit volume) was observed to decrease from 13.26 MPa to 9.14 MPa when increasing the LANPs ratio from 0.5% (LA 1 ) to 4% (LA 4 ), respectively. The impact strength then increased to 15.13 MPa at a LANPs ratio of 12% (LA 8 ). The improved hardness could be attributed to the incorporation of LANPs, which serve as a very effective stress-transmission agent inside the GLS@PET framework. Increasing the phosphor ratio increases the GLS@PET hardness by strengthening the intermolecular coordination linkages between the polyester oxygen and the Al 3+ of LA. Al 3+ may also function as a catalytic agent that increases the polyester polymerization rate, improving the sample hardness. The LA phosphor creates a 3D polymer network with a higher molecular weight [48,50,55], with Al 3+ acting as a coordinating crosslinker between oxygen atoms on the polyester chains. Conclusions Using a GLS@PET hybrid host material and LANPs, this research set out to create smart transparent windows and concrete that react to UV light. This GLS@PET substrate has photochromic and long-lasting luminescence properties, making it suitable for application in smart window and concrete technologies. The current technique can be utilized to make transparent GLS@PET hybrid nanocomposites that can change color and rely on photochromism to toggle their light transmission. With this simple technique, we were able to show that photochromic GLS@PET substrates with the desired characteristics, including transparency, photostability, UV protection, and hydrophobicity, can be produced. LANPs (afterglow and photochromic agents) were synthesized using the high-temperature solid-state method followed by the top-down milling technology. Transmission electron microscopy examinations revealed that the generated phosphor nanoparticles had sizes between 7 and 15 nm. Electrospinning was used to prepare glass nanofibers as a reinforcement agent. Both LANPs and glass nanofibers were uniformly embedded in the transparent polyester resin, yielding multifunctional photoluminescent materials. The GLS@PET substrates were analyzed for their morphological features using EDXA, XRF, and SEM. Luminescent GLS@PET showed photochromism by changing color from translucent to green under UV light, as measured by photoluminescence spectra and CIE Lab values. The contact angle of the GLS@PET hybrid nanocomposites increased from 136.5° to 146.2° when the LANPs ratio was raised. Scratch resistance and hardness were also shown to improve with an increase in the phosphor ratio.
It has been reported that the use of a 1% phosphor ratio is optimal for fluorescence photochromism in GLS@PET, yielding a transparent material that emits a vivid green color when exposed to ultraviolet light. The colorless GLS@PET with a 6% phosphor ratio was found to have the optimum ratio for long-persistent phosphorescence. The GLS@PET hybrid composites showed excellent photostability.
8,939
2023-02-01T00:00:00.000
[ "Materials Science" ]
On electroweak corrections to neutral current Drell–Yan with the POWHEG BOX Motivated by the requirement of a refined and flexible treatment of electroweak corrections to the neutral current Drell-Yan process, we report on recent developments on various input parameter/renormalization schemes for the calculation of fully differential cross sections, including both on-shell and MS schemes. The latter are particularly interesting for direct determinations of running couplings at the highest LHC energies. The calculations feature next-to-leading order precision with additional higher-order contributions from universal corrections such as ∆α and ∆ρ. All the discussed input parameter/renormalization scheme options are implemented in the package of POWHEG-BOX-V2 dedicated to the neutral current Drell-Yan simulation, i.e. Z_ew-BMNNPV, which is used to obtain the presented numerical results. In particular, a comprehensive analysis of physical observables calculated with different input parameter/renormalization schemes is presented, addressing the Z-peak invariant mass region as well as the high-energy window. We also take the opportunity to report on additional improvements and options introduced in the package Z_ew-BMNNPV after svn revision 3376, such as different options for the treatment of the hadronic contribution to the running of the electromagnetic coupling and for the handling of the unstable Z resonance. Introduction The neutral current Drell-Yan (NC DY) process plays a particular role in the precision physics programme of the LHC. In fact, considering its large cross section and clean experimental signature, together with the high-precision measurement of the Z-boson mass at LEP, this process is a standard candle that can be used for different general purposes such as detector calibration, the constraining of Parton Distribution Functions (PDFs), and the tuning of non-perturbative parameters in general-purpose Monte Carlo event generators. Moreover, in the high tails of the transverse momentum and invariant mass distributions of the produced leptons, NC DY is one of the main irreducible backgrounds to searches for New Physics at the LHC. Recently, an impressive precision at the sub-percent level has been reached by the experimental analyses in large regions of the dilepton phase space. In addition to the above general aspects, it has to be stressed that the NC DY process also allows one to perform precision tests of the Standard Model electroweak (SM EW) parameters through the direct determination of the W-boson mass [1][2][3][4] and the weak mixing angle in hadronic collisions [5][6][7][8][9]. While in the former case the NC DY observables enter only indirectly, the latter can be determined directly, without reference to the charged current DY process. Another interesting possibility is the direct determination of the running of the coupling sin²θ, defined in the MS scheme, at the highest available LHC energies, in order to check its consistency with low-energy and Z-peak measurements within the SM framework [10].
A fundamental role in simulations for collider phenomenology is played by Monte Carlo event generators capable of consistently matching fixed-order calculations to parton showers (PS) simulating multiple soft/collinear radiations. The MC@NLO [94] and POWHEG [95,96] algorithms have been developed for the matching of NLO QCD computations to QCD PS and implemented in the public software MadGraph5_aMC@NLO [97] and POWHEG-BOX [98,99]. Alternative formulations of the above algorithms are used in event generators like Sherpa [100,101] and HERWIG [102,103]. More recently, algorithms like NNLOPS [104] (based on a reweighting of the MiNLO′ [105] merging strategy), UNNLOPS [106,107], Geneva [108], and MiNNLO_PS [109,110] have been proposed for the matching of NNLO QCD calculations to QCD PS. Though several studies have appeared on the approximate inclusion of EW corrections in event generators including higher-order QCD corrections (see, for instance, Refs. [111][112][113]), event generators including both NLO QCD and NLO EW corrections consistently matched to both QCD and QED parton showers are only available for a limited number of processes, namely: charged and neutral current Drell-Yan [114][115][116][117][118], HV + 0/1 jet [118], diboson production [119], and electroweak H + 2 jets production [120] 1. QED-correction exclusive exponentiation for DY processes within the YFS framework is realized within the event generator KKMC-hh [122][123][124][125][126]. Very recently, the resummation of EW and mixed QCD-EW effects up to next-to-leading logarithmic accuracy has been presented in Ref. [127] for charged and neutral current DY. The first implementations of QCD and EW NLO corrections and their interplay in a unique simulation framework have been given in Ref. [114] for charged current DY 2 and in Ref. [116] for NC DY. An important improvement in the matching of QED radiation in the presence of a resonance, following the ideas presented in Ref. [99], has been discussed in Ref. [128] for the charged current DY process (W_ew-BMNNP svn revision 3375) and extended to the NC one (Z_ew-BMNNPV svn revision 3376) 3. After the above-mentioned release, additional improvements and options have been introduced, in particular for the NC DY package Z_ew-BMNNPV, motivated by the need for a refined and flexible treatment of EW corrections that allows a consistent internal estimate of the uncertainties affecting the theoretical predictions. They can be schematically enumerated as follows: (i) input parameter/renormalization schemes; (ii) introduction of known higher-order corrections; (iii) treatment of the hadronic contribution to the running of the electromagnetic coupling, ∆α_had; (iv) scheme for the treatment of the unstable resonance. In the following we give a detailed description of the various input parameter schemes and the related higher-order corrections. A key ingredient of the latter is given by the running of the electromagnetic coupling α between different scales. In particular, when low scales are involved in the running, the hadronic contribution is intrinsically non-perturbative, and different parametrizations relying on low-energy experimental data have been developed in the literature, which we properly include in our formulation of the electroweak corrections. The formulae relevant for the various input parameter/renormalization schemes are presented in a complete and self-contained form, so that they can be implemented in any simulation tool.
Though the Z_ew-BMNNPV package allows one to simulate NC DY production at NLO QCD+NLO EW accuracy with consistent matching to the QED and QCD parton showers provided by PYTHIA8 [129,130] and/or Photos [131][132][133], in the present paper we are mainly interested in various aspects of the fixed-order calculation (NLO EW plus universal higher orders). For this reason we show numerical results obtained at fixed order and including only the weak corrections, since the QED contributions are not affected by the choice of the input parameter scheme and are a gauge-invariant subset of the EW corrections for the NC DY process. The layout of the paper is the following: Section 2 provides an introduction to the input parameter schemes available in the code, while general considerations on higher-order universal corrections, common to all the schemes, are presented in Section 3. A detailed account of the various input/renormalization schemes at NLO accuracy and the related higher-order corrections is given in Section 4, while a numerical analysis of the features of the schemes, with reference to the cross section and the forward-backward asymmetry as functions of the dilepton invariant mass, is presented in Section 5, together with a discussion of the main parametric uncertainties in Section 6. The treatment of the hadronic contribution to the running of α is discussed in Section 7, while Section 8 is devoted to the description of the improvement of the code with respect to the treatment of the unstable Z resonance. In Section 9 we analyse the effect of the various input/renormalization schemes in the high-energy regimes, which will be accessible in the HL-LHC phase and at a future FCC-hh. A brief summary is given in Section 10. The list of the default parameter values 4 is contained in Appendix A, while the list of the flags activating the available options is given in Appendix B. Input parameter schemes: general considerations The input parameter schemes available in the Z_ew-BMNNPV package of POWHEG-BOX-V2 can be divided into three categories: the ones including both the W- and the Z-boson masses among the independent parameters; the (α_0, G_µ, M_Z) scheme, where we use the notation α_0 for the QED coupling constant at Q² = 0; and the schemes with M_Z and the sine of the weak mixing angle as free parameters. The latter class includes the schemes that use sin θ_eff^l as input parameter, where θ_eff^l is the effective weak mixing angle, and a hybrid MS scheme where the independent quantities are α_MS(µ²), sin²θ_MS(µ²) and M_Z, with the couplings renormalized in the MS scheme and M_Z defined with the usual on-shell prescription.
The first class of input parameter schemes, namely (α_i, M_W, M_Z) with α_i = α_0, α(M_Z²), or G_µ, is widely used for the calculation of the EW corrections to processes of interest at the LHC. On the one hand, the fact that the W boson mass is a free parameter is a useful feature, in particular in view of the experimental determination of M_W from charged current Drell-Yan production using template-fit methods; on the other hand, the predictions obtained in these schemes can suffer from relatively large parametric uncertainties related to the current experimental precision on M_W. This drawback is overcome, for instance, in the (α_0, G_µ, M_Z) scheme used for the calculation of the EW corrections in the context of LEP physics, where all the input parameters are experimentally known with high precision. The third class of input parameter schemes uses the sine of the weak mixing angle as a free parameter. In the Z_ew-BMNNPV package the (α_i, sin²θ^l_eff, M_Z) schemes, with α_i = α_0, α(M_Z²), or G_µ, and the (α_MS(µ²), s²_W,MS(µ²), M_Z) scheme are implemented. The schemes where sin²θ^l_eff (s²_W,MS(µ²)) is a free parameter are particularly useful in the context of the experimental determination of sin²θ^l_eff (s²_W,MS(µ²)) from NC DY production at the LHC using template fits [10,118].

The predictions for NC DY production obtained in the schemes that use α(M_Z²) or G_µ as inputs show a better convergence of the perturbative series than the corresponding results from the schemes with α_0 as free parameter. This is a consequence of the fact that, when α(M_Z²) or G_µ are used as independent variables, large parts of the radiative corrections related to the running of α(Q²) from Q² = 0 to the electroweak scale are reabsorbed into the LO predictions. On the contrary, the EW corrections in the schemes with α_0 as input tend to be larger, because the running of α(Q²) involves logarithmic corrections of the form log(m²/Q²), where m stands for the light-fermion masses (we refer to Sect. 7 for the treatment of the light-quark contributions to the running of α(Q²)) and Q² is the typical large mass scale of the process.

From a technical point of view, the calculation of the one-loop electroweak corrections in the above-mentioned input parameter schemes differs in the renormalization prescriptions used for the computation, while the bare part of the Drell-Yan amplitude remains formally the same and is just evaluated with different numerical values of the input parameters. For each choice of input scheme, the renormalization is performed as follows: first the electroweak parameters are expressed as functions of the three selected independent quantities, then the counterterms corresponding to these parameters are fixed by imposing suitable renormalization conditions, and finally the counterterms for the derived electroweak parameters are written in terms of the ones corresponding to the input parameters.

The fact that the counterterm part of the Drell-Yan amplitude differs in the considered input parameter schemes implies that the expression of the universal fermionic corrections changes as well, since at NLO these corrections can be related to the counterterm amplitude. In fact, they can be computed at O(α²) by taking the square of the fermionic universal contributions at NLO (after subtracting the O(α) terms already included in the NLO calculation). We refer to Sect. 3 for details.
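To give a sense of the size of the running that the α(M_Z²)- and G_µ-based schemes reabsorb into the LO couplings, the following minimal Python sketch evaluates α(M_Z²) = α_0/(1 − ∆α); the values used for the leptonic and hadronic contributions to ∆α are indicative numbers, not the defaults of the package.

# Illustrative sketch (not the package implementation): size of the running of alpha
# reabsorbed into the LO couplings when alpha(M_Z^2) or G_mu are used as inputs.
alpha0     = 1.0 / 137.035999   # fine-structure constant in the Thomson limit
dalpha_lep = 0.031498           # leptonic contribution to Delta alpha(M_Z^2) (indicative)
dalpha_had = 0.02766            # hadronic contribution Delta alpha_had^(5)(M_Z^2) (indicative)
dalpha = dalpha_lep + dalpha_had            # small negative top contribution neglected here

alpha_MZ = alpha0 / (1.0 - dalpha)          # Dyson-resummed effective coupling at the Z scale
print(f"Delta alpha(M_Z^2) ~ {dalpha:.4f},  1/alpha(M_Z^2) ~ {1.0/alpha_MZ:.2f}")
# -> roughly 1/alpha(M_Z^2) ~ 128.9: the ~6% coupling shift absorbed into the LO prediction
#    by the alpha(M_Z^2)- or G_mu-based schemes, which instead shows up as a large O(alpha)
#    correction when alpha_0 is used as input.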
The input parameter schemes described in the following sections are formally equivalent at a given order in perturbation theory; however, the numerical results obtained with them differ because of the truncation of the perturbative expansion. Although there is some arbitrariness in the choice of the input parameter scheme to be used for the calculations, there can be phenomenological motivations to prefer one scheme over the others, depending on the observables under consideration and on the role played by the theory predictions in the interpretation of the experimental measurements. For instance, in the context of cross section or distribution measurements, where the theory predictions are used as a benchmark for the experimental results but do not provide input for a parameter determination, one should prefer input parameter schemes that involve independent quantities known experimentally with high precision, in order to minimize the corresponding parametric uncertainties. One should also try to minimize the parametric uncertainties from quantities that enter the calculation only at loop level (such as, for instance, the top-quark mass in DY processes). Another aspect that should be taken into account when choosing an input parameter scheme is the convergence of the perturbative expansion in the predictions for the observables of interest, which is mainly related to the possibility of reabsorbing large parts of the radiative corrections into the definition of the coupling at LO.

A different situation is the direct determination of electroweak parameters using template-fit methods, as done for example for the W boson mass at the Tevatron and the LHC. In this case, the theory predictions enter the interpretation of the measurement (through the Monte Carlo templates) and the theory uncertainties become part of the total systematic error on the quantity under consideration: it is thus important to use an input parameter scheme where the quantity to be measured is a free parameter that can be varied independently not only at LO, but also at higher orders in perturbation theory.

Higher-order corrections

At moderate energies, the leading corrections to NC DY production are related to the logarithms of the light-fermion masses and to terms proportional to the top-quark mass squared. These contributions can be traced back to the running of α(Q²) (i.e. to ∆α) and to ∆ρ, and are thus related to the counterterm amplitude for the process under consideration. Following Refs. [29,135-137], these effects can be taken into account at O(α²) by taking the square of the part of the counterterm amplitude proportional to ∆α and ∆ρ. They can then be combined with the full NLO calculation after subtracting the terms linear in ∆α and ∆ρ appearing in the square of the counterterm amplitude, which are already present in the NLO computation.
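As a rough numerical illustration of the two ingredients just described, the sketch below (plain Python, not the Z_ew-BMNNPV implementation) evaluates the leading one-loop top contribution to ∆ρ from the quantity x_t defined in the next paragraph, and shows the bookkeeping by which a universal factor multiplying the LO result is squared and combined with the exact NLO prediction without double counting; the input values, the function name, and the factor delta_univ are illustrative assumptions.

import math

# Leading one-loop top contribution to Delta rho (QCD and higher-order Yukawa terms of
# Eq. (1) are omitted in this sketch); input values are indicative.
G_mu, M_top = 1.1663787e-5, 173.0            # Fermi constant [GeV^-2], top mass [GeV]
x_t = math.sqrt(2.0) * G_mu * M_top**2 / (16.0 * math.pi**2)
delta_rho = 3.0 * x_t
print(f"x_t = {x_t:.3e},  Delta rho (one loop) = {delta_rho:.4f}")   # ~ 0.009

def add_universal_ho(sigma_lo, sigma_nlo, delta_univ):
    # delta_univ: scheme-dependent combination of Delta alpha and Delta rho terms that
    # multiplies the LO result; its square gives the O(alpha^2) universal piece, while
    # the linear term is subtracted because the exact NLO already contains it.
    return sigma_nlo + ((1.0 + delta_univ) ** 2 - 1.0 - 2.0 * delta_univ) * sigma_lo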
The numerical results for the fermionic higher-order corrections presented in the following are obtained using the one-loop expression for ∆α (even though the two-loop leptonic corrections are also available in the Z_ew-BMNNPV package and can be activated with the flag dalpha_lep_2loop), while for ∆ρ we include the leading Yukawa corrections up to O(α²_S), O(α_S x²_t), and O(x³_t), with x_t = √2 G_µ M²_top/(16π²). More precisely, the expression used for ∆ρ is given in Eq. (1), where ∆ρ^(2) is the two-loop heavy-top correction to the ρ parameter [138-140], δ^(2)_QCD and δ^(3)_QCD are the two- and three-loop QCD corrections [141-144], while the three-loop contributions ∆ρ^(x³_t) and ∆ρ^(x²_t α_S) are taken from Ref. [145]. The last term in Eq. (1) is introduced in order to avoid the double counting of the O(x²_t α²_S) contribution already present, in factorized approximation, in the product of the QCD corrections and the two-loop Yukawa corrections. The four-loop QCD corrections to the ρ parameter [146] are not included. By inspection of the numerical impact of the three-loop QCD corrections (cf. Figs. 7 and 8), the phenomenological impact of the four-loop QCD corrections to the ρ parameter is expected to be negligible at the LHC. For the numerical studies presented in the following, the scale of the α_S factors entering the QCD corrections to ∆ρ is set to the invariant mass of the dilepton pair.

Input parameter schemes: detailed description

In the following subsections we present a detailed account of the available input parameter schemes at NLO weak accuracy and of the related universal higher-order corrections (in what follows, the label NLO+HO refers to NLO plus higher-order accuracy). In the last subsection we present a comparison of the radiative corrections obtained with the different parameter schemes for two relevant differential observables (the cross section and the forward-backward asymmetry as functions of the dilepton invariant mass M_ll) of the NC DY process at the LHC with √s = 13 TeV, considering µ⁺µ⁻ final states. In the following, for the sake of notational simplicity, whenever the complex-mass scheme (CMS in the following) is used for the treatment of the unstable gauge bosons, the gauge-boson mass symbols are understood as the corresponding complex quantities.

4.1 The (α_i, M_W, M_Z) schemes

In these schemes, the input parameters are the W and Z boson masses and α_i. The counterterms for the independent parameters are defined, as usual, by splitting each bare parameter (denoted with the subscript b) into a renormalized parameter and a counterterm. The expression of δZ_e is fixed by imposing that the NLO EW corrections to the γe⁺e⁻ vertex vanish in the Thomson limit, while δM²_W and δM²_Z are obtained by requiring that the gauge-boson masses do not receive radiative corrections. The analytic expressions of the counterterms can be found in Ref. [136] (and in Refs. [147-149] if the complex-mass scheme is used). In the following, for the self-energies and the counterterms we use the notation of Ref. [136].
In the schemes with M_W and M_Z as independent parameters, the sine of the weak mixing angle is a derived quantity, defined through the on-shell relation s²_W = 1 − M²_W/M²_Z, and the corresponding counterterm reads δs²_W/s²_W = (c²_W/s²_W)(δM²_Z/M²_Z − δM²_W/M²_W). When α(M_Z²) or G_µ are used as input parameters, the calculation of the O(α) corrections is formally the same as in the (α_0, M_W, M_Z) scheme, but with the replacements δZ_e → δZ_e − ∆α(M_Z²)/2 and δZ_e → δZ_e − ∆r/2, respectively, which take into account the running of α(Q²) from Q² = 0 to the weak scale that is absorbed into the LO coupling (α(M_Z²) or G_µ). It is worth noticing that these replacements remove the logarithmically enhanced fermionic corrections coming from ∆α. The factor ∆r represents the full one-loop electroweak corrections to the muon decay in the (α_0, M_W, M_Z) scheme after the subtraction of the QED effects in the Fermi theory.

In the Z_ew-BMNNPV package, we implemented a slightly modified version of Eqs. (3.45)-(3.49) of Ref. [29] for the computation of the leading fermionic corrections to neutral-current Drell-Yan up to O(α²). More precisely, those equations are modified in such a way as to be valid also in the complex-mass scheme. As discussed in Sect. 3, in order to combine these higher-order fermionic corrections with the NLO EW results, it is mandatory to subtract the effects that are included in the full one-loop calculation, so as to avoid double counting. In particular, this implies the replacement ∆ρ → (∆ρ − ∆ρ_1-loop) in the linear terms of the fermionic corrections up to O(α²): if we use, optionally by means of the flag a2a0-for-QED-only, the same value of α_i used in the LO couplings for the overall weak-loop factors, then ∆ρ_1-loop is computed as a function of α_0, α(M_Z²), or G_µ in the (α_0, M_W, M_Z), (α(M_Z²), M_W, M_Z), and (G_µ, M_W, M_Z) schemes, respectively. If instead we use α_0 for the overall weak-loop factors, we subtract the quantity ∆ρ_1-loop|_α0 computed in the α_0 scheme, regardless of the value of α_i used as independent parameter.

4.2 The (α_i, sin²θ^l_eff, M_Z) schemes

In the (α_i, sin²θ^l_eff, M_Z) schemes (where α_i = α_0, α(M_Z²), or G_µ) the sine of the effective leptonic weak mixing angle is used as input parameter. This quantity is defined from the ratio of the vector and axial-vector couplings of the Z boson to the leptons, g^l_V and g^l_A, or, equivalently, in terms of the chiral Zll couplings g^l_L and g^l_R, measured at the Z resonance; the definition is given in Eq. (11), where I^l_3 is the third component of the weak isospin for left-handed leptons. Since sin²θ^l_eff is used as an independent parameter, this scheme is particularly useful in the context of the direct extraction of sin²θ^l_eff from NC DY at the LHC using template-fit methods at NLO EW accuracy.

The counterterms corresponding to the input parameters are defined in the usual way. The expressions of δZ_e and δM²_Z are determined as in Sect. 4.1, while the expression of δ sin²θ^l_eff is fixed by requiring that the definition in Eq. (11) holds to all orders in perturbation theory. More precisely, we write Eq. (11) at one loop as in Eq. (15), where g^l_L(M²_Z) and g^l_R(M²_Z) represent the Zl_L l_L and Zl_R l_R form factors computed at one-loop accuracy at the scale M²_Z, and we impose the renormalization condition of Eqs. (16)-(17). The δg^l_L(R) factors contain both bare vertices and counterterms and, since they are functions of δ sin²θ^l_eff, Eq. (17) can be used to compute the counterterm corresponding to the effective weak mixing angle.
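For orientation, the conventional (LEP-style) content of the definition referred to above as Eq. (11) can be stated in terms of the effective vector and axial-vector couplings; the form used in the code, written with the chiral couplings and I^l_3, should be equivalent to the following reminder (our notation, shown only for reference):

% Standard definition of the effective leptonic weak mixing angle (LEP convention);
% Eq. (11) of the text, written with the chiral couplings, is the equivalent statement.
\sin^2\theta^{\,l}_{\mathrm{eff}} \;\equiv\; \frac{1}{4}\left(1-\mathrm{Re}\,\frac{g^l_V}{g^l_A}\right),
\qquad g^l_V = g^l_L + g^l_R, \quad g^l_A = g^l_L - g^l_R .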
By inserting the explicit expressions of the form factors into Eq. (17), one obtains the counterterm δ sin²θ^l_eff given in Eq. (18), whose expression involves sin θ^l_eff δZ_AZ together with δZ^l_L(R), the pure weak parts of the wave-function renormalization counterterms for the leptons, and δV_L(R), the one-loop weak corrections to the left/right Zll vertices; the definition of the latter involves the vertex functions V_a and V_b, given in Eqs. (C.1) and (C.2) of Ref. [136], respectively. No QED correction is included in Eq. (18), since the QED contributions to the Zll vertex are the same for left- and right-handed fermions and cancel in Eq. (17). When the complex-mass scheme is used, the input value of sin²θ^l_eff remains real: this implies that the LO couplings g^LO_R/L remain real and the condition (16) still reduces to (17). As a consequence, the definition in Eq. (18) remains valid in the complex-mass scheme, provided that the CMS expressions for δZ^l_L(R) and δZ_AZ are used. Note that the vertex functions V_a and V_b are computed at the real scale M²_Z, while the gauge-boson masses appearing in the loop diagrams are promoted to complex values. If one instead uses a complex-valued s²_W (see the discussion on fermionic higher-order effects in Sect. 4.3), the condition in (17) can still be used, but without taking the real part.

As already discussed in Sect. 4.1, the counterterms in the (α(M_Z²), sin²θ^l_eff, M_Z) and (G_µ, sin²θ^l_eff, M_Z) schemes can be obtained from the ones in the (α_0, sin²θ^l_eff, M_Z) scheme by performing the replacements δZ_e → δZ_e − ∆α(M_Z²)/2 and δZ_e → δZ_e − ∆r̄/2, respectively, where ∆r̄ represents the one-loop electroweak corrections to the muon decay (after subtracting the QED effects in the Fermi theory) in the (α_0, sin²θ^l_eff, M_Z) scheme; in writing its explicit expression (whose ∆ρ term appears in Eq. (21)) we use the shorthand notation s_W = sin θ^l_eff and c_W = cos θ^l_eff. From Eqs. (18)-(21) it is clear that the leading fermionic corrections in the schemes with sin²θ^l_eff as input parameter are only related to δZ_e ∼ ∆α/2 and ∆r̄ ∼ ∆α − ∆ρ, while the counterterm of sin²θ^l_eff does not contain terms proportional to the logarithms of the light-fermion masses or to the square of the top-quark mass. As a result, the fermionic higher-order corrections in these schemes (after the subtraction of the effects already included in the O(α) calculation) are just overall factors that multiply the LO matrix element squared; they are given in Eqs. (23) and (24) for the schemes with α_0 and G_µ as input parameters, respectively, while these corrections vanish when α(M_Z²) is used as independent parameter. In Eq. (23) a resummation of the logarithms of the light-fermion masses is performed, while the overall factor in Eq. (24) comes from the relation between α and G_µ at NLO plus higher orders.

4.3 The (α_0, G_µ, M_Z) scheme

In the (α_0, G_µ, M_Z) scheme, the input parameters are α_0, G_µ, and the mass of the Z boson. The main advantage of this scheme is that all the independent parameters are experimentally known with high precision and the corresponding parametric uncertainties are small (in particular, at variance with the schemes of Sect. 4.1, it is independent of the uncertainties related to the experimental knowledge of M_W).
In the scheme under consideration, the sine of the weak mixing angle and the W boson mass are derived quantities. At the lowest order in perturbation theory they can be computed using the tree-level relations of Eq. (26); in the LO matrix element, the value of α is derived from G_µ at tree level. In terms of Eq. (26), it is possible to write the LO amplitude for NC DY as the sum of the photon-exchange amplitude, proportional to α, and the Z-exchange amplitude, proportional to G_µ M²_Z, as in Eq. (27), where A_σ,τ is the part of the amplitude containing the γ matrices and the external fermion spinors (σ, τ = L, R), and Q_q(l) and I^σ_3,q(l) are the quark (lepton) charges and third components of the weak isospin (cf. Eq. (28)). In the complex-mass scheme, the definition of χ_Z is s/(s − M²_Z). Clearly the Z-boson exchange diagram contains a residual dependence on α through s_W in Eq. (28).

Two different realizations of the (α_0, G_µ, M_Z) scheme are available in the Z_ew-BMNNPV package: users can select one of them through the azinscheme4 flag in the powheg.input file. If the flag is absent or negative, α = α_0 in Eq. (26), in such a way that the γff interaction is evaluated at the low scale while the Zff couplings are computed at the weak scale. If azinscheme4 is positive, α = α_0/(1 − ∆α(M_Z²)): in this way also the photon part of the amplitude is evaluated at the weak scale. Note that we compute ∆α(M_Z²) from α_0 rather than taking α(M_Z²) as an independent parameter. For dilepton invariant masses in the resonance region or above, the latter running mode allows one to reabsorb into the couplings the mass logarithms originating from the running of α from q² = 0 to the weak scale. If not otherwise stated, the numerical results presented for the (α_0, G_µ, M_Z) scheme are obtained with azinscheme4 = 1.

The counterterms for the independent quantities are defined in the usual way. The expressions of the δZ_e and δM²_Z counterterms are fixed as in Sect. 4.1 (if azinscheme4 = 1, there is the additional shift δZ_e → δZ̃_e ≡ δZ_e − ∆α(M_Z²)/2), while δG_µ is determined by requiring that the muon decay computed in the (α_0, G_µ, M_Z) scheme does not receive any correction at NLO (after the subtraction of the QED effects in the Fermi theory), as in Eq. (32), where α in the last term corresponds to the loop factor governed by the flag a2a0-for-QED-only. The counterterms for the dependent quantities (cf. Eq. (35) for δs²_W) involve the quantity δ_opt,1, which is equal to one if the azinscheme4 flag is active and zero otherwise. By looking at the expressions of the counterterms in Eqs. (32)-(35), it is clear that at NLO the leading fermionic corrections to the photon-exchange amplitude are related to δZ̃_e, while for the Z-exchange amplitude they come from the counterterms of the overall factor G_µ M²_Z and from δs_W/s_W.

In order to include these effects beyond O(α), we follow the strategy described, for instance, in Ref. [150]: the fermionic higher-order corrections are cast into a Born-improved amplitude written in terms of the effective couplings α = α_0/(1 − ∆α(M_Z²)) and sin²θ^l_eff (computed as a function of α_0, M_Z, G_µ), after subtracting the parts of the corrections already present in the NLO result. The sine of the effective leptonic weak mixing angle in the (α_0, G_µ, M_Z) scheme can be computed at NLO using Eq. (15): after noticing that the second term in its last line goes like δs²_W and would vanish if the s_W counterterm were δ sin²θ^l_eff (i.e. if it had the expression derived according to Eq. (16), but with the numerical value of s_W fixed by Eq. (26)),
by adding and subtracting δ sin²θ^l_eff, Eq. (15) boils down to Eq. (37), where in the second equality the explicit expressions of the counterterms δs²_W, Eq. (35), and δG_µ, Eq. (32), have been used and compared to the explicit expression of ∆r (if the flag azinscheme4 is on, ∆r must be computed in terms of δZ̃_e rather than δZ_e). Equation (37) is the NLO expansion of the resummed expression for sin²θ^l_eff,HO given in Eq. (38), where ∆r_HO is obtained from ∆r by adding to the ∆ρ term in Eq. (21) the higher-order corrections of Eq. (1). Note that ∆r_HO depends on ∆ρ, but not on ∆α: in fact, if azinscheme4 is equal to one, ∆r is a function of δZ̃_e, while for azinscheme4 equal to zero the ∆α factor originally present in ∆r is subtracted and resummed in the α = α_0/(1 − ∆α) factor under the square root in Eq. (38). To summarize, the fermionic higher-order effects are computed in terms of a LO matrix element squared, evaluated as a function of the effective parameters α_0/(1 − ∆α(M_Z²)) and sin²θ^l_eff,HO, and the removal of the double counting of the O(α) correction is achieved by subtracting its first-order expansion in ∆r (and in ∆α, if the azinscheme4 flag is off). If the complex-mass scheme is used, ∆r in Eq. (37) becomes complex, but we decided to include in Eq. (38), and thus effectively resum, only its real part, in order to minimize the spurious effects introduced by the CMS prescription.

As a concluding remark, we recall that α_0, M_Z, and G_µ are the input parameters used in the theory predictions/tools [151-163] developed for the precise determination of the Z-boson properties at LEP1 (see, for instance, Ref. [164] for a tuned comparison). The realizations of the (α_0, G_µ, M_Z) scheme described in this section differ from the ones used in the above-mentioned references, even though they are equivalent at the perturbative order under consideration. In fact, a typical strategy in the literature was to perform the calculation in a given scheme, for instance (G_µ, M_W, M_Z), using the formulae for the NLO (or NLO plus fermionic higher-order) corrections derived in that scheme, but computing the numerical values of M_W and s_W (at the same perturbative accuracy) from (α_0, G_µ, M_Z) through the expression of ∆r, namely via Eq. (39), where the leading fermionic effects related to the running of the parameters α and s_W in ∆r have been resummed [135] and ∆ρ_HO also includes higher-order corrections. Equation (39) is solved iteratively, since ∆r_remn is a function of s_W. In Ref. [164] a slightly modified version of Eq. (39) was used, with ∆ρ_HO promoted to ∆ρ_HO + ∆ρ_X and ∆r_remn changed accordingly by an additional term evaluated in the MS sense, which simply means that, within the brackets, UV poles have been removed and the mass scale µ_Dim has been replaced with M_Z. In the following, the LEP1-like tuned comparison will be performed with the convention of Ref. [164].
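As a simple numerical illustration of the relations just discussed, the following Python sketch evaluates the tree-level s²_W and M_W from (α, G_µ, M_Z), as in Eq. (26), and then mimics the iterative solution of Eq. (39); the constant ∆r used below is only a placeholder of realistic size, not the one-loop expression computed by the code, and the input values are indicative.

import math

# Sketch of the LO relations (Eq. (26)) and of the iterative determination of M_W
# from (alpha_0, G_mu, M_Z) via Delta r (Eq. (39)). Delta r is a placeholder constant
# here; in the code it is a full function of the couplings and masses.
G_mu, M_Z = 1.1663787e-5, 91.1876
alpha = 1.0 / 128.95                       # e.g. alpha_0/(1 - Delta alpha), cf. azinscheme4 = 1

# Tree level: s_W^2 c_W^2 = pi*alpha / (sqrt(2) G_mu M_Z^2)
A = math.pi * alpha / (math.sqrt(2.0) * G_mu * M_Z**2)
s2w = 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * A))
M_W = M_Z * math.sqrt(1.0 - s2w)
print(f"LO: s_W^2 = {s2w:.4f}, M_W = {M_W:.3f} GeV")

# Iterative solution of M_W^2 (1 - M_W^2/M_Z^2) = pi*alpha_0/(sqrt(2) G_mu) (1 + Delta r),
# with Delta r re-evaluated at each step.
def delta_r(M_W):
    return 0.036                           # placeholder: the real Delta r depends on M_W, M_top, ...

alpha0 = 1.0 / 137.035999
for _ in range(20):
    A = math.pi * alpha0 / (math.sqrt(2.0) * G_mu * M_Z**2) * (1.0 + delta_r(M_W))
    s2w = 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * A))
    M_W = M_Z * math.sqrt(1.0 - s2w)
print(f"iterated M_W = {M_W:.3f} GeV (with the placeholder Delta r)")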
A similar strategy could be followed for the (α, sin²θ^l_eff, M_Z) schemes, where the value of sin²θ^l_eff can be obtained from the iterative solution of Eq. (41).

4.4 The (α_MS, s²_W,MS, M_Z) scheme and its decoupling variants

In the (α_MS, s²_W,MS, M_Z) scheme, the independent parameters are the MS running couplings α_MS and s²_W,MS and the Z-boson mass. More precisely, the input parameters are the numerical values of α_MS(µ²_0) and s²_W,MS(µ²_0) at a given MS renormalization scale µ_0 selected by the user, together with the on-shell Z mass (internally converted to the corresponding pole value). The values of α_MS(µ²_0) and s²_W,MS(µ²_0) are then evolved to α_MS(µ²) and s²_W,MS(µ²), where µ is the MS renormalization scale selected for the calculation. Both fixed and dynamical renormalization-scale choices are implemented in the code. The numerical results presented in the following are obtained with a dynamical renormalization scale, µ being set to the dilepton invariant mass.

The calculation of the tree-level and bare one-loop amplitudes in the (α_MS, s²_W,MS, M_Z) scheme proceeds in the very same way as in the other schemes described above, the only difference being that the electric charge and the sine of the weak mixing angle are set to e_MS(µ²) and s²_W,MS(µ²), respectively. The additional factor of α coming from the virtual and real QED corrections is always set to α_0. In the numerical studies presented in the next sections, the value of α used in the loop factor of the virtual weak loops corresponds to α_MS(µ²), as the flag a2a0-for-QED-only is active, but the code allows the use of α_0 as well.

The renormalization in the weak sector is performed in a hybrid scheme: the Z-boson mass counterterm as well as the external-fermion wave-function counterterms are derived in the on-shell scheme (with the modifications related to the complex-mass-scheme choice), while the electric charge and the sine of the weak mixing angle are renormalized in the MS scheme (possibly supplemented with W-boson and top-quark decoupling).

The electric-charge counterterm in the (α_MS, s²_W,MS, M_Z) scheme is given in Eq. (42), where µ_Dim is the unphysical dimensional scale introduced by dimensional regularization, which cancels in the sum of bare and counterterm amplitudes. The last two terms in Eq. (42) implement the top and W decoupling: if µ is greater than M_top (M_W,thr.), only the part of the top-quark (W) loop proportional to the combination ∆_UV − log(µ²/µ²_Dim) is subtracted. The counterterm corresponding to the sine of the weak mixing angle is given in Eq. (43), where δZ^MS_ZA and δZ^MS_AZ have the usual expressions of δZ_ZA and δZ_AZ in the on-shell scheme upon the appropriate MS replacement. Note that M²_W in Eq. (44) is computed from s²_W,MS(µ²) and M²_Z, and does not necessarily coincide with M²_W,thr.. When the decoupling is active, the O(α) threshold correction at µ = M_W,thr. in the running of α_MS induces a similar discontinuity in the running of s²_W,MS: the last term in Eq. (43) cancels this discontinuity at the W threshold at O(α).

The running of α_MS from the scale µ_0 to the scale µ is taken from Eqs. (9)-(13) of Ref. [165], which contain QED and QCD corrections to the fermionic contributions to the β function up to O(α) and O(α³_S) [166-169], respectively.
When the calculation is performed in the decoupling scheme, the threshold corrections corresponding to the W and top-quark thresholds are also implemented: while the former are O(α) effects, the latter are included at O(α²), O(αα_S), and O(αα²_S) [165,170]. In the code, the running of α_MS is only computed between scales µ_0 and µ well within the perturbative regime (µ²_0, µ² ≫ 4m²_b): non-perturbative QCD effects are effectively included through the numerical value of α_MS(µ²_0) selected by the user (see Appendix B for the corresponding default value and the related discussion). The running of s²_W,MS is taken from Eq. (25) of Ref. [171] (see also Ref. [172]), which contains O(α²), O(αα_S), O(αα²_S), and O(αα³_S) corrections to the fermionic part of the β function [166-168]. As in the case of α_MS, when the decoupling is active, the corrections associated with the crossing of the W and top-quark thresholds, at O(α) and at O(α²), O(αα_S), O(αα²_S), respectively, are also computed. For some of the results presented below, the running is performed at NLO only (flag excludeHOrun = 1).

Similarly to the (α_0, sin²θ^l_eff, M_Z) scheme, where the fermionic higher-order corrections effectively account for the running of α from the Thomson limit to the weak scale, in the (α_MS, s²_W,MS, M_Z) scheme the universal higher-order effects are included through the running of the couplings.

In the Z_ew-BMNNPV package, the choice of leaving α_MS(µ²_0) and s²_W,MS(µ²_0) as free parameters is motivated by the possibility of measuring s²_W,MS at the LHC and at future hadron colliders from neutral-current Drell-Yan through a template-fit approach, as investigated in Ref. [10]. Such measurements would require the generation of Monte Carlo templates for different values of s²_W,MS(µ²_0) (and possibly α_MS(µ²_0)) to be fitted to the data. While the present study is focused on fixed-order results, and in particular on the weak corrections, the Z_ew-BMNNPV package can generate the required templates at NLO QCD + NLO EW accuracy with consistent matching to QCD and QED parton showers. Another possibility could be to use the MS scheme for a precise prediction (rather than determination) of s²_W at the weak scale, as done in Refs. [173-179] up to full O(α) accuracy (plus higher-order corrections to the running of α and to ∆ρ). In this approach α_MS(µ²_0) and s²_W,MS(µ²_0) are derived quantities, computed as functions of other input parameters: typically the calculation is performed in the (α_0, G_µ, M_Z) scheme, given the high accuracy with which these parameters are measured. As far as α_MS is concerned, it can be computed from α_0 via the relation in Eq. (45); QCD corrections of order α_S and α²_S to this relation are also available. At order α one has the relation in Eq. (46), where the (renormalized) sine of the weak mixing angle and the corresponding counterterm on the right-hand side of the second equality are computed in the (α_0, G_µ, M_Z) scheme. From Eq. (46) it follows, at O(α), the relation in Eq. (47), with ∆r_MS formally identical to ∆r, but with δZ_e and sin²θ^l_eff replaced by δZ_e,MS(µ²) and δs²_W,MS.
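To make the logic of the evolution concrete, the sketch below implements only the leading, purely fermionic one-loop QED running of α_MS between two scales that are both assumed to lie above 2m_b and below the top threshold; the QCD corrections and the W/top threshold matching implemented in the code (following Ref. [165]) are deliberately omitted, and the starting value of α_MS(M²_Z) is an indicative number.

import math

# One-loop (pure QED) fermionic running of alpha_MS between two scales, both assumed
# above 2*m_b and below the top threshold (5 quarks + 3 charged leptons active).
def alpha_msbar_run(alpha_mu0, mu0, mu):
    quarks  = [2/3, -1/3, -1/3, 2/3, -1/3]   # u, d, s, c, b charges (N_c = 3 each)
    leptons = [-1.0, -1.0, -1.0]             # e, mu, tau
    sum_q2 = 3.0 * sum(q*q for q in quarks) + sum(q*q for q in leptons)
    beta0 = sum_q2 / (3.0 * math.pi)         # d alpha / d ln(mu^2) = beta0 * alpha^2
    return alpha_mu0 / (1.0 - beta0 * alpha_mu0 * math.log(mu**2 / mu0**2))

# example: evolve an indicative alpha_MS(M_Z^2) up to mu = 300 GeV
print(1.0 / alpha_msbar_run(1.0/127.95, 91.1876, 300.0))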
Fig. 1 Upper panel: cross-section distribution as a function of the leptonic invariant mass, at leading order in the (G_µ, M_W, M_Z) scheme. Lower panel: relative difference between the NLO cross section and the LO one in the three renormalization schemes.

As in the case of Eqs. (37) and (38), we can consider Eq. (47) as the NLO expansion of the resummed expression in Eq. (48). In Eq. (48) the renormalization scale has been identified with M²_Z, given the input-parameter set used, and ∆r_MS,HO is obtained from ∆r_MS by replacing the O(α) expression of ∆ρ with the one including the higher-order corrections discussed in Sect. 3.

Fig. 2 Upper panel: forward-backward asymmetry distribution as a function of the leptonic invariant mass, at leading order in the (G_µ, M_W, M_Z) scheme. Lower panel: absolute difference between the NLO and LO asymmetry in the three renormalization schemes, (G_µ, M_W, M_Z) (solid blue), (α_0, G_µ, M_Z) (dashed red), and (G_µ, sin²θ^l_eff, M_Z).

A last comment is in order concerning the decoupling procedure. We decouple the top quark and the W boson in the running of α_MS and s²_W,MS to make contact with Refs. [171,172], mainly motivated by the sizeable impact of the W decoupling. However, we adopt a minimal (and simplified) approach where the top and the W are integrated out only in the renormalization-group equations for α_MS and s²_W,MS and in the expressions of the NLO counterterms for δZ_e,MS and δs²_W,MS/s²_W,MS, which are closely related to the evolution equations. The heavy degrees of freedom are not integrated out in the calculation of the relevant matrix elements.

Input parameter schemes: numerical results

In this section we investigate the numerical impact of the radiative corrections to differential observables of the NC DY process at the LHC, according to the input parameter schemes described above. In particular, we focus on the dilepton invariant mass distribution dσ/dM_ll and on the forward-backward asymmetry A_FB(M_ll), defined as

A_FB(M_ll) = [σ_F(M_ll) − σ_B(M_ll)] / [σ_F(M_ll) + σ_B(M_ll)],  σ_F = ∫_0^1 dc (dσ/dc),  σ_B = ∫_{−1}^0 dc (dσ/dc),  (49)

where c is the cosine of the lepton scattering angle in the Collins-Soper frame, as a function of the invariant mass M_ll. We consider the µ⁺µ⁻ final state, with √s = 13 TeV. All results are obtained in an inclusive setup, where no cuts are imposed on the final-state leptons except for an invariant-mass cut M_ll ≥ 50 GeV. The numerical values of the relevant parameters are specified in Appendix A, and the default values for the higher-order options, for the hadronic contribution to ∆α, as well as for the W/Z boson width options are adopted.

The upper panels of Figs. 1 and 2 show the LO predictions obtained in the (G_µ, M_W, M_Z) scheme for the differential cross section and the A_FB distribution, computed as functions of the dilepton invariant mass M_ll in the window 50 GeV ≤ M_ll ≤ 200 GeV, without additional kinematic cuts on the leptons. While the invariant-mass distribution has a Breit-Wigner peak for M_ll equal to the Z-boson mass, the asymmetry crosses zero and changes sign in the resonance region: because of this behaviour, in the following we quantify the impact of the EW corrections, or the differences among predictions obtained in different input-parameter schemes, in terms of absolute (rather than relative) differences for the A_FB distribution.
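The observable of Eq. (49) can be made concrete with the following sketch, which computes the Collins-Soper polar angle from the lepton four-momenta and builds A_FB by counting forward and backward events; the function names are ours, and unweighted events are assumed, which is a simplification with respect to a real Monte Carlo analysis.

import math

# Sketch of the observable in Eq. (49): Collins-Soper polar angle and forward-backward
# asymmetry from lepton four-momenta (E, px, py, pz); lm = l^-, lp = l^+.
def cos_theta_cs(lm, lp):
    E1, p1x, p1y, p1z = lm
    E2, p2x, p2y, p2z = lp
    Pll_z  = p1z + p2z
    Pll_T2 = (p1x + p2x)**2 + (p1y + p2y)**2
    M2 = (E1 + E2)**2 - (p1x + p2x)**2 - (p1y + p2y)**2 - Pll_z**2
    p1p = (E1 + p1z) / math.sqrt(2.0); p1m = (E1 - p1z) / math.sqrt(2.0)
    p2p = (E2 + p2z) / math.sqrt(2.0); p2m = (E2 - p2z) / math.sqrt(2.0)
    sign = 1.0 if Pll_z >= 0.0 else -1.0
    return sign * 2.0 * (p1p * p2m - p1m * p2p) / math.sqrt(M2 * (M2 + Pll_T2))

def afb(events):
    # events: list of (lm, lp) pairs, e.g. collected in a given M_ll bin
    nf = sum(1 for lm, lp in events if cos_theta_cs(lm, lp) > 0.0)
    nb = len(events) - nf
    return (nf - nb) / (nf + nb) if events else 0.0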
To analyse the main features of the NLO weak corrections and of the higher-order effects discussed in Sects. 3 and 4, we consider the (α_0, G_µ, M_Z) scheme together with one representative of the class of schemes using M_W as independent parameter and one representative of the class using sin²θ^l_eff as input. We choose schemes with couplings defined at the weak scale, namely (G_µ, M_W, M_Z), (G_µ, sin²θ^l_eff, M_Z), and (α_0, G_µ, M_Z) with the flag azinscheme4 equal to one (i.e. using α = α_0/(1 − ∆α) in the calculation). Other schemes, where the couplings are defined at low energy, like the ones involving α_0, lead to larger corrections with respect to the LO because of the running of the parameters up to the weak scale: such effects tend to reduce the differences with respect to the predictions of schemes relying on α(M_Z²) or G_µ when moving from LO to NLO and NLO+HO accuracy (see Figs. 9 and 10). In the MS scheme of Sect. 4.4, the running of α_MS(µ²) and s²_W,MS(µ²) reabsorbs a large part of the corrections into the Born matrix elements: for this reason, the relative corrections with respect to the LO are not shown for the (α_MS, s²_W,MS, M_Z) scheme, and we only show the MS results at NLO together with the best predictions (i.e. NLO+HO) in the other schemes (Figs. 9 and 10).

The lower panel of Fig. 1 shows the NLO relative correction to dσ/dM_ll with respect to the LO prediction for three input schemes: (G_µ, sin²θ^l_eff, M_Z) (dotted green line), (α_0, G_µ, M_Z) (dashed red line), and (G_µ, M_W, M_Z) (solid blue line). The corrections in the first two schemes are very similar, ranging from −1% to about +1%, with the line corresponding to the (G_µ, sin²θ^l_eff, M_Z) scheme slightly above the one for the (α_0, G_µ, M_Z) scheme. When M_W is used as an independent parameter, the corrections have a different shape and are in general larger, ranging from +5% at 40 GeV to −1% around 100 GeV. This picture can be understood as follows. The analytic expression of the one-loop matrix element in the three schemes is identical once the counterterms are expressed in terms of δZ_e (or δZ̃_e) and δs²_W, the only differences being the actual form of the counterterms (δZ_e and δs²_W) and the ∆r or ∆r̄ subtraction terms that factorize on the tree-level matrix element in the (G_µ, M_W, M_Z) or (G_µ, sin²θ^l_eff, M_Z) scheme, respectively (more precisely, ∆r − ∆α or ∆r̄ − ∆α, since in all three schemes there is a term −∆α M_LO). If one replaces the counterterm δs²_W with δs²_W + δ sin²θ^l_eff − δ sin²θ^l_eff, one can split the one-loop matrix element into a term that corresponds to the one-loop amplitude in the (G_µ, sin²θ^l_eff, M_Z) scheme (up to the above-mentioned subtraction terms, which however appear as constant shifts in the relative corrections) plus a remainder that can be written as ∆s²_W ∂M_LO/∂s²_W, which represents the change of the LO matrix element when the numerical value of s²_W is shifted by ∆s²_W = δs²_W − δ sin²θ^l_eff. In the (α_0, G_µ, M_Z) scheme, ∆s²_W is about 2.7 × 10⁻⁴ and the corresponding impact is hardly visible on the scale of the plot, while in the (G_µ, M_W, M_Z) scheme ∆s²_W is much larger, of order 1 × 10⁻², and it is mainly responsible for the shape and size of the effects shown in Fig. 1. The relative corrections at NLO in the schemes with α_0 or α(M_Z²) as input, together with M_W (sin²θ^l_eff), can be obtained from the ones shown in Fig. 1 by removing the constant term −2∆r (−2∆r̄) or replacing it with −2∆α(M_Z²), respectively.
The lower panel of Fig. 2 shows the NLO correction to the asymmetry, defined as the absolute difference between the NLO and the LO predictions. Similarly to what happens for the cross section, the NLO weak corrections in the (G_µ, sin²θ^l_eff, M_Z) and (α_0, G_µ, M_Z) schemes are very close and in general smaller, falling in the range ±0.002, while the corrections in the (G_µ, M_W, M_Z) scheme are larger, reaching the value of −0.018 at about 80 GeV. The results for the asymmetry and the ones for the dilepton invariant mass basically share the same interpretation detailed above, with the main difference that the effect of overall subtraction terms like ∆r and ∆r̄ largely cancels in A_FB at NLO.

Figures 3 and 4 show the relative (absolute) NLO weak corrections to the cross section (forward-backward asymmetry) when only the gauge-invariant subset of the bosonic loops is included. At low dilepton invariant masses, a large part of these corrections comes from the bosonic contribution to ∆r and ∆r̄ entering the calculation in the (G_µ, M_W, M_Z) and (G_µ, sin²θ^l_eff, M_Z) schemes, respectively. For larger M_ll values, on the right part of the plot, the bosonic corrections are relatively large (of order −5%), while the full NLO weak corrections are at the permille level, pointing to a strong cancellation between bosonic and fermionic corrections. The contribution from ∆r and ∆r̄ essentially cancels in A_FB, and the asymmetry difference in Fig. 4 is dominated by the shift induced in the effective s²_W by the bosonic part of the O(α) corrections in the (α_0, G_µ, M_Z) and (G_µ, M_W, M_Z) schemes (δs²_W ∼ 3 × 10⁻³ and δs²_W ∼ 2.5 × 10⁻³, respectively). By comparing Figs. 2 and 4, one notices that a large cancellation between bosonic and fermionic effects is also present in the (α_0, G_µ, M_Z) and (G_µ, M_W, M_Z) schemes, while in the (G_µ, sin²θ^l_eff, M_Z) scheme the bosonic corrections dominate over the fermionic ones, and the lines corresponding to this scheme in Figs. 2 and 4 are almost identical on the scale of the plot.

Fig. 6 Higher-order correction to the forward-backward asymmetry distribution as a function of the leptonic invariant mass. The three curves correspond to the three different choices of renormalization scheme discussed.

Figures 5 and 6 show the higher-order universal corrections (i.e. beyond NLO) defined in Sect. 3 to the cross-section invariant-mass distribution (normalized to the LO predictions) and to the forward-backward asymmetry, respectively, in the three renormalization schemes. As in the NLO case, the plots display the relative corrections for the cross-section distribution and the absolute correction for A_FB(M_ll). In the (G_µ, sin²θ^l_eff, M_Z) scheme the corrections are small (of order 0.2%) and essentially flat: this is because the corrections in
Eq. (24) factorize on the LO matrix element squared, and the only dependence on M_ll comes from the running of α_S in the QCD corrections to ∆ρ. The corrections in the (α_0, G_µ, M_Z) scheme fall in the range [−0.3%, 0], being basically zero for low dilepton invariant masses and reaching their maximum around the Z peak: the shape of the corrections is determined by the additional shift ∆s²_W,HO on top of the NLO one (∼ 5 × 10⁻⁵), which only affects the Z-boson exchange amplitude, while for small invariant masses the dominant contribution is the γ exchange. The impact of the fermionic higher-order effects in the (G_µ, M_W, M_Z) scheme is larger than for the other choices of input parameters, ranging from about −0.7% at M_ll = 50 GeV to about +0.1% at the Z peak: in this scheme, the corrections come from the interplay of the shift of s²_W (−9 × 10⁻⁴ in addition to the NLO shift) and the overall factor 2(∆ρ − ∆ρ^(α)) c_W/s_W + ∆ρ² c²_W/s²_W coming from the relation between α and G_µ. The latter effect enters also the γ-exchange diagram and thus affects also the low-invariant-mass region of the plot. When considering the asymmetry (Fig. 6), any overall term common to numerator and denominator of Eq. (49) cancels: this is almost the case for the higher-order corrections in the (G_µ, sin²θ^l_eff, M_Z) scheme, where the factorization of the higher-order terms is only approximate, due to the presence of the NLO corrections, leading to a negligible residual effect of order 10⁻⁶ on the asymmetry difference, not visible within the resolution of the plot. The impact in the (G_µ, M_W, M_Z) scheme is larger, with a maximum of 2 × 10⁻³ for M_ll around 80 GeV: the behaviour is essentially determined by the above-mentioned shift of s²_W on top of the O(α) one. In the (α_0, G_µ, M_Z) scheme the corrections are negative, reaching a value of about −7 × 10⁻⁴, again at about 80 GeV, and are driven by the higher-order shift of s²_W.

Fig. 8 The same as in Fig. 7 for A_FB. The absolute difference between the three-loop QCD contribution to ∆ρ and the two-loop case is shown.

Figures 7 and 8 show the impact of the three-loop QCD correction to ∆ρ, δ^(3)_QCD, on the cross section and on the forward-backward asymmetry, respectively, in the three renormalization schemes. The leading δ^(3)_QCD contribution comes from the replacement of the ∆ρ^(α) terms in the NLO calculation with the expression of ∆ρ in Eq. (1). In the (G_µ, sin²θ^l_eff, M_Z) scheme, ∆ρ comes from the relation between α and G_µ and simply multiplies the LO matrix element squared: the corresponding line in
Fig. 7 is twice the factor 3x_t (1 + x_t ∆ρ^(2)) δ^(3)_QCD, and the dependence on M_ll is the residual scale dependence of the QCD correction. In the (α_0, G_µ, M_Z) scheme, the term linear in ∆ρ comes from the corrections to the G_µ M²_Z factor in the Z-boson exchange diagram: as a consequence, in the low dilepton invariant-mass region, where the dominant contribution is the γ exchange, the correction tends to vanish, while for larger values of M_ll it does not factorize on the LO matrix element. In the (G_µ, M_W, M_Z) scheme, terms linear in ∆ρ come both from the δs²_W counterterm and from the ∆r relating α and G_µ at NLO (∼ c²_W/s²_W ∆ρ). Only the latter contribution factorizes on the Born result, and it is the only one affecting the γ exchange, which dominates the cross section in the low-invariant-mass limit, where the effect is basically three times (∼ c²_W/s²_W) larger than the one observed in the (G_µ, sin²θ^l_eff, M_Z) scheme. Moving to the asymmetry, the impact of δ^(3)_QCD in the (G_µ, sin²θ^l_eff, M_Z) scheme is not visible in Fig. 8, since it largely cancels between numerator and denominator of A_FB. Also for the other two schemes the effect is tiny, of the order of 10⁻⁵. The four-loop QCD corrections to ∆ρ (not included in the Z_ew-BMNNPV code), computed at the scale M_top, should be about five times smaller than the three-loop ones [146], but with a reduced scale dependence, so that their numerical impact on the M_ll and A_FB distributions is negligible compared to the other effects discussed in the following.

We close this subsection by presenting in Figs. 9 and 10 the predictions of the different schemes relative to the ones obtained in the (α(M_Z²), sin²θ^l_eff, M_Z) scheme, for different levels of perturbative accuracy: LO, NLO, and NLO+HO (relative differences for dσ/dM_ll and absolute differences for A_FB(M_ll)). The lower panels, referring to the NLO+HO predictions, also contain the results in the hybrid MS scheme discussed in Sect. 4.4, (α_MS, s²_W,MS, M_Z). The choice of the reference scheme is motivated by the fact that, in the (α(M_Z²), sin²θ^l_eff, M_Z) scheme, the corrections do not involve ∆α- or ∆ρ-enhanced terms, and thus the higher-order corrections discussed in Sect. 3 are absent. In the other schemes, on the contrary, the corrections can be split into a non-enhanced part (formally the same one as in the (α(M_Z²), sin²θ^l_eff, M_Z) scheme, with different numerical values of α and s²_W), plus a shift of s²_W from Eq. (37) and an overall effect coming from the running of α or from the corrections to the relation between α and G_µ, when α_0 or G_µ are used as input, respectively. When going beyond NLO, the latter effects can have a non-trivial interplay, leading, for instance, to mixed contributions of the form ∆α∆ρ. As a general comment, the spread of the predictions for the differential cross section based on different input parameter schemes tends to shrink from about 20% at LO to 2% at NLO and to a few 0.1% with the inclusion of the universal additional corrections. The absolute differences for A_FB are at the level of 0.02 at LO and become of the order of 10⁻³ (10⁻⁴) when the NLO (NLO plus fermionic higher-order) corrections are included. In the low-invariant-mass region, dominated by the γ-exchange diagram, the cross-section ratios computed at LO (upper panel of Fig. 9) reduce to the (squared) ratios of the values of α used in the numerator and in the denominator.
For the schemes employing α(M_Z²), including (α_0, G_µ, M_Z) since the azinscheme4 flag is active, the ratios tend to one at low M_ll. The same holds for the (G_µ, sin²θ^l_eff, M_Z) scheme, since the value of α computed from G_µ and sin²θ^l_eff at LO is very close to α(M_Z²). For the schemes based on α_0 the ratios are about 12% smaller, while for the (G_µ, M_W, M_Z) scheme the corresponding ratio is about 6% smaller than the one for the α(M_Z²)-based schemes. At LO, the only difference in the predictions for the cross section computed in the schemes using sin²θ^l_eff as input comes from the value of α used in the LO couplings: this explains the horizontal lines corresponding to the (α_0, sin²θ^l_eff, M_Z) and (G_µ, sin²θ^l_eff, M_Z) schemes. When the schemes with M_W as input, or (α_0, G_µ, M_Z), are used, not only the value of α used for the couplings changes with respect to the one used in the denominator, but also s²_W is different: since a variation of this parameter affects in a different way the Z- and γ-exchange amplitudes (the latter only when α is derived from G_µ), which are weighted by the propagator factors 1/(s − M²_Z + iΓ_Z M_Z) and 1/s, respectively, the ratios corresponding to the M_W-based schemes in the upper panel of Fig. 9 have a non-trivial shape as a function of M_ll. For the (α_0, G_µ, M_Z) scheme, the ratio is still close to one, since the value of s²_W computed at LO in this scheme is close to the value of sin²θ^l_eff used in the denominator (0.2308 versus 0.2315, to be compared with 0.2228 in the M_W-related schemes). As any overall constant factor cancels between numerator and denominator of A_FB, the LO asymmetry difference with respect to the (α(M_Z²), sin²θ^l_eff, M_Z) scheme is zero when (α_0, sin²θ^l_eff, M_Z) or (G_µ, sin²θ^l_eff, M_Z) are used as input schemes (upper panel of Fig. 10). For the other schemes one only sees the impact of the different value of s²_W used: the three lines for the schemes based on M_W overlap, while the one for the (α_0, G_µ, M_Z) scheme is again closer to that of the sin²θ^l_eff-based schemes.

At NLO, the spread of the cross-section ratios is considerably reduced. In the low-M_ll region, the ratio stays at one for the schemes based on α(M_Z²), while for the α_0-related schemes it is much closer to one than at LO. This is because the NLO corrections in these schemes develop an overall factor 2∆α which, once added to the LO term, leads to a sort of LO-improved prediction proportional to the effective coupling α²_0(1 + 2∆α), which is just the first-order expansion of α(M_Z²)² = α²_0/(1 − ∆α)². Something similar happens for the (G_µ, M_W, M_Z) scheme, once the one-loop corrections to the α-G_µ relation contained in ∆r are included. Besides changing the effective value of α used in the calculation, the one-loop corrections also change the value of the effective s²_W used in the M_W-based and (α_0, G_µ, M_Z) schemes: the latter effect is mainly responsible for the shape differences in the plots, and in particular it explains the change of trend when moving from LO to NLO if M_W is used as input parameter (s²_eff,LO(M_W) < sin²θ^l_eff while s²_eff,NLO(M_W) > sin²θ^l_eff; since in our simulations the flag a2a0-for-QED-only is switched on, the loop factors of the (α_0, M_W, M_Z), (α(M_Z²), M_W, M_Z), and (G_µ, M_W, M_Z) schemes are not the same, leading to slightly different values of s²_eff,NLO(M_W)).
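The approximate LO offsets quoted above can be checked with a few lines of Python; the input values below are indicative, and the resulting numbers (about −11% for the α_0-based schemes and −5% to −6% for the (G_µ, M_W, M_Z) scheme) reproduce the pattern of the upper panel of Fig. 9 only at the rough level intended here.

import math

# Rough numerical check of the LO coupling ratios (indicative input values).
G_mu, M_Z, M_W = 1.1663787e-5, 91.1876, 80.385
alpha_0, alpha_MZ = 1.0/137.035999, 1.0/128.95

s2w_os  = 1.0 - M_W**2 / M_Z**2                               # on-shell s_W^2 (~0.2228)
alpha_G = math.sqrt(2.0) * G_mu * M_W**2 * s2w_os / math.pi   # LO coupling in (G_mu, M_W, M_Z)

print(f"(alpha_0  / alpha(M_Z^2))^2 - 1 = {(alpha_0/alpha_MZ)**2 - 1.0:+.3f}")   # ~ -0.11
print(f"(alpha_Gmu/ alpha(M_Z^2))^2 - 1 = {(alpha_G/alpha_MZ)**2 - 1.0:+.3f}")   # ~ -0.05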
Including the higher-order corrections goes in the direction of further reducing the differences among the predictions in the different schemes (lower panel of Fig. 9), as this class of corrections is basically obtained in terms of Born-improved matrix elements squared, written as functions of effective couplings α and s²_W that reabsorb the leading part of the fermionic corrections up to the scale M_Z and are numerically close to α(M_Z²) and sin²θ^l_eff. It is worth noticing that this sort of redefinition of the couplings in the LO matrix element does not affect the part of the one-loop result that is not enhanced by large fermionic corrections, and in particular it does not apply to the bosonic part of the O(α) result: the different couplings entering this part of the corrections are mainly responsible for the residual deviations from one in the lower panel of Fig. 9. As an example, one can take the predictions in the (α_0, sin²θ^l_eff, M_Z) scheme: according to Eq. (23), the expression for the NLO+HO corrections is identical to the one in the (α(M_Z²), sin²θ^l_eff, M_Z) scheme, when α(M_Z²) is obtained as α_0/(1 − ∆α), and the only difference is the non-enhanced part of the O(α) result, which is proportional to α³_0 in the numerator and to α(M_Z²)³ in the denominator of the ratio in Fig. 9. This difference alone leads to an effect of order ±0.2%, as shown in the plot.

The lower panel of Fig. 9 also shows the ratio of the MS predictions to the ones in the (α(M_Z²), sin²θ^l_eff, M_Z) scheme. In the Z_ew-BMNNPV package, we implemented the expressions of Refs. [165] and [171,172] for the running of α_MS and s²_W,MS from a scale µ²_0 to a scale µ² (under the assumption that both µ²_0 and µ² lie above 4m²_b), leaving both µ²_0 and the actual values of α_MS(µ²_0) and s²_W,MS(µ²_0) as free parameters, since we had in mind the determination of s²_W,MS from neutral-current Drell-Yan at the LHC and at future hadron colliders by means of template fits, as in Ref. [10]: the MS results thus depend on α_MS(µ²_0) and s²_W,MS(µ²_0) as input parameters. The solid black line shows the ratio of the MS prediction to the one in the (α(M_Z²), sin²θ^l_eff, M_Z) scheme, obtained by setting α_MS(µ²_0) and s²_W,MS(µ²_0) to the values quoted by the PDG [180] (see also Appendix A). While the numbers fall in the same ballpark as the ones obtained in the other schemes, the discrepancy tends to be a little larger. The source of the differences is twofold: on the one hand, the values in Ref. [180] are computed with a theoretical accuracy that is not matched by the rest of the calculation in Z_ew-BMNNPV and, on the other hand, the parameters used in their computation and the ones employed in the present study are not tuned. The dashed black line corresponds to the MS predictions for a tuned choice of α_MS(M²_Z) and s²_W,MS(M²_Z): α_MS(M²_Z) is consistently computed from α_0 using the same parameters as in the rest of the calculation, while s²_W,MS(M²_Z) is derived from the input parameters (α_0, G_µ, M_Z) as described in Sect. 4.4.
The interpretation of Fig. 10 for the asymmetry difference follows closely the one for the dilepton invariant mass cross-section distribution, with the main difference that the corrections connected to ∆α and ∆r largely cancel between numerator and denominator of A_FB (though not exactly, leading for instance to small deviations from zero in the low-invariant-mass region for the sin²θ^l_eff-based schemes at NLO), and the spread of the predictions in the considered schemes is mainly due to the different values of s²_W effectively employed.

The numerical results in Figs. 9 and 10 are obtained under the assumption that the input parameters are actually free parameters to be set to the corresponding experimental values (or to be used as variables in template-fit analyses), and no attempt was made to tune the input parameters across the different schemes (with the only exceptions of α(M_Z²), computed from α_0, and of s²_W,MS(µ²_0) in the tuned MS calculation). Another possibility, closer to the strategy used for the numerical predictions in the LEP1 studies mentioned in Sect. 4.3, would be to take a reference input scheme, say (α_0, G_µ, M_Z), and perform the calculation in other schemes, like the (G_µ, M_W, M_Z), (G_µ, sin²θ^l_eff, M_Z), or (α_MS, s²_W,MS, M_Z) ones, but deriving the numerical values of M_W, sin²θ^l_eff, and s²_W,MS(µ²_0) from the parameters α_0, G_µ, M²_Z using the quantity ∆r (∆r̄), as in Eqs. (39), (41), and (48). Clearly, the tuning procedure reduces all the tuned schemes to the reference one at the considered theoretical accuracy (in our case, NLO plus leading fermionic corrections of order ∆α², ∆ρ², ∆α∆ρ) and it is expected to reduce the spread of the predictions in the peak region (where the tuning is actually performed), but not necessarily away from the resonance. The effect of the tuning is shown in Fig. 11 for the dilepton invariant mass cross-section distribution (upper panel) and for the forward-backward asymmetry (lower panel), which basically correspond to the lower panels of Figs. 9 and 10 but with the (α_0, G_µ, M_Z) scheme taken as reference. The maximum spread of the cross-section ratios as a function of M_ll is about 0.025%, while the one of the asymmetry difference is of the order of 0.005%. As a technical remark, the plots are obtained in the pole scheme in order to minimize the spurious O(α²) effects induced by the CMS, and the fermionic HO corrections in the (G_µ, sin²θ^l_eff, M_Z) scheme are obtained with a modified version of Eq. (24) where ∆ρ is replaced with ∆r: the expressions are equivalent at the considered theoretical accuracy, differing by terms at most of order ∆r_remn ∆ρ, but in this way the effective couplings entering the Zff vertex in the calculation of the fermionic higher orders in the (G_µ, sin²θ^l_eff, M_Z) and (α_0, G_µ, M_Z) schemes become identical.

Fig. 9 Relative difference of the predictions for the dilepton invariant mass cross-section distribution at LO (upper panel), NLO (middle panel), and NLO+HO (lower panel). The calculation is performed in the CMS. The α values used in the loop factors correspond to the ones used for the LO couplings.

Concerning the (α_MS, s²_W,MS, M_Z) scheme, it is interesting to analyse the renormalization-scale dependence of the predictions obtained in it.
Figure 12 shows the ratio of the dilepton invariant mass cross-section distribution computed with µ_R = 2M_ll (µ_R = M_ll/2) to the one obtained with the default choice µ_R = M_ll, at LO (upper panel) and NLO (lower panel). Regardless of the accuracy of the matrix-element calculation, the running of α_MS and s²_W,MS is computed either at O(α) accuracy (solid and dotted lines) or at O(α) plus the higher-order corrections taken from Refs. [165,171,172] (dashed and dot-dashed lines). In the plots, the finite jumps at M_ll = M_W (M_ll = 2M_W) are a consequence of the discontinuity in the O(α) running of the MS parameters at µ_R = M_W in the denominator (in the numerator for the choice µ_R = M_ll/2). When the HO corrections to the running of α_MS and s²_W,MS are included, similar discontinuities appear also at M_ll = M_top (and M_ll = 2M_top). At LO, scale-variation effects are of order ±2%, and the size of the jumps related to the W threshold in the running of the couplings is of about a couple of permille, while the jumps originating from the top threshold in the dashed and dot-dashed lines are not visible on the scale of the plot. In the NLO calculation, the renormalization-scale dependence of α_MS and s²_W,MS cancels against that of the renormalization counterterms in the one-loop amplitude, and the residual scale dependence starts at O(α²). As a consequence, on the one hand, scale-variation effects are strongly suppressed (compared to the ones in the upper panel) and enter at the sub-permille level; on the other hand, the jumps at the W threshold are visibly reduced. This does not happen for the discontinuities at the top threshold, since the matching corrections to the running formulae are beyond O(α). Although the HO corrections to the running of the MS parameters are not matched by the O(α) virtual matrix elements, the size of the renormalization-scale dependence shown by the dashed and dot-dashed lines is close to the one of the solid and dotted curves, where only the O(α) running of α_MS and s²_W,MS is used. It is thus reasonable to take the numerical impact of the HO contribution to the running of the parameters (some 0.01%, as shown in Fig. 13) as a rough estimate of the missing higher-order corrections to the matrix elements.

Fig. 11 Relative difference (absolute difference) of the predictions for the cross section (forward-backward asymmetry) as a function of the dilepton invariant mass at NLO+HO accuracy. The calculation is performed in the pole scheme. The α values used in the loop factors correspond to the ones used for the LO couplings. In the (α_0, G_µ, M_Z) scheme the actual value of α is α_0/(1 − ∆α). The value of M_W used in the (G_µ, M_W, M_Z) scheme is derived from (α_0, G_µ, M_Z) using Eq. (39). Similarly, sin²θ^l_eff and s²_W,MS are computed by means of Eqs. (41) and (48).

As a general remark, while the difference between theoretical predictions obtained with different input parameter and renormalization schemes can be considered a rough and conservative estimate of the theoretical control over predictions involving weak corrections, there might be motivations to prefer one scheme to the others, like, for instance, the parametric uncertainties connected with the knowledge of the input parameters, the size of the perturbative corrections, or the need for a specific free parameter in the calculation.
In the following we address some of these additional sources of theoretical uncertainty: in particular, in Sect. 6 we focus on the main parametric uncertainties, in Sect. 7 we discuss the treatment of the light-quark contributions, and in Sect. 8 we consider different available strategies for the treatment of the unstable gauge bosons.

Parametric uncertainties

We study in the following the parametric uncertainties induced on dσ/dM_ll and A_FB(M_ll) by the current experimental errors affecting some of the relevant input parameters for each of the above considered schemes. In particular, we treat the scheme (α_0, G_µ, M_Z) as free from parametric uncertainties due to the input parameters, because α_0, G_µ and M_Z are known with excellent accuracy in high energy physics. Therefore, for the other two representative schemes, (G_µ, M_W, M_Z) and (G_µ, sin²θ^l_eff, M_Z), we study the uncertainties induced by the imperfect knowledge of M_W and sin²θ^l_eff, respectively.

Fig. 14 Effects of varying the input parameter sin²θ^l_eff = 0.23154 ± 0.00016 in the (G_µ, sin²θ^l_eff, M_Z) scheme from the central value to the upper and lower ones, at leading and next-to-leading order. Here it is shown the relative difference between the invariant mass distribution obtained with the upper/lower value of sin²θ^l_eff and the one with the central value sin²θ^l_eff,c = 0.23154.

Figure 14 displays the effect of a variation of sin²θ^l_eff within the range 0.23154 ± 0.00016 [181] on the dilepton invariant mass distribution computed in the (G_µ, sin²θ^l_eff, M_Z) scheme at LO and NLO accuracy (black and red lines, respectively). In particular, the quantity of Eq. (52), where s²_eff,c stands for the reference sin²θ^l_eff value (0.23154) and ∆s²_eff = 0.00016, is plotted as a function of M_ll. Since the renormalization conditions in the (G_µ, sin²θ^l_eff, M_Z) scheme require that sin²θ^l_eff is not affected by radiative corrections, the variations of sin²θ^l_eff have basically the same impact at LO and at NLO. It is worth noticing that the dependence of dσ/dM_ll on sin²θ^l_eff is twofold: on the one hand, it depends on the g_V/g_A ratio through the Zff vertices and, on the other hand, there is an overall dependence coming from the relation between α_Gµ, G_µ, sin²θ^l_eff and M_Z of Eq. (53). The latter effect is the source of the enhancement in the low-mass region of Fig. 14, as can be understood by comparing the predictions in Fig. 14 with the ones in Fig. 16, obtained in the (α_0, sin²θ^l_eff, M_Z) scheme. As a consequence, in order to assess the sensitivity of the dilepton invariant mass distribution on the leptonic effective weak mixing angle (considered as a measure of the g_V/g_A ratio), one should consider the normalized dσ/dM_ll distribution, rather than the absolute one.

Fig. 15 Effects of varying the input parameter sin²θ^l_eff = 0.23154 ± 0.00016 in the (G_µ, sin²θ^l_eff, M_Z) scheme from the central value to the upper and lower ones, at leading and next-to-leading order. In particular it is shown the absolute difference between the asymmetry distribution obtained with the upper/lower value of sin²θ^l_eff and the one with the central value sin²θ^l_eff,c = 0.23154.

In Figure 15 we plot the quantity of Eq. (54), showing the effects on A_FB(M_ll) induced by variations of sin²θ^l_eff within the same range considered in Fig. 14.
Also in this case, the dependence on sin²θ^l_eff is basically the same at LO and at NLO accuracy. Quantitatively, it amounts to approximately ±3 × 10⁻⁴ in the resonance region and drops quickly away from the Z peak. As A_FB is defined through a ratio of differential distributions, the overall spurious dependence on sin²θ^l_eff related to Eq. (53) cancels and the results in Fig. 15 show the sensitivity of the A_FB on the effective leptonic weak-mixing angle.

In Figures 17 and 18, we focus on the parametric uncertainty coming from the value of the W-boson mass that affects the predictions obtained in the (G_µ, M_W, M_Z) scheme. In particular, we plot the quantities in Eqs. (52) and (54) where we replaced s²_eff,c and ∆s²_eff with the reference W-mass value (M_W^c = 80.385 GeV) and its 1σ error (∆M_W = ±15 MeV). Figures 17 (18) and 14 (15) are very similar. This can be understood for instance at LO, where the variations of M_W and sin²θ^l_eff are related by δ sin²θ^l_eff = −(2 M_W/M_Z²) δM_W (using the tree-level relation sin²θ_W = 1 − M_W²/M_Z²), from which we can see that a shift of 15 MeV in M_W corresponds to a shift of −0.0003 in sin²θ^l_eff, which is approximately twice the shift we are considering in Figs. 14 and 15. The plots show the same pattern also at NLO, though the relation between sin²θ^l_eff and M_W beyond LO is more involved (indeed, at variance with the plots in Figs. 14 and 15, in Figs. 17 and 18 the NLO curves do not overlap with the LO ones). As in the case of Fig. 14, the source of the enhancement in the low invariant mass region of Fig. 17 is the overall dependence on M_W originating from the relation α_Gµ = √2 G_µ M_W² (1 − M_W²/M_Z²)/π of Eq. (56), where the real part of the masses is taken in the complex-mass scheme.

Figs. 19 and 20 show the sensitivity of dσ/dM_ll and A_FB(M_ll) to variations of 400 MeV in the top-quark mass value, for the three different input parameter schemes. Since M_top enters parametrically only through the loop diagrams, we display only the results obtained with the NLO predictions. The largest part of the top-quark mass dependence at O(α) can be encoded in the ∆ρ factor defined in Sect. 3. In the NLO predictions computed in the (G_µ, M_W, M_Z) scheme, ∆ρ enters in two different ways: through the overall factor −2∆r (∼ 2 (c²_W/s²_W) ∆ρ) and via the counterterm corresponding to s²_W (δs²_W ∼ c²_W ∆ρ). The former contribution is responsible for the constant shift of about ±3 × 10⁻⁴ for ∆M_top = ±0.4 GeV clearly visible at low dilepton invariant masses, while the latter is the source of the shape effect in the upper panel of Fig. 19. In the (α_0, G_µ, M_Z) scheme, ∆ρ enters the O(α) predictions only via the counterterms δG_µ and δs²_W: as a consequence, the γ-exchange amplitude is not affected by top-mass variations, as clearly visible in the low dilepton invariant-mass region in the central panel of Fig. 19. In the (G_µ, sin²θ^l_eff, M_Z) scheme, ∆ρ comes from the overall factor −2∆r̄ (∼ 2∆ρ) and induces a constant shift approximately three times smaller than the one coming from ∆r in the (G_µ, M_W, M_Z) scheme (given the different coefficients multiplying ∆ρ in the two calculations, namely c²_W/s²_W and 1). The two lines in the lower panel of Fig. 19 are not completely flat because, besides the quadratic terms in M_top collected in ∆ρ, there is a residual subleading dependence on the top-quark mass which leads to a tiny relative effect of order 10⁻⁶. In the forward-backward asymmetry the overall contributions from ∆r and ∆r̄ largely cancel.
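As a quick numerical cross-check of the LO correspondence between the M_W and sin²θ^l_eff variations quoted above (using only the tree-level relation sin²θ_W = 1 − M_W²/M_Z² and the reference values M_W = 80.385 GeV, M_Z = 91.1876 GeV quoted elsewhere in the paper; this is an illustration, not the full NLO relation):

\[
\delta\sin^2\theta_W \;=\; -\,\frac{2\,M_W}{M_Z^2}\,\delta M_W
\;\simeq\; -\,\frac{2\times 80.385\times 0.015}{(91.1876)^2}
\;\simeq\; -2.9\times 10^{-4}\,,
\]

i.e. roughly twice the ±0.00016 variation of sin²θ^l_eff considered in Figs. 14 and 15, consistent with the statement in the text.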
As a result, Fig. 20 shows the impact of the ∆ρ term in δs²_W (which, in turn, is a shift of the effective s²_W entering the calculation) for the (G_µ, M_W, M_Z) and (α_0, G_µ, M_Z) schemes, while for the (G_µ, sin²θ^l_eff, M_Z) scheme we only see the impact of the non-enhanced M_top corrections, which is basically two orders of magnitude smaller than the effect observed for the other schemes.

Treatment of ∆α_had

The contributions to the running of α coming from the charged leptons and the top quark can be computed perturbatively in terms of the corresponding contributions to the photon self-energy and its derivative (Eq. (57)). For the light-quark contributions, on the contrary, Eq. (57) cannot be used because of the ambiguities related to the definition of the light-quark masses arising from non-perturbative QCD effects. In the literature, a common strategy to compute the light-quark contribution to ∆α is the introduction of light fermion masses as effective parameters which are used to calculate the analogue of Eq. (57) for the quark sector (Eq. (58)). The light-quark masses are chosen in such a way that the resulting hadronic running of α from 0 to M_Z² corresponds to the one obtained from the experimental results for inclusive hadron production in e+e− collisions using dispersion relations (∆α_had^fit), namely ∆α_had^pert.(M_Z²) = ∆α_had^fit(M_Z²) (Eq. (59)). This approach is implemented in Z_ew-BMNNPV and it is used as a default. We stress that the light-quark masses are only used for the self-energy corrections but they do not enter the vertex and box diagrams. In particular, they are not used for the QED corrections, where the light-quark mass singularities are regularized by means of dimensional regularization.

Starting from revision 4048, a more accurate treatment of the hadronic vacuum polarization is available in Z_ew-BMNNPV. The code contains an interface to the routines of Refs. [185][186][187][188][189][190][191][192] and [193][194][195][196] (HADR5X19.F and KNT v3.0.1, respectively) for the calculation of the hadronic running of α based on the experimental data for inclusive e+e− → hadron production at low energies in terms of dispersion relations. This interface can be activated using the input flag da_had_from_fit=1 and the flag fit=1,2 can be used to switch between the two routines for ∆α_had^fit. It is worth noticing that these routines only provide results in the range [0, q²_max]: for larger values of q², we define ∆α_had^fit(q²) = ∆α_had^fit(q²_max) + ∆α_had^pert.(q²) − ∆α_had^pert.(q²_max).

The starting point for the calculation for da_had_from_fit=1 is the relation of Eq. (61). While Eq. (58) is a definition of ∆α_had^pert., Eq. (61) can be considered as a definition of the hadronic contribution δZ_A^had to the photon wave-function counterterm. On the one hand, Eq. (61) is used in the one-loop corrections to the photon propagator to replace the combination Σ_AA^had(s) − s δZ_A^had with −s ∆α_had^fit(q²) + i s Im Σ_AA^had(s) (where the factor δZ_A^had is the light-quark contribution to the photon wave-function renormalization counterterm) and, on the other hand, it is used for the counterterms related to the electric charge and the photon wave function. More precisely, since the self-energy in Eq. (61) can be computed perturbatively for q² much larger than Λ²_QCD, we take q² = M_Z² and tune the quark masses using Eq. (59).
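To make the mass-tuning procedure described above concrete, the following is a minimal sketch (not the code actually used in Z_ew-BMNNPV) of how effective light-quark masses could be fixed so that a leading-order perturbative expression for the five-flavour hadronic contribution reproduces a given ∆α_had^fit(M_Z²); the common rescaling factor, the illustrative starting masses, and the target value 0.02766 are assumptions made only for this example.

import math

ALPHA0 = 1.0 / 137.035999
MZ = 91.1876  # GeV

# Illustrative effective quark masses (GeV) and charges; the actual tuned
# values depend on the adopted Delta_alpha_had^fit parameterization.
quarks = {"u": (0.07, 2/3), "d": (0.07, -1/3), "s": (0.15, -1/3),
          "c": (1.5, 2/3), "b": (4.5, -1/3)}

def dalpha_had_pert(scale_factor, q2=MZ**2):
    """Leading-order Delta_alpha_had for q2 >> m_q^2, with all masses
    rescaled by a common factor (the tuning parameter)."""
    total = 0.0
    for m, Q in quarks.values():
        mq = scale_factor * m
        total += 3 * Q**2 * (math.log(q2 / mq**2) - 5.0 / 3.0)
    return ALPHA0 / (3.0 * math.pi) * total

def tune(target=0.02766):
    """Bisection on the common mass-rescaling factor so that the
    perturbative result matches the dispersive value at q2 = MZ^2."""
    lo, hi = 0.1, 10.0
    for _ in range(60):
        mid = math.sqrt(lo * hi)
        if dalpha_had_pert(mid) > target:
            lo = mid   # larger masses -> smaller Delta_alpha
        else:
            hi = mid
    return math.sqrt(lo * hi)

if __name__ == "__main__":
    f = tune()
    print(f"rescaling factor = {f:.3f}, "
          f"Delta_alpha_had(MZ^2) = {dalpha_had_pert(f):.5f}")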
This way, the formal expression of the counterterms is the same as the one used in the default computation (da_had_from_fit=0). The calculations for da_had_from_fit set to 0 and 1 are not equivalent: in fact, even though both of them rely on the tuning of the light-quark masses from Eq. (59), the corrections to the photon propagator are different since ∆α_had^fit(s) ≠ ∆α_had^pert.(s). The input scheme used here is (α_0, M_W, M_Z). We notice that the electric-charge and wave function counterterms could also be defined from Eq. (61) setting the light-quark masses to zero in the photon self-energy diagrams: on the one hand, this would lead to differences of order m_q²/M_Z² and, on the other hand, setting the light-quark masses to zero would require several modifications to the routines used for the evaluation of the virtual one-loop corrections.

The impact of the improved treatment of the hadronic running of α is only visible for the input parameter schemes that use α_0 as an independent parameter, since for the other schemes the terms that depend logarithmically on the light-quark masses cancel. Fig. 21 shows the dependence of dσ/dM_ll on the uncertainty δ∆α_had^fit for each of the two adopted parameterizations, with the (α_0, M_W, M_Z) scheme. The effect of changing ∆α_had from its central value by a shift of ±δ∆α_had is at the level of ±0.022% for HADR5X19.F and ±0.027% for KNT v3.0.1, respectively. Variations of ∆α_had mainly affect the δZ_e counterterm, as the light-quark mass logarithms in δZ_e have been traded for ∆α_had, leading to an almost constant shift in Fig. 21. Changing ∆α_had also affects the NLO corrected γ propagator, but the numerical impact is tiny as the bare self-energy diagrams do not involve logarithmically enhanced light-quark mass terms: this effect, being only present for the γ-mediated amplitude, induces a small shape effect in Fig. 21. In A_FB the contribution from δZ_e largely cancels, leading to an absolute change in A_FB at the 10⁻⁶ level as shown in Fig. 22.

Fig. 22 Change in the asymmetry distribution if one takes the central value for ∆α_had^fit plus its uncertainty δ∆α_had^fit, as in Fig. 21.

Treatment of the Z width

The unstable nature of the Z vector boson is considered by default through the complex-mass scheme [147][148][149], according to which the squared vector boson masses are taken as complex quantities, µ_V² = M_V² − i Γ_V M_V (V = W, Z), in the LO and NLO calculation. The input values for M_V and Γ_V are assumed to be the on-shell ones, M_V^OS and Γ_V^OS, and are converted internally in the initialization phase to the corresponding pole values using the relations [197,198] M_V = M_V^OS/√(1 + (Γ_V^OS/M_V^OS)²) and Γ_V = Γ_V^OS/√(1 + (Γ_V^OS/M_V^OS)²). The pole parameters M_V and Γ_V are used throughout the code for the matrix element calculations. In the CMS, the couplings that are functions of the gauge-boson masses become necessarily complex quantities. In particular, in the schemes with M_W and M_Z as input parameters, the quantities s²_W and δs_W/s_W of Eq. (5) and Eq. (6) are calculated in terms of µ_W and µ_Z. Since sin²θ^l_eff is defined through the real part of the g_V/g_A ratio, it is considered as a real quantity when used as a free parameter. Similarly, the input parameters α_0, α(M_Z²), and G_µ are real. As a consequence, in the input parameter schemes with only one vector boson mass, M_Z, the input couplings are taken as real quantities.
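As an illustration of the on-shell to pole conversion quoted above, a short stand-alone sketch follows (the numerical inputs are the default on-shell values listed in Appendix B; this is not the routine used in the code, only the relations quoted in the text evaluated numerically):

import math

def os_to_pole(m_os, gamma_os):
    """Convert on-shell mass and width to the corresponding pole values."""
    g = gamma_os / m_os
    norm = math.sqrt(1.0 + g * g)
    return m_os / norm, gamma_os / norm

# Default on-shell values (Appendix B)
MZ_pole, GZ_pole = os_to_pole(91.1876, 2.4952)
MW_pole, GW_pole = os_to_pole(80.385, 2.085)

print(f"Z: M = {MZ_pole:.4f} GeV, Gamma = {GZ_pole:.4f} GeV")  # ~91.1535, ~2.4943
print(f"W: M = {MW_pole:.4f} GeV, Gamma = {GW_pole:.4f} GeV")  # ~80.358, ~2.0843

The roughly 34 MeV downward shift of the Z mass is the familiar difference between the on-shell and pole definitions.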
When performing calculations in the input parameter/renormalization schemes having G_µ among the free parameters (with the only exception of the (α_0, G_µ, M_Z) one), G_µ is usually traded for α_Gµ by means of Eqs. (53) or (56). Since the gauge-boson masses enter these relations, α_Gµ might in principle acquire an imaginary part if the CMS is employed. In the code, we follow the standard procedure of taking a real-valued α_Gµ to minimize the spurious higher-order terms associated with the overall factor [Im(α_Gµ)]². More precisely, in Eqs. (53) and (56), we always use the real part of the gauge-boson masses.

The CMS preserves gauge invariance order by order in perturbation theory and this feature guarantees also that the higher-order unitarity violations are not artificially enhanced. For this reason it is the scheme commonly adopted for multiparticle NLO calculations. However, for neutral-current Drell-Yan it is possible to adopt other strategies for the treatment of the Z resonance. In particular, in the Z_ew-BMNNPV package, we implemented the so-called pole and factorization schemes following Secs. 3.3.ii and 3.3.iii of Ref. [29], respectively. These schemes can be switched on by means of the flags PS_scheme 1 and FS_scheme 1, respectively.

In Fig. 23 we show the relative difference between the pole (factorization) scheme, blue (red) line, w.r.t. the CMS, for dσ/dM_ll, considering different input parameter schemes. A feature common to all schemes is an oscillation of a few 0.01% in amplitude around the Z resonance, as already shown separately for u ū and d d̄ partonic initial states in Ref. [29]. Over the whole range 50 GeV < M_ll < 200 GeV, the shapes of the differences are similar for the three considered input parameter schemes, with larger differences appearing for the (G_µ, M_W, M_Z) and (α_0, G_µ, M_Z) schemes. The structure at the WW threshold present in Fig. 6 of Ref. [29] is not visible, within the available statistical error, because of a partial cancellation between the contributions of up- and down-type quark channels.

The same comparison between pole (factorization) scheme and CMS is shown for A_FB as a function of M_ll in Fig. 24. In this case we plot the absolute difference instead of the relative difference with respect to the CMS. For A_FB the difference is smooth around the Z resonance, being of the order of a few 10⁻⁵ for the pole scheme and of the order of 10⁻⁴ for the factorization scheme. Contrary to the dσ/dM_ll case, in Fig. 24 the WW threshold enhancement, of the order of 5 × 10⁻⁴, is clearly visible.
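For concreteness, the width treatment could be selected in the powheg.input card with the flags quoted above; the lines below are a purely illustrative sketch (only the flag names are taken from the text, everything else — including whether further settings must accompany them — is left unspecified):

! pole scheme instead of the default complex-mass scheme
PS_scheme 1
! alternatively, the factorization scheme
! FS_scheme 1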
High energy regime

The analysis presented so far has been focused on the physics at the Z peak. To complete this study, we examine now the behaviour of the corrections and the interplay among different renormalization and input schemes in the high-energy regime, which can be relevant also in view of the upcoming programme of the LHC and at future high-energy machines. In this section, we present the results for a specific initial-state quark flavour focusing, for brevity, on d-quarks: in this way PDF contributions exactly cancel when studying the relative effect of weak corrections with respect to the LO, as well as in the ratio of predictions obtained for different schemes. The main motivation of our choice is the fact that, at high dilepton invariant masses, PDFs are poorly constrained and typically affected by large errors. If one considers the contribution of all quark flavours at the same time, the above-mentioned relative corrections and ratios have a residual dependence on PDFs which tends to induce, for high M_ll values, quite large unphysical distortions. The actual size of this effect clearly depends on the specific PDF set used.

Fig. 24 Difference of the pole/factorization scheme with respect to the default complex-mass scheme in the forward-backward asymmetry distribution at NLO. The difference PS-CMS is shown by the solid blue curve, while the FS-CMS one by the dashed red one. Upper panel: (G_µ, M_W, M_Z); middle panel: (α_0, G_µ, M_Z); lower panel: (G_µ, sin²θ^l_eff, M_Z).

In Fig. 25 we repeat the study in Fig. 9 for dilepton invariant masses in the range between 1 and 12 TeV. With respect to the peak region, we find a quite different behaviour. First of all, the dilepton invariant mass ratios at LO are flat. In the upper panel of Fig. 9, the shapes came from the variations in the values of s²_W entering the g_Zff couplings in the Z-boson exchange diagram and the M_ll dependence originated from the different propagators of the γ and the Z as well as from the PDFs weighting the different quark flavours: in Fig. 25 not only do we consider only d-quarks, but also the Z-boson propagator is effectively 1/s (since s ≫ M_Z²) so that the only s dependence in the differential cross section is the overall flux factor (see for instance Eq. (2.12) of [29] with χ_Z = 1).

Fig. 25 Relative difference of the predictions for the dilepton invariant mass cross section distribution at LO (upper panel), NLO (middle panel), NLO+HO (lower panel), in the range 1-12 TeV. The calculation is performed with the same inputs of Fig. 9.

Moving to the NLO results, the level of agreement between the different input parameter/renormalization schemes is of the order of 1% at the left edge of the plot, but it gets worse as the partonic center-of-mass energy increases, with a 10% spread at 12 TeV. The inclusion of the fermionic higher-order effects discussed in the previous sections improves the picture only around 1 TeV, but it does not reabsorb the differences among the predictions in the considered schemes at large M_ll.
The behaviour shown in Fig. 25 can be understood as follows. In the considered dilepton invariant mass range, the bosonic part of the weak NLO corrections is dominated by the so-called Sudakov logarithms, which are double and single logarithms of kinematic invariants over the gauge-boson masses. These logs correspond to the infrared limit of the weak corrections, where the gauge-boson masses are small compared to the energy scales involved and act as (physical) cutoff for the soft and/or collinear virtual weak corrections. Besides the Sudakov corrections, there is another class of logarithmic corrections coming from parameter renormalization. When using dimensional regularization, counterterms contain logarithms of the unphysical mass-dimension scale µ_Dim in the combination 1/ε − γ_E + ln(4π) + ln(µ_Dim²/r_ct²), where ε = (4 − D)/2, D being the number of space-time dimensions, and r_ct is related to particle masses in on-shell-based schemes or directly to the renormalization scale µ_R in the MS scheme. This contribution cancels against similar terms appearing in the bare loop diagrams (where r_ct will be some other scale, say r_bare), leaving contributions of the form log(r²_bare/r²_ct). In the Drell-Yan parameter-renormalization counterterms only vertex diagrams enter, so the only possible scale (in particular in the limit of vanishing gauge-boson masses) is M_ll. The functional form of both the Sudakov and the parameter-renormalization logarithms is the same in any scheme, but the coefficients multiplying the logarithms (including the LO-like amplitude where they appear and the LO amplitude in interference with it) differ numerically as they are functions of the actual α and s²_W values used. As a result, the logarithms appearing in the numerator and denominator in the NLO ratios of Fig. 25 have different coefficients and they do not cancel, leaving logarithmically enhanced remnants. It is worth emphasising that the inclusion of the fermionic higher-order corrections by definition has no impact on the Sudakov corrections (which are bosonic), and it is also irrelevant for the parameter-renormalization logs, since the corrections in Sect. 3 do include an effective running of the parameters, but only up to the weak scale.

In order to prove the argument above, we implemented a private version of Z_ew-BMNNPV including the routines used in [225,226] for the evaluation of the Sudakov corrections in ALPGEN [227] (for more recent implementations of Sudakov logarithms in other frameworks, see Refs. [228][229][230][231]). Though it is true that the Sudakov corrections alone are not a good approximation for the full NLO weak corrections to neutral-current Drell-Yan (as pointed out, for instance, in [232]), this is mainly due to the large cancellations between fermionic and bosonic corrections (as shown in Fig. 26) and to a large UV contribution from parameter-renormalization logarithms. For the schemes based on on-shell renormalization, like the (α_0, G_µ, M_Z), (G_µ, sin²θ^l_eff, M_Z), and (G_µ, M_W, M_Z) shown in the plot, the fermionic O(α) corrections are of the order of 10-20% and mainly come from the fermionic loops entering parameter renormalization.
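Schematically, and only to fix the structure referred to in this discussion (the precise coefficients depend on the process, the helicity configuration, and the scheme; the expression below is a generic sketch rather than the formula implemented in the code), the logarithmically enhanced one-loop weak terms at s = M_ll² ≫ M_W² take the form

\[
\frac{d\sigma^{\mathrm{NLO\ weak}}}{d\sigma^{\mathrm{LO}}} - 1 \;\sim\;
-\,\frac{\alpha}{4\pi}\left[c_2\,\log^2\!\frac{M_{ll}^2}{M_W^2}
\;+\; c_1\,\log\frac{M_{ll}^2}{M_W^2}\right]
\;+\; \frac{\alpha}{4\pi}\,b_1\,\log\frac{M_{ll}^2}{r_{\mathrm{ct}}^2}\,,
\]

where the square bracket collects the Sudakov double and single logarithms and the last term represents the parameter-renormalization logarithms; since the generic coefficients c_2, c_1, b_1 and the accompanying powers of α and s²_W take different numerical values in different input schemes, the ratios of Fig. 25 retain a residual logarithmic M_ll dependence.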
Fig. 26 Comparison among the predictions for the invariant mass cross section distribution, obtained by including the full NLO weak corrections, the fermionic-only and bosonic-only corrections, or the approximation which includes the Sudakov logarithms, the parameter-renormalization logs and the leading fermionic corrections stemming from ∆α and ∆ρ. Four schemes are shown: the MS in both its running- and fixed-scale realizations, the latter one with µ_R = M_Z.

The sum of the Sudakov corrections and the logarithms from parameter renormalization is a reasonable approximation of the NLO corrections, reproducing their shape with an essentially constant shift of about 5-6%. Similar considerations apply to the MS scheme, if a fixed renormalization scale is used (µ_R = M_Z in Fig. 26). The picture changes considerably if the MS scheme is used with the renormalization scale set to the dilepton invariant mass: in this way, the corrections related to parameter renormalization are reabsorbed in the running couplings and the remaining corrections are smaller than in the other schemes (the fermionic ones, in particular, boil down to an almost flat −4% effect). The figure also singles out the part of the fermionic contributions coming from the universal enhanced terms ∆α and ∆ρ.

To conclude, in Fig. 27 we repeat the same study of the lower panel of Fig. 25, but with the leading logarithmic corrections (Sudakov plus parameter-renormalization logs) subtracted from the NLO+HO predictions. Despite the approximations in the calculation of the logarithmic corrections, it is still possible to read a clear trend in Fig. 27: the ratios fall in the few per mille range (as in the case of Fig. 9 for the near-resonance region) and they tend to be flat.

Fig. 27 Relative difference of the predictions for the dilepton invariant mass cross section distribution at NLO+HO, after the subtraction of the leading logarithmic corrections (Sudakov plus parameter-renormalization logs).

As thoroughly discussed in the literature, when electroweak Sudakov logarithms start to become dominant, a resummation algorithm needs to be adopted in order to obtain reliable predictions. A very recent discussion of automated resummation algorithms of Sudakov logarithms in simulation tools can be found in Ref. [233]. This issue is left to future investigations for the case of DY processes with POWHEG-BOX.
Conclusions

The precision physics program of the LHC requires flexible and precise simulation tools to be used for different purposes. The NC DY process, thanks to its large cross section and clean signature, plays a particular role in this context and its NLO electroweak corrections are a mandatory ingredient for every kind of analysis. In the present paper we have addressed the issue of the input parameter/renormalization schemes in the gauge sector for the electroweak corrections to NC DY. In particular, we have provided the relevant expressions for the counterterms at NLO precision in various realizations of on-shell renormalization schemes as well as of a hybrid MS/on-shell renormalization scheme. Among the on-shell schemes, we considered explicitly combinations built from α_0, α(M_Z²), G_µ, M_W, and sin²θ^l_eff, together with M_Z. The hybrid scheme we considered, containing the input quantities (α_MS(µ²), s²_W^MS(µ²), M_Z), is of interest for possible future direct determinations of the running of the electroweak couplings at high energies. In addition to the NLO expressions of the counterterms, we provided, for each considered scheme, the expressions for the higher-order universal corrections due to ∆α and ∆ρ. All the relevant expressions are presented in a self-contained way, so that they could be easily adopted in any simulation tool. For the present phenomenological study, all the discussed input parameter schemes have been implemented in the code Z_ew-BMNNPV of the POWHEG-BOX-V2 framework, which has been used to obtain illustrative phenomenological results on the differential distributions dσ/dM_µ+µ− and A_FB(M_µ+µ−), with inclusive acceptance of the leptons. The main features of the various input parameter schemes have been quantitatively analysed for the two considered observables, with focus on the invariant mass window which includes the Z peak and on the high energy region. The latter is characterized by the presence of Sudakov logarithms, whose impact has been analysed in detail with a comparison among different schemes. In addition to the effects of the electroweak corrections, we illustrated the parametric uncertainties on the two considered observables, associated with the different schemes. For the schemes with α_0 as input parameter, we included the possibility to calculate ∆α by means of two different parameterizations based on dispersion relations using e+e− collider data. A section is devoted to the discussion of the improved treatment of the unstable Z boson with respect to the original version of the code. While the full complex-mass scheme is the new default, also the pole and the factorization schemes are available as options in the code. The numerical impact of the different width options on the considered observables is shown for three representative input parameter schemes.

Appendix A: Input parameters

The values of the EW parameters used for the phenomenological results presented in the paper are collected in Eq. (A.1). The numerical values of s²_W^MS and sin²θ^l_eff correspond to the estimates of Ref. [184]. The numerical values of the fermionic masses are given below; the light-quark masses are used for the calculation of ∆α_had, as detailed in Sect. 7. Depending on the input parameter scheme adopted, we use the values in Eq. (A.1) only for the independent parameters, while the other ones are either derived or not used at all.
The only exception is α_0, which is always used as input parameter for the calculation of the one-loop QED corrections (not discussed in the present paper) and enters the loop factor α_0/(4π). While the choice of α_0 for the photon-fermion coupling is motivated by the physical scale of the γff splitting and by the required cancellation of the infrared divergences between virtual and real contributions, the natural choice of α entering the weak loop factor α/(4π) is given by the α of the input parameter scheme at hand. However, we leave to the user the freedom of using the loop factor α_0/(4π) also in the pure weak corrections by setting to 0 the flag a2a0-for-QED-only. We stress that the different choices of α for the weak loop factor introduce differences at O(α²).

For the parton distribution functions (PDFs), we use the NNPDF31_nlo_as_0118_luxqed set [234][235][236] provided by the LHAPDF-6.2 framework [237] and set the factorization scale to the invariant mass of the dilepton system.

Appendix B: Input flags

In the following, we briefly describe the input-parameter flags for the Z_ew-BMNNPV package that have been used to produce the results shown in the main text. These are only a subset of the available input flags and we refer to the user manual for the complete list of process-specific input options. The non-process-specific input flags can be found in the POWHEG-BOX-V2 documentation.

Options for EW corrections

no_ew: it is possible to switch off electroweak corrections by setting no_ew 1 in the powheg.input file. By default no_ew= 0.

no_strong: allows to switch off the QCD corrections when set to 1. By default no_strong= 0.

ew_ho: the fermionic higher-order corrections discussed in Sect. 3 are included by using the flag ew_ho 1. By default ew_ho= 0.

includer3qcd: if set to 1, the expression for ∆r used in the higher-order corrections includes the three-loop QCD corrections. Default 0.

includer3qcdew: when equal to 1, the three-loop mixed EW-QCD effects are included in the formula for ∆r used for the fermionic higher-order corrections. Default 0.

includer3ew: same as the previous flag, but for the three-loop EW corrections. Default 0.

dalpha_lep_2loop: if set to 1, ∆α includes the two-loop leptonic corrections from Ref. [238] when computing higher-order corrections. Default 0.

QED-only: for the NC DY, the EW corrections can be split into QED and pure weak corrections in a gauge-invariant way. By setting QED-only 1 in the input card, only pure QED corrections are computed.

weak-only: when set to 1, only the virtual weak part of the EW corrections is computed.

Note that events can be generated without QCD corrections only at LO accuracy or at LO plus weak (potentially higher-order) corrections: the generation of events including NLO QED corrections but not the NLO QCD ones is not allowed.

Options for EW input parameter schemes

scheme: this is the main flag for the choice of the EW input scheme. All the available schemes use the on-shell Z mass M_Z^OS (Zmass) as independent parameter (internally converted to the corresponding pole value), but differ in the choice of the remaining two independent parameters. Default 0.

scheme 0: the second EW input parameter is α_0.

scheme 1: α(M_Z²) is taken as free parameter.

scheme 2: G_µ is used as input parameter.
scheme 3: in this scheme, the actual input parameter is α_0. However, for each phase-space point, the matrix elements are evaluated using the on-shell running of α from q² = 0 to the partonic center of mass of the event (computed in terms of the Born-like momenta kn_pborn). The additional factor of α coming from real and virtual QED corrections is always set to α_0. The loop factor from the virtual weak corrections is set by default to α_0, but α(q²) is employed when the flag a2a0-for-QED-only is equal to 1 in the input card. The running is performed by default at NLO accuracy, and contains the two-loop leptonic corrections when the flag dalpha_lep_2loop is set to 1.

scheme 4: the (α_0, G_µ, M_Z) scheme of Sect. 4.3 is used.

scheme 5: the calculation is performed in the MS scheme of Sect. 4.4 (see below for input flags specific to this scheme choice).

When scheme is equal to 0, 1, 2, or 3, the third EW input parameter is by default the on-shell W mass M_W^OS (Wmass), internally converted to the corresponding pole value. If the flag use-s2effin is different from zero, the third independent parameter is the effective weak-mixing angle as described in Sect. 4.2.

use-s2effin: should be set to the desired value of the effective weak-mixing angle sin²θ^l_eff. If this flag is present and scheme is equal to 0, 1, 2, or 3, the calculation is performed in the (α_0, sin²θ^l_eff, M_Z), (α(M_Z²), sin²θ^l_eff, M_Z), (G_µ, sin²θ^l_eff, M_Z), and (α(q²), sin²θ^l_eff, M_Z) schemes, respectively. This flag is not compatible with the options scheme= 4, 5.

a2a0-for-QED-only: regardless of the scheme used, the additional loop factor coming from the virtual weak corrections is set by default to α_0. If the flag is set to 1, the purely weak loop factor is set to the same value used for the LO matrix element in the selected scheme. Note that the additional factor of α from real and virtual QED corrections is always equal to α_0.

Besides M_Z^OS, also the on-shell Z width (Zwidth) is a free parameter of the calculation (internally converted to the corresponding pole value). The same holds for the on-shell W width (Wwidth), when M_W^OS is taken as an EW input parameter, though this is only relevant when the complex-mass scheme is used.

Options for the hadronic running of α and light-quark masses

da_had_from_fit: if set to 1, the calculation of the hadronic corrections to the photon propagator and its derivative is based on the experimental data for inclusive e+e− → hadron production at low energies in terms of dispersion relations. Default 0.

fit: if da_had_from_fit=1 and fit= 1(2), the HADR5X19.F (KNT v3.0.1) routine is used for the calculation of the hadronic vacuum polarization. The option fit=0 is left for cross checks, as with this option the quark loops under consideration are computed as in the case da_had_from_fit=0.

mq_only_phot: if set to 1, the light-quark masses are set to 0 in the W, Z, and mixed γZ self-energy corrections and in their derivatives, since their light-quark mass dependence is regular and tiny in the massless quark limit. Default 0.
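As an illustration of how the flags described so far combine, here is a minimal, purely hypothetical powheg.input excerpt selecting the (G_µ, sin²θ^l_eff, M_Z) scheme with the fermionic higher-order corrections and the data-driven hadronic running switched on; only the EW-related entries are shown, all process and generation settings are omitted, and the numerical value of use-s2effin is simply the one quoted in the parametric-uncertainty study above:

scheme 2               ! G_mu as second EW input parameter
use-s2effin 0.23154    ! third input: sin^2 theta_eff^l
ew_ho 1                ! fermionic higher-order corrections
da_had_from_fit 1      ! dispersive hadronic running of alpha
fit 1                  ! HADR5X19.F parameterization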
The treatment of the hadronic vacuum polarization is critical for the derivative of the photon propagator and thus for the electric-charge and photon wave-function counterterms in the on-shell scheme. As a consequence, it is critical for those schemes that use α_0 as input parameter, while the impact is minor when α(M_Z²), G_µ, or α(q²) are used. In the context of the MS calculation (scheme= 5), the electric-charge and photon wave-function counterterms do not depend on the light-quark masses, while in the hadronic corrections to the bare photon propagator the light-quark mass dependence is regular and tiny in the limit m_q → 0: as a result, in this scheme the da_had_from_fit flag is not needed. The non-perturbative effects in the MS running of α_MS (and, indirectly, of s²_W^MS) are included in the starting values of the evolution α_MS(µ_0²) and s²_W^MS(µ_0²), which should correspond to a scale µ_0² sufficiently larger than 4m_b².

Options for the MS scheme

The flags below are only effective when scheme= 5.

running_muR_sw: if set to 1, the calculation is performed in the MS scheme with dynamical renormalization scale. For each phase-space point, the MS scale is set to the dilepton invariant mass in the Born-like kinematics (kn_pborn momenta) and the couplings α_MS and s²_W^MS are evolved accordingly [165,171,172]. Default 0.

MSbarmu02: if running_muR_sw= 1, this parameter is the starting scale of the evolution of α_MS and s²_W^MS. Otherwise, this is the (constant) value of the MS renormalization scale. It should be sufficiently larger than 4m_b². By default it is the pole Z mass computed internally from the input parameter M_Z^OS.

MSbar_alpha_mu02: is the value of α_MS(µ_0²) for MSbarmu02 = µ_0². MSbar_alpha_mu02 has a default value only if MSbarmu02 is not present in the input card and corresponds to α_MS(M_Z²) computed as a function of α_0 according to Eq. (10.10) of Ref. [239], which includes effects up to O(α α²_S) (see also Ref. [170]). If excludeHOrun= 1, the NLO relation between α_0 and α_MS(M_Z²) is used.

MSbar_sw2_mu02: same as MSbar_alpha_mu02, but for s²_W^MS. The default value is s²_W^MS(M_Z²) = 0.23122.

decouplemtOFF: if set to 1, switches off the top-quark decoupling. Default 0.

decouplemwOFF: same as decouplemtOFF, but for the W decoupling.

MW_insw2_thr: allows to tune the position of the W threshold in the MS running of α and s²_W and the corresponding argument of the decoupling logarithms. If absent, this parameter is computed as M²_W,thr = Re[M_Z² (1 − s²_W^MS(µ_0²))].

excludeHOrun: if set to 1, the MS running of α and s²_W is performed at NLO accuracy. Default 0 (i.e. the higher-order effects in Refs. [165,171,172] are included).

OFFas_aMS: when set to 1, the running of α_MS and s²_W^MS does not include the corrections O(α α_S) and O(α α²_S). Default 0.

OFFas2_aMS: if equal to 1, the running of α_MS and s²_W^MS does not include the corrections O(α α²_S). Default 0.

ewmur_fact: this entry sets the factor by which the renormalization scale is multiplied. It can be used for studying scale variations, e.g. by setting it to the standard values 2 or 1/2. Default 1.

Remaining EW parameters

alphaem: α_0. Default value: 1/137.0359909956391. When scheme= 0, it is used for both the couplings in the LO matrix element and in the NLO corrections. For other schemes, it is used for the extra power of α in the real and virtual QED corrections (and in the loop factor for the virtual weak corrections if a2a0-for-QED-only is not set to 1).

alphaem_z: α(M_Z²). Default value: 1/128.95072067974729. It is only used if scheme= 1.
See above (alphaem) for the additional power of α in the NLO corrections.

gmu: G_µ. Default value: 1.1663787 × 10⁻⁵ GeV⁻². It is only used if scheme= 2. See above (alphaem) for the additional power of α in the NLO corrections.

azinscheme4: if it is positive, the electromagnetic coupling of the (α_0, G_µ, M_Z) scheme is set to α = α_0/(1 − ∆α) in the evaluation of the matrix elements (and in the α/4π loop factor if a2a0-for-QED-only= 1). Default 0. It is only active when scheme= 4.

Zmass: on-shell Z-boson mass M_Z^OS. Default value: 91.1876 GeV. Internally converted to the corresponding pole value.

Zwidth: on-shell Z-boson width Γ_Z^OS. Default value: 2.4952 GeV. Internally converted to the corresponding pole value.

Wmass: on-shell W-boson mass M_W^OS. Default value: 80.385 GeV. Internally converted to the corresponding pole value. This parameter is only used if the flag use-s2effin is absent and scheme= 0, 1, 2, 3. Otherwise, it is computed from the independent EW parameters.

Wwidth: on-shell W-boson width Γ_W^OS. Default value: 2.085 GeV. Internally converted to the corresponding pole value. This parameter is only relevant when complexmasses= 1, if scheme is set to 0, 1, 2, 3 while use-s2effin is absent.

Hmass: Higgs-boson mass, only entering the weak corrections. Default value: 125 GeV.

Tmass: top-quark mass. Default value: 173 GeV.

Elmass: electron mass. Default value: 0.51099907 MeV. Since the calculation is performed for massive final-state leptons, this is the parameter used in the phase-space generator when running the code for the process pp → e−e+.

For a description of the role of light-quark masses in the calculation, we refer to Sect. 7 and to the flags for the hadronic running of α and light-quark masses.

… charge counterterm, while for µ < M_top (M_W,thr.) the full top-quark (W) loop enters the counterterm expression. Note that the discontinuity at O(α) on the W threshold cancels the corresponding discontinuity in the running of α_MS(µ²). In Eq. (42), δ_D,W (δ_D,top) is equal to one if the W (top) decoupling is enabled together with the threshold corrections and zero otherwise (flags decouplemtOFF, decouplemwOFF, OFFthreshcorrs).

Fig. 3 Relative corrections to the invariant mass cross section distribution obtained by considering only NLO bosonic contributions.

Fig. 4 Absolute corrections to the forward-backward asymmetry obtained by considering only NLO bosonic contributions.

Fig. 5 Higher-order correction to the cross section distribution as a function of the leptonic invariant mass. The three curves correspond to the three different choices of renormalization scheme discussed.

… QCD contribution in Eq. (1), normalized to the LO predictions, to the invariant mass cross section distribution.

Fig. 10 Absolute difference of the predictions for the forward-backward asymmetry as a function of the dilepton invariant mass at LO (upper panel), NLO (middle panel), NLO+HO (lower panel). The calculation is performed in the CMS. The α values used in the loop factors correspond to the ones used for the LO couplings. In the (α_0, G_µ, M_Z) scheme the actual value of α is α_0/(1 − ∆α).
Fig. 12 Relative difference of the dilepton invariant mass distribution obtained for µ_R = 2M_ll (red and green lines) or µ_R = M_ll/2 (black and blue curves) with respect to the predictions obtained with the default choice µ_R = M_ll. The MS running of α_MS(µ_R²) and s²_W^MS(µ_R²) is computed at O(α) in the solid and dotted lines, while in the dashed and dot-dashed lines the running includes the higher-order effects described in the text. The distributions are computed at LO in the upper panel and at NLO in the lower one.

Fig. 13 Relative difference between the dilepton invariant mass distributions computed in the MS scheme with and without including the HO corrections to the running of α_MS(µ_R²) and s²_W^MS(µ_R²) described in the main text.

Fig. 15 Effects of varying the input parameter sin²θ^l_eff = 0.23154 ± 0.00016 in the (G_µ, sin²θ^l_eff, M_Z) scheme from the central value to the upper and lower ones, at leading and next-to-leading order. In particular it is shown the absolute difference between the asymmetry distribution obtained with the upper/lower value of sin²θ^l_eff and the one with the central value sin²θ^l_eff,c = 0.23154.

Fig. 16 Effects induced on the dilepton invariant mass distribution by a variation of the input parameter sin²θ^l_eff = 0.23154 ± 0.00016 in the (α_0, sin²θ^l_eff, M_Z) scheme from the central value to the upper and lower ones, at LO and NLO. Same notation and conventions of Fig. 14.

Fig. 17 Effects of varying the input parameter M_W = 80.385 ± 0.015 GeV in the (G_µ, M_W, M_Z) scheme from the central value to the upper and lower ones, at LO and NLO. Here it is shown the relative difference between the invariant mass distribution obtained with the upper/lower value of M_W and the one with the central value M_W^c = 80.385 GeV.

Fig. 18 Effects of varying the input parameter M_W = 80.385 ± 0.015 GeV in the (G_µ, M_W, M_Z) scheme from the central value to the upper and lower ones, at LO and NLO. In particular it is shown the absolute difference between the asymmetry distribution obtained with the upper/lower value of M_W and the one with the central value M_W^c = 80.385 GeV.

Fig. 19 Effects of varying the top-quark mass M_top = 173.0 ± 0.4 GeV in the three considered schemes, from the central value to the upper and lower ones. The top-quark mass enters only the next-to-leading order corrections. Here it is shown the relative difference between the invariant mass distribution obtained with the upper/lower value of M_top and the one with the central value M_top^c = 173.0 GeV.

Fig. 20 Effects of the top-quark mass M_top = 173.0 ± 0.4 GeV in the three considered schemes, from the central value to the upper and lower ones. The top-quark mass enters only the NLO corrections. Here it is shown the absolute difference between the asymmetry distribution obtained with the upper/lower value of M_top and the one with the central value M_top^c = 173.0 GeV.

Fig. 23 Difference of the pole/factorization scheme with respect to the default complex-mass scheme in the dilepton invariant mass cross section distribution at NLO. The difference PS-CMS is shown by the solid blue curve, the FS-CMS one by the dashed red one. Upper panel: (G_µ, M_W, M_Z); middle panel: (α_0, G_µ, M_Z); lower panel: (G_µ, sin²θ^l_eff, M_Z).
OFFthreshcorrs: when set to 1, the threshold corrections in the MS running of α and s²_W are switched off. Default 0.
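For the MS-scheme studies with a dynamical renormalization scale discussed in the main text (including the scale variations shown in Fig. 12), a possible minimal powheg.input excerpt would be the following; the flag names and meanings are the ones documented above, while the specific values are assumptions made only for this illustration:

scheme 5            ! hybrid MSbar scheme
running_muR_sw 1    ! mu_R set to M_ll event by event
ewmur_fact 2        ! multiply mu_R by 2 (use 0.5 for the downward variation)
ew_ho 1             ! fermionic higher-order corrections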
29,325.8
2024-02-22T00:00:00.000
[ "Physics" ]
Causes of extreme events revealed by Rényi information transfer Information-theoretic generalization of Granger causality principle, based on evaluation of conditional mutual information, also known as transfer entropy (CMI/TE), is redefined in the framework of Rényi entropy (RCMI/RTE). Using numerically generated data with a defined causal structure and examples of real data from the climate system, it is demonstrated that RCMI/RTE is able to identify the cause variable responsible for the occurrence of extreme values in an effect variable. In the presented example, the Siberian High was identified as the cause responsible for the increased probability of cold extremes in the winter and spring surface air temperature in Europe, while the North Atlantic Oscillation and blocking events can induce shifts of the whole temperature probability distribution. INTRODUCTION Early in January 2021, very mild winter temperatures occurred in Central Europe, while Spain experienced extraordinary cold weather.For instance, on 5 January 2021, the minimum temperature in Potsdam, Germany, was 0.4°C, and in Frankfurt am Main, Germany, it was 1.7°C, while the average January temperatures there are −0.13° and 1.38°C, respectively.The average January temperatures in the Spanish cities of Madrid and Zaragoza are more than 6°C, while on 5 January 2021, the temperatures there dropped to −1.7° and −0.6°C, respectively.Three months later, many areas of France experienced devastating April night frosts.During the weeks preceding this extreme phenomenon, warm spring weather encouraged vegetation to bloom early and bud break occurred in vineyards.Then, from 5 April 2021, the temperature fell under −4°C in the early morning hours, threatening these new buds in all major winegrowing regions, e.g., Bordeaux, Champagne, or Burgundy.In Dijon, the largest city in Burgundy, the minimum daily temperatures during 7 April and 8 April 2021 were −2.8° and −4.9°C, respectively.The average April daily mean temperature there is 3°C. Such extraordinary digressions from long-term means ("normals") are called extreme values, or extremes, and their occurrences are called extreme events.Besides cold extremes, discussed in this study, warm extremes, heatwaves, and other meteorological and climate extreme events, such as floods, droughts, or hurricanes, have recently attracted considerable attention (1).Extreme events occur in diverse natural and social systems and usually have tremendous impact on human lives.Therefore, during the last decades, remarkable research effort has been devoted to understanding, modeling, and predictions of extreme events (2)(3)(4). Researchers in any scientific discipline strive to uncover causes of observed phenomena.If studied phenomena or processes evolve in time and can be characterized by measurable quantities, registered in consecutive instants of time and stored in datasets called time series, then scientists can apply computational methods for detecting causal relations between processes represented by different datasets.Granger causality (GC thereafter) provides an approach to describe causality in quantitative, mathematically expressible terms.It was inspired by the 1950s work of the father of cybernetics, N. Wiener (5), and formalized by C. W. J. 
Granger, the 2003 Nobel Prize winner in economics (6).According to the GC principle, variable C is causal to variable E if the knowledge of the present (time t) state C(t) of the variable C improves the prediction of E(t + τ), i.e., of the variable E in a future time t + τ.Granger (7) introduced a mathematical framework for inference of causality based on linear autoregressive (AR) processes.Many approaches for causality in nonlinear processes have been proposed, based, e.g., on theory of dynamical systems (8)(9)(10), nonlinear prediction (11), machine learning (12), or data compression efficiency (13).One of the successful nonlinear generalizations of the GC approach is based on information theory (14,15).Mutual information I(C; E) measures the amount of common information contained in the variables C and E. It is computed from the probability distribution functions (PDFs) of the considered variables and can be expressed using their Shannon entropies.Possible causal influence of C on E can be evaluated using the conditional mutual information (16,17) (CMI thereafter), mathematically expressed as I[C(t); E(t + τ) | E(t)].(For cases of multidimensional variables, see the Supplementary Materials.)CMI or its mathematically equivalent (15,17) definition known as transfer entropy (18) (TE thereafter) measures the amount of information about E(t + τ) contained in C(t).The conditioning on E(t) removes the possible "present-time" common information in C(t) and E(t), to obtain the "net" information about the future of E contained in the presence of C. Studying the cause-effect relationships, various forms of CMI/TE have been successfully applied in diverse scientific disciplines (19), including the Earth sciences (20)(21)(22)(23) where the approaches generalizing the GC principle are becoming increasingly popular (24)(25)(26)(27)(28)(29)(30) and successfully complementing causal counterfactual theory and data assimilation methods for the detection and attribution of weather and climate-related events (31,32). Despite very intensive research activity in the areas of extreme events and causality, there are surprisingly few studies connecting the two topics.Zanin (33) proposed a metric based on conditional probability to decide whether extremes in one dataset cause extremes in another dataset.Gnecco et al. (34) connect the fields of causal inference and extreme value theory and define the causal tail coefficient (CTC) that captures asymmetries in the extremal dependence of two random variables.Dependence structures, including graphical models and directed (causal) graphs in multivariate data with extremes, were also studied (35)(36)(37).CMI/TE is a tool from information theory (14), traditionally based on the Shannon entropy (14,15), which can be computed using PDF p of considered random variables.Rényi (38) proposed a more general definition of entropy that includes the term p α , i.e., the PDF taken to the power of α.The Rényi entropy is a parametric quantity in the sense that its value is influenced by the Rényi parameter α. Jizba et al. 
(39) proposed to study causal information transfer between financial time series using the TE redefined in the Rényi entropy framework.They expected that certain values of the parameter α would selectively emphasize only certain sectors of the underlying PDF while strongly suppressing others.Because extreme values are typically located in so-called tails of PDF, in this study we ask whether the redefinition of CMI using the Rényi entropy concept (RCMI thereafter) could help to identify causes of extreme events.More specifically, the basic research question of this work is as follows: Considering two or more (potential) cause variables, influencing an effect variable, can the RCMI help to distinguish which of the cause variables is causing the extremes in the effect variable? We will demonstrate that the CMI/TE redefined using the Rényi entropy framework opens the possibility to infer specifically the causes of extreme events, first using simulated data with clear causeeffect relations given by their construction.Then, we will introduce a societally relevant example of real-world data from the Earth climate system.We will consider long-term records of near-surface air temperature (SAT) from Europe as the effect variable and three potential cause variables. The North Atlantic Oscillation (NAO) is a dominant pattern of atmospheric circulation variability in the extratropical Northern Hemisphere, and it is a major factor influencing air temperature and other meteorological variables in the Atlantic sector and surrounding continents (40). Atmospheric blocking is a mid-latitude weather pattern manifesting as a quasi-stationary, long-lasting, high-pressure system that blocks or diverts the prevailing westerly large-scale atmospheric flow.Blocking events can have major impacts on the mid-latitude weather, sometimes leading to extreme events as cold spells in winter or heat waves in summer (41,42). The Siberian High (SH) is a dominant circulation system over the Eurasian continent created by a massive collection of cold dry air that accumulates in the northeastern part of Eurasia from September until April.The SH influence, characterized by excessively low surface temperatures, affects regions extending well beyond its source area (43)(44)(45).The indices characterizing NAO, blocking events, and SH are defined in the "Climate data" section. Simulated data To test the ability of RCMI to uncover the known causal relations, we have numerically generated time series of three variables.There are two independent cause variables, C(t) and X(t); C(t) is a realization of an autoregressive process of order one (AR1 thereafter), and X(t) is a realization of Gaussian white noise.The effect variable E(t) is also of the AR1 type with E(t) given as a linear combination of E(t − 1) and C(t − 1).In addition, X(t) is causing extreme values in E(t) using a simple rule: If |X(t) | > 3, then E(t + 1) = 1.8X(t)/|X(t)|.For details, see methods and "Simulated data" section below. The conditional mutual information based on the Rényi entropy (RCMI), mathematically expressed as quantifying the causal influence of the cause variable C on the effect variable E (in the following, the notation C → E will be used), is presented as a function of the Rényi parameter α in Fig. 
1A (blue curve).The gray curve and whiskers illustrate the RCMI mean ±2 standard deviation (SD) for the surrogate data representing the null hypothesis of no causality.The difference of the RCMI of the tested data from the surrogate mean, in the number of surrogate SDs, is presented by the z-score in Fig. 1C using the blue curve.In the full analogy, the results for the causal relation X → E are presented in Fig. 1 (B and D), using the purple curves.For the noncausal directions, only the zscores are presented (E → C in Fig. 1C, turquoise curve; and E → X in Fig. 1D, orange curve) which are confined under the red line of 2 SD, confirming no significant causality in these directions.The RCMI and the related z-scores in the causal directions (C → E-the blue curves in Fig. 1, A and C, respectively; and X → E-the purple curves in Fig. 1, B and D, respectively) indicate the existence of causality with a high statistical significance for large ranges of α.The zscores in the causal directions in Fig. 1 (C and D) are presented with scale breaks on the ordinate to see their maxima as well as their behavior around the significance threshold line of 2 SD.On the other hand, the z-scores in the opposite causal directions (E → C-the turquoise curve in Fig. 1C; and E → X-the orange curve in Fig. 1D) are confined between −2 and +2 SD, confirming the unidirectionality of the detected causal relations.Now, let us concentrate on differences between the RCMI I α characterizing the causal influence of the two cause variables C and X on the effect variable E. The RCMI for C → E is statistically significant for all but the smallest values of α (Fig. 1, A and C), while the RCMI for X → E is not statistically significant for large values of α ≥ 2, but is significant for all smaller values α < 2 (Fig. 1, B and D).Because the small values of α in the expression p α emphasize the tails of probability distributions, the statistically significant RCMI for small α is the property that we expected for the cause variables influencing the values of the effect variable on the tails of its PDF, i.e., for the cause variables causing the extreme events. Let us illustrate the effects of the cause variables on the effect variable using the conditional probability distributions (CD thereafter), estimated as the conditional histograms.The CD of the variable E for the condition C < −σ C ( σ 2 C is the variance of C) is plotted in Fig. 1E (blue curve).The CD for surrogate data, representing the null hypothesis of no influence of C on E, is illustrated as mean ±2 SD in gray, and the related z-scores are illustrated in blue in Fig. 1G.We can see that the condition C < −σ C shifts the whole histogram to the left and this shift is statistically significant for all values but the values in the tails.For the variable X, we use the condition X < −1, and the results are presented in Fig. 1 (F and H).The only statistically significant effect of the variable X can be seen in the edge tail histogram bins; i.e., X influences the probability of extreme events in E, while C shifts most E values, but does not significantly affect the occurrence of extreme values. 
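To make the construction of the simulated example and the surrogate-based significance test more tangible, here is a minimal sketch in Python. The AR(1) coefficients, noise amplitudes, series length, and the use of random-permutation surrogates are assumptions of this illustration (the paper specifies only the AR(1) structure of C and E, the Gaussian white noise X, the driving of E by C(t − 1), and the extreme-value rule |X(t)| > 3 ⇒ E(t + 1) = 1.8 X(t)/|X(t)|), and a simple residual-correlation statistic stands in for the actual CMI/RCMI estimator used by the authors.

import numpy as np

rng = np.random.default_rng(0)
N = 20000

# --- generate the toy causal system described in the text ---
# C: AR(1); X: Gaussian white noise; E: AR(1) driven by C(t-1),
# with extremes injected by X via |X(t)| > 3 -> E(t+1) = 1.8*sign(X(t)).
a_C, a_E, b_CE = 0.7, 0.6, 0.4          # assumed coefficients
C = np.zeros(N); E = np.zeros(N)
X = rng.standard_normal(N)
eps_C = rng.standard_normal(N)
eps_E = 0.3 * rng.standard_normal(N)    # assumed intrinsic noise in E
for t in range(1, N):
    C[t] = a_C * C[t - 1] + eps_C[t]
    E[t] = a_E * E[t - 1] + b_CE * C[t - 1] + eps_E[t]
    if abs(X[t - 1]) > 3:
        E[t] = 1.8 * np.sign(X[t - 1])

# --- crude causality statistic: lagged dependence of E(t+1) on cause(t),
# with E(t) partialled out by a linear fit (a stand-in for conditioning) ---
def lagged_stat(cause, effect, lag=1):
    c, e_now, e_next = cause[:-lag], effect[:-lag], effect[lag:]
    resid = e_next - np.polyval(np.polyfit(e_now, e_next, 1), e_now)
    return abs(np.corrcoef(c, resid)[0, 1])

def z_score(cause, effect, n_surr=30):
    val = lagged_stat(cause, effect)
    surr = [lagged_stat(rng.permutation(cause), effect) for _ in range(n_surr)]
    return (val - np.mean(surr)) / np.std(surr)

print("z(C -> E) =", round(z_score(C, E), 1))   # expected: well above 2 SD
print("z(X -> E) =", round(z_score(X, E), 1))   # weaker, driven by the extremes only
print("z(E -> C) =", round(z_score(E, C), 1))   # expected: consistent with no causality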
The above numerical example uses Gaussian processes with artificially added extreme values.To demonstrate the performance of the introduced RCMI approach applied on data that inherently contain extreme values, we substitute the Gaussian noises by random numbers drawn from Lévy alpha-stable distributions (46).In the first scenario, only the variable X is a Lévy process, while Gaussian noise is included in the definitions of variables C and E. In a more challenging scenario, the intrinsic noise in E is also of the Lévy type.We demonstrate that also in these cases, the RCMI approach correctly identifies the variable X as the cause of extreme values in the variable E. Details about this extended numerical study are included in the Supplementary Materials. Climate data Using simulated data from simple causality models, we have demonstrated that, indeed, using the RCMI, we can infer which cause variable is responsible for the occurrence of extreme values in the effect variable.Can we observe such a distinction in real experimental data?We will study causal relations between indices of atmospheric circulation variability modes and near-SAT T in Europe. First, let us study the influence of NAO on the winter temperature (T) in Frankfurt, Germany.RCMI analysis in Fig. 2 demonstrates unidirectional causality NAO → T (Fig. 2, A and C; no significance in the direction T → NAO, Fig. 2, B and D) and RCMI is significant for NAO → T for α ≥ 1, while its significance disappears shortly after α decreases under 1.This observation reminds the result for the cause variable C in the simulation experiment above, and indeed the conditional distribution (CD) of T given the normalized NAO < −1 is shifted to lower values.This shift is statistically significant for the whole range of T values, but not for the values on the tails of the distribution (Fig. 2, E and G).Plotting the CD for the NAO index values for the condition that the normalized winter temperature anomalies in Frankfurt are smaller than −1, we can see the same effect of the distribution shift (Fig. 2, F and H).CD sees the relation of NAO and T as symmetrical, because CD is not a causality measure; it is just a measure of dependence.The true causality measures such as RCMI (or CMI) help us to infer that the large-scale circulation variability mode influences regional air temperature, but a single regional temperature has no measurable causal effect on the large-scale circulation variability mode, here the NAO. Let us proceed to the influence of other circulation variability indices on the Frankfurt winter temperature.The atmospheric blocking, characterized by the blocking index (BI), has a similar effect on the winter temperature as the NAO: In Fig. 3A the RCMI z-score for BI → T is plotted using the black curve, and for comparison, the dashed blue curve is used for NAO → T. The same color codes are used for CD in Fig. 3 (B and D), where we investigate either the BI > 0 or the NAO < −1 condition.Both the conditions induce a similar leftward shift in the temperature histogram.For completeness, in Fig. 3E, we present CD for either the BI = 0 or the NAO > 1 condition.We can see a rightward shift, smaller than the leftward shift under the previous conditions; however, the shift is still statistically significant (Fig. 3G). There is no measurable effect of the SH itself on the Frankfurt winter temperature (SH → T, the dashed olive curve in Fig. 
3C); however, if we consider only the days during negative NAO (NAO < 0), the RCMI for SH → T (the purple curve in Fig. 3C) is statistically significant in a certain region under and around α = 1 (Fig. 3C). Using CD, the most apparent effect of SH is the increase of the probability of cold extremes in the range from −7° to −15°C in winter temperature anomalies (Fig. 3, F and H). In the winter temperature record from Frankfurt, as a representative of Central European stations, NAO and BI cause the shift of the whole winter temperature value distribution, while SH, under the condition of negative NAO, specifically increases the probability of the occurrence of cold extremes.

Fig. 2. (C) z-scores for the RCMI for the causality NAO → T (blue), and (D) z-scores for the RCMI for the causality T → NAO (turquoise). (E) The conditional histogram of the winter air temperature anomaly for the normalized NAO index < −1 (blue), conditional histograms (mean ± 2 SD) for 30 realizations of the surrogate data (gray), and (G) the related z-score (blue). (F) The conditional histogram of the winter NAO index for the normalized winter Frankfurt temperature anomaly < −1 (turquoise), conditional histograms (mean ± 2 SD) for 30 realizations of the surrogate data (gray), and (H) the related z-score (turquoise). The red lines mark the significance levels of ±2 SD.

Fig. 3. Causes of the winter air temperature in Frankfurt. (A) The z-scores for the RCMI measuring the causal effect of the blocking index on the winter air temperature (BI → T, black), and, for comparison, the z-scores for the RCMI for the causality NAO → T (dashed blue curve). (B) The conditional histogram of the winter air temperature anomaly for BI > 0 (black), and, for comparison, the conditional histogram for the normalized NAO index < −1 (dashed blue curve); and (D) the related z-scores. (C) The z-scores for the RCMI measuring the causal effect of the Siberian High on the winter air temperature (SH → T, dashed olive curve), and the z-scores for SH → T conditioned on the negative NAO index (SH → T | NAO < 0, purple curve). (E) The conditional histogram of the winter air temperature anomaly for the normalized NAO index > 1 (dashed blue curve) and for BI = 0 (black curve), and (G) the related z-scores. (F) The conditional histogram of the winter air temperature anomaly for the normalized winter SH index > 1 (dashed olive curve) and for the combined condition SH index > 1 and NAO < 0 (purple curve), and (H) the related z-scores. In the z-score graphs, the red lines mark the significance levels of ±2 SD. In the histogram graphs, the gray curves and whiskers illustrate the mean and the mean ± 2 SD for 30 realizations of the surrogate data; in (E), the thin and thick whiskers are used for the NAO and BI conditions, respectively.

Let us move westward and analyze the same causes for the winter temperature in Madrid, Spain. RCMI for the causality NAO → T and BI → T (Fig. 4A) is statistically significant in similar α ranges and, although the larger significance values are not necessarily equivalent to a stronger causal effect, for the Madrid winter temperature, the main causal drive is the BI. The condition BI > 0 shifts the whole T probability distribution to the left; i.e., it increases the probability of cold temperature anomalies and simultaneously decreases the probability of warm anomalies (Fig. 4, E and G). On the other hand, the condition NAO < −1 increases the probability of cold anomalies but does not change the probability of warm anomalies (Fig.
4, E and G), and the positive NAO has no effect on the winter temperature in Madrid (Fig. 4, F and H).There is no causal effect of SH observed, irrespectively of the NAO condition (Fig. 4C). We also present the relations between the studied cause variables.We can observe bidirectional causal relation between BI and NAO (Fig. 4B) and between SH and NAO (Fig. 4D).The different shapes of dependence on the parameter α stem probably from different probability distributions of the variables NAO, BI, and SH.The inference whether the observed causalities are direct or induced by some common cause is beyond the scope of this study.We just want to note that the used cause variables are not independent; however, their effects on European temperatures are different and dependent on space and time; i.e., their effects are geographically and seasonally specific.To bring more support for this statement, let us analyze causal effects in the spring temperature in Dijon, France. For the Dijon spring temperature, we can see significant causality BI → T; however, there was no significance for the influence NAO → T (Fig. 5A).The influence of SH has similar behavior as in the case of the Frankfurt winter temperature-no significant causality can be detected when analyzing SH alone; however, the causality SH → T becomes significant for the condition of negative NAO (Fig. 5B).Note that the causality BI → T is significant for a narrow interval around α = 1 (Fig. 5A), while the causality SH → T (given NAO <0) is significant for a larger range of small α < 1 (Fig. 5B).For comparison of the ranges of small α values, for which the observed causalities are significant, we mark the left intersections of the z-score curves with the 2 SD significance line by vertical dashed lines in Fig. 5 (A and B).This difference of the α ranges of significant RCMI predicts the different causal effects of the two causes in the Dijon spring temperature: While BI >0 shifts leftward the center of the temperature probability distribution, i.e., the probability of small positive anomalies decreases and the probability of small negative anomalies increases; the SH > 1 condition increases the left tail of the histogram including the smallest anomalies (Fig. 5, C and D).To see this effect in usual temperature units, in Fig. 5E, we present the left tail of the same CD as in Fig. 5C, but for the spring daily mean temperatures, demonstrating that the high SH and negative NAO conditions significantly increase the probability of extreme cold daily mean temperatures under −5°C.Last, the left tail for the CD of the spring daily minimum temperature for the same condition is presented in Fig. 5F, showing the significantly increased probability of extreme spring frost around and under −10°C. The presented results demonstrate that, indeed, the RCMI can indicate which cause variable is specifically responsible for the occurrence of extreme values.On the other hand, can we infer, just using the RCMI analysis, that some variables are not causing extremes?For the three above cause variables, NAO, BI, and SH, we will perform a simple analysis to answer the question what portion of extremes is "caused" by a particular variable.The word "caused" is given in the quotation marks, because the following computations are just a coincidence analysis, not a causality analysis. 
First, we will consider cold extremes during the winter season.For an operational definition of cold extremes, we will take the first percentile of the winter air temperature distribution, i.e., 1% of the coldest winter temperature values.For instance, in the Frankfurt station air temperature data from 1950 to 2019, we have 6317 winter days; thus, we can select 63 coldest days.We ask, how many of these coldest days occurred during the NAO condition given by the normalized NAO index < −1?We find 28 days from the first percentile of the winter air temperature anomaly distribution coinciding with the NAO condition; i.e., the NAO condition "explains" or rather coincides with 44% of the cold extremes.Can such coincidence occur by chance?To find an answer, we could use random draws from the winter temperatures constrained by the number of days satisfying the NAO condition.The latter, however, occurs in clusters; therefore, we use the real NAO condition days, but we take the values from the surrogate temperature data obtained in the same way as for testing the significance in the causality analysis.We find, in 1000 surrogate data realizations, the mean 8.43 coinciding days, with the SD 4.45.The resulting z-score is 4.4; i.e., the original data value exceeds the surrogate mean by 4.4 SDs, and the null hypothesis of a random occurrence of the observed coincidence can be rejected. If we use the raw air temperature data instead of the air temperature anomalies, we find 29 days from the first percentile of the raw winter air temperature distribution coinciding with the NAO condition.The related z-score is 4.3, rejecting the null hypothesis of a random occurrence of this coincidence.We can see that for the winter season, the results for the raw temperature and the temperature anomalies are very similar. Continuing with the Frankfurt station winter temperature, the BI condition (BI > 0) coincides with 36 extreme cold days (z-score is 5.1) and the SH condition (normalized winter SH > 1) coincides with 24 extreme cold days (z-score is 3.4); i.e., all three studied variables "cause" (in the sense of the nonrandom coincidence) statistically significant portions of the cold winter temperature extremes. To extend this extreme coincidence analysis, we repeat the same computations using the gridded reanalysis winter temperature data and map the results in Fig. 6. We can see that in a large part of Europe, the blocking events play a primary role in the occurrence of the cold extremes in winter temperature (Fig. 6B), while NAO is more important for the British islands and a part of Scandinavia (Fig. 6A).The SH takes its part in a smaller region of Europe, namely, in eastern France, southwestern Germany, Switzerland, Austria, Slovenia, and northern Italy (Fig. 6C).In the above analysis, we have found that the three cause variables are not independent; in particular, we have inferred a bidirectional causality between NAO and BI (Fig. 4B).On the other hand, the days of NAO and BI conditions only partially overlap.The portion of cold extremes simultaneously coinciding with both the conditions (Fig. 6D) is smaller than those coinciding with either the NAO (Fig. 6A) or the BI condition (Fig. 6B). It should be noted that coloring of a particular grid-point in the coincidence maps was decided by the result of an individual statistical test.Because the whole maps were not corrected for multiple testing, the maps can contain also some false-positive results. 
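The coincidence test just described can be written in a few lines. The sketch below is an illustrative reimplementation, not the authors' code; `temp` is assumed to be the daily winter temperature (anomaly) series, `cond` a boolean mask marking days satisfying the circulation condition (e.g., normalized NAO index < −1), and `surrogates` a list of surrogate temperature series constructed as in the causality tests.

```python
import numpy as np

def coincidence_zscore(temp, cond, surrogates, pct=1.0):
    """Count cold extremes (lowest pct percent of temp) coinciding with cond,
    and compare with the same count computed on surrogate temperature series."""
    thr = np.percentile(temp, pct)                 # e.g., the first percentile
    n_obs = np.sum((temp <= thr) & cond)           # observed coincidences
    n_surr = []
    for s in surrogates:                           # condition days kept fixed,
        thr_s = np.percentile(s, pct)              # temperatures taken from surrogates
        n_surr.append(np.sum((s <= thr_s) & cond))
    n_surr = np.asarray(n_surr, dtype=float)
    z = (n_obs - n_surr.mean()) / n_surr.std(ddof=1)
    return n_obs, n_obs / np.sum(temp <= thr), z   # count, explained fraction, z-score
```

With the Frankfurt numbers quoted above (28 of 63 coldest days coinciding with the NAO condition, surrogate mean 8.43 and SD 4.45), the same formula gives (28 − 8.43)/4.45 ≈ 4.4.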
Let us move to the cold extremes in the spring season. We have observed nonnegligible differences between the results obtained using either raw or anomaly data. The reason is probably the steep increase of temperature from March to May. Thus, negative anomalies in May may still represent temperatures well above the frost point in the raw data. In the search for the causes of dangerous frosts at the beginning of the growing season, we present here the results for the raw spring temperature data.

As the introductory example, consider the Dijon station data (1950–2019), in which we have 6440 spring days. Using again the first-percentile definition of the cold extremes, we identify 64 extreme cold spring days. For the NAO condition, we find 25 coinciding values (38%, z-score 2.7); for the BI condition, 26 coinciding values were found (40%, z-score 2.5); last, the SH condition coincides with 33 cold extreme values (51%, z-score 2.5). In all three cases, the coincidence values are significantly higher than a random coincidence. The SH is the most important factor for the cold spring extremes in southeastern France, which can also be confirmed using the gridded reanalysis data presented in Fig. 7C. For the majority of Europe, however, the most important role is played by the blocking events (Fig. 7B), while the British Isles and a part of Scandinavia are mostly influenced by the NAO. In the above causality analysis, we have found that the causal effect of the SH is detectable conditionally for nonpositive NAO (Fig. 5B). Thus, we can ask what is the "pure" effect of NAO? That is, how many cold extremes coincide with the NAO condition if the SH condition is not fulfilled, i.e., for the normalized NAO index < −1 and the normalized spring SH index < 1? In the Dijon station data, this condition coincides with 13 extreme cold values, which means 20%, and the z-score is 0.76. Thus, the 13 coinciding values can probably occur by chance. Applying the latter condition to the whole-Europe gridded data, we observed significant coincidence values for the pure NAO effect only in the British Isles and a part of Scandinavia (Fig. 7D).

Fig. 5. Causality in Dijon spring air temperature. The z-scores for the RCMI measuring the causal effect of (A) BI and NAO (BI → T, black curve; NAO → T, dashed blue curve) and (B) SH and SH during negative NAO on the spring air temperature in Dijon (SH → T, dashed olive curve; SH → T | NAO < 0, purple curve). (C) The conditional histogram of the Dijon spring air temperature anomaly for BI > 0 (dashed black curve) and for normalized SH > 1 during negative NAO (purple curve) and (D) the related z-scores. (E) The left tail of the conditional histogram of the Dijon spring daily mean air temperature, and (F) the left tail of the conditional histogram of the Dijon spring daily minimum air temperature for BI > 0 (dashed black curve) and for normalized SH > 1 during negative NAO (purple curve). The red lines mark the significance levels of ±2 SD in the z-score graphs. The gray curves and whiskers represent the surrogate mean and mean ± 2 SD range. In the combined BI/SH-conditioned histograms, the thick and thin whiskers are respectively related to the BI and SH conditions.
Comparison with methods from literature To have a comparison of the introduced RCMI approach with relevant methods from the literature, we have applied the well-known GC (7, 47) approach, as well as two causal discovery methods proposed for data with extreme values, CTC (34) and Zanin's causality of extreme events (33) (ZC), to the simulated and experimental climate data analyzed in this study. Considering the linear AR processes C and E and Gaussian noise X, the GC performs well and correctly uncovers the causal relations C → E and X → E. There is even a possibility of identifying the variable causing the extreme values-after a gradual removal of extreme values from the data, the causality C → E persists, while the extremecausing relation X → E disappears.Unfortunately, the applicability of GC is restricted to linear systems.One would expect that the linear GC would not "see" causality in nonlinear systems, i.e., would suffer by high false-negative rates; however, false-positive rates were observed.In the unidirectionally coupled Rössler systems (see the Supplementary Materials), the GC detects causality in both directions.False-positive results of this type have also been observed for other nonlinear dynamical systems by Krakovská et al. (10) The CTC also works well for the linear AR variables C and E and Gaussian noise X.In the case of the nonlinear Rössler systems, CTC correctly identifies the extreme-causing relation; however, it fails to infer the correct causality between the two Rössler systems.Thus, nonlinearity in data can also be a problem for the CTC approach. The ZC is not restricted by a linear model; however, it suffers from false-positive detections already in the case of linear AR variables C and E and Gaussian noise X.Although it correctly identifies the relations C → E and X → E, it also detects false causality C → X, and for some lags, also a false causality E → X.In the case of the unidirectionally coupled Rössler systems (see the Supplementary Materials), ZC indicates bidirectional causality between the two systems and an incorrect causality direction between the extreme-causing Gaussian variable and the driven Rössler system.Next, we have applied the GC, CTC, and ZC methods to the climate data.After we have reported the results from the simulated data, the inconsistency of results from the real data is not surprising.The ZC detects bidirectional links almost everywhere, suffering apparently from high false-positive rates.The only consistent causality detection by GC and CTC is in the case of NAO influence on the winter temperature in Frankfurt.Here, GC and CTC confirm the RCMI results.In the case of NAO and Dijon spring temperature, when RCMI did not detect any causal relation, GC and CTC contradict each other.Inconsistency in detecting influence of BI and SH is probably due to nonlinearity and non-Gaussianity of these data.It should be noted, however, that CTC and ZC investigate the question "do extremes in one variable cause extremes in another variable?"That is, they require similar heavy-tailed PDFs in all studied variables, which is not the case of the studied climate data.RCMI, designed to answer the more general question "which variable is the likely cause of extremes in the affected variable?, " is not restricted by this requirement and is better suitable for real data such as those climate records studied here.Detailed presentation of these results can be found in the Supplementary Materials. 
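For reference, the linear GC benchmark used in this comparison can be reproduced with standard tools. The following is a minimal sketch using the statsmodels Granger test on the simulated series C and E (variable names follow the toy model above; the model order of 1 matches its construction); the P value extracted under "ssr_ftest" is the F-test P value described in Methods.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# C and E generated as in the simulation sketch above (illustrative names);
# grangercausalitytests checks whether the SECOND column Granger-causes the FIRST.
res_CE = grangercausalitytests(np.column_stack([E, C]), maxlag=1, verbose=False)
res_EC = grangercausalitytests(np.column_stack([C, E]), maxlag=1, verbose=False)

p_CE = res_CE[1][0]["ssr_ftest"][1]   # P value of the F test at lag 1, C -> E
p_EC = res_EC[1][0]["ssr_ftest"][1]   # P value of the F test at lag 1, E -> C
print(f"C -> E: p = {p_CE:.3g}   E -> C: p = {p_EC:.3g}")
```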
DISCUSSION Here, we investigated whether the RCMI is able to identify a cause variable responsible for the occurrence of extreme values in an effect variable.A simple numerical model suggested the answer "yes, " utilizing a simple causal structure of two cause variables and one effect variable, in which the effect variable E was linearly dependent on the cause variable C, and the other cause variable X mechanistically caused extreme values in E.Then, we applied the RCMI analysis on real data from the climate system: The near-SAT from European locations was considered as the effect variable, and indices of the NAO, blocking events (BI), and SH were tested as cause variables, because influences of these circulation phenomena on European air temperature have been observed (40)(41)(42)(43)(44)(45).Previous time series studies, however, mostly considered the NAO and, typically, Pearson's correlations have been computed between (mostly winter) air temperature records and NAO indices [see, e.g., (48)], while only a few causality analyses have been published (13,49).Causal effects of SH have been evaluated for winter SAT over northeast Asia (50).The analysis presented in this study has confirmed the causal influence of all three circulation phenomena on the winter and spring SAT in Europe.Moreover, we have demonstrated that RCMI can infer which cause variable is responsible for the occurrence of extreme values in the effect variable, here the SAT.The effects of NAO and BI are manifested as a shift of the whole air temperature distribution, while SH causes the increase of the probability distribution left tail, i.e., the increase of the probability of extreme cold temperatures.This conclusion, however, requires two remarks.First, the influence of SH is only observable when the NAO is not in its positive phase.Hurrell and Dickson (51) help us to understand this finding: "In the so-called positive phase, higher than normal surface pressures south of 55 ∘ N combine with a broad region of anomalously low pressure throughout the Arctic and subarctic.Consequently, this phase of the oscillation (NAO+) is associated with stronger-than-average westerly winds across the middle latitudes of the Atlantic onto Europe…." In other words, the positive NAO brings warm Atlantic air to Europe, thus preventing the occurrence of cold extremes or even causing warm wet winter weather.The occurrence of cold extremes, especially in spring, requires the cooperation of NAO and SH, and isolated bivariate analyses of temperature T with NAO or T with SH are not sufficient.This is an example of higherorder interactions that have recently attracted attention in studies of complex systems (52). The second remark stems from the above coincidence analysis.Both the NAO and BI conditions seem to explain a significant portion of cold extremes, and only in spring were we able to demonstrate that NAO has no effect without the elevated SH.It will require further research to decide whether such variables as NAO and BI are also causes of extreme events or only serve as facilitators for other causal variables.Thus, at this stage of the research, we could conclude that RCMI can provide evidence that a certain variable is the cause of extreme events, but cannot testify that other cause variables are not responsible for an increased probability of extreme event occurrence. 
Here, we focused on bivariate causality analysis that cannot distinguish whether the causal effect is direct or indirect. However, for practical applications in predictions or warning systems, bivariate causality detection is already useful. The distinction of direct causal links is necessary for understanding underlying physical mechanisms or for removing redundant variables from predictors if the direct cause is available. In any case, the method presented here can be extended to multivariate settings and/or to include multidimensional variables (see the Supplementary Materials). Therefore, the next step in this research is the implementation and testing of more effective estimation algorithms, such as those based on the k-nearest-neighbor search or kernel estimators, which have already been tested for use in the RCMI/RTE analyses (53, 54).

In the Supplementary Materials, we present detailed results of the standard GC approach, as well as two causal discovery methods proposed for data with extreme values (33, 34), applied to the simulated and experimental climate data analyzed in this study. We show the capabilities and shortcomings of the methods as well as different ways of asking the research question. The RCMI method presented here does not ask whether extremes in one variable cause extremes in another variable, but which variable is the likely cause of extremes in the affected variable, regardless of the occurrence of extremes in the cause itself. Therefore, we believe that further development and applications of the RCMI open new research avenues leading to a better understanding of the occurrence of extreme events.

Information-theoretic approach to causality

Consider a discrete random variable X with a set of values Ξ and a PDF p(x). [For simplicity, we use the notation p(x) instead of the more precise p_X(x).] The Shannon entropy H(X) of X is defined as

H(X) = −Σ_{x∈Ξ} p(x) log p(x).    (1)

Adding another random variable Y with the set of values Υ, PDF p(y), and the joint PDF p(x, y) of both variables X and Y, we can define the joint entropy H(X, Y) of X and Y as

H(X, Y) = −Σ_{x∈Ξ} Σ_{y∈Υ} p(x, y) log p(x, y).    (2)

By analogy, we can define the joint entropy for n variables. The conditional entropy H(Y|X) of Y given X is

H(Y|X) = −Σ_{x∈Ξ} Σ_{y∈Υ} p(x, y) log p(y|x),    (3)

where the conditional probability p(y|x) = p(x, y)/p(x), for p(x) ≠ 0. The average amount of common information contained in the variables X and Y is quantified by the mutual information I(X; Y) (14, 55), defined as

I(X; Y) = H(X) + H(Y) − H(X, Y).    (4)

The CMI I(X; Y|Z) of the variables X, Y given the variable Z is

I(X; Y|Z) = H(X|Z) + H(Y|Z) − H(X, Y|Z).    (5)

For Z independent of X and Y, we have

I(X; Y|Z) = I(X; Y).    (6)

By a simple manipulation, we obtain

I(X; Y|Z) = H(X, Z) + H(Y, Z) − H(X, Y, Z) − H(Z).    (7)

Equation 7 can be used to redefine the Shannonian CMI into the framework of the Rényi entropy, defined as

H_α(X) = (1/(1 − α)) log Σ_{x∈Ξ} p(x)^α,    (8)

where α > 0, α ≠ 1. As α → 1, H_α(X) converges to the Shannon entropy H(X).

We have defined CMI/RCMI for one-dimensional (scalar) variables, because this simple form has been found sufficient for uncovering unidirectional causal relations in the numerical example of variables C(t), E(t), and X(t), as well as in the experimental climate data used in this study. For the definition of higher-dimensional forms of CMI and the discussion of the need for their application to multidimensional time series as well as Takens reconstructions (56) of multidimensional trajectories of dynamical systems, see the Supplementary Materials and (17).

Simulated data

Our introductory example consists of three time series. The first one, representing the cause variable C(t) (t = 1, 2, … is a discrete time index), is a realization of a simple autoregressive process of order one (AR1 thereafter) (Fig.
8A, the blue curve). Its present state C(t) is given by a linear combination of its value C(t − 1) one time step back and a random number taken from a normal distribution with zero mean and unit variance:

C(t) = a_C C(t − 1) + σ_C ξ_C(t),

where a_C = 0.7 and σ²_C = 0.1. The second cause variable X(t) is a realization of white Gaussian noise with zero mean and unit variance, X(t) = ξ_X(t) (Fig. 8B, the purple curve). The effect variable E(t) is also of the AR1 type,

E(t) = a_E E(t − 1) + b_E C(t − 1) + σ_E ξ_E(t),

where a_E = 0.5, b_E = 0.2, and σ²_E = 0.1, and all noise terms ξ_C, ξ_X, and ξ_E are independent Gaussian random variables with zero mean and unit variance. Note that the present state E(t) is given by a linear combination of noise and previous values of both variables E and C, i.e., E(t − 1) and C(t − 1). This is a simple example of "C Granger causing E." However, the effect variable E is also influenced by the other cause variable, X, in the following way: each time X(t) is greater than 3, the value of E(t + 1) is set to 1.8. In full analogy, X(t) < −3 causes E(t + 1) = −1.8. The effect variable E(t) is illustrated in Fig. 8C by the black curve. The extreme values ±1.8 caused by X(t) are marked by the purple bullets.

The PDF p of the effect variable E, estimated as a histogram, is presented in Fig. 8E. It is in fact a normal distribution, except on its tails: in the bins containing the extreme values ±1.8, the value of p is much greater than it should be in a normal distribution; see the zoomed graph in Fig. 8F. Figure 8 (G and H) illustrates the effect of taking the αth power p^α of the PDF p: for α = 3 (Fig. 8G), the relative weight of the most probable values around the mean (which is equal to zero here) is amplified at the cost of the weight of the values further from the mean. The situation is quite the opposite for α = 0.3 (Fig. 8H): the weight of the extreme values on the tail of the PDF is relatively increased with respect to the weight of the mean. This effect is the inspiration for considering the RCMI as a measure able to distinguish the cause variable responsible for the occurrence of extreme events.

Estimation and statistical testing

Plugging Eq. 8 into Eq. 7, we estimate the RCMI using the simplest equidistant binning algorithm (15) with eight bins for each variable. Thus, the computation of RCMI (Eq. 5) requires the estimation of a 3-dimensional PDF discretized into 512 bins.

RCMI I_α[C(t); E(t + 1) | E(t)], quantifying the causal influence of the cause variable C on the effect variable E (in the used notation, C → E), is presented as a function of the Rényi parameter α in Fig. 8I (blue curve). The (nonexisting) causality in the opposite direction E → C, quantified by RCMI I_α[E(t); C(t + 1) | C(t)], is presented in Fig.
8K (turquoise curve). Although the values of RCMI for the causal direction C → E are greater than those for the direction E → C without any causal influence, in both cases the RCMI values are greater than zero and the shapes of the curves I_α as functions of α are similar: reading the curves for α decreasing from 3 to 0, RCMI is stable, then starts to slightly increase for α ≈ 1, while for α < 0.5, RCMI is characterized by a steep increase. Apparently, this is the behavior of the estimator of RCMI, which does not reflect any "physically increasing causality strength" for the decreasing parameter α. To detect really existing causality, we need to prove, with a statistical significance, that the RCMI estimate is indeed nonzero, i.e., greater than values given by possible bias and variance of the RCMI estimator. We apply an approach of computational statistics called the surrogate data test (57, 58). The surrogate data represent a null hypothesis of no causality and allow one to compute the range of RCMI values obtained from data in which no causal relation is present. To find a way to construct surrogate data, let us evaluate, for a chosen α = 0.9, the dependence of the RCMI on the lag τ (Fig. 8J). Here, we can see a distinctive difference, for small τ, between the RCMI in the causal direction C → E (blue curve) and in the noncausal direction E → C (turquoise curve). The causal effect C → E occurs with a lag of 1 sample; however, owing to a memory (autocorrelation) in the AR1 process, it is detectable for several values τ ≥ 1. However, for τ ≥ 10, both RCMI values become approximately the same; i.e., the causality is no longer detectable. Therefore, for the testing purposes, we can use so-called circularly shifted surrogate data: while one variable is taken with the samples 1, 2, …, N, where N is the number of samples, the other variable is taken circularly shifted, starting from sample k, where k is randomly selected from the interval (10, N − 10). See an example in Fig. 8D. The RCMI estimate, as a function of τ, for such a surrogate data realization is presented in Fig. 8L, where the blue and turquoise curves for RCMI in both directions coincide at a low (no-causality) level.

Let us return to the estimates of RCMI I_α plotted as a function of the parameter α. The results for the causal direction C → E are presented in Fig. 9A. The blue curve, used for the RCMI of the original C and E series, is, for a large range of α, distinctively greater than the RCMI for 30 realizations of the surrogate data, illustrated using the gray curves. On the other hand, the RCMI for the noncausal direction E → C (Fig. 9B, the turquoise curve) lies inside the bunch of the surrogate data RCMI gray curves representing the null hypothesis of no causality.

Reporting the results, we use a more practical illustration of the surrogate data range (e.g., Fig. 1A): we do not plot the curves for individual surrogate data realizations, but using the gray curve, we present the surrogate data mean values, and the gray whiskers present the range ±2 SD, where SD is the standard deviation in the 100 surrogate data realizations used in the tests here. Another useful presentation of the test results (e.g., Fig. 1C) uses the so-called z-score, which is defined as the difference between the value (RCMI in this case) obtained from the analyzed data and the surrogate mean, given in the number of surrogate SDs. Typically, for a z-score greater than 2 SD (red lines), the test result is considered statistically significant.
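The estimation and testing procedure described above can be summarized in a short sketch: Rényi entropies are estimated from equidistant histograms with eight bins per variable, combined into the RCMI according to Eq. 7, and circular-shift surrogates are obtained by rotating one series by a random offset from the interval (10, N − 10). This is an illustrative reimplementation under those stated choices, not the authors' code; function names are assumptions.

```python
import numpy as np

def renyi_entropy(data, alpha, bins=8):
    """Rényi entropy of a 1- to 3-dimensional sample, equidistant binning (Eq. 8)."""
    hist, _ = np.histogramdd(data, bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))            # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def rcmi(x, y, z, alpha, bins=8):
    """RCMI I_alpha(X; Y | Z) via the entropy combination of Eq. 7."""
    H = lambda *cols: renyi_entropy(np.column_stack(cols), alpha, bins)
    return H(x, z) + H(y, z) - H(x, y, z) - H(z)

def rcmi_cause(effect, cause, alpha, tau=1, bins=8):
    """I_alpha[cause(t); effect(t + tau) | effect(t)], i.e., cause -> effect."""
    return rcmi(cause[:-tau], effect[tau:], effect[:-tau], alpha, bins)

def surrogate_zscore(effect, cause, alpha, n_surr=100, tau=1, seed=0):
    """z-score of the RCMI against circularly shifted surrogates of the cause."""
    rng = np.random.default_rng(seed)
    obs = rcmi_cause(effect, cause, alpha, tau)
    n = len(cause)
    surr = []
    for _ in range(n_surr):
        k = rng.integers(10, n - 10)             # random shift outside the memory range
        surr.append(rcmi_cause(effect, np.roll(cause, k), alpha, tau))
    surr = np.asarray(surr)
    return (obs - surr.mean()) / surr.std(ddof=1)
```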
Let us study the behavior of the RCMI estimates from the surrogate data in detail.The increase of the RCMI estimates with the decreasing α irrespectively of the existence of causality is confirmed by the surrogate data, the means of which for all the tests are presented in Fig. 9 (C and D), focusing on small α by using the logarithmic scale on the abscissa.Note that the maxima in the four curves in Fig. 9 (C and D) differ, i.e., the positive bias in the RCMI estimates is different not only for different pairs of variables but also for different causality directions in the same pair of variables.Therefore, the good practice in causality testing, coined already by Paluš and Vejmelka (17), is testing each causality direction separately and avoiding the usage of differences of the RCMI (CMI and TE) estimates in the two directions as discriminating statistics.Next, Fig. 9 (E and F) presents the α dependence of the surrogate SDs.Because the surrogate SD is the denominator in the definition of the z-scores, the position of minima of SD explains the position of maxima of the z-scores in α close to 1 (e.g., Fig. 1, C and D).Thus, neither the maxima of the z-scores mean a "strongest causality, " but they mean that the causality tests are the most reliable for α ≈ 1. Let us illustrate the effects of the cause variables on the effect variable using the CDs, estimated as the conditional histograms given a value of the cause variable.To quantify the difference of the conditional histogram from the distribution of C without any condition, we again apply the surrogate data approach.We do not use the histogram of the full dataset, but we apply the same condition on circularly shifted surrogate data in which a possible influence of the cause variable was cancelled by the way of surrogate data construction.In Results, we always summarize 100 conditional histograms computed from 100 realizations of the surrogate data and present the results as mean±2 SD using the gray curves and whiskers, as well as by plotting the related z-scores.However, for illustration, we plot here CDs for 30 realizations of surrogate data using the gray curves. The conditional histogram of the variable E for the condition C < −σ C ( σ 2 C is the variance of C) is plotted in Fig. 9G (blue curve).We can see that the condition C < −σ C shifts the whole histogram of E to the left.In other words, keeping the values of C distinctly negative increases the probability of negative values of E and decreases the probability of positive values of E. Setting the condition C > σ C shifts the whole histogram of E to the right (Fig. 9H), i.e., toward the increased probability of positive values.This is the illustration of the causal effect of C on E. One can ask whether the estimation of CD cannot be used for inference of causality.Constructing the CD of C conditioned on values of E (Fig. 9, I and J) leads to the negative answer.According to the conditional histograms, the effects of C on E and of E on C are symmetrical, although there is no influence of E on C, as it is given by the data construction.Thus, the evaluation of CD can detect mutual dependence between variables, but not the direction of causal influence. 
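The conditional histograms and their per-bin z-scores can be computed in the same spirit. A minimal sketch, assuming the simulated series from above and the same circular-shift surrogates (function and variable names are illustrative):

```python
import numpy as np

def conditional_hist_zscores(effect, cause, cond, bins, n_surr=100, seed=0):
    """Histogram of effect values on days where cond(cause) holds, with per-bin
    z-scores against circularly shifted surrogates of the effect series."""
    rng = np.random.default_rng(seed)
    mask = cond(cause)
    obs, edges = np.histogram(effect[mask], bins=bins)
    surr = np.empty((n_surr, len(obs)))
    n = len(effect)
    for i in range(n_surr):
        shifted = np.roll(effect, rng.integers(10, n - 10))  # breaks the causal link
        surr[i], _ = np.histogram(shifted[mask], bins=edges)
    mu, sd = surr.mean(axis=0), surr.std(axis=0, ddof=1)
    z = (obs - mu) / np.where(sd > 0, sd, np.inf)            # empty bins get z ~ 0
    return obs, edges, z

# example: histogram of E conditioned on C < -std(C), as in Fig. 9G
obs, edges, z = conditional_hist_zscores(E, C, lambda c: c < -c.std(), bins=20)
```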
Granger causality

A variable X is said to Granger-cause a variable Y if the prediction error of Y from a linear vector autoregressive model (VAR), including the past values of Y and X as predictors, is smaller than the prediction error of Y from a linear autoregressive model including only the own past of Y (7). The number of included past values, the model order, is usually determined by the Schwarz-Bayesian information criterion. The statistical significance of the helpfulness of the variable X for predicting values of Y is established via an F test. Rejection of the null hypothesis of the F test at a significance level α (i.e., P value ≤ α) means that the coefficients corresponding to the past values of X are statistically significantly different from zero in the VAR, and it is concluded that X Granger-causes Y. The GC analysis can be performed using the MVGC Matlab toolbox (47).

Causal tail coefficient

This causal discovery method (34) is tailored for situations when the causal mechanisms manifest themselves in extremes, i.e., in the tails of PDFs. The CTC was defined to reveal the causal relationship between heavy-tailed random variables, say X and Y. The definition of CTC reflects the idea that an extremely large value of X should cause an extreme value of Y in the case of a monotonic causal relationship. The causal structure can be detected by CTC if the relation between variables follows a linear structural causal model (SCM), without a feedback mechanism (34). The method introduces, for heavy-tailed SCM, a CTC with positive values, denoted ϕ, and real-valued coefficients, denoted ψ. The knowledge of the CTC for X → Y and for the opposite direction allows us to distinguish four scenarios of causal configurations: 1) X causes Y; 2) Y causes X; 3) there is a common cause [if ψ_X→Y, ψ_Y→X ∈ (1/2, 1)]; and 4) there is no causal link (if ψ_X→Y = ψ_Y→X = 1/2). Similar results hold for ϕ. A value of CTC depends on the number of exceedances, denoted k_n, where n represents the number of observations. A confidence interval for CTC can be obtained by bootstrapping the original dataset. The calculation of both heavy-tail coefficients, ϕ and ψ, can be performed by the available R package at https://github.com/nicolagnecco/causalXtreme. Note that the algorithm was not developed for time series data, where the temporal order of cause and effect could help to estimate causal relationships among variables, as it happens in the GC. Therefore, we recovered the direction of time from the lagged data of variables X and Y in the linear SCM. Consequently, instead of a common cause in the case ψ_X→Y, ψ_Y→X ∈ (1/2, 1), a bidirectional connection is detected by CTC analysis for lagged data.

Zanin's causality of extreme events

The causal relationship between variables X and Y is detected by analyzing how extreme events in one element correspond to the appearance of extreme events in a second one (33). In contrast to the causal discovery in heavy-tailed models, the method is optimized for detecting relations in which extreme events in one variable are followed by extreme events in the other.
Fig. 1. Causality in the simulated data. (A) The Rényi conditional mutual information (RCMI) as a function of α measuring the causality C → E (blue) and the RCMI mean ± 2 SD for the surrogate data (gray). (B) RCMI for the causality of extreme events X → E (purple) and the RCMI mean ± 2 SD for the surrogate data (gray). (C) z-scores for the RCMI for the causality C → E (blue) and the opposite causality E → C (turquoise). (D) z-scores for the RCMI for the causality of extreme events X → E (purple) and the opposite causality E → X (orange). The red line marks the significance level of 2 SD. (E) The conditional histogram of E given the condition C < −σ_C (blue) and the range mean ± 2 SD of the surrogate histograms (gray). (F) The conditional histogram of E given the condition X < −1 (purple) and the range mean ± 2 SD of the surrogate histograms (gray). (G) The z-score for the significant differences of the conditional histogram of E given the condition C < −σ_C (blue). (H) The z-score for the significant differences of the conditional histogram of E given the condition X < −1 (purple). The red lines mark the significance levels of ±2 SD.

Fig. 4. Causality in Madrid winter air temperature and circulation modes. The z-scores for the RCMI measuring the causal effect of (A) BI and NAO on the winter air temperature in Madrid (BI → T, black curve; NAO → T, dashed blue curve); (B) BI on NAO (BI → NAO, black curve) and NAO on BI (NAO → BI, dashed blue curve) in winter; (C) SH and SH during negative NAO on the winter air temperature in Madrid (SH → T, dashed olive curve; SH → T | NAO < 0, purple curve); and (D) SH on NAO (SH → NAO, olive curve) and NAO on SH (NAO → SH, dashed blue curve) in winter. (E) The conditional histogram of the Madrid winter air temperature anomaly for the normalized NAO index < −1 (dashed blue curve) and BI > 0 (black curve) and (G) the related z-scores. (F) The conditional histogram of the Madrid winter air temperature anomaly for NAO index > 1 (dashed blue curve) and BI = 0 (black curve) and (H) the related z-scores. The red lines mark the significance levels of ±2 SD in the z-score graphs. The gray curves and whiskers represent the surrogate mean and mean ± 2 SD range. In the combined BI/SH-conditioned histograms, the thick and thin whiskers are respectively related to the BI and SH conditions.

Fig. 6. Coincidence analysis for the cold extremes in the winter air temperature in Europe. The portions of cold extremes (the first percentile of the winter air temperature distribution) coinciding with (A) the NAO condition (normalized NAO index < −1), (B) the BI condition (BI > 0), (C) the SH condition (normalized winter SH index > 1), and (D) simultaneous NAO and BI conditions. Only the statistically significant (z-score > 2) coincidence values are colored.

Fig. 7. Coincidence analysis for the cold extremes in the spring air temperature in Europe. The portions of cold extremes (the first percentile of the spring air temperature distribution) coinciding with (A) the NAO condition (normalized NAO index < −1), (B) the BI condition (BI > 0), (C) the SH condition (normalized spring SH index > 1), and (D) simultaneous NAO and non-SH conditions (normalized NAO index < −1 and normalized spring SH index < 1). Only the statistically significant (z-score > 2) coincidence values are colored.

Fig. 8.
Simulated data and their characterization. (A) A segment of the cause variable C(t) generated by an AR1 process. (B) A segment of the cause variable X(t) generated as Gaussian white noise. Gray horizontal lines indicate the values ±3. (C) A segment of the effect variable E(t). The purple bullets mark the extreme values caused by the variable X(t) crossing the values ±3. (D) A realization of the circularly shifted surrogate data for the variable E(t). (E) The histogram (PDF) p of the variable E. (F) A zoomed-in view of the histogram of the variable E. (G) The third power p^3 of the histogram of the variable E. (H) The 0.3th power p^0.3 of the histogram of the variable E. (I) The Rényi conditional mutual information (RCMI) measuring the causal influence of the variable C on the variable E (C → E), as a function of the Rényi parameter α (blue). (K) RCMI in the opposite direction E → C, as a function of α (turquoise). (J) RCMI for α = 0.9, as a function of time lag τ, measuring the causal influence C → E (blue) and RCMI in the opposite direction E → C (turquoise). (L) RCMI for α = 0.9, as a function of time lag τ, measuring the causal influence of the variable C on the circularly shifted surrogate realization of E (blue) and RCMI in the opposite direction (turquoise).

Fig. 9. Using surrogate data. (A) The Rényi conditional mutual information (RCMI) as a function of α measuring the causality C → E (blue) and RCMI for 30 realizations of the surrogate data (gray). (B) RCMI for the opposite causality E → C (turquoise) and the related surrogate data (gray). (C to F) Statistics for RCMI of the surrogate data as functions of α in the logarithmic scale: (C) mean for the relations C → E (blue) and E → C (turquoise); (D) mean for X → E (purple) and E → X (orange); (E) SD for C → E (blue) and E → C (turquoise); (F) SD for X → E (purple) and E → X (orange). (G to J) Conditional probability distributions (CD thereafter) estimated as histograms. CDs for 30 realizations of the surrogate data in gray. (G) CD of E given the condition C < −σ_C (blue). (H) CD of E given the condition C > σ_C (blue). (I) CD of C given the condition E < −σ_E (turquoise). (J) CD of C given the condition E > σ_E (turquoise).
Can Pilot Free Trade Zones Promote Sustainable Growth in Urban Innovation? : China’s pilot free trade zones play an important role in promoting deep-level reform and high-standard opening up. Based on the panel data of 284 prefecture-level cities in China from 2009 to 2021, the article explores the impact of pilot free trade zones on urban innovation using the multi-period difference-in-differences model, mediation effect model, and spatial difference-in-differ-ences model, treating the pilot free trade zone as a quasi-natural experiment. The study shows the following: the establishment of pilot free trade zones boosts sustained growth in urban innovation, and the results still hold after a series of robustness tests; the enabling effect of pilot free trade zones on urban innovation is most significant in eastern regions and large-scale cities; and the role of pilot free trade zones in promoting innovation varies by stage. The mediation impact study revealed that pilot free trade zones can influence urban innovation via talent concentration, foreign direct investment, market scale, and financial support. The pilot free trade zones enhance the innovation performance of its geographically adjacent cities with economic ties and the innovation level of the region. The analysis offers a policy basis for the sustainable growth of urban innovation. Introduction China's pilot free trade zones (FTZs) are a critical initiative in its efforts to expand its economy and foster high-quality development.The China (Shanghai) FTZ was formally launched in September 2013, and as of September 2023, a total of 21 FTZs have been developed, relying on 49 prefectures and cities, which have formed an opening pattern covering the east, west, south, north, southeast, interior, and coast.In 2022, the 21 FTZs covered less than four thousandths of the country's land area, achieving a total import and export volume of 7.5 trillion yuan, which accounted for 17.8% of the country's total.The growth rate was 6.8 percentage points higher than the national average level, and the actual foreign direct investment was 222.52 billion yuan, representing 18.1% of the country's total, according to the China Pilot Free Trade Zone Development Report 2023. Traditional development zones, primarily governed by local authorities, are susceptible to non-market factors, conventional inertia patterns, etc., which significantly diminish the efficacy of policy execution [1,2].Unlike traditional economic development zones, FTZs are an important initiative for China to implement a more proactive approach to opening up its economy in light of the changing dynamics of global trade.They aim to enhance China's level of openness through institutional innovation and the creation of replicable and widely adopted practices.The greatest incentive for scientific and technological innovation is a stable market environment.It is pivotal to determine whether the establishment of FTZs can enhance innovation and foster the high-quality development of Chinese economy in order to construct an innovative country. 
FTZs are an essential strategy for nations to acquire competitive benefits in the process of globalization and international commerce [3].Scholars have concentrated their research on its effects on export trade, economic development, industrial upgrading, and capital flows since the concept was introduced.Establishing FTZs can substantially increase the magnitude of regional exports [4][5][6].The rapid growth of the economies in regions where FTZs are located may be attributed to the government's policy support [7][8][9].Meanwhile, China's economy is transitioning from a period of rapid growth to one of high-quality development.Within the framework of China's "four sectors + three economic belts" regional strategy, FTZs promote global trade and accelerate capital flows while reducing tariff barriers [10].Consequently, enterprises are incentivized to enhance their production inputs, consistently raise product quality, and increase sectoral efficiency [11], thereby effectively facilitating green and high-quality economic growth [12,13].Industry is the foundation of regional economic development.The liberalization of trade in FTZs has facilitated the unrestricted exchange of technology, information, and products among enterprises, which has further stimulated the transformation and upgrading of industrial structure [14], especially the upgrading of the industrial structure of the manufacturing industry, but the proactive effect on the rationalization of the industrial structure is unstable and decreases over time after the establishment of FTZs [15].Additionally, enterprises in FTZs are subject to a negative list management model, which improves the transparency of government management and establishes a fair and equitable competitive environment for businesses, thereby facilitating the rapid growth of the regional service industry [16,17].As China's manufacturing advantage is gradually eroded by low-cost competitors in Southeast Asia, the establishment of FTZs as one of the bridges between China and the world has facilitated the removal of restrictions on foreign investment, effectively promoting the amount of actual Chinese utilization of foreign direct investment [18].Trade will stimulate business innovation [19,20].While there are fewer studies on the impact of FTZs and innovation, a favorable institutional environment in FTZs is advantageous for technology and knowledge spillovers, which, in turn, promote innovation [21,22]. 
In summary, several issues need to be further explored in the existing research on FTZs. First, most studies use Shanghai as the experimental group, and the research methodology used is the synthetic control method, where the control group of provinces is weighted to obtain a "synthetic region". Shanghai was the first city in China to set up an FTZ, serving as a strategic point for China's high-level opening to the outside world, and a single-sample study of Shanghai may overestimate the policy effect. Moreover, the synthetic control method involves a certain degree of subjectivity in the selection of synthetic indicators, which weakens the scientific validity of the conclusions. Second, the majority of existing studies are based on the provincial level, and there are fewer studies with detailed samples of prefecture-level cities. The layout of the FTZs is at the prefecture level, each covering an area of about 120 square kilometers. Utilizing provincial data for the study may lead to certain errors in the results, and the policy implications obtained are very limited. Third, although there are articles that study the impact of FTZs on innovation, the existing studies are exclusively centered on the city of Shanghai, and no comprehensive analysis of the mechanism between FTZs and innovation has been conducted.

Based on the above discussion, the marginal contributions of this paper are as follows: (1) The paper studies the impact of FTZs on the innovation performance of prefecture-level cities, which provides a basis for the development of regional innovation. (2) We analyze how FTZs affect urban innovation. Additionally, we analyze the impact of FTZs on urban innovation in detail, taking into account the differences in location, city size, quality of innovation, and establishment batch. By analyzing the heterogeneity of the impact on urban innovation, we aim to provide targeted suggestions for the implementation of opening-up policy and the improvement of urban innovation performance. (3) Furthermore, we use a spatial difference-in-differences model to investigate the impact of FTZs on innovation in adjacent cities.

The rest of the paper is organized as follows: the second section presents the policy background and research hypotheses; the third section consists of the model setting, variable determination, and data description; the fourth section is the empirical analysis; the fifth section is the spatial effect analysis of the establishment of FTZs on urban innovation performance; and finally, the conclusion is presented.
The Background of the Establishment of FTZs The 18th National Congress of the Communist Party of China recognized the necessity of implementing a more proactive opening-up strategy in response to the challenges and pressures posed by new international trade regulations.As a result, the first FTZ was established in Shanghai in September 2013.In 2015, FTZs were established in Guangzhou, Shenzhen, Zhuhai, and Tianjin to build an important hub along the Maritime Silk Road and create a high-level platform for coordinated development in the Beijing-Tianjin-Hebei region.In 2017, there was more growth, with the establishment of FTZs in Fuzhou, Xiamen, Shenyang, Dalian, Yingkou, Zhoushan, Zhengzhou, Kaifeng, Luoyang, Wuhan, Xiangyang, Yichang, Chongqing, Chengdu Xi'an, and Xianyang.The geographical distribution was gradually shifting from coastal cities to inland cities.Finally, Hainan Province established FTZs in 2018, covering an area of 31,500 square kilometers, making it an important gateway for opening up to the Pacific Ocean and Indian Ocean.In 2019, FTZs were set up in cities including Jinan, Qingdao, Yantai, Nanjing, Suzhou, Lianyungang, Nanning, Qinzhou, Chongzuo, Baoding, Shijiazhuang, Tangshan, Kunming, Harbin, Heihe, Mudanjiang, etc.In 2020, FTZs were established in the cities of Beijing, Changsha, Yueyang, Chenzhou, Hefei, Wuhu, Bengbu, Ningbo, Hangzhou, and Jinhua.As of September 2023, China has set up FTZs in 49 prefecture-level cities in 21 provinces (see Figure 1). Research Hypothesis Innovation development relies mainly on the market and enterprises, but it also requires the government to act as an investor and insurer.Over ten years, FTZs have always insisted on the integration of foreign opening and domestic reform.For instance, in 2021, the State Council issued "Several Measures on Piloting the Systematic Liberalization in Conditional Pilot Free Trade Zones and Free Trade Ports to Match the International High Standards" with the aim of attracting a large number of enterprises to relocate to the area.In Hainan, for example, the Hainan pilot free trade zone was established, attracting 28 Fortune 500 enterprises to settle in Hainan, and the growth rates of both the number of new foreign-funded enterprises and the actual utilization of foreign capital exceeded 100% [23].FTZs provide a platform for the agglomeration of businesses, thus increasing market competition among enterprises.Driven by the goal of profit maximization, firms are forced to engage in innovation to expand their market share, especially those located at the forefront of technology. Hypothesis 1. FTZs will promote urban innovation. 
With the increasing opening of China to the global market, there has been a surge in enterprise exchanges. Enterprise communication essentially involves the transfer of skilled workers, namely, the migration of highly talented personnel. In 2021, Beijing published the "Directory of Human Resource Development for the Establishment of Comprehensive Demonstration Zones for the Expansion and Opening Up of the National Service Industry and the China (Beijing) Pilot Free Trade Zone". The purpose of this publication is to guide human resource development and the optimal allocation of resources in key sectors. The mobility of talent drives the overflow of knowledge between different regions and improves the level of regional innovation. In turn, enterprises need to recruit high-quality talent to improve market competitiveness. Therefore, the establishment of FTZs will result in the gathering of talent, which will have an impact on innovation performance [24]. As a comprehensive experimental platform for reform and opening up, FTZs have continuously expanded investment areas, optimized the business environment, increased the number of foreign-invested enterprises, and attracted foreign direct investment after 10 years of development. The entry of foreign-invested enterprises stimulates the inflow of capital and technology, and the overflow of knowledge and technology, in turn, has an impact on local innovation, particularly in cases when there exists a technology gap [25].

The main purpose of the establishment of FTZs is to enhance the facilitation of foreign trade through institutional innovation and financial reforms. This includes reducing trade barriers, in other words, reducing "iceberg costs", which is perceived to increase the market size of the destination as trade costs fall, other factors remaining unchanged in the EK model [26]. With reference to Ufuk Akcigit (2021), under a perfectly competitive market, the final product is formulated as in Equation (1) [27], where k_jt represents the quantity of intermediate product j at time t and q_jt denotes the quality of that intermediate good. According to the first-order condition of Equation (2), the price of intermediate good j and the wage can be obtained. In monopolistic competition, the marginal cost of the monopolist is a positive constant for each intermediate product j, and the profit maximization problem faced by producer j is given in Equation (3). From Equation (3), the quantity and price of intermediate goods in equilibrium follow as Equations (4) and (5). Hence, the profit of the producer is given in Equation (6). The innovation activity of the intermediate goods' producers is random; if the innovation is successful, the production quality of the enterprise increases above q_jt; if it is unsuccessful, the production quality of the enterprise remains q_jt; and the probability of enterprise innovation success is x_j. There is a cost for firms to innovate; to simplify the calculation, the innovation cost function is assumed to take a simple convex form, and the purpose of the entrant firm is to permanently replace the incumbent firm, such that the profits gained by the incumbent firm through innovation in the next period follow as Equation (7). The probability of choosing innovation when maximizing profits follows as Equation (8). In Equation (8), an expansion in market scale leads to an increased probability of enterprises opting for innovation, thereby elevating the level of urban innovation.
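The market-size argument can be illustrated with a stylized version of the model. The functional forms below (per-period profits proportional to market size L_t, a quality step λ > 1, a quadratic innovation cost) are simplifying assumptions chosen for illustration, not necessarily those of Equations (1) to (8); under them, the optimal innovation probability increases with market size, which is the property invoked in Equation (8).

```latex
% Illustrative sketch only: quadratic innovation cost and profits linear in
% market size L_t are assumptions standing in for the paper's Eqs. (1)-(8).
\begin{align*}
\pi_{jt} &= \bar{\pi}\, q_{jt} L_t
  &&\text{per-period profit, proportional to market size } L_t,\\
\Delta V_{jt} &= \delta\, \bar{\pi}\,(\lambda - 1)\, q_{jt} L_t
  &&\text{discounted gain if quality jumps from } q_{jt} \text{ to } \lambda q_{jt},\\
x_j^{*} &= \arg\max_x \; x\,\Delta V_{jt} - \tfrac{\chi}{2}x^{2}
        = \frac{\Delta V_{jt}}{\chi}
  &&\text{optimal innovation probability,}\\
\frac{\partial x_j^{*}}{\partial L_t}
  &= \frac{\delta\,\bar{\pi}\,(\lambda-1)\,q_{jt}}{\chi} > 0
  &&\text{a larger market raises the chosen innovation rate.}
\end{align*}
```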
The financial ecosystem exerts a significant influence on innovation, and the development of innovation cannot be separated from financial support. One of the tasks of the FTZs is to deepen openness and innovation in the financial sector. For instance, the People's Bank of China issued the "30 Articles on Financial Reform" to bolster the development of the pilot free trade zone at the beginning of its establishment in Shanghai. In addition, the China Banking Regulatory Commission (CBRC) released policies to assist financial institutions in establishing branches, trust institutions, financial leasing entities, etc., while permitting foreign banks to establish subsidiaries, branches, and joint venture banks within these areas, thereby further enhancing their financial ecosystems. FTZs set up in subsequent years drew on the financial programs of the Shanghai pilot free trade zone. The improvement of the financial system facilitates financing for free-trade-zone enterprises and provides a fiscal guarantee for enterprise-level innovation, consequently promoting urban innovation.

Hypothesis 2. The establishment of FTZs can promote urban innovation through talent agglomeration, foreign direct investment, market size, and financial support.

The establishment of FTZs as a national strategic deployment extends beyond its impact on the innovation level of the cities where they are situated. It should also be considered whether the policy has a spillover effect or a siphon effect on the level of innovation in neighboring cities. Growth theory and new economic geography suggest that economic activities and growth are characterized by spatial agglomeration due to localized externalities stemming from knowledge activities and spatially constrained increasing returns. Knowledge spillovers play a crucial role in innovation agglomeration. On the one hand, enterprise operations are closely intertwined with societal institutions, sharing networks of suppliers, universities, research institutes, and other public and private entities during the development of new products and processes. On the other hand, these institutions display clear localization characteristics due to their geographical and cultural proximity. Knowledge can be categorized into codified knowledge and tacit knowledge; tacit knowledge relates to individuals' work experience and can only be transferred through working communication. Consequently, innovation exhibits a strong local bias for these two reasons and represents a process of mutual learning. Learning by doing and the spillover of knowledge in the process of social exchange will have a radiating effect on the surrounding areas. FTZs will result in a clustering of enterprises, fostering heightened competition and facilitating exchanges between these businesses, which will result in the spillover of technology and knowledge.

Hypothesis 3. The establishment of FTZs will enhance the level of innovation in the region and stimulate innovation in neighboring cities.
Multi-Period Difference-in-Differences Model

The establishment of the FTZs is a policy shock, and the timing of establishment differs across cities; in order to scientifically evaluate the impact of the policy on urban innovation, we take a DID approach to examining the impact of FTZs on innovation. DID-based causal inference offers a clear advantage in addressing potential endogeneity arising from omitted variable bias [28]. As is well known, many factors affect regional innovation, and analyzing regional innovation requires controlling for a long list of variables; failure to do so leads to biased results (omitted variable bias). We sidestep this problem by defining the treatment and control groups in a convincing way and focusing on a relatively short time period before and after the intervention. As a result, our quasi-experiment can partially rule out the influence of other factors on regional innovation, and the results are reliable. The empirical model is constructed as follows:

Innovation_it = β0 + β1 FTZ_it + γ Controls_it + CityFE_i + YearFE_t + ε_it, (9)

In Equation (9), Innovation_it is urban innovation performance, FTZ_it indicates the FTZ policy (the interaction of the treatment-city dummy and the post-establishment time dummy), Controls_it denotes the set of control variables, CityFE_i is the city fixed effect, YearFE_t is the time fixed effect, ε_it is the stochastic error, and β1 is the estimated coefficient of interest.

Model Variable

1. Dependent variable (innovation). Currently, researchers primarily focus on two aspects of innovation: innovation input and innovation output. The innovation input index is mainly measured by the amount of R&D, and it is well known that it takes at least 1-2 years for innovation inputs to be converted into innovation outcomes. Innovation output is mainly characterized by the sales of new products. However, this process involves not only patent applications, product design, etc., but also significant market influences, making it challenging to adequately assess the extent of urban innovation. We therefore refer to the innovation index constructed from patent data on innovation output to measure urban innovation. The data on the cities in which FTZs were established and the time of their establishment come from the approval documents granted by the State Council. The data for the other variables come from the China Urban Statistical Yearbook and various city yearbooks. Chaohu City in Anhui Province (reorganized as a county-level city administered by Hefei City since 2011), the cities of the Tibet Autonomous Region, and Bijie, Tongren, Haidong, Hami, Turpan, Sansha, Danzhou, and Shashi are excluded because of large amounts of missing data. For the few remaining missing values, we adopted the linear interpolation method and consulted the yearly reports of the national economic and social development statistics of each city. Finally, urban data on 284 cities for the period 2009-2021 were obtained. To make data with different units of measurement comparable, the data were logarithmically processed (Table 1).

Benchmark Regression

Table 2 reports the regression results. Column 1 shows the regression results without the control variables and fixed effects; column 2 adds the double fixed effects of city and time to column 1; and columns 3 and 4 show the regression results with control variables added to columns 1 and 2, respectively. The regression results show that the estimated coefficient of the FTZs is positive at the 10% level; the estimated coefficient decreases after adding the double fixed effects as well as the control variables but remains significantly positive, indicating that FTZs promote urban innovation performance by 22.62 units to a certain extent, thus verifying Hypothesis 1.
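As a rough illustration of how Equation (9) might be estimated, the sketch below runs a two-way fixed-effects DID with city-clustered standard errors. The file name and all column names (innovation, ftz, lnrd, and so on) are hypothetical placeholders rather than the authors' actual code or data.

```python
# Hedged sketch of a two-way fixed-effects DID for Equation (9).
# Column and file names (city_panel.csv, innovation, ftz, ...) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("city_panel.csv")  # panel of 284 cities, 2009-2021

# ftz: treatment variable, switching on (fractionally) in the approval year.
formula = (
    "innovation ~ ftz + lnrd + lnedu + lngdp_pc + lnpop + road_pc "
    "+ C(city) + C(year)"  # city and year fixed effects
)
model = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["city"]}  # cluster SEs by city
)
print(model.params["ftz"], model.bse["ftz"])  # DID estimate and its standard error
```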
Analysis of Parallel Trend Test

The parallel trend test is a prerequisite for the analysis of the multi-period DID model. It requires that the experimental group and the control group follow the same trend before the policy shock. Since FTZs were established in different years in different cities, it is not possible to use a single year as the reference point for the policy shock. Therefore, relative time dummy variables are set for the cities affected by the policy. The specification for the parallel trend test is as follows:

Innovation_it = β0 + Σ_k β_k D_it^k + γ Controls_it + CityFE_i + YearFE_t + ε_it, (10)

where the relative time dummies D_it^k take the value 1 in the k-th year before or after the establishment of the FTZ in city i, and 0 for cities in the control group. The research period spans 2009 to 2021, with the earliest FTZs established in 2013, so the relative time dummies extend back at most four periods; the dummies for period -4 and for the period immediately preceding establishment are excluded to avoid multicollinearity. The parallel trend test shows that there is no significant difference between the innovation of the experimental group and the control group before the policy, indicating that the setting conforms to the parallel trend hypothesis (see Figure 2). In terms of the dynamic effect of the policy, the impact of the establishment of the FTZs on innovation is unstable in the short term, and the impact coefficient increases after three years of implementation, indicating that the policy promotes urban innovation with a certain lag.

Robustness Check

1. Multi-temporal propensity score matching difference-in-differences (PSM-DID). Ideally, we would compare the same cities with and without the policy, but such an ideal counterfactual cannot be observed in reality. Meanwhile, the FTZs are not strictly a natural experiment, and there is a problem of selection bias; hence, PSM-DID was used for the robustness test. Figures 3-6 report the kernel density plots of the experimental and control groups before and after matching under the cross-sectional and year-by-year methods. Before matching, the two kernel density curves deviate considerably regardless of the method; after matching, the distance between the mean lines is shortened and the two curves are closer, showing that both cross-sectional and year-by-year matching reduce sample selection bias. Columns 1 and 2 in Table 3 report the results of difference-in-differences regressions under both methods. The estimated coefficients of the policy are significantly positive, which is consistent with the estimation results in Table 2.

2. Replacing the dependent variable. The measure of urban innovation is replaced by the number of patents granted per 10,000 people. The result is shown in column 3 of Table 3: the estimated coefficient of the policy is 8.1971 and is significantly positive at the 1% level, indicating that FTZs increase the number of patents granted in cities and thus positively affect urban innovation; the results remain robust.
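One way to construct the relative time dummies in Equation (10) is sketched below; the column names (year, ftz_year, innovation, and so on) are hypothetical placeholders, and the binning and reference periods simply follow the description above.

```python
# Hedged sketch of the event-study specification in Equation (10).
# Column names (year, ftz_year, innovation, ...) are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("city_panel.csv")

# Relative event time: years since FTZ establishment (NaN for never-treated cities).
df["rel_time"] = (df["year"] - df["ftz_year"]).clip(lower=-4)  # bin early leads at -4

# Build lead/lag dummies, omitting period -4 and period -1 as references.
for k in range(-3, 9):
    if k == -1:
        continue
    name = f"lead{-k}" if k < 0 else f"lag{k}"
    df[name] = ((df["rel_time"] == k) & df["ftz_year"].notna()).astype(int)

event_terms = " + ".join(c for c in df.columns if c.startswith(("lead", "lag")))
formula = f"innovation ~ {event_terms} + lnrd + lnedu + C(city) + C(year)"
res = smf.ols(formula, data=df).fit(cov_type="cluster",
                                    cov_kwds={"groups": df["city"]})
print(res.params.filter(like="lag"))  # dynamic effects after establishment
```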
3. Exclude the influence of other policies. During the research period, the country launched the "innovative city" pilot policy, which is closely related to the research in this paper. Therefore, the interaction term between the year dummy variable and the city dummy variable for the implementation of the "innovative city" policy is added to the benchmark regression to control for its effect. The result, shown in column 4 of Table 3, is that the estimated coefficient of FTZs is still significantly positive after excluding the "innovative city" policy, indicating that the estimated result is robust to a certain extent and further confirming the validity of Hypothesis 1.

Heterogeneity Analysis

1. Regional heterogeneity analysis. To test whether there are regional disparities in the impact of FTZs on urban innovation, the three regions of the East, Central, and West are analyzed separately. Columns 1-3 in Table 4 show that FTZs have a significantly positive impact on the innovation of the eastern, central, and western regions, with estimated coefficients of 27.30, 10.93, and 9.65, respectively. This indicates that the establishment of FTZs has a greater promotional effect in the east than in the central and western regions. A possible reason is that the level of economic development, talent concentration, financial support, and other aspects of the innovation environment in the eastern region are better than those in the central and western regions.

2. Innovation heterogeneity analysis. Invention patents are regarded as high-quality innovations due to their novelty and technical creativity, while utility model or design patents are regarded as low-quality innovations because of their relatively low technical content. Columns 4-5 of Table 4 demonstrate that the policy has a substantial promotional effect on both invention patents and utility model patents. The estimated coefficient for invention patents is 1.7526, while for utility model patents it is 6.4445. This suggests that the policy has a greater influence on low-quality innovations than on high-quality innovations. The reason may be that invention patents require more time and funds than utility model patents, and the large-scale establishment of FTZs only started in 2017, so the policy effect has not yet fully materialized.

3. Heterogeneity analysis of city size. The expansion of urban scale leads to the agglomeration of innovation factors and thus improves the performance of urban innovation [29,30]. According to the Notice of the State Council on the Adjustment of the Criteria for the Division of Urban Scale, cities are divided into megacities (urban resident population above 10 million), super-large cities (5 million to 10 million), big cities (1 million to 5 million), medium-sized cities (0.5 million to 1 million), and small cities (below 0.5 million). To facilitate the analysis, this article groups city size into large cities (resident population of more than 1 million), medium-sized cities (resident population of 0.5-1 million), and small cities (resident population of less than 0.5 million). Columns 1, 2, and 3 of Table 5 show that the policy is significantly positive for the innovation performance of large cities and not significant for small and medium-sized cities.
4. Heterogeneity analysis by establishment batch. FTZs were established in different batches; in 2017, FTZs were established in seven provinces (Liaoning, Zhejiang, Henan, Hubei, Chongqing, Sichuan, and Shaanxi). The article chooses 2017 as the cutoff point: the first batch covers 2013 to 2017, and the second batch covers 2018 to 2021. The heterogeneity analysis across batches is reported in columns (4) and (5) of Table 5. The results show that the establishment of the FTZs has a positive effect on urban innovation performance in both batches. The estimated coefficient of the first batch is 7.2566, and that of the second batch is 10.5481, i.e., the policy effect of the second batch is stronger than that of the first batch.

Mechanism Analysis

According to the theoretical analysis in the second section, we use instrumental variable regression to analyze the causal relationship between the establishment of FTZs and the mechanism variables (talent concentration, foreign direct investment scale, market scale, and financial support) in order to investigate the mechanism through which FTZs affect urban innovation. In the first step, this analysis estimates the effect of the independent variable on the mechanism variables; in the second step, it analyzes the relationship between the mechanism variables and the dependent variable using the double-fixed-effects model. The mediation effect test model is constructed as follows:

Mediator_it = α0 + α1 FTZ_it + γ Controls_it + CityFE_i + YearFE_t + ε_it, (11)
Innovation_it = δ0 + δ1 Mediator_it + γ Controls_it + CityFE_i + YearFE_t + ε_it, (12)

Equation (11) represents the effect of the establishment of FTZs on the mediator variable Mediator_it, and Equation (12) represents the effect of the mediator variable on urban innovation performance; if the signs of α1 and δ1 are consistent, the mediating effect is established.

1. Talent agglomeration. Talent agglomeration is measured by the number of employees in the information transmission, computer services, and software industries plus the number of employees in the scientific research, technical services, and geological survey industries, divided by the total number of employees. Columns (1) and (2) of Table 6 report the regression results for talent agglomeration. The influence of the establishment of FTZs on talent agglomeration is analyzed based on the instrumental variable method at the city-time level in column (1), which controls for the fixed effects of city and time, and column (2) shows the regression results of talent agglomeration on the level of innovation in the city. The results show that the impact of the establishment of the FTZs on talent agglomeration is significantly positive, confirming that talent agglomeration is one of the channels through which FTZs influence urban innovation.

2. Foreign direct investment (FDI).
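A minimal sketch of the two-step test in Equations (11)-(12) is given below, using talent agglomeration as the example mediator; all variable and file names are hypothetical placeholders, not the authors' code, and the instrumental-variable stage described in the text is not reproduced here.

```python
# Hedged sketch of the two-step mechanism test in Equations (11)-(12).
# Variable names (talent, innovation, ftz, ...) are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("city_panel.csv")
fe = "+ lnrd + lnedu + lngdp_pc + lnpop + road_pc + C(city) + C(year)"
cl = dict(cov_type="cluster", cov_kwds={"groups": df["city"]})

# Step 1 (Eq. 11): effect of FTZ establishment on the mediator.
step1 = smf.ols("talent ~ ftz " + fe, data=df).fit(**cl)
# Step 2 (Eq. 12): effect of the mediator on urban innovation.
step2 = smf.ols("innovation ~ talent " + fe, data=df).fit(**cl)

# The mediation channel is supported if both coefficients share the same sign.
print(step1.params["ftz"], step2.params["talent"])
```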
The magnitude of foreign direct investment (FDI) is quantified by taking the natural logarithm of the actual level of FDI utilization. Given the substantial dispersion in the FDI data, and to mitigate the influence of extreme values on the findings, FDI was re-estimated after winsorizing the upper and lower 1% in addition to the logarithmic treatment. The regression results are shown in columns (3) and (4) of Table 6. After controlling for the fixed effects of city and time, the results show that the impact of the establishment of FTZs on FDI is significantly positive, and the estimated coefficient of FDI on urban innovation is significantly positive, confirming that FDI is one of the channels through which the establishment of FTZs affects urban innovation.

3. Financial support. This paper interprets the level of financial support in terms of financial scale, which is characterized by the ratio of the total deposits and loans of financial institutions to Gross Domestic Product (GDP). The regression results are shown in columns (5) and (6) of Table 6. After the two regression steps, the findings confirm that financial support is one of the channels through which the establishment of FTZs influences urban innovation.

4. Market size. This indicator is measured using the logarithm of GDP. The regression results are shown in columns (7) and (8) of Table 6. Controlling for city and time fixed effects, the coefficient of the establishment of the FTZs on market size is positive, and the coefficient of market size on innovation in the city is also significantly positive, thus confirming that market size is one of the mechanisms through which the establishment of FTZs affects urban innovation and verifying Hypothesis 2.

Analysis of the Spatial Effects

According to the above, the establishment of FTZs will affect the innovation of local cities (the direct effect) and that of adjacent cities through spillover effects (the spillover effect). To better examine the spatial effect of the establishment of the FTZs on urban innovation, we apply a spatial difference-in-differences approach, which combines difference-in-differences with the spatial Durbin model [31]. The model is constructed as follows:

Innovation_it = β0 + ρ Σ_j W_ij Innovation_jt + β1 FTZ_it + θ Σ_j W_ij FTZ_jt + γ Controls_it + CityFE_i + YearFE_t + ε_it, (13)

In Equation (13), W_ij is the spatial weight matrix, ρ indicates the impact of the spatial lag term on the dependent variable of urban innovation and is called the spatial autoregressive coefficient, and β1 represents the impact of the establishment of FTZs on urban innovation. The remaining variables are consistent with Equation (9). In spatial analysis, spatial linkage is the premise and key to spatial modeling, and the spatial weight matrix indicating the strength of inter-regional linkage is crucial. At present, there is no consensus among researchers on the selection of a spatial weight matrix. We choose a 0-1 contiguity spatial weight matrix, an economic-geography matrix, and an economic-geography nested matrix for the analysis; the spatial regression results are shown in Table 7.
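The sketch below illustrates one plausible way to build row-standardized weight matrices of the kinds named above; the inputs (an adjacency indicator, inter-city distances, and GDP per capita) and the mixing parameter alpha are hypothetical assumptions, since the paper does not report its exact construction.

```python
# Hedged sketch of the spatial weight matrices described above; the distance,
# adjacency, and GDP inputs are hypothetical placeholders.
import numpy as np

def row_standardize(w: np.ndarray) -> np.ndarray:
    """Scale each row so its weights sum to one (zero rows are left as zeros)."""
    sums = w.sum(axis=1, keepdims=True)
    return np.divide(w, sums, out=np.zeros_like(w), where=sums > 0)

def contiguity_matrix(adjacent: np.ndarray) -> np.ndarray:
    """0-1 matrix: 1 if two cities share a border, 0 otherwise."""
    w = adjacent.astype(float)
    np.fill_diagonal(w, 0.0)
    return row_standardize(w)

def economic_geography_nested(dist: np.ndarray, gdp_pc: np.ndarray,
                              alpha: float = 0.5) -> np.ndarray:
    """Nested matrix mixing inverse distance with economic similarity."""
    with np.errstate(divide="ignore"):
        w_geo = np.where(dist > 0, 1.0 / dist, 0.0)       # inverse distance
    diff = np.abs(gdp_pc[:, None] - gdp_pc[None, :])
    w_econ = 1.0 / (1.0 + diff)                           # economic proximity
    w = alpha * w_geo + (1.0 - alpha) * w_geo * w_econ    # nested combination
    np.fill_diagonal(w, 0.0)
    return row_standardize(w)
```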
Columns (1), (3), and (5) of Table 7 show the overall impact of the establishment of FTZs on urban innovation under the different spatial weight matrices. The results show that the establishment of FTZs increases local innovation by about 18 units, with statistically significant results at the 1% level. The regression results were then decomposed to examine the spatial spillover effects of the establishment of FTZs on innovation; the findings are shown in columns (2), (4), and (6) of the table. The impact of FTZs on local innovation is strongly positive and statistically significant at the 1% level under all three spatial matrices. Nevertheless, the effect of the establishment of FTZs on the innovation of neighboring cities is not significant under the 0-1 contiguity weight matrix, whereas there is a spillover effect on the innovation of surrounding cities under the economic-geography weight matrix and the economic-geography nested matrix. This indicates that FTZs have spillover effects on innovation in cities with which they have geographical proximity and economic ties, verifying Hypothesis 3.

Conclusions

By constructing a panel data set comprising a broad sample of 284 prefecture-level cities in China from 2009 to 2021, we provide evidence on the FTZ-innovation link.

One of the key findings is that FTZs can significantly boost urban innovation. Considering that this positive effect may be influenced by the model, the variables, and other policies during the sample period, we conducted a series of rigorous tests, including multi-temporal propensity-score-matching difference-in-differences analysis, substituting the key dependent variable, and removing the influence of other policies. The empirical findings remained statistically significant. We also found that the positive impact of FTZs on innovation is directly correlated with the level of economic development and the size of the city, as well as the length of time for which the pilot FTZ has been established. Comparing different levels of innovation quality, FTZs have a more significant impact on lower-quality innovation.

The second main conclusion relates to the channels through which FTZs affect innovation. We found that foreign direct investment, financial support, talent pooling, and market scale are all channels through which FTZs affect innovation.

The third main conclusion is that the establishment of FTZs has a substantial positive impact on the innovation of the local area, as well as on neighboring cities that have strong economic connections with it, as measured by the economic-geography matrix and the economic-geography nested matrix using spatial difference-in-differences.
From the policy perspective, FTZs have the potential to greatly enhance urban innovation, particularly in Chinese cities with a high level of economic development. In underdeveloped regions, it is crucial for governments to carefully consider the arrangement of FTZs in combination with other policies in order to optimize the impact of these zones. Meanwhile, the Chinese government should enhance the level of institutional liberalization, give FTZs more autonomy for reforms, adjust the size of the zones in accordance with the real growth of the region, and reinforce the connections between FTZs and their neighboring cities to expedite the replication and dissemination of successful experiences. In the future, China's free trade zones will remain a key part of its efforts to innovate its institutions and demonstrate its ongoing commitment to opening up to global trade. The Chinese government should further enhance its efforts to increase international integration, establish diverse external development platforms to achieve the most efficient distribution of innovative factors, and consistently elevate the level of urban innovation.

The limitation of this research is that the impact of FTZs on urban innovation is analyzed at the macro level. Since enterprises are a primary component of innovation, the impact of the establishment of free trade zones on enterprise innovation could be further explored from a micro perspective in future work.

Figure 1. Sketch map of the distribution of pilot free trade zones ("南海诸岛" translates to the South China Sea Islands).
Figure 2. Results of parallel trend test.
Figure 3. Kernel density distribution before cross-section PSM matching.
Figure 4. Kernel density distribution after cross-section PSM matching.
Figure 5. Kernel density distribution before year-by-year PSM matching.
Figure 6. Kernel density distribution after year-by-year PSM matching.
The Report on Industrial Innovation Capability of Chinese Cities 2017 (published by the Research Center for Industrial Development (FIND) of Fudan University, the China Center for Economic Research (Think Tank) of Fudan University, and the First Financial Research Institute) obtained city and industry innovation indexes for the years 2009-2021 by calculation based on the patent data of industries and cities from the State Intellectual Property Office (SIPO) of China. The incremental innovation index for each year is computed by subtracting the innovation index of the previous year from that of the current year to quantify urban innovation. The report's data end in 2016. To calculate the urban innovation index for the years 2017-2021, we extend the series by multiplying the previous year's city innovation index by the ratio (patents granted in the current year/patents granted in the previous year); the number of patent grants per 10,000 people is used for a robustness test.

2. Independent variable (FTZ policy). The article treats the FTZs as a quasi-natural experiment, setting the cities where FTZs are located as the experimental group and the other cities as the control group. The policy shock of the FTZs is characterized by the interaction term of the time dummy variable for the policy shock and the city dummy variable. In order to accurately measure the timing of the policy shock, the treatment variable of a treated city is assigned a fractional value in the year its FTZ is established, based on the month of establishment, representing the proportion of that year during which the FTZ was in operation. For example, since the Shanghai FTZ was set up in September 2013, the value assigned to Shanghai in 2013 is 1/3, indicating that the FTZ was in operation for about one-third of that year. FTZs are established at various points in time, resulting in non-identical time dummy variables for the cities where FTZs are located.

3. Control variables. Considering that other characteristics of a city can affect innovation, the following influencing factors are controlled for: (1) R&D expenditures, measured by the natural logarithm of urban R&D expenditures; (2) education expenditures, measured by the natural logarithm of urban education expenditures; (3) GDP per capita, measured by the natural logarithm of the city's GDP per capita; (4) population size, measured by the natural logarithm of the urban resident population; and (5) infrastructure development, measured by urban road space per capita.

Table 5. Heterogeneity analysis by urban size.
Table 7. Regression results of the panel spatial difference-in-differences model.
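The sketch below shows one plausible implementation of the two constructions just described: extending the innovation index beyond 2016 with patent-grant ratios, and assigning the fractional treatment value in the establishment year. All column and file names (city, year, innov_index, patents_granted, ftz_year, ftz_month, and so on) are hypothetical placeholders.

```python
# Hedged sketch of the index extension and the fractional treatment assignment.
# All column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("city_panel.csv").sort_values(["city", "year"])

# Extend the innovation index for 2017-2021:
# index_t = index_{t-1} * (patents granted in t / patents granted in t-1).
def extend_index(g: pd.DataFrame) -> pd.DataFrame:
    g = g.copy()
    for i in range(1, len(g)):
        if pd.isna(g["innov_index"].iloc[i]):  # years after the report ends
            ratio = g["patents_granted"].iloc[i] / g["patents_granted"].iloc[i - 1]
            g.loc[g.index[i], "innov_index"] = g["innov_index"].iloc[i - 1] * ratio
    return g

df = df.groupby("city", group_keys=False).apply(extend_index)

# Fractional treatment in the establishment year, e.g. September -> 4/12 = 1/3.
def ftz_treatment(row) -> float:
    if pd.isna(row["ftz_year"]) or row["year"] < row["ftz_year"]:
        return 0.0
    if row["year"] > row["ftz_year"]:
        return 1.0
    return (13 - row["ftz_month"]) / 12.0  # months in operation during that year

df["ftz"] = df.apply(ftz_treatment, axis=1)
```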