New Strategies for Intramolecular Annulations: Intramolecular Additions of Silyloxycyclopropane-Derived Anions; Application to Hydrindenone Syntheses
As an extension of our work on intramolecular annulations via anions derived from silyloxycyclopropanes, we investigated the chemistry of the cyclopentylcyclopropane systems 6-9 in an effort aimed at the preparation of stereospecifically functionalized hydrindenones. The intramolecular cyclizations of cyclopropane-derived anions were less stereoselective and more complicated than those of the corresponding cyclohexyl system. Nevertheless, modest yields of hydrindenones, such as 20 and 21, were obtained, along with several products derived from a prototropic shift that generated cyclopentanone enolates. These latter products possess the 5,5-pentalenone systems 22 and 23.
Introduction
For some time, synthetic chemists have sought efficient and stereospecific methods for carbocyclic annulations. Recently, we described a highly stereoselective annulative process which showcased the fluoride-induced desilylation of a 2-(triethylsilyloxy)-1-carboethoxycyclopropane, resulting in an intramolecular conjugate addition of a γ-oxo-α-ester enolate onto a tethered vinyl sulfone 1. In our initial report, fluoride-induced cleavage of silyloxycyclopropane 1, obtained in three steps from cyclohexenone, resulted in the formation of the trans-fused decalenone 2 in high yield and in a completely stereoselective manner (Scheme 1). Ultimately, this protocol resulted in the efficient synthesis of a known octahydronaphthalene synthon 3 for dihydrocompactin.
The remarkable stereoselectivity of the intramolecular cyclization is presumably controlled by the cis-double bond of the sulfone diene side chain, which provides for a preferred approach of the enolate to the geometrically accessible vinyl sulfone double bond.
Due to the high stereoselectivity achieved and the substitution pattern observed about the six-membered ring resulting from the annulation, we envisioned that our annulative process would be ideally suited for the synthesis of hydrindenone natural products, such as pulo'upone 2 4 and the ionophore antibiotic X-14547A 3 5 (Scheme 2).
We would now like to report our results on the extension of this annulative process toward the synthesis of hydrindenone systems.
Results and Discussion
In order to examine the scope and limitations of the cyclization reaction for the synthesis of hydrindenone systems, we chose to examine the cyclizations of four silyloxycyclopropanes (6, 7, 8, 9) upon desilylation with cesium fluoride in acetonitrile (Scheme 3). We elected to incorporate a methyl sulfone into the side chain, as a related study performed in our group 4 demonstrated that the methyl sulfone significantly improved cyclization. We also chose to study the cyclizations of compounds 8 and 9, which contain a methyl substituent at carbon 5 and would allow for the synthesis of hydrindenones containing an angular methyl group.
The syntheses of the requisite silyloxycyclopropanes are summarized in Scheme 4. Addition of the cis-tri-n-butylstannylvinyl cuprate 5 to cyclopentenone 10 at -78 °C, followed by trapping the resulting ketone enolate with triethylsilyl chloride at -78 °C, provided an 85% yield of the vinyl stannane 11. Under similar conditions, the addition of the stannylvinyl cuprate to 2-methyl-2-cyclopenten-1-one 12 proceeded in low yields. However, high yields of the addition product 13 were obtained when the reaction mixture was warmed to -20 °C prior to trapping the ketone enolate with triethylsilyl chloride at -78 °C. The attachment of the vinyl sulfone unit was efficiently achieved by a Stille reaction 6.

With the silyloxycyclopropanes 6, 7, 8 and 9 in hand, we completed this study by subjecting these compounds to the fluoride-induced cyclization. We also examined the effects of varying the equivalents of cesium fluoride, in order to gain further insight into the role of the fluoride source during the cyclization. The cyclization reactions were performed with both one and five equivalents of anhydrous cesium fluoride in dry acetonitrile at 65 °C. When one equivalent of cesium fluoride was used, the reactions were followed by 1H-NMR analysis in order to monitor the progress of the reaction. An aliquot of the reaction mixture was removed at regular intervals throughout the reaction, quenched with saturated ammonium chloride, and extracted with ethyl acetate. After drying over anhydrous magnesium sulfate and concentration, the crude reaction mixtures were analyzed by 1H-NMR. When the reactions were complete, the ratios of the products in the crude reaction mixtures were calculated from 1H-NMR integrations, as analysis by gas chromatography did not completely separate the diastereomeric products. When five equivalents of cesium fluoride were used, the reactions were followed by TLC analysis and quenched when the starting material was consumed. After workup and purification by silica gel chromatography, the isolated yields of each product were determined.
Treatment of 6 with five equivalents of cesium fluoride in acetonitrile at 65 °C for three hours resulted in the formation of four products (Scheme 5). An inseparable mixture of two hydrindenone products, 20 and 21, was obtained in 44% combined yield in a 2.3:1 ratio (20:21), as determined by 1H-NMR 9. Two further side products were isolated from the crude reaction mixture: the first, 22, in 4% yield, and the second, 23, in 10% yield. The formation of these products resulted from the isomerization of the ester enolate to the thermodynamically more stable ketone enolate. Cyclization of the ketone enolate onto the vinyl sulfone of the side chain resulted in the formation of two stereoisomeric sulfone anions A and B. In the case of the syn anion B, the anion intramolecularly condenses onto the ester carbonyl, resulting in the formation of tricyclic product 23 (Scheme 5).
Treatment of silyloxycyclopropane 6 with one equivalent of anhydrous cesium fluoride in acetonitrile at 65 °C proceeded over a nine hour period. Aliquots of the reaction mixture were withdrawn at 30 min, 1 h, 2 h, 4 h, 6 h, and 9 h. 1H-NMR analysis of these aliquots showed the formation of the same four products as described above. Compounds 21 and 22 were visible in the 1H-NMR spectrum after 30 min. Compounds 20 and 23 were observed in the aliquot removed at one hour. At the completion of the reaction, the ratio of bicyclo[4.3.0]nonene products (20:21) was 1:1.6. Apparently there was a change in selectivity as the equivalents of cesium fluoride were increased, since the five equivalent case gave a 2.3:1 ratio.

Examination of the coupling constants for 26 and 27 showed very little variation between the two compounds (Scheme 7). Because both compounds exhibit a J(ad) coupling constant of 8 Hz, it was evident that each possessed a cis-ring fusion. Due to the ambiguity of the remaining coupling constants about the six-membered ring, it was impossible to assign the relative stereochemistry of the remaining stereocenters solely on the basis of the coupling constant data. We believed that 26 and 27 were isomeric at the carboxylate center 10, but we could not assign the stereochemistry with certainty. To assist in this assignment, we obtained a two-dimensional NOESY spectrum for each compound.
The interpretation of the NOESY spectrum for 26 was straightforward (Scheme 7), with crosspeaks between Ha and Hb, Ha and Hc, and Hc and Hd, indicating that these protons were on the same face of the molecule. Additionally, there was a crosspeak observed between Hb and Hd, and no nOe crosspeaks were observed between the methylene protons adjacent to the sulfone and either Ha or Hb, further proof for the stereochemical assignment. On this basis, we were confident that we correctly assigned the relative stereochemistry for 26.
The interpretation of the data for 27 was somewhat more complicated. The NOESY spectrum exhibited nOe crosspeaks between Hb and Hc, suggesting that these protons may be cis as well. It was difficult to determine the presence or absence of any nOe crosspeaks between Ha and Hb or between Ha and Hd because these protons exhibit small differences in chemical shift. In the NOESY spectrum, any nOe crosspeaks between these protons would lie close to the diagonal, and without performing computer optimizations on the spectrum, the diagonal was quite broad and noisy. Despite this problem, we were able to assign a long-range nOe between Hb and He, and on this crosspeak we based our stereochemical assignment. This crosspeak was not present in the spectrum for 26.
The cyclization of 7 with one equivalent of cesium fluoride proceeded over a 12 h period. Again, aliquots were withdrawn at regular intervals, submitted to an aqueous workup, and analyzed by 1H-NMR. In this case, product formation was visible after 4 h, at which time each product was observed. At the completion of the reaction, the crude mixture showed a 26:27 ratio of 1:1.4, showing very little change from the 1.2:1 ratio of the five equivalent case.
Having completed the cyclizations of 6 and 7, we next examined the cyclizations of the methyl-substituted cyclopropanes, 8 and 9.
When 8 was treated with five equivalents of cesium fluoride, two products were isolated (Scheme 8). The major product, obtained in 53% yield, was the bicyclo[4.3.0]nonene compound 30, accompanied by a 6% yield of the bicyclo[3.2.1]octene compound 31. The unique bicyclo[3.2.1]octene structure most likely resulted from an intermolecular equilibration of the ester enolate to the more stable ketone enolate, followed by closure of the ketone enolate onto the side chain. Proof of the relative stereochemistry of 30 was obtained with the aid of a difference nuclear Overhauser enhancement experiment (Scheme 8). Irradiation of the angular methyl group produced a 2.2% enhancement of the signal corresponding to the methylene protons of the sulfone side chain, indicating that these two groups were on the same face of the molecule. Although the magnitude of the enhancement was small, we considered it significant, particularly since the distance between these nuclei was probably greater than that typically examined by nOe. More importantly, however, a 6.4% enhancement of the signal corresponding to the proton adjacent to the ester group was observed, proof that the ester group was trans to both the angular methyl group and the methylene sulfone side chain.
Analysis of the samples removed over a 12 h period during the treatment of 8 with one equivalent of cesium fluoride showed significantly different behavior from that observed for 6 and 7. At 30 min, formation of 32, which results from quenching of the γ-oxo-α-ester enolate, and the bicyclo[4.3.0]nonene 30 were observed (Scheme 9). At 4 h, the side product 31 was observed with the appearance of the characteristic vinyl protons. At 6 h, the starting material was completely consumed. At this point, 32 and 30 were present in a 1:1 ratio. After 12 h, the reaction was quenched and, after 1H-NMR analysis, the ratios had not changed (32:30:31 was 1:1:0.2).

Lastly, treatment of 9 with five equivalents of fluoride provided three products as shown below (Scheme 10). In this case the major product, isolated in 52% yield, was a 1:1 mixture of bicyclo[3.2.1]octenes 33 and 34, which were later separated by repeating the chromatographic separation. Compound 33 was identical in all respects to compound 31, isolated previously. Compound 34 exhibited a downfield shift of the absorptions corresponding to the methylene protons adjacent to the methyl sulfone. We attributed this shift to a deshielding effect of the carbonyl group. The minor compound from this reaction was the desired bicyclo[4.3.0]nonene product 35, isolated in 20% yield. The coupling constant of the proton adjacent to the ester group was 2.7 Hz, very similar to that obtained with 30. The relative stereochemistry of 35 was determined by an X-ray crystal structure.
When 9 was treated with one equivalent of cesium fluoride, four products were observed in the 1H-NMR spectrum. In the sample removed after 30 min, compound 36, resulting from the quenched γ-oxo-α-ester enolate, was observed (Scheme 11). After one hour, compounds 33, 34, and 35 were observed. At the completion of the reaction, the bridged tricyclic compounds (33, 34) and the bicyclo[4.3.0]nonene product 35 were present in a 1:1 ratio, and the non-cyclized compound 36 was present in minor amounts.
General procedure for the preparation of bis(cis-tri-n-butylstannylvinyl)cuprate
A solution of diisopropylamine (2.4 mmol) in dry THF (5 mL), in a dried 3-neck round bottom flask equipped with a solid addition tube containing cuprous cyanide and a closed Pasteur pipette, was cooled to -20 °C. n-Butyllithium (2.4 mmol) was added and the solution was stirred for 30 min. Tributyltin hydride 11 (2.4 mmol) was added via syringe. This solution was stirred for 30 min, and then cuprous cyanide (1.2 mmol) was added via the solid addition tube. The resulting solution was stirred for 1 h at -20 °C. The nitrogen supply was removed, and the acetylene apparatus was connected. Acetylene (2.7 mmol, 61 mL) was added via a Pasteur pipette whose tip was immersed beneath the surface of the solution. After completion of the acetylene addition, the reaction mixture was stirred at -20 °C for 30 min. The reaction mixture was cooled to -78 °C using a dry ice/acetone bath, and the enone (1.0 mmol) was added rapidly via syringe. The nitrogen supply was then reattached, and the solution was allowed to warm to -65 °C over 45 min. After cooling to -78 °C, triethylsilyl chloride (2.0 mmol) was added dropwise via syringe. The reaction mixture was allowed to warm to -50 °C slowly, then it was poured into a rapidly stirring ice-cold mixture of diethyl ether (15 mL), saturated aqueous ammonium chloride solution (10 mL), and ammonium hydroxide (2.7 mL). After stirring for 20 min, the mixture was placed in a separatory funnel, and the phases were allowed to separate. The organic phase was washed with saturated aqueous ammonium chloride:ammonium hydroxide (4:1) (19 mL), distilled water (10 mL), and saturated aqueous sodium chloride solution (10 mL). The organic phase was dried with anhydrous magnesium sulfate and filtered over Celite.
The solvent was removed in vacuo, and the crude product was purified by flash column chromatography on silica gel, eluting with hexane at an elution rate of 2 inches per min, to obtain the title compound as a colorless oil.
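For readers scaling this procedure, the reagent amounts follow directly from the stated equivalents relative to the enone. The snippet below is a purely illustrative helper, not part of the original experimental work; the molecular weights are approximate literature values.

```python
# Minimal scaling helper for the cuprate procedure above (illustrative only).
# Equivalents are taken from the text (per 1.0 mmol of enone); molecular
# weights are approximate literature values.
REAGENTS = {
    # name: (equivalents vs. enone, molecular weight in g/mol)
    "diisopropylamine":       (2.4, 101.19),
    "n-butyllithium":         (2.4, None),   # added as a solution; dose by molarity
    "tributyltin hydride":    (2.4, 291.06),
    "cuprous cyanide":        (1.2, 89.56),
    "acetylene":              (2.7, 26.04),
    "triethylsilyl chloride": (2.0, 150.72),
}

def scale(enone_mmol: float) -> None:
    """Print mmol (and mass where a MW is known) for each reagent."""
    for name, (equiv, mw) in REAGENTS.items():
        mmol = equiv * enone_mmol
        if mw is None:
            print(f"{name}: {mmol:.2f} mmol (dose by solution molarity)")
        else:
            print(f"{name}: {mmol:.2f} mmol = {mmol * mw:.0f} mg")

scale(1.0)  # reproduces the quantities quoted in the general procedure
```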
General procedure for the palladium-catalyzed coupling reaction
To a stirred solution of (E)-phenylsulfonylvinyl tosylate 14 or (E)-methylsulfonylvinyl tosylate 15 (1.0 mmol), lithium chloride (2.0 mmol), and bis(triphenylphosphine)palladium(II) chloride (5 mol %) in dry THF (10 mL) was added the appropriate vinyl stannane (1.10 mmol) in dry THF (4 mL). The addition was made by cannula and was completed with a wash of additional THF (2 mL). The reaction flask was equipped with a reflux condenser and heated at 65 °C for 24 h. The reaction mixture was then cooled to room temperature and diluted with diethyl ether (30 mL). The resulting cloudy mixture was washed twice with a solution of 4:1 saturated aqueous ammonium chloride/ammonium hydroxide (10 mL). The organic phase was washed with distilled water (10 mL) and saturated aqueous sodium chloride solution (10 mL) and then dried over anhydrous magnesium sulfate. After filtration over Celite, the solvent was removed to obtain the crude product, which was further purified by flash column chromatography on silica gel.
General procedure for the copper-catalyzed cyclopropanation of the silyl enol ethers
To a stirred suspension of bis(N-benzylsalicylaldiminato)copper(II) (0.006 mmol) in the appropriate silyl enol ether (1.0 mmol) at 70 °C was added a solution of ethyl diazoacetate (3.0 mmol) in dry benzene (4 mL). The addition was regulated by syringe pump and proceeded over 15 h. After the addition was complete, the reaction mixture was cooled to room temperature and diluted with diethyl ether (10 mL). The resulting mixture was filtered through a short pad of silica gel, which was washed with additional diethyl ether. The solvent was removed to obtain a yellow oil, which was further purified as indicated below.
"Chemistry"
] |
Hydrogels for active photonics
Conventional photonic devices exhibit static optical properties that are fixed by their design, including the material's refractive index and geometrical parameters. Nevertheless, they possess attractive optical responses and are already exploited in devices across various fields. Hydrogel photonics has emerged as a promising route to active photonics, primarily by providing deformable geometric parameters in response to external stimuli. Over the past few years, various studies have been undertaken to attain stimuli-responsive photonic devices with tunable optical properties. Herein, we focus on recent advancements in hydrogel-based photonics and micro/nanofabrication techniques for hydrogels. In particular, fabrication techniques for hydrogel photonic devices are categorized into film growth, photolithography (PL), electron-beam lithography (EBL), and nanoimprint lithography (NIL). Furthermore, we provide insights into future directions and prospects for deformable hydrogel photonics, along with their potential practical applications.
Introduction
Photonic devices have become essential in our everyday lives, underpinning numerous technologies, from the lenses integrated into our mobile phones to the LiDAR systems utilized in our cars 1,2. The design of these photonic devices generally involves the utilization of film or structured formats to control their optical responses. These design approaches enable precise manipulation of light behavior, enabling tailored functionalities and enhanced performance in optical systems. One well-known example of a film format that exploits interference is the Fabry-Pérot (F-P) interferometer 3,4. Interference is a fundamental optical phenomenon that arises when two or more light waves interact with each other, changing their amplitudes and phases. The interference is caused by the reflection and transmission of light at interfaces between different refractive indices, where the waves either reinforce or cancel each other, leading to constructive and destructive interference. Moreover, structured formats provide a higher degree of control over optical properties by incorporating intricate geometries, resonant phenomena, and light-matter interactions 5-9. Due to the advancement of micro/nanofabrication techniques, such as photolithography (PL) 10, electron-beam lithography (EBL) 11,12, and nanoimprint lithography (NIL) 13-15, it has become possible to fabricate sophisticated structures 16,17. Structured formats include periodic photonic crystals (PCs) featuring a bandgap, a range of wavelengths that cannot propagate through the material owing to its periodic arrangement. Resonance represents another fundamental principle used in structured formats, whereby specific dimensions of a structure with a distinct refractive index interact with light, leading to enhanced absorption or reflection. Additionally, the concept of scattering in the design of structured formats facilitates enhanced scattering and absorption of light by particles whose sizes are comparable to the wavelength of light (more details are discussed in refs. 18,19). Furthermore, a new type of photonic device known as the metasurface has emerged, which belongs to the category of structured formats. Metasurfaces use well-designed periodic or quasiperiodic arrays of subwavelength structures that can control the interaction between light and matter 1,6,12,20-22. Metasurfaces, which consist of nanostructure arrays, have achieved unprecedented performance by enabling user-selective modulation of phase and amplitude in a variety of photonics applications. However, even though the extraordinary optical properties of metasurfaces are determined by the materials and structural design, they still lack dynamic optical properties after fabrication.
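As a concrete illustration of the interference picture above, an idealized F-P cavity resonates when the round-trip optical path equals an integer number of wavelengths, 2nL cos θ = mλ. The short sketch below is illustrative only: the film parameters are hypothetical values chosen to place resonances in the visible range, and mirror phase shifts are ignored.

```python
# Illustrative F-P resonance calculation: 2 * n * L * cos(theta) = m * lambda.
# Values are hypothetical, chosen only to place resonances in the visible range.
import numpy as np

def fp_resonances(n: float, L_nm: float, theta_deg: float = 0.0,
                  lam_min: float = 400.0, lam_max: float = 700.0):
    """Return (order, wavelength_nm) pairs for resonances in [lam_min, lam_max]."""
    opl = 2.0 * n * L_nm * np.cos(np.radians(theta_deg))  # round-trip optical path
    orders = np.arange(1, int(opl / lam_min) + 1)
    lams = opl / orders
    keep = (lams >= lam_min) & (lams <= lam_max)
    return list(zip(orders[keep], lams[keep]))

# A 500 nm film of n = 1.5 supports one visible resonance:
for m, lam in fp_resonances(n=1.5, L_nm=500.0):
    print(f"m = {m}: lambda = {lam:.0f} nm")
```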
According to well-known tunable mechanisms in photonics, a dynamic optical response can be attained when one of the following conditions is satisfied 23-29: (1) control of the incident light 30-38, (2) modulation of the refractive index of the material or its surroundings 39-42, and (3) control of the geometric parameters of the structures (more details are discussed in this review). Condition (1) typically uses liquid crystals (LCs), where independent LC cells are predominantly attached to the photonic devices for polarization tuning. For Condition (2), optical modulation is achieved by replacing static materials with dynamic materials (phase change materials 26,39,41-47, metal oxides 48-51, etc.) that exhibit different optical properties in response to external stimuli. For Condition (3), attempts have been reported with deformable materials, whose optical response can be controlled by modifying geometrical parameters through external stimuli, such as strain, temperature, humidity, pH, and others. A hydrogel is a three-dimensional crosslinked polymer network produced by the reaction of two or more monomers 52,53. Hydrogels absorb water and swell in the atmosphere due to the presence of hydrophilic groups. In particular, the reaction of hydrogels to external stimuli depends on the characteristics of the monomer, charge density, pendant chain, and degree of crosslinking, allowing precise control according to purpose 54-57. Furthermore, hydrogels have attracted attention due to their optical transparency, biocompatibility, and compatibility with established manufacturing processes, such as coating, self-assembly, and other methods. In particular, hydrogels that exhibit both manufacturing compatibility and nontoxicity are preferred in the field of biosensing due to their intuitive sensing through visible changes 58-60.
In this review, we provide an overview of hydrogels in photonic design and discuss recent progress in hydrogel-based photonics platforms using micro/nanoprocessing techniques (Table 1). In particular, the hydrogel photonic devices are separated based on their formats, including film growth (coating and synthesis), PL, EBL, and NIL; each is described and classified based on its photonic properties and manufacturing processes. Finally, based on an analysis of each, we present a future perspective on hydrogel photonics in terms of fabrication and application.
Properties of hydrogels
The unique properties of hydrogels, with their innate deformability, make them promising candidates for dynamic photonics applications. In this section, we provide a comprehensive review of the physical and optical properties of hydrogels, specifically focusing on their deformable behavior.
The properties of hydrogels are determined by the interactions between the polymer chains and water molecules. The polymer chains bear hydrophilic groups (-NH3, -COOH, -CONH2, -CONH-, -OH, etc.), which can interact with water molecules through hydrogen bonding, electrostatic interactions, or van der Waals forces (more details are discussed in refs. 52,61). These interactions cause the polymer chains to expand and swell, resulting in the characteristic soft and rubbery texture of hydrogels 53. The deformable behavior of hydrogels can be controlled by adjusting the chemical composition of the polymer chains, the degree of crosslinking, and environmental conditions, such as pH and temperature 62-64. The deformation mechanism of stimuli-responsive hydrogels is based on the reversible formation or disruption of crosslinks between the polymer chains 65. When a stimulus is applied to the hydrogel, the crosslinks can be disrupted or reformed, causing the hydrogel to swell or deswell. For example, temperature-responsive hydrogels, such as poly(N-isopropylacrylamide) (PNIPAAm) hydrogels, can undergo a reversible phase transition from a swollen state to a collapsed state as the temperature is increased above a critical temperature, known as the lower critical solution temperature (LCST) 66. This phase transition is due to the disruption of the hydrogen bonding between the polymer chains and the water molecules, which leads to the collapse of the hydrogel network. Another example is pH-responsive hydrogels, such as poly(acrylic acid) (PAA) hydrogels, which can undergo a reversible change in their swelling behavior as the pH of the surrounding solution is changed 52. The mesh size of the polymeric network can change significantly with small changes in pH, and the swelling behavior in acidic or alkaline solution depends on the type of hydrogel. Pendant groups of anionic and cationic hydrogels are ionized above and below the pKa of the polymeric network, respectively. The presence of ions creates a large osmotic force, leading to swelling of the hydrogel 67. A further example is electric-responsive hydrogels, such as PAA 68, poly(2-acrylamido-2-methylpropanesulfonic acid-co-acrylamide) (poly(AMPS-co-AAm)) 69,70, and poly(2-hydroxyethyl methacrylate) (PHEMA) 71; these can undergo a reversible change in swelling behavior as an external electric field is applied 72. When a hydrogel is positioned within an electric field created between two electrodes with an applied voltage, charged ions and counterions are attracted in opposing directions by electrophoretic forces. This leads to the electroosmotic movement of water molecules, which can deform the hydrogel. This property is a great advantage for integration with electronics; thus, interest in it has been growing. The deformable behavior of hydrogels can be quantified using several techniques, including gravimetric analysis, swelling kinetics measurements, and microscopy 52.
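As a simple illustration of the gravimetric analysis mentioned above, the equilibrium swelling ratio is commonly computed from the swollen and dry masses. A minimal sketch follows; the masses used are hypothetical.

```python
# Gravimetric swelling analysis (illustrative): the equilibrium swelling ratio
# is commonly defined as Q = (W_s - W_d) / W_d, with W_s the swollen mass and
# W_d the dry mass; the equilibrium water content follows directly.
def swelling_ratio(swollen_mg: float, dry_mg: float) -> float:
    return (swollen_mg - dry_mg) / dry_mg

def water_content(swollen_mg: float, dry_mg: float) -> float:
    """Mass fraction of water in the swollen gel."""
    return (swollen_mg - dry_mg) / swollen_mg

# Hypothetical measurement: 250 mg swollen, 50 mg dry.
print(swelling_ratio(250.0, 50.0))   # Q = 4.0 (gel holds 4x its dry mass in water)
print(water_content(250.0, 50.0))    # 0.8 -> 80 wt% water
```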
Furthermore, the refractive index (n) of hydrogels changes upon deformation, which can be exploited for designing stimulus-responsive active photonic devices 52. The n of a hydrogel depends on its water content (n of water ≈ 1.33) and crosslinking density. When the hydrogel is exposed to a stimulus that triggers swelling, the water content of the hydrogel increases, causing n to decrease. Conversely, when the hydrogel is exposed to a stimulus that triggers deswelling, the water content decreases, causing n to increase. These phenomena can be understood through the effective n, which accounts for the composition ratio of the hydrogel and solute. For example, when a pH-responsive hydrogel containing acidic groups is exposed to an alkaline solution, the acidic groups deprotonate, causing the hydrogel to swell. As the hydrogel swells, the water content increases, leading to a decrease in n. Similarly, when a temperature-responsive hydrogel is exposed to a temperature above its LCST, it undergoes a phase transition from a swollen to a collapsed state. As the hydrogel collapses, n increases. These unique stimuli-responsive properties make hydrogels promising candidates for designing responsive optical devices for a variety of photonic applications.
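A first-order way to see how water content sets the effective n is a linear volume-fraction mixing rule. The sketch below is an approximation, not a rigorous effective-medium model, and the dry-polymer index is an assumed value.

```python
# First-order effective refractive index of a swollen hydrogel: a linear
# volume-fraction mixing of water (n ~ 1.33) and dry polymer (n ~ 1.5 assumed).
N_WATER = 1.33
N_POLYMER = 1.50  # assumed dry-polymer index; actual values vary by hydrogel

def n_eff(water_volume_fraction: float) -> float:
    f = water_volume_fraction
    return f * N_WATER + (1.0 - f) * N_POLYMER

for f in (0.2, 0.5, 0.8):  # swelling raises f and lowers n_eff
    print(f"f_water = {f:.1f}: n_eff = {n_eff(f):.3f}")
```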
Hydrogel-based photonic devices
Hydrogels possess remarkable material properties and stimuli-responsive functionalities; however, to maintain the stimuli-responsiveness of hydrogels after fabrication and achieve desired shapes, compatible fabrication techniques need to be used. In our approach, we classify hydrogels based on the fabrication processes that enable the implementation of film/structure-based photonic devices (Table 1). Devices with a hydrogel film attained by thin-film growth techniques, such as spin coating and synthesis, can operate as dynamic optical cavities upon exposure to external stimuli. The hydrogel structures fabricated using lithographic techniques are discussed in the subsequent sections.
Film format photonic devices
Extensive research has been conducted on photonics applications of reconfigurable hydrogel films, exploiting the attractive properties of hydrogels. To prepare hydrogel films, precursor solutions containing monomers or uncrosslinked polymers are initially used to form a film, which subsequently undergoes a sol-gel transition to create the desired network structure. A variety of widely employed film-forming techniques, such as spin-coating, dip-coating, solution casting, spray coating, and molecular self-assembly, can be utilized to create films 64. A simply designed structure, composed of a homogeneous hydrogel thin film on a reflective substrate, can be easily fabricated, providing high robustness, fast response, and excellent film uniformity 73. Different fabrication strategies have been adopted to create hydrogel-based film format photonic devices (Fig. 1). Although there might be differences in the methodology used to form these films depending on the material requirements (e.g., temperature, type of crosslinking), common coating, crosslinking, and synthesis processes are used to create hydrogel thin films. Conventionally, the metal-insulator-metal (MIM) configuration is a powerful and simple solution that can generate a sharp dip and peak 74. MIM configurations operate in both reflective and transmissive modes, depending on the thickness of the bottom metallic mirror. In both types of MIM resonators, the properties of the top layer, including its absorption characteristics, influence the optical response of the resonator. When a lossy top layer is used in a transmissive resonator, it absorbs and dissipates energy, resulting in decreased transmission efficiency and weak interference interactions as light exits through the thin (< skin-depth) bottom metallic layer. In contrast, reflective resonators intentionally incorporate a lossy top layer to enhance absorption. In this configuration, incident light is reflected by the thick (> skin-depth) bottom metallic layer, leading to constructive interference due to differences in optical path length, which enables the amplification of specific wavelengths 75,76. The metal-hydrogel-metal (MHM) configuration, in which the insulator layer is replaced with a hydrogel, is intriguing because it can provide tunable optical responses through an external stimulus. From this, a colorimetric sensor was demonstrated by utilizing a chitosan MHM, which exhibited changes in thickness in response to variations in the surrounding relative humidity (RH). This sensor was successfully integrated with a photovoltaic (PV) cell utilizing a transmissive MHM configuration (Fig. 1a) 77. The integration of the sensor with the PV cell enabled the generation of an electric signal from transmitted light. The presence of chitosan allowed modulation of the transmissive resonance from approximately 600 to 750 nm with humidity changes (RH 7.5 to 83.7%), thereby enabling tunable coloration. Despite its facile fabrication process, the MHM configuration has proven suitable for optical humidity sensors. However, the presence of a densely deposited film obstructed the access of humidity, thereby causing a decline in responsivity. Consequently, a novel approach was introduced to enhance sensor responsivity in subsequent research. To improve humidity permeability, the top deposited metal was replaced with a disordered layer of Ag nanoparticles (NPs) in the MHM (Fig. 1b) 78.
While conventional physical deposition methods, such as electron beam deposition and sputtering, were commonly used to create dense and uniform metallic films, these films tended to act as barriers hindering the penetration of moisture molecules, thereby reducing responsivity. In contrast, a film composed of disordered Ag NPs promoted the penetration of moisture molecules. The improved moisture penetration, attributed to the Knudsen effect related to gas flow in the noncontinuum regime 79, contributed to the accelerated response of this colorimetric sensor. The achieved response (RH 20 to 80%) and recovery (RH 80 to 20%) times were 141 ms and 140 ms, respectively. This result demonstrated a significant improvement over the previous study, which achieved a response time of 3,800,000 ms. This sensor was used as a humidity-responsive display with applications in anticounterfeiting, providing rapid inspection capabilities through its ultrafast response. Moreover, the upper film, which facilitated optical interactions, introduced a response/recovery time delay in stimuli-hydrogel interactions depending on its density and material characteristics.
Furthermore, a multilayer film construction for edge-enhanced imaging, built by stacking metallic and hydrogel layers, was demonstrated (Fig. 1c) 80. Thermally evaporated Ag and spin-coated polyvinyl alcohol (PVA) were alternately stacked to form a five-layer nanoslide. When the numerical aperture (NA) increased (from 0 to 0.8), the transmittance noticeably decreased (510 to 580 nm). Angular-sensitive transmittance from the multiple interference effect 81,82 allowed high-frequency passing and edge enhancement at 510 nm. The nanoslide alternatively functioned as a low-frequency-pass filter for bright-field imaging because the multilayer stack had wavelength sensitivity 83, causing a decrease in transmittance as the incident angle increased at 580 nm. In addition, active optical filtering was achieved by adjusting the RH and switching between bright-field and edge-enhanced imaging due to nanocavity swelling. This holds potential for real-time dynamic image processing, biological imaging, and analog computing.
Moreover, hydrogels can be grown as films via chemical synthesis. Among these methods, surface-initiated atom transfer radical polymerization (SI-ATRP) is a well-known methodology for growing hydrogel films 84. SI-ATRP polymerization is initiated by the addition of a monomer, a catalyst, and the SI-ATRP initiator attached to the hydrogel matrix. During polymerization, polymer chains are formed through the controlled addition of monomers to the radical sites generated by the catalyst. The polymer chains grow within the hydrogel matrix, resulting in the formation of a hydrogel with controlled chemical composition and configuration.

Fig. 1 Diverse applications of photonic devices integrated with a two-dimensional hydrogel thin film. a Chitosan-integrated metal-hydrogel-metal (MHM) filter and its transmission spectra for various chitosan thicknesses 77. b Humidity-responsive security labels using disordered metal nanoparticles 78. c Multilayered nanoslide for switchable image processing by tuning humidity 80. d Dye-containing MHM cavity for emission tuning 86. e Indirectly built MHM nanoarray plasmonic cavity with pH responsiveness 88. f MHM structure with a transition from the bound state in the continuum (BIC) to the quasibound state in the continuum (qBIC) driven by moisture 90. Reproduced with permission from Wiley (2020), AAAS (2022), Wiley (2023), ACS Publications (2022), Royal Society of Chemistry (2023), and Wiley (2022)

Based on this method, a temperature-responsive MHM using poly(N-isopropylacrylamide) (PNIPAm), which changed shape at a specific temperature, was also reported 85. Furthermore, photoluminescent hydrogels with added emitters were integrated into the MHM cavity as an emission-tunable platform (Fig. 1d) 86. Poly(N-isopropylacrylamide)-acrylamidobenzophenone (PNIPAm-BP) containing the emitter rhodamine B (RhB) was spin-coated on the e-beam-deposited bottom Au layer and cross-linked through exposure to ultraviolet (UV) light. To improve the film quality, which is directly related to the optical properties, an alternative solution was spin-coated in two steps 87. Subsequently, the top Au layer was thermally evaporated on the hydrogel layer. When the resonance of the MHM cavity aligned with both the absorption and emission bands of the emitter, the emission was optimally enhanced as a result of the synergistic combination of Purcell factor enhancement and excitation rate enhancement. The overlap with the absorption and emission bands of the emitter was controlled by humidity to achieve tunable emission, because the cavity resonance wavelength could be modified through the hydrogel thickness. The sample color changed reversibly as the RH value rose and fell between 3 and 80%, resulting in a redshift of the cavity resonance by 40 nm, from 548 to 588 nm, and a nearly twofold enhancement in the emission intensity. This large spectral shift is particularly significant for sensing applications.
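The humidity-driven resonance shift reported above (548 to 588 nm) can be rationalized to first order by the F-P condition λ = 2nd/m as the hydrogel spacer swells. The sketch below is a rough illustration: the mode order, hydrogel index, and thicknesses are assumed values, and mirror phase shifts and the index drop upon swelling are neglected.

```python
# Rough MHM resonance-shift estimate: for a fixed mode order m, the F-P
# condition lambda = 2 * n * d / m means the resonance scales linearly with
# spacer thickness d. Mirror phase shifts and the index drop on swelling are
# neglected here, so this is a first-order illustration only.
def resonance_nm(n: float, d_nm: float, m: int) -> float:
    return 2.0 * n * d_nm / m

n_gel = 1.45            # assumed hydrogel index
m = 1                   # assumed mode order
d_dry = 189.0           # nm, chosen so the dry resonance sits near 548 nm
lam_dry = resonance_nm(n_gel, d_dry, m)
lam_wet = resonance_nm(n_gel, d_dry * 1.073, m)  # ~7.3% thickness swelling
print(f"{lam_dry:.0f} nm -> {lam_wet:.0f} nm (redshift {lam_wet - lam_dry:.0f} nm)")
```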
With further advances beyond the MHM, structured metallic nanoarrays have been used in place of a top metal layer for dynamic plasmonic color displays. An Ag triangle array was detached from its original substrate and transferred onto a spin-coated poly(N,N-dimethylaminoethyl methacrylate) (PDMAEMA) layer by means of a mediating polylactic acid (PLA) film (Fig. 1e) 88. The Ag triangle nanoarray was selected because of its strong localized surface plasmon resonance and the simplicity of its fabrication using colloidal lithography 89. This indirect "layer-by-layer" building strategy avoided the chemical contact with the hydrogel that tends to occur during direct conventional fabrication of a plasmonic structure. In this plasmonic cavity, gap surface plasmon resonance and a cavity resonance similar to F-P interference occurred simultaneously. The coupling mode between the top and bottom metallic layers was determined by the thickness of the hydrogel layer. In addition, by changing the pH value of the surrounding environment, the colors of the MHM plasmonic cavity could be precisely controlled because PDMAEMA exhibited a substantial swelling response. A humidity-driven bound state in the continuum (BIC) switch, realized through a metal nanograting transferred onto a metal-hydrogel configuration, was also demonstrated (Fig. 1f) 90. In particular, the plasmonic BIC was a resonant state of light confined within metallic nanostructures with a long lifetime, which is advantageous for applications such as detection and lasing. This device switched from BIC to quasi-BIC (qBIC) under RH exposure. During the RH exposure process, full switching to qBIC was observed due to the increase in resonance intensity along with the shift of the plasmonic BIC wavelength, ensuring a response time of less than 1 second. These outstanding optical properties and fast response speeds could be used for applications such as humidity detection sensors. As mentioned above, the direct transfer of arbitrary structures onto hydrogels could expand the range of processable hydrogels.
Photopolymerization
A photocurable hydrogel precursor composed of a hydrogel and a photoinitiator can undergo crosslinking under UV exposure 91,92. The photoinitiator initiates a photochemical reaction upon UV light irradiation, leading to the generation of free radicals. These free radicals facilitate the formation of covalent bonds among polymer chains, thereby establishing a crosslinked network. Consequently, the photocurable hydrogel precursor is capable of forming films and structures when subjected to UV exposure, with or without a mask, owing to the presence of the photoinitiator. Through the use of particles or masks, hydrogel structures can be fabricated at the few-hundred-nm scale.
A colorimetric analysis system based on a simple hydrogel film was demonstrated for the detection of volatile vapors (Fig. 2a) 93. The poly(HEMA-co-AAc) precursor was coated onto a mirror substrate, exploiting the interference of two light waves reflected at the air-hydrogel and mirror-hydrogel interfaces. By controlling the degree of polymerization through exposure using a photomask, the film thicknesses were selectively varied during the precursor polymerization process. The resulting hydrogel films exhibited different swelling behaviors in response to the amounts of volatile vapors, leading to highly accurate detection through color changes. Additionally, the exposed surface of the hydrogel enabled rapid response and recovery times by facilitating direct reactions at the interfaces.
Particle or porous structures allow optical modulation via the effective refractive index of the medium or the optical path length 94,95. Enhancement of the light-matter interaction by utilizing PCs consisting of particle- or pore-embedded hydrogels has been attempted 96. In photopolymerization, the dispersion contains the hydrogel, the target particle, and the photoinitiator. In general, a particle-embedded PC is prepared by a simple process of coating and photocuring a particle-hydrogel dispersion. A pore-embedded PC is prepared using a two-step process involving photocuring of a particle-hydrogel dispersion film, followed by etching of the particles.
Various PC-based hydrogel films have been investigated. First, a macroporous poly(ethylene glycol)diacrylate (PEGDA) film demonstrated rapid humidity sensing by reversible switching between a transparent dried state and a colored wet state (Fig. 2b) 97. This macroporous hydrogel was prepared by photocuring a silica-PEGDA precursor followed by selective etching of the silica particles. In the dried state, it exhibited high transparency due to the random collapse of the macropores and loss of their ordered arrangement. When the films were swollen by water, ethanol, or a mixture of the two, the macropores were restored and recovered their ordered structure, resulting in structural colors. Notably, the resonance wavelength in the wet state was redshifted with increasing concentration of a minor component in the mixture, which promises simple and sensitive measurement of trace concentrations in mixtures. Moreover, the macroporous hydrogel could be tailored into various shapes and size distributions, further expanding its applications. Furthermore, micro-opal structured hydrogels have great potential for use in a wide range of sensing applications (Fig. 2c, d) 98,99. For a rapid and cost-effective process, micromolding-based evaporation-polymerization was demonstrated. The micropatterning procedure began with the evaporative deposition of polystyrene (PS) beads onto patterned polydimethylsiloxane (PDMS). Subsequently, the PEGDA precursor was cast onto the PS beads, and photocuring was initiated to form a bilayered micro-opal structure (Fig. 2c) 98. The structural color of the fabricated bilayer could be controlled by adjusting the concentration of PEGDA and the size of the beads. Moreover, it could also exhibit pH responsiveness by incorporating acrylic acid (AA) or methacrylic acid (MAA) with carboxylate functionality into the hydrogel. The hydrogel swelling changed the spacing between the PS beads according to the RH/pH state, and a significant peak shift of up to 88 nm, resulting in distinct color changes, was observed. In a similar vein, a composite of thermoresponsive PNIPAm hydrogel, PS NPs, and graphene oxide (GO) was introduced to achieve a drug delivery (load/release) system that responded to temperature changes (Fig. 2d) 99. It was achieved by in situ polymerization of the PNIPAm hydrogel within the gaps of self-assembled PS templates.
The addition of GO, with its lamellar structure and light absorption properties, resulted in the construction of structural color templates. PNIPAm exhibited responsiveness to both temperature and alcohol, which originated from its inherent chemical composition. As the temperature increased above the LCST (~32 °C), the polymer chains within the hydrogel underwent a transition from an extended coil structure to a collapsed globule configuration. This structural change caused the hydrogel to shrink, reducing the lattice spacing between the embedded microspheres and prompting a blueshift of the reflected wavelength, and vice versa. Additionally, when the concentration of the ethanol solution was below 40 wt%, the hydrogel underwent contraction and collapse due to the adsorption of ethanol molecules onto the polymer chains, thereby reducing the lattice spacing. Based on these phenomena, real-time monitoring of temperature and alcohol concentration was demonstrated. Hydrogel composites were created by mixing superparamagnetic Fe3O4 NPs into a mixture of AA and PEGDA for structural color (Fig. 2e) 100. A hydrogel-based PC was achieved through magnetically induced self-assembly of the Fe3O4 NPs, followed by photopolymerization using a mask. The Fe3O4 NPs were self-organized into PCs by an external magnetic field, resulting in vivid and tunable structural colors within the hydrogels. Notably, the hydrogel-based PC swelled in water, increasing the gap between adjacent Fe3O4 NPs and producing a noticeable redshift of the reflection peak. Conversely, upon exposure to light, the swollen hydrogels shrank due to the photothermal properties of the Fe3O4 NPs, resulting in a blueshift of the reflection peak as the water within the hydrogel evaporated. In each case, swelling or deswelling thus shifted the reflection peak to the red or blue, ultimately inducing a noticeable color change.
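The structural colors of such particle-embedded PCs are commonly estimated with the Bragg-Snell relation, λ = 2 d111 (n_eff² − sin²θ)^(1/2), where d111 is the (111) interplanar spacing of the colloidal lattice. The sketch below uses hypothetical values, not parameters from the cited studies.

```python
# Bragg-Snell estimate for an opal-type photonic crystal (illustrative values):
# lambda = 2 * d111 * sqrt(n_eff**2 - sin(theta)**2), where d111 is the (111)
# interplanar spacing; hydrogel swelling changes d111 and shifts the peak.
import math

def bragg_peak_nm(d111_nm: float, n_eff: float, theta_deg: float = 0.0) -> float:
    s = math.sin(math.radians(theta_deg))
    return 2.0 * d111_nm * math.sqrt(n_eff**2 - s**2)

d_dry, n_eff = 190.0, 1.40          # assumed spacing and effective index
print(bragg_peak_nm(d_dry, n_eff))          # ~532 nm (green)
print(bragg_peak_nm(d_dry * 1.12, n_eff))   # 12% swelling -> ~596 nm (redshift)
```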
Electron beam lithography (EBL)
EBL provides an opportunity to fabricate hydrogel structures at the sub-50 nm scale 101. Unlike conventional nanostructure fabrication involving etching, hydrogels can directly serve as the resist itself, enabling the direct formation of nanostructures. When an electron beam is exposed to a specific region of the hydrogel film, the high energy breaks the molecular bonds within the hydrogel, forming radicals, which leads to crosslinking of adjacent polymer chains 102,103. After cross-linking, the structured hydrogels retain their intrinsic characteristics as hydrogels but become insoluble in solvents. This property enables them to be directly utilized as negative resists 102. Therefore, several studies have integrated hydrogels with EBL, which offers the advantages of a high degree of freedom and resolution 12.
By exploiting the benefits of simple fabrication and humidity-responsive tunability, a study proposed the direct fabrication of PVA nanopillars by exposing a PVA film to an electron beam 104. PVA nanopillars whose diameter could be continuously varied by modulating the radial exposure-energy gradient of the electron beam were fabricated on an Ag film, and structural color was implemented using the surface plasmon resonance (SPR) phenomenon at the metal-dielectric interface. To induce tunable plasmon resonance in the visible regime, PVA nanopillars a few hundred nm in diameter were formed via an aligned electron beam on the Ag mirror substrate, and then a thin Ag film was deposited. As the diameter of the PVA nanopillars increased, the SPR was modulated, inducing tunable coloration and beam steering in the visible regime (Fig. 3a) 105.
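The SPR underlying this coloration follows from the surface-plasmon-polariton dispersion at a metal-dielectric interface, k_spp = k0 [εm εd / (εm + εd)]^(1/2). The sketch below uses illustrative round-number permittivities, not fitted values from the cited work, to show how swelling of the hydrogel dielectric shifts the SPP effective index.

```python
# Surface plasmon polariton effective index at a metal-dielectric interface:
# n_spp = Re{ sqrt(eps_m * eps_d / (eps_m + eps_d)) }. Permittivities below are
# illustrative round numbers, not fitted values from the cited work.
import cmath

def n_spp(eps_metal: complex, eps_dielectric: float) -> float:
    return cmath.sqrt(eps_metal * eps_dielectric /
                      (eps_metal + eps_dielectric)).real

eps_ag = -15.0 + 0.5j   # rough Ag permittivity in the visible
for eps_d in (1.8, 2.0, 2.2):  # swelling lowers eps_d toward that of water
    print(f"eps_d = {eps_d}: n_spp = {n_spp(eps_ag, eps_d):.3f}")
```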
Grayscale electron beam lithography (G-EBL) is a technique that exploits the electron beam-hydrogel crosslinking effect and enables continuous control of the height and diameter of the hydrogel structure through modulation of the electron beam exposure dose 106. This technique is attractive because it can provide advanced modulation of complex light, corresponding to intensity and phase, in the photonics approach. Due to this advantage, many studies have adopted G-EBL to fabricate complex light-modulating nanophotonic devices, such as tunable structural color and holography.
Since hydrogels have a low n (~1.5), they exhibit weak modulation performance. To overcome this limitation, attempts have been made to introduce cavity configurations that facilitate modulation through complementary light interference within the cavity.
First, a pixelized MHM consisting of a grayscale PVA cavity was demonstrated for humidity-responsive dynamic displays (Fig. 3b) 107. G-EBL allowed precise control over the PVA thickness, resulting in programmable reflective resonance covering the entire visible regime. The initial color was determined by the initial PVA thickness; as the RH exposure increased, the color was redshifted along with the PVA swelling. For modulation analysis, the RH condition was varied within a range of 9.8 to 90.1%, and the resonance shift reached over 50 nm. The combination of G-EBL and hydrogel resists thus presented a novel solution to overcome the height limitations in F-P devices.
In addition, photonic devices in a similar configuration demonstrated multiplexed imaging with humidity-responsive behavior based on a stepwise MHM (Fig. 3c) 108. Each MHM pixel was designed with a different PVA thickness to achieve complete decoupling of the amplitude and phase correlation, enabling independent encoding freedom. By employing spatial multiplexing of one cell into a superpixel scheme, RGB triple channels were utilized to independently encode multiple images, such as rainy, lightning, and sunny signs. The use of PVA as the core layer of the MHM resulted in a redshift of the operational wavelength as the RH increased (dry to humid), enabling a real-time dynamic tunable display between interchannels. Consequently, it became possible to simultaneously switch between near-field images and far-field metaholography in real time. As follow-up research, similar transmission devices were proposed to further advance the field of photonics 109. This device comprised a photonics system that combined dual-channel dynamic color printing and switchable metaholography. This approach sparked subsequent research in various applications, including tunable displays, encryption, and humidity-based optical sensor technologies. Furthermore, an independently programmable meta-display switch was demonstrated by encoding meta-pixels into a multiplexed matrix, which included nanoprinting images and metaholography (Fig. 3d) 110. In this case, hydrogel swelling dynamics were utilized, and the hydrogel nanoantennas were scaled up or down, actively switching the dominant resonance mode between localized surface plasmon resonance (LSPR) and F-P. This created amplitude-programmable possibilities and encoding degrees of freedom.
A simplified grayscale-MHM fabrication process inducing PVA shrinkage via a direct dose-scanning process was reported (Fig. 3e) 111. As mentioned above, when the high-energy electron beam was exposed to the hydrogel, it became crosslinked without a crosslinker, with volume shrinkage occurring at the same time. By exploiting the volume shrinkage of PVA, the depth of the PVA within the MHM could be determined by the dose scanning, and the development process was omitted, thereby simplifying the fabrication. With the adoption of a similar multiplexing strategy and by encoding the transmission phase into the stepwise MHM, the color image and a dynamically projected hologram were modulated in real time through humid exhalation. The proposed active displays exhibited rapid responsiveness to surrounding RH changes at the millisecond level (<150 ms).
In these ways, EBL has contributed greatly to the achievement of advanced dynamic photonic devices through its high process flexibility and resolution.
Nanoimprint lithography (NIL)
In addition to pattern formation by light-source and electron-beam exposure, there have been attempts to fabricate photonic devices with the simple mechanical pressure mechanism of NIL 94,112. NIL is an intuitive process enabling semipermanent and parallel production by printing onto a resin using a soft mold replicated from a master mold. This process can provide higher throughput and larger scalability than other top-down lithography techniques; thus, NIL is cost-effective. Another advantage is its high resolution and pattern fidelity. Due to the direct contact between the mold and the substrate, the pattern transfer is highly precise, and pattern resolutions limited only by the mold resolution can be achieved. Moreover, advanced NIL techniques, such as roll-to-roll, are gaining prominence for their large-scale applicability to a wide range of substrates, including flexible and curved substrates. Recently, imprinting of subwavelength metasurfaces was attempted by utilizing a bilayer soft mold consisting of rigid hard PDMS (h-PDMS) and flexible buffered PDMS 113,114. Due to the high viscosity and low compression modulus of conventional PDMS, the main soft-mold material, it is possible to pattern structures only down to about 400 nm. In contrast, low-viscosity h-PDMS allows the compression modulus to be tuned through the vinyl ratio of the prepolymer and hydrosilane and can be patterned with high aspect ratios and high resolution. With this approach, it has become feasible to extend the resolution to approximately 50 nm 115.
From this, a bilayer soft-mold imprinting process was introduced to fabricate nanopixel-based high-resolution displays (Fig. 4a) 116. The nanopixels (~700 nm²) utilized the F-P resonance occurring in the metal-PVA-metal NP structure to display reflective color. The presence of a hydrogel within the pixel produced a redshift of the resonance peak upon swelling in response to RH increasing from 20 to 90%, allowing the entire RGB gamut to be displayed at each pixel. The various thicknesses had different absorption peaks and thus could represent a different color for each thickness. This study used an advanced fabrication technique, EBL overlay, to fabricate a multilevel pixelized master mold. A dual-aligned EBL process produced a master mold consisting of nanopixels of different heights (Pixel A and Pixel B), which could be replicated in the form of the aforementioned bilayer soft mold. The imprint process could simultaneously transfer not only the two pixel heights but also three thickness levels, including the residue layers that naturally occur during the process. Direct spin-coating of Ag NPs onto the printed samples enabled high-resolution chameleon imaging composed of vivid pixels. The spin-coating process, which omitted a vacuum step, could improve productivity; furthermore, due to the disordered dispersion of the particles, the porosity contributed to the response/recovery time through the Knudsen effect. This colorimetric sensor showed a quick, chameleon-like reaction of hundreds of ms depending on the RH and had vivid colors and wide color modulation, presenting potential use as a humidity sensor.
Subsequent research showed that a PVA-based metasurface enabled irreversible/reversible optical encryption (Fig. 4b) 115. The PVA metasurface was created by spin-coating aqueous PVA onto a soft mold, which was then imprinted onto the substrate through the application of pressure. Due to the use of a water-soluble resin, the mold could be washed with water and reused. The device was designed to multiplex holograms (far-field) and structural colors (near-field) with a geometric-phase-based design. This device could show independent optical reactions based on the input light source. Additionally, it could exhibit reversible or irreversible properties depending on the deposition process, which in turn enabled the device to be highly flexible and adaptable to its intended purpose as an encryption device. When the hydrogel metasurface was exposed to high humidity, the PVA rapidly expanded and destroyed the nanostructures through aggregation between adjacent meta-atoms. In the swelling process, the hidden information ('PASSWORD') encrypted in the metasurface could be decrypted at approximately RH 68%. After exposure to 85% RH, the meta-atoms containing the hidden information were destroyed and could no longer act as optical modulators, so the device behaved irreversibly. Another interesting result was that an identically fabricated metasurface could be switched to a reversible device through the deposition of a thin 10 nm layer of metal. In contrast to the aforementioned irreversible devices, no structural defects were found in the metal-coated PVA meta-atoms over 100 repeated RH exposures. The preservation of the hydrogel meta-atoms indicated that reversible optical encryption-decryption could be maintained under RH exposure. Thus, the tunable morphology and fabrication convenience of hydrogels improved metasurface applicability, with potential encryption and sensing applications.
The aforementioned research has demonstrated the compatibility between the hydrogel and NIL processes and this has led to subsequent research aimed at ensuring productivity.Thermal-based NIL technology, which enables not only high fidelity but also ultrafast printing, has been utilized for photonic device fabrication 117 .This method can achieve fast production through the integration of the master mold and underneath n-doped silicon-based Joule heaters due to the heat being confined to the structured surface and rapid heating/cooling.A humidity-sensitive reflection grating was fabricated by direct ultrahigh-speed printing onto a hydrogel grown on a substrate using initiated chemical vapor deposition (Fig. 4c) 117 .The hydrogel grating can generate diverse diffraction effects, including structural color, depending on structural parameters and angles.The presence of a hydrogel can promote a gradual color change through an expansion/contraction process in response to changes in humidity conditions, potentially applying to sensors and other devices that can detect changes in humidity.In addition, the newly introduced printing equipment can easily fabricate cm-sized nanostructure arrays at ultrahigh speed (more than 1000 pieces/hour).The development of a rapid fabrication process can facilitate the field of hydrogel-photonics both in terms of commercialization and academic research.
The integration of hydrogels and photonic devices provides an additional means of tuning optical responses through external stimuli and is a unique research field on its own.Additionally, hydrogels are inexpensive, ecofriendly, and compatible with various fabrication processes, causing them to be a promising material.Additionally, their applicability to mass production processes, such as NIL, has been demonstrated, and their potential applications are expected to gradually expand.
Conclusion and outlook
The development and attainment of hydrogel photonic devices through diverse micro/nanofabrication processes have provided new opportunities in the field of nanophotonics (Table 1).Hydrogels are compatible with diverse micro/nanofabrication platforms, such as coatings, photopolymerization, EBL, and NIL, providing fabrication advantages.In particular, hydrogel-based photonics are suitable for commercialization due to their facile fabrication, such as NIL, enabling mass production.NIL can be combined with roll-to-roll techniques, enhancing the potential for high-throughput fabrication.In addition, hydrogels demonstrate a novel mechanism for geometric modulation based on their inherent swelling/deswelling properties, providing tunability for photonic devices (Table 2).Moreover, the hydrogel stimuli-responsive characteristics are determined by chemical composition, which can be extended to various stimulus-responsive optical devices by replacing the other hydrogel.Continued advancements in the fabrication process and hydrogel development have demonstrated the tremendous potential of the hydrogel photonic devices 92,[118][119][120][121][122] .
There have been attempts to commercialize hydrogels with nontoxicity, swelling capabilities, and transparency in bioindustries, such as Lab-on-a-Chip 123,124 and drug delivery 125,126 .Nevertheless, hydrogels still have not achieved remarkable commercialization in the field of active photonics.Several points need to be considered in the development of future hydrogel-based photonic devices.
In terms of tunable photonics applications, the two main factors of 'response/recovery time' and 'deformation range' should be used to evaluate the performance of hydrogel photonics.In general, response/recovery time can be explained as the difference in the time required to reach from 10% to 90% (T 10-90 ) and 90% to 10% (T 90-10 ) intensity of each equilibrium state 78,116 .However, the experimental measurements are not standardized, and a few studies have adopted this method, causing difficulty in comparing the sensitivity.Similarly, the deformation range is similar.The dynamic response of hydrogels occurs by hydrogel molecules, and there is no specific molecular ratio setting for performance comparison between other hydrogels.In particular, humidity-responsive PVA has a nonlinear relationship with the storage modulus of PVA.
With an increase in RH, the relaxation process of the polymer is accelerated by the drastic absorption of more water molecules, which leads to an increase in the swelling rate along with the disruption rate of the hydrogen bonds at a high RH 127 .Therefore, setting a specific measurement range is important for verifying the deformation range.Furthermore, the optical characteristics, response speed, and deformation range of the hydrogels can be adjusted to meet the specific requirements of the application.A combination of the hydrogel content and additives of other functional materials (polymers, nanoparticles, stimuli-active materials) can be added to allow for tunability in the optical characteristics, response time, and deformation range.
Additionally, hydrogels can be used to enhance the mechanical stability, reliability, and durability and improve the optical performance.Moreover, through the deeper exploration of the dynamic characteristics of hydrogels at the nanoscale, standardizing the physical properties of hydrogels can promote successful commercialization.However, it remains challenging to analyze the real-time changes in morphology and refractive index associated with hydrogel swelling/deswelling at the nanoscale.Furthermore, the path to commercial success necessitates the establishment of comprehensive performance characterization and testing protocols.These protocols need to assess the reliability and stability performance of hydrogel photonic devices under varying environmental conditions, such as temperature, humidity, or pH.By ensuring their functionality and durability across various conditions, hydrogels can advance the applicability of the hydrogel-based photonic devices.
Fig. 3
Fig.3Electron beam lithography approaches for achieving hydrogel-based tunable photonic devices.a Direct writing of PVA pillars with continuously varying diameters to enhance surface plasmon polaritons (SPPs)105 .b Reflective-type metal-hydrogel-metal (MHM) structure using humidity-responsive PVA as an insulating layer for full color generation depending on the relative humidity in the visible wavelength107 .c Attainment of nanoprinting in the near field and metaholography in the far field simultaneously by spatially multiplexing the MHM structure to superpixels108 .d Reflectivity change of the MHM structure by tuning between the localized surface plasmon resonance (LSPR) mode and lossy Fabry-Pérot (F-P) mode for the attainment of nanoprinting and metaholography110 .e Transmissive-type MHM structure using EBL-induced polymer shrinkage for nanoprinting and metaholography111 .Reproduced with permission from Wiley (2022), Degruyter (2021), Wiley (2021), Wiley (2023), and Wiley (2022)
Table 2
Categorization of deformable hydrogel photonic devices | 9,268 | 2024-01-01T00:00:00.000 | [
"Materials Science",
"Physics",
"Engineering"
] |
Normal activity of microsomal triglyceride transfer protein is required for the oleate-induced secretion of very low density lipoproteins containing apolipoprotein B from McA-RH7777 cells.
The requirement of the activity of microsomal triglyceride transfer protein (MTP) for very low density lipoprotein (VLDL) secretion was determined using McA-RH7777 cells stably transfected with human apoB48 (hB48). Secretion of VLDL containing hB48 (hB48-VLDL) by the transfected cells was induced by exogenous oleate (0.4 mM), and oleate-dependent VLDL secretion was selectively inhibited by brefeldin A (0.2 microg/ml). Two protocols were used to determine the effect of MTP inhibition on VLDL secretion. In the first protocol, cell protein and lipid were labeled with radioactive amino acids and oleate prior to MTP inhibition (using 5 microM of the photoaffinity inhibitor BMS-192951 to reduce MTP activity by 65-70%), and secretion of prelabeled apoB and triacylglycerol (TG) associated with lipoproteins was monitored during oleate-supplemented chase. In control cells, a 6-fold increase in incorporation of prelabeled TG into hB48-VLDL was observed after oleate supplement, while incorporation of prelabeled TG into VLDL containing endogenous rat apoB100 (rB100-VLDL) was unaffected. Inhibition of MTP activity abolished the oleate-induced utilization of prelabeled TG (by 80%) and hB48 (by 70%) for hB48-VLDL secretion but decreased utilization of pre-existing TG (by <25%) and B100 (by 45%) for rB100-VLDL secretion to a lesser extent. Inhibition of MTP did not affect incorporation of prelabeled TG or hB48 into high density lipoproteins containing hB48 (hB48-HDL). In the second protocol, MTP was inactivated prior to metabolic labeling of protein and lipid, and secretion of newly labeled apoB and TG as lipoproteins was monitored after oleate supplement. Under this condition, MTP inhibition decreased incorporation of newly labeled TG (by 80%) and hB48 (80%) into hB48-VLDL but did not affect their incorporation into hB48-HDL. Additionally, MTP inhibition decreased incorporation of newly labeled TG (by 50%) and rB100 (by 90%) into rB100-VLDL. Thus, normal activity of MTP is required for the oleate-induced secretion of hB48-VLDL from McA-RH7777 cells.
The requirement of the activity of microsomal triglyceride transfer protein (MTP) for very low density lipoprotein (VLDL) secretion was determined using McA-RH7777 cells stably transfected with human apoB48 (hB48). Secretion of VLDL containing hB48 (hB48-VLDL) by the transfected cells was induced by exogenous oleate (0.4 mM), and oleate-dependent VLDL secretion was selectively inhibited by brefeldin A (0.2 g/ml). Two protocols were used to determine the effect of MTP inhibition on VLDL secretion. In the first protocol, cell protein and lipid were labeled with radioactive amino acids and oleate prior to MTP inhibition (using 5 M of the photoaffinity inhibitor BMS-192951 to reduce MTP activity by 65-70%), and secretion of prelabeled apoB and triacylglycerol (TG) associated with lipoproteins was monitored during oleate-supplemented chase. In control cells, a 6-fold increase in incorporation of prelabeled TG into hB48-VLDL was observed after oleate supplement, while incorporation of prelabeled TG into VLDL containing endogenous rat apoB100 (rB100-VLDL) was unaffected. Inhibition of MTP activity abolished the oleate-induced utilization of prelabeled TG (by 80%) and hB48 (by 70%) for hB48-VLDL secretion but decreased utilization of pre-existing TG (by <25%) and B100 (by 45%) for rB100-VLDL secretion to a lesser extent. Inhibition of MTP did not affect incorporation of prelabeled TG or hB48 into high density lipoproteins containing hB48 (hB48-HDL). In the second protocol, MTP was inactivated prior to metabolic labeling of protein and lipid, and secretion of newly labeled apoB and TG as lipoproteins was monitored after oleate supplement. Under this condition, MTP inhibition decreased incorporation of newly labeled TG (by 80%) and hB48 (80%) into hB48-VLDL but did not affect their incorporation into hB48-HDL. Additionally, MTP inhibition decreased incorporation of newly labeled TG (by 50%) and rB100 (by 90%) into rB100-VLDL. Thus, normal activity of MTP is required for the oleate-induced secretion of hB48-VLDL from McA-RH7777 cells.
Two forms of apolipoprotein B (apoB) 1 are synthesized by the rat liver, the full-length apoB100 and apoB48, which represents the N-terminal 48% of apoB100 (1). Although the physiological significance of having two forms of apoB in rat liver is not clear (2), both forms of apoB have the ability to assemble very low density lipoproteins (VLDL) (3). The mechanism by which hepatic VLDL is synthesized has not been completely defined. However, significant progress has been made over the past several years concerning the formation and secretion of VLDL containing apoB48 (B48-VLDL) (4). Biochemical evidence has been obtained through studies with primary rat hepatocytes (5,6) and the rat hepatoma cell line McA-RH7777 (7) that B48-VLDL is assembled via two discontinuous lipidation stages in the endoplasmic reticulum (ER). In the first stage, apoB48 is associated with a small amount of lipid to form a primordial particle with high buoyant density. These high density lipoprotein (HDL) particles (designated B48-HDL) may be secreted from the cells if further lipid recruitment does not occur. Alternatively, the B48-HDL particle can undergo a second lipidation stage, expanding its lipid content, primarily triacylglycerol (TG), to form VLDL. This "two-step" assembly model is consistent with the early immunohistochemical studies of hepatic VLDL assembly in rats (8). In rat hepatoma cells, the conversion of B48-HDL into B48-VLDL is associated with increased synthesis of cellular lipid, and the process can be inhibited by brefeldin A (9) or cycloheximide (7). These results suggest that in addition to active TG synthesis, other factors involved in vesicular trafficking or lipid mobilization may participate in the second stage of B48-VLDL formation.
The microsomal triglyceride transfer protein (MTP) is a heterodimeric protein consisting of a 97-kDa catalytic subunit noncovalently linked to protein-disulfide isomerase (10). The ability of MTP to transfer TG between lipid membranes has been demonstrated in vitro (11), and deficiency in the MTP activity is associated with human abetalipoproteinemia (12,13). In HepG2 cells, a physical association between the hydrophobic sequences of apoB and MTP has been detected (14). It is unclear, however, whether the direct interaction between MTP and apoB is essential for the recruitment of lipid. The functional role of MTP in the secretion of lipoproteins containing apoB has been demonstrated by co-expression of MTP and apoB in heterologous cells that normally produce neither pro-tein (15)(16)(17)(18). Data from these reconstitution experiments clearly indicate that MTP indeed plays an important role in the assembly and secretion of lipoproteins containing apoB. However, since cell lines used for the reconstitution experiments lacked the ability to synthesize and secrete VLDL, the requirement for MTP activity in VLDL assembly, particularly in the second step assembly, could not be determined. Thus, an alternative approach to assess the involvement of MTP in VLDL synthesis is to use inhibitors that can specifically inactivate MTP in situ.
Inactivation of MTP using specific MTP inhibitors has recently been reported by several laboratories to inhibit apoB secretion from cells of hepatic or intestinal origin (19 -22). Invariably, the inhibition of MTP activity markedly decreased secretion of the full-length apoB100. However, the effect of MTP inhibition on apoB48 secretion was less consistent. In Caco-2 cells, while MTP inhibition resulted in significantly decreased secretion of apoB100, secretion of apoB48 was unaffected (22). In McA-RH7777 cells, inactivation of MTP seemed to only affect formation of B48-HDL (i.e. the product of the first step assembly) but had no effect on the conversion of B48-HDL into B48-VLDL (19). The differential effect of MTP inhibition on apoB48 secretion in different cells has not been explained.
Several laboratories including ours (23)(24)(25)(26) have presented experimental evidence that the size of lipoproteins is positively correlated with the length of the associated apoB polypeptide during the first step assembly. However, the length of apoB does not seem to play a major role in the conversion of apoB-HDL into apoB-VLDL during the second step assembly. In McA-RH7777 cells stably expressing recombinant human apoB variants or apoAI/B chimeric proteins, HDL that carried either truncated apoB variants (e.g. as short as apoB34) or apoAI/B chimeras containing a segment of apoB (e.g. as short as ϳ5% of apoB100) were readily converted into VLDL in the presence of exogenous oleate (27). Thus, conversion of apoB-HDL into apoB-VLDL in the second step may be determined primarily by synthesis of lipid and by protein factors that mobilize the lipid during assembly rather than specific apoB sequences. Since MTP plays an important role in lipid transfer, we hypothesize that the MTP activity is required for the mobilization of lipid that is utilized for the oleate-induced B48-VLDL assembly. This hypothesis was tested in the current work.
EXPERIMENTAL PROCEDURES
Materials-Culture media and sera were obtained from Life Technologies, Inc. Reagents for polyacrylamide gel electrophoresis were obtained from Bio-Rad. Sheep anti-human apoB antiserum was obtained from Boehringer Mannheim. CNBr-activated Sepharose 4B beads and protein A-Sepharose CL-4B beads were obtained from Pharmacia Biotech Inc. [ Cell Culture and MTP Inhibition-McA-RH7777 cells stably transfected with the human apoB48 (hB48) cDNA were generated and cultured in Dulbecco's modified Eagle's medium (DMEM) plus 20% serum as described previously (28). Inhibition of MTP in McA-RH7777 cells with the compound BMS-192951 was conducted according to an established protocol (19).
Ultracentrifugation of Metabolically Labeled Lipoproteins-The hB48-transfected cells (60-mm dish) were incubated with 400 Ci of [ 35 S]methionine/cysteine in 2 ml of DMEM (20% serum) Ϯ 0.4 mM oleate for up to 4 h. The conditioned media were collected, diluted to 5 ml with phosphate-buffered saline (pH 7.4), and subjected to sucrose density gradient ultracentrifugation as described previously (7). Twelve fractions (1 ml each) were collected from the top of the centrifuge tubes. The 35 S-labeled apoB proteins were immunoprecipitated and analyzed by polyacrylamide gel electrophoresis and fluorography as described previously (27).
Analysis of Metabolically Labeled Lipid-Cells were pulse-labeled with 10 Ci of [ 14 In some experiments, after pulse labeling a 1-or 2-h delay was introduced prior to the initiation of chase to deplete any residual intracellular [ 3 H]oleate pool, and inactivation of MTP with BMS-192951 was performed during the 1-h delay period. VLDL and HDL were separated by ultracentrifugation, and lipoproteins containing rat B100 (rB100-VLDL, d Ͻ 1.02 g/ml) or hB48 (hB48-VLDL, d Ͻ 1.02 g/ml or hB48-HDL, d ϭ 1.08 -1.13 g/ml) were separated by immunoaffinity chromatography. Lipids associated with these particles were extracted with chloroform/methanol. Separation of phospholipids and neutral lipids was performed on silica gel 60 plates as described (29) using egg yolk lipids as a carrier. Cell lipids were also extracted and separated by TLC. The radioactivity associated with individual lipid species was quantified by liquid scintillation counting (Wallac 1409 counter).
Immunoaffinity Chromatography and Lipid Analysis-Monoclonal antibody 1D1 (4 mg) was coupled to CNBr-activated Sepharose 4B beads (1 g) according to the manufacturer's instructions. Twelve fractions obtained from sucrose gradient ultracentrifugation of the conditioned [ 35 S]methionine labeling media were mixed with bovine serum albumin (final concentration 1%) and 100 l of 1D1-affinity beads for 16 h at 4°C to recover hB48-lipoprotein. After hB48-lipoprotein was precipitated, the supernatant was subsequently mixed with an anti-B antiserum to recover lipoproteins containing endogenous rat apoB100. Immunocomplexes with either hB48-or rB100-lipoproteins were washed five times with 1 ml of phosphate-buffered saline by centrifugation. Since these lipoproteins were mostly confined to density fractions 1 and 2 (VLDL, d Ͻ 1.02 g/ml) and 8 -10 (HDL, d ϭ 1.08 -1.13 g/ml), only three combined fractions (i.e. hB48-VLDL, rB100-VLDL, and hB48-HDL) were subjected to lipid analysis. The recovery of hB48-VLDL from the conditioned medium by immunoaffinity purification was greater than 90%, and the purified hB48-VLDL contained less than 15% endogenous rB100-VLDL as determined by Western blot analysis and by quantification of 35 S-labeled apoBs (data not shown).
Oleate-induced B48-VLDL Secretion by Human B48-transfected McA-RH7777 Cells-In
McA-RH7777 cells transfected with recombinant hB48, the secretion of hB48-VLDL is dependent upon oleate supplementation (27). This event is similar to the oleate-induced secretion of VLDL containing rat B48 (rB48) in nontransfected McA-RH7777 cells (7). As reported previously, overexpression of hB48 suppressed endogenous rB48 secretion (28). To demonstrate further that hB48-VLDL secretion was comparable with endogenous rB48-VLDL secretion, we tested the response of the transfected cell line to brefeldin A. It has been reported that the oleate-induced rB48-VLDL assembly (the second step) was sensitive to low doses of brefeldin A (9). We found that in hB48-transfected McA-RH7777 cells, secretion of hB48-VLDL was also sensitive to brefeldin A. Fig. 1A shows fluorograms of 35 S-labeled apoBs that were secreted as lipoproteins from the cells treated with (bottom) or without (top) 0.2 g/ml brefeldin A, and Fig. 1B shows the quantitative assessment of apolipoprotein secretion. In the absence of brefeldin A, hB48 was secreted as both VLDL and HDL (Fig. 1A, top). In the presence of brefeldin A, secretion of hB48-VLDL was decreased by 60% compared with control, whereas secretion of hB48-HDL was not decreased (Fig. 1A, bottom). As expected, endogenous rB100-VLDL secretion was also decreased by 60% by the brefeldin A treatment (Fig. 1A, bottom).
In these experiments, the amount of radioactivity associated with intracellular 35 S-labeled rB100 was 40 -50% lower in the brefeldin A-treated cells than control (during a 2-h labeling period), but the radioactivity associated with intracellular 35 S-labeled hB48 was unchanged (data not shown). Synthesis of TG and phosphatidylcholine (PC) was not affected by brefeldin A treatment, as measured by the incorporation of [ 3 H]glycerol during a 4-h labeling period (data not shown). These results together indicate that mechanisms responsible for the second step hB48-VLDL assembly and secretion are preserved in the transfected McA-RH7777 cells. Thus, in the subsequent experiments the hB48-transfected cells were used to examine the requirement of MTP activity for the oleate-induced B48-VLDL secretion.
Utilization of Pre-existing Triacylglycerol for B48-VLDL Secretion-To monitor lipid incorporation into secreted B48-VLDL, we first labeled the intracellular lipid pool with [ 14 C]oleate under basal conditions (i.e. DMEM plus 20% serum). During this labeling period, additional oleate mass was not present; therefore, the cells did not produce B48-VLDL. After [ 14 C]oleate labeling, the cells were washed and immediately incubated with DMEM (20% serum) containing 0.4 mM oleate to induce the second step. Oleate was not present in the medium of control cells. In addition, [ 3 H]glycerol was included in both chase media to label newly synthesized lipid. Fig. 2A shows that incorporation of [ 3 H]glycerol-labeled TG (newly synthesized) into secreted rB100-VLDL, hB48-VLDL, or hB48-HDL was increased by 4-, 16-, and 2-fold, respectively, by exogenous oleate. Similarly, incorporation of [ 3 H]glycerol-labeled PC into secreted rB100-VLDL and hB48-VLDL (but not hB48-HDL) was increased by the oleate treatment ( Fig. 2A). These results suggest that upon oleate supplementation, newly synthesized TG is utilized for both hB48-VLDL and rB100-VLDL secretion.
However, a striking difference was observed between incorporation of prelabeled [ 14 C]TG into secreted hB48-VLDL or rB100-VLDL. While oleate treatment had no apparent stimulatory effect on incorporation of [ 14 C]TG into rB100-VLDL, it increased incorporation of [ 14 C]TG into hB48-VLDL more than 6-fold (Fig. 2B). Measurement of [ 14 C]TG associated with hB48-VLDL could be an underestimate of secretion of prelabeled TG, since supplementation of the medium with oleate stimulated TG synthesis and inevitably decreased the specific activity of the [ 14 C]TG pool. There was a concomitant 2-fold decrease in the incorporation of [ 14 C]TG into hB48-HDL at the end of a 4-h chase (Fig. 2B). The effect of oleate on the incorporation of prelabeled [ 14 C]PC into the secreted lipoproteins, however, was similar to that for newly synthesized [ 3 H]PC. These results demonstrate that the unique feature associated with the oleate-stimulated hB48-VLDL secretion is the utilization of pre-existing TG. In the following experiments, we used incorporation of pre-existing TG into hB48-VLDL as a marker to assess the requirement of MTP activity in the second step assembly.
Activity of MTP Is Required for Oleate-induced hB48-VLDL Secretion-A photoactivated MTP inhibitor, designated BMS-192951 (19), was used to inactivate MTP in hB48-transfected McA-RH7777 cells. After incubation with cells for 1 h and subsequent photoactivation (under ultraviolet light for 15 min), BMS-192951 at 5 or 10 M reduced the MTP activity by 65-70%. The inhibitory effect persisted for at least 8 h (data not shown).
The effect of MTP inhibition on the recruitment of prelabeled TG during oleate-induced second step was determined by pulse labeling of the cells with [ 3 H]oleate (4 h), inactivating MTP with BMS-192951 (1 1 ⁄4 h), and monitoring lipoprotein secretion during oleate-supplemented chase. In preliminary experiments, we found that introducing a 1-or 2-h delay period between pulse and chase (Fig. 3A) did not affect the oleateinduced secretion of pre-existing TG as hB48-VLDL. After a 1or 2-h delay, incorporation of prelabeled TG into secreted hB48-VLDL was again increased by 5-7-fold upon oleate supplementation (Fig. 3B), while incorporation of prelabeled TG into rB100-VLDL was unchanged (Fig. 3C). Increased TG synthesis during oleate-supplemented chase did not significantly alter the intracellular pool of the prelabeled TG or PC, although a small increase in labeled TG and a slight decrease in labeled PC were consistently observed (Fig. 3, D and E). These results indicate that the oleate-stimulated recruitment of pre-existing TG for hB48-VLDL secretion is not diminished after a 1-or 2-h delay.
However, recruitment of pre-existing TG for hB48-VLDL secretion was abolished by MTP inhibition (Fig. 4). At 5 M BMS-192951, incorporation of 35 S-labeled rB100 and hB48 into secreted VLDL decreased by 45 and 70%, respectively, as compared with cells treated with no inhibitor (Fig. 4, A and B). MTP inhibition had little effect on secretion of 35 S-labeled hB48 with HDL. Inactivation of MTP also decreased the incorporation of lipid into VLDL. Secretion of prelabeled [ 3 H]TG and [ 3 H]PC associated with hB48-VLDL was decreased by 80 and 85%, respectively, at the end of a 4-h chase (Fig. 4C) (Fig. 4C). These results provide evidence that utilization of pre-existing TG for the oleate-induced hB48-VLDL secretion is sensitive to MTP inhibition. The relatively small effect on secretion of pre-existing TG as rB100-VLDL suggests that a considerable amount of rB100-VLDL particles are probably formed before the oleate-induced second step. Fig. 3, D and E), indicating that MTP inactivation did not alter the pools of prelabeled lipid. Nor did MTP inhibition affect secretion of endogenous rat apoAI as HDL (data not shown).
The turnover of prelabeled [ 3 H]TG and [ 3 H]PC in MTPinactivated cells during oleate-supplemented chase was identical to that in untreated cells (see
We then examined the effect of MTP inhibition on VLDL secretion by inactivating MTP prior to metabolic labeling of apoB and lipid. Inactivation of MTP diminished the secretion of 35 S-labeled apoB proteins associated with hB48-VLDL (by 80%) or rB100-VLDL (by 90%) as compared with cells treated without inhibitor (Fig. 5, A and B). Similarly, MTP inhibition decreased secretion of [ 3 H]TG associated with hB48-VLDL (by 6-fold) or rB100-VLDL (by less than 2-fold) (Fig. 5C). The greater decrease in radiolabeled rB100 than in radiolabeled TG in the rB100-VLDL fraction indicates that the trace amount of secreted rB100-VLDL is enriched with newly labeled TG. However, similar to our observations in pulse-chase experiments (Fig. 4), secretion of 35 S-labeled B48 or [ 3 H]TG associated with hB48-HDL was unaffected by MTP inhibition (Fig. 5, B and C). These results are reminiscent of the inhibitory effect of brefeldin A on hB48-VLDL secretion (Fig. 1) and demonstrate further that MTP activity is required for the oleate-induced secretion of hB48-VLDL. Under these experimental conditions, incorporation of [ 3 H]glycerol into cellular TG was decreased by 30% (69.8 Ϯ 9.8% of control, n ϭ 8) during a 4-h labeling period in the inhibitor-treated cells compared with untreated cells, whereas incorporation of [ 3 H]glycerol into PC was not affected (103.0 Ϯ 14.5% of control, n ϭ 8).
Normal MTP Activity Is Required for the Second
Step of B48-VLDL Assembly-In the current work we have inquired whether or not MTP activity is required for B48-VLDL secretion using two experimental protocols: MTP was inactivated either before or after metabolic labeling to assess secretion of pre-existing or newly synthesized apoB and lipid as lipoproteins. We took advantage of the fact that hB48-VLDL could be readily purified from the culture media of hB48-transfected McA-RH7777 cells, which retained the ability to secrete hB48-VLDL upon oleate supplementation. We also took advantage of the fact that pre-existing TG was preferentially utilized for hB48-VLDL secretion, which could be used to monitor the oleate-induced second step. In the present study, we found that under conditions where MTP activity was reduced by 65-70%, secretion of either newly synthesized (Fig. 5) or prelabeled hB48 and TG (Fig. 4) as hB48-VLDL was abolished, whereas their secretion as hB48-HDL was unaffected. Most strikingly, secretion of pre-existing TG as hB48-VLDL that was specifically enhanced during the oleate-induced second step (Figs. 2 and 3) was extremely sensitive to MTP inhibition (Fig. 4). These results suggest strongly that expanding the neutral lipid core during hB48-VLDL assembly could be achieved only with normal MTP activity.
Recently, we found that secretion of VLDL containing other truncated apoB variants (e.g. B37) or apoAI/B chimeric proteins (e.g. AI/B29 -34) was also induced by oleate supplementation (27). Moreover, secretion of VLDL containing B37 or AI/B29 -34 by the transfected McA-RH7777 cells upon oleate supplementation was also abolished by MTP inhibition. 2 These results provide additional evidence indicating that the oleateinduced VLDL secretion is not solely determined by apoB length and that MTP is an important component of the second step VLDL assembly and secretion. Fig. 1A, and the media were collected at 2 and 4 h, respectively, and fractionated to analyze 3 H-labeled lipid as in Fig. 2A. A, fluorograms of 35 S-labeled apoBs in sucrose density fractions. B, quantification of decreased apoB secretion. Each value is the average of two measurements from two independent experiments. C, secretion of 3 H-labeled lipid in lipoproteins. Open square, control; closed square, MTP-inhibited. Different Assembly Pathways for hB48-VLDL and rB100-VLDL-There are two important differences between hB48-VLDL and rB100-VLDL secretion in the lipid recruitment and its sensitivity to MTP inhibition. The first difference was in the kinetics of incorporation of pre-existing TG upon oleate supplementation. Although both newly synthesized and pre-existing TG could be utilized for rB100-VLDL and hB48-VLDL secretion, pre-existing TG seemed to be preferentially incorporated into hB48-VLDL upon oleate-induced VLDL secretion. Thus, while there was a 6-fold increase in the incorporation of preexisting TG into hB48-VLDL, there was no difference in secretion of pre-existing TG as rB100-VLDL upon oleate supplementation (Figs. 2 and 3). These results indicate clearly that assembly and secretion of B48-VLDL and B100-VLDL must be achieved through different pathways. The second difference was observed in the response of pre-existing TG recruitment to MTP inhibition. Although inactivation of MTP (by ϳ70%) decreased secretion of both newly synthesized and pre-existing TG as B48-VLDL and B100-VLDL, the most remarkable effect of MTP inhibition was to abolish the incorporation of preexisting TG into B48-VLDL during oleate-induced second step. Thus, while secretion of pre-existing TG as hB48-VLDL was decreased by 80%, incorporation of pre-existing TG into rB100-VLDL was only slightly affected (Ͻ25%) by MTP inhibition (Fig. 4). The oleate-stimulated incorporation of pre-existing TG into hB48-VLDL and its extreme sensitivity to MTP inhibition provide evidence to support the assembly model, suggesting that bulk lipid is added to a primordial B48-HDL particle in the oleate-induced second step (4). In contrast to B48, the inability of B100 to mobilize additional pre-existing TG in response to oleate and its relative insensitivity to MTP inhibition would support the hypothesis that B100-VLDL is assembled primarily through a "one-step" process even before oleate supplementation (7,30), and that post-translational lipid recruitment may not play a major role in B100-VLDL assembly.
The MTP-mediated TG Mobilization Is Probably Associated with a Pathway Sensitive to Brefeldin A-The present observation that reduced MTP activity results in decreased hB48-VLDL secretion is reminiscent of the similar inhibitory effect of brefeldin A on VLDL secretion (Fig. 1). It has been reported that B48-VLDL secretion induced by exogenous oleate in McA-RH7777 cells can be specifically inhibited by a low dose of brefeldin A (9). Since brefeldin A interferes with the formation of coatomers essential for vesicular transport (31), this result suggests that vesicularization of ER or other intracellular trafficking events may also be components of the second step. Phenotypically, the inhibitory effect of a low dose of MTP inhibitor on lipoprotein secretion was similar to that of brefeldin A: suppressed secretion of hB48-VLDL (and rB100-VLDL) without affecting secretion of hB48-HDL (Figs. 1 and 5). Thus, although speculative, MTP may play a role in facilitating formation of ER-associated TG droplets at the site of the second step by mobilization of the cellular stored TG, processes that could also be sensitive to brefeldin A. Since the current study was not designed to reveal the precursor-product relationship between hB48-HDL and hB48-VLDL, the effect of MTP inactivation on this conversion was not directly examined. The relationship between brefeldin A-sensitive vesicular movement and MTP-facilitated TG mobilization during the second step VLDL assembly needs further evaluation.
Although their overall effects on hB48-VLDL secretion were similar, brefeldin A and MTP inhibitor exerted different effects on the apparent synthesis of intracellular TG. While brefeldin A (0.2 g/ml) had no effect on cell TG synthesis, inactivation of MTP by the inhibitor BMS-192951 (5 M) consistently decreased (by 30%) the incorporation of radiolabeled tracer into cell TG (when metabolic labeling was initiated after MTP inhibition). The apparent decrease in TG synthesis could not be explained by increased turnover of the labeled TG, since the pulse-chase experiment showed that MTP inhibition had no effect on the level of the prelabeled cell TG pool, nor was the decreased TG synthesis the result of impairment of lipid synthesis in general, since incorporation of radiolabeled tracer into cell PC was not affected. Currently, there is no satisfactory explanation for the impaired TG synthesis by MTP inhibition. In the present experiments, the use of a low dose of MTP inhibitor (5 M) preserved 30 -35% of the initial cellular MTP activity. Apparently, the residual MTP activity and the retained active TG synthesis were sufficient to allow normal secretion of the products of first step assembly, such as hB48-HDL. Whether or not the decreased TG synthesis (by 30%) that was associated with MTP inactivation also contributed to the impaired second step hB48-VLDL assembly needs to be determined. Since heterozygotes for human abetalipoproteinemia are asymptomatic, the activity of the MTP expressed from the functional allele is presumably sufficient for normal lipoprotein production. Further experiments using MTP inhibitors in whole animals such as transgenic mice expressing human apoB (2) or using mice bearing nonfunctional MTP mutations will provide additional insights into the requirement of MTP in VLDL production in vivo.
Possible Roles of MTP in the First
Step Assembly-The current model for the mode of MTP action in hepatic B48-VLDL synthesis includes but extends beyond the proposed role that MTP plays in the early stage of lipoprotein assembly (19). In cultured cell lines such as COS and HeLa, cells that do not normally synthesize apoB or MTP, coexpression of recombinant MTP and truncated apoB variants resulted in enhanced secretion of most apoB variants examined (15)(16)(17)(18). In these cells, the requirement of MTP expression and oleate-induced lipogenesis appeared to be a function of apoB length, suggesting that there is an important interplay between lipid availability, MTP activity, and the hydrophobic lipid-binding regions of apoB. In transfected cells that lacked MTP activity, the expressed apoBs were unable to translocate across the microsomal membrane or were degraded immediately after translocation (32). Thus, inactivation of MTP in McA-RH7777 cells would be expected to somehow diminish apoB synthesis, either by premature termination of chain elongation or by rapid degradation of newly synthesized polypeptides (18). The extent of MTP inhibition on apoB synthesis and on the secretion of the first step products (i.e. HDL-and LDL-like particles) also appeared to be a function of apoB length (33). In Caco-2 cells treated with an MTP inhibitor, secretion of B48 was not affected, whereas secretion of B100 was abolished (22). Furthermore, in a murine mammary-derived cell line that lacks MTP activity, assembly and secretion of the transfected N-terminal 41% of human apoB on HDL-sized lipoproteins has been observed (34). These data together reinforce the notion that the requirement of MTP activity for apoB secretion is correlated positively with the length of apoB and with the extent of lipid recruitment (18).
The current study, therefore, presents evidence that the activity of MTP, together with the brefeldin A-sensitive ER vesicularization and other protein factors, may constitute the complex second step of B48-VLDL assembly and subsequent B48-VLDL secretion. These events, induced by exogenous oleate in McA-RH7777 cells, may represent an enhanced mobilization of pre-existing TG for the expansion of the neutral lipid core of VLDL. In addition, the enhanced lipid mobilization may also facilitate the first step lipid assembly by apoB, a step that has been documented to be assisted by MTP. Thus, MTP activity is required for the entire VLDL assembly process, including both co-translational and post-translational addition of bulk neutral lipid. | 6,707.4 | 1997-05-09T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Tensor regularized total variation for denoising of third harmonic generation images of brain tumors
Third harmonic generation (THG) microscopy shows great potential for instant pathology of brain tissue during surgery. However, the rich morphologies contained and the noise associated makes image restoration, necessary for quantification of the THG images, challenging. Anisotropic diffusion filtering (ADF) has been recently applied to restore THG images of normal brain, but ADF is hard‐to‐code, time‐consuming and only reconstructs salient edges. This work overcomes these drawbacks by expressing ADF as a tensor regularized total variation model, which uses the Huber penalty and the L1 norm for tensor regularization and fidelity measurement, respectively. The diffusion tensor is constructed from the structure tensor of ADF yet the tensor decomposition is performed only in the non‐flat areas. The resulting model is solved by an efficient and easy‐to‐code primal‐dual algorithm. Tests on THG brain tumor images show that the proposed model has comparable denoising performance as ADF while it much better restores weak edges and it is up to 60% more time efficient.
| INTRODUCTION
Third harmonic generation (THG) microscopy [1][2][3] is a non-linear imaging technique for label-free threedimensional (3D) imaging of live tissues without the need for exogenous contrast agents. THG microscopy has established itself as an important tool for studying intact tissues such as insect embryos, plant seeds and intact mammalian tissue [2], epithelial tissues [4][5][6], zebrafish embryos [3,7] and zebrafish nervous system [8]. This technique has been applied for in vivo mouse brain imaging, revealing rich morphological information [9]. Brain cells appear as dark holes on a bright background of neuropil, and axons and dendrites appear as bright fibers. More important, THG microscopy has shown great potential for clinical applications. Excellent agreement with the standard histopathology of skin cancers has been demonstrated for THG [10,11] and THG also shows great potential for breast tumor diagnosis [12,13].
In particular, we have recently demonstrated that THG yields high-quality images of fresh, unstained human brain tumor tissue [14]. Increased cellularity, nuclear pleomorphism, and rarefaction of neuropil have been clearly recognized in the acquired THG images of human brain tissue. This finding significantly facilitates the in vivo pathology of brain tumors and helps to reveal the tumor margins during surgery, which will improve the surgical outcomes.
Reliable image processing tools will strengthen the potential of THG microscopy for in vivo brain tumor pathology. In the image analysis pipeline of THG images of brain tissue, image denoising is essential and challenging due to the rich cellular morphologies and the low signal-to-noise ratio (SNR) [15]. Anisotropic diffusion filtering (ADF) lies in the core of image denoising techniques that are able to remove strong image noise while maintaining the edges of objects sharp [16][17][18]. The structure tensor is responsible for capturing the distribution of local gradients, thus enabling ADF to reconstruct certain kinds of structures, such as one-dimensional (1D) flowlike [17,19,20] and two-dimensional (2D) membrane-like structures [21,22], as well as 2D blob and ridges [23]. In a previous study [24], we have applied the classical edge-enhancing ADF model [16] to restore the "dark" brain cells observed in THG images of mouse brain tissue. We have further developed in [15] a salient edge-enhancing ADF model to reconstruct the rich morphologies appearing in THG images of structurally normal human brain tissue. However, all the existing ADF models have the drawback that the restored edges are in fact smooth [25]. So far, most ADF models [19][20][21] are implemented using an explicit or semi-implicit scheme [17,26] to solve the diffusion equation which converges slowly.
The combination of ADF and the total variation (TV) model [27][28][29] provides an approach to overcome the drawbacks of the ADF models. TV regularization is another standard denoising method that has been studied mathematically for over decades [30][31][32][33][34][35]. In [25], an ADF model is formulated as a tensor regularized total variation (TRTV) model to restore the truly sharp edges, but the presented algorithm is based on gradient descent and has a slow convergence rate. The adaptive TRTV (ATRTV) model [36] improves convergence by using the primal-dual algorithm [35] to solve the accompanying convex optimization model. The structure tensor adapts to the local geometry at each point but the estimated tensor may not reflect the true local structures if the image is corrupt by strong noise. Other important regularization approaches include the structure tensor total variation (STV) [37,38] that penalizes the eigenvalues of the structure tensor, but STV does not make use of the directional information [36]. The higherorder regularizations such as the total generalized variation [39] and the Hessian Schatten-Norm regularization (HS) [40] have also been proposed and also ignore direction of derivatives. There are many important alternative approaches to the image denoising problem such as dictionary learning based methods [41], sparse representation based methods [42], non-local based methods [43], prior learning based methods [44][45][46], low-rank based methods [47] and deep learning based methods [48].
In this study, we present a robust and efficient TRTV model that inherits the advantages of both ADF and TV, that is, their abilities of suppressing strong noise, estimating and restoring complex structures, and efficient convergence, to reconstruct 2D and 3D THG images of human brain tissue. The contributions of this study are 3-fold. First, the pointwise decomposition of a structure tensor, which is time-consuming and necessary for both ADF and TRTV, is greatly accelerated by performing the tensor decomposition only in the non-flat areas. We use the gradient magnitude of a Gaussian at each point to estimate the first eigenvalue of the structure tensor and to distinguish flat from non-flat areas. In the flat areas, the identity matrix is used as the diffusion tensor and no tensor decomposition is needed, while in the non-flat regions, the tensor decomposition is applied to construct the application-driven diffusion tensor. Second, existing TRTV models adopt the L 2 norm for the data fidelity term while we use the L 1 norm to make the proposed model (TRTV-L 1 ) robust to outliers and image contrast invariant. In previous work, it has been shown that geometrical features are better preserved by the TV models with the L 1 norm [49]. Third, we solve the TRTV-L 1 model with an efficient and easy-to-code primal-dual algorithm as in [35,36]. In a detailed comparison of methods we show the ability of the TRTV-L 1 model to reconstruct weak edges, which is not well possible with other TRTV models. Weak edges are commonly observed in THG images and are important for clinical applications.
This work is a considerably extended version of the robust TRTV model previously presented at a conference [50]. The rest of this paper is organized as follows: we review the existing TRTV models in Section 2. The proposed TRTV-L 1 model is explained in detail in Section 3. Simulated and real THG images are tested to demonstrate the efficiency and robustness of the proposed TRTV-L 1 model in Section 4. Conclusions follow in Section 5.
| Anisotropic diffusion filtering and regularization
Let u denote an m-dimensional (m = 2 or 3) image, and f be the noisy image. An ADF model [16][17][18][19][20][21][22][23][24]51] has originally been defined by the partial differential equation (PDE) as follows: together with an application-driven diffusion tensor D, where the raw image f is used as the initial condition. D is computed from the gradient of a Gaussian smoothed version of the image ru σ in 3 consecutive steps. First, the structure tensor J is computed at each point to estimate the distribution of the local gradients: Here u σ is the Gaussian smoothed version of u, that is, u is convolved with a Gaussian kernel K of SD, σ, The SD, σ, denotes the noise scale of the target image [17]. To study the distribution of the local gradients, the outer product of ru σ is computed and each component of the resulting matrix is convolved with another Gaussian K of SD, ρ. ρ is the integration scale that reflects the characteristic size of the texture, and usually it is large in comparison to the noise scale σ [17].
Second, the structure tensor J is decomposed into the product of a diagonal matrix with eigenvalues μ i and a matrix of eigenvectors q i that indicate the distribution of the local gradients [17]: The diagonal matrix, diag(μ i ), is the eigenvalue matrix of all the eigenvalues ordered in the descending order, and the matrix Q is formed by the corresponding eigenvectors q i .
Finally, the eigenvalue values in (5) are replaced by the application-driven diffusion matrix diag(λ i ): where λ i represents the amount of diffusivity along the eigenvector q i . By taking the input image f as the starting point and evolving Eq. (1) over some time, the image is smoothed in flat areas and along the object edges, whereas the prominent edges themselves are maintained. Both the explicit and semi-implicit schemes [17] have been widely employed to implement Eq. (1). The explicit scheme is easy-to-code yet converging slowly. The semi-implicit scheme is more efficient because a larger time step is allowed, but harder to code because the inverse of a large matrix is involved.
Mathematically, Eq. (1) closely relates to the regularization problem that is designed to achieve a balance between smoothness and closeness to the input image f: In this functional, the first term is the regularization term (regularizer) that depends on the diffusion tensor D. The second term is the data fidelity term that uses a mathematical norm ||.|| to measure the closeness of u to the input image f. The implementation of this functional therefore depends on the construction of the diffusion tensor D, the choice of the regularizer and the fidelity norm. If we use the L 2 norm for the data fidelity and substitute: into Eq. (7), its E-L equation has the form: which has the same diffusion tensor as (1). Because ru appears quadratically, R behaves as a L 2 regularizer which has been shown unable to recover truly sharp edges [25], and the relation between (9) and (1) explains why the output of ADF is intrinsically smooth.
| Total variation
Another standard image denoising method is the TV model that was introduced into compute vision first by Rudin, Osher and Fatemi (ROF) [27] as follows: The TV regularization penalizes only the total height of a slope but not its steepness, which permits the presence of edges in the minimizer. Although the ROF model permits prominent edges, it tends to create the so-called stair-casing effect and the primal minimization method used converges slowly. To address these drawbacks, several modifications have been made to reduce the stair-casing effect and accelerate the convergence rate: replacing the TV regularization by Huber regularization, replacing the L 2 norm by the L 1 or Huber norm [52], and solving the convex minimization problem by the Chambolle's dual method [31], the split Bregman method [33], or the hybrid primal-dual method [30,32,34,35,53]. These first-order primal-dual algorithms enable easy-to-code implementation of the TV model. However, all these methods cannot properly remove the noise on the edges and cannot restore certain structures like 1D linelike structures, because only the modulus of a gradient is considered in the regularizer, not its directions. Total variation based methods have also been applied to other image processing fields such as compressive sensing, mixed noise removal and image deblurring of natural and brain images [54][55][56].
| Anisotropic total variation filtering
In order to overcome the problems of ADF and TV and combine their benefits, it is helpful to notice the close relation between diffusion filtering and regularization, which was initially studied in [57] for isotropic diffusion. The relation between anisotropic diffusion and the TV regularization was studied in [25], via the TRTV model as follows: The matrix S satisfies D = S T S, with a given diffusion tensor D. The anisotropic regularizer used in (11) overcomes the drawbacks of ADF and reconstructs truly sharp edges. Because the directional information has been incorporated via the diffusion tensor D in this model, it is also able to remove the noise on the edges and restore the complex structure which is not possible with the TV model. Despite these improvements, the minimization used in [25] to solve this TRTV model was based on gradient descent which suffered from slow convergence.
The diffusion behavior of (11) can be analyzed in terms of the diffusion equation given by its E-L equation: The first term on the right corresponds to an ADF with the diffusion tensor D/ j Sruj.
| Adaptive regularization with the structure tensor
In [36], the convexity of the problem (11) was used to improve the computational performance of the TRTV model [25] by applying the primal-dual algorithm [35], to solve the convex optimization of the proposed ATRTV model [36]. Also, the Huber penalty g α was used to regularize the structure tensor and reduce the stair-casing effect caused by the TV regularization: Here is the adaptive tensor used to rotate and scale the axes of the local ellipse to coincide with the coordinate axes of the image domain. This design of the adaptive regularizer has taken into account the local structure of each point to penalize image variations. However, we noticed that the asymmetry of S may create artifacts and reduce the applicability of the algorithm in practice. We also note that the diffusion strength along the ith direction is approximately proportional to 1= ffiffiffiffi μ i 4 p ( 1, which is not enough to suppress the noise when the input is corrupted by strong noise.
| Image samples and acquisition
All procedures on human tissue were performed with the approval of the Medical Ethical Committee of the VU University Medical Center and in accordance with Dutch license procedures and the declaration of Helsinki. All patients gave a written informed consent for tissue biopsy collection and signed a declaration permitting the use of their biopsy specimens in scientific research. We imaged brain tissue samples from 6 patients diagnosed with low-grade glioma and 2 patients diagnosed with high-grade glioma, as well as 2 structurally normal references with THG microscopy [14]. Structurally normal brain samples were cut from the temporal cortex and subcortical white matter that had to be removed for the surgical treatment of deeper brain structures affected by epilepsy. Tumor brain samples were cut from tumor margin areas and from the tumor core and peritumoral areas. For details of the imaging setup, the tissue preparation and the tissue histology, we refer to previous works [9,14].
| The proposed tensor regularized total variation
When applied to THG images of brain tissue, all the methods above have their specific problems. The ADF models are computationally expensive and they cannot restore weak edges. The TV model creates the stair-casing effect and cannot restore thin 1D line-like structures. The existing TRTV models are either too expensive in computation or lack of enough denoising capability. To deal with these drawbacks and to make the TRTV approach applicable to THG images corrupted by strong noise, we present an efficient estimation of the diffusion tensor and we replace the L 2 norm used in the data fidelity term by the robust L 1 norm. We solve the resulting model by an efficient primaldual method.
| Efficient estimation of the diffusion tensor
One time-consuming step of the ADF and TRTV models is that the diffusion matrix D or S needs to be estimated at each point to describe the distribution of local gradients. This is of no interest in flat areas because the gradients almost vanish. In 3D, this tensor decomposition procedure takes about half of the total computational time. If the tensor decomposition is only computed in non-flat areas, the procedure will be substantially accelerated.
To do this, we exploit the fact that the flat regions consist of points whose first (largest) eigenvalue is small, and that this eigenvalue can be roughly estimated by |ru σ | 2 [16]. This fact motivates the idea of thresholding |ru σ | 2 to distinguish flat and non-flat regions. Before thresholding, we use the following function g to normalize and scale exponentially |ru σ | 2 to the range [0,1]: This function has been used in the edge-enhancing ADF model [16] to define the diffusivity along the first direction. Following [16] we set C 4 = 3.31488. λ is the threshold to control the trend of the function [16]. Then we regard the points with g(|ru σ | 2 ) < h (here h is always set to 0.9) as the flat regions and the other points as the non-flat regions. In the flat regions, the diffusion along each direction is isotropic and the diffusion tensor D reduces to the identity matrix I. In the non-flat regions, the diffusion tensor D is defined as a weighted sum of the identity matrix and the application-driven diffusion tensor, with the weight g(|ru σ | 2 ): Therefore, the g(|ru σ | 2 ) has two roles here, one of which is acting as a threshold value and the other is acting as the weight for constructing the diffusion tensor D of the non-flat areas. Note that most of the ADF and TRTV models could in principle be accelerated using the procedure described here with almost no loss of accuracy. When applied to 3D images, we use the following eigenvalue system to optimize the diffusivityλ i : For 2D THG images, the second diffusivity λ 2 is ignored. h τ (Á) is a fuzzy threshold function between 0 and 1 that allows a better control of the transition between 2D plane structures and other regions [21,58], as follows: where γ is a scaling factor that controls the transition and we set it to 100. C plane is the plane-confidence measure [21,59] defined as follows: Smoothing behaviors of the diffusion matrix (17) are different for different regions: in background regions, λ 1 is almost 1 and smoothing is encouraged from all the directions at an equal level (isotropic smoothing). In the vicinity of edges, λ 1 ≈ 0, smoothing at the first direction is discouraged. In plane-like regions, the fuzzy function h τ tends to 1, and λ 2 = 1, and smoothing at the second and third directions is allowed. In 1D structure regions, λ 2 tends to λ 1 and both are close to 0. Smoothing at the third direction is allowed only.
Robust anisotropic regularization
Given a diffusion tensor D designed as in (16), we consider the same regularizer as in Eq. (13) of the adaptive TRTV model [36], but contrary to [36] we use a symmetric S, S = D. To analyze the behavior of this regularizer in terms of diffusion, we consider the Euler-Lagrange equation that minimizes R(u). To analyze the diffusion behavior along each eigenvector direction, we only need to estimate |S∇u|. Hence, the regularization problem (20) is a scaled version of the diffusion problem with the diffusion tensor S^T S = QΛ^2 Q^T, whose behavior along each eigenvector is almost the same as that of the diffusion problem with diffusion tensor D. Note that in the flat regions S becomes the identity matrix, and the regularization (20) reduces to the Huber regularization.
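For concreteness, a plausible reconstruction of the regularizer and its gradient flow, assuming the Huber-smoothed anisotropic TV form (the exact equations (13) and (20)-(22) were not preserved in extraction; φ_α denotes the Huber function):

```latex
R(u) = \int_\Omega \varphi_\alpha\!\left(|S\nabla u|\right) dx,
\qquad
\partial_t u = \operatorname{div}\!\left( \frac{S^{\mathsf{T}} S\, \nabla u}{|S\nabla u|} \right)
```

This form is consistent with the stated conclusion that the minimization behaves like a scaled diffusion with tensor S^T S = QΛ^2 Q^T.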
Tensor regularized total variation-L1
Different from the existing TRTV models, we consider a robust minimization problem (23) in which the L1 norm is used in the data fidelity term. Compared to the L2 norm, the L1 norm is invariant to image contrast, robust to noise and sensitive to fine details [49,60].
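The model itself (Eq. (23)) was lost in extraction; a plausible reconstruction, assuming the standard anisotropic-TV-plus-L1 form with Huber smoothing φ_α and a fidelity weight λ (both placements are our assumption), where f is the noisy input image:

```latex
\min_{u} \int_\Omega \varphi_\alpha\!\left(|S\nabla u|\right) dx
  + \lambda \int_\Omega |u - f|\, dx,
\qquad
\varphi_\alpha(t) =
\begin{cases}
  t^2 / (2\alpha), & t \le \alpha,\\
  t - \alpha/2,    & t > \alpha
\end{cases}
```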
Numerical minimization
To efficiently solve the minimization problem (23), we note that it is a convex problem which can be reformulated as a saddle-point problem; therefore, it can be solved efficiently by the primal-dual approach [34-36]. To describe the problem in matrix-algebra language, we reorder the image matrix u row-wise into a vector with N points, that is, u ∈ R^N. The minimization problem (23) is then written as the primal minimization problem (24), where Au(i) = S(i)∇u(i) at each point i and J denotes the Huber norm of Au. To convert problem (24) into a primal-dual problem, we introduce a dual variable p ∈ R^{mN} (m = 2 or 3, the dimension of the image) and the convex conjugate J* of J (we refer to [61] for a complete introduction to the classical theory of convex analysis). Since J** = J, substituting the biconjugate (26) into (24) yields the saddle-point problem (27), equivalent to the minimization problem (24). According to the hybrid primal-dual algorithm described in [34,35], we iteratively solve a dual step (28), a primal step (29) and an approximate extra-gradient step. Similar to [35,36], the dual maximization problem (28) has a closed-form solution, where τ1 is the dual step size and α is defined in the Huber regularization in (14) and (23). For an intuitive understanding of (28a), note that J* can be interpreted as the indicator function of the unit ball in the dual norm, so that problem (28) is equivalent to solving the dual problem (32) over X = {p : J*(p) ≤ 1}. Since the ascent direction of (32) is Au^k, (28a) can be considered as updating p along the ascent direction and projecting p onto X. We solve the primal problem (29) with the primal algorithm described in [35], where the L1 norm is handled by pointwise shrinkage operations with primal step size τ2 and the conjugate operator A*. Problem (24) is convex, and the efficiency of the proposed algorithm comes from the availability of closed-form solutions for each of the sub-problems. We summarize the proposed algorithm, including the estimation of the diffusion tensor, in the Algorithm below. This algorithm is partially inspired by the work of Estellers et al. [36]. Note that we use forward differences to compute the discrete gradients and backward differences for the divergence, to preserve the adjoint relationship div = −∇*.
Algorithm: The efficient algorithm for the convex minimization problem (24).
2. Construct the diffusion matrix S: in the flat areas, set S to the identity matrix; otherwise, compute S using (16).
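Since the update formulas (28a) and the shrinkage step were garbled in extraction, the following minimal sketch shows one standard realization of this dual/primal/extra-gradient loop for the 2D case. For clarity it takes S = I (the full method applies the spatially varying tensor inside A); the Huber parameter `alpha` and the fixed iteration count are our assumptions, while `lam`, `tau1`, `tau2` and `theta` follow the 2D settings reported in the Implementation subsection.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary (last difference is 0).
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return np.stack([gx, gy])

def div(p):
    # Backward differences; the negative adjoint of grad (div = -grad*).
    px, py = p
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def trtv_l1(f, lam=0.15, tau1=0.02, tau2=8.5, theta=1.0, alpha=0.05, n_iter=50):
    u = f.copy(); u_bar = f.copy()
    p = np.zeros((2,) + f.shape)
    for _ in range(n_iter):
        # Dual step: resolvent of the Huber conjugate (scaling by 1 + tau1*alpha),
        # followed by projection onto the pointwise unit ball {|p| <= 1}.
        p = (p + tau1 * grad(u_bar)) / (1.0 + tau1 * alpha)
        p /= np.maximum(1.0, np.sqrt((p ** 2).sum(axis=0)))
        # Primal step: gradient step on the coupling term (A* = -div for S = I),
        # then the pointwise shrinkage solving the L1 fidelity proximal problem.
        u_old = u
        v = u + tau2 * div(p)
        r = v - f
        u = f + np.sign(r) * np.maximum(np.abs(r) - tau2 * lam, 0.0)
        # Approximate extra-gradient (over-relaxation) step.
        u_bar = u + theta * (u - u_old)
    return u
```

Each sub-problem has a closed-form solution, which is where the algorithm's efficiency comes from; no sparse matrix inversions are involved.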
EXPERIMENTAL RESULTS
We validate the proposed TRTV-L1 model on a 2D simulated image and on around 200 2D and 3D THG images of normal human brain and tumor tissue. The fields of view of the 2D and 3D THG images are 273 × 273 μm^2 (1125 × 1125 pixels) and 273 × 273 × 50 μm^3 (1125 × 1125 × 50 voxels), respectively. The intensities of these images are scaled to [0, 255]. We previously developed a salient edge-enhancing ADF model (the SEED model) to process THG images of normal brain tissue [15], whereas the tumor-tissue images have not previously been published for image-analysis purposes. We compare our 2D results with the TV model [34], the edge-enhancing ADF model (the EED model) [16], the BM3D model [62], the HS model [40], the STV model [38], the ATRTV model [36] and our previous SEED model [15]. We compare our 3D results only with the TV model and the SEED model, because source codes for the other models are not readily available in 3D; a comparison between EED and SEED in 3D has already been made in [15].
Implementation
The proposed TRTV-L1 model and the ADFs are implemented in Visual Studio C++ 2010 on a PC with eight 3.40-GHz Intel(R) Core(TM) processors (64-bit) and 8 GB memory. Multiple cores were used for the 3D algorithms, and a single core for the 2D implementation. The TV model is implemented using the primal-dual algorithm described in [34]. The ADF models are implemented in the semi-implicit scheme [17]. The Matlab source codes for the BM3D model [62], the HS model [40], the STV model [38] and the ATRTV model [36] are available online from the authors' websites. The parameters are manually optimized for each model. The key parameters used for the proposed TRTV-L1 model are λ = 0.15, τ1 = 0.02, τ2 = 8.5 and θ = 1.0 for 2D, and λ = 0.15, τ1 = 0.05, τ2 = 1.5 and θ = 1.0 for 3D. The convergence accuracy ε is set to 10^-2.
Denoising effect
The performance of the proposed TRTV-L1 model is first evaluated on a 2D simulated image (Figure 1). The simulated image consists of seven horizontal lines of the same width (255 pixels) but different heights: 50, 30, 25, 10, 5, 3 and 1 pixels. The intensity of each line increases horizontally from 1 to 255, mimicking edges with varying gradients. Gaussian noise with an SD of 60 is added to simulate strong noise. The TV model cannot remove the noise on the edges (blue square), creates a stair-casing effect, fails to restore the 1-pixel line and restores the 3-pixel line only partially. The ADF models, that is, the EED and SEED models, have the highest peak signal-to-noise ratio [36] and provide the best denoising effect, but they also lose some weak edges of all the lines. The BM3D model performs excellently at keeping fine details, for example, a large part of the 1-pixel line is kept, but it creates ripple-like artifacts (yellow square) and its denoising effect is not comparable to the tensor methods. The HS model penalizes the second-order derivatives and is thus able to avoid the stair-casing effect and capture blood-vessel-like structures, but it has a limited denoising effect and creates dark-dot-like artifacts. The ATRTV model is able to get rid of most of the stair-casing effect, but the noise on the edges (blue square) is not properly removed; this behavior persists for other parameter settings. A possible explanation is that there is not enough diffusion strength along the edge direction, possibly caused by the design of its diffusion tensor. Its ability to keep fine details is also limited; for example, part of the 1-pixel line is wiped out. STV suffers less from the stair-casing effect than TV, but its performance at denoising and keeping fine details is also limited, because it does not consider the eigenvectors that are the key to restoring local structures. Our TRTV-L1 model combines the benefits of the L1 norm and tensor regularization, and has a denoising performance that is comparable to the ADF models and higher than the other models. Moreover, TRTV-L1 is also able to keep fine details as BM3D does. The weak edges of all the simulated lines are better restored by TRTV-L1 than by the other tensor methods and regularization methods. We then compare the performance of the proposed TRTV-L1 model with the aforementioned models using around 200 THG images of normal human brain and tumor tissue. One typical 2D example of a THG image of normal brain tissue from gray matter is depicted in Figure 2. Brain cells (mainly neurons and glial cells) and neuropil (consisting of axons and dendrites) are the basic features of a human brain; they appear as dark holes with dimly seen nuclei inside and as bright fibers, respectively. Brain cells and neuropil are sparsely distributed in gray matter. The strong noise and rich morphologies contained in these THG images make image denoising challenging. The TV model is able to remove the noise but causes the stair-casing effect. It cannot restore the thin fiber-like neuropil because it does not consider the distribution of the local gradients (blue square). The ADF models (the SEED result is similar to the EED result and thus omitted) already give very satisfying results.
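A sketch of how such a phantom can be generated; the vertical spacing between lines and the random seed are our assumptions, while the line width, heights, intensity ramp and noise level follow the description above.

```python
import numpy as np

def make_phantom(width=255, heights=(50, 30, 25, 10, 5, 3, 1),
                 gap=10, noise_sd=60.0, seed=0):
    # Seven horizontal lines of equal width, decreasing heights, each with an
    # intensity ramp from 1 to 255; Gaussian noise simulates strong noise.
    rng = np.random.default_rng(seed)
    total_h = sum(heights) + gap * (len(heights) + 1)
    img = np.zeros((total_h, width))
    ramp = np.linspace(1, 255, width)
    y = gap
    for h in heights:
        img[y:y + h, :] = ramp
        y += h + gap
    noisy = img + rng.normal(0.0, noise_sd, img.shape)
    return img, np.clip(noisy, 0, 255)
```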
The noise has been properly removed, but a substantial number of weak edges have been smoothed to some extent, for example, the weak edges of some fibers and dark brain cells (blue square), because these models are equivalent to anisotropic TV with the L2 regularizer. BM3D keeps the most fine details (the thin neuropil in the blue square), but its denoising effect is limited and it again creates ripple-like artifacts. Note that the parameter σ in BM3D reflects the noise level of an image; the result for σ = 100 shown in Figure 2 indicates that the noise level of THG images is comparable to Gaussian noise with an SD of 100. The HS model has limited capability to suppress the strong noise in THG images; the result appears somewhat blurred, and dark-dot-like artifacts are created, as in the simulated image (Figure 1, HS). The result of ATRTV is similar to TV (but with less stair-casing effect), and it is not able to restore the thin neuropil with weak edges. The STV model causes little stair-casing effect but does not keep fine details, due to the lack of directional information. Compared with BM3D and HS, our TRTV-L1 model keeps a reasonable amount of fine details yet has significantly superior denoising performance. Compared to the other tensor and regularization methods, TRTV-L1 keeps all salient edges and many more weak edges and fine details. TRTV-L1 also provides the best image contrast and suffers almost no stair-casing effect, because of the L1 norm and the robust anisotropic regularizer used. Results presented in the Supporting Information (Figures S1-S7) indicate that the parameter settings in Figure 2 are optimal for BM3D, HS, ATRTV and STV. The comparison of the segmentations resulting from the denoised images (Figures S5 and S6) not only confirms our qualitative evaluation of the denoising performance but also suggests that the denoising effect of TRTV-L1 directly benefits the subsequent segmentation step.
FIGURE 2 One 2D THG image of normal brain tissue from gray matter. Brain cells and neuropil appear as dark holes with dimly seen nuclei inside and bright fibers, respectively
3D THG images of normal brain tissue from white matter (Figure 3) are adequate test material to demonstrate the 3D performance of the proposed model, because of the presence of complex morphologies, for example, nets of neuropil. The density of brain cells, for example, neurons with dimly seen nuclei or with lipofuscin granule inclusions (blue arrow), is low, but the density of neuropil is higher than in gray matter. We see that the noise has been removed by all the models. Nevertheless, the TV model cannot enhance the fiber-like structures (the left blue square) and suffers from the stair-casing effect. The SEED model is able to enhance the fibers, but some weak edges have in fact been smoothed (the blue square). Only our TRTV-L1 model succeeds in reconstructing almost all the sharp and weak edges.
FIGURE 3 One 3D THG example of normal brain tissue from white matter, with the 33rd slice shown. More neuropil is observed than in gray matter
One 2D example of a THG image of low-grade tumor tissue obtained from an oligodendroglioma patient is shown in Figure 4. Compared to the THG images of normal brain tissue, more brain cells (including cell nuclei and the surrounding cytoplasm) are present, which indicates the presence of a tumor. Again, the TV model suffers from the stair-casing effect. The ADF models fail to restore the weak edges (blue arrow). BM3D and HS have a weaker denoising effect than the other methods; BM3D creates ripple-like artifacts and HS blurs the image. In contrast to the conventional approach for tensor estimation, ATRTV attempts to capture the directionality and scale of local structures via another convex approximation, but our results on THG images do not suggest superior merits of this aspect of ATRTV over the conventional approach in restoring local structures. The result of STV is similar to that of ATRTV, due to the lack of directional information. Compared to the other models, TRTV-L1 has better denoising performance and/or restores more fine details and weak edges (blue and yellow arrows).
FIGURE 4 One 2D THG example of low-grade tumor tissue from an oligodendroglioma patient. High cell density and thick neuropil indicate the presence of a tumor
One 2D example of a THG image of high-grade tumor tissue obtained from a glioblastoma patient is shown in Figure 5. The fiber-like neuropil is now completely absent and the whole area is filled with cell nuclei. The density of cell nuclei is even higher than in the low-grade tumor tissue, indicating that these cells likely represent tumor cells. The TV model is able to reconstruct both the salient and the weak edges, but it again causes the stair-casing effect around the edges. The ADF models provide quite similar results without any stair-casing effect, but the weak edges have been blurred. BM3D and HS have a limited denoising effect. ATRTV and STV suffer less from the stair-casing effect than TV, but the contrast appears degraded. The proposed TRTV-L1 model reconstructs both the salient and the weak edges, which will greatly facilitate applications such as automatic cell counting.
FIGURE 5 One 2D THG example of high-grade tumor tissue from a glioblastoma patient. The whole area is occupied by tumor cells
Computational performance
We first evaluate the computational cost of TRTV-L1 saved by restricting the tensor decomposition to the non-flat areas. Roughly, the flat regions estimated in each iteration grow from 50% up to 90% of the whole image domain; on average, 80% of the pixels are considered flat, indicating that tensor decomposition is needed for only 20% of the pixels (Figure 6A). The reconstructions with and without full estimation of the structure tensor everywhere have been compared on THG images, in terms of both timing and restoration quality. No significant difference in the number of iterations needed for convergence is observed between the full and partial estimation of the structure tensor. For 2D THG images, the partial estimation approach saves ~10% of the computation time, both in terms of convergence time and of time per iteration. No significant degradation is found in the restoration quality (Figure 6B,C and Figure S7) when h varies from 0.0 to 0.9, and thus we use h = 0.9 to obtain the maximal gain in speed. We also find that the absolute difference per pixel between the two reconstructions is 3.8, indicating the small difference between the two solutions. As a reference, the absolute difference per pixel between the reconstruction using partial estimation of the structure tensor and the input noisy image is 54.4. For 3D THG images, ~40% of the computation time is saved by the partial estimation approach. A visual map of the non-flat regions resulting from the last iterative step is shown in Figure 6D. This map essentially consists of all the sharp edges of the image, which conversely suggests that the weak edges are restored by the regularization and the L1 fidelity rather than by the diffusion. Similar tests on the EED model indicate that the same computational gains can be achieved for the ADF models using the partial tensor decomposition.
To demonstrate the computational efficiency of the proposed TRTV-L1 model, we compare the average computational time needed by TRTV-L1 to that of the ADF models on 30 2D and 5 3D THG images. The semi-implicit scheme used to implement the ADFs allows larger time steps than the explicit scheme. We found that the ADFs nevertheless converge much more slowly and consume more time per iteration than TRTV-L1. For example, TRTV-L1 on average needs only ~1/3 of the iterations EED needs to converge to the same accuracy of 10^-2 and consumes ~2/3 of EED's time per iteration (Figure 6E), resulting in a ~75% higher speed than EED. In practice, a fixed number of iterations is also a strategy for stopping the iterations, and we find that both the ADFs and TRTV-L1 already produce quite satisfying results after 50 iterations. Under this condition, TRTV-L1 is on average ~30% more time efficient than the ADFs on 2D THG images, and ~60% more time efficient on 3D THG images. Compared to other tensor regularization models, our TRTV-L1 model is roughly as efficient as the STV model and faster than the ATRTV model, which uses another convex optimization to estimate the structure tensor.
DISCUSSION AND CONCLUSIONS
In this work, we have developed a robust and efficient TRTV-L1 model to restore images corrupted by noise. THG images of structurally normal human brain and tumor tissue have been tested. The proposed model showed markedly better results on the reconstruction of weak edges and fine details, and it was more efficient than existing ADF and TRTV models. Comparisons to other state-of-the-art denoising techniques that are able to keep fine details indicate that TRTV-L1 can restore a reasonable amount of fine details while having significantly better denoising performance without creating artifacts. The artifacts created by other models may result in false positives in subsequent segmentation steps. Therefore, the proposed TRTV-L1 model will greatly facilitate the subsequent segmentation and cell counting in THG images of brain tumors, from which we conclude that the robust and efficient TRTV model will strengthen the clinical potential of THG microscopy in brain tumor surgery. Moreover, based on the tests on the simulated image and the THG images with complex morphologies, we believe that the proposed method can be generalized to other application-driven projects. The efficient estimation of the diffusion tensor proposed here can be used to accelerate most of the existing tensor diffusion and regularization models, by performing tensor decomposition only in the non-flat regions. Compared to existing TRTV models, for example, the ATRTV model, the approach by which we combined the diffusion tensor and TV can easily be used to derive other application-driven TRTV models from existing ADF models. The L1 norm in the data fidelity term makes the proposed TRTV-L1 model contrast invariant, robust to noise and sensitive to fine details. The primal-dual algorithm used to optimize the proposed model is easy to code in comparison with the existing ADF models because no sparse matrix inversions are involved. Although there are many other important types of image denoising methods, as mentioned above, in this study we emphasize the benefit of tensor-based techniques, namely their ability to capture local structures. Alternative approaches such as machine-learning methods usually include a training step that needs a training set of clean images with high SNR, but such images are difficult to acquire for THG brain imaging.
SUPPORTING INFORMATION
Additional supporting information may be found online in the Supporting Information section at the end of the article. Figure S1 Results of BM3D for σ = 50, 100 and 150. The denoising effect of BM3D increases with σ. The denoising effect starts to occur at σ = 50 and is optimal at σ = 100; larger σ does not contribute further improvement. These images show that BM3D creates ripple-like artifacts and has limited denoising performance. Figure S2 Results of Hessian Schatten-norm regularization (HS) for λ = 0.1, 0.3 and 0.5. The denoising effect starts to occur at λ = 0.1 and is optimal at λ = 0.3; the result becomes too blurred at λ = 0.5. HS creates dark-dot artifacts, has limited denoising performance and blurs the image. Figure S3 Results of ATRTV for λ = 18, 10, 5 and μ = 8.6, 5.0 and 3.0. ATRTV has a better denoising effect when λ and μ are small. The denoising effect starts to occur at λ = 18, μ = 8.6, and is optimal at λ = 10, μ = 5.0; the result becomes blurred when λ and μ get smaller. The result of ATRTV is similar to that of TV, with less stair-casing effect, but it is not able to restore fine details and weak edges corrupted by strong noise. Figure S4 Results of STV for λ = 0.24, 0.32 and 0.4. The denoising effect starts to occur at λ = 0.24 and is optimal at λ = 0.4. The result of STV is similar to that of TV, with less stair-casing effect, but it is not able to restore fine details and weak edges corrupted by strong noise. Figure S5 Segmentations of the dark holes (brain cells) within the raw image and the denoised images in Figure 2, using manually optimized thresholds to detect most parts of the dark holes with the least background included. The segmentation of the raw image indicates the strong noise present in the THG image. The segmentations of TV, EED, ATRTV, STV and TRTV-L1 are similar, but the small objects present in the segmentations of BM3D and HS illustrate their poor denoising performance. Figure S6 Segmentations of the bright objects (neuropil) within the raw image and the denoised images in Figure 2, using manually optimized thresholds to detect most parts of the bright objects with the least background included (e.g., the fiber indicated by the yellow arrow). The segmentation of the raw image indicates the strong noise present. The segmentation of TRTV-L1 is comparable to those of BM3D and HS, where more fibers are resolved than with the other models. Sometimes fibers (blue arrows) are even better segmented from the image denoised by TRTV-L1, which suggests that BM3D and HS may visually keep more details than TRTV-L1 but that this is not necessarily beneficial for the subsequent segmentation. Figure S7 Results of the proposed TRTV-L1 model for h = 0.0 (full estimation), 0.2, 0.5, 0.8 and 0.9 (partial estimation). Almost no degradation is found in the restoration quality when h varies from 0.0 to 0.9, and thus we use h = 0.9.
"Computer Science"
] |
Distortion of Mendelian segregation across the Angus cattle genome uncovering regions affecting reproduction
Nowadays, the availability of genotyped trios (sire-dam-offspring) in the livestock industry enables the implementation of the transmission ratio distortion (TRD) approach to discover deleterious alleles in the genome. Various biological mechanisms at different stages of the reproductive cycle, such as gametogenesis, embryo development and postnatal viability, can induce signals of TRD (i.e., deviation from Mendelian inheritance expectations). In this study, TRD was evaluated using both SNP-by-SNP and sliding windows of 2-, 4-, 7-, 10- and 20-SNP across 92,942 autosomal SNPs for 258,140 genotyped Angus cattle, including 7,486 sires, 72,688 dams and 205,966 offspring. Transmission ratio distortion was characterized using allelic (specific- and unspecific-parent TRD) and genotypic parameterizations (additive- and dominance-TRD). Across the Angus autosomal chromosomes, 851 regions were found with decisive evidence for TRD. Among these findings were 19 haplotypes with recessive patterns (potential lethality for homozygous individuals) and 52 regions with allelic patterns exhibiting complete or quasi-complete absence of homozygous individuals in addition to under-representation (potentially reduced viability) of the carrier (heterozygous) offspring. In addition, 64 (12 after chromosome-wise false discovery rate) and 20 (4) regions showed a significant influence on the heifer pregnancy trait at p-value < 0.05 and 0.01, respectively, reducing the pregnancy rate by up to 15%, thus supporting the biological importance of the TRD phenomenon in reproduction.
Analytical models of transmission ratio distortion. Allelic parameterization of TRD. As described by Casellas et al. 4,13, for a particular locus, the probability of allele transmission (P) from heterozygous parents (A/B) to offspring was parameterized including either one overall TRD effect (α) in a parent-unspecific model or differentiating between sire- (α_s) and dam-specific (α_d) TRD effects in a parent-specific model. Flat priors (uniform distributions) were assumed for all TRD parameters within a parametric space ranging from −0.5 to 0.5. Under a Bayesian implementation, the conditional posterior probabilities of the TRD parameters are defined in terms of y, the column vector of genotypes of the offspring generation. The likelihood of the data is the product of the corresponding probabilities over all offspring, where n is the total number of offspring and P_off and y_i are the probability and the genotype of the ith offspring, respectively. The probability of the genotype of each offspring was defined by the parents' genotypes and the TRD parameters; for example, the probability of a heterozygous offspring from a heterozygous-by-heterozygous mating follows directly from these transmission probabilities. Detailed information about the implemented algorithms was described in Id-Lahoucine et al. 5 and Id-Lahoucine 26.
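The transmission-probability equations themselves were lost in extraction; the sketch below illustrates the allelic parameterization under the assumption, consistent with the stated [−0.5, 0.5] prior range of the cited model, that a heterozygous parent transmits allele A with probability 0.5 + α (α = 0 recovers Mendelian segregation). Function names and example usage are ours.

```python
import numpy as np

def transmission_prob(alpha):
    # Probability that a heterozygous (A/B) parent transmits allele A,
    # with alpha in (-0.5, 0.5); alpha = 0 gives the Mendelian 50:50 ratio.
    return 0.5 + alpha

def offspring_probs_het_x_het(alpha_s, alpha_d):
    # Genotype probabilities for offspring of an AB sire x AB dam mating
    # under the parent-specific allelic model.
    pa_s = transmission_prob(alpha_s)   # sire transmits A
    pa_d = transmission_prob(alpha_d)   # dam transmits A
    return {"AA": pa_s * pa_d,
            "AB": pa_s * (1 - pa_d) + (1 - pa_s) * pa_d,
            "BB": (1 - pa_s) * (1 - pa_d)}

def log_likelihood(counts, alpha_s, alpha_d):
    # counts: observed offspring genotype counts from AB x AB matings,
    # e.g., {"AA": 176, "AB": 1528, "BB": 740}.
    probs = offspring_probs_het_x_het(alpha_s, alpha_d)
    return sum(n * np.log(probs[g]) for g, n in counts.items())
```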
Genotypic parameterization of TRD. As developed by Casellas et al. 12, the genotypic parameterization can be modeled by assuming additive (α_g) and dominance (δ_g; over-/under-dominance) parameters, regardless of the origin of each allele. Following Casellas et al. 14, the probabilities of offspring (P_off) from heterozygous-by-heterozygous matings are defined in terms of these two parameters. For heterozygous-by-homozygous matings, a correction for overall losses of individuals in terms of genotypic frequency is needed to guarantee P_off(AA) + P_off(AB) + P_off(BB) = 1; the genotypic frequencies in offspring from an AA × AB mating, for example, are corrected accordingly. Under a Bayesian implementation, the conditional posterior probabilities of the TRD parameters are defined analogously to the allelic model. Flat priors were assumed for both α_g and δ_g within a conditioned parametric space (i.e., the parametric space of one parameter is conditioned on the other). Thus, the parameter space for α_g initially ranges over [−1, 1] with p(α_g) = 1/2 and becomes conditioned on δ_g when δ_g > 0, being restricted to [−1 + δ_g, 1 − δ_g] with p(α_g) = 2/(2 − 2δ_g). On the other hand, the parametric space for δ_g ranges over [−1, |α_g|] with p(δ_g) = 1/(1 + α_g). Notice that these conditions were imposed to avoid negative probabilities for a given offspring from a particular mating.
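The offspring-probability formulas were not preserved, but the conditioned prior space is fully specified in the text; a small sketch of that support check and of the quoted flat-prior densities follows (forming the joint density as a product is our simplification).

```python
def in_parametric_space(alpha_g, delta_g):
    # delta_g is bounded above by |alpha_g| and below by -1.
    if not (-1.0 <= delta_g <= abs(alpha_g)):
        return False
    # When delta_g > 0, alpha_g is restricted to [-1 + delta_g, 1 - delta_g].
    lo, hi = (-1.0 + delta_g, 1.0 - delta_g) if delta_g > 0 else (-1.0, 1.0)
    return lo <= alpha_g <= hi

def prior_density(alpha_g, delta_g):
    # Flat priors on the conditioned space, using the densities quoted in
    # the text: p(alpha_g) = 2/(2 - 2*delta_g) when delta_g > 0 (else 1/2),
    # and p(delta_g) = 1/(1 + alpha_g).
    if not in_parametric_space(alpha_g, delta_g):
        return 0.0
    p_alpha = 2.0 / (2.0 - 2.0 * delta_g) if delta_g > 0 else 0.5
    p_delta = 1.0 / (1.0 + alpha_g)
    return p_alpha * p_delta
```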
Statistical analyses.
The analyses of TRD were performed SNP-by-SNP and using a sliding-window approach for haplotypes of 2-, 4-, 7-, 10- and 20-SNP across 92,942 SNPs. For the haplotype analyses, the biallelic-haplotype procedure described by Id-Lahoucine et al. 5 was implemented following the same parameterization described above. The analyses were performed within a Bayesian framework using the TRDscan v.1.0 software 5 with a single Markov chain Monte Carlo chain of 110,000 iterations, where the first 10,000 iterations were discarded as burn-in. The statistical relevance of TRD was evaluated using the Bayes factor 27 (BF). The BF estimates were obtained across iterations with a lag interval of 10 iterations. The allelic and genotypic parameterizations were compared using the deviance information criterion 28 (DIC). In order to optimize the TRD analyses, the following steps were taken following Id-Lahoucine et al. 5. Firstly, a minimum of 1,000 informative offspring was required to guarantee minimal statistical power to characterize TRD across the whole genome. Secondly, a minimum number of informative parents (≥ 20 heterozygous sires and/or ≥ 100 heterozygous dams) was required to minimize possible false TRD arising from genotyping errors. As a post-analysis criterion, the approximate empirical null distribution of TRD 5 at a < 0.001% margin of error was applied in order to exclude TRD generated by chance (i.e., gamete sampling fluctuations). In the same way, regions with few heterozygous sires displaying fully skewed transmission and completely explaining the observed TRD in the corresponding region were discarded as potential genotyping errors. Subsequently, regions with a large credible interval for the TRD effects (i.e., coefficient of variation > 20%), potentially the result of unstable convergence, were filtered out. Finally, in order to combine and integrate all the results and obtain clear highlighted peaks of TRD across the whole genome, kernel smoothing 29,30 (a parametric technique) was applied. The smoothed estimate of BF for the ith base pair (bp) within the range κ_i to κ_n was calculated using a weighted Gaussian kernel, where σ is the bandwidth and (κ_i − κ_j) is the distance in base pairs. Following Id-Lahoucine 26, the smoothing process was implemented with a bandwidth of 500,000 bp, which is suggested to be a reasonable distance to obtain a considerable initial number of candidate regions in TRD analyses.
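The kernel formula was truncated in extraction; below is a minimal sketch of weighted Gaussian-kernel smoothing along a chromosome with the stated 500,000-bp bandwidth. The normalized form (weights summing to one) and the choice to smooth the BF evidence directly are our assumptions.

```python
import numpy as np

def smooth_bf(positions_bp, bf, bandwidth=500_000):
    # Gaussian-kernel smoothing of Bayes factor evidence along a chromosome:
    # each position is a weighted average of its neighbors, with weights
    # decaying with base-pair distance (kappa_i - kappa_j).
    positions_bp = np.asarray(positions_bp, dtype=float)
    bf = np.asarray(bf, dtype=float)
    smoothed = np.empty_like(bf)
    for i, k_i in enumerate(positions_bp):
        w = np.exp(-0.5 * ((positions_bp - k_i) / bandwidth) ** 2)
        smoothed[i] = np.sum(w * bf) / np.sum(w)
    return smoothed
```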
Characterization of TRD effects on reproductive phenotypes.
As an additional analysis, the effects of the TRD regions (SNPs or haplotypes) were evaluated using the heifer pregnancy trait as recorded in the whole American Angus database. To determine the effects of the alleles, pregnancy rates between at-risk and control matings were compared in two ways (for each region separately): AB × AB (risk) with AA × − (control), and AB × − (risk) with AA × AA (control). The first comparison allows us to determine the impact of recessive TRD regions, whereas the second is useful for allelic TRD regions. The rationale behind these matings is that we do not expect to observe BB offspring for recessive TRD regions; thus, both heterozygous parents are needed for the test. On the other hand, the presence of one single heterozygous parent is enough for testing allelic TRD regions, as AB offspring could also present reduced viability. This interaction effect was included in an animal model in which PHN is the phenotype recorded as a binary trait (i.e., pregnant or not pregnant), INT is the interaction effect between parent genotypes (recorded as 1 and 0 for at-risk and control matings, respectively), CG is a contemporary group (fixed effect comprising the unique combination of herd-breeding year-season-breeding group-synchronization), ADH is the age of the heifer's dam (fixed effect), HA is heifer age at breeding (covariate), SS is the first service sire (random effect), A is the animal additive genetic effect and e is the random residual term. The effects included in the model are similar to those used in the national genetic evaluation of American Angus (Angus Genetics Inc., St. Joseph, MO, USA). The analysis was performed using a linear model (assuming a Gaussian distribution for the random effects). The animal additive genetic effect follows a multivariate normal distribution, i.e., MVN(0, Gσ_a^2), where σ_a^2 is the genetic variance and G is the genomic relationship matrix constructed with 88,959 SNPs (minor allele frequency > 0.001) using VanRaden's first method 31. The significance of the interaction effect was tested with a t-test. The total number of genotyped heifers with a pregnancy record was 21,297. The total number of pregnancy records where at least one parent is genotyped was 70,869. When considering the maternal grandsire genotype (i.e., the sire of the heifer), the number of informative records increased to 76,719.
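The model equation itself was lost in extraction; a plausible reconstruction from the effects listed above (subscripts, the regression coefficient β and the residual distribution are ours):

```latex
\mathrm{PHN}_{ijklmn} = \mathrm{INT}_i + \mathrm{CG}_j + \mathrm{ADH}_k
  + \beta \, \mathrm{HA} + \mathrm{SS}_l + a_m + e_{ijklmn},
\qquad
\mathbf{a} \sim \mathrm{MVN}\!\left(\mathbf{0},\, \mathbf{G}\sigma_a^2\right),
\quad
\mathbf{e} \sim \mathrm{MVN}\!\left(\mathbf{0},\, \mathbf{I}\sigma_e^2\right)
```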
Results and discussion
Characterization of TRD on the Angus genome. Single nucleotide polymorphism and haplotype alleles were identified exhibiting distorted segregation ratios with decisive evidence (BF ≥ 100 according to Jeffreys' scale 32) across the Angus genome. After the implementation of the different strategies to minimize possible TRD artifacts, a total of 99,580 genomic regions with TRD were identified (including totally or partially overlapping windows), keeping exclusively the allele-region (SNP or haplotype allele) with the highest BF. Among them, 5,027 corresponded to SNPs and 9,913, 14,106, 18,088, 21,368 and 31,078 corresponded to haplotype windows of 2-, 4-, 7-, 10- and 20-SNP, respectively. This large number was a result of the sliding-window approach, the different window sizes applied and the level of linkage disequilibrium (LD). Thus, it is important to mention that the signals of TRD observed for individual SNPs and/or short haplotype windows are also observed in windows linked to them, given that the different TRD patterns observed across adjacent regions were potentially generated by one single mutation. Here, we assumed that the best candidate allele harboring the causal variant (or in strong LD with it) corresponds to the allele-region with the highest BF 5. Thus, after combining and integrating all the results, taking the LD into account through the smoothing process, 990 core regions were highlighted across the whole genome. Within these regions, 139 were excluded as plausibly explained by genotyping errors or convergence instability after individual checking (visual inspection) of the mean and standard deviation of the TRD parameters and the corresponding distribution of offspring across matings. Notice that genotyping errors could be anticipated by checking the number of heterozygous sires (with at least 10 offspring) that transmitted one allele with a probability > 90% and the distribution of their offspring. Following Id-Lahoucine et al. 5, the strategy used discards TRD generated entirely by a few heterozygous sires (e.g., < 3) with a large number of offspring, as these sires are potentially homozygous and incorrectly genotyped as heterozygous.
Relevant insights of TRD findings on the Angus genome with deleterious alleles.
The whole Angus genome was characterized with 851 non-overlapping TRD regions: 177 SNPs and 258, 165, 103, 78 and 70 haplotype windows of 2-, 4-, 7-, 10- and 20-SNP, respectively. Among these findings, it is important to highlight that the majority of regions were detected with more than one of the applied models (i.e., the parent-unspecific and parent-specific allelic models and the genotypic model). Despite this overlap, different statistical evidence was observed for the TRD estimates under the different models, suggesting different degrees of fit and, consequently, distinctive patterns of inheritance.
Allelic patterns. The majority of the TRD regions (657) presented an allelic pattern (i.e., they were identified with the allelic model with strong relevance). Loci with parent-specific TRD numbered 3 and 131 for dam- and sire-TRD, respectively. In order to target the most promising regions, following Id-Lahoucine 26, a moderate-to-high TRD of > 0.20 with at least 5,000 under-represented offspring was considered. On this basis, 52 regions were selected as the most relevant (Table 1 (first 20 regions) and S1 Table (full list)). The average number of under-represented offspring across the 52 regions was 10,099, with 41,008 being the maximum (Fig. 1). This finding shows that, under the hypothesis of lethality, potentially 41,008 offspring would have been lost to one single deleterious allele. This particular region was found with 79,200 informative offspring and a frequency of 0.08 (corresponding to the under-transmitted haplotype allele). In addition, among these regions, the penetrance (TRD magnitude) via sire and dam was equal or only slightly different in 51 TRD regions (S1 Table). In contrast, one single region exhibited sire-specific TRD whereas TRD via dam was null (Fig. 2). It is important to add that, even though those regions had moderate-to-high TRD signals, part of them may have TRD linked to specific families, and further research is required to better target the causal mutation. On the other hand, it is very interesting to mention that one single region with an allelic pattern was observed with opposite sire- and dam-TRD, where sires showed preferential transmission of one allele (3,055 AB and 1,014 BB offspring from AB (sire) × BB matings) whereas dams showed preferential transmission of the opposite allele (1,074 AB and 1,870 BB offspring from BB × AB (dam) matings). In fact, opposite sire- and dam-TRD was also observed in other regions displaying an excess or deficiency of heterozygous offspring (e.g., sire-TRD = −0.03, dam-TRD = 0.04, 14,406 AA, 16,618 AB and 10,300 BB from AB × AB matings), but this remarkable region showed a peculiar pattern which adds complexity to the TRD phenomenon in cattle.
Genotypic patterns. The genotypic model highlighted 19 regions with recessive patterns (Table 2) and 9 with either a deficiency or an excess of heterozygous offspring. Here, a minimum of 10 non-observed homozygous offspring was required to target recessive TRD. The number of non-observed homozygous offspring for the regions with a recessive pattern ranged from 10 to 564. The degree of lethality among these regions was diverse, some showing potentially full lethality (i.e., complete absence of the homozygous haplotype) and others reduced viability of offspring in the homozygous state. Nine haplotypes were detected with a complete absence of homozygous offspring; in the remaining regions, lethality was observed to different degrees, with the smallest change in mortality being 40%. As an illustrative example, a specific region (AR.9) with 176 AA, 1,528 AB and 740 BB offspring from AB × AB matings had an anticipated mortality rate of 76% ((740 − 176)/740 × 100); notice that 1,528 (AB) / 740 (BB) ≈ 2 maintains the expected Mendelian ratio. It is important to consider that, for this pattern, the reduced viability was observed only in homozygous offspring and not in heterozygous offspring, as is the case for allelic TRD patterns.
On the other hand, among the detected regions with recessive patterns, 3 physically close haplotypes (AR.16.a, AR.16.b and AR.16.c) segregated in heterozygous sires and dams (Table 2), which potentially points to the same causal mutation (SNP, deletion, etc.). The LD between these 3 haplotype alleles (biallelic-haplotype genotypes) was 0.76, 0.43 and 0.51. This result gives extra evidence supporting the TRD found in this particular region.
Model comparison.
The DIC values across models supported the inheritance patterns of the TRD regions described in the previous sections. Specifically, for the recessive TRD regions, the genotypic model was favored over the allelic model, with differences of up to 209.40 DIC units (the average across the 19 regions was 38.99). In contrast, among the 52 regions with an allelic pattern, 49 fit the allelic model better, displaying differences of up to 630.76 DIC units, with an average of 116.07. For the remaining regions (3 of 52), despite a lower DIC for the genotypic model, the distribution of offspring across matings presented an allelic pattern; their DIC advantage came from the combination of additive and dominance parameters that maximizes the likelihood of the data, resulting in a similar or better fit for both models.
Validation and comparison of the TRD phenomenon and lethality across breeds.
In general, when comparing TRD findings between the Angus and Holstein breeds 26, the observed prevalence and magnitude of TRD were higher in the Angus population. Whereas the number of regions in Angus was 851 with an average overall TRD magnitude of 0.27, the number of regions identified in another study from our group on the Holstein genome was 604 with an average TRD magnitude of 0.22 26. In relation to statistical evidence, 814 and 560 regions presented log10(BF) ≥ 10 for overall TRD in Angus and Holsteins, respectively. It is important to mention that this is not a limitation of statistical power, because the number of trios used in Holsteins (283,791) was even slightly higher than in Angus (205,954). The observed differences between breeds could be partially explained by the different genotype densities used for the TRD analyses, with a higher-density SNP array used in Angus (92,942 SNPs) than in Holsteins (47,910 SNPs). The advantage of high-density genotypes, which enable the whole genome to be explored more deeply, is the potential discovery of more candidate deleterious alleles. On the other hand, similar patterns of overall and sire-TRD were observed in both the Angus and Holstein breeds at similar positions across the genome. Among the 851 and 604 characterized TRD regions in Angus and Holsteins, respectively, 353 regions presented similar allelic TRD patterns, 46 of them with sire-specific TRD. Regarding the recessive TRD pattern, only one single region identified in Angus with a recessive pattern was physically close to a known lethal allele located at BTA21:21,184,869-21,188,198 (AR.16, Table 2) in Holstein cattle, also with recessive inheritance (Holstein haplotype 0 33,34). The causative mutation in Holstein haplotype 0, responsible for the brachyspina syndrome, was a 3.3-kb deletion in the FA complementation group I (FANCI) gene 34. If we assume no recent common ancestor between the two breeds, this is probably the result of independent mutations in the same gene, which generated similar TRD patterns in both breeds and, consequently, may support the biological function of this gene in reproduction-related traits. Within the same context, one of the detected regions with a recessive TRD pattern overlapped with a candidate lethal allele previously reported by Jenko et al. 24, located at BTA14:8,064,004-8,927,881 (AR.11, Table 2) in Aberdeen Angus. This reported haplotype was found to be associated with decreased insemination success and a longer interval between insemination and calving 24. The candidate gene for this haplotype was zinc finger and AT-hook domain containing (ZFAT), which is associated with prenatal or perinatal lethality in the Mouse Informatics Database 24. In addition, lethal alleles previously characterized by Jenko et al. 24 in Simmental (BTA13:73,746,516-74,973,171) and Limousin (BTA23:27,923,154-28,649,349) also physically overlapped with our findings, specifically among the 52 relevant allelic TRD regions (AA.33 and AA.44, S1 Table). On the other hand, among the 7 recessive lethal haplotypes reported by Hoff et al. 23 in Angus, 3 were found to overlap with our results in our Angus data but displayed allelic patterns: BTA8:62,040,920-63,000,189 (AA.21), BTA1:27,786,985-29,095,768 (AA.2) and BTA4:82,467,969-83,996,686 (AA.14). Hoff et al. 23 identified a candidate gene located on BTA1, glycogen branching enzyme (GBE1), which was found to produce recessive phenotypes in mammals.
Validation of the identified TRD regions using reproductive phenotypes: heifer pregnancy trait. Significant effects of TRD were found in the heifer pregnancy data. In total, 64 and 20 regions showed significant effects at p-value < 0.05 and 0.01, respectively. In particular, when comparing AA × AA with AB × − (risk matings), 49 and 12 regions displayed a significant interaction effect of the parent genotypes (at p-value < 0.05 and 0.01, respectively; Table 3 (first 30 regions) and S2 Table (full list)). The number of significant regions after controlling the false discovery rate (FDR) at the chromosome level 35,36 was 8 and 2 (at q-value < 0.05 and 0.01, respectively). The maximum observed effect was −0.085. Hence, whereas for non-risk matings (i.e., AA × AA) the average pregnancy rate was 0.87, the observed pregnancy rate dropped to as low as 0.74, that is, a 15% reduction in pregnancy rate. It is important to mention that these effects were supported by the distribution of the pregnancy rate among both sire × dam and sire × maternal grandsire matings. The use of sire × maternal grandsire matings allows the number of informative matings to be increased by using the phenotypes of non-genotyped heifers. These results support the relevance of the allelic TRD pattern, where the presence of the deleterious allele in one single parent is enough to reduce the pregnancy success of the animals. In addition, among these 49 regions, only 3 (Reg.12 (AA.15), Reg.21 (AA.24) and Reg.44 (AA.50)) presented a high TRD magnitude (> 0.20) and exhibited more than 5,000 under-represented offspring. However, the average TRD magnitude and number of under-represented offspring among the 49 regions significant for heifer pregnancy were 0.32 and 2,608, respectively (Table 3 and S2 Table).
Table 3. Effects of transmission ratio distortion (TRD) regions on heifer pregnancy and distribution of pregnancy among at-risk (one or both parents carrying the deleterious allele) and control matings. a Number of SNPs in the window; b parents' genotypes; c number of matings; FDR: chromosome-wise false discovery rate. The full list of regions is provided in S2 Table.
On the other hand, for AB × AB risk matings (recessive pattern), 15 and 8 regions displayed a significant interaction effect of the parent genotypes (at p-value < 0.05 and 0.01, respectively; Table 4). After chromosome-wise FDR, the number of regions reduced to 4 and 2 (at q-value < 0.05 and 0.01, respectively). The region with the largest observed effect (Reg.52; Table 4) showed an effect of −0.27, with a pregnancy rate of 0.58 (corresponding to a 31% reduction in pregnancy rate) but with only 9 informative records; using maternal grandsire matings, the observed pregnancy rate was 0.72 with 106 records. Only one of the recessive TRD regions (AR.18, Table 1; Reg.63, Table 4) showed a significant effect on heifer pregnancy, with a significant effect of −0.115, reducing the pregnancy rate to 0.75, that is, an 11% reduction in pregnancy rate. In addition, for some of the regions found with a significant effect when comparing AA × − and AB × AB matings, a reduced pregnancy rate was also observed in the distribution of AA × AA and AB × − matings. In fact, their allelic TRD pattern anticipates that one single copy of the deleterious allele could generate pregnancy loss, not only the presence of two copies (homozygous state). Finally, TRD regions that do not impact the pregnancy rate are still important, as they potentially impact a different stage of the reproductive cycle, emphasizing the importance of investigating the consequences of all TRD regions.
Conclusions
The analysis of a large genomic dataset allowed the characterization of TRD across the whole genome of the Angus breed. Different parameterizations uncovered 19 regions with recessive patterns (potential lethality for homozygous individuals) and 52 regions with allelic patterns. The allelic TRD discoveries exhibited complete or quasi-complete absence of homozygous individuals in addition to under-representation (potentially reduced viability) of carrier (heterozygous) offspring, as well as parent-specific TRD patterns. Using heifer pregnancy data, 64 and 20 regions showed significant effects at p-value < 0.05 and 0.01, respectively, reducing the pregnancy rate by up to 15%. After chromosome-wise false discovery rate control, the number of regions decreased to 12 and 4 at q-value < 0.05 and 0.01, respectively. The overlap of TRD regions with recently published candidate lethal alleles in Angus supports the consistency of the TRD findings. These novel findings in Angus present candidate genomic regions putatively carrying lethal and semi-lethal alleles, providing opportunities to reduce the rates of embryonic losses or death of offspring as a way of improving fertility and fitness in beef cattle populations.
Figure 1. Number of observed and expected offspring for each sire and dam mating and offspring genotype (sire × dam : offspring) of the TRD region with the highest number of under-represented offspring on the Angus genome. Individual SNP with an overall TRD = 0.21 and log10(BF) = 3,553.34.
Table 1. Potential candidate lethal or semi-lethal haplotype alleles with allelic transmission ratio distortion (TRD) patterns (allelic parameterization) in Angus cattle.
Table 2. Potential candidate lethal or semi-lethal haplotypes identified with recessive transmission ratio distortion (TRD) patterns (genotypic parameterization) in Angus cattle. a Number of SNPs in the window; b parents' genotypes; c offspring genotype; BF: Bayes factor. The full list of regions is provided in S2 Table.
Table 4. Effects of transmission ratio distortion (TRD) regions on the heifer pregnancy trait and distribution of pregnancy among at-risk (both parents carrying the deleterious allele) and control matings. a Number of SNPs in the window; b parents' genotypes; c number of matings; FDR: chromosome-wise false discovery rate.
"Biology",
"Agricultural And Food Sciences"
] |
miR-365 Ameliorates Dexamethasone-Induced Suppression of Osteogenesis in MC3T3-E1 Cells by Targeting HDAC4
Glucocorticoid administration is the leading cause of secondary osteoporosis. In this study, we tested the hypotheses that histone deacetylase 4 (HDAC4) is associated with glucocorticoid-induced bone loss and that HDAC4-dependent bone loss can be ameliorated by miRNA-365. Our previous studies showed that miR-365 mediates mechanical stimulation of chondrocyte proliferation and differentiation by targeting HDAC4. However, it is not clear whether miR-365 has an effect on glucocorticoid-induced osteoporosis. We have shown that, in MC3T3-E1 osteoblasts, dexamethasone (DEX) treatment decreased the expression of miR-365, which was accompanied by a dose-dependent decrease in cell viability. Transfection of miR-365 ameliorated the DEX-induced inhibition of MC3T3-E1 cell viability and alkaline phosphatase activity, and attenuated the suppressive effect of DEX on runt-related transcription factor 2 (Runx2), osteopontin (OPN) and collagen 1a1 (Col1a1) osteogenic gene expression. In addition, miR-365 decreased the expression of HDAC4 mRNA and protein by directly targeting the 3′-untranslated region (3′-UTR) of HDAC4 mRNA in osteoblasts. MiR-365 increased Runx2 expression, and this stimulatory effect could be reversed by HDAC4 over-expression in osteoblasts. Collectively, our findings indicate that miR-365 ameliorates DEX-induced suppression of cell viability and osteogenesis by regulating the expression of HDAC4 in osteoblasts, suggesting that miR-365 might be a novel therapeutic agent for the treatment of glucocorticoid-induced osteoporosis.
Introduction
Osteoporosis is a common bone disease characterized by low bone mass and bone structure deterioration, leading to bone fragility and fractures [1]. Glucocorticoid therapy is an important approach for managing inflammatory and autoimmune disorders [2,3]; however, long-term glucocorticoid therapy has several adverse effects, including osteoporosis [3,4]. For instance, glucocorticoid administration is the leading cause of secondary osteoporosis [5]. Therefore, it is imperative to develop new drug therapies to counteract glucocorticoid-induced osteoporosis.
Our previous studies showed that miR-365 is a mechanosensitive miRNA and that it stimulates cell proliferation and differentiation by targeting histone deacetylase 4 (HDAC4) in chondrocytes [10,11]. We hypothesize that miR-365 may also enhance osteoblast viability and differentiation by targeting HDAC4 in bone cells and, in doing so, ameliorate the glucocorticoid inhibition of osteoblast differentiation. In the present study, we investigated the effect of miR-365 on dexamethasone (DEX)-induced suppression of osteogenesis in MC3T3-E1 cells. The results showed that miR-365 ameliorated DEX-induced suppression of osteogenesis via a direct interaction between miR-365 and the 3′-UTR of HDAC4 mRNA in osteoblasts, suggesting that miR-365 may be considered a promising therapeutic agent for glucocorticoid-induced osteoporosis.
Dexamethasone (DEX) Inhibited Cell Viability and Decreased the Expression of miR-365 in MC3T3-E1 Cells
We examined the effects of DEX on the viability of MC3T3-E1 cells. The addition of DEX inhibited the viability of MC3T3-E1 cells in a dose-dependent manner (Figure 1A). We also studied the effect of DEX on miR-365 expression. qPCR results showed that DEX treatment significantly reduced miR-365 expression in MC3T3-E1 cells in a dose-dependent manner (Figure 1B).
MiR-365 Over-Expression Ameliorated DEX-Induced Inhibition of Osteoblast Cell Viability and Alkaline Phosphatase (ALP) Activity
To determine whether miR-365 is sufficient to affect cell viability, miR-365 mimic was transfected into MC3T3-E1 cells. While DEX treatment significantly inhibited the viability of MC3T3-E1 cells, miR-365 over-expression significantly prevented the suppression of cell viability by DEX at one, two and three days, respectively (Figure 2A). In addition, we examined the effect of miR-365 on ALP activity. MC3T3-E1 cells were incubated in osteogenic medium with or without 1 µM DEX after transfection with miR-365 mimic or miRNA mimic negative control. ALP staining was performed with BCIP/NBT solution on day 7. The result showed that miR-365 over-expression ameliorated the DEX-induced inhibition of ALP activity (Figure 2B).
MiR-365 Over-Expression Attenuated the Suppressive Effect of DEX on Osteogenic Genes Expression in MC3T3-E1 Cells
To study the effect of miR-365 on DEX-induced suppression of osteogenic differentiation, we measured the expression of the osteogenic genes Runx2, OPN and Col1a1 in MC3T3-E1 cells. MC3T3-E1 cells were cultured to 80% confluence and transfected with miR-365 mimic or miRNA mimic negative control. The cells were then incubated in osteogenic medium with or without 1 µM DEX for three days. Total RNA was extracted for real-time quantitative PCR. qPCR results showed that, while DEX inhibited the mRNA expression of Runx2, OPN and Col1a1, miR-365 attenuated the suppressive effect of DEX on all three osteogenic genes (Figure 3).
MiR-365 Over-Expression Inhibited the Upregulation of HDAC4 Induced by DEX
To study the involvement of HDAC4 in DEX-treated osteoblasts, we determined the levels of HDAC4 mRNA and protein (Figure 4). While DEX increased the expression of HDAC4 mRNA and protein in MC3T3-E1 cells, over-expression of miR-365 inhibited the DEX-induced upregulation of HDAC4 (Figure 4).
MiR-365 Directly Targets HDAC4 mRNA in MC3T3-E1 Cells
To investigate whether HDAC4 is a direct target of miR-365 in MC3T3-E1 cells, a wild-type mouse HDAC4 3′-UTR fragment containing the miR-365-binding sequence (Figure 5A) was cloned into a luciferase reporter vector, pmirGLO. The pmirGLO construct carrying the wild-type HDAC4 3′-UTR was co-transfected with miR-365 mimic or mimic negative control into MC3T3-E1 cells. The relative luciferase activity of the reporter containing the wild-type 3′-UTR was significantly decreased when miR-365 mimic was co-transfected into MC3T3-E1 cells (Figure 5B). Furthermore, transfection of miR-365 decreased the expression of HDAC4 mRNA and protein (Figure 5C,D). These results indicate that miR-365 can directly suppress HDAC4 expression by targeting the 3′-UTR in MC3T3-E1 cells.
MiR-365 Increased Runx2 Expression and HDAC4 Over-Expression Inhibited this Effect
Runx2 plays an important role in osteoblast differentiation [12]. To determine whether miR-365 regulates Runx2 in MC3T3-E1 cells, we quantified the Runx2 mRNA level. MiR-365 transfection significantly increased the expression of Runx2 mRNA, and co-transfection of HDAC4 cDNA significantly inhibited the miR-365-induced increase in Runx2 (Figure 6). These data suggest that miR-365 can regulate the expression of Runx2 via inhibition of the HDAC4 pathway.
Discussion
Glucocorticoids are widely used in the treatment of inflammatory and autoimmune diseases. However, long-term glucocorticoid therapy can lead to a reduction in bone mass [5,13]. Glucocorticoid-induced osteoporosis is the third most common type of osteoporosis, after postmenopausal and senile osteoporosis [14]. DEX is a commonly used glucocorticoid, and studies have shown that it inhibits osteogenic differentiation and bone formation [15,16]. In this study, we showed that DEX decreased the viability and ALP activity of MC3T3-E1 cells and that miR-365 significantly reversed this suppressive effect. Furthermore, we found that DEX inhibited the expression of Runx2, OPN, and Col1a1 in MC3T3-E1 cells and that miR-365 significantly ameliorated the suppressive effect of DEX on these osteogenic genes. Thus, miR-365 may serve as a new therapeutic agent for counteracting the glucocorticoid-induced inhibition of osteoblastic differentiation.
HDACs are a family of enzymes that catalyze the removal of acetyl groups from lysine residues in histones and non-histone proteins, and they play a key role in the transcriptional regulation of gene expression [17,18]. Studies have shown that HDAC4 has a vital role in skeleton formation [19,20]. Mice with a global deletion of HDAC4 display ectopic ossification of endochondral cartilage [21]. HDAC4 participates in parathyroid hormone (PTH)-induced bone metabolism [22], and histone deacetylase inhibitors promote osteoblast differentiation [23,24]. Our data showed that DEX increased the expression of HDAC4 and that miR-365 inhibited this DEX-induced upregulation. Furthermore, we found that miR-365 directly targets conserved seed sites within the 3′-UTR of HDAC4. Therefore, HDAC4 is a target through which miR-365 regulates the glucocorticoid suppression of osteoblast differentiation. In addition, Ko et al. have shown that glucocorticoid promoted the expression of HDAC4 and that miR-29a regulated excess glucocorticoid suppression of osteoblast differentiation by targeting HDAC4 [25]. Thus, HDAC4 may be a common target regulated by multiple miRNAs during glucocorticoid suppression of osteoblast differentiation.
In this study, we showed that miR-365 stimulates Runx2, an essential transcription regulator that plays a crucial role in osteoblast differentiation [12,26]. Furthermore, we showed that miR-365 stimulation of Runx2 is mediated by its knockdown of HDAC4 in osteoblasts. Studies have revealed that both intramembranous and endochondral ossification are completely blocked in Runx2 null mice and overexpression of Runx2 can enhance osteoblastic differentiation [26,27]. Furthermore, studies have shown that HDAC4 regulates Runx2 activity [28]. Cao et al. have shown that HDAC4 inhibited Runx2 promoter activity in a human chondrocyte cell line [29]. Smith et al. showed that miR-365 is involved in osteoblastic differentiation in B6 and C3H cells [30]. We have previously shown that miR-365 promotes chondrocyte proliferation and differentiation by inhibiting HDAC4 in chondrocytes [11]. Thus, a miR-365/HDAC4/Runx2 axis may be involved in regulating both chondrocyte and osteoblast differentiation.
Our data suggest that such a regulatory axis may be used for the treatment of glucocorticoid-induced bone loss (Figure 7). In the present study, we found that DEX inhibited the expression of Runx2, while miR-365 attenuated this inhibition. MiR-365 upregulated the expression of Runx2 by directly targeting HDAC4 in osteoblasts. Therefore, Runx2 is involved in the ability of miR-365 to counteract the glucocorticoid-induced suppression of osteoblast differentiation. In conclusion, our results collectively indicate that miR-365 ameliorates DEX-induced suppression of osteogenesis by directly regulating HDAC4. MiR-365 may be a potent therapeutic agent for the prevention and treatment of glucocorticoid-induced osteoporosis.
MiRNA Transfection
MC3T3-E1 cells were cultured to 80% confluence and transfected with miR-365 mimic (Dharmacon, Lafayette, CO, USA) or miRNA mimic negative control (Dharmacon, Lafayette, CO, USA) by Lipofectamine 3000 (Invitrogen, Waltham, MA, USA) according to the manufacturer's instructions. MiR-365 mimic and miRNA mimic negative control were used at a final concentration of 50 nM.
Cell Viability Assay
Cell viability was measured using the Cell Counting Kit-8 (CCK-8, Sigma, St. Louis, MO, USA). Briefly, samples were sub-cultured in a 96-well plate, and the cells were transfected with miR-365 mimic or miRNA mimic negative control. Twelve hours later, the cells were treated with 1 µM DEX or vehicle control for one, two, or three days. Cell viability was then assessed with the CCK-8 assay. The absorbance at 450 nm was measured by a microplate reader (SpectraMAX Me2, Molecular Devices, Sunnyvale, CA, USA).
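CCK-8 readings are typically blank-corrected and expressed relative to the vehicle control; the exact normalization is not stated here, so the following is a sketch with hypothetical A450 values:

```python
def viability_percent(a_treated, a_control, a_blank):
    """Percent viability from blank-corrected CCK-8 absorbance at 450 nm."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

# Hypothetical day-2 readings: DEX-treated vs. vehicle control wells
print(f"{viability_percent(0.62, 0.95, 0.08):.1f}% viability")
```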
ALP Staining
ALP staining was performed using BCIP/NBT solution (Sigma, St. Louis, MO, USA). Briefly, the treated cells were washed twice with phosphate-buffered saline (PBS) and fixed with 70% ethanol for 10 min. The cells were equilibrated twice with ALP buffer (0.15 M NaCl, 0.15 M Tris-HCl, 1 mM MgCl2, pH 9.5) and incubated with BCIP/NBT solution at 37 °C in the dark for 30 min. The reaction was then stopped with distilled water, and the plate was dried before photographs were taken.
Western Blot
All pre-treated samples were washed with PBS and lysed in lysis buffer (M-PER, Life Technologies, Waltham, MA, USA) plus the protease inhibitor phenylmethylsulfonyl fluoride (Thermo Fisher Scientific, Waltham, MA, USA) for 30 min on ice. The lysates were centrifuged at 12,000 × g for 15 min at 4 °C. The supernatants were collected, and the protein concentrations were determined using a BCA assay (Thermo Fisher Scientific, Waltham, MA, USA). After being heated for 5 min at 95 °C, equal amounts of protein (30 µg) per sample were separated on a 10% SDS-polyacrylamide gel and then transferred to a polyvinylidene difluoride (PVDF) membrane (Whatman, Lafayette, CO, USA) for 70 min at 100 V. The membrane was blocked with 5% bovine serum albumin (BSA) in Tris-buffered saline-Tween 20 (0.1%) (TBS-T) for 1 h and incubated with anti-HDAC4 or anti-actin antibodies (Abcam, Cambridge, MA, USA) at 4 °C overnight. On the next day, the membrane was incubated with anti-rabbit-Alexa Fluor 680 (Molecular Probes, Eugene, OR, USA) for 1 h at room temperature. The blots were scanned using an Odyssey fluorescence scanner (LI-COR Biosciences, Lincoln, NE, USA), and the band intensity was quantified using the Odyssey software.
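Band intensities from the Odyssey scan are usually normalized to the loading control before groups are compared; a sketch with hypothetical intensities (the study's raw values are not given):

```python
def normalized(hdac4, actin):
    """HDAC4 band intensity normalized to the actin loading control."""
    return hdac4 / actin

ctrl = normalized(1.8e4, 3.1e4)   # hypothetical control-lane intensities
dex = normalized(3.6e4, 3.0e4)    # hypothetical DEX-lane intensities
print(f"HDAC4 fold change (DEX/control): {dex / ctrl:.2f}")
```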
Statistical Analysis
All data are presented as mean ± SD. Statistical analysis was performed using one-way analysis of variance (ANOVA) among multiple groups and Student's t-test between two groups. A value of p < 0.05 was considered statistically significant.
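Both tests map directly onto standard library calls; a minimal Python sketch with hypothetical group measurements (scipy assumed available):

```python
from scipy import stats

# Hypothetical normalized ALP activities per group (n = 3)
control = [1.00, 0.96, 1.04]
dex = [0.52, 0.47, 0.55]
dex_mir365 = [0.81, 0.78, 0.86]

f_stat, p_anova = stats.f_oneway(control, dex, dex_mir365)  # one-way ANOVA
t_stat, p_t = stats.ttest_ind(dex, dex_mir365)              # two-group t-test
print(f"ANOVA p = {p_anova:.4f}, t-test p = {p_t:.4f}")     # p < 0.05 -> significant
```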
Conclusions
MiR-365 ameliorates DEX-induced suppression of cell viability and osteogenesis by regulating the expression of HDAC4 in osteoblasts. These findings suggest that miR-365 might be a novel therapeutic agent for treatment of glucocorticoid-induced osteoporosis.
"Biology",
"Medicine"
] |
Does Weather Still Affect The Stock Market?
This paper examines the impact of weather phenomena on the German stock market, evaluating cloud cover, humidity, air pressure, precipitation, temperature, and wind speed as weather variables. We use stock market data (returns, trading volume, and volatility) from the DAX, MDAX, SDAX, and TecDAX for the period from 2003 to 2017 and show, with modern time-series (GARCH) models, that air pressure is the only weather variable that exerts a potentially consistent effect on the stock market. Air pressure reduces the trading volume on the SDAX and TecDAX, and changes in air pressure lead to increases in returns on the DAX, MDAX, and SDAX. The effects of the other weather variables show no clear pattern and are critically discussed. In addition, this article contains an overview of the historical research results on the effects of weather on stock markets.
Introduction
In the first empirical investigation of the impact of weather phenomena on the stock market, Saunders (1993) pointed to the limits of classical capital market theory, showing a significant negative effect of clouds on the returns of North American equity indices. Motivated by this empirical finding, numerous studies with different study designs and overall inconclusive results followed (e.g., Bassi et al. 2013; Chang et al. 2008; Dowling and Lucey 2008; Frühwirth and Sögner 2015; Hirshleifer and Shumway 2003; Kamstra et al. 2003; Krämer and Runde 1997; Symeonidis et al. 2010). The studies confirming weather effects contradict the predictions of classical capital market theory but are consistent with behavioral science findings that identify the influence of mood on decision-making processes.
The theoretical basis for the existence of weather effects within stock markets is the assumption that there is an indirect functional chain of weather that influences investor mood, which in turn influences their decision-making processes (e.g., Bassi et al. 2013;Cao and Wei 2005;Frühwirth and Sögner 2015). If this indirect functional chain is operative to some extent and capital market anomalies in the form of weather effects actually exist, then these facts would lend support to theories of behavioral finance, an interdisciplinary field combining economics, psychology and sociology (Shiller 2003), while weakening the support for the efficient market hypothesis (Malkiel and Fama 1970).
There have been many empirical investigations into the weather anomaly on the New York Stock Exchange (NYSE) (e.g., Saunders 1993 and Trombley 1997), New Zealand Stock Exchange (Keef and Roush 2002), Madrid Stock Exchange (Pardo and Valor 2003), London Stock Exchange (Apergis et al. 2016), Australian stock market (Worthington 2009), Korean stock market (Yoon and Kang 2009), Taiwanese stock market (Chang et al. 2006), Chinese stock market (Lu and Chou 2012), and German stock market (Klein 2005). These studies differ with regard to the stock market examined and the indices, weather variables, time period, and statistical method used in the analysis.
For the German stock market, the weather capital market anomaly has not yet been exhaustively investigated. Overall, research gaps exist in terms of both content and methodology; the literature has not yet considered all important indices and all relevant weather variables simultaneously. Most studies, especially those on the German market, have analyzed only returns (Apergis et al. 2016; Cao and Wei 2005; Jacobsen and Marquering 2008; Klein 2005; Krämer and Runde 1997; Schneider 2014a). To our knowledge, no study has analyzed trading volumes, and only Dowling and Lucey (2008) investigated volatility.
From a statistical point of view, the usage of ordinary least squares (OLS) regression models for time-series data is insufficient in most cases due to heteroskedasticity problems and poor robustness. The majority of existing studies use only OLS regressions (exceptions include, e.g., Dougal et al. 2012;Symeonidis et al. 2010;Yoon and Kang 2009), which may explain the different outcomes of studies on weather anomalies in capital markets.
To address this shortcoming and provide more recent empirical evidence on weather anomalies in capital markets, we use generalized autoregressive conditional heteroskedastic (GARCH) time-series models and show, with data on German stock market indices (DAX, MDAX, SDAX, and TecDAX) covering August 2003 to July 2017, that weather has an impact on volatility and trading volume. The remainder of this paper is organized as follows. In the first section, we discuss the indirect functional chain linking weather, mood and decision-making. Then, we provide a literature review of the empirical results on a possible weather anomaly in capital markets. Thereafter, we briefly discuss the use of GARCH models for our analysis and present our results. Finally, we discuss the results and provide conclusions.
Theoretical Background and Hypotheses
The theoretical reasoning behind studies on the effect of weather on stock markets is the assumption of an indirect functional chain whereby weather influences investors' mood, which in turn influences their decision-making processes (e.g., Bassi et al. 2013;Cao and Wei 2005;Frühwirth and Sögner 2015). Mood is an affective state that can be influenced by external factors such as an individual's overall (biological) condition and health status. Weather can have an impact on these external factors and, thus, on mood. A study by Fletcher (1988) found that people reported increased joint pain, headaches, irritability, and nervousness in relation to exposure to Chinook winds in Canada. In addition, Guedj and Weinberger (1990) showed that weather can impact physical health, finding that changes in weather related to air pressure, temperature, and precipitation increased the pain sensitivity of rheumatism patients. Moreover, Jamison et al. (1995) obtained similar results. Other studies have focused on the effects of weather on mental health. Rosenthal et al. (1984) discovered, for example, a type of annually recurring depression in autumn or winter known as seasonal affective disorder (SAD). The symptoms of this disease change depending on the climate and latitude. Howarth and Hoffman (1984) identified a negative effect of humidity on concentration and a positive effect on tiredness, as well as a positive correlation between temperature and skepticism and a positive effect of sunshine on optimism. Recent studies by Denissen et al. (2008) and Kööts et al. (2011) have shown similar results and identified a negative impact of sunshine on tiredness. Schwarz and Clore (1983) observed that study participants generally thought more positively about their life on sunny days. The authors concluded that people use their current mood, here influenced by the weather, as a source of information for decision-making. Allen and Fischer (1978) observed that humidity influences mental efficiency, while Delyukov and Didyk (1999) showed that memory performance was impaired by aperiodic variations in air pressure. A meta-analysis of performance as a function of temperature was carried out by Pilcher et al. (2002) and found that both cold and hot temperatures generally have a negative influence on cognitive efficiency. Keller et al. (2005) obtained similar results and showed that high temperatures and high air pressure have a positive impact on memory performance and mental receptiveness. Thus, a variety of external factors related to weather have an impact on mood and ultimately influence decision-making.
The most commonly used models for explaining the influence of positive or negative affective states (which can be significantly influenced by weather) on (risk) behavior are Forgas' affect infusion model (AIM) (Forgas 1994, 1995) and Isen and Patrick's mood maintenance hypothesis (MMH) (Isen and Patrick 1983). However, these explanatory approaches differ in terms of their mechanisms of action and are therefore briefly presented below, while also providing the basis for hypothesis development.
The MMH holds that individuals who are in a positive affective state try to maintain it (Isen and Simmonds 1978). It follows that individuals in a negative affective state try to leave it (mood repair) to return to a positive affective state (Cialdini et al. 1973). If this hypothesis is translated to risky situations, such as investment decisions in stock markets, then a positive affective state leads to risk-averse behavior and a negative affective state to risk-seeking behavior (Isen 2008). The AIM argues in the opposite direction and can be described via affect priming and affect as information (Forgas 1995). Affect priming leads to a selective perception of the information needed for decision-making; thus, the decision is indirectly influenced by the current affective state. Affect as information describes the adoption of the affective state as an evaluation criterion for decision-making. The affective state then possibly leads to a decision that corresponds to the affective state (Forgas 1994), which, for decisions under risk, results in risk-seeking behavior for positive affective states and risk-averse behavior for negative affective states. The following table shows the relationships involving weather as an affective state. Forgas (1995) concluded that the impact of mood on decision-making processes is stronger for uncertain, riskier, and more abstract situations, which applies to financial decisions (Frühwirth and Sögner 2015). Consequently, mood can influence investors' decision-making processes in a way that may impact the capital market. It is therefore reasonable to assume that weather might influence the capital market.
This relationship is also known as a capital market anomaly, which cannot be explained by classical capital market theory and thus provides a justification for the interdisciplinary behavioral finance approach, which combines economics, psychology, and sociology (e.g., Shiller 2003). While there are many capital market anomalies (e.g., Dimson 1988), the following paragraphs discuss only the weather anomaly in relation to returns, volatility, and trading volume.
Return
Saunders' first study on weather phenomena in capital markets (Saunders 1993) marked the beginning of a research trend that continues to this day. The empirical results on returns are summarized in Table 2. Cloudiness is by far the most frequently studied variable, followed by temperature and precipitation. For cloudiness and temperature, more than half of the studies found negative impacts on returns; the remaining studies could not detect any effect of these two variables on returns. For the other weather variables, the majority of the studies showed no significant correlation. Air pressure has been one of the least studied weather variables, although it is the only weather phenomenon to which people are exposed inside buildings (Schneider 2014a).
The dominant statistical method used in these studies is OLS regression; the majority of them have attempted to account for the special nature of financial data by applying White or Newey-West standard errors to correct for bias arising from heteroskedasticity. Only Chang et al. (2006); Dowling and Lucey (2008); Floros (2011); Kamstra et al. (2003); Kang et al. (2010); Sariannidis et al. (2016); Yoon and Kang (2009), and Zadorozhna (2009) used modern financial market econometrics in the form of GARCH models. Therefore, the possibility cannot be excluded that the effects identified in studies using traditional models are based on an insufficient representation of the data.
As previously described, weather impacts people's mood. The AIM and MMH provide different theoretical explanations for the impact of weather on the stock market. Empirical evidence can be found for both approaches, although the majority of the results favor the AIM. Good weather conditions positively influence mood, which in turn leads to a positive impact on stock returns (AIM). The predictions arising from this line of reasoning are in contrast to the findings of studies that reported a correlation between bad weather 1 and high returns (MMH). Differences can also be identified among the weather variables. For example, most studies revealed a negative influence of temperature on returns, which might be explained by an increased willingness to take risks under certain weather conditions. According to this line of argument, cold temperatures lead to aggressive behavior, a greater willingness to take risks and, ultimately, increased returns. More pronounced risk-taking behavior in bad weather is in line with the results of Raghunathan et al. (2006), who observed riskier behavior among subjects who reported experiencing sadness (Raghunathan and Pham 1999). However, the mixed empirical results, the majority of which favor the AIM over the MMH, do not allow for a clear theoretical positioning. According to the AIM and MMH, two competing hypotheses can be formulated.
Hypothesis 1a: Good (bad) weather conditions lead to higher (lower) returns on the German stock market (AIM).
Hypothesis 1b: Bad (good) weather conditions lead to higher (lower) returns on the German stock market (MMH).
In addition to absolute weather characteristics or absolute deviations from average weather conditions having an effect on people and their decisions, studies have also shown that changes in weather can have an effect on people's physical constitution (Guedj and Weinberger 1990;Jamison et al. 1995). In addition, Wang (2016) discovered a correlation at the investor level between worsening changes in weather and risk appetite, measured in terms of the number and size of transactions in the UK spread market (MMH). At the index level, Schneider (2014a) found significant effects of positive daily changes in air pressure on the returns of the TecDAX and FTSE (AIM). The analysis of changes in weather also shows no clear direction based on underlying theories, so we also provide two competing hypotheses.
Hypothesis 1c: Positive (negative) changes in weather lead to higher (lower) returns on the German stock market (AIM).
Hypothesis 1d: Negative (positive) changes in weather lead to higher (lower) returns on the German stock market (MMH).
Volatility
Stock market volatility is less commonly assessed in weather studies than are returns. Table 3 summarizes the results of the studies on this indicator.
With the exception of Chang et al. (2008); Frühwirth and Sögner (2015); Lu and Chou (2012), and Pizzutilo and Roncone (2017), the above studies used GARCH models. In studies concerning weather and volatility, the variables cloud cover, temperature, and precipitation were mostly used.
Poor weather conditions, such as high precipitation and wind, lead to increased volatility in stock markets (MMH). It is argued that volatility results from heterogeneity or divergences in investor opinions and expectations (Harris and Raviv 1993;Shalen 1993). Bad weather can cause divergences in mood among investors and thus increase stock market volatility (Chang et al. 2008).
There is also a diametrically opposed argument that could explain a positive correlation between good weather, such as high temperature, and volatility (AIM). Good weather could create a positive mood among investors and consequently increase trading activity, which may influence volatility (Brown 1999). The effects for cloudiness are also attributable to the AIM and show reduced volatility as cloudiness increases. In summary, the empirical results show a nearly balanced distribution between effects attributable to the AIM and the MMH. Thus, we offer the following hypotheses:
Hypothesis 2a: Good (bad) weather conditions lead to higher (lower) volatility on the German stock market (AIM).
Hypothesis 2b: Bad (good) weather conditions lead to higher (lower) volatility on the German stock market (MMH).
Trading Volume
Another dependent variable considered in studies on weather and capital markets is trading volume, with the results shown in Table 4. As in the cases of returns and volatility, the observed studies present a mixed picture with regard to the significance and direction of the effects. One possible reason for the heterogeneous results is that the trading volume variable is operationalized differently across the listed studies. For example, the studies by Goetzmann and Zhu (2005) and Wang (2016) used data at the individual and investor levels, whereas Loughran and Schultz (2004) and Chang et al. (2008) formed portfolios for individual companies. The remaining studies were based on aggregated index trading volumes.
Two possible arguments could explain the effects of weather on trading volume. First, it is possible that when weather conditions are good, professional investors substitute working hours with free time (Connolly 2008), and private investors use their free time for activities other than trading. Second, poor weather conditions might increase the risk appetite of investors and thus their willingness to invest in equity markets. Both arguments propose a consistent effect of weather on trading volume in line with the MMH: good weather conditions lead to a substitution of trading with other activities, and bad weather conditions lead to a higher risk appetite, which in turn leads to higher trading volume. Even though this argumentation seems plausible, the empirical results do not show a clear direction in favor of the MMH, and the number of empirical results is limited thus far. At the same time, it is conceivable that good weather conditions lead to an increase in trading volume (AIM). Thus, we also propose competing hypotheses according to the AIM and MMH for trading volume:
Hypothesis 3a: Good (bad) weather conditions lead to a higher (lower) trading volume on the German stock market (AIM).
Hypothesis 3b: Bad (good) weather conditions lead to a higher (lower) trading volume on the German stock market (MMH).
Data
Market data for several German stock indices (DAX, MDAX, SDAX, and TecDAX) were collected from Thomson Reuters Eikon (formerly Datastream). Trading volume was measured as turnover by volume for the DAX, MDAX, and SDAX and as turnover by value for the TecDAX due to data restrictions in the Thomson Reuters Eikon database. Due to these restrictions, the time series for trading volume in the TecDAX was shorter and contained 2,144 observations, compared to 3,565 observations for all other index and variable combinations. Control variables were also included: dummy variables for Monday (Wang et al. 1997), January and December (Agrawal and Tandon 1994; Gultekin and Gultekin 1983), and the 'Sell in May and go away' strategy, known as the Halloween anomaly (Bouman and Jacobsen 2002). Furthermore, the previous day's return on the Dow Jones Industrial Average (DJIA) was included as a variable (Drozdz et al. 2001). The DJIA returns were calculated on the basis of the performance index; the German indices were also calculated on a total return basis. Additionally, we added a dummy variable to capture any possible turn-of-the-month effect in the data (Zwergel 2010).
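A sketch of how such calendar dummies could be constructed in pandas; the date range mirrors the sample period, and the turn-of-month rule is a crude proxy, since the paper does not state its exact definition:

```python
import pandas as pd

idx = pd.bdate_range("2003-08-01", "2017-07-31")    # business days of the sample
dummies = pd.DataFrame(index=idx)
dummies["MON"] = (idx.dayofweek == 0).astype(int)   # Monday effect
dummies["JAN"] = (idx.month == 1).astype(int)       # January effect
dummies["DEC"] = (idx.month == 12).astype(int)      # December effect
# Halloween / 'Sell in May': long only November through April
dummies["HALLOWEEN"] = idx.month.isin([11, 12, 1, 2, 3, 4]).astype(int)
# Crude turn-of-month proxy: calendar month end or first three calendar days
dummies["TURN"] = ((idx.day <= 3) | idx.is_month_end).astype(int)
```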
For the weather, we used data from the Climate Data Center (CDC) of the German Weather Service for the period from August 2003 to August 2017 in Frankfurt (Station-ID 01420). Frankfurt is Germany's financial center, and because Germany is relatively small compared to the US or China, a large proportion of domestic investors are exposed to the weather in Frankfurt or to similar weather conditions (Schneider 2014a, b). Schneider (2014a) showed, for example, that air pressure conditions are highly correlated across Germany. Similar results were found by Klein (2005), who reported high correlations of sunshine duration and cloudiness between major German cities. Therefore, the weather in Frankfurt is a good proxy for that in other German cities. The selected weather variables were sky cover, temperature, precipitation, air pressure, humidity, and wind speed. Sunshine was not selected due to multicollinearity with cloud cover. To account for seasonal weather patterns, we followed Hirshleifer and Shumway (2003) and calculated the average value of each weather variable for a particular calendar week over the whole dataset; the corresponding weekly mean was then subtracted from each daily observation. This method ensured that the variable being measured was the impact of abnormal weather conditions on stock markets. Table 5 summarizes the variables and their descriptions.
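The deseasonalization step translates directly into pandas; the column names below are assumptions, not the paper's variable codes:

```python
import pandas as pd

def deseasonalize(weather: pd.DataFrame) -> pd.DataFrame:
    """Subtract the full-sample mean of each calendar week from every daily
    observation (Hirshleifer and Shumway 2003)."""
    week = weather.index.isocalendar().week          # calendar week 1..53
    return weather - weather.groupby(week).transform("mean")

# weather = pd.read_csv("cdc_frankfurt.csv", index_col=0, parse_dates=True)
# abnormal = deseasonalize(weather[["SKC", "TEMP", "PREC", "PRES", "HUMI", "WIND"]])
```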
Descriptives
To test for the normality of the stock market data, we used the Jarque-Bera test. The return, trading volume, and volatility data were not normally distributed, suggesting that the residuals of subsequent regressions would not be normally distributed either. Hence, we used robust standard errors for the significance tests. Autocorrelation in the data was assessed by means of the Ljung-Box (LB) Q test, for which a significant result indicates the presence of autocorrelation or the absence of white noise. The test results indicated autocorrelation and the existence of volatility clusters. The difference stationarity of the stock market data was tested with the augmented Dickey-Fuller (ADF) test and trend stationarity with the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test; if stationarity is present, the ADF test should be statistically significant, and the KPSS test should not. The results indicated nonstationarity for the turnover-by-volume and turnover-by-value time series, and thus, we transformed the corresponding values using the first difference of the natural logarithm. Based on the test results, we assumed that the time series were stationary. Table 6 shows the descriptives of the weather variables, and Tables 7 and 8 show the stock market descriptives and the remaining test results.
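Both tests are available in statsmodels; a minimal sketch of the procedure described above (series names hypothetical):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

def stationarity_pvalues(series):
    """p-values of the ADF test (H0: unit root) and the KPSS test
    (H0: level stationarity)."""
    adf_p = adfuller(series, autolag="AIC")[1]
    kpss_p = kpss(series, regression="c", nlags="auto")[1]
    return adf_p, kpss_p

# volume: pandas Series of turnover by volume (nonstationary in levels)
# dlog = np.log(volume).diff().dropna()     # first difference of the log
# print(stationarity_pvalues(dlog))
```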
Model
Stock market returns have specific characteristics that cannot be adequately represented by classical time-series models or simple OLS regressions. These characteristics include leptokurtic distributions, higher-order autocorrelations, and volatility clusters. The autoregressive conditional heteroskedasticity (ARCH) models introduced by Engle (1982) can handle time series with these characteristics; they assume that the conditional variance is a function of the information available from previous periods, so the error term varies over time. In the modeling of financial time series, ARCH models have largely been replaced by GARCH models, which allow for a more parsimonious specification. Classic ARCH and GARCH models assume a symmetrical effect of positive and negative errors on volatility; according to this assumption, both good and bad news should have symmetrical effects on the variation in the data. However, this assumption often does not stand up to empirical scrutiny for certain capital market data. In the case of stock returns, for example, it has been observed that volatility reacts more sensitively to falling prices or bad news than to rising prices or good news. This asymmetrical reaction of volatility is called the leverage effect (Black 1976) and is considered in the exponential GARCH (E-GARCH) model proposed by Nelson (1991) and the threshold GARCH (T-GARCH or GJR-GARCH) model proposed by Glosten et al. (1993). Indeed, our dataset displayed the abovementioned characteristics: the LB test revealed strong autocorrelation in the returns and trading volume series, and the data showed volatility clustering. As a result, and following other studies (e.g., Chang et al. 2006; Floros 2011; Kang et al. 2010, and Yoon and Kang 2009), we applied GARCH models to capture this volatility clustering and to consider heteroskedasticity in the estimation (Bollerslev 1986).
To investigate the relationship between stock returns and abnormal weather conditions, we chose a linear autoregressive (AR(2)) model with the GJR-GARCH(1,1) process from Glosten et al. (1993). In all models, following good empirical research practices, we applied Bollerslev-Wooldridge error terms from the maximum likelihood estimation, which were robust to conditional nonnormality (Zivot 2009). Eq.
(1) includes autoregressive processes to correct for the autocorrelation of returns. In addition, the weather and control variables are included as explanatory variables. The error term ε_t is a zero-mean white noise process and is normally distributed. Eq. (2) gives the specification of the conditional variance σ²_t at time t, where α represents the lagged squared residuals and can be interpreted as the news coefficient, with higher values implying that more recent news has a greater impact. β is the conditional variance of previous periods, showing the impact of past variance, and α + β measures the persistence of volatility (Bollerslev 1986).
The GJR specification allows for an asymmetric impact of bad and good news on conditional variance. The leverage effect is considered via the dummy variable d_t, where d_t = 1 if ε_t < 0 and d_t = 0 otherwise. In this way, good and bad news can have different impacts on conditional volatility: good news (ε_t ≥ 0) has an impact of α, while bad news (ε_t < 0) has an impact of α + γ. If γ is significant and positive, leverage exists, and bad news increases volatility. For γ = 0, the model reduces to a symmetric GARCH model. The nonnegativity constraint is satisfied under the standard GJR-GARCH(1,1) conditions ω > 0, α ≥ 0, β ≥ 0, and α + γ ≥ 0. A similar model with a GJR-GARCH(1,1) process is adopted to assess the relationship between stock returns and daily changes in weather.
RET_{i,t} = μ_{i,0} + w_1 RET_{i,t-1} + w_2 RET_{i,t-2} + w_3 WIND_t + w_4 PREC_t + w_5 SKC_t + w_6 PRES_t + w_7 TEMP_t + w_8 HUMI_t + w_9 DJIA_{t-1} + w_10 MON + w_11 DEC + w_12 Halloween + w_13 JAN + w_14 TURN + w_15 TUR* + ε_{i,t}.
To analyze the relationship between stock volatility and weather factors, we selected a linear autoregressive (AR) model with the E-GARCH(1,1) process from Nelson (1991) because it avoids nonnegativity constraints for the parameters in the variance equation, which now includes the weather and control variables. The logarithmic form of the conditional variance (Eq. 6) ensures that the variance is positive. Like GJR-GARCH processes, E-GARCH models can capture asymmetry in volatility.
Eq. (5) assumes that returns follow an AR(1) process with drift, analogous to the series in Symeonidis et al. (2010). M represents the weather and control variables. In equation (6), γ shows the sign and leverage effect, Θ indicates the size effect, and β displays the persistence.
The impact of weather on trading volume was tested with several models. Based on (unreported) tests (namely, LB Q statistics and Engle's ARCH test), a linear AR(5) model with the GJR-GARCH(1,1) process was identified as the most appropriate. The weather and control variables were regressed against the first difference of the logarithmized trading volume (TUR*). The variance equation conformed to that of the return models.
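A sketch of the return specification using the Python arch package: the mean equation holds the AR terms plus the weather and control regressors, and o=1 adds the GJR asymmetry term. Note that arch does not support exogenous regressors in the variance equation, so the E-GARCH volatility model with weather in the variance is not reproduced here; variable names are assumptions.

```python
import pandas as pd
from arch import arch_model

def fit_return_model(returns: pd.Series, exog: pd.DataFrame):
    """AR(2)-X mean with GJR-GARCH(1,1) errors and Bollerslev-Wooldridge
    robust standard errors, mirroring the return specification above."""
    am = arch_model(returns, x=exog, mean="ARX", lags=2,
                    vol="GARCH", p=1, o=1, q=1, dist="normal")
    return am.fit(cov_type="robust", disp="off")

# exog: deseasonalized weather columns plus DJIA(t-1) and the calendar dummies
# res = fit_return_model(ret_dax, exog)
# print(res.summary())
```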
Regression Diagnostics and Robustness
For maximum-likelihood-based procedures, the quality of the model fit was determined by means of the Akaike and Bayesian information criteria (AIC and BIC, respectively), which are mainly used for model selection and the detection of overfitting and thus are not central to our purposes. To test for residual heteroskedasticity, we used the Lagrange multiplier (LM) test proposed by Engle (1982); nonsignificant test results indicate homoskedastic residuals. Table 9 shows the ARCH-LM test results for different lag parameters. With the exception of lag 7 for turnover by volume on the TecDAX, all test results were nonsignificant, so we could assume homoskedastic residuals. The autocorrelation of the residuals was tested by means of the LB test with different lags and with standardized and squared standardized residuals (Table 10). The LB test on standardized residuals evaluates the dependence of the first moments with a time lag; the LB test on the squares of standardized residuals, similar to the ARCH-LM test, evaluates the dependence of the second moments with a time lag. The clearly significant results for the turnover-by-volume model for the DAX, MDAX, SDAX, and TecDAX reflected an autocorrelation problem that was already present upon model selection (see Sect. 3.3) and could not be completely resolved by our AR(5) model. However, all further changes to the model specification (e.g., a higher number of lags and multiple differencing of trading volume) did not lead to an improvement but, in fact, worsened the diagnostic values. Therefore, we retained the GJR-GARCH(1,1) AR(5) model. The LB test on the squares of standardized residuals and the ARCH-LM test showed no problems. In summary, the regression diagnostics supported the usability of the models, even though there were autocorrelation problems for the turnover-by-volume model.
We tested the robustness of the results in two ways. First, we removed all outliers from the data and recalculated the GARCH models; the results remained constant even with the outliers excluded. Second, we varied the distribution assumption of the GARCH specification, computing the models with the generalized error distribution (GED) and Student's t distribution instead of the normal distribution used for our main calculations. Except for the results for trading volume, the effects changed only slightly under the alternative distribution assumptions. One reason for the lack of robustness in trading volume could be the heteroskedasticity problem discussed earlier. Therefore, we saw no evidence of a lack of robustness in the results. The comprehensive robustness results are available upon request.
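The diagnostics described above can be reproduced with statsmodels on the standardized residuals of a fitted model; a minimal sketch (lag choices illustrative):

```python
from statsmodels.stats.diagnostic import acorr_ljungbox, het_arch

def residual_diagnostics(std_resid, lags=(7, 21)):
    """ARCH-LM (H0: no remaining ARCH effects) plus Ljung-Box on the
    standardized residuals and their squares (H0: no autocorrelation)."""
    lm_pvalue = het_arch(std_resid, nlags=max(lags))[1]
    lb_levels = acorr_ljungbox(std_resid, lags=list(lags))["lb_pvalue"]
    lb_squares = acorr_ljungbox(std_resid**2, lags=list(lags))["lb_pvalue"]
    return lm_pvalue, lb_levels, lb_squares

# std_resid = res.std_resid.dropna()   # standardized residuals of a fitted model
```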
Results
The results are presented in detail in Tables 16-19 in the appendix and in concise form in Tables 11-14 in this section. For the interpretation of the results, we used only the abridged tables.
In contrast to the findings of traditional studies, we could not observe a sunshine or cloud cover effect. One reason might be that almost all former studies identifying such an effect adopted classic OLS or time-series models, which cannot accurately represent stock market data characterized by autocorrelation and volatility clustering. As a consequence, it cannot be ruled out that the significant results detected in the prior literature are spurious. Only Yoon and Kang (2009) used a model appropriate for capital market data, namely, a GJR model, to identify a significant impact of cloud cover in the pre-crisis period (1990-1997); however, this impact disappeared in the post-crisis period (1998-2006). In total, 8 significant effects could be found that could be assigned to the theoretical construct of the AIM and 3 significant effects in connection with the MMH. These findings can be taken as a weak indication that good weather leads to risk-seeking behavior and bad weather to risk-averse behavior in the stock market. A more detailed discussion is provided in Sect. 4.
Returns
The results mainly showed no weather effects in any of the German stock markets when the dependent variable was returns; there was only a statistically significant effect of air pressure on SDAX returns. Thus, good weather conditions may have a positive effect on returns (AIM), but the predominantly missing effects point to a rejection of H1a and H1b. In addition, we modeled the effect of daily changes in weather on returns and found more significant effects. If air pressure increases, then the returns of the DAX, MDAX, and SDAX increase (AIM); only for the TecDAX does no significant correlation with air pressure appear. In addition, our results show positive effects of a temperature improvement on the DAX and MDAX (AIM). In contrast to the literature (see Table 2), which found mainly negative effects of temperature increases on returns, an increase in temperature in the German market has a positive effect on returns. This can be attributed to the temperate climate in Germany, where a rising temperature represents a positive change in weather, whereas in Asian markets, for example, a rise in temperature tends to denote a worsening of the weather. The effects of changes in air pressure and temperature point toward H1c. However, since the effects are not consistently observable across the large and small indices and other weather influences are absent, we cannot confirm H1c either.
Volatility
Among the weather variables, we observed three statistically significant effects (see Table 13). Wind speed reduced the volatility of the SDAX (AIM), and relative humidity reduced the volatility of the TecDAX (AIM). Thus, bad weather conditions may have had a negative effect on volatility, which is indicative of risk-averse behavior and thus attributable to the AIM. In contrast, cloud cover had a positive impact on TecDAX volatility, which is attributable to the MMH. Since there were no weather effects for the DAX and MDAX and only 3 contradictory effects for the SDAX and TecDAX, we could not confirm H2a or H2b.
Trading volume
The regression results show significant negative effects of air pressure on trading volume for the SDAX and TecDAX. A rise in air pressure could be associated with good weather, which leads to decreased trading (MMH). These effects are in line with H3b, which posited that good weather conditions lead to a lower trading volume. However, since we did not observe effects from any of the other variables, the existing effects could be shown for only the SDAX and TecDAX, and there were still some autocorrelation problems for the analysis of trading volume (see Sect. 3.4), we were not able to confirm H3b.
GARCH vs. OLS
The majority of past empirical weather anomaly studies used OLS regression. However, this was not adequate in most cases due to heteroskedasticity issues, even when controlling for heteroskedasticity using White or Newey-West standard errors. Our literature review showed that for returns, for example, not even one-third of the studies used modern financial econometrics for empirical analysis (see also Sect. 2).
A comparative analysis shows how seriously the choice of method can influence the results. Calculating our models with OLS using White estimators led to completely different results compared to those identified using the GARCH models. Table 15 shows an overview of the GARCH and OLS results; the detailed regression tables are available upon request. With OLS regression, only one significant effect is detectable for the impact of weather on returns; for changes in weather, the OLS results show a positive influence of sky cover on the DAX. The GARCH model, conversely, identified one positive effect of air pressure on returns in the SDAX and five positive effects of changes in air pressure and temperature on the DAX, MDAX, and SDAX. The analysis of trading volume also shows that OLS regression provides a completely different picture: although the GARCH model showed only two negative effects of air pressure on trading volume for the SDAX and TecDAX, OLS regression showed one positive effect of wind on trading volume and eight negative effects of sky cover, air pressure, temperature, and humidity for the DAX, MDAX, and SDAX.
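For comparison, the OLS benchmark with White standard errors can be fit as follows; the design matrix is assumed to match the GARCH mean equation:

```python
import statsmodels.api as sm

def fit_ols_white(returns, exog):
    """OLS with White (HC0) heteroskedasticity-robust standard errors,
    the approach taken by most earlier weather-anomaly studies."""
    X = sm.add_constant(exog)
    return sm.OLS(returns, X).fit(cov_type="HC0")

# ols_res = fit_ols_white(ret_dax, exog)
```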
These different results make it clear that the choice of method has a significant impact on the results and that violating the application requirements of econometric models in the detection of financial market anomalies can lead to incorrect conclusions. At the same time, it is of great importance to consider which control variables are used. In particular, calendar effects (e.g., the Halloween effect and Monday, January, and December dummies) should be controlled for; otherwise, they could be incorrectly attributed to weather.
Conclusions
This study attempts to answer the question of whether there are indeed effects of weather on stock markets. The application of modern time-series regressions to data from the most important German stock indices shows a mixed picture, and thus, this question cannot be answered conclusively. As we mentioned in the results section, we do not regard isolated significant effects of weather variables as an indication of a significant capital market anomaly. This applies, in particular, to the effects of weather on volatility (the negative effect of wind on the SDAX, the positive effect of clouds on the TecDAX, and the negative effect of humidity on the TecDAX).
However, the effect of air pressure shows more consistent results across the various key figures and capital markets. We find a positive effect of air pressure on the returns of the SDAX but not on those of the TecDAX. At the same time, trading volume decreases on the SDAX and TecDAX as air pressure increases. These results show, on the one hand, that air pressure is an important weather variable to be considered and, on the other hand, that the effects of investor mood may be particularly relevant for small-capitalization indices (Baker and Wurgler 2006; Klein 2005; Lee et al. 2002; Schneider 2014a; Statman et al. 2006). A plausible reason is a higher proportion of domestic investors in small caps compared to blue chips. However, the analysis of the changes in air pressure and temperature also shows effects on the returns of larger indices such as the DAX and MDAX. This finding contradicts the assumption that only the proportion of domestic investors makes the effects of weather detectable; rather, the strength of the weather influence also seems to play a role. One explanation for the effects on the DAX and MDAX could accordingly be that changes in weather have a stronger influence on people's health and behavior than does the weather itself, so the proportion of domestic investors as an explanation for weather effects on the stock market moves into the background.
However, the divergence between the results in the literature and those of this study may also be due to methodological reasons. For example, it is noticeable that some authors use fewer control variables and OLS regression, and thus, their results are only comparable to a limited extent. The comparison of the GARCH model and OLS regression (see Table 15 and Sect. 3.9), for example, shows no OLS effects for air pressure and temperature and thus shows absolutely opposite results based on the method used. This finding shows that a comparison of the studies is questionable when using different financial market econometrics.
Nevertheless, no uniform picture of this situation emerges. Changes in air pressure have a positive effect on the returns of the DAX, MDAX and SDAX, but not on those of the TecDAX. At the same time, it is difficult to explain why changes in temperature have a positive effect on the DAX and MDAX, but not on the SDAX and TecDAX. Accordingly, our results show no empirical evidence that small caps are more vulnerable to the effects of weather than are blue chips due to more local investors. We therefore conclude that changes in weather lead to the most empirically meaningful results in this study. However, the results are not completely conclusive and further research is needed in this area with a focus on changes in weather. In addition, there should be more focus on the composition of the indices and the type of investors to better explain the effects of weather.
When the results are viewed against the background of the AIM and the MMH, a clear positioning in favor of the AIM emerges. A total of eleven statistically significant effects can be demonstrated, eight of which can be attributed to the AIM. If we exclude the results for trading volume, since the heteroskedasticity problem could not be completely solved for these models, only one effect for volatility (see Table 18) can be assigned to the MMH, and all others to the AIM. Our results should therefore be taken as further empirical evidence for the AIM.
This empirical study provides added scientific value because it includes a systematic presentation of the state of the art on the effects of weather in capital markets and thus provides directions for future research. In addition, this work fills a gap in the research on the German stock market. No other study has examined the German market so comprehensively, and all relevant weather variables are included in this analysis. Furthermore, this work is not limited to the analysis of returns but also examines volatility and trading volume. Finally, a methodical research gap is bridged. With the application of GARCH models, this empirical work is based on the state of the art methodology in the analysis of stock markets.
Even though the focus of this paper is not on the application of weather trading strategies, the identified correlations could be used for this purpose. Thus, the results may have economic relevance and be exploited by traders, even if this approach is often questioned due to the varied historical empirical results (see Sect. 2). Not much literature exists in this regard, although some authors have demonstrated successful weather strategies. In one famous paper, Hirshleifer and Shumway (2003) tested a weather-induced trading strategy and were able to increase the Sharpe ratio 2, including transaction costs, for a hypothetical investor. Kamstra et al. (2003) showed that a pro-SAD strategy (the reallocation of 100 percent of the portfolio twice a year at the fall and spring equinoxes) leads to an annual average excess return of 7.9 percent compared to a neutral strategy. In a recent preprint, Dong and Tremblay (2020) reported that a global weather-based hedge strategy produced a mean annual return of 15.2 percent, compared to a mean world index return of 3.1 percent, corresponding to a Sharpe ratio of 0.462 versus 0.005 for the world index; they used premarket weather conditions (sunshine, wind, rain, snow, and temperature) for their calculations. Thus, it might be possible to make profits on the German stock market by using weather strategies that mainly take into account changes in weather and air pressure.
In contrast to the added value of this work, we also note its limitations, which at the same time delineate future research needs. International investors are not influenced by the weather in Germany; thus, the mostly insignificant results for the DAX and MDAX do not rule out an effect of weather on the domestic investors in these markets. The second limitation is the inadequate specification of the models for trading volume; it cannot be ruled out that the identified effects are based on deficits in the study design. Accordingly, the replication of this study with the help of another model is advisable.
However, weather-related strategies apply to higher-frequency trades, and thus, transaction costs must be very small for such trading strategies to pay off. Rather, Hirshleifer and Shumway (2003) viewed their empirical results as evidence of psychological effects to which investors are inevitably exposed and of which they should be aware. We also interpret our results as an indication of the existence of effects on stock markets that cannot be rationally explained. Future research should focus more on the consequences of the effects of weather, especially those of changes in weather and the impact of air pressure.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Environmental Science",
"Economics"
] |
ssc-miR-185 targets cell division cycle 42 and promotes the proliferation of intestinal porcine epithelial cell
Objective: MicroRNAs (miRNAs) can play a role in a variety of physiological and pathological processes, and their role is achieved by regulating the expression of target genes. Our previous high-throughput sequencing found that ssc-miR-185 plays an important regulatory role in piglet diarrhea, but its specific target genes and functions in intestinal porcine epithelial cells (IPEC-J2) are still unclear. We intended to verify the target relationship between porcine miR-185 and the cell division cycle 42 (CDC42) gene in IPEC-J2 and to explore the effect of miR-185 on the proliferation of IPEC-J2 cells. Methods: The TargetScan, miRDB, and miRanda software were used to predict the target genes of porcine miR-185, and CDC42 was selected as a candidate target gene. The CDC42-3′UTR-wild type (WT) and CDC42-3′UTR-mutant type (MUT) segments were cloned into the pmirGLO luciferase vector, and luciferase activity was detected after co-transfection with miR-185 mimics and pmirGLO-CDC42-3′UTR. The expression level of CDC42 was analyzed using quantitative polymerase chain reaction and Western blot. The proliferation of IPEC-J2 was detected using cell counting kit-8 (CCK-8), methylthiazolyldiphenyl-tetrazolium bromide (MTT), and 5-ethynyl-2′-deoxyuridine (EdU) assays. Results: Double enzyme digestion and sequencing confirmed that CDC42-3′UTR-WT and CDC42-3′UTR-MUT were successfully cloned into the pmirGLO luciferase reporter vector, and luciferase activity was significantly reduced after co-transfection with miR-185 mimics and CDC42-3′UTR-WT. Furthermore, we found that the mRNA and protein expression levels of CDC42 were down-regulated after transfection with miR-185 mimics, while the opposite trend was observed after transfection with miR-185 inhibitor (p<0.01). In addition, the CCK-8, MTT, and EdU results demonstrated that miR-185 promotes IPEC-J2 cell proliferation by targeting CDC42. Conclusion: These findings indicate that porcine miR-185 can directly target CDC42 and promote the proliferation of IPEC-J2 cells. However, the detailed regulatory mechanism of the miR-185/CDC42 axis in piglets' resistance to diarrhea remains to be elucidated in further investigations.
INTRODUCTION
Diarrhea is the main cause of death of newborn and suckling piglets, which brings enormous economic loss to the pig industry [1]. The occurrence of piglet diarrhea is related to a combination of genetic and improper management factors, especially infection by pathogenic microorganisms, such as Escherichia coli [2], Salmonella [3], Clostridium perfringens [4-6], and porcine epidemic diarrhea virus [7]. Therefore, it is necessary to search for molecular markers of diarrhea resistance and carry out porcine disease-resistant breeding. MicroRNAs (miRNAs), a class of small and endogenous noncoding RNA molecules of 19-25 nucleotides, can play a key role in post-transcriptional regulation by binding to the 3′-untranslated region (3′-UTR) of target mRNAs [8,9]. They play roles in multiple biological processes, such as cell proliferation [10], apoptosis [11], tumorigenesis [12], and immune inflammation [13], by suppressing the expression of their target genes.
In our previous study, we examined the expression profiles of ileum miRNAs of 7-day-old piglets infected with Clostridium perfringens type C using small RNA-Seq and found that ssc-miR-185 was differentially expressed between the diarrhea-resistant and diarrhea-susceptible groups of piglets [14]. It has been reported that miR-185 plays important roles in a variety of cancers, covering pancreatic cancer [15], bladder cancer [16], non-small cell lung cancer [17], prostate carcinoma [18], gastric cancer [19], breast cancer [20], hepatocellular carcinoma [21], and colorectal cancer [22]. In addition, miR-185 can also play a regulatory role in the immune-inflammatory response. Liu et al [23] analyzed microRNAs in alcoholic liver diseases using microarrays and found that miR-185 participates in the immune response, the inflammatory response, and glutathione metabolism. Ma et al [24] found that the CCAT1/miR-185-3p/MLCK signaling pathway damages intestinal barrier function and promotes the deterioration of inflammatory bowel disease. Based on this, we speculate that miR-185 also plays an important role in the resistance of piglets to diarrhea infection.
Cell division cycle 42 (CDC42) is one of the members of the Rho GTPase family [25]. It is reported that CDC42 regulates cell functions such as cytoskeleton organization and adhesion, which are crucial in the development of various cancers [26]. One study reported that miR-137 may directly target CDC42, inducing G1 cell cycle arrest and inhibiting the proliferation and invasion activities of colorectal cancer cells [27]. Moreover, miR-185 is a negative regulator of RhoA and CDC42 and can inhibit the proliferation and invasion of human colorectal cancer cells [28]. However, the function of miR-185/CDC42 in the intestinal porcine epithelial cell line (IPEC-J2) remains to be determined.
In our current study, the relationship between porcine miR-185 and CDC42 was investigated in IPEC-J2. We predicted the target relationship between miR-185 and CDC42 using bioinformatics software. The mRNA and protein expression levels of CDC42 in IPEC-J2 were detected after transfection with miR-185. The luciferase activity of the recombinant plasmids was also detected. In addition, the effects of overexpressing miR-185 or knocking down CDC42 on the proliferation activity of IPEC-J2 were explored. Our results show that porcine miR-185 can directly target CDC42 and promote the proliferation of IPEC-J2 cells.
MATERIALS AND METHODS

Ethics statement
All animal experiments were conducted according to the Regulations and Guidelines for Experimental Animals established by the Ministry of Science and Technology (Beijing, China, revised in 2004) and approved by the Committee for Animal Ethics of the College of Animal Science and Technology, Gansu Agricultural University (approval number 2006398).
Sample collection and cell culture
Liver tissue samples were collected from three six-month-old male Landrace pigs and stored at -80°C until RNA extraction and use as a template for CDC42 gene 3'UTR amplification. The 293T and IPEC-J2 cells were purchased from BeNa Culture Collection (BNCC, Beijing, China). The cells were cultured in DMEM/F12 medium (HyClone, New York, NY, USA) supplemented with 10% fetal bovine serum (Gibco, Thermo Fisher Scientific, Inc., New York, USA) and 1% penicillin-streptomycin at 37°C and 5% CO2. When cell confluence reached 70% to 80%, transfection was carried out.
Total RNA extraction and cDNA synthesis
Total RNA was extracted from porcine liver tissues and IPEC-J2 cells using TransZol Up reagent (TransGen Biotech, Beijing, China) according to the manufacturer's instructions. Subsequently, cDNA was synthesized by reverse transcription using the PrimeScript RT reagent kit with gDNA Eraser (TaKaRa, Dalian, China) and stored at -20°C.
Bioinformatic analysis
Since miR-185 is highly conserved among different species, the miRNA databases TargetScan [29] (http://www.targetscan.org/vert_72/), miRDB [30] (http://www.mirdb.org/), and miRanda [31] (http://www.microrna.org/microrna/home.do) were used online to predict the target genes of porcine miR-185. Based on the predictive criteria of binding to targeted sequences with a low free energy of binding and having good complementarity with the targeted sequences, CDC42 was selected as a candidate mRNA for follow-up studies.
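As a rough illustration of the intersection step described above, the following sketch assumes each tool's predictions have been exported to a plain-text file of gene symbols; the file names and formats are hypothetical:

```python
# Intersect target-gene predictions from TargetScan, miRDB, and miRanda.
# File names are hypothetical; each file is assumed to hold one gene symbol per line.

def load_genes(path):
    with open(path) as f:
        return {line.strip().upper() for line in f if line.strip()}

targetscan = load_genes("targetscan_mir185_targets.txt")
mirdb = load_genes("mirdb_mir185_targets.txt")
miranda = load_genes("miranda_mir185_targets.txt")

common = targetscan & mirdb & miranda  # genes predicted by all three tools
print(f"TargetScan: {len(targetscan)}, miRDB: {len(mirdb)}, miRanda: {len(miranda)}")
print(f"Common targets: {len(common)}")
print("CDC42 among common targets:", "CDC42" in common)
```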
Plasmid construction and dual-luciferase reporter assay
To verify the targeting relationship between miR-185 and CDC42, a partial segment of the CDC42 mRNA 3'UTR (WT) containing the miR-185 binding sequence was amplified by polymerase chain reaction (PCR) using specific primers (Table 1). A mutated segment of the CDC42 mRNA 3'UTR (MUT), in which the miR-185 binding sequence TCTCTCC was converted to AGAGAGG, was obtained using gene synthesis and subcloning (GENEWIZ, Suzhou, China). The PCR products were cloned into the pmirGLO (7,350 bp) dual-luciferase reporter vector (Promega, Madison, WI, USA). The recombinant plasmids were confirmed by double enzyme digestion with Xho I and Sal I (TaKaRa, China) and by sequencing.
For transfection, 293T cells were seeded in 24-well plates and grown to 70% to 80% confluence. The recombinant plasmids were co-transfected with miR-185 mimics (50 nM) or inhibitor (100 nM), respectively, using Lipofectamine 2000 reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's protocol. The miR-185 mimics and inhibitor were designed and synthesized by RiboBio Biotech Co., Ltd. (RiboBio, Guangzhou, China). At 48 h post-transfection, the luciferase activity was detected using the Dual-Luciferase Reporter Assay System (Promega, USA). In this experiment, the pmirGLO vector was used as a blank control, and mimics NC and inhibitor NC were used as negative controls. All reactions were performed in triplicate.
Quantitative polymerase chain reaction
The IPEC-J2 cells were collected after transfection with miR-185 mimics and inhibitor. The quantitative polymerase chain reaction (qPCR) was performed using the TB Green Premix Ex Taq II (Tli RNaseH Plus) quantitative kit (TaKaRa, China) on a Roche LightCycler 480 II instrument (Roche, Penzberg, Germany). The primer sequences are shown in Table 1. The thermal cycling for PCR was performed at 95°C for 30 seconds, followed by 40 cycles at 95°C for 5 seconds and 60°C for 30 seconds. The relative mRNA expression of the CDC42 gene was normalized to the β-actin (ACTB) gene, and the results were calculated using the 2^-ΔΔCt method [32].
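The 2^-ΔΔCt calculation can be sketched as follows; the Ct values shown are hypothetical and serve only to illustrate the arithmetic:

```python
import statistics

def ddct_relative_expression(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA expression by the 2^-ddCt method (Livak and Schmittgen)."""
    dct_treat = statistics.mean(ct_target_treat) - statistics.mean(ct_ref_treat)
    dct_ctrl = statistics.mean(ct_target_ctrl) - statistics.mean(ct_ref_ctrl)
    ddct = dct_treat - dct_ctrl
    return 2 ** (-ddct)

# Hypothetical triplicate Ct values: CDC42 (target) vs ACTB (reference)
fold = ddct_relative_expression(
    ct_target_treat=[24.8, 24.9, 25.0], ct_ref_treat=[17.1, 17.0, 17.2],
    ct_target_ctrl=[23.6, 23.5, 23.7], ct_ref_ctrl=[17.0, 17.1, 16.9],
)
print(f"CDC42 relative expression (mimics vs NC): {fold:.2f}")
```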
Western blotting
At 48 h after transfection, total proteins were collected from the treated cells using RIPA buffer (Solarbio, Beijing, China) and quantified using the BCA protein assay kit (Solarbio, Beijing, China). Denatured proteins from each group were separated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred onto polyvinylidene fluoride membranes. The membranes were then blocked with 5% skim milk in Tris-buffered saline with Tween-20 at room temperature for 1 h. Next, the membranes were incubated with primary antibodies (anti-CDC42, bs3555R, 1:1,000; anti-β-actin, bsm33036M, 1:1,500; Bioss, Beijing, China) at 4°C overnight. The membranes were then incubated with secondary antibodies (HRP-conjugated goat anti-rabbit IgG, bs0295GHRP, 1:2,000; Bioss, Beijing, China) for 2 h at room temperature. The protein bands were visualized by enhanced chemiluminescence, and the gray level of the protein bands was analyzed using ImageJ software (National Institutes of Health, Bethesda, MD, USA).
Interference RNA synthesis and overexpression vector construction
The interference RNAs used in this experiment were designed and synthesized by GenePharma Company (Shanghai, China). The si-NC was regarded as a negative control. The interference sequences are shown in Table 2. The CDC42 gene was cloned into the pcDNA3.1(+) vector using the Nhe I and BamH I restriction sites. The pcDNA-CDC42 overexpression vector was constructed by GENEWIZ Company (Suzhou, China).
Methylthiazolyldiphenyl-tetrazolium bromide assay
The methylthiazolyldiphenyl-tetrazolium bromide (MTT; Beyotime, Shanghai, China) assay was also used to examine cell viability. Cells (5×10^3 per well) were cultured for 24 h in 96-well plates before treatment with miR-185 mimics, mimics NC, miR-185 inhibitor, or inhibitor NC, and then incubated for 24 h at 37°C with 5% CO2. Subsequently, 10 μL of MTT reagent (5 mg/mL) was added to each well for another 4 h. The medium was then discarded, and the formazan crystals were dissolved in 110 μL of dimethyl sulfoxide. The optical density (OD) was determined at 490 nm using a SkanIt microplate reader (Thermo Fisher Scientific Inc., USA).
5-ethynyl-2'-deoxyuridine assay
The BeyoClick EdU Cell Proliferation Kit with Alexa Fluor 555 (EdU; Beyotime, Shanghai, China) was used to detect cell proliferation. After seeding in 24-well plates (5×10^3 cells per well) for 24 h, the IPEC-J2 cells were transfected. At 24 h after transfection, the cells were incubated with 10 μM EdU solution in growth medium for 2 h. The cells were then stained with Azide 555 solution (red) and Hoechst 33342 (blue). Finally, the results were observed under a fluorescence microscope (Olympus IX71, Tokyo, Japan) at 200× magnification. The EdU-positive cells were analyzed with ImageJ software.
Statistical analysis
IBM SPSS Statistics software (version 21.0; IBM, Armonk, NY, USA) was used to analyze the data, and all experiments were repeated at least three times. A Student's t-test was applied to compare two groups, and one-way analysis of variance (ANOVA) was performed for multiple groups. All values in this study are expressed as the mean±standard deviation, and a p-value of less than 0.05 indicated statistical significance.
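A minimal sketch of this analysis workflow, assuming triplicate measurements stored as arrays (the numbers are placeholders, not data from this study):

```python
import numpy as np
from scipy import stats

# Hypothetical replicate measurements (e.g., OD490 values) for illustration only.
nc = np.array([0.52, 0.55, 0.50])
mimics = np.array([0.68, 0.71, 0.66])
inhibitor = np.array([0.41, 0.39, 0.43])

# Two groups: Student's t-test
t, p_t = stats.ttest_ind(mimics, nc)
print(f"mimics vs NC: t = {t:.2f}, p = {p_t:.4f}")

# Multiple groups: one-way ANOVA
f, p_f = stats.f_oneway(nc, mimics, inhibitor)
print(f"one-way ANOVA: F = {f:.2f}, p = {p_f:.4f}")

# Values reported as mean +/- standard deviation; significance threshold p < 0.05
print(f"mimics: {mimics.mean():.2f} +/- {mimics.std(ddof=1):.2f}")
```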
RESULTS

Predicting targeted mRNA
To explore the potential mechanism of ssc-miR-185, we performed a multi-sequence alignment analysis of the mature miR-185 sequences in different species and found that the mature sequence of miR-185 is highly conserved in vertebrates (Figure 1A). The target mRNAs of miR-185 were predicted using the TargetScan, miRDB, and miRanda software, and 385, 1,137, and 1,225 target genes were obtained, respectively; 100 common target genes were obtained from the intersection (Figure 1B). It was found that the CDC42 gene 3'UTR can complement and bind to the seed region of miR-185 (Figure 1C). The partial CDC42 3'UTR sequences containing the miR-185 binding sites are shown in Figure 1D.
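The complementarity between the miR-185 seed region and the TCTCTCC binding site reported above can be checked with a short script; the mature sequence below is assumed from the miRBase ssc-miR-185 entry, and the UTR fragment is a hypothetical stand-in:

```python
# Check that the DNA reverse complement of the miR-185 seed region (nucleotides
# 2-8) matches the TCTCTCC site that was mutated in the MUT construct.
MIR185 = "UGGAGAGAAAGGCAGUUCCUGA"  # assumed mature ssc-miR-185 sequence (miRBase)

def seed_site(mirna, start=1, end=8):
    """DNA reverse complement of the seed region (nt 2-8, 0-based slice 1:8)."""
    comp = {"A": "T", "U": "A", "G": "C", "C": "G"}
    seed = mirna[start:end]
    return "".join(comp[b] for b in reversed(seed))

site = seed_site(MIR185)
print("Predicted binding site:", site)  # -> TCTCTCC

utr_fragment = "ACGT" + site + "GTCA"   # hypothetical 3'UTR fragment carrying the site
print("Site present in UTR fragment:", site in utr_fragment)
```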
Recombinant plasmid identification and luciferase activity detection
Double enzyme digestion and sequencing confirmed that CDC42-3'UTR-WT and CDC42-3'UTR-MUT were successfully cloned into the pmirGLO luciferase reporter vector (Figure 2A-2D). The TCTCTCC sequence was successfully mutated to AGAGAGG, without changes to other bases. In order to confirm the role of miR-185 in regulating the CDC42 3'UTR, the luciferase activity was detected using the Dual-Luciferase Reporter Assay System according to the manufacturer's specification. We found that miR-185 mimics remarkably reduced the luciferase activity of pmirGLO-CDC42-WT (p<0.01), but not that of pmirGLO or pmirGLO-CDC42-MUT (p>0.05) (Figure 3).
Effects of miR-185 on CDC42 expression level in IPEC-J2
To further confirm the effects of miR-185 on CDC42, qPCR and Western blot analyses were used to examine the mRNA and protein expression in IPEC-J2 cells after transfection with miR-185 mimics, mimics NC, miR-185 inhibitor, or inhibitor NC, respectively. The results showed that the mRNA and protein expression levels of CDC42 were dramatically lower after transfection with miR-185 mimics than with mimics NC (p<0.01); however, both the mRNA and protein expression levels of CDC42 were significantly higher after transfection with miR-185 inhibitor than with inhibitor NC (p<0.01) (Figure 4A-4C). These results demonstrate that miR-185 directly regulates CDC42 expression.
miR-185 promotes IPEC-J2 cell proliferation
In order to explore the effect of miR-185 on IPEC-J2 cell proliferation, CCK-8 and MTT assays were used to detect cell viability after transfection with miR-185 mimics, mimics NC, miR-185 inhibitor, or inhibitor NC. We found that overexpression of miR-185 enhanced cell viability, while knockdown of miR-185 inhibited the viability of IPEC-J2 cells. The CCK-8 and MTT assays showed similar trends (Figure 5A, 5B). The EdU assay was used to detect cell proliferation, and the results showed that EdU-positive cells were significantly increased after transfection with miR-185 mimics. On the contrary, after transfection with miR-185 inhibitor, EdU-positive cells were significantly reduced. Therefore, we conclude that miR-185 can promote IPEC-J2 cell proliferation.
Knockdown of CDC42 promotes the proliferation of IPEC-J2 cells
To confirm whether miR-185 directly promotes the proliferation of IPEC-J2 cells by targeting CDC42, we compared knockdown and overexpression of CDC42 in IPEC-J2 cells. After transfection with si-CDC42-1 and si-CDC42-2, the expression of CDC42 was down-regulated to 0.828- and 0.238-fold, respectively, so si-CDC42-2 was used in subsequent experiments. In contrast, after transfection with the pcDNA-CDC42 plasmid, the expression of CDC42 was significantly up-regulated (Figure 6A). The CCK-8, MTT, and EdU assays were used to detect cell viability and proliferation; the results showed that knockdown of CDC42 promoted cell proliferation, while overexpression of CDC42 inhibited the proliferation of IPEC-J2 cells (Figure 6B, 6C, 6D, and 6E). Therefore, we hypothesized that miR-185 might directly target CDC42 to promote IPEC-J2 cell proliferation.
DISCUSSION
Diarrhea is a common disease in the pig industry and is especially harmful to piglets. Our previous study found that ssc-miR-185 was up-regulated in the diarrhea-resistant group of piglets [14]. We speculated that it may play an important role in resisting diarrhea, but its specific target gene was unknown. It is well known that bioinformatics prediction combined with experimental validation is an effective method for screening miRNA target genes. In this study, three software tools, TargetScan, miRDB, and miRanda, were used to predict the target genes of miR-185, which could effectively reduce the false-positive rate. By finding the intersection, CDC42 was selected as a candidate target gene. Previous research has shown that CDC42 is a potential target of miR-185. For example, Zhang et al [21] confirmed that CDC42 is a direct target of miR-185 in human hepatocellular carcinoma using luciferase reporter assays. Liu et al [28] showed that miR-185 expression significantly suppressed the RhoA and CDC42 3'UTR activities using a luciferase reporter assay and could inhibit the proliferation and invasion of human colorectal cancer cells. Notably, the miR-185 and CDC42 gene sequences are highly conserved between pig and human. Hence, we assumed that ssc-miR-185 could bind to the conserved sites of CDC42. In our present study, we found that the CDC42 3'UTR contains a miR-185 binding site according to the bioinformatics software. The luciferase activity was remarkably suppressed in the pmirGLO-CDC42-WT group after transfection with miR-185 mimics. These results indicated that CDC42 is a target gene of porcine miR-185. As a Rho GTPase implicated in tumors, CDC42 can participate in the migration and invasion of various cancer cells [33]. Previous research reported that microRNA-384 inhibits the proliferation, migration, and invasion of glioma by targeting CDC42 [34]. Yang et al [35] confirmed that down-regulation of miR-25 markedly inhibited A549 cell proliferation and induced G1 cell cycle arrest by targeting CDC42. In addition, miR-330 regulates the proliferation of colorectal cancer cells by targeting CDC42 [36].
More and more studies have confirmed that miRNAs can negatively regulate the expression of target genes. Niu et al [37] demonstrated that ROCK2 was negatively associated with miR-185-5p and promoted hepatocellular carcinoma cell migration and invasion. Fang et al [38] revealed that the expression levels of miR-185 and STIM1 were negatively correlated, as detected by qRT-PCR and Western blot assays. In this study, the mRNA and protein expression levels of CDC42 were dramatically decreased after overexpression of miR-185, which further confirms the targeting relationship between porcine miR-185 and CDC42. Functionally, miR-185 has been reported to inhibit the proliferation of cancer cells and promote apoptosis. For example, up-regulation of miR-185 promotes apoptosis of the human gastric cancer cell line MGC803 [39]. Zou et al [40] found that RKIP, through up-regulation of miR-185, suppresses the proliferation and metastasis of breast cancer cell lines. Furthermore, miR-185 can inhibit virus infection through the regulation of immunometabolic pathways [41]. In our present research, we examined the exact function of miR-185 in proliferation and proved that miR-185 promotes the proliferation of normal IPEC-J2 cells. However, whether miR-185 can resist piglet diarrhea caused by pathogenic bacterial infection and inhibit intestinal cell apoptosis requires further research. In summary, our results may provide new insights into the screening of miR-185/CDC42 molecular markers.
CONCLUSION
In conclusion, luciferase activity, qPCR, and Western blot assays showed that porcine miR-185 can directly target the CDC42 gene. In addition, overexpression of miR-185 and knockdown of CDC42 can promote the proliferation of IPEC-J2 cells. However, the detailed regulatory mechanism of the miR-185/CDC42 axis in piglets' resistance to diarrhea requires further investigation.
CONFLICT OF INTEREST
We certify that there is no conflict of interest with any financial organization regarding the material discussed in the manuscript.
"Biology"
] |
Development and Implementation of a Low-Cost Test Solution for High-Precision ADC Chips Based on Intelligent Sensor Networks
Analog-to-digital converters (ADCs) are moving toward high speed and high resolution, which makes low-cost testing challenging. Based on the theory of intelligent sensor networks, this paper designs a low-cost test solution for high-precision ADC chips, which solves the problems related to signal integrity. It mainly includes the following: designing an appropriate circuit connection scheme, planning an appropriate PCB stack-up structure, formulating detailed layout and wiring constraints, etc., and building a high-speed ADC test platform to obtain static and dynamic performance; based on the existing instruments in the laboratory, the effects of different signal sources, different input powers, and the presence or absence of filters on the dynamic performance of high-speed ADCs are studied. In the simulation process, the HyperLynx simulation platform is used to design and simulate the signal integrity of the high-speed acquisition board. Combined with the relevant theoretical knowledge of the signal integrity of high-speed digital circuits, signal integrity analysis and simulation of the ADC module circuit and the DDR3 high-speed memory circuit are carried out, respectively. The results show that, taking the histogram method as a reference, when the optimal 30 windows are selected, the integral nonlinearity (INL) error of the proposed method is 0.12 LSB, the highest sampling frequency is up to 5 GSps, and 61,440 sampling points are required. The test time is reduced by about 30% compared with the stimulus error identification and removal (SEIR) method, which effectively improves the low-cost testing of the ADC chip.
Introduction
With the rapid development of ultra-large-scale integrated circuits, the functions of chips are becoming more complex and diversified, and the performance requirements for automatic test machines in large-scale production testing are also getting higher and higher [1]. This makes ATE more complex and requires testing capabilities for digital circuits, analog circuits, and memory circuits at the same time, and the SoC test system emerged as the times require [2]. When a trace is too long, the rising edge of the clock signal received by the CPU chip is no longer monotonic [3-5]. For a circuit that triggers acquisition on the rising edge, the clock signal may then latch the same data twice. For a high-speed circuit board, due to the dense layout and wiring, it is inevitable that some wires run close to others, and mutual coupling may occur between the wires [6]; energy is coupled from one wire to another, distorting the signal waveform. When a signal propagates on a discontinuous transmission line, reflections occur, which cause signal ringing, that is, recurring overshoot and undershoot. In practical engineering design, these signal integrity problems can be seen everywhere [7].
There are existing ADC standard test methods that can accurately test general-purpose ADCs, but in the field of high-resolution ADCs, these methods have limitations such as requiring high signal-source accuracy and too many sampling points [8-10]. Therefore, research on fast and accurate test methods for high-resolution ADCs has become one of the focuses in the field of ADC testing. Although high-speed digital circuits have real-time and high-speed data processing capabilities, they also bring some problems that do not appear in traditional circuits [11], especially signal integrity problems. With the continuous increase of chip functions, the complexity of chip functions requires automatic test equipment to provide various test resources [12]. This requires that, in the test, the automatic test equipment can not only provide digital signals and analog signals of various frequencies, such as sine waves at high and intermediate frequencies and analog signals with modulation information, but also analyze the signals output by the chip [13]. For example, it can judge the logic state of the digital output pins and can correctly sample the analog signal, perform Fourier transforms, spectrum analysis, etc. [14].
This paper proposes a piecewise polynomial fitting method for testing high-resolution ADCs based on low-precision signal sources. The method first uses a low-precision DAC and a DC bias circuit to generate multiple correlated sinusoidal signals with different offsets. After analyzing the signal error, each correlated sinusoid is used as the input of the ADC under test (DUT). In order to separate the low-precision sinusoidal excitation from the nonlinear characteristics of the DUT, the method uses a set of Fourier series with unknown coefficients to represent the transfer function of the DUT, then combines the known output codes of the DUT and the transfer function expression to establish a system of equations relating the analog conversion levels, uses the least squares method to fit a set of optimal Fourier coefficients, and then solves the transfer function of the DUT. Compared with the histogram method, the proposed method greatly reduces the number of required sampling points, but too few points will lead to lower test accuracy. In order to reduce or eliminate this effect, this paper further uses a rectangular window to divide the full-scale input range of the DUT into multiple segments and ensures that the overlap between adjacent segments is not less than 3%. Firstly, according to the technical specifications of the chip, a reasonable test scheme is designed to realize the parallel test of four devices under test. The main technique is to use a signal generator to divide the signal into four channels through a splitter and input them into four tuners, respectively, to generate intermediate-frequency signals for the four DUTs. The measured indicators cover static indicators such as quantization error, offset error, gain error, and nonlinearity error, and dynamic indicators such as equivalent input-referred noise, signal-to-noise ratio, signal-to-noise-and-distortion ratio, and spurious-free dynamic range. In addition, the commonly used test methods for static and dynamic indicators are also given. The downside is that an expensive external signal source is still required. In addition, the stability of the test is difficult to control due to the complexity of the test scheme and synchronization problems.
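The core of the fitting step can be sketched as follows; this is a simplified, single-segment illustration with synthetic data, not the authors' exact implementation:

```python
import numpy as np

# Represent the DUT transfer function as a truncated Fourier series with
# unknown coefficients and solve for them by least squares from observed
# input-level / output pairs. Simplified from the segmented procedure in
# the text; all data below are synthetic.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 4000)                     # normalized analog input levels
true_inl = 0.02 * np.sin(3 * np.pi * x)              # synthetic nonlinearity
measured = x + true_inl + rng.normal(0, 1e-3, x.size)  # "measured" transfer data

K = 12  # number of Fourier terms
# Design matrix: constant + linear term + sine/cosine harmonics over the range
cols = [np.ones_like(x), x]
for k in range(1, K + 1):
    cols += [np.sin(k * np.pi * x), np.cos(k * np.pi * x)]
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, measured, rcond=None)  # least-squares Fourier coefficients
fitted = A @ coef
print("RMS residual:", np.sqrt(np.mean((measured - fitted) ** 2)))
```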
Related Work
At present, various ATE equipment at home and abroad has gradually transformed from the previous functional subdivision to functional integration: previously, ATE was mainly divided into dedicated testers for digital-signal ICs, analog-signal ICs, and memory ICs. The current situation is that more and more test functions are concentrated in one or a few ATE systems. For example, the current mainstream SoC ATE includes digital-signal IC, analog-signal IC, and digital-analog mixed-signal IC testing, and even many other types of IC test functions [15].
Signal integrity issues are best addressed at an early stage of the design, so that the designed circuit has good signal integrity. Xie and Liu [16] advocated a method based on signal integrity analysis and simulation in the design of high-speed digital circuits, focusing on the analysis and simulation of sensitive signals to find and solve signal integrity problems, so as to avoid signal integrity problems in the circuit. Many foreign circuit design companies design circuits according to this method and process, and this design method based on signal integrity analysis and simulation will gradually become more standardized.
Huan et al. [17] estimated the transfer function of the DUT based on the harmonic power combined with the Chebyshev polynomial rank and then solved for the static parameters of the ADC under test. Since only the dynamic parameters need to be tested, this kind of method can realize the test of the DUT with fewer sampling points under the condition of satisfying the Nyquist sampling theorem. But it is not difficult to understand that the accuracy of the measured dynamic parameters will directly determine the reliability of the estimation in this method. In order to improve the test accuracy of dynamic parameters, Yang et al. [18] introduced a method to calculate the spectral characteristics of the DUT by means of an interpolated DFT, tested the INL value of the DUT based on the fast Fourier transform method, and then used 16- to 20-bit ADCs as the experimental test subjects. On this basis, Coulby et al. [19] further used various window functions to process the sampled data to reduce or eliminate the problem of spectral leakage in actual testing. In addition, Cao et al. [20] also analyzed the influence of spurious components at non-harmonic frequencies on the estimation accuracy. However, because the dynamic parameter test is usually carried out under the condition of a high-frequency input signal, this method does not consider the monotonicity of the ADC under test. In addition, due to the inability to achieve accurate coherent sampling, spectral leakage and noise still exist at the harmonic points [21]. Moreover, the DUT transfer function described by the Chebyshev polynomial will not be able to reflect local nonlinear abrupt changes [22]. Therefore, the method of estimating static characteristics based on dynamic parameters is only suitable for fast testing occasions that do not require very high accuracy of static parameters [23].
Construction of a Low-Cost Test Model for High-Precision ADC Chips Based on Smart Sensor Networks

Intelligent Sensor Network Structure.

The digital-analog hybrid SoC chip test usually follows a submodule testing method for intelligent sensor networks. The circuit function of the digital part is tested first, and then the analog part is tested. This is because the test principle of the digital module is relatively simple, the requirements for test resources are low, and the test time is relatively short; the test of the analog module requires more complex test methods and test equipment, and the test time is relatively long. During the debugging process, the failure data is collected and processed, and the test items with higher failure probability are ranked first, which reduces the test time of failed chips and reduces the test cost.
In order to facilitate theoretical analysis, the transmission line model should be simplified as much as possible, so that the electrical characteristics of the medium remain unchanged and the cross-sectional area of the transmission line maintains a fixed value; that is, the transmission line is uniform. Transmission lines have the distributed-parameter electrical properties of capacitance, inductance, resistance, and conductance; these electrical properties become more pronounced when high-frequency signals are transmitted.
During the transmission of digital signals, signal reflection will occur due to impedance mismatch. The reflection coefficient is used to characterize the amount of reflection. From the expression of the reflection coefficient, it can be seen that the terminal load ZL of the transmission line and the characteristic impedance Z0 together determine the magnitude of the reflection coefficient. In high-speed digital circuits, signal reflection will cause SI problems such as overshoot, undershoot, signal delay, and ringing, and the mismatch of transmission line impedance is the most fundamental cause of signal reflection problems.
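A minimal sketch of the relation implied above, assuming the standard definition gamma = (ZL - Z0)/(ZL + Z0); the impedance values are illustrative:

```python
# Reflection coefficient at the load end of a transmission line.
def reflection_coefficient(z_load, z0=50.0):
    return (z_load - z0) / (z_load + z0)

for zl in (50.0, 75.0, 1e9, 0.001):   # matched, mismatched, ~open, ~short
    g = reflection_coefficient(zl)
    print(f"ZL = {zl:>12}: gamma = {g:+.3f}")
# A matched load gives gamma = 0 (no reflection); an open approaches +1, a short -1.
```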
The method of realizing a differential input is generally to convert single-ended signals into differential signals with transformers or differential op amps. Generally speaking, transformers have higher operating frequencies, while the linearity of differential op amps is generally better than that of transformers at low frequencies; therefore, transformers are often used for AC coupling, whereas op amps are not limited by coupling within their bandwidth.
SNR is usually defined as the ratio of signal power to noise power. For an ideal ADC, the noise mainly comes from quantization error. As mentioned above, different quantization methods have different corresponding minimum quantization errors. This article uses the rounding method for analysis. The rms value of quantization noise can be obtained by taking the square root of the mean of the squares of the quantization errors.
The signal-to-noise-and-distortion ratio (SINAD) is the ratio of the input signal power to the power of all output signal distortion (including noise and harmonic components, but excluding DC). As shown in Figure 1, it measures all transfer-function nonlinearities of the output signal plus all system noise. SINAD is a parameter that reflects the real performance of the ADC; it can usually be expressed as SINAD = 10 × log10(P_signal / (P_noise + P_distortion)) in dB.
The effective number of bits (ENOB) indicates, for an ADC with a nominal resolution of n, the resolution of the ideal ADC to which its performance is equivalent once interference from various noise and distortion is taken into account. ENOB is calculated from the measured signal-to-noise-and-distortion of the ADC device, converting the transmitted signal quality into an equivalent bit resolution: ENOB = (SINAD − 1.76 dB) / 6.02. In the circuit here, resistors R13 and R14 and capacitors C58-C60 form a differential two-way RC filter structure, where C60 = 20 pF is much larger than the input capacitance of the ADC. Considering that the differential op amp AD8138 needs positive and negative 3.3 V supplies, TI's switched-capacitor voltage converter chip LM2644 is selected for implementation.
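A brief numeric sketch of the quantization-noise, ideal-SNR, and ENOB relations given above (all values are illustrative):

```python
import math

# Ideal-ADC relations for a rounding quantizer: the RMS quantization noise is
# q/sqrt(12) for step size q, giving SNR = 6.02*N + 1.76 dB for a full-scale
# sine; ENOB inverts that relation using a measured SINAD.
def ideal_snr_db(n_bits):
    return 6.02 * n_bits + 1.76

def enob(sinad_db):
    return (sinad_db - 1.76) / 6.02

q = 2.0 / 2**12                      # step size for a 12-bit ADC over a +/-1 V range
print(f"quantization noise rms: {q / math.sqrt(12):.3e} V")
print(f"ideal 12-bit SNR: {ideal_snr_db(12):.2f} dB")
print(f"ENOB at SINAD = 65.7 dB: {enob(65.7):.2f} bits")
```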
ADC Chip Design Indicators.
In high-speed digital systems, many transmission lines of different lengths are used for interconnection between the devices in the ADC chip design. The delay of the signal passing through the transmission line cannot be ignored relative to the transition time of the signal itself. Digital signals are transmitted at the speed of electromagnetic waves on the signal line. At this point, the signal line is a network with parameters such as impedance, capacitive reactance, and inductive reactance, which can only be approximated by a distributed-parameter system, that is, the transmission line model. Because the digital signal experiences a certain delay on the transmission line, it is also called a delay line.
When two signal lines form a closed loop, mutual inductive coupling occurs between them. Signal changes on the aggressor network will affect the victim network through mutual inductive coupling, and the crosstalk caused by this coupling is inductive crosstalk.
When analyzing integral nonlinearity, the ends of the conversion range may sometimes be shifted from the ideal values due to offset or gain errors. Therefore, there are two methods to determine this end-point connection in practical applications: the "best straight line method" and the "endpoint method." The best straight line method is defined by the straight line that mathematically best fits the actual conversion curve of the ADC; the endpoint method is defined by the line connecting the first and last points of the conversion curve.
The offset error is the deviation between the actual analog input value and the ideal analog input at the first output-code transition from zero to the least significant bit. The gain error is the difference between the actual analog input and the ideal analog input corresponding to the transition between the highest two codes. The static performance of the ADC can be described by the input-output transfer curve. Ideally, the input-output characteristic shows highly uniform steps over the entire dynamic range, but due to factors such as the manufacturing process and operating environment, the actual relationship between the level and the code word deviates from the ideal situation to a certain extent.
Although, according to the Nyquist sampling law, such sampling will cause spectral aliasing and distortion, if the input is a band-limited signal, effective sampling and quantization can still be performed by using undersampling techniques. Typically used for the quantization of narrowband signals, new RF and IF sampling architectures can be designed using undersampling techniques. Since all comparator branches are triggered by the same clock pulse, the fully parallel ADC needs only one clock cycle for each data conversion, so it has the fastest speed.
Signal Source Design of Sensor Network.
In the high-speed acquisition board of the sensor network signal source, the analog-to-digital conversion chip is an important factor in determining whether the indicators of the entire board meet the requirements. In order to obtain higher accuracy and more stable performance, this paper chooses a high-speed chip that is widely used in capture boards and is produced by E2V; the model is EV10AQ190. This chip adopts time-interleaved sampling technology and achieves a high ADC sampling rate based on the phase-shifted operation of multiple analog-to-digital conversion cores. The input signal used to verify SEIR and the proposed method is generated by the superposition of a 10-bit Maxim 5183 DAC and a DC bias circuit, while the input signal used for the traditional histogram test method is generated by a 16-bit Agilent E33522a. In addition, an additional channel of the instrument is used to generate the DUT's clock signal. An Agilent 16702B logic analyzer is used to generate the control signals for the DUT and the input signal for the DAC. The reference signal is a built-in 1 V voltage, and an external reference signal interface is reserved. The total power supply of the ADC under test is provided by an Agilent N6705 and, after passing through the combined structure of different LDOs and ferrite beads on the test board, supplies power to the corresponding modules, respectively. For the digital output of the ADC to be tested, this paper chooses the NI PXIe 6556 high-speed digital acquisition card listed in Table 1 for collection. In addition, based on the method proposed in this paper, among the two groups of data in the last window function, the peak-to-peak value of one group of data is equal to the input range of the transfer function corresponding to the window function, while the other group has a superimposed DC offset. Therefore, an all-"1" output code will appear due to partial clipping distortion. At this time, the INL value at
the code positions corresponding to the all-"1" codes in the window function cannot be solved due to the lack of real data. The number of code values for which an exact solution cannot be obtained is equal to the number of all-"1" codes. To reduce this effect, the offset should be as small as possible. In this paper, the offset between the two sets of data in the same segment is selected to be 10 LSB.
Compared with the histogram method, the main difficulty of the proposed method lies in the accurate identification and removal of low-precision signal-source errors. In the actual test process, in addition to the factors described above, the accuracy of the selected signal source will directly affect the test accuracy. For example, when the resolution of the selected signal source is more than 3 bits higher than that of the ADC under test, the test error will not be lower than that of the histogram method. As the accuracy of the signal source in Figure 2 decreases, the test error of the proposed method will also change.
In the middle of the test, by stopping the CDR, the frequency register and the phase register are manually operated so that the comparison edge sits at the positions of points 1, 2, 3, 4, and 5. At the same time, the width of the eye diagram is tested by adding a bias voltage. Under the condition that the performance indicators of the instruments do not meet the requirements of the chip test, various solutions including sampling tests and multichannel multilevel tests are proposed, and the effectiveness of these methods is verified in actual testing. The actual test result of the 2 GSps 6-bit ultrahigh-speed ADC shows that the ADC's maximum conversion speed can reach 2.2 GSps, the minimum quantization accuracy is 10 nV, the maximum effective number of bits can reach 5.7 bits at the highest conversion speed, and the total power consumption of the circuit is 310 mW. The whole test head is divided into 8 groups; each group has a corresponding clock board, a control board, 4 DC/DC conversion boards for powering the boards, a power supply board for the chip, and a maximum of 8 digital channel boards, analog modules, or RF modules. Above the test head is the test chip interface (DUT interface); the chip or wafer is interfaced with the tester through the test carrier board. The former is the difference between the time midpoint of the positive signal and the time midpoint of the negative signal. The latter is to select two single-ended signals that are not in the same group and measure the time deviation between the time midpoints of the two signals. All tests should include both low-to-high and high-to-low transitions.
ADC Chip Static Parameters.
The construction goals of the acquisition board are as follows: under the control of the FPGA, the DDR3 can stably and effectively cache the static-parameter data sampled by the high-speed ADC, read it out in real time, and send the data out through optical fiber or the VPX backplane. High-speed clock generation module: mainly composed of a high-frequency clock chip ADF4360, a 10 MHz active crystal oscillator, a clock buffer chip CDCVF2310, and a set of SMA interfaces. Main control module: mainly composed of a Xilinx V6-series FPGA, matched with two XCF32P configuration chips and a JTAG interface. Computing module: mainly composed of a TS201, matched with a FLASH chip for chip configuration and the corresponding JTAG interface. Cache module: mainly composed of two DDR3 chips, which can realize ping-pong operation; the chips are produced by Samsung, model K4B2G1646C. Clock module: mainly composed of three active crystal oscillators of 50 MHz, 125 MHz, and 200 MHz and three corresponding clock buffer chips CDCVF2310.
There are four analog-to-digital conversion cores inside the chip, marked A, B, C, and D. As shown in Figure 3, the four analog-to-digital conversion cores can cooperate with each other so that the chip works in four-channel, two-channel, or single-channel mode. All four ADC cores are controlled by the same external input clock signal and the same set of SPI buses. The chip needs to receive a pair of external differential clocks; the maximum clock frequency can reach 2.5 GHz, and the clock should be a sinusoidal signal with a peak-to-peak value of 500 mV and low jitter. There is a clock module inside the chip, which divides the external 2.5 GHz clock signal by two to generate a 1.25 GHz internal sampling clock. Different working modes handle the 1.25 GHz sampling clock differently. The best straight line method is defined by the straight line that mathematically best fits the actual conversion curve of the ADC; the endpoint method is defined by the connecting line between the first point and the last point of the conversion curve. Comparing the two methods, the best straight line method contains information about offset and gain errors, so the "best straight line method" is usually the first choice.
In order to implement the test shown in Figure 4, a ramp or sinusoidal signal with a peak-to-peak value slightly larger than the full-scale input range of the DUT and a very low frequency can be selected as the input to the ADC. A square wave is used as the clock signal of the ADC, ensuring that each digital output code value of the ADC is sampled at least 10 times. A logic analyzer or similar instrument is used to collect the corresponding output codes and transmit the data to the PC over the LAN port, and finally the data are processed based on the histogram principle.
Due to the existence of various noise and nonlinear components, the number of occurrences of each code value differs. Since the peak-to-peak value of the input signal is slightly larger than the full-scale input range of the ADC under test, the maximum and minimum code values obtained by the statistics cannot truly reflect the linearity of the ADC at the lowest and highest codes. Therefore, these two points should not be considered in the statistical analysis of the data.
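The histogram (code-density) processing described above can be sketched as follows; a simulated ramp and an ideal quantizer stand in for real captured data:

```python
import numpy as np

# Minimal histogram test: drive the ADC with a ramp slightly exceeding full
# scale, discard the first and last code bins, compare each code's hit count
# with the ideal count (DNL), and accumulate DNL into INL.
N = 8                                            # resolution of the simulated DUT
ramp = np.linspace(-1.02, 1.02, 200_000)         # peak-to-peak slightly over full scale
codes = np.clip(np.floor((ramp + 1) / 2 * 2**N), 0, 2**N - 1).astype(int)
# (Substitute real captured output codes for `codes` in an actual test.)

hist = np.bincount(codes, minlength=2**N).astype(float)
inner = hist[1:-1]                               # drop the lowest/highest code bins
dnl = inner / inner.mean() - 1.0                 # DNL in LSB
inl = np.cumsum(dnl)                             # endpoint-based INL in LSB
print(f"max |DNL| = {np.abs(dnl).max():.3f} LSB, max |INL| = {np.abs(inl).max():.3f} LSB")
```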
Application and Analysis of the Low-Cost Test Model of High-Precision ADC Chips Based on Intelligent Sensor Networks

Intelligent Sensor Network Data Preprocessing.

In the experiment, an intelligent sensor network data test platform was built, the high-speed digital circuit with signal integrity problems and the improved circuit were tested, respectively, and the improvement of the signal integrity problems was verified. Comparison of the simulation waveforms shows that the high-speed digital circuit with signal integrity problems cannot work normally, while all parts of the improved circuit work normally, which ensures the correctness of the circuit design. First, import the IBIS (Input/Output Buffer Information Specification) simulation model, perform signal integrity analysis and interactive simulation of individual key signals, find signal integrity problems such as crosstalk and reflection of key signals, and modify the design of the corresponding parts. Second, use fast global simulation to verify the signal integrity design of the high-speed acquisition board, and modify the PCB design until all hidden dangers that may cause signal integrity problems are eliminated.
In the actual test process, the method in Table 2 selects two low-precision DACs to generate a sine signal and a DC offset, respectively. After the two are superimposed, a scaling circuit is used to make the scaled signal equal to the full-scale input range of the specific window function, and the scaled signal is used as the input to the ADC under test.
On this basis, this paper selects a set of Fourier series to establish the transfer function expression in each segment and uses the fixed relationship between the analog offsets corresponding to the two groups of sampled data in the same segment to establish the system of equations for the input signal. Finally, the proposed method uses the least squares method to solve for the optimal coefficients and the transfer function expression, and superimposes the same DC offset on the first two sets of signals so that the signals fall completely into other window functions; the same method is then used to solve the others. A complete expression of the DUT transfer function can be obtained by successively stitching together the transfer function curves of adjacent windows.

Monotonicity refers to the variation law of the output digital code of the ADC as the input signal increases, as shown in Figure 5. For a monotonic ADC, when the input analog quantity increases continuously, the later output code value should be greater than or equal to the earlier one. When the input signal rate is high, the ADC may fail to remain monotonic due to the limitation of the comparator conversion time. In the actual testing process, the probability that a code value width is exactly 1/10 LSB is very small, so a missing code is usually defined as a code value whose width is less than 1/10 LSB.
Low-Cost Test Simulation of High-Precision ADC Chips.
A simulation testbench for the high-precision ADC chip test module can be written in Verilog: the value of D0-D11 is swept from hexadecimal 0000 in binary, the input signal Vi starts from 0 V, and the reference voltage Vr is 2.5 V. Under the control of the clock signal, Vout increases from 0 V to 2.5 V in steps of 610 μV. The test conditions are a power supply voltage Vcc = 3.3 V and a temperature T = 27°C, and a digital-analog hybrid simulation is carried out. The digital sequence during the simulation is realized in Verilog, and the simulation waveform is shown in the figure. The comparator adopts a switched-capacitor structure with offset cancellation; its advantage is that a differential signal can be compared using a single-ended circuit, and the DC offset of the comparator can be automatically zeroed. In the latch stage, the amplifier is turned off and the latch works, so the instantaneous output of the preamplifier is amplified by the latch into a logical "1" or "0." The primitives in Figure 6 are used to complete the double data rate (DDR) input function, and the corresponding primitives can be instantiated when used. The IDDR primitive signal pins are shown there: Q1 and Q2 are the data outputs of the IDDR register, C is the clock input port, and CE is the clock enable port; this enable port affects the data loading of the DDR flip-flop.
The sample-and-hold circuit controlled by the high-speed clock is the core part of the whole circuit, and its speed and accuracy directly determine the speed and accuracy of the entire conversion circuit; therefore, in some designs the sample-and-hold circuit is individually designed using a more advanced process. When CE is set to logic 0, clock changes are ignored and no new data is loaded into the DDR flip-flops; CE must be logic 1 to load new data into the DDR flip-flop. D is the IDDR register input from the IOB, R is the synchronous/asynchronous reset pin, and S is the synchronous/asynchronous set pin.
In the actual working process, the logic control unit in Figure 7 first generates a logic digital code whose highest bit is 1 and whose other bits are 0 and applies it to the DAC input. The output of this DAC generates an analog reference signal of half of full scale and serves as the reference input of the comparator. When an analog signal is applied to the input of the ADC, it is compared with this half-scale reference by the comparator. If the comparator output is a high level, the MSB is set to 1; otherwise, the MSB is 0. The output of the corresponding DAC is then compared with the analog input signal at the comparator input to determine the actual logic value of the second most significant bit, which is stored in the output register. The above steps are repeated until all bits of the ADC are determined.
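The successive-approximation loop described above can be expressed as a short behavioral sketch; the parameters are illustrative and do not model any specific chip:

```python
# Behavioral sketch of a SAR conversion: start with the MSB set, compare the
# internal DAC output against the analog input, and keep or clear each bit
# from MSB to LSB.
def sar_convert(vin, vref=2.5, n_bits=12):
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)                 # tentatively set this bit
        vdac = vref * trial / (1 << n_bits)       # internal DAC output (first trial = vref/2)
        if vin >= vdac:                           # comparator decision
            code = trial                          # keep the bit
    return code

for v in (0.0, 1.25, 2.4997):
    print(f"Vin = {v:.4f} V -> code = {sar_convert(v):#06x}")
```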
Example Application and Analysis.

Virtex-6 series FPGA devices also include input serial-to-parallel conversion logic resources (ISERDES). The ISERDES unit is used to implement serial-to-parallel conversion, including clock and logic control functions, to assist high-speed source-synchronous applications. The functions of the ISERDES unit include the deserializer, serializer, and Bitslip subunits. High-speed data transmission can be completed through the ISERDES module even when the external transmission rate does not match the operating frequency inside the FPGA. The high-speed serial sampling-result data is input to the SelectIO module in the FPGA at the LVDS-DDR (differential-level, double-edge latched data transmission) level.
The data sending end is the FPGA, and the receiving end is the DDR3 data interface shown in Figure 8; the IBIS simulation model of the driving-source FPGA is Virtex-6, obtained from the Xilinx technical support website. Since the ADC conversion system is not completely linear, distortion occurs in the digital spectrum. Harmonic distortion is numerically equal to the ratio of the signal to the rms value of each harmonic component. Harmonic distortion only considers the second to tenth harmonics; harmonics above the tenth are usually assumed to be negligible.
In addition, based on the basic definition of the INL test, the analog input voltage corresponding to the transition of each output code value of the ADC is first analyzed, as shown in Figure 9. This converts the ADC's many-to-one input-output relationship into a one-to-one transfer function.
For the overall circuit test of the 6-bit 2 GSps ADC in this design, the TEKTRON-11LA7012 logic analyzer is used to directly sample the output of the chip. This logic analyzer is among the best currently available: it has an analog bandwidth of 3 GHz, the highest data rate its probes can reach is 1.4 Gbps, and it is currently the only logic analyzer that can meet the test requirements of this ADC. Although shorter transfer-function segments in each window improve the fitting accuracy, the sampling overhead caused by repeated sampling keeps increasing; when a critical point is reached, the test efficiency becomes low.
When the number of window functions in Figure 10 is small, using window functions to process the data in segments can significantly improve the test accuracy, and the number of required sampling points is small. When the number of window functions is greater than 100, the error of the test result no longer decreases significantly, but the number of required sampling points increases sharply, and the test time is extended accordingly. After adding a BPF with a center frequency of 1 MHz, although the noise floor of the signal is reduced, the second and third harmonics become larger. The main reason is that the designed filter attenuates only a little more than 60 dB at 2 MHz, and when the output of the signal source is a 1 MHz sine at 0 dBm, its second harmonic is close to -71 dBm; at this level, the BPF is not suitable as the impedance termination at the ADC input. The real-time signal processor requires the high-speed ADC sampling module to be as close as possible to the antenna end, and the digital signal converted by the ADC from the RF signal is sent to the digital signal processing part, so as to obtain as much useful information as possible. The circuit design also optimizes the coupling capacitance between the DAC stages according to the parasitic capacitance value, which improves the accuracy of the ADC. The comparator adopts a switched-capacitor structure with offset self-cancellation and power-down capability, which improves the comparison accuracy.
Conclusion
This article describes in detail the design and implementation of a 12-bit, 125 kHz sampling-rate SAR ADC for a touch-screen controller chip, designed in a 0.35 μm CMOS process. In order to reduce power consumption, an ADC circuit that adopts two working modes is designed: when there is a touch event, the ADC wakes up without delay; otherwise, the device remains in a dormant state, and it is ensured that the ADC and the internal reference voltage source are turned off when no analog-to-digital conversion is in progress. The test results show that under 60 MHz sampling and 20 MHz analog input, the SFDR is as high as 75 dB, and the effective number of bits is 10.6 bits. The paper also discusses the dynamic performance at different amplitudes and with and without filters on the analog input. Finally, combined with the actual test situation, the factors that lead to the unsatisfactory performance of the AD9238 under low-frequency input are verified and analyzed. According to the parasitic capacitance value, the circuit design optimizes the inter-stage coupling capacitance of the DAC, which improves the accuracy of the ADC. The function test of the high-speed interface and the DC parameter test are realized in a loopback manner, so as to greatly reduce the test cost of the integrated chip. The research in this work shows that the test scheme is an economical and effective test scheme for high-speed interface circuit chips and has practical reference value.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"Computer Science"
] |
Environmental Benefits of Ultra-Low Emission (ULE) Technology Applied in China
Seven scenarios were designed to study the national environmental benefits of ULE in coal-fired power plants (CPPs), ULE in industrial coal burning (ICB) and NH3 emission reduction by using the GEOS-Chem model. The results showed that, although the CPPs have achieved the ULE transformation target, the PM2.5 concentration across the country decreased by only 4.8% (1.4 µg/m3). Due to the complex non-linear chemical competition between nitrate and sulfate, the national average concentration of nitrate increased by 1.5% (0.1 µg/m3), which reduced the environmental benefits of the power-plant emission reduction. If the ULE technology is applied to the ICB to further reduce NOx and SO2, the PM2.5 concentration can be reduced by 10.1% (2.9 µg/m3), although the concentration of nitrate will increase by 2.7% (0.2 µg/m3). Based on the CPPs-ULE scenario, reducing NH3 emissions by 30% and 50% can significantly reduce the concentrations of ammonium and nitrate, so that the PM2.5 concentration decreases by 11.5% (3.3 µg/m3) and 16.5% (4.7 µg/m3), respectively. Similarly, based on the CPPs-ICB-ULE scenario, reducing NH3 emissions by 30% and 50% lowers the PM2.5 concentration by 15.6% (4.4 µg/m3) and 20.3% (5.8 µg/m3). When the CPPs and ICB use the ULE technology to reduce NOx and SO2, the decline in the PM2.5 concentration is achieved mainly by reducing the concentrations of ammonium and sulfate, whereas NH3 reduction lowers the PM2.5 concentration mainly by reducing the concentrations of ammonium and nitrate. In order to better reduce the PM2.5 concentration, NOx, SO2 and NH3 emission-reduction control measures should be comprehensively considered in the different regions of China. By comprehensively considering the economic costs and environmental benefits of ULE in ICB and of NH3 emission reduction, an optimal haze control scheme can be determined.
Introduction
Haze pollution is a complex phenomenon, resulting from primary emissions from multiple sources interacting with meteorology and atmospheric photochemistry. Over the past decade, China has experienced severe haze pollution, marked by fine particulate matter (PM), especially in the Beijing-Tianjin-Hebei (BTH) regions, according to the air quality status reports released by China's Ministry of Environmental Protection [1]. Fine particulate matter smaller than 2.5 µm (PM2.5) is particularly troublesome and has been implicated in a range of adverse health outcomes [2,3]. The annual mean PM2.5 concentration in the BTH

The environmental benefits of using GEOS-Chem to quantify the application of ULE in CPPs and ICB are based on the data sources shown in Table 1. Firstly, an emission inventory of the air pollutants NOx, SO2 and PM after ULE technology is applied in CPPs and ICB in China was established, based on reasonable assumptions. Then, the environmental benefits brought about by the application of ULE technology in CPPs and ICB in China were quantified by using the GEOS-Chem model. Finally, some policy suggestions for achieving greater environmental benefits in the future were put forward.

Table 1. Data sources.
ULE emission factors: emission factors of air pollutants for CPPs after the ULE retrofit. Source: Liu et al. [20].
Coal consumption data: raw coal consumption in power plants and industries in different provinces in 2015 (a detailed classification of the power and industrial sectors is listed in Table S1). Source: China Energy Statistics Yearbook [21].
Multi-Resolution Emission Inventory for China (MEIC): the MEIC data on the spatial distribution and seasonal variation of pollutants, as well as the total NOx, SO2 and PM emissions from power plants and industrial sectors, were used. Source: MEIC [7,22].
Greenhouse Gas and Air Pollution Interactions and Synergies (GAINS): the GAINS inventory provides provincial-level emission data for 11 detailed sectors (details are listed in Table S2); the GAINS data on pollutant emissions from different fuel types in the power sector, and on industrial combustion and industrial processes and production in the industrial sector, were used. Source: GAINS [23,24].

In order to establish the emission inventory of air pollutants after using ULE technology in CPPs, the emission factors of air pollutants after the ULE technology is used in CPPs were kept consistent with Liu et al.'s research results [20], which updated the emission factors of ULE technology applied in CPPs in China, and the coal consumption data of the different provinces were obtained from the national energy statistics yearbook [21]. Only NOx, SO2 and PM pollutant emissions were considered when estimating the emissions after the ULE technology was applied to the CPPs in China, given the limited data availability for the emission factors of CPPs that install ULE facilities.
The application of GEOS-Chem to quantify the environmental benefits in the different scenarios is based on MEIC emission data and the corresponding spatial distribution. In order to quantify the environmental benefits brought about by the application of ULE technology in CPPs and ICB, the total emissions from the power and industry sectors provided by MEIC were used. However, the total emissions from these sectors include not only coal combustion but also other sources (oil, gas and others), for which MEIC does not provide detailed data. Due to this limitation, the proportions of coal and non-coal combustion obtained from GAINS were used; that is, we assumed that the split between coal-combustion and non-coal-combustion emissions in the power and industry sectors in MEIC is the same as in GAINS.
Simulation Scenario Design

Emissions Calculation
The emission of each species (NO x , SO 2 and PM) from CPPs after applying the ULE technology can be calculated as

E_ULE = EF_ULE × W_S.

Here, EF_ULE is the emission factor after the ULE technology is applied in the CPP and W_S is the standard coal consumption in the CPP. Since raw coal consumption is recorded in the yearbook, it was converted to standard coal as

W_S = (Q_R / Q_S) × W_R.

Here, Q_S is the lower heating value of standard coal (29,307.6 kJ/kg), Q_R is the lower heating value of raw coal (defined by the type of coal), and W_R is the raw coal consumption by mass.
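A minimal sketch of this calculation in Python follows; the province-level coal data, the heating value of the raw coal and the ULE emission factor are all hypothetical placeholders, only the conversion structure comes from the equations above.

```python
# Sketch of the post-ULE emissions calculation; all input numbers are hypothetical.
Q_S = 29_307.6  # lower heating value of standard coal (kJ/kg)

def standard_coal(w_raw_t: float, q_raw_kj_per_kg: float) -> float:
    """Convert raw coal consumption (tonnes) to standard coal: W_S = (Q_R/Q_S) * W_R."""
    return w_raw_t * q_raw_kj_per_kg / Q_S

def emission_after_ule(ef_ule_g_per_t: float, w_raw_t: float, q_raw_kj_per_kg: float) -> float:
    """Pollutant emission (grams) after ULE: E_ULE = EF_ULE * W_S."""
    return ef_ule_g_per_t * standard_coal(w_raw_t, q_raw_kj_per_kg)

# Example: a province burning 10 Mt of raw coal with Q_R = 20,908 kJ/kg and a
# hypothetical ULE NOx emission factor of 180 g per tonne of standard coal.
e_nox = emission_after_ule(180.0, 10e6, 20_908.0)
print(f"NOx after ULE: {e_nox / 1e12:.4f} Tg")  # 1 Tg = 1e12 g
```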
The proportion of CPP emissions in total power-sector emissions was taken from GAINS and assumed to hold for MEIC as well:

P_CM = P_CG = E_CG / E_G.

Here, P_CM and P_CG are the proportions of pollutants emitted by CPPs in the total power-sector emissions in MEIC and GAINS, respectively; E_CG is the amount of pollutants emitted by CPPs in GAINS; and E_G is the total pollutant emissions from the power sector in GAINS.
Thus, the pollutant emissions of the power sector after ULE technology is applied in CPPs can be determined as

E_power = (1 − P_CM) × E_M + E_ULE,

where E_M is the total pollutant emission of the power sector in MEIC. Similarly, we assumed that ULE technology can be used in ICB and achieve the same emission level as in CPPs; the emissions after the application of ULE technology in ICB were then obtained with an analogous calculation. It is also assumed that the proportion of ICB emissions in the industry sector in MEIC is the same as in GAINS, which provides detailed pollutant emissions by category.
Finally, the new emission inventory after the application of ULE technology in CPPs and ICB is obtained. The difference between the new inventory and the MEIC inventory gives the emission reduction of NO x , SO 2 and PM achieved by applying ULE technology in CPPs and ICB. The emission reduction ratio of each pollutant can be obtained as

R = (E_MEIC − E_ULE) / E_MEIC.

Here, R is the emission reduction ratio, E_MEIC is the total pollutant emission in the MEIC inventory, and E_ULE is the total pollutant emission of the new emission inventory.
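The following sketch ties the two steps together; the numbers are hypothetical but chosen to echo the sector-wide power reduction of roughly 83% reported later in the Pollutant Reduction Potential section.

```python
# Sketch of the sector-level rescaling; MEIC/GAINS inputs are hypothetical.
# The CPP share in MEIC is assumed equal to its GAINS share (P_CM = P_CG = E_CG/E_G).
def power_sector_after_ule(e_meic_total: float, e_cg: float, e_g: float,
                           e_ule: float) -> float:
    """New power-sector emission: non-CPP part of MEIC plus CPP emission after ULE."""
    p_cm = e_cg / e_g                      # CPP share, taken from GAINS
    return (1.0 - p_cm) * e_meic_total + e_ule

def reduction_ratio(e_meic_total: float, e_new_total: float) -> float:
    """Emission reduction ratio R relative to the MEIC inventory."""
    return (e_meic_total - e_new_total) / e_meic_total

# Hypothetical example: the power sector emits 5.1 Tg NOx in MEIC, CPPs account
# for 4.9 of 5.1 Tg in GAINS, and ULE brings CPP emissions down to 0.64 Tg.
e_new = power_sector_after_ule(5.1, 4.9, 5.1, 0.64)
print(f"Power sector after ULE: {e_new:.2f} Tg, "
      f"reduction: {reduction_ratio(5.1, e_new):.1%}")   # about 83%
```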
Scenario Design
Seven simulation scenarios, listed in Table 2, were designed to estimate the environmental benefits of the application of ULE technology in CPPs and ICB with the GEOS-Chem model. All scenarios used the spatial distribution and seasonal variation of the MEIC emission inventory, and the same meteorological fields as the STD scenario were assumed, to minimize the impact of weather effects. The simulation period was 16 December 2014 to 31 January 2015, with December 2014 used as the spin-up period to eliminate the influence of initial conditions. The simulations predicted the daily concentrations of PM 2.5 , sulfate, nitrate and ammonium, and the average values of these components were computed from 1 January 2015 to 31 January 2015. January was selected because it is the most polluted month and can best reflect the environmental benefits brought about by ULE emission reduction. The standard scenario (STD) describes the situation before implementing ULE facilities in January 2015. The remaining scenarios cover the cases of interest. The CPPs-ULE scenario assumes that all of China's CPPs have been retrofitted for ULE. The CPPs-ICB-ULE scenario assumes that the ICB process additionally reaches the same emission level as the CPPs. The other four scenarios further assess the environmental benefits of national NH 3 emission reduction on top of the CPPs-ULE and CPPs-ICB-ULE scenarios; NH 3 emission reduction here refers to the total emission reduction from industry, residents, transportation and agriculture. The ammonia emission model (PKU-NH 3 ) used in this study has been shown to accurately describe the spatial distribution and seasonal characteristics of NH 3 emissions in China [18,25] and to simulate the changes in secondary inorganic aerosols in atmospheric chemical transport models [26]. According to PKU-NH 3 estimates, NH 3 from the excessive use of nitrogen fertilizer and from extensive animal husbandry accounts for about 80% of China's total NH 3 emissions. By targeting the peak fertilizer emissions during the growing season (spring and summer) with reasonable fertilization practices (i.e., avoiding excessive fertilization and applying fertilizer at depth), optimized agricultural management can achieve an NH 3 emission reduction of 50% [18]. In the field of livestock manure management, optimized practices such as reducing the surface area and exposure time of waste in animal houses, covered storage, low-protein feeding and deep burial of feces in cultivated land can significantly reduce NH 3 emissions [27,28]. If agriculture and animal husbandry take concerted measures to control NH 3 emissions, China's NH 3 emission reduction can exceed 50%; therefore, this study selected NH 3 emission reductions of 30% and 50% to explore the potential environmental benefits.
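For concreteness, the scenario set can be summarized as a small configuration table; this is a sketch of Table 2, and the flag and key names are hypothetical, only the scenario names and NH 3 scaling factors come from the text above.

```python
# Sketch of the seven simulation scenarios as emission-inventory switches.
SCENARIOS = {
    "STD":                  {"cpp_ule": False, "icb_ule": False, "nh3_scale": 1.0},
    "CPPs-ULE":             {"cpp_ule": True,  "icb_ule": False, "nh3_scale": 1.0},
    "CPPs-ICB-ULE":         {"cpp_ule": True,  "icb_ule": True,  "nh3_scale": 1.0},
    "CPPs-ULE-NH3-30%":     {"cpp_ule": True,  "icb_ule": False, "nh3_scale": 0.7},
    "CPPs-ULE-NH3-50%":     {"cpp_ule": True,  "icb_ule": False, "nh3_scale": 0.5},
    "CPPs-ICB-ULE-NH3-30%": {"cpp_ule": True,  "icb_ule": True,  "nh3_scale": 0.7},
    "CPPs-ICB-ULE-NH3-50%": {"cpp_ule": True,  "icb_ule": True,  "nh3_scale": 0.5},
}
```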
Model Description and Sensitivity Tests
To assess the impacts of the emission reductions brought about by applying ULE facilities in CPPs and ICB on the ambient PM 2.5 concentration, several simulations with the GEOS-Chem (version 11-01) chemical transport model were conducted. To obtain more accurate results, the GEOS-Chem nested model at a high resolution of 0.25° latitude × 0.3125° longitude was used. Boundary conditions needed by the nested model were derived every 3 h from the global atmospheric chemical model at a resolution of 2° latitude × 2.5° longitude. Both the nested and global models are driven by the GEOS-FP meteorological fields from the National Aeronautics and Space Administration (NASA) Global Modeling and Assimilation Office. Both models have 47 vertical layers, and the lowest 10 layers are each roughly 130 m thick. Model convection is parameterized by the relaxed Arakawa-Schubert scheme [29], and vertical mixing in the planetary boundary layer employs the non-local scheme implemented by Lin and McElroy (2010) [30].
All simulations were run with the full chemistry mechanism (Ox-NOx-CO-VOC-HOx), and aerosols, including secondary inorganic aerosols (SIOA: sulfate, nitrate and ammonium), black carbon (BC), primary organic carbon (POA), dust and sea salt, were calculated online. Modifications of the full chemistry mechanism followed Ni et al. (2018) [31]. The ammonium-sulfate-nitrate aerosol system, which uses the ISORROPIA II thermodynamic equilibrium model [32] to simulate SIOA, was coupled to the full chemistry mechanism and implemented in GEOS-Chem [33]. Heterogeneous aerosol chemistry was parameterized through uptake coefficients [34]. Mineral dust aerosols were calculated online by the DEAD scheme [35], and the sea salt aerosol parameterization followed Jaegle et al. (2011) [36]. In addition to natural dust, anthropogenic sources of dust have been shown to play an important role in the total PM 2.5 concentration over China; thus, the contribution of anthropogenic dust emissions was explicitly considered by adding the gridded monthly PM emission inventory from MEIC into GEOS-Chem.
Compared to observations, the standard GEOS-Chem tends to underestimate the ambient concentration of sulfate due to the simplified sulfate production mechanism implemented in the model [37,38]. The standard model simulates sulfate production through the gas-phase oxidation of SO 2 by OH [39] and the aqueous-phase oxidation of S(IV) (including dissolved SO 2 , HSO 3 − and SO 3 2− ) by hydrogen peroxide (H 2 O 2 ) and ozone (O 3 ) in cloud droplets. To represent sulfate production more comprehensively, the heterogeneous uptake of SO 2 on the surface of deliquesced aerosols under high relative humidity [40] and the aqueous-phase oxidation of S(IV) by dissolved nitrogen dioxide (NO 2 ) in cloud droplets and in aerosols at high relative humidity [41] were included. In addition, primary sulfate emissions are also important, accounting for 3.1% of total Chinese anthropogenic sulfur emissions; since this share could be underestimated over China, the contribution was increased to 4.5%. These updates are well documented and bring the model predictions into closer agreement with measurements, especially for the winter months.
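To make the bookkeeping of these production channels concrete, here is a schematic sketch; it is not a kinetic model of GEOS-Chem, and every rate coefficient is a hypothetical placeholder. Only the structure, four additive channels with two of them gated by cloud water or high relative humidity, reflects the mechanisms described above.

```python
# Schematic sulfate production bookkeeping; all rate coefficients are
# hypothetical placeholders, not GEOS-Chem values.
def sulfate_production_rate(so2, oh, h2o2, o3, no2, rh, in_cloud,
                            k_oh=1e-12, k_h2o2=2e-6, k_o3=5e-8,
                            k_het=3e-7, k_no2=1e-7):
    gas_phase = k_oh * so2 * oh                          # SO2 + OH (gas phase)
    aqueous = (k_h2o2 * h2o2 + k_o3 * o3) * so2 if in_cloud else 0.0
    heterogeneous = k_het * so2 if rh > 0.8 else 0.0     # deliquesced aerosols
    no2_channel = k_no2 * no2 * so2 if (in_cloud or rh > 0.8) else 0.0
    return gas_phase + aqueous + heterogeneous + no2_channel
```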
Global anthropogenic emissions were taken from EDGAR v4.2, with regional emissions overwritten by regional inventories: the MIX Asian emission inventory over Asia [42], EMEP over Europe and NEI2011 over the US. Anthropogenic emissions over China in the standard scenario were taken from MEIC, except for NH 3 emissions, which were from PKU-NH 3 [18]. Gridded monthly anthropogenic emissions for the other scenarios were customized as summarized in Table 2. Soil NO x emissions were parameterized following Hudman et al. (2012) [43]. Biomass burning emissions were taken from the Global Fire Emission Database version 4 (GFED v4) [44]. Biogenic non-methane volatile organic compound (NMVOC) emissions were calculated online using the Model of Emissions of Gases and Aerosols from Nature version 2.1 (MEGAN v2.1) [45]. The model simulation results were verified against hourly ground-level PM 2.5 observations; the evaluation concluded that the improved model reproduces the concentration and spatial distribution of PM 2.5 in China well, indicating that it can be used for the subsequent simulation studies [46].
Pollutant Reduction Potential
Before analyzing the environmental benefits of the application of ULE technology in CPPs and ICB with GEOS-Chem, the pollutant reduction potential of applying ULE in CPPs and ICB was analyzed. Based on MEIC, the total national NO x and SO 2 emissions in 2015 were 23.7 Tg and 18.2 Tg, respectively (Figure 1). Of these, the power sector emitted 5.1 Tg of NO x and 4.0 Tg of SO 2 , representing 21.5% and 21.9%, respectively, and the industrial sector emitted 9.7 Tg of NO x and 11.0 Tg of SO 2 , representing 40.9% and 60.4%, respectively. ULE in China's (CHN) CPPs can reduce the NO x and SO 2 of the CPPs themselves by 87.0% and 93.3%, and those of the whole power sector by 83.3% and 91.6%, corresponding to 17.9% and 20.0% of the total national NO x and SO 2 emissions, respectively. If ULE facilities are installed in both CPPs and ICB, the total NO x and SO 2 can be reduced by 34.2% (8.1 Tg) and 51.0% (9.3 Tg), respectively. Due to differences in the coal-based industrial structure of the various provinces and regions, the NO x and SO 2 emission reductions differ between the CPPs-ULE and CPPs-ICB-ULE scenarios. Nationwide promotion of ULE technology therefore brings region-specific environmental benefits through the reduction of NO x and SO 2 in areas such as Beijing-Tianjin-Hebei (BTH), the Yangtze River Delta (YRD), the Sichuan-Chongqing Economic Circle (SCC) and the Pearl River Delta (PRD).
Regional Sector Emission Reduction
The reduction ratios of NO x and SO 2 in the power and industrial sectors, and in total, for different regions under different scenarios are shown in Figure 2. ICB-ULE refers to the realization of ULE transformation in the ICB process; it was only used to calculate the pollutant reduction, and its environmental benefits are not quantified separately. As shown in Figure 2, under the CPPs-ULE scenario, the NO x and SO 2 emission reductions in the power sector in BTH, YRD, SCC and PRD reached 76.2%, 81.4%, 85.4%, 77.2% and 85.2%, 87.1%, 96.4%, 85.9%, respectively. It can be seen that CPP emissions account for the overwhelming majority of power-sector emissions, and ULE in CPPs greatly reduces the NO x and SO 2 emissions from power plants. The larger the reduction ratio of NO x and SO 2 in the power sector, the more urgent the ULE transformation and the more significant the environmental benefits. Under the ICB-ULE scenario, the NO x and SO 2 emission reductions of the industrial sectors in BTH, YRD, SCC and PRD reach 59.9%, 40.8%, 32.8%, 18.4% and 66.2%, 48.3%, 69.4%, 29.0%, respectively. The differences in these reduction ratios reflect differences in industrial structure: for BTH, the emission reduction ratio is as high as 60%, indicating that ICB emissions occupy a dominant position, while for PRD it is less than one third, indicating that ICB accounts for a relatively small share of the industrial structure. Under the CPPs-ICB-ULE scenario, the combined NO x and SO 2 emission reductions of the power and industrial sectors in BTH, YRD, SCC and PRD reach 64.3%, 55.0%, 42.6%, 42.1% and 68.6%, 58.3%, 73.5%, 45.9%, respectively.
Regional Emission Reduction
The reduction ratios of NO x and SO 2 in different regions under different scenarios are shown in Figure 3.
As shown in Figure 3, under the CPPs-ULE scenario, the NO x and SO 2 emission reductions of the BTH, YRD, SCC and PRD regions reached 10.5%, 17.4%, 9.0%, 17.8% and 8.8%, 20.8%, 13.7%, 23.6%, respectively, of the total NO x and SO 2 emissions in each region. It can be seen that ULE of CPPs reduced the total regional NO x and SO 2 emissions by around 10% in BTH and SCC, and by around 20% in YRD and PRD. Under the ICB-ULE scenario, the emission reductions of NO x and SO 2 from the industrial sectors of BTH, YRD, SCC and PRD reached 30.6%, 16.2%, 15.2%, 6.3% and 46.8%, 33.3%, 54.7%, 18.9%, respectively, of the total emissions in each region. Under the CPPs-ICB-ULE scenario, the emission reductions of NO x and SO 2 in BTH, YRD, SCC and PRD reached 41.1%, 33.6%, 24.2%, 24.0% and 55.6%, 54.1%, 68.4%, 42.5%, respectively, of the total emissions in each region.
For the BTH region, the NO x and SO 2 emission reduction ratios under ICB-ULE are 2.9 and 5.3 times those under CPPs-ULE, indicating that ICB-ULE in this region can significantly reduce pollutant emissions. For the YRD region, the corresponding ratios are 0.9 and 1.6 times those of CPPs-ULE; for the SCC region, 1.7 and 4.0 times; and for the PRD region, 0.4 and 0.8 times. This shows that the different emission reduction ratios of the various regions are determined by differences in the structure of the energy industry.
Benefits of Primary Pollutant Emission Reduction
The spatial distributions of the concentrations of the primary pollutants NO x and SO 2 under different scenarios, based on the GEOS-Chem simulations, are shown in Figure 4. Under the STD scenario, the average NO x concentration in January 2015 in the North China Plain and the Yangtze River Delta was as high as 42-60 ppbv. Over other densely populated regions of Eastern China, the NO x concentration was 6-24 ppbv, while in the central and western regions, where human activity is relatively sparse and NO x emissions are correspondingly low, the concentration was only 0.5-6 ppbv. The spatial distribution of SO 2 differed slightly from that of NO x : high SO 2 concentrations of 15-30 ppbv were found in Chongqing, Guizhou and Hunan provinces, reflecting the local energy structure. Under the CPPs-ULE scenario, the NO x and SO 2 reductions are mainly concentrated in the high-concentration regions, and CPPs-ICB-ULE reduces NO x and SO 2 more significantly than CPPs-ULE. Figure 5 shows the changes in the average concentrations of NO x and SO 2 in different regions under the different scenarios.
It can be seen from Figure 5 that, under the CPPs-ULE and CPPs-ICB-ULE scenarios, the concentration of NO x in the BTH region decreased from 24.9 ppbv to 20.7 ppbv and 13.5 ppbv (decreases of 17.1% and 45.9%), while the concentration of SO 2 decreased from 11.4 ppbv to 10.2 ppbv and 6.5 ppbv (decreases of 10.7% and 42.7%), compared with the STD scenario. Similarly, the concentration of NO x in the YRD region decreased from 19.9 ppbv to 14.9 ppbv and 11.9 ppbv (decreases of 25.2% and 40.1%), and the concentration of SO 2 from 7.7 ppbv to 6.0 ppbv and 3.8 ppbv (decreases of 22.0% and 51.0%). The concentration of NO x in the SCC region decreased from 4.6 ppbv to 3.8 ppbv and 3.3 ppbv, and the concentration of SO 2 from 8.9 ppbv to 7.3 ppbv and 4.8 ppbv (decreases of 18.6% and 45.9%). The concentration of NO x in the PRD region decreased from 6.8 ppbv to 5.3 ppbv and 4.6 ppbv (decreases of 23.0% and 33.1%), and the concentration of SO 2 from 4.0 ppbv to 3.1 ppbv and 2.0 ppbv (decreases of 23.0% and 49.4%). Over CHN as a whole, the concentration of NO x decreased from 5.0 ppbv to 3.8 ppbv and 3.0 ppbv (decreases of 24.7% and 39.8%), and the concentration of SO 2 from 3.6 ppbv to 2.9 ppbv and 2.2 ppbv (decreases of 18.4% and 40.0%). Comparing with the actual emission reductions of NO x and SO 2 in Figures 1 and 3 shows that the concentration reduction ratios of the primary pollutants are not identical to the emission reduction ratios. For example, under the CPPs-ULE and CPPs-ICB-ULE scenarios, the NO x and SO 2 emission reduction ratios for BTH are 10.5%, 41.1%, 8.8% and 55.6%, while the corresponding concentration reduction ratios are 17.1%, 45.9%, 10.7% and 42.7%. This is mainly because the NO x and SO 2 actually present in the atmosphere are also affected by atmospheric transport [47] and by complex atmospheric chemical reactions [48,49].
Benefits of Secondary Pollutant Emission Reduction
The spatial distribution of the concentrations of the secondary pollutants, ammonium, nitrate, sulfate and PM 2.5 , under different scenarios based on GEOS-Chem simulation is shown in Figure 6.
It can be seen from Figure 6 that the high-concentration regions of ammonium, nitrate and sulfate in January 2015 under the STD scenario do not coincide. High ammonium concentrations are mainly found in North China, Central China and the SCC region; in SCC the ammonium concentration exceeds 14 µg/m 3 , while in North and Central China it is 6-14 µg/m 3 . High nitrate concentrations are also concentrated in North China, Central China and SCC; in North China the concentration exceeds 32 µg/m 3 , while in SCC it is lower, at 12-32 µg/m 3 . High sulfate concentrations are mainly confined to the SCC region, typically 16-36 µg/m 3 . Ammonium, nitrate and sulfate are important components of PM 2.5 , so the high PM 2.5 concentrations combine all three and are mainly concentrated in North China, Central China and SCC, where PM 2.5 exceeds 180 µg/m 3 in some areas. Under the CPPs-ICB-ULE scenario, the ICB has also achieved ULE, further reducing NO x and SO 2 emissions and making the national decreases of ammonium, sulfate and PM 2.5 more pronounced. The regions of nitrate decrease are concentrated in Northeast China, Henan and parts of the western regions, with the decrease in Henan being most evident (4.9%), while in Urumqi, North China, South China, SCC and PRD the nitrate concentration increased more significantly.
The formation of ammonium, nitrate and sulfate in the atmosphere is related to NH 3 emissions, and there is a competition mechanism between nitrate and sulfate [17]: ammonium ions (NH 4 + ) in the atmosphere preferentially combine with sulfate ions (SO 4 2− ) to form ammonium sulfate ((NH 4 ) 2 SO 4 ), and only when NH 3 is abundant does it continue to react with nitrate ions (NO 3 − ) to produce ammonium nitrate (NH 4 NO 3 ). The outcome of the inorganic reactions therefore depends on the relative abundance of NH 3 . In regions with high NH 3 emissions, NH 4 + consumes all the SO 4 2− but is not sufficient to combine with all the NO 3 − . When SO 2 is reduced, the NH 4 + originally bound to SO 4 2− combines with the remaining NO 3 − to form NH 4 NO 3 , so the nitrate concentration increases; at the same time, the NH 4 + left over after combining with the available NO 3 − cannot form particulate ammonium, so the ammonium concentration decreases.
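A minimal sketch of this competition, assuming simple stoichiometric (not thermodynamic) partitioning in which NH 4 + first neutralises sulfate as (NH 4 ) 2 SO 4 and only the surplus forms NH 4 NO 3 , illustrates the point; the mole numbers are hypothetical.

```python
# Stoichiometric sketch of the sulfate-nitrate-ammonium competition.
def partition(nh3_mol: float, so4_mol: float, no3_mol: float):
    """Return moles of sulfate-bound NH4+, of NH4NO3, and of leftover NH3."""
    nh4_to_sulfate = min(nh3_mol, 2.0 * so4_mol)   # 2 NH4+ per SO4 2-
    surplus = nh3_mol - nh4_to_sulfate
    nh4no3 = min(surplus, no3_mol)                 # 1 NH4+ per NO3-
    return nh4_to_sulfate, nh4no3, surplus - nh4no3

# Reducing SO2 (hence sulfate) in an NH3-rich air mass frees NH4+ for nitrate:
print(partition(nh3_mol=10.0, so4_mol=4.0, no3_mol=5.0))  # (8.0, 2.0, 0.0)
print(partition(nh3_mol=10.0, so4_mol=2.0, no3_mol=5.0))  # (4.0, 5.0, 1.0)
```

Halving sulfate in the second call raises particulate nitrate from 2 to 5 mol while lowering total particulate ammonium (8 + 2 = 10 versus 4 + 5 = 9, with 1 mol of NH 3 left in the gas phase), exactly the behaviour described above.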
Because the sulfur content of the coal used, power plant boiler technology, industrial coal use and agricultural NH 3 emissions differ between regions, the regional NO x , SO 2 and NH 3 emission ratios differ as well, and so do the emission reductions achieved by ULE technology. This is ultimately reflected in the different non-linear chemical interactions between ammonium, nitrate and sulfate in the atmosphere of the various provinces and regions, which affect the environmental benefits brought about by emission reduction policies. Therefore, in order to better reduce the PM 2.5 concentration, the SCC region should focus on reducing NH 3 and SO 2 , the precursors of ammonium and sulfate, while North and Central China should focus on reducing NH 3 and NO x , the precursors of ammonium and nitrate.
It can be seen from Figure 7 that, under the CPPs-ULE scenario, the emissions of the primary pollutants NO x and SO 2 are significantly reduced (by 24.7% and 18.4%), causing the CHN concentrations of ammonium, sulfate and PM 2.5 to decrease, while the nitrate concentration increases slightly. Applying ULE technology in CPPs and ICB can thus greatly reduce NO x and SO 2 emissions and, in turn, the ammonium, sulfate and PM 2.5 concentrations, but it increases the nitrate concentration. Because NH 3 emissions over the CHN region remain high, reducing only NO x and SO 2 is not the best way to lower PM 2.5 , and additional measures should be considered according to the NH 3 emission conditions of each region.
Under the CPPs-ULE scenario, the BTH concentrations of ammonium, nitrate, sulfate and PM 2.5 changed from 6.8 µg/m 3 , 14.0 µg/m 3 , 7.3 µg/m 3 and 75.8 µg/m 3 to 6.4 µg/m 3 , 14.0 µg/m 3 , 6.3 µg/m 3 and 73.4 µg/m 3 , respectively, i.e., by 5.0%, −0.2%, 13.2% and 3.1%. With the realization of ULE of CPPs in the BTH region, the emissions of the primary pollutants NO x and SO 2 were significantly reduced (by 17.1% and 10.7%), leading to a clear decrease in the ammonium and sulfate concentrations. However, due to the competition mechanism between nitrate and sulfate, the nitrate concentration increased, although only slightly (by 0.2%). Under the CPPs-ICB-ULE scenario, the NO x and SO 2 emissions were reduced further (by 45.9% and 42.7%), bringing the BTH concentrations of ammonium, nitrate, sulfate and PM 2.5 to 5.7 µg/m 3 , 14.1 µg/m 3 , 4.3 µg/m 3 and 68.6 µg/m 3 , respectively, i.e., changes of 15.7%, −0.3%, 40.7% and 9.5% relative to the baseline scenario. It can be seen that, although BTH has substantial NH 3 emissions, its atmosphere is not particularly ammonia-rich, since the nitrate concentration increases by only 0.2% and 0.3% under the two scenarios.
Under the CPPs-ULE scenario, the emissions of the primary pollutants NO x and SO 2 are significantly reduced (by 25.2% and 22.0%), causing the YRD concentrations of ammonium, nitrate, sulfate and PM 2.5 to change from 7.3 µg/m 3 , 14.0 µg/m 3 , 9.2 µg/m 3 and 72.8 µg/m 3 to 6.8 µg/m 3 , 14.5 µg/m 3 , 7.3 µg/m 3 and 69.5 µg/m 3 , respectively, i.e., by 6.1%, −3.9%, 20.3% and 4.5%. Under the CPPs-ICB-ULE scenario, the NO x and SO 2 emissions are reduced further (by 40.1% and 51.0%), bringing the YRD concentrations of ammonium, nitrate, sulfate and PM 2.5 to 6.2 µg/m 3 , 15.1 µg/m 3 , 5.0 µg/m 3 and 65.8 µg/m 3 , respectively, i.e., changes of 14.7%, −8.2%, 45.9% and 9.6% relative to the baseline scenario. It can be seen that, compared with the BTH region, the YRD atmosphere is richer in NH 3 . After ULE of CPPs is completed, the nitrate concentration increases by 3.9%; with the further NO x and SO 2 reductions from ULE of ICB, the nitrate increase grows to 8.2%. The YRD region therefore requires more attention to NH 3 emission reduction control than the BTH region.
Under the CPPs-ULE scenario, the emissions of the primary pollutants NO x and SO 2 are significantly reduced (by 15.2% and 18.6%), causing the SCC concentrations of ammonium, nitrate, sulfate and PM 2.5 to change from 7.9 µg/m 3 , 10.3 µg/m 3 , 13.5 µg/m 3 and 69.6 µg/m 3 to 7.2 µg/m 3 , 11.1 µg/m 3 , 10.8 µg/m 3 and 66.4 µg/m 3 , respectively, i.e., by 8.8%, −7.5%, 20.0% and 4.6%. Under the CPPs-ICB-ULE scenario, the NO x and SO 2 emissions are reduced further (by 26.1% and 45.9%), bringing the SCC concentrations of ammonium, nitrate, sulfate and PM 2.5 to 6.3 µg/m 3 , 11.7 µg/m 3 , 7.8 µg/m 3 and 62.4 µg/m 3 , respectively, i.e., changes of 20.3%, −13.4%, 42.2% and 10.4% relative to the baseline scenario. It can be seen that, compared with the BTH and YRD regions, the SCC atmosphere is the most NH 3 -rich. After ULE of CPPs is completed, the nitrate concentration increases by 7.5%; with the further NO x and SO 2 reductions from ULE of ICB, the increase grows to 13.4%. Compared with the PRD region, SCC is therefore in more urgent need of NH 3 emission control.
It can be seen that the PRD atmosphere is richer in NH 3 than that of BTH, but less so than YRD and SCC. Accordingly, after ULE of CPPs is completed, the nitrate concentration increases by 1.0%; with the further NO x and SO 2 reductions from ULE of ICB, the increase grows to 4.9%.
Environmental Benefits of NH 3 Emission Reduction
The spatial distribution of the concentrations of secondary pollutants, ammonium, nitrate, sulfate and PM 2.5 , under the NH 3 emission reduction control scenario based on GEOS-Chem simulation is shown in Figure 8.
As can be seen from Figure 8, under the CPPs-ULE-NH 3 -30% scenario, combining ULE of CPPs with a 30% NH 3 emission reduction causes the national ammonium, nitrate and PM 2.5 concentrations to decrease significantly, while the sulfate concentration shows no obvious change relative to the STD scenario. The main reason is the reduction of NH 3 in the atmosphere, which reduces the concentration of NH 4 + . Because of the non-linear mechanism linking ammonium, nitrate and sulfate, NH 4 + preferentially combines with SO 4 2− to form sulfate, leaving insufficient NH 4 + to combine with NO 3 − and produce nitrate, so the ammonium and nitrate concentrations are significantly reduced. When the NH 3 emission reduction reaches 50%, the concentrations of ammonium, nitrate and PM 2.5 can be reduced further, still without a significant change in the sulfate concentration; this shows that even a 50% NH 3 reduction does not limit the combination of NH 4 + with SO 4 2− . In order to better reduce PM 2.5 , the emissions of one or more pollutants can be further reduced according to the abatement costs of NO x , SO 2 and NH 3 . On the basis of CPPs-ICB-ULE, NH 3 emission reductions of 30% and 50% produce changes in atmospheric ammonium, nitrate, sulfate and PM 2.5 consistent with those obtained on the basis of CPPs-ULE. NH 3 emission reduction can thus significantly reduce the nitrate concentration while also reducing the ammonium and sulfate components of PM 2.5 .
As can be seen from Figure 9, compared with the STD scenario, reducing NH 3 by 30% on the basis of CPPs-ULE significantly decreases the CHN ammonium and nitrate concentrations (by 21.8% and 26.0%), so that the PM 2.5 concentration decreases by 11.5% (3.3 µg/m 3 ), while the sulfate concentration changes little.
Discussion
Although the CPPs have achieved the ULE transformation target, the PM 2.5 concentration across the country has decreased by only 4.8% (1.4 µg/m 3 ) compared with January 2015, and haze pollution remains very serious. Ammonium, nitrate and sulfate are important components of PM 2.5 ; due to the complex non-linear chemical competition among them, the average nitrate concentration over CHN increases by 1.5% (0.1 µg/m 3 ) despite the large reduction in NO x and SO 2 emissions, which reduces the environmental benefits of the power plant emission reduction. If ULE technology is applied to ICB to further reduce NO x and SO 2 , the PM 2.5 concentration can be reduced by 10.1% (2.9 µg/m 3 ), but the nitrate concentration increases by 2.7% (0.2 µg/m 3 ). It can be seen that reducing NO x and SO 2 emissions alone does not reduce the PM 2.5 concentration effectively, and haze pollution remains very serious.
On the basis of CPPs-ULE, reducing NH 3 emissions by 30% can lower the national PM 2.5 concentration by 11.5% (3.3 µg/m 3 ) by significantly reducing the ammonium (by 21.8%) and nitrate (by 26.0%) concentrations, achieving a better PM 2.5 reduction than ULE of ICB. If the NH 3 emission reduction reaches 50%, the concentrations of ammonium (by 33.4%), nitrate (by 46.8%) and sulfate can be reduced further, and the PM 2.5 concentration falls by 16.5% (4.7 µg/m 3 ). Similarly, on the basis of CPPs-ICB-ULE, NH 3 emission reductions of 30% and 50% lower the national PM 2.5 concentration by 15.6% (4.4 µg/m 3 ) and 20.3% (5.8 µg/m 3 ), again by significantly reducing the ammonium and nitrate concentrations.
In conclusion, NO x , SO 2 and NH 3 emission reductions all lower PM 2.5 , but through different components: ULE of CPPs and ICB reduces the PM 2.5 concentration by reducing NO x and SO 2 and hence the ammonium and sulfate concentrations, while NH 3 emission reduction lowers PM 2.5 mainly by reducing the ammonium and nitrate concentrations. In order to better reduce the PM 2.5 concentration, NO x , SO 2 and NH 3 emission reduction control measures should be formulated according to the specific conditions of different regions.
As can be seen from Figure 10, for the BTH region under the CPPs-ULE scenario, with the NO x and SO 2 emission reduction, the concentrations of ammonium and sulfate were reduced by 5.0% and 13.2%, respectively, while the nitrate concentration increased by 0.2%, and the PM 2.5 concentration was reduced by 3.1% (2.5 µg/m 3 ). Under the CPPs-ICB-ULE scenario, with the further decrease in NO x and SO 2 , the ammonium and sulfate concentrations decreased by 15.7% and 40.7%, respectively, while the nitrate concentration increased by 0.3%, and the PM 2.5 concentration decreased by 9.5% (7.2 µg/m 3 ). Relative to the CPPs-ULE scenario, reducing NH 3 emissions by 30% significantly lowers the nitrate and ammonium concentrations and also reduces the sulfate concentration by 1.6%; this indicates that atmospheric NH 3 in the BTH region is then insufficient, with too little NH 4 + to combine with all the SO 4 2− . When the NH 3 emission is reduced by 50%, the concentrations of ammonium, nitrate and sulfate can be reduced by 33.0%, 42.8% and 16.5%, respectively, and the PM 2.5 concentration by 13.9% (10.6 µg/m 3 ). On the basis of CPPs-ICB-ULE, a 30% NH 3 reduction lowers the ammonium, nitrate and sulfate concentrations by 27.5%, 18.4% and 41.5%, respectively, and the PM 2.5 concentration by 14.1% (10.7 µg/m 3 ); a 50% NH 3 reduction lowers them by 38.6%, 36.0% and 42.5%, respectively, and PM 2.5 by 18.4% (14.0 µg/m 3 ). In conclusion, after the completion of ULE of CPPs in the BTH region, the synergistic effect of NH 3 emission reduction should be considered comprehensively while promoting ULE technology in the field of ICB.

For the YRD region under the CPPs-ULE scenario, with the NO x and SO 2 emission reduction, the concentrations of ammonium and sulfate were reduced by 6.1% and 20.3%, respectively, while the nitrate concentration increased by 3.9% (0.5 µg/m 3 ), and the PM 2.5 concentration was reduced by 4.5% (3.3 µg/m 3 ). Under the CPPs-ICB-ULE scenario, with the further decrease in NO x and SO 2 , the ammonium and sulfate concentrations decreased by 14.7% and 45.9%, respectively, while the nitrate concentration increased by 8.2% (1.1 µg/m 3 ), and the PM 2.5 concentration decreased by 9.6% (7.0 µg/m 3 ). Relative to the CPPs-ULE scenario, a 30% NH 3 emission reduction significantly lowers the nitrate and ammonium concentrations and also reduces the sulfate concentration by 0.7%; this indicates that atmospheric NH 3 in the YRD region is then insufficient, with too little NH 4 + to combine with all the SO 4 2− . When the NH 3 emission is reduced by 50%, the concentrations of ammonium, nitrate and sulfate can be reduced by 39.0%, 49.5% and 21.7%, respectively, and the PM 2.5 concentration by 18.2% (13.3 µg/m 3 ). On the basis of CPPs-ICB-ULE, a 30% NH 3 reduction lowers the ammonium, nitrate and sulfate concentrations by 30.6%, 19.5% and 46.3%, respectively, and the PM 2.5 concentration by 16.5% (12.0 µg/m 3 ); a 50% NH 3 reduction lowers them by 43.7%, 41.7% and 46.7%, respectively, and PM 2.5 by 22.1% (16.1 µg/m 3 ).
In conclusion, relative to the BTH region, the NH 3 emissions in the YRD region are higher, which severely weakens the reduction effect of ULE in CPPs and ICB. After the completion of ULE of CPPs, a 30% NH 3 emission reduction achieves a better PM 2.5 reduction than ULE of ICB. Therefore, after the completion of ULE of CPPs, priority should be given to the environmental benefits brought about by NH 3 emission reduction, and then to ULE of ICB.
For the SCC region under the CPPs-ULE scenario, with the NO x and SO 2 emission reduction, the concentrations of ammonium and sulfate were reduced by 8.8% and 20.0%, respectively, while the nitrate concentration increased by 7.5% (0.8 µg/m 3 ), and the PM 2.5 concentration was reduced by 4.6% (3.2 µg/m 3 ). Under the CPPs-ICB-ULE scenario, with the further decrease in NO x and SO 2 , the ammonium and sulfate concentrations decreased by 20.3% and 42.2%, respectively, while the nitrate concentration increased by 13.4% (1.4 µg/m 3 ), and the PM 2.5 concentration decreased by 10.4% (7.2 µg/m 3 ). Relative to the CPPs-ULE scenario, a 30% NH 3 emission reduction significantly lowers the nitrate and ammonium concentrations and also reduces the sulfate concentration by 0.3%; this indicates that atmospheric NH 3 in the SCC region is then insufficient, with too little NH 4 + to combine with all the SO 4 2− . When the NH 3 emission is reduced by 50%, the concentrations of ammonium, nitrate and sulfate can be reduced by 37.2%, 59.1% and 20.4%, respectively, and the PM 2.5 concentration by 17.8% (12.4 µg/m 3 ), relative to the STD scenario. On the basis of CPPs-ICB-ULE, a 30% NH 3 reduction lowers the ammonium, nitrate and sulfate concentrations by 32.4%, 17.4% and 42.4%, respectively, and the PM 2.5 concentration by 16.3% (11.4 µg/m 3 ); a 50% NH 3 reduction lowers them by 43.3%, 45.0% and 42.5%, respectively, and PM 2.5 by 21.7% (15.1 µg/m 3 ).
In conclusion, relative to the YRD region, the NH 3 emissions in the SCC region are higher, which severely weakens the reduction effect of ULE in CPPs and ICB. After the completion of ULE of CPPs, a 30% NH 3 emission reduction achieves a better PM 2.5 reduction than ULE of ICB, and a 50% NH 3 emission reduction achieves a better PM 2.5 reduction than ULE of ICB combined with a 30% NH 3 reduction. Therefore, after the completion of ULE of CPPs, priority should be given to the environmental benefits brought about by NH 3 emission reduction, and then to ULE of ICB.
For the PRD region under the CPPs-ULE scenario, with the NO x and SO 2 emission reduction, the concentrations of ammonium and sulfate were reduced by 11.4% and 18.8%, respectively, while the nitrate concentration increased by 1.0% (0.1 µg/m 3 ), and the PM 2.5 concentration was reduced by 6.2% (2.7 µg/m 3 ). Under the CPPs-ICB-ULE scenario, with the further decrease in NO x and SO 2 , the ammonium and sulfate concentrations decreased by 23.9% and 40.5%, respectively, while the nitrate concentration increased by 4.9% (0.3 µg/m 3 ), and the PM 2.5 concentration decreased by 12.7% (5.6 µg/m 3 ). Relative to the CPPs-ULE scenario, a 30% NH 3 emission reduction significantly lowers the nitrate and ammonium concentrations and also reduces the sulfate concentration by 0.2%; this indicates that, after a 30% NH 3 reduction, atmospheric NH 3 in the PRD region is insufficient, with too little NH 4 + to combine with all the SO 4 2− . When the NH 3 emission is reduced by 50%, the concentrations of ammonium, nitrate and sulfate can be reduced by 35.3%, 59.7% and 19.2%, respectively, and the PM 2.5 concentration by 17.0% (7.5 µg/m 3 ), relative to the STD scenario. On the basis of CPPs-ICB-ULE, a 30% NH 3 reduction lowers the ammonium, nitrate and sulfate concentrations by 36.5%, 29.5% and 40.6%, respectively, and the PM 2.5 concentration by 18.6% (8.2 µg/m 3 ); a 50% NH 3 reduction lowers them by 45.3%, 53.1% and 40.6%, respectively, and PM 2.5 by 22.7% (10.1 µg/m 3 ).
In conclusion, the NH 3 emissions in the PRD region are relatively high, which severely weakens the reduction effect of ULE in CPPs and ICB. After the completion of ULE of CPPs, a 30% NH 3 emission reduction achieves the same PM 2.5 reduction as ULE of ICB, while a 50% NH 3 emission reduction yields a weaker PM 2.5 reduction than ULE of ICB combined with a 30% NH 3 reduction. Therefore, after the completion of ULE of CPPs, the formulation of environmental policies should follow the example of the BTH region, and the synergistic effect of NH 3 emission reduction should be considered comprehensively during the ULE of ICB. Based on the competitive mechanism between sulfate and nitrate, NH 3 emission reduction shifts the atmospheric NH 3 content from "surplus" to "insufficient"; during this transition to ammonia deficiency, the nitrate concentration decreases markedly first, followed by the ammonium and sulfate concentrations, which is consistent with the conclusions of other scholars [50]. NH 3 emission reduction converts (NH 4 ) 2 SO 4 into NH 4 HSO 4 , and SO 2 forms sulfate through several pathways [48]; therefore, as the NH 3 emission reduction ratio increases, the sulfate concentration decreases only slowly. The main reason for the decrease in the nitrate and ammonium concentrations caused by NH 3 emission reduction is the reduced amount of NH 3 available for reaction in the atmosphere, i.e., the reduced precursor of these aerosols. Other related chemical reaction mechanisms need further study [49,51,52].
Conclusions
Although the CPPs have achieved the ULE transformation target, the PM 2.5 concentration across the country has decreased by only 4.8% (1.4 µg/m 3 ) compared with January 2015, and haze pollution remains very serious. Ammonium, nitrate and sulfate are important components of PM 2.5 ; due to the complex non-linear chemical competition among them, the average national nitrate concentration increases by 1.5% (0.1 µg/m 3 ) despite the large reduction in NO x and SO 2 emissions, which reduces the environmental benefits of ULE of CPPs. If ULE technology is applied to ICB to further reduce NO x and SO 2 , the PM 2.5 concentration can be reduced by 10.1% (2.9 µg/m 3 ), but the nitrate concentration increases by 2.7% (0.2 µg/m 3 ). It is therefore concluded that simply reducing NO x and SO 2 emissions cannot sufficiently lower the PM 2.5 concentration; targeted pollutant emission reduction control measures should be formulated according to the specific conditions of different regions to achieve a better reduction in the PM 2.5 concentration.
Based on CPPs-ULE, a 30% NH 3 emission reduction can significantly lower the ammonium and nitrate concentrations, so that the PM 2.5 concentration decreases by 11.5% (3.3 µg/m 3 ), achieving a better PM 2.5 reduction than ULE of ICB. If the NH 3 emission reduction reaches 50%, the concentrations of ammonium, nitrate and sulfate can be further reduced, and the PM 2.5 concentration falls by 16.5% (4.7 µg/m 3 ). Similarly, based on CPPs-ICB-ULE, NH 3 reductions of 30% and 50% lower the PM 2.5 concentration by 15.6% (4.4 µg/m 3 ) and 20.3% (5.8 µg/m 3 ). In summary, NO x , SO 2 and NH 3 emission reductions all lower PM 2.5 , but through different components: ULE of CPPs and ICB reduces NO x and SO 2 and thereby the ammonium and sulfate concentrations, while NH 3 reduction acts mainly through the ammonium and nitrate concentrations. In order to better reduce the PM 2.5 concentration, NO x , SO 2 and NH 3 emission reduction control measures should be considered comprehensively in the different regions of China.
The emission bases of NO x , SO 2 and NH 3 differ among the regions of China, so the changes in regional PM 2.5 and its important components under different scenarios also differ considerably. For the BTH and PRD regions, after ULE of CPPs is completed, the synergistic effects of NH 3 emission reduction should be considered comprehensively while promoting ULE technology in the field of ICB. For the YRD and SCC regions, after the completion of ULE of CPPs, priority should be given to the environmental benefits brought about by NH 3 emission reduction, and then to the ULE transformation of ICB. In conclusion, in order to reduce PM 2.5 more effectively, the economic costs and environmental benefits of emission reduction control measures for NO x , SO 2 , NH 3 and other pollutants should be taken into account comprehensively.
"Chemistry",
"Environmental Science"
] |
Magnetic inhibition of the recollimation instability in relativistic jets
In this paper, we describe the results of three-dimensional relativistic magnetohydrodynamic simulations aimed at probing the role of regular magnetic field on the development of the instability that accompanies recollimation of relativistic jets. In particular, we studied the recollimation driven by the reconfinement of jets from active galactic nuclei (AGN) by the thermal pressure of galactic coronas. We find that a relatively weak azimuthal magnetic field can completely suppress the recollimation instability in such jets, with the critical magnetisation parameter $\sigma_{\rm cr}<0.01$. We argue that the recollimation instability is a variant of the centrifugal instability (CFI) and show that our results are consistent with the predictions based on the study of magnetic CFI in rotating fluids. The results are discussed in the context of AGN jets in general and the nature of the Fanaroff-Riley morphological division of extragalactic radio sources in particular.
waves can no longer provide causal communication across the jet (Lyubarskij 1992).
Using the causality argument, it was concluded that a = 2 is critical for relativistic magnetised jets as well. In order to test this conclusion, a 3D periodic box setup was used to study the stability of expanding relativistic jets with the magnetisation parameter σ ≲ 1 and predominantly azimuthal magnetic field (σ = b²/4πw, where b is the magnetic field strength in the fluid frame and w is the relativistic enthalpy). Within the box, the jets had cylindrical geometry and their expansion was promoted via a forced decline of the external gas pressure. The temporal rate of the decline was set to what would be seen in a reference frame moving with relativistic speed through an atmosphere with the gas pressure P ∝ z^{−a}. The results showed a progressively increasing reduction of the instability growth rate with increasing value of a, leading to its almost complete suppression for a ≳ 2. A number of other computational studies support the reduction of the instability growth rates in expanding jets (e.g. Rosen & Hardee 2000; Moll et al. 2008; McKinney & Blandford 2009; Porth 2013).
The rapid expansion of astrophysical jets can come to a halt when they enter regions with sufficiently high and slowly varying external pressure. This is because the internal pressure of expanding jets decreases very rapidly and may quickly drop below the external pressure. In this case, the external pressure drives a shock, often called a reconfinement shock, into the jet. This shock reheats the jet and establishes an approximate pressure balance with the external gas. Steady-state two-dimensional models of reconfined jets predict that they become approximately cylindrical, though with quite strong superimposed oscillations. This creates conditions for the development of instabilities which were previously suppressed in the expansion zone.
For AGN jets, such regions of slowly varying external pressure can be the central cores of hot galactic coronas or the extended cocoons of very hot gas (radio lobes) inflated by the jets themselves. The former option is open to the so-called "naked" jets of FR-1 radio sources, which seem to be in direct contact with the coronal gas, and the latter to the jets of FR-2 radio sources (Fanaroff & Riley 1974). Falle (1991) proposed a self-similar model for the evolution of the large-scale structures created by FR-2 jets. This model predicts a relatively slow decrease of the cocoon pressure and hence a gradual increase of the jet length-to-radius ratio with time (or the source size). Falle (1991) argued that once the length-to-radius ratio grows sufficiently large, the jet develops instabilities and becomes turbulent, and that this results in a transition to the FR-1 morphology. Recent 3D hydrodynamic (HD) and magnetohydrodynamic (MHD) simulations of non-relativistic cylindrical jets by Massaglia et al. (2016, 2019) provided some support to this idea. However, the jets were injected into the computational domain as already perfectly collimated flows, bypassing the initial phase of free or almost free expansion. Tchekhovskoy & Bromberg (2016) carried out relativistic 3D MHD simulations of outflows generated by a rotating sphere with a monopole magnetic field. These outflows also inflated cocoons of hot gas which provided their confinement and quasi-cylindrical collimation. The resulting cylindrical flows suffered from CDI kink modes, which in some cases led to the development of a morphology reminiscent of FR-1 radio sources. However, these were not proper jets but rather the so-called "magnetic towers" (Lynden-Bell 2003), as the flow speed remained sub-fast-magnetosonic.
The recollimation of supersonic (super-fast-magnetosonic in the magnetic case) jets may be accompanied by another instability (which we tentatively call the recollimation instability), which is not present in cylindrical configurations. This possibility was first recognised by Matsumoto & Masada (2013), who argued that the accelerated transverse motion associated with the radial oscillations of reconfined jets is similar to that of non-equilibrium cylindrical jets. They showed that the oscillations of cylindrical jets are accompanied by the Rayleigh-Taylor instability (RTI, Rayleigh 1883; Taylor 1950) at the interface between the jet and the external medium. They explored this via 2D relativistic HD simulations of oscillating cylindrical jets, which also demonstrated that the overall effect can be amplified via the Richtmyer-Meshkov instability (RMI, Richtmyer 1960; Meshkov 1972) of the shocks associated with the oscillations. Matsumoto et al. (2017) carried out a linear stability analysis of the relativistic RTI using the incompressibility approximation.
Recently, several groups carried out 2D and 3D simulations of non-magnetic reconfined jets (Gourgouliatos & Komissarov 2018a; Gottlieb et al. 2019, 2020b; Matsumoto & Masada 2019). They demonstrated that the recollimation instability develops only in 3D simulations, where it can lead to a rapid transition to a fully turbulent state soon downstream of the reconfinement point, in great contrast to the predictions of steady-state axisymmetric models. Gourgouliatos & Komissarov (2018a,b) argued that the recollimation instability is related not to RTI but to the centrifugal instability (CFI, Rayleigh 1917). The relativistic version of this instability in rotating flows was studied by Gourgouliatos & Komissarov (2018b).
AGN jets carry magnetic fields, and it is known that a sufficiently strong magnetic field can inhibit various hydrodynamic instabilities when their development leads to an increase of the field energy. In particular, Komissarov et al. (2019) studied the role of an axial magnetic field in the development of CFI at the cylindrical interface between rotating relativistic fluids. Extrapolating those results to the problem of reconfined jets, they concluded that a relatively weak magnetic field, with σ = 0.01 − 0.1, may completely suppress the recollimation instability in reconfined jets. Here we investigate this problem directly, via 3D relativistic MHD simulations of AGN jets.
The structure of the paper is as follows. In section 2, we present our method and the setup of the simulations we performed. Section 3 describes the results of these simulations. We discuss the astrophysical implications of these results in section 4 and summarise our conclusions in section 5.
Overview
In this study, we use computer simulations to investigate the role of the magnetic field on the stability of relativistic jets undergoing reconfinement by the thermal pressure of external gas. In order to ensure the continuity with the previous studies and allow for direct comparison, we use as a starting point the non-magnetic model C1 of Gourgouliatos & Komissarov (2018a). In this model, an initially conical jet propagates through the X-ray corona of the parent galaxy. The pressure distribution of the corona is modelled with the isothermal King law. The initial solution describes a steady-state jet in direct contact with the external gas. There is no cocoon separating the jet from the external gas. This configuration corresponds to the sub-group of FR-1 jets whose morphology is similar to the jets of the radio source 3C31 (the so-called naked jets).
The C1 jet exhibits rapid development of the recollimation instability and hence provides a good reference for studying the role of magnetic field in this process. To this aim, we modify the C1 model by adding a purely azimuthal magnetic field, while keeping the other parameters unchanged. On the scale of galactic coronae, the azimuthal component is expected to dominate over the poloidal magnetic field emerging from the jet engine. The polarisation observations of FR-1 jets also indicate the presence of a longitudinal component. However, it is normally attributed to a small-scale irregular magnetic field (e.g. Laing 1981; Begelman et al. 1984; Wardle 2013), which is unlikely to influence long-wavelength instabilities.
The overall strategy of our computational experiments is the same as in Gourgouliatos & Komissarov (2018a). First, we find an approximate axisymmetric steady-state solution using the method described in Matsumoto et al. (2012) and Komissarov et al. (2015). Next, we use the result to set up initial conditions for time-dependent axisymmetric simulations. The purpose of these 2D simulations is 1) to check that the steady-state solutions are sufficiently accurate and 2) to see if they develop axisymmetric instabilities. Finally, we use the steady-state solution to set up initial conditions for fully three-dimensional simulations.
Governing equations
We solve the equations of ideal special relativistic MHD. These are the continuity equation

∇_α (ρ u^α) = 0,

the energy-momentum equation

∇_α T^{αβ} = 0, with T^{αβ} = (w + b²/4π) u^α u^β / c² + (P + b²/8π) g^{αβ} − b^α b^β / 4π,

and the Faraday equation

∇_α (u^α b^β − u^β b^α) = 0.

Here ρ is the rest-mass density, u^α is the four-velocity, w = ρc² + γ/(γ − 1) P is the relativistic enthalpy of ideal gas, P is the gas pressure, b^α is the 4-vector of magnetic field, and g^{αβ} is the metric tensor of Minkowski spacetime. In the simulations we use the ratio of specific heats γ = 4/3. The 3+1 decomposition of u^α and b^α is

u^α = Γ(c, v), b^0 = Γ (v·B)/c, b = B/Γ + Γ (v·B) v / c²,

where Γ is the Lorentz factor, v is the 3-velocity, and B is the magnetic field vector as measured in the laboratory frame. The strength of the magnetic field in the fluid frame is

b² = B²/Γ² + (v·B)²/c².

In order to simplify the identification of the jet plasma, we introduce a passive tracer, τ, governed by the equation

∇_α (τ ρ u^α) = 0.

In the initial solution, τ is set to unity inside the jet and to zero in the external medium. Moreover, it is kept at unity in the ghost cells of the nozzle boundary, so that the injected flow carries this value of the tracer into the computational domain. The equations are integrated using the AMR-VAC code as described in Keppens et al. (2012).
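As a numerical illustration of the 3+1 relations above, the following sketch evaluates the fluid-frame field strength and the magnetisation parameter from lab-frame quantities; it uses units with c = 1 for brevity, and all input values are hypothetical.

```python
import numpy as np

# Sketch of the 3+1 decomposition (Gaussian units, c = 1):
# b0 = Gamma (v.B), b = B/Gamma + Gamma (v.B) v, sigma = b^2 / (4 pi w).
def fluid_frame_field(v: np.ndarray, B: np.ndarray):
    gamma = 1.0 / np.sqrt(1.0 - v @ v)
    vdotB = v @ B
    b0 = gamma * vdotB
    b_vec = B / gamma + gamma * vdotB * v
    b2 = B @ B / gamma**2 + vdotB**2   # equivalently b_vec@b_vec - b0**2
    return b0, b_vec, b2

def sigma(v: np.ndarray, B: np.ndarray, w: float) -> float:
    """Magnetisation parameter sigma = b^2 / (4 pi w)."""
    return fluid_frame_field(v, B)[2] / (4.0 * np.pi * w)

# For a poloidal velocity with purely azimuthal field (v.B = 0), b = |B|/Gamma:
print(sigma(v=np.array([0.0, 0.0, 0.98]), B=np.array([0.0, 0.05, 0.0]), w=1.0))
```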
Jet setup
The external gas is assumed to be isothermal, with a spherically symmetric mass density distribution described by the King law

ρ_e(r) = ρ_e,0 [1 + (r/r_c)²]^−a ,

where r is the spherical radial coordinate, r_c is the core radius, and ρ_e,0 is the central density. The power-law index of the density distribution is set to a = 1.25, the typical value for giant elliptical galaxies. The jet nozzle is located at the distance z_0 = 0.1 r_c from the origin, with the initial radius R_0 = 0.02 r_c. The jet density distribution at the nozzle is uniform, ρ = ρ_j,0. Initially, the jet is relativistically cold, with P ≪ ρc² and hence w ≈ ρc².
The velocity distribution over the nozzle corresponds to a conical flow of half-opening angle θ_0 = 0.2 emerging from the origin, where {R, φ, z} are cylindrical coordinates aligned with the jet axis and θ = arctan(R/z_0). The corresponding Lorentz factor depends only on the cylindrical radius and decreases smoothly from Γ = Γ_0 = 5 at R = 0 to Γ = 1 at R = R_0; this is a smooth approximation of a top-hat profile. The magnetic field is assumed to be purely azimuthal. At the nozzle it has the core-envelope distribution

B_φ(R) = B_m R/R_m for R ≤ R_m ; B_m R_m/R for R_m < R ≤ R_0 ; 0 for R > R_0 ,

where R_m is the core radius and B_m is the field strength at R = R_m. The electric current is uniform inside the core and vanishes in the envelope. The return current flows over the jet surface R = R_0. All our models have R_m = R_0/2. The relativistic magnetisation parameter σ = b²/w is maximum at the magnetic core radius. We have studied four models with σ_max = 0 (HD), 10⁻⁴ (MHD1), 10⁻³ (MHD2), and 10⁻² (MHD3). Figure 1 shows the radial distribution of σ at the nozzle. Models with higher values of σ_max were not needed for the purpose of the study. The initial jet density was set by the condition that for r_c = 1 kpc the jet power is L = 1.8 × 10⁴⁴ erg/s, the value roughly corresponding to the boundary between FR-1 and FR-2 sources.
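The nozzle setup can be tabulated with a short script; the normalisations ρ_e,0 and B_m are placeholder values, and the core-envelope profile is written as reconstructed above, so the sketch should be read as illustrative rather than as the models' actual units.

import numpy as np

a, r_c = 1.25, 1.0                        # King index and core radius
R0, Rm = 0.02 * r_c, 0.01 * r_c           # nozzle radius and magnetic core radius

def rho_external(r, rho_e0=1.0):
    # Isothermal King law for the coronal density.
    return rho_e0 * (1.0 + (r / r_c)**2)**(-a)

def B_phi(R, Bm=1.0):
    # B ~ R in the core (uniform current), B ~ 1/R in the envelope, 0 outside.
    B = np.where(R <= Rm, Bm * R / Rm, Bm * Rm / np.maximum(R, 1.0e-12))
    return np.where(R <= R0, B, 0.0)

R = np.linspace(0.0, 1.5 * R0, 200)
print(B_phi(R).max())                     # the field (and sigma) peaks at R = Rm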
Steady-state solutions
The complex steady-state solutions describing reconfined jets were constructed using one-dimensional simulations of flows with cylindrical symmetry, following the technique developed in Matsumoto et al. (2012) and Komissarov et al. (2015). Standard time-dependent two-dimensional axisymmetric simulations were used to verify their suitability as initial solutions for the instability study.
"One-dimensional" models
In the approach of Matsumoto et al. (2012) and Komissarov et al. (2015), equilibrium solutions of two-dimensional axisymmetric relativistic jet problems are approximated by solutions of time-dependent one-dimensional axisymmetric problems (in the radial direction). According to this approach, 1) the initial configuration of the time-dependent problem describes the distribution of the flow variables at the nozzle, and 2) the time evolution is triggered via a forced variation of the external pressure. The approximate steady-state solutions A(z, R) are obtained from the corresponding time-dependent solution Ā(t, R) via the transformation A(z, R) = Ā((z − z_0)/c, R). For further details see Komissarov et al. (2015).
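The space-time transformation at the heart of this method amounts to reading each 1D snapshot as a transverse slice of the steady jet; a minimal sketch follows, in which the array names are illustrative stand-ins for actual simulation output.

import numpy as np

c, z0 = 1.0, 0.1

def steady_state_slice(Abar, times, z):
    # A(z, R) = Abar((z - z0)/c, R): pick the snapshot recorded at t = (z - z0)/c.
    i = np.searchsorted(times, (z - z0) / c)
    return Abar[min(i, len(times) - 1)]

times = np.linspace(0.0, 40.0, 401)
Abar = np.random.rand(len(times), 20)     # stand-in for the 1D run output
slice_z10 = steady_state_slice(Abar, times, z=10.0)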
In our case, the external pressure was varied at every time step according to the prescription

P_e(r(t)) = P_e,0 [1 + (r(t)/r_c)²]^−a ,

where r(t) = [(z_0 + ct)² + R_j²(t)]^{1/2} and R_j(t) is the jet radius at time t. For these simulations we used a uniform grid with the cell size ∆R = 0.05 R_0, corresponding to 20 cells per initial jet radius at the nozzle. Figure 2 illustrates the properties of the approximate steady-state solutions obtained using this approach, with the model MHD3 used as an example. Given the relative weakness of the magnetic field, the global structure of the solutions for the other models is not much different, except for the strength of the magnetic field itself. The most important feature of the solution is the reconfinement shock driven into the jet by the external gas pressure. Initially, both its radius and the jet radius increase, but eventually the jet ram pressure drops too low and both radii begin to contract. At z_RP ≈ 17 (in units of z_0 = 0.1 r_c) the reconfinement shock converges on the jet axis (the reconfinement point) and gets reflected as a decollimation shock. In the unshocked inner part of the jet, the mass density and magnetic energy density decrease approximately as z⁻². At the reconfinement shock both these parameters increase, and in the shocked outer layer they evolve relatively slowly. As a result, the jet is almost hollow. The distance to the reconfinement point decreases with the jet magnetisation: in the HD and MHD1 models it is z_RP ≈ 23, and in the MHD2 model z_RP ≈ 22. The same trend, but at much higher σ, has been seen by Fromm et al. (2017). Figure 3 shows the distribution of σ in two cross-sections, one about halfway to the reconfinement point (z = 10) and another well downstream of this point and near the far boundary of the computational domain (z = 40). One can see that the magnetisation peaks inside the shocked outer layer, where its value exceeds σ_max at the jet nozzle. This is consistent with the increase of σ at fast shocks (reference).
Two-dimensional models
The initial solutions for the 2D simulations were set by projecting the approximate steady-state solutions (obtained as described in the previous section) onto the computational grid of cylindrical coordinates {R, z}. Prior to the projection, the density distribution of the steady-state solutions was modified. Following Gourgouliatos & Komissarov (2018a), its step-like transition between the jet and the external gas was replaced with a tanh profile of thickness δR = 0.1 R_0, where R_0 is the jet radius at the nozzle. This allowed us to substantially reduce the numerical dissipation at the interface, which would otherwise be too strong and would corrupt the solution.
The computational domain was [0, 6] × [1, 41], with a uniform grid of 600 × 200 cells (cell sizes ∆R = 0.01 and ∆z = 0.2). This gives the same radial resolution as in the 1D simulations. The lower z boundary (z = 1) was divided into the nozzle section (0 < R < 0.2) and the corona section (R > 0.2). The values of the physical variables in the ghost cells of the nozzle section were fixed, which is allowed because the jets are super-fast-magnetosonic. In the corona section, we used reflective boundary conditions. Outflow (zero-gradient) boundary conditions were imposed at the z = 41 and R = 6 boundaries, and reflective boundary conditions at R = 0. Newtonian gravity was used to maintain the hydrostatic equilibrium of the unperturbed coronal gas (Perucho & Martí 2007). This involved the introduction of source terms in both the energy and the momentum equations. These had little effect on our relativistic jets.
The 2D simulations were run for 2.5 light-crossing times of the computational domain in the z direction. During this time the solution evolved, but its deviation from the initial solution was relatively mild and there were no signs of instabilities. This allowed us to conclude that the initial solutions were close to a steady state, stable to axisymmetric perturbations, and hence suitable for use in 3D simulations.
3D time-dependent simulations
The 3D simulations were carried out on a Cartesian grid of {x, y, z} coordinates, with the computational domain [−4, 4] × [−4, 4] × [1, 41]. In order to reduce the computational cost of the simulations, we capitalised on the adaptive mesh capabilities of the AMRVAC code. We used four levels of adaptive mesh refinement, with 100³ cells at the base level. The corresponding cell sizes satisfy ∆z = 5∆x = 5∆y. At the finest mesh level, this is equivalent to the same resolution of 20 cells per nozzle radius as in the auxiliary 1D and 2D simulations. The refinement was controlled according to the Lohner criterion, with the Lorentz factor as the reference parameter.
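The quoted grid figures are mutually consistent, as the short check below confirms:

dx_base = 8.0 / 100                       # domain [-4, 4] with 100 base cells
dz_base = 40.0 / 100                      # domain [1, 41] with 100 base cells
dx_finest = dx_base / 2**3                # four AMR levels = three refinements
print(dz_base / dx_base)                  # 5.0, i.e. dz = 5 dx at every level
print(0.2 / dx_finest)                    # 20.0 cells per nozzle radius R0 = 0.2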
We used the same boundary conditions as in the 2D simulations, slightly adjusted to the different grid geometry (obviously, the jet axis is no longer a boundary of the simulation domain, and hence no boundary conditions are needed there). The same applies to the initial setup and the use of Newtonian gravity. Following Gourgouliatos & Komissarov (2018a), the initial distributions of the jet density and pressure were perturbed with a small-amplitude modulation in the azimuthal angle φ about the steady-state values ρ_s(x, y, z) and P_s(x, y, z).
RESULTS
The key result of the 3D simulations is illustrated in figures 4-7. Figure 4 shows the distribution of the Lorentz factor in all three magnetic models at the end of the simulations, mostly in the longitudinal plane inclined to the x axis at 45°. The solution for the unmagnetised jet is also shown as a reference model. Figures 5-7 complement these plots by showing the Lorentz factor distribution in the cross-sections z = 5, 12, 15 and 40. The final time for all runs is t = 40, which is one light-crossing time of the domain in the z direction. Since the flow velocity of the jet is almost equal to the speed of light, this is almost the same as the jet crossing time.
The structure of the jet in the model MHD1, in which the magnetic field of the jet is the weakest, is almost the same as in the pure hydro model (HD). It exhibits a transition from a laminar to a fully turbulent flow at around z = 10, somewhat upstream of the reconfinement point of the steady-state solution, which is located at z ≈ 23. The turbulence promotes entrainment of the external gas, mixing, and jet deceleration. The Lorentz factor reduces from Γ = 5 down to 2 ≲ Γ ≲ 3.
The recollimation instability also develops in the MHD2 model, but in contrast to MHD1 a transition to a turbulent state is not observed. A closer inspection of the flow structure in the jet cross-section (see figure 6) reveals that the azimuthal number of the dominant mode gradually decreases from m ≈ 20 at z = 10 to m = 4 at z = 40. Presumably, once the growth of higher-order modes saturates, they get erased by numerical diffusion, and the resulting flow, with a thicker transition layer between the jet and the external gas, can support only the modes of lowest order. At z = 40, the nonlinear m = 4 mode clearly dominates the other modes (figure 6). One can see that it is aligned with the Cartesian grid, which implies a strong bias due to the anisotropy of the numerical scheme.
In the MHD3 model, the magnetic field is sufficiently strong to completely suppress the recollimation instability. The shape of the jet cross-section shows a gradual transformation from the initial circular geometry to a square-like one near the far end of the computational domain. This square is also aligned with the Cartesian grid, which again suggests the numerical nature of this deformation. This numerical effect is likely exacerbated by the fact that the jet radius decreases strongly at the reconfinement point.
The nature of the instability
Several researchers have interpreted the recollimation instability as a particular form of the Rayleigh-Taylor instability (RTI; Matsumoto & Masada 2013; Matsumoto et al. 2017; Toma et al. 2017; Gottlieb et al. 2020c). In order to understand the arguments in favour of this interpretation, it is perhaps most revealing to consider the evolution of the radial structure of a steady-state jet in an inertial frame moving with the jet speed along its axis. In this frame, the interface moves up and down relative to the jet axis and appears similar to the accelerated interface between two fluids in the problem considered by Taylor (1950). Moreover, the structures formed in the nonlinear phase of the instability, but before the turbulent regime, are reminiscent of the fingers and bubbles characteristic of RTI, at least when viewed in the jet cross-section (e.g. see figures 5 and 6). Matsumoto & Masada (2013) assumed that the spatial oscillations of steady-state axisymmetric jets are equivalent to the temporal oscillations of under-expanded cylindrical (∂_z = 0) jets. They studied the temporal oscillations of such jets and observed an instability when the jet was heavier than the external gas (cf. Toma et al. 2017). Hence they identified this instability as RTI. This identification makes sense, as in that problem both fluids are accelerated in the direction normal to the interface. However, the similarity between the temporal oscillations of cylindrical jets and the spatial oscillations of steady-state jets is not sufficiently close to guarantee the same nature of the instabilities observed in these problems. In particular, Gourgouliatos & Komissarov (2018a) have shown that the oscillating solutions for stationary jets are unstable not only when the jets are heavier than the external medium but also when they are lighter, and that in both cases the instability looks the same.
From the theoretical viewpoint, the key feature of Taylor's setup is the same acceleration of both fluids in the direction normal to the interface between them. Hence, in the non-inertial frame of the interface, the problem is identical to that studied by Rayleigh (1883), where the initial steady-state configuration describes a hydrostatic equilibrium in a gravitational field. However, in the case of an oscillating steady-state jet this condition is not satisfied. Indeed, whereas the jet fluid moves along curved streamlines and hence experiences centripetal acceleration, the external medium is at rest and hence has vanishing acceleration. Thus the jet problem is not a variant of Taylor's problem.
Locally, the motion of jet fluid along the curved interface between a steady-state oscillating jet and the external gas is reminiscent of rotation. Rayleigh (1917) established that rotating fluids may be subject to what is now known as the centrifugal instability (CFI). Like RTI, this instability is local, and hence the steady-state flow does not have to be a proper rotation (Bayly 1988). In the plane normal to the streamlines, the structures produced by CFI may look very similar to the fingers and bubbles of RTI (e.g. Gourgouliatos & Komissarov 2018b). However, in 3D they look more like ridges and trenches aligned with the flow streamlines. This is exactly what was observed in the simulations of reconfined jets by Gourgouliatos & Komissarov (2018a) and is seen in our simulations as well. This suggests that the recollimation instability is an inertial instability closely related to the centrifugal instability of rotating fluids. Using a heuristic approach, Gourgouliatos & Komissarov (2018b) derived a generalised Rayleigh instability criterion for relativistic rotating fluids. In the case of a discontinuity between two rotating fluids, it reduces to the instability condition

[Ψ] < 0 , where Ψ = wΓ²Ω² , (17)

Ω is the angular velocity of rotation, and [Ψ] = Ψ_o − Ψ_i, with the suffixes 'i' and 'o' standing for the inner and outer sides of the discontinuity respectively. In the Newtonian limit, Ψ = ρΩ² and the criterion reads

[ρΩ²] < 0 . (18)

In the case of uniform density, this reduces to [Ω²] < 0, which is the same as the Rayleigh criterion for CFI in an incompressible fluid. In the case of a solid-body rotation law ([Ω] = 0), (18) reduces to [ρ] < 0, which is the same as the instability criterion for RTI. Indeed, in the frame rotating with the fluid, this case is equivalent to the equilibrium in the radial "gravitational field" with the acceleration g = Ω²r, where r is the radius vector of cylindrical coordinates aligned with the axis of rotation (cf. Scase & Hill 2018).
In the problem of a steady-state reconfined jet, the external gas is at rest. This corresponds to the rotation problem with Ω_o = 0. Hence, the instability criterion (17) reduces to −w_i Γ_i² Ω_i² < 0, which is satisfied independently of the inertia of the external gas. This is in agreement with the results of jet simulations by Gourgouliatos & Komissarov (2018a). Thus we conclude that the recollimation instability is a variant of CFI. Komissarov et al. (2019) studied the role of an axial magnetic field in the development of CFI at the interface between rotating relativistic fluids. Using a heuristic approach, they concluded that the magnetic field suppresses the CFI modes with wavelengths below a critical value set by the magnetic field strength b as measured in the fluid frame, the relativistic enthalpy w = ρc² + γ/(γ−1)P, and u = vΓ, where v is the flow rotational velocity and Γ is the corresponding Lorentz factor; the indices "1" and "2" denote the fluids inside and outside of the interface, whose curvature radius is r_in. This criterion was in good agreement with both their Newtonian and relativistic simulations. Based on these results, they predicted complete suppression of CFI modes with the azimuthal wave number m ≳ 4 at the interface between reconfined jets and external gas provided

σ > (θ_0 Γ)² / 16 , (22)

where θ_0 is the initial half-opening angle of the jet and Γ is its Lorentz factor. Given the jet half-opening angle θ_0 = 0.2 and the Lorentz factor Γ = 5 of our jet models, equation (22) predicts suppression of the recollimation instability for σ > 0.06. Given the maximum magnetisation of the jet plasma σ_max ≤ 0.01 at the nozzle, one would expect the instability to develop in all three models. However, we find that in MHD3, with σ_max = 0.01, it is completely suppressed. One factor promoting this suppression is the increase of σ at the reconfinement shock: figure 3 shows that in the shocked layer the magnetisation increases up to σ ≈ 0.04. Another factor is the decrease of the jet Lorentz factor at the interface with the external gas. It is introduced already at the nozzle via the boundary conditions, and downstream it is further amplified by the reconfinement shock. Figure 8 shows the radial distribution of (θ_0 Γ)²/16 and σ at z = 10 in all magnetic models. One can see that the suppression criterion is not satisfied in the shocked layer in the MHD1 and MHD2 models, but it is marginally satisfied in the MHD3 model. Thus the results of our jet simulations are in good agreement with the predictions based on the study of magnetic CFI in rotating fluids.
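The thresholds quoted above follow directly from criterion (22); a two-line check with the model parameters reproduces them:

theta0, Gamma = 0.2, 5.0
print((theta0 * Gamma)**2 / 16.0)        # 0.0625: suppression predicted for sigma > 0.06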
While we were working on this paper, the results of a related numerical study of relativistic jets were published by Gottlieb et al. (2020a). They also investigated the impact of magnetic field on the recollimation instability of relativistic jets, but in the context of gamma-ray bursts (GRBs), and concluded that σ ≳ 10⁻² leads to a suppression of the instability. Apparently, they were unaware of our study of the magnetic CFI in rotating flows and did not compare their results with the criterion (22).
When comparing with the criterion (22), one has to keep in mind that they consider relativistically hot jets, which can be thermally accelerated. Indeed, the thermal acceleration of weakly magnetised jets leads to Γ ∝ R_j, where R_j is the jet radius (e.g. Komissarov 2011). Hence, the flow Lorentz factor may significantly increase compared to its value at the nozzle. Although Gottlieb et al. (2020a) do not show the variation of the jet Lorentz factor, this increase cannot be large, as the jet radius prior to the reconfinement point increases only slightly in their models for long GRBs and only by a factor of a few in their setup for short GRBs. At the nozzle, they set Γ_0 θ_0 = 0.7, which is only slightly below Γθ_0 = 1 in our simulations. Thus, at least in the case of long GRBs, their results are consistent with ours and with the criterion (22).
Implications for AGN jets
According to VLBI observations of AGN jets, the mean value of θ_j Γ is about 0.2 (Jorstad et al. 2005; Pushkarev et al. 2009; Clausen-Brown et al. 2013). For such jets, the criterion (22) gives the critical value σ_cr ≈ 0.0025. Such a small value is very problematic for the magnetic collimation-acceleration mechanism, because it loses efficiency dramatically when σ drops to values of about unity. Based on the asymptotic solutions, Lyubarsky (2010) gives σ = 1 at the distance z_1 = 10²-10³ r_g (see also Komissarov et al. 2007) and σ = 0.1 at the distance z_0.1 = z_1 Γ_max⁴ from the central black hole, where r_g is its gravitational radius and Γ_max is the terminal jet Lorentz factor. Repeating his calculations for σ = 0.01, we find z_0.01 = z_1 Γ_max⁴⁹ (the power index of 49 is not a typo), which is ridiculously high for any realistic value of Γ_max. Hence, if AGN jets are accelerated via this mechanism, their magnetisation never becomes small enough to allow the recollimation instability.
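To make the "ridiculously high" statement concrete, the sketch below evaluates z_0.01 = z_1 Γ_max⁴⁹ for a few terminal Lorentz factors; the choice z_1 = 10³ r_g is the optimistic end of the quoted range.

z1 = 1.0e3                                # in units of the gravitational radius r_g
for Gamma_max in (5.0, 10.0, 30.0):
    print(Gamma_max, z1 * Gamma_max**49)  # ~2e37 r_g already for Gamma_max = 5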
Given the jet power and radius, one can relate the strength of its magnetic field to σ. Assuming a uniform jet, its total power can be expressed via R_j, Γ, w and b (equation 23). We apply this result to the M87 jet, the best studied of all AGN jets. It has been suggested that its optical knot HST1 coincides with the reconfinement point, the location where the reconfinement shock reaches the jet axis (Stawarz et al. 2006; Nalewajko 2012). It is located at about 250 pc from the supermassive black hole of M87. At around this point, the radio observations suggest a transition from acceleration to deceleration of the jet (Asada et al. 2014), as well as a transition from parabolic to conical geometry (Asada & Nakamura 2012). The jet radius at the deprojected distance z = 100 pc is R_j ≈ 2 pc (Asada & Nakamura 2012) and the Lorentz factor is Γ ≈ 5 (Asada et al. 2014). The mean power of the M87 jet can be estimated from the work done by its expanding radio lobes against the thermal pressure of the surrounding X-ray emitting gas, which yields L ≈ 10⁴⁴ erg/s (Owen et al. 2000). Substituting these values into (23), we find a field strength that is well below the equipartition value for the HST1 knot, B_eq ≈ 1 mG (Harris et al. 2003), and the value based on the variability time-scale of its emission during flares, B_var ≈ 0.6 mG (Harris et al. 2009). In fact, these observational estimates suggest σ = 0.03-0.08, which is well above σ_cr = 0.0025.
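Since the exact form of equation (23) is lost in this copy, the following estimate assumes the simple inversion σ ≈ B²R_j²cΓ²/(4L) for a uniform jet with fluid-frame field B; this assumed form recovers the σ = 0.03-0.08 range quoted above for B = 0.6-1 mG, but it should not be read as the paper's exact expression.

L = 1.0e44                                # erg/s, M87 jet power
R_j = 2.0 * 3.086e18                      # 2 pc in cm
Gamma, c = 5.0, 3.0e10                    # Lorentz factor and c in cm/s

def sigma_from_B(B):
    # Assumed inversion of eq. (23): sigma ~ B^2 R_j^2 c Gamma^2 / (4 L).
    return B**2 * R_j**2 * c * Gamma**2 / (4.0 * L)

print(sigma_from_B(0.6e-3), sigma_from_B(1.0e-3))   # ~0.03 and ~0.07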
If the magnetisation of AGN jets drops well below σ = 0.1 on sub-kpc scales, then their physics is significantly more complicated than assumed in the collimation-acceleration model. In fact, there are several indications that this may be the case. For example, the rich morphology of AGN jets is very different from the featureless structure of the theoretical model. The superluminal motion in pc-scale jets and the flares of their cores are clear manifestations of central engine variability. The synchrotron optical and X-ray emission of AGN jets requires in-situ particle acceleration, which suggests a dissipation of either the kinetic energy of the jet bulk motion or of its magnetic energy (see Matthews et al. 2020, and references therein). The polarisation of the jet emission indicates the presence of a strong longitudinal component, which is not expected in the ideal model, as B_z ∝ R_j⁻² and B_φ ∝ R_j⁻¹Γ⁻¹. In particular, the longitudinal component is prevalent upstream of the HST1 knot of the M87 jet (Perlman et al. 1999).
There exists an alternative model of jet acceleration, in which it is powered by the energy released via magnetic dissipation (e.g. Spruit et al. 2001). In this model, the dissipated magnetic energy is converted into heat, and as the jet expands the heat is converted into the kinetic energy of the jet. Such magnetic dissipation may occur even in a freely expanding (unconfined) jet, provided its magnetic field frequently changes polarity due to some magnetic activity of the jet engine. Suppose that the engine changes polarity on the time scale ∆t_e. Then the jet contains blocks of alternating azimuthal magnetic field of length l_e = c∆t_e in the engine (observer) frame. In the jet frame their length is l′_e = Γ l_e = Γc∆t_e. If v_in is the reconnection speed, then the time scale of magnetic dissipation in the jet frame is ∆t′_d ≈ l′_e/v_in ≈ Γ∆t_e/β_in. In the observer frame, the corresponding time is ∆t_d ≈ Γ²∆t_e/β_in, leading to the characteristic length scale of magnetic dissipation l_d ≈ c∆t_d ≈ Γ²c∆t_e/β_in. Based on PIC simulations of relativistic pair plasma, Liu et al. (2015) give

v_in ≈ 0.1 c_a [(1 + σ)/(1 + 0.01σ)]^{1/2} ,

where c_a = c [σ/(1 + σ)]^{1/2} is the Alfvén speed. Using σ = Γ = 5 and ∆t_e = 1 yr (assuming that the polarity changes on the same time scale as the ejection of new superluminal components in VLBI jets), we obtain l_d ≈ 30 pc. This shows that the large-scale azimuthal magnetic field can be destroyed well before the typical reconfinement scale of FR-1 jets. We envisage that at z ≳ l_d the jet contains mostly small-scale (tangled or turbulent) magnetic field and the plasma magnetisation drops down to σ < 1. Even if the magnetisation is not as low as σ < σ_cr, the recollimation instability may still develop if this σ is attributed almost entirely to the small-scale field.
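The l_d ≈ 30 pc figure can be checked with the short script below; note that the reconnection-rate expression is our reading of the garbled formula attributed to Liu et al. (2015) above and should be treated as an assumption.

import numpy as np

sigma, Gamma, dt_e = 5.0, 5.0, 1.0        # dt_e in years
c_a = np.sqrt(sigma / (1.0 + sigma))      # Alfven speed in units of c
beta_in = 0.1 * c_a * np.sqrt((1.0 + sigma) / (1.0 + 0.01 * sigma))  # assumed form
l_d_lyr = Gamma**2 * dt_e / beta_in       # light-years, with c = 1 lyr/yr
print(l_d_lyr * 0.3066)                   # ~35 pc, consistent with l_d ~ 30 pc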
If, however, this scenario is not followed by AGN jets, and within this paradigm the recollimation instability is not the reason for the observed flaring and deceleration of FR-1 jets, then KHI and CDI may be the "culprits" instead. In this regard, it is intriguing that the MHD3 model shows no signs of these instabilities. Presumably, the magnetic field is sufficiently strong to suppress KHI and not strong enough to promote a sufficiently rapid growth of CDI. A shear layer is also known to inhibit these instabilities in cylindrical jets (e.g. Martí et al. 2016; Kim et al. 2016). The non-cylindrical structure of our jets may play a role too.
Future observations with the ngVLA are expected to allow detailed studies of some AGN jets on the reconfinement scale and to explore their stability properties on this scale observationally (Lister et al. 2018; Perlman et al. 2019). Numerical simulations can be used to explore the reconfinement dynamics of jets with σ ≈ 0.1 and that of multi-component jets.
While AGN jets are almost certainly produced by magnetic central engines, the jets of gamma-ray bursts (GRBs) may well be neutrino-driven and, as a result, have much lower magnetisation than AGN jets (Woosley 1993; MacFadyen & Woosley 1999). Hence the recollimation instability is likely to be important for these jets.
CONCLUSION
Recollimation of astrophysical jets can lead to instability. This recollimation instability is powered by the centrifugal force emerging along the curved streamlines of recollimating jets and is a variant of the classic centrifugal instability of rotating fluids. Many types of astrophysical jets are magnetised, and strong regular magnetic fields can be the most important component of their "jet engines". In this study, we explored the role played by such regular magnetic fields in the development of the recollimation instability.
As an example, we considered the reconfinement of initially freely expanding jets with a purely azimuthal (toroidal) magnetic field by the thermal pressure of external gas, using parameters suitable for the so-called "naked" AGN jets. In the case of unmagnetised jets, we find that the recollimation instability leads to fully developed turbulence soon after the reconfinement point, entrainment of the external gas, and rapid deceleration of the jets. This is in agreement with previous studies of such jets, which have led to the suggestion that the instability may be responsible for the observed morphology of FR-1 extragalactic radio sources.
However, we find that even a rather weak azimuthal magnetic field can fully suppress the development of this instability. For jets with the half-opening angle θ_0 = 0.2 and the Lorentz factor Γ = 5, the critical relativistic magnetisation parameter can be as low as σ_cr = 0.01.
These results are in good agreement with predictions based on the results for the magnetic centrifugal instability of rotating flows, which relate σ_cr to the product θ_0Γ. On the one hand, this confirms the identification of the recollimation instability as a variant of the (local in nature) centrifugal instability. On the other hand, it allows us to extrapolate the results to the regimes typical of parsec-scale AGN jets, where the observations suggest θ_0Γ ≈ 0.2, and estimate their critical magnetisation as σ_cr ≈ 0.002. Such a low magnetisation cannot be reached on the scales typical of AGN jets if they are accelerated via the ideal magnetohydrodynamic collimation-acceleration mechanism. If this is indeed the case, the observed disruption of FR-1 jets must have a different origin.
If, however, the regular azimuthal magnetic field of AGN jets is destroyed before the jet disruption, then the recollimation instability may still be relevant. For example, the jet engine may change its magnetic polarity on a regular basis, leading to a striped magnetic structure of the jets. This creates conditions for magnetic reconnection at the interfaces between stripes with opposite directions of magnetic field. Provided the characteristic time scale of this variability of the central engine is the same as that of the ejection of superluminal components (≈ one year), the reconnection may indeed be completed before kpc scales, leaving behind mostly small-scale field. In fact, this may explain why the polarisation observations of AGN jets are often inconsistent with a predominantly azimuthal magnetic field. The magnetic dissipation accompanying the reconnection may power the particle acceleration required to explain the observed emission of the jets.
ACKNOWLEDGMENTS
Jin Matsumoto and Serguei Komissarov were supported by the STFC grant No. ST/N000676/1. Part of the numerical simulations were carried out on the STFC-funded DiRAC I UKMHD Science Consortia machine, hosted as part of and enabled through the ARC HPC resources and support team at the University of Leeds (www.dirac.ac.uk). Another part of the numerical simulations were carried out on Cray XC50 at the Center for Computational Astrophysics and National Astronomical Observatory of Japan and Cray XC40 at YITP in Kyoto University. Jin Matsumoto was also supported by Research Institute of Stellar Explosive Phenomena at Fukuoka University and the associated project (No. 207002), and also by JSPS KAKENHI Grant Number (JP19K23443 and JP20K14473).
DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author.
Sub-micron scale transverse electron beam size diagnostics methodology based on the analysis of optical transition radiation source distribution
Optical Transition Radiation (OTR), appearing when a charged particle crosses a boundary between two media with different dielectric properties, has been widely used as a tool for transverse profile measurements of charged particle beams in numerous facilities worldwide. The resolution of conventional monitors is defined by the dimensions of the Point Spread Function (PSF) distribution, i.e. the source distribution generated by a single electron and projected by an optical system onto a detector. The PSF shape depends significantly on various parameters of the optical system, such as diffraction of the OTR tails and spherical and chromatic aberrations. The beam image is a convolution of the PSF with the transverse electron distribution in a beam. In our experiment we designed and built a system that can measure the transverse electron beam size through the analysis of the PSF distribution shape. In this paper we present the hardware, data analysis, calibration technique, a discussion of the main sources of uncertainty, and initial measurements of a micron-scale electron beam size with sub-micrometre resolution.
Introduction
Nowadays accelerators are widely used in very different fields, covering a broad range of applications, from compact machines used for medical or industrial purposes to large particle colliders used to study the properties of elementary particles and forces [1]. They require a full suite of diagnostics [2] to monitor and optimise the particle beam behaviour in an accelerator. Transverse beam size and emittance are key parameters for optimal accelerator performance. In electron machines, beam size diagnostics are challenging because the transverse beam size might reach the sub-micrometre scale. To diagnose such beams with reasonable accuracy, we must employ non-standard solutions going beyond the state of the art, achieving an equipment resolution much smaller than the beam size.
The Accelerator Test Facility (ATF2) was built at KEK, the High Energy Accelerator Research Organisation in Japan, to demonstrate the ability to generate an ultra-low-emittance electron beam that can be focused down to a few tens of nanometres, as needed for the International Linear Collider (ILC) [3,4]. In the ILC [5] the beam will be generated with a relatively large emittance by a photocathode RF gun, pre-accelerated to a beam energy of a few GeV, cooled down in a damping ring, injected into the main linac for final acceleration to extremely high energy and then squeezed down to a few nanometres at the interaction point. The ultimate performance of the collider depends directly on the measurement of low-emittance beams from the exit of the damping ring to the interaction point. The ATF2 is a unique facility generating a beam with extremely small emittance, providing ideal experimental conditions for the development of such challenging beam diagnostics.
State-of-the-art in transverse electron beam size diagnostics in linear accelerators
The resolution of optical diagnostics is usually defined by the diffraction limit, i.e. the optical wavelength and the angular aperture. One way to break the diffraction limit is to use the relative phase of the wave. The state of the art in transverse electron beam size diagnostics is based on Compton scattering interferometry [6], in which a laser beam is split into two parts that are then recombined to form interference fringes in the centre of the beam pipe. In the case of head-on recombination, a standing wave with a period of half a wavelength is produced. The electron beam is scanned across the laser interference fringe pattern and the back-scattered Compton photons are registered downstream as a function of the fringe position. The visibility of the pattern is proportional to the electron beam size. In this case the resolution is defined by a fraction of the standing wave period, which is much smaller than the laser wavelength. In [6] the authors demonstrated a beam size as small as 70 nm measured with the laser interferometer. A laser-wire transverse beam size monitor is another internationally recognised candidate for micron-scale beam sizes at future accelerators [7,8]. In this case a high-quality, high-power laser beam is focused down to micrometre transverse dimensions with sophisticated optics. The laser is then scanned across the electron beam to measure its size with micrometre resolution. A beam size just a few micrometres wide was measured in [8].
Laser-based diagnostics are complex systems that require a team of experts to look after the high-power laser and the alignment of the optical system, in order to guarantee reliable day-by-day operation and smooth maintenance. We have been investigating an alternative method to measure micron-scale beam sizes in order to complement or replace laser wire scanners and laser interferometers.
Optical transition radiation diagnostics
Optical Transition Radiation (OTR) appears whenever a fast charged particle crosses a boundary between two media with different dielectric properties (e.g. a vacuum-metal interface). OTR is widely used to measure beam sizes of a few micrometres, just above the diffraction limit [9]. It is one of the best monitors: it is simple to use and gives a two-dimensional beam profile in a single shot. The rms dimension of the so-called Point Spread Function (PSF) defines the resolution of conventional OTR monitors. In classical optics the PSF is the image generated by a point source emitting a spherical wave, projected by an optical system onto a detector. Due to diffraction, the image of a point object is no longer a point, but an extended distribution with dimensions defined by the wavelength and the angular aperture of the optical system. In the case of OTR the PSF has a different definition: the OTR PSF is the image of the source induced by a single electron on the target surface, projected by an imaging optical system onto a detector. As a matter of fact, the source generated by a single electron is not a point but a distribution defined by the shape and dimensions of the charged particle's electric field. In our previous work [10] we presented the first observation of the OTR PSF and demonstrated that its vertical polarisation component has a two-lobe distribution. It was also very clear that the visibility of the pattern strongly depends on the electron beam size. In [11,12] we made the first attempt to demonstrate the sensitivity of the OTR PSF to the beam size. In this paper we describe a transverse electron beam size monitor based on the analysis of the shape of the OTR PSF, including the system configuration and laser alignment, a detailed explanation of the data analysis, an empirically determined calibration procedure, and initial measurements of the electron beam size. We shall also describe the detailed analysis and propagation of the uncertainties.
Theoretical background
Classical Optical Transition Radiation theory is based on the method of so-called pseudo-photons, in which the relativistic electron field, with characteristic radius γλ/2π (where γ is the charged particle Lorentz factor and λ is the radiation wavelength), is represented as a superposition of virtual photons. The field is reflected off a flat metal-vacuum interface and propagates in the direction of specular reflection with a characteristic opening angle of the order of γ⁻¹, as shown in figure 1. Consider a charge e moving in vacuum with constant velocity v that crosses an interface between vacuum and a perfect conductor, resulting in the generation of transition radiation. At ultra-relativistic energies (γ ≫ 1), even large tilt angles of the interface with respect to the particle trajectory do not change the OTR spectral-angular properties [15]. Therefore, the mathematical treatment can be restricted to normal incidence.
Assume an optical system with an aberration-free lens of radius d located at a distance a from the source, which in this case is the OTR screen, and at a distance b from the image plane. The vertical polarisation component E_y of the OTR field at an image point P(ρ, ϕ) is given by eq. (2.1) [16], where M = b/a is the lens magnification factor, λ is the wavelength, and θ_lens = d/a is the angular acceptance of the lens; β = v/c is the speed of the particle in units of the speed of light c, and φ and ρ = √(x² + y²) are the azimuthal angle and radius vector of the detector plane respectively, with x and y being the Cartesian coordinates of the detector (see figure 1). The intensity distribution in the image plane is given by the squared modulus of this field,

I(ρ, ϕ) = |E_y(ρ, ϕ)|² . (2.2)

The OTR field in (2.1) is derived for a single particle and, thus, (2.2) represents a single-particle intensity distribution (Point Spread Function, OTR PSF). If a bunch of particles is now considered, the resulting image is a convolution of the PSF with the transverse distribution of the bunch.
The distribution from (2.2) can be projected onto the y axis. Figure 2 shows the predicted OTR PSF for a particle with an energy of 1.3 GeV and λ = 550 nm, where θ_lens = 0.1. The intensity at y = 0 is zero along the entire range of x. The resulting projection in the vertical direction is also shown for different θ_lens. One may see that diffraction plays an important role in the width of the OTR PSF. If another particle has a different transverse position, its image will be offset accordingly. For a bunch of many particles, the minimum between the two lobes is therefore smeared out. The degree of smoothing and, consequently, the sensitivity of the OTR PSF to the transverse beam size depend on how wide the OTR PSF is: a small angular acceptance broadens the PSF and degrades the resolution of the method. The resolution limitations were investigated using the ZEMAX software and described in detail in [17,18]. It has been shown that the OTR PSF, as well as the method resolution, depends on spherical and chromatic aberrations. Nevertheless, if the optical system is well aligned, the OTR PSF depends on the vertical beam size only; the effect of other parameters, such as angular divergence or beam energy spread, is negligibly small. The convolution of the OTR PSF with the transverse electron beam distribution is discussed later, when we describe the calibration procedure.

The Accelerator Test Facility (ATF) at KEK, the High Energy Accelerator Research Organisation in Japan, was built in the 1990s to test advanced accelerator concepts for a future electron-positron linear collider. It consists of a photo-cathode RF gun, a 1.3 GeV S-band linear accelerator, a damping ring and an extraction line. The ATF2 is an upgrade of the ATF extraction line with a final-focus test stand, with the goal of focusing the beam from the ATF damping ring down to a vertical beam size of 37 nm, as well as demonstrating its stability at the nanometre level [3]. The OTR system is integrated into the laserwire system [8,11,12] with the aim of cross-checking the laserwire emittance measurement. Since the two instruments share exactly the same location, we anticipated a linear correlation between the OTR PSF and laserwire measurements, unless the resolution of either instrument counteracts it.
In order to generate sub-micrometre vertical beam sizes, a special beam optics was designed and used. Figure 3 shows the beta function β_x,y, the dispersion η_x,y and the predicted electron beam size σ_x,y calculated using the MAD code. It can be seen that the beam is squeezed down in the vertical plane 55.4 m away from the point of extraction, where the OTR monitor is located. This optics differs from the nominal ATF2 optics, in which the local beam waist is located ∼20 cm further downstream. The dispersion is also set to zero at the OTR monitor to minimise the beam size, given by

σ = √( βϵ + η²(∆E/E)² ) ,

where β is the beta function, ϵ is the physical beam emittance, η is the dispersion function and ∆E/E is the relative energy spread of the beam. Therefore, in the region of the OTR monitor, the beam size is dominated by the emittance term. If the ATF2 beamline is tuned well and an emittance of 15 pm rad is achieved, a 0.5 µm beam size can be expected at the OTR monitor location.

The experimental installation is schematically illustrated in figure 4. The OTR screen is a 30 × 30 × 0.3 mm aluminised silicon wafer tilted at 45° with respect to the electron beam trajectory. The target position and orientation angle were controlled using a four-dimensional vacuum manipulator (three translational and one rotational degree of freedom) installed at the top of the vacuum chamber [8]. The OTR radiation propagates at a 90° angle with respect to the electron beam and passes through the optical system consisting of a motorised iris, a lens, a periscope, an optical filter, a polariser and, finally, a CCD camera. The optical system components and their specifications are summarised in table 1. The iris and the lens were mounted on the same board, which can be moved using a stage (S2) to adjust their centre position with respect to the radiation beam line. This board is also mounted on the stage (S1), which along with S2 allows 3D positioning of the lens. The optical filter followed by the polariser were attached to the CCD camera, which was mounted on a remotely controlled rotation stage (S3). The whole setup was mounted on a breadboard and placed in a light-protective enclosure.
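As a quick consistency check of the beam-size relation above: with η = 0 and the target emittance, the expected 0.5 µm spot fixes the vertical beta function at the monitor, using only the numbers quoted in the text.

eps_y = 15.0e-12                          # m rad, target emittance
sigma_y = 0.5e-6                          # m, expected beam size with eta = 0
print(sigma_y**2 / eps_y)                 # beta_y ~ 0.017 m at the OTR monitor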
OTR experimental station
To align the optical system, a special alignment laser setup was used; a detailed description of the laser alignment is given in [13,14]. The laser stage, located approximately 50 m upstream of the OTR target, consists of a CW He-Ne (helium-neon) laser with an output wavelength of 632.8 nm, a spatial filter, and a focusing lens enabling the laser to be focused over a 100 m distance. A vacuum mirror is used to send the laser along the beam trajectory. By changing the distance between the spatial filter and the focusing lens, one can focus the laser beam at any point of the setup. An alignment better than 1/γ = 0.4 mrad was achieved. The OTR screen inserted into the beam can reflect the laser along the optical path, providing a reference trajectory to align the positions and angles of all components of the optical system.
OTR image calibration
One crucial step in the monitor commissioning is the conversion of the transverse coordinates of the measured distribution from pixels to micrometres by means of a well-defined calibration technique, which in turn determines the scale and dynamic range of the monitor system.
In order to convert the image size in pixels, X_i^pixel, into the image size in microns, X_i^µm, the following conversion has been applied:

X_i^µm = X_i^pixel × s / M , (3.2)

where s = 5.4 µm is the CCD pixel size and M is the magnification factor of the optical system.
Thus the magnification factor needs to be determined experimentally. The OTR screen edge reflecting the laser light can be observed by the CCD camera (see figure 5). The screen was gradually moved out of the vacuum chamber using the vertical translation mechanism of the manipulator. At each step the image of the OTR screen edge was recorded by the CCD. The position of the target was obtained from the manipulator motor encoder with ±5 µm accuracy. The portion of the image marked with two vertical lines in figure 5 was chosen to produce the projections illustrated in figure 6 (left). The projected profile was fitted with a four-parameter edge-spread function, where a_0 is the vertical offset equivalent to the background intensity, a_1 is the amplitude, a_2 is the screen edge position and a_3 provides information about the image focus, edge quality and the electron beam size. For each motor encoder position, the screen position a_2 was found from the image and plotted against the encoder position, along with the linear fit shown in figure 6 (right). The magnification factor is the linear fit gradient multiplied by the single pixel size (5.4 µm in our case), giving M = 20.4 ± 0.5 from figure 6 (right). Using eq. (3.2), we can now convert the image coordinates to microns in the target plane.
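A minimal Python sketch of this calibration, assuming an erf-type edge model (the paper's exact fit function is not reproduced in this copy): the fitted edge position in pixels is tracked against the encoder position in microns, and the gradient multiplied by the 5.4 µm pixel pitch gives M.

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge(x, a0, a1, a2, a3):
    # Background plus amplitude times a smoothed step centred at a2.
    return a0 + 0.5 * a1 * (1.0 + erf((x - a2) / (np.sqrt(2.0) * a3)))

def edge_position(profile):
    x = np.arange(len(profile), dtype=float)
    p0 = [profile.min(), np.ptp(profile), 0.5 * len(profile), 5.0]
    popt, _ = curve_fit(edge, x, profile, p0=p0)
    return popt[2]                        # a2, the edge position in pixels

def magnification(encoder_um, edge_px, pixel_um=5.4):
    gradient = np.polyfit(encoder_um, edge_px, 1)[0]   # pixels per micron
    return gradient * pixel_um                         # dimensionless M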
It is important to emphasise that the accuracy of this procedure depends directly on a good longitudinal alignment of the lens, and it should be repeated every time the optical system is changed. Figure 7 shows an example OTR image. Its general appearance is similar to figure 2, in which the point spread function in the vertical direction is clearly visible from the two-lobe distribution. However, the horizontal width is much larger due to the large beam size in the horizontal direction. An 'X' shape of the image is present due to the finite field of view of the optical system. The target is tilted at 45° with respect to the electron beam trajectory, whereas the view port used to observe the OTR image is at 90°. Since our optical system is a microscope with a large magnification factor, while the centre of the image is in focus, the left and right parts of the beam in the horizontal direction are out of focus. Such dilution of the OTR image degrades the resolution as the inter-peak distance increases; however, it does not affect the precision of the beam size diagnostics, since the beam size effect is still dominant here.
Optimisation of the OTR monitor and evaluation of uncertainties
The horizontal projection is extracted by integrating the image along the 'Y' axis. Employing a symmetric Gaussian fit directly gives the horizontal RMS spot size in µm. An example is shown in figure 8 (left). The point spread function dimension in the horizontal direction is much smaller than the horizontal electron beam size; therefore, the horizontal projection gives the horizontal electron beam profile. The Gaussian fit used in figure 8 (left) resulted in a beam size of 132 ± 0.3 µm, which is consistent with expectations from MAD [8,12].
In order to analyse the vertical projection, a special five-parameter fit function (eq. (4.1)) was proposed, where a_0 is the vertical offset of the distribution with respect to zero, a_1 is the amplitude of the distribution, a_2 is the smoothing parameter, a_3 is the horizontal offset of the central minimum with respect to zero and, finally, a_4 is the distribution width. This fit function is advantageous compared to the previously employed functions [12], as the distribution has only two maxima, allowing the contrast ratio to be calculated analytically. An example vertical projection along with the fit is shown in figure 8 (right).
The contrast ratio of this distribution, defined as the ratio of the central minimum intensity to the maximum intensity, depends on the electron beam size; it can be calculated analytically from the fit function parameters (eq. (4.2)). For the optimisation of the optical system, the peak-to-peak (PTP) distance is used; it can also be calculated analytically from the fit function (eq. (4.3)). One should point out that the image was measured with a 550 nm wavelength optical filter, whereas the calibration of the image was performed with a 632 nm He-Ne laser. In [12] we demonstrated that a difference in wavelength of 100 nm changes the PTP distance by about 1% (thanks to the achromat lens), and therefore the effect can be neglected.
The analysis of the beam size extraction from the contrast ratio and the calibration procedure are described in section 4.3. During the initial commissioning of the monitor it is not clear what the orientation of the CCD camera is with respect to the horizontal and vertical directions of the beam. Therefore, the image shown in figure 7 was digitally rotated to minimise the contrast ratio of the vertical projection. Figure 9 shows the contrast ratio as a function of the image tilt; a parabolic fit enables the minimum ratio to be extracted. For the current setup the minimum contrast ratio is obtained when the image is rotated by −0.54°. A similar tilt analysis has been performed for multiple images, and all images share the same minimum rotation angle. Thus performing the rotation scan with one image is enough to determine the minimum for all other images in the data set. Nevertheless, when the entire system is upgraded or realigned the rotation scan has to be repeated.

Optimisation of the longitudinal lens position

Due to the large magnification factor, the image width in the detector plane is very sensitive to the longitudinal lens position. In [17-19] the authors used the ZEMAX simulation software and analytical theory to demonstrate that the resolution of the beam size measurements depends directly on the OTR PSF width; an offset of 100 µm can lead to significant resolution degradation. Therefore, a methodology for beam-based longitudinal steering has been developed. When the lens moves longitudinally, the PTP distance changes. Figure 10 shows the PTP distance calculated from the fit using eq. (4.3) as a function of the longitudinal readout position of the S2 stage. A parabolic fit enables us to find the minimal PTP distance, which corresponds precisely to the best focus point used for the beam size measurements.
A fine adjustment can be done by centring the minimum of the 'X'-shaped image shown in figure 7. However, this shape is only pronounced at observation angles of 90 degrees or smaller with respect to the beam trajectory. At larger observation angles (e.g. 140 degrees) [20] the procedure shown in figure 10 should be used.
Beam size calibration procedure
To perform the calibration procedure, a data set with varying vertical beam sizes is required. For each image, the PSF-like fit is used to extract the contrast ratio. The file with the smallest contrast ratio is then used for calibration, as it is the closest to the original point spread function generated by a single electron. The fit curve for that file is then regenerated, setting a_0 = 0 in (4.1) to remove any offset due to the constant background, and setting a_4 to zero in order to minimise the initial contrast ratio.
According to the principles of optics [21], for an idealised optical system the PSF only depends on the incoherent source size. In accelerators the incoherent source size is defined by the transverse beam size; the influence of all other beam parameters is negligible and can be omitted. Assuming a Gaussian beam profile, the numerical convolution of the fit function with the beam distribution is performed to take the effect of the beam size into account:

g(x_i) = (1/(√(2π)σ)) ∫ f(x) exp[−(x − x_i)²/(2σ²)] dx , (4.4)

where f(x_i) is the measured intensity, x_i is the x-axis coordinate, and σ is the beam size. The left plot in figure 11 shows the effect of the beam size on the PSF-like distribution. As σ increases, the contrast ratio slowly increases until the convolved distribution approaches a pure Gaussian. The calibration curve, i.e. σ_y(x = I_min/I_max), is illustrated in figure 11 (right). A fit function (eq. (4.5)) is applied to obtain an analytical expression used to extract the beam size from the calibration curve; the fit is also shown in figure 11 (right, red). The resulting curve is symmetric in nature, and we expect larger beam size errors at each end of the curve. With the calibration determined, it can be used to extract the vertical beam size for each image in the data set.
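A sketch of the self-calibration loop, using an illustrative two-lobe profile in place of the fit function (4.1): each trial σ is convolved according to eq. (4.4) and the resulting contrast ratio is tabulated, yielding the calibration curve σ(I_min/I_max).

import numpy as np

y = np.linspace(-20.0, 20.0, 2001)        # target-plane coordinate, microns
psf = y**2 * np.exp(-y**2 / 18.0)         # illustrative two-lobe stand-in for (4.1)

def contrast_after_convolution(sigma_um):
    kernel = np.exp(-y**2 / (2.0 * sigma_um**2))
    image = np.convolve(psf, kernel / kernel.sum(), mode="same")
    return image[np.argmin(np.abs(y))] / image.max()   # I_min / I_max

sigmas = np.linspace(0.1, 5.0, 50)
ratios = [contrast_after_convolution(s) for s in sigmas]
# Interpolating sigma as a function of ratio gives the curve of figure 11 (right).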
Integration window and its position selection
The choice of the integration window and its position affect the contrast ratio. Therefore, the calibration procedure has to be repeated every time the integration window is chosen; otherwise it might generate a significant systematic deviation of the obtained beam size from the real value. The size of the integration window is also very important because, together with the useful signal, we accumulate electronic noise from the pixels and the acquisition system of the CCD camera. To take that into account, we developed a procedure enabling us to choose a range of integration windows and keep it fixed during the beam size and emittance measurements. In order to choose the correct region, the correlation of the contrast ratio with the beam size is used, as shown in figure 12 (right). The correct region for the gap size was chosen to be where the correlation has the densest number of points. This range does not exceed 5% of the mean value. In figure 12 (left) this is shown as a shadowed region and in figure 12 (right) as red points.
Uncertainties
The beam size measurements are single-shot. The uncertainty in this case is defined by pixel-by-pixel signal fluctuations of the CCD camera; these fluctuations are well illustrated in figures 7 and 8. The numerical fits using eqs. (4.1) and (4.5) were performed using the Levenberg-Marquardt method to calculate the best parameters, i.e. those minimising the weighted mean squared deviation between the experimental data and the nonlinear fit function. The uncertainty of each parameter was then found by taking the square root of the corresponding diagonal element of the covariance matrix.
The uncertainty in the contrast ratio given by eq. (4.2) was then found using the propagation formula

∆R = √( (∂R/∂a_2)² ∆a_2² + (∂R/∂a_4)² ∆a_4² ) ,

where a_2 and a_4 are the free parameters of the fit function eq. (4.1), and ∆a_2 and ∆a_4 are their uncertainties from the fit procedure. While the beam size cannot be extracted directly from the fit, as is done for the horizontal projection, it is strongly dependent on the contrast ratio. In order to convert the contrast ratio to a beam size, the self-calibration procedure introduced in section 4.3 is used, and the uncertainty of the beam size is evaluated by propagating the contrast-ratio uncertainty through eq. (4.5).
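The propagation step can be written compactly with numerical partial derivatives; here `contrast_ratio` stands for eq. (4.2), whose closed form is not reproduced in this copy, and the a_2-a_4 covariance is neglected, which is an assumption of the sketch.

import numpy as np

def contrast_uncertainty(contrast_ratio, a2, a4, da2, da4, h=1.0e-6):
    # Gaussian propagation with central-difference partial derivatives.
    dR_da2 = (contrast_ratio(a2 + h, a4) - contrast_ratio(a2 - h, a4)) / (2.0 * h)
    dR_da4 = (contrast_ratio(a2, a4 + h) - contrast_ratio(a2, a4 - h)) / (2.0 * h)
    return np.sqrt((dR_da2 * da2)**2 + (dR_da4 * da4)**2)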
Beam size and emittance measurements
Initially the accelerator was carefully tuned to minimise the background levels and provide a stable working regime during the data taking. The longitudinal position of the focusing lens was adjusted to -13 -minimise the PTP distance of the OTR PSF as described in section 4.2 and the centre the 'X'-shaped image was adjusted. After that, the monitor was ready to perform beam size measurements. We employed a single bunch mode, with repetition rate of 3.12 Hz and the bunch charge of ∼ 1 nC. A quadrupole scan was then performed in which the current of the QM14FF quadrupole magnet was changed in the range from -88 A to -97 A in steps of 1 A. For each value of the quadrupole current, three OTR images were taken in order to reduce the effect of statistical fluctuations. Figure 13 shows the measured horizontal and vertical beam sizes at each current. It can be seen that the horizontal beam size remains roughly constant at around ∼ 145 µm as desired while the beam size in the vertical direction shows a hyperbolic dependence with a minimum beam size of 0.91 ± 0.05 µm. The emittance can be found by measuring the σ-matrix. σ = σ 2 y σ yy σ yy σ 2 y = y β y −α y −α y γ y (5.1) The rms beam size σ 2 y (s 2 ) at some distance downstream the quadrupole is related to the rms beam size at the quadrupole σ 2 y (s 1 ) using the thick lens approach as where m i j are the elements of the transfer matrix. For a focusing quadrupole the transfer matrix elements are defined as where l q is the effective length of the quadrupole, K is the quadrupole field strength, and L is the distance between the quadrupole and the OTR screen. By making the following substitution a = σ 2 y (s 1 ), b = σ yy (s 1 ) σ 2 y (s 1 ) , c = Thus, the emittance is = √ ac. From the data and the fit in figure 14, the emittance was found to be 59.3± 4.2 pm rad. The emittance was consistent with the laser-wire measurements [8]. The emittance was also measured the following shift using the multiOTR [23] and found to be 23 pm rad; however, this was measured after a series of beam tuning routines were performed and thus, a smaller emittance was expected. | 6,540.8 | 2020-01-17T00:00:00.000 | [
"Physics",
"Engineering"
] |
Combinatorial level densities for practical applications
We review our calculated energy-, spin- and parity-dependent nuclear level densities based on the microscopic combinatorial model described in ref. [1]. We show that this model predicts the experimental s- and p-wave neutron resonance spacings with a degree of accuracy comparable to that of the best global models available, and also provides a reasonable description of the low-energy cumulative numbers of levels as well as of the experimental data obtained by the Oslo group [2]. We also provide a renormalization recipe which enables the tabulated results to be adjusted for practical applications. Finally, we study the impact of temperature-dependent calculations on s-wave neutron resonance spacings.
Introduction
The knowledge of nuclear level densities (NLDs) has been a matter of interest and study for years, going back at least to 1936 with Bethe's pioneering work [3]. Level densities are required when modeling nuclear reactions as soon as the number of levels to which decay occurs is too large to allow for an individual description. With the development of new industrial or experimental facilities, as well as for astrophysical interest, the increasing need for nuclear data far from the valley of stability challenges the nuclear reaction models. Indeed, so far, cross section predictions have relied on more or less phenomenological approaches, depending on parameters adjusted to scarce experimental data or deduced from systematic relations. While such predictions are expected to remain reliable for nuclei not too far from experimentally accessible regions, the predictive power of analytical models in general, and of analytical level density expressions in particular, becomes increasingly questionable when dealing with ever more exotic nuclei. To face such difficulties, it is preferable to rely on approaches as fundamental as possible. A microscopic description by a physically sound model based on first principles ensures a reliable extrapolation away from the experimentally known regions.
Global microscopic models of NLD have been developed over the last decades (see [1] and references therein), but they have almost never been used for practical applications, because of their lack of accuracy in reproducing experimental data (especially when considered globally on a large data set) or because they do not offer the same flexibility as that of the highly parametrized analytical expressions. We have therefore developed a combinatorial approach and demonstrated that such an approach can clearly compete with the statistical approach in the global reproduction of experimental data [1,4]. As we will see, this approach provides the energy, spin and parity dependence of the NLD and, at low energies, describes the non-statistical limit which, by definition, cannot be described by statistical analytical approaches, and yet can clearly have a significant impact on cross section predictions. We will also show how we plan to improve our predictions in the near future.
The combinatorial method
The combinatorial method has been extensively described in refs. [1,4,5] and we just summarize here its main features. It consists in using the single-particle level schemes obtained from the constrained axially symmetric Hartree-Fock-Bogoliubov (HFB) method based on the BSk14 Skyrme force [6] to construct incoherent particle-hole (ph) state densities as a function of the excitation energy, the spin projection (on the intrinsic symmetry axis of the nucleus) and the parity. Once these incoherent ph state densities are determined, collective effects are included. In [4] the vibrational effects were described by multiplying the total level densities by a phenomenological enhancement factor similar to that of refs. [7,8] once rotational bands had been constructed. However, such a choice has shown its limits [9] and has been replaced by an improved but more complicated treatment. The latter explicitly allows for phonon excitations using the boson partition function of ref. [5] and includes quadrupole, octupole as well as hexadecapole vibrational modes. However, whereas single-particle levels are theoretically obtained for any nucleus, phonon energies are taken from experimental information when available and from analytical expressions [1] otherwise. Once the vibrational and incoherent ph state densities are computed, they are folded to deduce the total state densities. Level densities are then obtained by constructing rotational bands if present (i.e. if the nucleus is deformed) or using the classical expression that relates state and level densities for a spherical nucleus [1]. To account for the damping of vibrational effects with increasing energies, we restrict the folding to the ph configurations having a total exciton number (i.e. the sum of the number of proton and neutron particles and proton and neutron holes) N_ph ≤ 4. This restriction stems from the fact that a vibrational state results from a coherent excitation of particles and holes, and that this coherence vanishes with an increasing number of ph excitations involved in the description. Therefore, if one deals with a ph configuration having a large exciton number, one should not simultaneously account for vibrational states, which are clearly already included as incoherent excitations.
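To illustrate the folding step alone (the incoherent ph density and the phonon energies below are toy stand-ins; the real inputs come from the HFB single-particle schemes and the boson partition function of ref. [5]), a discrete convolution on a common energy grid might read:

```python
import numpy as np

dE = 0.1                                   # energy bin [MeV]
E = np.arange(0.0, 20.0, dE)

# Toy stand-in for the incoherent ph state density (states per MeV);
# a real calculation would take it from the combinatorial counting.
rho_ph = np.exp(2.0 * np.sqrt(E))

# Vibrational spectrum as discrete state counts per bin: the zero-phonon
# state plus one-phonon states at assumed (hypothetical) energies.
n_vib = np.zeros_like(E)
n_vib[0] = 1.0
for e_phonon in (0.8, 1.2, 2.4):           # assumed phonon energies [MeV]
    n_vib[int(round(e_phonon / dE))] += 1.0

# Folding: rho_total(U) = sum_u rho_ph(U - u) * n_vib(u)
rho_total = np.convolve(rho_ph, n_vib)[: E.size]
print(rho_total[:5])
```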
Results
The new NLD are now compared with experimental data. In spite of considerable experimental efforts made to derive NLD, the lack of reliable data, especially over a wide energy range, constitutes the main problem that the NLD theories have to face. Nevertheless, a large number of analyses of slow neutron resonances and of cumulative numbers of low-energy levels have greatly helped to provide experimental information on NLD. Other sources of information have also been suggested, such as analyses of spectra of evaporated particles and coherence widths of cross section fluctuations. However, most of these experimental data are affected by systematic errors resulting from experimental uncertainties as well as from the use of approximate theories to analyze them.
Fig. 1. Ratio of HFB plus combinatorial (D_th) to the experimental (D_exp) s-wave (squares) and p-wave (circles) neutron resonance spacings compiled in [10], compared with other global approaches such as the Back-Shifted Fermi Gas model of ref. [11] and the HFBCS+statistical approach of ref. [12].
The most extensive and reliable source of experimental information on NLD remains the s- and p-wave neutron resonance spacings [8,10] and the observed low-energy excited levels [10]. We show in Fig. 1 the result of our HFB plus combinatorial approach with respect to the experimental s- and p-wave spacings compiled in the RIPL-2 database [10]. The quality of a global NLD formula can be described by the rms deviation factor defined as

$$f_{\rm rms} = \exp\left[\frac{1}{N_e}\sum_{i=1}^{N_e}\ln^2\left(\frac{D_{\rm th}^i}{D_{\rm exp}^i}\right)\right]^{1/2} \qquad (1)$$

where D_th (D_exp) is the theoretical (experimental) resonance spacing and N_e is the number of nuclei in the compilation. Globally, as can be seen in Fig. 1, the resonance spacings are predicted within a factor of 2 (the exact rms factor amounts to f_rms = 2.3) for both the s- and p-wave data. This result is to be compared to the deviations of global analytical formulae [11,13], typically of the order of 1.7-1.9, and to the f_rms = 2.14 value obtained with the HFBCS+statistical model introduced in ref. [12], which was the first global microscopic NLD prescription having the capacity to compete with phenomenological models in the reproduction of experimental data. Our new approach therefore gives rather comparable predictions with respect to the other existing global models. The HFB plus combinatorial model also gives satisfactory extrapolations to low energies. As an example, we compare in Fig. 2 the predicted cumulative number of levels N(U) with the experimental data [10] for 12 nuclei, including light as well as heavy and spherical as well as deformed species. Globally, the present model provides similar results to those illustrated in ref. [4]. Yet one can observe significant disagreements in certain cases, which could be reduced with a simple renormalization procedure. Such renormalizations are also often required, in particular for nuclear data evaluation or for an accurate and reliable estimate of reaction cross sections. Though the HFB plus combinatorial NLD are provided in a table format, it is possible to renormalize them on both the experimental level scheme at low energy and the neutron resonance spacing at U = S_n in a way similar to what is usually done with analytical formulae. More specifically, the level density can be renormalized through the expression

$$\rho(U, J, \pi) \longrightarrow e^{\alpha\sqrt{U-\delta}}\,\rho(U-\delta, J, \pi) \qquad (2)$$

where the energy shift δ is essentially extracted from the analysis of the cumulative number of levels and α from the experimental s-wave neutron spacing. With such a renormalization, the experimental low-lying levels and the D_exp values can be reproduced reasonably well, as discussed in detail in [13]. Eq. (2) has been used to fit the 289 nuclei for which both an experimental s-wave spacing (D_0) and a discrete level sequence exist. The corresponding δ and α values are plotted in Fig. 3. It is important to note that the obtained α and δ parameters show no systematic trend or A-dependence, and more particularly no correlation with shell closures. Of course, when no D_exp value is available and only the experimental discrete level scheme is known, only the δ shift is used to reproduce at best the low-lying levels. Comparisons have also been performed with the experimental data extracted by the Oslo group from the analysis of particle-γ coincidences in the (3He,αγ) and (3He,3He'γ) reactions [2]. The experimental determination of level densities from these reactions is however model-dependent and requires a normalization at the neutron binding energy, as explained and discussed in ref. [1].
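As a hedged numerical sketch of these two ingredients, the f_rms measure of Eq. (1) and the renormalization of Eq. (2) as written above, assuming level densities tabulated on an energy grid:

```python
import numpy as np

def f_rms(d_th, d_exp):
    # rms deviation factor of Eq. (1)
    r = np.log(np.asarray(d_th) / np.asarray(d_exp))
    return np.exp(np.sqrt(np.mean(r ** 2)))

def renormalize(U, rho, alpha, delta):
    # Eq. (2): rho~(U) = exp(alpha*sqrt(U-delta)) * rho(U-delta),
    # with rho interpolated onto the shifted energy grid.
    shifted = np.interp(U - delta, U, rho, left=0.0)
    return np.exp(alpha * np.sqrt(np.clip(U - delta, 0.0, None))) * shifted

U = np.linspace(0.0, 10.0, 101)          # excitation energy [MeV]
rho = np.exp(2.2 * np.sqrt(U))           # toy tabulated level density
rho_tilde = renormalize(U, rho, alpha=-0.15, delta=0.3)

print(f_rms([1.2e-6, 0.8e-6], [1.0e-6, 1.0e-6]))
print(rho_tilde[:5])
```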
If we normalize our calculation to the same total level density as that of the Oslo group, using Eq. (2) for each isotope with an appropriately chosen α parameter, our combinatorial results agree extremely well with the so-called experimental values below S_n, as illustrated in Fig. 4. As mentioned previously, one advantage of the combinatorial method with respect to the statistical method is its non-statistical feature, which enables us to obtain realistic parity and spin distributions. Deviations from the usually adopted equipartition of parities have been shown to have a non-negligible impact, in particular when looking at capture cross sections [4]. Concerning the spin distribution, the combinatorial approach provides nuclear level densities which strongly deviate from the usually adopted Wigner law, in particular at low energy. Such deviations can play a key role in the description of the decay to spin isomers at low energies [14], since the high-spin population is usually strongly underestimated within a statistical approach. In particular, this implies an underestimate of the decay probability to high-spin levels.
Last but not least is the test of our predictions in the most complicated case of fission cross section predictions. Indeed, one has then to deal with nuclear reactions which involve not only the level densities at equilibrium deformations but also at the deformations corresponding to the top of each fission barrier encountered in the classical modeling of the fission process in terms of multiple-humped fission barriers. If very accurate fits of fission cross sections can be achieved [15,16], it is mainly thanks to the use of a very large number of parameters which are generally not constrained by experimental data. More than in any other channel, the predictive power of the traditional approaches is poor, and by no means can such approaches be employed to make extrapolations far from the regions where the fission cross section has been measured. The only solution left in this case is to rely on microscopic predictions, provided they give reasonable answers. Quite a complete study has been performed on the use of microscopic ingredients applied to fission cross section predictions [17] and we just summarize here part of this work, by plotting in Fig. 5 calculated fission cross sections using both the microscopic fission barriers and associated combinatorial level densities for several actinides.

Fig. 5. Neutron-induced fission cross sections obtained with the microscopic fission path and the combinatorial nuclear level densities using the raw fission paths (green lines), when the fission paths are renormalized for each actinide (red line) or by a systematic factor depending on the oddness of the nuclei (blue dotted line).
As can be seen, the quality obtained by default (green line) is not satisfactory for practical applications, which require a few percent accuracy. In this case both nuclear level densities and fission barriers are directly obtained from HFB+BSk14 predictions. The poor description obtained using these raw (not adjusted) ingredients is mainly due to the fact that the microscopic barriers are generally too high by a few hundred keV [6,17], which is too big an error to provide reasonable cross sections. If the barriers are individually normalized, without even modifying the combinatorial level densities, it is possible to obtain cross sections which are in much better agreement with experimental data (red lines). However, such normalizations only make sense if experimental data are available. If not, it is still possible to use systematic normalizations deduced by averaging those obtained by fitting the nuclei for which experimental data are available. In that case, one obtains fission cross sections which are globally within a factor of 3 (blue dotted lines). Of course, the quality of the fit can be further improved using Eq. (2), but this would go beyond our present discussion.
Temperature effect on combinatorial level densities
We have seen that the combinatorial level densities we have obtained and tabulated provide quite good results when compared to the available experimental data, as well as when they are used to produce cross sections, even in complicated reactions such as neutron-induced fission on actinides. Yet, they still suffer from several approximations which can be reduced with more or less complicated treatments. Among these approximations, the way the collective enhancement evolves with increasing excitation energy remains questionable. It is well established that, with increasing energy, a deformed nucleus in its ground state becomes spherical and the vibrational enhancement vanishes. These features are usually described using more or less elaborate phenomenological approximations [12,18]. The disappearance of the rotational enhancement at increasing excitation energies has already been studied theoretically [19] for a few nuclei, but not within a systematic approach. Another way to describe this transition to sphericity is to use the temperature-dependent HFB approach following the method described in ref. [20]. In particular, for a heated system, the average value of an observable O reads

$$\langle O \rangle = \frac{\int O(q)\, e^{-F(q)/T}\, dq}{\int e^{-F(q)/T}\, dq},$$

where the free energy F depends on the energy E, the temperature T and the entropy S through the usual relation F(q) = E(q) − T S(q), and q is the quadrupole deformation, considered to be the most relevant property to describe the deformation changes with the temperature evolution. In a first approximation, neglecting the thermal fluctuations, the equilibrium deformation of the nucleus at a temperature T corresponds to the one minimizing the free energy F. For a given temperature, corresponding to a given excitation energy U = E − E(T = 0), the single-particle level scheme and pairing properties have been determined at the equilibrium deformation to estimate the level density on the basis of our combinatorial method. The recently determined D1M Gogny force [21] is used here to estimate all nuclear ingredients.

Fig. 6. Total nuclear level densities for 238U calculated for several temperatures. Each level density curve covers an energy interval which starts at the excitation energy corresponding to the chosen temperature. In the present case the total level density is plotted without any smoothing procedure (see text for more details).
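A minimal sketch of the first-approximation step just described, minimizing F(q) = E(q) − T S(q) on a deformation grid (the E(q) and S(q) curves below are toy stand-ins for actual HFB output):

```python
import numpy as np

q = np.linspace(-0.4, 0.6, 101)               # quadrupole deformation grid
T = 0.8                                       # temperature [MeV]

# Toy deformation-energy and entropy curves standing in for HFB results.
E = 5.0 * (q - 0.25) ** 2                     # energy surface [MeV]
S = 10.0 + 4.0 * np.exp(-(q / 0.3) ** 2)      # entropy, largest at sphericity

F = E - T * S                                 # free energy F(q) = E - T*S
q_eq = q[np.argmin(F)]                        # equilibrium deformation at T
print(f"T = {T} MeV -> equilibrium deformation q = {q_eq:.3f}")
```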
As illustrated in Fig. 6, the obtained level densities strongly depend on the corresponding temperature and display discontinuities stemming from the vanishing (gradual or sudden) of shell and pairing effects, as well as from the fact that we use a finite set of temperatures to compute our level densities. Indeed, in principle, the level density determined for a given temperature is only valid at the corresponding excitation energy. However, for practical reasons, we consider the level density determined for the temperature T_i to be valid over the excitation energy interval [U_i, U_{i+1}]. To suppress the discontinuity at a given temperature T_i, an energy-dependent shift is applied to the level density in the [U_{i−1}, U_i] range. With such a simple treatment, the discontinuous level density of Fig. 6 can be regularized into that illustrated in Fig. 7, and the NLD estimated within a reasonable computing time.
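The stitching can be sketched as follows; since the exact form of the energy-dependent shift is not spelled out here, this toy version assumes a constant log-space shift per temperature segment rather than a shift spread over [U_{i−1}, U_i]:

```python
import numpy as np

# Level-density segments: segment i is taken to be valid on [U_i, U_{i+1}].
U_knots = np.array([0.0, 5.0, 10.0, 15.0])
segments = [lambda U: np.exp(2.0 * np.sqrt(U)),
            lambda U: 1.8 * np.exp(2.0 * np.sqrt(U)),
            lambda U: 3.1 * np.exp(2.0 * np.sqrt(U))]

# Log-space shift for each segment so that neighbouring segments match at
# the junction energies, removing the discontinuities of the raw stitching.
shifts = [0.0]
for i in range(1, len(segments)):
    u = U_knots[i]
    shifts.append(np.log(segments[i - 1](u)) + shifts[i - 1]
                  - np.log(segments[i](u)))

def stitched_rho(U):
    i = min(int(np.searchsorted(U_knots, U, side="right")) - 1,
            len(segments) - 1)
    return float(np.exp(np.log(segments[i](U)) + shifts[i]))

for u in (2.0, 4.9, 5.0, 9.9, 10.0):
    print(u, stitched_rho(u))
```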
As can be observed, the temperature-dependent level density is significantly different from that obtained on the basis of the T = 0 ingredients. In particular, the T-dependent HFB plus combinatorial NLD is higher than the one obtained with the T = 0 approximation, stemming from a modification of the single-particle configuration at increasing energies and in particular the disappearance of the shell effect. The temperature effect has a non-negligible impact even at low energies around the neutron binding energy and consequently may affect the prediction of the s-wave spacing at B_n. To check that, we have estimated the T-dependent HFB plus combinatorial NLD for all the nuclei for which an experimental s-wave spacing is available. The results are displayed in Fig. 8 and compared with the results obtained within the T = 0 approximation. The improvement obtained with the temperature-dependent method is quite clear. In terms of the f_rms factor introduced in Eq. (1), we found f_rms = 4 in the T = 0 case and f_rms = 2.7 for the temperature-dependent calculation. Note however that, contrary to what has been done in ref. [1], we have not included here any hexadecapole vibrational phonon, the energy of which remains highly uncertain.

Fig. 7. Total nuclear level densities for 238U calculated for several temperatures, connected smoothly thanks to the procedure described in the text. The reference points, i.e. those not affected by the smoothing procedure, are also plotted. The level density obtained using the HFB ingredients determined with T = 0 is shown for comparison (gray curve).

Fig. 8. Ratio of HFB plus combinatorial (D_th) to the experimental (D_exp) s-wave neutron resonance spacings compiled in [10] with the temperature-dependent treatment (red circles) and using the HFB ingredients obtained for T = 0 (squares).
In the current situation, the only phenomenological ingredients are the octupole vibrational phonon energies and the number of phonons and particle-hole configurations included in the vibrational-intrinsic state density folding procedure. In particular, note that the analytical expression for the quadrupole phonon energies has now been replaced by values predicted coherently using the method described in refs. [22,23] on the basis of the same D1M Gogny interaction [21].
Conclusion
Microscopic nuclear level densities have been determined for more than 8000 nuclei in a tabular form using the combinatorial method. These tables are available at the website http://www-astro.ulb.ac.be and we have shown that they provide fairly good results, both when compared with purely experimental level density data and when employed to predict nuclear reaction cross sections. Yet, the combinatorial method can still be improved to better account for collective effects and in particular their evolution with excitation energy. A first attempt has been made to account for the variations of nuclear structure properties with increasing excitation energy through temperature-dependent Hartree-Fock-Bogoliubov calculations. The results are very promising and encouraging for further investigation.
Fig. 2. Comparison of the cumulative number of observed levels (thin staircase) with the HFB plus combinatorial predictions (thick line) as a function of the excitation energy U for a sample of 12 nuclei. Only for 208Pb have both curves been shifted by 2.5 MeV, the energy range corresponding consequently to [2.5-7.5] MeV instead of [0-5] MeV.
Fig. 3. α and δ values plotted as a function of the atomic mass. See text for more details.
Fig. 4. Comparison between the total NLD determined by the Oslo group (grey areas) and the HFB combinatorial predictions (solid lines). The full triangles correspond to the model-dependent normalization point derived from the D_0 value used by the Oslo group. See text for more details.
"Physics"
] |
Highly Conducting Li(Fe1−xMnx)0.88V0.08PO4 Cathode Materials Nanocrystallized from the Glassy State (x = 0.25, 0.5, 0.75)
This study showed that thermal nanocrystallization of glassy analogs of LiFe1−xMnxPO4 (with the addition of vanadium for the improvement of glass-forming properties) resulted in highly conducting materials that may be used as cathodes for Li-ion batteries. The glasses and nanomaterials were studied with differential thermal analysis, X-ray diffractometry, and impedance spectroscopy. The electrical conductivity of the nanocrystalline samples varied depending on the composition. For x = 0.5, it exceeded 10^−3 S/cm at room temperature, with an activation energy as low as 0.15 eV. The giant and irreversible increase in the conductivity was explained on the basis of Mott's theory of electron hopping and a core-shell concept. The electrochemical performance of the active material with x = 0.5 was also reported.
Introduction
Lithium batteries have been developed for more than 70 years [1]. However, the electrochemical properties of lithium itself were studied even earlier, in 1913 [2]. Since the late 1960s, non-aqueous 3-V lithium primary batteries have been available in the market. In 1974, M.S. Whittingham patented the Li//TiS2 battery. In 1980, J.B. Goodenough et al. proposed LiCoO2 as a new cathode with a potential as high as 3.9 V vs. Li+/Li. This opened a new era for the Li-ion battery market. In 1997, J.B. Goodenough et al. [3] proposed a new class of cathode materials, phospho-olivines, and since then LiMPO4 (M = Fe, Mn, Co, Ni) materials have been widely studied for their application [4]. From the whole family of isostructural compounds, only LiFePO4 was successfully introduced into mass production. LiMnPO4 has a significantly higher potential versus metallic lithium compared to LiFePO4 (4.13 V and 3.43 V, respectively) and a comparable theoretical gravimetric capacity of ca. 170 mAh/g [4]. However, the synthesis of LiMnPO4 compounds which can work in batteries under high loads is more difficult [5]. One of the successful synthetic routes consists of the preparation of intentionally non-stoichiometric compositions [6]. Another possible method is to synthesize LiMn1−xFexPO4 phospho-olivines [7,8], i.e., LiMnPO4 with partial Mn substitution by Fe.
The low electronic conductivity of LiMPO4 materials belongs to a group of factors that significantly limit their electrochemical performance [4]. This issue is usually addressed by surface coating with a highly conducting layer of nanometer thickness or by particle size control. In recent years, J.E. Garbarczyk's group has proposed and investigated an alternative route to conductivity enhancement: thermal nanocrystallization of glassy analogs of selected crystalline cathode materials, such as V2O5, LiFePO4, and Li3V2(PO4)3 [9]. This approach has several advantages: the absence of carbon additives, simplicity, and straightforwardness of synthesis. Preparation consists of two stages only: (i) glass preparation by melt-quenching and (ii) proper thermal treatment of the glass to conduct its nanocrystallization. By appropriate heat treatment, one can achieve a giant (even by a factor of 10^9) and irreversible conductivity enhancement.
The possibility and influence of partial iron substitution with vanadium in LiFePO4 was studied, e.g., by M.S. Whittingham and co-workers [10]. It was demonstrated that the addition of vanadium enhances the electrochemical performance of the materials, especially at high current densities. From the point of view of thermal nanocrystallization, the addition of vanadium also improves the glass-forming properties of the compound and positively affects the electronic hopping. Such an effect was observed in LiFexV1−2.5xPO4 glasses and nanomaterials [11]. In recent research [12], we aimed to replace some of the iron ions in LiFe0.88V0.08PO4 glass with manganese in order to obtain highly conducting nanomaterials. The reasons for introducing some vacancies on Fe sites are as follows. Firstly, this provides charge compensation when Fe2+ ions are replaced by V3+ ions. Secondly, non-stoichiometry may lead to an improvement in electrochemical performance, as reported, e.g., in [13].
In this article, extended studies on three compositions Li(Fe1−xMnx)0.88V0.08PO4 with different Fe and Mn contents are reported. In particular, we focused on the influence of the Fe/Mn ratio on the electrical conductivity of the synthesized nanomaterials.
Materials and Methods
Three compositions of the general formula Li(Fe1−xMnx)0.88V0.08PO4 (x = 0.25, 0.5, 0.75) were selected for investigation (Table 1). Appropriate amounts of precursors: Li2CO3, PO4, and V2O5 were mixed in a mortar, melted at 1300 °C in a reducing atmosphere, and rapidly quenched. Their amorphousness was verified with X-ray diffractometry (XRD). Thermal events occurring in the samples were observed with differential thermal analysis (DTA), using an SDT Q600 setup (TA Instruments). The measurements were conducted with a heating rate of 10 °C/min in argon flow. Crystallization processes occurring upon heating were observed by HT-XRD in nitrogen flow, preventing the samples from possible oxidation. The diffraction studies were carried out on a Philips X'Pert Pro apparatus using the Cu Kα line (λ = 1.542 Å), equipped with an Anton Paar oven. Electrical conductivity was measured upon heating and subsequent cooling ramps with impedance spectroscopy within the wide frequency range 10 mHz-10 MHz. The setup consisted of a Novocontrol Alfa-A analyzer and a tube furnace (Czylok) controlled by a Eurotherm 2404 [14]. The spectra were acquired when a temperature stability as good as 0.1 °C was reached. The step between measurements was 25 °C. The average heating/cooling rate was less than 1 °C/min. For this experiment, platinum electrodes were sputtered onto the opposite sides of the studied samples. The sample with x = 0.5 was selected for electrochemical characterization. About 1 g of the sample was heated in a tube furnace to 580 °C at a 1 °C/min heating rate, i.e., in conditions similar to those used in the electrical measurements, in order to obtain a highly conducting material. The procedure was carried out in argon flow to prevent the material from oxidation. Then, the sample was mixed with carbon black and ball-milled in a planetary mill for 20 h at 300 rpm in order to get a fine powder. To prepare the layer (75 wt% active material, 15 wt% carbon black (CB-TIMCAL Graphite & Carbon Super P® Conductive Carbon Black), and 10 wt% PVDF), a slurry was made by mixing these materials in N-methyl-2-pyrrolidone (NMP, Aldrich) for 3 h using a magnetic stirrer, in order to obtain a homogeneous mixture. The suspension was spread at room temperature on an aluminium current collector using a doctor blade. The gap was set to 0.1 mm. After the evaporation of the solvent in an oven at 50 °C for 24 h, the foil was transferred to an Ar-filled dry-box, where the procedure continued. It was cut into disks of 12 mm in diameter with a loading of the active material of about 3 mg/cm2. A metallic lithium plate was used as the anode and 1 M LiPF6 (in EC:DEC) as the liquid electrolyte. The cell was charged/discharged at rates varying from C/50 to C within the 2.0-4.5 V range.
Cyclic voltammetry (CV) was performed in a three-electrode Swagelok-type cell with the prepared cathode layer as the working electrode, metallic lithium plates as the reference and counter electrodes, and a liquid electrolyte. Firstly, the cell was held at open circuit voltage (OCV) for 24 h for stabilization and then measured in the potential range of 2.0-4.7 V (vs. Li+/Li0) at a scan rate of 0.05 mV/s for 5 cycles.
As a supplementary study, the microstructure of the sample with x = 0.5 was investigated with high-resolution transmission electron microscopy (HR-TEM). It was performed using an FEI Titan Cubed 80-300 microscope at the Institute of Physics, Polish Academy of Sciences.
Differential Thermal Analysis (DTA)
DTA curves of the synthesized samples were typical for glassy materials (Figure 1). A glass transition and two or three crystallization peaks were observed. The glass transition temperature was ca. 435 °C regardless of the composition. The main crystallization peak was centered at ca. 490 °C. The position of the second significant crystallization peak varied from 555 to 587 °C and was shifted towards higher temperatures for the samples with greater manganese content. In the sample with x = 0.75, an additional minor crystallization peak appeared at 540 °C. In the sample with x = 0.5, an endothermal event was observed at ca. 650 °C. The origin of this event is unclear. It may be the melting of one of the phases, and it determines the upper limit of thermal stability of the crystallized materials. Additionally, in Figure 1, one can see that the ratio between the areas of the first and the second peak became smaller with increasing x.
The exact temperatures of the observed thermal events are presented in Table 2. The temperature of the first crystallization peak decreased with increasing Mn content, whereas the temperature of the second crystallization peak significantly increased with the increasing value of x. This may be related to the energies of formation of the LFP and LMP phases at different concentrations of iron and manganese. Differences in crystallization temperatures were previously observed, e.g., in the case of LiFeBO3 and LiMnBO3 glasses [15,16].
X-ray Diffractometry (XRD)
While the DTA curves were typical for glassy materials, the XRD patterns of the synthesized samples (Figure 2) appeared to be intriguing. One can observe an amorphous halo at low angles (20-40°). However, low-intensity but distinct peaks were observable in all samples. This means that the samples had partially crystallized upon fast cooling from the melt. In the case of samples with x ≤ 0.5, the identification of crystalline phases was difficult due to the low intensity of the peaks. In the case of x = 0.75, the positions of the major peaks were in agreement with the Li(Fe0.25Mn0.75)PO4 reference pattern (ICDD card no. 04-024-8018).
Regardless of the initial impurities, the XRD patterns acquired upon heating the samples to 580 °C (Figure 3a-c) confirmed crystallization into three crystalline phases: triphylite LiFePO4 (abbrev. LFP, space group Pnma), lithiophilite LiMnPO4 (abbrev. LMP, space group Pmnb), and lithium vanadium phosphate Li3V2(PO4)3 (abbrev. LVP, space group P21/n). Since the positions of the peaks in all three patterns were quite similar and the peaks in the nanocrystalline samples were broad, it was not easy to distinguish the crystallization of each phase at first sight. In general, the unit cell constants of LiFe1−xMnxPO4 increase with increasing manganese content [17]. Therefore, the diffraction lines of LMP were shifted towards lower Bragg angles in comparison to LFP. The quality of the patterns and their complexity did not allow us to perform a reliable Rietveld refinement. Nevertheless, an analysis of minor peaks allowed us to suspect that the crystallization of Li3V2(PO4)3 appeared first and was followed by LiFePO4. Due to the lower vanadium content, these two processes might overlap in the first crystallization peak observed by DTA. Therefore, the second crystallization peak can be ascribed to the crystallization of LiMnPO4. This hypothesis is supported by the fact that the ratio of the areas under these two peaks shifts in favor of the second crystallization process with growing x, i.e., with growing Mn content in the nominal composition. Eventually, most of the reflections originating from LFP and LMP merged at high temperatures. In Figure 4, one can see a comparison between the three high-resolution patterns collected at room temperature after heat treatment for each value of x. The reference patterns for mixed LiFe1−xMnxPO4 compounds with x = 0.25 and 0.75 are given as well. One can see that the position of the main peaks shifts slightly towards lower angles, in good agreement with the reference patterns. This suggests that for the compositions with x = 0.25 and 0.75, Mn/Fe ions incorporate into the same structure with different unit-cell parameters. On the contrary, for the composition with x = 0.5, separate lines from iron-rich and manganese-rich olivine-like phases were observed. Some impurity phases were also detected and identified, including Fe2O3, V2O5, and Li3V2(PO4)3. This suggests that not all of the vanadium was doped into the olivine structure.
Electrical Conductivity
The initial electrical conductivity of the as-synthesized glassy samples at room temperature was modest, within the 10^−14-10^−13 S/cm range. The impedance figures in Nyquist coordinates were similar for all compositions and consisted of a single semicircle, which is a typical shape for glasses with predominant electronic conductivity. In the glassy phase, the Li+ conductivity might be suppressed by a lack of the conduction channels that are present in a periodic crystalline structure. However, it was a good starting point for significant improvement. IS measurements showed that a proper thermal treatment of the glassy samples resulted in a significant and irreversible increase in the conductivity (Figure 5a-c). The best electrical conductivity of a nanocrystallized sample, i.e., 1.4 · 10^−3 S/cm at RT, was observed for the composition with x = 0.5. A slightly lower value, i.e., 0.8 · 10^−3 S/cm at RT, was recorded for the composition with x = 0.25. The lowest value, below 10^−5 S/cm at RT, was reached in the case of x = 0.75. The values of the activation energy ranged from 0.12 eV to 0.19 eV for x = 0.25 and x = 0.75, respectively. These values compare very favorably with the electronic conductivity of LiFePO4 crystals, which is 10^−7 S/cm at room temperature with an activation energy between 0.55 and 0.59 eV [18]. The differences in electrical properties appeared also at elevated temperatures. Impedance figures acquired at ca. 250 °C, presented as Nyquist plots, are shown in Figure 6a-c. Mainly, they consisted of a single semicircle. The equivalent circuit can be described as (RP), where R is the total resistance of the sample and P is a constant phase element (CPE) with parameter n close to 1. However, for samples with x ≥ 0.5, an ionic spur at low frequencies was more pronounced. This behavior can be modeled with a serial CPE element with n ≈ 0.5. A more detailed discussion of basic equivalent circuits describing electronic and ionic conductors can be found, e.g., in Ref. [19].
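For readers wishing to reproduce the activation-energy extraction, a minimal sketch assuming the small-polaron Arrhenius form σT = σ0 exp(−Ea/kBT) (the data points below are made up):

```python
import numpy as np

k_B = 8.617e-5                                  # Boltzmann constant [eV/K]

# Hypothetical conductivity data: temperature [K] and sigma [S/cm]
T = np.array([300.0, 350.0, 400.0, 450.0, 500.0])
sigma = np.array([1.4e-3, 4.9e-3, 1.3e-2, 2.9e-2, 5.5e-2])

# Arrhenius form for polaron hopping: sigma*T = sigma0 * exp(-Ea/(k_B*T)),
# so ln(sigma*T) is linear in 1/T with slope -Ea/k_B.
x = 1.0 / T
y = np.log(sigma * T)
slope, intercept = np.polyfit(x, y, 1)
E_a = -slope * k_B
print(f"Activation energy Ea = {E_a:.3f} eV")
```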
Upon cooling, the impedance figures were strongly affected by the inductance of the holder, due to the low resistance of the samples. Nevertheless, no low-frequency spur was observed that would be evidence for ionic conductivity with values comparable to the electronic conduction. This is not to say that the nanocrystalline materials exhibited no ionic conductivity, but it had to be a couple of orders of magnitude lower than the very high electronic conductivity of the nanocrystallized samples. Such behavior has been previously observed by us in many glassy analogs of cathode materials, e.g., olivine-like ones [9,20]. Such a phenomenal increase in the electrical conductivity and a significant decrease in the activation energy can be explained on the basis of Mott's theory of electron hopping in oxide glasses containing transition metal ions (i.e., Fe, Mn, and V) [21]. In our approach, the conductivity increase can be ascribed to the formation of interfacial regions (shells) around the nanocrystallites (cores). The resulting mixed valence of iron, manganese, and vanadium in these regions is advantageous for small-polaron hopping, because the distances between pairs of hopping centers (Fe2+-Fe3+, Mn2+-Mn3+, and V3+-V4+) become shorter. A detailed explanation of this phenomenon and further discussion of the core-shell concept can be found, e.g., in Ref. [9]. At this point, it is worth mentioning that alternative hypotheses for the giant increase in the conductivity were carefully investigated and eventually rejected. This phenomenon cannot be ascribed to the appearance of metallic, easily conducting paths as in the work [22]. An increase in the conductivity due to a metal-insulator transition in vanadium oxides does not explain the phenomenon either, as such a transition would be reversible as a function of temperature [23].
Electrochemical Characterization
In Figure 7a, charge-discharge curves at various current rates from C/50 to C are shown. In general, the potential changes monotonically. However, two steps are usually observable, which could be ascribed to the Fe2+/Fe3+ and Mn2+/Mn3+ redox pairs. Typical crystalline olivine cathode materials, however, exhibit a broad plateau, resulting in a nearly constant potential during charging and discharging. A mixed LiFe1−xMnxPO4 compound exhibits similar behavior, with two plateaus corresponding to the iron and manganese redox pairs [17,24,25]. On the contrary, a continuous change in the potential may be evidence for the presence of a non-stoichiometric single phase in the nanograins, rather than a two-phase mechanism (i.e., fully lithiated and entirely delithiated phases). Quite similar charge/discharge curves were observed for nano LiFePO4 by P. Gibot et al. [26]. In our experiment, only up to 95 mAh/g was reached with a 4.5-V cutoff, which is considerably lower than the theoretical capacity (ca. 170 mAh/g). The cyclability of the cell is presented in Figure 7b. We may expect that the rest of the capacity could be reached at a higher potential due to the V3+/V4+ redox pair in Li3V2(PO4)3.
In Figure 7c, CV curves of the prepared lithium cell are shown. In all cycles, the highest oxidation peak was observed around 3.6 V, which corresponds to the step observed in the charge curve described earlier. The associated redox peak at a lower potential, ca. 3.4 V, confirms the presence of Fe2+/Fe3+ redox pairs in the olivine structure. The analysis of all cycles confirms the reversibility of this process. Additionally, an interesting shape of the peak associated with manganese oxidation was observed in the first cycle. A similar shape was observed for LixFe1−yMnyPO4 by J. Molenda et al. [8]. It is possible that some irreversible process occurs in this region, because the shape of the peak changed in the second cycle and remained unchanged for the rest of the cycles. A redox peak at a potential of 3.9 V corresponds to the Mn2+/Mn3+ redox pairs. Most intriguing are the occurrence of a strong irreversible reduction at 3.2 V and an increasing current above 4.5 V. While the first of these phenomena can be assigned to the influence of vanadium, as in the works [27,28], the origin of the irreversible peak above 4.5 V is not clear. However, one should keep in mind that at this potential the extraction of the last lithium ion from LixV2(PO4)3 should occur [29].
Transmission Electron Microscopy
In Figure 8, a high-resolution TEM image of a nanograin is shown. One can see distinct crystallographic planes. The grain is surrounded by a residual glassy matrix. Such a microstructure is typical for materials synthesized by thermal nanocrystallization of a glass [30].
Conclusions
This research showed that thermal nanocrystallization of glassy analogs of LiFe1−xMnxPO4 resulted in highly conducting materials that may be used as cathodes in Li-ion batteries. The addition of vanadium was proposed to improve the glass-forming properties and to provide favorable conditions for electron hopping in the nanomaterials. The best electrical conductivity, for the nanomaterial with x = 0.5, exceeded 1 mS/cm. This giant and irreversible increase in the conductivity was explained with Mott's electron hopping theory and a core-shell concept.
All three phases detected in the samples (i.e., LiFePO4, LiMnPO4, and Li3V2(PO4)3) are electrochemically active and are therefore suitable for use as cathodes in Li-ion batteries, which was confirmed in preliminary galvanostatic and CV experiments. Further work on laboratory cells is worth carrying out in order to increase the electrochemical performance of the studied materials at higher current rates. A highly conducting material synthesized without carbon additives should be beneficial in terms of electrochemical performance under high current loads.
"Materials Science"
] |
Alix: A Candidate Serum Biomarker of Alzheimer’s Disease
Alzheimer's disease (AD) is the most common fatal neurodegenerative disease of the elderly worldwide. The identification of AD biomarkers will allow for earlier diagnosis and thus earlier intervention. The aim of this study was to find such biomarkers. It was observed that the expression of Alix was significantly decreased in brain tissues and serum samples from AD patients compared to the controls. A significant correlation between Alix levels and cognitive decline was observed (r = 0.80; p < 0.001), as well as a significant negative correlation between serum Alix and Aβ40 levels (r = −0.60, p < 0.001). Receiver operating characteristic (ROC) curve analysis showed that the area under the curve (AUC) for Alix was 0.80, and the optimal cut-off point of 199.5 pg/ml was selected as giving the highest sum of sensitivity and specificity. The diagnostic accuracy for serum Alix was 74%, with 76% sensitivity and 71% specificity, respectively, which could differentiate AD from controls. In addition, the expression of Alix was found to be significantly decreased in AD compared to vascular dementia (VaD). ROC analysis between AD and VaD showed an AUC of 0.777, which could be indicative of the role of serum Alix as a biomarker in the differential diagnosis between AD and VaD. Most surprisingly, the decreased expression of Alix was attenuated after treatment with Memantine in different AD animal models. In conclusion, our results indicate, for the first time, the potential of serum Alix as a novel and non-invasive biomarker for AD.
INTRODUCTION
Alzheimer's disease (AD) is one of the most common progressive neurodegenerative diseases in the elderly, accounting for 60-80% of all dementia cases (Sabayan and Sorond, 2017; Garre-Olmo, 2018). AD is characterized by cognitive impairment with the progressive loss of basal forebrain cholinergic neurons, the deposition of extracellular senile plaques formed by amyloid β (Aβ), and intracellular neurofibrillary tangles (NFTs) of hyperphosphorylated tau (Scheltens et al., 2016). Although much recent research has revealed a great deal about AD (Hodson, 2018; Jack et al., 2018), the exact pathogenesis is not yet fully known. Current treatments can only help improve the clinical symptoms, but cannot delay or reverse the progression of AD. Thus, earlier diagnosis would allow for the earlier application of the therapeutic strategies that might have the best efficacy. The identification of AD biomarkers would be of great value to aid in the diagnosis of AD. However, to date, there is no non-invasive and cost-effective biomarker to improve the diagnosis.
Until now, the only definitive way to diagnose AD has been to search for plaques with a brain autopsy after the patient dies (DeTure and Dickson, 2019). The acknowledged biomarkers in cerebrospinal fluid (CSF) include Aβ, total tau (T-tau), and phosphorylated tau (P-tau; Olsson et al., 2016), but it remains difficult to distinguish AD from controls due to the nonspecific changes in AD. Moreover, obtaining CSF through invasive lumbar puncture in large numbers of elderly individuals is challenging. All these factors limit the application of CSF biomarkers. In recent years, many scientists have focused on blood biomarkers (Olsson et al., 2016; Hampel et al., 2018; Penner et al., 2019; Zetterberg and Burnham, 2019). Many candidate proteins in blood have been found by proteomic approaches (Kitamura et al., 2017; Shen et al., 2017; Petersen et al., 2020), but subsequent validation showed unsatisfactory results. It is necessary to continue looking for viable diagnostic biomarkers of AD.
Alix, also called ALG-2 interacting protein X, participates in a regulatory Ca2+-dependent pathway through its interaction with the ALG-2 protein (Missotten et al., 1999). ALG-2, an EF-hand calcium-binding protein, can regulate the cell death program underlying apoptosis in response to changes in Ca2+ concentration following endoplasmic reticulum stress (Maki et al., 2016; Mercier et al., 2016). Numerous observations suggest that brains from AD patients display an early impairment of the endosomal system, which appears in neurons long before amyloid plaque and neurofibrillary tangle formation (Nixon, 2005). It has recently been demonstrated that Alix and ALG-2 form a molecular coupling between endosomes and neuronal death in the presence of calcium, apical caspases, and tumor necrosis factor α receptor 1 (Mahul-Mellier et al., 2009). It was also reported that Alix is involved in caspase-9 activation and apoptosis triggered by calcium (Strappazzon et al., 2010). Given this, we speculated that Alix could act as a valuable biomarker in AD. Thus, we measured the levels of Alix in serum and brain tissues from AD patients and healthy controls, and we also evaluated the diagnostic value of Alix as an AD biomarker. This is very meaningful for the discovery of ideal biomarkers of AD.
Control, AD, and VaD Brains
We collected frozen cortical and hippocampal tissues from 12 AD patients, eight vascular dementia (VaD) patients, and 12 age-matched controls. The patient groups were matched with the control group in terms of age of onset, gender, body mass index (BMI), and educational level. Written informed consent regarding the donation was provided, and the study was approved by the Institutional Review Board (IRB). The National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) Workgroup criteria were used for the clinical diagnosis of AD, and the National Institute for Neurological Disorders and Stroke and Association Internationale pour la Recherche et l'Enseignement en Neurosciences (NINDS-AIREN) criteria for the VaD patients. All controls had MMSE scores between 28 and 30. After the death of the AD patients, a brain autopsy searching for plaques was performed to confirm the diagnosis of AD, this being the only definitive method.
Serum Collection and Preparation
In this study, we recruited 404 AD subjects, 52 VaD patients, and 404 age- and gender-matched controls. Detailed demographic information on the subjects enrolled in this study is presented in Table 1. The patients were diagnosed clinically as having probable AD according to the Diagnostic and Statistical Manual of Mental Disorders-IV (DSM-IV), International Classification of Diseases-10 (ICD-10), and NINCDS-ADRDA criteria, or as having probable VaD using the NINDS-AIREN criteria. The Mini-Mental State Exam (MMSE) was used to assess the cognitive severity of dementia. Controls had an MMSE score between 28 and 30 without cognitive decline, and they did not present a history of depression, psychosis, or use of medications with side effects of cognitive impairment. All controls were followed clinically for 2 years in order to rule out the development of cognitive decline. No subject in this study, all of whom originated from Northern Han Chinese populations, presented with major known co-morbidities, including hypertension, cardiopathy, diabetes, or renal dysfunction. Written informed consent was acquired from all subjects, and the protocol was approved by the Institute Ethical Committee of Ludong University. Blood was collected in evacuated collection tubes without anticoagulant and allowed to clot for 2 h on ice prior to centrifugation at 4,000 g for 8 min at 4 °C. After that, serum samples were aliquoted (50 ml/tube) and stored at −80 °C.
AD Animal Models
In this study, the AD animal models included APP/PS1 double transgenic mice and Aβ25-35 intracerebroventricularly injected rats. We purchased APP/PS1 double transgenic mice and age-matched wild-type (WT) mice from the Jackson Laboratory, and Memantine from Sigma-Aldrich. Male Wistar rats (3 months old, 220-250 g) were obtained from the Experimental Animal Center of Ludong University. 1 nM Aβ25-35 was injected into the lateral cerebral ventricle of these rats. The AD model mice (rats) were randomly divided into two groups of 8-10 animals each: a vehicle model group and a Memantine (30 mg/kg) group. Administration by oral gavage was started at 12 months of age and lasted for 12 weeks. All the experiments were approved according to the institutional guidelines of the Experimental Animal Center of Ludong University.
Western Blot Analysis
The brain tissues were homogenized thoroughly in a RIPA lysis buffer containing 150 mM NaCl, 50 mM Tris (pH 7.4), 1% NP40, 0.5% sodium deoxycholate, and 0.1% SDS. Then, the samples were centrifuged at 25,000 g at 4 °C for 60 min, and the supernatants were collected and stored at −80 °C until use. The serum samples from these subjects were also dissolved in the above buffer. Beforehand, the protein concentration was measured with a BCA kit. Subsequently, western blot analysis was performed according to the previously published protocol. All the samples were subjected to electrophoresis, transferred onto PVDF membranes, and incubated with the primary antibodies: rabbit anti-Alix (1:500, Cell Signaling Technology), mouse anti-β-actin (1:10,000, Sigma), and mouse anti-IgG (1:10,000, Abcam). Digital images were obtained with the LAS4000 FujiFilm imaging system (FujiFilm, Japan), and densitometric analysis was performed with Quantity-One software (Bio-Rad, United States).
ELISA Analysis
Alix levels in serum were measured using a commercially available human Alix quantitative sandwich enzyme immunoassay (Uscnk, Wuhan, China). The standards and test samples were pipetted into 96-well plates pre-coated with anti-Alix antibody and incubated for 2 h at 37 °C. After the removal of the liquid from each well, 100 µl of biotin-conjugated antibody specific for Alix was added and incubated for 1 h at 37 °C. The wells were then mixed gently at room temperature until the solution appeared uniform and washed with wash buffer three times. After that, avidin-conjugated HRP was added and incubated for 1 h at 37 °C. To remove any unbound avidin-enzyme reagent, the aspiration/wash process was repeated five times. TMB substrate was added and incubated for 15-30 min at 37 °C, after which the stop solution was added. The optical density was determined using an MQX200 microplate reader (Bio-Tek, United States) set to 450 nm. The serum level of Alix in the samples was interpolated from kit-specific standard curves generated using GraphPad Prism software. The CV% was less than 8% for intra-assay precision (precision within an assay) and less than 10% for inter-assay precision (precision between assays); for this assessment, three samples of known concentration were tested 20 times on one plate. The detection range of the ELISA kit is 47-3,000 pg/ml. The detection limit of human Alix is 11.7 pg/ml, determined as the mean OD value of 20 replicates of the zero standard plus three standard deviations. High sensitivity and specificity for the detection of human Alix were shown in this assay; moreover, no significant cross-reactivity or interference between human Alix and analogs was observed. Similarly, Aβ40 levels in serum were measured using a commercially available human Aβ40 quantitative sandwich enzyme immunoassay (Abcam, Cambridge, UK).
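Interpolating sample concentrations from a sandwich-ELISA standard curve is commonly done with a four-parameter logistic (4PL) fit; the sketch below uses made-up calibrator data and is not the kit's actual curve:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    # Four-parameter logistic: OD as a function of concentration.
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Hypothetical calibrator concentrations [pg/ml] and measured OD450 values.
std_conc = np.array([47, 94, 188, 375, 750, 1500, 3000], dtype=float)
std_od = np.array([0.08, 0.15, 0.27, 0.49, 0.85, 1.35, 1.90])

popt, _ = curve_fit(four_pl, std_conc, std_od,
                    p0=[0.05, 2.5, 500.0, 1.0], maxfev=10000)

def od_to_conc(od, bottom, top, ec50, hill):
    # Inverse of the 4PL curve, valid for bottom < od < top.
    return ec50 / ((top - bottom) / (od - bottom) - 1.0) ** (1.0 / hill)

print(f"OD 0.40 -> {od_to_conc(0.40, *popt):.1f} pg/ml")
```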
Statistical Analysis
The data were analyzed using SPSS 13.0 software. Comparisons between the groups were made using Student's t-test and one-way ANOVA with the Tukey-Kramer method as a post hoc test. Correlations between Alix levels and MMSE scores were assessed with the Spearman correlation coefficient. The sensitivity and specificity of the measured variable for AD diagnosis were determined by ROC analysis. The best cut-off value was selected as the one that minimized the sensitivity-specificity difference and maximized the discriminating power of the test. Statistical significance was set at p < 0.05.
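As a minimal sketch of such a ROC analysis (the paper used SPSS; this scikit-learn version with simulated data matching the reported group means is purely illustrative):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical serum Alix levels [pg/ml]; AD cases labelled 1, controls 0.
rng = np.random.default_rng(0)
levels = np.concatenate([rng.normal(168.9, 53.4, 404),   # AD
                         rng.normal(246.3, 77.4, 404)])  # controls
labels = np.concatenate([np.ones(404), np.zeros(404)])

# Lower Alix indicates AD, so score with the negated level.
auc = roc_auc_score(labels, -levels)
fpr, tpr, thr = roc_curve(labels, -levels)

# Optimal cut-off: the point minimising |sensitivity - specificity|.
best = np.argmin(np.abs(tpr - (1.0 - fpr)))
print(f"AUC = {auc:.2f}")
print(f"cut-off = {-thr[best]:.1f} pg/ml, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```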
Decreased Alix Expression in AD Patients
In our previous study, Alix was identified as significantly decreased in APP/PS1 transgenic mice compared to age-matched WT mice, as well as in serum samples from a small number of AD patients (Sun et al., 2015). To determine whether Alix was also downregulated in the brain tissues of AD patients, the expression of Alix in post-mortem cortex and hippocampus tissues from AD patients and controls was also assessed.
Western blot analysis showed a statistically significant decrease in protein expression by 18% (Figure 1A) in the cortex and by 20% (Figure 1B) in the hippocampus of AD patients compared to the controls. Meanwhile, the serum level of Alix was validated again in our present study, and a significant decrease of 50% (Figure 1C) was observed in AD sera.
Serum Alix and Aβ Levels Detected by ELISA
The decreased serum level of Alix was subsequently validated by ELISA in a larger population. All samples were matched for age, gender distribution, and education. The MMSE score is an important measure of the cognitive level. Relative to the control group, the AD group had a lower MMSE score (mean MMSE score: 28.7 ± 0.8 vs. 17.1 ± 4.0; Table 1). As shown in Figure 2, the ELISA results showed that the serum Alix level was markedly lower in AD than in the control group (Control: 246.3 ± 77.4 pg/ml, AD: 168.9 ± 53.4 pg/ml, p < 0.01). The 95% confidence intervals (CIs) were 163.7-174.2 pg/ml in AD and 238.7-253.8 pg/ml in the control group, with no overlap between AD and Control. Corresponding results are shown in Table 2, where the serum Aβ40 level is also given. Aβ40 had a significantly increased expression in AD compared to the control (Control: 32.4 ± 8.8 pg/ml, AD: 41.0 ± 13.3 pg/ml, p < 0.01). The 95% confidence intervals (CIs) were 39.7-42.3 pg/ml in AD and 31.5-33.2 pg/ml in the controls.
Correlation Analysis
The potential correlation between cognition (evaluated by MMSE scores) and the levels of serum Alix was analyzed and is shown in Figure 3A. Spearman correlation analysis showed a significant positive correlation within the AD group (r = 0.80, p < 0.001). It is generally accepted that the risk of AD dementia is associated with Aβ. Previous studies have reported that Aβ levels are closely associated with incident AD risk (Reiss et al., 2018). In our present study, there was a significant increase of serum Aβ40 levels in AD patients compared to the controls, as seen in Table 2. A correlation analysis between serum Alix and Aβ40 levels was therefore performed. As shown in Figure 3B, it revealed a statistically significant negative correlation between serum Alix and Aβ40 levels (r = −0.60, p < 0.001), suggesting that Alix is probably involved in the amyloid pathogenesis of AD.
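A minimal sketch of such a correlation test, with made-up paired measurements (scipy's spearmanr returns both the coefficient and the p-value):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical paired data: serum Alix [pg/ml] and MMSE scores for AD cases.
alix = rng.normal(168.9, 53.4, 100)
mmse = 0.05 * alix + rng.normal(0.0, 2.0, 100)   # built-in positive trend

r, p = spearmanr(alix, mmse)
print(f"Spearman r = {r:.2f}, p = {p:.3g}")
```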
ROC Analysis
To determine the potential diagnostic value of Alix as a biomarker for discriminating between AD and healthy controls, the ELISA results for Alix were used to generate ROC curves (Figure 4A). The area under the curve (AUC) was determined to evaluate the diagnostic performance. The results showed that the AUC for Alix was 0.80, and the optimal cut-off point of 199.5 pg/ml was selected as giving the highest sum of sensitivity and specificity. The diagnostic accuracy for serum Alix was 74%, with 76% sensitivity and 71% specificity, respectively, which could differentiate AD from controls. Aβ40 is known to be involved in the pathogenesis of AD and is currently one of the most promising biomarkers that might predict disease progression. Figure 4B shows that the AUC value for Aβ40 was 0.69, and the AUC for the combined detection of Aβ40 and Alix was 0.75.
Decreased Serum Alix Levels in AD Compared to VaD
AD and VaD are among the most common types of dementia; however, they are difficult to distinguish by clinical presentation and features alone. In this study, Alix levels in AD and VaD sera were measured to estimate whether Alix could distinguish the two diseases as a serum biomarker. Western blot analysis showed that Alix was significantly decreased in brain tissues of AD patients compared with the controls and was markedly lower in AD than in VaD, whereas there was no significant difference between the VaD and control groups (Figure 5A). ELISA analysis of serum samples showed results similar to the western blot, and the serum level of Alix in VaD was 224.5 ± 55.7 pg/ml (Figure 5B). ROC curve analysis between AD and VaD gave an AUC for Alix of 0.777 (Figure 5C), suggesting that Alix might serve as a marker to distinguish AD from VaD.
Effect of Memantine on Alix Expression
In order to observe the effects of positive anti-AD drugs on Alix expression, APP/PS1 mice and Aβ25-35 intracerebroventricular-injected rats were orally administered 30 mg/kg Memantine. Western blot analysis showed that Alix was significantly decreased in serum samples of the AD animal models compared with the controls, and these decreases were significantly attenuated by Memantine treatment (Figure 6).
DISCUSSION
Cholesterol level in the central nervous system is known to be related to AD. Alix was reported to play a role in cholesterol homeostasis by facilitating the interaction between the E3-ubiquitin ligase NEDD4-1 (neural precursor cell-expressed developmentally downregulated gene 4) and its targets, the ATP-Binding Cassette (ABC) transporters, including ABCG1 and ABCG4 (Alrosan et al., 2019). ABCG1 was reported to reduce the synthesis of Aβ peptides by enhancing cholesterol efflux from neurons to apolipoprotein E, and it might play an additional proposed role in restricting the brain entry of Aβ in AD (Sano et al., 2016; Dodacki et al., 2017). ABCG4 is expressed almost exclusively in astrocytes and neurons in the brain and can export cholesterol, oxysterols, and cholesterol synthesis intermediates (Kerr et al., 2011). Similar to ABCG1, ABCG4 was also found to be implicated in AD as a transporter of Aβ from cells (Dodacki et al., 2017). In addition, ABCG1 and ABCG4 can suppress γ-secretase activity and disturb γ-secretase distribution on the plasma membrane, leading to decreased Aβ secretion, which may inhibit the development of AD (Sano et al., 2016). In our present study, a good negative correlation between serum Alix and Aβ40 levels was shown, further supporting the involvement of Alix in the amyloid pathogenesis of AD.
Glutamate-induced neuronal cell death via N-methyl-D-aspartic acid receptor (NMDAR) excitotoxicity is thought to contribute to AD development (Zhang et al., 2016; Wang and Reddy, 2017). The influx of Ca2+ through NMDARs is essential for stimulating the intracellular signaling cascades that cause cell death, but the precise molecular mechanisms of NMDARs in neuronal death remain unclear (Hardingham and Bading, 2010; Szydlowska and Tymianski, 2010). Alix, a known modulator of caspase-dependent and caspase-independent cell death, has been found within the human postsynaptic density (PSD), in which NMDARs are central components that can trigger Ca2+-dependent neuronal cell death (Salim et al., 2019). Moreover, dopamine signaling is a critically important process in the brain, and dopamine receptors (DARs) are closely related to neurodegenerative diseases such as AD and have become an important target for the prevention and treatment of AD (Reeves et al., 2017; Pan et al., 2019). In a previous study, Alix was identified as a novel dopamine receptor-interacting protein, upregulating DAR expression and playing important roles in their stability and trafficking (Zhan et al., 2008). Given that DARs also interact with NMDARs (Lee et al., 2002; Liu et al., 2006; Devor et al., 2017), we speculated that Alix might have the capacity to influence NMDAR-triggered neuronal death. Furthermore, it was reported that the close proximity of Alix and NMDARs allows Alix to influence the downstream pathways following NMDAR activation at low or absent glutamate/glycine concentrations (Salim et al., 2019); thus, Alix might act as a potential modulator of NMDAR function. Our present study showed that the downregulation of Alix was significantly attenuated after treatment with Memantine, an NMDA receptor antagonist, in APP/PS1 mice and Aβ25-35 intracerebroventricular-injected rats, which further suggests that Alix is closely tied to the NMDA-related pathology of AD.
One of the most important pathological features of AD is brain atrophy. A previous report showed that Alix knock-out mice suffer a severe reduction of brain volume and size, especially in both the mediolateral length and the thickness of the cerebral cortex (Laporte et al., 2017). Another report showed that overexpression of Alix in the chick neural tube induced massive apoptosis of neuroepithelial cells, leading to a 25% reduction in the width of the neural epithelium (Mahul-Mellier et al., 2006). However, it was still unclear whether Alix is upregulated or downregulated in AD patients. Our present study confirmed that the expression of Alix is significantly decreased in postmortem brain tissues of AD patients. We speculate that the lack of Alix could induce apoptosis, probably by disrupting the balance of the signaling pathways that regulate apoptosis.
In order to explore whether Alix could serve as an AD biomarker, we performed a series of experiments. Alix levels were demonstrated to be significantly decreased in brain and serum samples of AD patients compared with controls. The good correlation between MMSE scores and Alix levels suggests that Alix is closely related to the disease severity of AD, and a good correlation between serum Alix and Aβ40 levels was also observed. ROC analysis showed that Alix has high diagnostic value as a reliable biomarker for distinguishing patients with AD from controls, as well as AD from VaD. Moreover, the decreased expression of Alix was attenuated after Memantine treatment in APP/PS1 mice and Aβ25-35 intracerebroventricular-injected rats. These results suggest that Alix is a promising indicator for predicting the risk of AD and a potential drug target for antagonizing AD progression. It is known that Aβ can cause pore formation, resulting in the leakage of ions and the disruption of cellular calcium balance, eventually promoting apoptosis, causing synaptic loss, and disrupting the cytoskeleton (Reiss et al., 2018), while Alix in combination with Ca2+ was demonstrated to be involved in the apoptotic process (Scheffer et al., 2014; Laporte et al., 2017). Thus, we speculate that the dysregulation of Alix probably plays an important role in the pathology induced by Aβ. Further investigation is needed to explore the exact roles of Alix in the pathogenesis of AD.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee on Human Experimentation of Ludong University. The patients/participants provided their written informed consent to participate in this study. The animal study was reviewed and approved by the ethics committee of Ludong University.
AUTHOR CONTRIBUTIONS
YS conceived and designed the studies. JH, JY, GC, and HG enrolled all the subjects and collected the serum samples. YS, JY, and JL performed the research. YS analyzed the data and wrote the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
Our study was supported by the Natural Science Foundation of Shandong, China (No. ZR2020QH131 and No. ZR2019MC004), by the high-end talent team construction foundation (No. 108-10000318), and by the National Natural Science Foundation of China (No. 81860218, No. 31360246, and No. 81300973). The work was also financially supported by the High-end Full-time Innovative Talent Introduction Foundation "two-hundred plans" of Yantai. | 5,308.8 | 2021-06-15T00:00:00.000 | [
"Biology"
] |
Discrete anatomical coordinates for speech production and synthesis
The sounds of all languages are described by a finite set of symbols, which are extracted from the continuum of sounds produced by the vocal organ. How the discrete phonemic identity is encoded in the continuous movements producing speech remains an open question in experimental phonology. In this work, this question is addressed by using Hall-effect transducers and magnets, mounted on the tongue, lips and jaw, to track the kinematics of the oral tract during the vocalization of vowel-consonant-vowel structures. Using a threshold strategy, the time traces of the transducers were converted into discrete motor coordinates unambiguously associated with the vocalized phonemes. Furthermore, the signals of the transducers, combined with the discretization strategy, were used to drive a low-dimensional vocal model capable of synthesizing intelligible speech. The current work not only addresses a relevant question in the biology of language, but also demonstrates the performance of the experimental technique in monitoring the displacement of the main articulators of the vocal tract while speaking. This novel electronic device represents an economical and portable alternative to the standard systems used to study vocal tract movements.
INTRODUCTION
Among all species, humans are the only ones capable of generating speech. This complex process, which distinguishes us from other species, emerges from an interaction between brain activity and the physical properties of the vocal system. This interaction implies precise control of a set of articulators (lips, tongue and jaw) to produce a continuous change in the shape of the upper vocal tract 1 . The output of this process is the speech sound wave, which can be discretized and represented by a finite set of symbols: the phonemes.
Moreover, the phonemes across languages can be hierarchically organized in terms of articulatory features, as described by the International Phonetic Alphabet 2 (IPA). On the other side of the process, at the brain level, intracranial recordings registered during speech production showed that motor areas encode the same set of articulatory features 3 . One missing piece of the puzzle is then: how does the continuous vocal tract movement generating speech encode this discrete information?
During speech production, the displacement of the articulators modifies the vocal tract configuration, allowing (i) different filters to be applied to the sound initiated by the oscillations of the vocal folds at the larynx (i.e. vowels) and (ii) a turbulent sound source to be produced by occluding (i.e. stop consonants) or constricting (i.e. fricatives) the tract 4 . Previous works developed biophysical models for this process 5,6 and tested their capability to synthesize realistic voice 7 . In principle, those models could have a high dimensionality, especially due to the many degrees of freedom of the tongue 5 . However, their dimension ranges between 3 and 7, suggesting that a small number of measurements of the vocal tract movements should be able to successfully decode speech and to feed the synthesizers.
In this study, the oral dynamics is monitored using sets of Hall-effect transducers and magnets mounted on the tongue, lips and jaw during the utterance of a corpus of syllables (including all the Spanish vowels and voiced stop consonants). By applying a threshold strategy to the signals recorded by three sensors, it was possible to decode the uttered phonemes well above chance. Moreover, the signals are used to drive an articulatory synthesizer producing intelligible speech. The results disclose that continuous measurements of the oral movements can be represented in a discrete motor-coordinate space, explicitly showing that all steps comprising the speech process can be described in terms of discrete units.
From a technical point of view, the present work represents a benchmark for the state of the art of the measurement techniques used in the speech production field. During the last decades, few improvements have been achieved in the experimental methods used to measure vocal tract movements. The most widely used technique in the field is electromagnetic articulography 8,9 (EMA). This equipment produces very accurate measurements but presents two main disadvantages: it is non-portable and expensive. The device described in the current work (which is shown to be capable of tracking the vocal tract during continuous speech) represents an alternative method without the problems described above.
RESULTS
Following the procedure described by Assaneo et al. 10 , sets of Hall-effect transducers and magnets (Figure 1a) were mounted on the upper vocal tract to record the displacement of the articulators (jaw, tongue and lips). More specifically, 3 transducers and 4 magnets were placed in the oral cavity of the participant following the configuration displayed in Figure 1b. The position of the elements was chosen such that each transducer signal is modulated by a subset of magnets (color code in Figure 1b; see Methods for more detail). The upper-teeth transducer signal represents an indirect measurement of the aperture of the jaw, the lips transducer signal represents the roundness and closure of the lips, and the palate transducer gives an indirect measure of the position of the tongue within the oral cavity. To diminish the body surface in contact with the glue, participants wore plastic molds on their upper and lower dentures (Figure 1a). Just three elements were glued directly onto the participants' skin: the ones on the tongue and lips. This strategy also diminished the variance in the device configuration between different sessions (the elements on the molds stayed fixed).
Four native Spanish speakers were instructed to vocalize a corpus of syllables while wearing the device. The transducer signals (h_J(t), h_T(t) and h_L(t) for the jaw, tongue and lips, respectively) were recorded simultaneously with the produced speech; an example of the four signals is shown in Figure 1c.
From continuous dynamics to a discrete motor representation
A visual inspection of the data revealed that the sensor signals remain stable during the utterance of each phoneme and execute rapid excursions during the transitions in order to reach the next state (see Figure 1c); moreover, the signals persist in the same range of values for different vocalizations of the same phoneme.
This observation led us to hypothesize that each phoneme could be described in a three-dimensional discrete space by adjusting thresholds over the signals. This hypothesis was mathematically formalized and tested by using a subset of the data to extract the thresholds and the rest to compute the decoding performance of the phonemic identity.
Thresholds
A previous study showed that applying one threshold to each transducer was enough to decode the 5 Spanish vowels 10 . Following the same strategy, an extra threshold per signal is added in order to include the stop consonants in the description. The signals were then discretized by fitting two thresholds: a vowel threshold v, dissociating vowels, and a consonant threshold c, dissociating vowels from consonants. A visual exploration of the signals (see Figure 1c for an example; the whole dataset is available in the Supplementary Materials) suggested rules for fitting the thresholds, which were fixed independently for each transducer signal by choosing the value that best satisfied those rules (see Methods).
Mathematical description for the discretization process
The transformation from continuous transducer signals to discrete values can be mathematically accomplished through saturating functions that rise from zero to one in a small interval around x = 0, whose size is inversely proportional to the steepness m.
In the limit of infinite steepness, the saturating function is zero for h(t) < v and one for h(t) > v; these are the conditions that define the binary coordinates for vowels, computed from the transducer signals h_L(t), h_J(t) and h_T(t) with the threshold values v_L, v_J and v_T for the lips, jaw and tongue, respectively. Plosive consonants represent articulatory activations reaching the dark areas of Figure 1c, assigned the value 2. To include them in the description, an extra saturating function with the consonant thresholds c_J, c_T and c_L was added to each coordinate, so that phonemes (either vowels or consonants) can be represented in the discrete space directly from the transducer signals.
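A minimal sketch of this discretization, assuming a logistic saturating function (the extracted text specifies only its qualitative shape: a smooth 0-to-1 transition around the threshold whose width scales as 1/m, reducing to a step for m → ∞):

```python
import numpy as np

def saturating(x, m):
    """Smooth 0 -> 1 transition around x = 0, width ~ 1/m.
    A logistic form is assumed here; for m -> inf it becomes a step."""
    with np.errstate(over='ignore'):
        return 1.0 / (1.0 + np.exp(-m * x))

def coordinate(h, v, c, m=np.inf):
    """Discrete motor coordinate from a transducer trace h(t):
    0 or 1 separated by the vowel threshold v, 2 above the
    consonant threshold c."""
    if np.isinf(m):
        return np.where(h > c, 2, np.where(h > v, 1, 0))
    return saturating(h - v, m) + saturating(h - c, m)
```

Applying `coordinate` to h_J, h_T and h_L yields the triplet indexing a phoneme; the smooth version (finite m) is what later drives the synthesizer continuously.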
Decoding performance: Intra-subject & intra-session
To perform the decoding it is necessary to define the threshold values. In this case, one set of thresholds was adjusted for each participant and session. More precisely, the first 15 VCVs of each session were used as the training set, i.e., to fix the thresholds (see Table S1 for the numerical values). The following 60 VCVs of the corresponding session were used as the test set, i.e., to calculate the decoding performance using the thresholds optimized on the training set. Figure 2a shows the confusion matrix obtained by averaging the decoding performance across participants and sessions (see Figure S1 for each participant's confusion matrix). Every phoneme is decoded with performance well above chance level. This result validates the discretization strategy and discloses a discrete encoding of the phonemic identity in the continuous vocal tract movements.
Decoding performance: Intra-subject & inter-session
The previous result leads to the question of whether thresholds can be defined for each participant, independently of variations in the device mounting across sessions. To explore this, the VCV data of all sessions were pooled together for each participant. Then 10% of the data was used to adjust the thresholds, and the performance was tested on the rest of the data. More specifically, a 50-fold cross-validation was performed over each subject's data set. Figure 2b shows the confusion matrix obtained by averaging the decoding performance across participants (see Figure S2 for the individual participants' confusion matrixes and Table S1 for the mean value and standard deviation of the 50 thresholds). The performance remained well above chance for every phoneme, with the single exception of the vowel /e/, which can be confused with /a/. As shown in Figure 1d, these two vowels are distinguished by the state of the tongue, the articulator for which the mounting of the device is most difficult to standardize.
Decoding performance: Inter-subject & inter-session
Next, the robustness of the configuration to anatomical differences among subjects was tested.
To this end, the VCV data from all sessions and participants were pooled together, and 10% of the data, with a 50-fold cross-validation, was used to fix the thresholds (see Table S1). The confusion matrix of Figure 2c represents the average values obtained from the 50-fold cross-validation. As in the previous case, the vowel /e/ was mistaken for /a/, revealing that the mounting of the magnet on the tongue needs a finer protocol. This result shows that the discretization strategy is robust even when dealing with different anatomies, suggesting that the encoding of the sounds of language in a low-dimensional discrete motor space represents a general property of the speech production system. (The corresponding threshold values are given in Tables S1 and S2, respectively, of the Supplementary Materials.)
Occupation of the consonant's free states
As pointed out before, vowels and consonants have a different status in the discrete representation: while each vowel is represented by a vertex of the cube of Figure 1d, each consonant is compatible with many states, shown as the points on the 'walls' surrounding the cube. The occupation levels of those states were explored.
The discrete state for each consonant was computed using the intra-subject and intra-session decoding, for all participants and sessions, and only the VCVs that were correctly decoded were kept for this analysis. The occupations of the different consonantal states are shown in Figure 3a.
The /b/ is defined by the lips in state 2; the tongue and the jaw are free coordinates. State 2 was not observed in the tongue and is presumably incompatible with the motor gesture of this consonant; no significant difference was found between states 0 and 1 (binomial test with equal probabilities, p = 0.1). For the jaw coordinate, state 0 is underrepresented, with an occupation of 18%, below the chance level of 1/3 (binomial test, p < 0.001). The /d/ is defined by the tongue in state 2; the lips and the jaw are free coordinates. The lips show a dominance of state 1 over state 0 (binomial test, p < 0.001), and state 0 of the jaw is significantly less populated than the others, with an occupation of 8%, lower than the chance level of 1/3 (binomial test, p < 0.001). The /g/ has free lip and tongue coordinates.
The lips show no significant difference between states 0 and 1 (binomial test with equal probabilities, p = 0.52), and state 0 was preferred for the tongue (binomial test with equal probabilities, p = 0.006).
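These occupation tests can be sketched as binomial tests against the relevant chance level (the counts below are illustrative; the paper reports only percentages and p-values, and `scipy` is a stand-in for the actual statistics software):

```python
from scipy.stats import binomtest

# e.g. jaw state 0 during /d/: ~8% occupation vs. the chance level of 1/3
n_tokens, n_state0 = 200, 16  # illustrative counts
res = binomtest(n_state0, n_tokens, p=1/3, alternative='less')
print(f"p = {res.pvalue:.2e}")  # paper reports p < 0.001
```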
A well-known effect in experimental phonology is coarticulation: the articulation of a consonant is modified by the neighboring vowels 11 . The occupation levels of the consonants as a function of their surrounding vowels were calculated (Figure 3b), and coarticulation effects were revealed. The results show that when the surrounding vowels share one of the consonant's free states, this state is transferred to the consonant. Specifically, when the previous and following vowels share the lip state, its value is inherited by the consonants with a free lip coordinate, namely /d/ and /g/ (p < 0.001 for the four binomial tests). Additionally, /b/ inherits the tongue state of the surrounding vowels when both share that state (p < 0.001 for both binomial tests), and when both vowels share tongue state 0, it is inherited by /g/ (binomial test, p < 0.001). No coarticulation was observed for the jaw: for /b/ and /d/ the jaw is homogeneously occupied by states 1 and 2, regardless of the states of the surrounding vowels. Finally, to test the intelligibility of the synthetic speech, the samples were presented to 15 participants, who were instructed to write down a VCV structure after listening to each audio file. The confusion matrixes obtained from the transcriptions are shown in Figure 4 for consonants and vowels. All values are above chance level (33% for consonants and 20% for vowels). To recover a vowel's identity, just one threshold per signal is needed; thus, the vowels are represented in the discrete motor space as the corners of a cube. Curiously, the number of vertices of the vowel cube (eight) is in agreement with the number of Cardinal Vowels 12 , a set of vocalic sounds used by phoneticians to approximate the whole set of cross-language vowels. This match suggests that the discrete motor states captured by this study could represent the basic motor gestures of vowels. Moreover, the state of each articulator's transducer corresponds to an extreme value along the two-dimensional coordinate system used by the International Phonetic Alphabet to describe vowels. Interestingly, the same discrete representation for vowels can be recovered from direct measurements of human brain activity during vocalizations 13 .
The consonants chosen for this study were /b d g/. They cause a complete occlusion of the vocal tract produced by the constriction gesture of one of the three independent oral articulator sets (lips for /b/, tongue tip for /d/ and tongue body for /g/), and they have been suggested as the basic units of the articulatory gestures 14 . Therefore, they appear as the natural candidates for studying the presence of discrete information within the continuous movements of the oral tract. Moreover, this discrete representation is a feature shared with the brain activation during speech, which represents a clear benefit for brain-computer interface applications.
From a more general point of view, this implementation represents an alternative to the prevailing strategy in the bioprosthetic field: processing large amounts of non-specific physiological data with statistical algorithms to extract features relevant for vocal instructions 23,24 . Instead, in the current approach a small set of recordings from the movements of the speech articulators, in conjunction with a threshold strategy, is used to control a biophysical model of the vocal system.
Although this approach shows potential benefits for bioprosthetic applications, further work is needed to optimize the system. On one hand, the mounting protocol for the tongue should be tightened to obtain stable thresholds across sessions. On the other, the protocol should be refined to include the full consonant set. Arguably, the manner of articulation could be integrated by including different sets of thresholds, and increasing the number of magnet-transducer sets mounted on the vocal tract could retrieve other places of articulation. Regarding the vowels, the current vocalic space is complete for the Spanish language and has the same dimension as the cardinal vowels, suggesting that it would be enough to produce intelligible speech in any language 25 .
The state of the art of the techniques used to monitor articulatory movements during speech has remained stagnant during the last decades, with some exceptions employing different technologies to measure the different articulators 24 . The standard method used to track the articulators' displacements during speech is EMA 8,9 . This technique has been proven to provide very accurate recordings [26][27][28] at the expense of being non-portable and expensive. Here, a novel method is introduced and shown to capture the identity of the uttered phoneme, to detect coarticulation effects, and to correctly drive an articulatory speech synthesizer. This device presents two main advantages: it is portable and inexpensive. The portability of the system makes it suitable for bioprosthetic applications, and, crucially, because of the low cost of its components, it could significantly improve the speech research done in developing countries.
Ethics Statements
All the participants signed a written consent to participate in the experiments, which were approved by the CEPI ethics committee of Hospital Italiano de Buenos Aires, qualified by ICH (FDA-USA, European Community, Japan), IRB00003580.
Participants
Four individuals (1 female), aged 29 ± 6 years and with no motor or vocal impairments, participated in the recordings of anatomical and speech sound data. They were all native Spanish speakers, graduate students working at the University of Buenos Aires. Fifteen native Spanish speakers (9 females) participated in the audio tests.
Experimental device for the anatomical recordings
Details of the configuration of the three magnet-transducer sets shown in Figure 1b:
Red, lips: One cylindrical magnet (3.0 mm diameter and 1.5 mm height) was glued to the dental cast between the lower central incisors. Another (5.0 mm diameter and 1.0 mm height) was fixed with medical paper tape at the center of the upper lip. The transducer was attached at the center of the lower lip. The magnets were oriented in such a way that their magnetic fields have opposite signs along the privileged axis of the transducer.
Green, jaw: A spherical magnet (5.0 mm diameter) and the transducer were glued to the dental casts, in the space between the canine and the first premolar of the upper and lower teeth respectively.
Blue, tongue: A cylindrical magnet (5.0 mm diameter and 1.0 mm height) was attached at a distance of about 15 mm from the tip of the tongue, using a small amount of denture adhesive. The transducer was glued to the dental plastic replica at the hard palate, approximately 10 mm above the upper teeth (in the sagittal plane).
The transducer wires were glued to the plastic replica and routed away to allow free mouth movements.
Articulatory synthesizer
During the production of voiced sounds, the vocal folds oscillate, producing a stereotyped airflow waveform 29 that can be approximated by relaxation oscillations 30 such as those produced by a van der Pol system. The glottal airflow is the variable u for u > 0, and u = 0 otherwise. The fundamental frequency of the glottal flow is f0 (Hz), and the oscillations' onset is attained for a > −1.
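A sketch of such a glottal source, assuming a standard van der Pol oscillator with frequency scaling (the paper's exact parametrization, including the onset parameter a, is not reproduced in this excerpt); the half-wave rectification u = max(x, 0) follows the rule quoted above:

```python
import numpy as np

def glottal_flow(f0=100.0, mu=5.0, fs=44100, dur=0.05):
    """Glottal airflow from a van der Pol relaxation oscillator (sketch):
    x'' - mu*w*(1 - x^2)*x' + w^2*x = 0, integrated with RK4;
    the airflow is u = x for x > 0 and 0 otherwise."""
    w = 2.0 * np.pi * f0
    def f(s):
        x, y = s
        return np.array([y, mu * w * (1.0 - x * x) * y - w * w * x])
    dt, s, u = 1.0 / fs, np.array([0.1, 0.0]), []
    for _ in range(int(dur * fs)):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        u.append(max(s[0], 0.0))
    return np.array(u)
```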
The pressure perturbations produced by the injection of airflow at the entrance of the tract propagate along the vocal tract. The propagation of sound waves in a pipe of variable cross-section A(x) follows a partial differential equation 31 . Approximations have been proposed to replace this equation by a series of coupled ordinary differential equations, such as the wave-reflection model [32][33][34] and the transmission-line analog 35 . Those models approximate the pipe as a concatenation of N = 44 tubes of fixed cross-section A_i and length l_i . In the transmission-line analog, the sound propagation along each tube follows the same equations as the circuit shown in Figure 5, where the current plays the role of the airflow u and the voltage the role of the sound pressure p. The flows u_1, u_2 and u_3 along the meshes displayed in Figure 5 obey the corresponding circuit equations.
Discrete states to vocal tract anatomies
The shape of the vocal tract can be mathematically described by its cross-sectional area A(x) at a distance x from the glottal exit to the mouth. Moreover, previous works 34,36,37 developed a representation in which the vocal tract shape A(x) for any vowel and plosive consonant can be expressed as a product of two factors (Eq. 5). The first factor, in square brackets, represents the shape of the vocal tract for vowels, the vowel substrate: the function Ω(x) is the neutral vocal tract, and the functions φ1(x) and φ2(x) are the first empirical modes of an orthogonal decomposition calculated over a corpus of MRI anatomical data for vowels 37 . This description of the anatomy of the vocal tract fits well with our discrete representation. A previous study 10 showed that a simple map connects the discrete space and the morphology of the vocal tract for vowels: a simple affine transformation whose numerical values were phenomenologically found to correctly map the discrete states to the vowel coefficients q1 and q2. Together, Equations 5 and 6 allow the reconstruction of the vocal tract shape of the different vowels from the transducer signals.
During plosive consonants, the vocal tract is occluded at different locations. In our description, this corresponds to having the value 2 in one or more coordinates, which means that the transducer signals cross the consonant threshold c. The saturating functions with the consonant threshold were used to control the parameter w_c of Equation 5, which controls the constriction; the corresponding equations (Eq. 7) were used to generate the consonants. The values of x_c and r_c are in units of a vocal tract segmented in 44 parts, starting from the vocal tract entrance (x_c = 1) to the mouth (x_c = 44).
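A sketch of this reconstruction, with placeholder shapes for Ω(x), φ1(x), φ2(x) and the constriction profile (the paper uses modes fitted to MRI data, which are not reproduced here):

```python
import numpy as np

def area_function(q1, q2, wc=1.0, xc=40, rc=3.0, n=44):
    """Vocal tract area A(x) on n = 44 segments from the glottal
    entrance (x = 1) to the mouth (x = 44): a vowel substrate
    Omega*(1 + q1*phi1 + q2*phi2) times a localized constriction
    controlled by wc (wc = 0 occludes the tract at xc over an
    extent rc). All profiles below are placeholders."""
    x = np.arange(1, n + 1, dtype=float)
    omega = 3.0 * np.ones(n)             # neutral tract (cm^2), placeholder
    phi1 = np.cos(np.pi * x / n)         # placeholder empirical mode 1
    phi2 = np.cos(2.0 * np.pi * x / n)   # placeholder empirical mode 2
    vowel = omega * (1.0 + q1 * phi1 + q2 * phi2)
    constr = 1.0 - (1.0 - wc) * np.exp(-((x - xc) / rc) ** 2)
    return vowel * constr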
This completes the path that goes from discretized transducer signals h J (t), h T (t) and h L (t) to the shape of the vocal tract A(x,t) for vowels and plosive consonants.
Vocal tract dynamics driven by transducers' data
To produce continuous changes in a virtual vocal tract controlled by the transducers, it is necessary to replace the infinitely steep step functions in Equations 6 and 7 by smooth transitions from 0 to 1. Therefore, the condition m = ∞ is replaced by finite steepness values m1, m2 and m3. The values used to synthesize continuous speech were m1 = 300, m2 = 300 and m3 = 900 for lips, tongue and jaw, respectively. These numerical values were manually fixed with the following constraint: applying Equation 6 to the signals recorded during the stable part of the vowels, and using the obtained (q1, q2) to synthesize speech, should produce recognizable vowels. This process is explained below.
First, the mean values of the transducer signals during the production of vowels were computed for one participant (left panel of Figure 6). More precisely, only the set of correctly decoded vowels for subject 1, using the inter-session thresholds, was selected. Second, different exploratory sets of (m1, m2, m3) were used to calculate the corresponding (q1, q2) by means of Equation 6. Then, the corresponding vocal tract shapes (A(x) in Equation 5) could be reconstructed and the vocalic sounds synthesized, from which the first two formants were extracted using Praat 38 . Each set of (m1, m2, m3) produces a different map from the sensor space to the formant space (Figure 6). The first two formants of a vocalic sound define its identity 29 ; their variability for real vocalizations of Spanish vowels is represented by the shaded areas in the right panel of Figure 6, according to previously reported results 39 . The chosen steepness values (m1 = 300, m2 = 300 and m3 = 900) map more than 90% of the transducer data into the experimental (F1, F2) regions.
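The formant check can be approximated in Python via LPC root-finding (the paper used Praat; `librosa.lpc` is used here as a stand-in, and the estimate is deliberately crude):

```python
import numpy as np
import librosa

def first_two_formants(y, fs=44100, order=12):
    """Rough (F1, F2) estimate from the LPC roots of a vowel segment."""
    a = librosa.lpc(np.asarray(y, dtype=float), order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(r) * fs / (2.0 * np.pi) for r in roots)
    freqs = [f for f in freqs if f > 90.0]  # drop near-DC roots
    return freqs[0], freqs[1]
```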
Synthetic speech
To synthesize speech, the set of equations described in the Articulatory synthesizer section was solved over the concatenated tubes, using a Runge-Kutta 4 algorithm 40 coded in C at a sampling rate of 44.1 kHz. The sound intensity of the files was equalized at 50 dB.
Fifteen participants, using headphones (Sennheiser HD202), listened to the synthetic speech trials in random order. They were instructed to write down a VCV structure after listening to each audio file. The experiment was written in Psychtoolbox 41 .
Acknowledgements
This work describes research partially funded by CONICET, ANCyT, UBA. | 5,861.8 | 2017-06-14T00:00:00.000 | [
"Physics"
] |
Software Ecosystem: Features, Benefits and Challenges
Software Ecosystem (SECO) is a new and rapidly evolving phenomenon in the field of software engineering. It is an approach through which complex relationships among companies in the software industry can be resolved. SECOs are gaining importance with the advent of the Google Android, Apple iOS, Microsoft and Salesforce.com ecosystems. It is a co-innovation approach by developers, software organisations, and third parties that share a common interest in the development of a software technology. Limited research has been done on SECOs; hence researchers and practitioners are still eager to elucidate this concept.
A systematic study was undertaken to present a review of software ecosystems to address the features, benefits and challenges of SECOs.
This paper showed that an open source development model and innovative process development were key features of SECOs, and that the main challenges of SECOs were security, evolution management and infrastructure tools for fostering interaction. Finally, SECOs fostered co-innovation, increased attractiveness for new players and decreased costs.
INTRODUCTION
The notion of ecosystems originates from ecology. Wikipedia defines an ecosystem as a natural unit consisting of all plants, animals and micro-organisms (biotic factors) in an area functioning together with all of the non-living physical components (abiotic factors) of the environment.
Although the above is an excellent definition, it is less suitable here, and therefore we start from the notion of human ecosystems. A human ecosystem consists of actors, the connections between the actors, the activities by these actors, and the transactions along these connections concerning physical or non-physical factors.
Software ecosystems (SECOs) refer to the set of businesses and their interrelationships in a common software product or service market [9]. A software ecosystem consists of the set of software solutions that enable, support and automate the activities and transactions by the actors in the associated social or business ecosystem, together with the organizations that provide these solutions [1]. This is an emergent field inspired by concepts from business and biological ecosystems [14].
Well-known examples of communities that may be seen as software ecosystems are Apple's iPhone, Microsoft, Google Android, Symbian, Ruby and Eclipse.
The ecosystem concept may refer to a wide range of configurations. Yet they all involve two fundamental elements: a network of organisations or actors, and a common interest in the development and use of a central software technology.
The software industry is constantly evolving and is currently undergoing rapid changes. Not only are products and technologies evolving quickly; many innovative companies are experimenting with new business models, occasionally leading to fundamental shifts in entire industry structures and in how firms and customers interrelate [17]. Recently, many companies have adopted the strategy of using a platform to attract a mass following of software developers as well as end-users, building entire "software ecosystems" (SECOs) around themselves, even as the business world and the research community are still attempting to gain a better understanding of the phenomenon. This paper explores the meaning of SECO, identifies the main features of Software Ecosystems (SECOs), and establishes their benefits and challenges.
II. WHAT IS THE PROBLEM
In the past few decades, we have witnessed different types of software development methodologies, ranging from waterfall, spiral, component, chaos, rapid application development and rational unified process to agile models. Almost all the models mentioned encourage development of a software product entirely within the organisation concerned.
The emergence of the Software Ecosystem (SECO) development paradigm has brought about co-innovation among different players; however, research communities and practitioners are still grasping to understand this concept. Hence, this work aims to expose what is known about software ecosystems (SECOs).
III. OBJECTIVES OF THE STUDY
The goal of the study is to carry out a systematic study of software ecosystems in order to present a wider view of what is currently known about them. The specific objectives are to: a) identify the main features of Software Ecosystems (SECOs); b) establish the benefits and challenges of SECOs.
IV. SCOPE OF THE STUDY
It is not easy to study existing Software Ecosystems (SECOs) because many SECOs are closed communities and it is hard to get access to information. Therefore, we adopted free open software ecosystems as our subject of study.
V. SIGNIFICANCE OF THE STUDY
The significance of the study is to create awareness about the emergent field of software ecosystems for research communities and practitioners, and to establish research directions for software ecosystems.
VI. REVIEW OF RELATED RESEARCH
Bosch [1] proposed a Software Ecosystem (SECO) taxonomy that identifies nine potential classes of the central software technology, as shown in Table 1 below, according to classification within two broad dimensions. The first is the category dimension, which ranges from operating systems to applications and to end-user programming. The second is the platform dimension, ranging from desktop to web and to mobile. In the Software Engineering (SE) community, studies of SECOs were motivated by the software product lines (SPLs) approach, aiming at allowing external developers to contribute to hitherto closed platforms [1]. Reference [4] opined that a potential benefit of being a member of a software ecosystem is the opportunity to exploit open innovation, an approach derived from open source software (OSS) processes where actors openly collaborate to achieve local and global benefits. External actors and the effort they put into the ecosystem may result in innovations that are beneficial not only to themselves (and their customers) but also to the keystone organisation, as this may be a very efficient way of extending and improving the central software technology as well as increasing the number of users.
According to [8], closer relationships between the organisations in an ecosystem may enable and improve active engagement of various stakeholders in the development of the central software technology.
When explaining the concept of software ecosystems it is also necessary to address how software ecosystems relate to the development of open source software [6]. There are clear similarities between these two concepts, but also several differences, which justify the definition of software ecosystems as a unique concept. The main difference relates to the underlying business model. Reference [3] explains the open-source business model as follows: "The basic premise of an open-source approach is that by 'giving away' part of the company's intellectual property, you receive the benefits of access to a much larger market. These users then become the source of additions and enhancements to the product to increase its value, and become the target for a range of revenue-generating products and services associated with the product." In a closed software ecosystem, by contrast, the intellectual property (the code) is not shared in any way.
However, different research directions indicated by the literature and industrial cases reinforce a number of important perspectives to be explored, such as architecture, social networks, modelling, business, mobile platforms and organizational-based management [9]. Besides, SECOs involve a multidisciplinary perspective, including Sociology, Communication, Economy, Business and Law. These studies are also motivated by the software vendors' routine, since vendors no longer function as independent units that can deliver separate products, but have become dependent on other software vendors for vital software components and infrastructures such as operating systems, libraries, component stores, and platforms [2].
VII. ARCHITECTURE OF MAJOR SOFTWARE ECOSYSTEMS (SECOS)
1) Symbian Software Ecosystem
In this ecosystem, shown in Figure 1, the different categories of licenses and partner relationships are as follows.
Fig. 1. Symbian Ecosystem [16]
Symbian described its network of customers and complementors as an "ecosystem". The categories of licenses and partner relationships included: system integrators or "licensees" (handset manufacturers) that integrated externally sourced software and internally developed hardware to create new devices (i.e. handsets) for sale to end users.
CPU vendors worked to ensure Symbian OS compatibility with their latest processors.
User Interface companies.
Other software developers sometimes referred to as independent software vendors (ISVs) including developers of user applications and also middleware components such as databases.
Network Operators, which in most countries were the dominant distribution channel for phones, and also decided what software components were preloaded on phones.
Enterprise software developers, for cases where a company developed Symbian compatible software for its employees that use Symbian phones.
In many cases, members of Symbian's ecosystem were also members of competing mobile phone ecosystems, such as those surrounding the Palm OS, Windows Mobile, and later Linux based platforms such as the LiMo Foundation and Google's Open Handset Alliance (Android).
2) Microsoft Software Ecosystem (SECO)
The Microsoft ecosystem consists of the following components: device manufacturers, Independent Software Vendors (ISVs), Value Added Resellers (VARs), office equipment dealers and Systems Integrators (SIs), as shown in Figure 2, and all can benefit from working together. But rarely do the ecosystem pieces remain static: new software applications are consistently being rolled out, and the VARs, dealers and SIs that sell and support these systems change with them.
Fig. 2. Microsoft Software Ecosystem [7]
Microsoft sits at the centre of the ecosystem. Ecosystems are an essential ingredient in delivering customer-focused solutions; they help drive standards, and they present revenue opportunities for all the partners involved. It is no wonder that Microsoft spends so much money on building its ecosystem. The Microsoft ecosystem of applications, partners, and highly skilled IT resources provides customers with the best choice.
3) iPhone Software Ecosystem
The iPhone ecosystem, which is one of Apple's three sub-ecosystems, consists of the following components: developers, services and advertisers, as shown in Figure 3.
4) Ruby Software Ecosystem
Ruby is a dynamic, open source programming language with a focus on simplicity and productivity. It has an elegant syntax that is natural to read and easy to write. It was created by Yukihiro Matsumoto in 1995 in Japan.
The Ruby software ecosystem consists mainly of two elements, i.e. gems and developers, with possible relationships among them. If a developer has a relationship with a gem, he is a developer of that specific gem.
Fig. 4. Ruby Software Ecosystem [11]
The entire Ruby ecosystem consists of all developers, gems and their relationships, as shown in Figure 4. Some corporate high-technology initiatives with Ruby are: Sun Microsystems, Microsoft, Apple, IBM and SAP.
5) Google Android Ecosystem
Android is a comprehensive open source platform designed for mobile devices. It is championed by Google and owned by the Open Handset Alliance, whose prominent members include T-Mobile, Motorola, Samsung, Sony Ericsson, Toshiba, Vodafone, Google, Intel and Texas Instruments. This list has grown manifold to over 80 members [5].
Android is revolutionizing the mobile space. It is a truly open platform that separates the hardware from the software that runs on it. This allows a much larger number of devices to run the same applications and creates a much richer ecosystem for developers and consumers.
One way in which Android is quite different from other platforms is the distribution of its applications. On most other platforms, such as the iPhone, a single vendor holds a monopoly over the distribution of applications. On Android, there are many different stores, or markets. Each market has its own set of policies with respect to what is allowed, how the revenue is split, and so on. As such, Android is much more of a free market space in which vendors compete for business. Figure 5 summarises the Android software stack.
6) Eclipse Ecosystem
Eclipse is an open source integrated development environment (IDE) for Java. It originally aimed to provide a unified platform for the different IDE products from IBM.
The Eclipse project, which began at the end of 1998, had the ambition to "eclipse" the leader of the IDE market. Within a few years, Eclipse evolved from a Java IDE (version 1.0) to a universal tooling platform (version 2.0), and finally to an application framework for building rich client applications (version 3.0). Commercial software development tools such as IBM Rational tools, WebSphere Studio, and Borland JBuilder have been developed based on Eclipse.
Eclipse is currently managed by the Eclipse Foundation, with over 100 members including HP, IBM, Nokia, Intel and Borland. The biggest challenge for the foundation is to cope with the rapid growth of its community.
Eclipse Ecosystem Architecture
The functional building blocks of the Eclipse IDE are illustrated in Figure 6. The C/C++ Development Tools (CDT) project is creating a fully functional C and C++ IDE for the Eclipse platform.
Plug-in Development Environment
The Plug-in Development Environment (PDE) supplies tools that automate the creation, manipulation, debugging, and deployment of plug-ins.
Java Development Tools
Java Development Tools (JDT) are the only programming language plug-ins included with the Eclipse SDK. However, other language tools are available or under development by Eclipse subprojects and plug-in contributors.
Eclipse Runtime Platform
The core runtime platform provides the most basic level of services, such as loading plug-ins, managing a registry of available plug-ins, managing resources, and providing update and help facilities.
Integrated Development Environment
The Eclipse IDE provides a common user experience across multi-language and multi-role development activities.
Web Tools Platform
The mission of the Web Tools Platform (WTP) project is to provide a generic, extensible, and standards-based tool platform that builds on the Eclipse platform and other core Eclipse technologies.
Rich Client Platform
The Eclipse Rich Client Platform (RCP) is a set of plug-ins needed to build a rich client application.
The Eclipse consortium is currently hosting eight top-level projects and over thirty sub-level open source projects. There are also countless commercial and open source Eclipse-related products, plug-ins, and distributions available on the internet. This virtual ecosystem takes care of software development, the application life cycle, data management, and business operations.
X. BENEFITS OF SOFTWARE ECOSYSTEMS
1) Fosters the success of software co-evolution and innovation inside the organizations involved and increases attractiveness for new players.
2) Decreases costs involved in software development and distribution.
3) Helps analyse and understand software architecture.
4) Supports cooperation and knowledge sharing among multiple and independent software vendors.
5) Enables better analysis of requirements and communication among stakeholders.
6) Helps to overcome the challenges during design and maintenance of distributed applications.
7) Provides help in the tasks of business identification, product architecture design and risk identification.
8) Provides information for the product line manager regarding software dependencies.

XI. CHALLENGES OF SOFTWARE ECOSYSTEMS

1) Establishing relationships between ecosystem actors and proposing an adequate representation of people and their knowledge in ecosystem modelling.
2) Several key architectural challenges, such as platform interface stability, evolution management, security and reliability.
3) Heterogeneity of software licenses and system evolution in an ecosystem, and how organizations must manage these issues to decrease risks of dependence.
4) Companies have difficulty establishing a set of resources that differentiates them from competitors.
5) Technical and socio-organizational barriers to coordination and communication of requirements in geographically distributed projects.
6) Insufficient infrastructures and tools for fostering social interaction, decision-making and development across organizations involved in both open source and proprietary ecosystems.
XII. CONTRIBUTIONS
This paper contributes to the field of software ecosystems by providing a necessary foundation for understanding how Software Ecosystems are composed, which further aids understanding of this new and expanding area of software development.
XIII. FUTURE DIRECTIONS FOR SOFTWARE ECOSYSTEMS
As with most novel approaches, this paper on SECOs has opened up possibilities for new and exciting future directions. The following areas should be investigated as future research directions and challenges for SECOs.
1) Open source ecosystems: a) How can quality be measured per developer? b) How can relationships be formed between developers? c) How can conflicts be resolved in open source ecosystems? d) How can application program interfaces (APIs) to third-party components be used?
2) Governance: a) What are the best strategies for survival in an ecosystem? b) How can the organisations involved achieve and maintain a healthy position in a SECO?
3) Analysis: a) How can an ecosystem be analysed? b) Is it possible to create models, visualizations, and large data sets for analysis?
4) Openness
Every software platform at the centre of an ecosystem has to have some degree of openness. The main research question here is how openness in software affects and influences the success of a business, given the apparent trade-off between the height of entry barriers and the number of third parties willing to participate in the ecosystem.
5) Quality: a) How can ecosystems deliver the highest-quality experience to customers in the ecosystem? b) What measures can participants take to increase quality?
XIV. CONCLUSION
This paper provided a review of SECOs and confirmed that it is an emergent field mainly inspired by studies of business and natural ecosystems. We highlighted that the SECO field needs more industrial studies to increase its body of evidence. Also, given the current state of research and practice in SECOs, we envisage the need to conduct integrative studies among research communities and industry.
Finally, the paper proposed a number of open research questions and challenges to enable scholars interested in SECOs to swiftly gain an overview of the research area and to help them in their own research endeavours.
Fig. 3. iPhone components. Developers design and implement complex interfaces smoothly and efficiently on limited hardware; C++ and Objective-C are the primary languages used. Apple has historically put very little effort into supporting developers and designers, but has stepped up efforts for the iPhone platform. Designers are crucial to the success of iPhone applications; developers simply utilise the various technologies available to give designers what they want and need to build excellent interfaces.
Fig. 5. Android software stack [13]. The entire platform is open source and royalty-free for other open source or commercial products that add new building blocks.
"Business",
"Computer Science"
] |
Why does the sign problem occur in evaluating the overlap of HFB wave functions?
For the overlap matrix element between Hartree-Fock-Bogoliubov states, there are two analytically different formulae: one with the square root of a determinant (the Onishi formula) and the other with a Pfaffian (Robledo's Pfaffian formula). The former is two-valued as a complex function and hence leaves the sign of the norm overlap undetermined (the so-called sign problem of the Onishi formula). The latter, on the other hand, does not suffer from the sign problem. The derivations of these two formulae are so different that it is obscured why the resultant formulae possess different analytical properties. In this paper, we discuss the reason for this difference by means of a consistent framework based on the linked cluster theorem and the product-sum identity for the Pfaffian. Through this discussion, we elucidate the source of the sign problem in the Onishi formula. We also point out that different summation methods for series expansions may result in analytically different formulae.
I. INTRODUCTION
The Hartree-Fock-Bogoliubov (HFB) theory gives a simple but profound basis for the nuclear many-body problem, in which the competition between nuclear pairing and deformation plays a primary role in determining the ground state as well as the excited states. In particular, a combination of the HFB method with the technique of angular momentum projection allows direct comparison between theoretical calculations and experimental data. The projected HFB states can produce more elaborate and accurate calculations while the simplicity of the HFB wave functions is kept from a mean-field point of view. In this way, not only the simple HFB state but also superpositions of different HFB states (i.e., projected HFB states) have been extensively used for nuclear-structure studies. Behind this success of the HFB theory, there was a hidden problem concerning the overlap matrix element between HFB states.
Half a century ago, a formula for the overlap matrix element was derived by Onishi and Yoshida [1] and is called the Onishi formula [2]. To derive the Onishi formula, we begin with the Thouless representation [2], [3] of the HFB wave functions $|\phi^{(k)}\rangle$ ($k = 0, 1$), defined as
$$|\phi^{(k)}\rangle = \exp\!\Big(\tfrac{1}{2}\sum_{ij} Z^{(k)}_{ij}\, c^\dagger_i c^\dagger_j\Big)\,|-\rangle, \qquad (1)$$
where the $c^\dagger$'s are the creation operators and $|-\rangle$ is the bare Fermion vacuum with $c_i|-\rangle = 0$ ($i = 1, \cdots, N$). The dimension of the Fermion single-particle space is $N$, and $Z^{(k)}$ is an $N \times N$ complex skew-symmetric matrix. The Thouless representation is a specific one of the Bogoliubov quasiparticle states; in this representation, the overall phase is fixed for the two HFB wave functions $|\phi^{(0)}\rangle$ and $|\phi^{(1)}\rangle$, respectively, as in Refs. [4], [5]. The overlap matrix element between these two HFB wave functions is defined as
$$\langle\phi^{(0)}|\phi^{(1)}\rangle, \qquad (2)$$
which can be expressed as
$$\langle\phi^{(0)}|\phi^{(1)}\rangle = \big[\det\big(I + Z^{(0)\dagger} Z^{(1)}\big)\big]^{1/2}. \qquad (3)$$
This formula is known as the Onishi formula [1]. Due to the square-root function, the Onishi formula is two-valued and does not give a definite sign if the $Z$'s are complex matrices. This indefiniteness of the sign assignment is referred to as the sign problem of the Onishi formula, which becomes quite serious in the application of the full angular momentum projection. So far, several approaches are known to remedy the problem [4], [5], [6], [7], [8].
Among them, Robledo [5] has recently derived an alternative, ambiguity-free formula for the overlap matrix element in terms of the Pfaffian,
$$\langle\phi^{(0)}|\phi^{(1)}\rangle = s_N\,\mathrm{Pf}\begin{pmatrix} Z^{(1)} & -I \\ I & -Z^{(0)*} \end{pmatrix}, \qquad (4)$$
where $s_N = (-1)^{N(N+1)/2}$ and $I$ is the $N \times N$ identity matrix. This formula is proved with rather advanced techniques, namely the fermion coherent state and the Grassmann integral. His proof is mathematically very elegant and interesting [5]. Moreover, these techniques have led to relations with the generalized Wick's theorem and related topics [9], [10], [11], [12], [13]. The proof by Robledo is, however, rather abstract, and it somewhat keeps us from an intuitive understanding of why the sign ambiguity disappears.
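For comparison, a small sketch of Robledo's formula (again not from the paper itself; the recursive Pfaffian below is a naive Laplace-type expansion adequate only for small matrices, and the block structure follows the convention of Eq. (4)):

```python
import numpy as np

def pfaffian(a):
    """Pfaffian via expansion along the first row; exponential cost, small n only."""
    n = a.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0
    total = 0.0
    for j in range(1, n):
        rest = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j + 1) * a[0, j] * pfaffian(a[np.ix_(rest, rest)])
    return total

# Reusing Z0, Z1 and N from the sketch above:
I = np.eye(N)
M = np.block([[Z1, -I], [I, -Z0.conj()]])
s_N = (-1) ** (N * (N + 1) // 2)
overlap_pf = s_N * pfaffian(M)
# overlap_pf agrees with +/- sqrt(det(I + Z0^dagger Z1)) in magnitude,
# but carries a definite sign: no branch of a square root is involved.
print(overlap_pf)
```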
In the present paper, we derive both formulae, Eqs. (3) and (4), directly from Eq. (2) and elucidate the origin of the sign problem. First, we expand the exponential operators in Eqs. (1) and (2). After handling the vacuum expectation values of the products of creation and annihilation operators, the overlap matrix element can, in principle, be expressed as a polynomial in the matrix elements of $Z$. The overlap is, therefore, single-valued.
Next, we consider two summation methods. We show that an expansion of the HFB wave function in Eq. (1) can be expressed by Pfaffians and that the overlap matrix element is thereby revealed as a finite series of product-sums of Pfaffians. This finite series can be summed up into Robledo's Pfaffian formula in Eq. (4). By this derivation, Robledo's Pfaffian formula turns out to be manifestly single-valued and free of the sign problem. We also show that the other summation method, based on the linked cluster theorem [14], leads to the Onishi formula. We demonstrate that this summation over the connected diagrams involves an infinite series, in sharp contrast to the original finite series, and that it gives rise to the square root function in the Onishi formula. We also clarify that the skew-symmetric property of the Thouless matrix $Z$ can remove the sign problem from the Onishi formula completely.
The present paper is organized as follows. In Sec. II, we show a basic structure of the overlap matrix element through the series expansion of the HFB wave functions and present an alternative derivation of Robledo's Pfaffian formula. In Sec. III, we show a relation between the Onishi formula and the linked cluster theorem, and we discuss the origin of the sign problem. In Sec. IV, we give a conclusion. In the appendices, we summarize useful identities concerning the Pfaffian and show the derivation of the connected term.
II. OVERLAP FORMULA WITH THE PFAFFIAN
A. Basic structure of the overlap matrix element

First, we show the basic mathematical structure of the overlap matrix element by expanding the exponential operators in Eq. (2). Defining the pair-annihilation and pair-creation operators $\hat{A}$ and $\hat{B}$ as
$$\hat{A} \equiv \tfrac{1}{2}\sum_{ij} Z^{(0)*}_{ij}\, c_j c_i, \qquad \hat{B} \equiv \tfrac{1}{2}\sum_{ij} Z^{(1)}_{ij}\, c_i^{\dagger} c_j^{\dagger},$$
the HFB wave functions are written as $\langle\phi^{(0)}| = \langle -|e^{\hat{A}}$ and $|\phi^{(1)}\rangle = e^{\hat{B}}|-\rangle$, and the overlap matrix element is simply
$$\langle\phi^{(0)}|\phi^{(1)}\rangle = \langle -|e^{\hat{A}} e^{\hat{B}}|-\rangle.$$
By expanding the exponential operator, the HFB wave function can be written as
$$|\phi^{(1)}\rangle = \sum_{k=0}^{N/2} \frac{1}{k!}\hat{B}^k|-\rangle,$$
where this series expansion terminates at order $N/2$ because the number of single-particle states, namely the dimension of the matrices $Z^{(0)}$ and $Z^{(1)}$, is $N$. The overlap matrix element is rewritten as
$$\langle\phi^{(0)}|\phi^{(1)}\rangle = \sum_{k=0}^{N/2} \langle -|\frac{\hat{A}^k}{k!}\,\frac{\hat{B}^k}{k!}|-\rangle. \qquad (10)$$
Note that $\langle -|\frac{\hat{A}^l}{l!}\frac{\hat{B}^k}{k!}|-\rangle$ vanishes unless $l = k$, because $\hat{A}$ and $\hat{B}$ are pair-annihilation and pair-creation operators, respectively.
Next, we investigate an expanded form of $\frac{1}{k!}\hat{B}^k|-\rangle$. The $\hat{B}^k$ operator is generally expressed by $2k$ creation operators with coefficients given in terms of the matrix elements of $Z^{(1)}$ (Eq. (11)). As the $\hat{A}^k$ operator is expanded similarly, the overlap matrix element can be straightforwardly expressed as a function of the matrix elements of $Z^{(0)*}$ and $Z^{(1)}$ (Eq. (12)). The vacuum matrix element in Eq. (12) imposes intricate restrictions on the indices $p$, $q$, $p'$, $q'$ through the contractions, and the resulting formula has a very complicated structure in terms of the matrix elements of $Z^{(0)*}$ and $Z^{(1)}$. It is, however, quite evident that the overlap matrix element is a polynomial in the matrix elements of $Z^{(0)*}$ and $Z^{(1)}$ and therefore has no sign ambiguity.
In the subsequent subsections, we will show that Eq. (12) can be rewritten by the Pfaffians and will directly derive Robledo's Pfaffian formula from Eq. (12). Furthermore, in the next section, by handling Eq.(12) with the linked cluster theorem, we will derive the Onishi formula.
B. Overlap formula with product-sum of the Pfaffians
In this subsection, we rewrite Eq. (12) by investigating the detailed structure of Eq. (11).
For example, the 2nd-order term is expressed with coefficients in which the factor $2!$ is removed owing to the additional condition $p_1 < p_2$. Now let us change the integer indices $p_1, q_1, p_2, q_2$ to new indices $n_1, n_2, n_3, n_4$ with $n_1 < n_2 < n_3 < n_4$, and obtain the coefficients $b_{\{n\}}$ in the corresponding form, where the $n$'s run from 1 to $N$ under the condition $n_1 < n_2 < n_3 < n_4$. These two kinds of indices obey different conditions, and the relation between them is classified into three cases. Therefore, using Eq. (A4), the coefficients $b_{\{n\}}$ can be given in terms of the Pfaffian. In general, the $\frac{1}{k!}\hat{B}^k$ operator is expressed by $2k$ creation operators with coefficients in terms of the matrix elements of $Z^{(1)}$, where the summations are performed under the restrictions $p_1 < q_1, \cdots, p_k < q_k$ and $p_1 < \cdots < p_k$; the $k!$ is removed owing to the additional condition $p_1 < \cdots < p_k$. By the same procedure, we change the integer indices $p_1, q_1, \cdots, p_k, q_k$ (with $p_1 < q_1, \cdots, p_k < q_k$ and $p_1 < \cdots < p_k$) to $2k$ distinct integer indices $n_1, \cdots, n_{2k}$ ($n_1 < \cdots < n_{2k}$). As this condition is the same as in Eqs. (A2) and (A3), we can introduce the Pfaffian, where the $n$'s run from 1 to $N$ under the restriction $n_1 < \cdots < n_{2k}$ and the $m \times m$ skew-symmetric matrix $Z^{(1)}_m$ is defined as the corresponding sub-matrix. The HFB wave functions are then expressed in terms of Pfaffians as in Eq. (19), where $i$ takes 0 or 1; that is, the expansion of an HFB wave function can generally be written with Pfaffians. The overlap matrix element $\langle\phi^{(0)}|\phi^{(1)}\rangle$ is thereby given in terms of the Pfaffians. By introducing an index set $I = \{n_1, n_2, \cdots, n_{2t}\}$ ($n_1 < \cdots < n_{2t}$), we can define the sub-matrix $Z^{(1)}_{2t}$ as $Z^{(1)}_I$, whose elements are $(Z^{(1)}_I)_{ij} = Z^{(1)}_{n_i n_j}$, where $i, j$ run from 1 to $2t$. With this notation, the overlap matrix element is compactly expressed as
$$\langle\phi^{(0)}|\phi^{(1)}\rangle = \sum_{t} \sum_{I \in I^{N}_{2t}} \mathrm{Pf}\big(Z^{(0)*}_{I}\big)\,\mathrm{Pf}\big(Z^{(1)}_{I}\big), \qquad (22)$$
where $I^{N}_{2t}$ is the set consisting of all subsets with $2t$ elements of $\{1, 2, 3, \cdots, N\}$. Next, we show that the Pfaffians in Eq. (22) can be combined into a single Pfaffian, thanks to the product-sum identity of the Pfaffian, Eq. (B1) in Appendix B. We define two skew-symmetric matrices $P$ and $Q$ in Eq. (23). The l.h.s. of Eq. (B1) then becomes Robledo's Pfaffian, up to the factor $s_N$. As the dimension of $Z^{(0)}$ and $Z^{(1)}$ is $N$, we have $m = 2N$ in Eq. (B1). The r.h.s. of Eq. (B1) becomes Eq. (25), where $I^{2N}_{2r}$ is the set consisting of all subsets with $2r$ elements of $\{1, 2, \cdots, 2N\}$. Below, we show that Eq. (25) reduces to Eq. (22) by the Pfaffian identities and a classification of the indices in Eq. (25).
As the matrices $P$ and $Q$ have a bipartite structure, we divide the index set $I$ into two parts, $I_1$ and $I_0$, and classify $I$ by its symmetry with respect to this bipartite structure. One class is symmetric with respect to $I_1$ and $I_0$ and is denoted $I_s$: an $I_s$ with $2r$ elements is the union of $I_1 = \{l_1, l_2, \cdots, l_r\}$ and $I_0 = \{l_1 + N, l_2 + N, \cdots, l_r + N\}$; $\{1, 2, 5, 6\}$ is an example of an $I_s$. The other class is asymmetric and is denoted $I_a$; examples are $\{1, 3, 7, 8\}$ and $\{2, 4, 5, 8\}$. Only the symmetric sets $I_s$ with even integer $r$ contribute to Eq. (25); the other sets, namely the symmetric sets $I_s$ with odd integer $r$ and the asymmetric sets $I_a$, give no contribution.
In the former case, an $I_s$ with $2r$ elements ($r$ odd), the sub-matrices of $Z^{(1)}$ and $Z^{(0)*}$ have odd dimension and their Pfaffians vanish, where we use Eq. (A7). Therefore, the number of elements of an $I_s$ giving a non-vanishing Pfaffian is $4t$ ($t$ an integer).
In the latter case, if the numbers of elements of $I_{a1}$ and $I_{a0}$ are different, $\mathrm{Pf}(Q_{I_a}) = 0$ because the corresponding block of $Q_{I_a}$ is not a square matrix. For example, take the case of $\{1, 2, 4, 5\}$ for $N = 4$ and $r = 2$: the numbers of elements of $I_{a1}$ and $I_{a0}$ are 3 and 1, respectively, so the block is $1 \times 3$ and is not square. If the numbers of elements of $I_{a1}$ and $I_{a0}$ are the same, say $2t$, then $\mathrm{Pf}(Q_{I_a}) = 0$ due to the asymmetry of the indices. For instance, take the case of $\{1, 3, 7, 8\}$ for $N = 4$ and $r = 2$, whose complementary set is $\{2, 4, 5, 6\}$. The corresponding matrix is
$$\begin{pmatrix} q_{2,2} & q_{2,4} & q_{2,5} & q_{2,6} \\ q_{4,2} & q_{4,4} & q_{4,5} & q_{4,6} \\ q_{5,2} & q_{5,4} & q_{5,5} & q_{5,6} \\ q_{6,2} & q_{6,4} & q_{6,5} & q_{6,6} \end{pmatrix},$$
whose Pfaffian is zero because of the definition of $Q$, Eq. (23). In general, $Q_{I_a}$ has the bipartite form of Eq. (28), in which the diagonal block matrices are zero and the off-diagonal block matrix is denoted by $C$. By the identity Eq. (A8), its Pfaffian reduces to $(-1)^{m(m-1)/2}\,\mathrm{Det}[C]$, where $m = N - 2t$. Let $I_{a1}$ and $I_{a0}$ be $\{i_1, i_2, \cdots, i_m\}$ ($i_1 < i_2 < \cdots < i_m$) and $\{j_1 + N, j_2 + N, \cdots, j_m + N\}$ ($j_1 < j_2 < \cdots < j_m$), respectively. As some indices coincide and others differ, we re-sort the indices as $\{i_1, i_2, \cdots, i_m\} \to \{i'_1, \cdots, i'_k, \cdots, i'_m\}$ and similarly for the $j$'s. In this representation, the matrix $C$ acquires a block structure with a vanishing block, while the diagonal block matrices in Eq. (28) remain zero. Therefore, $\mathrm{Pf}(Q_{I_a}) = 0$ is proved. Next we consider $I_s$ with $4t$ elements ($t$ an integer), which consists of $I_{s1} = \{l_1, l_2, \cdots, l_{2t}\}$ and $I_{s0} = \{l_1 + N, l_2 + N, \cdots, l_{2t} + N\}$; for instance, $\{1, 2, 5, 6\}$ for $N = 4$ and $t = 1$ ($r = 2$). The Pfaffian of $P$ restricted to $I_s$ factorizes into Pfaffians of the sub-matrices of $Z^{(1)}$ and $Z^{(0)*}$, while the Pfaffian of $Q$ over the complementary indices reduces, by Eq. (A8) with the $(N - 2t) \times (N - 2t)$ identity matrix $I_{I_{s1}}$, to a sign factor. Thus, the product-sum identity of the Pfaffian is reduced to Eq. (32). As $|I_s| = 2|I_{s1}| + 2Nt$, we have $(-1)^{|I_s|} = 1$, and the sign of the r.h.s. of Eq. (32) becomes $(-1)^{\frac{1}{2}N(N+1)}$, which is just Robledo's $s_N$ [5]. Therefore, the obtained overlap matrix element agrees with Robledo's Pfaffian expression,
$$\langle\phi^{(0)}|\phi^{(1)}\rangle = s_N\,\mathrm{Pf}\begin{pmatrix} Z^{(1)} & -I \\ I & -Z^{(0)*} \end{pmatrix}. \qquad (33)$$
Thus, we have algebraically derived Robledo's Pfaffian formula by summing the expansion terms with the product-sum identity of the Pfaffian. As this expansion forms a finite series and is a polynomial, it is evident that its summation is also single-valued and that the obtained formula has no sign problem.
III. OVERLAP FORMULA WITH THE DETERMINANT
The Onishi formula was first obtained by Onishi and Yoshida [1], and Onishi and his collaborators used the linked cluster expansion [14] for the double-variational method [15], [16]. Here, we derive the Onishi formula based on the linked cluster theorem [14], which is more standard from the viewpoint of quantum many-body theory. In particular, we focus on the origin of the square-root function.
A. Expansion of overlap matrix element with the contractions
Let us begin with the overlap matrix element in Eq. (2), which is rewritten as
$$\langle\phi^{(0)}|\phi^{(1)}\rangle = \langle e^{\hat{A}} e^{\hat{B}}\rangle = \sum_{k=0}^{N/2} \Big\langle \frac{\hat{A}^k}{k!}\,\frac{\hat{B}^k}{k!}\Big\rangle, \qquad (34)$$
where $N$ is the dimension of the model space. This is the same equation as Eq. (10), but here we denote a vacuum expectation value simply as $\langle\hat{O}\rangle \equiv \langle -|\hat{O}|-\rangle$, where $\hat{O}$ is an arbitrary operator. As Eq. (34) forms a finite series, it appears difficult for a square-root function to arise from it. However, by evaluating Eq. (34) with contractions, we naturally obtain an infinite series. Before discussing the general term, we explicitly show the terms for $k = 0 \sim 3$. The 0th-order term is unity.
For $k = 1$, taking the contractions, we obtain the 1st-order term, where $Y \equiv Z^{(0)*} Z^{(1)}$. The contribution from the connected diagrams alone is denoted by the suffix "c" attached to the expectation value; in this notation, the 1st-order term is written in terms of the 1st power of $Y$, denoted $Y^1$ for later convenience. The 2nd-order term splits under the contractions into two groups. One is a disconnected term, a product of 1st-order connected terms, in which $2!$ counts the number of repeated diagrams. The other is a connected term, whose coefficient $\frac{1}{-2\cdot2}$ is explained in Appendix C. The 2nd-order term is given by the sum of the two. The 3rd-order term has two kinds of disconnected terms — the cube of the 1st-order connected term and the product of the 2nd-order connected term with the 1st-order one — and one connected term, whose coefficient $\frac{1}{-2\cdot3}$ is explained in Appendix C. The 3rd-order term is therefore given by their sum (Eq. (44)). In general, the $k$-th order term has several disconnected terms and one connected term.
The disconnected terms are products of lower-order connected terms, such as $\langle \hat{A}\hat{B}\rangle_c^k$; in general, the $q$-th connected term appears with the power $p_q$. The $k$-th order term is thereby expressed as Eq. (46), in which the connected term is given by Eq. (47), whose coefficient $\frac{1}{-2k}$ is explained in Appendix C.
B. Onishi formula via the linked cluster theorem
According to the linked cluster theorem, the logarithm of the overlap matrix element is expressed in terms of its connected diagrams (Eq. (48)). The contribution of the connected diagrams to the overlap matrix element is given by the summation of all connected terms, Eq. (47), as Eq. (49). This expression becomes an infinite series: although the full $k$-th order terms — each a sum of connected and disconnected pieces, as shown in Eq. (46) — are always zero for $k > N/2$ due to Fermi statistics, the individual connected terms are not. As a result, the overlap matrix element is also expressed by an infinite series through Eq. (48), although Eq. (34) is a finite series. This fact means that the overlap matrix element can be expressed in two analytically different ways.
Next, we continue to investigate Eq. (49). The eigenvalues of the $Y$ matrix are denoted as $e_i$ ($i = 1, \cdots, N$), so that $\mathrm{Tr}(Y) = \sum_{i=1}^{N} e_i$, and the summation of the connected terms is shown as Eq. (50). Here we consider the complex logarithmic function $\ln(1 - z)$ through its power series $\ln(1 - z) = -z - \frac{z^2}{2} - \frac{z^3}{3} - \cdots$, which has the convergence radius $|z| < 1$. For the domain beyond the convergence radius, the logarithmic function can be defined by analytic continuation, except at the singularity $z = 1$; as the logarithmic function diverges at $z = 1$, this point cannot be defined and is a singularity. The existence of this singularity was reported as the nodal line (a collection of zeros of the norm overlap) in the numerical investigation of Ref. [7], where it was found to be a major obstacle in the implementation of the Hara-Hayashi-Ring method [6].
Defining $\xi \equiv \det(I - Y) = \prod_{i=1}^{N}(1 - e_i)$, the summation in Eq. (50) takes the form $\frac{1}{2}\,\mathrm{pv.}\ln\xi$, where pv. means the principal value, and the square root function appears. For the inverse function of the logarithm as a complex function, that is, $e^{\ln z} = z$, the infinitely multivalued nature of the complex logarithm vanishes. In the present case, however, due to the factor $\frac{1}{2}$, we have to discriminate between the sheets of the Riemann surface for $\mathrm{pv.}\ln\xi$, which yields
$$\langle\phi^{(0)}|\phi^{(1)}\rangle = e^{\frac{1}{2}\mathrm{pv.}\ln\xi} = \sqrt{\det(I - Y)}. \qquad (51)$$
This is the well-known form of the Onishi formula [1]. Clearly, this expression contains the square root function, and it suffers from the sign problem. As discussed above, the origin of the square root is the infinite series expansion over the connected diagrams, which results in the logarithm with the factor $\frac{1}{2}$. Now we go back to Eq. (50), where we use the eigenvalues $e_i$ of the $Y$ matrix. As this $Y$ matrix is a product of two skew-symmetric matrices, its eigenvalues are doubly degenerate [17]. Therefore, we can rewrite Eq. (50) with the eigenvalues denoted pair-wise as $\{e_i, e_i\}$ ($i = 1, \cdots, N/2$) and $\tilde\xi \equiv \prod_{i=1}^{N/2}(1 - e_i)$. This double degeneracy of the eigenvalues was also rediscovered by K. Neergård and E. Wüst [4], who proved it indirectly. The doubly degenerate nature cancels the $\frac{1}{2}$ factor of the logarithm in Eq. (50), which is the direct origin of the sign problem. Eq. (51) can thereby be changed to
$$e^{\langle e^{\hat{A}} e^{\hat{B}}\rangle_c} = e^{\ln\tilde\xi} = \tilde\xi. \qquad (54)$$
Therefore, the skew-symmetric property of the $Z^{(0)}$ and $Z^{(1)}$ matrices, which comes from Fermi statistics, fundamentally and completely removes the sign problem from the Onishi formula. Thus, paying attention to this double degeneracy of the eigenvalues, we have directly derived the sign-problem-free version of the Onishi formula from the definition of the overlap matrix element, Eq. (2).
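The double degeneracy itself is easy to check numerically (a minimal sketch under the same conventions as above; the matrix size is arbitrary but even):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6  # even dimension

def random_skew(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return a - a.T

Z0, Z1 = random_skew(N), random_skew(N)
Y = Z0.conj() @ Z1                       # product of two skew-symmetric matrices
print(np.sort_complex(np.linalg.eigvals(Y)))
# The eigenvalues come out in identical pairs {e_i, e_i}, so
# det(I - Y) = prod over N/2 pairs of (1 - e_i)^2, and its square
# root, prod (1 - e_i), needs no branch choice.
```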
IV. SUMMARY
We investigated two kinds of analytically different formulae for the overlap matrix elements between HFB wave functions. One is the Onishi formula, derived half a century ago [1]; this formula is, in general, two-valued as a complex function and has the sign problem. The other is Robledo's Pfaffian formula [5], which was derived recently; this formula is single-valued and free of the sign problem. It is theoretically interesting to investigate why there exist two analytically different formulae and why the sign problem occurs only in the Onishi formula.
To understand both formulae more deeply, we began with the Thouless representation [2], [3] of the HFB wave function in Eq. (1) and the overlap matrix element in Eq. (2). By a naive series expansion of the exponential operators in Eq. (2), the overlap matrix element can, in principle, be expressed as a polynomial in the matrix elements of the $Z$'s due to Fermi statistics, as shown in Eq. (12). The overlap is therefore essentially single-valued, although such a simple expansion does not by itself give rise to any useful formula. Hence, we investigated various summation methods for the series expansion.
First, by expanding the exponential operator in the HFB wave function in Eq.(1), we found that the HFB wave function can be expressed with the Pfaffians in Eq. (19), which is a finite series due to Fermi statistics. The overlap matrix element can be, thereby, rewritten by the product-sum form of the Pfaffians in Eq.(22), which is also a finite series as the naive expansion. Thanks to the product-sum identity of the Pfaffian Eq.(B1), we can sum up these Pfaffians, and finally, we succeeded in algebraically deriving Robledo's Pfaffian formula [5]. This derivation shows a relation between the finite series expansion and Robledo's Pfaffian formula [5], as well as its single-valued property and the sign-problem-free nature.
Next, starting with the overlap matrix element in Eq. (2), we evaluated the Onishi formula with the linked cluster expansion [14]. We investigated the summation procedure in the series expansions of the overlap matrix element in detail, where an infinite series of connected diagrams shows up. This infinite summation can alter the analytical property. In fact, the summation over the connected diagrams leads to the logarithm with the factor $\frac{1}{2}$. As the overlap matrix element is given by the exponential function of the summation of the connected diagrams, this factor $\frac{1}{2}$ results in the square root function, which can be considered the origin of the sign problem. We also pointed out that the sign problem is completely clarified by the mathematical fact that the eigenvalues of a product of two skew-symmetric matrices are always doubly degenerate [17]. More details and further considerations about this aspect are to be discussed elsewhere [18]. This double degeneracy removes the factor $\frac{1}{2}$, and the sign problem disappears. Finally, through this study, we exhibited a case in which the linked cluster theorem gives an analytic expression for a matrix element that differs from its original analytic property. It may be interesting to find other cases of sign problems caused by the use of the linked cluster theorem.
Appendix A: Useful identities for the Pfaffian. For an $n \times n$ ($n$ odd) skew-symmetric matrix, $\mathrm{Pf}[A] = 0$. For a $2 \times 2$ skew-symmetric matrix, $\mathrm{Pf}[A] = a_{12}$. For a $4 \times 4$ skew-symmetric matrix,
$$\mathrm{Pf}[A] = a_{12}a_{34} - a_{13}a_{24} + a_{14}a_{23}. \qquad (A4)$$
For a skew-symmetric matrix $A$ with dimension $2n \times 2n$, the following relations hold:
$$\mathrm{Pf}[A]^2 = \mathrm{Det}[A] \qquad \text{and} \qquad \mathrm{Pf}\big[Q A Q^{T}\big] = \mathrm{Det}[Q]\,\mathrm{Pf}[A],$$
where $Q$ is an arbitrary $2n \times 2n$ matrix. The Pfaffian of a skew-symmetric block-diagonal matrix $A$ with dimension $2n \times 2n$ is the product of the Pfaffians of the $n \times n$ sub-matrices $A_1$ and $A_2$:
$$\mathrm{Pf}\begin{bmatrix} A_1 & 0 \\ 0 & A_2 \end{bmatrix} = \mathrm{Pf}[A_1]\,\mathrm{Pf}[A_2]. \qquad (A7)$$
For a special block skew-symmetric matrix with $n \times n$ sub-matrix $C$, the following identity holds:
$$\mathrm{Pf}\begin{bmatrix} 0 & C \\ -C^{T} & 0 \end{bmatrix} = (-1)^{n(n-1)/2}\,\mathrm{Det}[C]. \qquad (A8)$$
Appendix B: Product-sum identity of the Pfaffian. In the product-sum identity, Eq. (B1), $I = \{i_1, i_2, \cdots, i_{2r}\}$ is a subset with $2r$ elements of $\{1, 2, \cdots, m\}$, and $|I|$ means the sum of the elements, that is, $|I| = i_1 + i_2 + \cdots + i_{2r}$. $\bar{I}$ is the complementary set of $I$ with respect to $\{1, 2, \cdots, m\}$. The proof is given in papers in pure mathematics [19] and [20]. | 6,265 | 2017-11-01T00:00:00.000 | [
"Mathematics"
] |
Elevation of circulating fatty acid-binding protein 4 is independently associated with left ventricular diastolic dysfunction in a general population
Background Fatty acid-binding protein 4 (FABP4) is expressed in both adipocytes and macrophages. Recent studies have shown secretion of FABP4 from adipocytes and association of elevated serum FABP4 levels with obesity, insulin resistance, hypertension, and atherosclerosis. However, little is known about the role of FABP4 in cardiac function. Methods From the database of the Tanno-Sobetsu Study, data for 190 subjects (male/female: 82/108) who were not treated with any medication and underwent echocardiography in 2011 or 2012 were retrieved for analyses of the relationships between serum FABP4 concentration, metabolic markers and echocardiographic parameters. Results Serum FABP4 level was positively correlated with age, body mass index (BMI), blood pressure (BP), LDL cholesterol, HOMA-R and mean left ventricular (LV) wall thickness (LVWT; males: r = 0.315, females: r = 0.401, p < 0.01) and was negatively correlated with HDL cholesterol, estimated glomerular filtration rate (eGFR) and peak myocardial velocity during early diastole (e'; males: r = −0.434, females: r = −0.353, p < 0.01), an index of LV diastolic function. However, no significant correlation was found between FABP4 level and LV end-diastolic dimension, LV ejection fraction or LV mass index. There were significant correlations of e' with age, BMI, BP, eGFR, brain natriuretic peptide (BNP), FABP4, metabolic markers and LVWT. Multivariate regression analysis adjusted for HOMA-R, BMI, eGFR, BNP or LVWT in addition to age, gender and BP revealed that serum FABP4 concentration was independently correlated with e'. Conclusions Elevation of circulating FABP4 may contribute to LV diastolic dysfunction in a general population.
Background
Fatty acid-binding proteins (FABPs) are a group of intracellular lipid chaperones that coordinate lipid responses in cells [1,2]. FABPs are about 14-15-kDa proteins that can reversibly bind hydrophobic ligands, such as saturated and unsaturated long chain fatty acids, with high affinity [1,2]. FABPs have been proposed to facilitate the transport of lipids to specific compartments in the cell. Among FABPs, fatty acid-binding protein 4 (FABP4), known as adipocyte FABP (A-FABP) or aP2, is expressed in adipocytes, macrophages and capillary endothelial cells [1][2][3]. Emerging evidence indicates that FABP4 acts at the integration between metabolic and inflammatory pathways and plays an important role in the development of insulin resistance and atherosclerosis [4][5][6]. It has also been demonstrated in experimental models that chemical inhibition of FABP4 could be a therapeutic strategy against insulin resistance, diabetes mellitus, fatty liver disease and atherosclerosis [7].
Adipose tissue is now known to secrete a variety of bioactive molecules called adipokines, such as tumor necrosis factor-α (TNFα), leptin and adiponectin, which are implicated in a wide range of biological phenomena.
Interestingly, recent studies have shown that FABP4 is secreted from adipocytes [8,9], though there are no typical signal peptides for secretion in the sequence of FABP4 [1]. It has also been demonstrated that secretion of FABP4 is via a non-classical secretion pathway and that FABP4 acts as an adipokine for the development of hepatic insulin resistance [9]. Furthermore, elevated serum concentration of FABP4 has been shown to be associated with obesity, insulin resistance, hypertension and atherosclerosis [8][9][10][11][12].
Obesity is a risk factor for several kinds of cardiac insults, such as left ventricular (LV) hypertrophy, LV diastolic dysfunction and heart failure with preserved or reduced ejection fraction. It has been suggested that several adipokines provide a direct pathophysiological link between enlarged adipose tissue and obesity-associated cardiac dysfunction [13]. However, little is known about the relationship between circulating FABP4 and cardiac function, especially in a general population. Therefore, we hypothesized that increase in serum FABP4 reflects LV diastolic dysfunction as an early stage of cardiac insults in a general population. To address this hypothesis, we conducted a study to investigate the cross-sectional associations between serum FABP4 concentration and several echocardiographic parameters in subjects who had not regularly taken any medications.
Study population
The Tanno-Sobetsu Study is a study with a population-based cohort design recruiting residents of two rural towns, Tanno and Sobetsu, in Hokkaido; it includes annual health examinations, pathophysiological assessment of metabolic syndrome and cardiovascular disease, and a follow-up survey. A total of 357 female subjects (mean age: 66 ± 13 years) in 2011 and 277 male subjects (mean age: 66 ± 13 years) in 2012 received annual examinations in Sobetsu Town. Female and male participants in 2011 and 2012, respectively, were invited to receive echocardiographic examinations. Subjects who were being treated with any regular medications for diseases were excluded. Other exclusion criteria were atrial fibrillation and conduction abnormalities, such as left bundle branch block, on electrocardiogram, or severe valvular disease and left ventricular hypertrophy (wall thickness > 12.5 mm) on echocardiogram. A total of 190 subjects who underwent echocardiography (male/female: 82/108, mean age: 63 ± 13 years) contributed to the present analyses. This study conformed to the principles outlined in the Declaration of Helsinki and was performed with the approval of the Ethical Committee of Sapporo Medical University. Written informed consent was received from all of the subjects.
Measurements
Medical check-ups were performed between 06:00 h and 09:00 h after an overnight fast. After measuring anthropometric parameters, blood pressure was measured twice consecutively on the upper arm using an automated sphygmomanometer (HEM-907, Omron Co., Kyoto, Japan) with subjects in a seated resting position, and average blood pressure was used for analysis. Body mass index (BMI) was calculated as body weight (in kilograms) divided by the square of body height (in meters). Peripheral venous blood samples were obtained from study subjects after physical examination for complete blood count and biochemical analyses of the serum. The serum samples were analyzed immediately or stored at −80°C until biochemical analyses.
Serum concentration of FABP4 was measured using a commercially available enzyme-linked immunosorbent assay kit for FABP4 (Biovendor R&D, Modrice, Czech Republic). The accuracy, precision and reproducibility of the kit have been described previously [8]. The intra- and inter-assay coefficients of variance of the kit were < 5%. Fasting plasma glucose was determined by the glucose oxidase method. Fasting plasma insulin was measured by a radioimmunoassay method (Insulin RIA bead, Dianabot, Tokyo, Japan). Creatinine (Cr) and lipid profiles, including total cholesterol, high-density lipoprotein (HDL) cholesterol and triglycerides, were determined by enzymatic methods. Low-density lipoprotein (LDL) cholesterol level was calculated by the Friedewald equation. Hemoglobin A1c (HbA1c) was determined by a latex coagulation method and is expressed on the national glycohemoglobin standardization program (NGSP) scale. Brain natriuretic peptide (BNP) was measured using an assay kit (Shionogi & Co., Osaka, Japan). High-sensitivity C-reactive protein (hsCRP) was measured by a nephelometry method. As an index of renal function, estimated glomerular filtration rate (eGFR) was calculated by an equation for Japanese subjects: eGFR (ml/min/1.73 m²) = 194 × Cr^(−1.094) × age^(−0.287) × 0.739 (if female). HOMA-R, an indicator of insulin resistance, was calculated by the previously reported formula: insulin (μU/ml) × glucose (mg/dl) / 405.
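The two derived indices are simple closed-form computations; the following is a direct transcription of the stated formulas (the function names are ours, and the numeric inputs are illustrative only):

```python
def egfr_japanese(creatinine_mg_dl, age_years, female):
    """eGFR (ml/min/1.73 m^2) by the Japanese equation quoted in the text."""
    egfr = 194.0 * creatinine_mg_dl ** -1.094 * age_years ** -0.287
    return egfr * 0.739 if female else egfr

def homa_r(insulin_uU_ml, glucose_mg_dl):
    """HOMA-R insulin-resistance index: insulin x glucose / 405."""
    return insulin_uU_ml * glucose_mg_dl / 405.0

print(round(egfr_japanese(0.8, 63, female=True), 1))
print(round(homa_r(6.0, 95.0), 2))
```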
Echocardiography
After medical check-ups and collection of urine and blood samples, echocardiographic examinations were performed by three well-experienced echocardiographers who were blinded to clinical data, using Vivid 9 (GE Health Care, Tokyo, Japan) equipped with a 2.5-MHz frequency transducer. Two-dimensional and color tissue Doppler imaging modes were used to obtain images from standard echocardiographic views, including parasternal long-axis and apical four-, three-, and two chamber views at a left lateral decubitus position. Standard parameters in two-dimensional measurements, including LV end-diastolic and end-systolic dimensions (mm) and septal and posterior wall thicknesses at end-diastole (mm), were determined. Mean LV wall thickness (mm) was calculated by the average of septal and posterior wall thicknesses at end-diastole. LV ejection fraction (%) was calculated using biplane modified Simpson's method. LV mass was calculated according to the recommendations of the American Society of Echocardiography [14] and normalized for body surface area (LV mass index, g/m 2 ). Left atrial (LA) dimension (mm) was measured by M-mode echocardiography, and LA volume was measured using biplane Simpson's method and normalized for body surface area (LA volume index, ml/m 2 ) [14]. Each parameter was evaluated by averaging two to three measurements. Transmitral flow velocities were obtained by pulsed wave Doppler echocardiography, positioning a sample volume at the level of a mitral tip in an apical four-chamber view. Mitral flow parameters, including peak velocities during early (E) and late diastole (A) and E-wave deceleration time, were measured, and the E/A ratio was calculated. Tissue velocity curves were obtained from color tissue Doppler imaging. A sample volume was placed at the lateral annulus in the apical four-chamber view, and peak myocardial velocity during early diastole (e' , cm/sec) was measured, and the ratio of mitral to myocardial early diastolic peak velocity (E/e') was calculated.
Statistical analysis
Numeric variables are expressed as means ± SD for normal distributions or medians (interquartile ranges) for skewed variables. The distribution of each parameter was tested for its normality using the Shapiro-Wilk W test, and non-normally distributed parameters were logarithmically transformed for comparison and regression analyses. Comparison between two groups was done with an unpaired t test. One-way analysis of variance and Tukey-Kramer post hoc test were used for detecting significant differences in data between multiple groups. The correlation between two variables was evaluated using Pearson's correlation coefficient. Multivariate regression analysis was performed to identify independent determinants of e' using the variables with a significant and nonconfounding correlation in simple regression analysis as independent predictors, showing the t-ratio calculated as the ratio of regression coefficient and standard error of regression coefficient and the percentage of variance in the object variables that they explained (R 2 ). A p value of less than 0.05 was considered statistically significant. Holm-Bonferroni sequential correction was also performed in multivariate regression analysis. All data were analyzed by using JMP 9 for Macintosh (SAS Institute, Cary, NC).
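The Holm-Bonferroni sequential correction mentioned above is a step-down procedure; a minimal sketch follows (our own transcription of the standard procedure, not code from the study):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Test sorted p-values against alpha/m, alpha/(m-1), ...;
    stop rejecting at the first p-value that fails its threshold."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject

print(holm_bonferroni([0.001, 0.03, 0.04, 0.20]))
# -> [True, False, False, False]
```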
Results
Basal characteristics of the study subjects are shown in Table 1. Male subjects were significantly older than the female subjects and they had significantly larger BMI and waist circumference and had higher levels of systolic and diastolic blood pressures, triglycerides, glucose, HbA1c, insulin, HOMA-R and Cr and lower levels of total cholesterol, HDL cholesterol, LDL cholesterol and FABP4 than did the females. No significant difference in eGFR or BNP was found between male and female subjects. In echocardiographic parameters, LA dimension, mean LV wall thickness, LV end-diastolic dimension, LV mass index and E-wave deceleration time were significantly larger in males than in females. On the other hand, LV ejection fraction and E/A ratio were smaller in males than in females. Levels of e' and E/e' were comparable between male and female subjects.
In analyses of data from all study subjects, serum FABP4 level was positively correlated with age, BMI, systolic and diastolic blood pressures, total cholesterol, LDL cholesterol, triglycerides, insulin, HOMA-R, Cr and hsCRP and was negatively correlated with eGFR ( Table 2). Similar correlations between the parameters were observed when male and female subjects were separately analyzed.
Regarding echocardiographic parameters, FABP4 concentration was positively correlated with LA dimension, LA volume index and mean LV wall thickness (males: r = 0.315, females: r = 0.401, p < 0.01), though correlation with FABP4 was not significant for LV end-diastolic dimension or LV mass index. FABP4 level was positively correlated with E/e' and negatively correlated with e' (Figure 1; males: r = −0.434, females: r = −0.353, p < 0.001), an index of LV diastolic function, and E/A ratio (Table 2), whereas LV ejection fraction was not correlated with FABP4 level. Among echocardiographic parameters, e' was positively correlated with LV ejection fraction and E/A ratio and was negatively correlated with LA dimension, LA volume index, mean LV wall thickness, LV mass index, E-wave deceleration time and E/e' (Table 3). Of extra-cardiac parameters, age, BMI, waist circumference, systolic and diastolic blood pressures and biochemical markers, including eGFR, BNP, hsCRP and FABP4, were found to be significantly correlated with e' (Table 3).
Multivariate regression analysis was performed to identify independent determinants of e' using systolic blood pressure, the most strongly correlated factor among anthropometric and biochemical parameters (r = −0.465, p < 0.001), in addition to age and gender (Model 1), and showed that serum FABP4 concentration was independently correlated with e' (Table 4). Next, variables with a significant and non-confounding correlation in simple regression analysis were additionally chosen as possible independent predictors in Models 2-6: a marker of adiposity (BMI, Model 2), glucose and insulin metabolism (HOMA-R, Model 3), renal function (eGFR, Model 4), cardiac damage (BNP, Model 5) or cardiac morphology (LV wall thickness, Model 6). When each parameter was additionally incorporated into the adjustment, FABP4 remained an independent predictor of e' in Models 2-6 (Table 4), although the independent correlation in Model 2 was cancelled after Holm-Bonferroni sequential correction. Additional multivariate regression analysis using all of the parameters in Models 1-6, including age, gender, systolic blood pressure, BMI, HOMA-R, eGFR, BNP, mean LV wall thickness and FABP4, showed that FABP4 level (t = −2.36, p = 0.020) was independently correlated with e' after adjustment for the other variables (overall R² = 0.563).
In low and middle tertiles of BMI, e' in a group with low levels of FABP4 (FABP4-Low) was significantly higher than that in a group with high levels of FABP4 (FABP4-High) (Figure 2). Furthermore, there was no significant difference in e' between the FABP4-Low and FABP4-High groups in high tertile of BMI, but the FABP4-Low group in high tertile of BMI had significantly lower e' than did that in low tertile of BMI.
Discussion
(Table 1 footnote) Variables are expressed as number, means ± SD or medians (interquartile ranges). BNP, brain natriuretic peptide; eGFR, estimated glomerular filtration rate; hsCRP, high-sensitivity C-reactive protein; LV, left ventricle. *P < 0.01, †P < 0.05 vs. male.

The salient finding in the present study was that FABP4 was independently and negatively correlated with e', which reflects LV relaxation and is known as one of the most sensitive indexes of LV diastolic function in a healthy population [14]. LV diastolic dysfunction often precedes
LV systolic dysfunction in heart diseases, and moderate diastolic dysfunction alone potentially induces heart failure, which is referred to as heart failure with preserved ejection fraction (HFpEF) [15]. A recent study in which data from the Framingham cohort study were analyzed showed that age, diabetes mellitus, BMI, smoking and atrial fibrillation were predictors of HFpEF [16]. It is notable that the correlation of FABP4 level with e' was independent of age, BMI, HOMA-R and LV wall thickness (Table 4). These results suggest that serum FABP4 is a novel marker of LV diastolic dysfunction and potentially a predictor of HFpEF. Previous studies using animal models indicated that FABP4 plays a significant role in several aspects of metabolic syndrome, including insulin resistance, type 2 diabetes and atherosclerosis, through its action at the interface of metabolic and inflammatory pathways in adipocytes and macrophages [1,2,4-6]. Epicardial fat has been reported to directly influence cardiac function because of the absence of a fibrous fascial layer between fat and the underlying myocardium [17,18]. FABP4 mRNA expression in epicardial adipose tissue was recently reported to be profoundly increased compared with its expression in paraaortic adipose tissue in patients with metabolic syndrome [19]. Furthermore, it has recently been reported that exogenous FABP4 acutely suppresses shortening amplitude in cardiomyocytes by attenuating the intracellular systolic peak Ca2+ level in a dose-dependent manner [20] and impairs the insulin-dependent nitric oxide pathway in vascular endothelial cells [21]. Therefore, it is possible that either FABP4 secreted from epicardial fat tissue or circulating FABP4 released from subcutaneous and/or visceral adipose tissue or from macrophages may directly modulate cardiac function. In the heart, FABP3, known as heart-type FABP (H-FABP), is abundant and is rapidly released from cells into the circulation after onset of cardiomyocyte damage. Serum concentration of FABP3 has been characterized as an early biochemical marker of acute myocardial infarction and a sensitive marker of ongoing myocardial damage in patients with heart failure [22,23]. The impact of circulating FABP3 is apparently different from that of FABP4. Inflammation is an important factor in the pathogenesis and progression of heart failure. It has been shown that increased inflammatory cytokines produced by mononuclear cells, including macrophages, and/or by damaged myocardium impair myocardial function by inducing apoptosis, necrosis and a hypertrophic response in cardiomyocytes [24]. In the Framingham Heart Study, increased inflammatory markers, such as CRP, interleukin-6 and TNFα levels, were able to identify asymptomatic older subjects in the community who were at high risk for the future development of heart failure [25]. In the present study, FABP4 was positively correlated with hsCRP, consistent with the results of several previous studies [10,12]. The macrophage is a critical site of FABP4 action, and macrophage-specific FABP4 deficiency leads to reduced activation of nuclear factor κB (NF-κB) and c-Jun N-terminal kinase (JNK), resulting in reduced production of a cluster of inflammatory cytokines [5]. Conversely, several inflammatory stimuli have been shown to cause significantly increased expression of FABP4 in macrophages [5]. Local inflammation mediated by FABP4 in macrophages of the heart may participate in mediating cardiac dysfunction.
Up-regulation of FABP4 expression and other adipokines in heart failure has been demonstrated in recent studies [26][27][28], indicating complex neurohormonal and metabolic abnormalities associated with heart failure. Of note, upregulation of inflammatory cytokines, catecholamines and natriuretic peptides in heart failure is known to mediate increased lipolysis and insulin resistance [29]. It has been reported that lipolysis is mediated in part through the interaction of FABP4 with hormone-sensitive lipase in adipocytes [30]. A recent study also showed that FABP4 is secreted from adipocytes via a non-classical secretion pathway in relation to lipolysis [9]. Although most of the recruited subjects in the present study were considered to be healthy, relatively high levels of lipolytic stimuli, such as inflammatory cytokines, catecholamines and natriuretic peptides, in asymptomatic cardiac dysfunction may increase serum FABP4 concentration. Circulating FABP4 level was associated with increased LV mass in overweight and obese women [31] and in patients with obstructive sleep apnea syndrome [32]. Recent studies also showed an independent correlation of elevated serum FABP4 with NT-proBNP in heart failure patients [33] or with deterioration of LV systolic function in non-obese patients hospitalized for acutely decompensated heart failure [26] and in patients with coronary artery disease [34]. In contrast, there was no significant association between FABP4 level and concurrent [32] or subsequently developed [27] systolic dysfunction in subjects without obvious cardiac disease. In the present study using apparently healthy subjects taking no medication, serum FABP4 level was weakly correlated with mean LV wall thickness but not with LV mass index or LV ejection fraction. These findings suggest only a marginal contribution of FABP4 to development of the early phase of LV hypertrophy and systolic dysfunction. Similar to our results, a very recent study by Baessler et al. [35] demonstrated that FABP4 level was independently correlated with e' after adjustment for age, sex and adiposity in 96 obese subjects and 24 healthy normal-weight control subjects, although the association of FABP4 levels with LV diastolic dysfunction was mainly observed in obese subjects with metabolic complications but not in metabolically healthy obese subjects. However, LV diastolic dysfunction in that study was defined by a combination of several parameters, such as e', E/e', E/A, E-wave deceleration time and left atrial dimension, and this definition may affect the results. Of note, we showed that FABP4 level was an independent predictor of e', which is known as an index of LV relaxation and one of the most sensitive indicators of LV diastolic function compared with other indices, especially in a healthy population [14].
A genetic variant at the FABP4 locus associated with decreased FABP4 expression in adipose tissue has been reported to reduce the risk of cardiovascular disease in a population study [36]. We and others previously showed that serum FABP4 level predicts long-term cardiovascular events [37][38][39]. Furthermore, a large-scale prospective study showed that concentration of FABP4 predicted the risk of heart failure during a median follow-up of 10.7 years [27]. Accumulating evidence of a causative role of FABP4 in cardiac dysfunction would prove that FABP4 is a novel target for prevention of heart failure.
Since FABP4 is a low-molecular-weight protein and freely filtered at the glomerulus, a decrease in glomerular function was shown to result in an elevation of FABP4 concentration [37]. In the present study, FABP4 was negatively correlated with eGFR but remained as an independent predictor of LV diastolic dysfunction even after adjusting for renal function. Besides eGFR, multivariate regression analysis demonstrated that the association of FABP4 level with LV diastolic dysfunction was independent of blood pressure, LV wall thickness and BNP, a well-known predictor of cardiac damage.
The present study has some limitations. Since it has been reported that several drugs, including statins, angiotensin II receptor blockers and peroxisome proliferator-activated receptor γ agonists, affect FABP4 concentrations [40][41][42], we excluded subjects who had been treated with any drugs. Therefore, only a small number of subjects could be enrolled, and the statistical power was not large. Another limitation of this study is its cross-sectional design. Prospective longitudinal studies using larger numbers of subjects with no medication are necessary for determining whether FABP4 level is indeed a major determinant of subsequent development of cardiac dysfunction. In addition, the results of our study rely on correlation analyses. A direct relationship between FABP4 level and progression of LV diastolic dysfunction remains unclear. This issue warrants further investigation using an interventional approach.
Conclusions
The present study is the first study to show an independent association of serum FABP4 level with LV diastolic dysfunction in a general population. The increase in serum FABP4 concentration might precede development of the early phase of cardiac dysfunction. Whether FABP4 can serve as a biomarker for early diagnosis of high-risk individuals with heart disease and a potential therapeutic target for cardiac dysfunction warrants further investigation. | 5,045.6 | 2014-08-21T00:00:00.000 | [
"Medicine",
"Biology"
] |
Hybrid radar emitter recognition based on rough k-means classifier and SVM
Due to the increasing complexity of electromagnetic signals, there exists a significant challenge for recognizing radar emitter signals. In this article, a hybrid recognition approach is presented that classifies radar emitter signals by exploiting the different separability of samples. The proposed approach comprises two steps, i.e., the primary signal recognition and the advanced signal recognition. In the former step, the rough k-means classifier is proposed to cluster the samples of radar emitter signals by using the rough set theory. In the latter step, the samples within the rough boundary are used to train the support vector machine (SVM). Then SVM is used to recognize the samples in the uncertain area; therefore, the classification accuracy is improved. Simulation results show that, for recognizing radar emitter signals, the proposed hybrid recognition approach is more accurate, and has a lower time complexity than the traditional approaches.
Introduction
Radar emitter recognition is a critical function in radar electronic support systems for determining the type of a radar emitter [1]. Emitter classification based on a collection of received radar signals is a subject of wide interest in both civil and military applications. For example, in battlefield surveillance applications, radar emitter classification provides an important means to detect targets employing radars, especially those from hostile forces. In civilian applications, the technology can be used to detect and identify navigation radars deployed on ships and on cars used for criminal activities [2].
The recent proliferation and complexity of electromagnetic signals encountered in modern environments are greatly complicating the recognition of radar emitter signals [1]. Traditional recognition methods are becoming inefficient against this emerging issue [3]. Many new radar emitter recognition methods have been proposed, e.g., intrapulse feature analysis [4], stochastic context-free grammars analysis [1], and artificial intelligence analysis [5][6][7][8].
In particular, the artificial intelligence analysis approach attracted much attention. Among the artificial intelligence approaches, the neural network and the support vector machine (SVM) are widely used for radar emitter recognition. In [6], Zhang et al. proposed a method based on rough set theory and radial basis function (RBF) neural networks. Yin et al. [7] proposed a radar emitter recognition method using the single-parameter dynamic search neural network. However, the prediction accuracy of the neural network approaches is not high, and the application of neural networks requires large training sets, which may be infeasible in practice. Compared to the neural network, the SVM yields higher prediction accuracy while requiring fewer training samples. Ren et al. [2] proposed a recognition method using fuzzy C-means clustering SVM. Lin et al. proposed to recognize radar emitter signals using the probabilistic SVM [8] and multiple SVM classifiers [9]. These proposed SVM approaches can improve the accuracy of recognition. Unfortunately, the time complexity of the SVM increases rapidly with the number of training samples. Classification methods with high accuracy and low time complexity have therefore become the focus of research.
Classifiers can be categorized into linear classifiers and nonlinear classifiers. A linear classifier can classify linearly separable samples, but cannot classify linearly inseparable samples efficiently. A nonlinear classifier can classify linearly inseparable samples; nevertheless, its time complexity increases when processing linearly separable samples. In practice, radar emitter signals consist of both linearly separable samples and linearly inseparable samples, which makes classification challenging. In the traditional recognition approach, only one classifier is used; thus, it is difficult to classify all radar emitter signal samples. In this article, a hybrid recognition method based on rough k-means theory and the SVM is proposed. To deal with the drawback of the traditional recognition approaches, we apply two classifiers to recognize linearly separable samples and linearly inseparable samples, respectively. Samples are first recognized by the rough k-means classifier, while linearly inseparable samples are picked up and further recognized by the RBF-SVM in the advanced recognition. The simulation results show that the proposed approach can recognize radar emitter signals more accurately and has a lower time complexity than the existing approaches.
The rest of the article is organized as follows. In Section 'Basic concepts', some basic concepts are reviewed. In Section 'Radar emitter recognition system', a novel radar emitter recognition model is proposed. The performance of the proposed approach is analyzed in Section 'Simulation results', and conclusions are given in Section 'Conclusions'.
Rough sets
An information system can be expressed by a four-parameter group [10]: $S = \{U, R, V, f\}$. $U$ is a finite and non-empty set of objects called the universe, and $R = C \cup D$ is a finite set of attributes, where $C$ denotes the condition attributes and $D$ denotes the decision attributes. $V = \bigcup_{r \in R} v_r$ is the domain of the attributes, where $v_r$ denotes the set of values that the attribute $r$ may take. $f: U \times R \to V$ is an information function. An equivalence relation $R$ partitions the universe $U$ into subsets; such a partition is denoted by $U/R = \{E_1, E_2, \ldots, E_n\}$, where $E_i$ is an equivalence class of $R$. If two elements $u, v \in U$ belong to the same equivalence class $E \subseteq U/R$, then $u$ and $v$ are indistinguishable under the indiscernibility relation, denoted by $\mathrm{ind}(R)$. If $\mathrm{ind}(R) = \mathrm{ind}(R - \{r\})$, the attribute $r$ is unnecessary in $R$; otherwise, $r$ is necessary in $R$.
Since it is not possible to differentiate the elements within the same equivalence class, one may not obtain a precise representation for a set $X \subseteq U$. A set $X$ that can be expressed by combining some $R$-basis categories is called definable, and the others are rough sets. Rough sets can be defined by an upper approximation and a lower approximation: the elements in the lower bound of $X$ definitely belong to $X$, and the elements in the upper bound of $X$ possibly belong to $X$. The upper and lower approximations of $X$ with respect to $R$ can be defined as follows [11]:
$$\underline{R}(X) = \{x \in U : [x]_R \subseteq X\}, \qquad \overline{R}(X) = \{x \in U : [x]_R \cap X \neq \emptyset\},$$
where $\underline{R}(X)$ represents the set that can be merged into $X$ positively, and $\overline{R}(X)$ represents the set that is possibly merged into $X$. Suppose $P$ and $Q$ are both equivalence relations on the system $U$, and the knowledge systems decided by them are $U/P = \{[x]_P \mid x \in U\}$ and $U/Q = \{[y]_Q \mid y \in U\}$. If for any $x \in U$ we have $[x]_Q \subseteq [x]_P$, then knowledge $P$ is dependent on knowledge $Q$ completely; that is to say, any characteristic of $Q$ determines the corresponding characteristic of $P$, and $P$ and $Q$ are in a definite relationship. If knowledge $P$ depends on knowledge $Q$ only partly, $P$ and $Q$ are in an uncertain relationship. The degree of dependency of knowledge $P$ on knowledge $Q$ is defined as [10]
$$\gamma_Q(P) = \frac{\mathrm{card}(\mathrm{POS}_Q(P))}{\mathrm{card}(U)},$$
where $\mathrm{POS}_Q(P) = \bigcup_{X \in U/P} \underline{Q}(X)$ and $0 \le \gamma_Q \le 1$. The value of $\gamma_Q$ reflects the degree of dependency of knowledge $P$ on knowledge $Q$: $\gamma_Q = 1$ shows that $P$ depends on $Q$ completely; $\gamma_Q$ close to 1 shows that $P$ depends on $Q$ highly; $\gamma_Q = 0$ shows that $P$ is independent of $Q$.
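The two approximations are straightforward to compute for a finite universe; a minimal sketch follows (the toy partition and set are ours, not data from the article):

```python
def approximations(partition, X):
    """Lower/upper approximation of X under the partition U/R."""
    lower = set().union(*(E for E in partition if E <= X))
    upper = set().union(*(E for E in partition if E & X))
    return lower, upper

U_R = [{1, 2}, {3, 4}, {5}]        # equivalence classes E_1, E_2, E_3
X = {1, 2, 3}
print(approximations(U_R, X))      # lower = {1, 2}, upper = {1, 2, 3, 4}
```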
Rough k-means algorithm
The k-means algorithm is one of the most popular iterative descent clustering algorithms [12]. The basic idea is to make the samples within a class highly similar and the similarity among classes low. The center of a cluster is given by
$$t_i = \frac{1}{\mathrm{card}(X_i)} \sum_{x \in X_i} x, \qquad i = 1, \ldots, I,$$
where $x$ denotes a sample to cluster, $X_i$ denotes cluster $i$, $\mathrm{card}(X_i)$ denotes the number of elements in $X_i$, and $I$ denotes the number of clusters. The k-means algorithm is efficient for clustering, but it has the following problems: 1. The number of clusters must be given before clustering [13]. 2. The k-means algorithm is very sensitive to the initial center selection and can easily end up in a local minimum solution [13,14]. 3. The k-means algorithm is also sensitive to isolated points [15]. To overcome the problem of isolated points, Lingras and West [15] proposed the rough k-means algorithm, which introduces the upper and lower approximations into the k-means clustering algorithm. The improved cluster center is given by [15]
$$t_i = \omega_{\mathrm{lower}} \frac{\sum_{x \in \underline{R}(X_i)} x}{\mathrm{card}(\underline{R}(X_i))} + \omega_{\mathrm{upper}} \frac{\sum_{x \in \overline{R}(X_i) - \underline{R}(X_i)} x}{\mathrm{card}(\overline{R}(X_i) - \underline{R}(X_i))},$$
where the parameters $\omega_{\mathrm{lower}}$ and $\omega_{\mathrm{upper}}$ are the lower and upper subject degrees of $X$ relative to the clustering centers. For each object vector $x$, $d(x, t_i)$ denotes the distance between the center of cluster $t_i$ and the sample. Whether $x$ is subject to the lower or the upper approximation of its cluster is decided by the value of $d(x, t_i)$: if no other center lies within the threshold $\lambda$ of the nearest one, the sample $x$ is subject to the lower approximation of its cluster; otherwise, $x$ is subject to the upper approximations of all clusters within the threshold. The comparative degree can be determined by the numbers of elements in the lower approximation set and the upper approximation set.
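A compact sketch of one rough k-means iteration step follows (our reading of the Lingras-West rule; the weights, threshold, and tie-handling are design choices, not values from the article):

```python
import numpy as np

def assign_rough(x, centers, lam):
    """Return the nearest cluster and the set of clusters the sample
    belongs to: a singleton (lower approximation) if no other center
    is within lam of the nearest distance, else all close clusters
    (upper approximations / boundary)."""
    d = np.linalg.norm(centers - x, axis=1)
    k = int(np.argmin(d))
    close = {j for j in range(len(centers)) if j != k and d[j] - d[k] <= lam}
    return k, ({k} | close) if close else {k}

def update_center(lower_pts, boundary_pts, w_lower=0.7, w_upper=0.3):
    """Improved center: weighted mix of the lower-approximation mean
    and the boundary (upper minus lower) mean."""
    lower_pts, boundary_pts = np.asarray(lower_pts), np.asarray(boundary_pts)
    if boundary_pts.size == 0:
        return lower_pts.mean(axis=0)
    if lower_pts.size == 0:
        return boundary_pts.mean(axis=0)
    return w_lower * lower_pts.mean(axis=0) + w_upper * boundary_pts.mean(axis=0)
```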
SVM
In this section, we give a very brief introduction to SVM.
Let $(x_i, y_i)_{1 \le i \le N}$ be a set of training examples, where each example $x_i \in \mathbb{R}^d$, $d$ being the dimension of the input space, belongs to a class labeled by $y_i \in \{-1, 1\}$. Classification amounts to finding $w$ and $b$ that satisfy
$$y_i(w \cdot x_i + b) \ge 1, \qquad i = 1, \ldots, N.$$
To search the minimum ||w|| 2 , Lagrange multiplier is usually used, leading to maximizing subject to where α = (α 1 , . . . , α N ) denotes the non-negative Lagrange multipliers, x i denotes the input of the training data and y i denotes the output of the training data [17].
The decision function is In the nonlinear case, the approach adapted to noisy data is to make a soft margin. We introduce the slack variables (ξ 1 , . . . , ξ i ) with ξ 1 > 0 so that The generalized OSH is the solution of minimizing subject to (12) and ξ i > 0. The parameter ξ i is the upper bound on the number of training errors and C is the penalty parameter to control errors.
In the nonlinear SVM, a kernel function is introduced to change the initial data into a feature space with high dimension. In the new space the data should be linearly separable. Then the quadratic optimization problem can be converted to maximize subject to (10) and 0 ≤ α i ≤ C. K (x, x i ) is the kernel function. As one of the most popular kernel functions, the RBF kernel function is considered in this article, and it takes the following form [18,19]: subject to (10) and 0 ≤ α i ≤ C. The new decision function is : The result of the minimization is determined by the selection of parameters C and γ . Usually, C and γ are determined by using cross validation. http://asp.eurasipjournals.com/content/2012/1/198
Radar emitter recognition system
In this section, a hybrid radar emitter recognition approach that consists of a rough k-means classifier in the primary recognition and a SVM classifier in the advanced recognition is proposed. This approach is based on the fact that in the k-means clustering, the linearly inseparable samples are mostly at the margins of clusters, which makes it difficult to determine which cluster they belong to. To solve this problem, a linear classifier based on the rough k-means and a nonlinear classifier SVM are adopted. This approach can classify linearly separable samples and pick up those linearly inseparable samples to be classified in the advanced recognition using SVM.
After sorting and feature extraction, radar emitter signals are described by pulse describing words, on which the recognition is based. The process of the hybrid radar emitter recognition approach is shown in Figure 1. From the pulse describing words, an information sheet of radar emitter signals is obtained. By attribute discretization and attribute reduction, the classification rules are extracted. These classification rules are the basis of the initial centers of the rough k-means classifier, i.e., they determine the initial centers and the number of clusters. After that, the known radar emitter signal samples are clustered by the rough k-means while the rough k-means classifier of the primary recognition is built, as described in the following section. The samples at the margin of a cluster are picked up to be used as the training data for the SVM in the advanced recognition. The unknown samples to be classified are recognized first by the rough k-means classifier. The uncertain sample set, which contains most of the linearly inseparable samples, is then classified by the SVM in the advanced recognition.
Based on the process of the recognition approach described above, the accuracy of recognition can be given by:

A_{\mathrm{total}} = A_{\mathrm{primary}} + (1 - A_{\mathrm{primary}}) \cdot \frac{N_{\mathrm{WIU}}}{N_W} \cdot A_{\mathrm{advanced}},  (18)

where A_total is the accuracy of the hybrid recognition, A_primary is the accuracy of the primary recognition, A_advanced is the accuracy of the advanced recognition, N_WIU is the number of samples which are falsely classified into the uncertain area, and N_W is the total number of wrongly classified samples.
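Equation (18) is straightforward to evaluate; the following minimal Python function reproduces the worked example reported later in the article (86% primary accuracy, 92% advanced accuracy, 18 of 28 wrongly classified samples falling in uncertain areas):

    def total_accuracy(a_primary, a_advanced, n_wiu, n_w):
        # Wrongly classified samples that land in the uncertain area
        # (n_wiu of n_w) get a second chance in the advanced stage.
        return a_primary + (1.0 - a_primary) * (n_wiu / n_w) * a_advanced

    print(total_accuracy(0.86, 0.92, 18, 28))  # -> 0.9428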
Primary recognition based on improved rough k-means
As mentioned above, a classifier based on rough k-means is proposed for the primary recognition. In the rough k-means algorithm, there are two areas in a cluster, i.e., the certain area and the rough area. In the rough k-means classifier proposed in this article, however, there exist three areas. For example, a cluster in two dimensions is depicted in Figure 2. At the edge of the cluster, there is an empty area between the borderline and the midcourt line of the two cluster centers; we name this area the uncertain area. During clustering, there is no sample in the uncertain area. When the clustering is completed, these clusters are used as the rough k-means classifiers. When unknown samples are classified, for each cluster center the samples nearer than the midcourt line are classified into its class. Linearly inseparable samples are usually far from the cluster centers and probably outside the cluster, i.e., in the uncertain area. Thus, after being distributed into their nearest clusters, the unknown samples in the uncertain area are recognized by the advanced recognition, while for the unknown samples in the certain area and the rough area, the primary recognition outputs final results. As shown in Figure 2, in the training process of the rough k-means classifier we calculate the cluster center, the rough boundary R_ro, and the uncertain boundary R_un of every cluster. After clustering, the center of a cluster and the farthest sample from the center are determined. The area between the rough boundary and the uncertain boundary (R_ro < d_x < R_un) is defined as the rough area, where d_x denotes the distance from a sample to the center. In the training, if a training sample falls in the rough area, it is used to train the SVM in the advanced recognition. The uncertain boundary threshold R_un is defined from max(d_x), the distance from the farthest sample to the center.

Figure 2 An example of a cluster. There are three areas in our rough k-means classifier: the certain area, the rough area, and the uncertain area. These areas are introduced in detail in the text.
In a cluster, the area beyond the uncertain boundary (d_x > R_un) is the uncertain area. When unknown samples are recognized, they are distributed into the nearest cluster. If d_x > R_un, these samples are further recognized by the advanced recognition. For the other unknown samples, the result of the primary recognition is final.
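The decision logic of the primary recognition can be summarized in a short sketch, assuming the cluster centers and the per-cluster uncertain boundaries R_un have been obtained during training (names are illustrative):

    import numpy as np

    def primary_recognition(x, centers, r_un):
        # Assign x to its nearest cluster; flag it for the advanced (SVM)
        # stage if it falls in that cluster's uncertain area (d_x > R_un).
        d = np.linalg.norm(centers - x, axis=1)
        i = int(d.argmin())
        return i, bool(d[i] > r_un[i])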
Figure 3 Comparison of the radii of a rough k-means cluster and a k-means cluster. The radius of a cluster in rough k-means is shorter than that in k-means.

As discussed in the previous section, the k-means algorithm has several problems. The rough k-means method can resolve the nondeterminacy in clustering and reduce the effect of isolated samples [20]. However, it still requires the initial centers and the number of centers as priors, and the choice of initial centers is very important for rough k-means; they are usually determined by computing the least mean square. In this article, we propose to determine the initial centers based on rough sets theory, so that they are computed from the classification rules of the rough sets. The process can be described as follows:
1. Classification rules are obtained based on rough sets theory.
2. The mean value of every class is obtained:

c_p = \frac{1}{\mathrm{card}(X_p)} \sum_{x \in X_p} x,  (20)

where X_p denotes the set of samples covered by classification rule p of the rough sets theory.
3. The mean values are defined as the initial clustering centers; the number of clusters equals the number of rules.
In (5), the parameter λ determines the lower and upper subject degrees of X_k relative to a cluster. If the threshold λ is too large, the lower approximation sets will be empty, while if it is too small, the boundary areas will be empty and have no effect. Usually, λ is set to a value for which, in most clusters, neither the lower approximation nor the upper approximation is empty. The threshold λ can be determined as follows:
1. Compute the Euclidean distance of every object to the K cluster centers, giving the distance matrix D(i, j).
2. Compute the minimum value d_min(i) in every row of the matrix D(i, j).
3. Obtain the minimum value d_s(i) (except zero) in every row.
λ is chosen from the minimum values d_s(i).
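Because the listed steps are terse in the source, the following sketch shows one plausible reading, in which d_s(i) is the smallest nonzero gap between a sample's nearest-center distance and its distances to the other centers, and λ is taken as the smallest such gap; this is an interpretation, not a formula given by the article.

    import numpy as np

    def choose_lambda(X, centers):
        D = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        gaps = D - D.min(axis=1, keepdims=True)  # zero for the nearest center
        gaps[gaps == 0] = np.inf                 # "except zero"
        d_s = gaps.min(axis=1)                   # smallest nonzero value per row
        return float(d_s.min())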
After that, known samples are clustered by using (5).
The cluster centers C, the rough boundary R ro and the uncertain boundary R un are determined.
In addition, the primary recognition result is affected greatly by the radii of the clusters, and rough k-means clustering can shrink the cluster radii effectively. As shown in Figure 3, the radius of a k-means cluster is the distance from the cluster center to the farthest isolated sample. In rough k-means, the cluster center is the average of the lower approximation center and the upper approximation center, and the upper approximation center is near the farthest sample. Hence the cluster radius R_r of rough k-means is clearly smaller than the k-means radius R. As the radius shrinks, the probability that an uncertain sample is recognized as a certain sample when unknown samples are classified is reduced. Therefore, the accuracy of the primary recognition is increased.
The time complexity of the hybrid recognition approach
The time complexity of the approach proposed in this article consists of two parts, namely that of the primary recognition and that of the advanced recognition. In the training of the primary recognition, samples are clustered by rough k-means, whose time complexity is O(dmt), where d, m, and t denote the dimensionality of the samples, the number of training samples, and the number of iterations, respectively. In this article, the optimal initial centers are determined by analyzing the knowledge rules of the training sample set based on rough set theory, instead of by iteration. Thus, the time complexity of the primary recognition is O(dm).
The SVM is used for the advanced recognition in our approach. The time complexity of SVM does not depend on the dimension of the samples but is related to the number of samples, and the time complexity of SVM training is usually discussed in terms of the complexity of the underlying quadratic programming. Standard SVM training has O(m^3) time complexity [21]. In our approach, only the samples picked up by the primary recognition are used to train the SVM, so the number of training samples for the advanced recognition is greatly reduced.
Data set description and experiment design
The validity and efficiency of the proposed approach are demonstrated by simulations. In the first simulation, radar emitter signals are recognized, the type of radar emitter being the recognition result. The pulse describing words of the radar emitter signal include the radio frequency (RF), the pulse repetition frequency (PRF), the antenna rotation rate (ARR), and the pulse width (PW). 240 groups of data are generated from the above original radar information for training, while 200 groups are generated for testing. This simulation is repeated 100 times, and the average recognition accuracy is obtained. A second simulation tests the efficiency of the hybrid recognition on the Iris data set, which contains 150 patterns belonging to three classes; there are 50 exemplars for each class, and each input is a four-dimensional real vector [22]. The recognition accuracy and time complexity are compared between SVM and our approach. This simulation has two parts. In the first part, all 150 samples are used for training, and the same 150 samples are used to test the training accuracy. In the second part, 60 samples from the Iris data set are used to train the classifiers and the other 90 samples are used for testing, thereby assessing the generalization of the proposed approach.
Results of experiment 1: classification of the radar emitter signals
An information sheet of radar emitter signals is built, as shown in Table 1. The data in the information table must be converted into discrete values, because continuous values cannot be processed by rough sets theory. There are many methods for data discretization; here the equivalent width method [11] is adopted. In the equivalent width method, the range of an attribute is divided into intervals of equal size, and different attributes can have different numbers of intervals. In this article, each attribute is divided into three intervals, and attribute values in the same interval receive the same discrete value. The discretized information is shown in Table 2, where A, B, C, and D denote the attributes RF, PRF, ARR, and PW, respectively. After that, the dependency degree of the radar type on each attribute is computed using (3): γ_A = 7/8, γ_B = 7/8, γ_C = 0, and γ_D = 7/8. As the dependency of the radar type on attribute C (ARR) is 0, attribute C is unnecessary for classification and is removed. The knowledge rules thus obtained are shown in Table 3, where "−" denotes any value. Some radars have several operating modes, and in different operating modes the emitter signal parameters vary considerably. If all samples of one radar emitter were clustered into a single cluster, samples of the same radar could gather in several subregions of the cluster, and the aggregation of the cluster would be reduced. Thus, we cluster the samples according to the subregions determined by rough sets: the samples of the three types of radar emitters are distributed into seven clusters.
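The equivalent width discretization used here is simple to implement; a minimal sketch follows (three intervals per attribute, as in the article; the function name is ours):

    import numpy as np

    def equal_width_discretize(values, n_bins=3):
        # Split the attribute range into n_bins equal-width intervals and
        # map each value to its interval index (0 .. n_bins - 1).
        edges = np.linspace(values.min(), values.max(), n_bins + 1)
        return np.minimum(np.digitize(values, edges[1:-1]), n_bins - 1)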
Based on these knowledge rules, initial clustering centers are obtained using (20). The known radar emitter samples are clustered by using the rough k-means on these initial cluster centers. As shown in Table 4, 240 training samples are clustered into seven clusters. The cluster centers, rough boundary and uncertain boundary of the primary recognition are computed. The rough k-means classifier has been built.
The classification accuracy for each radar emitter is given by the confusion matrix shown in Table 5. For example, row (1) indicates that of the 34 samples of subclass 1, 32 have been classified correctly and 2 into subclass 5. The primary recognition accuracy is 86% and the advanced recognition accuracy is 92%. The number of samples falsely classified into uncertain areas is 18, while the total number of wrongly classified samples is 28. Following (18), the theoretical accuracy is A_total = 86% + 14% × (18/28) × 92% = 94.28%. The proposed method is compared with the RBF neural network studied by Zhang et al. [6], and with the RBF-SVM and the probabilistic SVM radar recognition approaches studied by Lin et al. [8]. As shown in Table 6, the accuracy of the hybrid recognition proposed in this article is 94.5%, which is higher than that of the existing methods, i.e., 92, 92.5, and 93%. The accuracy of the hybrid recognition obtained in the simulation experiments is close to the theoretical value of 94.28%.
Results of experiment 2: classification of the data set Iris
From Table 7 it can be seen that the proposed approach has not only a higher recognition accuracy than SVM, but also a high training accuracy and good generalization. In the first part of this experiment, all 150 samples are used to train and test the two methods. The hybrid recognition proposed in this article reaches a training accuracy of 99.33%, higher than that of SVM (98.67%). In the second part, 60 samples are used for training, and the other 90 samples are used to test SVM and the hybrid recognition. The recognition accuracy of the proposed approach is 97.78%, indicating that the hybrid recognition generalizes well.
From the comparison above, it can be seen that the time complexity of the hybrid recognition is clearly lower than that of the classical SVM.
Conclusions
In this article, a hybrid recognition method has been proposed to recognize radar emitter signals. The hybrid classifier consists of a rough k-means classifier (a linear classifier) and an SVM (a nonlinear classifier). Each sample is classified by the classifier suited to its linear separability. Thus, for a radar emitter sample set containing both linearly separable and linearly inseparable samples, the approach achieves a higher accuracy. A linear classifier based on rough sets and rough k-means, the rough k-means classifier, has been proposed. Rough k-means clustering reduces the radii of the clusters and increases the accuracy of the primary recognition. The initial centers for the rough k-means are computed from the rough sets, which reduces the time complexity of the rough k-means clustering. The rough k-means classifier classifies linearly separable samples efficiently and picks up the linearly inseparable samples, which are then processed by the SVM in the advanced recognition. Therefore, the number of training samples for the SVM in the advanced recognition is reduced. Simulation results have shown that the proposed approach achieves a higher accuracy and a lower time complexity compared with existing approaches.
The hybrid recognition approach in this article is suited to radar emitter signal sample sets containing both linearly separable and linearly inseparable samples. It relies on the observation that the linearly inseparable samples which reduce the accuracy of clustering lie mostly at the edges of clusters. From (18), if linearly inseparable samples appear frequently in the center region instead of at the edge, the accuracy of recognition will be reduced. How to solve this problem is the focus of our future study. | 6,370 | 2012-09-18T00:00:00.000 | [
"Computer Science"
] |
Learning dynamical information from static protein and sequencing data
Many complex processes, from protein folding to neuronal network dynamics, can be described as stochastic exploration of a high-dimensional energy landscape. Although efficient algorithms for cluster detection in high-dimensional spaces have been developed over the last two decades, considerably less is known about the reliable inference of state transition dynamics in such settings. Here we introduce a flexible and robust numerical framework to infer Markovian transition networks directly from time-independent data sampled from stationary equilibrium distributions. We demonstrate the practical potential of the inference scheme by reconstructing the network dynamics for several protein-folding transitions, gene-regulatory network motifs, and HIV evolution pathways. The predicted network topologies and relative transition time scales agree well with direct estimates from time-dependent molecular dynamics data, stochastic simulations, and phylogenetic trees, respectively. Owing to its generic structure, the framework introduced here will be applicable to high-throughput RNA and protein-sequencing datasets, and future cryo-electron microscopy (cryo-EM) data.
Energy landscapes encapsulate the effective dynamics of a wide variety of physical, biological, and chemical systems 1,2 . Well-known examples include a myriad of biophysical processes [3][4][5][6][7] , multi-phase systems 2 , thermally activated hopping in optical traps 8,9 , chemical reactions 1,10 , brain neuronal expression 11 , cellular development [12][13][14][15][16] , and social networks 17 . Energetic concepts have also been connected to machine learning 18 and to viral fitness landscapes, where pathways with the lowest energy barriers may explain typical mutational evolutionary trajectories of viruses between fitness peaks 19,20 . Recent advances in experimental techniques including cryo-electron microscopy (cryo-EM) 3,21,22 and single-cell RNA-sequencing 23 , as well as new online social interaction datasets 24 , are producing an unprecedented wealth of high-dimensional instantaneous snapshots of biophysical and social systems. Although much progress has been made in dimensionality reduction [25][26][27] and the reconstruction of effective energy landscapes in these settings 3,13,16,17,28 , the problem of inferring dynamical information such as protein-folding or mutation pathways and rates from instantaneous ensemble data remains a major challenge.
To address this practically important question, we introduce here an integrated computational framework for identifying metastable states on reconstructed high-dimensional energy landscapes and for predicting the relative mean first passage times (MFPTs) between those states, without requiring explicitly timedependent data. Our inference scheme employs an analytic representation of the data based on a Gaussian mixture model (GMM) 29 to enable efficient identification of minimum-energy transition pathways [30][31][32] . We show how the estimation of transition networks can be optimized by reducing the dimension of a high-dimensional landscape while preserving its topology. Our algorithm utilizes experimentally validated analytical results 8,9 for transition rates 1, [33][34][35] . Thus, it is applicable whenever the time evolution of the underlying system can be approximated by a Fokker-Planck-type Markovian dynamics, as is the case for a wide range of physical, chemical, and biological processes 1,34 .
Specifically, we illustrate the practical potential by inferring protein-folding transitions, state-switching in gene-regulatory networks, and HIV evolution pathways. Current standard methods for coarse-graining the conformational dynamics of biophysical structures 36,37 typically estimate Markovian transition rates from time-dependent trajectory data in large-scale molecular dynamics (MD) simulations 38,39 . By contrast, we show here that protein-folding pathways and rates can be recovered without explicit knowledge of the time-dependent trajectories, provided the system is sufficiently ergodic and equilibrium distributions are sampled accurately. Furthermore, we show that the dynamics of state-switching or phenotype-switching in gene-regulatory networks 40 can be inferred directly from static snapshots of protein abundances in regimes where deterministic modeling only captures a single steady state 41,42 . The agreement of our inferred results with two separate sets of time-dependent measurements suggests that the inference of complex transition networks via reconstructed energy landscapes can provide a viable and often more efficient alternative to traditional time-series estimates, particularly as new experimental techniques will offer unprecedented access to high-dimensional ensemble data.
Results
Minimum-energy-path network reconstruction. The equilibrium distribution p(x) of a particle diffusing over a potential energy landscape E(x) is the Boltzmann distribution p(x) = exp[−E(x)/k_B T]/Z, where k_B is the Boltzmann constant, T is the temperature, and Z is a normalization constant. Given the probability density function (PDF) p(x), the effective energy can be inferred from

E(x) = -k_B T \ln\!\left[\frac{p(x)}{p_{\max}}\right],  (1)

where p_max is the maximum value of the PDF, included to fix the minimum energy at zero. Our goal is to estimate the MFPTs between minima on the landscape using only sampled data. We divide this task into three steps, as illustrated in Fig. 1 for test data (Supplementary Methods). In the first step, we approximate the empirical PDF by using the expectation maximization algorithm to fit a GMM in a space of sufficiently large dimension d. In the second step, the inferred energy landscape E(x) is reduced to a minimum-energy-path (MEP) network whose nodes (states) are the minima of E(x) (Fig. 1b, top). Each edge represents an MEP that connects two adjacent minima and passes through an intermediate saddle point (Fig. 1b). The MEPs are found using the nudged elastic band (NEB) algorithm 30,31 , which discretizes paths with a series of bead-spring segments (Supplementary Methods).

Fig. 1 Inference scheme for estimating transition networks and mean first passage times (MFPTs). We apply the protocol to test data generated from a Gaussian mixture model (GMM; Supplementary Methods). a Inputs are the instantaneously measured data, sampled here from a ten-dimensional GMM with five Gaussians, plotted in the first three principal components (PCs); colors denote the Gaussian that a point was sampled from. b Top: a GMM is fit to the samples to construct the empirical probability distribution, which is then converted to the energy landscape using Eq. (1). Background color indicates the projection of the empirical energy landscape onto the first two PCs. Minimum-energy paths (MEPs, gray lines) between minima 1-5 on the landscape are calculated using the NEB algorithm (Supplementary Methods). Bottom: disconnectivity graph illustrating minima on the energy landscape (circles) and saddle points between them (squares). c A Markov state model (MSM) is constructed with transition rates given by Eq. (2).
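As an illustration of the first step, the density-to-energy conversion of Eq. (1) can be sketched with a standard GMM implementation (scikit-learn here; the number of components and the variable `samples` are assumptions of this sketch, and energies come out in units of k_B T):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    gmm = GaussianMixture(n_components=5).fit(samples)  # samples: (n, d) array
    log_p = gmm.score_samples(samples)                  # log p(x) at each sample
    energy = -(log_p - log_p.max())                     # E(x) = -ln(p/p_max), in k_B T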
Markov state model. Given the MEP network, the final step is to infer the rates for transitioning from a minimum α to an adjacent minimum β. Assuming overdamped Brownian dynamics, the directed transition α → β can be characterized by the generalized Kramers transition rate 1

k_{\alpha\beta} = \frac{\omega_b}{2\pi\gamma} \, \frac{\prod_i \omega_i^{\alpha}}{\prod_i \omega_i^{S}} \, \exp\!\left(-\frac{E_b}{k_B T}\right),  (2)

where γ is the effective friction, E_b is the energy difference between the saddle point S on the MEP (over the energy barrier) and the minimum α, ω_i^α are the stable angular frequencies at the minimum α, and ω_i^S and ω_b are the stable and unstable angular frequencies at the saddle, respectively. Equation (2) assumes isotropic friction but can be generalized to a tensorial form 1 if anisotropies are relevant. In most practical applications, the error from assuming γ to be isotropic is likely negligible compared with other experimental noise sources. In principle, Eq. (2) can be refined further by including quartic (or higher) corrections to the prefactor ω_b/γ to account for details of the saddle shape 1 . Such corrections can be significant for GMMs (Supplementary Methods).
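A direct transcription of the rate formula as reconstructed above reads as follows (a sketch; the frequencies would come from the Hessians of the fitted landscape at the minimum and at the saddle, and k_B T is set to 1 by default):

    import numpy as np

    def kramers_rate(E_b, omega_min, omega_saddle_stable, omega_b, gamma, kT=1.0):
        # Stable frequencies at the minimum and at the saddle enter the
        # prefactor; the barrier height enters the Arrhenius factor.
        prefactor = (omega_b / (2.0 * np.pi * gamma)) * (
            np.prod(omega_min) / np.prod(omega_saddle_stable))
        return prefactor * np.exp(-E_b / kT)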
Each edge (αβ) has two weights, k_αβ and k_βα, assigned to it. The rate matrix (k_αβ) completely specifies the Markov state model (MSM) on the network. Solving the MSM yields the matrix of pairwise MFPTs between states (Fig. 1c and Methods). In a simple two-state system, the MFPTs are determined up to a time scale by detailed balance, but for three or more states the influence of landscape topography and the associated state network topology (Methods) can lead to interesting hierarchical ordering of passage times. Identifying these hierarchies and ways to manipulate them is the key to controlling protein-folding or viral evolution pathways.
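Given the pairwise rates, the MFPT matrix follows from standard continuous-time Markov chain theory; a minimal sketch is shown below (the rate matrix k is assumed to hold k[a, b] = k_ab with zeros on the diagonal):

    import numpy as np

    def mfpt_matrix(k):
        # Generator matrix: off-diagonal rates, rows summing to zero.
        n = k.shape[0]
        Q = k - np.diag(k.sum(axis=1))
        T = np.zeros((n, n))
        for b in range(n):                      # target state b
            idx = [i for i in range(n) if i != b]
            # Solve the restricted linear system Q' tau = -1 for the
            # mean first passage times from every other state to b.
            tau = np.linalg.solve(Q[np.ix_(idx, idx)], -np.ones(n - 1))
            T[idx, b] = tau
        return T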
Topology-preserving dimensionality reduction. To ensure that the inference protocol can be efficiently applied to larger systems with a high-dimensional energy landscape, we derive a general method for reducing the dimension D of an energy landscape while preserving its topology. A PDF with C well-separated Gaussians in D dimensions can be projected onto the d = C − 1 dimensional hyperplane spanning the Gaussian means using principal component analysis (PCA); projecting onto a hyperplane of dimension d − 1 risks losing information about the relative positions of the Gaussian means and, in general, does not allow a correct recovery of the MFPTs (Supplementary Methods). In practice, it suffices to choose C to be larger than the number of energy minima if their number is not known in advance.
To preserve the topology under such a transformation, which is essential for the correct preservation of energy barriers and MEPs in the reduced-dimensional space, one needs to rescale the GMM components in the low-dimensional space depending on the covariances of the Gaussians in the D − d neglected dimensions (Fig. 1c). Explicitly, one finds that the PDF within the subspace spanned by the retained principal components takes the rescaled form of Eq. (3), with determinant scale factors accounting for the neglected dimensions, as long as p satisfies certain minimally restrictive conditions (Supplementary Methods). Here, U_d denotes the first d = C − 1 columns of the matrix of sorted eigenvectors U of the covariance matrix of the Gaussian means, and ϕ_i, p_i^d, and Σ_i are the mixing components, reduced-dimensional PDF, and covariance matrix of each individual Gaussian in the mixture, respectively (Supplementary Methods). Neglecting the determinant scale factors in Eq. (3), as is often done when GMM models are fitted to PCA-projected data, leads to inaccurate MFPT estimates (Fig. 1c, bottom). It is noteworthy that Eq. (3) does not represent an inversion of the transformation performed on the data by PCA, unless all D dimensions are retained; if some dimensions are neglected, Eq. (3) represents a rescaling of the marginal distribution in the retained dimensions to reconstruct the PDF in the original dimension. In other words, the transition rates are best recovered from the conditional, not marginal, distributions, which are given by Eq. (3) up to a constant factor that does not affect energy differences.
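The projection subspace itself is simple to construct; the sketch below computes only U_d, the first d = C − 1 sorted eigenvectors of the covariance of the fitted Gaussian means. The determinant rescaling of Eq. (3) is deliberately omitted here, since its exact form is given only in the paper's Supplementary Methods; `gmm` and `samples` are assumed from the previous sketch.

    import numpy as np

    means = gmm.means_                              # (C, D) means of the fitted GMM
    M = means - means.mean(axis=0)
    evals, U = np.linalg.eigh(M.T @ M / (len(means) - 1))
    order = np.argsort(evals)[::-1]                 # sort eigenvectors by eigenvalue
    U_d = U[:, order[:len(means) - 1]]              # retain d = C - 1 columns
    reduced = (samples - means.mean(axis=0)) @ U_d  # project data onto the hyperplane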
Dimensionality reduction can substantially improve the efficiency of the NEB algorithm step as follows: when the MEPs in the reduced d-dimensional space have been computed, the identified minima and saddles can be transformed back into the original data dimension D to calculate the Hessian matrices at these points, allowing Kramers' rates to be calculated as usual ( Fig. 1c and Supplementary Methods). Alternatively, in specific situations where the MEPs lie outside the hyperplane spanning the means (Supplementary Methods), the MEP in the reduced d-dimensional space can be transformed back to the D-dimensional space and can be used as an initial condition in that space, significantly reducing computational cost. These results present a step towards a general protocol for identifying reaction coordinates or collective variables for projection of a highdimensional landscape onto a reduced space, while quantitatively preserving the topology of the landscape.
Protein folding. To illustrate the vast practical potential of the above scheme, we demonstrate the successful recovery of several protein-folding pathways, using data from previous large-scale MD simulations 38 . The protein trajectories, consisting of the time-dependent coordinates of the alpha-carbon backbone, were pre-processed, subsampled by a factor of 5, treated as a set of static equilibrium measurements, and reduced in dimension before fitting a GMM (Methods). As is typical for high-dimensional parameter estimation with few structural assumptions, the fitting error due to a finite sample size n in d dimensions scales approximately as \sqrt{d/n} (Supplementary Methods); see refs. [44][45][46] for advanced techniques tackling sample-size limitations. Here, d < 10, so the sample size n ~ 10^5 suffices for effective recovery; indeed, our results were found to be robust for trajectories further subsampled by up to a factor of 25, leaving around 500 samples per Gaussian (Supplementary Fig. 3).
For each of the four analyzed proteins Villin, BBA, NTL9, and WW, the reconstructed energy landscapes reveal multiple states including a clear global minimum corresponding to the folded state (Fig. 2a, b). To estimate MFPTs, we determined the effective friction γ in Eq. (2) for each protein from the condition that the line of best fit through the predicted vs. measured MFPTs has unit gradient. Although not usually known, γ could in principle be calculated by incorporating time-dependent information from MD simulations or experimental data. Our MFPT predictions agree well with direct estimates (Supplementary Methods) from the time-dependent MD trajectories (Fig. 2c). Detailed analysis confirms that the MFPT estimates are robust under variations of the number of Gaussians used in the mixture ( Supplementary Fig. 1). Also, the estimated MEPs are in good agreement with the typical transition paths observed in the MD trajectories ( Supplementary Fig. 2).
Gene-regulatory networks. Next, we demonstrate the ability of our protocol to infer state-switching pathways in multistable gene-regulatory networks. Using a Gillespie stochastic simulation algorithm (SSA; Methods), we simulated three repressilator-type gene-regulatory network motifs 47 with self-activation. Gene network motifs with such features have been studied extensively in recent years, owing to their ability to exhibit precise oscillations 48 and to their possible importance in the determination of multiple cell fates 49 in the appropriate parameter regimes, although the role of noise in such networks is not well understood. In our simulated gene networks, each gene encodes a protein that activates the expression of its associated gene and represses another, with D = 2, 3, and 4 dimensions at low molecule numbers (Fig. 3a and Supplementary Methods). In each case, parameters were chosen to preclude oscillatory dynamics (Fig. 3a). The energy landscapes reconstructed from the simulation datasets in protein molecule-number space (with time-dependence removed) revealed multiple metastable states for each network (Fig. 3b and Supplementary Fig. 5). Broadly, we found each state to correspond to a mixture of low and high abundances of the individual proteins, with the two most common states in D = 4 dimensions consisting of two abundant and two depleted proteins (Fig. 3b). In agreement with previous studies 41,42 , the identified metastable states were not recovered from deterministic simulations of the governing ordinary differential equations (Supplementary Methods), but could only be identified directly from the stochastic data (Fig. 3a, b). We determined the effective friction γ in Eq. (2) for each D as in the protein example. The predicted MFPTs and MEPs between the metastable states were found to be accurate in comparison with time-dependent measurements (Fig. 3c and Supplementary Fig. 5b) and were robust to measurement noise typically encountered in single-cell sequencing (Supplementary Fig. 6). Our framework also correctly predicted MFPTs for a 5D asymmetric gene network (Supplementary Fig. 7). These results demonstrate the utility of our protocol for gene-regulatory network datasets and, more generally, for energy landscapes in discrete spaces.

Viral evolution. As a final proof-of-concept application, we demonstrate that our inference scheme recovers the expected evolution pathways between HIV sequences as well as the key features of a distance-based phylogenetic tree (Fig. 4). To this end, we reconstructed an effective energy landscape from publicly available HIV sequences sampled longitudinally at several points in time from multiple patients 50 , assuming that the frequency of an observed genotype is proportional to its probability of fixation and that the high-dimensional discrete sequence space can be projected onto a continuous reduced-dimensional phenotype space (Fig. 4a and Supplementary Methods). First, a Gaussian was fit to each patient and then combined in a GMM with equal weights, to avoid biasing the fitness landscape towards sequences infecting any specific patient (Supplementary Methods). Thereafter, we applied our inference protocol to reconstruct the effective energy landscape, transition network (Fig. 4b), and disconnectivity graph (Fig. 4c), where each state is associated with a separate patient. As expected, states corresponding to patients infected with different HIV subtypes are not connected by MEPs (Fig. 4a, b).
The disconnectivity graph reproduces the key features of a coarse-grained patient-level representation of the phylogenetic tree (Fig. 4c). Using our inference scheme, vertical evolution in the tree can be tracked along the MEPs in a reduced-dimensional sequence space (Fig. 4b). The energy barriers, represented by the lengths of the vertical lines in the disconnectivity graph (Fig. 4c), provide an estimate for the relative likelihood of evolution to fixation via point mutations between fitness peaks (energy minima). If mutation rates are known, the MEPs can also be used to estimate the time for evolution to fixation from one fitness peak to another 51 .
Discussion
Finding the appropriate number of collective macro-variables to describe an energy landscape is a generic problem relevant to many fields. For example, although some proteins can be described through effective one-dimensional reaction coordinates 5,7,52,53 , the accurate description of their diffusive dynamics over the full microscopic energy landscape requires many degrees of freedom 54,55 . Whenever dynamics are inherently high-dimensional, topology-preserving dimensionality reduction can enable a much faster search of the energy landscape for minima and MEPs. In practice, data dimension is often reduced with PCA or similar methods before constructing an energy landscape [55][56][57][58][59][60][61][62] . The extent to which commonly used dimensionality reduction techniques alter MEP network topology or quantitatively preserve energy barriers is not well understood. Equation (3) suggests that reducing dimensions using PCA should not introduce significant errors if the variance of the landscape around each state (energy minimum) in the neglected dimensions is similar. For instance, we found that the protein-folding data could be reduced to five dimensions while maintaining accuracy (Supplementary Fig. 1), although additional higher-energy states may become evident in higher dimensions.

Fig. 4 a Patient labels correspond to those used in ref. 50 . b MEPs between minima corresponding to patients infected with Type B HIV, plotted in the first three PCs. Paths between minima indicate likely evolutionary pathways. Minima corresponding to patients with Type 01_AE and Type C HIV were unconnected to the other minima. c Disconnectivity graph for connected minima, where vertical evolution frequency is assumed to be proportional to the normalized energy barriers (top). The disconnectivity graph reproduces the majority of the structure of a distance-based phylogenetic tree (bottom), where the lengths of vertical lines are proportional to the Jukes-Cantor sequence distance (scaled to [0, 1]).
As an alternative to using Eq. (2) in the last stage of our approach, a method such as maximum caliber [63][64][65] , which does not take the derivatives of landscape topology into account, could be supplied with the sizes of the energy barriers and used to infer MFPTs. However, we found that owing to the dependence of the MFPTs on the prefactors in Eq. (2) for different transitions, this technique could not recover all transition rates accurately for either proteins or gene-regulatory networks (Supplementary Fig. 4).
Overall, our theoretical results demonstrate the benefits of combining an analytical PDF with a linear dimensionality reduction technique so that the neglected dimensions can be accounted for explicitly. Rapidly advancing imaging techniques, such as cryo-EM, will allow many snapshots of biophysical structures to be taken at the atomic level in the near future 3,21,22,28,66,67 . A biologically and biophysically important task will be to infer dynamical information from such instantaneous static ensemble measurements. The protein-folding example in Fig. 2 suggests that the framework introduced here can help overcome this major challenge; in principle, the framework requires only the pairwise distances between recognizable features of the protein as input (here we used the carbon alpha coordinates). Another promising area of future application is the analysis of single-cell RNA-sequencing data quantifying the expression within individual cells 23 . Related to this application, Fig. 3 demonstrates that our protocol recovers state-switching pathways in multistable gene-regulatory networks, which are thought to underlie cell-fate decisions. These results are most relevant in low-molecule-number regimes, in which noise is known to be an important factor 68 . In relevant recent work, an effective nonparametric energy landscape of single-cell expression snapshots was inferred using the Laplacian of a k-nearest neighbor graph on the data, allowing lineage information to be derived via a Markov chain 15 . The GMM-based framework here provides a complementary parametric approach for reconstructing faithful low-dimensional transition state dynamics from such high-dimensional data.
Furthermore, the proof-of-concept results in Fig. 4 suggest that our inference scheme for Markovian network dynamics can be useful for studying viral and bacterial evolution, which are often modeled as movements through a series of DNA or protein sequences 69 . The fitness landscape of an organism in sequence space is analogous to the negative of an effective energy landscape. The process of fixation by a succession of mutants in a population, whereby each mutant replaces the previous lineage as the population's most recent common ancestor, has been modeled as a Markov process 70 . Successive sweeps to fixation have been observed in long-term evolution experiments, promising groundbreaking data for future analysis as whole-genome sequencing technologies improve 71 .
The inference protocol opens the possibility to analyze previously intractable multi-phase systems: many high-dimensional physical, chemical, and other stochastic processes can be described by a Fokker-Planck dynamics 1 , with phase equilibria corresponding to maxima of the stationary distribution. By taking near-simultaneous measurements of many subsystems within a large multistable Fokker-Planck system, the above scheme allows the inference of coexisting equilibria and transition rates between them. Other possible applications may include neuronal expression 11 and social networks 17,24 , which have been described in terms of effective energy landscapes.
Although we focused here on normal white-noise diffusive behavior, as is typical of protein-folding dynamics, the above ideas can in principle be generalized to other classes of stochastic exploration processes. Such extensions will require replacing Eq. (2) with suitable generalized rate formulas, as have been derived for correlated noise 1 . Conversely, the present framework provides a means to test for diffusive dynamics: if the MFPTs of an observed system differ markedly from those inferred by the above protocol, then either important degrees of freedom have not been measured, the system is out of equilibrium on measurement time scales, or the system does not have Brownian transition statistics, necessitating further careful investigation of its time dependence.
By construction, the above framework is applicable to systems whose steady-state dynamics is approximately Markovian and can be described by a Fokker-Planck-type dynamics. This broad class includes thermal equilibrium systems as well as nonequilibrium systems that can be approximated by effective equilibrium theories 72,73 . However, such approximations can become inaccurate if probabilistic non-equilibrium fluxes dominate the system dynamics 74 . For example, reconstructing dynamical gene-expression information from static snapshots is sometimes possible in the presence of oscillatory dynamics caused by processes such as the cell cycle, but can fail for gene networks with large oscillations that are not orthogonal to the processes of interest 15 . Adapting the above protocol to reconstruct the dynamics in the latter case, and of far-from-equilibrium systems in general, will require incorporating more sophisticated theories that include time-resolved information [75][76][77][78] and improved expressions for non-equilibrium transition rates 79 , and account for probabilistic fluxes 80 .
To conclude, the conformational dynamics of biophysical structures such as viruses and proteins, and the state-switching dynamics of noisy gene-regulatory networks, are characterized by their metastable states and associated transition networks, and can often be captured through Markovian models. Current experimental techniques, such as cryo-EM or RNA-sequencing, provide limited dynamical information. In these cases, transition networks must be inferred from static snapshots. Here we have introduced and tested a numerical framework for inferring Markovian state transition networks via reconstructed energy landscapes from high-dimensional static data. The successful application to protein-folding, gene-regulatory network, and viral evolution pathways illustrates that high-dimensional energy landscapes can be reduced in dimension without losing relevant topological information. In general, the inference scheme presented here is applicable whenever the dynamics of a highdimensional physical, biological, or social system can be approximated by diffusion in an effective energy landscape.
Methods
Population landscapes. A GMM was used to represent the PDF, or population landscape, of the samples. The PDF at position x of a GMM with C mixture components in d dimensions is

p(x) = \sum_{i=1}^{C} \phi_i \, \mathcal{N}(x; \mu_i, \Sigma_i),

where ϕ_i are the weights of each component, μ_i are the means, Σ_i are the covariance matrices, and \mathcal{N}(x; μ, Σ) denotes the multivariate normal density. More details on GMMs and how they were fit to data are given in the Supplementary Methods.
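For reference, the mixture density can be evaluated directly; a minimal sketch using SciPy, with parameter names following the notation above:

    from scipy.stats import multivariate_normal

    def gmm_pdf(x, weights, means, covs):
        # p(x) = sum_i phi_i * N(x; mu_i, Sigma_i)
        return sum(w * multivariate_normal.pdf(x, mean=m, cov=S)
                   for w, m, S in zip(weights, means, covs))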
Mean first passage times. We form a discrete-state continuous-time Markov chain on states given by the minima of the energy landscape. For a pair of states α and β directly connected by a minimum-energy pathway via a saddle, we approximate the transition rate α → β by the Kramers rate k_αβ in Eq. (2), whereas if α and β are not | 5,727.4 | 2018-08-27T00:00:00.000 | [
"Computer Science",
"Biology",
"Physics"
] |
Characterization and Cloning of a Dictyostelium Ste20-like Protein Kinase That Phosphorylates the Actin-binding Protein Severin*
After receiving an external stimulus, Dictyostelium amoebae are able to rearrange their actin cytoskeleton within seconds, and phosphorylation is a prime candidate for quick modification of cytoskeletal components. We isolated a kinase from cytosolic extracts that specifically phosphorylated severin, a Ca2+-dependent F-actin fragmenting protein. In gel filtration chromatography severin kinase eluted with a molecular mass of about 300 kDa and contained a 62-kDa component whose autophosphorylation caused a mobility shift in SDS-polyacrylamide gel electrophoresis and stimulated phosphorylation of severin. Severin kinase activity could be specifically precipitated with antibodies raised against the 62-kDa polypeptide. Phosphorylation of severin was strongly reduced in the presence of Ca2+, indicating additional regulation at the substrate level. Peptide sequencing and cloning of the cDNA demonstrated that the 62-kDa protein belongs to the Ste20p- or p21-activated protein kinase family. It is most closely related to the germinal center kinase subfamily with its N-terminal positioned catalytic domain followed by a presumptive regulatory domain at the C terminus. The presence of a Ste20-like severin kinase in Dictyostelium suggests a direct signal transduction from the plasma membrane to the cytoskeleton by phosphorylation of actin-binding proteins.
The dynamic rearrangements of the actin cytoskeleton in motile cells are mainly regulated by actin-binding proteins, which either interfere directly with the polymerization kinetics of actin or alter the viscoelasticity of the filamentous network (for reviews, see Refs. [1][2][3]). Severin from Dictyostelium discoideum, a model for amoeboid cell motility, belongs to the class of F-actin fragmenting and capping proteins whose members are structurally and functionally related (4). This class includes, among others, the vertebrate proteins gelsolin (5), villin (6), and gCap39 (7), and, from Physarum polycephalum, the protein fragmin (8). F-actin fragmenting proteins are especially well suited for causing quick rearrangements in the filamentous actin network. At micromolar Ca2+ levels they sever actin filaments by rupturing the noncovalent bonds between actin subunits in a filament. This leads to a rapid increase of short filaments together with a dramatic decrease in viscosity. After having severed the actin filaments, the proteins remain bound at the barbed end of the filaments and thereby prevent filament elongation. It is assumed that this results in solation of the viscous cytoplasm with a large number of short but capped filaments. For several members of this family it has been shown in vitro that uncapping is caused by polyphosphoinositides. In vivo this could then lead to free barbed ends ready for rapid elongation (9).
There is increasing evidence that actin fragmenting proteins might be targets in signaling cascades to the cytoskeleton. Gelsolin has been implicated in the phosphoinositide-mediated F-actin uncapping of human platelets following stimulation of thrombin receptors (10). Fibroblasts of gelsolin null mice have excessive actin stress fibers and migrate more slowly than wild type fibroblasts (11), while overexpression of gelsolin in NIH 3T3 fibroblasts leads to an increase in motility (12). In addition to Ca2+ and polyphosphoinositides, phosphorylation seems to play an important role in regulating proteins from this family as well (13). However, except for a fragmin kinase (14), no other kinase has been described in detail so far.
The intracellular responses to external signals are very often mediated by kinase cascades. The best studied kinase cascade activated by external signals is the mitogen-activated protein kinase (MAPK) 1 system. Its core comprises a module of three kinases in which the most distal MAPK is activated by a MAPK kinase (MAPKK), which itself is activated by a MAPKK kinase (MAPKKK). MAPK modules are ubiquitous among eukaryotes, and in recent years it has become clear that in every cell several pathways, responsive to different external stimuli, exist in parallel (15)(16)(17). The protein kinase PAK1 from rat brain was identified based upon its ability to interact with the small GTPases Rac1 and Cdc42. The binding of active Rac1/Cdc42 stimulated autophosphorylation and activity of PAK1. The sequence of PAK1 was found to be closely related to that of Ste20p, a key regulator in the mating pheromone response pathway in Saccharomyces cerevisiae that acts via activation of a MAPK cascade (18-20). The growing family of related kinases is referred to as either the PAK or the Ste20-like kinase family. Although their in vivo role has not yet been clearly defined, PAK family members are considered to be promising candidates for the mediation of both Cdc42/Rac-induced effects, cytoskeletal reorganization, and transcriptional activation via a MAPK cascade (21,22).
Based on primary structure and mode of regulation, the PAK family can be subdivided into two main branches. Close relatives of PAK1 and Ste20p (true PAKs) are characterized by a C-terminal kinase domain and an N-terminal regulatory domain of variable length that contains a p21-binding domain (23) and, in some cases, a pleckstrin homology domain as well. Members of the second branch of the PAK family, the so-called GCK subfamily, have their catalytic domain positioned at the N terminus followed by a C-terminal regulatory region (21). Here we describe the isolation and characterization of a severin kinase from Dictyostelium, whose 62-kDa subunit is most closely related to human SOK-1, a member of the GCK subfamily of Ste20-like kinases.
MATERIALS AND METHODS
Protein Purification-Cells of D. discoideum strain AX2 were cultivated axenically at 21°C in 5-liter Erlenmeyer flasks up to a density of 5 × 10^6 cells/ml, harvested without starvation, and homogenized by nitrogen cavitation in a Parr bomb essentially as described (24), in the presence of a mixture of protease inhibitors in the homogenization buffer (25). Usually 50 to 80 g of cells (wet weight) from 12 liters of culture were used for protein purification.
Rabbit actin was prepared from skeletal muscle according to Spudich and Watt (27) and further purified by gel filtration on Sephacryl S300. D. discoideum severin and actin were purified as described (28). The concentration of actin was measured as described (29). All other protein concentrations were determined by the method of Bradford (30) using bovine serum albumin as a standard.
Recombinant regulatory domain of severin kinase was purified from M15[pREP4] cells that had been transformed with the expression plasmid pQE32 containing the corresponding coding region. Cells were grown at 37°C, induced at an OD580 of 0.6 with 0.5 mM isopropyl-1-thio-β-D-galactopyranoside for 2 h, harvested, and opened by ultrasonication as described (28), and insoluble material was pelleted (20 min, 30,000 × g). The resulting pellet was re-extracted stepwise, once with TEDABP, pH 8.0, containing 150 mM NaCl and five times with TEDABP, pH 8.0, containing 6 M urea (TEDABUP). The latter five supernatants were combined and applied onto a Ni-NTA column equilibrated in the same buffer. The column was washed with TEDABUP and then with TEDABUP containing 20 mM imidazole. The regulatory domain was eluted with TEDABUP containing 200 mM imidazole. Regulatory domain purified in this way was slowly dialyzed against TEDABP, pH 8.0, and concentrated by ultrafiltration in a Centricon-10 microconcentrator (Amicon GmbH, Witten, Germany). Two rabbits were immunized with the recombinant protein according to established procedures.
Immunoprecipitation-Polyclonal antibodies directed against the regulatory domain were used for immunoprecipitation of severin kinase from either a partially purified fraction (severin kinase pool after the Mono Q column) or a crude fraction obtained after opening the Dictyostelium cells (100,000 × g supernatant). Kinase-containing fractions (700 μl) were incubated overnight at 6°C with 50 μl of polyclonal antibody 5196 or polyclonal antibody 5197 in a total volume of 1 ml with a final concentration of 0.1% Triton X-100 in phosphate-buffered saline (150 mM NaCl, 100 mM Na2HPO4, 30 mM KH2PO4), pH 8.0. Control immunoprecipitations were carried out in the absence of either polyclonal antibody or severin kinase. Protein A-Sepharose beads (25 μl) were added, the suspensions were shaken for 1 h and centrifuged for 2 min at 10,000 × g. The supernatants were removed, and the pellets were washed three times with 300 μl of phosphate-buffered saline, resuspended in 50 μl of TEDABP, pH 8.0, and aliquots (20 μl) were used in a phosphorylation assay with either severin or severin in the actin-severin complex as a substrate. The remaining beads were boiled after addition of 20 μl of 3× SDS sample buffer and analyzed by SDS-PAGE.
Phosphorylation Assays-Severin kinase activity was assayed in a reaction mixture (40 μl) containing 10 mM Tris/HCl, pH 7.5, 1 mM dithiothreitol, 1 mM EGTA, 1 mM Na3VO4, 2 mM sodium fluoride, 10 mM MgCl2, 0.05 mg/ml bovine serum albumin, 0.1 mM ATP (2-5 μCi of [γ-32P]ATP), 0.01% NaN3, and 1-4 μM substrate. The reaction was initiated by addition of either the substrate or the kinase to the reaction mixture and carried out at 30°C. Severin (2-4 μM final concentration), DS211C (2-4 μM), or the 1:1 actin-severin complex (1-2 μM each) were used as substrates. The actin-severin complex was allowed to form in G-buffer, followed by the addition of EGTA (1 mM final concentration), resulting in the EGTA-stable 1:1 complex (26). Determination of the substrate dependence of severin kinase was carried out with different substrates at a final concentration of 2 μM. For testing the Ca2+ dependence of the phosphorylation reaction, the mixture contained, in addition, 2 mM Ca2+, and in the assays with Mn2+-ATP, 10 mM MnCl2 instead of MgCl2. The pH dependence was tested by the addition of 1/10 volume of the following buffers to the reaction mixture: 500 mM MES, pH 6.0 and 6.5; 500 mM MOPS, pH 7.0; 500 mM Tris/HCl, pH 7.5, 8.0, and 8.5. Activation by autophosphorylation was tested by preincubating severin kinase with unlabeled ATP for 0, 5, or 20 min in the absence of substrate; then substrate was added and the reaction allowed to proceed for 20 min at 30°C. If not stated otherwise, phosphorylation was terminated after 30 min by the addition of 20 μl of 3× concentrated SDS sample buffer and boiling for 3 min. Proteins were separated by SDS-PAGE on minislab gels (110 × 83 × 0.5 mm) using the buffer system of Laemmli (31). Electrophoresis was terminated before the running front reached the lower buffer chamber, and the gel was cut just above the running front to remove the lower gel strip, which contains most of the non-incorporated radioactive ATP. Protein bands were visualized by staining with Coomassie Brilliant Blue and, after drying of the gels, labeled proteins were detected by autoradiography on Kodak X-AR films. For quantitation of incorporated phosphate, bands were scanned densitometrically and intensities were evaluated with the program NIH Image 1.61.
Cloning and DNA Sequence Analysis-Tryptic fragments of the 62-kDa subunit of severin kinase were resolved by reversed-phase chromatography and subjected to Edman degradation on an Applied Biosystems gas-phase sequencer according to Eckerskorn et al. (32). Probes were labeled with [α-32P]dATP and employed to screen a λgt11 cDNA library (33) as described (34). From one positive clone the cDNA insert was amplified by PCR using primers of the λgt11 flanking regions, cloned into the pUC19 vector (35), and sequenced by the chain-termination dideoxy method (36) using universal and reverse primers, as well as sequence-specific oligonucleotide primers. The isolated cDNA had a size of about 1.2 kb and contained the 3′ end, but lacked the 5′ end of the gene. A 5′ 0.8-kb EcoRI fragment of this clone was used to screen a random-primed λgt11 cDNA library (CLONTECH Inc., Palo Alto, CA), which yielded another positive clone with an insert of about 1.3 kb harboring the ATG start codon preceded by two in-frame stop codons but lacking the 3′ end of the gene. The two cDNA clones had about 1 kb of sequence in common, and internal restriction sites were used to combine the two clones to yield the full-length cDNA in pUC19. In order to exclude possible errors resulting from PCR amplification of the λgt11 cDNA clones, we confirmed both sequences at least once with independently amplified and cloned PCR products.
Standard techniques were used for cloning, transformation, and screening (37). Searches for similarities to other protein sequences were done with the program BLAST (38) using the combined non-redundant entries of the Brookhaven Protein Data Bank, Swiss-Prot, PIR, and GenBank at the NCBI. The sequence was analyzed by using the UWGCG (University of Wisconsin Genetic Computer Group; Madison, WI) and PHYLIP (Phylogeny Inference Package, version 3.5c by Joseph Felsenstein, University of Washington) program packages. Northern analysis followed established procedures (39).
The coding sequence for the regulatory domain (aa 277-478) was amplified by PCR (denaturation 94°C, 60 s; annealing 60°C, 60 s; elongation 72°C, 60 s; 25 cycles) with the primers Sevkin-Ntreg (5′-CGCGGATCCATATGAGAAGACAAAAATGGTTACAAT-3′) and Sevkin-Ct (5′-GCGAAGCTTTTATCTTTTAAGGGTTTCAATG-3′). Primer sequences corresponding to the coding sequence of severin kinase are shown in italics and restriction sites in the 5′ overhang sequence for cloning in bold. The resulting PCR product was cloned into the BamHI and HindIII sites of the pQE32 expression vector (Qiagen GmbH, Hilden, Germany). The complete expression construct was sequenced to exclude possible errors resulting from the PCR.
Partial Purification and Characterization of Severin Kinase-To identify protein kinases from D. discoideum that phosphorylate cytoskeletal proteins, we screened DEAE column fractions of soluble homogenates for kinase activities by adding actin, severin, or a 1:1 complex of both proteins as a substrate. We detected an activity that phosphorylated the actin-fragmenting protein severin either on its own or in a complex with actin. This severin kinase activity was further purified by additional chromatographic steps, including gradient elution from S-Sepharose or Mono Q and gel filtration on Superose 12 (Fig. 1). In the final Superose 12 gel filtration step the kinase eluted at a position corresponding to a molecular mass of about 300 kDa (Fig. 1D, inset, Superose 12). These active fractions contained a polypeptide of about 62 kDa which (i) coeluted with kinase activity, (ii) was strongly phosphorylated in the presence of [γ-32P]ATP, and (iii) shifted almost completely to a higher molecular mass in SDS-PAGE after preincubation with unlabeled Mg2+-ATP (Fig. 2A). The low-percentage gel (7.5% acrylamide) used in this experiment resolved the rather broad 62-kDa signal shown in Fig. 1 into at least three distinct bands (Fig. 2A, lane 3), which suggests multiple autophosphorylation of the 62-kDa protein. It is not yet clear whether native severin kinase is composed of only the 62-kDa subunit or whether it constitutes a heteromer. Autophosphorylation of the 62-kDa polypeptide exactly followed severin kinase activity during all purification steps and was therefore likely to represent the severin kinase (Fig. 1). In addition, polyclonal antibodies directed against the regulatory domain specifically precipitated the 62-kDa polypeptide from either a partially purified severin kinase fraction or the soluble fraction obtained after lysis of the Dictyostelium cells. In phosphorylation assays, the 62-kDa polypeptide in the immunoprecipitate, as well as added severin either on its own or in complex with actin, was strongly phosphorylated (Fig. 2B, lanes 2 and 3). Phosphorylation of severin in the actin-severin complex was more pronounced than phosphorylation of severin alone. Results obtained with a second, independently generated antiserum were very similar. Control immunoprecipitations were carried out in the absence of either polyclonal antibody or severin kinase fraction as described above. In the absence of antibodies (Fig. 2B, lane 1) or severin kinase (data not shown) there was no phosphorylation of severin. These results clearly demonstrate that the 62-kDa polypeptide is essential for severin kinase activity.
We used the peak fractions from the gel filtration column, the last purification step, to characterize severin kinase biochemically and to obtain sequences from tryptic peptides. Fig. 3 shows the time dependence (A), the activation by autophosphorylation (B), and the pH dependence (C) of severin kinase as measured by phosphorylation of domains 2 and 3 of severin (DS211C; see below). At early time points there was an almost exponential increase in the incorporation of phosphate into the substrate, which can be attributed to self-activation of the kinase and an excess of substrate (Fig. 3A). Autophosphorylation for 5 min increased the activity of severin kinase more than 3-fold, and a nearly 6-fold increase in activity was observed after 20 min of in vitro autophosphorylation (Fig. 3B). The activity of severin kinase decreased rapidly at pH values below 7.0, while pH values above 7.5 decreased its activity only moderately (Fig. 3C). Routinely, phosphorylation assays were carried out for 30 min at pH 7.5 and 30 °C.
The substrate specificity of severin kinase was tested with domain 1 (DS151), domain 2 (DS111M), and domains 2 and 3 (DS211C) of severin (26), the 1:1 actin-severin complex, and the Dictyostelium actin-binding proteins α-actinin, the ABP120 gelation factor, and hisactophilin. Besides severin, on its own as well as in the 1:1 complex with actin, only DS151 (residues 1-151 of severin) and DS211C (residues 152-362 of severin) turned out to be substrates of severin kinase (data not shown). In particular, DS211C was very strongly phosphorylated, being an even better substrate than native severin. Since DS151 and DS211C have no overlapping amino acids, severin kinase must either phosphorylate native severin at two or more sites or, alternatively, there must be a cryptic phosphorylation site in the constructs that is not accessible in the native molecule.
Severin phosphorylation appeared to be regulated also at the substrate level, as incorporation of phosphate was strongly reduced in the presence of Ca2+. The activity of severin kinase itself was not affected under these conditions, because its autophosphorylation and also the phosphorylation of DS211C remained nearly unaltered in the presence of Ca2+ (Fig. 4). The addition of Ca2+ triggers a conformational change in severin (28) that might render either the target amino acid or the complete protein inaccessible to severin kinase. Since severin kinase also accepted Mn2+-ATP as phosphate donor, we tested the Ca2+ dependence of the phosphorylation reaction under these conditions as well. A similar Ca2+ dependence of severin phosphorylation, of severin kinase autophosphorylation, and of DS211C phosphorylation was found with Mn2+-ATP; however, overall phosphorylation was not as pronounced as with Mg2+-ATP (Fig. 4).
Northern blot analysis of growth-phase Dictyostelium cells showed one mRNA band with a size of approximately 1.7 kb (data not shown). The difference between the apparent molecular mass of 62 kDa in SDS-PAGE and the calculated molecular mass of approximately 53 kDa could be explained by reduced mobility of the polypeptide in SDS-PAGE. Similar differences between apparent and calculated molecular mass were reported for the two related kinases Krs-1 and Krs-2 (see below). Krs-1 and Krs-2 have an estimated molecular mass of 63 and 61 kDa on SDS-polyacrylamide gels, while the predicted molecular mass is 56.3 kDa for Krs-1 and 55.6 kDa for Krs-2 (40). In that report the authors suggest that a highly acidic region in the C-terminal domain could be responsible for the slightly aberrant migration behavior in SDS-PAGE. The calculated pI of the kinase is 6.7; all microsequences collected from the protein were present in the cDNA-deduced amino acid sequence.
The predicted protein sequence indicates a two-domain organization of the protein (Fig. 5). The N-terminal part, with 276 residues, constitutes the catalytic domain characteristic of Ser/Thr- and Tyr-protein kinases. All 11 subdomains typically found in these protein kinases (41) are present. The C-terminal domain encompasses 202 residues and is rich in glutamine, threonine, and proline residues. Most obvious are two glutamine-rich stretches between aa 323 and 347 and one threonine-rich stretch between aa 359 and 369 (9 out of 11 residues). Several proline residues (17 in total) are scattered throughout the central part of the C-terminal domain between residues 312 and 429. These regions could constitute binding interfaces for regulatory proteins. In addition, a highly acidic region is present between aa 290 and 306, with 10 negatively charged amino acids out of 17. Short acidic regions of unknown function have also been found in other proteins of the PAK family, including mammalian PAKs, Ste20p, Krs-1 and Krs-2, and SOK-1 (21,40).
In database searches with the program BLAST (38), the highest degree of sequence similarity was observed with members of the PAK family of protein kinases. These kinases share a highly conserved catalytic domain and have the so-called PAK signature "GTPY/FWMAPE" in common (Fig. 5). They can be subdivided into two groups based on their structure and regulation. Ste20p, PAK1, MIHCK (42), and related PAKs have a C-terminal kinase domain and a p21-binding motif in the N-terminal part. In the GCK branch of the PAK family the catalytic domain is positioned at the extreme N terminus (21). Based on its primary structure and sequence homology, the 62-kDa subunit of severin kinase clearly belongs to the GCK subfamily (Fig. 6A). Sequence comparisons of its catalytic domain with the catalytic domains of the other kinases revealed that the kinase subunit of severin kinase is most closely related to human SOK-1 (75% identity) and the open reading frame T19A5.2 from Caenorhabditis elegans (72% identity). However, even with the most distant member in this comparison, Ste20p from S. cerevisiae, the kinase from Dictyostelium shared 42% sequence identity in the catalytic domain.
[FIG. 2 legend, continued] The proteins were separated by SDS-PAGE in 7.5% gels, stained with Coomassie Blue (lanes 1 and 2), or stained and processed for autoradiography (lane 3). The positions of the 62-kDa polypeptide before (*) and after (*′) autophosphorylation are indicated. The presence of at least three distinct bands in the autoradiogram suggests multiple autophosphorylation of the 62-kDa polypeptide. B, partially purified severin kinase from the Mono Q column was incubated in the absence (lane 1) or presence of polyclonal antibodies (lanes 2 and 3) that were raised against the regulatory domain of the 62-kDa polypeptide. After precipitation with protein A-Sepharose, the beads were used in phosphorylation reactions with either severin (lanes 1 and 2) or the actin-severin complex (lane 3) as substrates. Proteins were separated by SDS-PAGE and processed for autoradiography as described above. Note that the specifically precipitated material showed the characteristic autophosphorylation and phosphorylation of the substrates.
FIG. 3. Time dependence (A), activation by autophosphorylation (B), and pH dependence of severin kinase activity (C). The phosphorylation reactions with DS211C as a substrate were carried out for the indicated periods of time (A), for 20 min after allowing severin kinase to autophosphorylate for 0, 5, or 20 min in the presence of unlabeled ATP (B), or for 30 min at the pH values stated (C). The reaction mixtures were separated by SDS-PAGE in 12% gels, and the dried and stained gels were subjected to autoradiography. The radioactive DS211C bands were scanned densitometrically and intensities evaluated with the program NIH Image 1.61.
To further clarify the relationship between PAK family members and the kinase subunit of severin kinase, we calculated multiple sequence alignments of the catalytic domains of PAK family members. In the evolutionary tree derived from these alignments the members of the two PAK branches are separated as expected, and the GCK subfamily members split into two main branches, one formed by KHS1, Rab8ip, HPK1, and NIK, the other by MST1, MST2, MESS1, NRK1, T19A5.2, SOK-1, and the kinase subunit of severin kinase. SOK-1, T19A5.2, and the kinase subunit of severin kinase are most closely related and cluster together (Fig. 6B).
A sequence alignment with human SOK-1 is shown in Fig. 7. The two proteins are 75% identical and 84% similar in their catalytic domains. In addition, they share significant sequence similarity in the C-terminal domain. Two long regions of 44 and 58 amino acids with approximately 31% identity and 40% similarity, respectively, are present in this domain; the first is adjacent to the catalytic domain and the second is located at the extreme C terminus. A third, short stretch of 16 amino acids in the central part of the C-terminal domains displays 33% sequence similarity and is flanked in the 62-kDa subunit of severin kinase by two insertions of 22 (amino acids 320-341) and 51 (amino acids 358-408) residues. Interestingly, when we compared the entire C-terminal domains of the Dictyostelium kinase and Rab8ip from mouse (43) or GCK from human (44) with the program Bestfit, we found only one short homologous region of 19 residues with about 37% sequence similarity. All 16 residues of the central C-terminal homology region of the Dictyostelium kinase and SOK-1 were contained within these 19 residues, raising the possibility that this sequence constitutes an as yet unknown p21-binding motif.
DISCUSSION
Based on in vitro phosphorylation assays we have partially purified a severin kinase from cytoplasmic extracts of D. discoideum. Severin kinase has a molecular mass of about 300 kDa and harbors a 62-kDa subunit that is closely related to p21-activated protein kinases. Sequence comparisons of the catalytic domains of selected PAKs and the severin kinase subunit clearly identify the kinase as a new member of the GCK subfamily. It displays the highest similarity to human SOK-1, which is activated by oxidant stress (45). The two proteins are 75% identical in their catalytic and 31% identical in their regulatory domains and could therefore fulfill a similar or even identical in vivo function.
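The identity and similarity percentages quoted throughout this section come from pairwise comparisons of aligned sequences. A minimal sketch of that calculation is given below; the two twelve-residue strings are hypothetical fragments used only to exercise the function, and the residue similarity groups are one coarse Dayhoff-style choice among several in common use.

# Sketch: percent identity and percent similarity for an ungapped,
# pre-aligned pair of sequences. Not the actual severin kinase / SOK-1
# sequences; the grouping scheme is an illustrative assumption.

GROUPS = ["AGST", "ILVM", "DE", "KRH", "FYW", "NQ", "C", "P"]

def same_group(a, b):
    return any(a in g and b in g for g in GROUPS)

def identity_similarity(seq1, seq2):
    assert len(seq1) == len(seq2), "expects pre-aligned, equal-length strings"
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    ident = sum(a == b for a, b in pairs)
    simil = sum(a == b or same_group(a, b) for a, b in pairs)
    n = len(pairs)
    return 100.0 * ident / n, 100.0 * simil / n

ident, simil = identity_similarity("GTPYWMAPEVIL", "GTPFWMAPELIM")
print(f"identity: {ident:.0f}%  similarity: {simil:.0f}%")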
Severin kinase is the first example of a GCK subfamily kinase with a cytoskeletal protein as a possible in vivo target. In contrast to GCK subfamily members, several true PAKs have recently been implicated in cytoskeletal reorganization or the regulation of cytoskeletal proteins. Ste20p was found to bind to Bem1p, which associates with actin (46), and PAK1 is thought to regulate actin organization in mammalian cells via an as yet unknown effector (47,48). MIHCK from Dictyostelium and its homologue from Acanthamoeba phosphorylate the heavy chain of some of the myosin I isozymes on a single serine or threonine residue and thereby stimulate their actin-activated Mg-ATPase activity 30-50-fold (49,50). Cloning of the corresponding genes revealed that MIHCK is a member of the PAK family and closely related to mammalian PAK and yeast Ste20p molecules (42,51). In gel overlay assays and affinity chromatography experiments, MIHCK from Dictyostelium interacted with GTPγS-labeled Rac1 and Cdc42, which probably bind to a conserved p21-binding domain commonly found in the N-terminal regulatory domain of true PAKs. Interestingly, in the presence of active Rac1 and Cdc42, autophosphorylation of MIHCK increased from 1 up to 9 mol of phosphate per mol of kinase, concomitant with an approximately 10-fold stimulation of the rate of myosin ID phosphorylation. These results suggest that MIHCK directly links Cdc42/Rac signaling pathways to motile processes driven by myosin I molecules (42).
For members of the GCK subfamily, the putative regulatory role of the C-terminal non-catalytic domain is not clear. In the case of MST1, MST2, and SOK-1 it apparently has an inhibitory function, because its removal resulted in an increase in kinase activity (45,52). Furthermore, it has been shown that the C-terminal domains of MST1 and MST2 mediate homo- and heterodimerization (52). Rab8ip, the murine homologue of human GCK, has been isolated in a two-hybrid screen as a Rab8-interacting protein (43). This finding was surprising because members of the GCK subfamily lack the conserved p21-binding domain of 16 amino acids found in the N-terminal regulatory domains of true PAKs (21,23). Thus it is possible that other GCK subfamily members are regulated by small GTPases as well, but that a common binding motif has not yet been identified. We compared the sequences of the non-catalytic domains of GCK, Rab8ip, and the kinase subunit of severin kinase and found a stretch of 19 similar amino acids (amino acids 341-359 in the 62-kDa subunit of severin kinase) that could constitute a binding site for a small GTPase.
Like MIHCK and other kinases of the PAK family, severin kinase showed strong and possibly multiple autophosphorylation (Fig. 2A), which resulted in a severalfold activation of kinase activity (Fig. 3B). In addition, severin phosphorylation seemed to be regulated at the substrate level, since Ca2+ strongly reduced phosphorylation of severin, whereas autophosphorylation of the kinase and phosphorylation of DS211C were nearly unchanged (Fig. 4). In two-dimensional gel electrophoresis, purified severin resolved into three bands, suggesting that severin is subject to phosphorylation in vivo as well. Treatment of purified severin with severin kinase resulted in an additional, more acidic spot (data not shown). It is at present not clear whether phosphorylation influences one or more of the in vitro activities of severin. This important issue is difficult to resolve because one has not only to obtain fully phosphorylated severin but also to distinguish and characterize the phosphorylation sites in native severin as opposed to the recombinant domain 1 (DS151) and domains 2+3 (DS211C) constructs. Phosphorylation of fragmin, the Physarum homologue of severin, by a casein kinase II enzyme had no effect on the in vitro activity of fragmin (14). The authors speculate that phosphorylation of fragmin could be associated with an intracellular redistribution similar to that of gCAP39, which was shown to be preferentially associated with nuclear preparations in the phosphorylated state (13).
FIG. 7. Sequence alignment of the kinase subunit of severin kinase (62 kDa) and human SOK-1 (SOK-1). The sequence alignment was done with the program Clustal from the UWGCG program package. Identical and similar residues are indicated by a star or a point, respectively. The arrowhead marks the start of the regulatory regions. The two sequences are more closely related in their catalytic domains than in their regulatory regions. A small stretch of similar amino acids (341-359 in the 62-kDa subunit), also present in GCK and Rab8ip, might be a putative binding site for small GTPases. Residue numbers of both proteins are shown on the right.
Several members of the GCK subfamily are responsive to cellular stress. Sps1p has been shown to become activated in response to nutrient deprivation (53). Human Krs-1 and Krs-2, which are identical with MST1 and MST2, are activated upon treatment of cells with staurosporine, okadaic acid, high concentrations of sodium arsenite, or extreme heat shock at 55 °C (40,54,55). The activity of another member, human SOK-1, was shown to be induced severalfold by oxidant stress, but not by growth factors, alkylating agents, cytokines, heat shock, or osmotic stress. It most likely controls a novel stress response pathway, since it is not involved in already defined MAPK cascades (45). These findings suggest that members of the GCK subfamily are important for the response of eukaryotic cells to environmental stresses. The in vivo regulation of severin kinase is not yet known. However, its close similarity to human SOK-1 and other kinases of the GCK subfamily suggests that it might also be activated in response to cellular stress. Possibly its activation leads to phosphorylation of severin and connects an extracellular signal to the cytoskeleton via an as yet unknown regulatory cascade. Disruption of the severin kinase gene and in vivo labeling experiments should help to unravel the in vivo role of Dictyostelium severin kinase.
"Biology",
"Chemistry"
] |
SYNTHESIS, MAGNETIC AND SPECTROSCOPIC STUDIES OF Ni(II), Cu(II), Zn(II) AND Cd(II) COMPLEXES OF A NEW SCHIFF BASE DERIVED FROM (5-BROMO-2-HYDROXYBENZYLIDENE)-3,4,5-TRIHYDROXYBENZOHYDRAZIDE
A new hydrazide Schiff base ligand, GHL1 ((5-bromo-2-hydroxybenzylidene)-3,4,5-trihydroxybenzohydrazide), was prepared by refluxing 3,4,5-trihydroxybenzohydrazide with an ethanolic solution of 5-bromo-2-hydroxybenzaldehyde. The ligand was reacted with the acetate salts of Ni(II), Cu(II), Zn(II) and Cd(II). All the complexes were characterized by elemental analysis, molar conductivity, TGA, UV-Vis and FT-IR spectral studies. All the complexes have octahedral geometry except the Ni(II) complex, which has tetrahedral geometry.
INTRODUCTION
Tridentate benzhydrazone-derived ligands containing ONO donor atoms can be synthesized easily by reacting benzhydrazide with any aldehyde or ketone [1]. The presence of donor atoms in the ligand plays an important role in the formation of a stable chelate ring, and this facilitates the complexation process [2]. Moreover, studies on the synthesis, spectroscopic characterization and reactions of transition metals with hydrazone ligands have revealed a wide spectrum of biological and pharmaceutical activities, such as antimicrobial, antibacterial, antifungal, anti-inflammatory, anticonvulsant, antitubercular, antiviral and antioxidative effects, as well as inhibition of tumor growth [3-5]. Bioinorganic chemists have paid great attention to Schiff base complexes because many of them are biologically important species [6]. The synthesis of new ligands and complexes is therefore an important step in the development of coordination chemistry, as such compounds can exhibit novel properties and reactivity [5]. In the search for transition metal complexes with novel coordination spheres, hydrazone ligands were found to coordinate in a tridentate fashion, which makes them suitable chelating agents for metal ions, typically favoring octahedral geometry [7,8]. Thus, in this paper the synthesis and characterization of Ni(II), Cu(II), Zn(II) and Cd(II) complexes with a hydrazide derivative are described.
Infrared spectra were obtained as KBr discs (4000-400 cm-1) on a Perkin-Elmer FT-IR spectrometer. The electronic spectra were recorded on a Cary 50 Conc. UV-Visible spectrophotometer in 10-3 M DMSO solution. Thermal analysis of the complexes was performed on a Perkin-Elmer Pyris Diamond DTA/TG thermal system under a nitrogen atmosphere at a heating rate of 10 °C/min from 30 to 900 °C. Elemental analyses (C, H, N) were performed using a Flash EA 1112 Series elemental analyzer.
Synthesis of the metal complexes
An ethanolic solution of (5-bromo-2-hydroxybenzylidene)-3,4,5-trihydroxybenzohydrazide (0.35 g, 0.8 mmol) was added to 50 mL of an aqueous solution of the metal salt together with three drops of triethylamine. The mixture was stirred and refluxed for 5 hours. The solid product that precipitated was filtered, washed, recrystallized from DMSO and dried in a desiccator.
RESULTS AND DISCUSSION
The prepared complexes were found to be solids, soluble in dimethylsulfoxide. The elemental analyses shown in Table 1 indicate that the experimental values for all the complexes are in good agreement with those calculated from the proposed formulae.
Electronic spectra and magnetic studies
The electronic spectrum of the ligand GHL1 shows two bands at 307 and 340 nm due to the n→π* transitions of the chromophore (-C=N-NH-CO). In the spectra of the complexes, these bands were shifted to lower frequencies, which indicates that the imine nitrogen atom and the oxygen atom are involved in coordination with the metal ions [3,7].
The electronic spectrum of the Ni(II) complex displayed two bands in the visible region, at 422 and 626 nm, which are assigned to the electronic transitions 3T1(F) → 3T1(P) (ν3) and 3T1(F) → 3T2(F) (ν1), respectively. The band (ν2) at 385 nm is attributed to the transition 3T1(F) → 3A2(F), which corresponds to the charge transfer (C.T.) region [11]. The calculated value of the ligand field parameter 10Dq is 19967 cm-1 for (ν1). The interelectronic repulsion parameter B was calculated to be 116 cm-1 for the Ni(II) complex; this value is lower than the free Ni2+ ion value of 1040 cm-1, which is due to overlapping and delocalization of electrons over the molecular orbitals encompassing both the metal and the ligands. Moreover, the nephelauxetic ratio β = B/Bo = 0.11 indicates appreciable covalent character in this complex [3-6, 8, 12]. The magnetic moment of 3.4 B.M. demonstrates that the Ni(II) complex is paramagnetic and has a high-spin tetrahedral configuration with a 3T1(F) ground state [3,6,13,14]. Furthermore, the molar conductivity of a 10-3 M solution in DMSO at room temperature, 73 ohm-1 cm2 mol-1, indicates that the Ni(II) complex is an electrolyte [15].
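The quantities in this discussion follow from simple unit conversions. The sketch below converts the band positions from nm to wavenumbers (10^7/λ cm-1) and reproduces the quoted nephelauxetic ratio from the reported B values; it is illustrative only and makes no claim about the fitting procedure used to extract 10Dq.

# Sketch: unit conversions behind the ligand-field discussion above.

def nm_to_wavenumber(nm):
    """Convert a wavelength in nm to a wavenumber in cm^-1 (1e7 / nm)."""
    return 1.0e7 / nm

for label, nm in [("nu3 (422 nm)", 422.0), ("nu1 (626 nm)", 626.0)]:
    print(f"{label}: {nm_to_wavenumber(nm):.0f} cm^-1")

B_complex = 116.0    # cm^-1, value reported for the Ni(II) complex
B_free = 1040.0      # cm^-1, free Ni2+ ion value quoted in the text
beta = B_complex / B_free
print(f"nephelauxetic ratio beta = {beta:.2f}")  # 0.11, as quoted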
The electronic spectrum of the Cu(II) complex displayed strong bands in the range 324-340 nm, which can be assigned to n→π* transitions, and a charge-transfer (LMCT) band in the range 400-415 nm. The spectrum also showed a d-d electronic transition at 607 nm, assigned to 2Eg(D) → 2T2g(D). The broadness of this band is due to the ligand field and the Jahn-Teller effect. These absorptions are consistent with a distorted octahedral geometry for the Cu(II) ion. Moreover, the magnetic moment of the Cu(II) complex is 1.9 B.M., which is within the range expected for one unpaired electron. Furthermore, the complex is a non-electrolyte, as the molar conductance was found to be 0.87 ohm-1 cm2 mol-1 in 10-3 M DMSO [5,11,16-18].
Finally, the diamagnetic Zn(II) and Cd(II) complexes show absorption bands at 325 nm and at 285 and 320 nm, respectively. These bands are attributed to MLCT charge transfer, as the d10 electronic configuration of these complexes precludes any d-d transition [16,19,20]. All data and remarks are given in Table 2.
Infrared spectra studies
The infrared spectrum of the ligand GHL1 showed strong bands at 3569 and 3223 cm-1, which are due to ν(OH) and ν(NH), respectively. The ν(C=N) band of the ligand was observed at 1637 cm-1, and this band was shifted to lower frequencies by 19-48 cm-1 in the spectra of the complexes. Furthermore, the complexes exhibited weak bands between 550 and 575 cm-1, which are attributed to ν(M-N). This indicates that the ligand is coordinated to the metal ions through the N atom. The spectrum of the ligand also showed a strong band at 1545 cm-1 attributed to ν(C=O); this band was shifted to lower frequencies by 4-20 cm-1 in the spectra of the complexes. The ν(C-O) band, which appeared at 1089 cm-1 in the spectrum of the free ligand, was shifted to higher frequencies by 85-104 cm-1. Moreover, the spectra of the complexes exhibited weak bands between 440 and 481 cm-1, which are attributed to ν(M-O). This indicates that GHL1 is a tridentate ligand coordinated to the metal ions through its ONO atoms.
The asymmetric and symmetric ν(COO-) vibrations were observed at 1450 and 1346 cm-1, respectively, in the spectrum of the Cd(II) complex. The complex also exhibited a weak band at 481 cm-1, which is due to ν(M-O). This indicates that the acetate group is coordinated to the Cd(II) ion through both of its O atoms [21-23]. Characteristic vibrations and assignments of the free ligand and its complexes as KBr pellets are listed in Table 3.
Table 3. Characteristic IR bands (cm-1) of the ligand and its metal complexes.
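The assignment of a bidentate acetate rests on the separation Δν between the asymmetric and symmetric carboxylate stretches. A minimal sketch of this diagnostic is given below; the ~164 cm-1 reference value for free ionic acetate is a typical literature figure, used here only for illustration.

# Sketch: carboxylate binding-mode diagnostic from the reported IR bands.

nu_asym, nu_sym = 1450.0, 1346.0   # cm^-1, from the Cd(II) spectrum above
delta = nu_asym - nu_sym
IONIC_DELTA = 164.0                # cm^-1, approximate free-acetate value
                                   # (assumed reference, for illustration)

mode = ("bidentate/chelating (delta < ionic value)" if delta < IONIC_DELTA
        else "monodentate (delta > ionic value)")
print(f"delta(COO-) = {delta:.0f} cm^-1 -> consistent with {mode}")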
Thermal studies
The weight loss was measured from 40 to 950 °C. The weight losses calculated for each chelate over the corresponding temperature ranges are shown in Table 4. The metal percentages calculated from the metal oxide or metal residues were compared with those determined by analytical metal content determination [24]. The Ni(II) complex was stable up to 40 °C; its decomposition started at this temperature and was complete at 952 °C. A mass loss occurred within the temperature range 40-200 °C, corresponding to the loss of four hydrated water molecules. The Ni(II) complex decomposed to NiO as residue [found(calculated)%: 8.945(8.28)] in four steps, over the temperature ranges 40-200, 200-350, 350-500 and 500-795 °C, respectively. In the decomposition of the Ni(II) complex, the mass losses corresponded to 4(H2O), 2(CH3CO2), 2(BrPh-CH) and 2[(OH)3Ph-CON2H], respectively [24].
The Cu(II) complex was stable up to 35 °C; its decomposition started at this temperature and was complete at 608 °C. A mass loss occurred within the temperature range 36-266 °C, corresponding to the loss of three hydrated water molecules [24]. The Cu(II) complex decomposed to Cu as residue [found(calculated)%: 7.21(8.4)] in three steps, over the temperature ranges 36-266, 271-446 and 452-608 °C, respectively. In the decomposition of the Cu(II) complex, the mass losses corresponded to 3H2O, 2(C7H5N2OBr) and 2(C6H5O3), respectively. The Zn(II) complex was stable up to 35.00 °C; its decomposition started at 35.04 °C and was complete at 710.61 °C. A mass loss occurred within the temperature range 35.04-315.21 °C, corresponding to the loss of three hydrated water molecules [24]. The Zn(II) complex decomposed to Zn as residue [found(calculated)%: 8.85(9.43)] in four steps, over the temperature ranges 35.04-315.21, 318.53-341.79, 347.33-592.10 and 596.53-710.61 °C, respectively. In the decomposition of the Zn(II) complex, the mass losses corresponded to 3H2O, (C6H5O3)2CO and (BrPhCHNNH)2CO, respectively.
The Cd(II) complex was stable up to 40.00 °C; its decomposition started at this temperature and was complete at 950 °C. A mass loss occurred within the temperature range 40.00-140.0 °C, corresponding to the loss of two hydrated water molecules. The Cd(II) complex decomposed to a CdO residue [found(calculated)%: 12.2(19.2)] in four steps, over the temperature ranges 40.00-140.0, 145.0-250.0, 300.0-470 and 470.0-950.0 °C, respectively. In the decomposition of this complex, the mass losses corresponded to 2H2O, 2H2O, (2CH3CO2 + N2) and (BrPhCH2 + C7H6O4), respectively. The TGA curves of the complexes indicate weight losses of 6.411%, 6.322%, 8.147% and 6.060%, respectively, showing that the complexes contain 4, 4, 3 and 3 moles of water per complex molecule, respectively. The IR spectra of the complexes are characterized by the appearance of a broad band in the region 3416-3377 cm-1, due to ν(-OH) of this water. This water was not identified by the elemental analyses; the water molecules are therefore located outside the coordination sphere of the complexes. The TGA curves of the solid complexes gave experimental mass losses and residual masses in good agreement with the calculated values. The intermediate and final products of the thermal decomposition of the complexes were also identified by IR spectra. The thermal decomposition processes of the complexes are summarized in Table 4.
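Assigning a TGA step to lattice water is a simple mass-fraction calculation. The sketch below uses a placeholder molar mass for the hydrated complex (the actual formula weights are those underlying Table 1), assumed here only to illustrate the arithmetic.

# Sketch: expected % mass loss for n lattice waters. M_COMPLEX is a
# hypothetical value; substitute the formula weight from Table 1.

M_WATER = 18.02          # g/mol
M_COMPLEX = 850.0        # g/mol, placeholder hydrated-complex molar mass

def water_loss_percent(n_water, m_complex, m_water=M_WATER):
    """Expected % mass loss on losing n_water lattice water molecules."""
    return 100.0 * n_water * m_water / m_complex

for n in (2, 3, 4):
    print(f"{n} H2O -> {water_loss_percent(n, M_COMPLEX):.2f} % mass loss")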
CONCLUSIONS
In this paper, the synthesis and spectroscopic characterization of a hydrazone-derived ligand and its Ni(II), Cu(II), Zn(II) and Cd(II) complexes were presented. The elemental analysis, magnetic susceptibility, FT-IR, UV-Vis and TGA observations suggest a tetrahedral geometry for the Ni(II) complex, as shown in Figure 1, and an octahedral geometry for the remaining complexes.
Table 1.
Physical properties and analytical data of the ligand and its complexes.
Table 2.
The electronic spectra of free ligands and their complexes.
Table 4.
Thermal analysis data for some metal complexes of GHL1.
"Chemistry"
] |
Quadrature demultiplexing using a degenerate vector parametric amplifier
We report on quadrature demultiplexing of a quadrature phase-shift keying (QPSK) signal into two cross-polarized binary phase-shift keying (BPSK) signals with negligible penalty at a bit-error rate (BER) of 10−9. The all-optical quadrature demultiplexing is achieved using a degenerate vector parametric amplifier operating in phase-insensitive mode. We also propose and demonstrate the use of a novel and simple phase-locked loop (PLL) scheme based on detecting the envelope of one of the signals after demultiplexing in order to achieve stable quadrature decomposition. © 2014 Optical Society of America
OCIS codes: (060.2320) Fiber optics amplifiers and oscillators; (190.4970) Parametric oscillators and amplifiers; (190.4380) Nonlinear optics, four-wave mixing.
References and links
1. J. Kakande, R. Slavík, F. Parmigiani, A. Bogris, D. Syvridis, L. Grüner-Nielsen, R. Phelan, P. Petropoulos, and D. J. Richardson, "Multilevel quantization of optical phase in a novel coherent parametric mixer architecture," Nat. Photonics 5, 748–752 (2011).
2. T. Richter, R. Elschner, and C. Schubert, "QAM phase-regeneration in a phase-sensitive fiber-amplifier," in 39th European Conference and Exhibition on Optical Communication (ECOC 2013), paper We.3.A.2.
3. B. Corcoran, S. L. I. Olsson, C. Lundström, M. Karlsson, and P. A. Andrekson, "Mitigation of nonlinear impairments on QPSK data in phase-sensitive amplified links," in 39th European Conference and Exhibition on Optical Communication (ECOC 2013), paper We.3.A.1.
4. S. L. Olsson, T. A. Eriksson, C. Lundström, M. Karlsson, and P. A. Andrekson, "Linear and nonlinear transmission of 16-QAM over 105 km phase-sensitive amplified link," in Optical Fiber Communication Conference (OFC 2014), paper Th1H.3.
5. Z. Zheng, L. An, Z. Li, X. Zhao, and X. Liu, "All-optical regeneration of DQPSK/QPSK signals based on phase-sensitive amplification," Opt. Commun. 281, 2755–2759 (2008).
6. R. P. Webb, J. M. Dailey, R. J. Manning, and A. D. Ellis, "Phase discrimination and simultaneous frequency conversion of the orthogonal components of an optical signal by four-wave mixing in an SOA," Opt. Express 19, 20015–20022 (2011).
7. F. Da Ros, K. Dalgaard, L. Lei, J. Xu, and C. Peucheret, "QPSK-to-2×BPSK wavelength and modulation format conversion through phase-sensitive four-wave mixing in a highly nonlinear optical fiber," Opt. Express 21, 28743–28750 (2013).
8. F. Da Ros, K. Dalgaard, Y. Fukuchi, J. Xu, M. Galili, and C. Peucheret, "Simultaneous QPSK-to-2×BPSK wavelength and modulation format conversion in PPLN," IEEE Photon. Technol. Lett. 26, 1207–1210 (2014).
9. M. Gao, T. Kurosu, T. Inoue, and S. Namiki, "Low-penalty phase de-multiplexing of QPSK signal by dual-pump phase sensitive amplifiers," in 39th European Conference and Exhibition on Optical Communication (ECOC 2013), paper We.3.A.5.
10. R. P. Webb, M. Power, and R. J. Manning, "Phase-sensitive frequency conversion of quadrature modulated signals," Opt. Express 21, 12713–12727 (2013).
11. F. Lorences-Riesgo, F. Chiarello, C. Lundström, M. Karlsson, and P. A. Andrekson, "Experimental analysis of degenerate vector phase-sensitive amplification," Opt. Express 22, 21889–21902 (2014).
12. N. K. Kjøller, M. Galili, K. Dalgaard, H.-C. Mulvad, K. Røge, and L. K. Oxenløwe, "Quadrature decomposition by phase conjugation and projection in a polarizing beam splitter," in European Conference and Exhibition on Optical Communication (ECOC 2014), paper Tu.4.6.2.
13. F. Parmigiani, R. Slavík, G. Hesketh, P. Petropoulos, and D. J. Richardson, "Quadrature decomposition of optical fields using two orthogonal phase sensitive amplifiers," in European Conference and Exhibition on Optical Communication (ECOC 2014), paper P.3.8.
14. C. McKinstrie and S. Radic, "Phase-sensitive amplification in a fiber," Opt. Express 12, 4973–4979 (2004).
15. F. Parmigiani, G. Hesketh, R. Slavík, P. Horak, P. Petropoulos, and D. J. Richardson, "Optical phase quantizer based on phase sensitive four wave mixing at low nonlinear phase shifts," IEEE Photon. Technol. Lett. 26, 2146–2149 (2014).
16. X. Liu, A. R. Chraplyvy, P. J. Winzer, R. W. Tkach, and S. Chandrasekhar, "Phase-conjugated twin waves for communication beyond the Kerr nonlinearity limit," Nat. Photonics 7, 560–568 (2013).
17. M. Gao, T. Kurosu, T. Inoue, and S. Namiki, "Phase comparator using phase sensitive amplifier for phase noise-tolerant carrier phase recovery of QPSK signals," in 18th OptoElectronics and Communications Conference (OECC 2013), held jointly with the 2013 International Conference on Photonics in Switching, paper TuS2-4.
18. C. Lundström, R. Malik, L. Grüner-Nielsen, B. Corcoran, S. L. I. Olsson, M. Karlsson, and P. A. Andrekson, "Fiber optic parametric amplifier with 10-dB net gain without pump dithering," IEEE Photon. Technol. Lett. 25, 234–237 (2013).
19. P. Johannisson, M. Sjödin, M. Karlsson, H. Wymeersch, E. Agrell, and P. A. Andrekson, "Modified constant modulus algorithm for polarization-switched QPSK," Opt. Express 19, 7734–7741 (2011).
20. V. Ataie, E. Temprana, N. Alic, and S. Radic, "Demonstration of local-oscillator phase-noise tolerant 40 GBaud/s coherent transmitter," in European Conference and Exhibition on Optical Communication (ECOC 2014), paper Tu.4.6.2.
21. J. Proakis and M. Salehi, Digital Communications (McGraw-Hill Education, 2007).
22. L. Grüner-Nielsen, S. Herstrøm, S. Dasgupta, D. J. Richardson, D. Jakobsen, C. Lundström, P. A. Andrekson, M. E. V. Pedersen, and B. Palsdottir, "Silica-based highly nonlinear fibers with a high SBS threshold," in IEEE Winter Topicals Meetings (WTM 2011), paper MD4.2.
23. B. P.-P. Kuo, J. M. Fini, L. Grüner-Nielsen, and S. Radic, "Dispersion-stabilized highly-nonlinear fiber for wideband parametric mixer synthesis," Opt. Express 20, 18611–18619 (2012).
24. M.-C. Ho, M. Marhic, K. Wong, and L. G. Kazovsky, "Narrow-linewidth idler generation in fiber four-wave mixing and parametric amplification by dithering two pumps in opposition of phase," J. Lightwave Technol. 20, 469–476 (2002).
Introduction
In long-haul transmission systems, the use of multilevel modulation formats such as m-level phase-shift keying (PSK) or quadrature-amplitude modulation (QAM) in conjunction with coherent detection has increased steadily in recent years. Consequently, research on schemes performing all-optical signal processing of advanced modulation formats has also attracted much attention. Demonstrations have covered different applications and modulation formats, such as phase and amplitude regeneration of quadrature-phase-shift keying (QPSK) [1] and 8-QAM [2] signals and compensation of nonlinear transmission distortions of QPSK [3] and 16-QAM [4] signals.
An important functionality attractive for future network applications is all-optical quadrature decomposition, whereby the in-phase (I) and quadrature (Q) components of a signal are separated. Decomposition of quadratures would enable many modulation format conversions, such as obtaining two binary-phase-shift keying (BPSK) signals from a QPSK signal or two 4-level amplitude-shift keying (ASK) signals from a 16-QAM signal. Quadrature demultiplexing can also be utilized to regenerate QPSK signals by demultiplexing the quadratures, regenerating both BPSK signals, and finally recombining the regenerated signals into a QPSK signal [5]. A scheme to demultiplex a QPSK signal into two BPSK signals at two different wavelengths has already been demonstrated numerically [6] and experimentally [7,8] using phase-sensitive four-wave mixing (FWM). This scheme is challenging due to the need to control and lock the phases of four pump waves such that the QPSK signal is correctly decomposed. Another limitation of this scheme is that the output signals are located at two wavelengths different from that of the input signal. Using a conventional two-pump degenerate scalar phase-sensitive amplifier (PSA) to obtain one of the signal quadratures has also been demonstrated experimentally [9]. A QPSK signal can be converted into a BPSK signal corresponding to either the I or the Q quadrature given a correct phase relation between the signal carrier and the pumps. However, this scheme outputs only one quadrature; the other quadrature could only be obtained by parallelization, which increases the complexity.
Decomposition of both quadratures onto two signals at the same wavelength but with orthogonal polarizations has also been proposed and demonstrated numerically [10]. This scheme requires the use of four phase-locked waves, and the output wavelength differs from the input wavelength, which makes it less attractive. Recently, we proposed the use of a dual-pump driven degenerate vector fiber-optic parametric amplifier (FOPA) in order to convert a QPSK signal into a dual-polarization (DP)-BPSK signal [11]. To achieve quadrature demultiplexing, the input signal needs to be co-polarized with one of the pumps and cross-polarized with the other pump. The output idler is a conjugated copy of the input signal at the same wavelength but with orthogonal polarization. The combination of the signal and idler corresponds to the decomposition of the signal quadratures onto two cross-polarized waves when they are equalized in power. Therefore, this scheme does not have such strong requirements regarding the number of locked waves. Moreover, the output is at the same wavelength as the input.
Experimentally, demultiplexing one quadrature using a low-gain degenerate vector FOPA and a polarizer has recently been demonstrated [12]. The second quadrature could be simultaneously demultiplexed with another polarizer. In that work, bit-error rate (BER) measurements were reported, but with a noticeable error floor. Both quadratures were obtained at different times by aligning a polarization controller (PC) before the polarizer. As the quadratures were driven by inverse sequences and differentially detected, it is not clear how the quadratures were distinguished. Similarly, in [13], a high-gain FOPA followed by a polarization-beam splitter (PBS) was used to demonstrate quadrature demultiplexing. While constellation diagrams show that the scheme demultiplexes both quadratures, full BER measurements are necessary in order to fully understand its limitations and penalties. Therefore, there is a need for an experimental demonstration of the decomposition of both quadratures without major penalty using the degenerate vector FOPA. Furthermore, a complete theoretical description of this scheme still needs to be reported in order to better understand its limitations. In this paper, we analyze the use of the vector FOPA to perform quadrature decomposition. We analyze the proposed scheme theoretically and derive the dependence of the output wave on the input wave and fiber parameters. The theoretical description shows that the polarizations of the waves into which the signal is decomposed are only stable when the relative phase between the signal and the pumps is kept constant. We experimentally demonstrate QPSK to DP-BPSK conversion with very low penalty at a BER of 10−9. To overcome the phase fluctuations induced by ambient perturbations (e.g., temperature drifts and mechanical vibrations), we describe and implement a novel phase-locked loop (PLL) scheme.
Principle of operation
The scheme of a degenerate vector FOPA operating in phase-insensitive (PI) mode (no input idler, Id) is shown in Fig. 1. The input consists of three waves: two cross-polarized pumps, P1 and P2, and the signal, S, which is co-polarized with one of the pumps. Besides amplifying the signal, the vector amplifier creates a fourth wave, the idler, which is a conjugated copy of the signal at the same wavelength but on the orthogonal polarization, so that the output degenerate wave is

$$\mathbf{S}_{\mathrm{out}} = \mu S_{\mathrm{in}}\hat{x} + \nu S_{\mathrm{in}}^{*}\hat{y}, \qquad (1)$$

where μ and ν are the transfer coefficients (related by |μ|2 − |ν|2 = 1) and the input signal is defined as S_in = |S_in| e^{jφ_s}. The orthogonal vectors x and y express the pump polarizations, which do not need to be linearly polarized. Equation (1) indicates that the output degenerate wave can be expressed as the sum of two waves with two different polarizations which carry the I/Q information of the input signal. Therefore, we can obtain the I and the Q components of the input signal with a polarizer aligned to the polarizations defined by

$$\hat{e}_{I} \propto \mu\hat{x} + \nu\hat{y}, \qquad (2)$$
$$\hat{e}_{Q} \propto \mu\hat{x} - \nu\hat{y}. \qquad (3)$$

The output of this polarizer, S_pol, can be expressed as

$$S_{\mathrm{pol}} \propto |\mu|^{2} S_{\mathrm{in}} + |\nu|^{2} S_{\mathrm{in}}^{*} \qquad (4)$$

when selecting the I component, and

$$S_{\mathrm{pol}} \propto |\mu|^{2} S_{\mathrm{in}} - |\nu|^{2} S_{\mathrm{in}}^{*} \qquad (5)$$

when selecting the Q component. Thus, in order to demultiplex only one quadrature by means of a vector FOPA and a polarizer, the gain required of the parametric amplifier can be relatively low, which translates into low pump-power requirements. However, the loss of the total scheme (degenerate vector FOPA and polarizer) would be on the order of |ν| in this case. A similar concept, a vector FOPA followed by a polarizer, was also investigated in order to achieve phase squeezing with low pump powers [15]. The deviation of the polarization axes carrying the I and Q information from orthogonality approaches zero in the limit |μ|, |ν| >> 1, i.e., with high parametric gain. This means that with high gain the signal quadratures are decomposed into two orthogonal waves at the same wavelength as the incoming signal, and therefore we can split the two independent signals by means of a PBS. The same decomposition can be obtained if the signal and idler are equalized after the amplifier, although equalization would reduce the gain of the overall system.
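The decomposition described by Eqs. (1)-(5) can be checked numerically. The following sketch propagates the four QPSK constellation points through a high-gain degenerate vector amplifier according to Eq. (1) and projects the output onto the sum and difference polarization ports, which in the high-gain limit coincide with the axes of Eqs. (2)-(3); the gain value is an arbitrary choice for illustration, and the common phase of μ and ν is set to zero, i.e., a perfectly phase-locked pump/signal ensemble.

import numpy as np

G = 20.0                             # |mu|^2, arbitrary high-gain example
mu = np.sqrt(G)
nu = np.sqrt(G - 1.0)                # from |mu|^2 - |nu|^2 = 1

# One symbol per QPSK constellation point.
qpsk = np.exp(1j * np.array([np.pi/4, 3*np.pi/4, -3*np.pi/4, -np.pi/4]))

s_x = mu * qpsk                      # amplified signal on the x polarization
s_y = nu * np.conj(qpsk)             # conjugated idler on the y polarization

port_I = (s_x + s_y) / np.sqrt(2.0)  # sum port: ~ proportional to I (real)
port_Q = (s_x - s_y) / np.sqrt(2.0)  # difference port: ~ proportional to jQ

np.set_printoptions(precision=2, suppress=True)
print("I port:", port_I)             # nearly real +/- levels -> BPSK
print("Q port:", port_Q)             # nearly imaginary +/- levels -> BPSK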
Creating a wave and its orthogonally polarized conjugated copy was previously achieved for the purpose of cancelling impairments due to nonlinearities in fiber transmission [16]. In contrast to the scheme presented here, based on a degenerate vector FOPA, Liu et al. [16] created both conjugated waves by electro-optic modulation within the transmitter. The scheme proposed in this paper thus has the capability to create such a pair of conjugated waves all-optically.
Phase-locked loop for quadrature decomposition
In the previous section, we explained that the degenerate vector FOPA with high gain can demultiplex the signal quadratures onto two orthogonal polarizations. Thus, a QPSK signal can be split into two BPSK signals, or a 16-QAM signal into two 4-ASK signals, by a PBS after the degenerate vector PI amplifier. However, splitting the output signal by means of a PBS is not trivial, since the orientation of the polarization axes depends on the phases of the coefficients μ and ν, as seen in Eqs. (2) and (3); i.e., the polarization orientation depends on the pump phases relative to the signal carrier phase. Therefore, a drifting phase relation between the pump phases and the signal carrier phase means that the signal quadratures are decomposed onto drifting polarization axes. These polarization variations can be compensated for in digital signal processing (DSP) in a coherent receiver, but the two BPSK signals cannot be split into two different paths unless their polarizations are orthogonal and stable. Thus, a PLL is required in order to overcome the polarization instability and achieve quadrature demultiplexing.
The design of a PLL scheme for quadrature decomposition is not trivial, and different PLL solutions have been implemented in previous experimental demonstrations of quadrature decomposition [7-9,12,13]. A complex and hardware-demanding PLL scheme, which involved the use of an additional parametric amplifier (the so-called phase comparator [17]), was demonstrated to perform well at a BER as low as 10−7 [9]. A PLL scheme based on detecting the mean power of one of the BPSK signals has also been implemented [7,8,12]. It is not clear how this PLL scheme worked, since the mean power of a quadrature does not depend on the constellation rotation for QPSK decomposition. Furthermore, the penalty in the measured BER curves due to the PLL was not negligible, implying that it is essential to design a simple and efficient PLL scheme to perform quadrature decomposition.
In order to further understand the importance of the PLL, we assume that the input signal is a QPSK signal with constant amplitude and φ_s = φ_Drift + φ_Data, with φ_Data ∈ {±π/4, ±3π/4} defining the constellation symbol and φ_Drift being the rotation of the signal constellation with respect to the pumps, usually caused by thermal and mechanical effects (for simplicity we assume that the pump phases are stable and the signal constellation rotates with respect to them). Then, the wave after a polarizer aligned to obtain the I component is

$$S_{\mathrm{pol}} \propto |S_{\mathrm{in}}|\cos(\varphi_{\mathrm{Data}} + \varphi_{\mathrm{Drift}}). \qquad (6)$$

Equation (6) implies that maintaining φ_Drift = 0 rad is required to obtain a BPSK signal corresponding to the I component of the input QPSK signal. We also observe that the mean power of the optical signal after the polarizer does not depend on the constellation rotation, φ_Drift, because the data information is still present. However, the optical power of the signal after the polarizer is constant in the ideal case, φ_Drift = 0 rad. Due to symmetry, the optical power is also constant when φ_Drift = {±π/2, π} rad. Otherwise, the optical power of the signal after the polarizer exhibits variations determined by the data; the maximum power variation corresponds to φ_Drift = {±π/4, ±3π/4} rad. The fact that the instantaneous optical power depends on the constellation rotation φ_Drift provides the feedback for the proposed PLL, whose block diagram is shown in Fig. 2. As depicted, one of the signals into which the input signal is decomposed is detected by a fast photodetector (as fast as the data modulation) whose electrical output current is I_PLL ∝ cos(2φ_Data + 2φ_Drift), where we neglect the direct current (DC) component. Under ideal conditions, without ambient perturbations, this electrical signal is constant. In practice, instead of having a constant amplitude, the electrical signal has fast transitions on the order of the symbol rate, determined by the phase drift φ_Drift. We can use an envelope detector (square law) in order to extract the phase fluctuations due to environmental drifts. The envelope detection outputs I2_PLL ∝ cos(4φ_Data + 4φ_Drift), so the fast transitions due to the data modulation are removed. This signal is digitized by an analog-to-digital converter (ADC) and processed such that a feedback signal minimizing I2_PLL is created. The ideal BPSK signal is then obtained from the demultiplexing. The required ADC bandwidth is determined by the speed of the drifts, which are induced by kHz-speed ambient perturbations (e.g., temperature and vibrations).
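The behavior of the feedback signal can be illustrated with a short simulation. The sketch below evaluates the photocurrent and its square-law envelope for random QPSK data at a few drift values; since 4φ_Data is an odd multiple of π for all four symbols, the envelope term collapses to −cos(4φ_Drift), independent of the data. The drift values and sample count are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
phi_data = rng.choice([np.pi/4, 3*np.pi/4, -3*np.pi/4, -np.pi/4], size=1000)

for phi_drift in (0.0, np.pi/8, np.pi/4):
    i_pll = np.cos(2.0 * (phi_data + phi_drift))      # fast AC photocurrent
    envelope = np.cos(4.0 * (phi_data + phi_drift))   # after square-law det.
    print(f"phi_drift = {phi_drift:5.3f} rad | "
          f"std(i_pll) = {np.std(i_pll):.3f} | "
          f"envelope = {envelope.mean():+.3f} (std {np.std(envelope):.1e})")
# Driving the envelope toward -1 (equivalently, minimizing the photocurrent
# variance) locks phi_drift to a multiple of pi/2, i.e., to a pure quadrature.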
The proposed PLL circuit is applicable to any QPSK quadrature demultiplexer, regardless of the demultiplexing scheme. The concept can also be applied when decomposing a 16-QAM signal, despite the different power levels of the signals. Note that in the proposed scheme for quadrature decomposition based on a degenerate vector FOPA, the PBS is not required for the PLL implementation. The output degenerate wave can be tapped after the vector amplifier; one of the quadratures is then obtained by placing a PC followed by a polarizer. The signal after the polarizer can be used as the feedback signal, such that the DP-BPSK signal has stable polarization on both waves carrying the BPSK data.
Experimental setup
The experimental setup is shown in Fig. 3. An electro-optic comb driven by a tunable laser was created with about 40 lines and 25 GHz frequency separation. From this comb, a wavelength-selective switch (WSS) selected three lines at wavelengths of 1554.0 nm (P1), 1556.2 nm (S) and 1558.4 nm (P2). These three waves were divided into three different paths by a wavelength-division multiplexing (WDM) coupler. The paths for the two pumps, the waves at 1554.0 nm and 1558.4 nm, correspond to the upper and lower branches in Fig. 3. They were amplified by two high-power EDFAs after injection locking (EM4 distributed feedback lasers, injected power of about −5 dBm), which ensured a good optical signal-to-noise ratio (OSNR). Note that a high pump OSNR was required due to the low signal power launched into the FOPA, about −23 dBm, and the low pump power after the comb. A comb with lower insertion loss and a smaller number of lines would have avoided the use of injection locking. After the EDFAs, both pumps were filtered with 1 nm bandwidth optical band-pass filters (OBPFs) and their states of polarization (SOPs) were controlled using PCs. The signal propagated through the middle branch and was modulated in the transmitter, consisting of an IQ-modulator driven at 10 Gbaud. The I quadrature was modulated with a non-return-to-zero (NRZ) pseudo-random binary sequence (PRBS) of length 2^15 − 1 and the Q quadrature with the inverse sequence delayed by 23 ns. After the modulator, a PC was included in order to tune the signal polarization at the input of the vector amplifier. The three waves were combined before the PSA, which consisted of a cascade of four stages of strained highly nonlinear fiber (HNLF) separated by isolators, similar to the one previously reported [18]. The pump powers were 29 dBm each and the signal power was −23 dBm. Through monitor ports placed at both the HNLF input and the HNLF output, we tracked the input spectrum, the constellation diagram (with a coherent receiver) and the orthogonality between the pumps (with a polarimeter). After the HNLF, the combined signal/idler wave was filtered out and split into two paths by a 3 dB coupler. In the upper path, the degenerate wave was split by a PBS into either the signal and the idler, or the waves corresponding to quadrature demultiplexing, by controlling a PC before the PBS. One of the PBS outputs was connected to a preamplified balanced receiver, so that we could evaluate either the performance of the QPSK signal and the conjugated QPSK idler, or the performance of the BPSK signals, by aligning the polarization of the degenerate wave into the PBS. In the lower path, the combined signal/idler wave was first amplified and then passed through a polarizer to obtain the feedback signal for the PLL scheme, which was based on the PLL proposed in Section 3. In our experiments, we used a photodiode of 40 GHz bandwidth, an RF amplifier of 14 GHz bandwidth and an RF detector of 50 GHz bandwidth. After the RF detector, the ADC sampled the electrical signal at 320 kSamples/s (8 times the frequency of the dithering tone).
Experimental results
We first aligned the polarizations such that the pumps were cross-polarized at the input of the HNLF as well as at the output. Due to the fiber polarization-mode dispersion (PMD), the pumps could become partially co-polarized along the fiber if they were not launched with specific states of polarization. To make sure that the pumps were orthogonal, we monitored the pump degree of polarization (DOP) at the input and the output of the HNLF using a polarimeter. During the measurements, we kept the pump DOP minimal at both the input and the output of the HNLF. Note that in the ideal case the P1-P2 DOP is 0 only when the pumps have equal powers and are cross-polarized. The input signal polarization was then aligned such that the output signal/idler power variations were minimized when the input signal was unmodulated and the relative phase between the pumps and the signal was not stabilized by the PLL. Ideal PI behavior would correspond to a 0 dB swing, whereas PS operation, i.e., the input signal not being co-polarized with one of the pumps, translates into output power variations when the input signal is unmodulated. Note that the power variations in PS mode would also vanish if the input signal were modulated as a QPSK signal. The power variations of the combined signal/idler wave at the HNLF output (no modulation on the input signal) were measured to be within a 0.7 dB range, which confirmed that we were operating close to PI mode as desired. The spectra after a PBS, with the input polarization aligned to obtain P2 at one output port, are shown in Fig. 4(a). The pumps had an extinction ratio of about 30 dB (limited by the extinction ratio of the PBS). The signal polarization was almost parallel to the P2 polarization, with an extinction larger than 11 dB. One could expect a larger polarization extinction; however, previous studies show that degenerate vector FOPAs are strongly affected by the PMD of the HNLF [11].
Fig. 4. Spectra of the vector FOPA input and output decomposed on the polarizations given by the P2 and P1 polarizations. FWM, four-wave mixing; HOI, higher-order idler.
Once we had made sure we were operating in PI mode, we evaluated the output of the amplifier. The spectra of the signal and idler at the vector FOPA output are shown in Fig. 4(b). As can be seen, the signal and the idler are nearly equal in power, with a power imbalance of 0.8 dB. The net gain (defined as the combined output idler-signal power with respect to the input signal power) of the vector amplifier was about 12.9 dB (16.5 dB on-off gain). At the FOPA output, higher-order idlers (HOIs) were also present. These HOIs do not affect the FOPA performance with respect to quadrature demultiplexing. Furthermore, additional waves were created in the FOPA due to weak FWM between the pumps caused by PMD. We show the signal constellations at the input and output of the vector FOPA in Fig. 5. The constellations were measured using an intradyne coherent receiver. For the input constellation, we used the standard constant-modulus algorithm (CMA), frequency estimation based on the Fourier transform, and Viterbi-Viterbi phase estimation. For the output constellation, we adapted the CMA algorithm used for polarization-switched QPSK signals [19] in order to obtain the DP-BPSK signal. This CMA algorithm works for any polarization-switched m-level PSK format, and the DP-BPSK format is equivalent to polarization-switched BPSK up to a certain polarization rotation. We represent the output as a DP-BPSK signal, but it could also be represented as a DP-QPSK signal in which one QPSK signal is conjugated with respect to the other.
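For readers unfamiliar with the constant-modulus criterion mentioned above, a minimal scalar sketch is given below. It is not the receiver DSP used in the experiment: the modified CMA of [19] operates on the combined power of both polarizations, whereas this toy version only drives a single complex tap toward the unit-modulus condition on noisy BPSK samples.

import numpy as np

def cma_step(h, x, R=1.0, mu=1e-2):
    """One CMA tap update: drive |h*x| toward the target radius sqrt(R)."""
    y = h * x
    e = (abs(y) ** 2 - R) * y            # constant-modulus error term
    return h - mu * e * np.conj(x)

rng = np.random.default_rng(1)
c = 0.7 * np.exp(1j * 0.4)               # unknown complex channel gain
h = 1.0 + 0.0j
for _ in range(2000):
    s = rng.choice([-1.0, 1.0])          # BPSK symbol
    x = c * s + 0.01 * (rng.standard_normal() + 1j * rng.standard_normal())
    h = cma_step(h, x)
print(f"|h*c| after convergence: {abs(h * c):.3f}")   # ~1.0 (modulus only;
# the residual phase ambiguity is resolved by later phase estimation)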
Having established that the QPSK signal was converted into a DP-BPSK signal, we investigated the performance of the proposed scheme by measuring BER curves, shown in Fig. 6, after the PBS, which output either the signal and idler or the two BPSK signals depending on the setting of the PC before the PBS. As a reference, we also measured the BER curves of the back-to-back (without degenerate vector FOPA) BPSK and QPSK signals using the same receiver. When measuring the signal and the idler, i.e., the QPSK signal and its conjugated copy, we first connected one output of the PBS to the receiver and then connected the other output without modifying the PC before the PBS. This ensured that both measured signals were orthogonally polarized. The same procedure was carried out when measuring the BPSK signals into which the I and Q components were demultiplexed. We aligned the PC before the PBS such that we obtained the I component at one PBS output and the Q component at the other. Since the proposed PLL does not distinguish whether the I or the Q component is being decomposed, we additionally delayed one of the components by about 10 ps. Then, when measuring the BER at one of the PBS outputs, we were sure that it always corresponded to the same quadrature, and when changing to the other output, that it corresponded to the other quadrature; due to the symmetry between the quadratures we could not otherwise distinguish between them. This small delay between the quadratures did not bring any improvement in the BER curves or the PLL performance; its only function was to distinguish between the quadratures of the QPSK signal.
Regarding the QPSK signals, the sensitivity of the QPSK signal without amplification by the vector FOPA was -35.5 dBm at BER = 10^-9. The penalty introduced by the vector FOPA is about 1.1 dB for both the signal and the idler (the QPSK signal and its conjugated copy) at BER = 10^-9, and negligible at BER = 10^-3. We believe that the main penalty source is the weak PS behavior of the vector FOPA, since with a continuous-wave (CW) signal the output power of the degenerate wave varied by about 0.7 dB. The performance of the signal and the idler is comparable. We also aligned the PC before the PBS to demultiplex the QPSK signal into its I and Q components. The two BPSK signals also performed similarly, with a sensitivity at BER = 10^-9 of -38.2 dBm, which corresponds to a penalty of 1.9 dB with respect to a BPSK signal detected with the same receiver. Comparing the sensitivity penalties of the signal and idler measurements with those of the DP-BPSK measurements, we observe a 0.8 dB extra penalty in the BPSK signals. The main reason for this extra penalty is the imbalance between the signal and the idler powers; the penalty attributable to the PLL is therefore low.
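For context on how the quoted penalties are read off, the sketch below (our own illustration; function names are hypothetical) computes the ideal coherent BER curve, which is identical per bit for Gray-coded BPSK and QPSK, and extracts the horizontal offset between two measured BER curves at a target BER:

```python
import numpy as np
from scipy.special import erfc

def ber_coherent(ebn0_db):
    """Ideal coherent BER per bit vs Eb/N0 in dB.

    The same expression holds for Gray-coded BPSK and QPSK.
    """
    return 0.5 * erfc(np.sqrt(10 ** (np.asarray(ebn0_db) / 10)))

def penalty_db(power_dbm, ber, power_ref_dbm, ber_ref, target=1e-9):
    """Sensitivity penalty: horizontal offset between BER curves at `target`.

    Both curves are (received power in dBm, measured BER) with BER
    decreasing as power increases; interpolation is on log10(BER).
    """
    def power_at(p, b):
        p, b = np.asarray(p), np.asarray(b)
        return np.interp(np.log10(target), np.log10(b[::-1]), p[::-1])
    return power_at(power_dbm, ber) - power_at(power_ref_dbm, ber_ref)
```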
Discussion on quadrature decomposition
The experimental results demonstrate that a QPSK signal can be converted into a DP-BPSK signal using a degenerate FOPA operating in PI mode, and thus its quadratures can be demultiplexed by means of a PBS. Converting a QPSK signal into a DP-BPSK signal mitigates the penalty seen with coherent receivers having large local-oscillator phase noise [20]. In differential direct detection, such conversion can also improve the receiver sensitivity, since the penalty of differentially detecting a QPSK signal at BER = 10^-9 is about 2.3 dB larger than that of differentially detecting a BPSK signal [21]. Using the degenerate vector FOPA, a DP-BPSK signal can be obtained from a QPSK signal with high gain in the parametric amplifier. Achieving high gain without phase-modulated pumps in a FOPA is possible using strained HNLFs [18], although straining the fibers increases the PMD [22,23]. For example, in our case the fiber differential group delay (DGD) was about 0.4 ps, which is at least one order of magnitude higher than the DGD of a conventional HNLF of the same length (600 m). Correctly aligning the pump and signal polarizations in the presence of high PMD is essential to achieve the expected behaviour of the FOPA [11]. Indeed, our results show that the penalty for the QPSK signals after the vector FOPA is 1.1 dB, and the main source of this penalty is expected to be the not exclusively PI behaviour of the degenerate vector FOPA. Counter-phase-modulated pumps could be used to allow standard, unstrained HNLFs [24]; however, apart from increasing the complexity, counter-phase-modulated pumps still cause a certain penalty in practice. When using a low-gain FOPA, quadrature demultiplexing can also be performed by splitting the FOPA output degenerate wave into two paths with one polarizer in each path, each polarizer aligned such that one quadrature is obtained in each branch. If the purpose is QPSK-to-DP-BPSK format conversion, the DP-BPSK signal can then be obtained by combining both BPSK signals with a PBC. However, using a low-gain FOPA and polarizers introduces loss on the output BPSK signals, which would have to be compensated with an additional amplifier.
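The polarization algebra behind this demultiplexing can be checked with a toy numerical model (our own illustration; it ignores gain, noise, PMD, and the pump-phase reference): placing the signal on one polarization and its conjugate copy on the orthogonal one, a PBS at 45° yields the I and Q components as two BPSK signals:

```python
import numpy as np

rng = np.random.default_rng(0)
bits_i, bits_q = rng.integers(0, 2, size=(2, 1000)) * 2 - 1
s = (bits_i + 1j * bits_q) / np.sqrt(2)      # QPSK symbols on polarization x

# Degenerate vector FOPA in PI mode: idler = conjugate copy on orthogonal y
ex, ey = s, np.conj(s)                       # Jones components of the output

# PBS aligned at 45 deg: project onto (x + y)/sqrt(2) and (x - y)/sqrt(2)
port1 = (ex + ey) / np.sqrt(2)               # = sqrt(2)*Re(s): the I component
port2 = (ex - ey) / np.sqrt(2)               # = i*sqrt(2)*Im(s): the Q component

assert np.allclose(port1.imag, 0.0) and np.allclose(port2.real, 0.0)
```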
Discussion on PLL
Regarding the PLL performance, we believe that no major penalty is caused by the PLL. A PLL-caused penalty usually manifests as an error floor at low BERs (e.g., BER = 10^-9). We did observe an additional 0.8 dB penalty in the sensitivity of the BPSK signals compared to the penalty of the QPSK signal after the vector FOPA; however, as mentioned, the main reason for this penalty is the 1 dB power imbalance between the signal and idler. The proposed PLL can work in any quadrature demultiplexing scheme that requires stabilization. The DSP code in the PLL minimized the electrical input signal to the ADC, which means that we were obtaining a BPSK signal at the input of the photodetector used in the PLL circuit. Maximizing the electrical signal instead (φ_Drift = ±π/4 in the DSP) would translate into a three-level signal at the input of the PLL photodetector, according to Eq. (6). The amplitude of this signal would carry the information of the bits given by the XOR operation between the quadrature and in-phase bits used to generate the QPSK signal. The FOPA output degenerate wave would still be a DP-BPSK signal with stable polarization, but with a 90° rotation in the Poincaré sphere of the polarization axes of each BPSK signal. If the polarization of the degenerate wave into the PBS after the vector FOPA is then not realigned, the PBS outputs will not correspond to the BPSK signals but to the three-level signal; this in turn means that modulation switching can be performed by controlling the DSP implementation alone.
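The paper does not detail the DSP loop beyond the fact that it minimizes the detected envelope; the following is a generic dither-and-lock sketch of such a minimization (function and parameter names are our own, and the toy plant is purely illustrative):

```python
import numpy as np

def lock_step(phase, measure_power, gain=0.05, dither=0.01):
    """One iteration of a dither-based lock that minimizes detected power.

    measure_power(phase) models the photodetected envelope power as a
    function of the relative pump-signal phase (actuated in hardware by,
    e.g., a piezo-driven fiber stretcher).  A finite-difference gradient
    is estimated by dithering the phase, then descended.
    """
    grad = (measure_power(phase + dither) - measure_power(phase - dither)) / (2 * dither)
    return phase - gain * grad

# Toy plant: the detected power is minimal when the phase error is zero
power = lambda p: 1.0 - np.cos(2 * p)
phi = 0.6
for _ in range(200):
    phi = lock_step(phi, power)
print(round(phi, 4))   # converges toward 0
```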
Conclusion
We have demonstrated quadrature demultiplexing of a QPSK signal into two BPSK signals using a degenerate vector parametric amplifier operating in phase-insensitive mode. The vector amplifier created an idler (a conjugated copy of the signal) at the same frequency as the signal but with orthogonal polarization. The combination of the output signal and idler enables quadrature demultiplexing when the signal and the idler are equalized in power. The signal constellations at the input and the output of the amplifier verified the quadrature decomposition. A novel phase-locked loop circuit based on an envelope detector, proposed here, allowed us to split the two BPSK signals into different paths by means of a PBS and to detect both of them with low penalty (1.9 dB with respect to a back-to-back BPSK signal). The performance was mainly limited by the PMD in the HNLF. Overcoming this limitation while maintaining the parametric gain would enable quadrature decomposition with an even lower sensitivity penalty.
Fig. 5. Constellation diagrams of the degenerate wave at the vector FOPA input (QPSK signal) and output (DP-BPSK signal). The input signal is a single-polarized QPSK signal co-polarized with P2 and cross-polarized with P1. The polarization of each output BPSK signal forms a 45° angle (in Jones space) with each pump polarization. Note that the pump polarizations are chosen to be 'X' and 'Y' in order to maintain the definitions used in Section 2. | 7,498.4 | 2014-12-01T00:00:00.000 | [
"Physics"
] |
Controllable odd-frequency Cooper pairs in multi-superconductor Josephson junctions
We consider Josephson junctions formed by multiple superconductors with distinct phases and explore the formation of nonlocal, or inter-superconductor, pair correlations. We find that the multiple-superconductor nature offers an additional degree of freedom that broadens the classification of pair symmetries, enabling nonlocal even- and odd-frequency pairings that can be highly controlled by the superconducting phases and the onsite energies of the superconductors. In particular, when the phase difference between two superconductors is $\pi$, their associated nonlocal odd-frequency pairing is the only type of inter-superconductor pair correlation. Finally, we show that these nonlocal odd-frequency Cooper pairs dominate the nonlocal conductance via crossed Andreev reflections, which constitutes direct evidence of odd-frequency pairing.
I. INTRODUCTION
Superconductivity is caused by electrons binding together into Cooper pairs below a critical temperature and has attracted great interest due to its promise for quantum technologies [1]. The applications of superconductors are thus intimately linked to the Cooper pairs, especially to the symmetries of their wavefunction, or pair amplitude. Due to the fermionic nature of electrons, the pair amplitude is antisymmetric under the exchange of all the quantum numbers describing the paired electron states plus the exchange of their relative time coordinates. Of particular interest is that this antisymmetry enables the formation of odd-frequency Cooper pairs, where the pair amplitude is odd in the relative time, or frequency ω, of the paired electrons [2-7]. As a result, odd-ω Cooper pairs characterize a unique type of superconducting pairing that is intrinsically dynamic [8-13].
In this work we demonstrate the generation, control, and direct detection of spin-singlet odd-ω Cooper pairs in Josephson junctions (JJs) formed by multiple superconductors [Fig. 1]. In particular, we exploit the degree of freedom offered by the multi-superconductor nature of the setup and find that inter-superconductor even- and odd-ω Cooper pairs naturally arise and can be controlled by the superconducting phases and onsite energies of the superconductors. Interestingly, for a JJ with two superconductors, the even-ω amplitude vanishes either when the superconducting phase difference is π or at zero onsite energy, leaving only odd-ω pairing. This behaviour persists when the number of superconductors increases, but only at weak couplings between superconductors. Furthermore, we discover that crossed Andreev reflections (CARs) directly probe odd-ω Cooper pairs and can be controlled by the superconducting phases. Our work thus puts forward multi-superconductor JJs as a powerful and entirely different route to odd-ω Cooper pairs. The remainder of this article is organized as follows. In Sec. II, we introduce the multi-superconductor JJs studied in this work, while in Sec. III we show how to obtain the emerging pair amplitudes. In Sec. IV we present the obtained even- and odd-ω pair amplitudes and discuss their tunability by the superconducting phases. In Sec. V we demonstrate how the nonlocal odd-ω pair amplitude is detected via CAR processes. Finally, in Sec. VI we present our conclusions.
II. MULTI-SUPERCONDUCTOR JJS
We consider JJs as shown in Fig. 1, where n conventional spin-singlet s-wave superconductors are coupled directly. For the sake of simplicity, we model these JJs by only considering the contact regions, with a Hamiltonian given by
$H = \sum_{j=1}^{n}\sum_{\sigma} \epsilon_j c^{\dagger}_{j\sigma} c_{j\sigma} + \sum_{j=1}^{n} \left( \Delta e^{i\phi_j} c^{\dagger}_{j\uparrow} c^{\dagger}_{j\downarrow} + \mathrm{h.c.} \right) + H_T, \quad (1)$
where the first two terms describe the superconductor $S_j$: $c_{j\sigma}$ ($c^{\dagger}_{j\sigma}$) annihilates (creates) an electronic state with spin σ at site j with onsite energy $\epsilon_j$, phase $\phi_j$, and induced pair potential ∆ from a parent spin-singlet s-wave superconductor with order parameter $\Delta_{sc}$. Moreover, $H_T = t_0 \sum_{j=1}^{n}\sum_{\sigma} c^{\dagger}_{j\sigma} c_{j+1\sigma} + \mathrm{h.c.}$ represents the coupling between superconductors with equal strength $t_0$, with $c_{n+1\sigma} = c_{1\sigma}$. Away from the bulk gap edges, ∆ is determined as $\Delta = \tau^2/\Delta_{sc}$ [110-113], where τ is the coupling between $S_j$ and the bulk superconductor. Below we choose τ = 0.7 and ∆ = 0.5 such that $\Delta_{sc} = 1$ is larger than the induced gap, and we fix $\Delta_{sc}$ as our energy unit. We also drop the spin index for simplicity but keep in mind that the superconductors in Eq. (1) are spin-singlet. Despite its simplicity, our model captures the main effects we aim to explore in this work, namely, the multi-superconductor nature and the distinct superconducting phases. Systems involving multiple JJs have been studied before, but in the context of topological phases [114-125]. Here, we expand the playground of these multi-superconductor JJs toward realizing controllable odd-ω Cooper pairs.
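As a concrete illustration of Eq. (1) in the Nambu basis used below, here is a minimal Python sketch (our own construction, not from the paper; spin is dropped as in the text, and the default parameter values only loosely follow the choices above):

```python
import numpy as np

def h_njj(eps, phi, delta=0.5, t0=0.1):
    """Nambu (BdG) Hamiltonian for n directly coupled single-site superconductors.

    Basis: (c_1, c_1^dag, c_2, c_2^dag, ...), spin dropped as in Eq. (1).
    eps, phi: length-n sequences of onsite energies and superconducting phases.
    Superconductors are coupled in a chain; the ring is closed only for n > 2
    so that n = 2 has a single bond.
    """
    n = len(eps)
    h = np.zeros((2 * n, 2 * n), dtype=complex)
    for j in range(n):
        h[2*j, 2*j], h[2*j+1, 2*j+1] = eps[j], -eps[j]
        h[2*j, 2*j+1] = delta * np.exp(1j * phi[j])     # pair potential
        h[2*j+1, 2*j] = delta * np.exp(-1j * phi[j])
    bonds = [(j, j + 1) for j in range(n - 1)] + ([(n - 1, 0)] if n > 2 else [])
    for j, k in bonds:
        h[2*j, 2*k] = h[2*k, 2*j] = t0                  # electron hopping
        h[2*j+1, 2*k+1] = h[2*k+1, 2*j+1] = -t0         # hole counterpart
    return h
```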
III. SUPERCONDUCTING PAIR AMPLITUDES
We are interested in inter-superconductor pair correlations, which we also refer to as nonlocal pair correlations since they reside between superconductors. Pair correlations are described by the anomalous Green's function $F_{nm}(1,1') = \langle T\, c_n(1) c_m(1') \rangle$, where T is the time-ordering operator and $c_n$ annihilates an electronic state with quantum numbers n at time and position $1 = (x_1, t_1)$ [126,127]. The fermionic nature of electrons dictates the antisymmetry condition $F_{nm}(1,1') = -F_{mn}(1',1)$, which enables the classification of superconducting pair correlations based on all the quantum numbers, including time and space coordinates [8-13]. Thus, this condition enables even- and odd-ω pair correlations when $F_{nm}(\omega) = \pm F_{nm}(-\omega)$, with $F_{nm}(\omega)$ being the Fourier transform of $F_{nm}(1,1')$ into the frequency domain. In the case of multi-superconductor junctions, the multiple-superconductor nature introduces an additional quantum number n, the superconductor index, which broadens the classification of pair symmetries in a similar way as the band index in multiband superconductors [12]. In Table I we present all the allowed pair symmetry classes that respect the antisymmetry condition in JJs with spin-singlet and spin-triplet superconductors; four classes, the bottom four in Table I, correspond to odd-ω pair correlations, see the Supplementary Material [128] for details. It is evident that the superconductor index (sup. index) plays a crucial role in broadening the allowed pair symmetries.
In the JJs with spin-singlet s-wave superconductors considered here, the symmetric and antisymmetric combinations $F^{\pm}_{nm} = (F_{nm} \pm F_{mn})/2$ become even- and odd-ω pair symmetry classes, respectively [128]. These two pair symmetry classes correspond to the ESEE and OSOE classes indicated in orange in Table I. In practice, the pair correlations $F_{nm}$ are obtained from the electron-hole component of the Nambu Green's function, whose equation of motion in frequency space reads $[\omega - H_{nJJ}]G(\omega) = I$, where $H_{nJJ}$ is the Nambu Hamiltonian of the JJ with n superconductors described by Eq. (1).
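A minimal numerical version of this procedure, building on the h_njj sketch above (again our own illustration), inverts $[\omega - H]$ and reads off the anomalous electron-hole entries to form the symmetric and antisymmetric combinations:

```python
import numpy as np

def pair_amplitudes_12(h, omega, eta=1e-3):
    """Symmetric/antisymmetric nonlocal pair amplitudes between sites 1 and 2.

    Solves [(omega + i*eta) I - H] G = I and extracts the anomalous entries
    F_12 = G[e1, h2] and F_21 = G[e2, h1] in the (c, c^dag) Nambu basis.
    Returns (F+_12, F-_12) = ((F_12 + F_21)/2, (F_12 - F_21)/2).
    """
    g = np.linalg.inv((omega + 1j * eta) * np.eye(h.shape[0]) - h)
    f12, f21 = g[0, 3], g[2, 1]
    return (f12 + f21) / 2, (f12 - f21) / 2
```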
IV. INTER-SUPERCONDUCTOR PAIR AMPLITUDES IN JJS
To begin, we focus on the pair correlations in a JJ with two superconductors coupled directly. This system is modelled by $H_{2JJ}$, i.e., n = 2 in Eq. (1). As described in the previous section, the pair correlations are obtained from the electron-hole components of the Green's function associated to the Nambu Hamiltonian in the basis $\Psi = (c_1, c^{\dagger}_1, c_2, c^{\dagger}_2)^T$. Without loss of generality, we assume a phase difference $\phi_2 - \phi_1 = \phi$. Then, considering $\epsilon_{1,2} \equiv \epsilon$, the symmetric and antisymmetric pair amplitudes in the superconductor index are given by Eqs. (2) [128], where ω represents complex frequencies unless otherwise stated. First, both pair amplitudes in Eqs. (2) have the same denominator, which is an even function of ω and reveals the formation of Andreev bound states (ABSs) when $P + 2\Delta^2 t_0^2 \cos(\phi) = 0$. This is seen in the bright regions of Fig. 2, where we plot the absolute values of the symmetric and antisymmetric amplitudes as a function of the phase difference φ. Second, the numerators of $F^{+}_{12}$ and $F^{-}_{12}$ have different functional dependences, oscillating with the phase difference φ in an alternating fashion as cos(φ/2) and sin(φ/2), respectively [129]. While the numerator of the symmetric term is an even function of ω with a linear dependence on ε, the antisymmetric component is interestingly linear in ω and, therefore, an odd function of frequency. The symmetric even-ω part vanishes either when ε = 0 or φ = π, while the antisymmetric odd-ω pair amplitude remains remarkably finite at these points and even acquires large values. These surprising features of the nonlocal pair amplitudes can be seen by comparing the panels of Fig. 2(a,b,d), where the vanishing values of the even-ω part are indicated by white arrows in Fig. 2(a). The vanishing of the even-ω pairing can be better seen in Fig. 2(c), where we plot the ratio $R^{\pm}$ between the two pair amplitudes; since $F^{-}_{12}$ is an odd function of ω and thus vanishes at ω = 0, $R^{\pm}$ has a clear interpretation only for ω ≠ 0. In sum, JJs with two superconductors exhibit highly tunable odd-ω pairing which, at ε = 0 or φ = π, is the only type of inter-superconductor pair correlation.
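These statements can be checked numerically with the two sketches above; for instance, at ε = 0 and φ = π the symmetric amplitude is negligible while the antisymmetric one is finite and flips sign with ω (a quick consistency check, not the paper's analytics):

```python
import numpy as np

h = h_njj(eps=[0.0, 0.0], phi=[0.0, np.pi], delta=0.5, t0=0.1)
for w in (0.2, -0.2):
    fp, fm = pair_amplitudes_12(h, w)
    print(f"w={w:+.1f}  |F+|={abs(fp):.2e}  Re F-={fm.real:+.3f}")
# |F+| stays at numerical zero; F-(-w) ~ -F-(w), i.e., odd in frequency
```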
For JJs with more superconductors, n > 2, the expressions for the nonlocal pair amplitudes become lengthy, but they still capture the formation of ABSs in the denominator, with numerators that strongly depend on all the $\phi_i$ [128]. We find that the symmetric and antisymmetric pair amplitudes between nearest-neighbour superconductors develop even- and odd-ω symmetries, respectively. While the odd-ω part is proportional to $(e^{i\phi_{j+1}} - e^{i\phi_j})$, the even-ω term is proportional to $(e^{i\phi_{j+1}} + e^{i\phi_j}) + P(\phi_{1,\dots,n})$, where P is a function of all the system parameters [128]. Thus, the odd-ω term depends on the sine of the phase difference of the involved superconductors, as in the JJs with two superconductors discussed above. The even-ω part likewise has a cosine contribution, as for JJs with two superconductors, but also an additional contribution due to the rest of the system. Nevertheless, both pair amplitudes exhibit a high degree of tunability by means of the superconducting phases. To visualize this fact, in Fig. 3 we plot the even-ω and odd-ω pair amplitudes for a JJ with three superconductors as a function of $\phi_2$ and $\phi_3$ at $\phi_1 = 0$. The main feature of this figure is that the behaviour of both pair amplitudes is highly controllable by the superconducting phases. Interestingly, there are regions where the even-ω component acquires vanishingly small values while the odd-ω component remains sizeable, see the dark and bright regions in Fig. 3(a,c) and Fig. 3(b,d), respectively.
The vanishing and finite values of the even- and odd-ω pair amplitudes can be further visualized in a simpler regime. In particular, for very weak couplings $t_0$ between superconductors and for superconductors with the same onsite energy ε, the nearest-neighbour nonlocal pair amplitudes up to linear order in $t_0$ are given by Eqs. (3) [128], where $j = 1, \dots, n$ and $\phi_{n+1} = \phi_1$. Strikingly, only the pair amplitudes between nearest-neighbour superconductors remain finite at leading order in $t_0$ [130].
The symmetric and antisymmetric amplitudes in Eqs. (3) exhibit even- and odd-ω spin-singlet symmetries, respectively. Interestingly, both pair amplitudes acquire the same form as their counterparts in JJs with two superconductors, see Eqs. (2). In this regime, the even-ω pairing thus vanishes either at ε = 0 or when $e^{i\phi_{j+1}} + e^{i\phi_j} = 0$, which requires a phase difference $\phi_{j+1} - \phi_j = \pi$ between superconductors. The odd-ω component, however, always remains finite in this regime, exhibiting high tunability by the $\phi_j$. We have verified that this behaviour persists in JJs with finite-length superconductors and also in JJs with superconductors coupled via a normal region [128]. Hence, multi-superconductor JJs represent a rich platform for the generation and control of nonlocal odd-ω pair correlations that do not require magnetic elements. Before closing this part, we highlight that the odd-ω pair amplitudes presented here are a proximity-induced superconducting effect bound to the device, exhibiting wide controllability by the superconducting phases and having an important impact on physical observables, as we discuss next.
V. CAR DETECTION OF ODD-ω PAIRING
Having established the emergence of inter-superconductor odd-ω pairs in multi-superconductor JJs, we now inspect a direct detection protocol. Due to the nonlocal character of the pair correlations found here, it is natural to explore nonlocal transport of Cooper pairs [28,36,75,131]. Without loss of generality, we focus on JJs formed by two superconductors and aim at detecting the odd-ω pairs obtained in Eqs. (2). Hence, we attach two normal leads at the left and the right of the system, as in Fig. 1, and include them in our model via retarded self-energies $\Sigma^r_{L(R)}$, such that the system's retarded Green's function is $G^r(\omega) = [\omega - H_{2JJ} - \Sigma^r_L - \Sigma^r_R]^{-1}$. Here, $H_{2JJ}$ describes the JJ given by Eq. (1) with n = 2, and ω now represents real frequencies. In the wide-band limit, $\Sigma^r_j = -i\Gamma_j/2$, where $\Gamma_j = \pi|\tau|^2\rho_j$ characterizes the coupling to lead j with surface density of states $\rho_j$, and τ is the hopping between the leads and the superconductors.
At weak $\Gamma_j$, the JJ can be probed by nonlocal transport. In particular, the transport of Cooper pairs is characterized by nonlocal Andreev reflection, or crossed Andreev reflection ($T_{CAR}$), which competes with electron tunneling ($T_{ET}$) to determine the nonlocal conductance $\sim (T_{CAR} - T_{ET})$ [128]. These CAR and ET processes involve electron-hole (hole-electron) and electron-electron (hole-hole) transfers, $T_{CAR} = T_{eh} + T_{he}$ and $T_{ET} = T_{ee} + T_{hh}$, which can be obtained from $G^r$ [75], where $g^r_{12}$ ($\bar{g}^r_{12}$) and $F^r_{12}$ ($\bar{F}^r_{12}$) are the normal and anomalous (or pair amplitude) components of the inter-superconductor retarded Green's function, obtained from $G^r$ [128]. Interestingly, the CAR processes $T_{eh(he)}$ are directly determined by the squared modulus of the inter-superconductor pair amplitudes $F^r_{12}$. We note that, while the pair amplitudes $F^r_{12}$ and $\bar{F}^r_{12}$ are not directly measurable, their moduli respectively determine the finite values of the nonlocal probabilities $T_{eh}$ and $T_{he}$, thus facilitating the detection of these emergent pairings.
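In the wide-band limit these probabilities take the standard scattering forms $T_{eh} = \Gamma_L\Gamma_R |G^r_{e1,h2}|^2$ and $T_{ee} = \Gamma_L\Gamma_R |G^r_{e1,e2}|^2$; the exact prefactors of Ref. [75] are not reproduced in the text, so the sketch below adopts these common expressions as an assumption:

```python
import numpy as np

def car_et(h, omega, gamma=0.01):
    """CAR and ET probabilities for the two-site JJ with wide-band leads.

    Leads couple to the electron and hole channels of sites 1 and 2 via
    Sigma^r = -i*gamma/2 on every diagonal entry (symmetric couplings),
    i.e. G^r is evaluated at omega + i*gamma/2 as in the main text.
    Uses T_eh = gamma^2 * |G^r_{e1,h2}|^2 etc. as an assumed form.
    """
    sigma = -0.5j * gamma * np.eye(h.shape[0])
    gr = np.linalg.inv(omega * np.eye(h.shape[0]) - h - sigma)
    t_eh = gamma**2 * abs(gr[0, 3])**2   # electron(1) -> hole(2): CAR
    t_ee = gamma**2 * abs(gr[0, 2])**2   # electron(1) -> electron(2): ET
    return t_eh, t_ee
```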
Under general circumstances, $F^r_{12}$ includes both symmetric even-ω and antisymmetric odd-ω terms; however, the symmetric part vanishes at ε = 0 for any φ, see Eqs. (2). Thus, the CAR amplitudes have the potential to directly probe the antisymmetric inter-superconductor odd-ω pairing. However, as shown above, the CAR processes $T_{eh(he)}$ are always accompanied by electron tunnelings $T_{ee(hh)}$. Therefore, even if $T_{eh(he)}$ directly probes odd-ω pairs, their total effect on the nonlocal conductance can be masked if $T_{ee(hh)}$ is larger. For this reason, to directly detect inter-superconductor odd-ω pairing, a regime where $T_{ee(hh)} \ll T_{eh(he)}$ is needed. Even though this regime might sound challenging to find, we now demonstrate that it is in fact accessible. To show this, we consider $\phi_1 = -\phi/2$, $\phi_2 = \phi/2$ and assume symmetric couplings to the leads, $\Gamma_j = \Gamma$. Then, for ε = 0, $g^r_{12}$ and the antisymmetric pair amplitude $F^{r,-}_{12}$ are given in [128], with $\bar{g}^r_{12}(\phi) = -g^r_{12}(-\phi)$ and $\bar{F}^r_{12}(\phi) = F^r_{12}(\phi)$. We note that $F^r_{12}$ can be obtained from Eqs. (2) by replacing $\omega \to \omega + i0^+ + i\Gamma/2$. Now, we can exploit the fact that the energy of the ABSs at ε = 0 and φ = π is given by $|\omega_\pm| = |t_0 - \Delta|$, which clearly vanishes for $t_0 = \Delta$. In this regime we have $|g^r_{12}|/|F^{r,-}_{12}| \approx \omega/(2\Delta) \ll 1$ at low frequencies. Thus, it is possible to reach a regime where the antisymmetric pair amplitude is larger than the normal contribution. Hence, in this regime $T_{eh(he)}$ is expected to be larger than $T_{ee(hh)}$ and to constitute the main contribution to the nonlocal conductance, whose finite value then provides direct evidence of inter-superconductor odd-ω pairing. To visualize the above argument, in Fig. 4 we plot the ET and CAR processes as a function of φ and ω at ε = 0. The most important feature is that at high frequencies the ET processes $T_{ee(hh)}$ acquire large values near φ = 0, 2π but are vanishingly small at low ω near φ = π, in line with the discussion presented above. Interestingly, the CAR processes $T_{eh(he)}$ acquire large values around φ = π at low frequencies but smaller values at higher frequencies. The finite values of these CAR processes directly probe the formation of induced odd-ω pairs. Of particular relevance here are the values around φ = π and low ω because, at such points, CAR dominates over ET and thus determines the nonlocal conductance. We have verified that this behaviour also holds for JJs with more than two superconductors, albeit in the weak tunneling regime, thus supporting the direct detection of proximity-induced inter-superconductor odd-ω pairing in a nonlocal transport measurement. Hence, despite being an induced effect, the nonlocal odd-ω pairs determine CAR processes upon simply tuning the superconducting phases in multi-superconductor JJs.
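Combining the sketches above, the claimed regime can be reproduced numerically: at ε = 0, $t_0 = \Delta$, and φ = π, CAR should dominate ET at low frequency (again an illustration under our assumed transmission formulas):

```python
import numpy as np

h = h_njj(eps=[0.0, 0.0], phi=[-np.pi / 2, np.pi / 2], delta=0.5, t0=0.5)
for w in (0.02, 0.05, 0.1):
    t_eh, t_ee = car_et(h, w, gamma=0.01)
    print(f"w={w:.2f}  T_eh={t_eh:.3e}  T_ee={t_ee:.3e}  ratio={t_eh/t_ee:.0f}")
# At low w the CAR probability far exceeds ET, reflecting |g12| ~ w/(2*Delta)*|F12|
```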
VI. CONCLUSIONS
In conclusion, we have studied multi-superconductor Josephson junctions and found that inter-superconductor even- and odd-ω Cooper pairs can be generated, controlled, and detected by virtue of the superconducting phases. We found that even-ω pairing vanishes when the phase difference between two superconductors is π, leaving odd-ω pairing as the only type of inter-superconductor pair correlation. While this finding is exact for Josephson junctions with two superconductors, it remains valid only at weak couplings between superconductors in junctions with more than two superconductors. Due to the vanishing of even-ω pairing, only odd-ω pairs contribute to CAR processes, whose finite values directly probe the presence of odd-ω Cooper pairs. Given the advances in the fabrication of superconducting heterostructures, including a promising tunability of CAR processes [133], we expect that the physics discussed here could soon be realized in multi-terminal Josephson junctions [117,122,134-136] and in superconducting quantum dots [137-146]. Of particular relevance are Refs. [117,122,134-136] because they have already demonstrated the fabrication of multi-superconductor Josephson junctions and the control of several superconducting phases. In this regard, our work offers an entirely unexplored route for the generation, control, and detection of odd-ω Cooper pairs that might even be possible to explore using existing experimental techniques.
Before carrying out any calculation in a specific system, here we present all the allowed superconducting pair symmetries in multi-superconductor Josephson junctions with quantum numbers that involve frequency (ω), spins (σ, σ′), superconductor indices (n, m), and spatial coordinates (x, x′). For this purpose, we recall that the antisymmetry condition dictates that the pair amplitudes $F^{\sigma,\sigma'}_{n,m}(\omega; x, x')$ must be antisymmetric under the total exchange of quantum numbers, namely,
$F^{\sigma,\sigma'}_{n,m}(\omega; x, x') = -F^{\sigma',\sigma}_{m,n}(-\omega; x', x), \quad (S1)$
where ω represents complex frequencies unless otherwise stated. Thus, the allowed pair symmetries must fulfil Eq. (S1) under the total exchange of quantum numbers. However, $F^{\sigma,\sigma'}_{n,m}(\omega; x, x')$ can be even or odd under the individual exchange of quantum numbers as long as Eq. (S1) holds. Thus, for instance, the pair amplitude can be even (odd) under the exchange of frequency ω → −ω and pick up a plus (minus) sign, which translates as
$F^{\sigma,\sigma'}_{n,m}(\omega; x, x') = \pm F^{\sigma,\sigma'}_{n,m}(-\omega; x, x'). \quad (S2)$
Moreover, as stated at the beginning of this part, $F^{\sigma,\sigma'}_{n,m}(\omega; x, x')$ can be even (odd) under the individual exchange of the rest of the quantum numbers. Thus, $F^{\sigma,\sigma'}_{n,m}(\omega; x, x')$ can be even (odd) under the exchange of spins, superconducting indices, or spatial coordinates, respectively, when
$F^{\sigma,\sigma'}_{n,m} = \pm F^{\sigma',\sigma}_{n,m}, \quad F^{\sigma,\sigma'}_{n,m} = \pm F^{\sigma,\sigma'}_{m,n}, \quad F^{\sigma,\sigma'}_{n,m}(\omega; x, x') = \pm F^{\sigma,\sigma'}_{n,m}(\omega; x', x). \quad (S3)$
Therefore, the allowed pair amplitudes can be obtained by performing all the possible combinations of the individual exchanges of quantum numbers (Eqs. (S2) and (S3)) that fulfil the antisymmetry condition Eq. (S1). By doing this, we find eight allowed pair symmetry classes that respect the antisymmetry condition, of which four correspond to odd-frequency correlations, see Table S1. Of particular relevance is that the superconducting index (sup. index) plays a crucial role in broadening the allowed pair symmetries. Table S1 is presented as Table 1 in the section on "Inter-superconductor pair amplitudes in JJs" of the main text.
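The counting can be reproduced mechanically: each exchange contributes a sign, and Eq. (S1) requires the product of the four signs to be −1. A tiny enumeration (ours, purely illustrative) recovers the eight classes, four of them odd in frequency:

```python
from itertools import product

# Signs under exchange of: frequency, spin (singlet = -1, triplet = +1),
# superconductor index, and spatial parity.  Antisymmetry (S1) demands
# that the product of all four signs equals -1.
labels = ("frequency", "spin", "sup. index", "parity")
classes = [s for s in product((+1, -1), repeat=4)
           if s[0] * s[1] * s[2] * s[3] == -1]
assert len(classes) == 8 and sum(1 for s in classes if s[0] == -1) == 4
for s in classes:
    print(", ".join(f"{'even' if v > 0 else 'odd'} {l}" for v, l in zip(s, labels)))
```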
In the absence of any spin-mixing field, the spin symmetry of the emergent pair correlations is the same as that of the parent superconductor. Thus, the pair symmetry classes allowed in our study, where no spin-mixing field is present, are the ESEE and OSOE classes: they correspond to pair amplitudes that are even-frequency (odd-frequency), spin-singlet, even (odd) in superconductor indices, and even in parity. By including a spin-mixing field, it is possible to obtain odd-frequency spin-triplet pair amplitudes, which correspond to the OTEE and OTOO pair symmetry classes in Table S1; these could be used as sources of spins highly controllable by the superconducting phases and are thus promising for superconducting spintronics. Since there are no spin-mixing fields in the results presented in the main text, the pair symmetries therein exhibit the spin symmetry of the parent superconductor, namely, spin-singlet. This is discussed specifically in the section on "Inter-superconductor pair amplitudes in JJs" of the main text.
GREEN'S FUNCTIONS OF JOSEPHSON JUNCTIONS WITH SUPERCONDUCTORS COUPLED DIRECTLY
To obtain the Green's functions of the Josephson junctions studied in the main text, we first write their model Hamiltonian in Nambu (electron-hole) space and then obtain them from the equation of motion $[\omega - H_{nJJ}]G(\omega) = I$, where $H_{nJJ}$ is the Hamiltonian of the phase-biased Josephson junction and $G(\omega)$ its associated Green's function. A Josephson junction with n superconductors coupled directly is modeled by Eq. (1) in the main text. In Nambu space, the Hamiltonian of each superconductor $S_j$ is given by
$H_{S_j} = \begin{pmatrix} \epsilon_j & \Delta e^{i\varphi_j} \\ \Delta e^{-i\varphi_j} & -\epsilon_j \end{pmatrix},$
where $\epsilon_j$ represents the onsite energy of the superconductor, ∆ is the spin-singlet s-wave pair potential, and $\varphi_j$ its superconducting phase. Similarly, the coupling between superconductors is described by
$V_{j,j+1} = \begin{pmatrix} t_{j,j+1} & 0 \\ 0 & -t_{j,j+1} \end{pmatrix},$
where $t_{j,j+1}$ represents the coupling strength between the nearest superconductors $S_j$ and $S_{j+1}$. For simplicity, we consider all such couplings to be the same, $t_{j,j+1} = t_0$, and thus drop the indices in the coupling matrix ($V_{j,j+1} = V$). Below, we discuss Josephson junctions with distinct numbers of superconductors and obtain expressions for their associated Green's functions.
Josephson junctions with two superconductors
The Hamiltonian of a Josephson junction between two superconductors is
$H_{2JJ} = \begin{pmatrix} H_{S_1} & V \\ V^{\dagger} & H_{S_2} \end{pmatrix}.$
The eigenvalues of this Hamiltonian, for $\epsilon_{1,2} = \epsilon$, reduce to
$E^2_{\pm} = \epsilon^2 + \Delta^2 + t_0^2 \pm 2t_0\sqrt{\epsilon^2 + \Delta^2\sin^2(\varphi/2)},$
so that at φ = π the spectral gap is $|E| = |\sqrt{\Delta^2 + \epsilon^2} - t_0|$. Therefore, the gap closes at φ = π if $t_0 = \sqrt{\Delta^2 + \epsilon^2}$; for ε = 0, the gap closes when $t_0 = \Delta$. The associated Green's function has the following structure:
$G_{2JJ} = \begin{pmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{pmatrix}.$
The entries of $G_{2JJ}$ correspond to the Green's functions inside the superconductors ($G_{11(22)}$) or between the superconductors ($G_{12(21)}$). We thus term these components intra- and inter-superconductor Green's functions, respectively. The Nambu structure of each $G_{ij}$ is
$G_{ij} = \begin{pmatrix} g_{ij} & F_{ij} \\ \bar{F}_{ij} & \bar{g}_{ij} \end{pmatrix},$
where the diagonal elements allow us to obtain the density of states, while the off-diagonal ones give the pair amplitudes; explicit expressions for both follow from the matrix inversion. For Josephson junctions with finite-length superconductors, we extract the pair amplitudes between the last site of the left superconductor and the first site of the right superconductor and label them just by L and R indices, denoting that they represent pair correlations between the left (L) and right (R) superconductors. These nonlocal pair amplitudes $F_{LR}$ are then decomposed into their symmetric and antisymmetric components under the exchange of L and R, denoted $F^{+}_{LR}$ and $F^{-}_{LR}$, respectively. Moreover, we note that since there are no spin-mixing fields, these nonlocal pair amplitudes have spin-singlet symmetry, which implies that $F^{\pm}_{LR}$ corresponds to even-ω (odd-ω), spin-singlet pairing, even (odd) in superconductor indices. Interestingly, these nonlocal pair amplitudes correspond to the even- and odd-ω nonlocal pair amplitudes discussed in the main part of our manuscript. In Fig. S3(a-d) and Fig. S4(a,b,d,e) we present the magnitude of these pair amplitudes as a function of frequency ω and phase difference φ for distinct lengths of the superconductors. In Fig. S4(c,f) we also show the ratio between the even- and odd-ω pair amplitudes for several realistic lengths of the superconductors.
At zero phase, φ = 0, only the even-ω component is finite for frequencies within the gap (and also outside it) [Figs. S3(a,c)], while at φ = π the odd-ω pair amplitude is the only one that remains finite [Figs. S3(b,d)].
Of course, at zero frequency the odd-ω pairing vanishes, as expected for any odd function of ω. The fact that odd-ω pairing is the only type of nonlocal superconducting pairing at φ = π persists even when the length $L_S$ of the superconductors increases, supporting the discovery reported in the first part of our manuscript, especially Fig. 2. The intriguing behaviour of these pair correlations can be further seen in Fig. S4, where we clearly observe that the even-ω part completely vanishes at φ = π while the odd-ω pairing becomes the only finite nonlocal pair amplitude, as seen by comparing Fig. S4(a,d) with Fig. S4(b,e). Furthermore, by inspecting the ratio between the even- and odd-ω pair correlations, we note that this ratio vanishes at φ = π due to the vanishing of the even-ω pairing, see Fig. S4(c,f). As a result, we conclude that odd-ω pairing becomes the only type of nonlocal superconducting pairing at φ = π in Josephson junctions with realistic superconductors. These results are in line with our findings presented in the first part of the main text, where, however, Josephson junctions are modelled by single-site superconductors. The agreement between the results presented in this section and those shown in the main text clearly demonstrates that our finding of odd-ω pairing being the only type of nonlocal superconducting pairing at φ = π is robust and very likely to appear even in realistic Josephson junctions. The reason for this agreement is that our simple model in the main text, Eq. (1), already captures the tunnelling processes that permit us to explore Josephson transport in multi-superconductor Josephson junctions. As a result, having odd-ω pairing as the only type of nonlocal pairing implies that it is the main effect enabling CAR at φ = π, exactly in the same way as discussed in the section on "CAR detection of odd-ω pairing" of the main text.
FIG. 1. JJs formed by coupling superconductors $S_i$ with distinct phases $\phi_i$ and the same induced pair potential ∆. In each $S_i$, local pairs are depicted as gray ellipses containing two electrons (black filled circles), referred to as intra-superconductor (local) pairs. Due to the tunneling between superconductors, inter-superconductor (nonlocal) pair correlations emerge (cyan), which can be controlled by $\phi_i$. Normal leads (green) are attached to two $S_i$ for exploring nonlocal transport and detecting inter-superconductor Cooper pairs.
TABLE S1. Allowed superconducting pair symmetries in multi-superconductor Josephson junctions in the presence of spin-mixing fields. The classes ESEE and OSOE, which are spin-singlet, correspond to the pair correlations reported in the main text of this work. | 6,133.4 | 2023-06-05T00:00:00.000 | [
"Physics"
] |
Identification of Cryptic Putative IRESs Initiating the Translation of Nonstructural Proteins Encoded by the HRV16 Genome
Cap-dependent initiation of translation is the canonical mechanism adopted by eukaryotic cells. Internal ribosome entry site (IRES)-dependent translation is a mechanism distinct from 5′ cap-dependent translation. IRES elements are located mainly in the 5′-untranslated regions (UTRs) of viral and eukaryotic mRNAs. In addition, IRESs are found in the coding regions of some viral and eukaryotic genomes and initiate the translation of some functional truncated isoforms. Here, via IRES-initiated expression of proteins, bicistronic vectors and ribosome profiling of the human rhinovirus 16 (HRV16) genome, we found that the coding region of the nonstructural proteins P2 and P3 contains 5 putative IRES elements. These 5 putative IRESs were located within nucleotides 4286-4585, 5002-5126, 6245-6394, 6619-6718 and 6629-6778 and initiated green fluorescent protein (GFP) expression in vitro. This alternative mechanism might be effective and economical, eliminating the time and raw material required to synthesize the full-length polyprotein.
Introduction
The canonical eukaryotic translation mechanism is 5′ cap-dependent (m7GpppN) [1]. However, picornaviruses have adopted an alternative IRES-dependent translation mechanism to initiate polyprotein translation [2-6]. Studies show that the mRNAs of multiple viruses and a minority (<10%) of the mRNAs in eukaryotic cells contain IRES elements, and that classical internal ribosome entry sites (IRESs) are located in the 5′-untranslated region (UTR). According to the secondary structure of their host RNA, IRESs can be divided into four types. Type I IRESs are found in Enterovirus and Rhinovirus genomes [7,8]. Type II IRESs are contained in Cardiovirus and Aphthovirus genomes [9]. HCV and HCV-like IRESs are found in some members of the Flaviviridae and Picornaviridae families, such as hepatitis C virus [10], classical swine fever virus [11,12], porcine teschovirus-1 [13], and simian virus 2 [14]. Intergenic region (IGR) IRESs were originally found in the cricket paralysis virus genome and exist widely in members of the Dicistroviridae family [15]. In contrast, IRESs in eukaryotic cells are difficult to classify into different types because of their diverse structures. In addition to being located in the 5′-UTRs of eukaryotic cellular genes [16-18], some IRESs are also found in the coding regions of eukaryotic cellular genes, such as those encoding the 14-3-3 and prion proteins [19,16]. In addition, our previous study showed that multiple putative IRESs are located in the coding region of the Coxsackievirus B type 3 (CVB3) genome (unpublished). IRESs in the coding regions of viral genomes might initiate the translation of specific proteins during particular stages of the virus life cycle. The Picornaviridae family is one of the largest known virus families and includes many important human and animal pathogens. Picornaviruses are nonenveloped RNA viruses possessing a single-stranded, positive-sense (+) RNA genome (7-8 kb) composed of a 5′-NTR, an open reading frame (ORF), a 3′-NTR and a poly(A) tail. The ORF is translated into a large polyprotein, which is proteolytically cleaved by viral proteases (2A, 3C) to release 4 structural proteins (VP4, VP2, VP3, and VP1) and 7 nonstructural proteins (2A pro, 2B, 2C, 3A, 3B, 3C pro, 3D pol and, in some genera, L) [20,21].
Human rhinovirus 16 (HRV16) belongs to the Rhinovirus genus in the family Picornaviridae. The HRV16 genome is a single-stranded, positive-sense RNA genome approximately 7.5 kb in length [21].
According to the viral genome structure and the classical virus replication mechanism, nonstructural proteins associated with viral replication, such as 2A pro, 2B, 2C, 3A, 3B, 3C pro and 3D pol, are synthesized after the structural proteins. However, nonstructural proteins are the central players in the RNA replication and transcription machinery during the life cycle of RNA viruses. 3D pol, an RNA-dependent RNA polymerase (RdRp), is indispensable for both replication and transcription of the viral genome, and it uses VPg (3B) as a primer to initiate the replication process. Both 2A pro and 3C pro are released by cleavage to form functional viral components; the molecular weight of the large polyprotein is approximately 240 kDa. Synthesis of such a large protein is certain to affect the efficiency of viral replication. Therefore, we reasoned that there may be a more effective mechanism by which viruses can synthesize proteins from the genome. Considering this possibility in combination with results from previous studies, we hypothesized that IRESs are contained within the coding region of the viral genome in addition to the 5′ noncoding region. Therefore, HRV16 was used as a model system to search for putative IRES elements in the viral genome.
To test this hypothesis, a complete experimental scheme was designed to search for putative IRESs and then verify their function. We found 5 putative IRESs, with lengths varying from 100 bp to 300 bp, in the coding region of the nonstructural proteins P2 and P3 in the viral genome. Thus, HRV16 utilizes putative IRESs within the coding region to initiate the translation of nonstructural proteins.
Western blotting
Cells were collected with cell scrapers and centrifuged at 1000 × g for 3 min. Cell pellets were washed with phosphate-buffered saline (PBS) and then lysed with cell lysis buffer (Beyotime) on ice for 40 min. Cell supernatants were harvested by centrifugation at 4 °C and 15000 × g for 10 min. Protein samples were separated by SDS-PAGE and then transferred to nitrocellulose (NC) membranes (GE). NC membranes were blocked with 5% skim milk in PBS at room temperature for 2 h and then incubated with anti-GFP (Proteintech), anti-GRP 78 (Abcam) and anti-β-actin (Abcam) monoclonal antibodies overnight on a shaker at 4 °C. Next, the NC membranes were incubated with a horseradish peroxidase-conjugated anti-mouse IgG secondary antibody (Abcam) at room temperature for 1 h. Specific protein bands on the NC membranes were detected with an enhanced chemiluminescence (ECL) detection kit (PerkinElmer).
Luciferase assay
BHK-21 cells were cultured in 96-well plates to 70-80% confluence and then transfected with bicistronic luciferase plasmids. Luciferase activity was measured with a Dual-Luciferase Reporter Gene Assay Kit (Beyotime) according to the manufacturer's instructions at 24 h post-transfection. The ratio of F-Luc activity to R-Luc activity (F-Luc/R-Luc) reflected the translation-initiation ability of the putative IRES.
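For clarity, the readout reduces to a simple normalization; below is a small sketch (our own, with hypothetical function and variable names) of the ratio and of the 3-fold positivity criterion applied later in the paper:

```python
import numpy as np

def ires_fold(fluc, rluc, fluc_neg, rluc_neg, threshold=3.0):
    """F-Luc/R-Luc ratio normalized to the negative control.

    fluc, rluc: replicate luminescence readings for the test construct;
    *_neg: matching readings for the scrambled-insert negative control.
    Returns the fold change and whether it meets the 3-fold criterion.
    """
    ratio = np.mean(np.asarray(fluc, float) / np.asarray(rluc, float))
    ratio_neg = np.mean(np.asarray(fluc_neg, float) / np.asarray(rluc_neg, float))
    fold = ratio / ratio_neg
    return fold, fold >= threshold
```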
RNA preparation and quantitative reverse transcription-PCR (qRT-PCR)
Bicistronic reporter plasmids containing the final truncated putative positive IRES sequences were transfected into BHK-21 cells. Total RNA was extracted with TRIzol reagent (Sigma-Aldrich) 24 h post-transfection and reverse transcribed to cDNA with a reverse transcription kit (Takara). Two pairs of specific primers targeting the R-Luc and F-Luc genes were designed with Oligo software (supplementary materials 2). qRT-PCR was performed with SYBR Premix Ex Taq II (Takara) using the primers described above. Increased expression of glucose-regulated protein 78 (GRP 78) under endoplasmic reticulum stress (ERS) has been reported in previous studies [22,23]. BHK-21 cells were treated with 0.25 μM TG (Sigma-Aldrich) for 12 h, and the expression of GRP 78 was detected.
BHK-21 cells were cultured in 96-well plates to 70-80% confluence and then transfected with the bicistronic luciferase plasmids. The BHK-21 cells were treated with or without 0.25 μM TG at 12 h post-transfection. Luciferase activity was detected at 24 h post-transfection.
Results
3.1. Identification of putative IRESs within coding regions of the HRV16 genome initiating the translation of nonstructural proteins. According to previous studies [19,16], the target genome fragments were inserted into pEGFP-N1. After transfection, GFP-fused proteins translated via de novo synthesis in an IRES-dependent manner were identified by Western blotting with an anti-GFP antibody. If the same putative IRES was shared by two or more inserted fragments, bands of the same size would be detected in adjacent lanes on the Western blot. To investigate the presence of putative IRESs in the nonstructural protein coding region of the HRV16 genome, the sequences of the ORFs from each start codon (AUG) to the C-terminal codon of P2 or P3 were cloned into the vector pEGFP-N1 separately. If two start codons were very close to each other, only the longer sequence containing both start codons was selected for cloning into pEGFP-N1. Thus, 5 sequences in the P2 region and 8 sequences in the P3 region were individually cloned into the vector pEGFP-N1; the resulting plasmids were designated pP2(2969-4861), pP2(3632-4861), pP2(3926-4861), pP2(4256-4861), pP2(4586-4861), pP3(4586-7084), pP3(4874-7084), pP3(5177-7084), pP3(5672-7084), pP3(5993-7084), pP3(6164-7084), pP3(6395-7084) and pP3(6596-7084). The 5 sequences in the P2 coding region were nt 2969-nt 4861, nt 3632-nt 4861, nt 3926-nt 4861, nt 4256-nt 4861 and nt 4586-nt 4861, and the 8 sequences in the P3 coding region were nt 4586-nt 7084, nt 4874-nt 7084, nt 5177-nt 7084, nt 5672-nt 7084, nt 5993-nt 7084, nt 6164-nt 7084, nt 6395-nt 7084 and nt 6596-nt 7084. Because the full-length 5′ termini of P2 and P3 did not contain an AUG, the first AUG of the P2 sequence was selected in the VP1 coding region, and the first AUG of the P3 sequence was selected in the 2C coding region (Fig. 1a). The molecular weights of the GFP-fusion proteins expressed from these constructs were analyzed (Fig. 1b). Bands corresponding to the same molecular weights, namely, approximately 55 kDa, 40 kDa and 38 kDa, were detected in many adjacent lanes (Fig. 1b). After subtraction of the GFP tag, the nucleotide sequences corresponding to the remaining parts of the three proteins were nt 4256-nt 4861, nt 4481-nt 4861 and nt 4586-nt 4861, respectively. The 300-bp sequences upstream of these nucleotide sequences were nt 3956-nt 4255, nt 4181-nt 4480 and nt 4286-nt 4585, respectively. We concluded that the putative IRESs in the P2 region were located within the nt 3956-nt 4255, nt 4181-nt 4480 and nt 4286-nt 4585 sequences. Multiple protein bands were also detected after transfection of pP3(4586-7084), pP3(4874-7084), pP3(5177-7084), pP3(5672-7084), pP3(5993-7084), pP3(6164-7084), pP3(6395-7084) and pP3(6596-7084) (Fig. 1c). Specifically, 7, 7, 8, 7, 6, 7, 5 and 4 bands were found in the lanes of samples transfected with these plasmids, respectively. Bands corresponding to the same 6 molecular weights, namely, approximately 100 kDa, 75 kDa, 70 kDa, 58 kDa, 38 kDa and 32 kDa, appeared in many adjacent lanes (Fig. 1c). After subtraction of the GFP tag, the nucleotide sequences corresponding to the remaining parts of the six proteins were nt 5177-nt 7084, nt 5672-nt 7084, nt 5993-nt 7084, nt 6395-nt 7084, nt 6719-nt 7084 and nt 6929-nt 7084, respectively.
The 300-bp sequences upstream of these nucleotide sequences were nt 4877-nt 5176, nt 5372-nt 5671, nt 5693-nt 5992, nt 6095-nt 6394, nt 6419-nt 6718 and nt 6629-nt 6928, respectively. We concluded that the putative IRESs in the P3 region were located within the nt 4877-nt 5176, nt 5372-nt 5671, nt 5693-nt 5992, nt 6095-nt 6394, nt 6419-nt 6718 and nt 6629-nt 6928 sequences. In summary, we concluded that 9 putative IRESs might be located in the nonstructural protein coding region of the HRV16 genome. 3.2. Confirmation of putative IRES elements within the nonstructural protein coding region of the HRV16 genome through bicistronic vectors. The above results indicate that 9 putative IRESs are located upstream of 9 potential IRES-dependent protein sequences. To search for putative IRES sequences, according to previous studies [16,25,26], a dual-luciferase reporter plasmid (p-IRES.CHECK) with a hairpin structure (ΔGcal = -74.4 kcal mol-1) between the R-Luc and F-Luc sequences was constructed (Fig. 2a and 2b). We established the criterion of an F-Luc/R-Luc ratio more than 3-fold greater than that of the negative control after transfection of a plasmid containing a putative IRES sequence (Fig. 2a and 2b, right panels). The above plasmids were transfected into BHK-21 cells for 24 h, and the F-Luc/R-Luc ratio of each reporter vector was calculated relative to that of the negative control. (Fig. 4e, right panel.) The F-Luc/R-Luc ratio in cells transfected with pP3-IRES-(6629-6728) was less than 0.7-fold that in cells transfected with pP3-IRES-(6629-6778). Therefore, we concluded that this putative IRES is located within a 150-nucleotide region spanning nucleotides 6629 to 6778. In summary, we found that 5 putative IRESs were located at nt 4286-nt 4585 in the 2C region, and at nt 5002-5126, nt 6245-nt 6394, nt 6619-nt 6718 and nt 6629-nt 6778 in the P3 region. The positions of the 5 putative IRESs in the HRV16 genome are shown in Fig. 5. 3.5. Verification of the function of the putative IRESs. To verify the function of these putative IRESs in initiating protein expression in vitro, p-IRES.CHECK-GFP was constructed by inserting the hairpin structure downstream of R-Luc and replacing the F-Luc gene with the GFP gene in p-IRES.CHECK (Fig. 6a). The abovementioned putative IRESs were inserted between the hairpin and the GFP gene to generate the vectors pP2-IRES(4286-4585)-GFP, pP3-IRES(5002-5126)-GFP, pP3-IRES(6245-6394)-GFP, pP3-IRES(6619-6718)-GFP and pP3-IRES(6629-6778)-GFP. The EMCV IRES sequence was inserted to generate pEMCV-GFP as the positive control vector. A randomly shuffled version of a putative IRES sequence was inserted between the hairpin structure and the GFP gene to generate the negative control plasmid. After transfection into 293T cells for 24 h, GFP proteins in each group were detected by Western blotting with an anti-GFP antibody. Except for the nt 6245-nt 6394 and scrambled sequences, all sequences, including nt 4286-nt 4585, nt 5002-nt 5126, nt 6619-nt 6718 and nt 6629-nt 6778, effectively initiated GFP expression (Fig. 6). 3.6. Ribosome profiling of the HRV16 genome. Cycloheximide stalls ribosomes on mRNA by blocking translation. To understand the ribosome profile of the HRV16 genome and further confirm the putative IRES sequences, H1-HeLa cells were treated with cycloheximide at 21 and 24 h post-infection. The ribosome-protected fragments were subjected to next-generation sequencing, and the RiboSeq reads mapped to the HRV16 genome were counted (Supplemental materials 3).
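The band-to-locus reasoning above can be written down compactly; the helper below (hypothetical names; it assumes a ~27 kDa GFP tag and an average residue mass of ~110 Da) maps an observed fusion-band mass back to the deduced translation start and its 300-nt upstream window:

```python
GFP_KDA = 27.0           # approximate mass of the GFP tag
AVG_RESIDUE_KDA = 0.11   # average amino-acid mass (~110 Da)

def upstream_window(band_kda, orf_end_nt, window=300):
    """Map a GFP-fusion band back to a putative upstream IRES window.

    The non-GFP part of the fusion spans (band_kda - GFP_KDA) worth of
    residues ending at orf_end_nt; the putative IRES is taken to be the
    `window` nucleotides upstream of the deduced start codon.
    """
    residues = (band_kda - GFP_KDA) / AVG_RESIDUE_KDA
    start_nt = int(round(orf_end_nt - residues * 3))
    return start_nt - window, start_nt - 1

# The ~38 kDa band ending at nt 4861 maps to roughly nt 4261-4560, near the
# nt 4286-4585 window reported in the text (average masses make this approximate)
print(upstream_window(38, 4861))
```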
The ribosome profiling results showed that successive nucleotide stretches with more than 150 reads covered the positions of the 5 putative IRESs described above (Table 1), and these locations were basically consistent with the genomic locations (Fig. 7). This consistency provided additional evidence for the biological function of the putative IRESs verified above by a bioinformatic approach.
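Operationally, locating such stretches is a one-pass scan over the per-nucleotide read counts; a small sketch (ours, with hypothetical names) of the criterion used in Table 1:

```python
import numpy as np

def covered_regions(coverage, min_reads=150):
    """Runs of consecutive genome positions whose coverage exceeds min_reads.

    coverage: per-nucleotide RiboSeq read counts along the genome.
    Returns a list of 1-based, inclusive (start, end) tuples.
    """
    mask = np.asarray(coverage) > min_reads
    regions, start = [], None
    for i, hit in enumerate(mask):
        if hit and start is None:
            start = i
        elif not hit and start is not None:
            regions.append((start + 1, i))
            start = None
    if start is not None:
        regions.append((start + 1, len(mask)))
    return regions
```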
Discussion
Translation initiation is a key step in protein synthesis in living cells [27,28]. The 5′ end of most eukaryotic mRNAs contains a cap structure that interacts with ribosomes to initiate translation [28]. The cap-dependent translation initiation mechanism is the canonical mechanism and is used by eukaryotes and most viruses. IRES elements were initially found in the 5′-UTR of the EMCV genome [29]. Viruses utilize an IRES-dependent translation mechanism to synthesize viral proteins [30] and shut down the 5′ cap-dependent translation initiation mechanism of host cells in the endoplasmic reticulum. Approximately 10% of eukaryotic mRNAs contain IRES elements, which are related to cell growth, maturation, apoptosis, stress, and cycle regulation [31,32,30]. IRESs are structured RNA regions that can recruit eukaryotic ribosomes and then initiate translation under conditions in which cap-dependent mechanisms are suppressed, such as DNA damage [33,34] and heat shock [35]. In general, most known IRESs are located in the 5′-UTRs of mRNAs. However, a few IRESs are located in the coding regions of viral or eukaryotic genomes. In murine hepatitis virus (MHV), an IRES is located in mRNA 5; its 280 nucleotides span ORF 5a and ORF 5b, and its 3′ border comprises the initiation codon of ORF 5b [36].
GFP has generally been used as a tag in protein expression applications. Green fluorescence can be observed through fluorescence microscopy to indicate the expression of the fusion protein. In addition, a low-molecular-weight protein fused with GFP has an increased molecular weight and can be detected with an anti-GFP antibody, obviating the need for a monoclonal antibody against each specific protein. As a previous study showed, if one gene contains several putative IRESs, the fusion proteins whose translation is initiated by the putative IRESs can be detected in the same lane by Western blotting with an anti-GFP antibody. In this research, the Western blots showed 9 bands corresponding to truncated proteins with the same molecular weights located at the same positions in multiple lanes, indicating that these proteins might be expressed through an IRES-dependent mechanism.
The indicated molecular weight of the protein expressed by pP2(2969-4861) was close to the predicted molecular weight (S Fig. 1); such a small difference in size might be difficult to resolve by SDS-PAGE separation. The molecular weight of 3ABCD-GFP (111.8 kDa) was less than that of the predicted full-length fusion protein (121.8 kDa) (Fig. 1c). We concluded that this discrepancy was due to cleavage by the viral protease 3C pro, because the protein expressed by pP3(4586-7084) contained a 3C pro cleavage site at the junction of proteins 2C and 3A. Upon treatment with 5 μM or 10 μM rupintrivir, the predicted full-length fusion protein expressed by pP3(4586-7084) was detected; moreover, the inhibitory effect on 3C pro activity was related to the concentration of rupintrivir (S Fig. 2). The region encoding the 3ABCD protein spanned nucleotides 4862-7084 in the HRV16 genome, and the predicted full-length fusion protein expressed from pP3(4874-7084) did not contain the 3C pro cleavage site at the junction of proteins 2C and 3A. Additionally, the 25-kDa bands observed in every lane might correspond to free GFP cleaved from the fusion proteins (Fig. 1b and 1c) [37].
In previous studies, the nucleotide sequence lengths of IRESs in coding regions were generally found to be less than 300 bp [36,16]. To test the authenticity of the putative candidate IRES elements, the putative IRES sequences, each 300 bp in length, were cloned into a bicistronic expression vector containing the R-Luc and F-Luc genes. In addition, the classical EMCV IRES was inserted into a bicistronic expression vector to generate a positive control vector, similar to the method used in other research [38]. A hairpin structure (ΔG cal = -74.4 kcal mol-1) was inserted between the R-Luc and F-Luc genes to guarantee that the F-Luc gene was translated without interference from the R-Luc gene. As a barrier, the stable hairpin structure prevented ribosomes from reading through the stop codon of the R-Luc ORF but did not affect the expression of downstream genes [39,40,16]. The R-Luc gene in the bicistronic expression vector was translated in a cap-dependent manner; in contrast, F-Luc gene expression was dependent on the inserted nucleotide sequence. Putative IRES activity was represented by the F-Luc/R-Luc ratio relative to that in the negative control cells. According to our previous research (unpublished), the criterion for a true putative IRES was an F-Luc/R-Luc ratio at least 3-fold greater than that in the negative control cells. The results showed that 5 putative IRES elements were located in the nonstructural protein coding region of the HRV16 genome. Deletion analysis was conducted to map the ranges of the 5′ and 3′ boundaries of these putative IRESs. If the putative IRES activity of a truncated isoform was greater than 0.7-fold that of the intact nucleotide sequence, the putative IRES element was deemed to be located in the truncated region.
According to this criterion, the putative IRES activity of nucleotide sequence 4286-4585 was dependent on both nt 4286-nt 4435 and nt 4436-nt 4585 (Fig. 3a). In mapping the putative IRES within the nt 6095-nt 6394 region, the putative IRES activity of nucleotide sequence 6245-6394 was found to be greater than 0.7-fold that of the full-length nucleotide sequence 6095-6394 (Fig. 3c). Further deletion mutation of nucleotide sequence 6245-6394 led to a significant reduction in putative IRES activity (Fig. 4c). Similar experimental results were found in mapping the putative IRESs within the nt 6629-nt 6928 region (Fig. 3e and Fig. 4e). The putative IRES activity of the truncated region was greater than 0.7-fold that of the full-length sequence, showing that the truncated region was essential for putative IRES activity [32].
After truncation, the putative IRES activity levels of nucleotide sequences 4952-5101 and 5027-5176 were much higher than that of nucleotide sequence 4877-5176 (Fig. 3b), and the putative IRES activity increased further after additional truncation (Fig. 4a). Similarly, in mapping the putative IRES within nt 6419-nt 6718, the putative IRES activity was found to be slightly increased when the nucleotide sequences were truncated stepwise (Fig. 3d and 4d). This phenomenon might arise from alleviation of steric hindrance on the binding of ribosomes or IRES trans-acting factors (ITAFs) to the IRES after truncation. The putative IRES activity was almost equal for nucleotide sequences 5002-5101 and 5027-5126, and the main segments of these two regions overlapped. Therefore, the putative IRES was located roughly within the 125-nucleotide region spanning nucleotides 5002-5126. Ribosome profiling showed that the ribosome positions in the genome "snapshot" fell within the abovementioned putative IRES regions [41].
According to our previous study [16], to exclude the possibility of alternative splicing in the constructed vectors, the transcript copies of the two cistrons were quantified by RT-qPCR after transfection. The mRNA copy numbers of F-Luc and R-Luc were not significantly different, indicating that no major aberrant splicing isoforms were produced from the constructed vectors (S Fig. 3 and S Table 1).
The use of TG as an inducer of ERS has been reported in previous studies [23,42]. Our results indicated that the putative IRESs can effectively initiate the expression of GFP in vitro (Fig. 6). A stable secondary structure is essential for putative IRES activity [45], and obvious stem-loops were found in the predicted secondary structures of the putative IRESs. The likelihood of the shuffled sequence forming a stable secondary structure was much lower than that of the putative IRES (supplementary materials 4).
The weak potential secondary structure of the shuffled sequence might explain its insignificant cap-independent translation.
In general, IRES elements are located within the 5′-UTRs of cellular or viral mRNAs, upstream of the AUG initiation codon, and initiate the translation of full-length proteins [46]. However, several reports have shown that IRESs within the coding regions of viral genomes drive protein translation. The RNA genome of HIV-1 contains two IRESs. The first is located in the 5′-UTR and mediates viral replication during the G2/M phase of the cell cycle [26]. The other is located within the coding region of gag and initiates the expression of a novel Gag isoform through an AUG initiation codon within the gag coding region; this Gag isoform may participate in HIV-1 replication in vitro [47]. In addition, the HIV-2 virion contains three isoforms of Gag (57 kDa, 50 kDa and 44 kDa), and the two N-terminally truncated shorter isoforms are translated from an IRES element located in the coding region of gag. As integrated Gag polyproteins participate in viral capsid assembly and genome packaging, the two shorter Gag isoforms might be synthesized independently and perform other functions of Gag [40]. In the canonical mechanism, a single integrated polyprotein is synthesized through the IRES in the 5′-UTR of the viral genomic RNA and is then cleaved by the viral proteases 2A and 3C to generate the structural and nonstructural proteins [21]. The polyprotein synthesis needed to meet a possible requirement for certain viral proteins during the infection cycle of HRV16 might cost time and waste energy.
Considering our results, we concluded that the translation of proteins with various molecular weights is mediated through different putative IRESs in the HRV16 genome. The translational efficiency of viral nonstructural proteins is inversely related to the order of the corresponding protein coding region in the genome. A segment of 3D (52.3 kDa) dependent on the putative IRES within nt 6619-nt 6718 or nt 6629-nt 6778 was translated in the highest amount, and whether this protein is conducive to viral replication needs further study. In addition, the translation of a protein containing a segment of 3B and the complete 3CD protein was initiated by the putative IRES located between nucleotides 5002 and 5126, and the translation of a protein containing a segment of the 2C protein was initiated by a putative IRES located between nucleotides 4286 and 4585. These nonstructural proteins efficiently promote virus replication. Tables: Table 1. The positions of successive nucleotide sequences with more than 150 reads basically covered those of the 5 validated IRESs identified above. The F-Luc/R-Luc ratio relative to that of the negative control (F-Luc/R-Luc/N) was calculated. Experiments were repeated independently three times. *P, positive; N, negative | 5,566.4 | 2021-03-23T00:00:00.000 | [
"Biology"
] |
Competing Interactions and Traveling Wave Solutions in Lattice Differential Equations
The existence of traveling front solutions to bistable lattice differential equations in the absence of a comparison principle is studied. The results are in the spirit of those of Bates, Chen, and Chmaj [1], but are applicable to vector equations and to more general limiting systems. An abstract result on the persistence of traveling wave solutions is obtained and is then applied to lattice differential equations with repelling first and/or second neighbor interactions and to some problems with infinite range interactions.
Introduction
We study the existence of traveling wave solutions for lattice differential equations (LDEs) by means of a perturbation argument and Fredholm theory for mixed type functional differential equations. In particular, we prove persistence of traveling waves for a general class of lattice differential equations with bistable nonlinearity. Consider the following equation,
$$\dot u_j(t) = d_1\big(u_{j+1} - 2u_j + u_{j-1}\big) + d_2\big(u_{j+2} - 2u_j + u_{j-2}\big) - f(u_j), \quad j \in \mathbb{Z}. \tag{1.1}$$
Our primary interest is in competing interactions between first and second nearest neighbors, when $d_1 < 0$ and $d_2 < 0$. We develop a general technique for continuation of solutions of vector dissipative lattice differential equations and obtain results on existence of traveling front solutions for (1.1) when $d_1 < 0$, $0 < -d_2 \ll 1$, and when $d_2 < 0$ and $|d_1| \ll 1$.
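The front dynamics of (1.1) can be explored numerically. The sketch below integrates (1.1) by forward Euler with an illustrative cubic bistable nonlinearity $f(u) = u(u-a)(u-1)$, clamped tails at the two stable states, and one competing-interaction choice of couplings; all parameter values are assumptions chosen only to visualize front propagation, not values used in the analysis.

```python
# Minimal sketch (not the authors' code): direct time integration of (1.1)
# with a cubic bistable nonlinearity, to observe whether a front connecting
# the equilibria 0 and 1 propagates for a given pair (d1, d2).
import numpy as np

def rhs(u, d1, d2, a):
    up = np.concatenate(([0.0, 0.0], u, [1.0, 1.0]))   # clamp tails at 0 and 1
    lap1 = up[3:-1] - 2.0 * u + up[1:-3]               # u_{j+1} - 2 u_j + u_{j-1}
    lap2 = up[4:] - 2.0 * u + up[:-4]                  # u_{j+2} - 2 u_j + u_{j-2}
    return d1 * lap1 + d2 * lap2 - u * (u - a) * (u - 1.0)

n, a, dt = 400, 0.3, 0.01
u = np.where(np.arange(n) > n // 2, 1.0, 0.0)          # step initial datum
for _ in range(50_000):                                 # forward Euler in time
    u += dt * rhs(u, d1=1.0, d2=-0.05, a=a)

print("front near site", int(np.argmax(u > 0.5)))       # crude front position
```

Tracking the crossing site over time gives a rough wavespeed estimate; refining the time stepper and boundary handling would be needed for quantitative work.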
Our contribution is to develop techniques based upon the implicit function theorem that are applicable to vector equations, similar to those developed by Bates, Chen, and Chmaj [1] for scalar equations. Whereas in [1] the limiting system is the traveling wave equation associated with the PDE $u_t = u_{xx} - f(u)$, we consider, through the use of the Fredholm theory for mixed type functional differential equations [14], limiting equations that may correspond to lattice differential equations. Among the chief motivations in this work (and in [1]) for the use of implicit function theorem based techniques is the desire to handle cases in which there does not exist a comparison principle.
Traveling wave solutions to (1.1) have been extensively studied when $d_1 > 0$ and $d_2 = 0$. In particular, the work of Weinberger based upon the development of an abstract comparison principle is applicable to both PDEs and LDEs, although primarily for monostable as opposed to bistable problems. Zinner proved existence of traveling fronts using topological fixed point results [21] and stability [20] in the bistable case. A general stability theory was developed by Chow, Mallet-Paret, and Shen [5], and Shen employed comparison principle techniques to prove results on existence, uniqueness, and stability of traveling fronts in which $f \equiv f(u, t)$ may depend periodically on $t$. More recently, Chen, Guo, and Wu developed a framework for existence, uniqueness, and stability of bistable equations in periodic media [4], and Hupkes and Sandstede [10] proved the existence of traveling pulse solutions for discrete-in-space FitzHugh-Nagumo equations that occur when coupling a relaxation variable to the discrete Nagumo equation ((1.1) with $d_1 > 0$ and $d_2 = 0$). Associated with traveling waves for (1.1) when $d_1 > 0$ and $d_2 = 0$ is the mixed type functional differential equation
$$-c\varphi'(\xi) = d_1\big(\varphi(\xi - 1) - 2\varphi(\xi) + \varphi(\xi + 1)\big) - f(\varphi(\xi)),$$
which results from the traveling wave ansatz $u_j(t) = \varphi(j - ct)$. Among the important contributions to the study of these types of equations are the pioneering work of Rustichini [17,18], the development by Mallet-Paret of a Fredholm theory for linear mixed type functional differential equations [14], and its use to understand the global structure of traveling wave solutions [15]. Exponential dichotomies for these equations were investigated in [6] and [13], and center manifold theory and Lin's method were developed in [8] and [9], respectively.
The case in which $d_1 < 0$ and $d_2 = 0$ was investigated in [19] and [2]. In [19] a model was developed for the dynamics of twinned microstructures that arise in martensitic phase transformations, e.g., in shape memory alloys, which led to (1.1) in an overdamped limit. Subsequently, the bistable nonlinearity $f(u) = u - H(u - a)$, with $H$ the Heaviside step function, was employed, and transform techniques were utilized to determine waveforms and wavespeeds. In [2] the cubic nonlinearity was employed and the problem was converted to a periodic media problem so that the results of [4] could be applied. A wealth of traveling wave solutions of both bistable and monostable type were revealed. Similar techniques may be used to determine traveling fronts when $d_2 < 0$ and $d_1 = 0$, which results in two decoupled systems of equations. In [19,2] one of the essential ideas (see also [3]) was to convert to a system in terms of odd and even lattice sites. This effectively allows one to consider connecting orbit problems between vector equilibria as opposed to connecting orbit problems between time independent spatially periodic solutions. Existence and structure of traveling fronts for higher space dimension versions of (1.1) were recently investigated in [11] using comparison principle and continuation techniques. This paper is organized as follows. In Section 2 we present some of the notation we will employ and background on Fredholm theory from [14] for linear mixed type functional differential equations. In addition, we summarize two approaches to the existence of traveling wave solutions in lattice differential equations. The first, due to Chen, Guo, and Wu [4], provides existence, uniqueness and stability results for traveling front solutions of (1.1) when $d_1$ and $d_2$ are positive. The second is due to Bates, Chen, and Chmaj [1] and provides existence of traveling front solutions when $d_1 + 4d_2 > 0$. Section 3 contains our main results and establishes the persistence of traveling wave solutions for vector equations. In particular, we consider systems of lattice equations and allow, under certain non-restrictive conditions, general limiting systems. In Section 4 we consider the application of the general results of Section 3 to the existence of traveling fronts for (1.1) for values of $d_1, d_2$ which, even after rewriting as a system (equivalently, in a periodic medium), do not possess a comparison principle. We end with conclusions in Section 5.
Fredholm Alternative for Lattice Differential Equations
If $X, Y$ are Banach spaces with norms $\|\cdot\|_X$, $\|\cdot\|_Y$ respectively, then we let $L(X, Y)$ denote the Banach space of bounded linear operators $T : X \to Y$. Denote the kernel and range of $T \in L(X, Y)$ by $K(T)$ and $R(T)$, respectively. Recall that $T$ is a Fredholm operator if $R(T)$ is closed and both $\dim K(T)$ and $\mathrm{codim}\, R(T)$ are finite. In [14], Mallet-Paret investigated the Fredholm alternative for functional differential equations of mixed type of the form
$$-c u'(x) = \sum_{j=1}^{N_1} A_j(x)\, u(x + r_j) + h(x),$$
for $1 \le p \le \infty$, where $I$ is some bounded interval, $r_1 = 0$, and $r_j \ne r_k$ for $1 \le j < k \le N_1$, $N_1 \ge 2$. Defining
$$(\Lambda u)(x) := -cu'(x) - \sum_{j=1}^{N_1} A_j(x)\, u(x + r_j), \tag{2.5}$$
we may write the equation as $\Lambda u = h$, with the homogeneous equation $\Lambda u = 0$. When each $A_j(x)$ is a constant matrix, independent of $x$, we denote it by $A_{j,0}$, which yields the constant-coefficient version of (2.5). We recall Theorem A of Mallet-Paret's paper [14]: Theorem 2.1 ([14]). For each $p$ with $1 \le p \le \infty$, $\Lambda_L$ is a Fredholm operator from $W^{1,p}$ to $L^p$ provided that the equation $-cu'(x) = Lu$ is asymptotically hyperbolic.
We note here that for linear mixed type functional differential equations the standard formula for computation of the Fredholm index is generally not valid, but this is remedied using the spectral flow formula (see [14] Theorem C).
Traveling Waves for Bistable Dynamics
In this subsection, we state the results on traveling waves of lattice equations with bistable dynamics from [4] and [1]. In [4], a general system of spatially discrete reaction diffusion equations for $u(t) = \{u_n(t)\}_{n\in\mathbb{Z}}$ is considered:
$$\dot u_n(t) = \sum_k a_{n,k}\, u_{n+k}(t) + f_n(u_n(t)), \quad n \in \mathbb{Z},\ t > 0, \tag{2.6}$$
where the coefficients $a_{n,k}$ are real numbers and the following assumptions hold. A1. Periodic medium: there exists a positive integer $N$ such that $a_{n+N,k} = a_{n,k}$ and $f_{n+N}(\cdot) = f_n(\cdot) \in C^2(\mathbb{R})$ for all $n, k \in \mathbb{Z}$. A2. Existence of ordered, periodic equilibria.
After an appropriate change of variables, the equilibria take the form $\varphi^- = 0$ and $\varphi^+ = 1$. A3. Finite-range interaction: there exists a positive integer $k_0$ such that $a_{n,k} = 0$ for $|k| > k_0$ and for all $n \in \mathbb{Z}$. A4. Irreducibility: any two sites are connected by a chain of indices $i_1, \dots, i_m$ along which $a_{i_s,\, i_{s+1} - i_s} > 0$.
As $\epsilon \to 0$, the limiting equation (2.8) is obtained. It is well known that (2.8) has a unique traveling wave solution, denoted by $(c_0, \varphi_0)$.
Then there exists a positive constant $\epsilon^*$ such that for every $\epsilon \in (0, \epsilon^*)$, the problem (2.7) admits a solution $(c_\epsilon, \varphi_\epsilon)$ satisfying $\lim_{\epsilon \to 0} (c_\epsilon, \varphi_\epsilon) = (c_0, \varphi_0)$. Next, with the Fredholm theory in [11] and ideas in [1], we study the existence of traveling waves to vector LDEs in an abstract framework.
Persistence of Traveling Waves to Lattice Differential Equations with Perturbations
In this section, our goal is to study the persistence of traveling waves of the lattice differential equation (3.1), where $\Lambda$ is defined as in (2.5), that is, $(\Lambda u)(x) = -cu'(x) - \sum_{j=1}^{N_1} A_j(x)\, u(x + r_j)$ with $r_1 = 0$ and $r_j \ne r_k$ for $1 \le j < k \le N_1$, $N_1 \ge 2$. The perturbed system (3.2) of (3.1) is obtained by adding a perturbation term $\epsilon B u$, where $\epsilon > 0$ and $B$ is the perturbation operator. We now give the assumptions for the systems (3.1) and (3.2). For the nonlinear term we assume (H1); we remark that even though our application examples in the next section focus on bistable nonlinearities, (H1) is a more general assumption. Assume also that (H2) there exists a traveling wave solution connecting 0 and 1 for (3.1).
We let $(c_0, \varphi_0)$ be a traveling wave with speed $c_0 > 0$ for (3.1), and we make a corresponding assumption, (H3), on the perturbation term. For simplicity, let $\Lambda_\epsilon = \Lambda + \epsilon B$, so that (3.2) may be written in the form (3.3). It is natural to hope that, at least for small $\epsilon$, (3.3) also has a traveling wave solution.
If $L_0^\pm$ are asymptotically hyperbolic, then (H4) is satisfied. This is equivalent to checking that $L_0^\pm$ are hyperbolic at $\pm\infty$. Note that $\varphi_0(\infty) = 1$ and $\varphi_0(-\infty) = 0$; then (H4) is equivalent to a hyperbolicity condition at these limiting equilibria, which we denote (Ĥ4). In applications, we may use (Ĥ4) instead of (H4) if needed, since (Ĥ4) can be more easily verified.
To prove Theorem 3.1 with perturbation arguments for Fredholm operators, we borrow ideas from [1] that are applicable to vector LDEs. We made assumption (H2) for (3.1) instead of specifying a particular equation having a traveling wave solution, like (2.8). Existing literature, such as Theorem 2.2 in Section 2 proved by Chen, Guo and Wu [4], can provide nice candidates for (3.1) satisfying (H2). To verify (H4), the Fredholm alternative theory (see [14] and [11]) plays an important role. Let $X := H^1(\mathbb{R}, \mathbb{R}^N)$. Since $L_0^\pm$ are Fredholm operators and $\dim K(L_0^\pm)$ is finite, $X$ can be decomposed accordingly. Following the ideas in [1], we let $\varphi = \varphi_0 + \psi$ for $\psi \in X_\eta$ and formulate the problem as a fixed point problem (3.7). In some places we need to study the operator $L_\epsilon^+ = c_0\psi' - \Lambda_\epsilon\psi + \gamma(\varphi_0)\psi$ and its adjoint. Two facts are used: (1) $L_0^+\psi_0^+ = 0$, which follows by differentiating the traveling wave equation and a direct computation, and, by Theorem 2.1, $\dim K(L_0^+) = \dim K(L_0^-)$; (2) there exists a positive constant $C_0$, which depends only on $F$, bounding the corresponding inverse. Without loss of generality, we assume $\|\varphi_n\|_{H^1} = 1$. On the other hand, by the construction, $u$ is in the orthogonal complement of $K(L_0^-)$. Let $c(\psi)$ be the unique constant such that $R(c, \psi) \perp \psi_0^-$; the remaining identities can be verified by direct computation.
Next we prove the case with $\epsilon = \epsilon^*$. Passing to a subsequence, $u_{\epsilon_k}$ converges uniformly on bounded sets. Let $c_{\epsilon^*} = \lim_{\epsilon_k \to \epsilon^*} c_{\epsilon_k}$. This completes the proof. Remark 3.1. Replacing $(c_0, \varphi_0)$ by $(c_{\epsilon^*}, \varphi_{\epsilon^*})$ and following the arguments of the proof of Theorem 3.1, $\epsilon^*$ can be extended further as long as (H4) remains satisfied.
In [14], Mallet-Paret provided some sufficient conditions for the kernel of scalar LDEs to be one dimensional, and in [11], Hupkes and the first author of the current paper generalized the results of [14] to vector LDEs, where $\gamma \ge 0$ and $\rho \in V \subset \mathbb{R}$. Assume that (HA) $A$ is irreducible (i.e., it is not similar to a block upper-triangular matrix) and nonnegative.
Applications: Existence of Traveling Waves for Mixed Type LDEs
We introduce four examples in this section. In the first three subsections, we consider equation (1.1). In the last subsection, we consider perturbations of equation (2.6) with infinite-range interactions. Let $d = d_1 + 4d_2$. We call $u_j$ a stationary solution of (1.1) if it satisfies
$$d_1(u_{j+1} - 2u_j + u_{j-1}) + d_2(u_{j+2} - 2u_j + u_{j-2}) - f_a(u_j) = 0,$$
and an $N$-periodic stationary solution of (1.1) if, in addition, $u_{j+N} = u_j$; a minimal numerical sketch of such a periodic equilibrium follows.
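As a minimal sketch (assuming the cubic bistable nonlinearity $f_a(u) = u(u-a)(u-1)$ and illustrative coupling values, neither of which is prescribed by the text), the Newton iteration below computes a 2-periodic stationary solution by exploiting the fact that for such a state $u_{j\pm 2} = u_j$, so the $d_2$ terms vanish.

```python
# Sketch: Newton iteration for a 2-periodic stationary solution (u_e, u_o) of
# (1.1). For a 2-periodic state the nearest neighbors of an even site both
# equal the odd value (and vice versa), so
#   d1*(u_{j+1} - 2 u_j + u_{j-1}) = 2*d1*(u_opp - u_j),
# while the d2 terms drop out because u_{j+2} = u_j.
import numpy as np

def F(v, d1, a):
    ue, uo = v
    f = lambda u: u * (u - a) * (u - 1.0)   # illustrative bistable nonlinearity
    return np.array([2.0 * d1 * (uo - ue) - f(ue),
                     2.0 * d1 * (ue - uo) - f(uo)])

d1, a = -0.1, 0.5                           # repelling first-neighbor coupling
v = np.array([0.9, 0.1])                    # initial guess for (u_e, u_o)
for _ in range(50):
    f0 = F(v, d1, a)
    J, h = np.empty((2, 2)), 1e-7           # forward-difference Jacobian
    for i in range(2):
        dv = v.copy(); dv[i] += h
        J[:, i] = (F(dv, d1, a) - f0) / h
    v = v - np.linalg.solve(J, f0)

print("2-periodic equilibrium (u_e, u_o):", v, " residual:", F(v, d1, a))
```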
Traveling Waves Connecting 0 and 1
Consider the limiting equation (4.2). For $d > 0$, it is well known that equation (4.2) has a unique traveling wave $\varphi_0$ with speed $c_0$. Now we consider the system (4.1). By a change of variables, we can make $f$ satisfy (i) in Theorem 2.3. If (A1) is not satisfied, for $d < 0$, we will transform our model into a new one which fits the framework of the perturbation method developed in the previous section. In Section 4.2, we consider the case with $d < 0$ in which $d_1$ dominates $d_2$ in the sense $|d_1| \gg |d_2|$. In Section 4.3, we deal with the case with $d < 0$ in which $d_2$ dominates $d_1$ in the sense $|d_2| \gg |d_1|$.
Traveling Waves Connecting Two 2-Periodic States
As in the work of Brucal-Hallare and Van Vleck [2], we use a 2-D transformation. First we write the even and odd nodes of the above equation as $x = \{x_j\}_{j\in\mathbb{Z}}$ and $y = \{y_j\}_{j\in\mathbb{Z}}$, respectively, and obtain a coupled system. To compute the equilibria, we look for constant states $(x^\pm, y^\pm)$ of this system.
Then substituting into (4.3) we obtain
where the effective coefficients $d_e$ and $d_o$ are determined by $d_1$ and $d_2$, and we choose proper $x, y$ such that $d_e, d_o > 0$. If $d_2 = 0$, this is the case studied in [2]. We remark that the case with $d_2 = 0$ can be easily extended to the case with $d_2 \ge 0$.
Proof. The proof follows by direct computation.
We can pick those equilibria $(w^\pm, x^\pm, y^\pm, z^\pm)$ such that, after the transformation to 0 and 1, any other 4-periodic state $\varphi = \{\varphi_n\}_{n\in\mathbb{Z}}$ with $\varphi_n \in (0, 1)$, if it exists, is unstable. In this paper, we focus on the cases having bistable dynamics after the transformation. By Theorem 2.2, there exists a traveling wave solution $(c_0, \varphi_0)$ for (4.12).
Remark 4.2.
In this section we have considered $\tilde{A}_j$ such that the results in [4] on existence of traveling waves for bistable problems give monotone waveforms for the limiting system. This yields, via the results in [11], a one dimensional kernel for the linearization about the reference solution. Alternatively, if the perturbations include all terms multiplying $d_1$, then the limiting system is decoupled and (A4) is not satisfied. However, by considering the even and odd systems independently, the linearization about the reference solution has a two dimensional kernel, and the behavior of solutions under perturbation may be analyzed using the bifurcation equations obtained through the Lyapunov-Schmidt reduction.
Traveling Waves for LDEs with Infinite-Range Interactions
In this section, we study a generalized model of [4] obtained by adding some infinite-range interactions. Consider the following:
$$\dot u_n(t) = \sum_k a_{n,k}\, u_{n+k}(t) + f_n(u_n(t)), \quad n \in \mathbb{Z},\ t > 0, \tag{4.14}$$
where the coefficients $a_{n,k}$ are real numbers satisfying $\sum_k |a_{n,k}|\, e^{k\lambda} < \infty$ for any $\lambda \in \mathbb{R}$ and satisfy assumptions (A1), (A2), (A4) and (A5). Compared with the equation in [4], the essential difference is in (A3) and (A5): we remove assumption (A3), finite-range interactions, and consider an infinite sum in (A5).
Let $(B_{k_0}\varphi)_i := \sum_{|k| < k_0} a_{i,k}\, e^{k\mu}\, \varphi_{i+k}$ for given $\mu \in \mathbb{R}$, and consider the associated eigenvalue problem. Lemma 4.4. For each $k_0$, if $B_{k_0}$ is irreducible and quasipositive (i.e., its off-diagonal elements are nonnegative), then a principal eigenvalue exists, denoted by $\lambda(k_0)$. Moreover, if both $\lambda(k_0)$ and $\lambda(\infty)$ exist, then $\lim_{k_0\to\infty}\lambda(k_0) = \lambda(\infty)$. Proof. The existence of a principal eigenvalue follows from the Krein-Rutman theorem. Moreover, we have that $\lambda(k_0) = \lim_{n\to\infty}\|B_{k_0}^n\|^{1/n}$, which implies the stated limit. The subsequent lemma can be proved by modifying the arguments (replacing $k_0$, which defines the finite range of interactions, with $n$, the period of the medium) in the proof of Theorem 2 of [4].
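A hedged numerical illustration of Lemma 4.4: for an $N$-periodic medium, the truncated operator $B_{k_0}$ restricts to an $N \times N$ matrix on $N$-periodic sequences, whose principal eigenvalue can be estimated by power iteration. The interaction kernel $a(i, k) = e^{-|k|}$ used here is an illustrative choice, not one taken from the paper.

```python
# Sketch: principal eigenvalue lambda(k0) of the truncated operator B_{k0},
# assembled on one period of the medium and estimated by power iteration
# (Krein-Rutman applies since the matrix is nonnegative and irreducible).
import numpy as np

N, k0, mu = 6, 25, 0.3                              # period, truncation, exponent
a = lambda i, k: np.exp(-abs(k)) if k != 0 else 0.0 # illustrative kernel

B = np.zeros((N, N))                                # B_{k0} on N-periodic sequences
for i in range(N):
    for k in range(-k0, k0 + 1):
        B[i, (i + k) % N] += a(i, k) * np.exp(k * mu)

v, lam = np.ones(N), 0.0
for _ in range(200):                                # power iteration
    w = B @ v
    lam = np.linalg.norm(w)
    v = w / lam

print("principal eigenvalue lambda(k0) ~", lam)
```

Increasing k0 and watching lam stabilize gives a numerical view of the convergence lambda(k0) to lambda(infinity) asserted in the lemma.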
Then we have the following theorem.
Proof. The existence of traveling waves follows from the arguments in Section 3. Next we show that monotonicity persists under small perturbations. By the arguments in Theorem 2 of [4] (see Lemma 4.5), a traveling wave must have exponential tails, with decay rates $\lambda_0 > 0$ and $\lambda_1 < 0$ at the two ends. Consequently, $\partial u_i(t)/\partial t$ has a fixed sign for $|i - ct| > M$ for some large $M$. Thus the traveling wave preserves monotonicity at the two far ends under small perturbations, because the principal eigenvalue preserves its sign under small perturbations. Obviously, $\partial u_i(t)/\partial t$ also preserves its sign on $i - ct \in [-M, M]$ under small perturbations. This completes the proof.
Conclusion
In this paper we develop an existence theory via perturbation arguments for traveling wave solutions of vector lattice differential equations. Motivation comes from problems in which there is no comparison principle. In particular, we consider lattice differential equations in which there are repelling first and/or second nearest neighbor interactions. The structure of the kernel (see Proposition 8.2 in [11]) of the linearized operator of the limiting system is central to our analysis. Our general result is modeled after the perturbation arguments in [1]. A possible alternative approach is the Newton/Lyapunov-Schmidt method developed in [7,11]. Finally, we employ the technique developed here to show the existence of traveling waves for bistable lattice differential equations in periodic media with infinite range interactions. Although the results obtained here are primarily of a local nature, they may be extended to global continuation results in certain cases. This necessitates a Fredholm theory for linearized operators that do not satisfy a strict ellipticity condition such as (A5), e.g., see [1], together with results on the dimension and structure of the kernel. While the Fredholm theory for problems with infinite range interactions is not well developed, the results in [12] apply to certain infinite range interactions. | 4,705.4 | 2012-10-28T00:00:00.000 | [
"Physics",
"Mathematics"
] |
FREE AND ZEOLITE-IMMOBILIZED PROBIOTIC MIXTURE VERSUS SODIUM VALPROATE IN PREVENTION OF OXIDATIVE STRESS AND MODULATION OF THE L-ARGININE INTRACELLULAR METABOLIC PATHWAYS IN THE RAT BRAIN AND BLOOD FOLLOWING DEXAMPHETAMINE-INDUCED BIPOLAR DISORDER
Experimental bipolar disorder (BD) was induced by repeated daily injection of increasing doses of d-amphetamine sulfate (AMPH) (2-4 mg kg-1, 18 injections) in male young adult Wistar rats, characterized by temporal arousal mimicking mania, and reduced exploratory and locomotor activities associated with behavioural depression under the condition of withdrawal of AMPH. At the end of the injection course, a stimulation of lipid peroxidation processes and alterations in the mitochondrial and cytoplasmic activities of both arginase and nitric oxide synthase (NOS) were observed in the regions of the brain corticolimbic system (prefrontal cortex, striatum, hippocampus and hypothalamus) and blood leukocytes. We have shown for the first time that reversal treatment with the mixture of specific probiotics with psycho- and antifungal activities in free (PMF) and zeolite-immobilized (PMZ) forms, and/or with a mood stabilizer, sodium valproate (VPA), inhibited oxidative stress and differentially modulated the L-arginine metabolic pathways in the brain and blood following AMPH-induced BD. Both PMF and PMZ efficiently normalized the activities of arginase isoforms and upregulated the suppressed intracellular NOS, along with gut microbiota restoration and prevention of histopathological changes in the brain regions, accompanied by normalization of rat behaviour.
INTRODUCTION
Bipolar disorder (BD), a complex psychiatric disorder, is one of the leading causes of disability and low quality of life among both men and women, affecting about 60 million people worldwide.1,2 In recent years the incidence of BD has increased, the mean age of patients has decreased to 42 years, and without treatment approximately 15 % of patients with BD commit suicide.3 Nowadays, the multiple character of the BD etiology is accepted, involving oxidative stress, mitochondrial dysfunction, inflammation, cell signalling, apoptosis, impaired neurogenesis, etc., that are controlled by current mood stabilizers such as valproate, lithium and lamotrigine.4 It has also become obvious that microbiome alterations are implicated in the stress response, memory functions, social behaviour and mood, contributing to the pathophysiology of BD and other neuropsychiatric disorders.5,6 Moreover, gut microbiota can affect cognition and behaviour through a number of immune-related mechanisms.7 Since normalization of microbiota with probiotics (live useful bacteria) shows antipsychotic and antidepressant effects, their possible adjunctive therapeutic role in mood-related psychiatric symptoms has been suggested.8,9 Previously, we have shown that administration of zeolite-immobilized probiotics may protect from the development of depression/anxiety and cognitive deficit in stressed rats.10 We have also demonstrated that changes in the behaviour, gut microbiota, brain morphology and redox homeostasis are accompanied by perturbations in the L-arginine intracellular metabolic pathways in the regions of the corticolimbic system and blood leukocytes following d-amphetamine-induced BD.11-13 In this study we show for the first time the effect of reversal treatment with the mixture of specific probiotics in free (PMF) and zeolite-immobilized (PMZ) forms on the mitochondrial and cytoplasmic activity of the arginine-metabolizing enzymes arginase and nitric oxide synthase (NOS), compared to reversal treatment with sodium valproate (VPA), a multiple-action anticonvulsant and mood stabilizer that is also known as an effective drug in the treatment of BD.14

Chemicals

d-Amphetamine sulfate (AMPH) (Sigma, St. Louis, MO), N^G-monomethyl-L-arginine hydrochloride (Calbiochem, La Jolla, CA), dextran (Mr ~70 000) (Serva, Heidelberg, Germany) and bovine serum albumin (Carl Roth GmbH, Karlsruhe) were used. All other reagents were purchased from Sigma-Aldrich Chemical Co. (St. Louis, MO, USA).
Animals and treatments
All procedures involving animals were in accordance with the International Laboratory Animal Care guidelines and the European Communities Council Directive (86/609/EEC) and approved by the local committee on biomedical ethics (H. Bunyatyan Institute of Biochemistry, Yerevan, NAS RA). Two- to 3-month-old male Wistar rats from our breeding colony were used. All animals were maintained on a 12 h light/dark cycle at normal room temperature and housed in groups of 6 per cage with free access to food and tap water.
Experimental design
The animals were divided into a control group of native rats and experimental groups, in which BD was reproduced by repeated intramuscular (i.m.) injection of non-neurotoxic escalating doses of AMPH (2-4 mg kg-1).15,16 Rats received AMPH once a day on each weekday (but not on weekends), 18 injections in total. After the ninth injection the experimental animals were divided into four groups: an AMPH group, in which rats continued to receive AMPH only; a VPA group, in which, in parallel with AMPH injection, animals were orally gavaged with VPA at a dose of 200 mg kg-1; and PMF and PMZ groups, in which, in parallel with AMPH injection, animals were fed with 1 mL (6 x 10^9 CFU mL-1) of the PMF and/or PMZ. Toward the end of the treatment, all of the animals underwent behavioral testing in the open field (OF) and elevated plus-maze (EPM). Stereotypy ratings were also scored. Thereafter, rats were decapitated.
Open field (OF) test
The rats were placed singly into an OF (diameter 1 m, divided by 2 concentric circles into 16 equal sections on the floor of the arena) and observed for 3 min to measure locomotor activity (the number of sectors crossed with all paws (crossing)), exploratory behavior, i.e., the number of rears (posture sustained with hind-paws on the floor) and grooming (including washing or mouthing of forelimbs, hind-paws, body and genitals), and boluses (anxiety), counted manually/visually.17
Elevated plus-maze test
Immediately after the OF test the rats were placed singly onto the common central platform (10 cm x 10 cm) of an elevated plus-maze comprising two open and two closed arms (45 cm x 10 cm x 10 cm), elevated to a height of 80 cm above the floor. During a 3-min observation period, the following parameters were measured: the number of open arm entries and the number of closed arm entries, exploration (grooming and rearing) and risk assessment (the number of hangings over the open arms).18 At the end of each trial, the open field and elevated plus-maze were wiped clean with ethanol. Stereotypy ratings were scored as previously described.19
Microbiota
After decapitation, trunk blood was collected, each animal was opened aseptically, and samples of faeces from the lower part of the gut and washouts of the brain were immediately placed into an anaerobic chamber for bacteriological analysis. Samples were incubated in sucrose broth at 37 °C for 24 h (blood was diluted 1:5 v/v), then examined by microscopy, inoculated onto solid culture media, agar plates (Endo, sucrose, and blood agar), and incubated for 24 h. Blood samples were incubated for 5 days to facilitate the growth of microbes. Characteristics such as the morphology and color of the colonies, as well as hemolysis, plasma coagulation and aerobic fermentation of mannitol, were examined for identification of microorganisms.20
Histopathological analysis
Formalin-fixed brain region tissues were stained with hematoxylin and eosin (H&E) and examined for any histopathological changes. The pathological diagnosis of each brain specimen was assessed and analyzed by a specialized histopathologist in a blinded manner.
Composition of chemically modified natural minerals
The multielemental composition, chemical modification and grinding (to an about 50 μm powder) of zeolite, bentonite and diatomite were previously determined; similarly, the dose-dependent effect on growth promotion in cultures of specific strains of Lactobacilli and Bifidobacteria and the efficiency of their immobilization have been studied.10 Selected probiotics were cultured and immobilized using a composition of micronized modified natural minerals (MNM) (zeolite (80 %), diatomite (10 %) and bentonite (10 %)).
Brain cytoplasmic and mitochondrial fractions
Preparation of brain cytoplasmic and mitochondrial fractions was performed by differential centrifugation.22 Brains were rapidly removed from the skulls and placed on a cold plate, and the prefrontal cortex (PFC), striatum, hippocampus and hypothalamus were dissected and homogenized in ice-cold 20 mM HEPES buffer pH 7.4, containing 0.25 M sucrose, using a Potter homogenizer (1500 rpm for 3 min). Homogenates were centrifuged at 3000 rpm for 10 min to remove the nuclear fraction. Supernatants were collected and centrifuged at 15000 rpm for 20 min, and cytoplasm in the supernatants and mitochondria in the pellets were obtained. Mitochondria were washed twice using the abovementioned buffer.
Isolation of blood leukocytes
Freshly obtained blood was drawn into 3.8 % sodium citrate anticoagulant, then mixed with 6 % dextran (prepared in 0.9 % NaCl) and incubated at 37 °C for 60 min to remove erythrocytes from the blood by gravity sedimentation; the decanted layer was centrifuged at 1000 rpm for 5 min, and the pellet containing leukocytes was washed twice and used, whereas plasma was obtained from the supernatant by centrifugation at 6000 rpm for 20 min at 4 °C.23 Separation of leukocyte cytoplasmic and mitochondrial fractions was performed by differential centrifugation of the leukocyte homogenates.22 Leukocytes were resuspended and homogenized in ice-cold 20 mM HEPES buffer pH 7.4, containing 0.25 M sucrose (1:10, w/v), using a Potter homogenizer (1500 rpm for 3 min), then centrifuged at 1200 rpm for 10 min at 4 °C to remove nuclei and cell debris. The pellet was discarded and the supernatant was further centrifuged at 11000 rpm for 20 min at 4 °C to yield the crude mitochondrial preparation, which was washed twice, resuspended and homogenized in the buffer used. The cytoplasm was in the supernatant fraction.
Arginase assay
The samples were added to the reaction mixture containing 20 mM HEPES buffer (pH 7.4), 3.9 mM MnCl2·4H2O and 15.4 mM L-arginine·HCl and incubated at 37 °C for 60 min, followed by the addition of 10 % TCA to stop the reaction.24 Parallel control experiments were conducted in the presence of 20 mM L-valine, a nonselective inhibitor of the arginase isoforms. Following centrifugation (15000 rpm, 3 min), the protein-free supernatants were sampled and analyzed for L-ornithine content. Arginase activity is expressed as L-ornithine produced per hour per mg of total protein.
Measurement of L-ornithine
Samples were mixed with 4.5 % ninhydrin (1:1, v/v), heated (90 °C, 20 min), cooled to room temperature, and the absorbance was measured at 505 nm against a reagent blank containing all the reagents minus the sample.24
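A minimal sketch of the resulting activity calculation, assuming a linear L-ornithine standard curve at 505 nm; the slope, absorbances, incubation time and protein amount below are hypothetical placeholders, and the L-valine control is treated as the non-arginase background.

```python
# Sketch: arginase activity from A505 readings. The valine-inhibited control
# approximates non-arginase ornithine formation; the standard-curve slope
# (umol ornithine per absorbance unit in the assay volume) is a placeholder.
def arginase_activity(a505_sample, a505_valine_ctrl, slope_umol_per_abs,
                      incubation_h=1.0, protein_mg=0.5):
    """Return umol L-ornithine produced per hour per mg total protein."""
    ornithine_umol = (a505_sample - a505_valine_ctrl) * slope_umol_per_abs
    return ornithine_umol / incubation_h / protein_mg

print(arginase_activity(0.42, 0.06, slope_umol_per_abs=1.8))
```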
Nitric oxide synthase assay
Total NOS activity was assessed by measuring the stable intermediate of NO, nitrite (NO2-), accumulated during long-term incubation of samples (37 °C for 22 h) in 20 mM HEPES buffer pH 7.4 in the presence of the NOS substrate, 15.4 mM L-arginine·HCl, and cofactors including 0.2 mM NADPH, 6 µM FAD, 5.5 µM FMN, 20 µM (6R)-5,6,7,8-tetrahydro-L-biopterin dihydrochloride (BH4) and 1.7 mM CaCl2.25 Parallel control experiments were conducted in the presence of 15 mM N^G-monomethyl-L-arginine·HCl, a non-selective inhibitor of all the NOS isoforms. The reaction was initiated by addition of samples to the incubation medium and terminated by subsequent addition of 0.5 N NaOH and 10 % ZnSO4·7H2O. Following centrifugation (15000 rpm, 3 min), the protein-free supernatants were sampled and analyzed for nitrite content. NOS activity is expressed as nitrite produced in 22 h per mg of total protein.
Measurement of nitrite
Samples were deproteinized with 0.5 N NaOH and 10 % ZnSO4·7H2O. Following centrifugation (15000 rpm, 3 min), the protein-free supernatants were sampled and analyzed for nitrite using a colorimetric technique based on the diazotization reaction. Samples were mixed in equal parts with Griess-Ilosvay reagent (a 1:1 mixture of 0.17 % sulfanilic acid and 0.05 % α-naphthylamine in 12.5 % acetic acid) and measured at 546 nm against a reagent blank containing all the reagents minus the sample.26 Indices of oxidative stress referring to lipid peroxidation processes were established by measuring malondialdehyde (MDA) using thiobarbituric acid (TBA).27 Samples were deproteinized with 10 % TCA and the precipitates were removed by centrifugation at 15000 rpm for 3 min; the supernatants were mixed with 0.6 N HCl and 0.72 % TBA and heated for 15 min in a boiling water bath, which resulted in the formation of the pink-colored secondary product of MDA, and the absorbance was measured at 535 nm against a reagent blank containing all the reagents minus the sample.
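For the MDA readout, the A535 value can be converted to a concentration via the Beer-Lambert law; the molar absorptivity 1.56 x 10^5 M^-1 cm^-1 is the commonly used value for the MDA-TBA adduct, while the absorbance, path length, volume and protein amount in the sketch are placeholders.

```python
# Sketch: MDA content (nmol per mg protein) from the A535 of the TBA adduct.
def mda_nmol_per_mg(a535, blank=0.0, path_cm=1.0, protein_mg=0.5,
                    volume_ml=1.0, eps=1.56e5):
    conc_m = (a535 - blank) / (eps * path_cm)     # mol L^-1 of MDA-TBA adduct
    nmol = conc_m * 1e9 * volume_ml / 1000.0      # nmol in the assay volume
    return nmol / protein_mg

print(mda_nmol_per_mg(0.25))
```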
Protein was determined using crystalline bovine serum albumin as the standard.28
Statistical analysis
All data were analyzed using one-way analysis of variance (ANOVA) followed by the post hoc Holm-Sidak test (SigmaStat 3.5 for Windows). Data are expressed as the mean ± S.E.M. Differences were considered statistically significant at a probability level of P < 0.05.
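The same pipeline can be reproduced outside SigmaStat. The sketch below uses scipy and statsmodels on synthetic stand-in data, approximating the Holm-Sidak post hoc step with pairwise t-tests corrected by the Holm-Sidak method (SigmaStat's version works from the ANOVA pooled error term, so results can differ slightly).

```python
# Sketch: one-way ANOVA followed by Holm-Sidak-corrected pairwise comparisons.
import numpy as np
from itertools import combinations
from scipy import stats
from statsmodels.stats.multitest import multipletests

groups = {                                   # synthetic stand-in measurements
    "control": np.array([4.1, 4.4, 3.9, 4.2, 4.0, 4.3]),
    "AMPH": np.array([9.8, 10.5, 11.1, 9.9, 10.7, 10.2]),
    "AMPH+PMF": np.array([5.0, 5.4, 4.8, 5.6, 5.1, 5.3]),
}

f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.3g}")

pairs = list(combinations(groups, 2))
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm-sidak")
for (a, b), q, r in zip(pairs, adj_p, reject):
    print(f"{a} vs {b}: adjusted p = {q:.3g}, significant = {r}")
```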
Results and discussion
Elevation of C. albicans and its association with worse positive psychiatric symptoms in patients with BD and schizophrenia has been demonstrated.30 We showed that preventive treatment with a mixture of the specific probiotics has benefits in AMPH-induced BD.31 This probiotic mixture was composed of the psychobiotics L. rhamnosus and Bifidobacterium bifidum, and Lactobacilli with fungicidal activity, as well as E. coli М17, which plays a pivotal role in the modulation of microbiota and maintaining homeostasis.32,33 A challenge in the use of probiotics in vivo is that they must survive and sustain transit through the detrimental factors of the gut in large quantities to facilitate their colonization in the host and confer in vivo health benefits; immobilization of probiotics may protect them from the harmful gut factors and enable their transport and normal functioning in the gut.34,35 Natural minerals such as zeolites, diatomite and bentonite, with absorbent and ion-exchange properties and containing macro- and microelements, have been effectively used as carriers and promoters of bacterial growth.36 Natural minerals can also replenish the organism's need for minerals and are used as enterosorbents improving metabolism via absorption of toxins from the intestine, and even from blood due to diffusion through the intestine.37 In addition, natural minerals do not exert mutagenic effects; they are non-toxic, effective, versatile and economical, and therefore bentonite and diatomite are the E558 and E551 food additives approved in the EU as anti-caking agents.
Notably, the potential benefit of a micronutrient treatment (consisting mainly of vitamins and minerals) has been shown for various psychiatric symptoms, including bipolar II disorder with co-occurring attention-deficit/hyperactivity disorder.38 Based on this, we immobilized the above probiotics using the micronized chemically modified natural minerals composition (MNM) with domination of zeolite (see Experimental) and conducted a comparative study of the specific probiotic mixture in free (PMF) and MNM-immobilized (PMZ) forms versus VPA in reversal treatment of AMPH-induced BD.
Effect of treatment with probiotics vs. sodium valproate on histopathological changes in the regions of corticolimbic system
Our results show that reversal treatment with both probiotics and/or VPA prevented bacterial translocation and mainly normalized microbiota and rat behavior. However, in the gut of VPA-treated animals single colonies of S. aureus were found. This is in line with the finding that sodium valproate is selectively potent in vitro against C. albicans, while it exerts low activity against S. aureus.39 Restoration of balanced microflora via treatment with probiotics and VPA apparently contributed to amelioration and prevention of histopathological changes in the brain regions of the corticolimbic system, which were examined using H&E staining. Reversal treatment with both PMF and PMZ showed an effect on brain region morphology similar to that of preventive treatment with the same mixture of probiotics, i.e., in most of the regions only unremarkable changes from control were detected.31 However, after reversal treatment with PMF, proliferation and edema were observed in the hypothalamus, and following PMZ treatment multiple blood microvessels, presumably related to protective capillary creation, were seen in the PFC (Figure 1, A, B). As shown in Figure 1C-F, after reversal treatment with VPA, edema and compensatory full-blooded vessels were detected in the PFC, and interfibrillar edema, proliferation and cellular polymorphism were observed in the rest of the brain regions. The biochemical pattern associated with the effects of the probiotics and VPA was also studied.
Effect of treatment with free and immobilized probiotics vs. sodium valproate on the lipid peroxidation processes in brain and blood
Overproduction of reactive oxygen species accompanied by protein oxidation, lipid peroxidation and oxidative damage to DNA/RNA plays a crucial role in the pathophysiology of BD.40,41 Thiobarbituric acid reactive substances (TBARS) are formed as a byproduct of lipid peroxidation, and TBARS levels reflect the oxidative stress state, which increases both in the acute phase of BD (mania/hypomania and depression) and with BD progression stage.42 We used the TBARS assay to measure the level of MDA, which is formed via the decomposition of certain primary and secondary lipid peroxidation products and is a marker of oxidative stress. AMPH-induced BD was associated with an elevation of the MDA level of 2.7-, 2.2-, 2.0- and 4.5-fold in the PFC, striatum, hippocampus and hypothalamus, and by about 6 and 3 times in the leukocytes and plasma, respectively, compared to control. In reversal treatment, administration of all the preparations, PMF, PMZ and VPA, decreased the MDA content in brain and blood (Figure 2). PMF normalized the MDA content in the PFC and striatum. Both PMF and PMZ did not influence the MDA level in the hippocampus, but reduced it almost by half in the hypothalamus, in which it remained 2.4 times above the control. At the same time, PMF diminished the MDA content in the leukocytes by 1.6 times compared to control, whereas it was normalized by PMZ and VPA. Notably, VPA reduced the MDA content by 1.7, 1.8 and 1.5 times below the control in the PFC, striatum and hippocampus, respectively, and normalized it in the hypothalamus. The antidepressant mechanism of VPA is thought to be linked to an inhibition of oxidative damage via improvement of the serum MDA level and serum catalase and superoxide dismutase activities, and upregulation of tyrosine hydroxylase and tryptophan hydroxylase in the PFC of rats exposed to chronic unpredicted stress.43 Both PMF and PMZ drastically reduced the MDA level in plasma, by approximately 7 and 4.6 times compared to control, while after VPA treatment it was twice the norm. Thus, probiotics and VPA differentially suppress AMPH-induced lipid peroxidation, i.e., the system-wide oxidative stress response, thereby preventing oxidative damage in the brain regions responsible for cognitive function, emotion and mood, as well as in blood leukocytes and plasma. However, it should be considered that a drop of the MDA level below the norm could decrease the physiological level of oxidant challenge essential for governing life processes through redox signaling.44
Effect of treatment on the arginase activity in brain and blood
Recent research has identified inflammatory agents and reactive oxygen species as drivers of the pathologic elevation of arginase activity and expression.45 Arginase hydrolyzes L-arginine to urea and L-ornithine and exists in 2 isoforms, cytoplasmic (A1) and mitochondrial (A2).46 Immunolocalization studies have shown the presence of both A1 and A2 in brain, especially in hippocampal neurons.47 Differential expression of the arginase isoforms could provide a means to preferentially direct ornithine either to proline or excitatory amino acid glutamate
synthesis via ornithine aminotransferase in the cytoplasm, or to polyamine synthesis via ornithine decarboxylase in the mitochondria.48 A significant increase in the concentration of polyamines in some structures of the limbic system and reticular formation in autopsy specimens of the brain of patients with schizophrenia has been found.49 Previously, we have shown that activation of lipid peroxidation processes was accompanied by a region-specific stimulation of the arginase isoforms in the cytoplasm and mitochondria in the brain corticolimbic system regions and blood leucocytes following AMPH-induced BD.12,13 Here we observed that reversal treatment with VPA and with the free and immobilized probiotic mixture exerted a modulatory effect on the intracellular arginase activity in brain and blood following AMPH-induced BD. Twenty-four hours after discontinuation of treatment with probiotics and VPA and injection of dexamphetamine, A1 and A2 activities were mainly reduced in the brain regions studied, with the exception of the A1 activity in the hippocampus, which was resistant to any treatment used (Figure 3). Oxidized lipoproteins can upregulate A1 in mouse macrophages.50 Superoxide anion (O2•-) and hydrogen peroxide (H2O2) can also enhance mRNA content and A1 activity in rat alveolar macrophages.51 The decrease of arginase isoform activity is partially due to suppression of lipid peroxidation processes by the preparations. Of interest, both PMF and PMZ also did not decrease lipid peroxidation in the hippocampus following AMPH-induced BD. However, despite the fact that VPA decreased the level of MDA, the A1 activity was not reduced in the hippocampus of VPA-treated rats, indicating the existence of other factors that may affect the expression and activity of the enzyme. It should be noted that, on the one hand, A2 may contribute to oxidative stress via stimulation of mitochondrial reactive oxygen species (O2•- and H2O2) production and thereby promote macrophage inflammatory responses.52 On the other hand, A2 preferentially directs ornithine to putrescine, which suppresses lipid peroxidation and supports brain functions in adaptation to extreme environmental conditions.53 This complicates the picture studied. So, both PMZ and VPA equally reduce the MDA content in the leukocytes, but they differentially decrease the arginase isoform activities (Figure 3).
It should be noted that during BD the enhanced arginase activity and a subsequent decrease in L-arginine levels can activate a stress kinase pathway that impairs the function of T lymphocytes and can also inhibit the mitogen-activated protein kinase signaling pathway required for macrophage production of cytokines in response to bacterial endotoxin/lipopolysaccharide.54 VPA decreased the A1 and A2 activities by 1.9 and 4 times below control values in the PFC. PMZ caused a decrease in the activity of A1 and A2 of 2.4- and 3.2-fold below the norm in the leukocytes, respectively. Such suppression can affect the functions of arginase, which plays a role in protection against NH3 toxicity and in cell growth and repair. Thus, hyperammonemia is caused by valproate therapy or overdose, and L-arginine could potentially be used therapeutically to correct this phenomenon.55
Effect of treatment with probiotics and sodium valproate on the nitric oxide synthase activity in brain and blood
Arginase and nitric oxide synthase (NOS) share the common substrate L-arginine, and another likely mechanism which may also contribute to the arginase effects is an influence on nitric oxide (NO) production via quenching of L-arginine and limiting its supply, or via synthesis of urea, which inhibits the dimerization of inducible NOS (iNOS) monomers to the active form.56,57 Moreover, arginase-derived L-ornithine is converted to putrescine and then to the polyamines spermidine and spermine, which inhibit iNOS translation and NO overproduction.58 NO is a versatile messenger molecule with the characteristics of a neurotransmitter that may influence the levels of dopamine, noradrenaline, serotonin, acetylcholine and GABA.59 Moreover, the NOS/NO system appears to be involved in the pathophysiology of BD.41 We have previously demonstrated that total NOS activity was decreased in the cytoplasm and mitochondria in the regions of the corticolimbic system and blood leucocytes following AMPH-induced BD.12 Reversal treatment with VPA and with the free and immobilized probiotic mixture not only prevented an inhibition of NO production, but also stimulated the latter (Figure 5).
Total NOS activity increased approximately equally, overshooting the control values, in the cytoplasm and mitochondria of the striatum and hippocampus following treatment with PMF and PMZ. Pronounced differences in the influence of PMF and PMZ on total NOS are observed in the mitochondria of the PFC and in the cytoplasm of the hypothalamus, which leveled off a week after treatment (data not shown). A tendency toward normalization of NOS in the cellular compartments of the hippocampus and hypothalamus predominated following VPA treatment, whereas in the PFC and striatum total NOS activity was not significantly affected by VPA. This indicates that an increase in the activity of NOS is not necessarily related to the inhibition of the activity of arginase isoforms by the preparations studied. Moreover, a stimulation of NOS could contribute to inhibition of the arginase reaction, as the first intermediate of NO synthesis, N^G-hydroxy-L-arginine, is a well-known arginase inhibitor.60 The most pronounced drop, about threefold, in NOS activity, observed in the cytoplasm and mitochondria of blood leukocytes, was prevented following reversal treatment with probiotics and VPA in AMPH-induced BD (Figure 6). VPA modulated NOS activity in the cytoplasm and increased it in the mitochondria above the norm. PMF also caused a significant increase in intracellular NO production, whereas PMZ had almost no effect. Nevertheless, total NOS activity normalized in the cell compartments of leukocytes of both PMF- and PMZ-treated rats a week after treatment, in contrast to the self-recovery group (data not shown). It should be noted that the increased arginase activity following AMPH-induced BD could restrict the supply of L-arginine required for NO production; NOS then becomes uncoupled and uses molecular oxygen to form superoxide, which reacts rapidly with any available NO to form peroxynitrite, further decreasing NO and further uncoupling NOS by oxidizing the cofactor BH4.61,62 Of interest, negative correlations between NOS activity and free radical generation were revealed in the active rat cerebral cortex (animals selected using the emotional resonance test).63 The antioxidant effect of NO is a consequence of its direct reaction with alkoxyl and peroxyl radical intermediates during lipid peroxidation, thus terminating lipid radical chain propagation reactions.64
CONCLUSION
Taken together, the data presented in this report provide further support to the claim that the psychoactive and antifungal probiotic mixture, both in free and immobilized forms, may normalize gut microbiota and histopathological changes in the brain corticolimbic system, and may efficiently suppress oxidative stress and modulate the L-arginine metabolic pathways in a region-specific manner in the brain and in blood leukocytes following dexamphetamine-induced BD. Further study is needed to confirm whether the L-arginine intracellular alternative metabolic pathways represent new targets for developing methods to diagnose and treat BD, and whether PMF and PMZ are effective for BD, both as monotherapy and in combination with mood stabilizers.
Figure 1A. Effect of treatment with PMF on proliferation and edema in the hypothalamus.
Figure 1B. Effect of treatment with PMZ on multiple blood microvessels of the PFC.
Figure 1C. Effect of treatment with VPA on edema and full-blooded vessels of the PFC.
Figure 1D. Effect of treatment with VPA on the striatum, showing intensive interfibrillar edema.
Figure 1E. Effect of treatment with VPA on the hippocampus, showing proliferation, cellular polymorphism and the presence of large cells.
Figure 3. Effect of treatment with PMF, PMZ and VPA on the arginase activity in the cytoplasm and mitochondria of the brain corticolimbic system regions.
Figure 4. Effect of treatment with PMF, PMZ and VPA on the arginase activity in the cytoplasm and mitochondria of leukocytes.
Figure 5. Effect of treatment with PMF, PMZ and VPA on the total nitric oxide synthase activity in the cytoplasm and mitochondria of the brain corticolimbic system.
Figure 6. Effect of treatment with PMF, PMZ and VPA on the total nitric oxide synthase activity in the cytoplasm and mitochondria of leukocytes. | 6,307 | 2018-03-07T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Chemistry",
"Biology"
] |
Valley Hall edge solitons in a photonic graphene
We predict the existence and study properties of the valley Hall edge solitons in a composite photonic graphene with a domain wall between two honeycomb lattices with broken inversion symmetry. Inversion symmetry in our system is broken due to detuning introduced into constituent sublattices of the honeycomb structure. We show that nonlinear valley Hall edge states with sufficiently high amplitude bifurcating from the linear valley Hall edge state supported by the domain wall, can split into sets of bright spots due to development of the modulational instability, and that such an instability is a precursor for the formation of topological bright valley Hall edge solitons localized due to nonlinear self-action and travelling along the domain wall over large distances. Topological protection of the valley Hall edge solitons is demonstrated by modeling their passage through sharp corners of the $\Omega$-shaped domain wall.
Topological edge solitons were introduced as hybrid states that are affected by both the topological nature of the system and the nonlinear self-action. Their investigation was mostly limited to polaritonic systems with external magnetic field [44,46,62,63] and to waveguiding systems with longitudinal refractive index modulations [60,66,69] serving to break time-reversal symmetry of the system. At the same time, it is known that the appearance of topological edge states in valley Hall systems does not require time-reversal symmetry breaking and is associated instead with breakup of the inversion symmetry of the system [the word "valley" is associated here with specific features (presence of the local extrema) of bands of corresponding systems: for example, when inversion symmetry of the underlying honeycomb lattice is broken by detuning of two constituent sublattices, the gap opens between former Dirac points and local extrema in two upper bands develop that are called valleys]. The latter setting therefore can be realized without using external magnetic fields or longitudinal system modulations, that are always associated with losses. Even though valley Hall edge solitons were considered previously in sophisticated lattice geometries possessing type-II Dirac cones in the spectrum [70], the specific structure of the underlying lattice did not allow illustration of their topological protection. Such states, bifurcating from linear topological edge states at Bloch momenta yielding appropriate sign of the group velocity dispersion, and their topological protection so far were not considered at the domain walls between usual detuned honeycomb lattices (used in the majority of experiments on linear valley Hall edge states), which are much easier for experimental implementation.
In this paper, we report on valley Hall edge solitons forming at the domain wall in a conventional honeycomb waveguide array (a photonic graphene). We study properties of the linear and nonlinear edge states at such domain walls and present long-living topological edge solitons that demonstrate topological protection upon passage through sharp bends of the domain wall. Our results suggest an experimentally straightforward approach to the implementation of such states.
In Fig. 1, we display a photonic array of straight waveguides with a honeycomb structure, consisting of two sublattices A and B. The refractive index modulation depths ($p_A$ and $p_B$) in the two sublattices can be made slightly different (detuned), as shown by different colors in Fig. 1. This results in the breakup of the inversion symmetry of the array, disappearance of the Dirac cones in the spectrum, and opening of the gap between them. It should be stressed that even though a forbidden gap emerges in the band structure, the Berry curvature $\Omega$ of the first and second bulk bands satisfies the condition $\Omega(-\mathbf{k}) = -\Omega(\mathbf{k})$, which indicates that the Chern numbers of the two upper bulk bands remain zero [73]. In this case six valleys appear in the spectrum, with the three valleys around the K (K′) points being equivalent. The valley Chern number of a specific valley is determined to be either +1/2 or -1/2. Moreover, if the valley Chern number for a certain valley is +1/2 for the lattice with $p_A > p_B$, then for the lattice with $p_A < p_B$ the Chern number for the same valley will be equal to -1/2 [74]. Thus, if one designs a composite honeycomb lattice with a domain wall (highlighted by the red ellipse in Fig. 1) between two inversion-symmetry-broken honeycomb lattices with opposite detunings, the valley Chern numbers on the two sides of the interface become opposite. In this case, the bulk-edge correspondence principle, applied to the valley Hall system, predicts the formation of edge states localized on the domain wall and decaying in the direction perpendicular to it. The appearance of such edge states is a manifestation of the well-known valley Hall effect [75][76][77], and the corresponding edge states of topological origin are usually called valley Hall edge states [74], [78][79][80].
Band structure and linear valley Hall edge state
The propagation of the valley Hall edge state along the longitudinal axis of the waveguide array with focusing cubic nonlinearity can be described by the nonlinear Schrödinger equation
$$i\frac{\partial\psi}{\partial z} = -\frac{1}{2}\Big(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\Big)\psi - \mathcal{R}(x, y)\,\psi - |\psi|^2\psi, \tag{1}$$
where $\psi$ is the dimensionless field amplitude, $x$ and $y$ are the normalized transverse coordinates, $z$ is the normalized propagation distance, and the function $\mathcal{R}$ stands for the refractive index distribution in the honeycomb array, which is independent of the longitudinal coordinate $z$. The profiles of individual waveguides in the array can be described by Gaussian functions of width $\sigma$; $p_A$ and $p_B$ stand for the depths of waveguides in the two sublattices, and the waveguides are placed at the nodes of the honeycomb grid. We consider a configuration that is periodic along the $y$ axis and is limited along the $x$-axis, with outer boundaries located far away from the domain wall, so that $\mathcal{R}(x, y) = \mathcal{R}(x, y + Y)$ with $Y = 3^{1/2} d$ and $d$ being the array constant. As representative parameter values we choose $d = 1.4$ and $\sigma = 0.5$. The average refractive index modulation depth is set to $p_{in} = 10.3$, while the detuning is $\delta = 0.55$. For the honeycomb array on the left side of the domain wall in Fig. 1 we set $p_A = p_{in} + \delta$ and $p_B = p_{in} - \delta$, while for the array on the right side of the domain wall we assume inverted detuning, so that $p_A = p_{in} - \delta$ and $p_B = p_{in} + \delta$. The domain wall emerging between these two arrays that we consider here is characterized by a reduced refractive index for all of its sites; see the red ellipse in Fig. 1. The normalized parameters described above correspond to the following real physical values in waveguide arrays inscribed in fused silica with femtosecond laser pulses [3,8,10,11,81,82], if laser radiation at the wavelength of 800 nm is used and the characteristic transverse scale is set to 10 μm, corresponding to the dimensionless coordinates $x, y = 1$: the array constant is 14 μm, the waveguide width is 5 μm, and $p_{in} = 10.3$ corresponds to a refractive index modulation depth of ~1.1 × 10⁻³. A minimal construction of $\mathcal{R}(x, y)$ is sketched below.
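The sketch uses the symbol names assumed in the reconstruction above ($d$, $\sigma$, $p_{in}$, $\delta$); the patch size and grid are illustrative, and a full calculation would also add the reduced-index domain-wall column and the inverted detuning on one side.

```python
# Sketch: refractive-index landscape R(x, y) of one detuned honeycomb lattice,
# built from Gaussian waveguides of width sigma at the honeycomb nodes, with
# sublattice depths p_A = p_in + delta and p_B = p_in - delta.
import numpy as np

d, sigma, p_in, delta = 1.4, 0.5, 10.3, 0.55

def honeycomb_nodes(d, n=6):
    """Return (A-sites, B-sites) of a honeycomb patch with spacing d."""
    a1 = np.array([1.5, np.sqrt(3) / 2]) * d          # lattice vectors
    a2 = np.array([1.5, -np.sqrt(3) / 2]) * d
    A, B = [], []
    for i in range(-n, n):
        for j in range(-n, n):
            r = i * a1 + j * a2
            A.append(r)
            B.append(r + np.array([d, 0.0]))          # B-site basis offset
    return np.array(A), np.array(B)

x = np.linspace(-12, 12, 512)
X, Y = np.meshgrid(x, x)
R = np.zeros_like(X)
A, B = honeycomb_nodes(d)
for xs, ys in A:
    R += (p_in + delta) * np.exp(-((X - xs)**2 + (Y - ys)**2) / sigma**2)
for xs, ys in B:
    R += (p_in - delta) * np.exp(-((X - xs)**2 + (Y - ys)**2) / sigma**2)
```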
We obtained the bandgap structure of the composite array with a domain wall by substituting the solution ψ(x, y, z) = u(x, y)e^{i(bz + ky)} into the linear counterpart of Eq. (1). Here u(x, y) = u(x, y + Y) is the periodic Bloch wave function, k ∈ [−K/2, K/2) is the Bloch momentum in the first Brillouin zone with K = 2π/Y, and b is the propagation constant of the linear mode, which is a function of k. Using the plane-wave expansion method we obtained the bandgap structure shown in Fig. 2(a), in which the bulk states are indicated by the black lines and the in-gap valley Hall edge state is indicated by the red line. To better understand the properties of the edge state, we also display the first-order b′ = ∂b/∂k (solid line) and second-order b″ = ∂²b/∂k² (dashed line) derivatives of the propagation constant of the edge state in Fig. 2(b). The first-order derivative provides the group velocity v = −b′ with which the edge state moves along the domain wall, while the second-order derivative quantifies the dispersion of the edge state and allows one to estimate, in particular, the rate of expansion along the domain wall of a localized envelope superimposed on the edge state. As shown in Fig. 2(b), b″ is negative in the entire Brillouin zone, which is necessary to obtain bright solitons; if b″ is positive, one obtains dark solitons [83]. In Fig. 2(c), we display two examples of the linear valley Hall edge states corresponding to the red and green dots in Fig. 2(a). The localization of the state at k = −0.3K with propagation constant closer to the center of the gap (red dot) is much better than that of the state at k = −0.467K taken close to the gap edge (green dot). For both these values b′ > 0, which corresponds to motion in the negative y-direction during propagation. Notice that the same domain wall supports states propagating in the opposite direction, since the valley Hall system is time-reversal symmetric, and there must be a back-propagating state as the time-reversal conjugate of the forward-propagating one. In the valley Hall system, the counter-propagating states can hardly couple (such coupling is possible only under the action of strong localized defects that mix the two valleys, while smooth large-scale perturbations do not couple them), making such systems advantageous in comparison with topologically trivial waveguide arrays. In the following we consider states with Bloch momentum k = −0.3K, but point out that their properties remain similar for other values of k.
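For readers who prefer a finite-difference route to the same spectrum, the following sketch diagonalizes the Bloch Hamiltonian H(k) = (1/2)[∂²/∂x² + (∂/∂y + ik)²] + R(x, y) on a strip that is periodic in y (period Y) and wide in x. It is a minimal alternative to the plane-wave expansion used above; it reuses the helpers honeycomb_nodes and refractive_index assumed in the previous sketch, and the grid sizes and number of bands are illustrative.

```python
# Sketch, assuming the lattice helpers from the previous block.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

a = 1.4
Y = np.sqrt(3) * a
nodes = honeycomb_nodes(8, 4, a)

nx, ny = 240, 24
xs = np.linspace(-30.0, 30.0, nx); dx = xs[1] - xs[0]
ys = np.linspace(0.0, Y, ny, endpoint=False); dy = ys[1] - ys[0]
X, Yg = np.meshgrid(xs, ys, indexing='ij')
Rflat = refractive_index(X, Yg, nodes).ravel()

def d2(n, d, periodic):
    """Second-derivative matrix (3-point stencil)."""
    M = sp.diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)],
                 [-1, 0, 1], format='lil')
    if periodic:
        M[0, -1] = M[-1, 0] = 1.0
    return M.tocsr() / d**2

def d1(n, d):
    """Centred first derivative with periodic boundaries."""
    M = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [-1, 1], format='lil')
    M[0, -1], M[-1, 0] = -1.0, 1.0
    return M.tocsr() / (2 * d)

Dxx = sp.kron(d2(nx, dx, False), sp.identity(ny))
Dyy = sp.kron(sp.identity(nx), d2(ny, dy, True))
Dy = sp.kron(sp.identity(nx), d1(ny, dy))
I = sp.identity(nx * ny)

def bands(kb, nbands=40):
    """Largest propagation constants b solving H(kb) u = b u."""
    H = 0.5 * (Dxx + Dyy + 2j * kb * Dy - kb**2 * I) + sp.diags(Rflat)
    return np.sort(eigsh(H, k=nbands, which='LA',
                         return_eigenvectors=False))[::-1]

K = 2 * np.pi / Y
ks = np.linspace(-K / 2, K / 2, 41)
spec = np.array([bands(kb) for kb in ks])
# b'(k) of an edge branch, hence v = -b', follows from np.gradient(b, ks)
```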
Nonlinear valley Hall edge state and quasi-soliton
To obtain bright valley Hall edge solitons, we first calculate the nonlinear extension of the valley Hall edge states. To do this, we insert the ansatz ψ(x, y, z) = u(x, y)e^{i(bz + ky)}, where b now includes the nonlinear propagation constant shift, into the nonlinear Eq. (1); the resulting equation can be solved with the Newton method for a given propagation constant b lying in the interval b li ≤ b ≤ b ge. Here, b li ≈ 3.473 is the propagation constant of the linear valley Hall edge state at k = −0.3K [see the red dot in Fig. 2(a)], while b ge ≈ 3.832 is the propagation constant corresponding to the top edge of the gap for the same Bloch momentum k. The peak amplitude a (solid curve) and the power per one y-period of the structure, U = ∫_{−∞}^{+∞} dx ∫_0^Y dy |ψ|² (red curve), for the nonlinear valley Hall edge state family are shown in Fig. 3(a). They both increase monotonically with the nonlinear propagation constant shift and vanish exactly at the point where the nonlinear edge state family bifurcates from the linear one. Amplitude profiles |ψ| of two representative nonlinear edge states with b = 3.516 and b = 3.8, corresponding respectively to the black and red dots in Fig. 3(a), are shown in Fig. 3(b). Since the state indicated by the red dot is much closer to the top edge of the gap, its localization is worse than that of the state corresponding to the black dot. We choose the state corresponding to the black dot with b = 3.516 and investigate its modulational instability by adding to its initial profile a perturbation ∼ρ cos(νy), with amplitude ρ = 0.01 and frequency ν. Such small perturbations experience clear exponential growth at the initial stage of the instability development, as long as the modulation frequency ν lies within the modulational instability band. The dependence of the perturbation growth rate on the frequency ν can be easily obtained from direct simulations of propagation, as shown in Fig. 3(c). This dependence reveals that the modulational instability bandwidth is finite. Next we consider the propagation dynamics of the nonlinear edge states. We are mostly interested in edge states with not too small peak amplitudes (otherwise their behavior would be close to that of linear edge states), which exhibit relatively fast decay in the course of propagation due to the development of modulational instability. For example, one can choose the same nonlinear edge state whose modulational instability is studied in Fig. 3(c). To illustrate the development of modulational instability, one can introduce the periodic perturbation adopted in Fig. 3(c); alternatively, one can perturb the nonlinear valley Hall edge state with random 5%-amplitude noise, i.e., consider an input of the form u(1 + ρ), where ρ is the random perturbation, on a window spanning several y-periods, as shown in Fig. 4(a). The amplitude profiles shown at different propagation distances reveal the development of modulational instability, which results in the breakup of the wave into multiple bright spots, precursors of bright solitons, whose formation is possible in this system due to the focusing nonlinearity and the appropriate sign of the second-order dispersion b″. Notice that the instability development does not lead to dramatic radiation into the bulk, i.e., nearly all power remains in the vicinity of the domain wall. The peak amplitude of the nonlinear state during propagation is depicted in Fig. 4(b). The red dot on this dependence corresponds to z = 210 and to sufficiently pronounced edge state modulations, as follows from Fig. 4(a).
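A minimal split-step Fourier propagator is sufficient to reproduce this kind of modulational-instability experiment numerically. The sketch below assumes the stationary edge profile u0 (from the Newton method), the potential R on the same mesh (spanning several y-periods), and the grid data Yg, dx, dy from the previous sketches; the perturbation parameters are those quoted above, while the step sizes and the frequency ν are illustrative.

```python
# Sketch, assuming u0, R, Yg, dx, dy from the previous steps.
import numpy as np

def split_step(psi, R, dz, nz, dx, dy, nonlinear=True):
    """Propagate Eq. (1): linear step in Fourier space, potential and
    cubic nonlinearity applied in real space."""
    nx, ny = psi.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    KX, KY = np.meshgrid(kx, ky, indexing='ij')
    lin = np.exp(-0.5j * (KX**2 + KY**2) * dz)   # exp(-i k^2 dz / 2)
    for _ in range(nz):
        psi = np.fft.ifft2(np.fft.fft2(psi) * lin)
        phase = R + (np.abs(psi)**2 if nonlinear else 0.0)
        psi = psi * np.exp(1j * dz * phase)
    return psi

rho, nu = 0.01, 1.0                       # nu is an illustrative frequency
psi = u0 * (1.0 + rho * np.cos(nu * Yg))  # seeded edge state, as in the text
deviation = []
for step in range(200):
    psi = split_step(psi, R, dz=0.005, nz=20, dx=dx, dy=dy)
    deviation.append(np.abs(psi).max() - np.abs(u0).max())
# during the linear-growth stage, log(deviation) vs z gives the MI growth rate
```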
To confirm that the isolated bright spots emerging as a result of modulational instability development can indeed give rise to stable valley Hall edge quasi-solitons (here "quasi" means that such states still exhibit small radiative losses during propagation, even though these losses are so weak that they do not lead to a noticeable decrease of the peak amplitude even at z ∼ 10⁴, see below), we selected one of these spots, indicated by the green circle in Fig. 4(a), as an input state at z = 0 in Fig. 5(a) and propagated it up to z = 10⁴. The evolution of the peak amplitude a nlin (black curve) and of the integral center position of the emerged quasi-soliton during propagation is presented in Fig. 5(c). One can see that after a slight initial decrease, the peak amplitude a nlin of the so-constructed input exhibits only small oscillations and does not decrease with distance, clearly indicating that nonlinear self-action has compensated the diffraction broadening of this self-sustained state. The edge soliton moves along the y-axis in its negative direction with constant velocity and in our case traverses the y-window (where we used periodic boundary conditions) multiple times, without any signature of diffractive broadening. In Fig. 5(a), we also display the amplitude profiles of the quasi-soliton at different distances: they show that the profile of the quasi-soliton remains nearly unchanged. We would like to note that the input spot in Fig. 5(a) is not exactly the valley Hall edge soliton solution, and this is the reason for the slight initial decrease of the peak amplitude, which corresponds to the stage at which the wavepacket self-adjusts to the soliton shape. If the nonlinearity in Eq. (1) is switched off, the same input quickly and dramatically spreads in the linear medium, extending along the domain wall. This is illustrated in Fig. 5(b), where we show the output distribution at z = 500 after linear propagation, when it has substantially extended along the domain wall. In Fig. 5(c), we also show the corresponding peak amplitude a lin during linear propagation, but only within the region z ≤ 1000; further propagation would lead to self-interference of the state due to the limited size of the calculation window, which affects the peak amplitude of the state. Besides the method of generation of quasi-solitons adopted here, which utilizes modulational instability [44,46,70], one can also derive an envelope equation for such solitons directly from Eq. (1) using the methods developed for continuous topological systems in [66,69].
Topological protection of the valley Hall edge soliton
One of the most representative properties of topological edge states is their topological protection. While certain small-scale modulations of the domain wall may still cause backscattering in the valley Hall system, topological solitons in this system can circumvent sharp corners without backward reflection or radiation. Notice that in the previously reported example [70] of a nonlinear valley Hall system with type-II Dirac cones, the specific geometry of the interface did not allow this type of dynamics to be illustrated. In contrast, topological protection can easily be visualized in our system. We thus construct a domain wall with an Ω-like shape that possesses four sharp corners (each with a 60° angle), as shown by the blue channel in Fig. 6(a). Since the lattice unit cell used in this work has C 3v symmetry, the formation of such zigzag-type turns with an angle of 60° or 120° is allowed. We use the same input as in Fig. 5(a) and check its propagation dynamics along the Ω-shaped domain wall, see Fig. 6(b). The presented results clearly show that the soliton passes all sharp corners in the domain wall without experiencing reflection. Notice that one of the main advantages of our system is the considerable width of the gap; hence all obtained solitons have sufficiently large propagation velocities v = −b′, allowing them to pass through bends and corners over sufficiently small propagation distances [thus, z = 200 in Fig. 6(b) is of the order of the experimentally available sample length for laser-written waveguide arrays]. An animation corresponding to the propagation in Fig. 6 is provided in Visualisation 1, which visually shows the topological protection.
Conclusion
Summarizing, we have demonstrated valley Hall edge solitons in a composite honeycomb lattice with broken inversion symmetry. We have shown that a domain wall created in the composite honeycomb lattice supports edge states originating from the valley Hall effect. Their nonlinear counterparts, bifurcating from the linear valley Hall edge states, were obtained using the Newton method. We used modulational instability to demonstrate that such nonlinear edge states split into sets of solitons, each of which can show extremely long stable propagation along the domain wall. Finally, topological protection was illustrated by considering interactions of valley Hall edge solitons with Ω-shaped domain walls. Our work suggests an experimentally feasible approach to the generation of topological edge solitons that does not rely on longitudinal array modulations, which lead to enhanced losses. Our results may be generalized to other platforms where nontrivial topology can be combined with a nonlinear response of the system [55-57, 84, 85].
Disclosures. The authors declare no conflicts of interest.
Data Availability Statement. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
"Physics"
] |
Evaluation of polar stratospheric clouds in the global chemistry-climate model SOCOLv3.1 by comparison with CALIPSO spaceborne lidar measurements
Polar Stratospheric Clouds (PSCs) contribute to catalytic ozone destruction by providing surfaces for the conversion of inert chlorine species into active forms and by denitrification. The latter describes the removal of HNO 3 from the stratosphere by sedimenting PSC particles, which hinders chlorine deactivation by the formation of reservoir species. Therefore, an accurate representation of PSCs in chemistry-climate models (CCMs) is of great importance to correctly simulate polar ozone concentrations. Here, we evaluate PSCs as simulated by the CCM SOCOLv3.1 for the Antarctic winters 2006, 2007 and 2010 by comparison with backscatter measurements by CALIOP onboard the CALIPSO satellite. The year 2007 represents a typical Antarctic winter, while 2006 and 2010 are characterised by above- and below-average PSC occurrence. The model considers supercooled ternary solution (STS) droplets, nitric acid trihydrate (NAT) particles, water ice particles, and mixtures thereof. PSCs are parameterized in terms of temperature and the partial pressures of HNO 3 and H 2 O, assuming equilibrium between the gas and particulate phase. The PSC scheme involves a set of prescribed microphysical parameters, namely ice number density, NAT particle radius and maximum NAT number density. In this study, we test and optimize the parameter settings by several sensitivity simulations. The choice of the value for the ice number density affects the simulated optical properties and dehydration, while modifying the NAT parameters impacts stratospheric composition via HNO 3 uptake and denitrification. Depending on the NAT parameters, reasonable denitrification can be modeled. However, its impact on ozone loss is minor.
Introduction
Although the occurrence of clouds in the wintertime polar stratosphere has been observed for a long time, their importance for stratospheric ozone depletion was only recognized after the discovery of the Antarctic ozone hole in the mid-1980s (Farman et al., 1985). Stratospheric clouds composed of supercooled ternary solutions (STS, H 2 SO 4 -HNO 3 -H 2 O mixtures), crystalline nitric acid trihydrate (NAT) and water ice provide surfaces on which inert reservoir species like HCl and ClONO 2 are transformed into active forms (Solomon et al., 1986). The activated species then are responsible for springtime ozone depletion induced by catalytic cycles (Molina and Molina, 1987). While STS droplets are responsible for most of the chlorine activation, the simulation of PSCs in models is sensitive to any stratospheric temperature bias. WACCM-CCMI (Garcia et al., 2017), where the cold bias was reduced by introducing additional mechanical forcing of the circulation via parametrized gravity waves, compared best with observations.
In this study, we compare a simple equilibrium scheme of STS, NAT, ice and mixtures thereof with state-of-the-art PSC satellite data, aiming to optimize the scheme for economical and efficient use in a chemistry-climate model (CCM). To this end, we evaluate the representation of PSCs in the CCM SOCOLv3.1 for the Antarctic winter 2007. We convert the simulated PSCs into an optical signal to mimic a satellite measurement and compare the results with CALIPSO observations. We further evaluate the impacts of the simulated PSCs on the chemical composition of the stratosphere by comparison with satellite observations of HNO 3 , H 2 O and O 3 . A more detailed description of our methodology and the datasets utilized is given in Sect. 2. In Sect. 3 we present the results of the comparison, and Sect. 4 provides conclusions.

The state-of-the-art chemistry-climate model SOCOLv3.1 (Stenke et al., 2013; Revell et al., 2015) is based on the middle-atmosphere general circulation model (GCM) MA-ECHAM5 (European Centre/HAMburg climate model; Roeckner et al., 2006), coupled to the chemistry module MEZON (Model for Evaluation of oZONe trends; Egorova et al., 2003). MEZON contains 57 chemical species, 56 photolysis reactions, 184 gas-phase reactions and 16 heterogeneous reactions in and on aqueous sulfuric acid aerosols, as well as three types of PSCs, namely STS droplets, NAT and water ice. Heterogeneous hydrolysis of N 2 O 5 on tropospheric aerosols is also taken into account. The chemistry module MEZON covers stratospheric ozone chemistry in detail as well as the tropospheric background chemistry, including the oxidation of isoprene (Pöschl et al., 2000).
The coupling between the GCM and the chemistry module takes place through simulated winds and temperatures, as well as through the radiative forcing caused by ozone, methane, nitrous oxide, water vapor and CFCs. The dynamical time step is 15 min, whereas the radiation and chemistry schemes are called every 2 h.
The parametrization of PSCs in MEZON includes the three PSC types water ice, NAT and STS. STS droplets form upon the uptake of gas-phase HNO 3 and H 2 O by aqueous sulfuric acid aerosols (supercooled binary solutions, SBS), following the expression by Carslaw et al. (1995). In SOCOLv3.1, the binary aerosols are prescribed as a time series of observed monthly mean sulfate aerosol surface area density, mainly based on SAGE (Stratospheric Aerosol and Gas Experiment) observations (Stenke et al., 2013). NAT is formed if the HNO 3 partial pressure exceeds its saturation pressure (Hanson and Mauersberger, 1988). For NAT particles, a mean radius of 5 µm is assumed, and the maximum number density is set to 5·10⁻⁴ cm⁻³. This limitation accounts for the observational evidence that NAT clouds are often strongly supersaturated and prevents condensation of all available gas-phase HNO 3 onto NAT particles. The assumptions of n NAT,max = 5·10⁻⁴ cm⁻³ and r NAT = 5 µm allow for ∼10% of the HNO 3 at the beginning of winter to be taken up into NAT particles (0.77 ppbv at 50 hPa and 195 K, assuming 5 ppmv H 2 O). For water ice, a particle number density of 0.01 cm⁻³ is prescribed. This represents the background ice number density, but not ice formed in mountain waves, where very high nucleation rates result in much higher ice number densities of ∼5-10 cm⁻³ (Hu et al., 2002) and particle sizes of <3 µm (Höpfner et al., 2006). For water ice particles as well as for NAT particles, sedimentation is included. The fall velocities of NAT and ice particles are based on Stokes theory (described in Pruppacher and Klett, 1997).
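As a quick plausibility check of the quoted uptake limit, the volume-based HNO 3 capacity implied by n NAT,max and r NAT can be computed directly. The sketch below assumes a NAT mass density of 1.62 g cm⁻³ and a molar mass of 117 g mol⁻¹ (one HNO 3 per NAT unit); it yields ∼1.2 ppbv at 50 hPa and 195 K, the same order as the ∼0.77 ppbv quoted above (the exact value depends on the assumed NAT properties and on how the uptake limit is defined).

```python
# Back-of-the-envelope check with assumed NAT properties.
import numpy as np

r_nat = 5e-4        # NAT particle radius [cm] (5 micrometres)
n_nat_max = 5e-4    # maximum NAT number density [cm^-3]
rho_nat = 1.62      # assumed NAT mass density [g cm^-3]
M_nat = 117.0       # molar mass of HNO3*3H2O [g mol^-1], one HNO3 per NAT

p, T, Rgas = 5000.0, 195.0, 8.314            # 50 hPa, 195 K, J mol^-1 K^-1
v_particle = 4.0 / 3.0 * np.pi * r_nat**3    # particle volume [cm^3]
mol_hno3 = v_particle * n_nat_max * rho_nat / M_nat  # mol HNO3 per cm^3 air
mol_air = p / (Rgas * T) * 1e-6              # mol of air per cm^3, ideal gas
print(f"volume-limited HNO3 uptake: {mol_hno3 / mol_air * 1e9:.2f} ppbv")
# prints ~1.18 ppbv, the same order as the ~0.77 ppbv quoted in the text
```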
Advection of PSC particles is not explicitly calculated in SOCOL, but at the end of each chemical time step all condensed HNO 3 and H 2 O evaporates back to the gas phase. To prevent spurious PSC formation caused by potential model temperature, HNO 3 and/or H 2 O biases in regions where PSCs are usually not observed, and to avoid overlap with the regular cloud scheme of the GCM, the occurrence of PSCs is spatially restricted. Water ice particles are allowed to occur between 130 hPa and 11 hPa and polewards of 50° N/S. NAT particles are allowed between the tropopause and 11 hPa. STS and NAT particles may form at all latitudes.
For the present study SOCOLv3.1 was run with T42 horizontal resolution (about 2.8° × 2.8° in latitude and longitude) and 39 vertical levels between the surface and the model top centered at 0.01 hPa (∼80 km). In order to allow for a direct comparison with observations, the model was run in specified dynamics mode, i.e. the prognostic variables temperature, vorticity, divergence and the logarithm of the surface pressure are relaxed towards ERA-Interim reanalysis data (Dee et al., 2011). We applied a uniform nudging strength throughout the whole model domain, with a relaxation timescale of 24 h for temperature and the logarithm of the surface pressure, 48 h for divergence and 6 h for vorticity. The boundary conditions follow the specifications of the reference simulation REF-C1 of phase 1 of the Chemistry Climate Model Initiative (CCMI-1; Morgenstern et al., 2017). All simulations for this study were run between 01 May 2007 and 31 October 2007 with a 12-hourly output time step. We chose 2007 for our evaluation, as it represents an average winter in terms of PSC occurrence, while the data coverage for CALIPSO was rather high.
CALIPSO PSC observations
The simulated PSCs in SOCOL are compared to measurements from the CALIOP instrument onboard CALIPSO, an Earth observation satellite in the A-train constellation in operation since 2006 (Winker and Pelon, 2003; Winker et al., 2007, 2009).
The A-train of satellites orbits the Earth 14-15 times per day, covering the latitudes between 82° S and 82° N on each orbit.
CALIOP is a dual-wavelength lidar with three receiver channels, one measuring the 1064 nm backscatter intensity, the two others measuring the parallel and perpendicular polarized components (β ∥ and β ⊥ ) of the 532 nm backscattered signal. The frequency of the lidar pulse is 20.25 Hz, corresponding to one measurement every 333 m along the flight track. From the measured backscatter coefficients, the total (sum of particulate and molecular) to molecular backscatter ratio R 532 = β 532 /β m can be calculated, with β m being the molecular backscatter coefficient. β m is calculated as described in Hostetler et al. (2006) using molecular number density profiles provided by the MERRA-2 (Modern-Era Retrospective analysis for Research and Applications, version 2) reanalysis products (Gelaro et al., 2017). With the separation of the 532 nm backscatter signal into parallel and perpendicular polarized components, the depolarization ratio (δ aerosol , i.e. the perpendicular to parallel component) of the 532 nm signal can be derived, which is an indicator of the particle shape and hence phase (liquid/solid).
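The two derived quantities can be written down compactly; the sketch below is a minimal reading of the definitions above (the operational v2 retrieval involves additional molecular-subtraction and averaging steps not shown here).

```python
# Minimal sketch of the derived lidar quantities.
import numpy as np

def backscatter_ratio(beta_par, beta_perp, beta_m):
    """R_532: total (particulate + molecular) over molecular backscatter."""
    return (beta_par + beta_perp) / beta_m

def depol_ratio(beta_par, beta_perp):
    """delta_aerosol: perpendicular over parallel 532 nm component."""
    return beta_perp / beta_par
```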
In this study we use the Lidar Level 2 Polar Stratospheric Cloud Mask Product (available via Michael C. Pitts), which was derived with version 2 (v2) of the PSC detection algorithm (Pitts et al., 2018) from the CALIOP v4.10 Lidar Level 1B data products. This CALIOP PSC dataset contains profiles of PSCs with classification and optical properties, also providing temperature, pressure and tropopause height derived from MERRA-2 reanalyses. The spatial resolution of the PSC data is 5 km in the horizontal along the flight track and 180 m in the vertical. The classification distinguishes STS, NAT mixtures, enhanced NAT mixtures with high number densities and ice, as well as wave ice PSCs.
MLS observations
In this study, modeled HNO 3 , H 2 O and O 3 mixing ratios are compared to satellite measurements of the Microwave Limb Sounder (MLS) instrument.

[Figure caption: The colors indicate the number of PSC measurements in one bin. Dotted lines denote dynamical classification boundaries or thresholds and solid lines denote fixed classification boundaries.]
Model-measurement comparison
While CALIOP measures backscatter signals and depolarization ratios, the SOCOL model provides surface area densities (SAD) for STS, NAT and water ice as functions of pressure, latitude and longitude. From the simulated SADs and the assumed microphysical parameters, we calculate the number density and/or radius for each particle type. This information is used in Mie and T-matrix scattering codes (Mishchenko et al., 1996) to compute the optical parameters of the simulated PSCs, i.e. R 532 , δ aerosol and β ⊥ , for comparison with CALIOP observations. For NAT and ice particles, circular symmetric spheroids with an aspect ratio of 0.9 are assumed. Refractive indices of 1.31 for ice and 1.48 for NAT were chosen. The CALIOP PSC data product includes both detection threshold values, R 532,thresh and β ⊥,thresh , for each measurement. To achieve a better comparability between model and observations, these daily threshold values are also applied to the calculated optical properties of the PSCs simulated by SOCOL. For this purpose, we calculated the daily mean thresholds from all observations for each pressure level.
This procedure is essential for a fair comparison between model and satellite data, as the geographical PSC extent strongly depends on these detection limits.
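Schematically, this model-to-observation-space step reduces to recovering particle number densities from the simulated SADs, feeding them to a scattering code, and masking everything below the daily detection limits. The sketch below illustrates the first and last steps (the Mie/T-matrix scattering step is omitted); all function names are illustrative.

```python
# Sketch of the SAD-to-optics bookkeeping around the scattering codes.
import numpy as np

def number_density(sad, radius):
    """n = SAD / (4 pi r^2) for a monodisperse particle population."""
    return sad / (4.0 * np.pi * radius**2)

def apply_thresholds(r532, beta_perp, r532_thresh, beta_perp_thresh):
    """Keep only simulated pixels exceeding either daily detection limit,
    mirroring the per-level mean thresholds described above."""
    detected = (r532 > r532_thresh) | (beta_perp > beta_perp_thresh)
    return np.where(detected, r532, np.nan), detected
```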
To ensure the best possible comparability between model and measurements, observational uncertainties have to be applied to the calculated optical properties of the modeled PSCs. We followed the approach by Engel et al. (2013). The uncertainty scales inversely with the square root of the horizontal averaging distance along a flight path, which we set to 135 km. This value corresponds to the best case for detection, which maximizes the comparability with the model (which obviously does not have a detection threshold). An example of the added measurement noise is shown in Fig. 2. When looking at the individual PSC types (Fig. 2a), STS and NAT, due to their spherical shape and fixed radius, appear at constant δ aerosol values of 0 and 0.167, respectively. The variable radius of ice particles results in a variable δ aerosol value. Applying the uncertainties to the parallel and perpendicular backscatter coefficients primarily causes a large spread in the depolarization ratio (Fig. 2b). When considering all PSC particles to be mixed within a grid box (Fig. 2c), their points are located mainly at the lower and left side of the composite histogram.
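A minimal version of this noise model, with illustrative reference values for the noise magnitude (the text specifies only the scaling with averaging distance), might look as follows.

```python
# Sketch of an Engel et al. (2013)-style noise model; d_ref and
# sigma_ref are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

def add_measurement_noise(beta_par, beta_perp, d_avg=135.0,
                          d_ref=5.0, sigma_ref=1e-4):
    """Gaussian noise whose std scales as 1/sqrt(averaging distance)."""
    sigma = sigma_ref / np.sqrt(d_avg / d_ref)
    noisy_par = beta_par + rng.normal(0.0, sigma, np.shape(beta_par))
    noisy_perp = beta_perp + rng.normal(0.0, sigma, np.shape(beta_perp))
    return noisy_par, noisy_perp   # mainly spreads the depolarization ratio
```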
3 Results and discussion

3.1 Comparison along an orbit

As a first example we compare SOCOL with CALIPSO along a single flight track. Figure 3 shows a curtain of observed backscatter ratios R 532 along orbit 2 on 01 July 2007 (Fig. 3a) and the corresponding PSC compositions (Fig. 3g), together with the simulated counterparts for the model grid boxes overflown by CALIPSO. Figures 3c and 3f show the same, but before detection thresholds and instrument uncertainty had been added. The model output also reveals a large PSC over the Antarctic Peninsula. However, the spatial extent of the simulated PSC is larger. The simulated backscatter ratio R 532 peaks around 6, which is substantially lower than observed. Due to the coarse resolution and smoothed orography, SOCOL is not able to capture the high ice particle number densities associated with mountain wave events. Applying the CALIPSO classification scheme to the model output results in a layer of ice PSCs located around ∼20 km, which is slightly higher than in the observations. The ice cloud is surrounded by NAT mixtures, while the observations indicate STS. Below those NAT mixtures, pure STS clouds occur in the model, most of which are tenuous enough that they fully disappear after applying the optical thresholds (Fig. 3e).
The actual modeled composition (see Appendix, Fig. A1) shows a similar pattern to the CALIPSO classification scheme, but with more ice Mix and less STS. These differences can also be seen in Fig. 2c, where most of the ice mixtures (blue) are located.

3.2 Spatial distribution

The modeled month-to-month variability of R 532 values and areal extent agrees well with CALIPSO observations. In July, the center of the PSC area is tilted towards East Antarctica, and slightly towards the Peninsula in August. However, peak values of R 532 are clearly lower for SOCOL. In comparison to the observations, the spatial distribution of SOCOL PSCs is more homogeneous. As mentioned above, this results mainly from the poor representation of mountain waves in the model. The observed PSC area is calculated from the daily fraction of PSC measurements within ten equal-sized latitude bands, while the modeled PSC area is determined for every grid box based on the PSC occurrence (above the detection thresholds) at the two output time steps per day. In the model, PSCs persist until the end of October, which is longer than observed. Moreover, SOCOL simulates a substantially larger PSC area, in particular between 13 and 23 km altitude, where 1.5·10⁷ km² are almost continuously exceeded.
It is most likely that the different methods for calculating PSC areal coverage contribute to this overestimation. For each output time step, we considered the entire grid box to be covered by PSCs as soon as PSCs (above the detection thresholds) occur in the model. A cold temperature bias in the model further contributes to the larger PSC area.
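The methodological difference can be made concrete with a short sketch contrasting the two areal estimates: for CALIOP, the PSC area is the detection fraction per latitude band times the band area, while for the model any grid box with a detected PSC contributes its full box area, which inflates the model estimate. Array names and band edges below are illustrative.

```python
# Sketch contrasting the two areal-coverage estimates discussed above.
import numpy as np

R_E = 6371.0  # Earth radius [km]

def band_area(lat0, lat1):
    """Surface area of a latitude band [km^2]."""
    return 2.0 * np.pi * R_E**2 * abs(np.sin(np.radians(lat1)) -
                                      np.sin(np.radians(lat0)))

def obs_psc_area(psc_flags, lats, band_edges):
    """CALIOP-style estimate: detection fraction per band times band area."""
    total = 0.0
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sel = (lats >= lo) & (lats < hi)
        if sel.any():
            total += psc_flags[sel].mean() * band_area(lo, hi)
    return total

def model_psc_area(psc_mask, box_areas):
    """Model-style estimate: any detected PSC flags the whole grid box."""
    return box_areas[psc_mask].sum()
```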
The modeled PSC area calculated without the optical thresholds applied (Fig. 5c) is significantly larger, especially below 13 km altitude, where large areas with STS clouds occur in the model (see also Fig. 3f). Those large-scale STS clouds are very tenuous, since they are filtered out by the conservative PSC detection threshold, and hence do not play an important role in ozone chemistry. However, this highlights the crucial role of the detection thresholds when comparing PSC areas. Due to this sensitivity to the applied methods, quantitative comparisons of the areal coverage must be interpreted with caution.
Table 1. Overview of the SOCOL simulations and the microphysical parameter settings.
Sensitivity to microphysical parameters
As described in Sect. 2.1, SOCOL's PSC scheme includes some prescribed microphysical parameters, such as the ice particle number density, n ice , or the NAT radius, r NAT . These values had once been chosen based on what was known about PSCs at the time. However, the current parameter setting might not be optimal. For example, the rather low value for n ice of 0.01 cm⁻³ prevents the formation of ice PSCs with high number densities as observed in mountain wave events. To investigate the sensitivity of the simulated PSCs to the microphysical parameters in the PSC scheme, we performed additional simulations for the Antarctic winter 2007 with increased n ice and/or increased n NAT,max (Table 1). Figure 6 shows the composite histograms for the various SOCOL simulations. There are considerable differences to the observations (Fig. 1), but also between the simulations. PSCs in the REF simulation show a strong relative maximum located in the STS domain, with 1/R 532 values between 0.4 and 0.2 (Fig. 6a). Only very few PSCs are classified as ice, i.e. the relative maximum towards the upper right, as observed by CALIPSO, is missing. That the PSC mixtures in the simulations are located more at the lower and left side of the histogram can also be seen in Fig. 2c. There are several reasons for this difference. First, SOCOL does not resolve mountain waves, due to the coarse model resolution and orography. Furthermore, the modeled PSCs are representative of a large grid box (2.8° × 2.8° horizontally and approximately 2 km vertically), while the observations resolve much smaller scale structures (starting from 5 km horizontally along a track and 180 m vertically). Finally, the fixed ice number density of 0.01 cm⁻³ does not allow for large ice particle cross sections, even if mountain waves were resolved. Based on these findings we performed one sensitivity simulation with a tenfold ice number density, S n(ice) . As shown in Fig. 6b, the tenfold increase in n ice results in a strong maximum to the upper right, mainly within the enhanced NAT mixture domain. The higher number density of ice particles increases the cross section of ice, leading to enhanced backscatter in ice-containing grid cells.
Due to its solid state, backscatter from ice has δ aerosol > 0. This results in a shift towards higher R 532 and higher δ aerosol values in the histogram. Overall, modifying n ice leads to a better agreement with CALIPSO.
While ice PSCs are less important for stratospheric ozone chemistry, NAT formation and the subsequent denitrification of the stratosphere play a crucial role. NAT formation in SOCOL depends on two parameters, n NAT,max and r NAT . To test the model's sensitivity to those parameters, we ran further simulations with the upper boundary for NAT number densities increased by a factor of four, as well as simulations with an increased NAT radius; the latter is not presented here.
The simulation with four times higher n NAT,max (Fig. 6c) shows a maximum shifted towards lower R 532 values compared to the REF simulation, located around the optical thresholds at the lower left corner. As long as temperatures are below T NAT and enough HNO 3 is available for NAT formation, an increase in n NAT,max or r NAT results in more HNO 3 uptake by NAT particles. This reduces the gas-phase HNO 3 available for STS growth. Also, more HNO 3 is removed through sedimentation of the solid NAT particles. With larger r NAT this removal occurs even faster, due to the higher sedimentation velocity.
The resulting reduction in the surface area density of STS leads to less backscatter and hence a shift towards lower R 532 values in the composite histogram, which worsens the agreement with observations.
In a final simulation (S n(ice),n(NAT,max) , Fig. 6d) we set n ice to 0.05 cm⁻³ and n NAT,max to 10⁻³ cm⁻³. This simulation shows a superposition of the two effects described above, resulting in two distinct relative maxima in the composite histogram.
One maximum is located to the upper right, similar to S n(ice) . The second maximum, at low R 532 and low δ aerosol values, is similar to the pattern in S n(NAT,max) . The shift towards lower R 532 values is again a result of less STS formation due to the reduced availability of HNO 3 . Although the composition histograms of all sensitivity simulations differ substantially from the observations, we find the best agreement for the simulation S n(ice),n(NAT,max) .
To investigate the impact of the applied modifications on the simulated chemical composition of the polar stratosphere (60-90° S), we compare the simulated evolution of HNO 3 with MLS observations. Prior to the decline, an increase in HNO 3 is observed at 68 hPa. It results from the evaporation of sedimenting NAT particles formed at higher altitudes (renitrification) and is an indication of denitrification of the upper levels. During July/August the absolute HNO 3 values from the reference run agree well with the observations. However, in late winter SOCOL again underestimates MLS. All simulations show a decline due to HNO 3 uptake into NAT particles and STS droplets. However, S REF
(black) and S n(ice) (cyan) show a weaker and delayed HNO 3 decline, with a plateau in July/August. In S n(NAT,max) (green) the decline at both levels is considerably stronger than in S REF as well as in MLS. This is due to the enhanced uptake of HNO 3 into NAT particles and the subsequent removal by sedimentation. As a consequence, the renitrification at lower levels is also clearly enhanced. Both indicate a more efficient denitrification than in S REF .
The simulation S n(ice),n(NAT,max) (red), in which n NAT,max is twice as large as in S REF but only half of that in S n(NAT,max) , falls in between the other simulations. The denitrification starts about half a month later than in S n(NAT,max) . The HNO 3 uptake is reduced and consequently HNO 3 stays longer in the gas phase. However, in August HNO 3 concentrations reach about the same level as in S n(NAT,max) . Simulations with enhanced r NAT have similar effects (not shown). Increasing the parameter n ice affects the modeled stratospheric composition only slightly, by reducing dehydration, but the increased SAD of ice leads to slightly lower O 3 in S n(ice) compared to S REF . Increasing the upper NAT boundary reduces the overall PSC SAD by reducing the abundance of gas-phase HNO 3 . However, due to enhanced denitrification, S n(NAT,max) and S n(ice),n(NAT,max) show even slightly lower O 3 concentrations.
Conclusions

We have presented an evaluation of PSCs as simulated by the CCM SOCOLv3.1 in specified dynamics mode for the Antarctic winter 2007. SOCOL considers STS droplets as well as water ice and NAT particles. PSCs are parametrized in terms of temperature and the partial pressures of HNO 3 and H 2 O, assuming equilibrium between the gas and particulate phases.

Overall, the spatial agreement with CALIOP observations is good and the observed month-to-month variability is reproduced. However, due to the coarse model resolution and mean orography, but also the fixed ice number density, mountain wave events and the associated wave ice clouds with high backscatter ratios over the Antarctic Peninsula are not resolved in SOCOL. The temporal and spatial evolution of PSCs inside the polar vortex, as expressed by the areal coverage, indicates an overestimation of PSCs in SOCOL. This is partly explained by a cold temperature bias, but also by the coarse model resolution: even a small amount of PSCs within a grid cell adds a large contribution to the areal coverage. This is reflected by the sensitivity of this quantity to the applied detection thresholds.
Furthermore, we have tested the assumptions about the maximum NAT number density, the NAT radius and the ice number density in various sensitivity simulations. The parameter n ice primarily determines the optical signal through its impact on the particle cross section, and also the dehydration, since the settling velocity changes with particle radius. While increasing n ice from 0.01 cm⁻³ to 0.1 cm⁻³ improves the agreement of the optical signal with CALIOP, the simulated dehydration is more realistic for smaller n ice and therefore larger ice particles.
The upper boundary for NAT number densities determines the HNO 3 uptake and subsequently the magnitude of STS formation, which is crucial for halogen activation. We have shown that for an increased maximum NAT number density the temporal agreement of de- and renitrification with MLS measurements is improved. However, SOCOL in general clearly underestimates the observed HNO 3 in the polar stratosphere, which makes a solid conclusion about the best set of microphysical parameters difficult. Further work would be required to extend our findings to simulated PSCs in the Arctic or to other years. Nevertheless, this study demonstrates that even a simplified PSC scheme based on equilibrium assumptions may achieve good approximations of the fundamental properties of polar stratospheric clouds needed in chemistry-climate models.
Code and data availability. Since the full SOCOLv3.1 code is based on ECHAM5, users must first sign the ECHAM5 license agreement before accessing the SOCOLv3.1 code (http://www.mpimet.mpg.de/en/science/models/license/, last access: 2020). The SOCOLv3.1 code is then freely available upon request from Andrea Stenke (andrea.stenke@env.ethz.ch).
"Environmental Science",
"Chemistry"
] |
An intrinsic causality principle in histories-based quantum theory: a proposal
Relativistic causality (RC) is the principle that no cause can act outside its future light cone, but any attempt to formulate this principle more precisely will depend on the foundational framework that one adopts for quantum theory. Adopting a histories-based (or ‘path integral’) framework, we relate RC to a condition we term ‘Persistence of Zero’ (PoZ), according to which an event E of measure zero remains forbidden if one forms its conjunction with any other event associated to a spacetime region that is later than or spacelike to that of E. We also relate PoZ to the Bell inequalities by showing that, in combination with a second, more technical condition it leads to the quantal counterpart of Fine’s patching theorem in much the same way as Bell’s condition of local causality leads to Fine’s original theorem. We then argue that RC per se has very little to say on the matter of which correlations can occur in nature and which cannot. From the point of view we arrive at, histories-based quantum theories are nonlocal in spacetime, and fully in compliance with RC.
Introduction
Causal relationships are important in Quantum Field Theory (QFT), in classical General Relativity (GR), and for Quantum Foundations. Causality appeals to us as a basic scientific category, and yet cause and effect lack a clear definition in quantum physics. There is no consensus on whether EPR-like correlations indicate that quantum physics is nonlocal, or not relativistically causal, or both.
The meaning of cause is elusive, even classically. In quantum mechanics and QFT it is even harder to give it meaning. Causality concerns events in spacetime, whereas the field operators in terms of which the causality conditions of relativistic QFT are usually formulated are not events. The only true spacetime event in an operator formulation of a quantum theory is a "measurement" with its ensuing "collapse of the state-vector", but measurement and collapse rely on external observers in spacetime. Measurement and collapse are needed in the canonical operator formulation, but these concepts are not intrinsic to the quantum system. In terms of the histories of a system (trajectories in quantum mechanics, field configurations in QFT), though, one can meaningfully speak of spacetime events without reference to anything external to the quantum system. It is thus worthwhile to explore questions of causality and locality in the context of a histories-based formulation of quantum dynamics.
Quantum Measure Theory (QMT) is such a framework, and within it one has the concept of an event in spacetime as a set of histories [1][2][3][4][5][6]. J.B. Hartle's Generalised Quantum Mechanics (GQM) [7][8][9][10] is a closely related histories-based framework for quantum foundations. The two frameworks GQM and QMT are based on the same concepts of history, of event (called a "coarse-grained history" in the GQM literature), and of decoherence functional, though they diverge in their attitudes to decoherence and probabilities, and in their interpretational schemes. Since the technical results in this article do not depend on an interpretational scheme, they are equally applicable to QMT and GQM. This article will explore causality and locality in Quantum Measure Theory, and in connection with EPR-type correlations. We will consider only particle trajectories and field configurations on spacetime, with no appeal to external observers or agents. Our aim is to provide a dynamical axiom that has some claim to be regarded as a principle of relativistic causality. Thus, relativistic causality will be thought of as restricting a general class of conceivable dynamics to a subclass that deserves to be described as causal. We will assume, however, that spacetime is provided with a fixed, or "background", relativistic causal structure; we will not attack the important questions that arise when causal structure graduates from background to dynamical, as it necessarily does whenever gravity is involved.
We begin in Section 2 by reviewing the basics of the histories-based, path integral inspired framework of Quantum Measure Theory, including the concept of the event Hilbert space of a system. We then specialise to the case of quantum theory in a relativistic spacetime background in Section 3 and introduce the restriction of "Persistence of Zero" on the dynamics of such a quantum system. Persistence of Zero (PoZ) is, as advertised in the title of this article, an intrinsic condition for a quantum theory, not reliant on concepts such as measurement by an external agent. We motivate PoZ by showing that it implies, for each event E, that there exists an event-operator Ê on the event Hilbert space of the past (to be defined), such that Ê acting on the universal vector (also to be defined) produces the vector in the event Hilbert space that corresponds to the event E. PoZ implies that if an event E has measure zero then the event 'E and F' also has measure zero whenever F is an event that is nowhere to the past of E. We also show that the event-operators for spacelike events commute.
In Section 4 we further support the proposal that PoZ should be considered a causality condition. Specifically, we show that, in the case that the event Hilbert space of the whole system equals the event Hilbert space of the past (a condition we call "Lack of Novelty"), PoZ plays the crucial role in the quantal "patching theorem", analogous to that played by the familiar factorizability condition in A. Fine's original classical version of the theorem [11].
We follow this with two sections of more informal discussion. In Section 5 we argue that relativistic causality, taken in isolation, is a rather weak condition, which in particular imposes no limitation on the spacelike correlations that a localized cause can induce among events in its future. We illustrate this by reframing two well-known examples, the Popescu-Rohrlich box and the Greenberger-Horne-Zeilinger experiment, in the language of events, showing that there is nothing about the correlations in either of these cases that warrants the conclusion that relativistic causality is violated. The only residual implication of relativistic causality, then, is that a localized cause should not induce correlations between events that fail to be in its future, and this seems to be what the PoZ condition aims to ensure. However, as we discuss in Section 6, PoZ goes further by incorporating a limited amount of spacetime locality that one might term 'causal severability', this being one way to think of what the quantal patching theorem expresses.
Finally, Appendix A discusses a quantal analog of classical factorizability that holds in relativistic QFT, and which could conceivably take over the role of Lack of Novelty in an alternative proof of the quantum patching theorem.
2 Quantum measure theory

QMT and GQM are path-integral inspired frameworks in which a quantum system is characterized by a triple, (Ω, A, D), consisting of a set of histories Ω, an event algebra A (a subalgebra of the power set of Ω), and a decoherence functional D(•, •) on A × A. In this section we review the basic concepts of history, event and decoherence functional, and refer the reader to [1][2][3][4][5][6] for more details on QMT, to [7][8][9][10] for more details on GQM, and to [12][13][14][15][16] for more details on the event Hilbert space and the vector-measure.
Event Algebra
The kinematics of a quantum system in QMT is specified by the set Ω of histories, and one may have in mind that this is the set of histories over which the path integral is performed. Each history in Ω is as complete a description of the physical system as is in principle possible in the theory. For example, in n-particle quantum mechanics, a history is a set of n trajectories in spacetime, and in a scalar field theory, a history is a real or complex function on spacetime.

Any physical statement about (or property of) the system is a statement about (or property of) the history of the system, and the statement/property therefore corresponds to a subset of Ω in the obvious way. For example, in the case of the non-relativistic particle, if R is a region of spacetime, the statement/property "the particle passes through R" corresponds to the set of all trajectories that pass through R. We adopt the terminology of stochastic processes, in which such subsets of Ω are referred to as events.
An event algebra on a sample space Ω is a non-empty collection, A, of subsets of Ω such that E ∪ F ∈ A and Ω \ E ∈ A whenever E, F ∈ A. It follows from the definition that ∅ ∈ A, Ω ∈ A and A is closed under finite intersections. An event algebra is an algebra of sets by a standard definition, and a Boolean algebra. For events qua statements about the system, set operations correspond to logical combinations of statements in the usual way: union is "inclusive or", intersection is "and", complementation is "not", etc.
An event algebra A is also an algebra in the sense of a vector space over a set of scalars, Z 2 , with intersection as multiplication and symmetric difference (or "boolean sum") as addition:

EF := E ∩ F,   E + F := (E ∪ F) \ (E ∩ F).

In this algebra, the unit element, 1 ∈ A, is the whole set of histories, 1 := Ω. The zero element, 0 ∈ A, is the empty set, 0 := ∅. Note that E + F = E ∪ F if and only if E and F are disjoint, i.e. if and only if EF = 0. We will use this arithmetic way of expressing set algebraic formulae whenever convenient, both for events and also for regions of spacetime. If an event algebra A is also closed under countable unions then A is a σ-algebra, but we do not impose this extra condition on the event algebra. (See [14,15,17] for work on extending the domain of the quantum measure from A to a larger subset of the σ-algebra generated by A.)
Decoherence functional, quantum measure, vector-measure
A decoherence functional on an event algebra A is a map D : A × A → C that encodes both the initial conditions and the dynamics of the quantum system and satisfies the standard conditions of hermiticity, D(E, F) = D(F, E)*; bi-additivity, D(E ∪ F, G) = D(E, G) + D(F, G) for disjoint E and F; normalisation, D(Ω, Ω) = 1; and positivity, D(E, E) ≥ 0. See section 6 of [10] for these axioms in the context of GQM. A quantum measure on an event algebra A is a non-negative map µ : A → R that satisfies the quantum sum rule, µ(E ∪ F ∪ G) = µ(E ∪ F) + µ(E ∪ G) + µ(F ∪ G) − µ(E) − µ(F) − µ(G) for mutually disjoint E, F and G. If D is a decoherence functional, then µ(E) := D(E, E) is a quantum measure. And, conversely, if µ is a quantum measure on A then there exists a (non-unique) decoherence functional D such that µ(E) = D(E, E) [2].
The question of which is the primitive concept, quantum measure or decoherence functional, remains open. However, in this article we take the decoherence functional to be the primitive concept, because we will assume all quantum systems satisfy the further axiom of strong positivity, which (at our current level of understanding) can only be directly imposed on the decoherence functional: D is strongly positive if, for every finite collection of events E 1 , ..., E n ∈ A, the matrix M ij := D(E i , E j ) is positive semidefinite. Strong positivity of the decoherence functional holds in established unitary quantum theories due to the form of the decoherence functional as a Double Path Integral (DPI) of Schwinger-Keldysh form, see for example equation (47) in Appendix A. More generally, it has been shown that the set of strongly positive quantum-measure systems is the unique set that is closed under tensor-product composition and is "full" in the sense that if a system can be composed with every element of the set, then that system is in the set [18].
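For a finite collection of events, strong positivity is an ordinary positive-semidefiniteness condition that can be checked numerically; the same Gram matrix then yields concrete event-vectors with ⟨E i |E j ⟩ = D(E i , E j ), anticipating the construction of the next subsection. The sketch below uses an arbitrary 2×2 matrix as an illustrative example.

```python
# Numerical sketch for finite systems; the matrix D is illustrative.
import numpy as np

def is_strongly_positive(D, tol=1e-12):
    D = np.asarray(D, dtype=complex)
    if not np.allclose(D, D.conj().T):        # Hermiticity
        return False
    return np.linalg.eigvalsh(D).min() >= -tol

def event_vectors(D):
    """Columns v_i with v_i^dagger v_j = D_ij (Hermitian square root)."""
    w, U = np.linalg.eigh(np.asarray(D, dtype=complex))
    w = np.clip(w, 0.0, None)
    return (U * np.sqrt(w)) @ U.conj().T

D = np.array([[0.5, 0.25 + 0.1j],
              [0.25 - 0.1j, 0.5]])
assert is_strongly_positive(D)
V = event_vectors(D)                          # |E_i> = V[:, i]
assert np.allclose(V.conj().T @ V, D)
```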
Event Hilbert space and vector-measure
Henceforth in this article all quantum systems are assumed to have strongly positive decoherence functionals. Then, for each quantum system (Ω, A, D) a Hilbert space, H, can be constructed as the completion of a quotient of the free vector space over the event algebra [12,13], such that for each event, E ∈ A, there is an event-vector 1 |E⟩ ∈ H such that:

• ⟨E|F⟩ = D(E, F) for all E, F ∈ A.

• |E ∪ F⟩ = |E⟩ + |F⟩ whenever E and F are disjoint, i.e. the vector-measure E ↦ |E⟩ is additive.

• H is spanned by {|E⟩} E∈A . This spanning set is (very) over-complete: there is not a unique expansion of a vector in H as a linear combination of event-vectors (see, for example, the previous point).
• An event has quantum measure zero if and only if its vector-measure is zero.
• If B is a subalgebra of A then the event Hilbert space of B is a subspace of the event Hilbert space of A.
• The vector-measure |Ω⟩ of the whole history space Ω is a unit vector in H, and we call it the universal vector. If B is a (unital) subalgebra of A then the universal vector is in the event Hilbert space of B. The minimal subalgebra of A is {Ω, 0}, and its event Hilbert space is the 1-d vector space spanned by |Ω⟩.
• In a unitary quantum theory in which the decoherence functional is defined using the Schwinger-Keldysh double path integral with a pure initial state, the event Hilbert space coincides with the canonical Hilbert space. 2 In this case, the universal vector |Ω⟩ in the event Hilbert space is identified with the pure initial state in the canonical Hilbert space. If the decoherence functional is defined using a mixed initial state or "density matrix" ρ of rank r, then the event Hilbert space is a direct sum of r copies of the canonical Hilbert space. The universal vector |Ω⟩ is then the direct sum of the eigenvectors of ρ, each normalised so that its norm squared is its eigenvalue [13].
• If X is a family of events that constitute a partition of Ω 3 then |Ω⟩ = Σ E∈X |E⟩.

3 Spacetime as an organising principle

Our concern is relativistic quantum physics and we assume that the histories of the system, the elements of Ω, reside in a fixed spacetime endowed with a causal structure that is mathematically a partial order. Any "back reaction" on the spacetime metric or causal structure is thus being ignored. This background may be a continuum, such as a 4-d globally hyperbolic spacetime, or it may be a discrete partial order, such as a causal set. Let us call this spacetime M, and its causal order-relation ⪯. The availability of this invariable substratum lets us organise the events in A according to their location in M. For each spacetime region R ⊆ M there is a subalgebra A R ⊆ A, where an event E is in A R iff the property that defines whether a history Γ ∈ Ω is in E or not is a property of Γ in region R. In other words, if one can tell whether Γ is in E or not by examining the restriction of Γ to R, then E is in A R ; otherwise it is not. We say 'event E is in region R' to mean the same thing as 'E is an element of the algebra A R '. For each spacetime region R, the vector-measures of the events in A R span a subspace of the event Hilbert space: thus to each region corresponds a sub-Hilbert space H R of H. If two events are located in mutually spacelike regions, we will say that they are mutually spacelike events.
Persistence of Zero
Let us consider a spacetime region R and define the region R̄ to be the set of points that are not to the causal future of any point in R:

R̄ := M \ J⁺(R).

Note that RR̄ = 0, since R ⊆ J⁺(R).
2 Technically, they are naturally isomorphic. At the level of mathematical theorems, this remains to be proved in general. The existing theorems cover the cases of nonrelativistic quantum mechanics and finite quantum systems [13].
3 In Hartle's GQM this is called an exclusive, exhaustive set of coarse grained histories.
If the dynamics is relativistically causal, then R̄ is the region of spacetime that no event E in R can influence. The question now is whether this informal prohibition can be expressed (at least in part, and subject to later revision when quantum-foundational questions are better clarified) as a condition on the vector-measure of the system.
To that end, let us begin by asking whether it is possible to associate to the event E in R, not only a vector in H R ⊆ H, but an operator on H. We will denote this putative operator by Ê and refer to it as an event-operator. 4 In the first instance, however, it works better to limit the domain of Ê to H R̄ ⊆ H.
The Hilbert space H R̄ is spanned by the event-vectors {|F⟩ : F ∈ A R̄ }. Let E be an event in R and let us try to define a map Ê from H R̄ to H by defining it on an arbitrary event-vector in H R̄ as

Ê|F⟩ := |EF⟩,   (3)

and then extending it by linearity to the whole of H R̄ . EF is the conjunction event 'E and F'. Given the causal relation between R and R̄, although E may be partly or wholly spacelike to F, E cannot be to the causal past of F; it may therefore help to understand (3) to think of EF as the event 'F and then E'. Now (3) is not necessarily a consistent definition, because the expansion of a vector in H R̄ as a linear combination of event-vectors is not unique. This motivates the following definition: Definition 2 (Persistence of Zero). A quantum system satisfies Persistence of Zero (PoZ) if for every region R, every finite collection C of events in R̄, and every event E in R we have

Σ_{F∈C} ψ F |F⟩ = 0  ⟹  Σ_{F∈C} ψ F |EF⟩ = 0,   (4)

where {ψ F } are complex coefficients.
It is characteristic of quantum theory that a subevent of an event of measure zero can have nonzero measure due to interference: for example, in the iconic double slit experiment the measure of the event of the particle arriving at a dark fringe is zero, but the measure of the event of the particle passing through the left slit before arriving at a dark fringe is nonzero. But PoZ implies that if an event F has measure zero then any subevent of the form FE, where E is an event that is future and/or spacelike to F, also has measure zero. So we can already see that PoZ is some sort of causality condition. Moreover, it is just what the definition of Ê needs for consistency: Lemma 1. If a quantum system satisfies PoZ then (3) defines, for each event E in a region R such that R̄ is nonempty, a linear map, the event-map Ê : H R̄ → H.
Proof. The event-vectors {|F⟩ : F ∈ A R̄ } span H R̄ , and condition PoZ is exactly the condition that Ê is well defined when extended by linearity to all linear combinations of event-vectors. 5

Corollary 1. If a quantum system satisfies PoZ then (3) defines, for each event E in a region R such that R̄ is nonempty, and each subregion Q ⊆ R̄, the event-map Ê : H Q → H.

Corollary 2. If a quantum system satisfies PoZ then the event-map Ê acting on the universal vector equals the event-vector for E: Ê|Ω⟩ = |EΩ⟩ = |E⟩.
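For finite systems the PoZ condition of Definition 2 can be tested directly: one computes the null space of the matrix whose columns are the event-vectors |F⟩ for the events F in R̄, and checks that each null combination remains null after taking the conjunction with E. The sketch below assumes events encoded as sets of history indices and a matrix V of per-history vectors obtained, for instance, from the factorisation in the earlier sketch; all data are illustrative.

```python
# Numerical sketch of the PoZ check for a finite history space.
import numpy as np
from scipy.linalg import null_space

def event_vec(event, V):
    """|E> = sum of the per-history vectors for histories in the event."""
    return V[:, sorted(event)].sum(axis=1)

def poz_holds(E, events_in_Rbar, V, tol=1e-10):
    M = np.column_stack([event_vec(F, V) for F in events_in_Rbar])
    for psi in null_space(M).T:               # all null combinations
        conj = sum(c * event_vec(E & F, V)    # conjunction = intersection
                   for c, F in zip(psi, events_in_Rbar))
        if np.linalg.norm(conj) > tol:
            return False
    return True

# example: 3 histories, orthogonal vectors (classical-like, PoZ trivial)
V = np.eye(3) / np.sqrt(3.0)
E = {0}
Fs = [{1}, {2}, {1, 2}]
print(poz_holds(E, Fs, V))   # True
```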
Spacetime arrangement in the case of interest
In this article we are interested in the following physical setup in spacetime. As shown in figure 1, Z is a region that contains its own causal past: Z is a 'past set'. Let regions A and B both lie in the future domain of dependence of Z, be disjoint from Z (recall that the future domain of dependence of Z includes Z), and be spacelike to each other. The union of regions Z, A and B is also a past set. The heuristic justification for this arrangement is that, in a relativistically causal theory, any cause of a correlation between events in A and B must be in Z. In particular, any "preparation event" in an experiment of EPR type will automatically be contained within the region Z.
Figure 1: Z is a past set (i.e. contains its own causal past). A and B are in the future domain of dependence of Z, do not intersect Z and are spacelike to each other. The union of Z, A and B is also a past set.
In unitary quantum field theory in a globally hyperbolic spacetime, the event Hilbert space of the future domain of dependence of Z equals the event Hilbert space of Z. For example, if Z is the past of a Cauchy surface then the event Hilbert space of Z and the event Hilbert space of the future domain of dependence of Z both equal the canonical Hilbert space of the whole system. 6 This can be thought of as a condition of 'lack of novelty': the past region Z is rich enough in events that further events anywhere in the future domain of dependence of Z do not add anything new to the physics as encoded in the event Hilbert space. We formalise this condition for quantum measure theories in general:

Definition 3 (Lack of Novelty). A quantum system satisfies Lack of Novelty (LoN) if, for every region of spacetime Z that contains its own past, the event Hilbert space of the future domain of dependence of Z equals the event Hilbert space of Z.
In the case of the regions shown in figure 1, LoN implies that the Hilbert spaces H_{Z+A}, H_{Z+B} and H_{Z+A+B} all equal H_Z, where the subscript Z + A refers to the union of regions Z and A, and so on. So, for example, if E_A is an event in A then there exist complex coefficients {ψ_F} such that

$$|E_A\rangle = \sum_{F} \psi_F \,|F\rangle \,,$$

where the sum runs over a finite collection of events F in Z.

Lemma 2. Let the quantum system satisfy PoZ (definition 2) and LoN (definition 3). Let regions Z, A and B be as described in figure 1. If E_A is an event in A and E_B is an event in B, then their respective event-maps, Ê_A and Ê_B, are operators on H_Z, and they commute.
Proof. Corollary 1 gives us the event-map

$$\hat{E}_A : H_Z \to H_{Z+A} = H_Z \,.$$

Similarly for E_B. Henceforth we refer to such operators as event-operators.
Then, for each event F in Z,

$$\hat{E}_B \hat{E}_A |F\rangle = \hat{E}_B |E_A F\rangle = |E_B E_A F\rangle = |E_A E_B F\rangle = \hat{E}_A |E_B F\rangle = \hat{E}_A \hat{E}_B |F\rangle \,.$$

The first equality follows by applying Corollary 1 to region A and the second equality follows by applying Corollary 1 to region B. We also used the fact that multiplication commutes in the Boolean algebra, so E_B E_A F = E_A E_B F.
Patching theorems
In this section we prove the quantum analog of Fine's patching theorem (Proposition 3 of [11]), taking PoZ and LoN as our inputs.
Let us consider, for definiteness and following Fine, the Clauser-Horne-Shimony-Holt (CHSH) scenario [20] with two spin-half particles and two "local experiments", one in each of two spacelike wings. Each local experiment takes the form of a Stern-Gerlach analyzer that admits of two possible orientations or "settings", and through which the particle emerges in one or the other of two exit beams, the upper beam or the lower beam, where it registers in a detector. This scenario can be generalised to any number of spacelike separated regions with any number of settings per analyzer and any number of beams per setting (the particles might have different spins, or there might be sequences of concatenated beam-splitters), and our results generalise mutatis mutandis.

6 As mentioned previously in footnote 2, the existing theorems cover the cases of nonrelativistic quantum mechanics and finite quantum systems. The formal extension to QFT would follow the same proof structure: showing that the physically induced map from the event Hilbert space to the canonical Hilbert space (which is injective because it preserves the inner product) is surjective [13].
Let us call the two spacelike separated regions where the local experiments take place A and B, and let us call the possible settings a or a′ in A and b or b′ in B. We refer to a, a′, b and b′ as local settings. There are then four possible global settings: ab, ab′, a′b and a′b′. The two spin-half particles are prepared and sent to the analyzers, one particle to A and one particle to B. Let Z be a region of spacetime such that Z is a past set and such that A and B lie in the future domain of dependence of Z without intersecting Z, and the union of A, B and Z is also a past set, as shown in figure 1.
For each local setting, a say for the analyzer in A, there are two possible beams in which the particle can be detected and two beam-events, corresponding to the particle being detected in the upper beam (u) and the lower beam (d) respectively. There are four sets of experimental probabilities, one for each global setting:

$$P_{ab}(ij), \quad P_{ab'}(ij'), \quad P_{a'b}(i'j), \quad P_{a'b'}(i'j') \,, \qquad (10)$$

where i, i′, j, j′ label the event of the particle being detected in the upper beam or lower beam (each label taking values u or d) for settings a, a′, b, b′ respectively. We use the shortened term "beam-event" to signify the detection of a particle in a particular beam and, as above, we sometimes use a simplified notation in which the labels i, i′, j, j′ stand for the beam-events themselves. We now postulate that the four probability distributions are compatible with each other in the sense that P_ab(j) = P_a′b(j), etc. These conditions (the so-called no-signalling conditions) signify that the experimental probabilities of the events in B do not depend on the setting in A and vice versa.
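Spelled out in marginal notation (our shorthand; the marginals are obtained by summing over the other wing's outcomes), the no-signalling conditions read:

$$P_{ab}(j) := \sum_{i \in \{u,d\}} P_{ab}(ij) = \sum_{i' \in \{u,d\}} P_{a'b}(i'j) =: P_{a'b}(j), \qquad P_{ab}(i) = P_{ab'}(i), \quad \text{and similarly for } a', b'.$$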
The patching theorems are about different measure theories in the same region of spacetime, and we will need the following concepts and definitions. For a measure theory in a spacetime M with history-space Ω, if R ⊂ M is a region of spacetime, then a history in R is an element of Ω restricted to R. We use the notation Ω_R = {Γ|_R : Γ ∈ Ω} for the space of histories restricted to R.

Definition 4 (History-event in a region). Let R ⊆ M be a region of spacetime. A history-event in R is a cylinder set

$$E_{\gamma_R} := \{\Gamma \in \Omega : \Gamma|_R = \gamma_R\} \,,$$

where γ_R is a history in R.
Definition 5 (Agreement of measure theories in a region). Two measure theories, 1 and 2, agree in a region R if:

i. The two history-spaces restricted to R are equal, in which case there is an obvious physical isomorphism between A^1_R and A^2_R, the event algebras for the two theories restricted to R.
ii. The decoherence functionals D^1 and D^2, restricted to A^1_R and A^2_R respectively (denoted D^1_R and D^2_R), are equal via the isomorphism; i.e. if the physical isomorphism is ι : A^1_R → A^2_R then

$$D^1_R(E, F) = D^2_R(\iota(E), \iota(F)) \quad \text{for all } E, F \in A^1_R \,.$$

When two measure theories 1 and 2 agree in a region R, then we can and will henceforth identify the algebras A^1_R and A^2_R via the physical isomorphism. And then the decoherence functionals are equal in R: D^1_R = D^2_R, and further, the sub-event Hilbert spaces in R are equal: H^1_R = H^2_R. Strictly speaking, since H^1_R and H^2_R are subspaces of different event Hilbert spaces in different theories, they are different spaces, but we can and will identify them.
Fine's classical patching theorem
In the framework of QMT, a classical theory is a quantum measure theory (Ω, A, D) that satisfies an additional condition ensuring that all the information in the decoherence functional is encoded in the measure, µ(E) = D(E, E), a measure that satisfies the Kolmogorov sum rule and is referred to as a classical measure or, equivalently, as a probability measure. A classical measure theory (Ω, A, µ) is a level 1 theory in the hierarchy of measure theories delineated in [2].
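For reference, the Kolmogorov sum rule mentioned here is ordinary additivity on disjoint events, and one commonly used form of the classicality condition (our gloss; we supply this formula as an assumption, not as the paper's own display) is that the decoherence functional is determined by the measure of conjunctions:

$$\mu(E \sqcup F) = \mu(E) + \mu(F) \quad (E \cap F = \emptyset), \qquad\qquad D(E, F) = \mu(E \cap F).$$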
Definition 6 (Factorizability). A classical measure theory (Ω, A, µ) on a spacetime Z + A + B, where regions Z, A and B are as in figure 1, is factorizable if

$$\mu(E_A \cap E_B \cap E_{\gamma_Z})\,\mu(E_{\gamma_Z}) = \mu(E_A \cap E_{\gamma_Z})\,\mu(E_B \cap E_{\gamma_Z}) \qquad (16)$$

for all E_A ∈ A_A and E_B ∈ A_B and all history-events E_{γ_Z} in A_Z.
If µ(E_{γ_Z}) is nonzero, and if we divide (16) through by its square, then this becomes the statement (sometimes called 'screening off') that the joint probability of E_A and E_B factorizes when conditioned on a history-event in Z.
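In conditional-probability notation, assuming µ(E_{γ_Z}) > 0 and writing µ(X | Y) := µ(XY)/µ(Y), the screening-off form of (16) is:

$$\mu\left(E_A E_B \,\middle|\, E_{\gamma_Z}\right) = \mu\left(E_A \,\middle|\, E_{\gamma_Z}\right)\,\mu\left(E_B \,\middle|\, E_{\gamma_Z}\right).$$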
Definition 7 (Factorizable model). A factorizable model for the probabilities (10) in the CHSH scenario is a set of four factorizable classical measure theories in Z + A + B, (Ω_αβ, A_αβ, µ_αβ), labelled by the four global settings αβ = ab, ab′, a′b, a′b′, with the following properties.
i. For each local setting, a, a′, b or b′, the two theories that share that setting agree (as per definition 5) in the relevant spacetime region: theories ab and ab′ agree in Z + A, theories a′b and a′b′ agree in Z + A, theories ab and a′b agree in Z + B, and theories ab′ and a′b′ agree in Z + B. This implies that all four theories agree in Z.
ii. For each local setting, the particle is detected either in the upper beam or in the lower beam. For example, for local setting a let E^a_u and E^a_d be the beam-events, where we have implemented our declared intention to identify events in regions where theories agree. Then E^a_u ∪ E^a_d = Ω_ab in measure theory ab and E^a_u ∪ E^a_d = Ω_ab′ in measure theory ab′.⁷ Similarly for each of the other local settings: e.g. for setting a′ we have E^{a′}_u ∪ E^{a′}_d = Ω_a′b and E^{a′}_u ∪ E^{a′}_d = Ω_a′b′, and so on.
iii. Each measure µ_αβ has the corresponding experimental probabilities P_αβ (10) as marginals:

$$P_{ab}(ij) = \sum_{k} \mu_{ab}(ijk), \quad \text{and similarly for the other global settings,} \qquad (17)$$

where k labels the histories in Z and i, j, i′, j′ = u, d.
Now we can state
Theorem 1 (Fine's patching theorem [11]). If there exists a factorizable model (Definition 7) for the probabilities (10) then there exists a joint probability measure on all the beam-events and all events in Z that has the four factorizable measures {µ_αβ} as marginals and that therefore (by (17)) has the probability measures (10) as further marginals.
Proof. We will use i, i′, j, j′ and k as shorthand for the events E^a_i, E^{a′}_{i′}, E^b_j, E^{b′}_{j′} and E_{γ_k}. First note that due to the agreement of the four measures we have µ_ab(ik) = µ_ab′(ik), and so we can define µ_a(ik) := µ_ab(ik) = µ_ab′(ik). And similarly for each of the other 3 local settings, a′, b and b′. Similarly we can define µ(k) := µ_αβ(k), which is the same for all four global settings. Now, define a joint probability measure on all the beam-events and the Z-history-event k:

$$\mu_{\mathrm{patch}}(ii'jj'k) := \frac{\mu_a(ik)\,\mu_{a'}(i'k)\,\mu_b(jk)\,\mu_{b'}(j'k)}{\mu(k)^3} \,. \qquad (18)$$

(18) is well defined since if µ(k) = 0 then all the probabilities in the numerator vanish too (in which case we define µ_patch(ii′jj′k) to equal zero). The factorizability of the four measures {µ_αβ} implies that µ_patch has each of them as marginals. For example,

$$\sum_{i'j'} \mu_{\mathrm{patch}}(ii'jj'k) = \frac{\mu_a(ik)\,\mu_b(jk)}{\mu(k)} = \mu_{ab}(ijk) \,,$$

using Σ_{i′} µ_a′(i′k) = µ(k), Σ_{j′} µ_b′(j′k) = µ(k) and, in the last step, factorizability (16). The calculation for the other three global settings is similar. Summing further over the history-events k in Z then gives, by (17), the probabilities (10) as marginals.
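As an illustration of (18), here is a minimal numerical sketch in Python; the distributions are generated from made-up conditional probabilities (our own construction, not data from [11]), and the assertions check the marginal property derived above.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
K = 3                                 # number of history-events k in Z (hypothetical)
mu_k = np.array([0.5, 0.3, 0.2])      # common marginal mu(k), shared by all four theories

# For each local setting s and history k, a conditional probability p_s(u|k);
# factorizability then fixes mu_ab(ijk) = p_a(i|k) p_b(j|k) mu(k), etc.
p = {s: rng.random(K) for s in ["a", "ap", "b", "bp"]}

def mu_local(s, i, k):
    """mu_s(ik): joint of beam-event i (0=u, 1=d) for setting s with history k."""
    pu = p[s][k]
    return (pu if i == 0 else 1.0 - pu) * mu_k[k]

def mu_patch(i, ip, j, jp, k):
    """Fine's patched measure, eq. (18): product of local joints over mu(k)^3."""
    if mu_k[k] == 0.0:
        return 0.0
    return (mu_local("a", i, k) * mu_local("ap", ip, k)
            * mu_local("b", j, k) * mu_local("bp", jp, k)) / mu_k[k] ** 3

# Check the marginal property: summing over i', j' recovers mu_ab(ijk).
for i, j, k in itertools.product(range(2), range(2), range(K)):
    marg = sum(mu_patch(i, ip, j, jp, k) for ip in range(2) for jp in range(2))
    mu_ab = mu_local("a", i, k) * mu_local("b", j, k) / mu_k[k]   # factorizability (16)
    assert np.isclose(marg, mu_ab)

total = sum(mu_patch(*t, k) for t in itertools.product(range(2), repeat=4) for k in range(K))
print("total mass:", total)   # 1.0: mu_patch is a probability measure
```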
By summing (18) over the history-events in Z, one obtains a patched probability measure on the beam-events alone:

$$\mu_{\mathrm{patch}}(ii'jj') := \sum_{k} \mu_{\mathrm{patch}}(ii'jj'k) \,,$$

and Fine further shows that the existence of such a probability measure implies the CHSH-Bell inequalities [11].
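This final step, that a joint probability measure on the beam-events forces the CHSH bound |S| ≤ 2, can be spot-checked numerically (a sketch of our own; the sampling is over arbitrary joint distributions):

```python
import numpy as np

rng = np.random.default_rng(1)

def chsh_from_joint(p):
    """p[i, I, j, J]: joint pmf over outcomes (0=u, 1=d) for settings a, a', b, b'."""
    sign = np.array([1.0, -1.0])                       # u -> +1, d -> -1
    E_ab   = np.einsum("iIjJ,i,j->", p, sign, sign)    # correlator for global setting ab
    E_abp  = np.einsum("iIjJ,i,J->", p, sign, sign)    # ab'
    E_apb  = np.einsum("iIjJ,I,j->", p, sign, sign)    # a'b
    E_apbp = np.einsum("iIjJ,I,J->", p, sign, sign)    # a'b'
    return E_ab + E_abp + E_apb - E_apbp

for _ in range(1000):
    p = rng.random((2, 2, 2, 2))
    p /= p.sum()                                       # a random joint probability measure
    assert abs(chsh_from_joint(p)) <= 2 + 1e-9
print("CHSH bound |S| <= 2 holds for all sampled joint measures")
```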
We specialised to the CHSH scenario for concreteness and so that we didn't have to introduce complicated notation for arbitrary scenarios as, for example, in [21]. The extension of Fine's patching theorem to general scenarios is straightforward. Finding the corresponding generalised Bell inequalities, on the other hand, is a hard problem.
A quantum patching theorem
Factorizability (16) has a direct analog in QMT [22]:

$$D(E_A E_B E_{\gamma_Z},\, \bar{E}_A \bar{E}_B E_{\bar{\gamma}_Z})\, D(E_{\gamma_Z},\, E_{\bar{\gamma}_Z}) = D(E_A E_{\gamma_Z},\, \bar{E}_A E_{\bar{\gamma}_Z})\, D(E_B E_{\gamma_Z},\, \bar{E}_B E_{\bar{\gamma}_Z}) \qquad (24)$$

for all E_A, Ē_A ∈ A_A, E_B, Ē_B ∈ A_B, and all history-events E_{γ_Z}, E_{γ̄_Z} ∈ A_Z. This looks just like factorizability in the classical case except "doubled", with each relevant event in A or B or Z replaced by a pair of events in A or B or Z, respectively (E_A replaced by (E_A, Ē_A), for example). And, promisingly, this "quantum factorizability" condition is satisfied formally by relativistic QFT, as we show in Appendix A. However, the patched joint decoherence functional for the CHSH scenario obtained by using the analog of the formula of (18) fails to exist in general, because the numerator does not necessarily vanish when the denominator vanishes. Nor is it necessarily strongly positive even if it is defined [22].
In [22] it was shown that in ordinary quantum mechanics, the existence of projection operators for the beam-events in each of the local settings allows the construction of a patched decoherence functional. Since event-operators can take the place of projection operators in the derivation, and since PoZ enables the construction of event-operators, we look now to PoZ instead of factorizability as the basis of patching.
In the quantum CHSH scenario most generally, instead of 4 probability measures on the beam-events there are 4 decoherence functionals, one for each global setting:

$$D_{ab}(ij, \bar{i}\bar{j}), \quad D_{ab'}(ij', \bar{i}\bar{j}'), \quad D_{a'b}(i'j, \bar{i}'\bar{j}), \quad D_{a'b'}(i'j', \bar{i}'\bar{j}') \,. \qquad (25)$$

When the i, i′, j, j′ refer to beam-events that are detection events that behave classically, these decoherence functionals will be diagonal, and the diagonal elements will be the experimental probabilities (10). For the quantum patching theorem below, however, this diagonal property is not needed and so we will consider (25) merely as 4 decoherence functionals that are compatible on common events ("no-signalling"), i.e.

$$D_{ab}(i, \bar{i}) = D_{ab'}(i, \bar{i}) \qquad (26)$$
and all similar conditions. We now define a PoZ model for decoherence functionals by analogy with the factorizable model for probability measures (definition 7):

Definition 8 (PoZ model). A PoZ model for the decoherence functionals (25) is a set of four PoZ quantum measure theories in spacetime Z + A + B, (Ω_αβ, A_αβ, D_αβZ), labelled by the four global settings αβ = ab, ab′, a′b, a′b′, with the following properties.

i. For each local setting, a, a′, b or b′, the two theories that share that setting agree (as per definition 5) in the relevant spacetime region, i.e. theories ab and ab′ agree in Z + A, theories a′b and a′b′ agree in Z + A, theories ab and a′b agree in Z + B, and theories ab′ and a′b′ agree in Z + B. This implies that all four theories agree in Z.

ii. For each local setting, the particle is detected either in the upper beam or in the lower beam. For example, for local setting a, E^a_u ∪ E^a_d = Ω_ab in measure theory ab and E^a_u ∪ E^a_d = Ω_ab′ in measure theory ab′. Similarly for each of the other local settings: e.g. for setting a′ we have E^{a′}_u ∪ E^{a′}_d = Ω_a′b and E^{a′}_u ∪ E^{a′}_d = Ω_a′b′, and so on.
iii. Each decoherence functional D_αβZ has the corresponding decoherence functional D_αβ (25) as marginals. For example:

$$D_{ab}(ij, \bar{i}\bar{j}) = \sum_{k, \bar{k}} D_{abZ}(ijk, \bar{i}\bar{j}\bar{k}) \,,$$

where k is shorthand for the history-events γ_k in Z labelled by k. And similarly for a′b, ab′ and a′b′.
PoZ implies that there are four Hilbert spaces, H_abZ, H_ab′Z, H_a′bZ and H_a′b′Z, one for each global setting. Since all four PoZ theories agree in Z, these four Hilbert spaces each contain as a subspace the Hilbert space H_Z that is spanned by the event-vectors of events in Z. Then by Lemma 2, for each beam-event for each local setting there exists an operator on H_Z, so there are 8 event-operators on H_Z, namely {Ê^a_i, Ê^{a′}_{i′}, Ê^b_j, Ê^{b′}_{j′}}, and the event-operators for beam-events in A commute with the event-operators for beam-events in B.

Lemma 3. If there is a PoZ model for the four decoherence functionals (25) and if LoN holds for the four measure theories in the model, then for each local setting, a, a′, b or b′, the two event-operators on H_Z corresponding to the up and down beam-events for that setting sum to the identity operator. For example, for setting a, Ê^a_u + Ê^a_d = 1.
Proof. Consider an event-vector, |E_Z⟩, in H_Z. Choose local setting a and consider the two corresponding beam-events E^a_u and E^a_d. By condition (ii) in the definition of a PoZ model, E^a_u + E^a_d = Ω in each of the measure theories ab and ab′ that a is a part of. Then,

$$(\hat{E}^a_u + \hat{E}^a_d)\,|E_Z\rangle = |E^a_u E_Z\rangle + |E^a_d E_Z\rangle = |(E^a_u + E^a_d) E_Z\rangle = |\Omega\, E_Z\rangle = |E_Z\rangle \,.$$

The event-vectors span the Hilbert space H_Z and so Ê^a_u + Ê^a_d = 1 on H_Z. There is a similar proof for each of the other local settings, a′, b and b′.
Theorem 2 (Quantum patching theorem). If there exists a PoZ model (Definition 8) for 4 decoherence functionals (25) and each of the 4 measure theories D_αβZ in the PoZ model also satisfies LoN, for αβ = ab, ab′, a′b, a′b′, then there exists, mathematically, a joint decoherence functional on all the beam-events and all events in Z that has the 4 PoZ model decoherence functionals, D_αβZ, as marginals.
Proof. By Lemma 2, there are 8 event-operators on the Hilbert space H_Z, Ê^a_i, Ê^{a′}_{i′}, Ê^b_j and Ê^{b′}_{j′} for i, i′, j, j′ = u, d, such that the operators in A commute with the operators in B.
From these we can define, for each history-event-vector |E_k⟩ in H_Z, the vector

$$|ii'jj'k\rangle := \hat{E}^a_i \hat{E}^{a'}_{i'} \hat{E}^b_j \hat{E}^{b'}_{j'} |E_k\rangle \,. \qquad (31)$$

Using these vectors we define a patched joint decoherence functional,

$$D_{\mathrm{patch}}(ii'jj'k,\, \bar{i}\bar{i}'\bar{j}\bar{j}'\bar{k}) := \langle \bar{i}\bar{i}'\bar{j}\bar{j}'\bar{k} \,|\, ii'jj'k \rangle \,.$$

D_patch depends on the choice of ordering of the event-operators in the string in (31). However, any ordering will work in the following calculation because the event-operators for events in A commute with the event-operators for events in B.
We now show that D_patch has D_αβZ, for each αβ, as marginals. For example, for αβ = ab,

$$\sum_{i'\bar{i}'j'\bar{j}'} D_{\mathrm{patch}}(ii'jj'k,\, \bar{i}\bar{i}'\bar{j}\bar{j}'\bar{k}) = \langle \hat{E}^a_{\bar{i}} \hat{E}^b_{\bar{j}}\, E_{\bar{k}} \,|\, \hat{E}^a_i \hat{E}^b_j\, E_k \rangle \,,$$

where Lemma 3 is used to carry out the sums over the primed labels, since the event-operators for the up and down beams of each primed setting sum to the identity. Now,

$$\langle \hat{E}^a_{\bar{i}} \hat{E}^b_{\bar{j}}\, E_{\bar{k}} \,|\, \hat{E}^a_i \hat{E}^b_j\, E_k \rangle = \langle E^a_{\bar{i}} E^b_{\bar{j}} E_{\bar{k}} \,|\, E^a_i E^b_j E_k \rangle = D_{abZ}(ijk,\, \bar{i}\bar{j}\bar{k}) \,,$$

where the last line follows from the definition of event-vectors (section 2.3). A similar calculation shows that D_patch has D_αβZ as marginals for the 3 other global settings. The fact that the set of D_αβZ's is a PoZ model (Definition 8) of the decoherence functionals D_αβ (25) implies that further marginalization over k and k̄ gives the 4 decoherence functionals (25).
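To see the construction in action, here is a toy numerical sketch (our own, with hypothetical projectors standing in for the event-operators): H_Z is a small tensor-product space, A-side and B-side operators act on different factors and so commute, and the marginal property of D_patch follows from the up/down operators summing to the identity, as in Lemma 3.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

def rand_projector(dim):
    """A random rank-1 orthogonal projector on C^dim (toy event-operator)."""
    q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
    return q[:, :1] @ q[:, :1].conj().T

dA, dB = 2, 2                        # toy model: H_Z = C^2 (A factor) tensor C^2 (B factor)
I_A, I_B = np.eye(dA), np.eye(dB)

# Event-operators: A-side act on the first factor, B-side on the second; u/d pairs sum to 1.
P = {s: rand_projector(2) for s in ["a", "ap", "b", "bp"]}
E = {}
for s in ["a", "ap"]:
    E[(s, 0)], E[(s, 1)] = np.kron(P[s], I_B), np.kron(I_A - P[s], I_B)
for s in ["b", "bp"]:
    E[(s, 0)], E[(s, 1)] = np.kron(I_A, P[s]), np.kron(I_A, I_B - P[s])

# A-side and B-side event-operators commute by construction.
assert np.allclose(E[("a", 0)] @ E[("b", 0)], E[("b", 0)] @ E[("a", 0)])

# History-event-vectors |E_k> in H_Z (toy choice), and the vectors of eq. (31).
ket = {k: rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB) for k in range(2)}
def v(i, ip, j, jp, k):
    return E[("a", i)] @ E[("ap", ip)] @ E[("b", j)] @ E[("bp", jp)] @ ket[k]

def D_patch(x, xbar):                 # x = (i, ip, j, jp, k)
    return np.vdot(v(*xbar), v(*x))   # <xbar | x>

# Marginalising over the primed outcomes recovers the ab-setting functional,
# because the primed event-operators sum to the identity (Lemma 3).
i, j, ib, jb, k, kb = 0, 1, 1, 0, 0, 1
marg = sum(D_patch((i, ip, j, jp, k), (ib, ipb, jb, jpb, kb))
           for ip, ipb, jp, jpb in itertools.product(range(2), repeat=4))
direct = np.vdot(E[("a", ib)] @ E[("b", jb)] @ ket[kb],
                 E[("a", i)] @ E[("b", j)] @ ket[k])
assert np.isclose(marg, direct)
```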
There is a converse of sorts:

Theorem 3 (Lemma 4.2 in [22]). If there exists a joint decoherence functional, D_joint, on all the beam-events that has the four decoherence functionals (25) as marginals, then one can construct, formally, a PoZ and LoN model for (25) that is quantum factorizable.
Proof. We prove this by explicit construction. By assumption there exists a decoherence functional D_joint(ii′jj′, īī′j̄j̄′) with (25) as marginals. We construct 4 measure theories, one for each global setting, that agree in Z, where there are exactly 16 history-events. Each history-event in Z is labelled by a 4-bit string: {II′JJ′ | I, I′, J, J′ = u, d}. These history-events are posited formally, and the 4 {u, d} bits are just labels and do not imply that anything in Z is "up" or "down". In the decoherence functionals for the 4 measure theories, the relevant beam-events in A and B are simply determined by the corresponding bit values in the past history-event in Z (for example, in the theory for global setting ab, the beam-event in A is fixed by the bit I and the beam-event in B by the bit J, with the decoherence functional inherited from D_joint). It can be verified that this is a PoZ model of the four decoherence functionals (25) and that the decoherence functional for each global setting satisfies LoN and is factorizable (24).
When i, i′, j, j′ refer to beam-events that are detection events, the decoherence functionals in (25) will be diagonal and the diagonal elements will be the experimental probabilities (10). The existence of a joint decoherence functional on all the beam-events then implies the Tsirel'son inequalities for the experimental probabilities [22] and indeed the stronger condition known as Q1 [21] in the Navascues-Pironio-Acin hierarchy [23].
In the remaining sections of this paper, we adopt a less formal tone and explore the implications of relativistic causality in relation to spacelike correlations, following which we consider further how the conditions of PoZ and LoN relate to relativistic causality and other foundational notions like locality, time-asymmetry, and logic. We begin by arguing that unadorned relativistic causality has very little to say on the matter of which correlations can occur in nature and which cannot.
Nonlocality rescues causality!
Tradition has it that the observed violation of the Bell/CHSH inequalities implies that some sort of superluminal causation is taking place; and there are purely logical antinomies (e.g. Kochen-Specker-Stairs [25], Greenberger-Horne-Zeilinger (GHZ) [26], Hardy [27]) that are even more persuasive in this regard. In all of these instances the evidence for the alleged superluminality consists of certain spacelike correlations, either probabilistic or deterministic. Against tradition, however, we maintain that none of these correlations entails superluminal causation. And this is true independently of whether the context is classical or quantum.
Why then do so many people⁸ feel that certain kinds of correlations among events in mutually spacelike regions of spacetime do conflict with relativistic causality? We think this comes about because they are taking for granted that causes can only act locally in spacetime, with the result that they slide from relativistic causality per se to an enhanced condition like that to which John Bell gave the name 'local causality'. Let us focus instead on what relativistic causality (RC) demands when taken alone, unalloyed with any further requirement of locality or local causation. What it wants to say is then very simple (albeit far from precisely formalized): an event that happens in region X cannot influence events in region Y disjoint from Future(X). Now let X be, for example, a spacetime region where a "source" emits an entangled pair of particles. The correlations at issue relate events in region A to events in region B, and both of these regions lie in the future of the source. The correlations themselves pertain to neither region individually, but to their union, Y = A ∪ B, which a fortiori also lies within Future(X). But for something that happens in a region X to cause something else to happen in that region's future in no way conflicts with relativistic causality. The alleged contradiction disappears.
That is the whole argument, but perhaps some further comments would be helpful. In order to feel fully at home with the above reasoning it's necessary to grant a certain conceptual independence to events as such, as indeed the framework of QMT does.⁹ A correlation between events in A and events in B is an event in its own right, an event in A ∪ B not reducible to some event in A together with some other event in B.
What you must not think to yourself is that Z can cause such a correlation-event only by separately inducing a particular A-event and a particular B-event. If we are right, it is the unacknowledged (or only partly acknowledged) embrace of this intuition of locality that creates the apparent conflict between quantum mechanics and relativistic causality. It thus seems worthwhile to try to identify the minimum hypothesis (weaker than Bell's local causality) that needs to be given up if one is to avoid the conflict. We will state this hypothesis (or principle) in relation to our two spacelike-separated regions, A and B, although the formulation can be extended in an obvious manner to any set of disjoint spacetime-regions.

8 Representative authors are Albert Einstein, John Bell [28], and Travis Norsen in his otherwise very clear and careful exposition [29]. (Similarly, some of the philosophical literature: e.g. [30, 31].) Bell and Norsen do not seem to have specified the "beables" they had in mind; but Einstein (who presumably was unaware of histories-formulations) seems to have been taking the wave-function ψ as the quantum model of reality, and pointing at its remote "collapse" as the "spooky distant-action" [32] he was opposing.

9 A quote from Kolmogorov conveys this insight [for "elementary event" read individual history]: "the notion of an elementary event is an artificial superstructure imposed on the concrete notion of an event. In reality, events are not composed of elementary events, but elementary events originate in the dismemberment of composite events" (for an English translation by R. Jeffrey of Kolmogorov's 1948 paper see [33]).
Definition 9 (Principle of Sheafy Causation). It asserts the following: a cause can influence an event in region A ∪ B only by influencing separately events in A and events in B; a cause's effect/action in A ∪ B is fully given by its effect/action in A together with its effect/action in B.
Adopting a turn of phrase popular in the philosophical literature, one might say that according to this principle, a cause's action in A ∪ B "supervenes on" its action in A and its action in B. (We have chosen the adjective "sheafy" because a mathematical sheaf is the paradigmatic object whose global properties supervene on its local properties.)
Dispelling the supposed paradox
We can illustrate how sheafy causation leads to a paradox by adopting it provisionally, and then reasoning about a minimal fragment of the EPRB Gedankenexperiment, the fragment concerning perfect correlations.
Let there be two Stern-Gerlach analyzers with fixed settings so aligned as to produce a perfect correlation between the respective beams (u or d) in which the A- and B-particles emerge. Following a suitable preparation-event in region X ⊆ Z, the A-particle emerges from its analyzer in the upper beam if and only if the B-particle does the same. In other words, the event "uu or dd" (which we will henceforth write as uu+dd or uu∪dd) must happen. Now causality insists that particle-A cannot learn what particle-B is doing, and particle-B cannot learn what particle-A is doing. Therefore both particles must know in advance whether they will choose their 'u' beam or their 'd' beam. Hence the source or preparation must have pre-determined both of these separate "choices". (Or so it seems!) It is this supposed predetermination of the outcomes, uu, dd, ud, or du, that is the source of the paradoxes (basically because it authorizes the introduction of "hidden variables" λ that bear the information about which outcome has been determined to happen in any particular run). To undermine the reasoning that seemed to lead to deterministic beam-events is thus to dispel the paradoxes.
What, then, was wrong with the reasoning we just rehearsed? The fallacy, as we have already indicated, was to have conflated causation per se with sheafy causation: to have ignored that events in X can cause any event in X's future, without necessarily being the cause of any other event. In particular they can cause uu + dd to happen, without needing to cause either uu or dd. That is, a cause need not (and in this example does not) influence the particles individually, but only jointly.
If we accepted the principle of sheafy causation, we would have to deny this possibility. Instead, we would infer that in order to force uu + dd to happen a cause would either have to force u-left and u-right to happen (and thus force uu to happen) or else force dd to happen. In some experimental runs, the fine details of the preparation-event would deterministically produce the u-event in each wing, while in other runs they would produce d in each wing.
It might be helpful to express our view of the situation in negative language. The preparation-event has prevented the events ud and du from happening. Beyond that it has done nothing, having had in particular zero influence on A and zero influence on B.
It has caused a correlation, but no more than that.¹⁰

Note. An event like uu + dd is a logical (Boolean) combination of events in A and events in B: it "supervenes on" these events. This is indeed a species of locality, but it is purely kinematical, while the crucial nonlocality is dynamical. The causal influence which the X-event exerts in region A ∪ B does not supervene on its influences in regions A and B separately.
Perhaps we should also clarify here that by focussing on the correlation event uu + dd we do not mean to endorse an assertion like "Neither uu nor dd happens in nature, but only uu + dd". Such a claim would not make sense, given that both uu and dd are macroscopic events. The present paper revolves around questions of cause, locality, and logic. We do not address the measurement-problem, which we would view as the task of explaining why macro-reality can be identified with a single (macro-)history, even if micro-reality cannot. Empirically, however, this is a fact, which can also be expressed by saying that macro-events follow classical logic. To explain this fact is a task for "Quantum Foundations", but if we take it as given, then it follows at the macroscopic level that uu + dd happens implies that either uu happens or dd happens. However, this still doesn't imply that the preparation-event causes uu or causes dd. What it causes is still just uu + dd.¹¹
How does the path-integral explain the perfect correlations?
If some of these explanations seem unduly abstract or slippery, it might help to go through the path-integral calculation presented in [34] that shows in detail how the correlations come about. One sees in particular how the precluded event ud acquires a net amplitude of zero. As one will readily observe, the calculation is global in nature, because the amplitudes that enter into it are themselves global in nature. (They are functions of histories.) Within a path-integral framework, the fact that a correlation is an essentially nonlocal effect is clearly visible in the computation that one performs to deduce the correlation. One sees concretely how the preparation-event exerts its causal influence globally without doing so locally. Event u at A acquires a positive quantum measure µ(u) > 0, as does event d at B; but the measure of the intersection ud (their conjunction) vanishes.
In order to avoid a possible confusion, we should mention here that the setup in [34] differs from that discussed in Section 4 in one important respect. The beam-events considered in Section 4 were macroscopic instrument-events: the registering of the presence of one or more particles by one or more detectors placed in the corresponding beams. In contrast, the computation in [34] did not include detectors, and indeed probably no one has attempted something like that with realistic detector-models.¹² Rather, the events whose measures µ were computed in [34] were the corresponding beam-events without detectors present. This simplification is always made in practice when people compute observational probabilities, but of course it ultimately needs to be justified, something which will only be achieved fully when the so-called measurement problem has been solved. Pending that, we can perhaps be content with: (1) the assumption (or widespread conviction) that if the computation could be done, the measure µ of the particles emerging in certain beams without detectors present would equal the measure of detectors placed in the same beams registering the particles' presence; together with (2) the rule of thumb that the measure of an instrument-event can be interpreted as a probability in the sense of a relative frequency. Together these are equivalent to the Born Rule.

10 One might challenge the words "zero influence" by claiming that the preparation "caused events u-left and d-left in A to be equally likely", and similarly for region B. However, even if one accepts this as a causal influence, it does nothing toward producing the correlation between u-left and u-right.

11 In observing that reality is described by a single history at the macroscopic level, we are not claiming the same about microscopic reality. That the course of microscopic events involving a given particle could correspond to a single worldline of that particle would contradict RC as we understand it, as illustrated by the purely logical cousins of the EPR paradox. Nor do we mean to imply that the particle detectors in the u or d beams "only reveal" the locations of the particles they are detecting. Nor do we mean to imply that they don't!

12 It's interesting that realistic source-models are much easier to devise. In the arrangement of [34], a single mirror (or beam-splitter) suffices to "entangle" a pair of photons with each other. The design takes advantage of the photons' bosonic statistics, and it requires no nonlinearity of the kind involved in parametric down-conversion.
For nonlocality
Not too many years ago, drawing a distinction between "local causality" and causality per se might have seemed to be splitting hairs, but now physicists possess many reasons to take a fundamental nonlocality seriously; and most of these reasons have nothing to do with the Bell inequalities. One can mention here the puzzle of the cosmological constant Λ; the continuing interest in nonlocal field theories, non-commutative geometry, and twistors; and the fact that (as illustrated by causal sets) a spatio-temporal discreteness can be combined with Lorentz invariance only by accepting a radical nonlocality. Indeed, if spacetime is ultimately discrete, then locality will largely lose its meaning simply because the concept of infinitesimal neighborhood of a point will no longer be available. But even a relatively limited amount of nonlocality would erase any difference of principle between causing an event in a small neighborhood of a spacetime point and causing an event in a much larger region, and this in turn would suffice to undermine the "sheafy" reduction of a causal influence on the amalgamated region A + B to separate influences on the constituent regions A and B.
Correlations involving multiple instrument-settings
The discussion above pertains to analyzers whose settings are fixed, whereas (as with patching) the Bell inequalities and the gedankenexperiments relating to superluminality all require the consideration of multiple settings. However, the case of variable settings is more general in appearance only. It can be handled in the same way as the fixed case if one treats the settings as the dynamical events they actually are (so enlarging the history-space and event-algebra to include the instruments and their histories). In place of a correlation event like 'uu + dd', one now puts an event like 'S → (uu + dd)', where S denotes a setting-event, u and d denote beam-events when the Stern-Gerlachs have been oriented by S, and → denotes the Boolean operation of so-called "material implication", defined by S → E := ¬S ∪ E.
Instead of saying that the preparation event in region X ⊆ Z causes in region A ∪ B the event uu + dd to happen, one now says that it causes the event S → (uu + dd) to happen.¹³ In this manner, perfect correlations involving multiple instrument-settings can be handled exactly as above. In particular this covers the case of the EPRB setup with variable, but matched, settings of the analyzers. For a more generic example, consider the correlations of the Popescu-Rohrlich boxes [35], which can be expressed as follows. Let E_PR be the event¹⁴

$$E_{PR} = [S_1 S_1 \to (uu \cup dd)] \cap [S_1 S_2 \to (uu \cup dd)] \cap [S_2 S_1 \to (uu \cup dd)] \cap [S_2 S_2 \to (ud \cup du)] \,,$$

where S₁ and S₂ are settings, u and d are beam-events as before, and where for instance S₂S₂ denotes the event "setting S₂ at both A and B". The causal influence in this case can be expressed by saying that the preparation event P causes event E_PR to happen.
[Alternatively, one could say that it causes four different events to happen, namely the events S₁S₁ → (uu ∪ dd), S₁S₂ → (uu ∪ dd), S₂S₁ → (uu ∪ dd), and S₂S₂ → (ud ∪ du).] As a final example (one which is more amenable to experiment), consider the 3-beam GHZ correlations [26], which are encapsulated in a corresponding event E_GHZ, where S_x and S_y are again settings. In this case, the causal influence would be expressed by saying that the preparation event P caused E_GHZ to happen.
In these examples one is still dealing with perfect correlations, but there's no need to go further if one's interest is in the conflict between relativistic causality and locality. Indeed the perfect GHZ correlations, for instance, are more trenchant in that respect than the merely probabilistic correlations involved in the CHSH/Bell inequalities. Nevertheless, the Bell inequalities are still the most relevant to accomplished experiments, and one can ask how the discussion of this section would look in relation to them. More generally, how should relativistic causality be conceived in the context of probabilistic correlations?
In that context, one is dealing with a broader and less transparent concept that one might term "stochastic causation", and it seems clear that cause-effect implications of a straightforward logical nature no longer suffice. Instead of statements like "P causes Q", it seems that one would need to make sense of locutions like "P causes Q with probability p", or (more obscurely) "P causes Q to have the probability p", or perhaps even "a string of repetitions of P causes the frequency of outcome Q to be p." But these matters concern stochastic causation and probability as such, and are not really germane in the present context. (Recall here that because the setting- and beam-events under discussion are macroscopic instrument-events, quantal interference is absent, whence one can employ ordinary ("homomorphic") logic and ordinary probability theory in reasoning about them.)

Why are certain correlations not seen?
The message from the preceding section is that the principle of relativistic causality imposes no limitation on the correlations that a localized cause can induce among events in its future. In particular, relativistic causality is perfectly compatible with the spacelike correlations that have often been supposed to contradict it (up to and including hypothetical "signalling correlations"). Conflicts arise when you supplement relativistic causality with sheafy causation, or with some still stronger condition like continuous propagation of cause-effect chains in spacetime (Bell's Local Causality). But experience teaches that these stronger principles are frequently violated in the quantal world, and therefore must be given up.
If this were the end of the story, however, then one might expect to have observed in nature correlations much stronger than those discovered so far, and in particular stronger than provided for in established quantum theories, which respect constraints like "no-signalling" and the Tsirel'son inequalities (the simplest of the latter being CHSH with 2 → 2√2 [36,37]). This suggests that some other principle beyond bare relativistic causality is active in nature, and that it might be encoded in certain structural features of the quantum measure which lie at the base of more phenomenal regularities like the "patching property" and its consequences for correlations, like the "no-signalling" equalities. If the explanation for these regularities is indeed some hitherto unrecognized structural principle governing the decoherence functional, then discovering what it is would not only help to illuminate our current theories, but it might usefully guide the search for new theories, especially theories of quantum gravity. Without really knowing how to frame such a principle, we will as a placeholder give it the name of causal severability, and try to indicate how PoZ might be a step in its direction.
In ordinary quantum mechanics, one proves the Tsirel'son inequalities by assuming that the correlators they relate can be expressed as expectation values of products of projection operators. In quantum measure theory, as already mentioned, the same inequalities follow from the hypothesis of a joint quantum measure, the analog of a joint probability measure in Fine's theorems. That is, they can be derived from "quantal patching". A would-be principle of causal severability would thus want to provide a basis for quantal patching.
In the "decoherent histories" and "consistent histories" interpretations of quantum mechanics [7,[38][39][40]) the decoherence functional is commonly defined in terms of sequences of projection operators in a fixed Hilbert space.If we could assume that this were its most general form, then patching would follow rather simply, but that assumption is not tenable in the context of path-integrals, let alone in quantum gravity.Fortunately it is not needed either, because one can appeal instead to event-operators, as we have seen in Section 4 of this paper.
But this in turn makes us ask what kind of condition would ensure that the required event-operators will be available. As detailed above, one possible answer is that such a condition is PoZ, or rather PoZ supplemented by what we have termed Lack of Novelty (LoN). In light of this service that PoZ provides to quantal patching, one can see it as a species of causality principle, one that (thanks to the "slicing freedom" inherent in a Lorentzian temporal structure) is able to play a similar role quantum mechanically to what Bell's Local Causality played in the derivation of the CHSH inequalities, or to what the Principle of Sheafy Causation plays in the derivation of deterministic hidden variables from the perfect correlations of the original EPR paper. (In relation to CHSH one has, schematically, that, classically: Local Causality → Factorizability → classical patching → CHSH; and quantally: PoZ + LoN → event-operators → quantal patching → Tsirel'son.) Moreover, quantal patching also ensures, essentially by definition, that the resulting correlations will satisfy the condition on the marginal probabilities that goes by the name of "no-signalling". This, then, is one way in which PoZ relates to "causal severability".
But PoZ has other features, too, that make contact with our causal intuitions. First and foremost (and in contrast to other "causality principles" like spacelike commutativity) it manifests the essential time-asymmetry inherent in the causality-concept: the time-reverse of PoZ is a condition that will almost never be satisfied! Moreover, the particular manner in which PoZ provides this "arrow of time" relates to irreversibility and the "stability of the past", because it can be read as saying that a certain kind of "property of the past" cannot be undone in the future. The preclusion of a past event is such a property, and although the full statement of PoZ goes beyond simple event-vectors to linear combinations of them, one can perhaps regard the vanishing of a sum like that on the LHS of (4) as also being a property of the past.
Finally, we should comment on the possibility that certain causality-principles could retain their heuristic value for theories in which the metric, and therefore the temporal/causal structure of spacetime, becomes dynamical, in other words for theories of quantum gravity. In that treacherous soil, it seems very possible that none of our cherished causality principles will be able to take root. However, it's worth mentioning that for decoherence functionals defined on the event-algebra of labeled causal sets there does exist a natural analog of the PoZ condition.
Acknowledgments
For stimulating questions touching the topic of this paper, RDS would like to thank participants in the "Tonyfest" symposium, held 3 February 2019 at the Raman Research Institute in Bengaluru. FD acknowledges the support of the Leverhulme/Royal Society interdisciplinary APEX grant APX/R1/180098. FD is supported in part by STFC grant ST/P000762/1. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development and Innovation. RDS is supported in part by NSERC through grant RGPIN-418709-2012.
A Appendix
Consider a unitary local quantum field theory on a globally hyperbolic spacetime M. Let the field be Φ and let Σ_i be some Cauchy surface in the past on which there is an initial state ψ, which is a set of amplitudes ψ[φ] for each spatial configuration φ = Φ|_{Σ_i} on Σ_i. The decoherence functional for events E and Ē in a spacetime region R between the initial Cauchy surface Σ_i and a final Cauchy surface Σ_f is given by a double path integral of Schwinger-Keldysh type, in which the delta functional enforces the condition that the two histories Φ and Φ̄ are equal on the final truncation surface Σ_f. This path integral may also be thought of as a single integral over what we call Schwinger histories [41], consisting of the pair (Φ, Φ̄) that agree on Σ_f. It is a property of unitary theories that the value of the decoherence functional does not depend on the position of the truncation surface Σ_f, so long as Σ_f is nowhere to the past of events E and Ē. Now consider regions Z, A and B as in the article, as shown in figure 1. We denote the union of Z and A by Z + A, the union of Z and B by Z + B and the union of Z, A and B by Z + A + B, as in the article. We denote the future boundary of a region X by ∂⁺X and its past boundary by ∂⁻X.
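A plausible rendering of such a Schwinger-Keldysh double path integral, in notation we supply ourselves (with χ_E the indicator function of the event E), is:

$$D(E, \bar{E}) = \int \mathcal{D}\Phi\, \mathcal{D}\bar{\Phi}\; \chi_E[\Phi]\, \chi_{\bar{E}}[\bar{\Phi}]\; e^{\,iS[\Phi] - iS[\bar{\Phi}]}\; \delta\big(\Phi|_{\Sigma_f} - \bar{\Phi}|_{\Sigma_f}\big)\; \psi[\Phi|_{\Sigma_i}]\, \overline{\psi[\bar{\Phi}|_{\Sigma_i}]}.$$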
Theorem 4. Consider the unitary quantum field theory with decoherence functional as described above. Let regions Z, A and B be as in figure 1. Let E_A and Ē_A be events in A, E_B and Ē_B be events in B, and let E_{γ_Z} and E_{γ̄_Z} be history-events in Z. Then

$$D(E_A E_B E_{\gamma_Z},\, \bar{E}_A \bar{E}_B E_{\bar{\gamma}_Z})\, D(E_{\gamma_Z},\, E_{\bar{\gamma}_Z}) = D(E_A E_{\gamma_Z},\, \bar{E}_A E_{\bar{\gamma}_Z})\, D(E_B E_{\gamma_Z},\, \bar{E}_B E_{\bar{\gamma}_Z}) \,. \qquad (48)$$

Proof. Recall that a history-event in Z is the cylinder set defined by a single field configuration on Z, and let us refer to the field configurations on Z corresponding to the history-events E_{γ_Z} and E_{γ̄_Z} as γ and γ̄ respectively.
The portion of the initial surface Σ_i that is not contained in Z can be ignored and, instead of initial and final Cauchy surfaces, we have initial and final partial Cauchy surfaces: the initial surface is d := ∂⁻Z ∩ Σ_i and the final, truncation surface is the future boundary of the relevant region, Z, Z + A, Z + B or Z + A + B, for the decoherence functional in hand. See figure 2. The initial amplitudes ψ are defined for spatial field configurations on the initial partial Cauchy surface, d.
We consider the four decoherence functionals in the identity (48) in turn, starting with the simplest, D(E_{γ_Z}, E_{γ̄_Z}). For this, there is no path integral at all because we can choose the truncation surface to be the future boundary of Z. ∂⁺Z can be partitioned into three: a := ∂⁻A ∩ ∂⁺Z, b := ∂⁻B ∩ ∂⁺Z, and the complement c := ∂⁺Z \ (a ∪ b), as shown in fig 2. The delta function for the fields on ∂⁺Z is therefore a product of 3 delta functions for the fields on a, b and c: δ(Φ|_a − Φ̄|_a) δ(Φ|_b − Φ̄|_b) δ(Φ|_c − Φ̄|_c).
Figure 2: The horizontal line Σ_i is the initial Cauchy surface. The dashed portion of Σ_i represents d := ∂⁻Z ∩ Σ_i. The future boundary of Z can be partitioned into 3 sets: a := ∂⁻A ∩ ∂⁺Z, b := ∂⁻B ∩ ∂⁺Z and the complement c := ∂⁺Z \ (a ∪ b). a and b are shown as bold lines and c is indicated by the dotted line.
The Nexus between Downsizing and Financial Performance of Selected Commercial Banks in Nigeria: A Comparative and an Empirical Exploration
Downsizing is popularly understood as a situation in which a firm reduces its workforce tremendously as a measure to improve profit by cutting down operating and overhead costs. In this study, we explored the relationship between downsizing and the financial performance of five selected commercial banks in Nigeria from 2010 to 2015. These banks have over the years rolled out computerized transaction channels, leading to reductions in their workforce. The study applied the paired-samples t-test to assess whether there is any significant difference in financial performance, expressed with return on assets and return on equity, before and after downsizing. Panel data analysis was used to explore the relationship between the variables of interest. The result of our paired-samples t-test indicates that there is no significant difference between the financial performance indices (return on assets and return on equity) before and after downsizing. The random effects estimation depicts that the selected commercial banks failed to achieve their objective of increasing overall asset levels by way of downsizing their workforce. On the other hand, we found no evidence that downsizing is a good corporate strategy for improving the wealth of shareholders in Nigeria. Downsizing not affecting the return on assets and return on equity may be because of the global financial meltdown within the period covered by this study. In view of our findings, and considering the level of economic growth and development in Nigeria, downsizing should be discouraged given its inability to spur expansion in the asset base of banks and the obvious economic and social problems it creates. As regards the insignificant positive relationship between downsizing and return on equity, it should be noted that Nigerian banks do not have the opportunities and enabling competitive environment of their counterparts in developed economies.
INTRODUCTION
Downsizing may be defined in the context of a special case of the labour input strategy of a firm. In the popular sense, downsizing is a situation in which a firm slashes its workforce tremendously (by more than 10% of active full-time employees) as a measure to improve profit, regardless of the financial position of the firm [1]. In this paper, downsizing will be a case in which the firm chooses the number of people on its payroll as the variable that maximizes the stock price. Such a firm is in a perfectly competitive industry and acts as a price taker. Its choice of the level of employment determines how much it can produce and sell at a given price and, therefore, what its profit will be. Downsizing is probably also one of the most misunderstood and misinterpreted contemporary phenomena [2]. The downsizing process was first introduced in American firms. The American economy in the 1980s was strong, inflation was falling, and the Gross National Product (GNP) was growing at a steady, confident pace. Corporate profits had reached historically high levels, and investors were on a buying spree in the stock market, pushing it from one record close to the next. Unemployment had fallen to a level that many economists felt was consistent with non-accelerating inflation. Expectations of inflation had abated, and the boom seemed poised to last for a long time, with no economic downturn in sight. At the same time, the major corporations in the United States of America (USA) appeared to be firing workers by the hundreds of thousands, and job insecurity had risen to a surprisingly high level [3]. Regardless of seniority, the company's profitability or the surging demand for the firm's output, the threat of an employee finding a pink slip in the next pay envelope was real and widespread. No job seemed safe any longer.
The scenario of the United States of America economy in the mid-1990s seemed inconsistent not only with a standard textbook characterization of an economic boom, but also with historically observable relationships between the labour market and other economic arenas, such as the financial market or the goods market [3]. Politicians and unions pointed to the greed of corporate America and the insensitivity of management to the contributions and value of workers. Standard microeconomics was at a complete loss to explain the phenomenon. If strong firms were anticipating greater demand for their products during the economic boom, and the labour costs of their products were not rising excessively relative to productivity, the question therefore was: why were firms firing workers?
The right size of workforce that enhances the survival of any organization is vital and indispensable [4]. Downsizing emerged as a term to describe the action of dismissing a large proportion of a firm's workforce in a very short period of time, particularly when the firm was highly profitable. The essence of downsizing as carried out by various organisations across the globe is to enhance efficiency by way of reducing operating costs, improving revenues or strengthening competitiveness [2]. In practice, it is expected that when a firm downsizes or reduces its labour force, efficiency and profitability, measured through return on assets, return on equity, net profit margin, growth in revenues, etc., would improve [5]. In a standard downsizing story, a profitable firm, well poised for growth, would announce that it was firing a large percentage of its workforce. The equity market would get excited and initiate a buying frenzy in the firm's stock. This runs counter to a standard microeconomic analysis, in which a weak firm anticipates a slump in the demand for its products and lays off workers, while a strong firm foresees a jump in the demand for its products and hires more workers to increase production [5].
Investors care about downsizing, since it carries serious implications for the short-term profitability and even the long-term growth of a firm. Conventionally, downsizing is quite unlike a layoff, in which a worker is asked to leave temporarily during periods of weak demand but will be asked back when business picks up; the latter is most applicable to construction firms in Nigeria. In a downsizing, the separation between a worker and a firm is permanent. A downsizing is not a dismissal for individual incompetence, but rather a decision on the part of management to reduce the overall workforce. Downsizing does not just occur to an organisation; it is not something that merely happens but a change that the management of an organisation makes on purpose. Hence, downsizing is an ensemble of intentional activities [6].
Problem Statement
Banks are considered a critical sector of the economy, and it is important that the workforce supporting these banks is well motivated and effective in delivering services [7]. The banking industry in Nigeria witnessed the highest act of downsizing during the consolidation exercise of 2004/2005. A lot of employees were laid off as a result of restructuring, re-engineering, takeovers, mergers and acquisitions, thus increasing the level of unemployment in the country.
Studies on the nexus between downsizing and the corporate performance of banks in Nigeria are scarce. The study available online on the subject matter, by [8], did not indicate any proxy (return on assets, return on equity, net profit margin, etc.) for measuring corporate performance. Secondly, the use of mere questionnaire responses, without recourse to the employee-efficiency and profitability indices available in the banks' statements of financial position and annual reports, in analysing the relationship between downsizing and corporate performance is a source of criticism. The few other studies focused on job satisfaction among survivors, e.g.
Consequently, in bridging the gap in the literature, it is the sole aim of this study to explore the relationship between downsizing and the financial performance of selected commercial banks in Nigeria for the period 2010 to 2015, using return on assets and return on equity as performance indices. To the best of our knowledge, based on internet searches, this study is the first to cautiously explore the relationship between downsizing in Nigeria and bank financial performance measured with the return on assets and return on equity of selected commercial banks.
This study is structured as follows. The first section is the introduction. The second section is the review of related literature, which succinctly clarifies the concepts of downsizing and corporate performance and surveys empirical studies. The third section briefly explains the research methodology. In the fourth section, the findings of the study are discussed. Finally, some implications of this study, along with the conclusion and limitations, are presented in section five.
Concept of Downsizing
Downsizing, in layman's language, is the reduction of the workforce of an organization in order to improve efficiency and profitability. Downsizing refers to an ensemble of actions carried out by the management of an organisation in order to improve its productivity, efficiency and competitiveness. The objectives of downsizing are to increase efficiency and productivity, tighten control over costs, carry fewer underutilized human resources, and remove management layers, which closes communication gaps, speeds up communication and improves the decision-making process by reducing the time it consumes [4]. Downsizing could be interpreted as a simple diminution of organisational size; however, this explanation leads to misinterpretation [6]. From the organizational management perspective, downsizing is a normal practice and necessary for the continuous existence of the organization. On the other hand, most employees see downsizing as an unfair corporate practice, even when they receive a favourable severance package. Downsizing is a phenomenon that is unwanted and unprovoked by either the organization or the employees. Therefore, from whatever angle one views redundancy in organizations, the nature and types of redundancy may be identified according to their causes [7]. There has been downsizing in Nigeria in both the public and private sectors, but the manner and ways in which it is done undermine the good reasons for downsizing and hence produce unintended consequences [10].
Financial Performance
Financial performance is how well a firm has performed relative to the use of its assets, resulting in revenue generation over a period of time; it addresses the financial health of a firm over a specified time frame. The primary motive of any business entity is profit maximization, translated into financial performance. The ability of a firm to assess its operations and policies in monetary value over a given period of time is what is referred to as financial performance. Financial performance may be measured using various variables such as return on assets, return on equity, profit before or after tax, net profit margin, sales growth, growth in revenue, earnings per share, dividend per share and the price-earnings ratio, among others. However, for the purpose of this study, we applied only return on assets and return on equity to measure the financial performance of the selected banks, as these are the two major proxies for measuring the financial performance of any firm. Financial performance measures such as maximization of profit, maximizing the return on assets, as well as maximizing the benefits that accrue to shareholders, are at the centre of measuring the effectiveness of the firm [11].
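To make the two ratios and the paired-samples test concrete, the following sketch uses made-up figures for five hypothetical banks (the numbers are illustrative only and are not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical annual figures for five banks (billions of naira); illustrative only.
ni_pre  = np.array([30.0, 22.0, 18.0, 25.0, 15.0])        # net income, before downsizing
ta_pre  = np.array([400.0, 350.0, 280.0, 390.0, 260.0])   # total assets
eq_pre  = np.array([60.0, 48.0, 40.0, 55.0, 38.0])        # shareholders' equity
ni_post = np.array([32.0, 21.0, 19.0, 24.0, 16.0])
ta_post = np.array([420.0, 360.0, 275.0, 400.0, 270.0])
eq_post = np.array([62.0, 50.0, 41.0, 54.0, 40.0])

# The two performance ratios used in the study.
roa_pre, roa_post = ni_pre / ta_pre, ni_post / ta_post    # return on assets
roe_pre, roe_post = ni_pre / eq_pre, ni_post / eq_post    # return on equity

# Paired-samples t-test: H0 = "mean ratio is unchanged before vs after downsizing".
for name, before, after in [("ROA", roa_pre, roa_post), ("ROE", roe_pre, roe_post)]:
    t, p = stats.ttest_rel(before, after)
    verdict = "not significant" if p > 0.05 else "significant"
    print(f"{name}: t = {t:.3f}, p = {p:.3f} ({verdict} at the 5% level)")
```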
Downsizing as a Corporate Strategy in Nigeria
There are some reasons for adoption of downsizing as a corporate strategy by commercial banks in Nigeria. These reasons are briefly discussed as follows:
Restructuring or re-engineering
The banking sector reform of the Central Bank of Nigeria in 2004/2005 accelerated downsizing as a major corporate strategy for banks seeking to survive competition. In order to avoid the revocation of their operating licences, many banks were forced to go into mergers and acquisitions. The number of commercial banks in Nigeria fell to 25 as at 1st January 2006, against the 89 in operation before the 31st December 2005 deadline for banks to recapitalize. This exercise led to massive retrenchment of workers in the name of restructuring and re-engineering of operations in the banks.
Poor operating profit
To enhance operating profits, banks embarked on the downsizing of their workforce. Some of the banks were making comfortable profits in their operations; they only wanted to maintain a lead in the country's banking industry. In the USA, unlike in the Nigerian system, the corporate aim of downsizing is to enhance the stock values of the companies involved in the exercise and probably enhance profit earnings in the long run.
Fluctuations in marginal productivity of capital or labour
Banks could add or remove workers based on changes in the marginal productivity of labour, given constant demand. If workers' productivity increased, the firm would need a smaller headcount to produce the same output. Presumably, the workers would be compensated for their increased productivity, so the total wage bill of the bank would not change when a certain number of workers are laid off. If the wage bill and demand do not change, then presumably profitability and cash flow will be stagnant, as will the stock price.
Adoption of technology in operations
The computerization of banking transactions in Nigeria has contributed to the reduction in the workforce despite the expansion in the branches of commercial banks. Immediately after the consolidation exercise of 2005, commercial banks rolled out various technology service delivery channels to attract more customers, stay in business and compete favourably in the industry.
Downsizing and Its Effect on the Economy
The Nigerian situation is completely different from what obtains in the USA and other advanced countries of the world. For instance, in the USA during the mid-1990s under the Clinton administration, the American economy boomed. There was massive job creation, so many people who lost their jobs in one firm or the other picked up jobs elsewhere, and only a very small unemployment rate (3-5%) was observed. Most people who lost their jobs as a result of downsizing in different firms gained new employment; in some cases, they got better jobs with better conditions and higher pay. People displaced by the downsizing exercise were given a good package in recognition of their valuable contributions to the growth of these firms. In Nigeria's situation, most employees who were laid off never got absorbed into other sectors, simply because there are no jobs and because of the stigmatization associated with job loss, especially in the banking sector. In Nigeria, bank workers are considered the big guys, the guys on money. New jobs are rarely created, and it is a tragedy in this country for one to lose a job involuntarily. Cases of voluntary resignation in Nigeria occur very rarely. According to the Central Bank of Nigeria monthly report of June 2016, over 80% of youths are unemployed, and the report declared that unemployment remains a severe threat to Nigeria's economy.
Despite all the incentives that a worker may receive in the event of downsizing, a good number of them never fared well years after being laid off, because most of them never got new jobs. The general economic situation is nothing to write home about, especially with the decline in government revenue from oil as a result of the fall in crude oil prices in the international oil market. Jobs were not created, and the high inflation level was equally a threat, particularly where the funds paid were not properly invested and managed. Many of the laid-off workers never prepared for or anticipated retirement from the jobs into which they had put a greater part of their adult years and energy, having grown with these banks. Those who could not cope with the harsh economic conditions of the time even lost their lives as a result of heart failure; some could not comfortably support their families, and their social lives became unimpressive. Some were never given opportunities to work in the banking industry again by the regulatory authority (Central Bank of Nigeria) for no convincing reason. They could not comfortably pay their children's school fees, especially those whose children were not enjoying bank scholarships.
This exercise increased labour mobility in the banking industry because of the inherent uncertainty (lack of job safety) witnessed in the sector. Highly educated and experienced staff move from one bank to another in search of higher pay because of this uncertainty. It equally encouraged more fraud in the banks through internal collusion and the get-rich-quick syndrome in the country. [4] conducted a study to measure whether the banking sector was successful in achieving its objectives of downsizing. The banks that downsized during the last decade were selected as the sample of the study. Pre-downsizing and post-downsizing financial data were analysed over two time spans. Six different ratios were calculated as indicators of financial performance: loan per employee, deposit per employee, return on assets, return on equity, loans to assets and the non-performing loans to loans ratio. To test the hypothesis statistically, the paired sample test was used. It was observed that banks could not achieve their desired results of profitability.
Empirical Studies
[12] used a unique dataset to study the short-term effects of downsizing on the operational and financial performance of large German firms. The operational and financial performance measures were retrieved and calculated from the Amadeus database, made available by Bureau van Dijk. They focused on various indicators of firm performance, such as labour, capital and total factor productivity as well as average wage costs and profit margin, and applied a Difference-in-Differences approach to identify the impact of downsizing on these indicators. Combining both subsamples, they found that productivity as well as profitability drop during downsizing and do not surpass their before-restructuring levels afterwards. Differentiating by the reason behind the downsizing decision, some differences emerged. Productivity after downsizing seems to have decreased especially for those firms that tried to increase their efficiency, while firms downsizing due to a business downturn only witnessed a contemporaneous drop in productivity.
[13] focused on clarifying the background of downsizing as a strategy, measuring the profitability effects of downsizing and finding out the signal value of downsizing announcements in the capital markets. The research focused on deriving the effects of downsizing among Finnish large-cap companies between 2005 and 2010. The sample of 197 downsizing events consists of stock exchange releases regarding new downsizing actions from Helsinki stock exchange OMX 25 companies. The study shows evidence that downsizing does not have a significant impact on profitability at an aggregate level. Market-adjusted return on assets and return on equity improve roughly 1%, whereas the earnings before interest and taxes margin decreases by the same amount among downsizers during the three years after the announcement. [14] explored the relationship between downsizing decisions and corporate financial performance after top management has decided to downsize. Their focus was on the financial consequences arising from the amount of downsizing and the use of disengagement incentives. They used a sample of downsizing announcements in the Spanish press from 1995 up to 2001. Although the results showed that the amount of downsizing is not significantly related to post-downsizing profitability, the evidence provided supports the finding that the use of disengagement incentives (which motivate workers to leave the organization) is negatively related to firm performance. The analysis of the study helps to understand the role that strategic downsizing decisions play in explaining observed variance in the performance of downsized firms. [15] used a data set of Chinese state-owned enterprises (SOEs) and private firms to evaluate the effects of labour downsizing on firms' technical efficiency, financial performance, and employee wages. Since downsizers and non-downsizers differ greatly in firm characteristics, they used propensity score matching to deal with firm heterogeneity. They found that downsizing has serious short-term costs in terms of allocation efficiency and financial performance. For mild downsizing, SOEs suffer more in profitability, and private firms more in allocative efficiency. The distribution of surplus after downsizing is more favourable to owners in private firms, and to labour in SOEs. For severe downsizing, SOEs and private firms exhibit lower technical efficiency and financial performance growth with similar magnitudes.
[16] examined whether Portugal's eight largest banks realized their financial objectives upon the execution of downsizing activities during 2008-2010. Financial performance was measured through employee efficiency, profitability, and asset quality. Six hypotheses were defined using six different financial ratios, which were deemed integral tools for measuring the financial performance of deposit-accepting banks. The secondary data were analysed within a defined framework of two distinct phases: the pre- and post-downsizing phases. A key statistical tool, the paired sample t-test, was applied to determine whether there were statistically significant differences in the ratios between the two timeframes. The analysis demonstrated that there were statistically significant differences between the pre- and post-downsizing ratios of loans per employee and deposits per employee. In contrast, no statistically significant difference was found in the return on assets, return on equity, loans to assets, and non-performing loans to loans ratios. On the basis of this analysis, the study concluded that downsized large Portuguese banks largely failed to achieve their projected financial objectives.
[17] examined the relationship between downsizing and the financial performance of Turkish banks. The scope of the study is deposit-accepting banks operating in Turkey, where there was a great decrease in the number of employees working at banks between 2000 and 2003. In this study, the pre- and post-downsizing performance of the banks was measured using the Paired Samples T-Test. According to the hypothesis test results, there is no significant difference between the profitability of Turkish banks before and after downsizing. Four of the performance variables in the hypotheses did not reveal any significant relation between downsizing and performance. Turkish banks could not achieve the intended results by downsizing between 2000 and 2003.
METHODOLOGY
To explore the relationship between downsizing and the financial performance of banks in Nigeria, five commercial banks listed on the Nigerian Stock Exchange were chosen. [1] noted that, based on the literature, a firm is said to downsize if it decreases its workforce by 10% or more annually compared with the previous year. However, given the peculiarity of Nigeria as a developing country, the selected banks have rolled out major information and communication technology transaction infrastructure, resulting in a reduction in workforce. The actions of these banks have made citizens of the country see banking as the most insecure sector to work in. Furthermore, their annual reports and financial statements from 2010 to 2015 are available. The banks are Zenith Bank Plc, United Bank for Africa Plc, Guaranty Trust Bank Plc, First City Monument Bank Plc and Access Bank Nigeria Plc. The data for the period 2010 to 2015 were collected from the annual and financial reports of the banks as relevant. We compared the mean of the year downsizing took place and the year after downsizing using the paired sample T-Test of SPSS version 21, which computes the difference between the two variables for each case and tests whether the average difference is significantly different from zero [17]. To examine the relationship between the variables of interest, we applied panel data analysis. The Hausman Specification Test was conducted to determine the suitability of fixed versus random effect estimation.
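As an illustration of this paired comparison, the following is a minimal Python sketch with hypothetical return-on-assets figures; the study itself used SPSS version 21, and scipy's ttest_rel implements the same paired sample t-test:

```python
from scipy import stats

# Hypothetical return-on-assets values (%) for five banks:
# the year downsizing took place vs. the year after.
roa_downsizing_year = [2.1, 3.4, 4.0, 1.8, 2.9]
roa_year_after = [2.3, 3.1, 4.2, 1.5, 3.0]

# Paired sample t-test: is the mean difference significantly
# different from zero?
t_stat, p_value = stats.ttest_rel(roa_downsizing_year, roa_year_after)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# Decision rule used in the study: 5% significance level.
if p_value < 0.05:
    print("Significant difference between the two years")
else:
    print("No significant difference between the two years")
```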
Model Specification
To explore the relationship between downsizing and the financial performance of selected commercial banks in Nigeria, we developed a model based on the peculiarity of the Nigerian environment. Our expectation is that the reduction of the workforce by banks will have a positive relationship with financial performance, particularly with the implementation of the Treasury Single Account (TSA) by the federal government of Nigeria. This is in line with the adoption of downsizing as a corporate strategy. Furthermore, commercial banks in Nigeria are largely dependent on government funds. With the closure of all government agencies' accounts by the current administration, banks are left with only the deposits of their private or corporate customers; hence downsizing as a means of cutting down overheads and operating expenses. Our models are advanced as follows:

ROA_t = β0 + β1·DSW_t + μ_t
ROE_t = β0 + β1·DSW_t + μ_t

where ROA_t and ROE_t are return on assets and return on equity respectively in year t; β0 is the constant; β1 is the coefficient of downsizing; DSW_t is downsizing in year t; and μ_t is the error term in year t.
Note:
We measured downsizing by the number of workers retrenched/sacked during each year. In other words, it is the difference between the workforce in the previous year and the current year (for instance, the difference between the workforce in 2010 and 2011).
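For concreteness, a minimal sketch of estimating the model above is given below, assuming a hypothetical bank-year panel; it fits the pooled OLS version with statsmodels, while the fixed and random effect estimations and the Hausman test reported in the study are omitted from this sketch:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical bank-year panel: downsizing (workers retrenched
# relative to the previous year) and return on assets (%).
panel = pd.DataFrame({
    "DSW": [120, 85, 0, 240, 60, 15],
    "ROA": [2.4, 3.1, 3.8, 1.9, 2.7, 3.3],
})

# Pooled OLS: ROA_t = b0 + b1 * DSW_t + u_t
X = sm.add_constant(panel["DSW"])
result = sm.OLS(panel["ROA"], X).fit()
print(result.summary())
```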
Hypothesis
On the premise of the objective of this study, we tested the following hypothesis: the level of downsizing has a positive and significant relationship with return on assets/return on equity.
Decision criteria
If the p-value as determined by the suitability of fixed or random effect estimation is less than 0.05, the null hypothesis is rejected. On the other hand, if the p-value is greater than 0.05, the null hypothesis is accepted.
Data Presentation
The data of the banks that downsized their workforce from 2010 to 2015 are presented in the tables below.
Discussion of Findings
To test the difference between the return on assets of the selected commercial banks in the year downsizing took place and the year after downsizing, the paired sample T-Test was applied. The result in Table 4.4a indicates that there is no significant difference between the return on assets of banks before and after downsizing. This is in line with [16], a study conducted in the context of Portugal. Table 4.4b shows the pooled OLS, fixed and random effect estimations. The Hausman Specification Test discloses the suitability of the random effect over the fixed effect estimation, as the p-value is insignificant at the 5% level of significance. The negative relationship agrees with [12,14] and [15], which found that downsizing is negatively related to financial performance in Germany, Spain and China. It can be inferred from Table 4.4b that the selected commercial banks failed to achieve their objective of increasing their overall assets level by way of downsizing their workforce. The Adjusted R-squared reveals that -3.55 of the variation in return on assets was a result of the downsizing exercise of banks over the period of the study. In essence, downsizing has not contributed positively to growth in return on assets of banks in Nigeria. However, the variation in return on assets attributed to downsizing is not statistically significant. Furthermore, the Durbin-Watson value of 1.92 is quite close to the benchmark of 2.0; thus, the model is free from autocorrelation problems. From the paired sample T-Test in Table 4.5a, we also observed that there is no significant difference between the return on equity before and after downsizing. The p-value of the Hausman Specification Test in Table 4.5b prefers the random effect to the fixed effect estimation. Downsizing has a positive relationship with return on equity; however, this is not statistically significant at 5%. A unit increase in downsizing increases return on equity by 0.97 units. This agrees with [13], which found that downsizing improves shareholders' wealth in Finland. This positive relationship between downsizing and return on equity suggests that downsizing is a good corporate strategy for maximizing the wealth of shareholders, in line with the aim of downsizing.
The Adjusted R-squared discloses that -3.53 of the changes in return on equity was explained by downsizing as a corporate strategy over the period of the study. Table 4.4b shows that there is no significant and positive relationship between the level of downsizing and return on assets. Thus, the null hypothesis that the level of downsizing has a positive and significant relationship with return on assets would not be rejected. Table 4.5b shows that there is a positive but insignificant relationship between the level of downsizing and return on equity.
Conclusion
The relationship between downsizing and the financial performance of selected banks in Nigeria was explored in this study. The application of the paired sample T-Test demonstrates that there is no significant difference between the financial performance of banks before and after downsizing. The panel analysis reveals that downsizing does not increase the assets base of banks. Furthermore, there is no statistical evidence that downsizing has the capability of increasing the wealth of shareholders. Downsizing not affecting the return on assets and return on equity may be due to the global financial meltdown within the period covered by this study. Other factors, such as banks' corporate governance, non-adherence to bank lending principles, the withdrawal of government funds from the banks via the Treasury Single Account (TSA) implementation with effect from September 2015, and the regulatory authority's intervention through the monthly monetary policy committee's determination of the monetary policy rate, may also contribute to the insignificant explanation of return on assets and return on equity by downsizing. Downsizing in Nigeria is therefore not really a welcome corporate strategy for growth. A few individual firms may benefit from it, in view of the fact that it has helped raise profit over the years; however, the percentage of beneficiaries is quite negligible when compared with the entire economy. For example, the downsizing in 2004/2005 by Fidelity Bank Nigeria Plc was responsible for the reduction in the bank's overheads over the years, with the resultant effect being a marginal rise in profit.
Policy Implication
Downsizing should be discouraged in Nigeria in view of its inability to spur expansion in the assets base of banks, coupled with its obvious economic and social problems. Although the Nigeria Labour Congress (NLC) has tried to fight it in the past, and recently the federal government issued a directive to banks to suspend further downsizing on 4th June 2016, these efforts have limitations as a result of labour laws. Regarding its positive relationship with return on equity (maximizing shareholders' wealth), it should be noted that Nigeria does not have the opportunities and enabling competitive environment of its counterparts in developed economies. New jobs are rarely created to take care of displaced people; even young school leavers do not find jobs many years after graduating.
Limitation
This research has some limitations that should be addressed in future studies. One of the limitations is the scope and period covered. It would be fascinating if all the commercial banks operating in the country were captured and the time frame extended to permit a larger number of observations, which would make the results more robust and reliable. The result of this study should not be viewed as conclusive empirical evidence, but rather as additional motivation for further research in the area with regard to other financial performance indices, such as dividend per share and earnings per share, among others.
"Business",
"Economics"
] |
Trend Prediction of Event Popularity from Microblogs
Owing to the rapid development of the Internet and the rise of the big data era, microblog has become the main means for people to spread and obtain information. If people can accurately predict the development trend of a microblog event, it will be of great significance for the government to carry out public relations activities for network event supervision and to guide the development of microblog events reasonably in a network crisis. This paper presents effective solutions for the trend prediction of microblog events' popularity. Firstly, by selecting the influence factors and quantifying the weight of each factor with an information entropy algorithm, the microblog event popularity is modeled. Secondly, singular spectrum analysis is carried out to decompose and reconstruct the time series of the popularity of microblog events. Then, the box chart method is used to divide the popularity of microblog events into various trend spaces. In addition, this paper exploits the Bi-LSTM model to deal with trend prediction with a sequence to label model. Finally, a comparative experimental analysis is carried out on two real data sets crawled from the Sina Weibo platform. Compared to three comparative methods, the experimental results show that our proposal improves F1-score by up to 39%.
Introduction
With the rapid development of the Internet and the rise of the era of big data, microblog has become the main means for people to spread and obtain information. Timely and accurate prediction of the evolution trend of microblog events can help the government accurately evaluate the development trend of microblog events and provide effective decision support for the formulation of public event guidance strategies [1]. Generally, hot events on microblog platforms are often defined as the focus of public discussion and concern, which are the concentrated embodiment of netizens' interests and emotions. When an event is exposed in social network, the upsurge of the Internet media and netizens' discussion about the event on social media will affect the popularity of the events in real time. In addition, when social users exchange information with each other, they influence and are influenced by others [2]. Thus, social networks provide a large amount of real-time and continuous data for exploring the evolution of microblog events [3]. However, due to the non-linear and multivariate characteristics of microblog data, this paper has to solve two challenging problems for microblog event popularity prediction.
(1) How can the weight of each factor's impact on the popularity of microblog events be evaluated? In the previous work on microblog events' popularity prediction, most of them take one indicator as the popularity of microblog event for prediction. However, for multivariate microblog data, univariate analysis cannot well reflect the systematic change of the trend of microblog popularity. Obviously, various factors have different impacts on the popularity of microblog events, so they should play different roles in popularity prediction modeling. Therefore, it is very necessary to weigh the influence of each factor on the event popularity.
(2) How can the future trend of nonlinear time series be predicted? The popularity evolution of microblog events tends to be a nonlinear and irregular time series. Therefore, to solve the popularity trend prediction, it is necessary to extract the trend components of popularity time series. However, statistical learning methods or traditional neural network methods have a poor prediction effect on nonlinear data. Consequently, this paper needs to design an effective prediction method to denoise the time series data and extract the different components for trend prediction.
Aiming at these issues, this paper presents an effective approach to predicting the trend of microblog events' popularity. Firstly, the popularity time series of microblog events is modeled by comprehensive weighting. Secondly, the information entropy algorithm is used to measure the effects of various factors on the popularity of microblog events. Meanwhile, aiming to explore and predict the evolution trend of microblog events' popularity, this paper transforms the changes between every two time nodes in the time series into state features. Then, the box-plot method is used to divide the popularity of the microblog event into various trend spaces. Finally, this paper utilizes the Bi-LSTM model to solve the trend prediction of microblog events' popularity by learning the long-term dependence between the time steps of popularity time series.
In summary, the following contributions are made in this paper.
(1) Aiming to deal with time series modeling for nonlinear data, this paper leverages an effective model based on series data analysis to extract trend components of popularity time series. Specifically, by selecting the influence factors and quantifying the weight of each factor with information entropy algorithm, the microblog event popularity is modeled. And then, the singular spectrum analysis is carried out to decompose and reconstruct the time series of the popularity of microblog event (Section 3).
(2) This paper exploits the learning method to deal with trend prediction with a sequence to label model. Firstly, this paper models the Bi-LSTM network using past and future data from time series. Secondly, by learning the long-term dependence between the time steps of the popularity time series, the future trend prediction of microblog events is solved. Compared with the traditional LSTM model, our proposal based on Bi-LSTM network has better prediction performance (Section 3).
(3) This paper conducts experiments on a real microblog dataset from the Sina Weibo platform. Compared with three general-purpose algorithms for popularity prediction, the experimental results show that our approach achieves the best performance compared to its competitors, which provides a new solution to the trend prediction for event evolution on microblog platforms (Section 4).
Related Work
Scientific research on the evolution trend of microblog events can effectively monitor the development of event popularity at all stages [4][5][6], which is of great significance to the supervision of network opinion. At present, existing work can be divided into the following two aspects: event propagation research and event trend prediction research.
In order to reveal the propagation mechanism and the evolution law of microblog events, the pioneering work [7] carried out feature extraction on Weibo data and developed an outlier knowledge management framework for dealing with public events. Compared with traditional methods, the graph theory-based approach performed well in modeling the interaction between users [8]. Meanwhile, it is worth mentioning that text and sentiment analysis are often used to analyze netizens' attitudes when they disseminate information. For example, the LSTM model [9] was exploited to capture the features of social contexts and integrate them into text features. Additionally, Xu et al. [10] proposed to use a convolutional neural network (CNN) combined with word2vec technology to establish an emotion classification model. Moreover, among all text analysis methods, the LDA model was widely applied to the discovery of microblog text topics [11,12].
Regarding the studies on event evolution trend prediction, the previous work is mainly divided into dynamic models and machine learning models. In Yin's work [13], a modified epidemic model was proposed to predict the dynamics of topic reading, which represents one indicator of event popularity. In their further work [14], considering both public contact and participation on microblog, they proposed the Susceptible-Reading-Forwarding-Immune (SRFI) model to predict the overall microblog event trend in all stages. Meanwhile, Pan et al. [15] developed a Stochastic Differential Equation (SDE) to describe the observed collective patterns of online microblogging behavior and predict the Sina Weibo volume data. On the other side, for machine learning-based methods, the BP neural network [16,17] was applied to predict the trend of microblog events in early studies. Aimed at the sudden and non-linear characteristics of microblog events, timely grasp of the information increment of microblog events plays a key role in measuring the event evolution trend, which can be better solved by using the LSTM network model [18]. Feng et al. [19] introduced the LSTM model to analyze the sequence information and complete the prediction of the number of blogs for a certain event in a period. Moreover, in Mughees's work [20], the bidirectional LSTM network model is proven to be not only capable of learning long-term dependencies between the time steps of sequence data, but also able to effectively use past and future information for prediction. Consequently, thanks to its ability to process time series data, the Bi-LSTM model has been successfully applied in many time series prediction tasks [21][22][23][24][25][26][27]. Inspired by this idea, this paper aims to exploit the learning method based on the Bi-LSTM model to deal with event trend prediction with a sequence to label model.
Generally, the popularity of microblog events is affected by many factors. However, to our knowledge, so far there are very few works towards microblog events prediction considering multidimensional influential factors to model popularity index. Meanwhile, the dynamic change of microblog event popularity is easy to be nonlinear and irregular in a period of time; nevertheless, most of the previous works validate their methods and experiment only on one dataset, which may limit the applicability of the model especially for nonlinear time series data. This paper presents effective solutions to deal with trend prediction of microblog events' popularity. More specifically, by selecting the influence factors and quantifying the weight of each factor with information entropy algorithm, the microblog event popularity is modeled. And then, the singular spectrum analysis is carried out to decompose and reconstruct the time series of the popularity of the microblog event. Finally, this paper utilizes the learning method to deal with trend prediction with a sequence to label model.
Methodology
In order to solve the issue of trend prediction of event popularity from microblogs, this paper presents an effective predictive algorithm based on the Bi-LSTM network model. Figure 1 shows the framework of our predictive system. Specifically, the system framework has three main modules, namely, data acquisition, data processing and modeling. Here, the modeling module contains the main core idea of this paper. For the data acquisition module, firstly, this paper uses the method of simulated login to enter the microblog platform, namely, the Sina Weibo platform. Then the parameters and time window for retrieval are set using the advanced search function. In the end, our proposal uses Python to write a web crawler according to the functional requirements to crawl the microblog information derived from the keywords. In the data processing stage, this paper screened the results collected by the crawler to eliminate repeated and missing items. On the basis of the first two steps, this paper designs a model framework to solve the trend prediction of microblog event popularity. Compared to existing methods, we propose to model the microblog event popularity by selecting the influence factors and quantifying the weight of each factor with an information entropy algorithm, in order to deal with time series modeling for nonlinear data. Meanwhile, aimed at the issue of the trend prediction of nonlinear time series, singular spectrum analysis is carried out to decompose and reconstruct the time series of the popularity of a microblog event, while a learning method to deal with trend prediction with a sequence to label model is proposed in our proposal. The details will be discussed in the following four subsections.
Popularity Index Modeling and Weighting
Regarding multivariate microblog data, univariate analysis cannot well reflect the systematic change of the trend of microblog popularity. Additionally, various factors have different impacts on the popularity of microblog events, so they should play different roles in popularity prediction modeling. Meanwhile, for microblog messages, those with more user interactions have greater social popularity. Therefore, the post number, forwarding number, commenting number and the total number of likes are used to model the popularity index. In the paper, the information entropy algorithm is used to assign the weight of popularity indicators. Information entropy can be used to measure the dispersion of an index. The greater the dispersion of an index, the greater the impact of the index on the comprehensive evaluation, that is, the greater the weight.
Specifically, this paper takes the number of days of each event as the evaluation individual and the above four indicators as evaluation index. Here, in order to eliminate the dimensional differences between the evaluation indexes, this paper needs to preprocess origin data through benefit (positive) index calculation and cost (negative) index calculation, defined by Formulas (1) and (2) respectively.
The symbols max(x_{i,j}) and min(x_{i,j}) represent the maximum and minimum of the evaluation index, respectively. Then, the proportion of the evaluation object can be calculated by Formula (3).
Additionally, the entropy of the evaluation index is defined by Formula (4), where n represents the number of evaluation indexes. Finally, the weight of each indicator is represented by Formula (5). Consequently, our proposal can obtain the time series of event popularity by index modeling and weighting.
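To make the weighting procedure concrete, the following is a minimal Python sketch, assuming a hypothetical day-by-indicator matrix and treating all four indicators as benefit indexes; the normalizing constant in the entropy step (the logarithm of the number of days) follows the standard entropy weight method and is an assumption where the text does not fully specify Formula (4):

```python
import numpy as np

# Hypothetical data: rows are days of an event, columns are the four
# popularity indicators (posts, forwards, comments, likes).
X = np.array([
    [120, 340, 210,  980],
    [ 95, 280, 190,  760],
    [200, 610, 450, 1500],
    [ 60, 150, 120,  400],
], dtype=float)

m, n = X.shape  # m days, n indicators

# Formula (1): benefit-index min-max normalization.
Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Formula (3): proportion of each day under each indicator.
P = Z / Z.sum(axis=0)

# Formula (4): entropy of each indicator (0*log(0) treated as 0).
eps = 1e-12
E = -(P * np.log(P + eps)).sum(axis=0) / np.log(m)

# Formula (5): weights from the redundancy (1 - entropy).
w = (1 - E) / (1 - E).sum()

# Weighted popularity index per day, forming the time series.
popularity = Z @ w
print("weights:", w)
print("popularity series:", popularity)
```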
Popularity Time Series Denoising
The popularity evolution of microblog events tends to be a nonlinear and irregular time series. Therefore, to solve the popularity trend prediction, it is necessary to extract the trend components of popularity time series. Specifically, this paper needs to denoise the time series data and extract the different components for trend prediction. Our proposal decomposes and reconstructs the trajectory matrix of the time series to solve time series denoising through singular spectrum analysis [28]. Here, there are four steps for time series denoising.
(1) Embedding. Suppose that the time series of event popularity from microblog is represented by X = [x_1, x_2, ..., x_m]^T; then the paper transforms it into a p-dimensional trajectory matrix Y = [y_1, y_2, ..., y_p]^T, where 2 ≤ p ≤ m. Consequently, the trajectory matrix can be defined by Formula (6).
(2) Decomposition. Firstly, the trajectory matrix Y obtained above is decomposed into d components, where d is the rank of matrix Y. Then, the paper can get the parameter group (λ_i, U_i, V_i) of the matrix YY^T by using the Singular Value Decomposition (SVD) algorithm. Here, λ_i represents the singular value of the matrix, and the symbols U_i and V_i are used to define the left eigenvector and the right eigenvector, respectively. Subsequently, the trajectory matrix Y and its components Y_i are defined by the following Formulas (7) and (8).
(3) Grouping. In the paper, k of the d components are selected as the popularity trend components, denoted as I = {I_1, ..., I_k}. Meanwhile, the valuable extraction component Y_I of the time series is represented by Y_I = Y_{I_1} + Y_{I_2} + ... + Y_{I_k}. Therefore, the other d - k decomposition components are considered as the noise of the time series.
(4) Reconstitution
By means of diagonal averaging, the valuable trend component Y_I formed in the grouping stage can be converted into the time series x_component = (x_{I_1}, ..., x_{I_k}). Consequently, the original time series X is represented as the sum of the component series data x_component and the noise series data x_noise.
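A compact sketch of the four denoising steps is given below; the window length p, the number of retained components k and the series itself are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def ssa_denoise(x, p=10, k=2):
    """SSA denoising: embedding, SVD, grouping, diagonal averaging."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    q = m - p + 1
    # (1) Embedding: p x q Hankel trajectory matrix (Formula (6)).
    Y = np.column_stack([x[j:j + p] for j in range(q)])
    # (2) Decomposition: SVD of the trajectory matrix (Formulas (7)-(8)).
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # (3) Grouping: keep the k leading rank-one components as the trend;
    #     the remaining d - k components are treated as noise.
    Y_I = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(k))
    # (4) Reconstitution: diagonal averaging back to a series (Formula (9)).
    trend = np.zeros(m)
    counts = np.zeros(m)
    for i in range(p):
        for j in range(q):
            trend[i + j] += Y_I[i, j]
            counts[i + j] += 1
    return trend / counts

# Illustrative noisy popularity series.
rng = np.random.default_rng(0)
t = np.arange(120)
series = np.sin(t / 10.0) + 0.3 * rng.standard_normal(120)
denoised = ssa_denoise(series, p=12, k=2)
```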
Popularity Trend State Partition
In order to explore and predict the popularity trend of events from microblog, this paper converts the change between each time node and the previous node into state characteristics. Firstly, this paper needs to difference the denoised time series and calculate the state trend value H(t) by Formula (10).
However, the same data set may cover multiple hot events, and different hot events may cause different responses from netizens, which may eventually lead to a large deviation in the popularity of multiple events derived from one period of the dataset. In order to explore the systematic change of event popularity, the box plot method is used to divide the popularity trend into various states. The box plot algorithm defines a standard for identifying outliers, which are usually defined as values less than Q_L - 1.5IQR or greater than Q_U + 1.5IQR, where the symbols Q_L and Q_U represent the lower quartile and the upper quartile respectively, and the difference between Q_U and Q_L is defined as IQR. The box plot method does not require data to follow a certain distribution, so it is reasonable to use it to judge the state change of event popularity. The structure of the box plot is shown in Figure 2.
Actually, the popularity trend is divided into four states, defined by Formula (11), according to the box plot distribution.
In the paper, the lower bound and the upper bound of the box plot are set to Q_L - 1.5IQR and Q_U + 1.5IQR, respectively. And the paper uses H_max and H_min to represent the maximum and minimum values of the event popularity trend correspondingly.
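The state partition can be sketched as follows, assuming that Formula (10) is the first difference of the denoised series and that Formula (11) assigns the four states by the box-plot quartiles and fences; the exact boundaries of the four states are an assumption based on the description above:

```python
import numpy as np

def trend_states(denoised):
    # Formula (10), assumed: state trend value as the first difference.
    H = np.diff(denoised)

    # Box-plot statistics of the trend values.
    Q_L, Q_U = np.percentile(H, [25, 75])
    IQR = Q_U - Q_L
    lower = Q_L - 1.5 * IQR  # lower bound of the box plot

    # Formula (11), assumed: four trend states from the box-plot bands.
    states = np.empty(len(H), dtype=int)
    states[H < lower] = 0                 # sharp decline (low outlier)
    states[(H >= lower) & (H < Q_L)] = 1  # moderate decline
    states[(H >= Q_L) & (H <= Q_U)] = 2   # steady
    states[H > Q_U] = 3                   # rise (including high outliers)
    return states
```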
Popularity Trend Prediction
Firstly, before modeling the Bi-LSTM network for prediction, this paper needs to build the basic model, the LSTM network, in order to solve sequence-to-label modeling for microblog data. In a memory block of the LSTM network model, there are one or more self-connected memory cells and three multiplication units: the input unit, the output unit and the forgetting unit. The input unit is mainly used to store the current information, while the output unit is used to output the state trend changes of microblog events. Meanwhile, the forgetting unit is designed to filter valuable information and selectively forget certain past information. The network structure of the LSTM model is shown in Figure 3.

At each time node, the event popularity is considered as the input sequence, and the state trend data is represented as the output. Meanwhile, the input unit, output unit and forgetting unit correspond to i_t, o_t and f_t, respectively. Specifically, the forgetting unit takes the current input and the previous hidden-layer-state output as input, and uses the sigmoid activation function to control which information the unit discards. Finally, the value of the state trend, between 0 and 1, can be calculated by Formula (12).
In Formula (12), the weight matrix that maps the hidden layer input to the forgetting unit is denoted as W_f, and U_f represents the weight matrix that connects the output state of the previous time node to the forgetting unit. Additionally, this paper uses the symbols b_f and δ_g to denote the offset vector and the activation function, respectively.
After the information passes through the forgetting structure, the LSTM model will consider which new information to add to the unit. The added information is jointly controlled by the hidden layer state output of the previous time node h_{t-1} and the current input x_t. Through the activation function tanh, the new state output C_t is obtained. And a weight between 0 and 1 is added to each component of C_t to control the amount of newly added information by Formulas (13) and (14).
W_i and W_C refer to weight matrices, which map the hidden layer input to the input unit and the input unit state, respectively. U_i and U_C refer to weight matrices, connected to the previous unit, which map the output states to the input unit and the input unit state. And the deviation vectors are defined as b_i and b_C. In addition, when information is forgotten by the forgetting unit, the new unit state is updated by Formula (15).
Finally, the state of the output unit is calculated according to the following two Formulas (16) and (17).
Here, W_o is the weight matrix of the hidden layer to the output unit and U_o is the weight matrix of the output state of the previous unit to the output unit. Meanwhile, this paper uses the symbol b_o to represent the deviation matrix.
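Since the display equations were lost in extraction, the following is a reconstruction of Formulas (12)-(17) based on the symbol definitions above; they match the standard LSTM gate equations (δ_g is the sigmoid activation, ⊙ the element-wise product, and C̃_t denotes the candidate state before the update in Formula (15)):

f_t = δ_g(W_f x_t + U_f h_{t-1} + b_f)   (12)
i_t = δ_g(W_i x_t + U_i h_{t-1} + b_i)   (13)
C̃_t = tanh(W_C x_t + U_C h_{t-1} + b_C)   (14)
C_t = f_t ⊙ C_{t-1} + i_t ⊙ C̃_t   (15)
o_t = δ_g(W_o x_t + U_o h_{t-1} + b_o)   (16)
h_t = o_t ⊙ tanh(C_t)   (17)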
Based on the LSTM modeling structure, this paper aims to construct the Bi-LSTM network to explore and predict the hidden layer state of the popularity time series. More specifically, the forward LSTM and the reverse LSTM can obtain the past information and the future information in the time series, respectively. The following three Formulas (18)-(20) reflect the mechanism of the Bi-LSTM model in our work.
The symbol T refers to the time span of the microblog event. And the structure of the Bi-LSTM network is shown in Figure 4.
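A minimal sketch of the sequence-to-label model described by Formulas (18)-(20) is given below, assuming Keras; the window length, layer size and the randomly generated training data are illustrative assumptions. The Bidirectional wrapper runs a forward and a backward LSTM and concatenates their final hidden states before the softmax over the four trend states:

```python
import numpy as np
import tensorflow as tf

WINDOW = 7    # assumed number of past time steps fed to the model
N_STATES = 4  # trend states from the box-plot partition

# Hypothetical training data: sliding windows of the popularity series
# and the trend state of the step that follows each window.
X_train = np.random.rand(200, WINDOW, 1).astype("float32")
y_train = np.random.randint(0, N_STATES, size=200)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    # Forward + backward LSTM, final hidden states concatenated.
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(N_STATES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, batch_size=16, verbose=0)
```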
Experiment
In this section, the performance evaluation of our algorithms is discussed. This paper aims to measure the effectiveness of our proposal for trend prediction of event popularity from microblogs.
Dataset
This paper uses the crawler to collect data by means of daily statistics with "genetically modified food" as the search keyword from the Sina Weibo platform. After eliminating advertisements, repeated posts and irrelevant microblogs, a total of 28,326 Weibo messages from 1 January 2017 to 9 July 2019 are retrieved. The details of the Weibo posts are shown in Table 1. In order to verify the effectiveness and universality of our approach, the paper selects two representative events from the Weibo posts as datasets to build the predictive model. The specific datasets are shown in Table 2 (one of the events, for example, stems from a famous blogger posting that the harm of genetically modified food would be written into college textbooks). Table 3 gives a sample of dataset 1, which has four popularity factors and spans three days. In order to train the model, this paper needs to divide each data set into a training set and a test set. The model is trained on about 83% of the time series length and evaluated on the remaining 17%. More specifically, the paper takes the microblog messages of 94 days from 4 July 2017 to 15 October 2017 in dataset 1 as the training set, and the remaining 19 days as the test set. In dataset 2, a total of 116 days of microblog messages from 4 January 2019 to 29 April 2019 are used as the training set, and the remaining 24 days of posts are used as the test set.
Evaluation Metrics
Aiming to evaluate the prediction accuracy of the proposed model, this paper performs model evaluations in terms of different metrics. More specifically, the evaluation metrics are defined as follows: (1) Precision; (2) Recall; (3) F1-score. In the paper, TP represents the number of samples that are labeled as positive samples and also classified as positive samples; FP refers to the number of samples labeled as negative samples but classified as positive samples; FN is the number of samples that are labeled as positive samples but classified as negative samples. However, this paper mainly focuses on the prediction accuracy of the popularity trend of microblog events. Therefore, this paper chooses three specific metrics that pay more attention to accuracy to evaluate the prediction model: P_micro, P_weight and F_weight.
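These three metrics can be computed with scikit-learn as sketched below, assuming hypothetical true and predicted trend-state labels; the "micro" and "weighted" averaging options correspond to P_micro, P_weight and F_weight:

```python
from sklearn.metrics import precision_score, f1_score

# Hypothetical true and predicted trend states (four classes).
y_true = [0, 2, 2, 3, 1, 2, 0, 3, 2, 1]
y_pred = [0, 2, 1, 3, 1, 2, 0, 2, 2, 1]

p_micro = precision_score(y_true, y_pred, average="micro")
p_weight = precision_score(y_true, y_pred, average="weighted")
f_weight = f1_score(y_true, y_pred, average="weighted")
print(p_micro, p_weight, f_weight)
```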
Results of Event Popularity Time Series Construction
This paper conducts experiments for popularity index modeling and weighting on the two datasets. Specifically, according to Formulas (1)-(5) in Section 3.1, the paper uses the information entropy model to measure the microblog event popularity, and then constructs the time series of event popularity, as shown in Figure 5.
Results of Popularity Time Series Denoising
According to the observed changes of time series on the two data sets in Figure 5, it is found that the two time series have the characteristics of chaos and nonlinearity. Therefore, the trend components of the two time-series need to be extracted to effectively eliminate the further influence of noise on the subsequent prediction modeling and improve the prediction accuracy.
Firstly, the data set is divided into a training set and a test set. Then, the singular spectrum analysis method is applied to the two data sets respectively on the training set. After component extraction and series denoising according to Formulas (6)-(9) in Section 3.2, the new time series is reconstructed. The differences between the original data and the denoised data are shown in Figure 6.
Results of Popularity Trend Prediction
This paper takes the popularity of microblog events as input and the changed state, measured by the box plot algorithm, as output to build the Bi-LSTM network model. For the evaluation of predictive performance, the paper compares our proposal with three existing methods, including the BP neural network (BPNN) [16], the Particle Swarm Optimization based Support Vector Machine model (PSO-SVM) [29] and the Long Short-Term Memory network (LSTM) [18]. Meanwhile, the model effectiveness is measured in terms of the metrics P_micro, P_weight and F_weight.
The experimental results shown in Figures 7 and 8 indicate that our model achieves better results than the other algorithms in the task of popularity trend prediction. In particular, compared with the basic LSTM network, the proposed model is characterized by component extraction on the basis of LSTM to achieve the effect of denoising. In addition, considering the influence of future information on event popularity, the model forms a bidirectional LSTM, so it has better performance than the basic LSTM model.
Conclusions
This paper proposes an effective approach to deal with popularity trend prediction for microblog events based on SSA and Bi-LSTM model. Firstly, the algorithm of singular spectrum analysis is used to extract trend components of popularity time series from Weibo posts. Then, after time series denoising by SSA model, the event popularity is divided into different trend states by using the box plot analysis. Finally, the paper exploits the Bi-LSTM model to deal with popularity trend prediction with a sequence to label model. Meanwhile, the comparative experiments on two real datasets with three existing methods are conducted. The experimental results show that our model performs best on both datasets with respect to various metrics, which demonstrates the superiority of our proposal. In the future, this paper will explore how to follow some KDD standards, e.g., CRISP-DM standard, to optimize the process of the system. Meanwhile, the other language datasets will also be considered to verify and improve the system performance on cross-language.
"Computer Science"
] |
Sialic Acid-Siglec Axis as Molecular Checkpoints Targeting of Immune System: Smart Players in Pathology and Conventional Therapy
The sialic acid-based molecular mimicry in pathogens and malignant cells is a regulatory mechanism that leads to cross-reactivity with host antigens, resulting in suppression and tolerance in the immune system. The interplay between sialoglycans and immunoregulatory Siglec receptors promotes the hiding of foreign antigens and the impairment of immunosurveillance. Therefore, molecular targeting of immune checkpoints, including the sialic acid-Siglec axis, is a promising new field in the therapy of inflammatory disorders and cancer. However, the conventional drugs used in regular management can interfere with the glycome machinery and exert a divergent effect on immune controlling systems. Here, we focus on the known effects of standard therapies on the sialoglycan-Siglec checkpoint and their importance in diagnosis, prediction, and clinical outcomes.
Introduction
The immune homeostasis is a complex and precise mechanism that underlies tissue environment control, regeneration, and repair processes, as well as the surveillance of pathogens and malignancies [1,2]. All events controlled by the immune system depend on the cellular interactions that maintain the balance between tolerance and defense processes. The communication between host cells and their environment recruits the cellular and molecular mechanisms responsible for the recognition, adhesion, and secretory activity [3]. Recent advances in immunology show that targeting the molecules underlying immune homeostasis is a promising therapeutic tool for inflammation, autoimmunity, cancer, and neurodegeneration [4,5]. The immune checkpoints are the system of regulatory proteins that play a critical role in self-tolerance processes and prevent autoimmune reactions against self-produced antigens [6]. The interplay between stimulatory or inhibitory checkpoint molecules with their specific ligands modulates cellular functions to avoid immune injury. However, the mechanisms underlying these processes are not fully understood. Moreover, the molecular mimicry of checkpoint systems by pathogens leads to cross-reactivity with host antigens resulting in suppression and tolerance in the immune system [7][8][9]. The clinical data support that the prolonged exposure to bacterial and viral antigens leads to overexpression of several checkpoint receptors, e.g., programmed cell death protein 1 (PD-1) and cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) in effector lymphocytes that provide negative signals and induce reversible exhaustion state. The anti-PD-1 antibody-based directed therapies have been shown to have a beneficial effect on malignant cell clearance [10][11][12][13]. The PD-L1 and CTLA-4 targeting have been introduced successfully into oncological practice, and the combination immunotherapy and multiple immunomodulatory targets open promising therapeutic strategies [14].
A growing body of evidence supports the role of sialoglycans at various clinical stages of immune-based pathologies [15,16]. Since sialylated glycans are involved in many biological processes, their frequently altered expression, as well as recognition by individual sialic acid-binding immunoglobulin-like lectins (Siglecs), can be related to the increased progression of the pathological processes [17][18][19]. This review briefly focuses on the engagement of the sialic acid-Siglec axis in some pathophysiological processes and its importance in routine clinical practice.
Sialic Acid and Immune Recognition
More than 50 nine-carbon monosaccharides derived from neuraminic acid belong to the family of sialic acids, among which N-acetyl-5-neuraminic acid, also called sialic acid (SA, Neu5Ac, NANA), is the most common form found in cell membrane glycoproteins and body fluids [20]. Sialic acid is ubiquitously expressed, typically at the terminal position of glycoproteins and lipids, in the glycosylation process, resulting in co-translational and posttranslational modifications of approximately 80% of cell proteins [21]. Sialylation, as the final stage of glycosylation, is based on the balance between the expression and activity of the sialyltransferases and sialidases involved in the decoration of sugar chains, on the sialic acid precursors contained in nutrient resources, and on the expression of several metabolic enzymes implicated in the synthesis and conversion of sialic acid molecules. The attachment of sialic acid enhances the complexity of the glycosylation processes and results in wide microheterogeneity of glycoconjugates, which can be used to predict the occurrence of pathology and for diagnosis and therapy monitoring [22,23]. In contrast to the stable and reproducible glycosylation pattern under normal conditions, an imbalance of the sialylation-processing enzymes leads to dramatic differences in sialic acid expression. This is of particular importance in the context of the immune recognition processes underlying chronic inflammatory diseases and immune tolerance in cancer [24][25][26][27]. The biological recognition processes are closely linked to the biological function of sialic acids and include the regulation of adhesion arising from cell-cell and cell-extracellular matrix (ECM) interactions [28,29]. Binding of specific membrane sialoglycoproteins is the first step in the adsorption of pathogens on host cell membranes and the further colonization of tissues and organs. This process has been confirmed in bacterial (Escherichia coli, Streptococcus suis), viral (influenza, Cardiovirus, Paramyxovirus), and protozoan (Plasmodium falciparum) infections [30][31][32][33][34][35]. Sialoglycans, especially sialo-Lewis a,b,x,y epitopes, play a crucial role in the interaction with selectins, which form the molecular basis of the adhesion processes underlying the migration of immune cells to target organs through the vascular endothelium and outside the circulatory system. The sialic acid-mediated negative charge on membranes reduces the mutual adhesiveness of cells, which underlies the migration of highly sialylated cancer cells in the metastatic process [36]. In addition, aberrant sialic acids mask the underlying glycan structure, thereby avoiding recognition by other lectins such as galectins and C-type lectins [37]. The host's immune system, whose cells express sialo-Lewis antigens, does not produce specific antibodies and thus allows the invasion of sialo-Lewis-positive pathogens by way of molecular mimicry. Many malignancies use this mechanism to hide their epitopes, which inhibits the complement activation pathway to reduce immunogenicity and recruits plasma factor H to control the alternative complement pathway. Furthermore, sialic acid epitopes protect human colon mucins from clearance by liver receptors, including the hepatocyte asialoglycoprotein receptor (ASGPR), macrophage galactose lectins-1 and -2 (MGL-1, -2), the hyaluronic acid receptor for endocytosis, and scavenger receptors (SRs) [17,38,39].
Digestion with neuraminidase renders cells more immunogenic and makes the weaker antigenic sites more accessible. Loss of membrane sialic acid in lymphoid cells increases their migration to the liver and makes them more deformable and phagocytic [40]. Recent advances in glycoimmunology indicate that the interplay between cell membrane sialylated glycans and Siglec immune receptors constitutes a new checkpoint axis in the regulation of the immune system [41,42]. The human CD33-related Siglecs, as well as their mouse homologs, form a major subfamily of the Siglecs characterized by their specific distribution on immune cells and recognition of sugar products [43,44]. Differences in the structure of the intracellular domain of Siglecs determine the activating or suppressive signaling pathways responsible for the function of immune cells. Posttranslational glycosylation of cell adhesion molecules (CAMs) plays a pivotal role in regulating the cell proliferation, differentiation, migration, and survival that underlie ontogenetic development and cellular plasticity [45]. In the central nervous system (CNS), glycan-dependent cross-talk between neurons, glia, and microglia forms a balance between synapse formation, potentiation, and removal, thereby maintaining brain homeostasis by controlling tissue architecture, the microenvironment, and defense reactions [46,47]. Clinical observations, animal models, and in vitro co-culture systems have confirmed the significance of glycoconjugate sialylation in innate immunity and its relationship with development, cognition, regeneration, and aging [17,48]. In the brain, the sialylated glycocalyx is recognized by Siglec-expressing microglial cells that support normal brain wiring, as well as by various types of leukocytes infiltrating infected and/or damaged structures [49,50]. The polysialylated derivatives of neural CAMs (PSA-NCAMs) are known as specific ligands for the microglial Siglec-11 receptor, which transduces an immunosuppressive signal and inhibits several immune functions. The binding of PSA to the Siglec-11 receptor in a neuron-microglia co-culture system was closely associated with limited immune function. It may reflect a control mechanism, called cis-interaction, which prevents autoimmune processes in the healthy CNS [50]. The imbalance between sialidase and sialyltransferase activities, as a result of pathology or exposure to degenerative factors, disturbs the sialylation pattern and modulates the function of the "On" and "Off" signaling systems. Interestingly, enzymatic removal of sialic acid reduces neuritic density and the number of perikarya and induces changes in the morphology of microglia, manifested by transformation from the resting to the activated form [50]. In line with this observation, selective enzymatic removal of sialic acids attached by α2,3 and α2,6 linkages reduces the reactivity of the suppressive Siglec-F receptor protein to its ligands on the neuronal surface, which can be part of the mechanism of neuronal protection and homeostasis in the brain [50].
Sialic Acid-Siglec Checkpoint in Human Pathology
The molecular pattern of glycosylation has an essential role in biological recognition and could predict the involvement of the immune system in pathology initiation and progression. Recent advances in glycobiology have focused on the prognostic value of sialylated epitopes as markers of pathology [44,51]. Experimental models and clinical material referring to various human pathologies have demonstrated changes in the level of cell membrane sialoglycans. Sialylation processes promise to be a useful prognostic marker, a potential target for drug development, and an indicator in monitored therapy [17]. To date, serum total sialic acid as well as lipid- and protein-bound sialic acids have been of clinical interest for their importance as diagnostic markers of pathology. The assessment of differences in the level of free sialic acid and in the sialylation patterns of particular glycosaminoglycans is characterized by various methodological approaches in current glycoscience. The quantification of serum and plasma sialic acids by colorimetric, fluorimetric, and enzymatic methods has confirmed their significance as prognostic factors in clinical practice. However, multiple interferences from substances present in biological samples are a strong limitation for the routine use of these analyses [52]. Since immunological methods have been developed, specific monoclonal antibodies and labeled sialic acid-binding lectins are widely used in the evaluation of basement membrane sialic acid composition by electrophoretic and ELISA methods in studies of cancer biology [53,54]. The latest advances in this field are enriched by the development of mass spectrometry (MS) with high resolution and mass accuracy, which allows glycans to be analyzed in terms of structure [55,56]. For example, a recent analysis of the sialic acid linkages of the glycome of epithelial ovarian cancer (EOS) patients by matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry revealed significant differences in the α2,3-linked/α2,6-linked sialic acid ratio in EOS patients when compared to healthy individuals [57].
Most Siglecs participate in negative signal transduction resulting in the downregulation of the immune response; they are critical for self-tolerance processes and prevent autoimmune reactions against self-produced antigens [43]. Sialylated glycoconjugates belong to the self-associated molecular patterns (SAMPs) that bind to the individual immunoreceptor tyrosine-based inhibition motif (ITIM)-associated Siglec receptors presented on the same cell membranes and orchestrate inflammatory reactions within damaged tissues. Intriguingly, the pathogen-associated molecular patterns (PAMPs) have developed the ability to recognize both ITIM- and ITAM (immunoreceptor tyrosine-based activation motif)-associated Siglecs, which underlies the mechanisms of chronic inflammation and neurodegeneration, as well as impaired immune surveillance in pathogen infections and cancer invasion [19].
CNS Diseases
Although the sialic acid-Siglec checkpoint plays a role in brain function, it has not been widely studied there. There is increasing evidence that chronic stress exerts proinflammatory effects, which are associated with local activation of microglia, production of proinflammatory factors, neuronal atrophy, and increased expression of sialylated acute-phase proteins. The measurement of sialidase activity as a posttranslational indicator of glycoprotein remodeling revealed the pivotal role of PSA-NCAMs in chronic stress-induced cognitive disturbances [58]. Changes in PSA-NCAM expression in response to stress stimuli may reflect hippocampal atrophy during long-term exposure to corticosteroids, known as endocrine modulators in stress [59]. On the cellular level, polysialylated NCAMs are recruited in neuron-microglia interaction via Siglec-11 binding and ITIM-coupled signaling, and they restrict damage by immune cells during brain inflammation [60,61]. This scenario could reflect immune-controlling mechanisms in the brain after exposure to proinflammatory factors. In an animal model of systemic inflammation, intraperitoneal injection of lipopolysaccharide (LPS) caused significant changes in the sialylation pattern in the CNS. The elevated PSA-NCAM expression in the hippocampus was correlated with the intracellular level of inflammation mediators [62]. Besides the role of PSA-NCAMs as regulators of neural plasticity in the hippocampus, their engagement in compensatory and protective mechanisms during neurodegeneration has also been described. The presence of glycans, including α2,8-linked sialic acids, protects glycoconjugates against proteolysis and supports proper regeneration through the reestablishment of a crude topographic map of reinnervation [63]. The LPS-induced acute inflammation was not accompanied by altered expression of suppressive Siglec-F in either in vivo or in vitro studies, which can be interpreted as altered regulation of Siglec receptor expression at the respective stages of inflammation [62]. Siglec receptors that contain ITAM promote proinflammatory cellular activity in the acute phase, whereas receptors with the ITIM inhibitory domain (e.g., Siglec-F, -G) reduce the cytotoxicity of immune cells in the chronic phase of inflammation [64].
The cell-intrinsic mechanism involving Siglecs can be associated with divergent outcomes of pathology within the brain. Moreover, the CD33-mediated suppression of microglia seems to be regulated alternatively by the hypersialylation of proteins and lipids. The sialic acid-rich glycoconjugates on the surface of amyloid plaques, mimicking the cell surface glycocalyx, activate the Siglec-11 receptor and thereby switch on the "Off" signaling, which allows pathological structures to avoid the immune surveillance of microglia [65]. It has been shown that Siglec-3 (CD33) and CD33-related Siglecs, including Siglec-11, belong to the top-rated factors which may confer risk for Alzheimer's disease (AD) [66,67]. Given the ability of microglia to clear amyloid-β (Aβ), it seems that CD33-coupled signaling pathways can regulate their phagocytic potential. Postmortem analysis of the AD cortex evidenced an increased number of CD33-positive microglia, which was concomitantly linked with an elevated CD33 mRNA level [68]. Conversely, knocking out CD33 in an experimental mouse model of AD resulted in efficient phagocytosis of pathogenic Aβ by microglia and macrophages [69]. Therefore, the interactions between sialoglycans and Siglecs are a promising target for therapies based on antibodies with monovalent affinity to different Siglecs.
Respiratory System Disorders
Clinical studies and animal models of respiratory tract obturation have demonstrated that increased expression and specific distribution of Siglec-8 are closely associated with inflammation [70][71][72]. Progressive inflammation in airway tissues promotes the expression of specific sialoglycans carrying predominantly 6-sulfo-sialyl Lewis x epitopes. Cross-linking with Siglec-8 initiates ITIM-signaling cascades and downstream effector proteins that lead to the apoptosis of infiltrating eosinophils. This process occurs when eosinophils are in a proinflammatory cytokine milieu, indicating that Siglec-8 and its murine functionally convergent paralog, Siglec-F, regulate the turnover of activated cells in the context of inflammation. A growing body of evidence suggests that Siglec-8 is an important regulator of inflammation and disease. In animal models of respiratory tract inflammation, mice lacking Siglec-8 show an increased inflammatory response and hypereosinophilic syndrome (HES) [73]. Conversely, the administration of Siglec-F antibodies in mouse models of chronic asthma normalizes eosinophilic pulmonary inflammation and eliminates lung tissue remodeling [74,75]. Interestingly, monocyte turnover in simian immunodeficiency virus (SIV) infections correlates with the severity of the pulmonary lesions that contribute to chronic pulmonary inflammation [76].
As with many human phenotypes, the mechanisms controlling chronic inflammation, neurodegeneration, and immune surveillance depend on the products of multiple genes. Siglec-5 and Siglec-14 belong to the group of paired receptors that show extreme similarity in the amino acid sequence of the extracellular part and an identical distribution in tissues and cells. This phenomenon results from partial conversion of the closely related SIGLEC5 and SIGLEC14 genes during evolution, resulting in similar ligand recognition properties but opposing signaling systems [77]. According to the published data, expression of the activating Siglec-14 receptor predominates in the European population and may potentiate the inflammatory response in bacterial (Haemophilus influenzae) and viral (influenza virus) infections, which are causes of chronic respiratory diseases. Clinical observation of patients with chronic obstructive pulmonary disease (COPD) has shown that the loss of Siglec-14 reduces the risk of COPD exacerbations related to bacterial infections [78]. The predominant expression of Siglec-5 is observed in the Asian population and is closely linked to reduced bactericidal and virucidal abilities during infections with Streptococcus, Neisseria, Pseudomonas, Campylobacter, and HIV [79]. This is of particular importance under exposure to proinflammatory conditions. In vitro studies revealed an increase in the expression of paired Siglec-5/14 receptors in THP-1 cells exposed to cigarette smoke (CS). Simultaneous changes in immune activity were observed, including increases in intracellular interleukin 1β (IL-1β) and interleukin 10 (IL-10) expression and impairment of phagocytic capacity. In parallel to the CS-induced changes in human monocytes, an increase of sialoglycans in lung epithelial cells was observed [80]. This supports the broader hypothesis that cigarette smoke may induce functional alterations in the immune response of respiratory system cells. Changes in the expression of the paired Siglec-5/14 receptors may be important for predicting the risk of exacerbations in respiratory diseases and the performance of the immune system during bacterial and viral infections in both regular and social smokers.
Pathogen Invasion
In each immune disorder, microbial invasion can contribute to different stages of hyperinflammation. As suggested above, the interplay between pathogen sialoglycans and host Siglec-5/14 can act as a regulatory mechanism of bacterial infections in the respiratory system. Besides the divergent role of paired Siglec-5/14 in the pathogen-dependent course of COPD, there is evidence of the engagement of Siglec-5/14 in life-threatening organ dysfunction during infections with group B Streptococcus (GBS). Carlin et al. demonstrated that the β-protein of GBS plays a pivotal role in the mechanism of molecular mimicry through its interaction with inhibitory Siglec-5, resulting in impaired phagocytic function of lymphocytes [81]. In GBS-infected Siglec-5/14 +/+ individuals, Siglec-14 on neutrophils counteracts Siglec-5-mediated immunosuppression by activating the p38 mitogen-activated protein kinase (MAPK) and Akt signaling pathways [82]. In sepsis, monocytes undergo reprogramming, recently defined as endotoxin tolerance, which generates immunosuppression in the late phase of the disease. It has been shown that α2,3- and α2,6-sialylation on LPS-induced tolerant RAW264.7 cell surfaces was significantly increased and correlated with enhanced Siglec-1 mRNA expression [83]. The interaction between Siglec-1 and heavily sialylated proteins, e.g., the mannose receptor, macrophage galactose-type lectin 1 (MGL1), mucin-1 (MUC1), and P-selectin glycoprotein ligand-1 (PSGL-1), enhances TGF-β1 production and thereby controls the development of endotoxin tolerance [84][85][86]. Clinical outcomes in sepsis confirm the association between high mortality and apoptosis-induced loss of cells of the innate and adaptive immune system, including CD4 and CD8 T cells and dendritic cells. Kidder et al. demonstrated that Siglec-1-positive macrophages induce the apoptosis of CD4+ regulatory T cells (Tregs) via recognition and binding of α2,3-linked sialic acids; however, the mechanism is not fully understood. In consequence, the reduction in Treg numbers provides an increase in the effector T cell (Teff) population and promotes uncontrolled inflammation [87]. Siglec-2, which is mostly expressed on B cells, participates in the immune balance of sepsis by controlling chemokine production and regulating the B cell response. Similarly, Siglec-10 plays an anti-inflammatory role in sepsis by increasing IL-10 expression. It has been shown that the anti-inflammatory effects in Campylobacter jejuni infections are mediated through the cis interaction between Siglec-10 and CD24, which inhibits dendritic cell cross-presentation and weakens B cell signaling [88].
The sialic acid-Siglec axis has also been considered a controlling mechanism in the viral invasion machinery. Several sialylated glycoconjugates act as a key that facilitates the entry of retroviruses, including HIV, into mature dendritic cells after binding to Siglec-1. In detail, the sialylated glycoprotein 120 (gp120) widely expressed on the HIV envelope can bind Siglec-1 and Siglec-7 on monocytes/macrophages and NK cells, respectively, which induces viral entry, promotes HIV replication, and allows the infection of CD4+ T lymphocytes [89]. In the context of the coronavirus (CoV) pandemic, Varki and Angata hypothesize that the expression of sialic acids on the CoV envelope can affect Siglec receptor biology in the host and thereby regulate the reactivity of innate immune cells [90]. The inhibitory receptors, such as Siglec-7 and -9, are also exploited in molecular mimicry mechanisms that allow viruses to avoid immune surveillance [91]. It has been demonstrated that the hepatitis B virus (HBV) induces NK cell dysfunction via Siglec-9 recruitment. Conversely, blocking Siglec-9 on these cells in HBV-infected individuals increases TNF-α and IFN-γ secretion [92]. Thus, targeted manipulation of these processes could lead to new therapeutic opportunities for patients with bacterial and viral infections [93,94].
Cancer Progression
Since sialic acids are commonly found in different types of cancer, their interplay with Siglec-expressing immune cells within the tumor microenvironment is considered a mechanism that shapes the immune response in malignancy. The sialylation pattern is highly heterogeneous across specific cancer types and determines the profiles of the engaged Siglec-expressing subpopulations of immune cells. Mostly, Siglec-expressing cells with the capacity for inhibitory signal transduction are recruited in cancer progression. It has been shown that lung cancers and melanomas express sialoglycans predominantly for Siglec-7 and Siglec-9. Among the human ligands, the highly sialylated mucin-1 (MUC-1), which binds Siglec-9, attenuates anti-tumor immunity in tumor-associated macrophages (TAMs) [95]. Moreover, the strong affinity of α2,3- and α2,6-linked sialic acids for Siglec-9 on neutrophils results in neutrophil inhibition, as measured by reactive oxygen species (ROS) production. In contrast, the administration of a Siglec-9-targeting antibody restored the effector functions of these cells in the presence of malignant cells in vitro [96]. In macrophages, binding of cancer-associated MUC-1 to Siglec-9 induced conversion into the M2 phenotype, which reduces inflammation, contributes to tumor growth, and has an immunosuppressive function. Besides Siglec-9, macrophages widely express Siglec-5/14, Siglec-7, and Siglec-10, which gives a wide sialoglycan-binding spectrum and thereby increases the role of the sialic acid-Siglec axis in tumor-related immune regulatory mechanisms [41,46,97]. In a cellular model of glioma, the crosstalk between murine malignant astroglia and immune cells via the sialic acid-Siglec-F or Siglec-E axis supports tumor-promoting functions, including remodeling of the extracellular matrix and recruitment of immunosuppressive myeloid cells [98,99]. According to the observations of Engblom et al., the presence of Siglec-F-positive neutrophils within tumors promotes cancer growth and correlates with poor prognosis [100]. Interestingly, the enhanced expression of polysialylated neural cell adhesion molecules (PSA-NCAMs) in human glioblastoma promotes migration, invasion, and metastasis, and has therefore been described as an adverse prognostic factor [101]. Given the recognition capacity of microglial Siglec-11, it is reasonable to speculate that the PSA-NCAM-Siglec-11 axis may underlie immunosuppression and impaired immune surveillance in the brain. The participation of the Siglec-sialoglycan axis in the maintenance of immune homeostasis suggests that targeted manipulation of these processes could open new therapeutic avenues in multiple immune-based disorders. Numerous clinical trials for cancer and autoimmune disorders revealed beneficial effects of anti-CD22 (Siglec-2) and anti-CD33 monoclonal antibodies (mAbs), in particular when conjugated with immunotoxins. However, multiple adverse effects, including increased mortality, were observed [102]. Recently, Siglec-9 and Siglec-15 have been reported as crucial inhibitors of anti-tumor immunity, which can be blocked by mAbs in novel anticancer management [103,104]. Recent advances in the field of immunotherapy suggest that targeting Siglec receptors with specific antibodies or fluorinated sialic acid analogs, called "false sialic acids," may help to control autoimmunity, pathogen invasion, and malignancies [105,106].
Cardiovascular System Dysfunction
A growing body of evidence also supports the role of the sialoglycan-Siglec axis in the pathogenesis of vascular dysfunction. Epidemiological analysis has uncovered a positive correlation between plasma total sialic acid and the risk of coronary artery disease (CAD) [107]. It has been shown that murine Siglec-G, mainly expressed on B-1 cells, promotes atherosclerosis and liver inflammation by inhibiting the protective function of B-1 cells [108]. Clinical studies showed that CAD patients express a reduced Treg level and Treg/Teff ratio, caused by the modulatory function of Siglec-E-expressing dendritic cells, as confirmed in animal models of CAD. Inhibition of Siglec-1, which is highly expressed on circulating monocytes and plaque macrophages in atherosclerotic patients, can prevent atherosclerotic lesion formation by suppressing the interaction between monocytes and endothelial cells and the accumulation of macrophages [109,110]. In a laboratory model of diabetes, hyperglycemia-induced upregulation of sialoglycans on human umbilical vein endothelial cells (HUV-EC-C) and mouse aorta was associated with decreased Siglec-9-mediated phagocytic activity in macrophages and was described as a significant risk factor for angiopathy [111].
Sialic Acid-Siglec Checkpoint and Conventional Therapy
Despite better knowledge of the molecular mechanisms of immunity and progress in the development of new targeted drugs, conventional therapies are still the main strategy in the management of multiple disorders. In addition to well-determined and desired clinical outcomes, standard therapies carry multiple drawbacks, ranging from expected and/or unexpected adverse effects to poorly understood interference with treatment efficacy. There are minimal data on the effects of conventional drugs on the sialoglycan-Siglec checkpoint and its importance in the progression of the disease process (Table 1).
Sialidase Inhibitors-Not Only in Influenza Virus Infections
The inhibition of glycan-lectin interactions is of importance in the treatment of pathogenic infections and several other glycan-based diseases. The disruption of glycome-controlling mechanisms prevents the interaction between pathology-related molecules. Oseltamivir and zanamivir, the most active inhibitors of influenza sialidase, prevent virus release from the host cells and its multiplication [112]. In addition to its direct effect on viral sialidase, oseltamivir modulates DC activity via sialidase-mediated Siglec-Toll-like receptor (TLR) interaction [113,114]. Several Siglec receptors, e.g., murine Siglec-E and human Siglec-5/-9, interact with TLRs and inhibit their activation, thereby helping to maintain a healthy cytokine balance following infection. In the presence of pathogens, endogenous neuraminidase-1 (sialidase-1, Neu-1) disrupts the interaction between the TLRs and the Siglecs, thereby activating the receptors and triggering an immune response during infection [115,116]. However, abnormal TLR4 activation by bacterial endotoxin in sepsis can be reduced by oseltamivir-induced Neu-1 inhibition, which protects against endotoxemia [83,117]. Additionally, recent clinical investigations revealed that targeting Neu-1 to reduce total sialic acid contents may represent a possible therapeutic strategy in CAD therapy [118]. In cancer, Neu-1 inhibition by oseltamivir changes epidermal growth factor receptor (EGFR)-mediated signaling and shifts cadherin expression, which reduces the metastatic potential and chemoresistance of various malignant cells [119,120]. There are no data on the involvement of sialidase inhibitors in sialic acid-Siglec checkpoint activity.
Sialic Acid-Siglec Axis and Standard Respiratory Obstruction Therapy
Siglecs are involved in respiratory tract disorders, and the molecular mechanisms of the glycome machinery and its therapeutic targeting are extensively studied. According to the GINA (Global Initiative for Asthma) and GOLD (Global Initiative for Chronic Obstructive Lung Disease) guidelines, the main goal of conventional therapies in obstructive respiratory diseases is to limit inflammation [121][122][123]. However, in contrast to bronchial asthma, the inefficient management of COPD is related to the low sensitivity of patients to corticosteroids and a relatively high risk of exacerbation due to bacterial or viral infections [124]. The effects of mono- and combined therapies on paired Siglec-5/14 receptors were evaluated in CD14+ cells isolated from clinically stable COPD patients. It has been shown that inhaled corticosteroids (ICS), but not long-acting β2-agonists (LABA) or long-acting muscarinic antagonists (LAMA), increase Siglec-5 and/or Siglec-14 expression. Given the function of the paired receptors, ICS may, depending on the patient's genotype, exert either beneficial or negative effects through the enhanced expression of paired Siglec-5/14 receptors and may raise the risk of harm to some individuals [125]. Zeng et al. demonstrated that dexamethasone (Dex), a potent routinely used corticosteroid, might exert an anti-inflammatory effect on COPD-derived neutrophils by upregulating Siglec-9 expression (Figure 1) [126].
Moreover, a high level of Siglec-8 was observed in cells isolated from induced sputum of eosinophilic COPD patients after add-on LAMA therapy, which may play a pivotal role in disease regulation through the downregulation of eosinophils [64]. Siglec-8-related eosinophil maturation was also detected in aspirin-exacerbated respiratory disease (AERD) but not in eosinophilic aspirin-tolerant asthma or chronic sinusitis [127].
Corticosteroids-Benefits and Pitfalls in Cancer Management
In recent years, the engagement of Siglecs in cancer progression has been intensively studied. Some of them have been described as diagnostic markers and promising therapeutic targets. Current clinical trials based on targeting the sialic acid-Siglec axis revealed that ligation of sialylated ligands to ITIM-coupled Siglecs on leukocytes mediates immunosuppression and blockade of anti-tumor activity, whereas targeting of Siglec-3/-7/-9 or -15 by mAbs promotes anti-tumor immunity [96]. Clinical trials and preclinical observations in cancer treatment showed that corticosteroids interfere with the function of local and infiltrating immune cells and impair cancer immunosurveillance [128][129][130][131]. According to neurosurgery and brain oncology guidelines for pre- and postoperative management, systemic corticosteroids are a "gold standard" in the regular therapy of glial tumors [132]. Retrospective clinical studies confirm the beneficial antiedemic effects of Dex; however, they also suggest the activation of expression of genes correlated with shorter survival [129,[133][134][135][136][137]. The mechanisms of the therapeutic effects of corticosteroids and their modulatory action on cell biology are well established, but the non-genomic mechanisms underlying cancer immune evasion are not fully understood. Since sialic acid is involved in the regulation of immunogenicity, the effect of corticosteroids on sialoglycans in gliomas was studied [138,139]. Cytometric analysis of glioblastoma cells of different immunogenicity showed a dose-dependent effect of dexamethasone on the sialylation pattern, which was also associated with a changed affinity of the Siglec-E and -F receptors for glioma cell membranes [98]. In co-culture systems without physical interaction, dexamethasone enhanced α2,8-sialylation in glioma cells, which was accompanied by promotion of the suppressive immune status of microglial cells [99]. This may reflect Dex-induced dampening of anti-tumor immunity via interference with the activity of the sialoglycan-Siglec checkpoint and the mechanisms controlling the glycome machinery. According to the Cancer Genome Atlas, Dex activates several genes, including CDC25C, CDCA8, CDC20, PRC1, and PLK1, that are closely correlated with a worse prognosis and shorter survival in patients with glioblastoma [133]. Given the effects of corticosteroids on the glycosylation pattern and on sialome-dependent cellular interactions, the assessment of individual Siglec profiles in patients with malignant gliomas may be useful in verifying the safety of steroid therapy and predicting overall survival.
Given that the genomic and nongenomic mechanisms of action of corticosteroids are not fully understood, their clinical importance in the management strategies of lymphoid neoplasms remains considerable. Since the correlation of CD33 with poorer prognosis in leukemia was established, ongoing clinical trials have been exploring the potential of anti-CD33 frontline therapy [102,140]. However, it has also been shown that various dosage regimens of corticosteroids, including prednisone and methylprednisolone, exert a pro-apoptotic effect toward CD33-positive lymphoblasts [141]. In B-acute lymphoblastic leukemia, a phase II clinical trial targeting Siglec-2 revealed that combined therapy with Dex increases the therapeutic efficacy of epratuzumab in Siglec-2-positive B cells [142].
Anti-Inflammatory Management
Extensive studies on the pathogenesis of AD revealed the beneficial role of anti-inflammatory, analgesic, and local anesthetic medications in the prevention of degenerative processes within the CNS. Using a preclinical mouse model of surgery-induced neuroinflammation, Xu et al. showed that post-operative cognitive dysfunction in old, but not young, animals is strongly correlated with increased levels of TNFα, IL-6, Iba-1, and CD33-positive cells in the hippocampus [143]. Despite the many limitations of this study, the authors suggest that ibuprofen, an anti-inflammatory and analgesic drug, as well as the local anesthetic levobupivacaine, suppress inflammation and microglia activation but do not affect cognitive function in experimental animals. As mentioned previously, Siglec-3 and other CD33-related Siglecs are associated with AD pathology. Therefore, it may be beneficial to consider anti-inflammatory therapy to limit the risk of post-operative cognitive dysfunction in elderly individuals. This observation opens a new view of standard pharmacological strategies, as well as of the search for biologically active natural substances that exert neuroprotective effects through the recruitment of some immune checkpoints. In line with this, curcumin, the widely known component extracted from the rhizome of Curcuma longa, has been described as a candidate for the diagnosis, prevention, and treatment of AD. Besides its antioxidant and anti-inflammatory properties, the strong biological activity of curcumin, expressed by downregulation of Siglec-3 capacity, results in phagocytic clearance of amyloid, as confirmed in human sections ex vivo, and makes it a potential, although still not formally registered, therapeutic tool for AD [144].
Recent studies on the risk factors for fetal development confirmed the role of type 1 interferon (IFN) in the pathogenesis of autoimmune congenital heart block (CHB) in newborns [145]. The analysis of immune cell subpopulations from fetuses with CHB showed that upregulation of type 1 interferon correlates with a high level of Siglec-1-expressing monocytes and/or macrophages that are functionally involved as effector cells in fibrosis. Clinical investigations of CHB prevention strategies revealed that targeting maternal interferon significantly reduces the risk of fetal affection. As Lisney et al. have reported, IFN-α-targeted therapy with the anti-inflammatory hydroxychloroquine decreases Siglec-1 expression on maternal monocytes and/or macrophages and reduces the risk for the development of fetal CHB [146]. The function of the sialic acid-Siglec axis in the host response to conventional drug-related side effects was analyzed in pharmacologically induced tissue injury. Scaffidi et al. showed that a high dose of acetaminophen, routinely used to treat mild to moderate pain or to reduce fever, causes hepatocyte injury accompanied by the release of high-mobility group box 1 protein (HMGB-1) [147]. During cellular injury, HMGB-1, similar to heat shock proteins 70 and 90 (HSP-70, -90), is capable of inducing an inflammatory response expressed as the production of IL-6 and TNFα. However, the heavily sialylated CD24-Siglec-10 axis on human macrophages, as well as its murine analogue CD24-Siglec-G, shows HMGB-1-binding capacity and thereby dampens tissue damage-induced immune responses. In contrast, mice with a targeted mutation of the Siglec-G-encoding gene and CD24 deficiency are extremely sensitive to acetaminophen-induced liver injury and are predisposed to develop cytokine release syndrome [148] (Table 1).
Conclusions and Perspectives
This brief review focuses on some examples of the potential role of the sialic acid-Siglec checkpoint in pathological states and the related conventional therapies. The interplay between sialoglycans and Siglecs undergoes dynamic changes in many physiological and pathological processes. In both resting and activated states, the glycome machinery controls the sialylation pattern and Siglec-related cellular activity that underlie immune homeostasis and participate in immune defense. Thus, it is certainly an important target in the field of glycoengineering-based therapy. New targeted therapies that inhibit Siglec-mediated cellular processes through structurally modified sialoglycans and monoclonal anti-Siglec antibodies applied in modern delivery systems, as well as through enzymatic modifications of cell membranes, show therapeutic potential for future medicine. However, conventional therapy will remain the main strategy in clinical management, and its interference with components of the sialic acid-Siglec immune checkpoint should be verified in cancer and inflammatory diseases.
Conflicts of Interest:
The authors declare no conflict of interest.
"Medicine",
"Biology"
] |
Uniform Convergence of the Spectral Expansion for a Differential Operator with Periodic Matrix Coefficients
In this paper, we obtain asymptotic formulas for the eigenvalues and eigenfunctions of the operator generated by a system of ordinary differential equations with summable coefficients and quasiperiodic boundary conditions. Using these asymptotic formulas, we find conditions on the coefficients under which the root functions of this operator form a Riesz basis. Then we obtain the uniformly convergent spectral expansion of the differential operator with periodic matrix coefficients.
It is well-known (see [2,10]) that the spectrum $\sigma(L)$ of $L$ is the union of the spectra $\sigma(L_t)$ of $L_t$ for $t \in [0, 2\pi)$. To construct the uniformly convergent spectral expansion for $L$ we first obtain the uniform, with respect to $t \in Q_\varepsilon(n)$, asymptotic formulas for the eigenvalues and eigenfunctions of $L_t$, where $Q_\varepsilon(2\mu) = \{t \in Q : |t - \pi k| > \varepsilon,\ \forall k \in \mathbb{Z}\}$, $Q_\varepsilon(2\mu + 1) = Q$, $Q$ is a compact connected subset of $\mathbb{C}$ containing a neighborhood of the interval $[-a, 2\pi - a]$, $a \in (0, \frac{\pi}{2})$, $\varepsilon \in (0, \frac{a}{2})$ and $\mu = 1, 2, \dots$. Then we prove that the root functions of $L_t$ for $t \in C(n)$ form a Riesz basis in $L_2^m(0,1)$, where $C(2\mu) = \mathbb{C} \setminus \{\pi k : k \in \mathbb{Z}\}$, $C(2\mu + 1) = \mathbb{C}$. Let us introduce some preliminary results and describe the scheme of the paper. Clearly, the large eigenvalues of $L_t$ consist of the sequences $\lambda_{k,j}(t)$ for $|k| \geq N$, where $N \gg 1$, satisfying asymptotic formulas, uniform with respect to $t \in Q$, for $j = 1, 2, \dots, m$. We say that the formula $f(k,t) = O(h(k))$ is uniform with respect to $t \in Q$ if there exists a positive constant $c$, independent of $t$, such that $|f(k,t)| < c\,|h(k)|$ for all $t \in Q$ and $|k| \gg 1$. The method proposed here allows us to obtain asymptotic formulas of high accuracy for the eigenvalues $\lambda_{k,j}(t)$ and the corresponding normalized eigenfunctions $\Psi_{k,j,t}(x)$ of $L_t$ when $p_{\nu,i,j} \in L_1[0,1]$ for all $\nu, i, j$. Note that to obtain asymptotic formulas of high accuracy by the classical methods it is required that $P_2, P_3, \dots, P_n$ be differentiable (see [12]). To obtain the asymptotic formulas for $L_t$ we take the operator $L_t(C)$, where $L_t(P_2, \dots, P_n)$ is denoted by $L_t(C)$ when $P_2(x) = C$, $P_3(x) = 0, \dots, P_n(x) = 0$, as an unperturbed operator and $L_t - L_t(C)$ as a perturbation. One can easily verify that the eigenvalues and normalized eigenfunctions of $L_t(C)$ are
$$\mu_{k,j}(t) = (2\pi k i + ti)^n + \mu_j (2\pi k i + ti)^{n-2}, \qquad \Phi_{k,j,t}(x) = v_j e^{i(2\pi k + t)x} \quad (5)$$
for $k \in \mathbb{Z}$, $j = 1, 2, \dots, m$, where $v_1, v_2, \dots, v_m$ are the normalized eigenvectors of the matrix $C$ corresponding to the eigenvalues $\mu_1, \mu_2, \dots, \mu_m$ respectively. In Section 2 we investigate the operator $L_t$ and prove the following two theorems.
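As a consistency check of (5), assume (this is our assumption about the standard form, not stated explicitly above) that the differential expression is $l(y) = y^{(n)} + P_2(x) y^{(n-2)} + \dots + P_n(x) y$, so that the unperturbed choice is $P_2 = C$, $P_3 = \dots = P_n = 0$, and $C v_j = \mu_j v_j$. Then direct substitution gives:

```latex
% Verification of (5) under the assumed form l(y) = y^{(n)} + P_2 y^{(n-2)} + ... + P_n y:
\begin{aligned}
l\bigl(v_j e^{i(2\pi k+t)x}\bigr)
  &= \bigl(i(2\pi k+t)\bigr)^{n} v_j e^{i(2\pi k+t)x}
   + \bigl(i(2\pi k+t)\bigr)^{n-2} C v_j e^{i(2\pi k+t)x} \\
  &= \Bigl[(2\pi k i + t i)^{n} + \mu_j (2\pi k i + t i)^{n-2}\Bigr]\Phi_{k,j,t}(x)
   = \mu_{k,j}(t)\,\Phi_{k,j,t}(x),
\end{aligned}
```

so $\Phi_{k,j,t}$ is indeed an eigenfunction of $L_t(C)$ with eigenvalue $\mu_{k,j}(t)$ whenever it satisfies the boundary conditions (2).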
(b) If $\lambda_{k,j}(t) \in U(\mu_{k,p(j)}(t), c_1 |k|^{n-3} \ln |k|)$, then there exists a unique eigenfunction $\Psi_{k,j,t}(x)$ corresponding to $\lambda_{k,j}(t)$, and this eigenfunction satisfies (6), where $c_2$ is a constant independent of $t$ and $j$.
Note that here and in the forthcoming relations we denote by $c_i$, for $i = 1, 2, \dots$, positive constants, independent of $t$, whose exact values are inessential. Using Theorem 1 and investigating the associated functions of $L_t$ we prove: Theorem 2 (a) The large eigenvalues of $L_t$ consist of $m$ sequences (3) satisfying the formula (7), uniform with respect to $t \in Q_\varepsilon(n)$; each such $\lambda_{k,j}(t)$ is a simple eigenvalue of $L_t$ and the corresponding normalized eigenfunction $\Psi_{k,j,t}(x)$ satisfies (8), where $v_j^*$ is the eigenvector of $C^*$ corresponding to $\mu_j$ and $(v_j^*, v_j) = 1$. Note that A. A. Shkalikov [13,14] proved that the root functions of the operators generated by an ordinary differential expression, in the scalar case, with summable coefficients and more complicated boundary conditions form a Riesz basis with brackets. L. M. Luzhina [8] generalized these results to the matrix case. In [22] we proved that if $n = 2$ and the eigenvalues of the matrix $C$ are simple, then the root functions of $L_t$ for $t \in (0, \pi) \cup (\pi, 2\pi)$ form an ordinary Riesz basis without brackets. The case $n > 2$ is more complicated, and most of the method of the paper [22] does not work here, since in the case $n > 2$ the adjoint of the operator generated by $l(y)$ with arbitrary summable coefficients cannot be defined by Lagrange's formula.
In Section 3, using Theorem 2, we obtain the spectral expansion for the operator $L$. The spectral expansion for the Hill operator with real-valued potential $q(x)$ was constructed by Gelfand in [4] and by Titchmarsh in [15]. Tkachenko proved in [16] that the Hill operator, namely the operator $L$ in the case $m = 1$, $n = 2$, can be reduced to triangular form if all eigenvalues of the corresponding operators $L_t$ for $t \in [0, 2\pi)$ are simple. McGarvey in [10,11] proved that $L$, in the case $m = 1$, is a spectral operator if the projections of the operator $L$ are uniformly bounded. Gesztesy and Tkachenko in the recent paper [5] proved that the Hill operator is a spectral operator of scalar type if and only if for all $t \in [0, 2\pi)$ the operators $L_t$ have no associated functions, every multiple point of either the periodic or the anti-periodic spectrum is a point of its Dirichlet spectrum, and some other conditions hold. However, in general, the eigenvalues are not simple, the projections are not uniformly bounded, and $L_t$ has associated functions, since the Hill operator with the simple potential $q(x) = e^{i 2\pi x}$ has infinitely many spectral singularities (see [3], where Gasymov investigated the Hill operator with a special potential, analytically continuable onto the upper half plane). Note that a spectral singularity of $L$ is a point of $\sigma(L)$ in a neighborhood of which the projections of the operator $L$ are not uniformly bounded, and we proved in [18] that a number $\lambda \in \sigma(L_t) \subset \sigma(L)$ is a spectral singularity if and only if $L_t$ has an associated function corresponding to the eigenvalue $\lambda$. The existence of spectral singularities and the absence of Parseval's equality for the nonself-adjoint operator $L_t$ do not allow us to apply the elegant method of Gelfand (see [4]) for the construction of the spectral expansion to the nonself-adjoint operator $L$. This situation essentially complicates the construction of the spectral expansion in the nonself-adjoint case. In [17] and [20] we constructed the spectral expansion for the Hill operator with continuous complex-valued potential $q(x)$ and with locally summable complex-valued potential $q(x)$, respectively. Then in [19] and [21] we constructed the spectral expansion for the nonself-adjoint operator $L$, in the case $m = 1$, with coefficients $p_k \in C^{(k-1)}[0,1]$ and with $p_k \in L_1[0,1]$ for $k = 2, 3, \dots, n$, respectively. In the paper [9] we constructed the spectral expansion of $L$ when $p_{k,i,j} \in C^{(k-1)}[0,1]$. In this paper we do it when the $p_{k,i,j}(x)$ are arbitrary Lebesgue integrable functions on $(0,1)$. Besides, in [9] the expansion was obtained for compactly supported continuous vector functions, while in this paper it is obtained for every such function when $n = 2\mu$. Moreover, using Theorem 2, we prove that the spectral expansion of $L$ converges uniformly on every bounded subset of $(-\infty, \infty)$ if $f$ is an absolutely continuous compactly supported function and $f' \in L_2^m(-\infty, \infty)$. Note that the spectral expansion obtained in [9], when $p_{k,i,j} \in C^{(k-1)}[0,1]$, converges in the norm of $L_2^m(a,b)$, where $a$ and $b$ are arbitrary real numbers. Some parts of the proofs of the spectral expansions for $L$ are just the vector form of the corresponding proofs obtained in [19] for the case $m = 1$. These parts are given in the appendices, in order to make it possible to read this paper independently.
On the eigenvalues and root functions of $L_t$
The formula (4) shows that the eigenvalue $\lambda_{k,j}(t)$ of $L_t$ is close to the eigenvalue $(2k\pi i + ti)^n$ of $L_t(0)$. If $t \in Q_\varepsilon(n)$ and $|k| \gg 1$, then the eigenvalue $(2\pi k i + ti)^n$ of $L_t(0)$ lies far from the other eigenvalues $(2p\pi i + ti)^n$. It follows from (4) that the estimates (12) and (13) hold, where $|k| \gg 1$, $\nu \geq 2$, and (12), (13) are uniform with respect to $t \in Q_\varepsilon(n)$.
The boundary conditions adjoint to (2) are $U_{\nu,t}^*(y) = 0$ for $\nu = 0, 1, \dots, n-1$. Therefore the eigenfunctions $\varphi^*_{k,s,t}(x)$ and $\Phi^*_{k,s,t}(x)$ of the operators $L_t^*(0)$ and $L_t^*(C)$, corresponding to the eigenvalues $(2\pi p i + ti)^n$ and $\mu_{k,j}(t)$ respectively and satisfying $(\varphi_{k,j,t}, \varphi^*_{k,s,t}) = 1$, are given by (14), where $v_s^*$ is defined in Theorem 2(c). To prove the asymptotic formulas for the eigenvalues $\lambda_{k,j}(t)$ and the corresponding normalized eigenfunctions $\Psi_{k,j,t}(x)$ of $L_t$ we use the formula (15), which can be obtained from the eigenvalue equation (16) by multiplying it scalarly by $\Phi^*_{k,s,t}(x)$. To estimate the right-hand side of (15) we use (12), (13), the following lemma, and the formula (17), which can be obtained from (16) by multiplying it scalarly by $\varphi^*_{p,s,t}(x)$.
Therefore there exist a positive constant $M(k,j)$ and indices $p_0, s_0$ realizing the maximum $\max_{p \in \mathbb{Z},\, s = 1,2,\dots,m} \sum_{\nu=2}^{n} (\cdots)$. Then, using (17) and (12), we get the corresponding estimate, in which $d > 2|k|$. This implies a bound on the coefficients of the decomposition of $\Psi_{k,j,t}(x)$ with respect to the basis, where the supremum is controlled as indicated. Now, using integration by parts, (1), and the inequality (21), we obtain (22). Therefore, arguing as in the proof of (22) and using (12), we get (23) for $\nu = 2, 3, \dots, n$. Multiplying by $(\Psi_{k,j,t}, \varphi^*_{p,s,t})$ and letting $q$ tend to $\infty$, we obtain (18). Let us prove (19). It follows from (20) and (18) that the required estimate holds. By (21) and (13) we have a further bound; on the other hand, combining these estimates and using (24), we get (25). Now, using this, we prove the following lemma.
Lemma 2
The following equalities (26) and (27) hold. Proof. Using (18) for $\nu = 2$, $p = k$ and an obvious relation, we obtain the first estimate. This, together with (25) and (13) for $\nu = 2$, implies the next relation. Similarly, using (18), (25), (13), we obtain the analogous formula. Since (13) is uniform with respect to $t \in Q_\varepsilon(n)$ and the constant $c_5$ in (25) does not depend on $t$ (recall that we denote by $c_k$ constants independent of $t$), these formulas are uniform with respect to $t \in Q_\varepsilon(n)$. Therefore, recalling the definitions of $\Phi^*_{k,s,t}$ and $\varphi^*_{k,q,t}$ (see (14)), we get the proof of (26) and (27). Proof (of Lemma 3). It follows from (25) and (13) that the required relation holds, and this formula is uniform with respect to $t \in Q_\varepsilon(n)$. Then the decomposition of $\Psi_{k,j,t}(x)$ with respect to the basis $\{\varphi_{p,s,t}(x) : s = 1, 2, \dots, m,\ p \in \mathbb{Z}\}$ has the form (30). Since $\|\Psi_{k,j,t}\| = \|\varphi_{k,j,t}\| = 1$ and (30) is uniform with respect to $t \in Q_\varepsilon(n)$, there exists a positive constant $N_1$, independent of $t$, such that the corresponding bound holds for all $|k| \geq N_1$, $t \in Q_\varepsilon(n)$ and $j = 1, 2, \dots, m$. Therefore, using (14) and taking into account that the vectors $v^*_1, v^*_2, \dots, v^*_m$ form a basis in $\mathbb{C}^m$, that is, $e_s$ is a linear combination of these vectors, we get the proof of (28). THE PROOF OF THEOREM 1(a). It follows from Lemma 2 that there exists a positive constant $N_2$, independent of $t$, such that if $|k| \geq N_2$ and $t \in Q_\varepsilon(n)$, then the right-hand side of (15) is less than $c_{10} |k|^{n-3} \ln |k|$. Therefore (15) and Lemma 3 give the proof of Theorem 1(a).
THE PROOF OF THEOREM 1(b). Let $\lambda_{k,j}$ be an eigenvalue of $L_t$ lying in $U(\mu_{k,p(j)}(t), c_1 |k|^{n-3} \ln |k|)$ and $\Psi_{k,j,t}$ be any normalized eigenfunction corresponding to $\lambda_{k,j}$. Then, using (5) and taking into account that the eigenvalues of $C$ are simple, we see that (15) together with (26), (27) gives (31). On the other hand, by (14) and (29) we obtain (32). Since (26), (27), (29) are uniform with respect to $t \in Q_\varepsilon(n)$, the formulas (31) and (32) are also uniform. Therefore, decomposing $\Psi_{k,j,t}(x)$ with respect to the basis $\{\Phi_{p,s,t}(x) : s = 1, 2, \dots, m,\ p \in \mathbb{Z}\}$, we see that any normalized eigenfunction corresponding to $\lambda_{k,j}$ satisfies (6). If there were two linearly independent eigenfunctions corresponding to $\lambda_{k,j}$, then one could find two orthogonal eigenfunctions satisfying (6), which is impossible. Theorem 1 is proved.
To prove the main results for $L_t$ (Theorem 2) we need to investigate the normalized associated function $\Psi_{k,j,1,t}(x)$ of $L_t$ corresponding to the eigenvalue $\lambda_{k,j}(t)$. By the definition of an associated function we have (33), where $\Psi_{k,j,0,t}(x)$ is an eigenfunction of $L_t$. Note that, in general, the eigenfunction $\Psi_{k,j,0,t}(x)$ is not normalized. For the investigation of the associated function we use the following formulas.
By Proposition 1 the eigenvalue $\lambda_{k,j}(t)$ of $L_t$ for $|k| \geq N_0$ is simple, and by Theorem 1 the corresponding eigenfunction satisfies (6), where $p(j) = j$ (see the definition of $p(j)$ in Theorem 1); that is, (7) and (8) hold, and Theorem 2(a) is proved.
THE PROOF OF THEOREM 2(b). It follows from (8) that the root functions of $L_t$ are quadratically close to the system $\{v_j e^{i(2\pi k + t)x} : k \in \mathbb{Z},\ j = 1, 2, \dots, m\}$, which forms a Riesz basis in $L_2^m(0,1)$. On the other hand, the system of the root functions of $L_t$ is complete and minimal in $L_2^m(0,1)$ (see [8]). Therefore, by the Bari theorem (see [1,6]), the system of the root functions of $L_t$ forms a Riesz basis in $L_2^m(0,1)$. THE PROOF OF THEOREM 2(c). To prove the asymptotic formulas for the normalized eigenfunction $\Psi^*_{k,j,t}(x)$ of $L_t^*$ corresponding to the eigenvalue $\lambda_{k,j}(t)$ we use the formula obtained from $L_t^* \Psi^*_{k,j,t} = \lambda_{k,j}(t) \Psi^*_{k,j,t}$ by multiplying it scalarly by $\varphi_{p,s,t}$ and using $(L_t^* \Psi^*_{k,j,t}, \varphi_{p,s,t}) = (\Psi^*_{k,j,t}, L_t \varphi_{p,s,t})$. Using this formula instead of (17) and arguing as in the proof of (25), we obtain (46). This with (5) and (13) implies the relations (47). On the other hand, (8) and the equality $(\Psi^*_{k,j,t}, \Psi_{k,s,t}) = 0$ for $j \neq s$ give (48). Since (8) and (13) hold uniformly, the formulas (46)-(48) are uniform with respect to $t \in Q_\varepsilon(n)$, and they yield (49), where $v^*_j$ is defined in Theorem 2(c). Now (8) and (49) imply (9), since (50) holds.
THE PROOF OF THEOREM 2(d).
To investigate the convergence of the expansion series of $L_t$ we consider the series (51), taken over $k$ with $|k| \geq N$ and $j = 1, 2, \dots, m$, where $N \geq N_0$, $N_0$ is defined in Theorem 1, $f(x)$ is an absolutely continuous function satisfying (1) and $f'(x) \in L_2^m(0,1)$. Without loss of generality, instead of the series (51) we consider the series (52), taken over the same indices, where $f_t(x)$ is defined by the Gelfand transform (53) (see [4]), $f$ is an absolutely continuous compactly supported function and $f' \in L_2^m(-\infty, \infty)$, since we use (52) in the next section for the spectral expansion of $L$. It follows from (53) that (54) holds. To prove the uniform convergence of (52) we consider the series (55). To estimate the terms of this series we decompose $X_{k,j,t}$ with respect to the basis $\{\Phi^*_{p,s,t} : p \in \mathbb{Z},\ s = 1, 2, \dots, m\}$ and then use the inequality (56). Using integration by parts and then the Schwarz inequality, we estimate the sum over $|k| \geq N$, $s = 1, 2, \dots, m$. Again using integration by parts, the Schwarz inequality and (46), (50), we obtain a bound for the expression in the second row of (56); it is not hard to see that this bound is less than $c_{17} k^{-2}$, that is, the expression in the second row of (56) is less than $c_{17} k^{-2}$. Therefore the relations (56), (57) imply that the expressions in (55) and (52) tend to zero, uniformly with respect to $t \in Q_\varepsilon(n)$ and with respect to $t \in Q_\varepsilon(n)$, $x \in [0,1]$ respectively, as $N \to \infty$. Since in the proof of the uniform convergence of (52) we used only the properties (54) of $f_t$, the series (51) also converges uniformly with respect to $x \in [0,1]$; that is, Theorem 2(d) is proved.
Note that in the proof of Theorem 2(d) we proved the following theorem, which will be used in the next section.
Theorem 3 If $f$ is an absolutely continuous, compactly supported function and $f' \in L_2^m(-\infty, \infty)$, then the series (52), where $f_t$ is defined by (53), $N \geq N_0$, and $N_0$ is defined in Theorem 1(a), converges uniformly with respect to $t \in Q_\varepsilon(n)$ and $x \in D$ for any bounded subset $D$ of $(-\infty, \infty)$.
Spectral Expansion for L
Let $Y_1(x,\lambda), Y_2(x,\lambda), \dots, Y_n(x,\lambda)$ be the solutions of the matrix equation $l(Y) = \lambda Y$ satisfying the standard initial conditions, and let $\Delta(\lambda, t)$ be the corresponding characteristic determinant, which is a polynomial in $e^{it}$ with entire coefficients $f_1(\lambda), f_2(\lambda), \dots$. Therefore the multiple eigenvalues of the operators $L_t$ are the zeros of the resultant $R(\lambda) \equiv R(\Delta, \Delta')$ of the polynomials $\Delta(\lambda, t)$ and $\Delta'(\lambda, t) \equiv \frac{\partial}{\partial \lambda} \Delta(\lambda, t)$. Since $R(\lambda)$ is an entire function and the large eigenvalues of $L_t$ for $t \neq 0, \pi$ are simple (see Theorem 2(a)), the zeros of $R(\lambda)$ form a countable set $\{a_1, a_2, \dots\}$. For each $a_k$ there are $nm$ values $t_{k,1}, t_{k,2}, \dots, t_{k,nm}$ of $t$ satisfying $\Delta(a_k, t) = 0$. Hence the set $A$ of all such values is countable, and for $t \notin A$ all eigenvalues of $L_t$ are simple. By Theorem 2(a) the possible accumulation points of the set $A$ are $\pi k$, $k \in \mathbb{Z}$.
Lemma 6
The eigenvalues of $L_t$ can be numbered as $\lambda_1(t), \lambda_2(t), \dots$, so that for each $p$ the function $\lambda_p(t)$ is continuous in $Q$ and is analytic in $Q \setminus A(p)$, where $A(p)$ is a subset of $A$ consisting of finitely many points $t^p_1, t^p_2, \dots, t^p_{s_p}$. Moreover, $\lambda_{p(k,j)}(t)$ coincides with the eigenvalue $\lambda_{k,j}(t)$ of Theorem 2(a), where $|k| \geq N_0$, $p(k,j) = 2|k|m + j$ if $k > 0$, $p(k,j) = (2|k| - 1)m + j$ if $k < 0$; the sets $Q_\varepsilon(n)$, $Q$ and the number $N_0$ are defined in (2) and in Theorem 1(a).
Proof. Let $t \in Q$. It easily follows from the classical investigations [12, Chapter 3, Theorem 2] (see (3), (4)) that there exist large numbers $r$ and $c$, independent of $t$, such that all eigenvalues of the operators $L_{t,z}$ for $z \in [0,1]$, where $L_{t,z}$ is defined by (45), lie in the union of the disc $U(0, r)$ and the discs $U((2\pi k i + ti)^n, c k^{n-1-\frac{1}{2m}})$ for $|k| \geq N_0$, where $U(\mu, c) = \{\lambda \in \mathbb{C} : |\lambda - \mu| < c\}$. Clearly there exists a closed curve $\Gamma$ such that: (a) the curve $\Gamma$ lies in the resolvent set of the operators $L_{t,z}$ for all $z \in [0,1]$; (b) all eigenvalues of $L_{t,z}$ for all $z \in [0,1]$ that do not lie in $U((2\pi k i + ti)^n, c k^{n-1-\frac{1}{2m}})$ for $|k| \geq N_0$ belong to the set enclosed by $\Gamma$.
(ii) $U(t, \delta) \cap A(U_0) = \emptyset$ and $d_{s,t}(z) \in U_0$ for $z \in U(t, \delta)$, $s = 1, 2, \dots, 2m$. Now take any point $t_0$ from $U(0, \varepsilon) \setminus A(U_0)$. Let $\gamma$ be a line segment in $U(0, \varepsilon) \setminus A(U_0)$ joining $t_0$ and a point of the circle $S(0, \varepsilon) = \{t : |t| = \varepsilon\}$. For any $t$ from $\gamma$ there exists $U(t, \delta)$ satisfying (i) and (ii). Since $\gamma$ is a compact set, the cover $\{U(t, \delta) : t \in \gamma\}$ of $\gamma$ contains a finite cover $U(t_0, \delta), U(t_1, \delta), \dots, U(t_v, \delta)$, where $t_v \in S(0, \varepsilon)$. Now we are ready to continue the function $\lambda_{p(k,j)}(t)$ analytically into the set $U(0, \varepsilon)$. For any $z \in U(t_v, \delta) \cap Q_\varepsilon(n)$ the eigenvalue $\lambda_{p(k,j)}(z)$ coincides with one of the eigenvalues $d_{1,t_v}(z), d_{2,t_v}(z), \dots, d_{2m,t_v}(z)$, since there exist $2m$ eigenvalues of $L_z$ lying in $U_0$. Denote by $B_s$ the subset of $U(t_v, \delta) \cap Q_\varepsilon(n)$ on which the function $\lambda_{p(k,j)}(z)$ coincides with $d_{s,t_v}(z)$. Since $d_{s,t}(z) \neq d_{i,t}(z)$ for $s \neq i$, the sets $B_1, B_2, \dots, B_{2m}$ are pairwise disjoint and their union is $U(t_v, \delta) \cap Q_\varepsilon(n)$. Therefore there exists an index $s$ for which the set $B_s$ contains an accumulation point, and hence $\lambda_{p(k,j)}(z) = d_{s,t_v}(z)$ for all $z \in U(t_v, \delta) \cap Q_\varepsilon(n)$. Thus $d_{s,t_v}(z)$ is the analytic continuation of $\lambda_{p(k,j)}(z)$ to $U(t_v, \delta)$. In the same way we get the analytic continuation of $\lambda_{p(k,j)}(z)$ to $U(t_{v-1}, \delta), U(t_{v-2}, \delta), \dots, U(t_0, \delta)$. Since $t_0$ is an arbitrary point of $U(0, \varepsilon) \setminus A(U_0)$, we obtain the analytic continuation of $\lambda_{p(k,j)}(z)$ to $U(0, \varepsilon) \setminus A(U_0)$. The analytic continuation of $\lambda_{p(k,j)}(z)$ to $U(\pi, \varepsilon) \setminus A(U_\pi)$ can be obtained in the same way, where $A(U_\pi)$ is defined as $A(U_0)$. Thus the function $\lambda_{p(k,j)}(t)$ is analytic in $Q \setminus A(p)$, where $A(p)$ consists of finitely many points $t^p_1, t^p_2, \dots, t^p_{s_p}$. Since $\Delta(\lambda, t)$ is continuous with respect to $(\lambda, t)$, the function $\lambda_{p(k,j)}(t)$ can be extended continuously to the set $Q$. Now let us define the eigenvalues $\lambda_p(t)$ for $p \leq (2N_1 - 1)m$, $t \in Q$, which are distinct from the eigenvalues defined by (63). These eigenvalues lie in a bounded set $B$, and by (61) the set $B \cap \ker R$ and the subset $A(B)$ of $A$ corresponding to $B$ are finite. Take a point $a$ from the set $Q \setminus A$. Denote the eigenvalues of $L_a$ in order of increasing absolute value: $|\lambda_1(a)| \leq |\lambda_2(a)| \leq \dots \leq |\lambda_{(2N_1-1)m}(a)|$. If $|\lambda_p(a)| = |\lambda_{p+1}(a)|$, then by $\lambda_p(a)$ we denote the eigenvalue that has the smaller argument, where the argument is taken in $[0, 2\pi)$. Since $a \notin A$, the eigenvalues $\lambda_1(a), \lambda_2(a), \dots, \lambda_{(2N_1-1)m}(a)$ are simple zeros of $\Delta(\lambda, a)$. Therefore, using the implicit function theorem, we obtain analytic functions $\lambda_1(t), \lambda_2(t), \dots, \lambda_{(2N_1-1)m}(t)$ on a neighborhood $U(a, \delta)$ of $a$ which are eigenvalues of $L_t$ for $t \in U(a, \delta)$. These functions can be analytically continued to $Q_\varepsilon(n) \setminus A$, being the eigenvalues of $L_t$, where, as we noted above, $A \cap Q_\varepsilon(n)$ consists of a finite number of points. Taking into account that $A(B)$ is finite and arguing as we have done in the proof of the analytic continuation and continuous extension of $\lambda_p(t)$ for $p > (2N_1 - 1)m$, we obtain the analytic continuations of these functions to the set $Q$ except finitely many points, and their continuous extensions to $Q$. By Gelfand's Lemma (see [4]) every compactly supported vector function $f(x)$ can be represented in the form $f(x) = \frac{1}{2\pi} \int_0^{2\pi} f_t(x)\, dt$, where $f_t(x)$ is defined by (53).
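For the reader's convenience, the Gelfand transform pair presumably intended in (53) has the following standard form for 1-periodic problems; the normalization is our assumption and may differ from the paper's by a constant factor:

```latex
% Standard Gelfand transform pair (assumed normalization):
f_t(x) \;=\; \sum_{m \in \mathbb{Z}} f(x+m)\, e^{-imt}, \qquad
f(x) \;=\; \frac{1}{2\pi} \int_0^{2\pi} f_t(x)\, dt,
\qquad f_t(x+1) \;=\; e^{it} f_t(x).
```

The quasiperiodicity relation on the right explains why $f_t$ can be expanded in the root functions of $L_t$, which satisfy the same boundary behavior.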
This representation can be extended to all functions of L m 2 (−∞, ∞), where {X k,t : k = 1, 2, ...} is the biorthogonal system of {Ψ k,t : k = 1, 2, ...}, Ψ k,t (x) is a normalized eigenfunction corresponding to λ k (t), the eigenvalue λ k (t) is defined in Lemma 6, and Ψ k,t (x) and X k,t (x) are extended to (−∞, ∞) by (58) and by X k,t (x + 1) = e it X k,t (x). Let a ∈ (0, π/2)\A, ε ∈ (0, a/2), and let l(ε) be a smooth curve joining the points −a and 2π − a and satisfying where Π(a, ε) = {x + iy : x ∈ [−a, 2π − a], y ∈ [0, 2ε)}, l(−ε) = {t : t̄ ∈ l(ε)}, and the sets Q, Q ε (n) and A are defined in (2). Since l(ε) ∈ C(n) (see (65) and the definition of C(n) in the introduction), it follows from Theorem 2(b) and Lemma 6 that for each t ∈ l(ε) we have a decomposition where a k (t) = (f t , X k,t ). Using (67) in (66) we get Remark 1. If λ ∈ σ(L) then there exist points t 1 , t 2 , ..., t k of [0, 2π) such that λ is an eigenvalue λ(t j ) of L tj of multiplicity s j for j = 1, 2, ..., k. Let S(λ, b) = {z : |z − λ| = b} be a circle containing only the eigenvalue λ(t j ) of L tj for j = 1, 2, ..., k. Using Lemma 6 we see that there exists a neighborhood U (t j , δ) = {t : |t − t j | ≤ δ} of t j such that: (a) The circle S(λ, b) lies in the resolvent set of L t for all t ∈ U (t j , δ) and j = 1, 2, ..., k.
Thus the spectrum of L t for t ∈ U (t j , δ), j = 1, 2, ..., k, is separated by S(λ, b) into two parts in the sense of [7] (see §6.4 of Chapter 3 of [7]). Since {L t : t ∈ U (t j , δ)} is a holomorphic family of operators in the sense of [7] (see §1 of Chapter 7 of [7]), the theory of holomorphic families of finite-dimensional operators can be applied to the part of L t for t ∈ U (t j , δ) corresponding to the inside of S(λ, b). Therefore (see §1 of Chapter 2 of [7]) the eigenvalues Λ j,1 (t), Λ j,2 (t), ..., Λ j,sj (t) and the corresponding eigenprojections P (Λ j,1 (t)), P (Λ j,2 (t)), ..., P (Λ j,sj (t)) are branches of an analytic function. These eigenprojections are represented by Laurent series in t 1/ν , where ν ≤ s j , with finite principal parts. One can easily see that if λ p (t) is a simple eigenvalue of L t then P (λ p (t)) is an analytic function in some neighborhood of t, where α p (t) = (Ψ p,t , Ψ * p,t ). This and Lemma 6 show that for each p the function a p (t)Ψ p,t is analytic on D(ε) ∪ D(−ε) except at finitely many points.
Theorem 4 (a) If f (x) is an absolutely continuous, compactly supported function and
and where Proof. The proof of (70) in case (a) follows from (68), Theorem 3, and Lemma 6. In Appendix A, by writing the proof of Theorem 2 of [19] in vector form, we get the proof of (70) in case (b). In Appendix B, formula (71) is obtained from (70) by writing the proof of Theorem 3 of [19] in vector form. Definition 1 Let λ be a point of the spectrum σ(L) of L and t 1 , t 2 , ..., t k be the points of [0, 2π) such that λ is an eigenvalue of L tj of multiplicity s j for j = 1, 2, ..., k. The point λ is called a spectral singularity of L if where the supremum is taken over all t ∈ (U (t j , δ)\{t j }), j = 1, 2, ..., k; i = 1, 2, ..., s j , and the set U (t j , δ) and the eigenvalues Λ j,1 (t), Λ j,2 (t), ..., Λ j,sj (t) are defined in Remark 1. In other words, λ is called a spectral singularity of L if there exist indices j, i such that the point t j is a pole of P (Λ j,i (t)). Briefly speaking, a point λ ∈ σ(L) is called a spectral singularity of L if the projections of L t corresponding to the simple eigenvalues lying in a small neighborhood of λ are not uniformly bounded. We denote the set of spectral singularities by S(L).
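The displayed condition of Definition 1 presumably takes the following form, reconstructed from the surrounding prose ("the projections ... are not uniformly bounded"); this is a hedged reconstruction, not a verbatim formula:

```latex
% Hedged reconstruction of the displayed condition in Definition 1
\sup_{t,\,j,\,i} \bigl\| P\bigl(\Lambda_{j,i}(t)\bigr) \bigr\| = \infty,
\qquad t \in U(t_j,\delta)\setminus\{t_j\},\ j = 1,\dots,k,\ i = 1,\dots,s_j .
```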
Remark 2 Note that if γ = {λ p (t) : t ∈ (α, β)} is a curve lying in σ(L) and containing no multiple eigenvalues of L t , where t ∈ [0, 2π), then, arguing as in the papers [18,9], one can prove that for the projection P (γ) of L corresponding to γ the following holds; that is, Definition 1 is equivalent to the definition of the spectral singularities given in [18,9], where the spectral singularities are defined as points in whose neighborhoods the projections P (γ) are not uniformly bounded. The proof of (74) is long and technical. In order to avoid eclipsing the essence with technical detail, and taking into account that in the spectral expansion of L the eigenfunctions and eigenprojections of L t for t ∈ [0, 2π) are used (see (71)), and that there is a close relationship between the projections of L and of L t for t ∈ [0, 2π) (see (74)), in this paper we define the spectral singularities, without loss of naturalness, through the unboundedness of the projections P (λ p (t)) of L t instead of the unboundedness of the projections P (γ) of L; that is, we use Definition 1. In any case, a spectral singularity is a point of σ(L) that requires regularization in order to get the spectral expansion.
where U is a neighborhood of t 0 such that if t ∈ U then λ p (t) is not a spectral singularity.
(c) If the operator L has no spectral singularities then we have the following spectral expansion in terms of the parameter t: If f (x) is an absolutely continuous, compactly supported function and f ′ ∈ L m 2 (−∞, ∞), then the series in (76) converges uniformly on any bounded subset of (−∞, ∞). If f (x) ∈ S then the series converges in the norm of L m 2 (a, b) for every a, b ∈ R.
Proof. (a) If λ p (t 0 ) is a simple eigenvalue of L t0 then, due to Remark 1 (see (69) and the end of Remark 1), the projection P (λ p (t)) and |α p (t)| depend continuously on t in some neighborhood of t 0 . On the other hand, α p (t 0 ) ≠ 0, since the system of the root functions of L t0 is complete. Therefore it follows from Definition 1 that λ p (t 0 ) is not a spectral singularity of L.
(b) It follows from (61) and Theorem 5(a) that there exists a neighborhood U of t 0 such that if t ∈ U then λ p (t) is not a spectral singularity of L. If λ p (t 0 ) ∈ σ(L)\S(L) then by Definition 1, t 0 is not a pole of P (λ p (t)); that is, by Remark 1, the Laurent series in t 1/ν , where ν ≤ s, of P (λ p (t)) at t 0 has no principal part. Therefore (69) implies that 1/|α p (t)|, and hence 1/|α p (t)| (f t , Ψ * p,t )Ψ p,t , is a bounded continuous function in some neighborhood of t 0 , which completes the proof of (b).
(c) It follows from Theorem 5(b) that if the operator L has no spectral singularities then where the left-hand side is defined by (72). Thus (76) follows from (77) and (71). Now we change the variables to λ by using the characteristic equation ∆(λ, t) = 0 and the implicit function theorem. By (60), ∆(λ, t) and ∂∆(λ, t)/∂t are polynomials in e it and their resultant is an entire function. It is clear that this resultant is not the zero function. Let b 1 , b 2 , ... be the zeros of the resultant, i.e., the common zeros of the polynomials ∆(λ, t) and ∂∆(λ, t)/∂t.
To obtain (70) we must prove that the last integral in (A2) tends to zero as N → ∞. For this we prove the following Lemma 7. On l(ε) the functions g N,t = f t − Σ k=1,...,N b N k (t)Ψ k,t (A3) tend to zero as N → ∞ uniformly with respect to t.
Proof. First we prove that g N,t tends to zero uniformly. Let P N,t and P ∞,t be the projections of L m 2 [0, 1] onto H N,t and H ∞,t , respectively, where H ∞,t = ∪ ∞ N=1 H N,t . It follows from (67) that f t ∈ H ∞,t . On the other hand, one can readily see that H N,t ⊂ H N+1,t ⊂ H ∞,t , P N,t ⊂ P ∞,t , P N,t → P ∞,t .
"Mathematics"
] |
A Comprehensive Evaluation Algorithm of Multi-Point Relay Based on Link-State Awareness for UANETs
The Multi-Point Relay (MPR) is one of the core technologies of the Optimized Link State Routing (OLSR) protocol, offering significant advantages in reducing network overhead, enhancing throughput, and maintaining network scalability and adaptability. However, because only MPR nodes may forward control messages in the network, the current evaluation criteria for selecting MPR nodes are relatively limited, making it challenging to flexibly choose MPR nodes based on current link states in dynamic networks. Therefore, the selection of MPR nodes is crucial in dynamic networks. To address issues such as unstable links, poor transmission accuracy, and lack of real-time performance caused by mobility in dynamic networks, we propose a comprehensive evaluation algorithm of MPR based on link-state awareness. This algorithm defines five state evaluation parameters from the perspectives of node mobility and load. Subsequently, we use the entropy weight method to determine weight coefficients and employ the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to comprehensively evaluate and select MPR nodes. Finally, the Comprehensive Evaluation based on Link-state awareness OLSR (CEL-OLSR) protocol is proposed, and simulation experiments are conducted using NS-3. The results indicate that, compared to PM-OLSR, ML-OLSR, LD-OLSR, and OLSR, CEL-OLSR significantly improves network performance in terms of packet delivery rate, average end-to-end delay, network throughput, and control overhead.
Introduction
In recent years, propelled by the rapid advancements in drone technology, Unmanned Aerial Vehicles (UAVs) have witnessed widespread adoption across various sectors, including aerial photography, 5G communication, agricultural and forestry monitoring, as well as search and rescue operations [1,2].In order to efficiently complete tasks, multiple drones need to establish Unmanned Aerial Vehicle Ad hoc Networks (UANETs) in the actual work process to support real-time and efficient collaborative communication between each other [3].UANETs, a variant of the well-established Mobile Ad hoc Networks (MANETs), represent a prominent trend in wireless communication due to their diverse range of applications [4].Compared to traditional mobile ad hoc networks, UANETs exhibit stronger mobility and are not restricted by terrestrial factors when moving in the air, leading to more frequent changes in network topology [5].In UANET, the routing technology at the network layer is one of the core technologies.However, existing routing protocols are mostly designed for static networks and do not require real-time updates of link states [6], rendering them inadequate for the swift mobility characteristic of UANETs.Therefore, it is necessary to optimize and improve the existing routing protocols based on the current network operation status.
OLSR is a proactive, table-driven multi-hop routing protocol [7]. In the network, information is mainly exchanged in the form of HELLO messages and Topology Control (TC) messages [8]. The process of link detection and neighbor discovery between nodes is accomplished through broadcasting HELLO messages. The MPR nodes forward TC messages to obtain link information in the network, ultimately establishing and maintaining the entire network topology, and applying relevant path algorithms to generate routes [9]. The MPR mechanism is the core technology of the OLSR protocol, effectively reducing message flooding [10]. This protocol is suitable for network applications requiring short-term concurrent transmission and low latency, and it is applicable to large-scale networks with high node density.
In static sensor networks, research on topology control is relatively extensive. In dynamic networks, however, the transient nature introduced by node mobility leads to frequent changes in network topology, posing great challenges to topology control [11]. Meanwhile, due to rapid node mobility and frequent topology changes in UANETs, direct use of the OLSR protocol often leads to an increased likelihood of link failures, high topology churn, and delays [12]. Moreover, only nodes selected as MPRs, which act as message relay stations in the network, can forward control messages such as TC messages, while ordinary nodes cannot [13]. At the same time, MPR nodes generate link-state information between themselves and their MPR selector nodes. Therefore, research on MPR is highly necessary, as it directly affects network performance.
The structure of the remaining sections in this paper is as follows: Section 2 provides a review and analysis of the current related work.Section 3 introduces the relevant evaluation parameters and the process of the MPR selection algorithm in detail.In Section 4, the simulation process is elaborated, followed by a discussion and analysis of the simulation results.Finally, Section 5 summarizes the paper and identifies future research trends.
Related Works 2.1. Research Status
5G communication is the latest generation of cellular mobile communication technology, and its greatest value lies in driving the digital transformation of various industries, enabling a shift from personal mobile applications to industry applications [14].At present, some research work on combining 5G communication and wireless communication networks is as follows.
Reference [15] explores Vehicular Ad hoc Networks (VANETs) within 5G systems, presenting a dynamic vehicle resource allocation algorithm that considers the dynamic mobility characteristics of vehicular nodes. This approach enhances the practicality and scalability of the network while ensuring a rational distribution of network resources. Reference [16] delves into cooperative platooning scenarios within VANETs integrated with 5G communication. It proposes a power control algorithm based on distributed dynamic programming, taking into account the mobility of vehicle nodes, to achieve fair resource allocation within base stations and each platoon. Considering the impact of node mobility, Reference [17] investigates session continuity in 5G communication systems under scenarios involving dense and mobile networks. The findings highlight that communication session continuity is significantly affected by link interruptions caused by node mobility and by frequent blocking due to data retransmissions. The advancement of 5G technology provides high-speed, low-latency, and reliable communication support for drone networks.
In recent years, some researchers have considered improving the MPR mechanism from various aspects when studying the OLSR protocol, including selecting appropriate metric parameters, altering selection strategies, and the impact of link variations.The current research status is as follows.
References [18][19][20] explore the impact of node mobility on the MPR mechanism and propose novel metric parameters for optimization.Reference [18] introduces the LD-OLSR protocol, leveraging link duration and three-dimensional node situational data to forecast link durations.By incorporating node forwarding willingness, it introduces an MPR factor, effectively enhancing packet delivery rates and reducing latency.Reference [19] presents an efficient MPR selection algorithm considering node mobility's effect on network topology.It introduces the concept of "effective coverage area", estimating future node positions using historical data to expedite network topology establishment and reduce TC message redundancy.Reference [20] designs the mobility and queue-length-aware MP-OLSR protocol based on multi-criteria decision-making metrics, considering various influencing factors.
References [21][22][23] enhance the MPR mechanism by altering the selection strategy.Reference [21] proposes a reverse-thinking MPR selection algorithm, combining iterative and set operations to eliminate redundant nodes effectively, enhancing data transmission success rates.Reference [22] introduces a novel MPR node selection method employing Self-Organizing Map (SOM) artificial neural networks to distinguish strong and weak MPR nodes and select reliable retransmission-capable nodes, improving throughput, packet delivery rates, and network security.Reference [23] presents the Dynamic Updating Ant Colony Optimization (DUACO) algorithm incorporating state information and dynamic update mechanisms, mitigating MPR set redundancy and enhancing network performance.
References [24][25][26] enhance the MPR mechanism considering link variations.Reference [24] proposes a link stability-based MPR selection algorithm, prioritizing nodes with stable link quality to extend the MPR set's effective time and reduce topology change impacts on data transmission.Reference [25] introduces the Multi-dimensional Perception and Energy-Aware OLSR (MPEAOLSR) routing protocol, addressing network challenges like frequent topology changes and congestion by considering link conditions and energy awareness.Reference [26] proposes an M-OLSR routing protocol based on the SL-MPR selection algorithm, considering node mobility and link variations to tackle issues in current UANETs, such as topology changes and control message redundancy.
While many scholars have addressed the selection of MPR nodes considering factors like link status, network load, and node energy conditions, utilizing neural network optimization algorithms and leveraging multiple parameters for decision-making, there remains a gap in proposing a comprehensive evaluation algorithm grounded in multidimensional perception.Such an algorithm would integrate various indicator parameters from different aspects of the same evaluation object to derive a holistic evaluation metric.
Therefore, it is imperative to propose a multi-attribute comprehensive evaluation routing algorithm based on multi-dimensional state perception for UANETs operating in highly dynamic and rapidly changing topology scenarios.
MPR Selection Model
In OLSR, nodes are categorized into MPR nodes and general nodes based on their ability to forward control messages.General nodes are limited to receiving and processing messages, while only nodes designated as MPRs have the capability to forward control messages [27].Therefore, selecting appropriate MPR nodes is pivotal for enhancing network performance.The MPR mechanism effectively limits the widespread dissemination of control messages, thereby reducing control overhead in the network, preventing resource wastage, and mitigating network congestion [28].As illustrated in Figure 1, employing the MPR mechanism for flooding substantially decreases the transmission of control packets while covering all nodes.Furthermore, as the network expands, the benefits of this mechanism become even more pronounced.
N 1 (i) means 1-hop neighbor set of node i, N 2 (i) means 2-hop neighbor set of node i, and M(i) means MPR set of node i.The steps for selecting MPR nodes are as follows [29]: Step 1. Select nodes from N 1 (i) through which node i can only reach certain 2-hop neighbors, and then add them to M(i).
Step 2. Sort 1-hop neighbors from high to low based on the number of the coverage for 2-hop neighbors, and select the ones with the highest coverage to join M(i).
Step 3. Update and remove the selected 1-hop neighbors from N 1 (i) and the newly covered 2-hop neighbors from N 2 (i) after each addition operation.
Step 4. Repeat Step 2, removing nodes through Step 3, until the nodes of M(i) completely cover all of the 2-hop neighbors in N 2 (i) (see the sketch below).
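For concreteness, the greedy heuristic in Steps 1-4 can be sketched as follows. This is a minimal illustration, not the protocol's reference implementation; the container names (`n1`, `n2`) are hypothetical.

```python
# Minimal sketch of the greedy MPR heuristic in Steps 1-4 above.
# n1 maps each 1-hop neighbor of node i to the set of 2-hop
# neighbors it covers; n2 is N2(i). Names are illustrative only.

def select_mpr(n1: dict[str, set[str]], n2: set[str]) -> set[str]:
    mpr: set[str] = set()
    uncovered = set(n2)

    # Step 1: neighbors that are the sole relay to some 2-hop neighbor.
    for two_hop in n2:
        relays = [n for n, cov in n1.items() if two_hop in cov]
        if len(relays) == 1:
            mpr.add(relays[0])
    for m in mpr:
        uncovered -= n1[m]

    # Steps 2-4: repeatedly add the neighbor covering the most
    # still-uncovered 2-hop neighbors until N2(i) is fully covered.
    while uncovered:
        best = max(n1, key=lambda n: len(n1[n] & uncovered))
        if not n1[best] & uncovered:
            break  # remaining 2-hop neighbors are unreachable
        mpr.add(best)
        uncovered -= n1[best]
    return mpr
```

On the Figure 2 topology this reproduces a set such as {A, C, B, D}; ties between equally covering neighbors are broken arbitrarily, which is exactly the weakness the TOPSIS-MPR algorithm below addresses.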
The ultimate objective of MPR selection within the 1-hop neighbor set of a node is to ensure the connectivity of data transmission links by achieving full coverage of the 2-hop neighbor set of the node in the network [30].
As depicted in Figure 2, the MPRs for node S are chosen from its 1-hop neighbor set, where nodes A to E represent 1-hop neighbor nodes, and nodes F to N represent 2-hop neighbor nodes. Initially, node A, the sole neighbor capable of reaching 2-hop neighbor F, is selected to join the MPR set. Subsequently, nodes are arranged based on their coverage, with nodes C, B, and D sequentially added to the MPR set. Ultimately, the MPR set of node S comprises {A, C, B, D}. It is worth noting that some nodes might receive the same control message forwarded by different nodes, and traditional selection algorithms may not always yield the optimal and minimal MPR set. In the scenario described above, the set {A, B, D} could also achieve full coverage of the 2-hop neighbors.
Analysis of MPR Selection Issues
The metric for selecting MPR nodes is one-dimensional, and even when multiple factors are considered, the probability that all factors are simultaneously favorable is very low. Therefore, it is necessary to integrate multiple factors.

In the process of selecting MPR nodes, there can be nodes with the same initial coverage and current coverage. In this case, a node is randomly selected as the MPR node. If MPRs are selected based on coverage, as shown in Figure 3, there will be two results: {1, 2, 4} and {1, 3, 4}, from which a selection will be made randomly. When the mobility of node 3 is too fast, it is not within the communication range of node 0 at a certain moment, causing link breakage between them. Meanwhile, if, compared with node 3, the link duration of node 2 is longer and the link stability is better, then the former set gives a better result. If node 2 receives a large number of packets at a certain moment so that its buffer queue becomes full, it may be unable to continue receiving and can only discard packets arriving at the next moment, causing packet loss. Considering the load situation of node 2, the latter set then gives a better result.

Relying solely on single indicators such as node coverage, node mobility, and node load to select MPR nodes is unreasonable [31]. The above examples illustrate that if multiple factors are not comprehensively considered, it will not only affect the selection of MPR nodes, but also disrupt normal communication and data transmission in the network, leading to poor network robustness and decreased overall network performance.
Proposed Algorithm
Due to the transient characteristics introduced by node mobility in dynamic networks, the network topology undergoes frequent changes. Existing research on dynamic network routing algorithms has paid limited attention to node mobility and network load variations. Additionally, there is a lack of studies that select multiple parameter indicators and utilize multi-attribute decision-making to optimize the MPR mechanism. Therefore, addressing the aforementioned issues, this paper combines the TOPSIS method to establish a multi-attribute comprehensive evaluation model. Leveraging link-state awareness among nodes, a novel MPR selection algorithm named TOPSIS-MPR is proposed. This algorithm emphasizes two key dimensions of link awareness, mobility and load, enabling real-time monitoring of the availability, quality, and data payload of each network link. By integrating multiple parameters and multi-attribute decision-making methods, this research further optimizes the MPR mechanism for dynamic networks.
The formulas and conclusions derived in the paper are based on the following three hypotheses, namely: Hypothesis 1.Each drone node is equipped with a Global Positioning System (GPS) that can sense motion and analyze link status based on the location and velocity information provided by the module.
Hypothesis 2. The effective communication distance of each drone node is the same, and the signal propagation follows a free space propagation loss model.The received signal power is mainly related to the distance between them.Hypothesis 3.There are many data frames in the MAC layer buffer of each drone node, mainly including data frames waiting to be sent, data frames waiting to be forwarded, control frames waiting to be sent, retransmitted data frames, and confirmation frames waiting to be sent.The length of the queue buffer of a node can reflect its load situation.
Awareness of Mobility
Considering the impact of node mobility in the network from the dimensions of time and space, we propose three measurement parameters: link duration, stability degree of link, and average neighbor set change rate.
Link Duration
Given the propensity for drone nodes to move at high speeds, they frequently venture beyond the communication range of a node, leading to link disconnections.In light of this, we introduce the concept of link duration (LD), defined as the period from the establishment of a connection between two nodes in the network until the disconnection of the link.This metric serves to quantify the stability and reliability of network connections in dynamic environments.
As shown in Figure 4, we establish a three-dimensional Cartesian coordinate system with node i as the reference center, where node j moves around node i. At time t 1 , node j establishes a link connection with node i at point B, and then moves along the direction of V to point C at time t 2 . At the next moment, it will disconnect from node i, having moved beyond the communication range of node i.
Assuming that the measured coordinates of drone node i and node j are (x i , y i , z i ) and (x j , y j , z j ), and their velocities are (v ix , v iy , v iz ) and (v jx , v jy , v jz ), respectively, the relative position vector OB between node i and node j is denoted as (d x , d y , d z ), and the relative velocity vector OC between node i and node j is denoted as (v x , v y , v z ). The expressions are as follows, where α is the angle between the relative position vector OB and the relative velocity vector OC, and β is its complementary angle: The LD between node i and node j, denoted LD ij , is expressed as: When two connected nodes maintain consistency in their movement direction and speed, the link maintenance time is longer, making the link less likely to disconnect. Conversely, the link is more prone to disconnection if there is inconsistency in their movement. Therefore, selecting neighbors with larger LD values results in longer link connection times along their paths, ensuring more stable and reliable data transmission.
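Since the closed-form expression for LD ij is not reproduced above, the following sketch computes it under the stated geometry, assuming straight-line motion at constant relative velocity. Solving |OB + OC·t| = R for the positive root is a standard closed form and may differ cosmetically from the paper's exact expression.

```python
import math

# Hedged sketch: link duration between nodes i and j, assuming
# straight-line motion at constant relative velocity. Solves
# |d + v*t| = R for the positive root t, where d is the relative
# position, v the relative velocity, and R the communication range.

def link_duration(pos_i, pos_j, vel_i, vel_j, comm_range):
    d = [pj - pi for pi, pj in zip(pos_i, pos_j)]   # relative position
    v = [vj - vi for vi, vj in zip(vel_i, vel_j)]   # relative velocity
    v2 = sum(c * c for c in v)
    if v2 == 0:
        return math.inf  # no relative motion: link never expires
    dv = sum(dc * vc for dc, vc in zip(d, v))
    d2 = sum(c * c for c in d)
    disc = dv * dv - v2 * (d2 - comm_range ** 2)
    if disc < 0:
        return 0.0  # nodes are already out of range
    return (-dv + math.sqrt(disc)) / v2
```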
Stability Degree of Link
Link duration assesses link stability by gauging the current or anticipated motion state of the link.However, to further enhance the evaluation of link stability, we propose the stability degree of link (SDL), which quantifies the fluctuation level of relative node positions.By leveraging SDL, we can enhance packet delivery rates, diminish data retransmission instances, elevate routing success rates, and ultimately enhance network performance.
Before defining link stability, we first give a definition of distance variation based on the Chebyshev inequality in statistical theory, which reflects the fluctuation of the distance between node i and node j. The definition of δd ij is as follows: In the above equation, d ij and d̄ ij denote, respectively, the distance and the average distance between node i and node j during the time period [t − T, t], and n is the number of distance measurements in that period.
To characterize the impact of distance variation on link communication quality, we introduce a distance step function φ(d).This function signifies that as nodes draw closer, the communication quality of the link improves, with distance variation exerting a more pronounced impact on link communication quality.
After introducing the step function φ(d), the link stability between node i and node j, denoted SDL ij (t), is defined as follows:
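The exact formulas for δd ij, φ(d), and SDL ij are not reproduced above, so the sketch below encodes one plausible reading: δd as the relative standard deviation of the sampled distances over [t − T, t], φ(d) as a decreasing step weight on the mean distance, and SDL growing as variation shrinks. The thresholds and the combining rule are assumptions, not the paper's definitions.

```python
import statistics

# Hedged sketch of the SDL idea; all numeric thresholds and the
# combining rule below are assumed, since the originals are elided.

def distance_variation(samples: list[float]) -> float:
    # Relative standard deviation of distance samples over [t - T, t].
    mean_d = statistics.fmean(samples)
    return statistics.pstdev(samples) / mean_d if mean_d else 0.0

def phi(mean_d: float, comm_range: float) -> float:
    # Closer links get a larger weight (assumed step thresholds).
    ratio = mean_d / comm_range
    if ratio < 0.5:
        return 1.0
    if ratio < 0.8:
        return 0.6
    return 0.3

def stability_degree(samples: list[float], comm_range: float) -> float:
    # Higher SDL = steadier link: proximity weight damped by variation.
    mean_d = statistics.fmean(samples)
    return phi(mean_d, comm_range) / (1.0 + distance_variation(samples))
```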
Average Neighbor Set Change Rate
The rapid movement of drone nodes can result in alterations in network topology, frequent shifts in neighboring nodes, and an increased probability of link disruptions, thereby diminishing network performance.
For this reason, we propose the concept of average neighbor set change rate (ANSCR): by monitoring the changes in the neighbor set of a node over a period of time, it measures the topological changes among its surrounding neighbors. It refers to the change rate of the neighbor set per unit time, expressed as follows: In the above, INC i represents the number of newly added nodes in the neighbor set of node i during the time period [t 1 , t 2 ], while DEC i represents the number of removed nodes over the same period. N i represents the number of nodes in the neighbor set of node i at time t 2 . All of these can be obtained by monitoring the neighbor table of node i.
To reflect time correlation and prevent drastic changes caused by sudden jumps in the values of INC i and DEC i , we apply an exponential moving average strategy to ANSCR i . The final expression is as follows, where ξ is the smoothing factor: The expression shows that the smaller the value of ANSCR, the smaller the change in the neighbors around the node, and the lower the degree of topology change.
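A compact sketch of the two steps follows. The exact displayed formulas are elided above, so the raw rate (additions plus removals, normalized by the current neighbor count and window length) and the EMA form are assumed readings consistent with the prose.

```python
# Hedged sketch of ANSCR with exponential smoothing; the raw-rate
# normalization and the EMA form are assumptions from the prose.

def raw_anscr(added: int, removed: int, n_now: int, window: float) -> float:
    # Change rate of the neighbor set per unit time over [t1, t2].
    if n_now == 0 or window <= 0:
        return 0.0
    return (added + removed) / (n_now * window)

def smoothed_anscr(prev: float, current: float, xi: float = 0.3) -> float:
    # xi is the smoothing factor from the text; 0.3 is illustrative.
    return xi * current + (1.0 - xi) * prev
```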
Awareness of Load
As the number of data packets sent by nodes in the network increases, the receiving nodes become unable to process them in a timely manner, increasing the workload of the sending nodes. The escalation in data packet transmission within the network inherently amplifies the likelihood of collisions among packets from neighboring nodes. Such collisions culminate in congestion at the Media Access Control (MAC) layer, which in turn substantially degrades transmission efficiency. Furthermore, when the buffer queue reaches capacity, nodes must discard incoming data packets, exacerbating the packet loss rate. This scenario not only underscores the challenges of efficiently managing network traffic but also emphasizes the critical need for sophisticated mechanisms to alleviate congestion and optimize packet handling, thereby ensuring network reliability and performance.
Load of Node
To measure the load situation of the current node, we propose the concept of load of node (LN), which is defined as follows: Among them, Load i (t) represents the length of data packet frames waiting to be processed in the MAC layer buffer queue of node i at time t, and Load max represents the maximum frame length that the MAC layer of node i can accommodate.It indicates that the larger the value of LN i , the higher the utilization rate of the MAC layer buffer queue of node i at the current time, in other words, the greater the load on node i.
Load of Link
The load situation of a link is determined jointly by the load situation of the sender and receiver at both ends of the link.Therefore, in combination with load of node mentioned before, load of link (LL) is introduced to reflect the link load condition between two nodes, defined as follows: In the above equation, LL ij represents the load of link between node i and node j, while LN i and LN j respectively represent the load of node i and node j.It reflects that the greater the load on the nodes at both ends of the link, the greater the load on the link.
Load of Neighbor Set
The load situation of a node is affected not only by the node itself and a given link, but also by the neighbors around it. Therefore, we propose the load of neighbor set (LNS) to reflect the load impact of the neighbors around a node.
Among them, LN i represents the load of neighboring nodes for node i, and N i represents the number of neighboring nodes for node i at time t.
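The three load metrics can be sketched together as below. LN follows the definition given above; the exact combining rules for LL and LNS are elided in the text, so the averaging forms used here are assumptions consistent with the prose.

```python
# Hedged sketch of the three load metrics; the averaging rules for
# LL and LNS are assumed, since the displayed formulas are elided.

def load_of_node(queue_len: int, queue_max: int) -> float:
    # LN: utilization of the MAC-layer buffer queue, in [0, 1].
    return queue_len / queue_max

def load_of_link(ln_i: float, ln_j: float) -> float:
    # LL: joint load of the two endpoints (assumed symmetric form).
    return (ln_i + ln_j) / 2.0

def load_of_neighbor_set(neighbor_lns: list[float]) -> float:
    # LNS: aggregate load of the surrounding neighbors (assumed mean).
    return sum(neighbor_lns) / len(neighbor_lns) if neighbor_lns else 0.0
```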
Evaluation Algorithm of TOPSIS-MPR
Leveraging the valuable information extracted from received HELLO and TC packets, nodes within the network compute key evaluation metrics such as LD, SDL, ANSCR, LL, and LNS.In most cases, there is no neighbor node whose various indicators are superior to others, so it is necessary to weigh multiple indicators and select the best MPR nodes from the neighbor set.
• Construct the original evaluation matrix M.
With n neighbors of a node in the network as the evaluation objects, we select the LD, SDL, ANSCR, LL, and LNS of neighbors as evaluation metrics.The original evaluation matrix M n×5 is constructed as follows: • Construct the standardized matrix N.
Since the dimensions and attribute types of each evaluation metric vary, direct comparison of the original data is not feasible.To calculate and compare various evaluation metrics, it is necessary to normalize the original evaluation matrix to obtain a standardized evaluation matrix.
Parameter types can be broadly categorized into benefit-type and cost-type.Benefittype parameters have values that are better when they are larger, while cost-type parameters have values that are better when they are smaller.In the original evaluation matrix M n×5 , LD and SDL belong to the benefit type parameters, while ANSCR, LL, and LNS belong to the cost type parameters.
After normalization, the standardized evaluation matrix N n×5 is obtained as follows: • Construct the weight matrix W.
The entropy weighting method is employed to construct the weight matrix W. This is an objective weighting method that eliminates subjectivity and obtains high-precision weights.The core of this method is to associate the entropy value of evaluation indicators with the weight value.The more scattered the data, the greater the difference, and the smaller the entropy value, which means that the indicator carries more discriminative information and the weight of the indicator is greater.
The weight matrix W 1×5 constructed by the entropy weight method is as follows: • Construct the weighted evaluation matrix R.
By using the weight matrix W 1×5 and the standardized evaluation matrix N n×5 , we obtain the weighted evaluation matrix R n×5 as follows: • Determine the theoretical optimal solution O + and the worst-case solution O − .
• Calculate the proximity factor matrix C.
Based on the Euclidean distance formula, we separately calculate the distance between each evaluation object and the theoretical optimal solution and the theoretical worst solution.The expressions are as follows: Calculate the proximity factor c and obtain the proximity factor matrix C n×1 :
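The whole pipeline of this subsection can be sketched compactly as follows. The normalization details omitted above (min-max scaling with cost-type columns flipped) are assumptions; the entropy weights and the proximity factor follow the standard method.

```python
import numpy as np

# Hedged sketch of the entropy-weight + TOPSIS pipeline. Each row of
# m is a candidate neighbor; columns follow the text's order
# [LD, SDL, ANSCR, LL, LNS], the first two benefit-type, the rest
# cost-type. Assumes at least two candidates and non-constant data.

def entropy_topsis(m: np.ndarray, benefit: list[bool]) -> np.ndarray:
    n, k = m.shape
    # Min-max normalization, flipping cost-type columns (assumed).
    norm = np.empty_like(m, dtype=float)
    for j in range(k):
        col = m[:, j].astype(float)
        span = col.max() - col.min()
        if span == 0:
            norm[:, j] = 1.0  # constant column: no discriminative info
            continue
        norm[:, j] = ((col - col.min()) / span if benefit[j]
                      else (col.max() - col) / span)
    # Entropy weights: more dispersed columns get larger weights.
    p = norm / (norm.sum(axis=0) + 1e-12)
    ent = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(n)
    w = (1.0 - ent) / (1.0 - ent).sum()
    # TOPSIS on the weighted matrix.
    r = norm * w
    best, worst = r.max(axis=0), r.min(axis=0)
    d_plus = np.linalg.norm(r - best, axis=1)
    d_minus = np.linalg.norm(r - worst, axis=1)
    return d_minus / (d_plus + d_minus + 1e-12)  # proximity factors c
```

The neighbor with the largest proximity factor c is then preferred during MPR selection (Step 5 below).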
Specific Steps of TOPSIS-MPR Algorithm
N(A) represents the set of 1-hop neighbors of node A; N 2 (A) represents the set of 2-hop neighbors of node A; M(A) represents the MPR set of node A; S 2 (A) represents the judgment flag for M(A) to fully cover N 2 (A).
As shown in Figure 5, this is the flowchart of the TOPSIS-MPR algorithm. To facilitate understanding of the flowchart, the specific steps are described as follows: Step 1. Initialize S 2 (A) = N 2 (A). Step 2. By traversing N(A), calculate the distance d between node A and its neighbors, and judge whether d is greater than the communication distance R. If so, remove the neighbor from N(A); otherwise, keep the neighbor.
Step 4. If ∃i ∈ N(A) such that node i is the only reachable relay of a node in S 2 (A), then add node i to M(A), that is, M(A) = M(A) ∪ {i}; remove node i from N(A) and remove the 2-hop neighbors in S 2 (A) reachable through node i; then proceed to Step 6.
Step 5. For all i ∈ N(A), i ∉ M(A), calculate the proximity factor c i of each node i based on the TOPSIS-MPR evaluation algorithm, select the node i with the highest c i value, and add it to M(A), that is, M(A) = M(A) ∪ {i}; remove node i from N(A) and remove the 2-hop neighbors in S 2 (A) reachable through node i; then proceed to Step 6.
Step 6. Judge whether S 2 (A) = ∅. If so, proceed to Step 7; otherwise, return to Step 5.
Step 7. The algorithm ends and M(A) is obtained, which is the MPR set of node A.
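Putting the steps together, a hedged sketch of the full selection loop, reusing `entropy_topsis` from the earlier sketch, looks as follows; `metrics[n]` holds the five indicator values [LD, SDL, ANSCR, LL, LNS] for neighbor n, and the coverage filter inside the loop is a practical addition so the loop always makes progress.

```python
import numpy as np

# Hedged sketch of the TOPSIS-MPR loop (Steps 1-7); names are
# illustrative and the coverage filter is an added safeguard.

def topsis_mpr(n1, n2, metrics,
               benefit=(True, True, False, False, False)):
    mpr, s2 = set(), set(n2)  # Step 1: S2(A) = N2(A)
    # Step 4: a neighbor that is the only relay to some 2-hop
    # neighbor joins M(A) unconditionally.
    for two_hop in n2:
        relays = [n for n in n1 if two_hop in n1[n]]
        if len(relays) == 1:
            mpr.add(relays[0])
    for sel in mpr:
        s2 -= n1[sel]
    # Steps 5-6: pick the candidate with the highest proximity
    # factor until S2(A) is empty.
    while s2:
        cand = [n for n in n1 if n not in mpr and n1[n] & s2]
        if not cand:
            break  # leftover 2-hop neighbors are unreachable
        if len(cand) == 1:
            best = cand[0]
        else:
            scores = entropy_topsis(
                np.array([metrics[n] for n in cand], dtype=float),
                list(benefit))
            best = cand[int(scores.argmax())]
        mpr.add(best)
        s2 -= n1[best]
    return mpr  # Step 7: M(A)
```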
Simulation and Results
To validate the algorithm's performance, the proposed CEL-OLSR protocol based on the TOPSIS-MPR algorithm is compared with PM-OLSR [32], ML-OLSR [33], LD-OLSR [18], and the standard OLSR protocol. Through simulation experiments, their differences in network performance metrics are analyzed, including packet delivery rate, average end-to-end delay, network throughput, and routing control overhead.
Simulation
Due to limitations such as the environment and interference, this paper conducts simulation experiments using NS-3, a discrete-event network simulator running in a Linux environment. The simulated task scenario is a range search or exploration mission performed by drones in UANETs. It involves setting up 50 mobile nodes in a 1.5 km × 1.5 km × 0.1 km area with a three-dimensional Gauss-Markov mobility model. The scenario tasks involve two sending nodes transmitting data packets of size 256 bytes to two other nodes at a constant bit rate (CBR). The simulation experiment runs for 300 s.
In the network implementing the CEL-OLSR routing protocol, the interaction of node link-state information among mobile nodes is achieved through hello control packets and TC control packets. In the "olsr-header.h" and "olsr-header.cpp" files, we mainly modify the structure of the hello control message packet by adding the relevant evaluation parameters mentioned in Section 3 into the hello packet. In this simulation experiment, certain modifications are made to the structure of the hello message packets, as illustrated in Figure 6 below.
• ANSCR is the average neighbor set change rate of the sending node of the hello control packet. LN and LNS are the load of node and the load of neighbor set of the sending node of the hello control packet, respectively.
• Longitude, Latitude, and Altitude are the position coordinates of the sending node of the hello control packet in the x, y, and z directions, respectively.
• Velocity_X, Velocity_Y, and Velocity_Z are the velocities of the sending node of the hello control packet in the x, y, and z directions, respectively.
The "olsr-repositories.h" file contains a series of tuples (including IfaceAssocTuple, NeighborTuple, TwoHopNeighborTuple, MprSelectorTuple, DuplicateTuple, TopologyTuple, AssociationTuple, and LinkTuple). We create a new tuple called LinkQosTuple, which is used to record link quality (including the neighbor node main address, average neighbor set change rate ANSCR, link duration LD, stability degree of link SDL, load of node LN, and load of link LL).
The "olsr-state.h" and "olsr-state.cpp" files define the "OlsrState" class and various types of information tables (including LinkSet, NeighborSet, TwoHopNeighborSet, TopologySet, MprSet, MprSelectorSet, DuplicateSet, IfaceAssocSet, and AssociationSet). We create a new information table called LinkQosSet to store the link status of different nodes at different times. According to the five evaluation parameters proposed in Section 3, during the network simulation we either store the relevant data from the hello message packets exchanged between nodes directly in the LinkQosSet information table or calculate and then store them.
In the "olsr-routing-protocol.h" and "olsr-routing-protocol.cpp" files, we calculate the link duration of a node based on the speed and location information in the hello packets, and compute the load of link between two nodes based on the load of node.
Analysis of Results
To evaluate the quality of the routing protocol design, this paper considers two aspects: data transmission accuracy and transmission speed.It selects the following four indicators to assess whether the optimized routing protocol can better meet the network performance requirements in the set task scenario.
Packet Delivery Rate
Figure 7a illustrates the packet delivery rate for five protocols under varying speeds, while Figure 7b depicts the growth rate in packet delivery rate for CEL-OLSR, PM-OLSR, ML-OLSR, and LD-OLSR when compared to OLSR.Notably, CEL-OLSR demonstrates the highest packet delivery rate.When compared to PM-OLSR, CEL-OLSR exhibits an average increase in packet delivery rate of 2.56%.Compared to ML-OLSR, this increase is 6.89%.Furthermore, when compared to LD-OLSR and OLSR, CEL-OLSR registers an average improvement of 11.11% and 22.04%, respectively.In our simulation, we categorize the drone flight phase into three stages based on speed: low speed, mid-low speed, and mid-high speed.Table 2 presents the packet delivery rate for the comparison protocols at different speed stages.It is evident that the packet delivery rate decreases as speed increases.Importantly, CEL-OLSR exhibits the smallest decrease.The probability of link disruption rises with node speed, leading to an increase in the number of lost packets.In the environment of ad hoc networks, the stability between links is a crucial factor in ensuring smooth packet delivery.With the increase in node mobility, the stability of links faces challenges because the relative positions of nodes become more variable, leading to continuous changes in network topology.This dynamic variation forces routing protocols to update frequently to adapt to the new network state, which may lead to the occurrence of routing loops, outdated routing information, and potential packet loss before reaching the destination.Additionally, as link disruption events increase, ensuring successful packet delivery becomes more difficult, often requiring multiple retransmissions.This not only exacerbates network load pressure but also, in cases of repeated unsuccessful retransmissions, may result in eventual packet discarding.In summary, the acceleration of node mobility inevitably exacerbates network dynamics, leading to a series of link and routing issues that may negatively impact successful packet delivery.
Average End-to-End Delay
As depicted in Figure 8a, within the node speed range of 10 m/s to 50 m/s, CEL-OLSR exhibits a maximum average end-to-end delay of approximately 11.07 ms, characterized by minimal fluctuations. In comparison to PM-OLSR, CEL-OLSR demonstrates a reduction in average end-to-end delay of 4.66 ms. Compared to ML-OLSR, this reduction amounts to 9.03 ms. Furthermore, when contrasted with LD-OLSR and the standard OLSR, CEL-OLSR showcases an overall decrease of 12.98 ms and 24.82 ms, respectively. As shown in Table 3, detailed results of the average end-to-end delay for each protocol at different speed stages are presented. It is evident that transmission delay increases with rising speed, and CEL-OLSR exhibits a relatively gradual trend of delay variation. When a node's mobility rate increases, the network topology undergoes more frequent changes, leading to the breakage of established routing paths. This situation forces routing protocols to initiate a new round of route discovery, introducing additional delays and thus prolonging the overall packet transmission time. In this dynamically changing network environment, data packets in transit may need to queue in relay nodes' buffers to await updated routing information, which also contributes to increased delays. Furthermore, due to link instability, packet loss and the subsequent necessary retransmissions further contribute to delays. In summary, the increase in node speed results in more frequent changes to the network topology, not only increasing the time required for route discovery and maintenance but also introducing delays from link rebuilding, buffer queuing, and packet retransmission, collectively raising the overall transmission latency of the network.
Throughput
As the node speed increases, the probabilities of link breakage and packet loss both increase, resulting in a significant decrease in throughput. As shown in Figure 9, compared with PM-OLSR, ML-OLSR, LD-OLSR, and the standard OLSR, the throughput of the CEL-OLSR protocol increased by an average of 8.04%, 22.71%, 45.55%, and 93.19%, respectively. The throughput results for the comparison protocols at different speed stages are shown in Table 4.
The rapid movement of nodes contributes to heightened instability in network links, resulting in frequent disconnections and complicating sustained communication.Such dynamic motion exacerbates link disruptions, posing challenges to stable communication, and diminishing the efficiency of data transmission.Furthermore, the frequent link changes lead to packet loss, necessitating repeated retransmissions.This not only consumes bandwidth that could be allocated to new data packets but also increases time overhead.Within this dynamically evolving network environment, competition for wireless channels may intensify, elevating the likelihood of MAC layer collisions and further diminishing data transmission speeds, thereby reducing overall network throughput.In summary, heightened node mobility gives rise to a range of issues including link instability, frequent routing modifications, packet retransmissions, channel contention, congestion, and buffering delays, all of which significantly diminish network throughput.
Route Control Overhead
The control overhead ratios for each protocol are depicted in Figure 10. Overall, the average cost ratios of control messages for the CEL-OLSR, PM-OLSR, ML-OLSR, LD-OLSR, and OLSR protocols are 40.02%, 41.51%, 42.90%, 44.61%, and 41.16%, respectively. The detailed results for each protocol at different speed stages can be found in Table 5. A lower proportion of control messages implies a higher volume of actual data packet transmissions, which aligns with our expectations. While the overhead for CEL-OLSR also gradually increases with rising node speed, the rate of increase in control overhead slows down. In high-speed mobile environments, frequent changes in network topology necessitate continuous updating and exchanging of routing information, resulting in a significant increase in control message overhead. In such environments, the intensification of link instability leads to more common occurrences of link breaks and reconstructions, compelling the system to transmit more link-state control messages to maintain the accuracy of link information. Simultaneously, in the quest for optimal paths, the system must escalate the frequency of path detection and confirmation operations, further relying on the frequent exchange of control messages. The instability of links and path alterations also heighten the packet loss rate, prompting more retransmission requests, which often necessitate additional control messages for coordination. Additionally, the dynamically changing network environment demands more sophisticated congestion control and traffic management mechanisms to effectively administer network resources and circumvent congestion. Consequently, as node mobility increases, to uphold communication stability and efficiency, the system must augment the frequency of sending control messages, thereby significantly consuming network bandwidth and escalating node energy consumption. This surge in overhead is particularly pivotal in energy-constrained wireless network environments.
Conclusions
In UANETs, nodes often exhibit high mobility characteristics.Assuming node deployment remains relatively fixed, the emergence of "hotspot nodes" in the network due to mission requirements can lead to network congestion and a decline in network performance, among other issues.To mitigate these potential problems, we can address the situation by considering both node mobility and actual load conditions.By taking into account both mobility and load aspects, considering the influence of multiple factors, and making decisions based on comprehensive impact factors, we can enhance link stability, balance network load, and improve network performance.
Considering the importance of MPR nodes in OLSR, this paper takes a holistic approach from the perspectives of mobility and load. It designs a comprehensive MPR evaluation algorithm based on link-state awareness, selecting five evaluation metrics: Link Duration (LD), Stability Degree of Link (SDL), Average Neighbor Set Change Rate (ANSCR), Load of Link (LL), and Load of Neighbor Set (LNS). The weight coefficients are determined using the entropy weight method, and a comprehensive evaluation is conducted using TOPSIS. On this basis, an optimized routing protocol named CEL-OLSR is proposed, which aims to overcome the single-metric MPR node selection of the standard OLSR routing protocol. Compared to PM-OLSR, ML-OLSR, LD-OLSR, and OLSR, CEL-OLSR improved the packet delivery rate by an average of 2.56%, 6.89%, 11.11%, and 22.04%, respectively. Within the node speed range of 10 m/s to 50 m/s, CEL-OLSR exhibited a maximum average end-to-end delay of around 11.07 ms, with minimal fluctuation. In comparison to PM-OLSR, ML-OLSR, LD-OLSR, and OLSR, CEL-OLSR reduced the overall average end-to-end delay by 4.66 ms, 9.03 ms, 12.98 ms, and 24.82 ms, respectively. Furthermore, CEL-OLSR increased network throughput by 8.04%, 22.71%, 45.55%, and 93.19% compared to PM-OLSR, ML-OLSR, LD-OLSR, and OLSR, respectively. Overall, the average control message cost ratios for the CEL-OLSR, PM-OLSR, ML-OLSR, LD-OLSR, and OLSR protocols are 40.02%, 41.51%, 42.90%, 44.61%, and 41.16%, respectively. As node speed increases, the control overhead of CEL-OLSR also gradually increases. Overall, CEL-OLSR presents a considerable improvement in delivery rate, latency, throughput, and routing efficiency over the compared protocols.
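To make the evaluation pipeline concrete, the sketch below illustrates entropy weighting followed by TOPSIS ranking over a candidate-node metrics matrix. This is a minimal illustration rather than the paper's implementation: treating LD and SDL as benefit (larger-is-better) criteria and ANSCR, LL, and LNS as cost criteria is our assumption, and all numeric values are invented.

```r
# Minimal sketch of entropy weighting + TOPSIS for ranking candidate MPR nodes.
# X: one row per candidate node; columns = LD, SDL, ANSCR, LL, LNS.
# 'benefit' flags the larger-is-better columns (an illustrative assumption).
rank_mpr_candidates <- function(X, benefit = c(TRUE, TRUE, FALSE, FALSE, FALSE)) {
  # Min-max normalize each column; flip cost criteria so larger is always better.
  N <- apply(X, 2, function(col) (col - min(col)) / (max(col) - min(col) + 1e-12))
  N[, !benefit] <- 1 - N[, !benefit]

  # Entropy weight method: metrics with more dispersion receive larger weights.
  P <- sweep(N + 1e-12, 2, colSums(N + 1e-12), "/")  # per-column proportions
  e <- -colSums(P * log(P)) / log(nrow(X))           # entropy of each metric
  w <- (1 - e) / sum(1 - e)                          # weight coefficients

  # TOPSIS: distances to the weighted ideal and anti-ideal alternatives.
  V <- sweep(N, 2, w, "*")
  d_best  <- sqrt(rowSums(sweep(V, 2, apply(V, 2, max))^2))
  d_worst <- sqrt(rowSums(sweep(V, 2, apply(V, 2, min))^2))
  d_worst / (d_best + d_worst)  # closeness coefficient: higher = better candidate
}

# Three hypothetical candidates; the highest score would be preferred as MPR.
X <- rbind(c(12.0, 0.9, 0.2, 0.3, 0.4),
           c( 8.0, 0.7, 0.5, 0.6, 0.7),
           c(10.0, 0.8, 0.3, 0.2, 0.5))
rank_mpr_candidates(X)
```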
Routing is a multifaceted challenge, in which network communication quality and overall performance are shaped by link states that are themselves influenced by various factors. Concurrently, drones have the potential to function as mobile base stations or relay nodes, facilitating the establishment and extension of ground-based 5G networks. Consequently, by considering task-specific needs and the distinctive characteristics of drone ad hoc networks, optimizing the integration of drone networks with 5G technology can enhance data transmission efficiency, elevate packet delivery rates, minimize transmission delays, and boost network throughput. This trajectory is poised to become a pivotal research avenue for UANETs in the foreseeable future.
Figure 1. The comparison of data transmission links under the flooding mechanism and the MPR mechanism. (a) Diagram of the data transmission link under the flooding mechanism; (b) diagram of the data transmission link under the MPR mechanism.
Figure 2. The traditional process of selecting the MPR set for node S.
Figure 3. The process of selecting the MPR set for node 0.
Figure 4. Schematic diagram of the relative motion between drone node i and node j.
Figure 6. The structure of the hello message packet. (a) The structure of the original hello message packet; (b) the structure of the improved and optimized hello message packet.
Figure 7. The comparison of packet delivery rate. (a) Trends in packet delivery rate for the various protocols with changing speeds; (b) growth rate of packet delivery rate for the various protocols compared to OLSR at different speeds.
Figure 8. The comparison of average end-to-end delay. (a) Trends in average end-to-end delay for the various protocols with changing speeds; (b) decrease in average end-to-end delay for the various protocols compared to OLSR at different speeds.
Figure 8b illustrates the reduction in average end-to-end delay achieved by CEL-OLSR, PM-OLSR, ML-OLSR, and LD-OLSR in comparison to OLSR. Detailed results of average end-to-end delay for each protocol at different speed stages are presented in Table 3. It is evident that transmission delay increases with rising speed, and that CEL-OLSR exhibits a relatively gradual trend of delay variation.
Table 1. The primary simulation parameter settings.
Table 2. Results of packet delivery rate (PDR) for the comparison protocols at different speed stages.
Table 3. Results of average end-to-end delay (AEED) for the comparison protocols at different speed stages.
Table 4. Results of throughput for the comparison protocols at different speed stages.
Table 5. Results of control overhead rate for the comparison protocols at different speed stages.
"Computer Science",
"Engineering"
] |
The microRNA let-7b-5p Is Negatively Associated with Inflammation and Disease Severity in Multiple Sclerosis
The identification of microRNAs in biological fluids for diagnosis and prognosis is receiving great attention in the field of multiple sclerosis (MS) research, but it is still in its infancy. In the present study, we observed in a large sample of patients with MS that let-7b-5p levels in the cerebrospinal fluid (CSF) were highly correlated with a number of microRNAs implicated in MS, as well as with a variety of inflammation-related protein factors, showing specific expression patterns coherent with let-7b-5p-mediated regulation. Additionally, we found that the CSF let-7b-5p levels were significantly reduced in patients with progressive MS compared to patients with relapsing-remitting MS and were negatively correlated with characteristic hallmark processes of the two phases of the disease. Indeed, in the non-progressive phase, let-7b-5p was inversely associated with both central and peripheral inflammation, whereas, in progressive MS, the CSF levels of let-7b-5p negatively correlated with clinical disability at disease onset and after a follow-up period. Overall, our results uncovered, by means of a multidisciplinary approach and multiple statistical analyses, a new possible pleiotropic action of let-7b-5p in MS, with potential utility as a biomarker of MS course.
Introduction
Multiple sclerosis (MS) is a chronic inflammatory, demyelinating and neurodegenerative disease of the central nervous system (CNS), characterized by a highly variable relapse rate and a progressive increase of clinical disability. Although its etiology remains elusive, it is well known that MS is a multifactorial disease caused by a complex gene-environment interaction. The pathological hallmark of MS is a progressive blood-brain barrier (BBB) disruption that promotes infiltration of peripheral immune cells in the CNS, leading to an autoimmune response against myelin antigens [1]. Indeed, high levels of T cells and related cytokines and chemokines have been found both in the CNS lesions and in the cerebrospinal fluid (CSF) of patients with MS, thus contributing to gliosis, inflammation, demyelination, synaptopathy, and finally neuroaxonal degeneration [2]. The inflammatory events are typical in the relapsing-remitting (RRMS) phase of the disease, during which there is a full or partial recovery of clinical symptoms until reaching a phase of irreversible progressive worsening of the disease (i.e., secondary progressive MS, SPMS). However, a small number of patients with MS enter directly into the progressive phase after clinical onset (i.e., primary progressive MS, PPMS) due to irreversible accumulation of neurological disabilities as a result of axonal injury and neuronal loss [3].
The identification of biomarkers, as measurable indicators of pathogenic processes and as tools to discern clinical MS phenotypes, has recently gained great attention but it is still a critical issue under investigation. Among the biological fluids, the CSF is the main source of biomarkers representing a valid means through which it is possible to predict the disease course and the individual response to treatment [4,5].
Recently, small non-coding RNAs (miRNAs) have emerged as important modulators of gene expression and have also been found in the CSF [5,6]. These molecules are encoded by a class of genes conserved across animals, and their mature products are single-stranded RNAs, approximately 22 nucleotides in length, that post-transcriptionally repress the translation of target mRNAs through imperfect base pairing [7]. MiRNAs are able to directly regulate multiple targets, and a single mRNA can be targeted by many miRNAs [8], thus controlling, through a pleiotropic action, different cellular processes and mechanisms involved in development, homeostasis, and disease [9,10].
In the last decade, neuroinflammation has been shown to be one of the major processes regulated by miRNAs. Thus, a better understanding of miRNA dysregulation has enormous potential for developing novel therapeutic targets for personalized treatment and for rapidly expanding the field of MS biomarker research [11].
It has been recently suggested that the let-7 family of miRNAs, known as crucial regulators of developmental processes and cancer, may modulate the inflammatory response within the CNS in various neurodegenerative diseases [12][13][14][15][16][17]. Moreover, an emerging role for the let-7 family in MS pathophysiology has just started to be dissected, despite the very few clinical studies available [13,18,19].
In the present article, we aimed to provide new insights into the involvement of the let-7 family in MS. We explored the levels of the most representative members of the family (let-7b-5p, let-7e-5p, let-7f-5p) in the CSF of a large cohort of patients with MS and found that let-7b-5p was highly correlated with inflammatory processes linked to the disease, disease stage, and disability.
Let-7 Target mRNA Analysis and Gene Ontology Enrichment Analysis
A list of target mRNAs of the let-7 family was downloaded from the MIENTURNET webtool [20], using only experimentally validated targets with strong evidence in humans (reporter assays, qPCR, and western blot analysis) from miRTarBase (http://mirtarbase.cuhk.edu.cn/php/index.php) [21]. Then, functional enrichment analysis of the let-7 family targets was performed using the Bioconductor R package clusterProfiler v3.14.3 [22] with annotation from the Gene Ontology Database, Biological Process categories [23].
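A minimal sketch of this enrichment step is shown below, assuming the validated target list has been exported from MIENTURNET as a plain list of human gene symbols; the file name is a placeholder, not part of the original analysis.

```r
# Sketch of the GO enrichment step on a let-7 target list (Biological Process).
library(clusterProfiler)
library(org.Hs.eg.db)

targets <- readLines("let7_targets.txt")   # hypothetical exported symbol list

ego <- enrichGO(gene          = targets,
                OrgDb         = org.Hs.eg.db,
                keyType       = "SYMBOL",
                ont           = "BP",        # Biological Process categories
                pAdjustMethod = "BH")

head(as.data.frame(ego))   # enriched Biological Process terms
dotplot(ego)               # dot size = gene count, color = adjusted p-value
```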
Clinical Study Design
This observational prospective study included 166 patients with MS (Clinically Isolated Syndrome/Radiologically Isolated Syndrome, CIS/RIS, n = 25; Relapsing-Remitting multiple sclerosis, RRMS, n = 117; Progressive multiple sclerosis, PMS, n = 24) as the main cohort. Twenty age- and sex-matched control subjects were recruited as in [28]. For details on demographic characteristics, see Table 1 (all patients). All patients with MS were treatment-naïve, and CSF withdrawal was performed at least 3 months after the last corticosteroid therapy. After patient admittance to the neurological department of IRCCS Neuromed Hospital (T0), all subjects underwent neurological assessment, a conventional brain MRI scan, and CSF withdrawal, performed in sequence and within 24 h, according to Italian standard clinical practice. A subgroup of patients with non-progressive MS (non-PMS: CIS/RIS and RRMS) also underwent cognitive evaluation.
Patients with MS
Patients with MS were included in the study according to the following eligibility criteria: (i) diagnosis of multiple sclerosis according to the 2010 McDonald criteria [29]; (ii) EDSS score ≤7 at T0; (iii) age ≥18 and ≤65 years (inclusive); (iv) no immunomodulatory or immunosuppressive treatment before the CSF withdrawal; (v) ability to provide written informed consent. Exclusion criteria were: (i) EDSS score >7 at T0; (ii) age <18 or >65 years; (iii) comorbidity with neurological diseases other than MS (e.g., Parkinson's disease, Alzheimer's disease, stroke); (iv) history or presence of any unstable medical condition, such as malignancy or infection, that might confound the results of the study; (v) pregnancy or lactation; (vi) inability to provide written informed consent.
Clinical Parameters
For each patient, the following demographic and clinical variables were considered and analyzed: sex (F/M); age (in years); disease duration, estimated as the number of months from onset to the most recent assessment of disability; clinical disability, assessed by the Expanded Disability Status Scale (EDSS); and the Progression Index (PI = EDSS/disease duration in months). Disease activity included clinical and/or radiological activity, evaluated by MRI scans. Conventional MRI scans (1.5 Tesla) were performed according to Italian standard clinical practice, and radiological activity was assessed according to [29]. Peripheral blood samples were collected from patients with MS by standard venipuncture in EDTA collection tubes (Vacutainer®, Becton Dickinson, Milan, Italy), and lymphocyte counts were performed as described in Stampanoni Bassi et al., 2020 [30]. Two verbal fluency tests were performed to assess cognitive function [31] in a subgroup of patients with non-PMS without any signs of dementia, as evaluated by the Mini Mental State Examination (MMSE). Specifically, for the semantic fluency assessment (categorical memory function), patients with an MMSE score >23.8 [32] were asked to say as many words as possible belonging to the "colors", "animals", and "fruits" categories in three different trials, each lasting 60 s. To evaluate phonemic fluency (executive function), patients were asked to generate as many words as possible beginning with the letters "A", "F", and "S" in three different trials, each lasting 60 s. In both tasks, the greater the number of pronounced words, the better the patient's cognitive performance. The results were corrected for gender, age, and education according to [31].
Statistical Analysis
Statistical analysis was performed using R software v3.6.3 (R Core Team 2020, https:// www.R-project.org/) and Prism GraphPad 6.0. Data distribution was tested for normality with the Kolmogorov-Smirnov and Shapiro-Wilk tests.
Hierarchical clustering was used to divide miRNAs and biochemical parameters into groups of homogeneous entities [45]. In particular, for miRNA clustering, Pearson's correlation coefficients of the miRNA expression values were calculated and plotted in R using the corrplot package [44]. The correlation values were ordered according to the hierarchical clustering, and the agglomeration method used was "ward.D2". MiRNA network analysis was performed and plotted with the igraph package [46] using Pearson's coefficient values >0.5. For the CSF biochemical parameters, clustering analysis was performed by means of hierarchical agglomerative clusters (complete linkage). The distance matrix for the hierarchical clustering was based on the Spearman correlation between variables. Next, we used the silhouette method to select the number of clusters (two clusters, see below) [47]. Finally, a principal component analysis was performed separately for each of the two clusters, and the first component (PC1) was used to evaluate the overall effect of each cluster on let-7b-5p, similarly to [48]. Two patients whose scores along PC1 of cluster 2 were greater than three standard deviations above the mean were removed from the analysis. The positive values on PC1 always corresponded to higher values in all parameters. Spearman correlations between let-7b-5p and single biochemical, demographic, and clinical variables were also computed. Additionally, linear regression was used to study the relationship between let-7b-5p and the following predictor variables: the PC1 of each of the two clusters, age, gender, and EDSS of MS cases and control subjects. Since the distribution of let-7b-5p was skewed, the log-transform of this variable was used for the regression analysis.
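The sketch below condenses these steps into runnable R code. The data frames and vectors (mirna, prot, let7b, age, gender, EDSS) are simulated placeholders standing in for the study's measurements; only the package calls (corrplot, igraph, cluster) and the stated settings (ward.D2 ordering, r > 0.5 edges, complete linkage on a Spearman-based distance, silhouette selection, PC1 regression on log-transformed let-7b-5p) are taken from the text.

```r
library(corrplot); library(igraph); library(cluster)

set.seed(1)  # simulated placeholder data: 150 samples
mirna  <- as.data.frame(matrix(rnorm(150 * 8),  150, 8))   # 8 miRNAs
prot   <- as.data.frame(matrix(rnorm(150 * 27), 150, 27))  # 27 protein factors
let7b  <- rlnorm(150); age <- runif(150, 18, 65)
gender <- factor(sample(c("F", "M"), 150, TRUE)); EDSS <- runif(150, 0, 7)

# miRNA correlations ordered by hierarchical clustering (ward.D2 agglomeration).
M <- cor(mirna, method = "pearson")
corrplot(M, order = "hclust", hclust.method = "ward.D2")

# miRNA network keeping only edges with Pearson r > 0.5.
A <- M; A[A <= 0.5] <- 0; diag(A) <- 0
g <- graph_from_adjacency_matrix(A, mode = "undirected", weighted = TRUE)
plot(g)

# Complete-linkage clustering of proteins on a Spearman-based distance,
# with the silhouette method used to choose the number of clusters.
d   <- as.dist(1 - cor(prot, method = "spearman"))
hc  <- hclust(d, method = "complete")
sil <- sapply(2:6, function(k) mean(silhouette(cutree(hc, k), d)[, "sil_width"]))
k   <- which.max(sil) + 1   # two clusters in the study

# PC1 of each protein cluster, then regression on log-transformed let-7b-5p.
cl  <- cutree(hc, k)
pc1 <- lapply(seq_len(k), function(i) prcomp(scale(prot[, cl == i]))$x[, 1])
summary(lm(log(let7b) ~ pc1[[1]] + pc1[[2]] + age + gender + EDSS))
```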
Differences between two groups were analyzed using Student's t-test, the Mann-Whitney test, or Fisher's exact test, as appropriate. Multiple comparisons were performed by the Kruskal-Wallis test followed by Dunn's multiple comparison test as a post hoc test. p-values were corrected for multiple comparisons with the Benjamini-Hochberg method [49]. An FDR or a p-value < 0.05 was considered statistically significant.
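As a minimal illustration of these comparisons, with simulated placeholder data rather than the study's measurements:

```r
set.seed(1)
let7b   <- rlnorm(60)                                    # placeholder CSF levels
subtype <- factor(rep(c("CIS/RIS", "RRMS", "PMS"), each = 20))

wilcox.test(let7b[subtype == "RRMS"], let7b[subtype == "PMS"])  # Mann-Whitney
kruskal.test(let7b ~ subtype)                                   # three subtypes
# Dunn's post hoc comparisons could then be run with, e.g., the FSA package:
# FSA::dunnTest(let7b ~ subtype, method = "bh")

# Benjamini-Hochberg adjustment of a family of p-values:
p.adjust(c(0.004, 0.020, 0.031, 0.180), method = "BH")
```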
The Let-7 Family Regulates Crucial Processes Involved in MS Pathophysiology
Both the sequences of the miRNAs grouped in the let-7 family and their genomic organization are highly conserved among vertebrates [14]. To date, thirteen members of the family have been identified, with specific chromosome locations, as annotated in miRBase, the primary miRNA repository (http://www.mirbase.org/; [50]). Some of them are clustered together and are present in multiple copies in the human genome, like let-7f-1 and let-7f-2 (Figure 1A). Since all members of the family share the same seed sequence (nucleotides 2-8; Figure 1A'), they are able to regulate overlapping target mRNAs.
Figure 1B,B'. Dot sizes correspond to the number of genes belonging to a Gene Ontology category, and dots are colored according to the Benjamini-Hochberg false discovery rate adjusted p-values, from blue (higher adjusted p) to red (lower adjusted p). (B') Network of let-7 targets that can be ascribed to three main processes involved in MS pathophysiology: inflammation (light blue rectangle); neuronal homeostasis (green rectangle); RNA metabolism (orange rectangle). Target mRNAs of let-7 involved in more than one process are represented in the rectangle overlapping zones. Targets participating in other pathways are grouped into a light violet rectangle (26 out of 130).
The encouraging results of our bioinformatic analysis based on the validated interactions between let-7 and target mRNAs prompted us to further investigate let-7 family in the context of MS.
Let-7b-5p Is a Possible Regulatory Hub of the Pattern of MS-Related miRNAs Circulating in the CSF
Considering that GO analysis revealed 18 experimental validated target mRNAs of let-7 family involved in miRNA metabolism, we evaluated the possible crosstalk between let-7 miRNAs and MS-linked miRNAs circulating in the CSF.
Let-7b-5p Is a Putative Anti-Inflammatory Regulator of the Complex Pathway of Soluble Factors Circulating in the MS CSF
Since most targets regulated by the let-7 family were involved in the inflammatory response (Figure 1B,B'), we assessed the correlations between let-7b-5p levels in the CSF and possible protein players in MS inflammation, namely cytokines, chemokines, and growth factors.
Initially, we evaluated by multiplex assays the CSF levels of 27 inflammation-related factors in an extended cohort of patients with MS (n = 273, see Supplementary Table S1), which included the main cohort of patients in whom we had detected let-7b-5p levels. Thus, we were able to perform a robust hierarchical clustering analysis, using the silhouette method to select the appropriate number of clusters to consider (Figure 3A). The clustering analysis identified two different patterns of protein factors (Cluster 1: 7/27 inflammatory proteins; Cluster 2: 20/27 inflammatory proteins), as shown in the dendrogram (Figure 3A'). Similar results were obtained running the hierarchical clustering algorithm on the extended cohort values (data not shown). We then computed the correlation between the CSF levels of each inflammatory protein and let-7b-5p. Importantly, as reported in Table 2 and Figure 3A', we observed that let-7b-5p positively correlated with all members of Cluster 1 and negatively correlated with most inflammation-related factors belonging to Cluster 2, the latter also including experimentally validated targets or pathways of the let-7 family, like IL6 [65][66][67][68], IL10 [15,69], and the IL17 pathway [12].
The miR Let-7b-5p Is Reduced in the CSF of Patients with Progressive MS and Is Associated with Different Processes According to the Phase of the Disease
To further investigate the involvement of let-7b-5p in MS pathology, we compared the miRNA levels in the CSF between control subjects and the main cohort of patients with MS. We observed highly variable expression among the patients with respect to control subjects (Ctr: n = 20; MS: n = 166; Mann-Whitney test, p > 0.05) (Figure 4A). Therefore, we asked whether the disease phase of the examined patients with MS could highlight more remarkable differences in the expression levels of let-7b-5p. To this aim, we stratified patients into three groups based on disease subtype, CIS/RIS (n = 25), RRMS (n = 117), and PMS (n = 24) (see Table 1), and we observed that the level of let-7b-5p was significantly reduced in PMS patients in comparison to RRMS (Kruskal-Wallis test, p < 0.05) (Figure 4A'). Since the variability and the median values of the CSF let-7b-5p levels were similar between the patients with CIS/RIS and RRMS, they were grouped together in the following statistical correlations with different aspects of the disease and are referred to as patients with non-progressive MS (non-PMS). Central and peripheral inflammation was the first aspect of the disease that we examined in association with CSF let-7b-5p in the stratified patients, considering our previous results (Table 2 and Figure 3) obtained on all patients of the main cohort, as well as the recent evidence on the crucial contribution of cytokines and growth factors released from infiltrating autoreactive T cells to neuronal damage in MS [70,71]. As for the main cohort, we correlated the CSF let-7b-5p levels with the CSF amounts of inflammation-related protein factors in the subgroups of patients. In particular, we noticed that all the correlations observed in the main cohort were maintained in the non-PMS subgroup (CIS/RIS/RRMS; Table 3). Then, we evaluated peripheral inflammation by counting the number of lymphocytes in the blood at T0. Interestingly, we found that the count of peripheral lymphocytes negatively correlated with let-7b-5p levels in the CSF of patients with non-PMS (Spearman's correlation: r_s = −0.216, p < 0.05; Figure 5A), in accordance with the inverse correlation with central inflammation. To explore the clinical implications of this observation, we assessed the possible link between the demographic or clinical parameters at both T0 and Tf1 (age, sex, disease duration, EDSS, and PI) and let-7b-5p levels in non-PMS CSF. No significant associations were observed (data not shown). On the contrary, in a subset of patients, let-7b-5p levels were directly correlated with cognitive performance related to categorical memory functions (semantic verbal fluency; n = 106, Spearman's correlation: r_s = 0.294, p < 0.01, Figure 5B) and executive functions (phonemic verbal fluency; n = 95, Spearman's correlation: r_s = 0.218, p < 0.05, Figure 5B'), suggesting a protective role for the miRNA in the neuronal compartment linked to an anti-inflammatory action.
Not surprisingly, no correlations were found between inflammatory parameters and let-7b-5p levels in patients with PMS (peripheral lymphocyte count, Spearman's correlation: r_s = 0.092, n.s.; Figure 6A), with the exception of IL5, RANTES, and G_CSF (Table 4). In line with the neurodegenerative character of this phase of the disease, we found that let-7b-5p negatively correlated with clinical disability, in terms of EDSS, at both onset (T0; Spearman's correlation: r_s = −0.463, p < 0.05) and after a follow-up period (Tf1; Spearman's correlation: r_s = −0.536, p < 0.05) in PMS patients (Figure 6B,B'), while no significant correlations were observed for the other clinical parameters (data not shown). Moreover, no changes in the CSF levels of let-7e-5p and let-7f-5p were found in the main cohort, considering both all patients and patients stratified by MS subtype (Supplementary Figure S2). Similarly, let-7e-5p (Supplementary Figure S3) and let-7f-5p (Supplementary Figure S4) were not correlated with any inflammatory or clinical parameter in either phase of the disease, highlighting the specificity of the results obtained for let-7b-5p.
Finally, to study the relationship between let-7b-5p and multiple parameters relevant to MS course, we performed linear regression analyses in both the non-PMS and PMS conditions. As predictor variables, we considered age, gender, EDSS, and the CNS inflammatory milieu, the latter evaluated by performing a principal component analysis and saving the first component (PC1) of each of the two clusters of inflammatory mediators described before (Figure 3). In the non-PMS group, we found a positive association between let-7b-5p and cluster 1 and a negative association with cluster 2, both statistically significant (Table 5). We repeated the same analysis for the PMS group (Table 5) and for the control subjects (data not shown). However, we did not find a significant association between either cluster of inflammatory factors and let-7b-5p in either group. Conversely, both age (estimate = 0.048, p < 0.05) and EDSS (estimate = −0.386, p < 0.01) were significantly associated with let-7b-5p in the PMS group (Table 5), similar to what was observed in the single correlation analyses. Overall, these data suggest that let-7b-5p, likely derived from diverse cellular sources, can participate in the different processes running in the CNS of patients with MS depending on the phase of the disease.
Discussion
In the last few years, circulating miRNAs have been proposed as potential diagnostic and prognostic biomarkers, or even as therapeutic targets, for various diseases, including CNS disorders like MS [6,17,72,73]. The let-7 family regulates many target mRNAs participating in processes crucial for MS pathophysiology (neuronal homeostasis [17,24,25], inflammation [1,17], and miRNA metabolism [5,26,27], as in Figure 1B,B'). Notwithstanding, the impact of the let-7 family on MS has been scarcely investigated, especially in humans.
In this context, we explored three representative members of the let-7 family (let-7b-5p, let-7e-5p, let-7f-5p) in terms of CSF abundance and correlation with 21 other MS-related miRNAs, as well as their potential implications in MS disease. Although all let-7 miRNAs can in principle control miRNA biogenesis and functioning, because they share the same repertoire of target mRNAs with roles in miRNA metabolism, we specifically identified let-7b-5p as a possible hub of a network of seven miRNAs highly linked to MS [54][55][56][57][58][59][60][61][62][63][74]. Neither let-7e-5p nor let-7f-5p showed such strong correlations with the other miRNAs detected in the CSF, suggesting that the timing and cellular sources are as important as the target mRNA subset in MS regulation. In particular, let-7b-5p directly correlated with protective miRNAs, like miR-451a, miR-219-3p, and miR-223-3p. MiR-451a is known to inhibit the nuclear factor-kappa B (NF-κB)-mediated proinflammatory response [74] and microglial activation by repressing, together with let-7b-5p [75], the toll-like receptor 4 (TLR4) [60]. MiR-219-3p is necessary for myelination, and its absence in the CSF correlates with MS diagnosis [36]. MiR-223-3p can exert a neuroprotective action [55,61], although preclinical studies demonstrated that miR-223 knockout mice develop a less severe experimental MS [56]. Also, miR-92a-3p, which has an anti-excitotoxic role in neurons [63] but pro-inflammatory effects in the immune system [57], was in the network with let-7b-5p, together with miR-34a-5p, showing an opposite role according to the cellular context of expression [76,77]. The last two miRNAs in the cluster with let-7b-5p were miR-16-5p and miR-24-3p, both upregulated in the peripheral and/or central compartments of patients with MS [58,78] and associated with disability accumulation [58,59,78].
Both the let-7 target analysis and the miRNA correlation network in MS CSF highlighted let-7b-5p as a "meta-miRNA" able to regulate different MS-linked miRNAs in different cellular contexts, consistent with previous studies. Indeed, let-7b-5p has been observed in peripheral blood [18] or its derivatives [19,79], as well as in CNS cells [16,80], confirming its multiple functions.
Furthermore, we evaluated an additional regulatory aspect of let-7b-5p by analyzing its possible interaction with twenty-seven MS-related protein factors circulating in the CSF of patients with MS. The inflammatory milieu associated with MS showed a double pattern of opposite correlations with the CSF levels of let-7b-5p. We speculate that the soluble mediators positively correlating with let-7b-5p (cluster 1, Figure 3A') could be involved in the induction of the miRNA and/or could act synergistically in the same pathways. On the contrary, several direct and indirect experimentally validated target mRNAs of the let-7 family negatively correlated with let-7b-5p levels (cluster 2: IL6 [65][66][67][68]; IL10 [15,69]; IL17 pathway [12]). Considering the complex system of multiple feedback loops regulating CNS homeostasis, these correlations suggest that let-7b-5p might be considered a pleiotropic modulator of CSF molecules with possible protective implications for MS course, although it cannot be unequivocally regarded as an absolute anti-inflammatory factor.
The putative protective role of let-7b-5p was further supported by our observation of lower levels of circulating let-7b-5p in the CSF of patients with PMS compared to RRMS patients. In the relapsing-remitting phase of the disease, we hypothesize that let-7b-5p might be triggered by inflammatory insults, through activation of the IFNγ and IP10 pathway as well as IL8, G_CSF, and RANTES signals, in an attempt to counteract the proinflammatory action of soluble mediators such as IL2, IL6, IL12 (p70), IL17, GM_CSF, and MIP1b [81,82]. Consistent with this speculation, the CSF levels of let-7b-5p were inversely correlated with peripheral inflammation, measured by the blood lymphocyte count, and directly correlated with better cognitive performance.
In the progressive phase of MS, inflammation is less evident and neurodegenerative events are more prominent [3]. Indeed, the CSF let-7b-5p levels were reduced, and only a few direct correlations with inflammation-related proteins, such as IL5, RANTES, and G_CSF, were observed, possibly sustaining a residual expression of the miRNA. Furthermore, let-7b-5p levels were negatively correlated with the severity of the disease, assessed by EDSS evaluation both at the time of CSF withdrawal and after a 1-year follow-up, revealing a potential neuroprotective action of the miRNA in this context.
Our observations were also confirmed by a multivariable approach, which underlined a possible anti-inflammatory and neuroprotective action of let-7b-5p specifically in the MS condition, since no associations were found in the control subjects' group.
Let-7e-5p and let-7f-5p were not associated with any of the considered aspects in either the non-PMS or the PMS condition. A contribution of other members of the let-7 family, such as let-7g and let-7i, recently found to be involved in MS [12,13], cannot be excluded, although their expression seems to be limited to peripheral cells and their levels in the CSF are generally lower than those of let-7b-5p [37,38]. However, further experiments are needed to elucidate this aspect.
Altogether, our investigations suggest let-7b-5p as a protective factor for MS course, in terms of both inflammation and clinical disability. The combination of our proposed bioinformatics strategy with miRNA-mRNA regulatory network building and an integrated biochemical approach may help to better understand the mechanisms underlying MS. Considering that let-7b-5p levels have recently been associated with a good response to IFNβ treatment [19], it is reasonable to consider let-7b-5p a potential biomarker. The next stage is to validate our findings using larger cohorts of patients and datasets, as well as a longer follow-up period, in order to elucidate the mechanisms underlying the role of let-7b-5p in MS inflammation and neurodegeneration.
Supplementary Materials: The following are available online at https://www.mdpi.com/2073-4409/10/2/330/s1, Figure S1: Quantification cycle (Cq) of miRNAs detected in the CSF of patients with MS. Figure S2: The let-7e-5p and let-7f-5p levels in MS disease subtypes. Figure S3: Correlations between let-7e-5p levels, peripheral inflammation, and EDSS of patients with non-progressive and progressive MS. Figure S4: Correlations between let-7f-5p levels, peripheral inflammation, and EDSS of patients with non-progressive and progressive MS. Table S1: Demographic and clinical characteristics of patients with MS included in the extended cohort.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
"Medicine",
"Biology"
] |
The “Leafing Intensity Premium” Hypothesis and the Scaling Relationships of the Functional Traits of Bamboo Species
The “leafing intensity premium” hypothesis proposes that leaf size results from natural selection acting on different leafing intensities, i.e., the number of leaves per unit shoot volume or mass. The scaling relationships among various above-ground functional traits in the context of this hypothesis are important for understanding plant growth and ecology. Yet, they have not been sufficiently studied. In this study, we selected four bamboo species of the genus Indocalamus Nakai and measured the total leaf fresh mass per culm, total non-leaf above-ground fresh mass, total number of leaves per culm, and above-ground culm height of 90 culms from each species. These data were used to calculate leafing intensity (i.e., the total number of leaves per culm divided by the total non-leaf above-ground fresh mass) and mean leaf fresh mass per culm (i.e., the total leaf fresh mass per culm divided by the total number of leaves per culm). Reduced major axis regression protocols were then used to determine the scaling relationships among the various above-ground functional traits and leafing intensity. Among the four species, three exhibited an isometric (one-to-one) relationship between the total leaf fresh mass per culm and the total non-leaf above-ground fresh mass, whereas one species (Indocalamus pumilus) exhibited an allometric (not one-to-one) relationship. A negative isometric relationship was found between the mean leaf fresh mass per culm and the leafing intensity for one species (Indocalamus pedalis), whereas three negative allometric relationships between mean leaf fresh mass per culm and leafing intensity were observed for the other three species and the pooled data. An exploration of the alternative definitions of “leafing intensity” showed that the total number of leaves per culm divided by the above-ground culm height is superior because it facilitates the non-destructive calculation of leafing intensity for Indocalamus species. These results not only confirm the leafing intensity premium hypothesis for bamboo species but also highlight the interconnected scaling relationships among different functional traits, thereby contributing to our understanding of the ecological and evolutionary significance of leaf size variation and biomass investment strategies.
Introduction
Leaves, as the primary photosynthetic organs in most vascular plants, play an indispensable role in plant growth and development [1,2], and the size of leaves has a marked effect on various biological processes, such as reproduction, survival, and ecosystem function [3][4][5]. Thus, the natural variation in leaf size and its ecological and evolutionary significance have long attracted the attention of researchers in a variety of disciplines [6].
Yet, the surface area of the leaf lamina varies over six orders of magnitude across terrestrial plants [6,7]. Previous explanations of this inter-specific leaf size variation include adaptations to herbivory [8,9] and physiological optimization strategies under different environmental conditions affecting photosynthesis, gas exchange, energy flux, and/or water use efficiency [10][11][12][13]. For example, species surviving in shaded or well-watered environments typically have large laminae that can maximize light interception and carbon acquisition capabilities, while simultaneously reducing the negative effects of shading by the upper canopy [10]. Importantly, an ancillary non-mutually exclusive hypothesis for the variation in leaf size posits a trade-off between leaf size and leaf number per unit shoot size, referred to as the "leafing intensity premium" hypothesis [14][15][16][17].
This premise proposes that leaf size variation results from selection acting on different leafing intensities (and thus self-shading), defined as the number of leaves per unit shoot (or stem) volume or mass. According to this hypothesis, species with high leafing intensity (and thus high self-shading) tend to have comparatively small leaves, whereas species with low leafing intensity (and thus low self-shading) tend to have large leaves. Previous studies have demonstrated that leaf size and leafing intensity exhibit a negative scaling relationship across various habitats [15,18], forest successional stages [19], and diverse canopy light environments [20,21]. Additionally, the trade-off between leaf size and mass-based leafing intensity depends on the biomass investment (leaves vs. shoots or stems) [20]. Biomass investment is considered a critical functional trait because it reflects the ability of an organism to use and optimize the allocation of material resources [22] to cope with different environments [23].
Scaling theory, arising from biomass partitioning theory, has identified a number of trade-offs in the allocation of resources among various physiological and ecological functional traits and has provided considerable insights into biomass allocation patterns [24,25]. In the context of the leafing intensity premium hypothesis, it has also shed light on numerous functional traits, such as lamina mass vs. petiole mass, leaf mass vs. lamina area, perianth mass vs. perianth area, and tree height vs. diameter at breast height [26][27][28][29][30]. For example, the scaling exponent for the leaf mass vs. leaf area scaling relationship typically exceeds unity, indicating that leaf area fails to increase proportionally with increasing leaf mass, a phenomenon called "diminishing returns" [7,27].
Here, we use scaling theory to explore the leafing intensity premium hypothesis in an important monocot genus, Indocalamus Nakai, a common bamboo growing in the rural areas of southern China that is known for its cold resistance [31]. Indocalamus was also selected for the study because of its significant ecological value, its broad ecological range, and the considerable differences in the number of leaves per culm manifested by species within the genus. For example, I. longiauritus often dominates forest understories, providing habitats for numerous species of birds, lizards, and insects [32]. Despite its ecological significance and morphological diversity, there are currently few studies on the scaling relationships among the above-ground functional traits of Indocalamus species and their leafing intensity. To bridge this gap, we selected four Indocalamus species and measured the total leaf fresh mass per culm (TLM), total non-leaf above-ground fresh mass (TNLM), above-ground culm height (H), and the total number of leaves per culm (N). These data were then used to determine the scaling relationships between TLM and TNLM, as well as the scaling relationship between the mean leaf fresh mass per culm (MLM), defined as TLM/N, and the leafing intensity, defined as N/TNLM.
An important consideration in this study was how to define "leafing intensity", because alternative definitions have been advanced. Traditionally, and most often, leafing intensity is defined as the total number of leaves per shoot divided by the total non-leaf above-ground volume or mass of the shoot [14,20,33]. Given that shoot volume and mass are usually positively correlated with plant height (H), we tested whether leafing intensity could also be defined as the total number of leaves per shoot divided by the above-ground shoot height. If this alternative definition can effectively quantify leafing intensity, it would facilitate the non-destructive calculation of leafing intensity for many plants. To this end, we compared the performance of leafing intensity quantified as N/H with the traditional metric (i.e., N/TNLM).
Sampling Site and Data Acquisition
In early July 2014, along the Verdant Bamboo Road at Nanjing Forestry University (32.08° N, 118.82° E), we collected four species of the genus Indocalamus Nakai: Indocalamus barbatus McClure, Indocalamus pedalis (Keng) P. C. Keng, Indocalamus pumilus Q. H. Dai and C. F. Keng, and Indocalamus victorialis P. C. Keng. Ninety culms were sampled for each species. The collection of specimens took advantage of the uniformity of the environmental conditions at Nanjing Forestry University and in the general region of Nanjing, which belongs to the subtropical zone. Based on climate data from 2014 (source: China Meteorological Administration [available online: https://www.cma.gov.cn/en/ (accessed on 12 August 2024)]), the mean inter-annual precipitation was 1091.1 mm, the mean annual temperature was 16.4 °C, the mean annual humidity was 74%, and the mean annual sunshine duration was 1863.8 h. The soil type is predominantly mountain yellow-brown and grey-brown soils, which are acidic to slightly acidic [34]. All plants were sampled from the same location. We measured the above-ground culm height (H) and, using an electronic scale with a precision of 0.01 g (JM-A3002; Chaozeheng Equipment Company Limited, Zhuji, Zhejiang, China), the total leaf fresh mass per culm (TLM) and total non-leaf fresh mass per culm (TNLM). The total number of leaves per culm (N) was also recorded, from which the leafing intensity (i.e., N/TNLM) and the mean leaf fresh mass per culm (i.e., MLM = TLM/N) were calculated. The morphological and agronomic characteristics of the four species are shown in Table 1. Here, leaf length was estimated for each species without differentiating among individual plants of the same species, while the other characteristics were measured on individual plants.
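The derived quantities follow directly from the four measurements; a minimal sketch is given below (the column names, units, and values are illustrative, not the study's data).

```r
# Sketch of the per-culm derived quantities from the four measured traits.
culms <- data.frame(
  TLM  = c(12.3, 15.1, 9.8),   # total leaf fresh mass per culm (g, assumed)
  TNLM = c(20.5, 24.0, 16.2),  # total non-leaf above-ground fresh mass (g)
  H    = c(85, 102, 70),       # above-ground culm height (cm, assumed)
  N    = c(14, 17, 11)         # total number of leaves per culm
)
culms$MLM  <- culms$TLM / culms$N    # mean leaf fresh mass per culm
culms$LI_m <- culms$N / culms$TNLM   # leafing intensity, mass-based definition
culms$LI_h <- culms$N / culms$H      # alternative height-based definition (N/H)
```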
Data Analysis
The data for any two interdependent biological measures were analyzed using power law functions taking the form of

Y_1 = β Y_2^α

where Y_1 and Y_2 are any two interdependent variables (e.g., plant height and mass), β is the normalization constant, and α is the scaling exponent of the Y_1 vs. Y_2 relationship [26].
To stabilize the variance, the raw data were log-transformed to yield linear functions taking the form of

y = γ + α x

where y = ln(Y_1), x = ln(Y_2), and γ = ln(β). The parameters γ and α were determined using reduced major axis (RMA) regression protocols [26,35]. The bootstrap percentile method [36,37], employing 3000 bootstrap replicates, was used to test the significance of the difference between any two estimated scaling exponents of Y_1 vs. Y_2. The difference between two sets of bootstrap slope replicates was judged using the 95% confidence intervals (CIs) of their differences: if the 95% CIs do not include 0, a significant difference exists; otherwise, there is no significant difference between the two scaling exponents [36,37]. All calculations were performed, and all figures constructed, using the software R (version 4.2.0) [38].
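A minimal sketch of the RMA fit and the bootstrap percentile test is given below with simulated data. The RMA slope is computed directly as sign(r) × SD(y)/SD(x) (the smatr package's sma() provides an equivalent fit); nothing here reproduces the study's actual estimates.

```r
# RMA (standardized major axis) slope on log-transformed traits.
rma_slope <- function(x, y) sign(cor(x, y)) * sd(y) / sd(x)

set.seed(1)  # simulated placeholder data for 90 culms
x <- log(runif(90, 5, 40))                 # e.g., ln(TNLM)
y <- 0.1 + 1.0 * x + rnorm(90, sd = 0.1)   # e.g., ln(TLM)

# Bootstrap percentile CI for the scaling exponent (3000 replicates, as above).
boot_slopes <- replicate(3000, {
  i <- sample(seq_along(x), replace = TRUE)
  rma_slope(x[i], y[i])
})
quantile(boot_slopes, c(0.025, 0.975))  # 95% CI; isometry if it includes 1

# Two exponents differ significantly if the 95% CI of the difference of their
# bootstrap replicates excludes zero, e.g.:
# quantile(boot_slopes_A - boot_slopes_B, c(0.025, 0.975))
```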
Results
A statistically significant bivariate scaling relationship between TLM and TNLM was observed for each of the four Indocalamus species (Figure 1). The 95% CIs of the scaling exponents of TLM vs. TNLM, obtained using the bootstrap percentile method, included unity for three of the four species (Figure 1A,B,D), indicating that increases in TNLM keep pace with increases in TLM for these species (i.e., I. barbatus, I. pedalis, and I. victorialis manifested one-to-one scaling relationships). The exception, I. pumilus, had an upper bound of the 95% CIs of the TLM vs. TNLM scaling exponent that was smaller than unity (Figure 1C), indicating that increases in TLM did not keep pace with increases in TNLM, i.e., an allometric scaling relationship was observed for I. pumilus. For each of the four species, the scaling exponent for MLM vs. leafing intensity was negative, indicating that MLM decreases with increasing leafing intensity. With the exception of I. pedalis, the lower bounds of the 95% CIs of the scaling exponents for MLM vs. leafing intensity were greater than negative unity (Figure 2A,C,D), indicating that the decreases in MLM did not keep pace with the increases in leafing intensity for these species. In contrast, the 95% CIs of the scaling exponent of MLM vs. the leafing intensity of I. pedalis included negative unity (Figure 2B), indicating that decreases in MLM did keep pace with increases in leafing intensity (i.e., a one-to-one relationship was observed). For both leafing intensity metrics (i.e., N/TNLM and N/H), the lower bounds of the 95% CIs of the scaling exponents for MLM vs. leafing intensity for the pooled data were greater than negative unity (Figure 3), indicating that decreases in MLM did not keep pace with increases in leafing intensity for the pooled data of the four species. The 95% CIs (i.e., −0.053 to 0.039) of the numerical difference between the scaling exponent of MLM vs. N/TNLM leafing intensity and that of MLM vs. N/H leafing intensity included zero, indicating that there was no significant difference between the two scaling exponents. Therefore, both metrics for leafing intensity yielded the same results.
Discussion
This study focused on four species of the bamboo genus Indocalamus to explore the scaling relationships among important above-ground functional traits, to specifically examine the "leafing intensity premium" hypothesis, and to compare two different definitions of leafing intensity. The following sections discuss the implications of the results in the context of existing scaling theories and the literature, highlighting the ecological and evolutionary significance of leaf size variation and biomass investment strategies in Indocalamus species.
Scaling Relationship between TLM and TNLM
Among the four Indocalamus species, three exhibit isometric (one-to-one) scaling relationships between TLM and TNLM, i.e., the 95% CIs of the scaling exponents of TLM vs. TNLM for these three species all include unity. This is consistent with previous studies, which found that, in species lacking substantial quantities of secondary tissues, leaf mass scales isometrically with respect to stem mass [39,40]. However, in the case of I. pumilus, an allometric relationship exists between TLM and TNLM, i.e., the upper bound of the 95% CIs of the corresponding scaling exponent was smaller than unity. This result indicates that increasing leaf mass requires a disproportionately larger investment in culm mass, reflecting the phenomenon called "diminishing returns" [27,41]. Indeed, many, if not most, studies have confirmed the phenomenon of "diminishing returns" when using the metric of dry leaf mass [27,29]. Arguably, the dry mass metric highlights the importance of carbon allocation, whereas fresh mass highlights the importance of mechanical support, since lamina water mass contributes to the total load a petiole must support [42,43]. Indeed, prior studies have shown that the lamina fresh mass vs. leaf surface area scaling relationship is typically statistically more robust than the lamina dry mass vs. leaf surface area scaling relationship, indicating that lamina fresh mass is a more biologically realistic indicator of the physiological processes and mechanics of leaves [44][45][46][47].
Clearly, leaves acquire carbon through photosynthesis, while stems provide mechanical support and transport water and nutrients to the leaves. Thus, a high degree of biomechanical and physiological coordination is anticipated between leaf and stem traits [21,48]. However, stems not only bear the static weight of the leaves but also dynamic forces such as wind [49]. That bamboo culms are in fact stems helps to explain the "diminishing returns" phenomenon observed between TLM and TNLM.
Scaling Relationships between MLM and Leafing Intensity
For each of the four species, the scaling exponent of MLM vs. leafing intensity was negative, indicating that MLM decreases with increasing leafing intensity. A negative isometric relationship for MLM vs. leafing intensity was observed in the case of I. pedalis. This result is consistent with previous research indicating that an isometric trade-off exists between leaf size and leafing intensity associated with a constant biomass partitioning between leaves and stems [20]. The other three species and the pooled data for the four species exhibit negative allometric relationships between MLM and leafing intensity, indicating that the rate of increase in leafing intensity exceeds the rate of decrease in MLM. It is worth noting that leafing intensity can reflect the size of a plant's "bud bank" [50]. As noted, species often require and use larger "bud banks" to compensate for their short stature and therefore manifest higher leafing intensities [33]. Leafing intensity may also provide a mechanism for "space escape" and "temporal escape" [33]. Plants with higher leafing intensities (and thus relatively more, smaller leaves) can maximize the likelihood that at least some leaves will go unnoticed by insects, thus providing modest protection against herbivory [8,51]. As a "temporal escape" mechanism, plants that produce more leaves can gradually display their leaves over longer periods of time, allowing them to compensate for leaf tissue losses that occur early in the growing season [8,51,52].
Different Metrics of Leafing Intensity
Leafing intensity is commonly defined as the number of leaves per unit stem volume or mass [14,20,33]. However, stem volume and mass are closely correlated with plant height, which can be measured more conveniently and non-destructively than stem volume and mass. In this study, no difference was observed between the numerical values of the scaling exponents of the MLM vs. N/TNLM scaling relationship and the MLM vs. N/H scaling relationship (i.e., the 95% CI of the difference between the two groups of bootstrap scaling exponent replicates included zero). This result indicates that N/H can be effectively used as a measure of leafing intensity for Indocalamus species. Future work is required to determine whether this holds true for other monocot species, or possibly for eudicot species with more complicated leaf shapes.
Conclusions
This study investigated the scaling relationships among various above-ground functional traits associated with the "leafing intensity premium" hypothesis using four Indocalamus species. The results reveal both isometric and allometric scaling relationships between the total leaf fresh mass per culm (TLM) and the total non-leaf above-ground fresh mass (TNLM). These findings highlight complex resource allocation patterns for these bamboo species, with one of the four species exhibiting the phenomenon called "diminishing returns" in leaf mass investment. Additionally, negative scaling relationships between the mean leaf fresh mass per culm (MLM) and leafing intensity were observed, indicating trade-offs between leaf size and number. Although I. pedalis displayed a negative isometric relationship, indicating a one-to-one biomass partitioning between MLM and leafing intensity, the other three species and the pooled data manifest negative allometric relationships, indicating a faster increase in leafing intensity compared to the reduction in leaf size. Finally, an alternative and equally effective definition of leafing intensity is validated (i.e., N/H), permitting a non-destructive assessment of leafing intensity for the four Indocalamus species. This work offers additional insights into the ecological and evolutionary significance of leaf size variation and biomass investment strategies in four Indocalamus species. Future research is needed to test whether the leafing intensity premium hypothesis holds true for other monocot and eudicot species, and whether there are significant differences in the scaling exponent of mean leaf size per shoot vs. leafing intensity for the same species growing under different conditions.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/plants13162340/s1, Table S1: The raw data on various above-ground functional traits of four species of the genus Indocalamus Nakai.
Author Contributions: Formal analysis, W.Y., K.J.N., J.C. and P.S.; investigation, W.Y., J.W. and Y.M.; writing-original draft preparation, W.Y.; writing-review and editing, K.J.N., J.C. and P.S. All authors have read and agreed to the published version of the manuscript.
Funding: J.C. was funded by the National Natural Science Foundation of China (grant number 32071832).
Data Availability Statement:
The data can be found in the online Supplementary Table S1.
Figure 1. Fitted results for bivariate plots of total leaf fresh mass per culm vs. non-leaf above-ground fresh mass per culm on a log-log scale for four species of Indocalamus: I. barbatus (A), I. pedalis (B), I. pumilus (C), and I. victorialis (D). The red lines are regression curves; CIs denote the 95% confidence intervals of the slope; r^2 is the coefficient of determination; and n is the number of culms sampled for each species.
Figure 2. Fitted results for bivariate plots of mean leaf fresh mass per culm vs. leafing intensity on a log-log scale for four species of Indocalamus: I. barbatus (A), I. pedalis (B), I. pumilus (C), and I. victorialis (D). The red lines are the regression curves; CIs represent the 95% confidence intervals of the slope; r^2 is the coefficient of determination; and n is the number of culms sampled for each of the four species.
Figure 3. Fitted bivariate log-log scaling relationships for the pooled data of the four species of Indocalamus. (A) Mean leaf fresh mass per culm vs. leafing intensity under definition 1 (N/TNLM), and (B) mean leaf fresh mass per culm vs. leafing intensity under definition 2 (N/H). The red lines are the regression curves; CIs represent the 95% confidence intervals of the slope; r^2 is the coefficient of determination; and n is the number of culms sampled for each species.
Table 1. Morphological and agronomic characteristics of the four bamboo species.
"Environmental Science",
"Biology"
] |
Optimization of Single-Point Incremental Forming of Polymer Sheets through FEM
Incremental sheet forming is a relatively new process designed to form sheets of pure metals, alloys, polymers, and composites for the manufacture of components in fields where customized production in a short time and at low cost is required. Its most common variant, named single-point incremental forming, is a flexible process using very simple tooling; the sheet is clamped along its edges and a hemispherical-headed tool follows a required path to deform the sheet locally. In so doing, better formability is reached without any dedicated dies and at low forming forces, which are some of the attractive features of this process. Nevertheless, and with special reference to thermoplastic sheets, incrementally formed parts suffer from peculiar defects like twisting and wrinkling. In this numerical work, analyses were conducted with a commercial finite element code by varying the toolpath strategy for the incremental forming of polycarbonate sheets. Features such as the forming forces, the deformation states, the energy levels, and the forming time were investigated to determine the toolpath strategy able to optimize the incremental forming process of polymer sheets. The results of the numerical analyses highlight a reduction of the forming forces when using toolpaths alternating diagonal up and vertical down steps and, presumably, a reduced risk of failures and defects. Furthermore, these toolpath strategies also have a positive impact on the environment in terms of energy and do not significantly increase the manufacturing time.
Introduction
The industrial revolution of the last decades demands manufacturing processes with less changeover time and tooling cost; consequently, conventional manufacturing processes might prove ineffective for small batch production and prototypes. In addition, the advances in the use of computers applied to manufacturing have encouraged the development of procedures with higher levels of flexibility (for instance, those not requiring dedicated dies). In this context, incremental sheet forming (ISF) has begun attracting much greater attention in recent years, due to its flexible and cost-effective nature that enables it to respond to the above-mentioned challenges [1].
ISF emerged as an excellent alternative to other material forming techniques for incrementally deforming flat metal sheets into the desired complex three-dimensional profile; to do this, a computer numerically controlled (CNC) generic tool stylus acts on a sheet of material peripherally clamped along its outer edges [2]. Among the different existing ISF process variants, the most basic is known as single-point incremental forming (SPIF), and involves the use of a simple tool and the absence of dies for the superimposition of local deformations in the sheet [3].
The main characteristics of this flexible process are higher formability, low forming forces, reduced lead time, and cost-effectiveness [4], while the applications cover several industrial fields like aerospace, on-site repair of military components, prototypes in automotive, as well as customized products in medical, architecture, etc. [5].
The ISF process is widely used for several metals and alloys (including those characterized by poor workability) like aluminum, steel, copper, titanium, etc. [6], but it was recently extended to polymers and composites [7].
Starting from the preliminary studies of Le et al. [8] and Franzen et al. [9], which highlighted the suitability of the above-mentioned process for the manufacture of complex polymer sheet components, the interest in the ISF of thermoplastic polymers has been increasing over the past few years, enough to make it a valid alternative to conventional technologies based on heating-shaping-cooling manufacturing routes; the latter (i.e., extrusion, molding, casting, and thermoforming) are only economically viable for mass production, because of the high costs tied to the tool's design and manufacturing [10]. Conversely, one of the main benefits of working polymers by ISF is that the process can still be carried out at room temperature, achieving high levels of material formability [11]. The applications of parts obtained by ISF of these strongly engineered materials range over many fields, in particular aerospace and unmanned aerial vehicles, racing and commercial cars [12], and customized products in the biomedical sector [13].
Failures and undesired deformation phenomena like twisting and wrinkling affect ISF sheets, including but not limited to polymer ones [14,15]. Twisting is due to the component of the forming forces tangential to the toolpath, which generates in-plane shear and, consequently, an uncontrolled twist of the sheet with respect to the clamping frame; it is also promoted by higher levels of vertical forming forces when forming polymer sheets, since they are affected by significant indentation that accentuates the phenomenon [16,17]. Twisting was investigated and observed by the authors on axisymmetric components obtained by a unidirectional toolpath, both for aluminum alloy [18] and polycarbonate sheets [19]; the twisting angles for the latter were very high compared to the sheet metal ones (about 22° vs. less than 6°), and a dramatic reduction of the phenomenon was possible by adopting an alternate toolpath. All the same, severe forming conditions in terms of sliding forces could generate wrinkling, in particular when forming thin thermoplastic sheets characterized by low mechanical resistance [20].
Measuring and predicting the forming forces during ISF is a major research area since it is an efficient tool for monitoring the quality of the process [21]. Besides, the reduction of the forces acting in the sheet plane represents a way to reduce the risk of failures and defects on the ISF of polymer sheets; in addition, lower sliding forces translate into lower global forming forces with a reduced risk of tool failure, improvement of the formed surfaces' quality, and the chance to reduce lubricants to lower friction and sticking of material to the tool. Finally, it can also involve energy implications; sustainable manufacturing is a hot topic in the face of global warming and, consequently, the improvement of the industrial processes also goes through the mitigation of their negative impact on the environment in terms of energy [22].
Considering the above, it is evident that an optimization strategy for the process under examination is advisable; moreover, optimization methods have been largely considered for enhanced performance in several engineering cases [23][24][25]. The choice of the toolpath, controlled by a part program generated by computer-aided manufacturing (CAM) software, represents a very significant success factor for the considered manufacturing process, because of its relevant effect on different aspects like dimensional accuracy, thickness distribution, processing time, and surface roughness [26]. Consequently, this paper aims to identify a toolpath strategy useful for optimizing the ISF of polycarbonate through a numerical approach based on Finite Element Method (FEM) simulations; an accurate reading of the FEM results served as an optimization tool in a direct (manufacturing time and energy states) and indirect way (prediction of defectiveness and risks of failure as a function of the forming forces and the energy levels). As is well known, FEM is a numerical technique to find approximate solutions to the partial differential equations of a system [27]; it allows producing much more detailed results than experimental investigations, and is often quicker and less expensive. FEM simulations have been largely employed to increase understanding of phenomena affecting the ISF of polymer sheets. For example, the authors determined stress, strain, and thickness distributions during the process [20], and developed a thermo-mechanical numerical model to investigate the suitability of friction heating, generated by the forming tool rotation, to form polymer sheets during ISF [28]. Moreover, Medina-Sanchez et al. [29] proposed a model to predict the axial force in SPIF of thermoplastic sheets, while FEM aided the investigation of the feasibility of an advanced robotized polymer ISF in [30].
Concerning polycarbonate, it is considered a "transparent metal", due to its remarkable mechanical and physicochemical properties (such as light weight, strength, corrosion resistance, and price, among others) [31]; polycarbonate parts find application in the areas of communications and transport, medical apparatus and instruments, the aerospace environment, etc. [32].
The manufacture of a fixed wall angle cone frustum through SPIF, starting from polycarbonate sheets at room temperature and setting five different toolpaths, was simulated with a commercial FEM code. Some outputs of the simulations were investigated, like the forming forces, the manufacturing time, different forms of energy, and the stress-strain states, to draw conclusions on how the toolpath strategy influences the process and to identify the best solution in terms of reduced risk of failures and defects, surface quality, and energy implications.
Materials and Methods
The FEM commercial code LS-DYNA was used to simulate the incremental forming process under study; this software is a general-purpose finite element program capable of simulating complex real-world problems, widely used by, among others, the automobile, aerospace, construction, military, manufacturing, and bioengineering industries. The simulations were carried out considering the equipment and the materials at the disposal of the authors (as well as typical process parameters), while the efficiency of the code is supported by different studies present in the literature [33,34], including some of the authors' works; in particular, they used this FEM code both for metal (aluminum alloy sheets [18]) and polycarbonate sheets, in the latter case for foreseeing the occurrence of wrinkling [35].
The numerical model consisted of the sheet, a square with side L = 100 mm (equal to the internal area of the clamping frame) and thickness t = 1.5 mm, and the hemispherical head of the tool (the part in contact with the sheet; radius r = 5 mm).
The components, i.e., conical frusta, presented a major base with radius R = 35 mm, height h = 20 mm, and a wall angle α = 60°. The main characteristics of the equipment and of the components to manufacture are schematized in Figure 1. Both the sheet and the tool were simulated using shell elements and appropriate material models; the main characteristics of the FEM model (properties of the elements and of the materials, boundary, and contact conditions) are reported in Table 1. In detail, the tool was considered rigid compared to the polycarbonate sheet, and a model for the tooling in forming applications was adopted, coupled with the default type of shell elements, which gives extremely cost-effective computational solutions. The material model used for the sheet is suitable for elastoplastic polymers, while the shell element formulation is capable of simulating the deformable parts in forming problems.
Fixed constraints were assigned to a set of nodes (the peripheral nodes of the sheet) to simulate the action of the clamping frame.
The interaction between the two parts was governed by a contact card; the friction coefficient was set in line with a numerical work present in the literature [36]. Note, however, that in experimental tests like the simulated ones, the risk of damage is limited by lubricating the sheets with mineral oil for cold forming; in so doing, friction and sticking of material to the tool are reduced [20].
Five different toolpath strategies were considered in this study. The reference toolpath involved the contact of the tool with the sheet on points of a spiral path; with this solution, the tool follows a continuous path along the X, Y, and Z axes, outlining the shape of the geometry and avoiding the line scarring caused by step downs, typical of a Z-level contouring toolpath [37]. The spiral was described with a vertical distance between two successive spirals equal to vs = 1.0 mm; the points along the spiral were angularly spaced θ = 6° from each other. The schematization in Figure 2, limited to one spiral, reports the characteristics of this path; in detail, it reports vs and θ, while three consecutive points are labelled A to C.
As anticipated above, four other strategies were considered to link two consecutive points; the first one was a stair path (involving an alternation of horizontal and vertical down steps), while the other three alternated between diagonal up and vertical down steps. For these four cases, the ramp height of the first step, hr, was equal to 0 (stair path), 0.5, 1.0, and 1.5 mm. Figure 3 summarizes, in a not-to-scale representation, the planar development of the five toolpath strategies; they are labeled ref_tp (reference toolpath) and hr0_tp, hr0.5_tp, hr1.0_tp, and hr1.5_tp, as a function of hr.

The tool motion with a feed rate equal to v = 1000 mm/min was assigned by means of the X, Y, and Z displacement laws (a series of interpolation points represented by the Cartesian coordinates coupled to the corresponding manufacturing times), created with a Microsoft Excel spreadsheet developed by the authors.

Due to the long toolpaths that characterize ISF processes, the computational time of the simulations could be very long; in order to reduce it, mass scaling was used [38].
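For readers who want to reproduce such displacement laws, the sketch below generates the (time, X, Y, Z) interpolation points for the cone-frustum spiral from the parameters given above (vs = 1.0 mm, θ = 6°, R = 35 mm, α = 60°, v = 1000 mm/min). It is a minimal Python stand-in for the authors' Excel spreadsheet, not their actual tool: hr > 0 approximates the diagonal-up/vertical-down strategies, hr = 0 degenerates to the continuous reference spiral, and the stair path (hr0_tp) is omitted for brevity.

```python
import math

# Process parameters taken from the text; the code itself is illustrative.
R = 35.0                     # major base radius of the frustum [mm]
ALPHA = math.radians(60.0)   # wall angle
H = 20.0                     # forming depth [mm]
VS = 1.0                     # vertical pitch per spiral revolution [mm]
THETA = 6.0                  # angular spacing between points [deg]
FEED = 1000.0                # feed rate [mm/min]

def toolpath(hr=0.0):
    """(t, x, y, z) points: hr == 0 gives the continuous reference spiral,
    hr > 0 the diagonal-up / vertical-down strategies (stair path omitted)."""
    pts = [(0.0, R, 0.0, 0.0)]
    t = 0.0
    n_pts = int(H / VS * 360.0 / THETA)      # 1200 points for these values
    for i in range(1, n_pts + 1):
        phi = math.radians(i * THETA)
        z = -VS * i * THETA / 360.0          # nominal spiral depth (negative)
        r = R + z / math.tan(ALPHA)          # radius shrinks along the wall
        x, y = r * math.cos(phi), r * math.sin(phi)
        # diagonal up to z + hr, then vertical down to the nominal depth
        waypoints = [(x, y, z)] if hr == 0.0 else [(x, y, z + hr), (x, y, z)]
        for w in waypoints:
            t += math.dist(pts[-1][1:], w) / FEED * 60.0   # seconds at feed
            pts.append((t, *w))
    return pts

for hr in (0.0, 0.5, 1.0, 1.5):
    print(f"hr = {hr}: forming time ~ {toolpath(hr)[-1][0]:.0f} s")
```

As expected, the printed forming time grows with hr, since the up/down steps lengthen the path covered at constant feed rate.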
Results
This section summarizes the main results from the simulation campaign. They are commented on in the following Discussion section, along with other results.
The forming forces were collected from the most common contact-related output file, RCFORC: an ASCII file containing the resultant contact forces, written in the global coordinate system, for the slave and master sides of each contact interface. Figure 4 reports the trend of the forming forces vs. time for the two limit cases, i.e., ref_tp (Figure 4a) and hr1.5_tp (Figure 4b). Note that F X, F Y, and F Z were obtained directly from the RCFORC file, while the module of the force acting in the sheet plane, F XY, is the combination of F X and F Y:

$F_{XY} = \sqrt{F_X^2 + F_Y^2}$

These are the forces that the slave (the sheet) transmits to the master (the forming tool), with respect to the coordinate system in Figure 1.

To appreciate quantitatively the influence of the toolpath strategy, Figure 5a,b report the trends of F Z and F XY for the five different toolpaths. Note that, in these figures, and in contrast to Figure 4, the time on the abscissa axis is expressed in percentage terms with respect to the processing time of each case, for a simpler comparison among the force trends. In fact, the forming times are not the same; concerning this, Figure 6 reports the forming time vs. the toolpath strategy.

Finally, Figure 7 reports three forms of energy vs. the toolpath strategy. In detail, they are the total energy, E t, the sliding energy, E s, and the internal energy, E i. They were collected from the ASCII file named GLSTAT.
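As a minimal post-processing sketch, the snippet below computes the in-plane resultant and the total contact force from the three force histories defined above. The arrays are placeholders standing in for values parsed from the RCFORC file (the parsing itself is not shown).

```python
import numpy as np

# F_x, F_y, F_z: force histories on the tool, e.g. parsed from the
# LS-DYNA RCFORC ASCII output (parsing not shown; arrays are assumed).
t   = np.linspace(0.0, 300.0, 3000)        # time [s], placeholder
F_x = 150.0 * np.sin(2 * np.pi * t / 60)   # placeholder histories [N]
F_y = 150.0 * np.cos(2 * np.pi * t / 60)
F_z = 400.0 * np.tanh(t / 50)

F_xy  = np.hypot(F_x, F_y)           # in-plane resultant, sqrt(Fx^2 + Fy^2)
F_tot = np.sqrt(F_xy**2 + F_z**2)    # total contact force on the tool

print(f"peak F_xy = {F_xy.max():.0f} N, peak F_z = {F_z.max():.0f} N")
```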
Discussion
From Figure 4, the ref_tp case shows the typical trend of the forming forces for a SPIF process for the manufacture of a cone frustum with a spiral toolpath [20,39]. This trend was also observed for the hr0_tp case, but it is not represented in the figure, to preserve readability. F Z gradually increases with time until it stabilizes. The oscillations are due to a slight variability of the stiffness of the sheet: during a spiral, the distance of the tool from the frame varies (minimum along the X and Y axes, maximum along the diagonals) and, with it, the mechanical reaction of the sheet. F X, F Y, and F XY increase with time until reaching the steady-state condition, with the typical sinusoidal trend of F X and F Y, while F XY presents a trend similar to F Z. On the contrary, the hr1.5_tp case, as well as the hr1.0_tp and hr0.5_tp strategies (the last two not reported in the figure), show an atypical and irregular trend of the forces. Figure 5 highlights that the ref_tp and hr0_tp strategies determine similar forming forces, slightly higher for the former; this is justified by the fact that the ref_tp strategy involves a continuous vertical down movement of the tool (and thus the most severe contact conditions), different from all the other strategies. For both the ref_tp and hr0_tp cases, the typical trend of the forces is due to the continuous tool/sheet contact throughout the process; this is obvious for ref_tp, but is also true for hr0_tp, due to the elastic springback that guarantees contact between the tool and sheet during the horizontal steps of the toolpath [40]. Starting from the hr0.5_tp strategy, both F Z and F XY decrease significantly; in addition, their trends are completely irregular. F XY even tends to zero for the last two strategies; this is representative of almost no contact between the tool and the sheet on top of the ramp heights of the toolpath (see the lower F Z values), because the hr values are similar to the extent of the elastic springback.

The differences in terms of forces are reflected in different strain states too. Concerning this, Figure 8 reports the maximum shear strain for two consecutive forming steps related to the two extreme cases (ref_tp and hr1.5_tp strategies), and for about the same percentage manufacturing time (about 25%).

From the figure, it is possible to note that the ref_tp strategy (Figure 8a) determines strain accumulation over a large area of the sheet, and its distribution is asymmetric, following the advancement of the tool (whose current position is indicated with an arrow); these phenomena are much less noticeable for the hr1.5_tp strategy (Figure 8b), which translates into a more localized deformation and reduced distortion of the shells subject to the forming action of the tool.
In light of the results in terms of forming forces (see Figure 5), and considering that higher and more regular plane forces determine a combination of continued strain accumulation and asymmetric strain levels (and, consequently, a higher probability of twisting occurrence) [41,42], it can be assumed that the twisting phenomenon can be mitigated by using a toolpath strategy from hr0.5_tp onward. In addition, the numerical model used in this work does not include wrinkle instability criteria and is not capable of accurately predicting the occurrence of wrinkling; despite this, the results of the simulations allow for the assumption that the occurrence of this defect is also less likely from the above-reported toolpath strategy onward.
Concerning the forming time (see Figure 6), it increases passing from the ref_tp to the hr1.5_tp case. This is a direct consequence of the increased length of the toolpath: the first strategy is the shortest way to cover a spiral with discrete points, while the other ones gradually diverge from this condition.
From the histograms of the energies (see Figure 7), the total energy decreases passing from the ref_tp to the hr1.5_tp case; however, the last three cases show quite varied E t values. The values of E s follow the trend of the total energy, in accordance with the observations on the plane forces. The values of E i, together with the considerations on the sliding energies, suggest increasing work done in permanent deformation, as well as a different way of deforming (from predominant distortion to compression of the sheet); these last observations are in line with the considerations on the shear strain states in Figure 8.
Conclusions
The present work follows a numerical approach for the simulation of the incremental forming of polycarbonate sheets; a commercial FEM code was used to simulate the process for the manufacture of a fixed wall angle cone frustum by varying the toolpath strategy, aiming to identify solutions capable of reducing the forming forces, and with them the risk of failures and defects, and the energy expenditure.
The analysis of the results highlights that the toolpaths alternating diagonal up and vertical down steps reduce both the vertical and the in-plane forming forces, compared to the reference toolpath (for which they reach about 450 N and 350 N) and, with them, the probability of occurrence of twisting and wrinkling. In addition, these solutions also guarantee energy savings and reduce the share of distortion energy, without significantly increasing the manufacturing time.
Considering all the results of the numerical campaign, it can be argued that one of the toolpath strategies with a positive ramp height guarantees low force and energy levels; the hr1.0_tp case (the solution with a ramp height of 1.0 mm) can be considered the best choice since, for the same total energy (0.51 MJ), it guarantees lower sliding energy (linked to the elements' distortion) compared to hr0.5_tp (0.09 MJ against 0.21 MJ), and a lower manufacturing time compared to hr1.5_tp (about 10 s less).
Future works could aim to extend the numerical analyses; for example, not only the toolpath but also the shape of the forming tool could be varied. In addition, an experimental campaign reflecting the numerical tests could be carried out to investigate features, like twisting and surface roughness, that are not observable by FEM. | 6,147.4 | 2023-01-01T00:00:00.000 | [
"Materials Science"
] |
Efficiency Decreases in a Laminated Solar Cell Developed for a UAV
Achieving energy autonomy in a UAV (unmanned aerial vehicle) is an important direction for aerospace research. Long endurance flights allow for continuous observations, taking of measurements and control of selected parameters. To provide continuous flight, a UAV must be able to harvest energy externally. The most popular method to achieve this is the use of solar cells on the wings and structure of the UAV. Flexible solar cells mounted on the surface of the wings can be damaged and contaminated. To prevent these negative changes, it is necessary to apply a protective coating to the solar cells. One of the more promising methods is lamination. To properly carry out this process, some parameters have to be appropriately adjusted. The appropriate selection of temperature and feed speed in the laminator allows a PV (photovoltaic) panel to be coated with film, minimizing any defects in the structure. Covering PV panels with film reduces the performance of the solar cells. By measuring the current–voltage characteristics, data were obtained showing the change in the performance of solar cells before and after lamination. In the case of testing flexible PV panels, the efficiency decreased from 24.29 to 23.33%. This informed the selection of the appropriate number of solar cells for the UAV, considering the losses caused by the lamination process.
Introduction
External energy harvesting allows standalone power supply systems to extend their working time and even achieve full energy autonomy [1][2][3]. PV (photovoltaic) panels allow electricity to be obtained from solar energy, and surplus energy can be stored in batteries [4,5]. The use of such systems is gaining more and more popularity in the electromobility industry, among others in charging stations for electric cars, and in aviation as an element of the wings or other parts of the vehicle structure. In the case of UAVs (unmanned aerial vehicles), the operation of a solar cell under the conditions in which it will be operated should be verified [2]. Currently, UAVs are used for, amongst other functions, distributing shipments, mapping, surveillance, and monitoring of borders and crops [6,7]. The biggest research area associated with UAVs is increasing flight duration without unnecessary landings. For this purpose, systems should be developed to increase flight duration, optimize the system in terms of weight, and provide functionality in all weather conditions. Obtaining external energy allows for energy autonomy; however, it is closely related to the location and time of flight [8]. The use of solar cells allows for an increase in flight duration, but it also has numerous limitations that have to be taken into account during the design of power supply systems [9][10][11].
The sun is the largest source of free energy on Earth. Solar energy is a renewable, pollution-free, sustainable, and inexhaustible resource. A solar cell is a device that converts solar energy into electricity through the photovoltaic effect. The most-used material for solar cells is silicon. PV modules can be mounted on the wings of a UAV in several ways:
• Adhering to an existing wing: this method is good for retrofitting an existing UAV. Aerodynamics are normally not affected, as modules are extremely thin. The biggest advantage of this solution is that it allows the possibility of replacing PV cells in the event of damage. Wiring between modules is time-consuming with large wings, as strings of solar cells run from root to tip. The biggest disadvantage of this solution is the sealing of the gap between two modules [20,30].
• Placed into a mold: the challenge is to fix the modules in their exact position and to ensure no resin leaks onto the front of the module. The advantage of this solution is the wiring, which is easy to arrange. The effects of PV modules on aerodynamics are largely eliminated, but modules cannot be swapped in case of damage. One variation of this method is to place solar cells inside the wing structure with a transparent coating, e.g., transparent film. This technology is mainly practiced within hobby modeling circles, and the production process can be seen on models that are often developed by enthusiasts, e.g., on YouTube channels. Due to the labor-intensive nature of this method and the impossibility of replacing damaged elements, it is rarely used in commercial UAVs.
• As the wing surface: lightweight solar modules need more ribs; sturdier solar modules need fewer ribs but add more weight. The wiring arrangements are easy in this solution [31][32][33][34].

This article details aspects of the development of a solar-powered UAV which is designed to be able to fly in the stratosphere: the TwinStratos (TS) UAV [35][36][37]. The goal of this research was to obtain an understanding of the laminated solar cells used in the first, smaller prototype of TS (Figure 1). Decreases in efficiency and changes in the parameters of solar cells can affect the energy produced by the system. For the purposes of this experiment, the UAV was equipped with SunPower Maxeon Ne3 solar cells, which are flexible and allow for adaptation to curved surfaces. The manufacturer of the SunPower Maxeon Ne3 cells ensured efficiency at a level of 24.3% [38]. Data received from a test stand allowed us to calculate whether the number of solar cells assumed in the initial design was able to perform the assumed flight mission. Data obtained in the test allowed the development of a simulation model for the power supply system of the envisioned solar-powered UAV. In previous works, this integrated design approach based on model-based system engineering developed by the project team was applied to the design and testing of ultra-efficient racing vehicles [39], automated guided vehicles (AGVs) [40], as well as to the design of general aviation class aircraft [41].
Lamination Process
In this study, we decided to laminate solar cells and glue PV panels to the UAV's wings. This method of mounting was chosen due to the fact that it allowed application onto an existing aircraft. The second reason was related to the proof-of-concept stage of the UAV being developed: if any changes were needed, these could be made relatively easily.
Solar cell lamination has two purposes:
• Improving the aerodynamics of the wing with the elimination of sharp edges;
• Protection against scratching of the solar cell, the action of chemicals, and the harmful effects of weather conditions.
A disadvantage of lamination is the reduction in efficiency of solar cells in relation to the efficiency of uncovered solar cells. The test plan relating to lamination has been divided into individual stages:
1. Testing of films of various thicknesses, involving local damage to samples and then checking their reaction to external forces. This study enables the selection of a suitable film to be ultimately used in the UAV.
2. Examination of the selected film with a spectrophotometer to find out its characteristics of reflection, absorption, and transmission.
3. Covering the solar cell with the selected film. During lamination of the solar cells, an important aspect is the selection of appropriate process parameters.
4. Testing the current-voltage characteristics of solar cells before and after the lamination process.
Film for Lamination
There are a few types of film with different thicknesses that can be used as protective surfaces for solar cells. In aerospace, one of the most widely used encapsulating materials is EVA (ethylene-vinyl acetate) [42]. The advantages of this material are high transmission and resistance to UV radiation [43]. The disadvantage in the case of EVA is the method required in the lamination process. To provide a smooth connection between the film and the solar cell, it is necessary to use a vacuum, ensuring that no air or humidity will be in contact with the solar cells. This requires advanced equipment that increases the cost of making the prototype of the UAV.
Another kind of film that can be used for solar cell lamination is PVC (polyvinyl chloride) film. PVC and EVA are similar materials. EVA is more flexible, lighter, and stronger than PVC, but the advantage of PVC is its ease of application to the solar cell. In this case, use of vacuum is not necessary. The time needed to prepare PVC-laminated solar cells is shorter than in the case of EVA.
For our prototype solution, we decided to use PVC film due to the simplicity of its application to the solar cell's surface. The films tested ranged from 60 to 250 microns in thickness. The inner side of the film is covered with glue, allowing adhesion to the laminated elements. The thinnest films were characterized by high flexibility but low mechanical strength; for thick films, the inverse. To select the appropriate film thickness, we conducted several tests to check the strength of the films. Films were tested primarily in terms of their actual application and typical working conditions. For this reason, at this stage of the work, no research was carried out with the use of advanced equipment, but only with simple tools: knives, drills, needles, and files. The prepared damage types represent the most common defects that can occur during UAV flight operations. The performed tests were to show whether damage sustained in service would allow further operation of the UAV or not. A visual method was used to check for defects appearing after film failure.
Tests were carried out on laminated films. A laminator was used to prepare the samples; for lamination, we used an OPUS ProfiLAM A3 laminator (OPUS, Gliwice, Poland). With the PVC film method of solar cell lamination, it was found that during the welding of the film, the guide rollers removed air just before the lamination process. With such a laminating process, there was no need to control the pressure to facilitate getting rid of air bubbles. The preparation of samples for testing began with laminating paper as a precursor to laminating solar cells. Due to its hygroscopicity, the paper allowed the adhesive to be absorbed, thanks to which no stains or air bubbles were formed. The use of paper additionally allowed us to obtain a rigid surface like the surface of a laminated solar cell. In the case of double lamination (film-paper-film), the second layer of film additionally stiffened the whole, making the sample similar to the structure of the UAV's laminate surface.
Initial Film Thickness Tests
In the case of testing the mechanical strength of the film against damage in real conditions, three tests were carried out. The first test consisted of cutting the film lengthwise and then bending it. The purpose of the test was to show the reaction of the longitudinally torn film to the stress on the wing of the UAV. Defects of this type may appear in the case of incorrect workmanship during manufacturing. The second test consisted of creating spot damage to the film and then checking whether the defect would enlarge due to bends. The purpose of this test was to present an example of films being damaged in flight. The final test examined damage to the edges of the film and then analyzed how stresses and external forces affected this damage. This test was similar to the second test, but the film was damaged at the end of the sample. This type of damage may occur when the film is detached from the UAV structure.

Every test was conducted several times on each film type, with around a dozen bend repetitions per film. This number of repetitions made it possible to observe changes in the structure of the samples. In cases where changes were not noticeable, the test time was extended and/or the method of loading the samples was changed, using stretching and bending along other axes.
The incision test followed by the bend test yielded the observations in Table 1. Figure 2 presents the results of the incision test.
Table 1. Observations from the incision test, by film thickness.

Thickness (µm) | Observation
≤100 | The incision damaged the inner side of the film. Due to the high flexibility of the film, the incision did not enlarge.
125-200 | The incision damaged the inside of the film, with the gap enlarging due to prolonged bending.
≥250 | The incision did not damage the inner side of the film. However, due to bending, the gap burst.
In the second test, related to spot damage, no film thickness showed any enlargement of defects, even under the influence of prolonged stresses as a result of bending or applying tensile stress.

The third test, related to end damage to the film, showed the effects listed in Table 2.
Table 2. Results of the end-damage test, by film thickness.

Thickness (µm) | Observation
<100 | In the case of thin film, the defect easily increased due to its delicate surface.
100-200 | In the case of the intermediate films, the defect increased, but more slowly than in the case of thin and thick films.
≥250 | In the case of thick films, the defect increased easily due to their greater brittleness/fragility.

Thinner films allow flexibility over a low radius equal to a few centimeters. This feature of thin films allows for their use on small UAV elements such as hulls, ailerons, and flaps. Thick films do not allow the same flexibility over a low radius as thin films do, but they are more durable. Thick films provide higher resistance to mechanical damage. However, the thicker the film, the lower the efficiency of the solar cells. Thicker films are also heavier than thinner films, which is another point in favor of using the thinnest possible film.
After the lamination process, solar cells are soldered. Soldered joints stiffen the PV panel, increasing its brittleness. Analyzing the research carried out on possible damage of laminated solar cells during the flight of the test UAV and its response to defects, it was decided that the thinnest film that could be used was a film with a thickness of 100 µm.
Parameters of the Lamination Process
While testing the film samples, the parameters of the lamination process were of less importance due to the use of absorbent paper, to which the film adhered easily. In the case of solar cells, these parameters are more important due to the non-absorptive nature of solar cells. The parameters that played the greatest role in an optimized process were temperature and speed of lamination.
An optimized lamination process should create a smooth texture on the surface of the solar cell without visible defects (Figure 3a). The laminator used allowed 9 lamination speeds, with feed rates from 200 to 1800 mm/min in increments of 200 mm/min. High feed rates (lamination speeds) caused the film to peel off the solar cell. A second disadvantage of high feed rates was the formation of adhesive stains on the solar cell's surface (Figure 3b). Feed rates over 1400 mm/min produced these defects.
Another variable parameter was the temperature of lamination. The temperature range of the lamination process was between 70 and 140 °C. Low temperatures caused ineffective lamination, characterized by the formation of damp patches from the adhesive (Figure 3c,d).
Feed rates of 800-1000 mm/min and temperatures in the range of 90-105 °C produced optimal results.
The selection of the appropriate lamination process parameters made it possible to obtain a homogeneous PV panel surface free from flaws. Prepared samples were subjected to tests that examined their electrical properties before and after lamination.
Parallel to the lamination of the chosen solar cells, a different type of flexible solar cells was also laminated. For each type, the optimal parameters of temperature and lamination speed determined for that type were used.
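Encoding these observations as a simple process-window check makes them easy to reuse when setting up the laminator. The thresholds below are the ones reported above; the split between "optimal" and "marginal" is a reading of the text, not a value from the paper.

```python
def lamination_check(feed_mm_min: float, temp_c: float) -> str:
    """Classify laminator settings against the windows reported in the text.
    The 'optimal' vs. 'marginal' boundary is our reading of the results."""
    if feed_mm_min > 1400:
        return "reject: adhesive stains / film peeling at high feed rates"
    if temp_c < 70 or temp_c > 140:
        return "reject: outside the usable 70-140 degC range"
    if 800 <= feed_mm_min <= 1000 and 90 <= temp_c <= 105:
        return "optimal: smooth, defect-free surface expected"
    return "marginal: usable, but damp patches or minor defects possible"

print(lamination_check(1000, 95))   # -> optimal
print(lamination_check(1600, 95))   # -> reject
```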
Test Stand for Collecting the Characteristics of Transmission, Absorption, and Reflection
During the research, we used an Evolution 220 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) to measure the characteristics of transmission, absorption, and reflection of the film. The spectrophotometer allowed for the determination of the characteristics in wavelengths ranging from 190 to 1100 nm.
Microscale Characterization Method
To obtain images of the monocrystalline surface topography of the solar cells, we used a scanning electron microscope (SEM). Images were obtained with a Supra 35 (Zeiss, Thornwood, NY, USA) SEM at an acceleration voltage of 10 kV. The secondary electron (in-lens) detector was used to obtain images of the surface topography.
Test Stand for Collecting Current-Voltage Characteristics of a Solar Cell
The test stand (Figure 4a) for measuring the current-voltage characteristics of solar cells allowed measurements to be obtained for the tested solar cells under STC (standard test conditions): irradiation with a power of 1000 W/m² at a temperature of 25 °C and the Air Mass 1.5 spectrum (AM 1.5) defined by European standard IEC 60904-3 [44]. This system for I-V characteristic measurements of solar cells meets all the requirements of the IEC 60904-1 standard [45].
The device consists of a light source in the form of a xenon flash lamp with a power of 1430 watts. After passing through the filter ("Air Mass Filter") and the optical system, it uniformly illuminates the measuring table (Figure 4b).
Transmission, Absorption, and Reflection of the Film
The characteristics of absorption, reflection, and transmission were tested for the film before and after the lamination stage, and are presented in Figure 5b-d. The SunPower Maxeon Ne3 datasheet contains the spectral response of the solar cells [38], which is presented in Figure 5a. The spectral response is the ratio of the current generated by the solar cell to the power incident on the solar cell [46]. These characteristics make it possible to observe changes in the spectral response depending on the wavelength. When analyzing the graphs, the main observations concerned the wavelength range and the places where the changes in the characteristics occurred. In terms of the solar energy supplied to solar cells, the changes in the UV and infrared ranges are not as significant as in the visible light range.
There are some changes in the UV wavelength range for transmission, absorption, and reflection. From a value of about 300 nm, the characteristics stabilize at one level over the entire range of visible light up to a final value of 1100 nm. The research carried out on the laminated film elucidated the changes in reflection, absorption, and transmission in the visible light range. A uniform value of the characteristics in the range from 300 to 780 nm demonstrates that the parameters of the solar cell in the visible light range will be constant. This information allows the conclusion that the system will operate with similar performance across the entire range of visible light.
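This conclusion can be checked numerically: the short-circuit current is proportional to the irradiance-weighted integral of the spectral response, and lamination scales the integrand by the film transmittance T(λ), so a flat T over the visible range yields a wavelength-independent relative loss. The sketch below illustrates the computation with placeholder curves; the measured Figure 5 data and the Maxeon Ne3 spectral response would replace them.

```python
import numpy as np

# wl: wavelength grid [nm]; T: film transmittance (0-1); SR: cell spectral
# response [A/W]. The arrays below are placeholders standing in for the
# measured Figure 5 curves and the Maxeon Ne3 datasheet.
wl = np.linspace(300, 1100, 161)
T  = np.where(wl < 320, 0.2, 0.96)            # near-constant above ~300 nm
SR = np.clip((wl - 300) / 700, 0, 1) * 0.6    # rising toward the IR
E  = np.interp(wl, [300, 500, 1100], [0.6, 1.5, 0.4])  # AM1.5-like irradiance

# Short-circuit current ~ irradiance-weighted integral of SR; lamination
# scales the integrand by T(lambda).
J_bare = np.trapz(SR * E, wl)
J_lam  = np.trapz(SR * E * T, wl)
print(f"relative current loss ~ {100 * (1 - J_lam / J_bare):.1f} %")
```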
Figure 6 shows the surface topography of a monocrystalline silicon solar cell. It was observed that there are randomly distributed pyramids on the surface, which may indicate the etching of the substrate in alkaline solutions. This chemical treatment of monocrystalline silicon significantly reduces the reflectance from the front surface of the solar cells. The texturization of the silicon surface is a key element in the production of photovoltaic cells, enabling the formation of an appropriate microstructure on the surface of the substrate, trapping solar radiation inside the material by repeated reflection [47][48][49]. All leads in the tested N-type solar cells are on the rear surface of the samples (Figure 7). The electrode topography of a monocrystalline silicon photovoltaic cell is shown in Figure 8a,b.
Microscopic Scale Observations of the Solar Cell
When analyzing the structure of the solar cell before and after the lamination process, no traces of microcracks were observed. The temperature changes during lamination and the force generated by the rollers pressing the film onto the solar cell did not damage the upper surface layer or the electrical connections of the solar cell. The decrease in the efficiency of the solar cell is therefore not due to microcracks, but only to the properties of the layer of film applied during lamination. The lower efficiency and deterioration of electrical parameters are related to the light transmittance factor of the film.
Solar Cell Characteristics
The test stand allowed the determination of the electrical specification of the solar cells (Table 3). These values are the mean of all measurements; the I-V (current-voltage) and P-V (power-voltage) characteristics are presented in Figure 9.
For each type (laminated and non-laminated), we used 25 samples of solar cells to conduct the research. The relative standard deviation (RSD) as well as the minimum and maximum values obtained during the tests are presented in Table 4.

The test stand provided an irradiation intensity with a power equal to 1000 W/m² to the cells. To obtain the characteristics for the lower range of irradiation intensity, the commonly used generic simulation model in the MATLAB/Simulink environment was applied. Data obtained during the STC tests of the solar cell were used as inputs to the simulation model. Data from the tested solar cells at different irradiation levels are presented in Table 5; the I-V and P-V characteristics are presented in Figures 10 and 11. The data introduced into the system allowed for the determination of the current-voltage (Figure 12) and power-voltage (Figure 13) characteristics of the solar cells for different temperatures. Using the temperature coefficients of the SunPower Maxeon Ne3 cells, the following values were applied: voltage: −1.74 mV/°C; current: 2.9 mA/°C; power: −0.29%/°C.
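As an illustration of how the Table 3 figures would be extracted from a measured sweep, the sketch below computes the maximum power point, fill factor, and STC efficiency from an I-V curve. The synthetic curve and the assumed cell area are illustrative placeholders, not the measured data.

```python
import numpy as np

def iv_summary(v, i, area_cm2, g_wm2=1000.0):
    """Extract key figures from a measured I-V sweep at STC-like conditions."""
    p = v * i
    k = int(np.argmax(p))
    v_oc = float(v[i > 0][-1])              # last point still sourcing current
    i_sc = float(i[0])                      # current at v = 0
    ff = p[k] / (v_oc * i_sc)               # fill factor
    eta = p[k] / (g_wm2 * area_cm2 * 1e-4)  # efficiency at G = 1000 W/m^2
    return p[k], ff, eta

# Placeholder sweep shaped like a single-diode curve (not measured data).
v = np.linspace(0.0, 0.73, 200)
i = 6.2 * (1.0 - np.expm1(v / 0.062) / np.expm1(0.73 / 0.062))
p_max, ff, eta = iv_summary(v, i, area_cm2=153.0)   # assumed cell area
print(f"Pmax = {p_max:.2f} W, FF = {ff:.2f}, eta = {100 * eta:.1f} %")
```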
The temperature coefficient data obtained from the simulation model allowed for a comparison with analytical calculations. Comparing these values, it can be concluded that the simulation model results are consistent with the calculations. The data of a solar cell for different temperatures are presented in Table 6.
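A minimal sketch of the analytical temperature correction referred to above, applying the quoted coefficients linearly around STC; the STC reference values used here are placeholders, not the measured data from Table 3:

```python
# Analytical temperature correction of solar-cell parameters using the
# SunPower Maxeon Ne3 coefficients quoted in the text:
#   voltage: -1.74 mV/degC, current: +2.9 mA/degC, power: -0.29 %/degC
T_STC = 25.0            # reference cell temperature (deg C)

# Placeholder STC values -- substitute the measured data from Table 3.
V_OC_STC = 0.73         # open-circuit voltage (V)
I_SC_STC = 6.05         # short-circuit current (A)
P_MPP_STC = 3.50        # maximum power (W)

KV = -1.74e-3           # V/degC
KI = 2.9e-3             # A/degC
KP = -0.29e-2           # 1/degC (relative power change)

def at_temperature(T_cell: float):
    """Return (Voc, Isc, Pmpp) shifted linearly from STC to T_cell."""
    dT = T_cell - T_STC
    v_oc = V_OC_STC + KV * dT
    i_sc = I_SC_STC + KI * dT
    p_mpp = P_MPP_STC * (1.0 + KP * dT)
    return v_oc, i_sc, p_mpp

for T in (-20.0, 0.0, 25.0, 50.0, 75.0):
    v, i, p = at_temperature(T)
    print(f"T={T:6.1f} degC  Voc={v:.3f} V  Isc={i:.3f} A  Pmpp={p:.3f} W")
```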
Conclusions
The SunPower Maxeon Ne3 solar cells selected for testing present repeatable electrical and physical parameters. The flexibility of the solar cell allows it to be bent and influenced by external forces without fear that the solar cell will be broken or damaged. These features make the cell suitable for aerospace applications.
Flexible solar cells were covered with a thin film to provide protection and enhancement of the solar cell and to improve the aerodynamics of the UAV structure. The reduction in the solar cells' efficiency caused by the lamination process reduced the energy supplied to the UAV power supply system. Tests carried out on the test stands allowed for the determination of the efficiency of laminated and non-laminated solar cells. It was found that the optimal film thickness for lamination, a PVC film of 100 µm, reduced efficiency by 4%. The spectrophotometric characteristics of transmission, absorption, and reflection allowed the conclusion that these values are constant over the full range of visible light. These data demonstrate that the efficiency losses are constant for the visible light range. Reductions in efficiency create the need to use more solar cells to obtain the same energy value. The reduced efficiency relative to non-laminated solar cells, together with the benefits of the enhanced protection that film lamination confers, results in the need to change the design of the UAV, optimize the energy consumption, or redefine the battery capacity.
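To illustrate the sizing consequence of the 4% figure above, a short back-of-the-envelope sketch; the baseline cell power and required array power are illustrative assumptions, not values from this study:

```python
import math

# Back-of-the-envelope effect of a 4% lamination efficiency loss on array sizing.
LAMINATION_LOSS = 0.04          # 4% relative efficiency reduction (from this study)

cell_power_stc = 3.5            # W per bare cell at STC (assumed)
required_array_power = 350.0    # W needed by the UAV power system (assumed)

laminated_cell_power = cell_power_stc * (1.0 - LAMINATION_LOSS)

cells_bare = math.ceil(required_array_power / cell_power_stc)
cells_laminated = math.ceil(required_array_power / laminated_cell_power)

print(f"bare cells needed:      {cells_bare}")
print(f"laminated cells needed: {cells_laminated}")
print(f"extra cells:            {cells_laminated - cells_bare}")
```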
Simulations that include the parameters of solar cells at different temperatures allow for the determination of the system's response when the solar cells are exposed to both frosty surroundings and high temperatures. In the case of the specific UAV designed as part of this work, with an optimal flight time of over 24 h, the 4% reduction in efficiency due to lamination will be significant.
Further work is planned to validate the simulation models of the laminated PV panels by testing the UAV in a real environment. The simulation model will allow predictions for the control of the energy balance of the UAV. Data related to solar cells, such as sun exposure, cloud cover, location, day duration, angular variation of the aircraft, flight scenarios, and energy consumption will also be considered. These data, in combination with the development of a power supply system, will allow for the calculation of the energy balance and planning of optimal flight paths in the stratosphere.
The methods developed for the lamination of solar cells and the data obtained will be used in the first prototype of the TS UAV. Subsequent improved iterations of TS will be able to fly in the stratosphere and achieve a cruising altitude of 20 km. These extreme conditions will allow verification of the initial assumptions with regard to the laminated PV systems meeting the requirements of this demanding environment.
Funding: This research was partially funded from EEA and Norway Grants 2014-2021 and was partially carried out in the framework of project No. 10/60/ZZB/153 "Long-endurance UAV for collecting air quality data with high spatial and temporal resolutions". This work has also been supported by Silesian University of Technology (grant no. 10/060/BKM22/2025) and co-financed by the European Union from the European Social Fund in the framework of the project "Silesian University of Technology as a Center of Modern Education based on research and innovation" POWR.03.05.00-71300-Z098/17. The APC was funded by Silesian University of Technology.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: The authors would like to express their thanks to the following researchers for their contribution to the research and for carrying out some of the preparatory work as part of Project-Based Learning: the supervisors Tomasz Rogala and Roman Niestrój, and the students Justyna Sobiech, Kamil Świątek, Dominik Lipok, Robert Lipka, Bartłomiej Ciupka, and Daniel Czernecki.
Conflicts of Interest:
The authors declare no conflict of interest. | 9,177 | 2022-12-01T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
BCST-APTS: Blockchain and CP-ABE Empowered Data Supervision, Sharing, and Privacy Protection Scheme for Secure and Trusted Agricultural Product Traceability System
School of Information Science and Technology, Taishan University, Taian, Shandong 271000, China School of Economics and Management, Taishan University, Taian, Shandong 271000, China College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China College of Information Science and Engineering, Shandong Agricultural University, Taian, Shandong 271000, China School of Information Science and Engineering, University of Jinan, Jinan, Shandong 250022, China
Introduction
Food is the people's first necessity, and food safety is of the highest importance. From a global perspective, food safety incidents are typical public health emergencies. In order to address them, countries around the world have successively studied and established a variety of APTSs relying on the agricultural product supply chain, mainly using a centralized technical architecture to realize the shared storage of traceability data. However, frequent privacy leaks in data centers and frequent food safety incidents have led consumers to lose trust in the traceability system. At the same time, considering many factors such as data ownership, data leakage, and their own commercial interests, agricultural production enterprises or organizations with a large amount of data are extremely cautious about opening their internal data, especially core data. When food safety incidents break out, data are from time to time unavailable, tampered with, or maliciously forged, resulting in the problems of scarce data and low reliability in the agricultural product traceability system (APTS). The main reasons for the above problems are as follows. Firstly, the data privacy of the participants in the supply chain is not effectively protected, making it difficult to establish a trust relationship between the participants. Secondly, regulators lack safe and effective regulatory technical means to effectively supervise the complete supply chain data. Finally, consumers no longer trust existing traceability systems and technologies. It can be seen that the contradiction between data privacy protection and efficient sharing in the APTS is becoming increasingly prominent, and the problem of data security is still the difficulty and pain point restricting the safe sharing and supervision of agricultural product traceability data. The reason lies in the imperfect data privacy protection and access control technology of the traceability system. The decentralization, tamper resistance, and traceability of blockchain technology provide new technologies and ideas for the construction of an APTS. Based on blockchain and CP-ABE encryption technology, this paper constructs a secure and trusted agricultural product traceability system (BCST-APTS), which can meet the requirements of whole-supply-chain data supervision, fine-grained authorized access control, and secure and trusted data sharing.
The main contributions of this paper are as follows: (1) With the help of cryptographic algorithms, the data stored on the blockchain can be encrypted to ensure data privacy and to resolve the trust problem between consumers and system participants. (2) Based on CP-ABE encryption technology, new technical means are provided to solve the problems of privacy protection, fine-grained access control, and data supervision for agricultural product supply chain data. (3) The proposed attribute management infrastructure scheme can more efficiently and flexibly meet the personalized privacy protection needs of supply chain participants. (4) A RE-CP-ABE scheme is proposed and elaborated in detail, which can quickly and accurately determine data access rights. More importantly, it can meet the data supervision requirements of the supervisory organization for the entire supply chain.
Agricultural Product Traceability System Based on Blockchain
Graves et al. [1] believe that the three processes of production, transportation, and sales are the core, and that integrated production, information sharing, and production operations are an important research direction for the supply chain. Cachon and Fisher [2] believe that information sharing can effectively improve the operational efficiency of the supply chain. Boehlje et al. [3] believe that building a traceability system for agricultural products can effectively reduce the cost of food supervision and improve the quality of products. Gao et al. [4] believe that the establishment of trust mechanisms and information sharing mechanisms should be accelerated between all entities in the supply chain, and that an information service platform should be built to realize corporate information sharing, so as to reduce the overall operating costs of the supply chain and improve its operating efficiency and economic benefits. However, the existing data sharing and traceability systems, which mainly adopt a centralized technology architecture, can no longer be accepted by consumers. More precisely, the actual value of the traceability system is gradually being weakened. A system architecture based on blockchain technology has the characteristics of decentralization, tamper resistance, traceability, etc., which can not only meet the traceability requirements of the entire agricultural product supply chain process, but also realize the distributed shared storage of whole-process agricultural product data. Agricultural blockchain technology can make traceability information fairer, more just, transparent, lightweight, and efficient in reaching consensus [5]. However, the consensus mechanism is a key technology for achieving consensus between organizations and nodes on the chain, and its vulnerability may damage the entire blockchain system [6,7]. Liu et al. [8] designed an anticounterfeiting traceability system based on blockchain technology that combines public and private chains to ensure the authenticity and reliability of the traceability information obtained and to solve the problem of difficult supervision in traditional traceability systems. Feng [9] established an agricultural food supply chain traceability system based on RFID and blockchain technology. The system covers all links of the agricultural product supply chain, including the whole process of data acquisition and information management, and realizes the quality and safety monitoring, tracking, and traceability management of agricultural products "from farm to table." Yang et al. [10] designed a "database + blockchain" agricultural product traceability information storage model and query method based on Hyperledger Fabric, with the encrypted hash value of the traceability data stored on the blockchain. The above research successfully focused on system architecture design and function realization, realized the distributed storage of agricultural product data, and ensured data integrity. However, the existing research lacks in-depth study of data confidentiality, secure storage, access control, etc., cannot protect the data and privacy of entities in the traceability system, and is difficult to apply in practice.
Privacy Protection and Access Control of Blockchain.
According to the degree of openness of a blockchain system, it can be divided into Public Blockchain, Private Blockchain, and Consortium Blockchain. According to whether the access of an organization's node needs to be licensed, it can be divided into Public Blockchain and Permissioned Blockchain. Obviously, nonpublic blockchains such as the Consortium Blockchain and Private Blockchain are called Permissioned Blockchains [11]. A Permissioned Blockchain is a type of blockchain in which each node must be licensed by a regulatory agency or authoritative organization and, after identity verification, is assigned specific system permissions to carry out specific business. Compared with the Public Blockchain, the Permissioned Blockchain is therefore more suitable for application scenarios that require supervision, cross-organization sharing, and multiparty business collaboration.
For any industry, users are unwilling to share their personal information and confidential data with competitors [12], such as source location privacy [13]. The design of an agricultural product supply chain scheme based on the blockchain should ensure that data are stored securely and credibly in encrypted form, that transaction records can be traced, queried, and appealed, and that private data belong to each participant [14]. In order to solve the data security problems faced by the traditional APTS, it is necessary to protect the privacy of the participants in the whole agricultural product industry chain and, based on safe and reliable data sharing, improve the enthusiasm of agricultural industrial organizations to participate in the construction and application of the traceability system, strengthen the effective supervision of regulatory agencies, and enhance consumers' confidence and satisfaction with the traceability results. The Hyperledger blockchain is committed to providing new solutions for data security and privacy protection [15,16]. For example, Hyperledger Fabric has been used in a pharmaceutical traceability system [12]. The APTS fully meets the above characteristics, which is also why it is a key application field of blockchain in agriculture.
Access control is the core technology for data privacy protection. Through access permissions, data can only be accessed by the owner and authorized legal users. At present, the Permissioned Blockchain mainly adopts technologies such as organization (user) identity authentication, privacy channels, main/subchain data isolation [17], the multi-subchain model [18,19], endorsement strategies, transaction encryption, smart contract encryption, and private data sets to realize access control of block data. Organizational identity authentication solves the access control problem at the blockchain network level and prevents unauthorized users from entering the blockchain network; the privacy channel realizes logical isolation between the organizations inside and outside the channel and achieves access control at the channel level, but creating a separate privacy channel incurs additional management overhead (such as maintaining chain code versions, policies, and the Membership Service Provider (MSP)). Obviously, main/subchain data isolation and the multichain model have the same kinds of problems as mentioned above. An endorsement policy can realize organization-level access control over smart contract writing, but there is a risk of privacy disclosure due to cross-channel unauthorized access, and the transaction encryption and smart contract encryption mechanisms still remain at the channel level; a private data set can realize access control of private data without creating a new privacy channel, but it still stays at the organizational level.
None of the above technologies can achieve more fine-grained (such as organization-level/node-level) access control to meet the complex cross-organization access requirements of the Permissioned Blockchain [20], and other access control technologies are still needed. Fabric CA version 1.4 has adopted Attribute-based Access Control (ABAC), using the organization's identity attributes for access control over smart contract (chain code) operations, but setting attributes only from the perspective of organizational identity still lacks flexibility. At the same time, the confidentiality of shared data cannot be guaranteed. Wang et al. [21] proposed an Attribute-based Distributed Access Control framework (ADAC) suitable for IoT blockchains. Based on ABAC and blockchain, Zhang et al. [22] use the access tree [23] to configure access policies to achieve fine-grained authorized access to IoT devices. ABE is also used for access control of data sharing on the blockchain. Alniamy and Taylor [24] proposed fine-grained access control of shared data in the distributed environment of the blockchain. Jemel and Serhrouchni [25] and Huang et al. [26] solved the problem of fine-grained access control faced by data protection in an open shared environment, but the attribute set is open to all nodes in the entire network, which can easily be exploited by malicious nodes to generate correct user keys. Wang et al. [20] used ABE to propose a data access control and sharing model to achieve fine-grained access control and secure sharing. With the increasing number of on-chain organizations, when cross-organization deployment increases information sharing between different organizations, ABAC implementation may become complicated and requires an attribute management infrastructure [27].
However, the above-mentioned existing research only focuses on the design of fine-grained access control and does not provide an overall plan that includes an attribute management infrastructure and effective supervision of encrypted data, which is not conducive to the unified supervision of encrypted data by supervisory organizations.
Block Data Encryption and Flexible Sharing.
Blockchain distributed ledgers and encryption technology can be used to realize privacy protection and safe sharing of global agricultural data, so as to ensure the stability of agricultural system operation and the authenticity of the business flow (information flow), capital flow, and logistics data of the entire agricultural industry chain [5]. Data confidentiality is a prerequisite for data security. Block (ledger) data security mainly encrypts transaction data through cryptographic algorithms. A symmetric encryption system can be used for blockchain data encryption [20,28]. This system requires both the encryption and decryption parties to share keys. The ciphertext data can be calculated using a multikey fully homomorphic encryption (MFHE) scheme. Chen et al. proposed a dynamic multikey FHE scheme based on the LWE assumption [29], which requires less "local" memory and distributes the ciphertext expansion process. With increasingly complex business exchanges between organizations and dynamic changes in the number of organizations, key distribution and management become complicated and difficult to operate. At the same time, there will be key leakage and multiple-encryption problems. If the entire blockchain uses the same cryptographic algorithm and key, it is meaningless for data protection in the blockchain. What is more dangerous is that once an organization or node is illegally compromised, the loss is immeasurable. Obviously, a symmetric cryptosystem is not the best choice for blockchain data encryption.
Relatively speaking, a public key cryptosystem based on a Public-Key Infrastructure (PKI) is more suitable. At present, blockchains mostly use public key cryptosystems to encrypt data [20]. Although they have high security, they are limited to data sharing between two parties, which cannot meet the needs of 1-to-N data sharing and multilevel access control [20]. In order to support more flexible public key generation, Sahai and Waters [30] proposed an Attribute-based Encryption (ABE) scheme, which uses a series of attribute sets instead of unique identifiers to identify identities. ABE is a fine-grained 1-to-N encryption scheme. Its advantages are as follows: (1) Encryption is only related to attributes, without paying attention to the number and identity of access members, which reduces the encryption overhead. (2) Only the members that conform to the ciphertext attributes can decrypt, so as to ensure the security of the data. (3) The keys are related to random numbers, and the keys of different members cannot be combined, which can resist collusion attacks [20]. Further research proposed Key-policy Attribute-based Encryption (KP-ABE) [31] and Ciphertext-policy Attribute-based Encryption (CP-ABE) [32]. KP-ABE embeds the policy into the encryption key and the attributes into the ciphertext: the key corresponds to an access structure and the ciphertext corresponds to a set of attributes. CP-ABE embeds the policy into the ciphertext and the attributes into the user key.
The ciphertext corresponds to an access structure, and the key corresponds to a set of attributes. The common feature of the two is to bind data encryption and decryption with a policy. The data can be decrypted only when the attributes in the attribute set satisfy the access structure. While retaining control of the ciphertext, fine-grained access control can be realized. The KP-ABE scheme is suited to static scenarios, such as paid video websites and log encryption management. In the CP-ABE scheme, the data owner specifies the strategy for accessing the ciphertext and associates the attribute set with the accessed resources. Data users can access ciphertext data according to their own attributes. This technology is suitable for applications such as private data sharing, for example, data encryption storage and fine-grained sharing in a cloud computing environment.
In view of the above analysis, this paper uses the CP-ABE scheme to encrypt the data stored in the APTS, which can not only protect the data privacy and security of the on-chain organizations, but also lay a foundation for flexible data sharing.
BCST-APTS: Secure and Trusted Agricultural Product Traceability System
3.1. System Logic Architecture. A secure and trusted agricultural product traceability system covers the entire process of production, processing, warehousing, logistics, and sales in the agricultural product supply chain. The participating entities include farmers/producers, processors, warehouse operators, logistics providers, retailers, and consumers. The business of each participant is carried out under the effective supervision of the regulatory authority. The regulatory authority is responsible for the identity authentication, authority management, and data supervision of each subject and for the traceability of agricultural product quality and safety events. The system logic architecture is shown in Figure 1.
The system realizes whole-process data collection for agricultural products "from farm to table," that is, preproduction data, mid-production data, and postproduction data, including structured and unstructured data. Structured data can be encrypted and stored directly on the blockchain, and unstructured data can be stored off the blockchain, but their digital fingerprints must be stored on the blockchain to ensure the integrity and confidentiality of the data. Based on the Permissioned Blockchain and data encryption technology, the system has the following technical characteristics.
(1) No Tampering. This ensures the authenticity, validity, and permanence of the data stored on the chain. When an agricultural product quality and safety incident occurs, the data of the relevant participants can be automatically extracted and uploaded to the system, preventing the relevant parties from tampering with, deleting, or forging data after the incident, so as to restore the truth of the incident and find the root cause of the problem.
In the above architecture, data encryption and flexible access control are the keys to ensuring that this system has the characteristics of security and credibility. This is also a typical difference between this work and other agricultural product traceability systems based on blockchain technology. In order to achieve the unification of the two, this paper focuses on realizing the encryption and fine-grained access control of the data on the blockchain based on the CP-ABE scheme. The reencryption scheme based on ciphertext-policy attribute encryption (RE-CP-ABE) is introduced in detail in Section 4.
System Deployment Network Architecture
As mentioned in Section 3.1, the BCST-APTS involves multiple participants in the agricultural product supply chain. At present, in order to achieve efficient internal management, each entity has built a relatively complete information system, but the business systems of the entities differ greatly in business logic, technical architecture, and deployment plans. Therefore, in order to achieve business alliances, data sharing between the subjects' business systems must solve the problems of multisource and heterogeneous internal business systems. The distributed characteristics of blockchain technology itself provide a new solution to these problems. Figure 2 shows the deployment network architecture diagram of the system. As shown in Figure 2, one or more blockchain nodes are built between the internal business system of each participant and the blockchain system, and the internal business system is seamlessly connected to the BCST-APTS with the help of a client. Organization A, Organization B, and Organization C act as different participants in the agricultural product supply chain, and there is also a regulatory organization responsible for the supervision and operation of the entire blockchain system.
Note. The regulatory organization here is not a traditional centralized agency; it is just one of the ordinary members on the blockchain. When agricultural products need to be traded, the relevant data are packaged, and private data and trade secret data are encrypted using the CP-ABE encryption algorithm. The encrypted ciphertext is released and stored on the blockchain through the blockchain node, and data retrieval is completed only on the local blockchain node.
From the perspective of an individual enterprise and of the entire chain as a whole, the system architecture has obvious advantages. First of all, from the perspective of the organization, not only can the stability of the internal business system be ensured, but a secure and reliable blockchain system can also be accessed. Secondly, from the overall perspective of the entire chain, all participants jointly maintain one set of ledgers to achieve cross-regional and cross-industry agricultural product traceability business collaboration and data sharing, so as to ensure the authenticity and credibility of agricultural product traceability.
CP-ABE Scheme
Features. The data in the blockchain ledger are open to all nodes, which cannot guarantee the confidentiality of the data, and the data are easy to access illegally. This paper introduces the CP-ABE encryption scheme to ensure data confidentiality and authorized access control for the data sharers and to realize the unity of data ownership and control on the blockchain. The CP-ABE encryption scheme [32] consists of five basic algorithms: setup, encrypt, keygen, decrypt, and delegate. Among them, CT = Encrypt(PK, M, T) is the encryption algorithm. The encryption algorithm encrypts a message M under the tree access structure T. Following the construction in [32], the ciphertext takes the form

$$CT = \Big( \mathcal{T},\ \tilde{C} = M \cdot e(g,g)^{\alpha s},\ C = h^{s},\ \forall y \in Y:\ C_y = g^{q_y(0)},\ C'_y = H(\mathrm{att}(y))^{q_y(0)} \Big).$$

Here, the ciphertext CT is constructed by T, the tree access structure, and Y is the set of leaf nodes of T. The function att(x) is defined only if x is a leaf node and denotes the attribute associated with the leaf node x in T.
The decryption function is DecryptNode(CT, SK, x), defined for a leaf node x with i = att(x) as

$$\mathrm{DecryptNode}(CT, SK, x) = \frac{e(D_i, C_x)}{e(D'_i, C'_x)} = e(g,g)^{r \cdot q_x(0)} \quad \text{if } i \in S, \text{ and } \bot \text{ otherwise}.$$

Here, SK is a private key associated with a set S of attributes, and x is a node from T. Reference [32] explains the meaning of the other parameters in detail, which will not be repeated here. From the above two formulas and the parameters T and att(x), it can be seen that, in the CP-ABE algorithm, attributes are extremely important for data encryption, decryption, and access control. They determine the flexibility of the access control policy and who can decrypt the ciphertext data. However, in order to meet the personalized encryption needs of each subject accessing the APTS, the system should support each subject setting personalized attributes, which will lead to an increase in synonymous or redundant attributes. At the same time, this is not conducive to the efficient supervision of encrypted data by regulators. Figure 3 shows an access control tree model in an apple traceability system. In order to show the principle, it only includes four parts: product type, brand, place of production, and logistics provider.
Access Control Tree.
It can be seen from Figure 3 that the leaf nodes represent the attributes of the shared data, and the non-leaf nodes are threshold nodes that support "AND" or "OR" logic operations. A data-requesting organization must meet the minimum threshold value before it can decrypt the secret value of a node. For example, the threshold node "1/2" means that satisfying at least one of the two attributes enables decryption, that is, one of JD.com or SF Express. When the data-requesting organization applies for access to encrypted data, only users who have the attributes in the access control tree and satisfy the logical relationship can access it, so that the data can be encrypted once and shared N times.
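As a concrete illustration of how such a threshold tree is evaluated against a user's attribute set, here is a minimal sketch; the node layout loosely mirrors the four branches of Figure 3, and the attribute names are illustrative only:

```python
# Minimal evaluation of a CP-ABE-style threshold access tree.
# A leaf is satisfied when the user holds its attribute; an inner node with
# threshold k is satisfied when at least k children are satisfied
# (k = number of children -> "AND", k = 1 -> "OR").
from dataclasses import dataclass, field

@dataclass
class Node:
    threshold: int = 1                      # ignored for leaves
    children: list = field(default_factory=list)
    attribute: str | None = None            # set only on leaves

def satisfied(node: Node, user_attrs: set[str]) -> bool:
    if node.attribute is not None:          # leaf node
        return node.attribute in user_attrs
    hits = sum(satisfied(c, user_attrs) for c in node.children)
    return hits >= node.threshold

# Illustrative policy: all four branches required (4/4), with the logistics
# branch an OR over two providers (the "1/2" threshold from the text).
policy = Node(threshold=4, children=[
    Node(attribute="product:apple"),
    Node(attribute="brand:X"),
    Node(attribute="origin:Shandong"),
    Node(threshold=1, children=[
        Node(attribute="logistics:JD.com"),
        Node(attribute="logistics:SF Express"),
    ]),
])

user = {"product:apple", "brand:X", "origin:Shandong", "logistics:SF Express"}
print(satisfied(policy, user))   # True -> this user could decrypt
```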
Attribute Management Infrastructure.
To solve the above-mentioned problems, this paper proposes that the authoritative organization or regulatory authority in the APTS build a whole-chain standardized attribute management infrastructure to provide attribute management, access, and other services to all access organizations in the entire blockchain. The construction process of this attribute management infrastructure is shown in Figure 4 and includes the following steps: (1) Initialization phase: at this stage, the authoritative organization establishes the structure and storage mode of the attribute management infrastructure and establishes the user attribute set to standardize the management of all attributes of the whole chain. The structure of the attribute management infrastructure can adopt key-value pairs, relational tables, etc., and be stored in the form of files or database tables. The user attribute set is used to store all attribute sets owned by an organization.
(2) Assign public attributes to the access organization.
When approving the access application of each organization, the authoritative organization assigns public attributes to the applying organization according to its business, role, etc. The public attributes can be the organization name, organization identity ID, system role, access time, and other contents. (3) The access organization applies for private attributes.
After accessing the permissioned blockchain system, each organization can apply to the authoritative organization to maintain its own private attributes based on its own business development. The authoritative organization decides whether to approve the application. After the application is approved, the attributes can be used by the organization for subsequent data encryption and decryption. (4) Establish the whole-chain attribute management infrastructure. The public and private attributes of each organization together constitute the attribute management infrastructure of the entire blockchain. (5) Maintain the attributes of the entire blockchain. The authoritative organization dynamically maintains and manages the attributes of the entire blockchain and the attribute collections in the attribute management infrastructure according to the results of attribute applications.
(6) Provide attribute services. The authoritative organization provides external attribute services such as query, modification, and deletion based on the attribute management infrastructure and the attribute collection of each organization. For example, the data issuer retrieves the attributes used for data encryption, and when the authoritative organization works in place of the CA, it can generate encryption keys for the data issuer.
The above construction method manages the attributes of the entire blockchain through an attribute dictionary, which can not only meet the personalized attribute requirements of different access organizations, but also convert redundant and synonymous attributes into standardized attributes. This provides a flexible and efficient solution to the difficult problem of attribute management in attribute-based encryption schemes.
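A minimal sketch of such an attribute dictionary follows; the class and method names are illustrative placeholders, not part of the proposed scheme's specification:

```python
# Illustrative whole-chain attribute registry maintained by the authoritative
# organization: public attributes are assigned on admission, private attributes
# are applied for, and synonymous names map to one canonical attribute.
class AttributeRegistry:
    def __init__(self):
        self.canonical: set[str] = set()      # standardized whole-chain attributes
        self.synonyms: dict[str, str] = {}    # personalized name -> canonical name
        self.org_attrs: dict[str, set[str]] = {}

    def admit_org(self, org: str, public_attrs: set[str]) -> None:
        """Step (2): assign public attributes when an organization is admitted."""
        self.canonical |= public_attrs
        self.org_attrs[org] = set(public_attrs)

    def apply_private(self, org: str, attr: str, same_as: str | None = None) -> None:
        """Steps (3)-(5): approve a private attribute; record synonyms."""
        if same_as is not None:
            self.synonyms[attr] = same_as     # redundant name, map to canonical
        else:
            self.canonical.add(attr)
        self.org_attrs[org].add(attr)

    def standardize(self, attrs: set[str]) -> set[str]:
        """Step (6): resolve personalized attributes to canonical ones."""
        return {self.synonyms.get(a, a) for a in attrs}

reg = AttributeRegistry()
reg.admit_org("OrgA", {"role:producer", "org:OrgA"})
reg.apply_private("OrgA", "cert:organic")
reg.apply_private("OrgA", "cert:green-food", same_as="cert:organic")
print(reg.standardize({"cert:green-food", "role:producer"}))
```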
RE-CP-ABE Encryption Scheme.
The CP-ABE encryption scheme can configure flexible and personalized encryption and access control policies with the help of attributes, but this also poses challenges for the whole-chain sharing and supervision of encrypted data. When the data issuer releases encrypted data to the traceability system, if, with the semantics of the original access control policy unchanged, the data can be unified into whole-chain standardized encryption, then the data owner, data requesters, and data supervisors can quickly access the data, which will greatly improve the management efficiency of the system. For this reason, RE-CP-ABE is proposed in this paper.
The RE-CP-ABE scheme consists of six core algorithms: Setup, Encrypt, UpBlockChain, ReEncrypt, AccessKeyGen, and Decrypt. All variable symbols used in the algorithms are shown in Table 1.
Setup() ⟶ (PK, MK). System initialization algorithm: it has no input parameters and outputs the public parameters PK and the master key MK.
Encrypt(PK, M, T) ⟶ CT. Personalized encryption algorithm: according to the personalized access control tree T, which is constructed by users according to their own personalized needs from a flexibly selected attribute set U_p and logical relations, personalized encryption is performed on the plaintext message M to obtain the personalized ciphertext CT.
UpBlockChain(CT, T). Block publishing algorithm: publish the encrypted personalized ciphertext CT and the corresponding access control tree T to the authoritative organization node or block generation node of the blockchain system, such as the Orderer node of Fabric.
ReEncrypt(CT) ⟶ CT′. Attribute reencryption algorithm: this algorithm is executed by the authoritative organization node and uses the attribute service provided by the attribute management infrastructure to reencrypt the received personalized ciphertext CT into a standardized ciphertext CT′. At the same time, the personalized access control tree T is converted into a standardized access control tree T′.
AccessKeyGen(S, T′) ⟶ SK. Access control and key generation algorithm: this algorithm is executed by the authoritative organization node and uses the attribute service provided by the attribute management infrastructure to determine whether the personalized attribute set S selected by the data-requesting user satisfies the standardized access control tree T′. If both the attributes and the logical relationship meet the requirements, the user's data decryption private key SK is generated. Otherwise, the user has no access authority, and the decryption private key SK cannot be obtained.
Decrypt(PK, SK, CT′) ⟶ M. Data decryption algorithm: according to the system public parameter PK and the decryption private key SK, the standardized data ciphertext CT′ is decrypted into the plaintext message M.
This algorithm suite is an improvement of the CP-ABE scheme [32] and retains the technical advantages of the original algorithm, namely flexibly setting access control policies and encrypting data according to attributes. At the same time, with the help of the standardized access control tree T′, the standardization of personalized data encryption and access control can be realized, so that access rights can be quickly determined and the effective supervision of encrypted data by data supervision organizations and authoritative third parties can be ensured.
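To make the data flow between the six algorithms concrete, the sketch below wires them together as stubs, reusing the AttributeRegistry (reg) from the earlier sketch. Only the call sequence and the role of each step (issuer, Orderer, requester) are shown; all function bodies are illustrative placeholders, not the actual cryptographic construction, and the policy check is simplified to an AND over attributes:

```python
def setup():
    """Authoritative organization: system initialization (run once)."""
    return "PK", "MK"     # public parameters and master key (placeholders)

def encrypt(PK, message, tree):
    """Data issuer: personalized encryption under access tree `tree`."""
    return {"body": f"Enc({message})", "tree": tree}

def re_encrypt(CT, registry):
    """Orderer node: convert personalized tree/ciphertext to standardized form."""
    tree_std = registry.standardize(set(CT["tree"]))
    return {"body": CT["body"], "tree": tree_std}

def access_key_gen(MK, user_attrs, tree_std, registry):
    """Authoritative organization: issue SK only if attributes satisfy the tree.
    (MK would be used to derive SK in the real construction.)"""
    if registry.standardize(user_attrs) >= tree_std:   # simplified AND-only check
        return "SK"
    return None                                        # no access right

def decrypt(PK, SK, CT_std):
    """Data requester: recover plaintext with a valid SK."""
    return CT_std["body"] if SK else None

PK, MK = setup()
ct = encrypt(PK, "batch#42 traceability record", ["cert:green-food", "role:producer"])
ct_std = re_encrypt(ct, reg)                 # standardized block published on-chain
sk = access_key_gen(MK, {"cert:organic", "role:producer"}, ct_std["tree"], reg)
print(decrypt(PK, sk, ct_std))
```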
BCST-APTS Scheme.
As one of the typical representatives of the Permissioned Blockchain, Fabric has been widely studied and applied in various fields. It realizes the technical positioning of business collaboration for alliance members, which determines that it can be successfully applied to the traceability system of agricultural products. This paper designs a secure and trusted agricultural product traceability system scheme based on Fabric and RE-CP-ABE, as shown in Figure 5.
This system scheme consists of a data publisher (Organization 1), a data requester (Organization N), and the authoritative organization; the authoritative organization is responsible for the operation and maintenance management of the CA node and the Orderer node of the system. Each connected organization manages its own Peer node and saves a copy of the blockchain ledger with the whole-chain data.
Business Process.
The specific business process of the scheme is as follows: (1) Data are encrypted and published on the blockchain. The data publisher (Organization 1) uses the application 1 client to interact with Peer node 1, which is a node managed by itself, for blockchain interaction. The client is responsible for selecting attributes for encryption and constructing the access tree. After attribute encryption, the ciphertext data are submitted to the Peer node for publishing on the blockchain.
(2) Reencryption by the authoritative organization: the RE-CP-ABE reencryption smart contract is deployed on the Orderer node, and the Orderer node uses it to reconstruct the access tree of the shared data, that is, to convert it to the canonical access tree under the attribute dictionary, and to reencrypt the data. As a result, ciphertext data (organization level) encrypted with different organizations' attributes are converted into ciphertext data (block level) under the canonical attributes of the whole-chain attribute dictionary. (3) Store standard ciphertext data blocks: the Orderer node is responsible for sorting the received transactions, generating blocks of reencrypted data, broadcasting them to each Peer node 1 and Peer node N on the permissioned blockchain, and writing them into the blockchain ledger. (4) The data requester decrypts: the data requester (Organization N) sends the data request and the attributes it owns to Peer node N through the client of application N. Then, Peer node N automatically executes the smart contract constructed from the canonical access tree to check whether the attribute and access control conditions for data access are met. If they are determined to be satisfied, it returns to the client a request response including the encrypted data and the standardized access control tree. The application N client decrypts the returned encrypted data to obtain the plaintext data.
Security Analysis.
In the proposed scheme, the CP-ABE encryption algorithm is used to protect data privacy and enforce access control, blockchain technology is used to ensure the distributed storage of data, and the RE-CP-ABE scheme is designed to enable efficient supervision of encrypted data, ensuring the security and efficiency of the design.
(1) Data confidentiality: this solution uses CP-ABE to encrypt the data on the blockchain and stores the ciphertext data on the blockchain. Although all nodes of the blockchain can obtain the data, the data content cannot be obtained when the attribute and access control tree requirements are not met. Therefore, data privacy and security are protected.
(2) Data integrity: this solution stores the traceability data of agricultural products on the blockchain. With the help of the chain storage structure of the blockchain ledger, it can effectively prevent a single node from tampering with the data and ensure the integrity of the traceability data.
(3) Data availability: all nodes participating in the traceability system can hold a copy of the complete ledger. Therefore, when the service of a single node or multiple nodes is abnormal or interrupted, the entire system can still operate normally, which effectively guarantees the availability of the system and the data.
(4) Binding security of data ownership and control rights: taking full advantage of the fine-grained access control technology of the CP-ABE algorithm, the scheme solves the problem of the data owner's control over distributed data stored on the blockchain and, at the same time, realizes one-time encryption for data release and multiple authorizations for data access, thereby improving the security and access flexibility of the data on the blockchain. (5) Reencryption security: the RE-CP-ABE scheme designed in this paper is implemented by the Orderer node of the authoritative organization, which can effectively identify malicious attribute operations such as forgery and impersonation by participants, thereby further ensuring the security of the reencrypted data.
Conclusion and Prospect
In this paper, blockchain technology and the CP-ABE algorithm are successfully integrated and applied to a secure and trusted agricultural product traceability system (BCST-APTS). Furthermore, an attribute management infrastructure is designed, which can regulate and efficiently manage the attributes of the entire blockchain. Based on this infrastructure and the CP-ABE algorithm, a RE-CP-ABE scheme is proposed, which can convert personalized encryption into standardized encryption, thereby ensuring the efficient sharing and supervision of data stored in the Permissioned Blockchain. Finally, this paper designs a BCST-APTS scheme based on Fabric and RE-CP-ABE. The above research work provides new solutions and ideas for solving the problems of data fraud, untrustworthy traceability results, and privacy leakage in the APTS. This work currently only designs the model of the system from the perspective of technology, architecture, and principles. Follow-up work will focus on in-depth research on the security of smart contracts, the efficiency of the attribute management infrastructure, the flexibility and efficiency of the RE-CP-ABE solution, and the final construction of a complete and usable APTS serving the development of agricultural product traceability technology.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors do not have any conflicts of interest. | 7,736.4 | 2022-01-15T00:00:00.000 | [
"Computer Science",
"Agricultural and Food Sciences"
] |
Predicting efficacy of drug-carrier nanoparticle designs for cancer treatment: a machine learning-based solution
Molecular dynamics (MD) simulations are highly effective in the discovery of nanomedicines for treating cancer, but they are computationally expensive and time-consuming. Existing studies integrating machine learning (ML) into MD simulation to enhance the process and enable efficient analysis cannot provide direct insights without the complete simulation. In this study, we present an ML-based approach for predicting the solvent accessible surface area (SASA) of a nanoparticle (NP), which denotes its efficacy, from a fraction of the MD simulation data. The proposed framework uses a time series model to simulate the MD, resulting in an intermediate state, and a second model to calculate the SASA in that state. Empirically, the solution can predict the SASA value 260 timesteps ahead, 7.5 times faster than the full simulation, with a low average error of 1956.93. We also introduce the use of an explainability technique to validate the predictions. This work can greatly reduce the computational expense, in both processing and data size, while providing reliable solutions for the nanomedicine design process.
cellular uptake and intracellular trafficking of NPs. In addition, these models provide data for monitoring NP interactions as they enter and exit a cell, which are difficult to calculate otherwise [13]. Internally, simulations make use of the forces acting on every atom. These can be obtained by deriving complex equations and deducing the potential energy from the molecular structure. However, the complex equations of MD simulations create two principal challenges [14]. The first challenge is deriving the potential energy for the system; further refinement is needed because the simulations are poorly suited to certain systems. The second challenge is the high computational demand of the simulations, which prohibits routine simulations longer than a microsecond. This leads to inadequate sampling of conformational states [15].
One way of accelerating MD simulations is to take advantage of advanced hardware technologies such as graphics processing units (GPUs) [16-18]. A GPU provides higher performance than a single CPU core in terms of increased speed and overall processor utilization. However, GPUs lack the flexibility in their hardware architectures to implement all MD simulation algorithms. Extensive rework and optimization must be applied, depending on the specific algorithm, to enable it to work efficiently on these specialized pieces of hardware.
The limitations of hardware architecture can be resolved by using machine learning (ML) during the development of MD simulations and molecular modelling. Wang et al. reviewed the use of ML-based methods to analyse and enhance MD simulations [19]. The first use of ML was to analyse the high-dimensional data produced by MD simulations through artificial neural networks (ANNs). Different forms of ANNs can be used to produce latent vectors in a low-dimensional feature space from trajectory data. This enables an efficient way of evaluating the equilibrium and dynamic properties of systems [20-28]. Another set of studies focuses on the active involvement of ML-based techniques during the simulation process to improve the sampling time and capacity [29-47]. However, for both objectives, model interpretability and model transferability to new systems pose a challenge. Another recent work implemented distance-based ML algorithms to simulate the atomistic interactions of an Au38(SCH3)24 nanocluster. The presented solution involves the use of transformation techniques to convert atomic coordinates into vectors of atomic interactions through descriptors that can be used directly with ML models. A Monte Carlo strategy was used to evaluate the energy landscape learned through the ML models and showed great results. However, the models were trained solely with Au38(SCH3)24 nanoclusters and focused mainly on a faster configuration-space probing method. Hence, a study that can predict target metrics for NP designs, such as the SASA value, without running MD simulations over a long period, and that is generalizable to new systems, holds much significance.
In this study, we propose a twofold approach. On the one hand, the issue of applicability of models to new NP designs is tackled, and on the other hand, using explainable AI provides a way to interpret the results. The proposed solution consists of three steps: transforming the data, using a hybrid ML network to predict the SASA value at a specified timestep, and using feature importance to explain and validate the results. Experimental atomic coordinate data for different NP designs are derived from MD simulations and are transformed using the many-body tensor representation (MBTR) descriptor, which reduces the data size and complexity, as well as reflecting interatomic interactions between pairs of elements. We present a combined ML system that consists of a time series model used to simulate the MD interactions over a specified period and a second deep neural network (DNN)-based model to calculate the SASA metric from the intermediate state. Feature importance is calculated using SHAP values to reflect the contribution of each element pair's interactions. In this paper, we show that ML methods can be used to substantially reduce the cost of NP simulations and, consequently, provide an efficient assistive tool for exploring the NP design space. This work is a novel study of predicting the SASA as a representative example; however, the approach can be generalized to a wide range of other properties and different molecules as well. In addition, we introduce a way to provide explanations for the models that increases both the reliability of the model and can give insights into better NP designs.
Results
The data used in this study are snapshots from MD simulations involving NP designs functionalized with 9 different drug types (see Table 3). These snapshots were taken over a range of variable periods at a rate of one snapshot per nanosecond. Specifically, 64 NP designs were recorded over 300 ns, 32 were recorded over 200 ns, and 23 were recorded over 120 ns. These snapshots contain the Cartesian coordinates of the atoms in the systems along with other information and represent how the atom movements are dictated by the environment. We first transform these data into a vector encoding by extracting design-specific global properties through MBTR descriptors. As a result, the data become manageable and compressed, with only n_features = 72 features representing each state. In order to apply ML models for the prediction of SASA values at future timesteps, the proposed solution combines two different models, each responsible for a part of the overall objective, as illustrated in the proposed workflow in Fig. 1. These are: 1. Time series model: this model is used to learn, from a fixed window of MBTR vectors, the inherent properties that influence the atomic interactions during that period. The learned pattern is used to forecast future MBTR vectors and is applied in a sliding window mechanism until the vector for the specified time is predicted. Hence, this model enables the approximation of the state of an NP at any given point in the future. 2. SASA model: to calculate the SASA value by exploiting the transitive property between the atomic coordinates and the MBTR vectors, we use a second model. This model predicts the value SASA(t) = P(θ | V_MBTR(t)) for any particular timestep t, where θ denotes the learned parameters.
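For readers unfamiliar with the descriptor step, the sketch below is a deliberately simplified, self-written stand-in for the MBTR transformation: a Gaussian-smoothed histogram of inverse pairwise distances per element pair, concatenated into one fixed-length vector. It is not the exact MBTR formulation used in the paper (for which dedicated descriptor libraries exist), only an illustration of how Cartesian snapshots become fixed-size feature vectors:

```python
import numpy as np
from itertools import combinations_with_replacement

def simple_mbtr(coords, elements, species=("C", "H", "O", "N"),
                grid=np.linspace(0.0, 1.0, 12), sigma=0.05):
    """Toy MBTR-like descriptor: for each unordered element pair, build a
    Gaussian-smoothed histogram of inverse interatomic distances, then
    concatenate the histograms into one fixed-length vector."""
    features = []
    for a, b in combinations_with_replacement(species, 2):
        hist = np.zeros_like(grid)
        for i in range(len(coords)):
            for j in range(i + 1, len(coords)):
                if {elements[i], elements[j]} == {a, b}:
                    inv_d = 1.0 / np.linalg.norm(coords[i] - coords[j])
                    hist += np.exp(-((grid - inv_d) ** 2) / (2 * sigma ** 2))
        features.append(hist)
    return np.concatenate(features)   # length = n_element_pairs * len(grid)

# One random 20-atom snapshot -> one fixed-size feature vector.
rng = np.random.default_rng(0)
xyz = rng.uniform(0, 10, size=(20, 3))
elems = list(rng.choice(["C", "H", "O", "N"], size=20))
print(simple_mbtr(xyz, elems).shape)
```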
The data are split into training and test sets with a ratio of 80:20, which translates to 107 designs in the training set and 12 in the test set. During the splitting, the order in time of the data for each design is preserved so that the models can capture the sequential properties. The test set is manually chosen to have representative samples from different ranges in the dataset. Each of the 12 designs in the test set, along with the whole training data, is depicted in Fig. 3a by taking the minimum and maximum SASA values over the whole period.
Time series prediction.
As discussed in the "Methods" section, we experiment with two approaches for time series prediction. Both approaches process the input data based on a sliding window method, and the window size dictates how long the simulations must run before the solution can be used; an optimal value for the size of this predefined batch can be set by weighing the simulation cost of generating it against the error threshold (see Table 1). The first approach is a transformer model using multivariate MBTR vectors as input to predict the next timestep's MBTR. The transformer model is used because its self-attention mechanism is suitable for effectively approximating the interatomic interactions. The model achieves a mean absolute error (MAE) value of 40.16 on the test set of 3120 samples for a fixed window size of 40. Here, we use the MAE as the error metric since it provides a linear score for deviation from the original value on a compact scale. The final MAE values are much higher than expected, which can be attributed to the small size of the dataset for such a large model. Hence, we use a second approach to minimize the error values with the same amount of data. As the next method, an ensemble approach is trained using 72 separate XGBoost models [48], with each model predicting the next timestep's value of one feature. The outputs from each model are then concatenated to produce the final vector for that timestep. The results of how different values of the window size influence the outcome of the ensemble approach are presented in Table 1; in all cases, the MAE value is comparatively much smaller and suitable for the solution. The best achieved MAE of 1.57 is for the smallest tested window size of 10. Figure 2a shows bar plot representations of the predictions using the ensemble approach and the transformer model for randomly sampled test data. Figure 2b shows a detailed bar plot representation of the MAE for each model in the ensemble approach.
From the predictions, it can be seen that the XGBoost models provide better accuracy than the transformer model. From Fig. 2b, we can see that most of the features in the ensemble approach produce below-average MAEs. Eight features have above-average MAEs, while only 4 of the 72 features have an MAE above 10.
With these results, we used the ensemble approach as the time series predictor for the combined solution. Additionally, as this approach uses a classic ML algorithm, it is robust to smaller dataset sizes.
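A condensed sketch of this per-feature ensemble with a sliding window is shown below; the hyperparameters, array shapes, and stand-in training data are illustrative, not the tuned values from the study:

```python
import numpy as np
from xgboost import XGBRegressor

W, F = 40, 72                       # window size, number of MBTR features

def make_windows(series):
    """series: (T, F) MBTR matrix -> (X: flattened windows, y: next rows)."""
    X = np.stack([series[i:i + W].ravel() for i in range(len(series) - W)])
    y = series[W:]
    return X, y

# Illustrative stand-in for the training trajectories.
train_series = np.random.rand(300, F)
X_ts, y_ts = make_windows(train_series)

# One regressor per MBTR feature, as in the ensemble approach.
models = [XGBRegressor(n_estimators=50, max_depth=4).fit(X_ts, y_ts[:, f])
          for f in range(F)]

def forecast(window, n_steps):
    """Roll the ensemble forward n_steps with a sliding window."""
    window = window.copy()
    for _ in range(n_steps):
        x = window.ravel()[None, :]
        nxt = np.array([m.predict(x)[0] for m in models])
        window = np.vstack([window[1:], nxt])   # slide the window forward
    return window[-1]                            # MBTR vector at the target step

print(forecast(train_series[:W], n_steps=5).shape)   # (72,)
```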
SASA prediction.
To determine the best-performing deep neural network for this task, we evaluated models with different architectures. Keeping the number of layers and the activation functions fixed, we experimented with different numbers of neurons in the feedforward network. The model with 512 neurons per hidden layer had an MAE of 6265.85, whereas the model with 128 neurons had a higher MAE of 6810.92. The model with 256 neurons per hidden layer performed best, with an MAE of 936.42; hence, it is used as the base model.
Both the MBTR vectors and the SASA values of the NP designs for each timestep were stacked vertically to form the training and testing datasets. Figure 3b illustrates the predicted and expected SASA values, which change continuously over 300 iterations, for different designs in sequential order.
Combined inference.
As the SASA value takes an uncertain amount of time to reach a stable range, the duration of the MD simulations has to be set in advance to a maximum value within which all NPs are expected to reach that state. Reflecting the same property, inferences in the proposed solution are made up to a given point in time, which is achieved by running the time series model s_steps = t − w_s − 1 times, where t is the target timestep. We start the combined inference with the MBTR vectors of the initial timesteps for a fixed window size and use the proposed workflow to predict the SASA value at the 300th timestep. Different window sizes, w_s, are tested, the same as those for the time series model, and the results are evaluated by comparing the predicted SASA value at the 300th timestep with the actual value for each design using Eq. (1). The comparative results are presented in Table 1.
$$\mathrm{MAE} = \frac{1}{k} \sum_{i=1}^{k} \left| y_i^{(t)} - \hat{y}_i^{(t)} \right| \quad (1)$$

where k refers to the number of NP designs in the test set, t is the final timestep for that design, y_i is the ground truth, and ŷ_i is the predicted value for the ith design. From Table 1, it can be observed that although the MAE for the time series model is smallest for the smaller window sizes, the best score for the combined inference is achieved with a window size of 40. Hence, we use this value for comparing the outputs for the test set designs against ground-truth values acquired by MD simulations, and the results are presented in Table 2.
It can be observed from Table 2 that the predictions are very close to the SASA values obtained by running the MD simulations for the whole duration. The potential of the model is therefore considerable, especially given the computing and resource expenditure of acquiring these values through MD simulations for a large number of NP designs.
Explainable AI prospects.
To establish the reliability of the results, we use SHapley Additive exPlanations (SHAP) 49, applied to our model to obtain the importance of the atomic interactions that most strongly affect the model's output, i.e., the SASA value. The results of the proposed approach show a strong correlation between the MBTR descriptors and the corresponding SASA values, indicating that the interatomic distances can affect how the NP evolves.
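A sketch of how such attributions could be computed with the shap library is given below; the model handle, the background-sample size, and the number of explained states are assumptions, since the paper does not report the exact call pattern.

```python
import numpy as np
import shap

# X_train, X_test: stacked MBTR vectors, shape (n_samples, 72)
# sasa_model: trained regressor mapping MBTR vectors to SASA values

# KernelExplainer is model-agnostic; a small background set keeps it tractable.
background = shap.sample(X_train, 100)  # assumption: 100 background samples
explainer = shap.KernelExplainer(lambda x: sasa_model.predict(x).ravel(), background)

# Attribute the predictions for a handful of test states to the 72 MBTR features.
shap_values = explainer.shap_values(X_test[:10])
mean_impact = np.abs(shap_values).mean(axis=0)  # global importance per feature
print(mean_impact.argsort()[::-1][:5])          # most influential element pairs
```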
Since the same structure in different residues may have different effects on solubility, the whole drug-carrier system is not suitable for determining feature importance. For example, Panobinostat-based and Quinolinol-based NPs have opposing properties: Panobinostat is a hydrophilic (water-attracting) drug, whereas Quinolinol is hydrophobic (water-repelling), and the two have different impacts on the resulting SASA value 7. As the drugs contain the same groups consisting of the same elements, the relationship between the MBTR-derived interatomic distances of the whole system and the SASA values is insufficient for explanations. For this reason, we generated MBTRs and built a separate model for each residue. In our approach, we focus on per-residue explanations to identify the element pairs within each residue that can drive the SASA value higher, as opposed to the elements that are less significant.
For example, for the drug residue of Panobinostat-based NPs (Fig. 4), it can be observed that pairs of hydrogen atoms and pairs of carbon atoms are very important for how steady the molecules on the surface are. The graph shows both positively and negatively contributing element pairs: positive interactions can increase the SASA value, whereas negative interactions can decrease it. Hydrogen-atom pairs may have such a large impact because the more spread out the hydrogen atoms are, the more hydrogen bonds they can form with the solvent molecules. In contrast, as the carbons occur mainly in long chains, a relatively larger distance may indicate folding, which reduces solubility.
Discussion
Due to the wide range of biochemical and physicochemical properties of NPs and the expense of in vivo testing, computational solutions (often MD simulations) are a more feasible and precise way to study NPs for anti-cancer treatment 50. This work has been developed within an application scenario defined in the H2020 project EVO-NANO. The overall project scenario was to perform in silico NP design evaluations (MD simulations) before the synthesis of selected NPs, evaluate the designs in vitro using vascular microchips, and finally run in vivo experiments on mouse cancer xenografts in which the biodistribution, efficacy, and toxicity of the designs can be validated. Although computational methods provide a faster transition from the laboratory to the clinic, their bottleneck is the high computational resource and time requirements, which limit the experimental possibilities. The work presented in this paper focuses on the in silico step and proposes an approach to accelerate the evaluation of NP designs by predicting the stable state without executing complete MD simulations. The most significant contribution of this work is that it addresses the limitations of MD simulations and provides a scalable solution. It offers the opportunity to eliminate, from a large pool of designs, the NP designs that do not possess the expected properties; as a result, a small number of drug-carrier systems with the largest efficacy can be selected for further assessment. Completing an NP simulation over 300 ns takes several days on high-performance computing resources, while the approach discussed in this research takes less than ten minutes starting from the input batch. Hence, if w_s = 40 is used, the time gain is approximately 7.5-fold (300 ns of simulation time versus 40 ns) for a simulation period of 300 ns. The computational cost is also reduced, since the trained model can predict the stable state of an NP design within a very short time, while the number of simulation steps remains adjustable. Real case studies on the use of automated learning-based prescreening have already been shown to be feasible and accurate 51, and the target variable, SASA, has been shown to be effective for comparative analysis between different NP configurations 7. In addition, this approach can be adapted to related applications where certain properties must be monitored, such as hydrophobic/hydrophilic properties 52.
In drug discovery, explaining the decisions made by ML models is crucial, especially given their impact. Among the most important properties of such explanations are transparency (understanding the rationale behind the predictions), justification (the reasoning behind accepting the outcomes), and informativeness 53. An explainable outcome not only establishes the credibility of the results by validating what is expected but can also be used in reverse to find associations between molecular structure and physicochemical properties. We use local explainability techniques and demonstrate feature importance for a subset of the problem to achieve transparency. The effect of relative interatomic distances on the target property may not be directly applicable in the design process, but it can be used to establish new insights into the relationship between molecular structure and the target property. Moreover, this information can be expanded by breaking the problem into finer pieces and observing the model's behaviour from every perspective.
A limitation of this work is the restricted availability of training data. More varied data covering different SASA ranges would enhance model performance; the model is currently trained on 107 designs, and exposure to new designs would help it generalize further. Another limitation is the use of the MBTR descriptor, which encodes the whole NP structure into a simpler form at the cost of information loss. In the future, instead of working with a single descriptor, a combination of descriptors could summarize the complex structure in a concise form without losing properties of the NPs. Additionally, we have explored explainability only in a limited scope and demonstrated that the potential of such techniques in this area is very large. However, the relative distances between atoms are not configurable and hence cannot be translated directly into design decisions. As a future recommendation, explanations could be expanded so that every structure in an NP design can be thoroughly assessed and can directly influence design decisions. This can be achieved by extracting a hierarchy of properties, for instance the ratio of drug to background molecules, the number of residues, and the sizes of the NP and its core, and evaluating the target characteristics against them.
Methods
In this section, we discuss the data used for this study, the transformation technique, and the proposed models in detail. This study did not require ethical approval.
Data description. The data used in this project are derived from MD simulations generated with the AMBER19 software 54. In these simulations, the initial energy of the systems was minimized, and the temperature was then raised to 300 K. The MD simulations were run for one NP design at a time and stored in PDB format, a standard for files containing atomic coordinates. A PDB file contains the elements used in the system, atomic coordinates in (x, y, z) format, and residue names. Each simulation was run for a predefined time, in this case 300, 200, or 120 ns, and the PDB files were extracted at 1 ns intervals. Example simulation states at the beginning, middle, and end of a simulation are shown for a Panobinostat-based NP design in Fig. 5. A gold (Au) core is used in each system, as it provides low toxicity and inertness and is easy to produce. The systems are designed with one of 9 drug types, which can be classified as hydrophobic or hydrophilic relative to each other. The NPs are functionalized through ligands such as polyethylene glycol, dimethylamino, and amino groups. The systems contain 6 or 7 unique elements, including Au, S, H, C, O, and N, and may additionally contain F or Cl. Apart from the drug molecules, other residues are used in combinations of 5-7 types per NP. The drug-forming residues are described in Table 3.
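For concreteness, the following sketch shows one way the per-nanosecond PDB snapshots could be loaded for downstream featurization; the use of MDAnalysis and the file-naming pattern are assumptions, not details taken from the paper.

```python
import glob
import MDAnalysis as mda

# Assumption: one PDB file per nanosecond, named snapshot_000.pdb, snapshot_001.pdb, ...
frames = []
for path in sorted(glob.glob("design_01/snapshot_*.pdb")):
    u = mda.Universe(path)                   # parse one snapshot
    frames.append({
        "coords": u.atoms.positions.copy(),  # Cartesian (x, y, z) coordinates, in A
        "elements": u.atoms.elements,        # element symbols (Au, S, H, C, O, N, ...)
        "residues": u.atoms.resnames,        # residue names distinguishing drug/ligand
    })
print(f"loaded {len(frames)} snapshots")
```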
A comprehensive discussion of how the NPs were designed for this experiment, along with how the simulations were conducted, is presented in the study by Kovacevic et al. 50. The ground-truth total SASA values used as prediction targets were calculated from the atomic coordinates of each snapshot.

Transforming the data using descriptors. To make the data suitable for an ML algorithm while keeping the representations computationally inexpensive and robust to rotations, permutations, and translations, we use MBTR descriptors. The MBTR is a global descriptor that provides a unique representation for any single configuration 57. Each system is divided into contributions from different element pairs and described using relative structural attributes. In this work, to extract a single value for a particular configuration of k atoms, we use an inverse-distance geometric function, g_2, as in Eq. (2). The structure is then represented by constructing a distribution, P_2, of the scalar values using kernel density estimation with a Gaussian kernel. The theoretical underpinnings of the descriptor are expressed in Eq. (3).

$$g_2(l, m) = \frac{1}{\lVert R_l - R_m \rVert} \quad (2)$$

$$P_2^{\,l,m}(x) = \frac{1}{\sigma_2 \sqrt{2\pi}} \exp\!\left( -\frac{\left(x - g_2(l, m)\right)^2}{2\sigma_2^2} \right) \quad (3)$$
where R_l and R_m refer to the Cartesian coordinates of atoms l and m, respectively, and g_2 is the reciprocal of their Euclidean distance. As the distributions are calculated for a set of predefined values of x and a standard deviation σ_2, each possible pair of the k species present yields multiple such values. These are combined into a single value by taking the weighted average for each pair of element types, as expressed in Eq. (4).

$$\mathrm{MBTR}_2^{\,Z_1 Z_2}(x) = \sum_{\substack{l,\,m \\ Z_l = Z_1,\; Z_m = Z_2}} w_2(l, m)\, P_2^{\,l,m}(x) \quad (4)$$
where Z_1 and Z_2 are the atomic numbers of atoms l and m, respectively, and w_2 is the weighting function.
We use the DScribe implementation of the originally proposed method 58. The exponential weighting function, w_2 = e^{-sx}, is used to keep the distributions tightly limited to atoms in the neighbourhood; a cut-off threshold of 1 × 10^{-2} and a scaling parameter of 0.75 are used 8. A key parameter of the implementation, n_grid, is the number of discretization points and, in turn, determines the total number of features in the resulting vectors through Eq. (5). To determine its optimal value, we observe the correlation between the resulting vectors, MBTR_{n_grid}, for different n_grid and the corresponding SASA values according to Eq. (6). These correlation scores are presented in Table 4.
$$n_{\mathrm{features}} = n_{\mathrm{grid}} \cdot \frac{n_{\mathrm{elements}}\left(n_{\mathrm{elements}} + 1\right)}{2} \quad (5)$$

where n_elements is the total number of elements encountered throughout the descriptor generation process; here, n_elements = 8.
$$C_2 = \frac{1}{k} \sum_{i=1}^{k} \left| \mathrm{Corr}\!\left( \mathrm{MBTR}_{n_{\mathrm{grid}}}[i],\ \mathrm{SASA} \right) \right| \quad (6)$$

where k is the number of features and n is the number of samples used for the evaluation of C_2.
From Table 4, we can observe that the correlation scores do not vary much across different values of n_grid. However, since the lowest possible value, n_grid = 2, achieves the highest score while producing the smallest representation, it is chosen for this work.
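A sketch of how such a descriptor could be configured with DScribe is shown below; the parameter names follow the DScribe 1.x API, and the grid bounds are placeholders rather than values reported in the paper.

```python
from dscribe.descriptors import MBTR

# Two-body MBTR with inverse distances, Gaussian broadening, and exponential
# weighting, mirroring the configuration described above (DScribe 1.x API).
mbtr = MBTR(
    species=["Au", "S", "H", "C", "O", "N", "F", "Cl"],  # 8 elements -> 36 pairs
    k2={
        "geometry": {"function": "inverse_distance"},
        "grid": {"min": 0.0, "max": 1.0, "n": 2, "sigma": 0.1},  # n_grid = 2
        "weighting": {"function": "exp", "scale": 0.75, "threshold": 1e-2},
    },
    periodic=False,
    flatten=True,
)

# With n_grid = 2 and 8 elements, the flattened vector has 2 * 8 * 9 / 2 = 72 features.
# vector = mbtr.create(ase_atoms)  # ase_atoms: an ase.Atoms object for one snapshot
```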
Time series model.
For the time series model, we use two approaches: the first is based on a transformer model, while the second approach implements an ensemble of XGBoost models.
Transformer model. A transformer is a model architecture that combines an encoder and a decoder. In this work, we use only the encoder part of the model, taking a batch of data with a fixed window size as input and outputting the multivariate MBTR vector for the next timestep. The architecture of the model is illustrated in Fig. 6a. A multi-head attention mechanism is used with 12 heads, the size of each attention head is 256, and the dropout probability is 0.25. The normalization layer uses ε = 1 × 10^{-6} to normalize the input. The feedforward block consists of a normalization layer, a 1-D convolutional layer, a dropout layer, and another 1-D convolutional layer. The normalization and dropout layers inside the feedforward block use the same ε = 1 × 10^{-6} and dropout probability of 0.25, respectively. The first convolutional layer uses ReLU activation with a kernel size of 1 and 4 filters; the second convolutional layer also uses a kernel size of 1 and produces a single output channel. The model is trained by taking a window, w_s, with all n_features features from each design in the training set and predicting the n_features-length vector at the next timestep. For instance, providing the MBTRs for the first 40 timesteps as input produces the MBTR for the 41st timestep by applying the pattern learned from the training dataset. Training this model takes 1378.5 s on a Tesla P100 PCIe 16 GB GPU with 28 2.4 GHz Intel Broadwell CPU cores and 230 GB of RAM.
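The block structure described above could be assembled in Keras roughly as follows; this is a minimal sketch under stated assumptions (pooling choice, optimizer, and mapping the second convolution back to the feature width for the residual connection are ours, not the paper's).

```python
import tensorflow as tf
from tensorflow.keras import layers

def encoder_block(x, num_heads=12, key_dim=256, dropout=0.25, eps=1e-6, ff_filters=4):
    # Self-attention sub-block with a residual connection.
    a = layers.LayerNormalization(epsilon=eps)(x)
    a = layers.MultiHeadAttention(num_heads=num_heads, key_dim=key_dim, dropout=dropout)(a, a)
    x = x + a
    # Feedforward sub-block: norm -> conv1d -> dropout -> conv1d, with residual.
    f = layers.LayerNormalization(epsilon=eps)(x)
    f = layers.Conv1D(filters=ff_filters, kernel_size=1, activation="relu")(f)
    f = layers.Dropout(dropout)(f)
    f = layers.Conv1D(filters=x.shape[-1], kernel_size=1)(f)  # assumption: restore width
    return x + f

w_s, n_features = 40, 72
inputs = layers.Input(shape=(w_s, n_features))
x = encoder_block(inputs)
x = layers.GlobalAveragePooling1D()(x)   # assumption: pool over the window
outputs = layers.Dense(n_features)(x)    # MBTR vector at the next timestep
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mae")
```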
Ensemble model. The second approach is an ensemble of XGBoost regressors, with one model per feature. Each model is trained on a window, w_s, of its feature to predict that feature's value at the next timestep. The difference from the previous approach is that one feature of each design is used to learn the pattern, instead of taking all n_features as input; as a result, the ensemble predicts the MBTR more accurately. Moreover, on the same hardware as the transformer model, this approach trains 20.73 times faster. The architecture of this model is shown in Fig. 6b.
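A per-feature ensemble of this kind could be set up as in the sketch below; hyperparameters are left at XGBoost defaults because the paper does not report them.

```python
import numpy as np
from xgboost import XGBRegressor

def fit_ensemble(series, w_s=40):
    """Train one XGBRegressor per MBTR feature.

    series : array of shape (n_timesteps, 72), stacked over the training designs.
    Returns a list of 72 fitted models.
    """
    models = []
    for j in range(series.shape[1]):
        col = series[:, j]
        # Windows of length w_s predict the value one step ahead, per feature.
        X = np.stack([col[i:i + w_s] for i in range(len(col) - w_s)])
        y = col[w_s:]
        models.append(XGBRegressor().fit(X, y))
    return models

def predict_next(models, window):
    """window: array of shape (w_s, 72); returns the (72,) vector for the next step."""
    return np.array([m.predict(window[:, j][None, :])[0] for j, m in enumerate(models)])
```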
For instance, when the MBTRs for the first 40 timesteps are provided as input, the first model of the ensemble predicts only the value of the first feature. The procedure then iterates through the remaining features, and for each one the corresponding model predicts its value at the next timestep. Finally, all predicted values are combined into one MBTR vector for the target timestep.

SASA model. A limitation of the MBTR is that the encoded data cannot be reverted to atomic coordinates, so SASA values cannot be calculated from the MBTR directly. However, as ML has the potential to identify and understand hidden relationships, we use a feedforward neural network to predict the continuous SASA value from the encoded data. The MBTR input represents the state of the NP at one timestep. The training and testing datasets are divided in the same way as for the time series model.
The proposed network consists of dense layers as follows: (i) an input layer with 256 neurons and ReLU activation, accepting the 72 MBTR features; (ii) 3 hidden layers, each with 256 neurons and ReLU activation; and (iii) an output layer with a single neuron and a linear activation function, suitable for the regression task. For training, the model passes over the whole training set 500 times with a batch size of 32 and is optimized with the Adam algorithm at a learning rate of 0.0001. The resulting value represents the predicted SASA. The performance of this regression model is evaluated with the MAE metric, which measures how close the predictions are to the expected values in either direction. The architecture of the model is shown in Fig. 6c.
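In Keras, that architecture corresponds roughly to the following sketch; using MAE as the training loss is our assumption, chosen to match the reported evaluation metric.

```python
import tensorflow as tf
from tensorflow.keras import layers

sasa_model = tf.keras.Sequential([
    layers.Input(shape=(72,)),               # one MBTR vector per timestep
    layers.Dense(256, activation="relu"),    # input dense layer
    layers.Dense(256, activation="relu"),    # hidden layer 1
    layers.Dense(256, activation="relu"),    # hidden layer 2
    layers.Dense(256, activation="relu"),    # hidden layer 3
    layers.Dense(1, activation="linear"),    # scalar SASA prediction
])
sasa_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="mae",  # assumption: trained on MAE, the reported evaluation metric
)
# sasa_model.fit(X_train, y_train, epochs=500, batch_size=32)
```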
Data availability
The transformed data, MBTRs for all the NP designs used in this experiment, are available at https://github.com/Evonano-Team/evonano-ml/tree/master/data/processed. PDB files of the NP designs can be provided by the authors on reasonable request. | 6,893.6 | 2023-01-11T00:00:00.000 | [
"Computer Science"
] |
Evaluation of Artificial Intelligence-Powered Identification of Large-Vessel Occlusions in a Comprehensive Stroke Center
BACKGROUND AND PURPOSE: Artificial intelligence algorithms have the potential to become an important diagnostic tool to optimize stroke workflow. Viz LVO is a medical product leveraging a convolutional neural network designed to detect large-vessel occlusions on CTA scans and notify the treatment team within minutes via a dedicated mobile application. We aimed to evaluate the detection accuracy of Viz LVO in real clinical practice at a comprehensive stroke center. MATERIALS AND METHODS: Viz LVO was installed for this study in a comprehensive stroke center. All consecutive head and neck CTAs performed from January 2018 to March 2019 were scanned by the algorithm for detection of large-vessel occlusions. The system results were compared with the formal reports of senior neuroradiologists, used as ground truth for the presence of a large-vessel occlusion. RESULTS: A total of 1167 CTAs were included in the study. Of these, 404 were stroke protocols. Seventy-five (6.4%) patients had a large-vessel occlusion as ground truth; 61 were detected by the system. Sensitivity was 0.81, negative predictive value was 0.99, and accuracy was 0.94. In the stroke protocol subgroup, 72 (17.8%) of 404 patients had a large-vessel occlusion, with 59 identified by the system, showing a sensitivity of 0.82, negative predictive value of 0.96, and accuracy of 0.89. CONCLUSIONS: Our experience evaluating Viz LVO shows that the system has the potential for early identification of patients with stroke with large-vessel occlusions, hopefully improving future management and stroke care.
Large-vessel occlusion (LVO) stroke contributes disproportionately to stroke-related disability and death 1,2 and requires emergent detection and treatment, ideally by an endovascular approach. Management has changed dramatically during the past few years, most notably due to the numerous clinical trials published in 2015 indicating that endovascular treatment is superior to tPA alone in the treatment of LVO acute ischemic stroke. 3,4 One of the major contributors to this revolutionary result was the proper selection of eligible patients. 5 As opposed to earlier trials, 6,7 patients in recent studies were selected primarily by CTA scans. These trials demonstrated the efficacy of mechanical thrombectomy in patients with a limited ischemic core in the setting of moderate-to-severe clinical deficits, which designated such patients as ideal candidates for revascularization therapy. The window for treatment was further extended at the beginning of 2018 to 24 hours, 8 following 2 trials that demonstrated the efficacy of endovascular treatment for selected patients in timeframes of 6-16 hours 9 and 6-24 hours. 10 The immediate consequence was an increase in the number of patients eligible for transfer from primary and secondary hospitals to comprehensive stroke centers for endovascular treatment. Thus, fast and accurate recognition of pathology on CT scans has become crucial.
Artificial intelligence algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found great application in the medical image-analysis field, pushing it forward at a rapid pace. Deep learning has the potential to revolutionize entire industries, and given the centrality of neuroimaging in the diagnosis and treatment of neurologic disease, deep learning will likely affect neuroradiologists most profoundly. 11,12 Viz LVO (Viz.ai) is a medical product leveraging a convolutional neural network designed to detect LVOs on CTA scans and notify a neurointerventional specialist within minutes via a dedicated mobile application.
Our aim was to evaluate the detection accuracy of the Viz LVO in real clinical practice at a comprehensive stroke and trauma center.
MATERIALS AND METHODS
A retrospective study was conducted. Viz LVO was installed at the Rambam Health Care Campus in January 2018 for this study. All CTA scans obtained from January 2018 to March 2019 were scanned by the system, including nonacute ischemic stroke cases. The scans were analyzed by the Viz LVO Algorithm, Version 4.1.3, a convolutional neural network using deep learning to detect occlusions from the ICA terminus (ICA-T) to the Sylvian fissure. Analysis of this area includes all occlusions of the M1 segment of the MCA and possibly proximal M2 segment occlusions. Posterior circulation arteries are not assessed by the system.
The results of the system were compared with the formal CTA reading documented in the patients' files. Each CTA reading was performed by a single reader. The readers were 4 senior neuroradiologists with 7-25 years of experience. A separate designated pool of 15 examinations was used for evaluating interrater and intrarater reliability among the 4 raters. No variation was found between the results given for each CTA examination (intraclass correlation coefficient [ICC] > 0.99).
LVO was considered as either an ICA-T or MCA-M1 occlusion. A second analysis included M2 occlusions, which were further divided into proximal and distal occlusions using the curve into the Sylvian fissure as an anatomic landmark (Fig 1).
Other major pathologies reported in the formal neuroradiologist read were also documented, including cerebral hemorrhage, tumors, and intracranial arterial stenosis. Arterial stenosis was defined as a decrease of more than 50% in the arterial cross-sectional area, calculated by the NASCET formula for the ICA or MCA, as reported in the formal CTA read.
Examinations with metal artifacts (n = 7) as well as those with severe motion or incomplete skull scanning (n = 6) were excluded from the analysis a priori because they are automatically not analyzed by the algorithm. Such examinations are transferred to the server and the mobile application by the system, marked as technically inadequate, and classified as negative for LVO. This process is further explained in the Algorithm Description section and illustrated in Figs 2 and 3.
Algorithm Description
The LVO-detection algorithm involves several steps. First, applicable CTA series are identified by inspecting the DICOM metadata. Once an applicable series is identified, the next step is to verify the existence of contrast. The soft matter is extracted by creating a mask of all bone voxels, based on Hounsfield unit thresholding, dilation, and connected component analysis, and removing the bone mask and all voxels external to it. Once the soft matter is extracted, it is inspected for the existence of contrast by counting the total number of voxels with Hounsfield unit values consistent with iodine contrast (100-800 HU). If no contrast is identified, the scan is flagged as a suspected missed bolus and no further processing is conducted.
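As a rough illustration of this preprocessing step, the following numpy/scipy sketch builds a bone mask and counts contrast-range voxels; the bone threshold, dilation radius, component-size cutoff, and voxel-count criterion are assumptions, since only the contrast window (100-800 HU) is given in the text.

```python
import numpy as np
from scipy import ndimage

def has_contrast(volume_hu, bone_hu=700, min_contrast_voxels=10_000):
    """volume_hu: 3D array of Hounsfield units for one CTA series."""
    # Bone mask: threshold (assumed at 700 HU) and dilate.
    bone = volume_hu > bone_hu
    bone = ndimage.binary_dilation(bone, iterations=2)
    # Connected-component analysis: keep only sizable bone components.
    labels, n = ndimage.label(bone)
    sizes = ndimage.sum(bone, labels, range(1, n + 1))
    bone = np.isin(labels, np.nonzero(sizes > 1000)[0] + 1)
    # Soft matter: everything that is neither bone nor air outside the head.
    soft = (~bone) & (volume_hu > -500)  # assumption: crude air cutoff at -500 HU
    # Count voxels in the iodine-contrast range given in the text (100-800 HU).
    contrast_voxels = np.count_nonzero(soft & (volume_hu >= 100) & (volume_hu <= 800))
    return contrast_voxels >= min_contrast_voxels  # else: flag as missed bolus
```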
In the selected examinations, 3D registration of the brain is performed, followed by cropping of a 3D cuboid with dimensions chosen so that the ICA-T, M1, and M2 regions are contained within it. The cuboid is inspected for the presence of metal by looking for voxels with Hounsfield unit values of >3000. If such voxels are identified, the scan is flagged as suspected of containing metallic artifacts and no further processing is conducted. Scans that were not processed due to bad bolus timing or metal artifacts are still available for viewing but are marked by a red frame to notify the user that the algorithm rejected the series. Examples are given in Fig 2, and an illustration of the process is provided in Fig 3.
The 3D cuboid is fed through a 3D segmentation convolutional neural network inspired by the U-Net architecture. 13 The output of the network is a 3D cuboid of the same dimensions as the input, in which each voxel is assigned a number between 0 and 1 describing the probability (as estimated by the network) that the voxel is part of the ICA-T or M1 segments. The network was trained on hundreds of manual segmentations of the ICA-T and M1 regions.
Next, the lengths of the left and right segmentations are compared. This step identifies cases in which, due to an ICA occlusion and no retrograde filling, the ICA-T and M1 segments are not visible in the scan. If one side is significantly shorter than the other, an LVO is detected and the system triggers an alert. If, however, sizable segmentations are available on both sides, these segmentations are extended using another segmentation convolutional neural network of similar architecture that was trained to segment all vessels (not just the ICA-T and M1). The combination of the outputs of both networks is refined to generate the MCA vessel tree. Following this step, the end points of the MCA vessels are identified. If the total distance between the ICA-T and an end point is below a predefined threshold, an LVO is detected and the system triggers an alert. The threshold was determined on the basis of the receiver operating characteristic curve to yield approximately equal sensitivity and specificity in the suspected-stroke population and corresponds, roughly, to the beginning of the Sylvian fissure. The process is visualized in Fig 4.
If no end point on either side is shorter than the threshold, the algorithm looks for partial occlusions. This is done by computing the centerline of the segmentation and inspecting the average Hounsfield unit value in the vicinity of the centerline. The algorithm looks for a pattern of a drop in Hounsfield units followed by an increase (Fig 5). If such a pattern is identified, an LVO is detected and the system triggers an alert.
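A toy version of this drop-then-rise test on a 1-D HU profile sampled along the centerline might look as follows; the smoothing window and the minimum drop depth are illustrative assumptions, not the product's tuned values.

```python
import numpy as np

def drop_then_rise(hu_profile, min_drop=100.0, smooth=5):
    """hu_profile: mean HU values sampled along the vessel centerline."""
    # Smooth the profile to suppress voxel-level noise (assumed window of 5).
    kernel = np.ones(smooth) / smooth
    p = np.convolve(hu_profile, kernel, mode="valid")
    trough = np.argmin(p)
    if trough == 0 or trough == len(p) - 1:
        return False  # monotone profile: no interior trough, no drop-then-rise
    drop = p[:trough].max() - p[trough]   # fall before the trough
    rise = p[trough:].max() - p[trough]   # recovery after the trough
    return drop >= min_drop and rise >= min_drop

# Example: contrast dips where a partial occlusion attenuates the lumen.
profile = np.concatenate([np.full(20, 350.0), np.full(6, 120.0), np.full(20, 330.0)])
print(drop_then_rise(profile))  # True
```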
Examples of system identification of both partial and complete occlusions and the matching images sent to the end user by the application during an alert are provided in Fig 6.
Statistical Analysis
Statistical analyses were performed using descriptive data analysis, including ranges, means, medians, SDs, and interquartile ranges for continuous variables and frequencies and percentages for categoric variables. Interrater reliability between the system results and the formal read was quantified using an ICC model, namely 2-way random effects, absolute agreement, single measurement. This model was selected because all ratings were performed by a different set of raters, 14 a scenario that would be expected in routine clinical settings; thus, this model can be considered a realistic estimate of reliability for this scenario. The interrater ICCs were calculated between the model predictions and the senior radiologist reports.
Measures of system performance were examined using sensitivity, specificity, positive predictive value (PPV), negative predictive value, and total accuracy.
In addition, logistic regression models were fit to estimate the effect of each factor category (age, sex, and identification of LVO by the Viz LVO system) on LVO detection. ORs and 95% CIs were estimated for each predictor.
To test the additive value of each factor, we entered the variables into receiver operating characteristic (area under the curve) analyses one at a time: patient characteristics (age, sex) followed by the Viz LVO results. When a logistic regression is fit, receiver operating characteristic curves are routinely used to summarize the model fit and to determine the best cutoff value for predicting whether a new observation is a failure (0) or a success (1).
The receiver operating characteristic curve plots the sensitivity (recall) as a function of the fall-out. Overall, if the probability distributions for both detection and false-positives are known, the curve can be generated by plotting the cumulative distribution function of the detection probability (the area under the probability distribution from infinity to the discrimination threshold) on the y-axis versus the cumulative distribution function of the false-positive probability on the x-axis. Ideal prediction produces an area under the curve of 1.00; area under the curve values of 0.70 and higher would be considered strong effects. 15 The level of significance for all statistical analyses was 5%. We analyzed the data using SPSS, Version 25.0 (IBM). This study was approved by the local Helsinki committee at Rambam Health Care Campus (IRB 0417-17).
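In Python, the same additive ROC comparison could be run with scikit-learn along these lines; the data below are synthetic placeholders for the study's age, sex, and Viz LVO alert variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
age = rng.normal(70, 12, n)          # synthetic stand-ins for the study data
sex = rng.integers(0, 2, n)
viz_alert = rng.integers(0, 2, n)    # 1 if the system alerted to a possible LVO
y = ((viz_alert == 1) & (rng.random(n) < 0.8)).astype(int)  # synthetic labels

def auc_for(X, y):
    model = LogisticRegression().fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

X_base = np.column_stack([age, sex])
X_full = np.column_stack([age, sex, viz_alert])
print("AUC (age, sex):        ", auc_for(X_base, y))
print("AUC (+ Viz LVO result):", auc_for(X_full, y))  # reported as 0.91 in the text
```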
RESULTS
A total of 1180 CTAs were scanned by the system and sent to the server and the mobile application during the study period. Thirteen cases had been flagged by the system as technically inadequate and were excluded a priori because they were not analyzed by the algorithm. Of the 1167 cases included in the study, 404 were stroke protocols, with the others performed due to trauma, suspected stenosis, and other miscellaneous reasons (Table 1).
The interrater ICC for all cases was 0.83 (95% CI, 0.725-0.867). For stroke protocols only, the ICC was higher (0.86; 95% CI, 0.837-0.892). Of 1167 patients, 75 had an LVO per the senior neuroradiologist's formal read, representing 6.4% of the cases. Sixty-one of these cases were detected by the system, leaving 14 false-negative results.
The system alerted to a possible LVO in 117 examinations, 56 of which did not show occlusion of the ICA-T or MCA M1, defined in our study as an LVO. Nevertheless, in 12 of these false-positive cases, an occlusion of a more distal part of the MCA (M2 or M3) was detected. Additionally, 25 more of the false-positive alerts had other major pathologies, such as hemorrhage, tumors, or intracranial stenosis, defined as a decrease of >50% in the arterial cross-sectional area (Table 2).
Logistic regression analysis adjusted for age and sex showed that Viz LVO strongly predicts LVO (OR = 51.75; 95% CI, 28.84-92.84) (Table 3). Further receiver operating characteristic analysis demonstrated an area under the curve of 0.91 (Fig 7).
Three non-stroke protocol cases were found to have LVOs and were detected by the system: an elderly woman brought in as a trauma patient after an automobile collision, a 41-year-old patient referred from another hospital with a suspected mass that proved to be an infarct, and a man evaluated for suspected carotid artery stenosis who was found to have complete occlusion of the ICA-T. In all cases, the system alerted the team by identifying an LVO.
DISCUSSION
Computer-aided detection and diagnosis performed using machine learning algorithms can be an important tool in helping physicians interpret medical imaging findings and in reducing interpretation times. 16 Imaging analysis has been shown to be the main artificial intelligence medical flagship, with especially promising results in the field of neuroradiology. 11 This pairs well with stroke care, in which both timeliness and precision are needed. 17,18 Various artificial intelligence-based systems have been developed for emergent detection of acute ischemic stroke, with Viz LVO being the first to include automatic direct LVO detection from CTA data. 19 Evaluation of the accuracy and sensitivity of the system on a large patient population is imperative for future implementation into common clinical practice. At Rambam Health Care Campus, about 150 endovascular treatments for acute ischemic stroke are performed annually, allowing rapid evaluation of the system on a sizeable cohort.
In this retrospective single-center study, we found the Viz LVO detection system to be highly accurate. Similar results were previously reported by Chatterjee et al 20 in a study performed with an older version of the software (Viz.ai Algorithm, Version 4.1.2) exclusively on patients with stroke. A recent study by Barreira et al 21 showed a sensitivity of 0.90 and an accuracy of 0.86 using the Viz.ai Algorithm, Version 4.1.3. Both studies focused on stroke-activation protocols and therefore showed high rates of LVOs (30% of the cohort in the former and 49% in the latter), in contrast to our results of 18% for the stroke protocols and 7% for the entire cohort, regardless of scan indication.
The system produced 56 false-positive results, 37 (66%) of which had major pathologies and 19 of which had no identified pathology. The high prevalence of pathologic examinations accidentally flagged as LVOs is related to tissue distortion, with vessels being pushed and changing their course. These results, including the identification of 12 M2/3 occlusion cases and 9 cases of stenosis, are difficult to interpret because the inner workings of deep learning systems are not completely understood. Future improvements to the algorithms are needed to enable higher accuracy for subtler pathologies on the one hand and the exclusion of nonrelevant ones on the other.
The main advantage of using artificial intelligence software in medical analysis is that it can accelerate decision-making, a feature that is especially valuable in situations that demand quick action, as in LVO stroke. The system showed suboptimal sensitivity, which prevents it from being used as a stand-alone diagnostic tool to date. The PPV in our cohort was 0.65. A high PPV is essential to avoid an unacceptable burden on the application's end users due to multiple false-positive alerts.
The main advantage of the system in the clinical setting of acute stroke currently lies in its ability to accelerate decision-making in cases positive for LVO stroke. This may be of great significance in environments where interventional neuroradiology consultants are less accessible, such as in prehospital advanced imaging used in mobile stroke units, a fast-evolving field, 22 and in primary care centers.
The study was conducted in the setting of routine clinical practice, unlike previous studies. The patients were not preselected, and the neuroradiologists involved were not notified of the evaluation being performed. This allowed analysis and assessment of the performance of the system for everyday patients in the emergency department. It accounts for the low rate of LVO acute ischemic stroke in our patient population and the lower PPV found compared with previous publications on stroke-only series. Because the system is currently being installed in multiple medical centers, some without dedicated stroke protocols, this study could provide a better reflection of the real impact of the system on the diagnostic and therapeutic flow of patients.
The system uncovered 3 LVOs in patients with a nonstroke protocol that could have been easily missed due to low clinical suspicion.Such alerts could accelerate proper care in this scenario.
This study has several limitations. First, it is not an interventional study. The system was assessed without changing the treatment provided to patients in real time, due to ethical limitations, thus preventing concrete discussion of improved time and cost with use of the system. Further research is already planned.
Furthermore, the criterion standard for LVO detection relied on a single neuroradiologist read per examination. Although the ICC showed no variation among readers, such evaluation is still subject to mistakes. Data were collected by radiology residents and assessed for possible discrepancies against follow-up examinations and the general clinical course of the patient to minimize such errors. In any case of inconsistency, examinations were marked and reread by a second senior neuroradiologist.
Another point is the exclusion of 13 examinations rejected by the system as technically inadequate, as described above. These examinations were not included in the study because they were not processed by the algorithm for LVO detection. This study was conducted in a single comprehensive stroke center. One of the most fundamental future applications of the system is in improving notification, assessment, and treatment times for patients arriving at primary stroke centers. Thus, the next step in the evaluation of the system will need to be a multicenter study comparing treatment timelines.
CONCLUSIONS
Our experience evaluating Viz LVO shows that the system has real potential for early, accurate identification of patients with stroke, hopefully improving workflow and patient care.
FIG 1. Division of the M2 segment of the MCA into proximal and distal segments at the curve of the artery into the Sylvian fissure (marked bilaterally by the dashed lines).
FIG 2. Alerts as they appear on the user end of the mobile application, showing the overview screen of examinations with (A) and without (B) a suspected LVO. An overview screen of failed processing is shown in C, in this case due to metallic artifacts.
FIG 3. Flow diagram delineating the various steps of the algorithm. App indicates mobile application.
FIG 4. Overview of the algorithm steps. A, Identification of an applicable scan based on metadata. B, Cropping the head region. Registration (C) and segmentation (D) of ICA-T/M1 regions. E, Additional segmentation of all vessels. Refinement of the segmentations to include only the MCA branches (F) and detection of suspected LVO based on vessel length (G).
FIG 5. Algorithm processing of a partial occlusion. The cropped scan on the left visualizes a left M1 partial occlusion. The segmentation (on the right) extends through the partial occlusion. However, the average Hounsfield unit value decreases and then increases, and a notification is triggered, even though the length of the segmentation exceeds the threshold.
FIG 6. System identification illustration demonstrates stenosis of the M1 segment of the left MCA (A), occlusion of the M1 segment of the left MCA (C), and occlusion of the proximal M2 segment of the right MCA (E), as they appear as preliminary convolutional neural network outcomes (green boxes represent original annotations by the Viz LVO system during identification). The images on the lower row (B, D, and F, respectively) match the processed images sent by the system via the application and received by the viewer during an alert.
FIG 7. Prediction of LVO by logistic regression (adjusted for age and sex). The area under the curve is 0.91. ROC indicates receiver operating characteristic.
Table 1: Descriptive statistics of the study sample (data are number and percentage unless otherwise indicated).
Table 3: Prediction of LVO by the Viz LVO system (logistic regression adjusted for age and sex). Note: SE indicates standard error; Sig, significance.
Table 2: Pathologies detected in false-positive cases.
Table 4: Prediction of LVO by the Viz LVO system. Note: NPV indicates negative predictive value.
"Medicine",
"Computer Science"
] |
Optimization of Surface Acoustic Wave Resonators on 42°Y-X LiTaO3/SiO2/Poly-Si/Si Substrate for Improved Performance and Transverse Mode Suppression
SAW devices with a multi-layered piezoelectric substrate offer excellent performance due to advantages such as a high quality factor, Q, low insertion loss, large bandwidth, etc. Prior to manufacturing, a comprehensive analysis and proper design are essential for evaluating the device's key performance indicators, including the Bode Q value, bandwidth, and transverse mode suppression. This study explored the performance of SAW resonators employing a 42°Y-X LiTaO3 (LT) thin-plate-based multi-layered piezoelectric substrate. The thickness of each layer of the 42°Y-X LT/SiO2/poly-Si/Si substrate was optimized according to the phase velocity, Bode Q value, and bandwidth. The effect of the device structure parameters on the dispersion curve and slowness curve was studied, and a flat slowness curve was found to be favorable for transverse mode suppression. In addition, the design of the dummy configuration was also optimized for the suppression of spurious waves. Based on the optimized design, a one-port resonator on the 42°Y-X LT/SiO2/poly-Si/Si substrate was fabricated. The simulation results and measurements are presented and compared, providing guidelines for the design of new types of SAW devices configured with complex structures.
Introduction
Surface acoustic wave (SAW) devices have been a key component in smart phones, cars, base stations, etc., due to their small size, good performance, and MEMS production process [1][2][3]. With the development of 5G communication technology, mobile communication has put forward higher demands on the employed SAW filters, including a high frequency, large bandwidth, low loss, and good temperature stability.
The performance of SAW devices is mainly determined by their piezoelectric substrate. Traditional SAW devices, including normal SAW structures and temperature-compensated SAW (TC-SAW) structures, are mainly based on a bulk piezoelectric single crystal [4,5]. Compared with those traditional devices, a piezoelectric thin film on a multi-layered structure [3,6] offers not only a higher frequency and larger coupling factor (K2) but also a higher Bode Q value [7] and a moderate temperature coefficient of frequency (TCF). These excellent characteristics have attracted much attention to SAW devices with multi-layered structures, which have been widely used in RF filters in the consumer market [8][9][10][11].
Although SAW devices with multi-layered structures have distinct advantages, designing one with optimized performance calls for intensive study. Prior to manufacturing, a comprehensive analysis and proper design are essential for evaluating its key performance indicators, including the Bode Q value, bandwidth, and transverse mode suppression. In a previous study, T. Takai et al. studied the K2 and TCF of SAWs on LiTaO3 (LT) thin-film-based multi-layered structures and successfully applied them to a high-performance SAW filter [8,12]. Recently, researchers have concentrated on the suppression of the transverse mode because it appears within the passband of the filter and thus affects the ripple and passband loss. There are several solutions for this thorny issue, for example, the use of a piston mode [13,14], apodization [15,16], and tilted IDTs [9]. S. Inoue et al. [17] also pointed out that an LT/quartz-layered SAW substrate with a flat slowness curve can achieve good suppression of transverse modes. These previous works suggest that LiTaO3 thin-film-based multi-layered structures are good candidate piezoelectric substrates for high-performance SAW devices. Meanwhile, the materials and thicknesses of the multi-layered substrate can modulate the flatness of the slowness curve, which makes transverse mode suppression possible. However, a comprehensive analysis is needed prior to manufacturing a desired SAW device.
Therefore, in this paper, a piezoelectric substrate with a 42° Y-X LT/SiO2/poly-Si/Si multi-layered structure is proposed. Analytical theory and the finite element method were employed to comprehensively analyze a SAW resonator based on the proposed layered structure. First, we studied the constitutive relationship between the mechanical displacement and the electric field in the piezoelectric thin film, and then derived the internal relationship between the slowness curve and the dispersion characteristic of the SAWs between the propagation and aperture directions. Second, the device performance, including the admittance characteristic, Bode Q value, and bandwidth, was calculated. The effect of the piezoelectric thin film and electrode thickness on the performance of SAWs on the 42° Y-X LT/SiO2/poly-Si/Si substrate was studied, which can provide guidelines for the comprehensive design of SAW devices. Next, the dispersion characteristic was analyzed for the suppression of spurious transverse modes generated by the boundary effect of the electrode aperture. Then, appropriate structure configurations of the substrate and electrode were obtained according to the dispersion and slowness curves. Moreover, to ensure the suppression of spurious waves, the dummy structure was also optimized. Based on the optimized design, a one-port resonator on the 42° Y-X LT/SiO2/poly-Si/Si substrate was fabricated, and the simulation results and measurements were compared.
Simulation Techniques
For piezoelectric devices, the constitutive relationship between the mechanical displacement and the electric field can be described through the multi-physical coupling in Equations (1) and (2) [18,19]:

$$T_I = c_{IJ} S_J - e_{Ij} E_j \quad (1)$$

$$D_i = e_{iJ} S_J + \varepsilon_{ij} E_j \quad (2)$$

where T_I and S_J are the stress and strain tensors, respectively; c_IJ and ε_ij are the stiffness constants and dielectric permittivity constants; e_iJ and e_Ij are both piezoelectric stress constants; and D_i and E_j are the electric displacement vector and electric field, respectively. The relationship between the strain and the mechanical displacement u_i can be described as

$$S_{ij} = \frac{1}{2}\left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right) \quad (3)$$

As shown in Figure 1, x_1 represents the direction of wave propagation, x_2 the direction of the aperture, and x_3 the direction of the substrate thickness.
Figure 1. The coordinate system of SAW propagation in a piezoelectric material.
According to Maxwell's equations and the boundary conditions of SAW devices, the relationships between the electric displacement, D_i, the electric field, E_j, the electric potential, φ, and the charge density, ρ_s, are expressed as

$$E_j = -\frac{\partial \varphi}{\partial x_j}$$

$$\nabla_i D_i = \rho_s \quad (7)$$

where the Nabla operator is ∇_i = ∂/∂x_i. Equation (7) is applied at the interface between the piezoelectric thin film and the electrodes; additionally, ρ_s = 0 is applied within the piezoelectric thin film.
Assuming that no external force is applied, the equilibrium equation in the piezoelectric medium can be described in tensor form as

$$\rho \frac{\partial^2 u_i}{\partial t^2} = \nabla_{iJ} T_J \quad (8)$$

where ρ is the mass density. In this work, the thorough consideration of SAW devices consists of two aspects. (1) The analysis of the substrate structure: we assume that the length of the aperture along the x_2 direction is infinite (∂/∂x_2 = 0) in order to study how the SAW device structure regulates performance and to find appropriate dimensions according to the Bode Q values, the relative bandwidth, etc. Under this assumption, the operator ∇_iJ and the Nabla operator ∇ retain derivatives with respect to x_1 and x_3 only (Equation (10)). In addition, many methods for spurious suppression are applied to improve the properties of SAW devices, but such methods generally differ in their suppression effects. (2) The analysis of the spurious waves generated at the aperture boundary: here it is assumed that the length of the substrate along the x_3 direction is infinite (∂/∂x_3 = 0), so the operators retain derivatives with respect to x_1 and x_2 only (Equation (11)). Substituting Equations (10) and (11) into the equilibrium Equations (8) and (9), the wave propagation characteristic along the aperture can be described. The SAW propagates in the x_1-x_2 plane, and a coupling phenomenon occurs; the particle displacement can be written as a plane wave of the form

$$u_i = A_i\, e^{\,j\left(\omega t - k_1 x_1 - k_2 x_2\right)} \quad (16)$$

Substituting Equation (16) into Equation (15) yields Equation (17), and from Equation (16) the wave number domain is described by Equation (18). From the solutions of Equations (17) and (18), the frequency dispersion and the wave number domain can be plotted and used to characterize the coupling of wave propagation in the x_1-x_2 plane.
Analysis of Piezoelectric Thin-Film-Based Multi-Layered Structure
The high performance of SAWs on a piezoelectric thin-film-based multi-layered structure is due to the combined advantages of the various materials: the acoustic energy is confined to the surface by the combination of low-velocity films and a high-velocity substrate. Here, we present a comprehensive analysis of a SAW resonator with a 42° Y-X LT substrate. As shown in Figure 2, it uses SiO2 as the temperature-compensation layer, polysilicon (poly-Si) as the trapping layer, and Si as the support substrate.
Without loss of generality, the surface acoustic wave is assumed to propagate in the x_1 direction on the multi-layered structure. To simplify the solution while maintaining sufficient accuracy, the full-scale 3D finite element model (FEM) was reduced to a double-finger structure with a one-period interdigital transducer (IDT) [21][22][23][24][25]. In addition, a perfectly matched layer (PML) was set at the bottom of the substrate to absorb waves propagating into the substrate. A continuity periodic boundary condition was set on the sides of the model to extend it to infinity in the X direction. The geometric shape of the electrode is trapezoidal because of the practical processing. The mesh in the region below the electrode was finer than in the other regions of the substrate because the energy of the surface acoustic wave is mainly concentrated at the surface of the piezoelectric thin film. The maximum element size of the electrode was λ/6. The material constants used in the calculation are listed in Table 1.
To attain excellent performance in this layered structure, the thicknesses of the different layers were optimized through frequency-domain simulations with the MUMPS solver using the infinite periodic models shown in Figure 2. The changes in the frequency characteristic with respect to the LiTaO3 thickness are shown in Figure 3. In this case, the IDT period, λ, was 2.4 µm, the metallization ratio of the IDT was 0.5, the Al electrode thickness was 170 nm, the SiO2 thickness was 500 nm, and the poly-Si thickness was fixed at 1 µm. In the following calculations, the thickness of the bottom Si substrate was set to 3λ, and the thickness of the PML was set to 2λ.
Figure 3a presents a comparison of the calculated admittance Y11 curves of the SAW resonators with increasing LiTaO3 thickness. The LiTaO3 thickness clearly has a strong influence on the frequency characteristics, as the resonant frequency gradually decreased with increasing LiTaO3 thickness. In SAW applications, spurious waves must be suppressed. Figure 3b shows the dependence of the phase velocity (Vp) on the LiTaO3 thickness, with the phase velocity increasing with increasing LiTaO3 thickness. This is due to the LiTaO3-thickness-induced dispersion of the phase velocity, which matters for high-frequency SAWs, where high-velocity acoustic waves are desired. Figure 3c shows the effect of the LiTaO3 thickness on the Bode Q values; the Bode Q values decrease with increasing LiTaO3 thickness. For low-insertion-loss SAW devices, a multi-layered substrate structure with a high Bode Q is preferred. The electromechanical coupling coefficient (K2) is illustrated in Figure 3d, where the K2 value gradually decreased as the LiTaO3 thickness increased. For SAWs that require a large bandwidth, a LiTaO3 thickness that yields a large K2 is therefore appropriate. Based on the above simulation results, the maximum Q value of the I.H.P. SAW resonators reached 3400, about three times that of the standard 42°Y-X LT SAW resonator; in addition, the maximum K2 value of the I.H.P. SAW resonators exceeded 12%, corresponding to a roughly 20% wider bandwidth than that of normal SAWs [8,9]. With increasing LiTaO3 thickness, the resonance frequency, phase velocity, Bode Q value, and electromechanical coupling coefficient (K2) all changed monotonically. This is because the energy of the SH wave is concentrated mainly at the device's surface: as the piezoelectric thin film becomes thicker, its behavior approaches that of an SH wave on a bulk 42°Y-X LT substrate.
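To illustrate how fs, fp, and K2 can be estimated from admittance sweeps like those in Figure 3a, here is a hedged Python sketch on synthetic data. The (π²/4)·(fp − fs)/fp formula is one common approximation; other definitions of K2 exist, and the paper does not state which one was used, so treat the numbers as indicative only.

```python
import numpy as np

def extract_fs_fp_k2(freq, y_mag):
    """Estimate series resonance fs (|Y11| maximum), parallel resonance fp
    (|Y11| minimum), and K^2 from a one-port admittance magnitude sweep.
    K^2 ~ (pi^2/4)*(fp - fs)/fp is one common approximation only."""
    fs = freq[np.argmax(y_mag)]
    fp = freq[np.argmin(y_mag)]
    k2 = (np.pi ** 2 / 4.0) * (fp - fs) / fp
    return fs, fp, k2

# Synthetic lossless-resonator admittance (toy data, not the paper's curves).
freq = np.linspace(1.4e9, 1.9e9, 2001)
C0, fs_true, fp_true = 1e-12, 1.60e9, 1.68e9
w = 2 * np.pi * freq
y_mag = np.abs(1j * w * C0 * (freq**2 - fp_true**2)
               / (freq**2 - fs_true**2 + 1j * 1e15))  # small damping term

fs, fp, k2 = extract_fs_fp_k2(freq, y_mag)
print(f"fs = {fs/1e9:.3f} GHz, fp = {fp/1e9:.3f} GHz, K^2 = {k2*100:.1f} %")
```

With these toy values the estimate lands near 12%, in the same range as the I.H.P. result quoted above.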
Furthermore, the effect of the temperature-compensating SiO2 layer on the overall performance also deserves attention. In this case, the IDT period λ = 2.4 µm, the metallization ratio of the IDT was 0.5, the Al electrode thickness was 170 nm, the LiTaO3 thickness was 600 nm, and the poly-Si thickness was fixed at 1 µm. Figure 4a illustrates the calculated admittance Y11 curves as the SiO2 thickness increases. The resonance frequency decreased monotonically with increasing SiO2 thickness. Figure 4b shows the phase velocity vs. SiO2 thickness curve; the phase velocity clearly decreased with increasing SiO2 thickness. Figure 4c shows the effect of the SiO2 thickness on the Bode Q values, which increased nonlinearly overall as the SiO2 thickness increased. Figure 4d illustrates the K2 vs. SiO2 thickness curve, where the K2 value declined in a nonlinear manner with increasing SiO2 thickness; the K2 value reached its maximum at around 300 nm SiO2.
In addition to the LiTaO3 and SiO2 thicknesses, the electrode thickness also affects SAW performance through mass loading, as shown in Figure 5. In this case, the IDT period λ = 2.4 µm, the metallization ratio of the IDT was 0.5, the LiTaO3 thickness was 600 nm, the SiO2 thickness was 500 nm, and the poly-Si thickness was 1 µm. Figure 5a illustrates how the calculated admittance Y11 curve changes with increasing Al thickness. The mass loading of the Al electrode clearly led to a monotonic decrease in the resonant frequency as the Al thickness increased. Meanwhile, the wave modes in the piezoelectric medium have different excitation efficiencies, and therefore a suitable Al thickness was selected to suppress spurious waves within the passband. The phase velocity, Bode Q values, and K2 values showed trends similar to those with increasing SiO2 thickness, as can be seen in Figure 5b-d.
According to the above analysis, the LiTaO3 thickness has a significant impact on the Bode Q and K2 values, while the SiO2 and Al thicknesses mainly affect the wave velocity. In practical applications, a local finite element simulation cannot fully characterize the behavior of the device in the aperture direction. To retain generality, we used Al (170 nm)/42°Y-X LT (600 nm)/SiO2 (500 nm)/poly-Si/Si to build a 3D periodic model with a gap length of 0.175λ, a dummy length of 0.5λ, a dummy width of 0.25λ, and an aperture length of 15λ. The finite element mesh of the 3D periodic model consists of free tetrahedral elements; in this analysis, the model had about 203,761 degrees of freedom (DOFs). As shown in Figure 6, the result exhibits multiple higher-order harmonics between the resonance frequency and the anti-resonance frequency. Figure 7 shows the displacement of each higher-order transverse mode (S1, S2, S3, S4, and S5) along the aperture direction. The surface wave propagates back and forth many times in its resonant cavity while undergoing total reflection at the busbars and reflectors, which causes the SAW to also propagate laterally. The lateral resonant energy appears as higher harmonics near the main resonant frequency. These responses would cause considerable ripple in the passband of a SAW filter, degrading the loss and flatness of the device.
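A rough intuition for why discrete modes S1-S5 appear can be obtained from a simple isotropic waveguide estimate. The sketch below is a toy model only: the real layered substrate is anisotropic (as the slowness-curve analysis in the next section shows), and the main-mode frequency used here is an assumed placeholder.

```python
import numpy as np

def transverse_mode_freqs(f0, wavelength, aperture, n_modes=5):
    """Toy isotropic-waveguide estimate of transverse-mode frequencies:
    f_m = f0 * sqrt(1 + (ky_m / kx)^2) with ky_m = m*pi/aperture.
    Illustrates why discrete modes appear and how a larger aperture
    packs them closer to the main resonance; not a substrate model."""
    kx = 2.0 * np.pi / wavelength
    return [f0 * np.sqrt(1.0 + (m * np.pi / aperture / kx) ** 2)
            for m in range(1, n_modes + 1)]

lam = 2.4e-6                                  # IDT period from the text
for m, f in enumerate(transverse_mode_freqs(1.6e9, lam, 15 * lam), start=1):
    print(f"S{m}: {f / 1e6:.2f} MHz")
```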
Analysis of Spurious Suppression
The occurrence of transverse modes is mainly due to aperture boundary effects, which are associated with the variation in the slowness curve. To conduct a rigorous and comprehensive analysis, the influence of the LiTaO3, SiO2, and Al thicknesses on the shape of the slowness curve was investigated with a view to transverse-mode suppression on an Al/42°Y-X LT/SiO2/poly-Si/Si substrate. As shown in Figures 8-10, the x-axis represents the normalized frequency and the slowness, Sx, in the horizontal propagation direction, and the y-axis represents the wave number, ky, and the slowness, Sy, along the aperture direction. The dispersion and slowness curves show flat, convex, or concave shapes; that is, the thickness of each layer affects the curvature of the dispersion curve differently. When the LiTaO3 film thickness changed from 200 nm to 1200 nm, the shape of the dispersion curve changed from convex to concave. This is attributed to the increasing influence of the concave dispersion curve of the 42°Y-X LT, and it is consistent with the results in Figure 3. The SiO2 thickness, in contrast, had a weak effect on the curvature of the dispersion curve and mainly affected the wave-mode velocity, manifested as a parallel shift of the slowness curve. Lastly, the Al thickness had a strong effect on the curvature of the dispersion curve. For a flat curve, the main wave mode forms a standing wave in the IDT region, with energy propagating only along the horizontal direction. A convex or concave curve means that part of the wave energy propagates in the aperture direction. Therefore, a flat dispersion curve and slowness at a specific thickness were targeted. Corresponding to the flat slowness curves in Figures 8 and 10, the calculated frequency responses of the Al (100 nm)/42°Y-X LT (600 nm)/SiO2/poly-Si/Si and Al (170 nm)/42°Y-X LT (800 nm)/SiO2/poly-Si/Si structures are shown in Figure 11. The poly-Si thickness was 1 µm. The calculated results show that both structures achieve reasonable suppression of transverse modes. Although lateral high-order waves still existed near the main resonant frequency, their acoustic energy was weak.
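The flat/convex/concave classification used above can be automated. The following sketch is our own illustration with an arbitrary flatness threshold: it fits a quadratic to sampled Sy(ky) data near ky = 0 and reads off the sign of the curvature term.

```python
import numpy as np

def classify_slowness(ky, sy, flat_tol=1e-4):
    """Classify a sampled slowness curve Sy(ky) as flat, convex, or concave.
    `flat_tol` is an arbitrary threshold (our assumption), and the
    convex/concave labels follow the sign convention of the fit only."""
    a = np.polyfit(ky, sy, 2)[0]                        # quadratic coefficient
    rel_curv = a * ky.max() ** 2 / np.abs(sy).mean()    # dimensionless measure
    if abs(rel_curv) < flat_tol:
        return "flat"
    return "convex" if rel_curv > 0 else "concave"

# Toy example: a nearly flat curve with a tiny upward bow (illustrative units).
ky = np.linspace(-0.1, 0.1, 101)
sy = 2.5e-4 + 1e-9 * ky ** 2
print(classify_slowness(ky, sy))   # -> "flat"
```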
During fabrication, there are manufacturing errors in the electrode and film dimensions of practical SAW devices; for example, electrodes are usually trapezoidal in shape because of the actual process. These errors can perturb the flat slowness curve. To ensure good production yield and performance, the dummy structure for assisting in improving SAW performance was also investigated, as shown in Figure 12, with a LiTaO3 thickness of 600 nm, a SiO2 thickness of 500 nm, and an Al electrode thickness of 170 nm. Figure 13a illustrates the effect of the dummy width (ranging from 0.5p to 1.0p, with p = 0.5λ) on the suppression of transverse wave modes. Configuring the dummy width and length is clearly an efficient way to suppress the transverse wave modes, especially modes S3 and S4. When the metallization ratio of the dummy changed from 0.5p to 1.0p, the amplitude of the transverse modes first decreased and then increased (Figure 13a); optimal suppression of the transverse modes was achieved at a = 0.7p. Figure 13b shows the effect of the dummy length, b (ranging from 0.5p to 1.3p), on the suppression of the transverse wave modes. The dummy length only had a marked impact on the S4 mode, which decreased monotonically as the dummy length increased.
Experimental Verification
A SAW resonator on a multi-layered Al/42°Y-X LT/SiO2/poly-Si/Si structure was fabricated, with an Al electrode thickness of 100 nm, a LiTaO3 thickness of 600 nm, a SiO2 thickness of 500 nm, and a poly-Si thickness of 1000 nm. The IDTs consisted of 151 fingers, with 20-finger reflectors on both sides. The gap length was 0.35p, the aperture length was 15λ, the dummy width was 0.5p (equal to the IDT width), and the dummy length was 1.0p. Figure 14 shows the measured (red lines) and calculated (blue lines) admittance Y11 curves of the SAW devices. The simulated results agree with the experimental results for the fundamental wave mode. Notably, there was also a weak transverse high-order wave near the main resonant frequency.
As mentioned above, unavoidable manufacturing errors can accidentally produce spurious waves. To ensure good production performance, the dummy structure for improving the SAW performance was also taken into consideration. Thus, a one-port resonator with a dummy width of 0.7p and a dummy length of 1.2p was fabricated; the other parameters remained the same. As shown in Figure 15, the black line shows the frequency response of the re-designed dummy structure, while the red line shows the original dummy structure (dummy width 0.5p, dummy length 1.0p). The transverse modes near the resonant frequency of the re-designed structure almost completely disappeared.
Conclusions
In this paper, SAW resonators on an Al/42°Y-X LT/SiO2/poly-Si/Si layered structure were proposed and analyzed. A comprehensive analysis covering the Vp, K2, Bode Q value, and transverse wave modes was performed on a full-scale 3D finite element model of the proposed SAW resonator. In addition, the dispersion and slowness curves required to assess energy confinement and transverse-mode suppression were calculated, and the effects of dummy electrodes on transverse waves were discussed. By optimizing the device's structural parameters and configuration, a SAW resonator with improved performance and transverse-mode suppression was achieved. Furthermore, one-port resonators were fabricated on the optimized Al/42°Y-X LT/SiO2/poly-Si/Si layered structure, and the experimental results were largely consistent with the theoretical calculations. These results give insight into the general design process for layered SAW devices and provide guidelines for the design of SAW devices with improved performance.
Institutional Review Board Statement: The authors followed the International Committee of Medical Journal Editors (ICMJE) recommendations for authorship.
Informed Consent Statement:
The authors agree with the plan to submit/publish to Micromachines; the contents of the manuscript; being listed as an author; and the conflicts of interest statement.
Figure 1 .
Figure 1. The coordinate system of SAW propagation in a piezoelectric material.
Figure 2 .
Figure 2. Periodic model of the multi-layered structure. (a) Schematic diagram of structural materials; (b) mesh distribution of the finite element model.
Figure 3 .
Figure 3. Calculated characteristic changes of the SH wave with different LiTaO3 thicknesses. (a) Admittance, (b) Vp, (c) Bode Q, and (d) K2. Black shapes denote the calculated data, and red shapes denote the fitting results.
Figure 4 .
Figure 4. Calculated characteristic changes of the SH wave with SiO2 thickness. (a) Admittance, (b) Vp, (c) Bode Q, and (d) K2. Black shapes denote the calculated data, and red shapes denote the fitting results.
Figure 5 .
Figure 5. Calculated characteristic changes of the SH wave with different Al thicknesses. (a) Admittance, (b) Vp, (c) Bode Q, and (d) K2. Black shapes denote the calculated data, and red shapes denote the fitting results.
Figure 6 .
Figure 7 .
Figure 7. Displacement of higher-order transverse modes along the aperture direction.
Figure 13 .
Figure 13. Calculated admittance and conductance with (a) dummy widths ranging from 0.5p to 1.0p, and (c) dummy lengths ranging from 0.5p to 1.0p. (b,d) are enlarged views of (a,c), respectively.
Figure 15 .
Figure 15. Measured admittance of different dummy structures.
Author Contributions: H.P. wrote the manuscript and participated in the design of the experiments and calculations; Y.Y. and L.L. participated in the calculations and helped develop the FEM model; Q.Z. and Q.X. proposed the concept of the PDE-based 2D-FEM model; Z.Z., X.D., P.C. and J.D. contributed to the experimental work; C.L., X.X., H.L. and J.M. contributed to the analysis and interpretation of the experimental results; Z.C. supervised the writing and review of the manuscript and helped develop and refine the EM model. All authors have read and agreed to the published version of the manuscript.
Funding: This research was partially supported by the National Natural Science Foundation of China (grant numbers 12374449 and 52172005), the National Key Research and Development Program of China (grant numbers 2020YFA0709800, 2016YFC0104802 and 2020YFA0709800), and the Research and Development Program in Significant Area of Guangzhou City (grant number 202206070001).
Table 1 .
Material constants used in the calculation.
| 8,919.4 | 2023-12-21T00:00:00.000 | [ "Engineering", "Materials Science", "Physics" ] |
Pharmacokinetic properties of clarithromycin: A comparison with erythromycin and azithromycin
OBJECTIVE: To compare the pharmacokinetic properties of two new macrolide antibiotics, clarithromycin and azithromycin, with those of the prototype macrolide, erythromycin. DATA SOURCES: Primarily peer-reviewed journals were searched for papers describing the pharmacokinetics of these new macrolides. STUDY SELECTION: Fifteen in vitro and clinical studies of clarithromycin and azithromycin and one clinical abstract on clarithromycin from the past four years were selected for review. DATA EXTRACTION: Data relevant to the pharmacokinetic characteristics of clarithromycin, azithromycin and, to a lesser extent, erythromycin were selected for presentation in this comparison. DATA SYNTHESIS: By reviewing the available studies, it was possible to construct pharmacokinetic profiles of the new compounds, and to compare them with each other and with erythromycin. CONCLUSIONS: Both clarithromycin and azithromycin have been shown to have an antibacterial spectrum and pharmacokinetic profile superior to that of erythromycin. The differences between the new compounds, however, may not be that significant. Each is likely to become a first-line therapeutic option in specific instances, which will become better delineated as clinical research on these new macrolides continues.
Pharmacokinetic properties of clarithromycin: A comparison with erythromycin and azithromycin. OBJECTIVE: To compare the pharmacokinetic properties of two new macrolides, clarithromycin and azithromycin, with those of the prototype macrolide, erythromycin. DATA SOURCES: Peer-reviewed scientific publications were searched and articles on the pharmacokinetics of these new macrolides were selected. STUDY SELECTION: Fifteen in vitro and clinical studies of clarithromycin and azithromycin and one clinical abstract on clarithromycin published over the past four years were reviewed. DATA EXTRACTION: Data on the pharmacokinetic characteristics of clarithromycin, azithromycin and, to a lesser extent, erythromycin were selected for this comparative article. DATA SYNTHESIS: By reviewing the available studies, it was possible to construct pharmacokinetic profiles of the new compounds and to compare them with each other and with erythromycin. CONCLUSIONS: Both clarithromycin and azithromycin have an antibacterial spectrum and a pharmacokinetic profile superior to those of erythromycin. The differences between these two new products are not necessarily significant. Each is likely to become a first-line therapeutic choice in certain specific cases, which will be defined more precisely as clinical research continues.

Erythromycin is commonly used to treat a variety of infections, including those of the respiratory tract. Although the compound is effective against a fairly broad range of organisms (including atypical pathogens), its clinical success may be limited by erratic, low blood and tissue levels and a tid to qid dosing schedule. In addition, patients frequently complain of gastrointestinal adverse effects associated with erythromycin. A number of congeners are now in development (and one, clarithromycin, is available in Canada) that promise to improve upon the antibiotic spectrum, pharmacokinetics, safety, and tolerability of erythromycin.
Clarithromycin is an acid-stable analogue of erythromycin with a methoxy substitution at C-6 of the erythronolide ring. This structural alteration prevents acid-induced conversion of the molecule to inactive spiroketal forms in the stomach and improves bioavailability and gastrointestinal tolerance after an oral dose; it also increases antibacterial activity compared with erythromycin (1). Clarithromycin has demonstrated bactericidal activity against both typical and atypical respiratory pathogens.
When clarithromycin is metabolized in humans, a microbiologically active metabolite, 14-hydroxy clarithromycin, is formed. This active metabolite has been shown to contribute an additive or synergistic effect to the activity of the parent compound in vivo against selected pathogens. Results from in vitro and in vivo testing of the combined compounds suggest that routine in vitro susceptibility tests and animal efficacy studies with clarithromycin alone may underestimate its potential efficacy against Haemophilus influenzae (2).
Azithromycin, a 15-membered macrolide that is classified as the first azalide, differs from erythromycin by the insertion of a methyl-substituted nitrogen into position 9a of the large aglycone ring. The insertion of the nitrogen into the ring distinguishes azalides from macrolides (which have only carbon- and oxygen-containing rings), and significantly alters the chemical, microbiological, and pharmacokinetic properties of these compounds (3).
This paper compares the pharmacokinetic properties of these new compounds, clarithromycin and azithromycin, with those of erythromycin.
TABLE 1
Comparative pharmacokinetics of new macrolides

ABSORPTION

Erythromycin base is incompletely but adequately absorbed from the upper part of the small intestine, and it is inactivated by gastric acid. The drug is therefore administered as enteric-coated formulations or as esters stable to gastric acid, eg, erythromycin estolate or ethylsuccinate. Bioavailability, generally about 25% of an oral dose (3), is decreased when erythromycin base or stearate is administered with food (4). As shown in Table 1, maximum concentrations of 1.9 to 3.8 µg/mL and 3.08 µg/mL are attained with single 500 mg doses of erythromycin base and estolate, respectively.
Both clarithromycin and azithromycin have been shown to have pharmacokinetic profiles superior to that of erythromycin (Table 1). These new compounds have a more predictable pattern of absorption than the prototype. After an oral dose, clarithromycin reaches peak serum concentrations within 2 h and azithromycin within 2.5 h, and both achieve higher concentrations in tissue than in serum. In trials evaluating the effect of food on the disposition of clarithromycin, it was found that food may enhance the absorption and bioavailability of the drug from 55 to 70% (1,5,6). Clarithromycin is stable in gastric acid. Steady-state peak serum concentrations are 1.0 to 1.5 mg/L after a 250 mg twice-daily dose, and 2.0 to 3.0 mg/L after a 500 mg twice-daily dose.
The bioavailability of a 500 mg oral dose of azithromycin is only about 37%, and the peak serum concentration attained is 0.4 mg/L (7). Serum concentrations with multiple-dose regimens were slightly higher, but always below 1 mg/L. Studies of the influence of food on azithromycin absorption revealed that the peak serum concentration decreased by 52% and the area under the plasma concentration-time curve (AUC) diminished by 43% (8).
There are apparent discrepancies between the bioavailability and peak plasma concentration (Cmax) values for the erythromycins, clarithromycin, and azithromycin because no equivalent intravenous formulation exists for erythromycin. Lactobionate and gluceptate intravenous erythromycins are not fully equivalent. Thus, the bioavailability values for erythromycin and erythromycin estolate are only approximate.
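For readers who want to reproduce qualitative concentration-time profiles like those discussed above, a standard one-compartment oral absorption model (Bateman equation) can be sketched as follows. The parameter values are illustrative placeholders chosen to mimic clarithromycin's reported peak time of about 2 h, not fitted data from the studies cited here.

```python
import numpy as np

def oral_concentration(t, dose, f_bio, v_d, ka, ke):
    """One-compartment model with first-order absorption (Bateman equation):
    C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))."""
    return (f_bio * dose * ka / (v_d * (ka - ke))
            * (np.exp(-ke * t) - np.exp(-ka * t)))

t = np.linspace(0.0, 24.0, 481)                       # hours
c = oral_concentration(t, dose=500.0, f_bio=0.55,     # 500 mg, ~55% bioavailable
                       v_d=200.0,                     # assumed volume, litres
                       ka=1.2, ke=np.log(2) / 5.0)    # ke from a 5 h half-life

cmax, tmax = c.max(), t[np.argmax(c)]
auc = float(np.sum((c[1:] + c[:-1]) * np.diff(t)) / 2.0)  # trapezoidal AUC
print(f"Cmax ~ {cmax:.2f} mg/L at ~{tmax:.1f} h; AUC ~ {auc:.1f} mg*h/L")
```

With these assumed parameters the model peaks near 1 mg/L at about 2 h, broadly in line with the single-dose values quoted in this section.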
DISTRIBUTION AND TISSUE PENETRATION
Macrolide antibiotics are known to bind to plasma proteins, particularly alpha1-acid glycoprotein. These compounds are lipophilic, and they penetrate well into tissues (5).
Scaglione and Fraschini (9) evaluated the diffusion of clarithromycin into respiratory tissues, including the nasal mucosa, tonsils, and lungs, in adult patients undergoing surgery. For the three days preceding their surgical procedures, patients received clarithromycin, 250 mg twice daily (nasal mucosa or tonsillar tissue) or 500 mg twice daily (lung parenchyma). For clarithromycin and its active metabolite, terminal disposition half-life (t1/2β) values in sputum were 1.3- to 1.6-fold longer than those in serum. In tonsils, mean 4 h post-dose parent and metabolite concentrations were 5.3 and 3.1 mg/kg, respectively, and mean 12 h post-dose values were 2.1 and 1.2 mg/kg (Figure 1). Parent and metabolite concentrations in nasal mucosa 4 h post-dose were 5.9 and 3.2 mg/kg, respectively, and mean 12 h post-dose values were 2.2 and 1.5 mg/kg. In lung tissue, parent and metabolite concentrations 4 h post-dose were 13.5 and 7.2 mg/kg, respectively, and 12 h post-dose values were 2.8 and 2.0 mg/kg (9). Significantly, these values exceed the 90% minimum inhibitory concentration (MIC90) for many respiratory pathogens (Table 2).
In a recent study conducted in the United Kingdom, pulmonary tissue concentrations of azithromycin were measured in 22 patients (10). Up to 96 h after a single 500 mg dose, the following findings were observed: mean peak concentration of the drug in sputum, 1.56 mg/L; in bronchial mucosa, 3.89 µg/mL; in alveolar macrophages, 23 mg/L. Serum concentrations were significantly lower (0.13 mg/L 12 h post-dose), and were generally sub-MIC throughout the study period.
The concentration of azithromycin in most tissues has been shown to exceed serum concentrations by 10- to 100-fold (7). Single oral dose (500 mg) studies of azithromycin have found that the concentration of azithromycin in most tissues 12 to 48 h after dosing ranges from 1 to 9 mg/kg (3). Mean concentrations for a single tissue type are usually greater than 2.0 mg/kg, and all tissue levels are higher than those of concurrently obtained serum samples. The average 'tissue' half-life is two to four days. One may question whether such an extended half-life is completely beneficial.
Although intraphagocytic bioactivity is not a common property of antimicrobial agents (11), the newer macrolide antibiotics achieve high intracellular concentrations. Both clarithromycin and azithromycin have been shown to penetrate macrophages and leukocytes, which makes them particularly effective against intracellular pathogens such as Legionella pneumophila and Chlamydia species (5). In contrast, penicillin and cephalosporin antibiotics are not actively concentrated by phagocytes, and they possess only modest, if any, intracellular activity (11).
Anderson and colleagues (11) observed that erythromycin was rapidly concentrated by neutrophils, with an intracellular to extracellular (I:E) ratio of 7:3. The I:E ratio for clarithromycin was 9:1. These investigators concluded that the superior pharmacokinetic properties of clarithromycin will lead to increased intraphagocytic accumulation and bioactivity in vivo.
Therapeutic concentrations of clarithromycin have also been found to stimulate protein kinase C activity in polymorphonuclear leukocytes (PMNLs). Thus, in addition to its antimicrobial activity, the drug stimulates cellular host defense mechanisms involving the activation of protein kinase C (12).
It was recently shown that, among all macrolides tested so far, azithromycin provides the highest I:E ratio, confirmed both in vitro and in vivo, with values of approximately 160 obtained in vitro for azithromycin (13). The theory that on-site, intraphagocytic delivery of azithromycin provides a significant amount of bioactive antimicrobial agent has been demonstrated in vitro and in vivo (14). However, this concept applies to all macrolides, including erythromycin, that are known to accumulate in phagocytic cells.
METABOLISM AND ELIMINATION
Although the metabolism of the macrolide antibiotics has not been extensively studied, it is known that a portion of the dose is metabolized in the liver. Macrolide antibiotics are demethylated by the cytochrome P-450-III microsomal enzyme system. Clarithromycin is metabolized to eight metabolites, but only one, the 14-hydroxy metabolite, has been shown to have antibacterial activity. The activity of this metabolite is comparable to or greater than that of the parent compound. The 14-hydroxy metabolite has been shown to act synergistically or additively with the parent compound, thereby extending clarithromycin's antimicrobial spectrum to include H. influenzae. It is not known whether any of the metabolites of azithromycin are active (5).
The pharmacokinetics of clarithromycin appear to be dose-dependent and nonlinear, apparently as a result of capacity-limited saturation of metabolic pathways. However, such nonlinearity is slight at the recommended dosages. Disproportionate increases in Cmax, t1/2β, and AUC have been reported in patients receiving a single high dose (1.2 g) or multiple doses. Similar dose dependency has been observed with the 14-hydroxy metabolite (1).
Thirty to 40% of an oral dose of clarithromycin is excreted unchanged or as an active metabolite via the kidneys, and the remainder is excreted via the bile (1). In individuals with normal renal function, the half-lives of clarithromycin and its 14-hydroxy metabolite after a 500 mg dose are 5 and 7 h, respectively (15). As renal function declines, the serum half-lives of these compounds increase to 7.7 h and 14 h, respectively. At a creatinine clearance of 30 to 80 mL/min, clarithromycin's half-life is 12 h, and this interval increases to 32 h when the creatinine clearance falls below 30 mL/min. For 14-hydroxy clarithromycin at the lower creatinine clearance, the half-life is 47 h. Clearly, regimen alteration would be advisable in patients with severely impaired renal function.
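The renal breakpoints above can be summarized as a simple lookup. The following sketch encodes only the parent-drug half-life values quoted in this section; it is an illustration, not dosing guidance.

```python
def clarithromycin_half_life_h(crcl_ml_min):
    """Approximate parent-drug half-life by creatinine clearance, using only
    the breakpoints quoted in this section (illustration, not dosing advice)."""
    if crcl_ml_min < 30:
        return 32.0      # severe impairment
    if crcl_ml_min <= 80:
        return 12.0      # moderate impairment
    return 5.0           # normal renal function (500 mg dose)

for crcl in (100, 50, 20):
    print(f"CrCl {crcl} mL/min -> t1/2 ~ {clarithromycin_half_life_h(crcl)} h")
```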
Severe hepatic impairment could theoretically alter the pharmacokinetics of clarithromycin and its metabolite so that less metabolite would be formed, and renal clearance of the parent compound would increase. Steady-state levels of unchanged clarithromycin in hepatically impaired patients are similar to those in normal subjects, so if renal function is normal, the drug can be administered without dose adjustment (15).
Azithromycin elimination is polyphasic: the initial rapid decline in drug serum levels is followed by multiple-phase distribution and elimination. After a single 500 mg dose, the terminal half-life probably exceeds 40 h (16). When azithromycin metabolism occurs, demethylation is the primary route. The metabolites, which may number as many as 10, are not thought to have any significant antimicrobial activity (3). Urinary excretion of unchanged azithromycin in humans appears to be a minor elimination route, usually amounting to less than 6% within 24 h after an oral 500 mg dose. About 20% of the drug that reaches the systemic circulation is excreted unchanged in the urine (8). The feces are an important route of elimination for azithromycin; biliary concentrations of the drug far exceed serum concentrations, suggesting biliary excretion. Over half the drug-related material in the bile is unchanged. Transintestinal excretion may be the primary route of elimination of the unchanged compound (3).
Only 2 to 5% of an oral dose of erythromycin is excreted in active form in the urine. The antibiotic is metabolized through the liver and excreted in active form in the bile.
DRUG REGIMENS
The usual dose of erythromycin for adults ranges from 1 to 2 g/day, given in equally divided and spaced amounts, usually every 6 h (4).
Azithromycin may be given as a single 1 g dose in specific instances, but the more common regimen is a five-day course of therapy, beginning with a 500 mg dose on day 1, followed by daily 250 mg doses on days 2 through 5 (1,8).
For clarithromycin, the usual adult dose for infections of the respiratory tract and the skin and soft tissues is 250 to 500 mg every 12 h for seven to 14 days. In patients with both hepatic and renal impairment, or in the presence of severe renal impairment, decreased dosage or prolonged dosing intervals may be appropriate (18).
FUTURE OUTLOOK
The pharmacokinetic advantages and superior spectra of activity of clarithromycin and azithromycin over erythromycin (base and esters) are well delineated; however, it is my opinion that the differences between clarithromycin and azithromycin are not as dramatic as they may appear to be. Azithromycin's long half-life is not, in itself, an advantage. When evaluating the tissue penetration of clarithromycin and azithromycin, one would prefer data from the same investigators under similar study conditions. Comparative studies are currently under way and are likely to provide further insights into understanding these new compounds.
Both clarithromycin and azithromycin offer therapeutic advantages in certain areas, and they are likely to become first-line therapy in a number of situations (17). The once- or twice-daily dosing regimens of these new macrolides may also improve patient compliance, a key factor in the management of any infection. | 3,381 | 1993-05-01T00:00:00.000 | [ "Medicine", "Biology" ] |
Anthropomorphic framing and failure comprehensibility influence different facets of trust towards industrial robots
Introduction: Utilizing anthropomorphic features in industrial robots is a prevalent strategy aimed at enhancing their perception as collaborative team partners and promoting increased tolerance for failures. Nevertheless, recent research highlights potential drawbacks associated with this approach. It is still widely unknown how anthropomorphic framing influences the dynamics of trust, especially in the context of different failure experiences. Method: The current laboratory study aimed to close this research gap. To do so, fifty-one participants interacted with a robot that was either anthropomorphically or technically framed. In addition, each robot produced either a comprehensible or an incomprehensible failure. Results: The analysis revealed no differences in general trust towards the technically and anthropomorphically framed robot. Nevertheless, the anthropomorphic robot was perceived as more transparent than the technical robot. Furthermore, the robot's purpose was perceived as more positive after experiencing a comprehensible failure. Discussion: The perceived higher transparency of anthropomorphically framed robots might be a double-edged sword, as the actual transparency did not differ between both conditions. In general, the results show that it is essential to consider trust multi-dimensionally, as a uni-dimensional approach, which is often focused on performance, might overshadow important facets of trust like transparency and purpose.
Introduction
Industrial robots are increasingly working hand in hand with their human coworkers. Hand in hand can be meant literally here, as close collaboration requires physical and temporal proximity (Onnasch and Roesler, 2021). For efficient collaboration, humans have to trust the robotic interaction partner (Hancock et al., 2011; Sheridan, 2016). While human-robot trust research is still an evolving field, trust has been studied extensively in human-automation and human-human interaction, both fields that are strongly related to human-robot interaction (HRI) (Lewis et al., 2018). Most theoretical models of trust in automation as well as trust in humans consider trust as multi-dimensional. For instance, for trust in automation, performance, purpose, and process are described as separate dimensions of trust (Lee and See, 2004). Even though a transferability of these dimensions to human-robot trust is assumed (Lewis et al., 2018), recent research focused on using single items of trust (e.g., Salem et al., 2015; Sarkar et al., 2017; Roesler et al., 2020; Onnasch and Hildebrandt, 2021) or uni-dimensional trust questionnaires (e.g., Sanders et al., 2019; Kopp et al., 2022). These approaches are not able to capture different dimensions, and thus cannot contribute much to a more detailed understanding of the underlying determinants of trust and trust dynamics in interaction with robots.
The multi-dimensional trust-in-automation questionnaire (MTQ), originally proposed by Wiczorek (2011) and translated, adapted, and validated by Roesler et al. (2022a), might also be used for investigating trust in HRI. Theoretically, it is based on the concept of Lee and See (2004) and assesses the dimensions performance, utility, purpose, and transparency. This allows for a more fine-grained assessment of trust in order to gain a better understanding of which trust dimensions are impacted by a given characteristic of a robot. Factors on the part of the robot that influence trust can be classified as performance- and attribute-based characteristics (Hancock et al., 2011). In particular, performance-based factors such as reliability currently have the largest influence on perceived trust in HRI. However, actual reliability is rarely correctly weighted in the formation of trust (Rieger et al., 2022). One decisive factor for this discrepancy could be the type of error experienced in the interaction (Madhavan et al., 2006). In particular, obvious failures made by a robot might dramatically reduce trust as expectations are violated (Madhavan et al., 2006). Based on this easy-error hypothesis in human-automation interaction, we hypothesized a comparable pattern in HRI. Thus, we assumed that comprehensible failures that might happen to humans as well are more forgivable than incomprehensible failures.
This effect could even be enhanced by one of the most popular design features in HRI: the application of anthropomorphic characteristics (Salem et al., 2015; Roesler et al., 2021). Anthropomorphism by design refers to the incorporation of human-like qualities and characteristics into the design and behavior of robots (Fischer, 2021). Anthropomorphic design extends beyond mere robotic appearances, encompassing elements such as communication, movement dynamics, and contextual integration (Onnasch and Roesler, 2021). Different factors collectively contribute to shaping the perceived anthropomorphism of a robot. Even something subtle like an anthropomorphic framing of a robot can serve as a trigger that activates human-human interaction schemes (Onnasch and Roesler, 2019; Kopp et al., 2022). Due to the activation of human-like expectations, failures that might have happened to a human as well [i.e., comprehensible failures (Madhavan et al., 2006)] could lead to a less pronounced trust decrease in the anthropomorphically compared to the technically framed robot.
In addition to this presumed positive effect, anthropomorphism also comes with potential pitfalls, especially in industrial HRI. In this application domain, anthropomorphism can undermine the perceived tool-like character of the robot, which can result in lower trust and perceived reliability (Roesler et al., 2020; Onnasch and Hildebrandt, 2021). The results regarding anthropomorphic framing in task-related interactions are currently mixed (Onnasch and Roesler, 2019; Roesler et al., 2020; Kopp et al., 2022). Whereas studies that combined anthropomorphic framing and appearance in industrial HRI found negative effects (Onnasch and Roesler, 2019; Roesler et al., 2020), another study that investigated anthropomorphic framing without exposure to an industrial robot found a positive effect on trust (Kopp et al., 2022). However, this was only the case if the anthropomorphic framing was combined with a cooperativeness framing (Kopp et al., 2022). As participants in the present study were exposed to an actual robot and no additional framing regarding cooperativeness was given, it might be assumed that a possible mismatch of appearance, context, and framing reduces trust (Goetz et al., 2003; Roesler et al., 2022b). Thus, we hypothesized that anthropomorphic framing of an industrial robot leads to lower initial and learned trust compared to technical framing.
To investigate the joint effects of failure comprehensibility and anthropomorphic framing, we conducted a laboratory experiment. Participants worked with an industrial robot on a collaborative task. The robot either had an anthropomorphic framing or a technical framing, based on the perceived human-likeness framings used by Kopp et al. (2022). The dynamics of trust were investigated by measuring trust once initially before the actual collaboration started, after a period of perfectly reliable robotic performance, and after the experience of a failure, which was either comprehensible or incomprehensible.
Methods
The experiment was preregistered via the Open Science Framework (OSF) (https://osf.io/nvmqk) and approved by the local ethics committee. The collected data can also be accessed via the OSF (https://osf.io/2vzxj/).
Participants
The sample consisted of 51 participants (M age = 26.94; SD age = 7.72) who were recruited via the participant pool of the local university and online postings. Of those participants, 50.98% were female, 47.06% male, and 1.96% non-binary. Participants signed consent forms at the beginning of the experiment and received five Euros as compensation at the end of the experiment. Due to time constraints of the project, we were unable to achieve the intended sample size as planned and preregistered. Hence, it is crucial to consider the issue of limited statistical power.
Task and materials
The aim of the human-robot collaboration was to solve a four-disk version of the Tower of Hanoi multiple times together with the industrial robot Panda (Figure 1). In this mathematical puzzle, a stack of disks has to be moved from the leftmost to the rightmost peg in the fewest possible moves, carrying only one disk at a time and never placing a larger disk on a smaller one. The tower was situated in front of the robot, vis-à-vis the participant. The required movement sequences of the robot were preprogrammed and proceeded in the following order. First, the robot moved toward one peg as a sign to remove the top disk from this peg. Subsequently, the robot moved toward another peg as a prompt to place the previously picked disk there. Afterward, the robot moved back to the resting position to start the next sequence. The participant's task was to move the disks by following the robot's directives exactly, so as to solve the Tower of Hanoi in an optimal sequence. Moreover, the participant had to monitor the robot's behavior by comparing the steps indicated by the robot with the optimal procedure; the participants received a printed copy of the precise instructions for the Tower of Hanoi, as can be seen on the table in Figure 1. Whenever the robot deviated from the optimal procedure, the participants needed to intervene by pushing a (mock-up) emergency button.
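The optimal procedure that participants monitored follows the classic recursive solution of the Tower of Hanoi. A minimal Python sketch is shown below; it is our own illustration, not the study's actual control software, and the `robot_deviates` helper is a hypothetical name for the monitoring rule.

```python
def hanoi_moves(n, source=0, target=2, spare=1):
    """Optimal move list (2**n - 1 moves) for an n-disk Tower of Hanoi;
    each move is a (from_peg, to_peg) pair."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi_moves(n - 1, spare, target, source))

OPTIMAL = hanoi_moves(4)        # four-disk version used in the study
assert len(OPTIMAL) == 15       # 2**4 - 1 moves

def robot_deviates(robot_move, step):
    """Monitoring rule given to participants: flag any deviation from
    the optimal sequence (hypothetical helper, not the study's software)."""
    return robot_move != OPTIMAL[step]
```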
Dependent variables
Single items were used to assess general trust (How much do you trust the robot?) and reliability (How reliable is the robot?), both on a scale from 0 to 100. In addition, the MTQ with four subscales (i.e., performance, utility, purpose, transparency) was assessed via 16 items (e.g., The way the system works is clear to me.) on a four-point Likert scale from disagree to agree (Wiczorek, 2011; Roesler et al., 2022a). Both the German and English versions of the questionnaire can be accessed through the OSF via https://osf.io/56cwx/.
To prevent confounding effects of participants' interindividual differences, we included two control variables. First, the disposition to trust technology was assessed (Lankton et al., 2015). Second, we asked participants to fill in a 5-item short version of the Individual Differences in Anthropomorphism Questionnaire (Waytz et al., 2010). The short version comprised solely items that directly addressed technological aspects (To what extent does technology-devices and machines for manufacturing, entertainment, and productive processes (e.g., cars, computers, television sets)-have intentions?).
To test whether the manipulation of anthropomorphism via framing was successful, we incorporated a self-constructed questionnaire with ten items that addressed aspects of the anthropomorphic context (e.g., the character, task, and preferences of the robot). All items were rated on a 0%-100% human-likeness scale. The manipulation of failure comprehensibility was checked by asking the participants to rate on a five-point Likert scale whether they too could have committed the failure (Roesler et al., 2020).
Procedure
All participants were randomly assigned to one of the four conditions and received corresponding written instructions, including the framing of the robot. After filling out the initial questionnaire comprising the single items of trust and perceived reliability, participants were informed that they would be working together with the robot for three blocks, each including three Towers of Hanoi. After the first fault-free block, the single items of trust and perceived reliability were assessed again. In the second block, either a comprehensible failure (i.e., indicating the wrong position of a disk without violating the rules) or an incomprehensible failure (i.e., indicating the wrong position of a disk and breaking the rule of never putting a larger disk on a smaller one) occurred. After the failure experience, participants needed to push the (mock-up) emergency button. This was done to ensure that all participants noticed the failure. Subsequently, the single items of trust and perceived reliability, the MTQ, sociodemographics, control variables, and manipulation checks were measured. After this, all participants were debriefed and received the five Euro compensation. The entire experiment lasted approximately 35 min.
Design
The study used a 2 × 2 × 3 mixed design with the two between-subjects factors robot framing (anthropomorphic vs technical) and failure comprehensibility (low vs high), and the within-subjects factor experience (initial vs pre-failure vs post-failure).
The different robot framing conditions were implemented via written instructions (Kopp et al., 2022). In the anthropomorphic conditions, the robot was framed as a colleague named Paul with human-like characteristics. In contrast, in the technical conditions, the framing characterized the robot as a tool with some technical specifications and the model name PR-5. The framings can also be accessed via the OSF (https://osf.io/3xgcp). The failures were represented by wrong instructions on the part of the robot. Comprehensibility was manipulated via the obviousness of the failure. In the incomprehensible conditions, the robot suggested moving a bigger disk onto a smaller one, which is forbidden by the general rules of the Tower of Hanoi. In the comprehensible conditions, the robot suggested a wrong position for a disk without breaking a general rule.
Control variables
First, the variables regarding individual differences in attitudes toward technology and the tendency to anthropomorphize were analyzed between the four conditions using one-way ANOVAs. The analyses revealed no significant differences between the four groups in the disposition to trust technology (F(3, 47) = 1.25; p = .303) or in the tendency to anthropomorphize (F(3, 47) = 2.48; p = .072).
Manipulation check
To investigate whether the manipulations were successful, independent t-tests were conducted. Surprisingly, the anthropomorphically framed robot was not perceived as significantly more anthropomorphic on the self-constructed scale compared to the technically framed one (t(49) = 0.34; p = .732). Moreover, the comprehensible and incomprehensible failures did not lead to a different understandability of the failure (t(49) = −0.96; p = .341).
Initial trust
Initial trust and perceived reliability were analyzed with regard to differences between the differently framed robots via independent t-tests. The analyses revealed neither a difference in general trust (t(49) = −0.63; p = .529) nor in perceived reliability (t(49) = 1.48; p = .145) between the framing conditions.
Learned trust
General trust and perceived reliability were analyzed via 2 × 2 × 2 mixed ANOVAs with the between-subjects factors framing (anthropomorphic vs technical) and failure comprehensibility (low vs high) as well as the within-subjects factor failure experience (pre- vs. post-failure). The analysis of trust revealed only a significant main effect of failure experience (F(1, 47) = 40.73; p < .001), with higher trust before (M = 84.75; SD = 17.90) compared to after the failure experience (M = 64.31; SD = 24.65). No further main or interaction effects were revealed in the analysis (all ps > .068). A comparable pattern of results was revealed for perceived reliability. Again, a significant main effect of failure experience was found (F(1, 47) = 71.15; p < .001). Participants perceived the robot prior to the failure experience (M = 93.51; SD = 8.94) as significantly more reliable than after the failure experience (M = 66.16; SD = 23.65). No further effects were revealed (all ps > .349).
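As an illustration of the within-factor comparison just reported, the following sketch runs a paired test on invented pre/post trust ratings; the study's full 2 × 2 × 2 mixed ANOVA additionally crosses the two between-subjects factors, which this simplified example omits.

```python
import numpy as np
from scipy import stats

# Invented pre-/post-failure trust ratings (0-100) for the same participants;
# illustrative numbers only, not the study's data.
pre = np.array([90.0, 80.0, 85.0, 95.0, 70.0, 88.0])
post = np.array([65.0, 60.0, 70.0, 80.0, 50.0, 66.0])

t_val, p_val = stats.ttest_rel(pre, post)   # paired comparison, pre vs. post
diff = pre - post
d = diff.mean() / diff.std(ddof=1)          # within-subject Cohen's d
print(f"t = {t_val:.2f}, p = {p_val:.4f}, d = {d:.2f}")
```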
As the MTQ was measured after the failure experience, 2 × 2 between-subjects ANOVAs with the factors framing (anthropomorphic vs technical) and failure comprehensibility (low vs high) were used. Neither the analysis of the performance scale nor the analysis of the utility scale revealed any significant effects (all ps > .132). However, the analysis of the purpose scale showed a significant main effect of failure comprehensibility (F(1, 47) = 6.20; p = .016), depicted in Figure 2 (left). Incomprehensible failures (M = 3.05; SD = 0.54) received significantly lower scores on this scale compared to comprehensible failures (M = 3.38; SD = 0.35). Moreover, the analysis of the transparency scale revealed a significant main effect of robot framing (F(1, 47) = 7.08; p = .011), as can be seen in Figure 2 (right). The anthropomorphically framed robot (M = 3.02; SD = 0.52) was perceived as significantly more transparent than the technically framed one (M = 2.59; SD = 0.62). No further significant effects were revealed for the purpose and transparency scales (all ps > .161).
Discussion
The purpose of the presented study was to examine the joint effects of anthropomorphic robot framing and the experience of more or less comprehensible failures on human trust in a realistic industrial human-robot collaboration. Based on previous research in task-related HRI (Onnasch and Roesler, 2019; Roesler et al., 2020; Onnasch and Hildebrandt, 2021), it was assumed that anthropomorphic framing would lead to lower trust and perceived reliability compared to a technical framing. The present results were not consistent with this claim, as no significant differences in initial and learned trust or in perceived reliability were revealed. This might be explained by the interplay of framing and appearance. Earlier studies in industrial HRI manipulated framing and appearance together (Roesler et al., 2020; Onnasch and Hildebrandt, 2021). The comparison with the current results could indicate that the negative effect of decorative anthropomorphism in industrial HRI might be mainly attributable to appearance rather than to framing. In addition, recent research by Kopp et al. (2022) showed a positive effect of anthropomorphic framing on trust in industrial HRI if the relation is perceived as cooperative. Even though it often remains unclear if and why people perceive the relation to an industrial robot as cooperative or competitive (Oliveira et al., 2018), our interaction scenario was designed in a cooperative way. This might explain why anthropomorphic framing influenced at least one facet of trust: transparency.
As anthropomorphism is assumed to activate well-known human-human interaction scripts, knowledge about the otherwise highly unfamiliar novel technology is elicited (Epley et al., 2007). The imputation of human-like functions and behaviors can thus reduce uncertainty and, in this case, increase perceived transparency. Of course, this is a double-edged sword, as perceived transparency does not correspond to actual transparency here. The illusion of higher transparency might even lead to unintentional side effects, such as a wrong mental model of the robot. For future research, it would be important to consolidate the current findings by further examining the effect of anthropomorphic framing on transparency. However, the general effectiveness of framing with regard to human-robot trust should be interpreted with caution, as no significant results were revealed for general trust and the other subscales of the MTQ. This pattern of results is consistent with a recent meta-analysis showing no significant effect of context anthropomorphism for subjective as well as objective outcomes (Roesler et al., 2021). However, the meta-analysis also shed light on a notable research gap concerning anthropomorphic context, which has received comparably less attention than the effectiveness of robot appearance. The findings of this study, coupled with insights from the previous work of Kopp et al. (2022), tentatively suggest a potential effectiveness of anthropomorphic framing for industrial HRI with regard to trust. The previous and current results underscore the necessity of further exploration and empirical investigation of the possible benefits of anthropomorphic framing in industrial HRI.
Given this, it might not be surprising that no interaction effect of framing and failure comprehensibility was found. A possible effect might have been masked by the rather nonsalient manipulations of both anthropomorphism and failure comprehensibility. This assumption is further supported by the nonsignificant manipulation checks for both variables. Nonetheless, the comprehensibility of failures did significantly influence the perceived purpose of the robot. Purpose refers to motives, benevolence, and intentions (Lee and See, 2004), not to the performance of the interaction partner. This suggests that failure number and failure type affect different facets of trust.
The finding that anthropomorphic framing and failure comprehensibility can affect specific dimensions of trust but not general trust shows the importance of integrating multidimensional approaches when investigating trust in HRI. Unidimensional trust measures most commonly relate to performance aspects (Roesler et al., 2022b). Even though performance attributes of a robot are among the most important determinants of trust, they are by far not the only ones (Hancock et al., 2011). It is therefore highly relevant to also include trust facets that go beyond performance. Thus, future research should take a multidimensional view of trust, particularly with novel embodied technologies like robots.
Although the generality of the current results must be established by future research, especially with bigger sample sizes to investigate the joint effect of both factors, the present study provides clear support that unidimensional trust measurements might overshadow important facets of trust. Not only did anthropomorphic framing lead to higher transparency than technical framing, but more comprehensible failures also led to higher perceived purpose of the robot than incomprehensible failures. Furthermore, this research opens up multiple avenues for future work to investigate the different dimensions of trust in more detail.
FIGURE 2. Means, standard errors, and exact values for each participant for the type of failure concerning purpose (left) and the framing concerning transparency (right).
"Engineering",
"Psychology",
"Computer Science"
] |
Ab Initio Study of the Interaction of a Graphene Surface Decorated with a Metal-Doped C30 with Carbon Monoxide, Carbon Dioxide, Methane, and Ozone
Using DFT simulations, we studied the interaction of a semifullerene C30 with a defected graphene layer. We found that the C30 chemisorbs on the surface. We also found that the adsorbed C30 chemisorbs Li, Ti, or Pt on its concave part. Thus, the resulting system (C30-graphene) is a graphene layer decorated with a metal-doped C30. The adsorption of the molecules depends on the shape of the base of the semifullerene and on the dopant metal. The CO molecule adsorbed without dissociation in all cases where it adsorbed: when the bottom is a pentagon, the adsorption occurs only with Ti as the dopant, and for a hexagonal bottom it adsorbs with Pt as the dopant. The carbon dioxide molecule adsorbs for both base shapes but only when lithium is the dopant, and the adsorption occurs without dissociation. The ozone molecule adsorbs on both surfaces. When Ti or Pt is the dopant, the O3 molecule always dissociates into an oxygen molecule and an oxygen atom. When Li is the dopant, the O3 molecule adsorbs without dissociation. Methane did not adsorb in any case. Calculating the recovery time at 300 K, we found that the system may serve as a sensor in several instances.
Introduction
Molecules such as CO, CO2, CH4, and O3 are air and water pollutants that threaten the environment and life, prompting the scientific community to develop technological solutions to such challenges [1-3]. In this study, we are interested in exploring the use of fullerenes for such aims.
Surfaces based on fullerenes and their variations have been widely studied since the prediction and subsequent synthesis of the C60 structure [4-6], a highly stable molecule consisting of 60 carbon atoms, also named buckminsterfullerene, buckyball, or simply fullerene. Although fullerenes such as C60, C70, or larger are the most commonly studied [7,8], smaller fullerenes can also be produced experimentally and are of particular interest due to their curvature [9-11].
Fullerene fragments, such as a C30 hydrocarbon (i.e., half of the buckminsterfullerene C60), can show some of the properties of their complete counterparts [9] while also offering new possibilities due to their open, basket-like shape. Similar nonplanar structures are corannulene (C20H10) and coronene, known since the 1960s [12-14]. The former is a bowl carbon structure with 20 carbon atoms; C20, the smallest possible fullerene, has also been produced experimentally [11]. The discovery of bidimensional, planar structures such as graphene [15,16] and borophene [17] has also attracted attention because of their attractive properties and potential applications.
Previous investigations by other authors considered fullerenes on a graphene surface, focusing on weak interactions at the molecular level [18]. Graphene can accept electrons from a C60 fullerene relatively quickly, which, combined with the high transport capability of the former, turns this hybrid material into a good candidate for solar cell technology [19]. The development of hybrid surfaces has also focused on fabricating graphene-C60 films on silicon surfaces by a multistep self-assembly process [20]. The potential applications of these systems are promising, especially as lubricating films in microelectromechanical systems. Graphene-C60 vertical heterostructures composed of C60 thin films have also been studied for their structural and electrical properties [21]. The absorption of pollutants such as COCl2 (phosgene), H2S, CO, or CO2, among others, by these hybrid structures has also attracted attention. Decorating such arrangements with transition metals usually catalyzes absorption [22-25].
This work studies a mixed surface formed by a semifullerene C30 adsorbed on a defective 5 × 5 graphene layer missing a hexagonal ring, i.e., with six carbon vacancies. The roughness of the surface at several sites and the change in curvature make this an attractive system to dope with different atoms. We considered Li, Ti, and Pt decorations and then studied the ability of the compound system to capture the pollutant molecules mentioned above. We found that all the molecules reacted with the surface except methane.
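As a rough illustration of how such a defected supercell can be set up programmatically, the sketch below builds a 5 × 5 graphene lattice and removes six atoms to mimic the missing hexagonal ring; the lattice constant and the ring-selection criterion are generic assumptions, not the exact cell used in this work.

```python
# Sketch: build a 5x5 graphene supercell and delete six atoms to create the
# vacancy cluster. Geometry only; no DFT is performed here.
import numpy as np

a = 2.46  # graphene lattice constant in Angstrom (standard value)
a1 = np.array([a, 0.0])
a2 = np.array([a / 2, a * np.sqrt(3) / 2])
basis = [np.array([0.0, 0.0]), (a1 + a2) / 3.0]  # two-atom honeycomb basis

atoms = np.array([i * a1 + j * a2 + b
                  for i in range(5) for j in range(5) for b in basis])

# Remove the 6 atoms closest to the cell centroid. This only approximates a
# hexagonal ring; a real setup would select the six ring atoms explicitly.
center = atoms.mean(axis=0)
order = np.argsort(np.linalg.norm(atoms - center, axis=1))
defected = np.delete(atoms, order[:6], axis=0)
print(len(atoms), "->", len(defected), "atoms")  # 50 -> 44
```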
Optimization of the Semifullerene C30
We took two different fragments when splitting a fullerene C60 into two halves ("buckyballs") [10] to obtain a semifullerene C30: one has a pentagon at the base (section P), and the other has a hexagonal base (section H). Figure 1 shows the optimization for each case: Figure 1a,b show the C30 with a pentagon at the bottom, and Figure 1c,d show the C30 with a hexagonal base. After optimization, we found that in the C60 molecule the separation between the carbon atoms is 1.425 Å. For section P, the distance is 1.444 Å at the bottom, and for the rest of the atoms the average separation is 1.375 Å. For section H, the spacing is 1.485 Å at the base, and the average spacing is 1.436 Å for the other atoms. The results of other authors [10] agree with ours.
Figure 1. Optimization of the semifullerene C30. In (a,b), we show the C30 with a pentagonal base, in a front and a side view, respectively. In (c,d), we show a front and a side view for the C30 with a hexagonal base.
Optimization of Graphene with a Six-Vacancy Cluster
The vacancies in the graphene layer are necessary for the adsorption of the C30 molecule. We considered a graphene unit cell with 50 atoms and created a six-vacancy cluster. Then we optimized the system. Figure 2 shows the final configuration. We note some distortion in the graphene lattice: the carbon atoms around the vacancies have separations different from those of pristine graphene. The bond lengths marked with A are 1.403 Å, and those marked with X are 1.452 Å. The other bonds are 1.420 Å, the same as in pristine graphene.
Adsorption of the C30 Molecule with a Pentagonal Base
The left column (P) of Figure 3 shows, in row 1, the initial position of the C30 molecule with a pentagonal base: the molecule sits above the vacancy cluster, with its closest carbon atom at a distance of 3 Å from the surface. Row 2 of the same column shows the system's final configuration. The adsorption energy is −15.29 eV, indicating a very strong reaction with graphene. Row 3 of the same column shows a top view of the graphene surface after adsorption, using four unit cells.
Adsorption of the C30 Molecule with a Hexagonal Base
Column H of Figure 3 shows, in row 1, the initial position of the C30 molecule with a hexagonal bottom relative to the graphene layer, with the closest carbon atom at a distance of 3 Å from the surface. Row 2 of the same column shows the system's final configuration. The adsorption energy is −16.410 eV, a stronger adsorption than in the pentagonal case. Row 3 of the same column shows a top view of the graphene surface after adsorption, using four unit cells.
Figure 3. Adsorption of C30 on graphene with a six-vacancy cluster. Column P shows the adsorption of the C30 molecule with a pentagonal base: row 1 gives the initial position of the semifullerene, row 2 the final configuration after adsorption, and row 3 a top view of the surface with four unit cells. The corresponding sequence for a C30 with a hexagonal base is in column H.
2.5. Adsorption of Metals on the Graphene-C30 (P) Surface
2.5.1. Doping with Li
Figure 4a presents the initial and final configurations for the adsorption of a lithium atom on the surface. The initial distance between the metal atom and the plane defined by the opening of the C30 was 3.27 Å, and the distance from the graphene layer was 5.27 Å. The lithium atom ends up bound to a carbon atom of the C30. The adsorption energy of Li is −3.686 eV, which indicates a strong reaction with the surface. The Li atom yields 0.0561 electrons. Figure 5 shows the projected density of states (PDOS) of the interaction. Note the hybridization of the s and p orbitals of carbon with the p orbital of lithium around the Fermi energy, at around 4 eV above the Fermi energy, and at about 2 eV below the Fermi energy.
2.5.2. Doping with Ti
Figure 4b shows the initial and final configurations for the adsorption of a titanium atom on the surface. The initial distance between the metal atom and the plane defined by the opening of the C30 was 3.34 Å, and the distance from the graphene layer was 5.25 Å. The titanium atom ends up bound to four carbon atoms of the C30. The adsorption energy is −8.082 eV, implying an intense reaction. The Ti atom yields 0.6129 electrons to the surface. Figure 6 shows the PDOS of the interaction. We note the hybridization of the s and d orbitals of titanium with the p orbitals of the neighboring carbon atoms around −4 eV, a bit below the Fermi energy, and between 1 eV and 5 eV.
2.5.3. Doping with Pt
Figure 7 shows the corresponding PDOS. We note the hybridization of the p orbital of carbon with the s and p orbitals of platinum around the Fermi energy, at around 2 eV above the Fermi energy, at about 2 eV below the Fermi energy, and below −4 eV.
2.6. Adsorption of Metals on the Graphene-C30 (H) Surface
2.6.1. Doping with Li
Figure 8a shows the initial and final configurations for the adsorption of a lithium atom on the surface. The initial distance between the metal atom and the plane defined by the opening of the C30 was 3.37 Å, and the distance from the graphene layer was 4.57 Å. The lithium atom ends up bound to a carbon atom of the C30 with an adsorption energy of −1.551 eV. It is a strong reaction, but not as intense as in the pentagonal case. The Li atom transfers 0.0364 electrons to the surface. Figure 9 shows the PDOS of the interaction. Note the hybridization of the s and p orbitals of carbon with the p orbital of lithium between 1 eV and 3 eV and around 4 eV, with a weaker hybridization between −2 eV and −1 eV.
2.6.2. Doping with Ti
Figure 8b shows the initial and final configurations for the adsorption of a titanium atom on the surface. The initial distance between the metal atom and the plane defined by the opening of the C30 was 3.34 Å, and the distance from the graphene layer was 4.57 Å. The titanium atom ends up bound to two carbon atoms of the C30. The adsorption energy of the titanium atom is −5.435 eV. The Ti atom transfers 0.6179 electrons to the system. The interaction is intense, but not as much as in the pentagonal case. Figure 10 shows the PDOS of the interaction. We note the hybridization of the s and d orbitals of titanium with the p orbitals of the neighboring carbon atoms around −2 eV and between the Fermi energy and 5 eV.
2.6.3. Doping with Pt
Figure 8c presents the initial and final configurations for the adsorption of a platinum atom on the surface. The initial distance between the metal atom and the plane defined by the opening of the C30 was 3.34 Å, and the distance from the graphene layer was 4.57 Å. The platinum atom ends up bound to two carbon atoms of the C30. The adsorption energy of the platinum atom is −4.706 eV, a strong interaction with the surface but not as intense as in the pentagonal case. The Pt atom transfers 0.5141 electrons to the surface. Figure 11 shows the corresponding PDOS. We note the hybridization of the p orbitals of the carbon atoms with the s and d orbitals of platinum at around −2 eV, at about 1.5 eV, and below −4 eV.
Adsorption of Molecules on the Li-Doped Graphene-C30 (P) Surface
Adsorption of CO
There is no adsorption in this case.
Adsorption of CO2
The CO2 molecule adsorbs on this surface without dissociation (Figure 12a).
Figure 12. (a) Adsorption of CO2 on the Li-doped graphene-C30 system for the pentagonal base. The initial distance between the carbon atom of the CO2 molecule and the Li atom was 3.17 Å, and the distance from the graphene layer was 7.12 Å. The molecule was parallel to the graphene layer. The adsorption is without dissociation. (b) The PDOS for the adsorption of CO2 on the Li-doped graphene-C30 system for the pentagonal base.
Figure 12b shows the corresponding PDOS. We note the hybridization of the p orbital of the oxygen atom with the s orbital of lithium at around 3 eV.
Adsorption of CH4
There is no adsorption in this case.
Adsorption of O3
Figure 13a shows the initial and final configurations of the system for the adsorption of an ozone molecule. The molecule ends up bound to the lithium atom without dissociation. The adsorption energy is −1.777 eV, and using MD at 300 K we found that the Li-O3 particle remains close to the surface at that temperature.
Figure 13. (a) The adsorption of O3 on the Li-doped graphene-C30 system for the pentagonal base. The initial distance between the central oxygen atom of the O3 molecule and the Li atom was 3.015 Å, and the distance from the graphene layer was 7.15 Å. The molecule was perpendicular to the graphene layer. The adsorption is without dissociation. (b) The PDOS for the adsorption of O3 on the Li-doped graphene-C30 system for the pentagonal base.
Figure 13b shows the corresponding PDOS. Notice the weak hybridization of the p orbitals of the oxygen and carbon atoms with the s orbital of lithium between 0 and 2 eV.
Adsorption of Molecules on the Ti-Doped Graphene-C30 (P) Surface
Adsorption of CO
The CO molecule adsorbs on this surface without dissociation (Figure 14a).
Figure 14. (a) Adsorption of CO on the Ti-doped graphene-C30 system for the pentagonal base. The initial distance between the carbon atom of the CO molecule and the Ti atom was 4.18 Å, and the distance from the graphene layer was 7.34 Å. The molecule was parallel to the graphene layer. The adsorption is without dissociation. (b) The PDOS for the adsorption of CO on the Ti-doped graphene-C30 system for the pentagonal base.
Figure 14b shows the corresponding PDOS. Notice the hybridization of the p orbital of the carbon atom with the s and d orbitals of the titanium atom at around −2 eV and between 1 eV and 4 eV.
Adsorption of CO2
There is no adsorption in this case.
Adsorption of CH4
There is no adsorption in this case.
Adsorption of O3
Figure 15a shows the initial and final configurations of the system for the adsorption of an ozone molecule. The molecule dissociates into an oxygen atom and an oxygen molecule, both of which end up bound to the titanium atom. The adsorption energy of the ozone molecule is −6.3953 eV. The oxygen atom loses 0.2702 electrons, while the oxygen molecule gains 0.4085 electrons. Using MD at 300 K, we found that the Ti-O3 particle remains close to the surface at that temperature.
Figure 15. (a) Adsorption of O3 on the Ti-doped graphene-C30 system for the pentagonal base. The initial distance between the central oxygen atom and the Ti atom was 3.0 Å, and the distance from the graphene layer was 7.14 Å. The plane of the ozone molecule was parallel to the graphene layer. The adsorption is with dissociation. (b) The PDOS for the adsorption of O3 on the Ti-doped graphene-C30 system for the pentagonal base.
Figure 15b shows the corresponding PDOS. Notice a weak hybridization of the s orbital of the carbon atom with the s and d orbitals of the titanium atom and the p orbitals of the oxygen atoms at around 4 eV, and, between −6 eV and −4 eV, with the p orbitals of the oxygen atoms and the s orbital of the titanium atom.
Adsorption of Molecules on the Pt-Doped Graphene-C30 (P) Surface
Adsorption of CO
There is no adsorption in this case.
Adsorption of CO2
There is no adsorption in this case.
Adsorption of CH4
There is no adsorption in this case.
Adsorption of O3
Figure 16a shows the initial and final configurations of the system for the adsorption of an ozone molecule. The adsorption energy is −0.8521 eV, and the molecule dissociates into an oxygen atom and an oxygen molecule. The oxygen atom ends up bound to a carbon atom, while the oxygen molecule ends up bound to the platinum atom. The oxygen atom bound to the carbon atom transfers 0.1207 electrons, and the remaining part of the ozone molecule, the oxygen molecule bound to the Pt atom, gains 0.5665 electrons.
Figure 16. (a) Adsorption of O3 on the Pt-doped graphene-C30 system for the pentagonal base. The initial distance between the central oxygen atom and the Pt atom was 3.60 Å, and the distance from the graphene layer was 7.25 Å. The plane of the ozone molecule was parallel to the graphene layer. The adsorption is with dissociation. (b) The PDOS for the adsorption of O3 on the Pt-doped graphene-C30 system for the pentagonal base.
Figure 16b shows the corresponding PDOS. Notice a weak hybridization of the p orbital of the carbon atom with the p orbitals of the platinum and oxygen atoms at around 4.2 eV. The same hybridization is stronger below −4 eV.
Adsorption of Molecules on the Li-Doped Graphene-C30 (H) Surface
Adsorption of CO
There is no adsorption in this case.
Adsorption of CO2
Figure 17a shows the initial and final configurations of the system for the adsorption of a carbon dioxide molecule. The molecule adsorbs without dissociation, and one oxygen atom ends up bound to the lithium atom. The adsorption energy is −0.6491 eV, and the molecule transfers 0.0803 electrons to the system. The calculated recovery time at 300 K is 0.13 s, a good value for a sensor.
Figure 17. (a) Adsorption of CO2 on the Li-doped graphene-C30 system for the hexagonal base. The initial distance between the carbon atom of the CO2 molecule and the Li atom was 3.11 Å, and the distance from the graphene layer was 7.15 Å. The molecule was parallel to the graphene layer. The adsorption is without dissociation. (b) The PDOS for the adsorption of CO2 on the Li-doped graphene-C30 system for the hexagonal base.
Figure 17b shows the corresponding PDOS. Notice the hybridization of the p orbitals of the oxygen atom with the s orbital of the lithium atom at around 2 eV.
Adsorption of CH4
There is no adsorption in this case.
Adsorption of O3
Figure 18a shows the initial and final configurations of the system for the adsorption of an ozone molecule. The molecule ends up bound to the lithium atom without dissociation. The adsorption energy of the ozone molecule is −2.119 eV, and the surface transfers 0.2883 electrons to the ozone molecule. Using MD at 300 K, we found that the Li-O3 particle remains close to the surface at that temperature; it does not move away from the surface.
Figure 18. (a) Adsorption of O3 on the Li-doped graphene-C30 system for the hexagonal base. The initial distance between the central oxygen atom and the Li atom was 3.26 Å, and the distance from the graphene layer was 7.35 Å. The plane of the ozone molecule was parallel to the graphene layer. The adsorption is without dissociation. (b) The PDOS for the adsorption of O3 on the Li-doped graphene-C30 system for the hexagonal base.
Figure 18b shows the corresponding PDOS. Notice the hybridization of the p orbitals of the oxygen with the s orbital of the lithium atom between 3 eV and 4 eV. There is a weaker hybridization below the Fermi energy.
Adsorption of Molecules on the Ti-Doped Graphene-C30 (H) Surface
Adsorption of CO
There is no adsorption in this case.
Adsorption of CO2
There is no adsorption in this case.
Adsorption of CH4
There is no adsorption in this case.
Adsorption of O3
Figure 19a shows the initial and final configurations of the system for the adsorption of an ozone molecule. The adsorption energy is −0.8214 eV, and the molecule dissociates into two fragments during adsorption: an oxygen atom and an oxygen molecule. The oxygen atom is bound to a carbon atom, and the oxygen molecule remains close to the surface. Using MD at 300 K, we found that the O2 molecule remains close to the surface at that temperature; it does not move away from the surface.
Figure 19. (a) Adsorption of O3 on the Ti-doped graphene-C30 system for the hexagonal base. The initial distance between the central oxygen atom and the Ti atom was 3.97 Å, and the distance from the graphene layer was 6.90 Å. The plane of the ozone molecule was parallel to the graphene layer. The adsorption is with dissociation into an oxygen atom and an oxygen molecule. (b) The PDOS for the adsorption of O3 on the Ti-doped graphene-C30 system for the hexagonal base.
Figure 19b shows the corresponding PDOS. Notice the hybridization of the p orbitals of the carbon and oxygen atoms with the d orbitals of the titanium atom between 2 and 4 eV and below the Fermi energy.
Adsorption of Molecules on the Pt-Doped Graphene-C30 (H) Surface
Adsorption of CO
The CO molecule adsorbs on this surface without dissociation (Figure 20a). The surface transfers 0.0322 electrons to the carbon monoxide molecule.
Figure 20. (a) Adsorption of CO on the Pt-doped graphene-C30 system for the hexagonal base. The initial distance between the center of the CO molecule and the Pt atom was 3.0 Å, and the distance from the graphene layer was 7.32 Å. The molecule was parallel to the graphene layer. The adsorption is without dissociation. (b) The PDOS for the adsorption of CO on the Pt-doped graphene-C30 system for the hexagonal base.
Figure 20b shows the corresponding PDOS. We can see the hybridization of the p orbitals of the carbon atom with the s orbitals of the platinum atom at around 3 eV and at about 1.2 eV. We can also notice a weak hybridization of the p orbitals of the carbon atom with the d and s orbitals of the platinum atom below −1 eV.
Adsorption of CO2
There is no adsorption in this case.
Adsorption of CH4
There is no adsorption in this case.
Adsorption of O3
Figure 21a shows the initial and final configurations of the system for the adsorption of an ozone molecule, which occurs with dissociation and an adsorption energy of −1.43 eV. The molecule splits into two parts: an oxygen atom and an oxygen molecule. Using MD at 300 K, we found that the O2 molecule remains close to the surface at that temperature; it does not move away from the surface. The oxygen atom, which ends up bound to the platinum atom, transfers 0.1207 electrons, and the surface transfers 0.5665 electrons to the remaining fragment of the ozone molecule, the oxygen molecule, which remains close to the surface. Figure 21b shows the corresponding PDOS. We can see the hybridization of the p orbitals of the oxygen atom with the s and d orbitals of the platinum atom at around 2 eV and at about −1.75 eV. We can also notice a weak hybridization of the p orbitals of the oxygen atom with the d and s orbitals of the platinum atom below −2 eV.
Materials and Methods
We used the GGA approximation for the exchange and correlation energies in the Perdew-Burke-Ernzerhof (PBE) form [26], with Troullier-Martins norm-conserving pseudopotentials [27]. We performed structural relaxations using the Quantum ESPRESSO code package [28], which uses periodic boundary conditions. We used an energy convergence threshold of 1.0 × 10−6 eV, a cutoff energy of 1100 eV, and a force threshold of 1.0 × 10−5 eV/Å. We considered 40 k-points within the Monkhorst-Pack special k-point scheme for Brillouin-zone integrations [29], with a separation of 0.083 Å−1.
To check the pseudopotentials, we minimized the energy of the different systems. We obtained a Li lattice parameter of 3.495 Å (the experimental value is 3.510 Å) [30]; for titanium, we obtained a = 2.863 Å and c = 4.544 Å (the observed values are 2.950 and 4.683 Å, respectively) [30]; in the case of Pt, we calculated a lattice parameter of 2.898 Å (the experimental value is 2.924 Å). With the same approach, we obtained the bond lengths and angles of the different pollutant molecules considered. Figure 22 shows our results, which agree with the experimental values.
In our simulations, the adsorption energy is

$E_{ads} = E_{Surf+Mol} - [E_{Surf} + E_{Mol}]$,  (1)

where $E_{Surf+Mol}$ is the energy of the final system, and $[E_{Surf} + E_{Mol}]$ corresponds to the initial configuration, i.e., the energy of the surface without interaction with the molecule plus the energy of the isolated molecule. We calculated the recovery time (τ) from the Eyring transition-state theory using the expression [31,32]

$\tau = \dfrac{h}{k_B T}\, \exp\!\left(-\dfrac{E_{ads}}{k_B T}\right)$.  (2)

In Equation (2), h is Planck's constant, $k_B$ is Boltzmann's constant, $E_{ads}$ is the adsorption energy, and T is the absolute temperature. The desirable range of values for the recovery time is between $10^{-2}$ and 10 s, implying, at 300 K, adsorption energies in the range (−0.6428, −0.8215) eV.
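As a quick numeric check of Equation (2), the short sketch below evaluates the Eyring recovery time for a few of the adsorption energies reported above; the exact values depend on the precision of the constants used, so it should be read as an illustration rather than an exact reproduction of the reported numbers.

```python
# Sketch: evaluate tau = h/(kB*T) * exp(-E_ads/(kB*T)) for adsorption
# energies (in eV) reported in this work.
import math

H_EV = 4.135667696e-15   # Planck's constant in eV*s
KB_EV = 8.617333262e-5   # Boltzmann's constant in eV/K
T = 300.0                # temperature in K

def recovery_time(e_ads_ev: float, t: float = T) -> float:
    """Recovery time in seconds for a (negative) adsorption energy in eV."""
    return H_EV / (KB_EV * t) * math.exp(-e_ads_ev / (KB_EV * t))

for label, e_ads in [("CO2 on Li-doped H surface", -0.6491),
                     ("O3 on Ti-doped H surface", -0.8214),
                     ("O3 on Li-doped P surface", -1.777)]:
    # Strong binding (large |E_ads|) gives recovery times far above 10 s,
    # i.e., unsuitable for a reusable sensor at room temperature.
    print(f"{label}: tau = {recovery_time(e_ads):.3g} s")
```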
Discussion
We performed computational simulations to investigate the adsorption of polluting molecules on graphene-semifullerene (C30) surfaces, considering two C30 geometries: a hexagonal and a pentagonal base. We found it possible to dope all surfaces with the metals Li, Ti, and Pt. The methane molecule did not adsorb on any surface. Finally, we found that both surfaces always adsorb the ozone molecule. When Ti or Pt is the dopant, the O3 molecule always dissociates into an oxygen molecule and an oxygen atom. For the P surface, the adsorption energies are −6.3953 and −0.8521 eV for the Ti- and Pt-doped surfaces, respectively. Furthermore, the adsorption energies for the Ti- and Pt-doped H surface are −0.82 eV and −1.43 eV, respectively. In the case of Li, the O3 molecule adsorbs without dissociation: the adsorption energy is −1.777 eV for the P surface and −2.119 eV for the H surface.
At 300 K, the P surface would not act as a suitable sensor in any case. The H surface would be a sensor for O3 with Ti as the dopant (τ = 9.97 s) and for CO2 with Li as the dopant (τ = 0.13 s).
"Chemistry"
] |
Behavior of a net of fibers linked by viscous interactions: theory and mechanical properties
This paper presents an investigation of the macroscopic mechanical behavior of highly concentrated fiber suspensions for which the mechanical behavior is governed by local fiber-fiber interactions. The problem is approached by considering the case of a net of rigid fibers of uniform length, linked by viscous point interactions of power-law type. Those interactions may result in local forces and moments located at the contacting point between two fibers, which are respectively power-law functions of the local linear and angular velocity at this point. Assuming the existence of an elementary representative volume whose size is small compared to the size of the whole structure, the fiber net is regarded as a periodic assembly of identical cells. Macroscopic equilibrium and constitutive equations of the equivalent continuum are then obtained by the homogenization method for discrete and periodic media, based on the use of asymptotic expansions. Depending on the order of magnitude of the local translational and rotational viscosities, three types of equivalent continua are proved to be possible. One of them leads to an effective Cosserat medium, the other ones being usual Cauchy media. Lastly, formulations that enable an effective computation of the constitutive equations are detailed. They show that the equivalent continuum behaves like an anisotropic power-law fluid.
Introduction
Understanding the behavior of short fiber-reinforced fluids is of great interest in many industrial applications such as the processing of polymeric materials and other composites. In short fiber systems processing, fibers noticeably influence the flow of the fluid matrix and, conversely, the flow of the matrix determines the spatial distribution and orientation of the fibers, which makes the modeling of such a problem complex.
Many relevant theoretical studies on this topic, forming an important part of the theory of suspensions, have tried to establish the relation between the microscopic properties of such materials, given by the behavior, geometry, orientation, and distribution of fibers, and their macroscopic mechanical behavior, under some restrictive assumptions. They generally apply to the case of rigid straight fibers immersed in a Newtonian fluid when short-range interactions between fibers may be neglected, that is to say, for dilute and semi-dilute suspensions (Batchelor, 1971; Advani, 1994).
However, those theories cannot properly account for the effects of local interactions between fibers, which may be due to dry friction effects or localized viscous forces (Toll and Manson, 1994), and their validity is therefore limited to quite dilute suspensions. This is not the case in many industrial processes such as the compression of Sheet Molding Compounds (SMC) or Glass Mat Thermoplastics (GMT), or the injection of Bulk Molding Compounds (BMC) (Dumont et al., 2003; Le Corre et al., 2002). Moreover, the complete solving of theoretical problems often requires further statistical assumptions that make the effective calculation of the behavior possible only in the case of perfectly unidirectional or perfectly isotropic orientations of fibers (Fredrickson and Shaqfeh, 1989; Shaqfeh and Fredrickson, 1990).
Furthermore, almost no analytical solutions can be found in the literature for the case of non-Newtonian fluids reinforced with fibers. This is due to the difficulty of calculating velocity fields around fibers as in the Newtonian case (Jeffery, 1922), and to the non-applicability of the superposition principle often used in that case. However, according to the work of Batchelor (1971), the behavior at high concentration regimes is mainly dominated by short-range interactions between fibers, the contributions to the total stress of the matrix and of the fiber-matrix interaction then becoming negligible. The critical fiber concentration for such an assumption to be valid cannot be established in a general way; it largely depends on the fibers' geometry, but it is also conditioned by the nature of the matrix and of the interactions.
Assuming a highly concentrated regime, the macroscopic behavior of a suspension can be derived from some simple micro-mechanical considerations, as is done in the works of Toll and Manson (1994), Gibson and Toll (1999), and Servais et al. (1999) for the case of planar fibers linked by a combination of dry and non-linear viscous interactions. However, the interesting results obtained by these authors lack generality, for they only apply to simple viscometric flows such as biaxial extension or simple shear flow.
As a first step towards a more general approach to modeling the behavior of short fiber systems, this paper presents a homogenization method suitable for modeling highly concentrated fiber suspensions linked by non-linear viscous interactions of power-law type. This method is an application of the homogenization method for discrete and periodic media, initially developed by Moreau and Caillerie (1995), Tollenaere and Caillerie (1998), and Pradel (1998) for the modeling of periodic trusses or foams in the scope of elasticity. It is based on the use of asymptotic expansion methods for periodic homogenization proposed by Bakhvalov and Panasenko (1989), Bensoussan et al. (1978), and Sanchez-Palencia (1980), adapted to discrete problems. Starting from a discrete problem at the scale of the fibers (microscopic scale), the proposed method enables finding the essential properties of the equivalent continuum, that is to say, the general form of its balance and constitutive equations at the macroscopic scale.
Section 2 details the basic assumptions of the upscaling method, the notations adapted to the discrete geometrical description of the net, and the assumptions relative to the modeling of the interactions between fibers. In Section 3, the upscaling process, based on asymptotic expansions, is discussed and preliminary results are exposed. This process then enables the discrete balance equations of the fiber net to be transformed into continuous ones, which leads to the definition of the macroscopic stress tensors of the equivalent continuum (Section 4). Depending on the fiber-fiber interaction laws at the microscopic level, three types of macroscopic constitutive equations are obtained, the effective computation of which is then discussed (Section 5). Finally, in Section 6, simple considerations on those constitutive equations show that in some cases the equivalent continuum may be modeled as an anisotropic power-law fluid.
For the sake of simplicity, the problem is exposed in the case of a planar fiber net, but, as will be clear in the equations, the extension to a three-dimensional problem would be straightforward.
Notations
Boldface symbols denote tensors, the order of which is indicated by the corresponding number of underlinings. Dots and colons are used to indicate tensor products contracted over one and two indices respectively. In the usual Cartesian frame, this leads to using Einstein's summation convention over repeated indices.
The tensorial product is denoted by the symbol ⊗, e.g., $(a \otimes b)_{ij} = a_i b_j$. The gradient of a vector $a$ with respect to the space variables $x_i$ is denoted $\nabla a$ and defined as $(\nabla a)_{ij} = \partial a_i / \partial x_j$. The same convention is used for higher-order spatial derivatives, e.g., $(\nabla \nabla a)_{ijk} = \partial^2 a_i / (\partial x_j \, \partial x_k)$.
Basic assumptions
The structure under consideration is a planar net made of rigid cylindrical fibers of uniform length l, which will be called bars. The plane is characterized by the Cartesian reference frame R = (O; e1, e2), where O is an arbitrary reference point (see Fig. 1).
The discrete homogenization technique we propose requires two further assumptions. The first assumption is that the structure may be considered periodic. The fiber net is thus regarded as an assembly of identical cells, as shown in Fig. 1, characterized by two vectors of R, d1 and d2, called periodicity vectors. The fiber net dimensions are therefore L1 and L2, which are such that

$L_1 = N_1 \lVert d_1 \rVert, \qquad L_2 = N_2 \lVert d_2 \rVert,$  (1)

where N1 is the number of cells in the d1 direction and N2 the number of cells in the d2 direction. The second assumption is that the fiber net is made of a huge number of cells Nc, so that the scale separation parameter ε, defined as

$\varepsilon = 1/\sqrt{N_c},$  (2)

may be considered a very small parameter. Condition (2) is equivalent to the following, and so can be given another meaning:

$\varepsilon = \sqrt{s/S} = d/L \ll 1,$  (3)

where s and S are respectively the surface of one cell and the surface of the whole fiber net, d = √s represents the characteristic length of the microstructure, and L = √S is the characteristic length of the whole net. Let us assume that d has the same order of magnitude as the length of the fibers. Condition (3) is therefore a condition of scale separation; it implies that the size of the fiber net should be large in comparison with the size of the fibers.
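As a quick numeric illustration of condition (3), assuming, for instance, a cell of the order of a fiber length and a net a hundred times larger:

```python
# Scale separation check: epsilon = d/L = 1/sqrt(Nc); numbers are illustrative.
d = 1.0          # characteristic cell size (same order as the fiber length l)
L = 100.0        # characteristic net size
epsilon = d / L  # condition (3)
n_cells = 1.0 / epsilon**2
print(f"epsilon = {epsilon:.4f}, corresponding to Nc = {n_cells:.0f} cells")
```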
Numbering system
The net of fibers is made of a discrete set of rigid bars. Each bar is supposed to be linked to the rest of the net by one or several interactions, located at the contacting point between two fibers. The geometry and topology of the net are therefore entirely defined by the spatial and angular position of each fiber and by the connectivity of each fiber with the rest of the assembly.
Centers of cells are first located by a vector of integers a whose components are (a1, a2). Their positions are given by the vectors p0(a) (see Fig. 1):

$p_0(a) = a_1 d_1 + a_2 d_2.$  (4)

The assumption of periodicity then suggests a system of numbering of bars and links reflecting the regularity of the microstructure. Each bar of the net is numbered by b̃ = (b, a^b̃), which means that the bar b̃ is the b-th bar of the cell a^b̃. The set of all bars of the net is denoted B.
In the same way, the action of the bar c̃ on the bar b̃ is denoted either by k̃ = (c̃/b̃) or by k̃ = (k, a^k̃), the set of connections of the net being denoted C. Bars b̃ and c̃ can respectively be considered as the interior and the exterior of the interaction k̃, so the following notation will be used: b̃ = I(k̃), c̃ = E(k̃), and thus k̃ = (E(k̃)/I(k̃)). We will consider that the cell to which k̃ belongs is the belonging cell of the bar on which the action is exerted, i.e., the bar b̃ = I(k̃), so that a^k̃ = a^b̃. The reciprocal interaction (b̃/c̃) will be denoted by t k̃ = (t k, a^{t k̃}) and will therefore belong to the cell a^c̃.
In the notationsb = (b; ab) andk = (k; ak ), integers b and k are the numbers of bars and connections in a reference cell the sets of which are denoted B R and C R . In this work, we will assume that the size of cells d is about the same order as the length of ÿbers l, and that d ¿ l. This implies that interactions can take place only between two ÿbers of the same cell or between ÿbers of two neighboring cells. The exterior of the connectionk = (k; ak ) will therefore be located in the cell ak or in a neighboring one. Anyway, it will belong to the cell ak + k , where k is a vector of integers whose components take their values in {−1; 0; +1}. Notice that the periodicity assumption causes k to be independent of the position of the cell.
Geometrical description
As illustrated in Fig. 2, the position of a barb is deÿned by the position of its center Pb, located by the vector p(b) = OPb, and by its unit vector eb. The periodicity of the net implies that p(b) can be partitioned as where p 0 (a) is the macroscopic position of the belonging cell ofb in the ÿber net, and p b 1 is the local position of bar b in that cell. It is to be noted that p b 1 only depends on the considered bar b and not on the macroscopic position for the net is assumed to be perfectly periodic. Some extension to this restriction could be achieved by considering a quasi-periodic ÿber net, as done by Tollenaere and Caillerie (1998), but is not in the scope of this paper, which is only interested in obtaining the equivalent continuum's constitutive equations.
Vectors d i are small compared to the size L of the net, so they can be written as d i = i where the vectors i are macroscopic vectors, independent of . Let us now introduce the pseudo continuous variable b deÿned by b = ( 1 ; 2 ) = ab. According to this notation, Eq. (6) becomes p(b) = 1 1 + 2 2 + p b 1 = p 0 ( ) + p b 1 : As illustrated in Fig. 3, macroscopic positions of bars are now parameterized by the vector of reals which is a variable of ⊂ [0; 1]×[0; 1]. is the reference parametric space, where the net is made of square cells of size . As is assumed to tend to zero in the use of asymptotic expansion methods, tends to become, and will be used as a continuous variable, even if, strictly speaking, it only takes values such as = a for a ÿnite value of . Let us introduce G , the gradient ∇ p 0 of the geometrical transformation → p 0 ( ). According to Eq. (7), the components of G relative to e i ⊗ e j can be expressed as and the Jacobian g of the transformation is
Local kinematics and interactions
The problem being planar in R and ÿbers being considered as rigid bars, their motion is a rigid body planar motion. Kinematics of a barb can therefore be deÿned by: • the velocity of its center Pb, denoted C(b) • its angular velocity !(b) = !(b)e 3 , where e 3 is the unit vector normal to the plane of ÿbers. The velocity of a point Q of barb is then where is the curvilinear abscissa of Q on the barb with respect to the center Pb as shown in Fig. 4. is a small variable; its order of magnitude is about the size of the cell d so it can be written as = d = L , where is independent of the size of the problem L. In the following, it will be convenient to introduce the new variable = L!, which enables us to rewrite Eq. (10) as The actionk of bar E(k) on bar I(k) is supposed to take place at a point and to be of non-linear viscous type, due to the relative velocities of both bars at their contacting point. In this paper, we will consider the case where those interactions follow a power law of the relative velocities. The interactionk can therefore be partitioned into: • a force fk proportional to the di erence of velocity of the two bars Ck at their contacting point, denoted Qk : where the power law index m is a real scalar ranging from 0 to 1. By deÿnition, Ck , so making use of Eq. (11), its expression reads In the last equation, k (resp. t k ) is the normalized local abscissa of Q k on the bar b = I(k) (resp.c = E(k)).
• a moment Mk (Qk ) relative to Qk , proportional to the di erence of angular velocities of both bars: where !k denotes the di erence of angular velocities !(E(k)) − !(I(k)).
In order to normalize moments with respect to forces, vectors mk (Qk ), such as Mk (Qk )= Lmk (Qk ), will be used. In the following, they will also be called moments, even if they are homogeneous to forces. The moment interaction law can then be rewritten as where ÿ k = B k =L m+1 . The moment of actionk expressed at point Pk , the center of bar I(k), will be denoted mk and will have the following expression: In Eqs. (12) and (15), the coe cients k and ÿ k are respectively called translational and rotational viscosity. According to the assumption of periodicity, they only depend on the considered interaction k inside the reference cell. Those viscosities are chosen such that and where k 0 and ÿ k 0 are strictly positive real scalar constants of same order of magnitude and where the parameter q is a real number, characteristic of the relative order of magnitude of ÿ k with respect to k . In the following, only cases where q ¿ 0 will be examined, for situations where ÿ k would be much greater than k are not physically likely to happen in the considered suspensions. For simplicity, three cases will be discussed. They may represent three di erent situations one might expect when studying a speciÿc application: • Case 1: q=0, rotational viscosities have the same order of magnitude as translational ones. • Case 2: q = 1, rotational viscosities are "quite" small compared to translational ones. • Case 3: q = 1 + m, rotational viscosities are very small or negligible.
As it will be clear in the following (see Section 5.3), cases where q = 1 + m + j, j being a strictly positive integer are equivalent to Case 3 with ÿ k 0 = 0, so the results that will be drawn in Case 3 will also apply to such physical situations.
The analysis of intermediate cases, where q is a real positive number, was also carried out. They were shown to lead to three types of macroscopic descriptions (i.e. equivalent continua) which are identical to the descriptions that can be deduced from the analysis of the three proposed cases. The reader should therefore keep in mind that results exposed in the following sections are not restrictive.
Upscaling process
At this stage, it is not possible to give an explicit formulation of forces and moments in terms of the macroscopic velocity gradient of the suspension, as it is generally done in the classical theory of ÿber suspensions (Batchelor, 1971;Gibson and Toll, 1999). Such a process would require a further assumption on the velocity ÿelds relative to each bar which is not required here. As it will be clear in the following, the upscaling process used in our homogenization method will provide local forces and moments as implicit functions of a macroscopic velocity gradient which might be assimilated to the bulk velocity gradient of the suspension, as done in the previously cited works. But this property will be a result of the homogenization process and not an assumption.
Velocity ÿelds
Taking advantage of the smallness of the parameter and of the assumption of periodicity, velocity and angular velocity ÿelds relative to a barb are expanded in discrete asymptotic series expansion in powers of (Moreau and Caillerie, 1995;Tollenaere and Caillerie, 1998). Thus, they are written as where functions C b n and b n are continuous functions of that generally depend on the bar b. re ects the macroscopic variation of those functions whereas index b is a local variable, depending on the corresponding bar ofb in the reference cell.
For a pair of connected barsk = (c;b), let us recall thatb is supposed to belong to the cell located by , whereasc may belong to a neighboring cell located by + . Thus C(c) = C E(k) ( + k ) is expanded using a Taylor expansion around , using the fact that k is very small with respect to : Carrying asymptotic expansion (19) into Eq. (21) and identifying each order of powers of , the expansion of C(c) reads The expansion of the angular velocity (c) is obtained exactly in the same manner.
Velocity di erence ÿelds
In the same way, velocity di erences Ck (13) and k = E(k) − I(k) can be expanded: By substituting expansion (19) into Eq. (13), and making use of Taylor expansions (22), velocity di erences C k i can be identiÿed at each order of powers of . For the two ÿrst orders ( 0 and 1 ), we obtain For the two ÿrst orders of angular velocities di erence, one obtains
Equilibrium of the net
In the scope of this paper, inertia e ects will be neglected and no external forces or torques will be supposed to act on bars of the net. Therefore, the equilibrium equations of a given barb can be described by the two following sets of equations: where C(b) is the set of connections of barb. The expansion of equilibrium equations of the whole ÿber net can be simpliÿed by the use of a virtual power formulation. This technique, developed by Moreau and Caillerie (1995) and Tollenaere and Caillerie (1998) enables the macroscopic behavior to be obtained fast and avoids the need to consider expansions of the equilibrium equations at higher orders of powers of . Thus using two sets of virtual ÿelds denoted ( and '), problem (28) is equivalent to the following virtual formulation: Within such a summation, it can be noticed that if the connectionk = (c=b) belongs to C(b), connection tk = (b=c) belongs to C(c). Summation on the bars can therefore be transformed into a summation on the set of connected pairs (c=b) such thatc ¿b. Doing so, formulation (29) can rewritten as This last equation was obtained using the action and reaction theorem which in this case can be shown to imply where vector L k denotes the vector PbPc and is therefore deÿned as
From discrete to continuous
Suppose that ub is any vectorial ÿeld of the spatial variable with the following two properties: Then it can be shown that when tends to zero, we have using the deÿnition of integrals by Riemann sums as detailed in Moreau (1996). The latter property is important and will often be used in the following developments. It enables us to transform the discrete problem into a continuous one.
Self-equilibrium at the lower orders
In a ÿrst stage let us assume that both order zero velocities C b 0 and angular velocities b 0 actually depend on the considered bar, so that order zero velocity di erences C k 0 and k 0 are non null functions.
3.4.1. Self-equilibrium of forces According to asymptotic expansions (23), expressions such as Ck m−1 have an asymptotic expansion of the following form: Subsequently, at this stage, the asymptotic expansions of interaction forces, deÿned by Eq. (12), can be written as Their asymptotic expansion may therefore be written in the following way: with, at the lower order of powers, Taking into the virtual power formulation (30) virtual velocity ÿelds (b) and '(b) such that ∀b; then carrying into Eq. (30) asymptotic expansions (36) and making use of property (33), we obtain, when tends to 0: This relation being veriÿed for any continuous function 1 , forces of the lower order are solution of the lower order self-equilibrium equation of the reference cell: Then carrying into this last equation the constitutive relationship (37), we obtain a relation that must be satisÿed by the ÿrst order velocities of bars in the reference cell. Velocities C b 0 are actually solutions of the problem In the special case where b = C b 0 , one checks that order zero velocities have to satisfy the condition Translational viscosities being strictly positive quantities, Eq. (42) means that all the bars of the same cell have the same linear velocity at the macroscopic scale: According to such a property, we also deduce that Such a result is however inconsistent with the asymptotic expansion (36) proposed for interaction forces which was based on the assumption that Ck 0 = 0. Actually, according to property (43), expressions such as Ck m−1 have an asymptotic expansion of the form: Subsequently, asymptotic expansions of interaction forces can be written as with, at order 0 ,
Self-equilibrium of moments
In the assumption of non null order zero angular velocities, by substituting (46) and expansion (23) into the interaction law (16), one obtains the following expansion of interaction moments: Let us now distinguish the three cases mentioned in Section 2.4. Cases 1 and 2 can be treated in the way as was done for forces. Taking into the virtual power formulation of equilibrium (30) virtual velocity ÿelds (b) and '(b) such that ∀b; and then carrying into Eq. (30) asymptotic expansions (49), making use again of property (33), we obtain, when tends to 0 According to property (33), all the higher order terms vanish when tends to zero. The last relation being veriÿed for any function 2 , order zero angular velocities are then shown to solve the following self-equilibrium problem on the reference cell: This problem is strictly equivalent to the problem satisÿed by order zero velocities so, for cases 1 and 2, we also obtain ∀b ∈ B R ; b 0 ( ) = 0 : No such simple conclusion can be drawn in case 3, and order zero angular velocities should actually depend on the considered bar b in the reference cell. However, asymptotic expansions of moments are now expressed as described in Eq. (49), but the exponent 1 + q − m now equals 2. For the three cases, asymptotic expansions of moments thus can be written as with the following expressions of ÿrst order terms: • Case 1: • Case 2: • Case 3:
Equilibrium of the equivalent continuum
Before detailing the obtaining of the equivalent continuum of the ÿber net, let us recall some fundamental results from the mechanics of continuous media with internal rotation. The equivalent continuous description deduced from the discrete homogenization process will actually prove to be consistent with such general results.
Continuous media with internal rotation
A medium for which microscopic kinematics may imply rotational motion such as granular materials, foams or ÿber suspensions, may be considered as a Cosserat medium (Cosserat and Cosserat, 1909), or more generally as a continuous medium with internal rotations. Local kinematics of such a medium is characterized by a velocity ÿeld C(x) and an angular velocity ÿeld (x) and, as described by Eringen (1968) and Germain (1973), the virtual power formulation of its equilibrium reads ∀(C * ; * ); − ( : ∇ C * + · * + Ä : ∇ * ) d where C * and * are virtual velocity and angular velocity ÿelds. In Eq. (61), represents the stress tensor, Ä the couple-stress tensor (due to local rotations) and the micro-stresses vector. Vectors f and c respectively denote the external volume forces and moments. The principle of objectivity causes to be directly linked to by the relation: tensor e being the permutation tensor whose deÿnition is given in Appendix A and A is the antisymmetric part of . Then by splitting ∇ C * into its symmetric and antisymmetric parts D * and R * and accounting for relation (62), formulation (61) may also be rewritten as Such a formulation leads to the following local equilibrium equations: and implies, for a viscous material, constitutive equations of the generic type: In the special case where Ä and external moments c actually happen to be null or negligible, ∇ no longer in uences the behavior. It is then clear from Eqs. (64) that vector e · A is also null or negligible. In such a case, constitutive equations (65) simplify to The symmetry condition (67)
Equilibrium of the ÿber net
For the ÿber net, assuming that external forces and moments are null, the virtual power formulation of the equivalent continuum is obtained by taking in formulation (30) smooth "macroscopic" virtual velocity ÿelds and ' such that ∀b; (b) = 0 ( ) and '(b) = ' 0 ( ): Then when tends to zero, using Eq. (33), forces and moments can be shown to be solutions of the problem: which, using the property (a · b) · c = b ⊗ c : a, can also be written as In this last formulation, the following stress tensors, deÿned in the parametric space , were introduced: The formulation of the macroscopic equilibrium of the net in terms of the velocity gradients in the physical space are simply obtained by making the change of variables → p 0 ( ) in formulation (71). According to deÿnition (8), the relation between the gradient of a vectorial ÿeld u with respect to , denoted ∇ u, and its gradient with respect to x = p 0 , denoted ∇ x u, is given by The macroscopic equilibrium of the ÿber net in the physical space then reads This last problem is the virtual power formulation of the equilibrium of a Cosserat continuous medium without any external forces or moments, as detailed in Section 4.1. Its local equilibrium is governed by the following balance equations: Therefore, the state of stresses inside the ÿber net is deÿned by the three following tensors: 0 = g −1 G · S 0 : stress tensor; Ä 0 = g −1 G · M 0 : couple stress tensor; 0 = g −1 Z 0 : micro-stresses vector: In accordance with the general theory of Cosserat media exposed in Section 4.1, the macroscopic stress tensor 0 is non symmetric, and its antisymmetric part is directly linked to 0 by the relation 0 = e : 0 = e : A 0 : This general property of Cosserat media can easily be checked in the case of the ÿber net problem, as shown in Appendix A.
As visible in Eqs. (78) and (72)- (74), the state of stress of the equivalent continuum happens to be directly related to ÿrst order interaction forces and moments. The sti ness of the ÿber net will therefore by closely linked to its density of connections. This remark is consistent with the results obtained by Servais et al. (1999) in the case where dry friction between ÿbers may be neglected as well as local moments.
Constitutive equations of the equivalent continuum
As Eqs. (72)-(74) show, the determination of the macroscopic state of stress ( 0 ; Ä 0 ; 0 ) in the ÿber net requires the determination of forces and moments of order 0 , f k 0 and m k 0 . Nevertheless, with the chosen local interaction relations, those quantities directly depend on the value of kinematic variables C b 1 , b 0 and b 1 , as well as on the macroscopic velocity gradient ∇ C 0 . To fully determine the constitutive equations corresponding to the three cases deÿned in Section 2.4, further equilibrium formulations are required in order to enable the determination of local kinematic variables.
Such formulations will necessarily depend on the local interaction laws. Here again, one has to distinguish three cases, depending on the relative magnitude of rotational viscosity ÿ k with respect to translational one k , characterized by the parameter q (see Section 2.4).
Case 1: q = 0
According to results (43) and (53), order zero interaction laws become Carrying into the general equilibrium formulation (30) the virtual functions and making tend to zero, one gets This relation being satisÿed for all ÿelds 1 and 2 , forces f k 0 and moments m k 0 are solutions of the problems: Formulations (86) and (87) are respectively force and moment order zero self-equilibrium equations. They are strictly equivalent to the following non-linear systems: Constitutive equations of the equivalent continuum can then be calculated considering as given the macroscopic ÿelds ∇ C 0 , ∇ 0 and 0 . The computation gives velocities C b 1 and b 1 as functions of ∇ C 0 , ∇ 0 and 0 , then Eqs. (72)- (74) and (78) give 0 , Ä 0 and 0 in terms of those ÿelds, which provides constitutive laws. Such a computation obviously requires a numerical implementation of problems (88) and (89). This will explicitly provide the macroscopic stress tensors as functions of the macroscopic ÿelds and can be achieved by the use of any suitable numerical methods for non-linear systems. Subsequently, in case 1, according to local behavior equations (80) -(83), the equivalent continuum is a general Cosserat medium whose constitutive relationships are of the following type: which is consistent with the general formulation (65) obtained by continuous media theory (see Section 4.1).
Case 2: q = 1
In this case, as shown in Section 3.4 order zero moments are null. The macroscopic couple-stress tensor Ä 0 is therefore automatically null, and from Eqs. (77), the micro-stresses vector 0 is also null. Property (79) then immediately causes the antisymmetric part of the macroscopic stress tensor to be null. Thus, the equilibrium of the equivalent continuum does not imply any local moment and 0 is a symmetric tensor.
In this case, the only constitutive equation to determine is therefore Eq. (72), which only requires the determination of forces f k 0 . Thanks to property (53), their expression is the same as in case 1; they are deÿned by Eq. (80), with C k 1 given by Eq. (82). Then, adopting the same technique as in case 1, order zero forces f k 0 can be proved to solve the self-equilibrium (86), which is strictly equivalent to the following non-linear system: Furthermore, property (79) implies an additional relation on forces f k 0 which reads This last relation shows that 0 can theoretically be expressed as a function of ∇ C 0 . One then notices that Eqs. (91) and (92) enable C b 1 and 0 to be calculated in terms of the macroscopic ÿeld ∇ C 0 . Here again, an explicit determination of the macroscopic stress tensor 0 can be achieved by the numerical solution of the non-linear system formed by both equations.
Finally, in this case, the ÿber net's equivalent continuous medium happens to be analogous to the special case of the continuous medium discussed in Section 4.1, governed by the classical local equilibrium equation 20 with a constitutive equation of the type 0 = 0 (∇ C 0 ; 0 (∇ C 0 )); where 0 is a symmetric tensor. The relation between 0 and ∇ C 0 , is obtained in an implicit way by ensuring the symmetry condition. Furthermore, in accordance with the general theory, because of the symmetry of 0 , only the symmetric part of the macroscopic velocity gradient ∇C 0 contributes to the total dissipated mechanical power. This causes 0 to depend only on the macroscopic strain rate D 0 , deÿned as So in Eq. (94) ∇ C 0 may be replaced by D 0 and the equivalent continuum exhibits a general uid-like behavior.
Case 3: q = 1 + m
In this case, moments of order 0 are immediately null, which causes Ä 0 and 0 to be null. As in case 2, the equivalent continuum is a Cauchy medium, governed by local balance equation (93), and the stress state is deÿned by the single tensor 0 . Its determination, according to Eq. (72), requires the determination of order zero interaction forces f k 0 . Here again, the set of self-equilibrium formulations (86) and (87) can be obtained by the same process as in the two previous cases, but Eq. (87) no longer brings further information on order zero angular velocities b 0 . Those variables now depend on the considered bar in the reference cell.
Forces f k 0 are therefore deÿned by They solve Eq. (86), which is equivalent to the non-linear-system: the unknowns of which are now C b 1 and b 0 . It therefore brings 2 equations per bar whereas 3 unknowns per bar are to be determined, ∇ C 0 being considered as data.
It is to be noted that in this case, order one moments m k 1 , deÿned by imply the same kinematic variables as f k 0 . The missing equations thus can be obtained by taking in Eq. (30) the virtual functions (b)=0 and '(b)= 2 ( )' b leading to the moments order one self-equilibrium following formulation: Making use of the action-reaction theorem (31), this equation can be transformed and proved to be equivalent to a new non-linear system, which reads Vectors C b 1 , and b 0 can therefore be computed by the simultaneous solving of Eqs. (98) and (101) in terms of ∇ C 0 , what will provide f k 0 , and then 0 in terms of this macroscopic velocity gradient.
Finally, as in case 2, the ÿber net's equivalent continuous medium is also a classical continuous medium governed by the local equilibrium equation (93), and its constitutive equations are of the type: where 0 is a symmetric tensor and D 0 is the macroscopic strain rate tensor. Cases 2 and 3 ÿnally happen to lead to the same type of equivalent continuous medium, even if the calculation of their constitutive equations leads to somewhat di erent resolution schemes. As mentioned above, it is now clear that case 3 also includes cases where q = 1 + m + j, where j is any positive integer. In such cases, rotational viscosities vanish from the macroscopic constitutive equations, and only the determination of the translation one is required for a full solution of the problem.
Fundamental properties
In the case of the power law interaction relations (12) and (16) discussed in this paper, further properties of constitutive equations can be drawn from results exposed in the previous sections.
In a ÿrst stage, let us focus on case 3, where rotational viscosities are assumed to be very small compared to translational ones. Self-equilibrium equations (98) and (101) form a system that enables the calculation of local variables C b 1 and b 0 in terms of ∇ C 0 . Such a system can be summarized in the following way: where F is a vector that contains the force and moment equilibrium equations of each bar of the reference cell, and X is a vector containing the kinematic unknowns C b 1 and b 0 relative to each bar. F is a block vector where blocks of components [3b − 2 : 3b], relative to bar b, are denoted F b and read 22 whereas X may be assembled in blocks X b such as In the following, the uniqueness of the solution of problem (103) will be assumed. Nevertheless, when internal mechanisms exist (isolated bars or isolated groups of bars) this will not be the case anymore, but we will assume that the concentration regime is su ciently high for every bar to be connected with some of its neighbors.
Let us now consider that X is the solution of Eq. (103) and thatX is the solution of F(X; ∇ C 0 ) = 0; where is a non-null real scalar, and study the relation between X andX.
0 are the respective solutions of problems (103) and (106), forces and moments solutions of Eq. (106) are then They can therefore be rewritten aŝ (111) and (112) in Eq. (106) and according to the expression of vector F, one checks thatX is solution of the problem: The uniqueness of the solution then immediately causeŝ Subsequently, forces and moments resulting from problem (106) are such that where f k 0 and m k 1 result from the initial problem (103). Now forming the macroscopic stress tensorsˆ 0 and 0 deÿned by Eqs. (78) and (72) corresponding to both problems ÿnally leads tô 0 = m 0 : This result therefore enables one to deduce the following important property: The macroscopic stress tensor is a homogeneous function of degree m of the macroscopic strain rate tensor. Such a result shows that, in case 3, the equivalent continuum is a power law uid, with a strain rate sensitivity equal to the strain rate sensitivity postulated at the level of interactions between ÿbers. Actually, if one deÿnes a norm : eq in the space of second order tensors, property (117) enables us to write the following relation, characteristic of power law uids: The same property can be deduced for case 2, thanks to interaction relation (80) and admitting the uniqueness of the system formed by Eqs. (91) and (92).
In case 1, macroscopic ÿelds ∇ C 0 and 0 (directly linked to ∇ 0 ) can be imposed separately so no such simple property can be deduced. However, as evident from the interaction relations (80) and (81), and from the formulation of self-equilibrium problems (88) and (89), the following property can be written: The equivalent Cosserat medium exhibits a degree m homogeneity property in terms of the pairs (∇ C 0 ; 0 ).
Conclusions
This theoretical work on the behavior of a net of rigid ÿbers linked by punctual power law ÿber-ÿber interactions shows several interesting results.
If the scale separation assumption (3) is satisÿed in any practical application, an equivalent continuous description of the behavior of the net is possible and its general equilibrium equations are typical of a Cosserat continuous medium. The state of stress of this medium is entirely deÿned by Eqs. (72)-(74) that explicitly provide the link between the local forces and moments and the macroscopic stress tensors.
Furthermore, the analysis of three di erent ÿber-ÿber interaction laws leads to 2 main di erent types of equivalent continuous media depending on the relative order of magnitude of rotational viscosities with respect to the translational ones. If rotational viscosities are of the same order of magnitude as translational ones, the ÿber net is actually equivalent to a Cosserat medium, its state of stress is given by the usual, but non-symmetric, Cauchy stress tensor and by a couple stress tensor accounting for local moments generated at ÿber-ÿber interactions. Such a case would probably be relevant for almost rigid interactions (m ≈ 0). If rotational viscosities become smaller, that is to say in cases like cases 2 or 3, the equivalent continuum is a usual Cauchy medium, deÿned by a single symmetric stress tensor.
Constitutive equations of the ÿber net's equivalent continuous medium cannot be obtained in an explicit form. They require the numerical determination of each order zero forces and sometimes of order zero or order one moments. Nevertheless, such a computation can be achieved quite simply and does not imply huge numerical problems, thanks to the periodicity assumption.
In a last stage, another fundamental property of the equivalent continuum was drawn. Thanks to the power law nature of the ÿber-ÿber local interaction laws, in cases 2 and 3, the macroscopic stress tensor could be proved to be a degree m homogeneous function of the macroscopic strain rate tensor. This shows that the equivalent continuum is a power law and anisotropic uid with the same strain rate sensitivity m as the one postulated at the scale of ÿbers. Such a feature shows the way any appropriate phenomenological continuous constitutive model should be chosen. Analogous results could be deduced from the analysis of case 1, but with no such simple interpretation, because the behavior of the equivalent continuum depends on two independent macroscopic ÿelds.
In the present work, an application of the method of homogenization of periodic discrete structures was presented. The method was shown to provide fundamental theoretical results on the structure of macroscopic constitutive equations suitable for a continuous modelling of a speciÿc net of ÿbers. It requires almost no restrictive physical assumptions and enables an easy, computer time e cient, analysis of the behavior for ÿber nets eventually including a great number of ÿbers, which is a necessary feature for the study of most ÿber-reinforced uids.
As is visible in the above theoretical exposition, many extensions to this work can be envisaged. Richer ÿber-ÿber interaction laws could ÿrst be introduced, as, for example, the case of Carreau type or viscoelastic interactions, or dry friction between ÿbers. The great adaptability of the method would also enable one to account for the exibility of ÿbers, considering them, for example, as elastic beams.
Appendix A
As shown in Section 4, in the general case, the state of stress of the ÿber net is deÿned by the three tensors 0 , Ä 0 and 0 , the vector 0 being given by the relation according to deÿnition (32) of L k . Let us now multiply this last equation by a constant virtual ÿeld 0 . We get Terms like p b 1 ∧ 0 can be seen as a virtual ÿeld b , so forces f k 0 being solutions of the self-equilibrium equation (86), one obtains which gives an alternate deÿnition of vector 0 as This result then enables us to ÿnd the relation between 0 and 0 , using the deÿnition and properties of the permutation tensor e. e is the tensor whose components in e i ⊗ e j ⊗ e k are It is then easy to check that 0 = g −1 k∈CR (e · (G · k )) · f k 0 = e : k∈CR g −1 (G · k ⊗ f k 0 ) which is equivalent to 0 = e : 0 : | 10,507 | 2004-02-01T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Relationship between salivary/pancreatic amylase and body mass index: a systems biology approach
Background Salivary (AMY1) and pancreatic (AMY2) amylases hydrolyze starch. Copy number of AMY1A (encoding AMY1) was reported to be higher in populations with a high-starch diet and reduced in obese people. These results based on quantitative PCR have been challenged recently. We aimed to re-assess the relationship between amylase and adiposity using a systems biology approach. Methods We assessed the association between plasma enzymatic activity of AMY1 or AMY2, and several metabolic traits in almost 4000 French individuals from D.E.S.I.R. longitudinal study. The effect of the number of copies of AMY1A (encoding AMY1) or AMY2A (encoding AMY2) measured through droplet digital PCR was then analyzed on the same parameters in the same study. A Mendelian randomization analysis was also performed. We subsequently assessed the association between AMY1A copy number and obesity risk in two case-control studies (5000 samples in total). Finally, we assessed the association between body mass index (BMI)-related plasma metabolites and AMY1 or AMY2 activity. Results We evidenced strong associations between AMY1 or AMY2 activity and lower BMI. However, we found a modest contribution of AMY1A copy number to lower BMI. Mendelian randomization identified a causal negative effect of BMI on AMY1 and AMY2 activities. Yet, we also found a significant negative contribution of AMY1 activity at baseline to the change in BMI during the 9-year follow-up, and a significant contribution of AMY1A copy number to lower obesity risk in children, suggesting a bidirectional relationship between AMY1 activity and adiposity. Metabonomics identified a BMI-independent association between AMY1 activity and lactate, a product of complex carbohydrate fermentation. Conclusions These findings provide new insights into the involvement of amylase in adiposity and starch metabolism. Electronic supplementary material The online version of this article (doi:10.1186/s12916-017-0784-x) contains supplementary material, which is available to authorized users.
Background
Amylase is responsible for starch hydrolysis, initiating carbohydrate digestion in the oral cavity and later in the gut. In humans, approximately half of the amylase activity found in serum is produced by the salivary glands and the remaining part by the exocrine pancreas [1] from different genes located in the same complex chromosome 1 locus.
A well-known multi-allelic copy number variant at salivary amylase gene (AMY1A; diploid copy number ranging from one to roughly 20) evolved as an adaptation to dietary habits [2]. Populations with high starch consumption carry larger number of copies than others that have maintained an ancestral pre-agricultural way of life [2]. Previously, we reported that AMY1A copy number estimated by quantitative RT-PCR (qPCR) is associated with body mass index (BMI) in North European and South Asian adult populations [3]. It provided a putative genetic link between complex carbohydrate metabolism in the gut and obesity. This association was replicated in early-onset obese females from Finland [4] and in prepubertal boys in Italy [5], and an association with insulin resistance was reported in adult Korean men [6], where AMY1A copy number was also estimated by qPCR. On the other hand, using digital PCR, two studies failed to reproduce these findings [7,8]. Usher et al. [7] suggested that the discrepancy with the previously reported observations likely comes from their higher-resolution approaches for both molecular and computational analyses. Recently, however, using digital PCR, we have found that, in Mexican children with high-starch diet, high number of AMY1A copies significantly protects against obesity in this population [9]. Finally, a study that used fiber-FISH suggested a role for copy number of pancreatic amylase genes (AMY2A and AMY2B) in the observed functional associations [10].
This debate is important for several reasons. First, chromosome structural variants are increasingly recognized to highly contribute to disease development [11], and thus the correct genotyping of multi-allelic copy number variant is mandatory [12]. Second, it was shown that non-obese adults with high salivary amylase activity (and putatively high AMY1A copy number) present with improved glucose tolerance following liquid starch ingestion [13]. Furthermore, high serum amylase activity was shown to be associated with decreased risk of metabolic syndrome and type 2 diabetes in a Japanese asymptomatic population [14]. Finally, in more than 100 different strains of mice fed a high-fat, high-sucrose diet, the Amy1 locus was reported to be significantly associated with weight gain variation and with an enrichment of obesity-associated bacteria of gut microbiota [15]. Therefore, it is crucial to robustly determine if amylase activities (and amylase gene copy number) impact energy and glucose homeostasis.
In the present study, we employed a systems biology approach, using genetics, protein activity and metabonomics analyses, to decipher the putative interaction between amylase genes and adiposity in human population. We first assessed the association between plasma enzymatic activity of salivary (AMY1) or pancreatic (AMY2) amylase, and several metabolic traits, including BMI. We then analyzed the effect of AMY1A or AMY2A copy number on the same parameters. A Mendelian randomization analysis was subsequently performed to assess causality effects explaining the complex relationship between BMI and AMY1 or AMY2 plasma enzymatic activity, and actually suggested a bidirectional causal negative effect in the relationship between BMI and AMY1 plasma enzymatic activity. We subsequently confirmed an association between AMY1A copy number and reduced obesity risk in children. Finally, we assessed the association between BMI-related plasma metabolites and AMY1 or AMY2 plasma enzymatic activity.
Study participants
D.E.S.I.R D.E.S.I.R. is a 9-year longitudinal study in a French general population, fully described elsewhere [16]. A total of 4834 unrelated individuals who were successfully genotyped through iSelect Metabochip DNA microarrays (Illumina, San Diego, CA, USA) was included in the present study. AMY1A copy number and AMY2A copy number were successfully genotyped in 3607 and 3657 participants, respectively. At baseline, we had access to AMY1 plasma enzymatic activity for 3744 participants. Among them, we had access to AMY1 plasma enzymatic activity after 9 years of follow-up for 679 individuals, to BMI after 9 years of followup for 2796 individuals, and to the levels of BMI-associated plasma metabolites at baseline for 718 individuals. Moreover, we had access to AMY2 plasma enzymatic activity at baseline for 3980 participants. Among them, we had access to AMY2 plasma enzymatic activity after 9 years of followup for 705 individuals, to BMI after 9 years of follow-up for 2970 individuals, and to the levels of BMI-associated plasma metabolites at baseline for 718 individuals. Additional file 1 recapitulates all these numbers. Non-diabetic participants did not use glucose lowering medication, and presented with fasting plasma glucose less than 7 mmol/L and glycated hemoglobin A1c less than 6.5% [17].
The Biological Atlas of Severe Obesity study (ABOS)
ABOS is a cohort study (ClinicalGov NCT01129297) from the University Hospital of Lille, France, fully described elsewhere [18]. In the present study, we measured plasma enzymatic activity of AMY1 and AMY2 in 488 participants who were also genotyped through Metabochip DNA microarrays (Illumina).
Obesity case-control studies
Clinical characteristics of study participants are shown in Additional file 2. The first case-control study included 2220 normal-weight adults (with a BMI < 25 kg/m 2 ) and 1179 adults presenting with obesity (with a BMI ≥ 30 kg/m 2 ). These adults were from D.E.S.I.R. or were recruited either by the CNRS UMR8199 (Lille, France), by the Department of Nutrition of Hotel-Dieu Hospital (Paris, France), or by the Centre d'Etude du Polymorphisme Humain (CEPH, Saint-Louis hospital, Paris, France). The second case-control study included 712 normal-weight children or adolescents (with a BMI-for-age < 85th percentile) and 785 children or adolescents presenting with obesity (with a BMI-for-age ≥ 99th percentile). These children or adolescents were from the French Haguenau regional cohort study [19] or from the French Fleurbaix-Laventie Ville Santé study [20], or they were recruited by the CNRS UMR8199 (Lille, France).
Estimation of AMY1A and AMY2A copy number
Copy number of AMY1A and AMY2A was estimated using the QX200 droplet digital PCR (ddPCR) system (Bio-Rad Laboratories, Hercules, CA, USA), following the manufacturer's recommendations. Concentration of DNA samples was measured using the Qubit ds DNA Assay HS kit (Life Technologies, Carlsbad, CA, USA). Dilutions were performed with 20× GE Sample Loading Reagent (Fluidigm, South San Francisco, CA, USA). Each 40 μL reaction included 11 μL ddPCR SuperMix for Probes no dUTP (Bio-Rad), 24 ng DNA (for AMY1A copy number estimation) or 32 ng DNA (for AMY2A copy number estimation), 1.1 μL of TaqMan assay targeting AMY1A or AMY2A (Hs07226362_cn or Hs04204136_cn, respectively; Life Technologies), 1.1 μL of TaqMan assay targeting the reference RNase P assay (Human RNase P #4403328; Life Technologies), and 0.5 U HindIII (High Fidelity; New England Biolabs, Ipswich, MA, USA). Of note, both AMY1A and AMY2A target assays utilized FAM-labeled probes, while RNase P assay was labeled in VIC. Enzymatic digestion was done for 5 minutes at 20°C. Subsequently, the reaction was emulsified with Droplet Generator Oil (Bio-Rad) using the QX200 Droplet Generator (Bio-Rad), following the manufacturer's instructions. The droplets were then transferred to a 96-well reaction plate (Eppendorf) and PCR amplification was performed using a Veriti Thermal Cycler (Life Technologies). After amplification, droplets were read using a QX200 Droplet Reader (Bio-Rad). Fluorescence data were analyzed using Quanta-Soft software (version 1.7.4, Bio-Rad). Only samples with at least 7000 droplets were kept for further analyses.
Measurement of plasma enzymatic activities of salivary (AMY1) and pancreatic (AMY2) amylases
Plasma enzymatic activities of total amylase and AMY2 were estimated by an enzymatic colorimetric assay with an autoanalyzer (CoBAS Icobas 8000 modular analyzer series; kits #AMY-P-20766623322 and #AMYL2-03183742122; Hoffman-La Roche, Basel, Switzerland). The plasma enzymatic activity of AMY1 was calculated by subtracting the activity of AMY2 from the activity of total amylase. Normal ranges of the plasma enzymatic activities of AMY2 and total amylase were 13-53 U/L and 29-99 U/L, respectively. Only individuals presenting with these normal ranges were analyzed.
Statistical analyses Ethnic characterization
Ethnic characterization of each participant was assessed using the iSelect Metabochip DNA microarrays (Illumina), as previously described [24].
Association analyses between AMY1/AMY2 plasma enzymatic activity or AMY1A/AMY2A copy number and metabolic traits in D.E.S.I.R The associations between metabolic quantitative traits (except BMI) and enzymatic activity of AMY1/AMY2 or AMY1A/AMY2A copy number were assessed through linear regression models adjusted for age, sex, BMI, daily alcohol consumption, smoking status, and the first two principal components for ethnicity as previously described [24]. We used the same models for the analysis of plasma metabolites, with the same adjustments (including or not BMI). The analysis of BMI was adjusted for age, sex, daily alcohol consumption, smoking status, and the first two principal components for ethnicity.
The effect of AMY1 or AMY2 activity at baseline on the change in BMI during the 9-year follow-up was assessed through a linear regression model adjusted for age at baseline, sex, BMI at baseline, daily alcohol consumption, smoking status, and the first two principal components for ethnicity. The effect of BMI at baseline on the change in AMY1 or AMY2 activity during the 9-year follow-up was assessed through a linear regression model adjusted for age at baseline, sex, AMY1 or AMY2 activity at baseline, daily alcohol consumption, smoking status, and the first two principal components for ethnicity.
Of note, BMI, aspartate aminotransferase, fasting insulin, triglyceride levels, and the homeostasis model assessment of beta-cell function (HOMA-2B) and of insulin resistance (HOMA-2IR) were logarithmically transformed before statistical analysis.
Association analyses of glucose-related traits were performed in non-diabetic individuals only. Association analyses of lipid traits were performed in participants who did not use any lipid-lowering drugs at baseline. Association analyses of blood pressure were performed in participants who did not use any drugs against hypertension at baseline.
HOMA-2B and HOMA-2IR were calculated in D.E.S.I.R. participants as previously described [24]. In each regression model, traits were analyzed as dependent variables whilst copy number and enzymatic activities were used as covariates.
Mendelian randomization analysis between BMI and AMY1/ AMY2 plasma enzymatic activity The causal effect between BMI and AMY1 or AMY2 plasma enzymatic activity was estimated using a Mendelian randomization approach [25,26].
BMI → AMY1/AMY2 We used single nucleotide polymorphisms (SNPs) previously found to be genome-wide significantly associated with BMI [27] as genetic instruments for this analysis. We excluded 14 SNPs with known pleiotropic effects on non-anthropometric traits (Additional file 3). Among the remaining 83 SNPs, four were not testable through the Illumina Metabochip DNA microarray (rs12016871 within MTIF3 locus, rs16851483 within RASA2 locus, rs17001654 within SCARB2 locus, and rs9641123 within CALCR locus) and one SNP did not pass the quality control (rs12566985 within the FPGT locus). These five SNPs were replaced with proxies (R 2 ≥ 0.64; Additional file 3). For each of the 83 instrumental genetic variables, we estimated causal effects of BMI on AMY1 or AMY2 plasma enzymatic activity as ratios between the SNP effect sizes on plasma AMY1 or AMY2 enzymatic activity (measured in D.E.S.I.R.) over the SNP effect size on BMI (obtained from Locke et al. [27]). Standard errors for these causal estimates were derived by replacing in the former calculations each SNP effect size on AMY1 or AMY2 plasma enzymatic activity with its corresponding standard error estimated within D.E.S.I.R. The 83 values of causal effects of BMI on AMY1 or AMY2 plasma enzymatic activity were collapsed into single estimates (one for each enzymatic activity) using inverse-variance weighting [25]. Since no published genome-wide association studies on amylase activities were available, we used as an alternative approach, the two-stage least-squares (TSLS) regression to estimate the causal effect of BMI on AMY1 or AMY2 plasma enzymatic activity using D.E.S.I.R. data. This analysis used as the instrumental variable the genetic risk score, calculated as the sum of alleles increasing BMI over the 83 selected SNPs. We did not observe any residual effect of BMI-associated SNPs on amylase activities (P > 0.2). To ensure that cryptic pleiotropic effects among the 83 SNPs were not influencing our estimates of causal effect of BMI on AMY1 and AMY2 plasma enzymatic activities, we used Egger regression to test for the significance of the intercept [28]. We found no significant effect of pleiotropy (P = 0.41 for AMY1 activity, and P = 0.49 for AMY2 activity).
AMY1/AMY2 → BMI We were unable to use AMY1A or AMY2A copy number as instrumental variables to assess the inverse causation between AMY1 or AMY2 plasma enzymatic activity and BMI as they both showed a residual association with BMI after adjusting for the corresponding plasma enzymatic activity (P < 0.001; Additional file 4). We therefore looked for other instruments by testing the association between SNP genotyped on the Metabochip DNA microarray (Illumina) and AMY1 or AMY2 plasma enzymatic activity in the D.E.S.I.R. participants. This association was assessed using linear regression of AMY1 or AMY2 plasma enzymatic activity on genotyped SNP adjusted for age, sex, BMI, and the first two principal components for ethnicity. Subsequently, the significant associations between SNPs and AMY1 or AMY2 plasma enzymatic activity (after Bonferroni correction: P < 4 × 10 −7 = 0.05÷124,571 tested SNPs) were confirmed in ABOS. The combined analyses were performed using a weighted inverse normal method via the function "metagen", with a fixed effect, in the "META" R package. No heterogeneity was observed (P > 0.05). A good instrument was consequently defined as a SNP significantly associated (P < 4 × 10 −7 ) with AMY1 or AMY2 plasma enzymatic activity, without showing any residual association with BMI (P > 0.05). Given these instruments, the causal effect of AMY1 or AMY2 plasma enzymatic activity on BMI was estimated using TSLS regression as implemented in the R package ivpack (R function ivreg).
Association analyses between AMY1A copy number and obesity risk
The association between obesity and AMY1A copy number was assessed by a logistic regression model adjusted for age and sex in the two case-control studies. The combined analysis was performed using a weighted inverse normal method via the function "metagen", with a fixed effect, in the "META" R package. No heterogeneity was observed for this combined analysis (P = 0.14).
All genetic analyses were performed under an additive model. All statistical analyses were performed using IBM SPSS (version 14.0) or R (version 3.0).
Results
Association study between plasma enzymatic activity of AMY1 or AMY2 and metabolic traits in D.E.S.I.R.
Association study between AMY1A/AMY2A copy number and metabolic traits in D.E.S.I.R.
When analyzing copy number of AMY1A and AMY2A in D.E.S.I.R. participants through ddPCR (Additional files 5 and 6), we confirmed that even AMY1A copy numbers were more frequent than odd AMY1A copy numbers (Additional files 5 and 6), as shown by Usher et al. [7]. Furthermore, we confirmed that the copy numbers of AMY1A and AMY2A were nearly always both even or both odd (Additional file 6) [7,8,10]. Although AMY1A or AMY2A copy number was significantly correlated with AMY1 or AMY2 plasma enzymatic activity, respectively (Spearman test: R 2 = 0.34, P < 2.2 × 10 −16 ; R 2 = 0.12, P < 2.2 × 10 −16 , respectively; Fig. 1a and b), we only found a nominal association between AMY1A copy number and lower BMI (β = -0.0018 ± 0.0009 kg/m 2 per AMY1A copy, P = 0.044; Additional file 7). Of note, we found that ddPCRestimated AMY1A copy number is highly correlated with AMY1A copy number previously estimated by qPCR (Spearman test: R 2 = 0.86, P < 2.2 × 10 −16 ; Additional file 8) [3]. Furthermore, among these 2137 samples previously assessed, we confirmed a highly significant association between ddPCRestimated AMY1A copy number and lower BMI (β = -0.0043 ± 0.0011 kg/m 2 per AMY1A copy, P = 1.7 × 10 −4 ; Additional files 9 and 10). However, as tackled above, the association between AMY1A copy number and lower BMI was only nominal when we analyzed the whole sample set from D.E.S.I.R. (Additional files 7, 9 and 10). We did not find other significant associations between AMY1A or AMY2A copy number and metabolic traits (P > 0.05; Additional file 7).
Assessment of the causal effect between AMY1/AMY2 plasma enzymatic activity and BMI in D.E.S.I.R.
Assessing the inverse relationship (AMY1/AMY2 → BMI) turned out to be challenging as we could use neither AMY1A nor AMY2A copy numbers as genetic instruments. Indeed, we found a significant residual association between AMY1A or AMY2A copy number and BMI when adjusting for the corresponding plasma enzymatic activity (P < 0.001, Additional file 4). As no genome-wide association study for AMY1 or AMY2 plasma enzymatic activity has been performed thus far, we assessed the association between 124,571 SNPs genotyped through the Metabochip DNA microarray and AMY1 or AMY2 activity in D.E.S.I.R. participants, and confirmed the identified associations in another French cohort study (ABOS) in order to find valid genetic instruments. We found one SNP strongly associated with AMY1 activity (PRH1-PRR4 rs10492100: P = 3.3 × 10 −11 ; Table 3) and two SNPs strongly associated with AMY2 activity (AMY2B rs12075225: P = 2.0 × 10 −71 ; ABO rs507666: Table 3). SNP rs12075225 could not be considered as instruments into the Mendelian randomization analysis as we found a residual association between rs12075225 and BMI when adjusting for AMY2 activity (P < 0.05). However, we were able to use rs10492100 and rs507666 as instruments to assess the causal effect of AMY1 and AMY2 activities, respectively, on BMI. When using TSLS regression with these instruments, we did not find a significant causal relationship of AMY1 or AMY2 activity on BMI (P > 0.05; Table 2).
Next, we took advantage of the prospective D.E.S.I.R. study design with measured AMY1 (n = 679) or AMY2 (n = 705) plasma enzymatic activity after 9 years of follow-up (Additional file 1). Indeed, although confounding can still be present in prospective studies, having consistent results between baseline and follow-up data reinforces the significance of the causal effect estimated besides using Mendelian randomization tools. We found a significant negative effect of BMI at baseline on the change in AMY1 activity (β = -0.20 ± 0.08 IU/L, P = 0.014) or AMY2 activity (β = -0.18 ± 0.06 IU/L, P = 3.0 × 10 −3 ) during the 9-year follow-up, which is in line with the results of the Mendelian randomization analyses showing that BMI negatively impacts amylase activity. Nonetheless, we also identified a significant negative contribution of AMY1 activity at baseline to the change in BMI during the 9-year follow-up (n = 2796; β = -0.0062 ± 0.0027 kg/m 2 , P = 0.022), which would imply a bidirectional causal negative effect in the relationship between BMI and AMY1 plasma enzymatic activity. The association between AMY2 activity at baseline and the change in BMI during the 9-year follow-up was not significant (P > 0.05).
Association study between AMY1A copy number and obesity in adults and children/adolescents
The uncertainty about the causal effect of lower AMY1A copy number (or AMY1 activity) to higher BMI prompted us to assess the association between AMY1A copy number and obesity risk in two French case-control studies, one including 1179 obese adults and 2220 controls, and the other one including 785 obese children/adolescents and 712 controls (Additional file 2; Additional files 11 and 12). In the French adults, we found that the mean number of AMY1A copies was lower in obese subjects (6.8 ± 2.5; Table 4) than in controls (7.0 ± 2.6; Table 4), although this difference was not significant when we adjusted the logistic regression model for both age and sex (P = 0.13; Table 4). In contrast, in the French children/adolescents, we found a significant association between AMY1A copy number and lower obesity risk (odds ratio (OR) per estimated copy 0.94; 95% confidence interval (CI), 0.90-0.98; P = 7.1 × 10 −3 ; Table 4). When we combined the two case-control studies in adults and youths, we identified a significant contribution of AMY1A copy number to lower obesity risk (OR per estimated copy 0.97; 95% CI, 0.94-0.99; P = 6.8 × 10 −3 ; heterogeneity: P = 0.14; Table 4).
Association analysis between known BMI-associated plasma metabolites and AMY1/AMY2 plasma enzymatic activity in D.E.S.I.R.
Finally, we aimed to assess the association between 36 plasma metabolites known to be associated with BMI [23] and AMY1 or AMY2 plasma enzymatic activity in 718 D.E.S.I.R. participants. First, we confirmed a significant association between these metabolites and BMI in these participants, except for palmitoyl sphingomyelin, which was found to be only metabolite nominally associated with BMI (P = 0.07), although with the same published effect size direction (β < 0) [23]. Then, we identified significant associations between several metabolites, including branched-chain amino acids (isoleucine, isovalerylcarnitine, and leucine), and AMY1 and/or AMY2 plasma enzymatic activity (Additional file 13), with an effect size direction opposite to the one of BMI effect on the same metabolites (Additional file 13). Interestingly, lactate was significantly associated with higher AMY1 activity, when the regression model was adjusted or not for BMI (β = 0.050 ± 0.020, P = 5.8 × 10 −3 ; BMI-adjusted: β = 0.058 ± 0.020, P = 1.6 × 10 −3 ; Additional file 13).
Discussion
In the present study, we found that the plasma enzymatic activities of both AMY1 and AMY2 were markedly associated with lower BMI and several related metabolic traits, including lower fasting plasma glucose levels, higher pancreatic beta-cell function, and a better lipid profile, linking starch hydrolysis and metabolism in humans. Although AMY1 or AMY2 plasma enzymatic activity was significantly correlated with the number of copies of AMY1A or AMY2A (Fig. 1a and b), we found only a nominal association between AMY1A copy number and lower BMI in middle-aged French adults. However, we identified a significant association between AMY1A copy number and lower risk of obesity in French children, in line with our previous study performed in Mexican children [9]. The present study also assessed the hypothesis that the pancreatic amylase genes (rather than the salivary amylase gene) could actually drive the association with BMI [10]. However, we did not find any significant marginal association between AMY2A copy number and BMI, which makes a major role for the pancreatic amylase gene unlikely. Yet we only genotyped AMY2A copy number and not AMY2B copy number, which is a limitation of our study, even if Usher et al. [7] showed that AMY2A copy number matches AMY2B copy number in approximately 95% of haploid genotypes.
Through a Mendelian randomization analysis, we identified a negative causal effect of BMI on the plasma enzymatic activities of both AMY1 and AMY2. In contrast, we failed to find any causal effect of AMY1 or AMY2 plasma enzymatic activity on BMI, although this specific analysis likely lacked statistical power. Indeed, since we could use neither AMY1A nor AMY2A copy number as a genetic instrument, we were deprived of the strongest potential instruments available. We did identify surrogate instruments, but they were only weakly associated with the plasma enzymatic activities of AMY1 and AMY2 compared to the corresponding gene copy numbers. In addition, since no large genome-wide association study of AMY1 or AMY2 plasma enzymatic activity has been performed so far, we were left with a very limited number of usable instruments. The prospective data available in D.E.S.I.R. further supported the negative effect of BMI on AMY1 and AMY2 activities. However, we also found a significant negative contribution of AMY1 activity at baseline to the change in BMI during the 9 years of follow-up, which implies a possible causal impact of AMY1 activity on decreased adiposity. This was supported by the results of our obesity case-control study, which showed a significant contribution of AMY1A copy number to decreased obesity risk in French children.
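For orientation, a minimal two-sample Mendelian randomization sketch in which per-SNP Wald ratios are combined by inverse-variance weighting; the summary statistics below are made-up toy values, not the study's actual instruments:

import numpy as np

# Toy per-SNP summary statistics (illustrative only).
beta_exposure = np.array([0.08, 0.05, 0.11])        # SNP effects on BMI
beta_outcome  = np.array([-0.016, -0.011, -0.020])  # SNP effects on AMY1 activity
se_outcome    = np.array([0.004, 0.005, 0.006])

wald = beta_outcome / beta_exposure            # per-SNP causal estimates
se_wald = se_outcome / np.abs(beta_exposure)   # first-order (delta-method) SEs

w = 1.0 / se_wald**2                           # inverse-variance weights
beta_ivw = np.sum(w * wald) / np.sum(w)
se_ivw = 1.0 / np.sqrt(np.sum(w))
print(f"IVW effect of BMI on AMY1 activity: {beta_ivw:.3f} +/- {se_ivw:.3f}")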
The impact of AMY1 activity and AMY1A copy number on adiposity is therefore complex, and it seems to interact with the metabolic effects of complex carbohydrate digestion by the gut microflora. Our recent independent digital PCR analyses of AMY1A copy number in Mexican children [9] and our present results in French youths provide strong evidence that high AMY1A copy numbers protect against childhood obesity in these high-starch-diet populations. In contrast, in French middle-aged adults from the general population, we failed to reproduce these findings. The difference between adults and children may reflect the fact that the heritability of BMI is higher in childhood than in adulthood [29,30], which favors the identification of significant associations between genetic events and obesity risk. Furthermore, this difference may be due to gene-environment interactions that vary with age [29,30]. For instance, youths may eat more carbohydrates than adults (as the energy requirements of youths have been shown to parallel their growth rate) [31]. In rodents, a SNP at the Amy1 locus was shown to strongly predict weight gain after 8 weeks on a high-fat, high-sucrose diet, with an associated enrichment in gut bacteria observed in obesity states, which may mediate the metabolic effect of Amy1 expression variation [15].
In D.E.S.I.R., we found that AMY1 plasma enzymatic activity was significantly associated with higher plasma lactate levels independently of BMI. Lactate is a well-known product of complex carbohydrate fermentation by the gut microbiota [32]. It has been proposed that a decrease in the lactate/butyrate ratio can generate an extra 20 calories/day, which may translate into an extra kilogram of weight over a year [32]. Therefore, we suggest that amylase activity, which is associated with higher lactate production, may protect against obesity, especially in individuals with a high-starch diet.
Conclusions
In conclusion, our systems biology study performed in a prospectively followed population-based European cohort suggests a bidirectional relationship between AMY1 plasma enzymatic activity and adiposity. Altogether, low AMY1 activity due to both genetic and environmental events may modulate the human colonic microbiota's fermentation of oligosaccharides into short-chain fatty acids via lactate regulation [32], which may have a negative impact on energy harvest and therefore aggravate obesity. Further studies are warranted to assess the validity of this hypothesis, which, if confirmed, may have clinical implications for obesity treatment [33]. [Table 4 legend: odds ratio per AMY1A copy from a logistic regression adjusted for age and sex; AMY1A, salivary amylase gene; CI, confidence interval; OR, odds ratio.] | 6,490.4 | 2017-02-23T00:00:00.000 | [
"Biology",
"Medicine"
] |
Determination of Hypoglycemic Agents in Surface Water Samples Using SPE-LC-MS/MS Method
Antidiabetic compounds are a class of emerging environmental contaminants for which there are no regulations in environmental legislation worldwide. These compounds are among the most widely used drugs in the world because of the large number of patients with diabetes. Their presence in the environment is insufficiently studied, so efficient analytical methods are needed to detect them at trace levels (ng/L). For the simultaneous quantification of five antidiabetics (glyburide, metformin, glipizide, gliclazide, glimepiride) and one biodegradation product (guanyl urea) in surface water samples, an SPE-LC-MS/MS (solid-phase extraction-liquid chromatography coupled with tandem mass spectrometry detection) method was validated using real river water samples. The compounds were separated on a C18 LC column in 9 minutes at 30 °C using a mobile-phase gradient of 0.1% formic acid and acetonitrile. Good performance parameters were obtained: low limits of quantification (LOQs 0.1-2.4 ng/L), good precision (repeatability 3.5-7.2% and reproducibility 6.5-12.7%), and determination coefficients higher than 0.99. The most contaminated river was the Ialomita, with a total antidiabetic concentration of 112.1 ng/L at the downstream point, followed by the Siret and Dambovita rivers, with total antidiabetic concentrations of 66.3 ng/L and 57.3 ng/L, respectively, also at the downstream points.
Introduction
A large variety of pharmaceuticals, such as antidiabetics, β-blockers, analgesics, antibiotics, antidepressants, lipid regulators, and hormones, have been monitored and detected in the environment, particularly in surface waters and wastewaters [1,2]. A high proportion of administered pharmaceuticals passes through the human body unchanged and is excreted into wastewater. The excreted, unchanged pharmaceuticals pass through the sewage treatment plant (STP), and their incomplete removal contributes to their environmental presence [3]. The presence of pharmaceutical residues in the aquatic environment represents one of the most urgent emerging environmental issues [4]. The active substances of the antidiabetic class are used in the treatment of diabetes mellitus or prediabetes. Prescribed antidiabetics include the following classes: meglitinides (repaglinide), sulfonylurea derivatives (gliclazide, glibenclamide, glimepiride), and biguanides (metformin) [5]. These drugs are frequently detected in WWTP influents at ng/L concentration levels, and in some cases comparable concentrations are noticed in the treated effluent. In 2019, the International Diabetes Federation reported 463 million diabetic patients worldwide (20-79 years) [6]. In Romania, the number of diabetes patients was estimated to reach 900,000 in 2019 [7]. Metformin (N,N-dimethylbiguanide) is the most consumed antidiabetic for treating type 2 diabetes, but it is also prescribed as a cytostatic product [8,9]. Because MET is consumed intensively by a large number of diabetic patients, has high polarity (low octanol-water partition coefficient, log Kow -2.6), is not metabolized by the human body, and is eliminated unchanged in urine (90%) within 12 h and the rest in feces, it is expected to be present in treatment plant influents, from where it is released via the effluent into the receiving rivers [10-12]. MET has been determined at concentrations of the order of ng/L in the surface waters analyzed [13,14].
Scheurer et al. reported in 2009 the occurrence of metformin in Germany, in three WWTPs, with medians of 110 μg/L in the influent and 11.4 μg/L in the effluent [15]. In German river waters, MET was detected at 102 ng/L in Lake Constance, 349 ng/L in the Weser river, and 100 ng/L in the Rhine river, and at up to 1700 ng/L in the Elbe river [10,16]. In Belgium, metformin was detected in all WWTP influent samples, ranging from 20 μg/L to 94 μg/L, with an average concentration of 46 μg/L [17]. Kolpin et al. reported the occurrence of metformin in United States surface waters: it was detected in 4.8% of the 84 samples investigated, with a maximum concentration of 0.15 μg/L and an average concentration of 0.11 μg/L [18]. In China, metformin was detected in eleven wastewater treatment plants (WWTPs) at concentrations ranging from 1.7 μg/L to 239.0 μg/L, with an average value of 68.3 μg/L [19]. Glibenclamide (GLB), also known as glyburide, is extensively metabolized, mainly by hydroxylation of the cyclohexyl moiety of the molecule, and its excretion rates as the parent compound are rather low, 35% and 42% in urine and feces, respectively. GLB has been determined in surface waters at ng/L to µg/L levels [20]. Glimepiride (GMP) and gliclazide (GLZ) are metabolized in the human body, generating metabolites that exhibit pharmacological activity. Gliclazide has been detected in river water at low, ng/L-level concentrations [20].
Ecotoxicological information for the selected compounds is still limited, particularly regarding chronic and behavioral data. For metformin, an LC50 value of >982 mg/L for Lepomis macrochirus and an EC50 value of 130 mg/L for Daphnia magna have been reported. Guanyl urea showed no toxic effects on the bacterial community in a manometric respiratory test at a concentration of 11.9 mg/L [21]. The physical/chemical properties of the selected compounds are also summarized. In Romania, data on the presence of antidiabetics in surface waters intended for the production of drinking water are not available. In general, environmental studies of pharmaceutical contaminants have focused on the following chemical classes: antibiotics (macrolides, sulfonamides, quinolones, penicillins, tetracyclines), non-steroidal anti-inflammatory drugs (NSAIDs), antiepileptics, and lipid regulators [23-30]. Multiple studies have also been carried out on emerging contaminants of the metallic-element type present in environmental compartments, which are not regulated in the national environmental legislation [31,32].
Thus, analytical studies are necessary to determine the concentrations of antidiabetics (metformin, gliclazide, glimepiride, glyburide, glipizide, and one degradation product, guanyl urea) in surface waters and to evaluate the potential impact of effluents on the quality of the receiving rivers. The main aim of this paper was to validate an SPE-LC-MS/MS method allowing the quantification of hypoglycemic agents at trace (ng/L) levels in surface waters. These concentration values were then used for a quantitative estimation of the potential impact of wastewater treatment plants discharging into the receiving inland waters, achieved by comparing the antidiabetic levels in river samples taken downstream of the treatment plants with those collected upstream. Antidiabetic pollutants from WWTP effluents are continuously introduced into receiving rivers, where they can irreversibly affect aquatic microorganisms. Moreover, surface water potentially contaminated with antidiabetics is the source of drinking water for the resident population. Thus, rigorous analytical control is required regarding the occurrence of these compounds in surface water and in drinking water.
Instrument/Equipment and Operating Parameters
The analytical determinations of the antidiabetic contaminants were conducted on a 1260 UHPLC system (Agilent Technologies, Germany) coupled to a Model 6410 triple quadrupole mass spectrometer (QQQ; Agilent Technologies, Waldbronn, Germany). The compounds were separated on a C18 LC column, in 9 minutes, at 30 °C, using a mobile-phase gradient of 0.1% formic acid and acetonitrile (0.2 mL/min flow rate). The injection volume for calibration standards and sample extracts was 10 µL. The compounds were ionized by positive electrospray ionization in the MS source using the optimal parameters shown in Tables 2 and 3. The system was controlled by the MassHunter software from Agilent Technologies. Formic acid was used in the mobile phase to obtain good peak shape and to promote the formation of the precursor ion [M+H]+. Ionization of the compounds was performed using the following optimized settings: gas temperature 300 °C, capillary voltage 3000 V, nitrogen nebulizer gas flow rate 10 L/min, nebulizer pressure 50 psi, cell acceleration voltage (CAV) 4 V, collision energy 10-25 V, and fragmentor voltage 80-120 V. The two most intense product ions were selected for the analysis. Table 4 presents the optimized mass spectrometer (QQQ) parameters for the determination of antidiabetic drugs in environmental samples. The adduct [M+H]+ was used as the precursor ion for MS determinations in positive ionization mode. The most abundant product ion was used for quantification and the second most abundant for confirmation.
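As an aside, the [M+H]+ precursor m/z used in such MRM methods follows directly from the monoisotopic molecular mass; a small sketch with metformin (C4H11N5) as the worked example (the mass table is standard, the function name is ours):

MONOISOTOPIC = {"C": 12.0, "H": 1.007825, "N": 14.003074,
                "O": 15.994915, "S": 31.972071}
PROTON = 1.007276  # mass of the proton (H+), slightly lighter than an H atom

def precursor_mz(formula: dict) -> float:
    # [M+H]+ m/z = monoisotopic molecular mass + proton mass (singly charged)
    return sum(MONOISOTOPIC[el] * n for el, n in formula.items()) + PROTON

print(round(precursor_mz({"C": 4, "H": 11, "N": 5}), 4))  # metformin -> 130.1087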
Samples Treatment and Analysis of the Antidiabetics
The SPE-LC-MS/MS method previously developed for wastewater samples (treatment plant influent and effluent) was validated using surface water. The method used 500 mL river water samples that were subjected to the entire procedure both without and with a known addition of standard. An enrichment factor of 500 was used for each water sample. First, the sample was filtered on a 0.45 µm glass-fiber filter (47 mm diameter) to remove suspended matter that could block the SPE extraction material. Then, the pH of the sample was adjusted to 10 with 0.24% ammonium hydroxide, after which the entire volume of water was passed through an SPE cartridge preconditioned with 2×4 mL methanol and 2×4 mL ultrapure water at pH 10. The extraction was performed on an automatic SPE instrument, a Dionex AutoTrace 280 (Thermo Scientific), using Strata-X cartridges (500 mg/6 mL, Phenomenex). To remove matrix interferences, the adsorbent material was washed with ultrapure water, after which the cartridge was air-dried (20 min) to remove traces of water. The compounds of interest were eluted from the SPE cartridge with 2×3 mL methanol. To change the extraction solvent, the extracts were evaporated in a water bath (50 °C) under a dry nitrogen stream, after which the residue was taken up in 1 mL of the mobile phase (0.1% formic acid:acetonitrile, 50/50).
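The 500-fold enrichment (500 mL of water concentrated into a 1 mL final extract) translates measured extract concentrations back into river-water concentrations; a small sketch (function and variable names are ours):

def water_concentration_ng_L(extract_ng_mL: float,
                             sample_mL: float = 500.0,
                             final_mL: float = 1.0) -> float:
    # ng/mL in the 1 mL extract -> ng/L in the original 500 mL water sample
    enrichment = sample_mL / final_mL            # = 500 for this method
    return extract_ng_mL * 1000.0 / enrichment   # 1000 converts per-mL to per-L

print(water_concentration_ng_L(10.0))  # a 10 ng/mL extract corresponds to 20 ng/L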
Validation Study
The method was validated in terms of linearity, limit of quantification, recovery efficiency, and precision (repeatability, reproducibility). Linear regressions (1-100 ng/mL) were obtained for each antidiabetic compound by injecting 5 calibration solutions of increasing concentration. Regressions were accepted if the coefficients of determination were above 0.99. The limit of quantification was calculated as the minimum analyte concentration that can be determined from a spiked surface water sample, taken through the entire extraction and analysis process, for which the signal-to-noise (S/N) ratio is 10. The recovery was determined experimentally from a river water sample spiked at 50 ng/L with the calibration mix solution. The same water sample was also analyzed without addition, and the antidiabetic levels determined there were subtracted from those of the spiked sample. A recovery of 70-120% was considered acceptable for the accuracy experiments. To calculate the precision, 1 mL of calibration solution (a mixture of antidiabetics in the mobile phase, corresponding to a 50 ng/L spike) was added to each of 4 sub-samples of surface water (500 mL). The samples were extracted and analyzed on the same day to determine the repeatability, expressed as RSD (relative standard deviation), and on four different days to calculate the reproducibility. Precision was accepted if the repeatability and internal reproducibility values were below 15%.
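A short sketch of the recovery and repeatability (RSD) calculations described above; the replicate values are invented for illustration:

import numpy as np

spiked   = np.array([48.1, 45.9, 47.3, 49.0])  # measured spiked sub-samples, ng/L
unspiked = 1.6                                  # native level without addition, ng/L
spike_level = 50.0                              # added concentration, ng/L

recovery_pct = (spiked.mean() - unspiked) / spike_level * 100  # accept 70-120 %
rsd_pct = spiked.std(ddof=1) / spiked.mean() * 100             # accept < 15 %
print(f"recovery = {recovery_pct:.1f} %, RSD = {rsd_pct:.1f} %")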
Sample Collections
Surface water samples were collected in November 2019, on a single day, from 5 rivers (Siret, Bahlui, Ialomita, Dambovita, Somes) (Table 5). Samples were collected downstream and upstream of each municipal wastewater treatment plant (WWTP). Sampling points were located 100 m before the plant (upstream) and 50 m after the plant (downstream). Each sample was collected in a 1 L glass bottle, stored at 4 °C during transport to the laboratory, and extracted within 48 h.
Validation of method
The mass spectrometer response was linear in the range of 1-100 ng/mL for all compounds except GUA (5-100 ng/mL), yielding coefficients of determination between 0.99 and 0.998 (Table 6, Figure 2). The limits of quantification had low values (0.10-2.45 ng/L), allowing the simultaneous determination of antidiabetics in surface water samples by the LC-MS/MS method. The method showed adequate precision, with intra-day repeatability in the range of 3.5-7.2% and internal (inter-day) reproducibility in the range of 6.5-12.7%. These validation parameters show that the method is sensitive, accurate, and precise. The validation parameters of the method and the data obtained by the external-standard calibration methodology for all antidiabetic compounds in surface water are presented in Table 3.
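A minimal sketch of the calibration step with the R² > 0.99 acceptance check; the peak areas are invented numbers, not the paper's calibration data:

import numpy as np
from scipy.stats import linregress

conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])        # calibration levels, ng/mL
area = np.array([210., 1030., 2110., 10400., 20950.]) # illustrative peak areas

fit = linregress(conc, area)
r2 = fit.rvalue ** 2
assert r2 > 0.99, "calibration rejected: R^2 below the acceptance criterion"
print(f"slope = {fit.slope:.1f}, intercept = {fit.intercept:.1f}, R^2 = {r2:.4f}")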
Antidiabetics Occurrence in Surface Water
A total of 10 surface water samples (from 5 receiving rivers), taken downstream and upstream of the wastewater treatment plants of several municipalities, were analyzed in order to determine the concentrations of antidiabetics. At the same time, this study aimed to evaluate the chemical quality of surface water used for drinking water production. Metformin and guanyl urea were ubiquitous (100% detection frequency), being detected in all 5 rivers both upstream and downstream of the treatment plants. The compound detected with the next highest frequency (90%) was gliclazide, followed by glipizide, which was determined in only 50% of the samples. Glibenclamide and glimepiride were never detected in the analyzed surface waters. In order to assess the potential impact of the WWTP effluents discharged into the rivers, we calculated an increase factor (IF) for each antidiabetic compound with equation (1):

IF = c_dw / c_up    (1)

where c_dw is the antidiabetic concentration in surface water sampled downstream of the WWTP and c_up is the antidiabetic concentration in river water taken upstream of the WWTP.
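A small sketch of equation (1); how sub-LOQ upstream values are handled is our assumption (the paper does not state its convention), here replaced by the LOQ so that IF stays finite:

def increase_factor(c_down_ng_L: float, c_up_ng_L: float, loq_ng_L: float) -> float:
    # IF = c_dw / c_up, with an upstream value below the LOQ replaced by the LOQ.
    c_up = max(c_up_ng_L, loq_ng_L)
    return c_down_ng_L / c_up

# Gliclazide in the Dambovita (3.1 -> 20.9 ng/L); the LOQ value is illustrative.
# Rounded concentrations from the text need not reproduce the reported factors exactly.
print(round(increase_factor(20.9, 3.1, 0.5), 1))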
Regarding the potential impact of the WWTPs on surface water quality, the Bahlui river presented the highest increase factor (22), for the concentration of glipizide, probably due to the Iasi WWTP, followed by the Dambovita river, which had an increase factor of 5.7 for gliclazide, probably generated by the Glina (Bucharest) WWTP (Figure 4). The next-highest potential impacts corresponded to GUA (3.4) and MET (3.1) in the Ialomita river, probably due to the effluent discharged by the Targoviste WWTP. In the case of the Somes river, an increase factor of 3 was obtained for gliclazide, probably due to the discharge of the Cluj-Napoca WWTP effluent into the Somes river. The potential impact of the 5 WWTPs was strong for four rivers (Bahlui, Dambovita, Ialomita, Somes), the most pronounced being that of the Iasi WWTP effluent on the Bahlui river, followed by the Bucharest WWTP on the Dambovita river. After the effluent discharge, the antidiabetic concentrations increased by factors of 22 (from <LOQ to 3.5 ng/L) for glipizide in the Bahlui, 5.7 (from 3.1 to 20.9 ng/L) for gliclazide in the Dambovita, and 4.9 (from 2.4 to 14.2 ng/L) for metformin in the Bahlui.
These values are similar to or lower than the concentrations reported elsewhere for German rivers (Lake Constance, MET 102 ng/L; Rhine river, MET 100 ng/L) or for US rivers (MET 150 ng/L) [10,14,18]. The most contaminated river was the Ialomita, with a total antidiabetic concentration of 112.1 ng/L at the downstream point, followed by the Siret and Dambovita rivers, with total antidiabetic concentrations of 66.3 ng/L and 57.3 ng/L, respectively, also at the downstream points.
Conclusions
An SPE-LC-MS/MS method was validated for the quantification of 5 antidiabetic compounds (metformin, glimepiride, glyburide, gliclazide, glipizide) and one degradation product (guanyl urea) in surface water samples. The limits of quantification (LOQ) ranged from 0.1 to 2.45 ng/L. The recoveries obtained for spiked samples were between 57.4% and 105.2%, demonstrating that the method is accurate. The linear regressions (1-100 ng/mL) used to calibrate the LC-MS/MS system had determination coefficients higher than 0.99. The method was precise, with good intra-day precision (3.5-7.2%) and inter-day precision (6.5-12.7%). The most contaminated river was the Ialomita, with a total antidiabetic concentration of 112.1 ng/L at the downstream point, followed by the Siret and Dambovita rivers, with total antidiabetic concentrations of 66.3 ng/L and 57.3 ng/L, respectively, also at the downstream points. | 3,621.8 | 2020-08-04T00:00:00.000 | [
"Environmental Science",
"Chemistry"
] |
AdS$_{\textbf{2}}$ solutions and their massive IIA origin
We consider warped $\mathrm{AdS}_2\times \mathcal M_4$ backgrounds within $F(4)$ gauged supergravity in six dimensions. In particular, we are able to find supersymmetric solutions of the aforementioned type characterized by $\textrm{AdS}_{6}$ asymptotics and an $\mathcal M_4$ given by a three-sphere warped over a segment. Subsequently, we provide the 10D uplift of the solutions to massive type IIA supergravity, where the geometry is $\mathrm{AdS}_2\times S^{3}\times\tilde{S}^3$ warped over a strip. Finally we construct the brane intersection underlying one of these supergravity backgrounds. The explicit setup involves a D0-F1-D4 bound state intersecting a D4-D8 system.
Introduction
Ever since the birth of the AdS/CFT correspondence [1,2], the quest for supersymmetric AdS vacua in string theory has become a goal of utmost importance. All the research efforts in the last decades devoted to this task have delivered a wide range of results including partial or exhaustive classifications of AdS string vacua in diverse dimensions (see e.g. [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17]). A further crucial element for providing a holographic interpretation of the corresponding AdS vacuum is to possess the underlying brane construction from which the solution emerges when taking the near-horizon limit (see [18] for a non-exhaustive collection of examples).
While in higher dimensions the organizing pattern of the landscape of supersymmetric AdS solutions is well delineated, achieving such a goal in two and three dimensions turns out to be too hard of a task, at least in full generality. This is due to the vast and rich structure opening up when it comes to establishing the possible geometries and topologies of the internal space. However, some partial developments in this direction can be found in [19][20][21][22][23][24][25]. More recently in [26,27] novel examples of AdS 2 & AdS 3 solutions were found in the context of massive type IIA string theory.
In all of these examples, the ten-dimensional background is given by a warped product of an AdS factor and a sphere warped over a line, their striking common feature being the non-compactness of the would-be internal space.
Such a feature seems to emerge very naturally in the context of brane intersections in massive type IIA supergravity when these produce AdS 2 or AdS 3 geometries in their near-horizon limit. So far, something similar seems to happen in higher dimensions only when the corresponding AdS vacua are obtained by employing non-Abelian T-duality (NATD) as a generating technique (see e.g. [28][29][30] for recent examples of this type). This issue makes the holographic interpretation of this type of supergravity construction problematic. Nevertheless, within the context of [29], the proposed holographic picture is that of an infinite quiver theory arising from a possibly deconstructed extra dimension.
Going back to the original goal of finding novel examples of supersymmetric lower dimensional AdS vacua, a very fruitful approach seems to be that of exploiting the existence of consistent truncations of string and M theory yielding lower dimensional gauged supergravities as effective descriptions. The reason why this can be so helpful is that one may restrict the search for solutions within a theory with a smaller amount of fields and excitations. Once in possession of new lower dimensional solutions, a ten (eleven) dimensional solution can be generated by using the needed uplift formula. In this context, new supersymmetric solutions were found in [31][32][33] and [34][35][36][37][38][39], by exploiting consistent truncations respectively down to N = (1, 1) supergravity in six dimensions and N = 1 supergravity in seven dimensions.
The focus of this paper is F(4) supergravity in six dimensions, as arising from a consistent truncation of massive type IIA supergravity on a squashed 4-sphere [40]. We study supersymmetric warped AdS2 solutions supported both by a non-trivial 2-form field and by a non-trivial profile for the universal scalar field. We show how such an Ansatz produces half-BPS solutions in which the full six-dimensional geometry is AdS2 × S3 warped over a segment. Upon uplifting, this produces an AdS2 solution of massive type IIA string theory in which the ten-dimensional background is given by AdS2 × S3 × S̃3 warped over a strip. Furthermore, we show how these ten-dimensional backgrounds can equally be obtained by taking the near-horizon limit of a non-standard D0-F1-D4-D4′-D8 intersection specified by a certain brane-charge distribution. Finally, we conclude by speculating on the physical interpretation of our construction.
AdS2 × M4 Solutions in F(4) Gauged Supergravity
In this section we derive two supersymmetric warped AdS2 × M4 backgrounds in d = 6 F(4) gauged supergravity. The solutions preserve 8 real supercharges and are characterized by AdS6 asymptotics and by a running profile for the 2-form gauge potential included in the supergravity multiplet. The 2-form wraps the internal directions of AdS2 and supports the singular behavior arising in the IR regime. As we will see, this hints at a physical interpretation of these backgrounds in terms of branes intersecting the D4-D8 system that gives rise to the AdS6 vacuum.
We will first introduce our setup, six-dimensional F(4) gauged supergravity in its minimal incarnation 1 . Then we will formulate a suitable Ansatz for the bosonic fields of the supergravity multiplet and for the corresponding Killing spinor. With this information at hand, we will derive the BPS equations and solve them analytically. Minimal N = (1,1) supergravity in d = 6 [41,42] is obtained by retaining only the pure supergravity multiplet; as a consequence, the global isometry group breaks down to [42][43][44]

G_0 = R⁺ × SO(4) .    (2.1)

The R-symmetry group is realized as the diagonal SU(2)_R ⊂ SO(4) ≃ SU(2) × SU(2). The 16 supercharges of the theory are then organized into their irreducible chiral components. The fermionic field content of the supergravity multiplet comprises two gravitini and two dilatini, both of which can be packed into pairs of Weyl spinors of opposite chirality. Furthermore, in d = 1 + 5 spacetime dimensions, symplectic-Majorana-Weyl (SMW) spinors 2 may be introduced. The SMW formulation manifestly arranges the fermionic degrees of freedom of the theory into SU(2)_R doublets, which we denote respectively by ψ_µ^a and χ^a with a = 1, 2. It is worth mentioning that these objects have to respect a "pseudo-reality condition" of the form (B.5) in order to describe the correct number of propagating degrees of freedom.
The bosonic field content of the supergravity multiplet is given by the graviton e_µ^m, a positive real scalar X, a 2-form gauge potential B(2), a non-Abelian SU(2)-valued vector field A^i, and an Abelian vector field A^0. The consistent deformations of the minimal theory consist of a gauging of the R-symmetry SU(2)_R ⊂ SO(4), making use of the vectors A^i, and a Stückelberg coupling inducing a mass term for the 2-form field B(2). The strength of the former deformation is controlled by a coupling constant g, that of the latter by a mass parameter m. The bosonic Lagrangian takes the form given in [40,41,45], with the corresponding field strengths. A combination of the gauging and the massive deformation induces a scalar potential (2.4), which can be re-expressed through (2.5) in terms of a real "superpotential" f(X) given in (2.6). The supersymmetry variations of the fermionic fields are expressed in terms of a 6D Killing SMW spinor ζ^a as in (2.7) [41,45], with ∇_µ ζ^a = ∂_µ ζ^a + (1/4) ω_µ^{mn} Γ_{mn} ζ^a and (Ĥ_{mn})^a_b defined in terms of the Pauli matrices σ^i given in (B.8). Varying the Lagrangian (2.2) with respect to all the bosonic fields, one obtains the equations of motion (2.9), where D is the gauge covariant derivative, Dω^i = dω^i + g ε^{ijk} A^j ∧ ω^k, for any ω^i transforming covariantly under SU(2). Finally, we mention that the scalar potential (2.4) admits a critical point giving rise to an AdS6 vacuum preserving 16 real supercharges. This vacuum is realized by the vacuum expectation value of X given in (2.10), with all the gauge potentials vanishing.
The General Ansatz
Let us consider a 6D metric of the general form (2.11), associated with a warped background of the type AdS2 × M4, where M4 is locally written as a fibration of an S3 over an interval I_α. We point out that the warp factor V is non-dynamical; it has been introduced because its gauge-fixing will turn out to be crucial for solving the resulting BPS equations analytically.
As far as the 2-form gauge potential B(2) is concerned, it will wrap AdS2 only, as in (2.12). We furthermore assume a purely radial dependence for the scalar and, for simplicity, we restrict ourselves to the case of vanishing vectors, i.e. A^i = 0 and A^0 = 0. We also need a suitable Ansatz for the Killing spinor corresponding to the spacetime background given in (2.11) and (2.12). As we pointed out in [46], the action of the SUSY variations on the SU(2)_R indices of the Killing spinor ζ^a is trivial, so it is more natural to cast the components of a Killing spinor into a (1+5)-dimensional Dirac spinor ζ. Following the splitting of the Clifford algebra given in (B.9), the Killing spinors considered are of the form (2.14). The spinor η_{S3} is a Dirac spinor, hence it has 4 real independent components, and it satisfies the Killing equation (2.15), where R⁻¹ is the radius of S3 and γ_{θ_i} are the Dirac matrices introduced in (B.7), expressed in the curved basis {θ_i} on the 3-sphere.
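For orientation, the standard Killing spinor equation on a round S³ — quoted here in textbook form rather than reproduced from the paper, and written with the radius ℓ = R⁻¹ as in the conventions above (the sign of the Killing constant is convention-dependent) — reads

\nabla_{\theta_i}\,\eta_{S^3} \;=\; \frac{i}{2\ell}\,\gamma_{\theta_i}\,\eta_{S^3}\,, \qquad \ell = R^{-1}\,.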
Regarding the spinors χ±_{AdS2}, they are Majorana-Weyl Killing spinors on AdS2 and possess only 1 real independent component each. They respectively solve the equations (2.16) 3 , where L⁻¹ is the radius of AdS2 and ρ_{x_α} are the Dirac matrices introduced in (B.6), given in the curved basis {x_α} on AdS2.
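A hedged sketch of the corresponding AdS₂ Killing spinor equations, in a standard form compatible with the chirality assignments above (χ± are Weyl spinors of opposite chirality, and ρ_{x_α} flips chirality, so the two spinors must couple); the precise signs and factor placements depend on conventions:

\nabla_{x_\alpha}\,\chi^{\pm}_{\mathrm{AdS}_2} \;=\; \pm\,\frac{L}{2}\,\rho_{x_\alpha}\,\chi^{\mp}_{\mathrm{AdS}_2}\,, \qquad \ell_{\mathrm{AdS}_2} = L^{-1}\,.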
3 Since χ±_{AdS2} are Weyl spinors, they respectively satisfy the chirality conditions ρ_* χ±_{AdS2} = ± χ±_{AdS2}, with projectors Π_± = (1/2)(I ± ρ_*). It follows that they can be organized into a Majorana doublet. Finally, ε_0 is a 2-dimensional real spinor encoding the two chiral parts of ζ, where we used the identity (B.10). In total, ζ depends on 16 real independent supercharges; as we will see, this number will be lowered by an algebraic projection condition associated with the particular background considered.
AdS2 × S3 × I_α Solutions
Let us now derive two analytic warped solutions of the type AdS2 × S3 × I_α within the general Ansatz of section 2.2. Both preserve 8 real supercharges (1/2-BPS), enjoy AdS6 asymptotics, and have a singular IR regime. The first solution is characterized by the following Ansatz,
(2.18)
If we now impose a suitable algebraic projection condition on the spinor ζ written in (2.14), we can specialize the SUSY variations of the fermions (2.7) to the background (2.18). In this way we obtain a set of first-order BPS equations, of the schematic form

U' = (1/4) e^V [ cos(2θ)⁻¹ ( (5 + 3 cos(4θ)) f + 6 sin(2θ)² X D_X f ) + L e^{-U} sin(2θ) ] ,   (2.20)

together with the remaining flow equations. In addition to the first-order equations, one has to impose two additional constraints, (2.21). If the superpotential f is given by (2.6), it is easy to see that the constraints (2.21) are satisfied. Let us now make a gauge choice for the non-dynamical warp factor V. Then the equations (2.20) can be integrated analytically for α ∈ [0, π/4], and the corresponding solution is given in (2.23). Finally, if we take the α → 0 limit, the solution (2.23) is locally described by the AdS6 vacuum (2.10), while for α → π/4 the background becomes singular.
The second solution is simpler: it can be found by setting the two warp factors U and W of (2.18) equal. In this case we obtain a curved domain-wall solution charged under the 2-form. The Ansatz now takes the following form
(2.25)
With this prescription the Killing spinor (2.14) simplifies accordingly. Also in this case the constraints are satisfied if the superpotential has the form (2.6). Making a suitable gauge choice, one finds that the equations (2.27) are solved by the expressions collected in (2.30).
The Massive IIA Origin
We will now move on to the 10D origin of these backgrounds in massive IIA supergravity. We will start by discussing their uplifts using the formula in [40]. Later, for the simpler case, we will also provide a brane solution, which will allow us to reinterpret the charged domain wall (2.30) as a particular background with polarized branes.
Uplifts and AdS 2 × S 3 × S 3 × I α × I ξ Backgrounds
In this section we present the consistent truncation of massive IIA supergravity around the warped AdS6 × S4 vacuum [40] and discuss the uplifts of the AdS2 × M4 solutions obtained in section 2.3. If one chooses the 6D gauge parameters as in (3.1), the 6D equations of motion (2.9) can be obtained from the truncation Ansatz (3.2) for the 10D background 4 [40], where Δ = X c² + X⁻³ s² and ds²₄ is the metric of a squashed 4-sphere, locally written as a fibration of a 3-sphere S̃3 over a segment 5 . The warped AdS6 × S4 vacuum of massive IIA is naturally obtained by uplifting the 6D vacuum (2.10): for X = 1 and vanishing gauge potentials, the manifold (3.3) becomes a round 4-sphere 6 .

4 We use the string frame, while in [40] the truncation Ansatz is given in the Einstein frame; see appendix A.
5 They are defined by the relation dθ^i = −(1/2) ε_{ijk} θ^j ∧ θ^k.
From (3.4) it follows that the AdS6 × S4 vacuum is supported by the 4-form flux F(4); together with the dilaton, these are exactly the flux and dilaton configurations corresponding to the near-horizon limit of the localized D4-D8 system of [40,47]. The uplifts of the warped AdS2 solutions obtained in section 2.3 can be easily derived by plugging the explicit form of the 6D backgrounds (2.23) and (2.30) into the truncation formulas (3.2) and (3.4). In both cases one obtains a 10D background AdS2 × S3 × S̃3 fibered over two intervals, parametrized by the 6D coordinate α and by the internal coordinate ξ.
In particular, the 10D metric corresponding to the charged domain-wall solution (2.30) can be written as in (3.6), where ds²₄ is given by (3.3) in the particular case of vanishing vectors, A^i = 0.
D4-D8 System and AdS 6 Vacua
In order to exhibit the explicit brane picture producing the 10D background (3.6) in its near-horizon limit, we first review, as a preliminary analysis, how the AdS6 vacuum is obtained as the near-horizon limit of the D4-D8 intersection. The complete brane system realizing this mechanism is sketched in Table 1.

[Table 1. The brane picture underlying the 5d SCFT described by D4- and D8-branes; the system is 1/4-BPS. Columns: the coordinates t, ρ, ϕ1, ϕ2, ϕ3, z, ….]

The corresponding string-frame supergravity background reads
where vol(4) and ṽol(4) represent the volume forms on the R⁴ factors spanned respectively by (r, θ_i) and (ρ, ϕ_i). The functions H_D4 and H_D8 specify a semilocalized D4-D8 intersection [47]. The above background yields a warped product of AdS6 and half an S4 in the limit z → 0 and ρ → 0, with z³ρ² kept finite. In what follows we will consider the intersection of the D4-D8 system with a D0-F1-D4 bound state. The presence of these new branes will break the isometry group of the AdS6 × S4 vacuum, producing the AdS2 foliation.
The resulting configuration is that of a non-standard brane intersection in the spirit of [48], since there is no transverse direction common to all the branes in the system. The explicit profile of the massive IIA supergravity fields in the string frame reads as in (3.14), with the warp factors appearing in the metric defined accordingly. If we now take the limit ρ → 0 while keeping (z, r) finite, the metric becomes (3.18), where L_{AdS2} = 1/2, i.e. AdS2 × S3 × S̃3 warped over the (z, r) coordinates. By comparing (3.17) with (3.2), one finds an explicit mapping between the (z, r) coordinates and the (α, ξ) coordinates appearing in the uplift formula. In particular, by comparing the warp factors in front of the AdS2 × S3 block of the metric and the two expressions for the 10D dilaton, one gets two algebraic relations which, once combined with the matching condition for the S̃3 block, give (3.20). The complete forms of the two 10D backgrounds then match through the coordinate change (3.20), upon further identifying Q_3 = m, together with the condition (3.1) relating the couplings g and m.
Acknowledgements NP would like to thank I. Bena, N. Bobev, Y. Lozano, J. Montero and C. Nunez for enlightening discussions. NP would also like to thank the members of the Department of Theoretical Physics at Uppsala University for their kind and friendly hospitality while some parts of this work were being prepared. The work of NP is supported by TÜBİTAK (Scientific and Technological Research Council of Turkey). The work of GD is supported by the Swedish Research Council (VR).
A Massive IIA Supergravity
In this appendix we review the main features of massive IIA supergravity [49]. The theory is characterized by the bosonic fields g_MN, Φ, B(2), C(1) and C(3). The action takes the form (A.1), where S_top is a topological term, H(3) = dB(2), F(2) = dC(1), F(4) = dC(3), and the 0-form field strength F(0) is identified with the Romans mass, F(0) = m. All the equations of motion can be derived 7 consistently from (A.1). They take the following form,

7 We set κ_10 = 8πG_10 = 1.
where M, N, … = 0, …, 9, and R and □ are respectively the 10D scalar curvature and the Laplacian. The stress-energy tensor is built from the fluxes and the dilaton, with ∇_M the Levi-Civita connection of the 10D background. The Bianchi identities then follow and, as a consequence of (A.5), the fluxes turn out to satisfy a Dirac quantization condition. It may be worth mentioning that the truncation Ansatz of section 3.1 is obtained by casting massive IIA supergravity into the Einstein frame [40].

B Spinor Conventions

The 6D Clifford algebra is {Γ^m, Γ^n} = 2 η^{mn} I_8, where {Γ^m}, m = 0, …, 5, are the 8 × 8 Dirac matrices and η = diag(−1, +1, +1, +1, +1, +1). The chirality operator Γ_* can be defined in terms of the above Dirac matrices as Γ_* = Γ^0 Γ^1 Γ^2 Γ^3 Γ^4 Γ^5, with Γ_* Γ_* = I_8.
(B.2)
For (1 + 5)-dimensional backgrounds, we can choose the matrices A, B, C, respectively realizing Dirac, complex, and charge conjugation, satisfying the defining relations of [50]. The second identity in (B.4) implies that it is actually inconsistent to define a proper reality condition on Dirac (or Weyl) spinors. However, it is always possible to introduce SU(2)_R doublets ζ^a of Dirac spinors, called symplectic-Majorana (SM) spinors, respecting a pseudo-reality condition [50] of the standard form

(ζ^a)* = ε_ab B ζ^b ,    (B.5)

where ε_ab is the SU(2)-invariant Levi-Civita symbol. The condition (B.5) ensures that the number of independent components of an SM spinor is the same as that of a Dirac spinor. Moreover, the above condition also turns out to be compatible with the projections onto the chiral components of a Dirac spinor. Hence it is possible to construct SM doublets of irreducible Weyl spinors, called symplectic-Majorana-Weyl (SMW) spinors. | 4,831.4 | 2018-11-27T00:00:00.000 | [
"Physics"
] |
Differentiated Instruction: Challenges and Opportunities in EFL Classroom
The issue of Differentiated Instruction (DI) has recently attracted the attention of scholars and practitioners because of its immense significance and the many opportunities it offers to enhance student learning. The current study contributes a brief overview of DI in the EFL context, illustrating the need to implement DI in the classroom to ensure that students learn languages successfully. Since differentiated instruction puts students at the center of teaching and learning, encourages equity and academic achievement, and acknowledges student diversity, it encourages teachers to be aware of individual needs, interests, skills, English proficiency levels, and students' preferred learning strategies. Although implementing DI can pose challenges, such as being time-consuming and placing pressure on teachers, the approach has advantages that can affect students' learning processes, future learning attitudes, and future success. The learning process thus offers many opportunities when the teacher is committed to differentiated instruction.
Introduction
Nowadays, students in English language schools come from a variety of linguistic, educational, socioeconomic, and ethnic backgrounds. Because classroom heterogeneity is growing, language teachers may find it difficult to facilitate effectively the language-learning experiences of diverse student communities. This is a consequence of the one-size-fits-all strategy frequently employed in the preparation of textbooks, additional materials, and technology intended to serve a large market (Aldossari, 2018; Raza, 2020).
It might be difficult to learn English as a second language due to the dissimilarity between the target language and the student's mother tongue (Maheswari et al., 2020; Manik & Suwastini, 2020). This problem is typically more severe when the individual qualities of each student are considered (Uzair-ul-Hassan et al., 2019). Therefore, the learning process must be more considerate of each student's individual qualities. Such a learning process positions students at the center of instruction (Matra, 2014), with the teacher acting primarily as a facilitator who seeks out the most effective methods and strategies to aid students throughout their learning (Bahous et al., 2011). Against this background, Tomlinson (2001) popularized Differentiated Instruction, a teaching approach built on purposeful attention to student diversity (Joseph, 2013). DI is further described as visionary, student-oriented, qualitative, and assessment-based, and it combines whole-class, large/small-group, and individual instruction (Tomlinson, 2001).
In implementing DI, teachers must assess several aspects before executing the teaching and learning process: students' readiness, interests, and learning profiles. Teachers can then use DI effectively throughout instruction, improving teaching, support, and learning for all students. Recognizing the distinctions among students is crucial to enhancing both teaching and student learning. In the teaching-learning process, many elements can affect how students grasp the key content, engage with classroom instruction, and demonstrate their abilities and expertise. Instruction must therefore address these disparities to help students succeed and fulfill their potential (Ortega et al., 2018; Tomlinson, 1999).
Numerous studies have demonstrated the benefits of differentiated instruction in EFL classrooms. It has been shown to improve intellectual growth and interest in the subject and students' comprehension of significant concepts, and to be a method that can help all students learn and make greater progress (AlHashmi & Elyas, 2018; Kotob & Abadi, 2019; Magableh & Abdullah, 2020b; Sougari & Mavroudi, 2019). Kotob and Abadi (2019) found that with the implementation of differentiated instruction there was a significant rise in the academic performance of students who had previously been categorized as low-achieving. Magableh and Abdullah (2020) demonstrated that differentiated instruction is an effective instructional design for classes containing students with a variety of skill levels. Nevertheless, several researchers have raised concerns regarding the difficulty of successfully integrating differentiated instruction (Aftab, 2016; Ismajli & Imami-Morina, 2018; Naka, 2018a).
Considering the numerous demonstrated benefits of differentiated instruction, the present study investigates the approach in greater depth: what differentiated instruction is, how to implement it, and what factors determine the success of its implementation, as well as the strengths and weaknesses of implementing the approach in English as a Foreign Language classrooms.
Differentiated Instruction
2.1 Defining Differentiated Instruction

Tomlinson (2000) describes differentiated instruction as a process in which teachers reorganize their classrooms to deliver the best possible education to every student. Students are engaged through a variety of teaching approaches adapted to their interests, needs, and abilities, so that every student has a range of options for acquiring and displaying knowledge (Aldossari, 2018). Differentiated instruction is typically provided in several variants to accommodate the varied qualities of individual students. Tomlinson and Imbeau (2010) call differentiation a sequence of educational strategies held as a way of thinking, as well as a set of principles for planning and implementing teaching and learning, involving ongoing reflection on how to deliver the learning content to students with various characteristics. Tomlinson (2001) then distinguishes differentiated instruction from individualized instruction, which emerged in the 1970s, defining precisely what separates the two concepts and clearing up a common misconception. She explains that the idea behind differentiated instruction differs significantly from individualized instruction: the goal is not to provide each student with a distinct level of instruction; rather, teachers may alternate between interacting with the whole class, with several groups, and with individuals. Differentiated instruction therefore does not make diverse groups uniform, which is one reason why flexible grouping is promoted in DI while fixed grouping is dismissed (Baecher et al., 2012). DI comprises three elements as its basis: content, process, and product. In DI, content plays the role of input, that is, the knowledge students are required to obtain, and the methods teachers employ to help students achieve that knowledge constitute the process (Tomlinson, 2001). To differentiate the material, teachers may employ additional texts, novels, or short stories at various reading levels (Algozzine & Anderson, 2007). To modify the process, teachers can use journal writing, model design, and choice boards (Tomlinson, 2001). Teachers can likewise customize the product component through contracts or lists of potential project ideas, since the goal of the product is for students to demonstrate their mastery of the topic (Algozzine & Anderson, 2007; Sulistyo et al., 2018).
2.2 Adjusting Content, Process, and Product in the Differentiated Classroom

Pham (2011) pointed out that differentiated instruction involves adjusting both the content and the process. Such modification can assist and enhance student development, since teachers can absorb and evaluate student performance more precisely when these areas are adjusted (Pham, 2011). According to Bigge and Shermis (2004), the targeted content must be both challenging and manageable for the learners; if it is not, they will become demotivated and struggle. Hence, modifying the content is essential, since it adjusts to the developmental progress and range of development of the learners. In addition, Pham (2011) emphasized that content modification should focus on essential instructional characteristics to achieve the targeted learning results.
Differentiated instruction should be based on the specific qualities of the students in the intended classroom. Any design of differentiated instruction must rest on a comprehensive understanding of student diversity. Consequently, the design of differentiated instruction always begins with a pre-assessment of student differences (Borja et al., 2015; Logan, 2011; Ortega et al., 2018). Tomlinson and Imbeau (2010) categorize the distinctions between students as readiness, interest, and learning profile.
According to Tomlinson (2001), the learning process is how students grasp the knowledge or skills presented to them. Process differentiation thus helps learners generate similar outcomes or products in various ways (Watts-Taffe et al., 2012). This implies that the teacher must offer and combine the tasks learners use to learn and understand the material (Suharyadi & Wulyani, 2022). The teacher may also adapt the knowledge process for each student based on their level of readiness, areas of interest, and learning profile. Good differentiation adapts or distributes the content according to the student's preferred learning style, which may be visual, kinesthetic, or auditory (Fauziah & Cahyono, 2022). Students can participate in activities designed to foster context by working in flexible groups that involve numerous grouping patterns and enable them to collaborate individually, in pairs, or as a team (Chamberlin & Powers, 2010). Several teachers have also stated that flexible grouping can accommodate both students with similar learning styles and preferred learning methods and those with different ones (Chamberlin & Powers, 2010).
Besides, the process is also considered the real key to teaching and learning, where the focus is on how students comprehend the material (Tomlinson & Imbeau, 2010). This is the actual delivery of the designed and planned material. This phase describes how the teacher organizes the class, considering the diversity and learning abilities of each student (Baumgartner et al., 2003; Borja et al., 2015; Ortega et al., 2018). Students can be encouraged to engage and collaborate to generate new content knowledge using flexible-grouping strategies (Winarti et al., 2021). To facilitate the students' process, grouping students according to their readiness, interests, and learning profile can be quite advantageous at this point. Using graded activities that involve the same abilities, various learning activities can be provided to match the needs of each learner (Jin, 2015; Leblebicier, 2020; Valiandes, 2015). Students with an audiovisual learning style can use audiovisual learning media, while students with a visual learning style can use graphic organizers, concept maps, or charts. Assignment papers may likewise be modified to students' needs, with varied time allotments and amounts of help (Ernest et al., 2011; Fuad et al., 2017; Malacapay, 2019; Shih, 2010).
Finally, besides adjusting content and process, Tomlinson and Imbeau (2010) suggest that the product is just as important, since the product is the component of the curriculum that assesses whether students have mastered the learning subject and demonstrated the desired abilities (Tomlinson & Imbeau, 2010). In differentiated instruction, the way teachers assess students is tailored to allow them to demonstrate their understanding and skills in a manner that corresponds with their preferences and characteristics (Aliakbari & Haghighi, 2014; Ortega et al., 2018; Tomlinson, 2000). Assessment is possible by evaluating the products created by individuals or small groups, by showcasing the individual's or group's products, or by providing rubrics based on differing skill levels that allow for variations in difficulty (Ernest et al., 2011; Subekti, 2020).
According to Tomlinson and Imbeau (2010), affect is the final part of the curriculum that can be modified. It concerns how the emotions and feelings arising from students' previous and present experiences influence their perceptions of the learning process and their positions as learners. Addressing students' affective demands often happens when a teacher changes the classroom atmosphere, more so than with the other three classroom aspects mentioned. A positive attitude toward the learning topic and toward the students as learners can enhance their academic development (Tomlinson & Imbeau, 2010). Students can be motivated to learn if they believe they need the content to prepare for their future careers. Similarly, students can become unmotivated if the study material reminds them of a negative experience. Recognizing students' past experiences and future aspirations can help the instructor tailor instruction to accommodate their emotions and ambitions. The desire to go to English-speaking countries may drive students to study English, and choosing instructional materials relevant to these countries may increase their motivation. A variety of activities, such as asking for directions and engaging in other everyday conversations typical of English-speaking countries, will help students develop a positive attitude toward the learning process (Suwastini et al., 2021).
Furthermore, to prepare a classroom for differentiated instruction, it is necessary to modify seating arrangements so that students have access to a range of learning possibilities, including whole-class instruction, peer instruction, group discussions, teamwork, and independent study (Ortega et al., 2018). From the perspective of differentiated instruction, not every student works well in the same setting, necessitating a variety of instructional strategies to help learners derive meaning from newly acquired knowledge (Pham, 2011). A differentiated classroom environment encourages students to realize their potential and advance to the next level of language proficiency, while inspiring other students in the class to develop, guided by the improvement and achievement of their more advanced classmates (Vargas Parra et al., 2017). According to Ernest et al. (2011), the learning environment can also be varied by allowing students to complete assignments under their preferred conditions: some students prefer a quiet spot for homework, while others may perform better with music playing in the background.
The Strengths of Differentiated Instruction
Numerous research projects have demonstrated the benefits of utilizing differentiated instruction in EFL classrooms. For example, Kotob and Abadi (2019) discovered that low-achieving EFL students benefit from differentiated instruction through increased test results, leading to an improvement in students' achievements. Moreover, flexible grouping proved an effective strategy for differentiating instruction and maximizing student performance, particularly for low achievers, whose test scores climbed substantially. In other words, the successful technique of differentiated instruction ensures their academic development. An inclusive classroom setting produces a productive learning environment in which students feel comfortable and valued; the curriculum and teaching methods utilized by the English language teacher contributed to the development and inclusion of students (Celik, 2019). Differentiation necessitates a significant shift in the current conception of inclusion in terms of students' engagement and dedication to the courses. It also motivates the teacher, who becomes more of a facilitator than a dictator (Celik, 2019).
Similarly, Sharaf (2019, as cited in Dabr, 2021) demonstrated the usefulness of differentiated instruction for improving EFL writing skills. She administered the treatment to a single experimental group of ninety sophomore EFL students. The study showed significant differences between the mean test scores before and after DI was implemented, with improvement in the post-test. Thus, differentiated instruction is effective for enhancing the writing skills of EFL students. In a similar vein, DI also helps students increase their overall skills, since the instruction distinguishes them according to their learning preferences (Dabr, 2021). Next, Magableh and Abdullah (2020a) investigated the influence of DI on EFL students' performance in reading comprehension. Their study involved two control groups and two experimental groups; the experimental groups were taught with differentiated instruction, whereas the control groups were taught with the usual instruction. The results suggested that using differentiated instruction to improve reading comprehension among Jordanian EFL students in grades four and five was effective. Students who received differentiated instruction developed their reading comprehension, which shows an improvement in students' performance in reading comprehension (Suson et al., 2020).
Further, the positive effect of DI was also confirmed by the project of Magableh and Abdullah (2020b) on the effects of DI on English achievement among students. This study involved sixty eighth-grade students from schools in Irbid, Jordan. A group of thirty students was taught English using differentiated instruction, which showed a favorable effect on reducing variation in the classroom and encouraging EFL learning and instruction. Thus, after reviewing the ideas and implementation of differentiated instruction, it can be concluded that using differentiated teaching to enhance the learning of students with varied backgrounds may be helpful. The implication, in both theory and practice, is that students' interests and needs may be met through differentiated instruction, enabling them to learn English successfully. This accomplishment results from teachers' thoughtful selection of the differentiated instruction components to employ in the classroom. Consequently, most students who receive differentiated teaching are more confident and at ease when learning English according to their learning requirements and preferences (Tanjung & Ashadi, 2019).
The Weaknesses of Differentiated Instruction
In past years, teachers have typically encountered a variety of challenges in the classroom. Recognizing learner variability is one method for preventing low levels of student performance. Teachers should adapt their expectations, the learning content, and the activities to the abilities and differences of their students (Celik, 2019). EFL teachers regard DI problems in mixed-ability classes as the most challenging obstacle to overcome (Naka, 2018b). The greater the diversity of the classroom, the more preparation is required before instruction. Pre-assessment must be undertaken to determine the readiness, interests, and learning characteristics of each student (Suwastini et al., 2021).
High student-to-teacher ratios, inadequate pre-service training for teachers, a dearth of appropriate tools for implementing differentiated instruction, a lack of student acquisition, and reliance on tried-and-true methods of instruction all rank among the most significant obstacles (Aldossari, 2018). Teachers may be prevented from making the necessary changes to their classroom teaching by practical issues (such as a lack of time or resources) or by the additional preparatory time needed for its application (Sougari & Mavroudi, 2019; Widiati et al., 2023). Limited time is a particular difficulty in DI, as deep learning cannot occur without it (Porta & Todd, 2022).
Therefore, most teachers do not agree that sufficient planning and instructional time are available for differentiation. When designing and implementing differentiated instruction, teachers encountered the most significant challenges and obstacles in the form of a lack of planning and instructional time (Aftab, 2016). They were unable to integrate differentiated topic instruction into their regular classroom procedures due to their lack of understanding, time constraints, and the demands of interactive lesson preparation (Chien, 2015). These teachers taught identical content to the entire class. For differentiated instruction they used only conventional texts rather than a range of resources (e.g., jigsaws, diverse organizers, and varied texts), since they were under time pressure to complete the textbook (Chien, 2015). This mismatch in teacher roles may have a theoretical justification: teachers may acknowledge the value of monitoring and facilitating students' learning, but because of practical issues (such as pressure to finish the course material or time constraints), they may regress toward the more comfortable procedure of continuing to act as a controller (Sougari & Mavroudi, 2019).
Teachers' ability to embrace this instructional approach effectively, however, is likely to be restricted by their inexperience with DI's basic concepts. They may therefore not experience its benefits, which might cause them to develop a negative opinion of DI and revert to the traditional approach with which they are familiar (Sougari & Mavroudi, 2019). Thus, these teachers must alter their instructional perspective to be more accepting of the variances in their classrooms. When teachers are aware of these variances, they must be committed to making regular adjustments to their instruction to accommodate all students regardless of their differences. To ensure that all students have equal opportunities to master the learning content, teachers must continuously evaluate their students and reinvent their lessons. Therefore, a dedicated English instructor should have a variety of teaching styles to accommodate the variances among students (Afriliyasanti et al., 2016; Suwastini et al., 2021).
Conclusion
It has been argued that the principles of differentiated instruction offer EFL classrooms several benefits, particularly in mixed-ability classrooms, including fostering students' performance, adjustment, self-awareness, responsibility, engagement, and motivation, as well as the development of student cooperation and collaboration. Differentiated instruction provides teachers with ample options for the reflexive process and opens the door to fair assessments. However, these advantages are not without disadvantages. Differentiated instruction requires commitment from both school administration and teachers; it is time-intensive and increases the teacher's workload. Despite the difficulties, differentiated instruction does offer benefits that can affect students' learning experience, their future attitude toward the learning process, and their future accomplishments. Thus, when teachers are devoted to the principles of differentiated instruction, the learning process offers many opportunities. It also calls for ongoing classroom action research in which teachers continually test new ideas, methods, and media to improve students' overall performance. Considering these findings, teachers may find it helpful to take DI training workshops to improve their ability to differentiate instruction and to use DI in language classrooms. It would also be helpful to train teachers in making better use of the differentiated resources already included in coursebooks. Finally, assistance from school administrators (such as a customized website with differentiated resources) could ease teachers' concerns about the design of instructional material.
"Education",
"Linguistics"
] |
Design and Analysis of an O+E-Band Hybrid Optical Amplifier for CWDM Systems
Broadband amplification in the O+E-band is very desirable nowadays as a way of coping with increasing bandwidth demands. The main issue with doped fiber amplifiers working in this band, such as the bismuth-doped fiber amplifier, is that they are costly and not widely available. Therefore, a wideband and flat-gain hybrid optical amplifier (HOA) covering the O+E-band based on a parallel combination of a praseodymium-doped fiber amplifier (PDFA) and a semiconductor optical amplifier (SOA) is proposed and demonstrated through numerical simulations. The praseodymium-doped fiber (PDF) core is pumped using a laser diode with a power of 500 mW that is centered at a wavelength of 1030 nm. The SOA is driven by an injection current of 60 mA. The performance of the HOA is analyzed by the optimization of various parameters such as the PDF length, Pr3+ concentration, pump wavelength, and injection current. A flat average gain of 24 dB with a flatness of 1 dB and an output power of 9.6 dBm is observed over a wavelength range of 1270-1450 nm. The noise figure (NF) varies from a minimum of 4 dB to a maximum of 5.9 dB for a signal power of 0 dBm. A gain reduction of around 4 dB is observed for an O-band signal at a wavelength of 1290 nm by considering the up-conversion effect. The transmission performance of the designed HOA as a pre-amplifier is evaluated based on the bit-error rate (BER) analysis for a coarse wavelength-division multiplexing (CWDM) system of eight on-off keying (OOK)-modulated channels, each having a data rate of 10 Gbps. An error-free transmission over 60 km of standard single-mode fiber (SMF) is achieved for different data rates of 5 Gbps, 7.5 Gbps, and 10 Gbps.
Introduction
IP traffic is continuously increasing globally due to the proliferation of various applications and technologies in our daily lives such as fifth-generation mobile networks, cloud computing, web applications, and the Internet of Things (IoT) [1,2]. The tendency of increased bandwidth demands and, therefore, network capacity will persist in the future. Urgent steps are required to meet the enormous bandwidth demands since the commonly employed C-band for commercial optical links is facing its capacity limits [2]. Fiber-optic communication is mainly conducted in the wavelength region where optical fibers have relatively low transmission loss [2]. This low-loss wavelength region ranges from 1260 nm to 1650 nm and is divided into six wavelength bands referred to as the O-, E-, S-, C-, L-, and U-bands, which range from 1260-1360 nm, 1360-1460 nm, 1460-1530 nm, 1530-1565 nm, 1565-1625 nm, and 1625-1675 nm, respectively [2]. Moreover, the O-band has smaller fiber dispersion, which enables high-speed optical transmission in this band without using any dispersion compensation schemes [3]. Similarly, the E-band has exhibited low transmission loss, comparable to that of the O-band, since the invention of dehydration techniques in glass production [3]. Different solutions are available to fix the above-mentioned capacity limits of optical communication systems, which include (a) the optimal use of advanced multilevel modulation formats, (b) spatial division multiplexing (SDM)-based systems, and (c) exploiting the relatively low attenuation and unused optical windows for transmission beyond the C-band [2,4]. The first two approaches can be considered for addressing the issues of capacity and bandwidth demand at the expense of certain issues, which have been discussed in [2,4]. The third approach, which exploits the high bandwidth provided by SMFs, is generally called optical multiband transmission (OMBT) and is the most viable and efficient solution [4]. The OMBT technique relies on the use of the low-attenuation optical bands of SMF for data transmission, resulting in an 11-fold expansion of the available bandwidth of the C-band and 5 times the available bandwidth of the C+L-band [5]. The first phase of the OMBT scheme targets the implementation of communication outside the C-band, such as in the L-band, using commercial off-the-shelf components [5]. In the second phase, the remaining bands, such as the O-, E-, S-, and U-bands, will be considered for data transmission [5]. The upgrade to the L-band is carried out based on erbium-doped fiber amplifiers (EDFAs), thus enabling the addition of a massive bandwidth of around 60 nm to the current 35 nm window of the C-band [2,5]. Therefore, the main challenge for the realization of OMBT for the second phase is the design and realization of innovative, efficient, and low-cost photonic components, particularly optical sources and wideband amplifiers compatible with the O-, E-, S-, and U-bands.
In optical fiber communication, optical signals suffer from various impairments such as fiber attenuation, component insertion losses, fiber dispersion, and nonlinearities [6]. The capacity of optical networks can be increased by providing a sufficient power budget or by increasing the signal power through the use of amplifiers at regular intervals along the fiber link [7,8]. Initially, electronics-based semiconductor amplifiers were employed to boost the power of optical signals. However, these amplifiers had some issues such as the requirement of optical-to-electrical and then electrical-to-optical conversions, low reliability, complexity, bulkiness, and cost inefficiency [9]. All-optical fiber amplifiers have been extensively researched and various doped fiber amplifiers based on the rare-earth dopants ytterbium, praseodymium, thulium, and holmium have been proposed [6].
Another way to realize wideband optical amplifiers is to combine multiple optical amplifiers in parallel or in series, where each amplifier operates over a distinct spectral region [10]. Such an arrangement is generally known as a hybrid optical amplifier (HOA) and has been shown to enhance system capacity and performance [11]. HOAs can be realized using different combinations of series or parallel amplifiers, as discussed in [12]. Alabbas et al. proposed a C+L-band HOA based on hafnia-bismuth-erbium-doped fiber (HBEDF) and zirconia-erbium-doped fiber (ZEDF) as the gain medium [13]. For the design proposed in [13], a gain of 14.6 dB, gain fluctuation of 1.8 dB, and NF fluctuating in the range of 4.3-7.9 dB over a wavelength range of 1530-1600 nm were observed. HOAs demonstrated in other studies include a distributed Raman-EDFA used for amplifying a 120 Tbps optical signal over 630 km of SMF [14], a Raman-EDFA used for a 54 Tbps optical signal over 9150 km of SMF [15], and a Raman-SOA used for a 107 Tbps optical signal over 300 km of SMF [16]. Guo et al. proposed an HOA based on a combination of two fiber parametric wavelength converters and an EDFA for S-band amplification [17]. An NF as low as 4 dB was observed when the conversion efficiency was kept higher than 10 dB for the first stage. Kaur et al. designed an HOA based on SOA-EDFA-Raman for the transmission of 40 DWDM channels at a rate of 10 Gbps over 240 km of SMF operating at the edge of the L- and U-bands [18]. A gain of 31 dB with a flatness of 0.8 dB and an NF of 5.7 dB was observed over a 1611.8-1620.5 nm wavelength range. Hafiz et al. demonstrated an erbium-ytterbium co-doped waveguide amplifier (EYDWA)-Raman HOA having a gain of 25 dB with a flatness of 2.78 dB and an NF of less than 6 dB over a 1539.7-1562.7 nm wavelength range [19]. Hafiz et al. also proposed an HOA based on an erbium-ytterbium co-doped fiber amplifier (EYDFA) and a backpropagating Raman amplifier [20]. A gain of 26 dB was achieved with a flatness of 1.37 dB over a wavelength range of 1545-1565 nm. In [21], an HOA based on a Raman-EDFA configuration was proposed for WDM transmissions; it exhibited a gain of 46 dB and an NF of 3 dB over a 1530-1600 nm wavelength range. R. E. Tench reported designs of HOAs based on a three-stage holmium-doped fiber amplifier (HDFA) and a thulium-doped fiber amplifier (TDFA) [22] and a two-stage HDFA and TDFA employing a shared pump [23]. A small-signal gain of 70 dB over a 2009-2098 nm wavelength range and an NF of 7.5 dB were observed in [22], while a gain of 49.1 dB and an NF of 6.5 dB at 2051 nm were observed in [23]. Maes et al. reported on an E+S-band HOA based on bismuth-doped fiber (BDF) and EDF, which produced a small-signal gain and output power of 27 dB and 24.5 dBm, respectively, over a 1431-1521 nm wavelength range [24]. Guo et al. demonstrated a three-stage S-band HOA based on an L-band EDFA placed between two optical parametric amplifier (OPA)-based wavelength converters [25]. An average gain of 18.6 dB with a flatness of 1.2 dB and an NF of 5.1 dB was observed. F. D. Ros et al. optimized a C+L-band HOA based on an EDFA-Raman amplifier using neural network models, where the gain flatness decreased from 6.7 dB to 1.9 dB [26]. A survey of the previous work related to HOAs is summarized in Table 1 and compared with the main results obtained from our proposed work. Missing information in the studies presented in Table 1 is represented by dashes.
In this paper, we propose a wideband flat-gain O+E-band HOA based on a parallel PDFA-SOA configuration. A wideband fiber Bragg grating (WFBG) is used to separate the O- and E-band signals so that they can be input separately into the PDFA and the SOA, respectively. In our previous studies, we optimized the values of the Pr 3+ doping concentration and the length of the PDF [6,27]. In this work, we optimize the pump wavelength and the injection current and analyze the performance of the proposed HOA. A small-signal gain of 24 dB with a flatness of around 1 dB and an NF of 4-5.9 dB over a 1270-1450 nm wavelength range was observed. Finally, the transmission performance of the HOA was evaluated as a pre-amplifier in a CWDM system of eight OOK-modulated optical signals over 60 km of SMF with an aggregate data rate of 80 Gbps. We implemented the proposed HOA using the well-known commercial tool OptiSystem [28]. The proposed HOA can be used to enable amplification in future optical access networks.
Theoretical Background
Spectroscopic Properties and Rate Equations of Pr 3+
It is evident from Figure 1a that Pr 3+ has two pump absorption bands with their peaks centered at 1010 nm and 1400 nm. Pump wavelengths in the 1000-1040 nm range are widely used to excite the Pr 3+ ions [6,27]. The emission starts at 1220 nm, but the emission cross-section is maximum at around 1300 nm. The simplified energy-level diagram of Pr 3+, based on four-level absorption and radiative transitions, is shown in Figure 1b. The main energy levels are labeled 1G4, 3P0, 1D2, 3F4, 3F3, 3H5, and 3H4 [29]. The 1G4 <-> 3H4 transitions enable the pump ground-state absorption (GSA) and emission. Similarly, the 3H4 -> 3F4 and 1G4 -> 3H5 transitions account for the signal GSA and signal emission. Moreover, the carrier density of the 1G4 level can be decreased by the up-conversion effect due to the 1G4 -> 1D2 and 1G4 -> 3H5 transitions [29], because the energy difference between the 1G4 and 1D2 levels is equal to the energy difference between the 1G4 and 3H5 levels [6,27,29]. The carrier densities at the levels are represented by n1, n2, n3, n4, and n5, and the total density is n_t = n1 + n2 + n3 + n4 + n5 [27]. The rate equations for the energy-level diagram of Figure 1b, together with the transition rates γ13, γ31, γ32, γ34, and γ35, are given in [27].
The small-signal gain of the PDFA depends entirely on the transitions between the 3H4 and 1G4 levels, which have carrier densities n1 and n3, respectively. The propagation equation for a signal traversing a slice of the PDFA gain medium of thickness dz is given in [27,29]; in that expression, σ34 is the excited-state absorption (ESA) cross-section at the signal wavelength, which is not considered by OptiSystem. The small-signal gain is then obtained by integrating the propagation equation over the PDF length L [27,29].
The OptiSystem model of the traveling-wave SOA performs lumped amplification, which is applicable to the amplification of both CW and pulsed optical signals. The material gain coefficient g_m and the carrier density N(t) are interrelated, and the net gain coefficient g is obtained from g_m taking into account the internal loss α [30]. Neglecting group-velocity dispersion (GVD) in the SOA and taking into account the amplified spontaneous emission (ASE), the gain of a traveling-wave SOA at a distance z is G(t, z) = exp[g(t)z] [30]. The carrier and optical intensities are related by a rate equation in which β, the spontaneous emission coefficient, characterizes the part of the total spontaneous emission coupled to the guided wave [30]. The recombination rate is generally assumed to be linearly proportional to the carrier density in order to obtain nearly accurate results, i.e. R(n) = n/τ_r. Assuming α = 0, the steady-state solution of this rate equation is used in the model [30]. The various symbols used in Equations (1)-(12) are described in Table 2.
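To make the saturation behaviour of the SOA branch concrete, the sketch below solves the standard implicit saturated-gain relation for a traveling-wave amplifier, G = G0 exp(-(G - 1) P_in / P_sat). This is a textbook relation consistent with the simplifications stated above (α = 0, ASE neglected), not the exact OptiSystem model; the function name and all numerical values are illustrative assumptions.

```python
import numpy as np

def soa_saturated_gain(G0, P_in, P_sat, tol=1e-9, max_iter=1000):
    """Solve the implicit traveling-wave amplifier gain equation
    G = G0 * exp(-(G - 1) * P_in / P_sat) by damped fixed-point iteration.
    G0   : unsaturated (small-signal) gain, linear units
    P_in : input signal power (same units as P_sat)
    P_sat: saturation power of the amplifier
    """
    G = G0
    for _ in range(max_iter):
        G_new = G0 * np.exp(-(G - 1.0) * P_in / P_sat)
        if abs(G_new - G) < tol:
            return G_new
        G = 0.5 * (G + G_new)  # damped update for numerical stability
    return G

# Illustrative example: 25 dB small-signal gain, -15 dBm input, 10 dBm P_sat
G0 = 10 ** (25 / 10)
P_in = 10 ** (-15 / 10)   # mW
P_sat = 10 ** (10 / 10)   # mW
G = soa_saturated_gain(G0, P_in, P_sat)
print(f"Saturated gain: {10 * np.log10(G):.2f} dB")
```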
Table 2. Description of symbols:
P_p, P_s: power of pump and signal
P_p(0), P_p(z): power of the pump at the input and at a position z along the PDF
P_sat: pump saturation power
P_a: power absorbed
R_ij, W_ij: transition rates of pump and signal between the i-th and j-th levels
A_ij: rates of spontaneous emission between the i-th and j-th levels
hν_p, hν_s: photon energies of pump and signal

Figure 2a shows the schematic of the proposed design of an HOA for O+E-band amplification based on a parallel combination of a PDFA and an SOA. Figure 2b illustrates the implementation of the proposed HOA in a typical eight-channel CWDM transmission link. The HOA consists of a wavelength-division multiplexer (WDM) used to combine the O- and E-band signals, a reflective-type WFBG filter used to separate the O- and E-band wavelengths for amplification, a laser-diode pump, and two optical isolators used to block any light reflected back into the system. The O- and E-band wavelengths are combined using a WDM coupler and then applied to the WFBG, whose center wavelength is adjusted to 1310 nm. The WFBG reflects all E-band signals and transmits all O-band signals, as shown in Figure 2. The O-band signal is coupled with a pump laser with a wavelength of 1030 nm and a power of 500 mW and injected into the PDF. Similarly, the E-band signal is applied to the input of the SOA for amplification. The injection current of the SOA is adjusted such that the gain fluctuation in the E-band is minimal; the injection current used in this work is 60 mA. The O- and E-band signals are combined after amplification using an optical coupler. A two-port WDM analyzer, an optical power meter (OPM), and an optical spectrum analyzer (OSA) are employed in the simulation for the observation and analysis of the results.
Figure 2b shows eight CW lasers centered at wavelengths of λ1 = 1270 nm, λ2 = 1290 nm, λ3 = 1310 nm, λ4 = 1370 nm, λ5 = 1390 nm, λ6 = 1410 nm, λ7 = 1430 nm, and λ8 = 1450 nm, each modulated by non-return-to-zero (NRZ) data at a rate of 10 Gbps. A pseudo-random bit-sequence (PRBS) generator inputs the logical data to the NRZ generator, which converts the data into an electrical signal. After modulation with a Mach-Zehnder modulator (MZM), the resultant optical signals are multiplexed (MUX) and transmitted over a standard SMF of 60 km. The combined 80 Gbps CWDM signal is amplified using the proposed O+E-band HOA. At the receiving end, the optical CWDM signal is demultiplexed (DEMUX). Each output of the DEMUX is photodetected using a PIN photodiode, low-pass filtered (LPF) to remove out-of-band noise and harmonics, and fed to bit-error rate (BER) estimators. Table 3 shows the values of the important parameters used in our study.
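As a minimal illustration of the routing logic in Figure 2a, the snippet below splits the eight CWDM channels at the 1360 nm O/E boundary, sending O-band wavelengths to the PDFA branch and E-band wavelengths to the SOA branch. The function name and the idealized sharp band edge are our simplifications of the WFBG behaviour, not part of the simulation model.

```python
# CWDM channel plan taken from the text (nm); 1360 nm is the O/E-band boundary.
channels_nm = [1270, 1290, 1310, 1370, 1390, 1410, 1430, 1450]
O_BAND_EDGE_NM = 1360

def wfbg_split(wavelengths):
    """Emulate the WFBG router: O-band wavelengths are transmitted to the
    PDFA branch, while E-band wavelengths are reflected to the SOA branch."""
    pdfa = [w for w in wavelengths if w <= O_BAND_EDGE_NM]
    soa = [w for w in wavelengths if w > O_BAND_EDGE_NM]
    return pdfa, soa

pdfa_branch, soa_branch = wfbg_split(channels_nm)
print("PDFA (O-band):", pdfa_branch)   # [1270, 1290, 1310]
print("SOA  (E-band):", soa_branch)    # [1370, 1390, 1410, 1430, 1450]
```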
Results and Discussion
The length of the PDF, the doping concentration of Pr 3+, the pump wavelength, and the injection current were optimized to achieve the optimum performance of the HOA. As mentioned earlier, we used the previously optimized values of the Pr 3+ doping concentration and PDF length [6,27], whereas the pump wavelength and injection current were optimized in this study.
Figure 3 shows plots of the amplifier gain and output power versus the pump wavelength for a PDF length of 15.7 m, a Pr 3+ concentration of 50 × 10^24 m−3, and an SOA injection current of 60 mA at an input signal power of −15 dBm. It is clear from Figure 3a,b that an average gain of around 24 dB and an output power of around 9.6 dBm were observed over the 1270-1450 nm wavelength range covering the O+E-band for pump wavelengths of 1010 nm, 1030 nm, and 1040 nm. The flatness in gain and power was best for a pump wavelength of 1030 nm; therefore, this wavelength is used for the rest of the results.
Figure 4 shows the wavelength-versus-gain plots as a function of the SOA injection current for a PDF length of 15.7 m, a Pr 3+ concentration of 50 × 10^24 m−3, and a pump wavelength of 1030 nm at an input signal power of −15 dBm. The fluctuation in the gain of the HOA was at a minimum for an injection current of 60 mA, particularly in the E-band, compared with 50 mA and 70 mA. Therefore, 60 mA was used as the optimized value of the injection current of the SOA to obtain an overall flat-gain profile for the HOA. Moreover, the gain of the HOA in the E-band increased with increasing injection current.
To assess the impact of up-conversion on the gain and output power of the HOA, we obtained the wavelength-versus-gain and output-power plots using a PDF length of 15.7 m, a Pr 3+ concentration of 50 × 10^24 m−3, a pump wavelength of 1030 nm, and an injection current of 60 mA. The results are shown in Figure 5 for a signal power of −15 dBm. Figure 5a shows a penalty of around 4 dB in the gain of the HOA at an O-band wavelength of 1290 nm; similarly, Figure 5b shows a penalty of around 1 dB in the output power at the same wavelength. These results confirm that up-conversion negatively affects the gain and output power of the PDFA owing to the decrease in the population inversion that originates when one Pr 3+ ion is promoted to a higher manifold while another is demoted to a lower-energy manifold [27,31]. It is worth mentioning that up-conversion occurs only in the PDF and therefore reduces the gain and output power only at O-band wavelengths, whereas the gain and output power remain constant in the E-band because up-conversion does not occur in SOAs.
We also plotted the wavelength versus the NF as a function of the signal power, as shown in Figure 6a, considering a PDF length of 15.7 m, a Pr 3+ concentration of 50 × 10^24 m−3, a pump wavelength of 1030 nm, a pump power of 500 mW, and an injection current of 60 mA. The results show that the NF was equal to 4 dB, 4.5 dB, and 5 dB at a signal wavelength of 1270 nm for signal powers of 0 dBm, −15 dBm, and −30 dBm, respectively, and increased to around 5.3 dB, 5.8 dB, and 6 dB at a signal wavelength of 1410 nm for the same signal powers. It is therefore clear that the NF increased when the power of the signal was low; the reason for this trend is that the optical signal-to-noise ratio (OSNR) is reduced when the signal power is weak, which in turn increases the NF of the HOA [6].
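The NF values just quoted can be related to a measured ASE level through the commonly used estimate NF = P_ASE/(h ν B0 G) + 1/G, where P_ASE is the ASE power measured in a resolution bandwidth B0. The sketch below implements this standard relation; it is not necessarily how OptiSystem computes the NF, and the resolution bandwidth and numerical inputs are illustrative assumptions.

```python
import numpy as np

H = 6.626e-34  # Planck constant (J*s)
C = 3e8        # speed of light (m/s)

def noise_figure_db(gain_db, p_ase_dbm, wavelength_nm, resolution_nm=0.1):
    """Estimate the amplifier noise figure from the ASE power measured in a
    resolution bandwidth B0: NF = P_ASE / (h*nu*B0*G) + 1/G (linear units)."""
    G = 10 ** (gain_db / 10)
    p_ase = 10 ** (p_ase_dbm / 10) * 1e-3                          # dBm -> W
    nu = C / (wavelength_nm * 1e-9)                                # Hz
    b0 = C / (wavelength_nm * 1e-9) ** 2 * (resolution_nm * 1e-9)  # Hz
    nf = p_ase / (H * nu * b0 * G) + 1.0 / G
    return 10 * np.log10(nf)

# Purely illustrative inputs (24 dB gain, -25 dBm ASE in 0.1 nm at 1310 nm):
print(f"NF ~ {noise_figure_db(24, -25, 1310):.1f} dB")
```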
We also plotted the signal wavelength versus the ASE as a function of the signal power in Figure 6b, considering a PDF length of 15.7 m, a Pr 3+ concentration of 50 × 10^24 m−3, a pump wavelength of 1030 nm, a pump power of 500 mW, and an injection current of 60 mA. The spectral plot in Figure 6b shows that the average ASE over the 1270-1450 nm wavelength range was around −39 dBm, −33.7 dBm, and −28 dBm for signal powers of −30 dBm, −15 dBm, and 0 dBm, respectively. Moreover, the ASE increased with increasing signal power; the reason for this trend is that a higher signal power relatively increases the photon population due to spontaneous emission, which consequently increases the ASE [6].
To observe the performance of the proposed O+E-band HOA in a transmission link, we employed it in a CWDM system and measured the BER of the signals transmitted over the link. The BER was computed by observing the statistical distribution of the eye diagrams of the received signals obtained after the low-pass filter (LPF) shown in Figure 2b. The optical power received for each channel at the PIN photodetectors was varied using optical attenuators to observe the impact on the BER values. The minimum detected optical power necessary to obtain a BER of 10−9 is called the receiver sensitivity [32]. Figure 7 shows the BER versus received optical power for channels 2, 3, 5, 6, and 7, each carrying a data rate of 10 Gbps; a limited number of channels was chosen to keep the results easy to visualize. The receiver sensitivities of channels 5, 6, and 7 were around −21 dBm, the minimum among all the channels, whereas the receiver sensitivity of channel 3 was around −18 dBm, the maximum obtained; that of channel 2 was equal to −19 dBm. This variation in the receiver sensitivities of the different channels is due to the variation in gain with signal wavelength shown in Figure 3a.
In telecommunication systems, an eye diagram is obtained by overlapping the bit periods of the signal on an oscilloscope to obtain a plot of the signal amplitude with respect to time [33]. Since the shape of the resulting plot resembles an eye, the name eye diagram is generally used [33]. Eye diagrams instantly provide visual information that allows a telecommunication system designer to check the received signal quality and predict the system BER [33]. Two types of noise can impact system performance: amplitude noise and timing jitter. The eye diagram is thus an important complement to BER measurements, as it readily reveals the extent of the amplitude noise and timing jitter in received signals. To further evaluate the performance of the channels, the eye diagrams of channels 2, 3, 5, and 7 were obtained at the output of the LPF for data rates of 5 Gbps, 7.5 Gbps, and 10 Gbps, as shown in Figure 8. As the data rate increased, the eye opening of channels 2, 3, 5, and 7 decreased due to intensity fluctuations, timing jitter, and intersymbol interference (ISI).
However, the eye diagrams of the channels had enough openings for the easy detection of ones and zeros [2].
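The link between eye statistics and BER mentioned above is commonly captured by the Gaussian Q-factor approximation, Q = (mu1 - mu0)/(sigma1 + sigma0) and BER = (1/2) erfc(Q / sqrt(2)). The sketch below is a generic illustration of that relation (the function names are ours), not the exact estimator used in the simulation.

```python
from math import erfc, sqrt

def q_from_eye(mu1, mu0, sigma1, sigma0):
    """Q factor from the eye-diagram rail statistics:
    Q = (mu1 - mu0) / (sigma1 + sigma0)."""
    return (mu1 - mu0) / (sigma1 + sigma0)

def ber_from_q(q):
    """Gaussian-noise BER estimate for OOK eye analysis:
    BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * erfc(q / sqrt(2.0))

# The receiver-sensitivity criterion in the text, BER = 1e-9,
# corresponds to Q ~ 6 under this approximation:
print(f"BER at Q = 6: {ber_from_q(6.0):.2e}")   # ~1e-9
```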
Conclusions
A wideband and flat-gain hybrid optical amplifier operating in the O+E-band has been proposed. The amplifier employs a parallel combination of a praseodymium-doped fiber amplifier and an SOA. The main problem of separating the O- and E-band signals, so that they can be input separately into the praseodymium-doped fiber amplifier and the SOA, has been solved using a wideband fiber Bragg grating. Moreover, various parameters, such as the PDF length, Pr 3+ concentration, pump wavelength, and injection current of the SOA, have been precisely optimized to achieve broadband amplification of the hybrid optical amplifier in the O+E-band with maximum flatness. The performance of the hybrid amplifier has been analyzed using the optimized parameters. A flat average gain of 24 dB and an output power of 9.6 dBm are achieved over the O+E-band, and a noise figure in the range of 4-5.9 dB over the 1270-1450 nm wavelength region has been observed. The effect of up-conversion on the gain and output power of the hybrid amplifier has also been investigated. Finally, the system-level performance of the hybrid amplifier has been analyzed as a pre-amplifier for a CWDM transmission system of eight OOK-modulated optical signals with an aggregate data rate of 80 Gbps.
Abbreviations
The following abbreviations are used in this manuscript:
"Physics"
] |
Radical Uncertainty, Dynamic Competition and a Model of the Business Cycle: The Implications of a Measure and an Explanation of What Is Supposed Non-Measurable and Non-Explainable
The influence of radical uncertainty and expectations on economic behaviour is indisputable, whether on entrepreneurship, innovation, investment, or the behaviour that contributes to the business cycle. It is rather surprising, therefore, to see widespread ambiguities in accounts of this crucial aspect of business life and, indeed, human existence. In particular, the frequent assumption ex hypothesi that radical uncertainty is non-measurable and non-explainable constitutes a major misunderstanding that obstructs the analysis of economic growth and development and, more generally, the study of economic dynamics. This essay first of all underlines the conceptual difference between uncertainty and expectations. It then establishes the possibility and delineates a method of measuring true or radical uncertainty by means of the monthly EU business tendency surveys. This method allows the derivation from these surveys of both more and better information than they at present provide, as well as some indicators that are relevant mainly in an evolutionary perspective. In order to obtain a deeper understanding of such procedures, some applications have been carried out. A model of dynamic competition and the business cycle centred on the relation between innovation and uncertainty is then specified and tested using a FIML estimator.
Introduction
A strange and elusive spectre haunts economists and businessmen: the spectre of uncertainty. Here we refer to so-called 'true' or 'radical' uncertainty, that is, uncertainty that cannot be represented by probability distributions but results from the limits of human knowledge and is hence an expression of human ignorance. But such a specification is not always made; indeed, general and widespread conceptual misconceptions and ambiguities concerning the definition and theoretical status of uncertainty make this phenomenon embarrassing to the theoretical economist. Radical uncertainty may be dampened by obtaining information, but it is likely to be stimulated by social change and innovation. Thus, the presence and influence of radical uncertainty tend to grow with the increasing innovation driving the dynamism of modern economies. Indeed, one of the main implications of the Schumpeterian teaching on innovation concerns the rise of endogenous uncertainty and its effects on the economy. But this implication was almost ignored by Schumpeter himself and continues to be disregarded by many of his followers. This paper attempts to remedy this situation.
Students of the firm and the schools of business administration and organization are paying growing attention to the phenomenon of uncertainty. But widespread conceptual ambiguities persist, in particular the identification of uncertainty with known probability distributions, which, as such, express probabilistic certainty (Arrow 1953 and 1984, Savage 1954, De Finetti 1964, Harsanyi 1967, Kahneman and Tversky 1979, Machina 1982, Pindyck 1991, Lupton 2003). On the other hand, many students who emphasize the distinction between risk and uncertainty (Knight 1921, Keynes 1937, Hayek 1937, Kirzner 1973 and 1985, Lawson 1985, Shackle 1990) have unanimously drawn, from the fact that uncertainty cannot by definition be represented through known probability distributions, the conclusion that it cannot be measured at all. It is true that heterodox economists (Nelson and Winter 1982, Davidson 1988 and 1994, Dow 1995, Simon 1997, Cantner, Hanusch and Pyka 1998, Hodgson 1999, Morroni 2006, Scazzieri et al. 2011) do emphasize the limits of knowledge, radical uncertainty and the associated notion of bounded rationality; but, for the most part, they persist in considering uncertainty a sort of vague atmosphere permeating reality, which is impossible to overlook, but also impossible to measure, hence obliging recourse to plausible reasoning.
The resulting absence of data on, and quantitative indicators of, radical uncertainty represents a serious and embarrassing lacuna. It entails, among other things, that students who place importance on quantitative analysis are obliged to use specifications with probability distributions as a means of quantitatively expressing uncertainty. This paper attempts to remedy this situation.
The plan of this essay is as follows. Section 2 points out the difference between expectation and uncertainty.
Section 3 explores the volatility of opinion, highlights the inability of the Business Tendency Surveys (BTS) data, as usually computed, to represent the intensity of the relationship between registered changes of opinion and actual results, and delineates some ways of calculating the degree of radical uncertainty from these surveys, together with some other indicators useful for the interpretation of survey data. Section 4 presents some applications concerning the relationship between uncertainty and the size of the firm. In addition, this section discusses the relationship between uncertainty and the 'business confidence indicator' and carries out some econometric estimates on this matter; moreover, it presents some other applications and corrections concerning BTS data, mainly based on the degree of permanence of the registered opinions. Section 5 extends the question of uncertainty to a wider theoretical perspective centred on the notion of dynamic competition; it presents a model with innovation and uncertainty and its extension to the business cycle, and brings to the topic an econometric test that uses a FIML estimator (Note 1).
Clarification of Notions: Uncertainty versus Expectations
Radical uncertainty refers to uncertain events that lack an objective or subjective probability distribution. It may seem at first sight that the notion of subjective probability, that is, the degree of personal confidence that an event may happen, and the connected notion of expectation express a measure of uncertainty. But this is mistaken. It is therefore important to underline that expectation does not represent implied uncertainty, but just an opinion.
While personal degrees of confidence and expectations are subjective entities expressing anticipation and hope, our research is concerned with ascertaining an objective measure of uncertainty, where uncertainty results from the limits of knowledge and is thus an expression of the degree of 'ignorance'; such a measure is given not by people's expectations but rather by the instability and/or delusory nature of their expectations (Note 2).
Expectation, then, is, in a certain sense, a pretension of knowledge, while uncertainty is an expression of cognitive impotence. Again, uncertainty expresses a disability caused by the limited reach of knowledge, while, by contrast, expectation is the expression of an attempt to penetrate the fog of cognitive vagueness, that is, a reaction against uncertainty. Because they are different phenomena, the effects of uncertainty on economic variables differ from those due to expectations. The distinction between expectations and uncertainty is illustrated by our identification of changes in, or the volatility of, firms' opinions as an indicator of uncertainty. In fact, this indicator merely expresses the fragility of expectations.
Another point deserves attention. It is possible to estimate the value of some proxies that provide a measure of expectations, but the accuracy of the estimation of such expectations is questionable. Economists claim to have formulated analytical expressions of static expectations, adaptive expectations, and rational expectations. These expressions offer some arbitrary and often overly simplified formalisations; substantially, they share the assumption of perfect knowledge. But each entrepreneur has his own expectations, the degree of accuracy of which will appear only ex post. It does not make sense to suppose some general rule for the formation of expectations, especially not in the case of entrepreneurship, which is, in its very nature, action in the face of radical uncertainty (Note 3).
But the key point is that, while both uncertainty and expectations are measurable, uncertainty is an altogether different thing from expectation. The importance of an objective measure of uncertainty is indisputable. For instance, 'decision theory' can be substantially improved if a measure of true (or radical) uncertainty is conjoined to a subjective probability distribution. Such a measure is also indispensable for the analysis of dynamic competition and the connected business cycle, as we shall see in Section 5. Nevertheless, it would be an exaggerated pretension to offer a general solution to the problem of measuring uncertainty. To grasp the spirit of this elusive variable, more than one quantitative indicator must be defined, as we shall see in the next section; and some indicator resulting from a weighted average of various indicators should be put forth.
Theoretical Tool
A main purpose of the business tendency surveys is to investigate how opinions, expectations and, in sum, the considered phenomena vary over time. Indeed, these surveys are repeated regularly precisely because understanding such variation is the goal; in the absence of change, a single survey would suffice to photograph the situation once and for ever. It is therefore of paramount importance to derive, from the various answers of the interviewed subjects, the largest amount and the best quality of information possible regarding changes in opinions, expectations and other relevant behaviours. But this exigency does not seem properly fulfilled by the current uses of the data provided by the European Union surveys. One (of several) consequences of this failure is that an important possibility of measuring true or radical uncertainty is obscured from view.
What we shall see, in fact, is that the volatility of opinions and the difference between expectations and results, as expressed by the Business Tendency Surveys and usually disregarded, can be interpreted as a measure of radical uncertainty and, once noted, may facilitate the investigation of the important effects of uncertainty on entrepreneurial and economic behaviour, specifically with regard to the business cycle. Such a measure would very likely prove to be one of the most profitable uses of the surveys, which are harmonized across all EU countries and thereby provide precious homogeneous data series.
A useful starting point of the analysis is a matrix assembling the survey results of two periods. The rows and columns of the matrix refer to the first and second periods respectively and express the modalities of answer (Up, Same, Down, indicated respectively by the subscripts 1, 2, 3). The matrix is as follows:

              Up (t1)   Same (t1)   Down (t1)   Total
    Up (t0)     R11        R12         R13        X1
    Same (t0)   R21        R22         R23        X2
    Down (t0)   R31        R32         R33        X3
    Total       Y1         Y2          Y3

X expresses the percent of each modality of answer (on total answers) in the first period and Y the same percent in the second period. Rij with i = j, that is, the terms on the main diagonal, indicate, for each modality, the percentage of answers that do not change from one period to the other. The remaining Rij (i.e. with i ≠ j) express the percentage of answers changing from modality i in the first period to modality j in the second period.
The current publications of the survey data show only the totals by row (X) and column (Y) and the balance (Up minus Down), while the intermediate terms of the matrix (the transitions from modality i to j) are absent. But the intermediate terms are indispensable for representing the changes in answers; in fact, the total of each modality hides changes over time through compensation.
Indicators of Radical Uncertainty
The matrix data allow the computation of some useful indicators, such as the volatility of opinions (or of results), that is, the sum of the terms of the matrix outside the main diagonal; this sum can be interpreted as an important indicator of radical or true uncertainty. The indicator can be formalised as follows:

    OV = Σ Rij, with i ≠ j

where the sum runs over the transition from period t0 to t1 and OV stands for opinions' (or results') volatility (Note 4).
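A minimal numerical sketch of the OV computation follows, using a hypothetical 3 x 3 transition matrix R; the marginal totals X and Y and the off-diagonal sum are computed exactly as defined above, while all numbers are invented for illustration.

```python
import numpy as np

# Hypothetical transition matrix R (percent of answers); rows = period t0,
# columns = period t1; modalities 1 = Up, 2 = Same, 3 = Down.
R = np.array([
    [18.0,  6.0, 2.0],   # Up   -> Up, Same, Down
    [ 7.0, 45.0, 5.0],   # Same -> Up, Same, Down
    [ 3.0,  6.0, 8.0],   # Down -> Up, Same, Down
])

X = R.sum(axis=1)  # percent of each modality in the first period
Y = R.sum(axis=0)  # percent of each modality in the second period

# Opinion volatility: sum of the off-diagonal terms of R.
OV = R.sum() - np.trace(R)
print("X (t0):", X, " Y (t1):", Y)
print(f"OV = {OV:.1f}% of answers changed modality")  # 29.0
```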
The reference to opinions and expectations stresses the need to measure their volatility. But the volatility of the answers concerning results, not considered in the applications in this paper, may also be important in other respects. To this measure of uncertainty based on the volatility of opinions it might be objected that opinions change because a new state of the world has arisen, so that the change of opinion is no longer a signal of uncertainty. This objection is based on a clear misunderstanding. In formulating expectations, one uses the information that one has on the state of the world; when information and/or opinions change, due to changes in the state of the world or for other reasons, new expectations will be formulated, but without the achievement of certainty, such a goal being but a chimera. It is quite natural to refer the uncertainty of opinions to the volatility of opinions, i.e. their variability, whatever their accuracy (and whatever the causes of their variability), that is, independently of the fact that, for example, the opinions and expectations of survey period 1 turn out to be more accurate than those of period 2. As a simple matter of fact, respondents can be very uncertain about expectations that turn out to be accurate.
We do not deny that a proxy of uncertainty based on the volatility of opinions has its limitations, as does any kind of empirical analysis. In fact, the phenomena considered by the surveys do not cover all the causes of uncertainty. To partly remedy this, a second indicator has been provided through a direct question. Specifically, starting from April 2004, and at my request, an additional question was included in the ISAE (Note 5) quarterly business surveys: "In the last months, what proportion of your expectations on some main variables (demand and delivery orders, profit, variable costs) was confirmed?" There exist some differences as well as analogies between the first (indirect) and the second (direct) indicators of uncertainty. While the first indicator expresses the volatility of expectations, the second expresses the effective violation of expectations. An evident linkage between the two indicators is that the non-confirmation of expectations, expressed by the second indicator, may cause changes in expectations and hence in the first indicator.
We can identify a third indicator of uncertainty in the standard deviation of profit rates across firms. In an economy of perfect knowledge and in the absence of institutional monopolies, such deviations would be null. It is the existence of limits to knowledge (true uncertainty) that allows differentials in capabilities, and the associated profits, to arise. This implies that the variance of profit rates across firms provides an expression of the limits of knowledge, that is, of uncertainty. As we shall see, this indicator is suited to the representation of dynamic competition processes and business cycles (Note 6).
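As a trivial illustration of this third indicator, the snippet below computes the cross-firm standard deviation of profit rates; the profit rates themselves are invented for the example.

```python
import numpy as np

# Hypothetical profit rates (percent) of a cross-section of firms.
profit_rates = np.array([4.2, -1.3, 7.8, 2.1, 0.5, 12.4, 3.3])

# Sample standard deviation across firms as a proxy for the limits
# of knowledge (true uncertainty).
print(f"Uncertainty proxy (std of profit rates): "
      f"{profit_rates.std(ddof=1):.2f} percentage points")
```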
Our transition matrix of survey answers allows the derivation of some other useful indicators. It is worthwhile dedicating some attention to what can be called a 'permanence indicator'. Clearly, answers resulting from very fragile opinions (that is, opinions much subject to change) are less meaningful than those resulting from less volatile opinions. This is not a question of mere reliability. The permanence of respondents' opinions (or their volatility) may be right or wrong; the point, however, is that if a respondent is, for instance, wrongly convinced of something, he operates accordingly; conviction (in doing something) represents, therefore, a relevant item of information for understanding his behaviour. This underlines the importance of an indicator of the degree of permanence of answers, which can be expressed as follows:

    PermUp = R11(t0:t1) / Up(t1)

This gives the proportion of the answers saying Up that do not change from period t0 to t1, relative to the percent of Up in period t1. Of course, the permanence indicator for Same and Down substitutes in this expression, respectively, R22 or R33 for R11, and Same or Down for Up. These indicators can be used to weight the current percent of Up, Same and Down, in order to obtain new values for each modality that take into account the degree of insistence on answers, such insistence expressing any particularly marked direction of firms' expectations and opinions.
A stronger way to compute the permanence indicator is the following:

    PermUp = [R11(t0:t1) + 2R''11(t1:t2)] / 3Up(t1)

where R''11 represents the portion of R11 that does not change in the period t1:t2 either or, in other words, the percentage of respondents who give the same answer in three consecutive surveys (we attribute a double weight to R''). The expression of the permanence indicator for Same and Down is identical, with the due changes in R and the denominator (Note 7).
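The sketch below implements both the simple permanence indicator and the three-survey variant. Since the formula in the source is partly garbled, the three-survey version encodes our reading of it, namely [R11(t0:t1) + 2R''11(t1:t2)] / 3Up(t1); all input values are illustrative.

```python
def perm_up(r11_t0t1, up_t1):
    """Simple permanence indicator: share of repeated Up answers
    in the total percent of Up at t1."""
    return r11_t0t1 / up_t1

def perm_up_strong(r11_t0t1, r11_rep_t1t2, up_t1):
    """Three-survey variant with a double weight on answers repeated in
    three consecutive surveys (our reading of the formula in the text):
    [R11(t0:t1) + 2 * R''11(t1:t2)] / (3 * Up(t1))."""
    return (r11_t0t1 + 2.0 * r11_rep_t1t2) / (3.0 * up_t1)

# Illustrative values (percent of answers):
print(perm_up(18.0, 28.0))               # 0.643
print(perm_up_strong(18.0, 12.0, 28.0))  # 0.5
```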
An average over two consecutive transition periods may also be considered, in which case a different weight may be attributed to the R of the two periods.
Uncertainty and the Size of the Firm
1). The results that follow refer to the volatility indicator OV and have been derived from the answers to the EU-ISAE monthly surveys of business tendency and conditions for a sample of firms representative of all industrial sectors and Italian geographical areas. The answers refer to expectations over the next three or four months, discounted by all seasonal factors, and concern: delivery orders, production, prices, cost of financing and liquidity assets. These variables are defined by three modalities: modality 1, expressing "increase" (in the rate of change of the variables); modality 2, indicating "no change"; modality 3, expressing "decrease".
The EU-ISAE business tendency surveys report the number of persons employed by each firm, so the indicator of uncertainty derived from them can be distinguished according to the size of the firm. This provides some important information. For instance, if firms' behaviour and organization are influenced by uncertainty, then we can ask whether this uncertainty varies with size. We have grouped firms by size into six classes. The first class (up to 15 employees) is intended to show the influence of uncertainty on dimensional growth beyond the threshold that marked the effectiveness of the Italian Working People Statute. We consider here un-weighted answers, since attributing the same weight to each opinion gives a better expression of the state of opinions than answers weighted according to the size of the firm. The average (of each column) for the whole period is shown, together with the standard deviation (from the average) over the period considered, which gives a clear idea of the variability involved.
The monthly data have been aggregated by year and computed starting from 1986. But the tables below start from 1998, when some modifications in the survey generated a discontinuity, and terminate in 2005 for the same reason.
2). It may be useful to add some data on the second (direct) indicator of uncertainty which, it will be recalled, expresses the effective violation of expectations. For reasons of space, we limit ourselves to a graphic comparison of the two indicators. The second graph is more uneven than the first, probably due to the absence of deseasonalization (made impossible by the smaller amount of data) and to the fact that the revision of expectations is slower than their violation. Uncertainty appears lower in figure 2, since this figure considers only the modality "low confirmation" of expectations, owing to the absence of weights attributed to the modalities "high confirmation" and "middle confirmation". However, in both figures uncertainty markedly decreases as business size increases. The inverse relation between uncertainty and firm size is thus confirmed; this is relevant for firms' transaction costs, financing and innovation, as these are greatly influenced by uncertainty.
3). Finally, it may be useful to provide three figures illustrating the indirect indicator of uncertainty derived from the expectations-realizations difference of the ISAE monthly surveys. The graphs show the percentage of expectations in period t that differ from the realizations one, two and three months later.
Unfortunately, in recent years the survey questions on results have been limited by ISAE to liquidity assets and production only, and this of course reduces the possibility of comparison between expectations and results. This indicator of uncertainty based on the difference between expectations and realizations substantially confirms the results derived from tables 2, 4 and 6, except that uncertainty on prices in the last class is higher than expected. It also appears that the difference between expectations and realizations grows with the time distance between the two, with some exceptions for liquidity assets.
Business Confidence Indicator Corrected for Uncertainty
Our research on a measure of the degree of uncertainty leads us to some reflections upon the business confidence indicator currently derived from the monthly business tendency surveys (BTS). This indicator is the result of an arithmetical average of the balances of answers (difference between Up and Down) concerning three phenomena: current overall delivery orders, the stock of finished products, and expectations on production. Such a computation does not consider uncertainty; in fact, expectations on production cannot be considered a proxy of uncertainty, which is rather expressed by the volatility of expectations, as seen in Section 3.
Of course, uncertainty influences the degree of confidence more than any of the three phenomena usually considered in the standard computation of the business confidence indicator. It may, therefore, be interesting to compare the current confidence indicator with our indicator of uncertainty. Both indicators have been expressed in quarterly values, with 2000 as the base year.
In figure 4 below, the dotted line stands for the usual confidence indicator while the full line stands for the uncertainty indicator. As can be seen, the behaviour of uncertainty differs markedly from that of the usual confidence indicator; it is in general higher and more uneven. This means that the possible introduction of uncertainty into the computation of the confidence indicator would lead to some remarkable changes with respect to the current computation of the indicator. This is shown in figure 5, which compares the usual confidence indicator to an indicator derived by adding radical uncertainty with a weight of 0.25, hence attributing to the usual indicator a weight of 0.75. The working hypothesis is that each of the four components has identical importance, although in our view a higher weight should be attributed to uncertainty. Of course, the influence of uncertainty on the confidence indicator is negative.
In order to see the degree of significance and the contribution of each component, an econometric analysis of the relations between the components of the confidence indicator and the variation of industrial production may be performed, including volatility in the regression. An estimation in this regard using non-deseasonalised values has shown wrong signs for both current overall orders and the current stock of finished products; only expectations on production and volatility seem to have explanatory power. The Carter-Nagar R2 is 0.73. This shows the need for a wider inquiry into the definition of the confidence indicator, i.e. one that also takes into account other survey questions.
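One plausible reading of the weighting scheme just described is sketched below: the usual confidence indicator enters with weight 0.75 and the uncertainty indicator with weight 0.25 and a negative sign, reflecting its negative influence on confidence. The function name and all series values are invented for illustration.

```python
import numpy as np

def corrected_confidence(confidence, uncertainty, w_unc=0.25):
    """Blend the usual confidence indicator with the uncertainty indicator,
    the latter entering negatively (weights 0.75 / 0.25 as in the working
    hypothesis of the text)."""
    confidence = np.asarray(confidence, dtype=float)
    uncertainty = np.asarray(uncertainty, dtype=float)
    return (1.0 - w_unc) * confidence - w_unc * uncertainty

# Illustrative quarterly series (index values, base year 2000 = 100):
conf = [101.0, 99.5, 102.3, 98.0]
unc = [96.0, 104.0, 99.0, 108.0]
print(corrected_confidence(conf, unc))
```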
Some Applications Concerning the Permanence Indicator and the Corrections of Up, Same and Down by
Giving a Double Weight to Rii (The Repeated Answers) (Note 9) The results that will follow concern three questions of the harmonised EU surveys, two of which express opinions and one expresses expectations.The questions are: a) Do you consider current overall order to be above normal, normal for the season, below normal?b) Do you consider your current stock of finished products to be above normal, normal for the season, below normal?c) How do you expect your production to develop over the next 3 months?It will increase, remain unchanged, decrease?The attention for those questions has been suggested by the importance that the European Commission attributes to them that in fact are used to provide the Industry Confidence Indicator for each State member of the European Community and the whole European Union.It seems evident that the dynamics of opinions is better expressed by un-weighted survey data, as these give an identical importance to each answer and opinion.
An analogous application was performed on data for South Africa provided by Murray Pellissier and concerning four questions of the BER surveys on expectations.The results confirmed those reported below.
The figures that follow flank, to the EU surveys results, those 'modified' or corrected according to the weight attributed to Rii, i.e. the repeated answers.Here we give to these answers a double weight with respect to Up-R11, i. e. the remaining ones.Therefore, the expression for the corrected (or modified) UP is: Of course, the correction of Same and Down must substitute, in the above expression, Same or Down to Up and R22 or R33 to R11 (Note 10).
For making comparable the current percent of answers to their modified percents, the sum of the percent of the modified Up, Same and Down has been reported to 100 (i.e. the sum of the current percent modalities of answers) simply by dividing 100 by the sum of the percent of all modified answers and multiplying by the percent of each modified answer (Note 11), i.e. according the proportion Modified Up: x = modified(Up+Same+Down)]: 100, as well as for Same and Down.
The figures report: a) The permanence indicator, the first expression for PermUp (Same and Down) in section 3, i.e. a ratio the variability over time of which expresses the discrepancy between the time path of the percentage of the repeated answers (not considered by the current computations on surveys) and the total percent of the corresponding answers (Up or Same or Down); it gives, therefore, an idea of the relevance of the correction we propose. b) The ratio between the modified percent of answers and the usual percent of answers.The difference (positive or negative) with respect to one of this ratio expresses the percentage of correction, i.e. the percent difference between the modified and current percentages.
c)
The ratio (R11-R33)/balance, that gives the variation over time of the difference of the percent of the repeated Up and Down (that we use for corrections) with respect to Up minus Down, i.e. the usual balances.This ratio gives an idea of the impact on balances of our correction.
Such correction is plainly expressed by the ratio between the modified balance and the usual From the figures we can see substantial differences between the UE surveys values and the modified ones (that is attributing a double weight to the repeated answer R with respect to the remaining one, non repeated); but a higher weight of the repeated answers used for the rectification would imply larger differences.The percent correction (dotted lines) is lower than the oscillation of the permanence indicators and the ratio (R11-R33)/balance (full lines) since the first also includes the remaining (non permanent) answers that do not contribute to the correction.
In particular, in the first and third Figures (for Up and Down), the correction percentage oscillates around 20 percent, but with a substantial dispersion as an effect of the high dispersion of R11/Up and R33/Down (respectively between 0,8 0,2, and 0,8 0,4).The second Figure shows a correction percentage higher than 1 due to the higher value of R22/Same than those of R11/Up and R33/Down.The dispersion is lower than in the first and third Figures since R22/Same is much less uneven than R11/Up and R33/Down.The asterisk * stands for multiplication Equation 1 may also include a term DE for the variation of entrepreneurial skill, displaying for innovation (the prey) a propulsive role similar to that of stocking in the predator-prey models used by studies on food chains (Note 16).
The parameter b1 is a constant exponential rate of growth of innovation, expressing the autonomous push to innovate due to entrepreneurial aggressiveness; its impact on innovation (DPA) is reduced by the degree of radical uncertainty (volatility of expectations or the standard deviation of profit rates) u that discourages (preys on) innovation (PA) according to parameter b2.The parameter b3 is an exponential rate of growth of radical uncertainty; the negative sign on b3 expresses the compressing effect on radical uncertainty (and/or on the standard deviation of profit rates) arising out of adaptive competition (as stimulated by u).For its part, b 4 stimulates u according to the cross product between predator and prey, where the prey is the dimension of innovation (PA) that feeds uncertainty (volatility and/or the standard deviation of profit rates), i.e. feeds the predator.Precisely, innovation is the field of pasture of radical uncertainty: in the absence of innovation, the term with b4 would become null because of the adaptive search for profit.When innovation intensifies, u (the predator) grows, thus causing a contraction in innovation (the prey), and hence the predator, with a cyclical alternation.The system parameters give the dimension of the disequilibrating (b1 and b4) and equilibrating (b2 and b3) push expressed by dynamic competition (this being represented by the combination between innovative and adaptive competition).
It may be useful to underline in this regard that the measures of dynamic competition based on the rapidity of contraction of the standard deviation of profit rates across firms (as, for instance, in D.C. Mueller and others or H. Odagiri) (Note 17) only consider adaptive competition or, more precisely, parameter b3 of the above system.
They ignore the other parameters and hence give a poor approximation to the intensity of competition and economic dynamism, as dynamic competition consists both in innovation and adaptation-(structural organization).
Econometric Estimation
The estimation below refers to four main European industrial countries: Italy, the United Kingdom, France and Germany.The data on patent applications and grants are used to express innovation and derive from the Ufficio Italiano Brevetti in the case of Italy, and from the United States Department of Commerce in the other cases.The data on radical uncertainty derives from the UE-ISAE Business Tendency Surveys.The data on the standard deviation of profit rates across firms for France and Germany come from D.C. Mueller (1990), and from H. Odagiri (1994) in the case of the United Kingdom; they refer to some samples of manufacturing firms and, respectively, to the periods 1961-82, 1965-82 and 1964-77.It may be objected that these periods are far from the present.But the estimations are only intended to provide an example of econometric application of our theory.At any rate, for Italy the data on patent applications and uncertainty run from April 2000 to December 2010; they have been aggregated by quarters and deseasonalised.
The data for France give pre-tax profit; those for the United Kingdom and Germany give after-tax profit.Their reliabilities are affected by their derivation from the balance sheets of some firms based on dissimilar and not wellestablished procedures.
The results shown below must be judged in the light of the deficiencies of the appropriate data series.Nevertheless, confirmation of the theory is encouraging.But the improvement of quantitative analysis in the crucial fields of innovation and dynamic competition needs a great deal of statistical research.
A FIML estimator was used to preserve the tight interaction between (1) and (2) above, i.e. innovation and uncertainty (adaptation), which is a crucial point of the research on dynamic competition presented in this Section.
The estimates are derived by an asymptotically exact Gaussian estimator of a differential equation system using discrete data.As there is no equivalent of a just-identified model for non-linear systems, there is no system-wide test such as the Carter-Nagar R 2 or likelihood-ratio.In order to give an idea of the efficiency of estimations, the means and standard deviations (not to be confused with the standard deviation of profit rates across firms) of the observed and estimated endogenous variables are also reported.
A system which differs from Volterra (pseudo Volterra form), in that the second equation uses only PA instead of the term PA*u in the right-hand side, has also been estimated.As a matter of fact, it may be assumed that the "reproduction" hypothesis typical of Volterra's study on population plainly operates only in the equation of innovation in that each innovation is strongly influenced by the state of knowledge resulting from previous innovations.In the equation of u, however, it may operate only backwards as large disequilibria and uncertainty stimulate adaptation.This means that in equation 2 the cross product term of Volterra, the encounter between predator and prey, may be replaced by the prey (innovation) only.
Data on patent applications have been divided by thousand, for uniformity of their scale with respect to u.The model with the term PA in equation ( 2), instead of PA*u, does not converge.
France
The data series of the standard deviation of profit rates for France has two out-lying observations in 1974 and 1977.
The first has no justification and is probably due to inaccuracy of the data; the second is largely determined by the 1977 revaluations of the assets of mergers that consistently depressed profit rates.We have substituted for those anomalous data an interpolation from the contiguous data (Note 18).For Italy, the values of parameters are much lower than is the case in the other countries.This is mainly due to the fact that in the recent period the rate of growth of patent applications has substantially decreased and the rate of growth of uncertainty has increased, while in the estimation periods concerning the other countries considered the rate of growth of patent applications was high and uncertainty (the standard deviation of profit rates across firms) decreasing.
For Germany, the model in Volterra's form provides a worse estimate of the equation for u (the standard deviation of profit rates across firms) than the model where the term PA is substituted for PA*u in (2); the contrary is the case for France and the United Kingdom.It would seem, therefore, that in Germany disequilibria do not generate disequilibria, while a self-reinforcing tendency of disequilibria appears in the United Kingdom and France, i.e. u contributes to stimulate its own growth through the term PA*u.
All parameters have the correct signs, have reasonable values and, in the estimation of the model in the Volterra
Figure 1 .FigFigure 2 .
Figure 1.1st indicator of uncertainty by classes of business size Figure 3. Expectations-realizations differences
Figure 4 .
Figure 4. Radical uncertainty and confidence indicator
Figure 5 .
Figure 5. Usual confidence Indicator and that corrected by radical uncertainty
Figure 6 .
Figure 6.Modified indicators and answers compared to current ones
Figures
Figures from the fifth to the eighth (for current stock of finished products) and from the ninth to the twelfth (for
Table 1 .
Survey answers of two periods
Table 2 .
Uncertainty on production (Relative change of answers based on previous month)
Table 3 .
Uncertainty on delivery orders and demand (Relative change of answers based on previous month)
Table 4 .
Uncertainty on prices (Relative change of answers based on previous month)
Table 5 .
Uncertainty on cost of financing (Relative change of answers based on previous month)
Table 6 .
Uncertainty on liquidity assets (Relative change of answers based on previous month)
Table 7 .
General level of uncertainty, derived by the aggregation of the above series (Relative change of answers based on previous month) As we can see, uncertainty (as expressed by the indicator considered) varies inversely with the size of firms and is around 0.2 and 0.4.The high level of uncertainty of the first two classes (1-15 and 16-99 employees) means that expansion over the threshold of 15 employees is discouraged, since it implies an increase in normative rigidities while uncertainty remains high.Uncertainty decreases with increase in firms' size in the first three classes shows some ambiguity in the two central classes, and decreases substantially in the largest class.In particular, increase in size of firms significantly reduces the variability of expectations on cost of financing, and this, together with the parallel reduction of uncertainty on liquidity assets, should encourage dimensional expansion.
The uncertainty on prices is less than on other variables; it is particularly low in the largest class of firms, probably due to oligopoly, and lower than expected in the first class, probably owing to market niches.
Table 8 .
Model in Volterra's form
Table 9 .
Model in Volterra's form
Table 10 .
Model in Volterra's form
Table 11 .
Model with the term PA in equation (2), instead of PA*u
Table 12 .
Model in Volterra's form
Table 13 .
Model with the term PA in equation (2), instead of PA*u | 8,129.8 | 2013-03-01T00:00:00.000 | [
"Economics"
] |
Evaluating the impact of a coordinated checkpointing in distributed data streams processing systems using discrete event simulation
Coordinated Checkpointing is a fault-tolerance strategy proposed for Data Streams Processing systems, which handles a continuous, potentially unbounded flow of data under Quality of Service requirements. Although traditional in large-scale distributed systems, there is a lack of study on how a Coordinated Checkpointing may impact the stream processing in both failure-free and failure-prone environments, especially considering the inherent requirement of analyzing and processing data in real-time. This paper presents a study that used a discrete simulation model to investigate the impacts of the Coordinated Checkpoint fault tolerance strategy on a Data Stream Processing System. The results show Coordinated Checkpointing should be avoided since it critically impacts the stream processing and the real-time analyzes of data, increasing latency up to 120%, and discarding up 95% of the processing window during a global checkpoint when a rollback-recovery is required
Introduction
Data Stream Processing (DaSP) systems are a computing paradigm for online analysis of data streams processed under Quality of Service (QoS) requirements (de Matteis and Mencagli, 2017). These streams are potentially unbounded data transmitted at high volume and high velocities. Some of them require real-time processing and analysis, such as disaster management, network attack and anomaly detection, financial market, trend analysis, social media, web analytics, Internet of Things (IoT), operational infrastructure monitoring, and online advertising (de Assunção et al., 2018, Gradvohl et al., 2014.
DaSP systems have to process data streams uninterruptedly to provide real-time analysis. The system must be fault-tolerant to achieve this level of dependability. One of the proposed fault-tolerance used for DaSP systems is the Checkpoint Rollback-Recovery.
It consists of periodically saving the application's state to restart from the last safe state in case of a system failure. The checkpoint can be coordinated (synchronous) or uncoordinated (asynchronous). In the coordinated checkpoint, all components take a checkpoint at the same time. In turn, in the uncoordinated checkpoint, each component decides when to perform its checkpoint (Casanova et al., 2015).
Although most DaSP systems run in distributed process architectures, where the checkpoint-rollbackrecovery strategy is intensely studied (Levy et al., 2014, Oldfield et al., 2007, Moody et al., 2010, Monnet et al., 2004, there is a lack of studies about the impact of this checkpoint strategy on DaSP systems. Nevertheless, practical evaluation of fault-tolerance mechanisms in large-scale applications such as DaSP systems is challenging. At the hardware level, the challenges include the requirement to study machines that are either larger than those currently available or have hypothetical architectures. Other challenges in this level include the study of more advanced machines, which are not accessible yet; and the lack of analytical models to predict performance and compare to other results accurately (Levy et al., 2014). Besides, at the application level, the system expects uncertainties, such as changes in arrival rate, arrival distribution, and others since data stream processing is potentially unbounded. Therefore, tests concerning failure issues and how the adopted fault-tolerance strategy interferes in stream processing are relevant as well.
Simulations are quite useful for performance analysis in parallel and distributed programs (Albertsson, 2006, Hoefler et al., 2010, Tikotekar et al., 2007 as well as in large and extreme-scale applications (Levy et al., 2014, Ferreira et al., 2011, Mubarak et al., 2012, Böhm and Engelmann, 2011. Besides, a simulation holds several benefits such as providing a risk-free environment, high accuracy compared to analytic models and the ability to handle uncertainty scenarios such as failure occurrences. Therefore, we propose a simple discrete event simulation model built on ARENA simulation software to verify the impact of the Coordinated Checkpoint Rollback-Recovery (CCRR) strategy on the DaSP systems. The primary goal of this work is to simulate different situations in both failure-free and failureprone scenarios. Also, we provide an applicationdriven simulation model capable of evaluating different QoS metrics such as latency, throughput, and mean waiting time; and integrity metrics such as the amount of information loss and unprocessed tuples.
The specific contributions of this paper are the following: • a simple and easy to use a discrete event simulation model of the Coordinated Checkpoint-Rollback-Recovery in Stream Processing Systems; • an evaluation of our model's performance showing an error of less than 1% and 11% against analytic models for both failure-free and failure-prone environments; • a simulation analysis showing that the Coordinated Checkpointing could be impracticable in failureprone DaSP systems due to high information loss, an increase in latency and a decrease in throughput reaching 95%; • two analytic models to predict information loss in failure-free and failure-prone environments using CCRR.
We organized the remaining of this paper as follows: Section 2 presents the fundamental concepts. Section 3 shows the related work; Section 4 introduces the proposed computational model; Section 5 compares the simulation results and the analytic models; Section 6 describes the experiments; Section 7 discusses the results; and, finally, Section 8 presents the conclusions.
Fundamental Concepts
There are different architectures for online data processing and analysis. However, most of them are multi-tiered systems with loosely coupled components combined to form a single processing framework. This organization improves maintainability, scalability, and availability (de Assunção et al., 2018).
The multi-tiered architecture of DaSP systems comprises different components. Fig. 1 shows an overview of these components. For instance, there are Data Sources responsible for data streams generation, such as RFID readers, wireless sensors, mobile devices and GPS, among others. Also, there are Data Collectors, for instance, network clients, JSON readers, protocol buffers, and others, to gather the data streams and transmits them to the stream processing engine. Messaging systems are generally present as well. Examples of such message systems are IoT hubs queuing systems and publish-subscribe messages that receive the stream and manages it (de Assunção et al., 2018).
There are also Stream Processing Engines that will effectively process the streams and Data Deliveries, such as Web Interfaces, Dashboards, RESTful APIs, which will receive the processed information. The architecture also requires Data Storage components, like relational databases, NoSQL databases, or inmemory storage. However, using all components are not mandatory, and an actual system may have only some of these features. The communication between components often uses TCP/IP protocols (Gradvohl et al., 2014).
The Stream Processing Engine uses several software components known as operators, running on processing nodes (hardware components). Each operator runs on a single node, although a single node can hold one or more operators. The engine connects the operators forming a directed acyclic graph (DAG), which we will refer to as a topology (Gradvohl, 2016). The operators are responsible for tuples processing and analyzing, and can execute a series of procedures such as data cleaning, classification and feature selection, among others.
We classify operators according to their ability to maintain their state, i. e., internal data structures, intermediary results, and tuples routing information, among others. We classify an operator as stateless if it does not gather or keep any information about the previously processed streams or the operator state. On the other hand, the output of the stateful operators depends on the processing of the previous streams and its previous state (Gradvohl, 2018).
Operators are components responsible for data stream processing.
Beyond the requirement for real-time processing, data streams have other characteristics that distinguish them from traditional static data processing. They are potentially infinite, which makes them impractical for storing in the system's main memory; the system must analyze each tuple a limited number of times and discard them later to reduce the computational costs, to avoid queuing and offer a real-time response (Ramírez-Gallego et al., 2017).
Also, we formally define an input stream as a sequence of data elements {s 1 , s 2 , . . .}, which each s i = (t i , D i ), t i is the time stamp, and D i = (d 1 , d 2 , . . .) is the payload for each element i. In this paper, we consider s i as a tuple. Second, we consider that the probabilistic distribution of the data may change over time. This phenomenon is well known and well studied in the data streams environment due to its non-stationary nature. We refer to this phenomenon as Concept Drift (Gama et al., 2014).
Coordinated Checkpointing
The system implements a coordinated or synchronous checkpoint by exchanging messages between the operators in a DaSP system. We can formally define a global checkpoint (or a snapshot) of a system composed by nodes (n 1 , n 2 , n 3 , . . . , nn) at an instant t as a storage of events at each n i at the instant t and also a storage of the communications logs (send and receive messages) between operators in the instant t (Goswami and Sahu, 2005). Therefore, for global consistency, checkpoints are (C k 1 , C k 2 , C k 3 , . . . , C k n ), where C k i is the k th local checkpoint at node n i . Fig. 2 illustrates a simplified flowchart of the CCRR in DaSP systems. When the checkpoint interval expires, the model triggers the CCRR strategy. First, the strategy blocks all operators for stream processing, which results in information loss since the system discards all received tuples discarded. Then, the initiator sends a checkpoint request to the operators. All available operators reply to a message informing the initiator that they are active. If the system detects no failure, the initiator waits for all operators to perform their respective and local checkpoints. After this procedure, the initiator sends a message to all active operators informing that the system performed a global checkpoint successfully. This whole process is known as Commit Time. At the checkpoint request, if an operator does not respond, the initiator usually waits an extra time. If the initiator still receives no response, this indicates that a failure occurred. This process triggers the rollback recovery phase when the system waits until the node resumes and then recovers the last checkpoint of the failed operator. When the rollback-recovery process finishes, all operators take the checkpoint, and the initiator commits them to a stable storage device.
Related Work
In this section, we present works that approach stream processing simulators. Hoefler et al. (2010) divide simulators into different categories, such as application, application-communication, and architectures simulators.
Application simulators focus on the performance of a given algorithm, a system, or an application. Users employ applicationcommunications to simulate critical components of an application, such as its relation to other components presented in the topology.
Finally, architecture simulators represent a detailed model of one or more components of a parallel architecture. The model presented in this paper is an application model.
Due to the high computational cost of a detailed simulation, simulations commonly focus on a limited group of components (Hoefler et al., 2010).
A simulation model has to be accurate enough and yet avoid unnecessary features (Levy et al., 2014). Therefore, the work presented in the literature focuses on specific aspects of the distributed discrete event simulation.
Concerning failure tolerance aspects, Ferreira et al. (2011) has studied the benefits of the process replication as a primary fault-tolerance mechanism for large-scale distributed systems. They used different simulators to run the experiments. On the other hand, Levy et al. (2014) proposed a framework on the LogOPS simulator to evaluate the performance of the CCRR in large-scale systems.
On performance evaluation, Zheng et al. (2005) used BigSim as a simulation tool to develop a performancemodeling environment to predict performance issues on large parallel machines. In turn, Shchur and Shchur (2015) studied the benefits of using parallel discrete event simulation as a paradigm for largescale modeling systems, including the requirement of analyzing important metrics such as scalability, CPU time, and storage issues.
For online processing, there are few works addressing simulations.
For instance, CEPSim (Higashino et al., 2016) is a simulator for cloud-based systems that can model different DaSP systems by transforming them into user queries based on DAG representation. CEPSim allows some customizations of operators' execution, placement, and schedule while providing important metrics such as latency and throughput. However, CEPSim does not support faulttolerance simulations.
In turn, Flow is a simulator primarily focused on the large-scale simulation of stream processing systems (Park et al., 2010). It is capable of working with millions of kernels and data flows, and the automatic parallelization of different models. As CEPSim, Flow also does not support fault-tolerance simulations. Table 1 presents a comparison table with CEPSim, Flow and our model. (Zheng et al., 2004), LAM-MPS (Plimpton, 1995), xSim (Böhm and Engelmann, 2011) and LogGOPSim (Hoefler et al., 2010), among others. In turn, ARENA has embedded components such as resource allocation, queue management, and failure modules, which simplify the modeling of both DaSP system topology and the CCRR strategy. Researchers have already been using it for simulation of distributed systems (Christine and Emilie, 2005) and fault-tolerance strategies (Mehresh et al., 2010).
Therefore, although there are many works in discrete-event simulations of distributed systems, none of them addresses the specific and dynamic environment of the DaSP systems, except for CEPSim and Flow. Also, since both CEPSim and Flow do not support any fault-tolerance simulations, we find a lack of studies about the impact a faulttolerance mechanism has on DaSP systems. This paper contributes to the proposition of a novel simulation model capable of performance and simulation analysis of the CCRR strategy in DaSP systems in both failurefree and failure-prone environments.
Computational Model
This section presents the simulation model and the input, control, and output parameters regarding our proposed approach. Fig. 3 shows the simulation model. Since ARENA is a drag-and-drop simulation software, each component is a block that performs a specific computation. We modeled the data sources as Create blocks. The time in seconds (s) between arrivals follows a Normal Distribution of mean µ = 3.2 × 10 -3 s and standard deviation σ = 5 × 10 -5 s.
System Model
Beyond the probabilistic distribution, the simulation user can set the number of entities per arrival, which defines the arrival rate. The default rate is 1, which is equivalent to 1250 tuples/s. The stream-processing level sends the tuples, and the data collectors discard them from the simulation. It is important to observe that, since data streams are potentially infinite, the user must use a processing window to verify the system during a predetermined period. This period may be time-based when the system achieves a predefined amount of time, or tuple-based, calculated based on how many tuples the data collectors received. This work uses a tuplebased processing window by setting a termination condition on the execution setup with the number of tuples that the collectors successfully discarded. At the stream-processing level, each operator receives tuples from the previous operator (except Operator 1, which receives tuples directly from data sources) through an Input Queue I and then sends them to the next operator throughout an Output Queue O. An operator communicates only with the next operator, except when it detects a failure. If it is the case, the model sends tuples to the next active operator. The Decision blocks before every node represent this condition, and the record blocks immediately classify all the tuples directly sent to the next operator as unprocessed.
Streams usually flow throughout the model. However, the stream-processing level receives the tuples, and a Decision block verifies if the simulation time is higher than the checkpoint expected time. In the affirmative case, it means the checkpoint time has expired, and the system has to take a global checkpoint. The model immediately stops the processing, and sends every received tuple to the loss area, increasing the counter for this metric. Then, the model verifies if a node has failed. If this is the case, it triggers the recovery phase. A Process block models the recovery phase, which takes a constant user-applied variable of time (R) to recover. After the recovery, the initiator commits the checkpoint to the stable storage. The commit process is also a Process block, which takes a user-applied constant of time (δ) to execute. If it detects no failures, the system only performs the commit. Then, the model increments the checkpoint time, resume the processing and stops losing tuples. We modeled the failures as time-based on a Poisson distribution of mean M, and we can attach it to any node. Fig. 4 shows the architecture level. There are four processing nodes, each one with three CPUs. There is only one operator in each processing node to simplify the model. We modeled the processing nodes as resources and the CPUs as Process blocks. They operate in a size, delay, and release procedure and process each tuple in a Normal Distribution of mean µ = 8 × 10 -5 s and standard deviation of σ = 10 -5 s. Therefore, the proposed simulation model consists of four processing nodes. To increase the model scalability, a user can model more processing nodes by simply adding more Process blocks.
We can use several heuristics approaches to estimate the CPU time (Zheng et al., 2005). It can be a usersupplied expression, a suitable multiplier such as a scaling factor, a hardware performance counter to count floating-point, integer, and branch instructions on the simulation machine or a hardware simulator, which cycles a target machine processor. The proposed model uses a user-supplied expression since it is the less complex and highly flexible approach. Latencies are the main QoS parameters that a DaSP must attend. System Latency is the time a system takes to process and analyze a certain amount of data (Gradvohl, 2018). Therefore, the model requires low latency. There are other types of latency, such as Maximum peak latency, Post-peak latency, and Operator latency, which we do not address in this paper, but the model can measure them. In our model, the latency is equivalent to the simulation time since the model will stop when the system computes the number of tuples defined in the processing window.
Input, control and output parameters
Another QoS metric frequently observed in DaSP systems is the throughput, the rate of successfully processed tuples given a predetermined period (Gradvohl, 2016). In this paper, we use the seconds (s) as the adopted period. The simulation model requires high throughput.
Finally, we also account for the mean waiting time in the queue. Since data streams require real-time processing, failures, or the adopted fault-tolerance strategy must not substantially increase queuing time as it would increase both computational cost and latency.
Concerning integrity metrics, unprocessed tuples are the ones who did not pass through one or more operators. In DaSP systems, when a node fails, the system forwards the tuples that the failed operator would receive to the next active operator. This procedure is fundamental to maintain system availability. Considering that each operator may implement critical procedures (e. g., data cleaning, normalization, or classification), a high number of unprocessed tuples could lead to inaccurate decisions.
Information loss is also a crucial integrity metric. This metric computes the number of tuples discarded during the checkpoints. Critical information could have missed during this activity since coordinated checkpoint blocks all operators for stream processing. Besides the commit time, if a failure has occurred, the simulation will also account for the recovery time, which provokes an even severe situation.
All values set to process and create blocks were empirically defined to simulate the same arrival rate and processing time presented by Apache Storm, a real-world DaSP system, on the work presented by Chintapalli et al. (2016).
Limitations and Assumptions
Simulations are known as computationally expensive (Levy et al., 2014). In order to construct an efficient and accurate simulation model, we only modeled features that are relevant to the performance of the DaSP system and the adopted fault-tolerance strategy. Therefore, we made the following assumptions: • The operators receive, process and send tuples based on a First-In-First-Out (FIFO) nature; • Nodes work under the fail-stop model; • We assume reliable message delivery; therefore, no message is lost; • CPUs are identical.
Since the proposed simulation model is applicationoriented, and we assume reliable message delivery, the simulation ignores failures in the network and the communication between operators. Besides, we do not directly address memory consumption.
Analytic Models
This section introduces the analytic models presented in the literature to predict execution time and the proposed analytic models to predict information loss, in both failure-free and failure-prone environments. In this section, we also compare the predicted with the obtained results in the simulation model. Levy et al. (2014) proposes Eq. (1) for execution time prediction in failure-free environments.
Simulation time
where Tw is the predicted execution time; Ts is the execution time without any fault-tolerance mechanisms; τ is the optimal checkpoint interval time; and δ is the commit time to the stable storage. For the cases where the CCRR shares the stable storage device, the authors propose the commit time expressed in Eq. (2).
where N is the number of processing nodes; ||Cavg|| is the average checkpoint size for each node, and β is the aggregate write bandwidth for the stable storage. However, in failure-prone environments, we cannot use Eq. (1) since it does not consider failure occurrence. Therefore, we use Eq. (3) proposed by Daly (2006), which accounts for both failure occurrence and the required recovery time by using the mean time between failures (MTBF) and a constant R, described as follows: In Eq. (3), M is the mean time between failures (MTBF); k is the number of performed checkpoints; and R is the node recovery time. According to Daly (2006), we assume Tapp = k × τ , k ∈ N.
Information loss
We can use Eq. (5) to predict information loss in a failure-free state.
where Ω is the number of tuples lost; is the arrival rate; k the number of checkpoints; and δ the commit time. However, Eq. (5) is not adequate for a failureprone environment since it ignores failure occurrence. Therefore, we propose Eq. (6) to predict information loss in situations where failures occur.
where Ω is the number of tuples lost; is the arrival rate; k the number of checkpoints; δ the commit time; R is the recovery time; M is the MTBF; and T fail is the execution time for a failure-prone environment.
On the other hand, unprocessed tuples are challenging to predict. We can calculate this metric using the exact time between a failure and the remaining time to the next checkpoint times the arrival rate. However, since researchers modeled them based on the MTBF and a predefined probabilistic distribution, values can change substantially even inside a simulation model. Therefore, using a simulation model is a practical approach to measure this metric.
Figs. 6 and 7 show the comparison between analytic models and the obtained results in the simulation. We considered the same values for each comparison, which were M = 600s, R = 60s, δ = 74s and = 5000. The error for the latency prediction in failure-free environments was 1.5% and for failure-prone 10.5% in the worst case. The error for information loss prediction was less than 1% for failure-free environments and less than 11.6% for failure-prone environments. These results show that our model is accurate to simulate both latency and information loss.
Experimental Evaluation
In this section, we present four case studies applied to evaluate our proposed simulation model. We evaluated all cases on different arrival rates, and we replicated each experiment 10 times for each arrival rate. Therefore, we consider the mean for each value and a 10 million tuples processing window for all case studies.
Case 1 is the baseline test, a failure-free environment without the CCRR strategy. It is essential to verify the model performance running on a clean scenario to compare it to the remaining case studies. Since it is a failure-free environment, the only control parameter we used was the arrival rate. Case 2 is a failure-free environment with the implementation of the CCRR strategy. Therefore, no failure occurs. In this case, we want to verify how CCRR affects the system performance even if the system detects no failures. Concerning control parameters, we considered δ = 1s, M = 600s and the calculated checkpoint interval was τ = 74s.
Case 3 is a failure-prone environment where a single node (Node 1) experiences a failure. The experiment relies on the investigation of the model performance in case of failures. We considered δ = 1s, M = 600s, R = 60s and the calculated checkpoint interval was τ = 74s.
Case 4 is an emergency mode where two nodes (Node 1 and Node 3) fails at the same time. We increased the recovery time in 100%, and we reduced the MTBF in half to force more failures occurrences. Therefore, we considered δ = 1s, M = 300s, R = 120s and the calculated checkpoint interval was τ = 45s.
Results and Discussion
Fig. 8a depicts the results for the latency metric. Results show that the increase in latency with the adoption of the CCRR strategy in failure-free environments is relatively low, with a maximum of 2%. A failure in a single node (Case 3) resulted in an increase in latency up to 10% compared to a failurefree environment (Case 2). For Case 4, the increased latency was up to 120% in the worst case. Fig. 8b shows the results for the throughput metric. Given a certain arrival rate, it measures how much time the system takes to process and analyze all the received data from the first to the final processing node. Using the CCRR strategy also does not severely affect this metric in failure-free environments, with a decrease up to 2%. Case 3 showed a decrease up to 12%, and Case 4 showed a decrease up to 109%.
Therefore, evidence shows that the adoption of the CCRR strategy does not profoundly affect latency and throughput in failure-free environments. However, in emergencies, the CCRR strategy critically affects these metrics, reaching an increase of up to 120% in some cases. An increase of this magnitude could damage the real-time processing characteristics of a DaSP system.
Tables 2 to 5 show the mean waiting time (in seconds) that a tuple in the queue waited for processing in each operator in the four studied cases. In Case 1 and Case 2, there was almost no difference between values, except for operators 3 and 4 in arrival rates 5000 and 6250. Therefore, evidence shows there is no substantial increase in mean waiting time in failurefree environments with the adoption of the CCRR strategy. Concerning Case 3, there was an increase in time up to 12% for Operator 1, 121% for Operator 2, and 8% for Operator 3. For Operator 4, the results were almost the same, except for a 29% increase in time for 5000 tuples/s, when compared with the failure-free environments.
In Case 4, the increase was up to 27%, 125%, 14%, and 8% for operators 1, 2, 3, and 4, respectively. Therefore, it shows a relation between a failure (which we implemented in nodes 1 and 3 where operators 1 and 3 were running) and an increase in the mean waiting time on the next operators, especially on the closest one. However, although expressive, none of these increases were high enough to affect the stream processing critically.
Concerning integrity metrics, Fig. 9a shows the number of unprocessed tuples. Since there are no failures in Case 1 and Case 2, this metric for both cases is zero. For Case 3, the average number of unprocessed tuples was around 530 thousand tuples, equivalent to 5.3% of the processing window. For Case 4, the average was around 1.2 million tuples, equal to 10.2% of the processing window.
Using Eq. (4) to define an optimal checkpoint interval time is one approach to reduce this number. Less time between checkpoints implies a smaller period that an operator remains inactive and, consequently, it will process more tuples. However, frequent checkpoint increases overhead during failure-free executions (Casanova et al., 2015). Besides, an increase in the number of checkpoints directly affects tuple loss.
Another approach to alleviating this impact is to use the uncoordinated version of the Checkpoint Rollback-Recovery strategy. In this asynchronous approach, each node decides when to take its checkpoint independently (Goswami and Sahu, 2005), which avoids the requirement for blocking nodes. This procedure implies a reduced computational power during the node checkpoint, but there would be no information loss. However, the asynchronous checkpoint is risky due to the domino effect, when the recovery of a node depends on another node recovery (Gradvohl et al., 2014). Guermouche et al. (2011) present a solution for an uncoordinated checkpoint without a domino effect in applications that uses the Message Passing Interface (MPI) as its standard message exchange system between operators.
In turn, Fig. 9b presents the number of tuple losses. Due to the absence of fault-tolerance mechanisms in Case 1, there is no loss in this case. For Case 2, the average loss was 133 thousand tuples, equivalent to 1.33% of the processing window. The average loss for Case 3 was 1.1 million tuples, 10.1% of the processing window. Finally, for Case 4, the average loss was around 9.5 million tuples, equal to 95% of the processing window.
The following situation is one of the critical aspects of using CCRR in DaSP systems. In an emergency, the system could lose almost the same amount of tuples it processed. The system would lose one of two tuples. Due to the concept drift phenomenon, data streams are subject to changes in their probabilistic distribution that could occur in different types, such as gradual, sudden, incremental, or recurrent. For instance, sudden drifts appear abruptly and can completely change the data (Ramírez-Gallego et al., 2017). Discard all these tuples could result in losing a new or crucial change in the data probabilistic distribution that could lead to radically inaccurate decisions.
The combination of CCRR and Replication of Components (Gradvohl et al., 2014) could be a more reliable, long-term, and suitable approach to reduce these impacts. In this case, several operators running on different nodes would perform the same stream processing synchronously, in such a way that a failure in one node would not imply in unprocessed tuples. Then, on the checkpoint time, the system could recover the failed node as usual. This approach implies the increase in the computing power investment due to the necessity of at least duplicating operators, and the wasting of resources in failure-free executions, which could reduce the CCRR poor scalability (Casanova et al., 2015). Besides, it would also decrease stream loss since the system would not have to wait for a node recovery to resume stream processing.
As a final observation, researchers can use our model for the simulation of the CCRR strategy in DaSP systems. Results from the comparison with analytic models in Section 5 and the experiments in Section 6 demonstrated that the model is accurate to determine the performance and the impact the CCRR strategy has on DaSP systems. Also, since we built it in a userfriendly software such as ARENA, it enables the user's full control of the simulation, by changing different control parameters such as MTBF, arrival rate, recovery time, optimal checkpoint interval and whose operator will fail.
Conclusions
This paper presented a simulation model for evaluating the Coordinated Checkpoint-Rollback Recovery faulttolerance strategy for Distributed Data Streams Processing Systems in both failure-free and failureprone environments. With an error lower than 1.5% and 10.5% in these environments, respectively, we demonstrated that the simulation model is accurate to evaluate the proposed scenario. We also proposed two analytic models to predict information loss in failurefree and failure-prone environments, with an error lower than 1% and 11%, respectively. Furthermore, we discussed how the CCRR negatively affects stream processing. We demonstrated through four case studies that using this strategy does not imply a severe impact in system performance in failure-free environments since the increase in mean waiting time, latency, and decrease in throughput was around 2%.
However, in emergencies, this strategy critically affects latency and throughput, and a high loss of information due to the system freezing during a global checkpoint. Therefore, we do not recommend using a pure coordinated checkpointing in the DaSP system. The use of process replication, in conjunction with this strategy or its asynchronous approach, with the attention to the domino effect, would be a more reliable approach to reduce both unprocessed and lost tuples. Therefore, our work provides a reliable study on how much a coordinated checkpoint could affect the stream processing on a DaSP system, without the necessity to implement this strategy on a real architecture. Also, it provides an easy-to-use simulation model flexible enough to study different aspects of a DaSP environment, including fault-tolerance strategies. | 7,645.4 | 2020-05-19T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Bochner–Riesz operators in grand lebesgue spaces
We provide the conditions for the boundedness of the Bochner–Riesz operator acting between two different Grand Lebesgue Spaces. Moreover we obtain a lower estimate for the constant appearing in the Lebesgue–Riesz norm estimation of the Bochner–Riesz operator and we investigate the convergence of the Bochner–Riesz approximation in Lebesgue–Riesz spaces.
The classical Lebesgue-Riesz norm f p , p ∈ [1, ∞], for the function f , is denoted by and the corresponding Banach space is, as usually, Denote, for an arbitrary linear or quasi linear operator U , acting from L p to L q , p, q ≥ 1, its norm Of course, the operator U is bounded as the operator acting from the space L p into the space L q iff U p,q < ∞. More generally, for the operator U acting from some Banach space F equipped with the norm ⋅ F into another (in the general case) Banach space D, having the norm ⋅ D , denote as usually There exists a huge numbers of works devoted to the p, q estimates for Bochner-Riesz operators B α R , as a rule for the case q = p, see e.g. [21, chapter 5], [7,8,11,26,31,38,39], etc.
Our aim, in this paper, is to extend some results contained in the above mentioned works concerning upper estimates for the norm of the Bochner-Riesz operator, in the case of different Lebesgue-Riesz spaces (Section 2) and to extend these estimates to the so-called Grand Lebesgue Spaces (GLS), see Section 3.
We deduce also a non trivial quantitative lower estimate for the coefficient in the Lebesgue-Riesz norm estimation for the Bochner-Riesz operator (Section 4) and we study the convergence of the Bochner-Riesz approximation in Lebesgue-Riesz spaces (Section 5).
2 Norm estimates for the Bochner-Riesz operator acting between different Lebesgue-Riesz spaces We clarify slightly the known results about the Lebesgue-Riesz p, r norms for the operators B α R defined in (1.1), see e.g. [8,21,10,26,39], ecc. It is important to observe that the Bochner-Riesz operator B α R may be rewritten as a convolution, namely is the Bessel function of order λ and z = (z, z). Notice that, under our restriction on α, Evidently, as long as R > 0.
Assume here and in the sequel and introduce the value so that where the inequality on the left hand side is true due to the restriction α > −1.
Moreover, using the well-known results about the behavior of the Bessel functions (see, e.g., [8, p. 172 it is easy to estimate and K λ q = ∞ otherwise (see also, e.g., [21, pp. 339-341]). Correspondingly, Furthermore, we recall the Beckner-Brascamp-Lieb-Young inequality for the convolution (see [3,4]) where p, q, r ≥ 1 and Recently in [20] has been given a generalization of the convolution inequality in the context of the Grand Lebesgue Spaces (see Section 3 for the definition), built on a unimodular locally compact topological group.
The estimate (2.10) is essentially non-improbable. Indeed, the equality in (2.10) is attained iff both functions f, g are proportional to Gaussian densities, namely there exists positive constants c 1 , c 2 , c 3 , c 4 such that where ⋅ is the euclidean norm. Then the convolution f * g is also Gaussian and The following result about the boundedness of the Bochner-Riesz operator, acting between different Lebesgue-Riesz spaces, holds. and (2.14) Let q, r ≥ 1 such that 1 + 1 r = 1 q + 1 p and assume q > q 0 , r > r 0 , p ≤ p 0 , where Then the Bochner-Riesz operator satisfies Proof.
If f ∈ L p for some value p > 1, then by (2.10) it follows where 1 + 1 r = 1 q + 1 p and p, q, r ≥ 1, q > q 0 . On the other words, Moreover it is easy to verify that r 0 , defined in (2.15), is such that r 0 > p. Under our restrictions W (α, n, R; p, r) is finite and positive. So we conclude that estimate (2.18) holds. ✷
Main result. Boundedness of Bochner-Riesz operators in Grand Lebesgue Spaces (GLS)
We recall here, for reader convenience, some known definitions and facts from the theory of Grand Lebesgue Spaces (GLS). For instance are generating functions. If where C ∞ ∶= 0, C ∈ R (extremal case), then the corresponding Gψ space coincides with the classical Lebesgue-Riesz space L r = L r (R d ).
These spaces are rearrangement invariant (r.i.) Banach functional (complete) spaces; their fundamental functions have been considered in [35]. They do not coincide, in the general case, with the classical Banach rearrangement functional spaces: Orlicz, Lorentz, Marcinkiewicz etc., see [30,33]. The belonging of a function f ∶ R n → R to some Gψ space is closely related with its tail function behavior as t → 0+ as well as when t → ∞, see [27,29].
The Grand Lebesgue Spaces can be considered not only on the Euclidean space R n equipped with the Lebesgue measure, but also on an arbitrary measurable space with sigma-finite non-trivial measure.
The proof is simple and alike as the one in [36]. First we observe that ν(r) is finite. One can assume, without loss of generality, f Gψ = 1, then f p ≤ ψ(p), p ∈ (a, b). Applying the inequality (2.18) we have Taking the minimum over p subject to our limitations, we get [ W (α, R; p, r) ⋅ ψ(p) ] = C(α, n, R) ν(r) = C(α, n, R) ν(r) f Gψ , (3.8) which is quite equivalent to our claim in (3.6). ✷ 4 Lower bound for the coefficient in the Lebesgue-Riesz norm estimate for the Bochner-Riesz operator.
Let p, r > 1, n > 1 and let us introduce the following variable where W (α, n, R; p, r) is defined in Section 2. Our target in this Section is a lower bound for the above variable. Then Q n (p, r) ≥ Θ n, pr pr + p − r , r > p. Remark 4.1. The possible case when Q n (p, r) = +∞ can not be excluded.
Proof.
We will apply equality (2.12), in which we choose the ordinary Gaussian density and take α = R 2 2. Obviously Q n (p, r) ≥ lim R→∞ W (R 2 2, n, R; p, r).
We have where I(A) denotes the indicator function of the (measurable) set A, A ⊂ R n . Therefore, as R → ∞ , by virtue of dominated convergence theorem.
If we take f = f 0 , then in (4.2) the convolution of two Gaussian densities appears. It remains to apply the relation (2.12); we omit some simple calculations. ✷
Convergence of Bochner-Riesz operators.
We investigate here the convergence, as R → ∞, of the family of Bochner-Riesz approximations B α R [f ] to the source function f in the Lebesgue-Riesz norm L p (R n ), p ∈ (1, ∞), in addition to the similar results in [21,11,26], etc.
For any function f ∈ L p (R n ), its modulus of L p continuity is defined alike as in approximation theory [1, chapter V] : We apply now the triangle inequality for the L p norm in the integral form Note that, under the above conditions, so that (5.2) follows again from the dominated convergence theorem. ✷ Remark 5.1. As a slight consequence, under the above conditions, if f ∈ L p (R n ), then and, consequently, The case p = ∞ requires a separate consideration. Introduce the Banach space C 0 (R n ) as a collection of all bounded and uniformly continuous functions f ∶ R n → R , equipped with the ordinary norm, As above 3) The assertion of Theorem 5.1 under the same conditions remains true in the case p = ∞.
Theorem 5.2. Under the same conditions of Theorem 5.1, for any function f ∶ R n → R from the space C 0 (R n ) , its Bochner-Riesz approximation B α R converges uniformly to the source function f , that is Proof. The proof is the same as in Theorem 5.1 and may be omitted. ✷ Remark 5.2. As before, if f ∈ C 0 (R n ), then 6 Concluding remarks.
A. In our opinion, the method described in this paper may be essentially generalized on more operators of convolutions type, linear or not. See some preliminary results [30].
B. It is interesting to generalize the estimates obtained in the previous Sections to the so-called maximal operators associated with the Bochner-Riesz one considered here, in the spirit of the works [16,25], and so one: | 2,049.6 | 2020-06-02T00:00:00.000 | [
"Mathematics"
] |
In defense of decentralized research data management
Decentralized research data management (dRDM) systems handle digital research objects across participating nodes without critically relying on central services. We present four perspectives in defense of dRDM, illustrating that, in contrast to centralized or federated research data management solutions, a dRDM system based on heterogeneous but interoperable components can offer a sustainable, resilient, inclusive, and adaptive infrastructure for scientific stakeholders: An individual scientist or laboratory, a research institute, a domain data archive or cloud computing platform, and a collaborative multisite consortium. All perspectives share the use of a common, self-contained, portable data structure as an abstraction from current technology and service choices. In conjunction, the four perspectives review how varying requirements of independent scientific stakeholders can be addressed by a scalable, uniform dRDM solution and present a working system as an exemplary implementation.
Introduction
Research data management (RDM) is an increasingly important topic for individual scientists, institutions, infrastructure providers, and large-scale research collaborations. This shift in attention is driven by ethical considerations, threats to the trustworthiness of research outputs, and the desire to maximize the impact of publicly funded research. Generic, large-scale storage and computing infrastructure has existed internationally for a considerable time. Yet, the apparent lack of fit for domain-specific or regionalized data exchange and publication use cases has motivated a large number of localized, domain-specific developments or deployments of RDM solutions. These emerging solutions address some of the immediate needs, in part motivated by the increasing enforcement of minimum RDM standards by funding agencies. Yet as of today, the lack of infrastructure allowing interoperability across RDM systems still limits the potential impact that research data can have on science and society.
This problem can be addressed by establishing a network of interoperable but independently governed and funded services that jointly form a decentralized research data management system (dRDM). Such a system makes digital research objects available across a network of participating institutions and investigators for publication, query, retrieval, backup or archive, and collaborative evolution. Importantly, this is achieved without critically relying on central services, thereby offering a high level of resilience against any failure of individual network components, including technical errors, but also institutional failure like discontinued funding.
Two primary models of decentralization can be distinguished: (1) a federation, where a single technology is utilized across partner sites to provide a homogeneous solution, and (2) interoperability, where multiple technologies are used across partner sites but are integrated into a single, heterogeneous system of components. On the one hand, the federation model dramatically simplifies the technical challenges. Simplicity comes at a cost though, as it constrains all partner sites to the deployment and maintenance of a single (homogeneous) software solution that might be suboptimal for many partners; a "one-size-must-fit-all" problem that can limit the type of partners involved in the federation. On the other hand, the interoperability model allows decentralization based on a network of heterogeneous software solutions. Each partner site is free to employ the optimal, site-specific solution, avoiding the challenges and limitations of a "one-size-must-fit-all" approach. In such a system, though, the challenge is shifted to establishing effective interoperability between the different technologies employed.
Arguably, the interoperability model is more flexible and inclusive as it allows a more diverse set of partner sites to participate. More importantly, the interoperability model can improve the widespread application and resilience of dRDM. For example, established analysis and deployment workflows at each site can keep working, while interoperability with other sites can be established in parallel, for those projects requiring it, rather than requiring disruptive infrastructural changes that can simultaneously impact multiple laboratories or researchers. In the following, we present four perspectives on the utility of this type of dRDM. All four share a common principle: the use of a uniform data structure as a common denominator that facilitates independent development of software adapters to instruments and services, enabling interoperability and data flow between all relevant infrastructure components and participants. While various standards and implementations of such data structures exist (e.g. BagIt, Kunze et al., 2018; Frictionless Data Package, Walsh et al., 2017; or Dat, McKelvey et al., 2020), all presented perspectives share the use of DataLad's datasets (Hanke et al., 2020) as the key technology choice. This particular implementation is a domain-agnostic, lightweight data structure that offers joint version control capabilities for code and data (based on the industry standard Git, git-scm.com), supports arbitrarily structured metadata, and is capable of tracking the identity and availability of dataset components via the git-annex software (Hess, 2020) without requiring universal data access or actually containing the file content. This makes it possible to construct a dataset as a standardized overlay data structure which references content in heterogeneously organized data portals or databases. Moreover, it does not hide or bypass existing institutional access protection mechanisms and leaves authorization procedures in the responsibility of the data owners (see Figure 1).
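As a minimal illustration of this overlay principle (paths and the portal URL are hypothetical), a dataset can record where content lives without storing it, using stock DataLad and git-annex commands:

    datalad create overlay-ds
    cd overlay-ds
    # register a remote file by URL; --fast records availability without downloading
    git annex addurl --fast --file sub-01/anat.nii.gz https://portal.example.org/data/sub-01/anat.nii.gz
    datalad save -m "Register externally hosted file by URL"

The resulting dataset is tiny, versioned, and portable, yet a later datalad get can materialize the file content from the recorded source.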
dRDM perspective: one laboratory or researcher
From the perspective of individual researchers, their laboratories, and collaborators, dRDM can improve day-to-day operations and make them robust against disruptive infrastructural changes. If data are uniformly accessible regardless of their storage location, scientists can orchestrate collaborative workflows and access not only data collected locally but also data from external (public) resources in a streamlined fashion. Moreover, researchers utilizing a dRDM model can ensure consistent and robust data management across local and institutional information technology (IT) environments. For example, dRDM makes it trivial to deploy a processing script from a local copy of data within the laboratory to a larger-scale version of the data hosted in a datacenter. And as most researchers, in particular at early career stages, frequently move their workplace to different institutions (Guthrie et al., 2017), the benefits of this feature extend beyond a single workplace. When research agendas span a longer time frame, such that an employment change does not necessarily imply a fresh start and the discontinuation of previous projects, the potentially substantial and disruptive transition to a new institution and IT environment can be alleviated or prevented by a dRDM-based system.
Without dRDM, and depending on the magnitude of the differences between IT systems and policies, the necessary changes can be severe. Consider, for example, a transition from an environment with ample storage and shared computing resources, to a workplace with minimal local resources, but an institutional cloud storage service account. Before, all data holdings were accessible with low latency as if stored on a single big hard drive. Computing resources had direct data access, and analysis scripts could reference the desired data by (hardcoded) paths. After the transition, scripts cease to work because there is no local storage resource large enough to hold all data for analysis. Instead, additional, service-specific software has to be used to pull required data from the cloud and deposit results into the cloud. Essentially all analysis implementations of the past have to be manually adjusted to work in the new environment, an error-prone process that is in itself a threat to the reproducibility of results.
Using a common data structure as an abstraction of an analysis environment has the potential to substantially ease such transitions. In the case of a DataLad dataset, it is possible to comprehensively include all components of a compute- or data-intensive analysis in a single, version-controlled unit. This includes input data of any number and size, analysis code in any programming language, and even complete computational environments in the form of software container images. The dataset offers an intuitive application programming interface (API) for data access that hides the peculiarities of a particular IT environment and enables the development of analysis code with improved portability properties. For example, a particular input file for an analysis can be referenced using a simple local path, relative to the root path of the analysis dataset: input/datasetA/file1.dat. An analysis script that requires this file can ensure its presence by executing the shell command datalad get input/datasetA/file1.dat. Importantly, the analysis script does not need to reflect that datasetA, which contains the file of interest, is a distinct modular data unit that is presently hosted on a particular storage service. Consequently, the analysis script does not need to be adjusted whenever the availability of datasetA changes because it has been transferred to a different institution. Instead, the DataLad software can be centrally configured to look for datasets, identified by a globally unique identifier and a precise version, at a different or additional location. Given that the data structure also allows for change tracking, it is possible to retrospectively discover how data were manipulated, improving the transparency and reproducibility of conducted projects.
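To make this concrete, here is a hedged sketch of that workflow. Dataset names and the mirror URL are hypothetical, and the configuration key follows DataLad's documented clone-candidate scheme, whose exact spelling should be checked against the installed version:

    # inside the analysis dataset: fetch input by a stable relative path
    datalad get input/datasetA/file1.dat

    # after datasetA is rehosted, register an additional lookup location once,
    # instead of editing every analysis script; {id} expands to the dataset ID
    git config --local datalad.get.subdataset-source-candidate-100mirror 'https://new-host.example.org/{id}'

Analysis code keeps addressing data by path; only this one configuration item has to reflect the relocation.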
For an individual researcher or laboratory, the barrier of entry into such a system is low. With no confinement to external services or file types, a scientist can transition new or existing projects into a common data structure independently and can typically achieve this without assistance, additional infrastructure, or project structure change. Nevertheless, the adoption of a common data structure such as DataLad's datasets implies the necessity to acquire additional expertise, e.g. from documentation, user training, or tutorials, and also an individual's interest in doing so. Efforts such as ReproNim's (repronim.org) webinars, teaching resource collections, and teaching fellowships, or in-depth, user-focused documentation formats such as the DataLad Handbook (Wagner et al., 2020) facilitate this.
dRDM perspective: a research institute
Like individual laboratories or researchers, research institutes also exist in a volatile environment. It is in their best interest to provide their scientists with the latest technologies to maximize their competitive advantage, boost research efficiency, and consequently increase the attractiveness and reputation of a research environment. However, the desire to quickly adopt new technologies has to be counterbalanced with the need to keep the cumulative cost of legacy infrastructure and procedures at a manageable level. This is compounded by the fact that institutions are generally responsible for guaranteeing a certain level of longevity for all research outputs, for example, the retention of research data, typically for at least a decade.
For the same reason that applies to individual researchers and laboratories, namely readiness for future infrastructure transitions, it makes sense for research institutions to utilize a portable, common data structure as an abstraction layer for RDM operations. The key feature of data structures like DataLad's datasets is that they present researchers with a familiar view, a project directory on a filesystem, and internally translate requests for data by location (i.e. a file path) into requests for data by identity (i.e. a UUID or a checksum). This represents a powerful paradigm shift, as it enables future modifications of the content lookup and retrieval without changing the user- and research-facing data representation.
The Institute of Neuroscience and Medicine Brain & Behaviour (INM-7) of the Research Center Jülich uses DataLad datasets not only to manage access to large-scale neuroimaging datasets, like the UKBiobank (Miller et al., 2016), or the Human Connectome Project (HCP, van Essen et al., 2013), but also as a system to archive completed projects. Institute members can discover all managed datasets via a collection that is maintained as a DataLad superdataset (a dataset comprising a versioned collection of datasets) hosted on a local GitLab (gitlab.com) instance. Independent of the hosting choice of the original data provider, institute members can access any data file by requesting it through the institute's dataset collection, as described above. File access permissions are managed either directly by the respective data owners (e.g. each HCP user obtains their own credentials from the HCP consortium) or by controlled access to local downloads of restricted datasets (e.g. dedicated access group for signatories of the UKBiobank data usage agreement). Importantly, data access procedures remain uniform and fine-grained, regardless of whether an analysis is developed on a student's laptop or is computed on the institute's cluster system. This RDM setup also facilitates the ad hoc usage of resources at the Jülich Supercomputing Center (JSC). Institute staff can stage individual data resources on the JSC storage systems, and the DataLad software can transparently obtain dataset content on this independently operated resource without requiring individual adjustments of datasets, or analysis scripts. When a study is completed and archived, its DataLad dataset, including the incorporated study metadata, remains fully discoverable and accessible through the institute's dataset collection. However, file content can be administratively moved from fast and expensive "hot" storage to higher latency bulk storage, and eventually onto tape backup systems, all without structurally impacting dataset access for institute members. Combined with data access statistics, this flexibility allows institute staff to maintain an optimal compromise of data access latency and storage demands without individual user negotiations.
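A plausible user-side view of such an institute setup looks as follows (the GitLab URL and dataset layout are hypothetical):

    # discover all institute datasets via the superdataset
    datalad clone https://gitlab.example-institute.de/rdm/superdataset.git inm7-data
    cd inm7-data

    # install a subdataset without content, then fetch a single file on demand
    datalad get -n studies/hcp
    datalad get studies/hcp/sub-01/T1w.nii.gz

Whether the file content then arrives from hot storage, an archive tier, or an external consortium service is decided by the recorded availability information, not by the user's commands.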
dRDM perspective: a domain data archive or computing platform
Domain data archives seek to provide high-reliability dataset access to all authorized researchers, with a secondary mandate to ensure that publicly funded data are findable via internal search or external indexing. Archives treat datasets as a natural unit of organization, and the necessary considerations are ingress, validation and metadata extraction, storage, publication, and egress. By adopting common data standards coupled with ingress and egress validation mechanisms, an archive team can focus development efforts on the key tasks of ensuring data access, availability, and findability.
For example, OpenNeuro (Gorgolewski et al., 2017) is a public neuroimaging data repository. Rather than imposing its own schema to which submitters must adapt their data, the archive adopted the community-developed Brain Imaging Data Structure (BIDS) standard for data organization and metadata (Gorgolewski et al., 2016). To assure reliable data access, and to serve the wide community of users, the archive relies on commercial infrastructure and uses Amazon Web Services to host the web interface and the Simple Storage Service (S3) to host the data. However, to ensure the long-term availability of the data, it requires a data model that is not tied to any specific vendor, hosting platform, or technology. In addition to the data model, OpenNeuro also aimed to make data available through generalized, stable interfaces independent of a particular storage platform or vendor. Consequently, the archive adopted DataLad to represent datasets internally (within the archive). This choice enables data change tracking and a common protocol for data egress (i.e. Git combined with git-annex). Data ingestion is also facilitated by DataLad. When a dataset is submitted to the archive, a DataLad dataset is created and binary files with imaging data are annexed. The dataset owner makes at least one "snapshot" to mark the dataset as complete and then publishes it in the archive. When the dataset is published, all files are uploaded to S3, and the URLs provided by S3 are associated with the annexed files. Finally, the DataLad dataset is published to a GitHub repository, to allow findability by other researchers even beyond the OpenNeuro Archive. The use of high-availability, permissive, third-party services ensures data are accessible even if the primary website suffers from downtime. At the same time, the data model does not depend on either service and can be ported to other services as new technologies emerge.
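In rough outline, and glossing over the archive's internal automation, this ingestion flow maps onto standard DataLad and git-annex operations like the following (accession number, tag, key, and URLs are illustrative, not OpenNeuro's actual scripts):

    datalad create ds00xxxx                      # dataset for a new submission
    # ...BIDS files are placed into the dataset...
    datalad save -m "Initial submission"         # text goes to Git, binaries to git-annex
    git tag 1.0.0                                # a published snapshot is a Git tag
    # after annexed files are uploaded to S3, their public URLs are registered
    # so that consumers can retrieve content directly
    git annex registerurl <annex-key> https://s3.amazonaws.com/openneuro/ds00xxxx/sub-01_T1w.nii.gz
    datalad push --to github                     # publish the Git history (sibling preconfigured)

The exact commands the archive runs may differ; the point is that every step is expressible in the common data structure's native tooling.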
Version control and persistent identifiers are central features of the OpenNeuro data model. Datasets may change over time as new data are added or metadata is updated, and analyses of a dataset depend critically on the state of the dataset at the time of analysis. Dataset snapshots are represented as Git tags, allowing analyses to refer to the version of the dataset used via its version number (as opposed to by checksum). In addition, digital object identifiers (DOIs) are issued for each snapshot of the dataset, ensuring that the particular version of the dataset may be cited in publications and facilitating the reproduction of analyses.
The use of DataLad and the published datasets on GitHub allows OpenNeuro datasets to be available beyond the archive. A variety of computational systems can reference and access the datasets even without direct interaction with OpenNeuro. For example, a researcher interested in developing a new analysis method might test the code during development on their personal computer by fetching an OpenNeuro dataset for testing or validation. The same researcher can then run a scaled-up version of the analysis on a high-performance computing cluster, which may host OpenNeuro datasets in a centralized location within a datacenter with minimal effort, simply reusing the data model and DataLad version tracking mechanisms. Finally, a cloud-based computational platform may expose OpenNeuro datasets to its users to increase data availability and enhance the general utility of the services offered.
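For example, consuming a published dataset requires nothing beyond DataLad and the public GitHub mirror (the accession and file path below are illustrative of the BIDS layout):

    datalad clone https://github.com/OpenNeuroDatasets/ds000001.git
    cd ds000001
    datalad get sub-01/anat/sub-01_T1w.nii.gz    # content is fetched from S3 on demand

The same two commands scale from a laptop test run to a cluster-wide deployment of the full dataset.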
As datasets are published and accumulate in one or several accessible repositories, new opportunities emerge for data aggregation and reuse across datasets (Avesani et al., 2019). Common metadata standards are essential to effectively harmonize data from multiple sources and enable research questions at scales previously impracticable, and a common data standard likewise facilitates the aggregation of data across providers. The effective separation of metadata (Git) and data (git-annex) is a key feature of the DataLad model that ensures that the metadata can be made accessible even when there are legal and ethical barriers to openly sharing data. It is thus becoming possible to develop tools to aggregate data from multiple providers without requiring an explicit effort from those providers. The dRDM model breaks some of the barriers and facilitates aggregation, curation, and upcycling of data, allowing central archives such as OpenNeuro to act as stewards rather than gatekeepers.
Key partners that can be effectively served by the proposed dRDM model are cloud computing platforms. BrainLife (brainlife.io) is one of the most recent open and publicly funded platforms developed with the goal of serving researchers by facilitating access to, sharing of, and reuse of data processing methods. The code implementing a data processing method can be submitted to BrainLife and registered as a web service (an App). The BrainLife platform allows automated tracking of analysis executions and orchestrates data processing on diverse compute resources via a convenient graphical web interface or command line interfaces. BrainLife is not meant to be a data archive but a registry for reusable processing methods used in published scientific articles. The computational platform is compliant with the BIDS data standard so as to facilitate users' data ingress and egress. Recently, the BrainLife team has used DataLad to connect the platform users with hundreds of BIDS-compliant datasets that are made publicly available as DataLad datasets. BrainLife uses DataLad to offer automated import "with the push of a button" of datasets that users have published on a variety of public archives. BrainLife benefits from the dRDM standardization in two ways: (1) metadata standardization enables automatic identification of relevant dataset components, extraction of key data properties, and match-making of applicable analysis implementations against available data types, and (2) the abstraction of data transport logistics provided by DataLad's datasets enables BrainLife to automatically obtain (pull) data files from the original providers, for example, from OpenNeuro, avoiding manual access to each data archive. Taken together, BrainLife is an example of a highly accessible computing platform that translates the potential of a dRDM system into the immediate computing needs of researchers, by connecting to independent standardization efforts without the need to continuously adjust to implementation changes in a large number of data portal and metadata access APIs.
dRDM perspective: a collaborative multisite consortium, the Canadian Open Neuroscience Platform
The need for data sharing across institutions and states is fueled by the requirement of large sample sizes to enable well-powered and generalizable studies and for distributing the cost of data acquisition across sites. These large consortia generally opt for centralized data hosting, which simplifies data harmonization and management. However, large numbers can also be achieved through many independently acquired datasets that have the potential to better represent a more diverse population, an important factor for the construction of biomarkers. The Canadian Open Neuroscience Platform (CONP) is a consortium aiming for this goal and was funded in part to share neuroscience datasets across Canada within a comprehensive ethical and legal framework, establishing a repository of data implementing the Findable, Accessible, Interoperable, Reusable (FAIR) principles (Wilkinson et al., 2016).
While the central CONP data portal (portal.conp.ca) could have been only a set of links pointing to original infrastructures, this would not have given direct data access across datasets and would have been of limited utility for information aggregation. On the other extreme, centralizing data would have been infeasible. Critically, ethical or institutional policy requirements would have prevented transferring data to a central data storage for a number of datasets that are presently accessible on the platform. To keep the governance of datasets local, the CONP needed to adopt a distributed solution, while still making the data accessible directly through a single portal.
Adopting a portable, common data structure, like DataLad's dataset, as an abstraction provided the CONP a shared and centralized space for distributing the metadata, while keeping the links to the original data locations. Metadata descriptors implemented using the DAta Tag Suite (DATS) model (Sansone et al., 2017) are incorporated in the centrally hosted dataset Git repositories, while original raw data are hosted on diverse platforms (OSF.io, Zenodo.org, Loris.ca, Braincode.ca, and others). The CONP uses a crawler to discover datasets on external services, like OSF or Zenodo, and builds a minimal DATS model for each dataset to make these data findable and accessible through the CONP portal. This offers a simple procedure for researchers who both want to share data in a general repository but also make these data discoverable in a neuroscience specialized portal.
Presently, CONP users must access datasets exclusively using the DataLad software. This imposes requirements, such as the necessity for every consumer to deploy the software. However, not all data consumption scenarios require that each participant operates a full-featured node of the dRDM system. Consequently, the CONP is working on convenient export functionality, such as an in-browser dataset downloader, to lower the technical threshold for interaction with its users. Because such a solution relies on standardized data access records, it can also be used by any other project using the data structure for dRDM.
Conclusions
As illustrated by the four perspectives presented here, dRDM, built on a common, portable data structure that enables uniform access to all relevant commercial and institutional data services, is a flexible model that can scale from personal computing environments to individual institutions, all the way to large-scale collaborations in multisite consortia. The inclusive nature of this RDM approach, which avoids a one-size-must-fit-all prescription of centralized or federated services, makes it suitable for introducing RDM standards and procedures in heterogeneous fields of endeavor. Consequently, it has also been selected as a strategic component of the NFDI Neuroscience initiative, a consortium that aims to consolidate neuroscience RDM in Germany along these lines.
Using the DataLad software and its datasets as an exemplary implementation of a common portable data structure, it is possible to curate and maintain unified data distributions collating data from a wide range of data providers. One such distribution is datasets.datalad.org, which currently provides a single point of entry for public or authenticated access to over 5,000 DataLad datasets covering over 200 TB of neuroscience research data from hundreds of archives, initiatives, or individual laboratories. Among others, this collection also includes the superdatasets for CONP and OpenNeuro and through them provides access to all datasets managed by the respective entities. In turn, this collection is used by BrainLife to automatically discover datasets that can be processed on its platform.
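This collection is reachable with the same tooling used throughout; DataLad documents the /// shorthand as an alias for datasets.datalad.org, though behavior may vary across software versions:

    datalad clone /// datasets.datalad.org
    cd datasets.datalad.org
    datalad subdatasets                          # enumerate the registered collections

From here, any registered dataset, including the CONP and OpenNeuro superdatasets, can be installed and queried recursively.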
Standardizing on a technology implies a substantial risk and installs a single point of failure in a complex system. However, standardization of core components also limits the variability that subsequent developments need to consider and ultimately enables more progress to be made with the same finite resources. In the case of DataLad, risks are introduced by three components: two small-scale developments (DataLad, git-annex) and the version control system Git. Git is a globally adopted industry standard; the chance of a technology failure without an adequate mitigation opportunity can be considered minimal. Both DataLad and git-annex build on Git, adding only documented, plain-text data structures to the content managed by Git. In the case of catastrophic failure (discontinuation of the development), the interpretability of data contained in these structures is unimpaired. Moreover, both software components are openly developed (public code history, issue tracker, support channels) and are available under recognized free software licenses (MIT, Affero GPL), such that continued maintenance by a third party can be considered feasible. This use of general-purpose protocols and technologies makes it possible to present scientific data in a readily usable form on platforms and forums, such as GitHub, that are used by a large audience of nonresearchers, thereby dramatically increasing the exposure of publicly funded research output and improving the capabilities and resilience of global dRDM.
API
An application programming interface (API) defines interactions between multiple software intermediaries. An API can be entirely custom, specific to a component, or it can be designed based on an industry standard to ensure interoperability.
Checksum
A checksum is a small-sized datum derived from a block of digital data for the purpose of detecting errors that may have been introduced during its transmission or storage.
UUID
A universally unique identifier (UUID) is a 128-bit number used to identify information in computer systems.
Version control
Version control (also known as revision control) is a class of systems responsible for managing changes to computer programs, documents, or other collections of information.
Figure 1:
A common, portable data structure allows establishing interoperability between diverse participant sites. Left: A common data structure can serve as a uniform abstraction layer to interface any number of commercial or institutional storage services, which may be centralized or federated systems. Right: The portable nature of the data structure facilitates data exchange between archive and compute services, as well as collaboration among individual researchers or formal consortia. Moreover, it provides institutions with the flexibility to evolve their infrastructure without needlessly impacting scientific workflows.
"Computer Science",
"Environmental Science",
"Engineering"
] |
On a remarkable electromagnetic field in the Einstein Universe
We present a time-dependent solution of the Maxwell equations in the Einstein universe, whose electric and magnetic fields, as seen by the stationary observers, are aligned with the Clifford parallels of the $3$-sphere $S^3$. The conformal equivalence between Minkowski's spacetime and (a region of) the Einstein cylinder is then exploited in order to obtain a knotted, finite energy, radiating solution of the Maxwell equations in flat spacetime. We also discuss similar electromagnetic fields in expanding closed Friedmann models, and compute the matter content of such configurations.
Introduction
In the early days of general relativity there was a common belief that the Universe had to be eternal and unchanging. Following this concept, Albert Einstein introduced the first cosmological model, subsequently named after him. It consisted of a compact spatial hypersurface with positive curvature, the 3-sphere S^3, which did not change under the flow of time. In order to overcome the attractive force of matter and obtain a static configuration, a new term had to be introduced in the field equations: the famous cosmological constant. Although numerous observations have since excluded this kind of static model in favor of expanding ones, the Einstein universe is still a very important explicit solution of the Einstein equations, partly because of the conformal equivalence between (a region of) this model and Minkowski's spacetime (the standard background for quantum field theories). For example, since Maxwell's equations are conformally invariant, one can think of any configuration of electromagnetic fields in the Einstein universe as a configuration in flat spacetime.
The Einstein universe is a particular case of, and conformal to, the closed Friedmann-Lemaître-Robertson-Walker (FLRW) models, where the assumption of staticity is relaxed by introducing a scale factor that allows the universe to expand or contract. Using this relation, one can extend solutions of Maxwell's equations in the Einstein universe to the closed FLRW models. In general, when one allows the spatial volume to change in time, the energy of the electromagnetic configuration will also change according to an appropriate power of the scale factor.
The common denominator of the models described above is the 3-sphere S^3. This manifold is an important example of a non-trivial fibre bundle, given by the Hopf fibration, whose base is the usual 2-sphere S^2, and whose fibres are circles S^1 (the Clifford parallels). It is natural to look for solutions of the Maxwell equations whose electric and magnetic fields, as measured by the static observers, are aligned with these fibres. Such an ansatz will be used in the first section to find a remarkable solution of Maxwell's equations in the Einstein universe, which will then be interpreted as a knotted, finite energy, radiating electromagnetic field in Minkowski's spacetime. The extension of this solution to closed FLRW models will be carried out in the second section. Finally, the matter distribution that must be added to obtain a self-consistent solution of the Einstein-Maxwell equations will be determined in the third section.
We follow the conventions of [1,2]; in particular, we use the Einstein summation convention and a system of units in which c = G = 1.
Electromagnetic field in the Einstein universe
We are interested in finding solutions of the Maxwell equations in the cylinder E = R × S 3 with the standard Lorentzian metric. This manifold represents a static universe with positive cosmological constant Λ, called the Einstein universe. By choosing the radius of this universe as our unit length we can assume, without any loss of generality, that E is the product of R by the unit sphere.
Since $S^3 \cong SU(2)$ is a Lie group, so is E. We can introduce a left-invariant orthonormal tetrad of one-forms on this manifold: take $\theta^0 = dt$ to be the standard element of the holonomic basis on the cylinder, defining the cross-section foliation. On each leaf, an orthonormal triad $\{\theta^1, \theta^2, \theta^3\}$ is chosen in such a way that relation (1) holds, thus reproducing the su(2) Lie algebra structure. In terms of the dual orthonormal vector tetrad, the choice of $\theta^0$ and relation (1) can be rewritten accordingly. We look for a solution of the Maxwell equations in vacuum, with the Faraday two-form F; written in this basis, Maxwell's equations reduce to a system for the frame components. The simplest nontrivial solution can be obtained from the assumption that the components $E_i$ and $B_i$ of the electric and magnetic fields E and B measured by the stationary observers depend only on time. Using that ansatz, we arrive at the relations (6). A solution of system (6) will involve trigonometric functions with various initial phases and field strength factors. We choose the simplest one for the conformal analysis, because the results will not change qualitatively under more general assumptions.
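To fix conventions for relation (1): one common normalization (the factor depends on the chosen orientation and sphere radius, so this is only a representative choice, not necessarily the one used in the original display) is
$$ d\theta^1 = -2\,\theta^2 \wedge \theta^3, \qquad d\theta^2 = -2\,\theta^3 \wedge \theta^1, \qquad d\theta^3 = -2\,\theta^1 \wedge \theta^2, $$
equivalently $[X_i, X_j] = 2\,\epsilon_{ijk} X_k$ for the dual vector fields, which is the su(2) structure referred to above.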
Recall that Minkowski's spacetime is conformal to an open region of the Einstein universe [2,3]. More precisely, the Minkowski metric g can be written as
$$ g = \Omega^{-2}\left(-dt^2 + d\psi^2 + \sin^2\psi\,(d\theta^2 + \sin^2\theta\, d\varphi^2)\right), \qquad \Omega = \cos t + \cos\psi, $$
where $(\psi, \theta, \varphi)$ are the standard hyperspherical coordinates on the 3-sphere. In these coordinates, Minkowski's spacetime corresponds to the region $0 \le |t| + \psi < \pi$ (Fig. 1). Since Maxwell's equations are conformally invariant, we can interpret (7) as an electromagnetic field in flat spacetime^3. The energy will of course change under the conformal transformation.
2 The property that all field lines (as seen by the stationary observers) are closed is unstable: any generic electromagnetic perturbation will destroy this feature. Moreover, such small perturbations will not decay away in time, as they would in Minkowski's spacetime, since $S^3$ is compact.
3 See [4,5] for a similar construction exploiting the conformal relation between Minkowski's and de Sitter's spacetimes.
Its value will be given by an integral expression over $\tilde\Sigma_t$, where $T_{ab}$ is the energy-momentum tensor of F, $\tilde X^0 = \Omega X^0$ is $X^0$ normalized with respect to the Minkowski metric g, and $\tilde\Sigma_t$ is the spacelike hypersurface corresponding to the hyperspherical cap $\Sigma_t$ of constant t (Fig. 1). Note that $\tilde\Sigma_t$ is not a Cauchy surface for Minkowski's spacetime unless t = 0, as it approaches I^- for t < 0 and I^+ for t > 0. Using $\tilde T_{ab} = \Omega^2 T_{ab}$ and $d(\tilde{vol}) = \Omega^{-3} d(vol)$, we can re-express the energy as an integral over the hyperspherical cap $\Sigma_t$. It is straightforward to see that this quantity is finite. Moreover, it is a decreasing function of the parameter t > 0, which indicates that the solution under consideration describes radiation fields. This can be seen by looking at the time dependence of the energy (Fig. 2), given, for t > 0, by an explicit formula. The electromagnetic field (7) is related to the simplest initial conditions given by Rañada in [6, Section 2] (see also [7,8]), where only the initial magnetic field is nonzero. In fact, the phase-shifted solution (with $E_0 = 1$) is the Cauchy development of initial conditions of that type. Because it is aligned along the Clifford parallels, the Hopf index n is equal to one in our case. This fact can be confirmed using an integral expression for this quantity in terms of a potential A with F = dA.
Expanding models
In this section we turn our attention to the expanding models. Consider a metric of the form
$$ g = -dt^2 + a(t)^2\, \gamma_{S^3}, $$
where $\gamma_{S^3}$ is the standard metric of $S^3$ with unit radius; this corresponds to a closed FLRW model, widely used in modern cosmology [1].
We are looking for a solution of the Maxwell equations similar to (7), namely with a Faraday tensor of the same form, in which $E_1$ and $B_1$ depend only on time. Substituting into the vacuum Maxwell equations (3) yields the system (18); the Einstein universe case corresponds to $a \equiv 1$. Note that the $T_{00}$ component of the electromagnetic stress-energy tensor indicates that the energy density of the solution decays with the fourth power of the radius of the universe.
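This decay is the familiar radiation scaling. Schematically, up to unit-dependent constant factors, the orthonormal-frame components of such a conformally rescaled solution fall off as $a^{-2}$, so
$$ T_{00} \propto E_1^2 + B_1^2 \propto a(t)^{-4}. $$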
Matter content
The solutions of Maxwell's equations discussed in the last sections were derived under the assumption of a fixed background metric, satisfying the Einstein field equations $G_{ab} + \Lambda g_{ab} = 8\pi T^{tot}_{ab}$ (20), where $G_{ab}$ is the Einstein tensor, $g_{ab}$ is the metric and $T^{tot}_{ab}$ is the total stress-energy tensor. One can ask what kind of matter has to be present for this configuration to be self-consistent. In order to answer this question, we consider the decomposition of the full stress-energy tensor into the parts coming from the electromagnetic field and matter, $T^{tot}_{ab} = T_{ab} + T^{mat}_{ab}$. We already used the expression for the time-time component of $T_{ab}$ when computing the energy of the solution (7); the full expression for this tensor is given by (10). It is straightforward to see that it has diagonal form for any solution of the system (18), $T_{ab} = \mathrm{diag}(D, -D, D, D)$, where in general $D = D(t)$.
Let us start from the assumption that matter can be described by a density ρ and a principal pressure p aligned with X 1 , T mat 00 = ρ, T mat 11 = p. A possible interpretation of this anisotropic pressure can be given by superimposing the stress-energy tensors of two pressureless fluids with the same mass density µ, moving in opposite directions along X 1 , with the same velocity with respect to the fundamental observers: Another possible interpretation can be given by considering cosmic strings aligned with the Clifford parallels, expanding and contracting with space while preserving the structure of the Hopf fibration.
Because the metric in this model is similar to that of the closed FLRW universe, we end up with a system analogous to the Friedmann equations, with additional terms coming from the electromagnetic stress-energy tensor (23). As it turns out, we can deal with system (23) using a procedure similar to the standard case [1]. The first equation is equivalent to a conservation-type relation; after taking a time derivative of both sides and using the third equation from (23), we obtain an expression involving another integration constant M. We can now use the second equation from (23) to obtain the pressure p. Note that the matter satisfies the weak, strong and dominant energy conditions if $M \ge 0$, as in this case $0 < p \le \rho$. Nevertheless, $p/\rho$ always approaches 1 as a tends to zero, indicating that the matter particles approach the speed of light near the Big Bang or the Big Crunch. In the limiting case M = 0 we have $p = \rho$, meaning that the particles are actually moving at the speed of light. The behavior of the scale factor a can be derived from an effective potential formulation. Using (24) and (25), we can think of the resulting terms as a kinetic part, an effective potential V(a), and a conserved energy $h = -1/2$, respectively. The potential is a function of a, but also depends on the three parameters Λ, M and $E_0$. Clearly we need a positive cosmological constant to have an (unstable) equilibrium point, so that we can reproduce the previous results concerning the Einstein universe. In this case, V(a) has the shape depicted in Fig. 3. Depending on the values of the parameters, we can have expanding solutions, recollapsing solutions, de Sitter-like bouncing solutions, and of course the unstable static solution and its asymptotes. Note that in units for which the radius of the static solution is a = 1, we have $\Lambda = 1 + 4\pi E_0^2$ and $M = \frac{1}{3}(1 - 8\pi E_0^2)$ (hence we must have $E_0^2 \le \frac{1}{8\pi}$). Finally, we note that except in the case of the Einstein universe and its two asymptotes, the electromagnetic field performs a finite number n of oscillations during the history of the universe, proportional to its lifetime as measured in conformal time. Therefore, at any given epoch these fields will appear mostly constant and may perhaps serve as models of primordial magnetic fields [9].
"Physics"
] |
Event-based Consensus Tracking for Nonlinear Multi-Agent Systems under Semi-Markov Jump Topology
This paper studies the event-triggering leader-follower consensus with strictly dissipative performance for nonlinear multi-agent systems (MASs) with semi-Markov changing topologies. First, a polynomial fuzzy model is established to describe the error nonlinear multi-agent system that is formed by one virtual leader and its followers. Then, a new event-triggering transmission strategy is proposed to mitigate communication and computational load. By utilizing the event-triggering mechanism and modeling the switching topologies by a semi-Markov process, a sampled-data based consensus protocol is designed. Compared with traditional Markov jump topologies, the transition rate is time-varying for semi-Markov switching topologies. By a mode-dependent Lyapunov-Krasovskii functional, sum-of-squares based relaxed stabilization conditions for fuzzy MASs are obtained to guarantee event-triggering consensus with strict dissipativity in a mean-square sense, i.e., the derived conditions take into account the joint effects of event-triggering control, semi-Markov jump topologies and external disturbance. An illustrative example is provided to verify the proposed consensus design schemes.
I. INTRODUCTION
Cooperative consensus of multi-agent systems (MASs) has received considerable attention owing to its wide applications, including flocking [1], formation control [2], [3]. The main purpose of consensus problems is to design a distributed controller (consensus protocol), which can guarantee that all agents can reach a common state by exchanging local information among neighboring agents via communication link. Various control schemes have been utilized, such as finite time control in [4], fault-tolerant control in [5], [6], adaptive control in [7]- [9] and optimal control in [11], [12].
The communication topologies among the agents are often not fixed, due to link interruptions and new link establishment partly stemming from communication equipment failures and disturbance. To describe time-varying topology, a common method is to model the switching topologies by a Markov process, which has attracted a lot of attention; for example, see [13]-[15]. However, in practice, Markov changing topologies have many limitations because the dwell time obeys an exponential distribution and the transition rates are constant. Different from traditional Markov jump topologies, the dwell time of semi-Markov changing topologies obeys more general distributions, including the Gaussian distribution and the Weibull distribution. For semi-Markov switching topologies, the transition rates are time-varying and depend on the dwell time. Recently, fruitful results have been reported on semi-Markov switching topologies [16]-[18]. Hence, semi-Markov changing topology is one of the issues worth considering here.
Dissipativity theory was introduced in [19], and it plays a key role in the analysis and synthesis of control systems. In practice, it is necessary to guarantee dissipativity to achieve interference attenuation. Dissipativity is regarded as a generalization of the H∞ performance, the passivity theory, and the bounded real lemma. Dissipative performance has been discussed for a variety of dynamic systems [20], [21]. For instance, [21] studies observer-based event-triggering sliding mode control with strict dissipativity for switched stochastic discrete systems. Event-triggering control (ETC), as an effective scheme for saving communication resources and alleviating control updates, has gained remarkable attention. Different from the time-triggering scheme, data transmission and control updates are decided by an event-triggering condition: when the triggering condition is met, an event occurs. The central idea and challenge of ETC are to establish the time sequence of data transmission through a predefined event-triggering strategy, which differs from the time series of traditional periodic control. For example, see [22] and the references therein. Recently, event-triggering consensus problems for MASs have attracted extensive attention, and rich results have been obtained [23]-[28]. For instance, the control problem of event-triggering consensus is discussed for linear MASs with changing topologies in [26].
Recently, the polynomial fuzzy model in [29] was introduced for modeling a nonlinear system by polynomial expressions. The new fuzzy model can be viewed as a generalization of the T-S fuzzy model [30]-[32], which has attracted extensive attention; one may refer to [33]. To date, few results have been reported on event-triggering consensus with strictly dissipative performance for polynomial fuzzy MASs under semi-Markov jump topologies.
Motivated by the above discussion, this paper investigates the event-triggering consensus with strict dissipativity of polynomial fuzzy MASs with semi-Markov changing topologies. The main contributions of this paper are summarized as follows: (i) Most existing results deal with the consensus problems of nonlinear MASs by using Lipschitz conditions, such as [9], [10]. A polynomial fuzzy model is established to describe the error nonlinear multi-agent system in this paper. Compared with [34], the fuzzy model here is simpler and requires no extra assumptions.
(ii) In [35], [36], the consensus problems of continuous-time communication are investigated for nonlinear MASs under changing topologies. [13] addresses the time-triggering consensus problems for nonlinear MASs under Markov switching topologies. Unlike [13], [35], [36], a sampled-data mode-dependent event-triggering transmission strategy is presented here to reduce communication and computational load. By using the event-triggering scheme and modeling the switching topologies by a semi-Markov process, the mode-dependent event-triggering consensus protocols are designed.
(iii) Using a mode-dependent Lyapunov-Krasovskii functional, relaxed stabilization conditions based on sum of squares (SOS) [37] are obtained to assure event-triggering consensus with strict dissipativity in a mean-square sense, i.e., the presented conditions take into account the joint effects of event-triggering communication, semi-Markov jump topologies and external disturbance.
The remainder of this paper is organized as follows: In Section 2, the related knowledge of graph theory is introduced and the problem formulation is given. In Section 3, the polynomial fuzzy model is built and the sample-data mode-dependent event-triggering transmission scheme is designed. In Section 4, event-triggering consensus protocols and the main results are presented. In Section 5, an illustrative example is provided. We conclude this paper in Section 6.
Notation: The symbol ⊗ denotes the Kronecker product. · is the Euclidean norm. I represents the identity matrix with appropriate dimensions. E{·} is the expectation operator. (Ω, F, P) denotes a probability space. Q > 0 means that the matrix Q is positive definite. The superscript T for matrix Q T denotes transpose of matrix Q. Sym(A) means A + A T . Σ 2 represents SOS.
II. PRELIMINARIES AND PROBLEM FORMULATION
Here, we introduce the related knowledge of graph theory and the problem formulation is presented.
A. GRAPH THEORY
Let G = (V, E, A) be a digraph generated by N follower agents, in which V = {1, ..., N } is a nonempty node set, E = {(i, j) : i, j ∈ V} denotes an edge set, and A = [a ij ] ∈ R N ×N represents a weighted adjacency matrix.
We denote Ḡ as a digraph formed by one virtual leader labeled 0 and N follower agents labeled 1 ∼ N.
C. PROBLEM FORMULATION
Here, we consider a leader-follower nonlinear multi-agent system formed by N followers and one virtual leader. Each agent's dynamics is described by a nonlinear differential equation, in which x_0 ∈ R^n is the state of the virtual leader, x_i ∈ R^n is the state of agent i (i = 1, . . . , N), f(x_i) ∈ R^n is a polynomial vector in x_i, u_i ∈ R^n is the control input, d_p ∈ R^{n×q}, and w_i ∈ R^q is the external disturbance. The error state is e_i = x_i − x_0; the error dynamics (2) follow accordingly. Before going further, the following assumptions and concepts are given to obtain the main results. Assumption 1: Each Ḡ, γ(t) ∈ S, contains a directed spanning tree rooted at the virtual leader. Assumption 2: States of each agent are periodically sampled, and the sampling period is synchronized by a clock. Definition 1 [36]: Given matrices Y ∈ R^{ι×q}, X = X^T ∈ R^{ι×ι}, and Z = Z^T ∈ R^{q×q} with X ≤ 0 and Z > 0, if the associated dissipation inequality holds for all T* ≥ 0 and some δ > 0, then (3) is called strictly (X, Y, Z)-δ-dissipative. Definition 2: Under the consensus protocol u_i, system (2) achieves mean-square consensus if (5) holds for any initial conditions x_i(0), x_0(0) ∈ R^n. Remark 1: Inspired by [35], the definition of mean-square consensus in (5) for event-based MASs under semi-Markov jump topologies is presented.
A. POLYNOMIAL FUZZY MODEL
To describe system (3), a polynomial fuzzy model (6) is established, in which the premise variables θ_i enter through membership functions h_p(θ_i). The compact form of (6) is a convex blending of local polynomial models, and the function h_p(θ_i) has the properties h_p(θ_i) ≥ 0 and Σ_p h_p(θ_i) = 1.
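For orientation, the generic shape of such a model, in the standard polynomial fuzzy form of [29] (the concrete matrices are those defined in (6); this display only fixes notation), is
$$ \dot{x}(t) = \sum_{p=1}^{q} h_p(\theta)\left( A_p(x)\,\hat{x}(x) + B_p(x)\,u(t) \right), \qquad h_p(\theta) \ge 0, \quad \sum_{p=1}^{q} h_p(\theta) = 1, $$
where $A_p(x)$ and $B_p(x)$ are polynomial matrices in $x$ and $\hat{x}(x)$ is a vector of monomials in $x$.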
B. EVENT-TRIGGERING MECHANISM
To save communication resources, an event-triggering control strategy is presented for system (2). To determine whether the sampled data is transmitted or not, the mode-dependent event-triggering condition (8) for the ith agent is defined in terms of a threshold ρ_i > 0 and weighting matrices Φ_{γ(t)} > 0 to be designed later.
The sampling instants are t_k^i + lh, where l = 1, 2, . . ., and h is the sampling period. Define the measurement error E_i(t_k^i + lh), formed from the last released state x_i(t_k^i) and the current sampled state, where α is a scalar with α ∈ (0, 1] and m is an integer. Remark 2: If condition (8) holds, an event is triggered, and the sampled data is sent to the agent's neighbors and controller. Since events are checked only at sampling instants, the inter-event time is bounded below by h, so Zeno behavior does not happen. Remark 3: In (9), motivated by [32], α is introduced to smooth the input signal. If α = 1, the event-triggering mechanism reduces to the conventional one as in [16], [38]. Compared with the conventional event-triggering mechanism, the mechanism in (9) reduces erroneous events induced by abrupt changes of the output measurement.
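To fix ideas about the form of condition (8) referenced in these remarks, a representative mode-dependent triggering test of this kind (written here only as an illustration of the quantities defined above, not necessarily the exact condition (8)) is
$$ E_i^{\mathsf T}(t_k^i + lh)\,\Phi_{\gamma(t)}\,E_i(t_k^i + lh) \;>\; \rho_i\, x_i^{\mathsf T}(t_k^i + lh)\,\Phi_{\gamma(t)}\, x_i(t_k^i + lh), $$
i.e., a sample is released when the weighted error energy exceeds a fraction of the current weighted state energy.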
IV. EVENT-TRIGGERING CONSENSUS DESIGN AND CONSENSUS CONDITIONS
Now, we consider the event-triggering dissipative consensus conditions for system (7) under semi-Markov changing topologies. The derived results can then be extended to the fixed-topology case.
A. EVENT-TRIGGERING CONSENSUS PROTOCOL
Here, we first design a distributed event-triggering consensus protocol for system (7) under semi-Markov changing topologies.
Considering the controlled output z_i, the augmented system of agent i is obtained, where c_zp ∈ R^{ι×n}, d_zp ∈ R^{ι×q}, and z_i ∈ R^ι.
An event-triggering consensus protocol (11) for agent i is designed, in which a_ij is the weight of information flow, and d_i = 1 if agent i can receive the leader's information under topology γ(t). Remark 4: Our purpose is to design the consensus protocol (11) to ensure that all agents can achieve agreement while alleviating the consumption of communication resources.
By Definition 1, (14) shows that the system is strictly (X, Y, Z)-δ-dissipative.
Based on (17), one obtains the required bound. Therefore, there exists a scalar ε > 0 such that the corresponding inequality holds. By using Dynkin's formula, one then has the associated expectation estimate.
Similarly, one obtains the companion estimate. It then follows from (31)-(33) that lim_{T→∞} E‖e(σ)‖² = 0. According to Definition 2, all agents achieve consensus. This completes the proof. ∎ Remark 5: In Theorem 1, by the polynomial Lyapunov-Krasovskii functional technique, an SOS-based relaxed sufficient condition is presented to ensure that the polynomial fuzzy MASs achieve mean-square agreement with strictly dissipative performance under event-triggering control and semi-Markov switching topologies. Based on Theorem 1, we present the following approach to design the control gains.
C. CONSENSUS CONDITIONS UNDER SEMI-MARKOV SWITCHING TOPOLOGIES
Here, based on the result obtained in Theorem 2, consensus conditions for MASs under semi-Markov switching topologies are given.
V. ILLUSTRATIVE EXAMPLE
Consider a nonlinear multi-agent network whose switching topologies are shown in Figure 1. Each agent's dynamics is taken from [40]. The polynomial fuzzy model is established on the operating domain x_{i1} ∈ [ζ^1_1, ζ^2_1], x_{i2} ∈ [ζ^1_2, ζ^2_2], and x_{i3} ∈ [ζ^1_3, ζ^2_3], where ζ^1_1 = −24, ζ^2_1 = 24, ζ^1_2 = −32, ζ^2_2 = 32, ζ^1_3 = −46, and ζ^2_3 = 46. The augmented fuzzy error system is then formed as above. Without loss of generality, assume that the edge weights of all communication topologies are 1. Figure 2 depicts the semi-Markov switching signal, and the corresponding Laplacian matrices are as given. The external disturbance is w(t) = 1.5 e^{−0.25t} |cos t|. With the design parameters so chosen, the event-triggering instants of each agent are depicted in Figure 3, which indicates that the amount of transmitted sampled data is reduced. The state trajectories of each agent are shown in Figure 4, and the error states are given in Figure 5. The simulation results show that all agents achieve consensus, which demonstrates the effectiveness of the presented design schemes.
VI. CONCLUSION
In this paper, the event-triggering consensus with strict dissipativity has been studied for fuzzy MASs with semi-Markov jump topologies and external disturbance. A new event-triggering transmission strategy and mode-dependent consensus protocols were designed, and SOS-based relaxed conditions were derived to guarantee mean-square consensus with strictly dissipative performance.
"Mathematics"
] |
A class of graphs approaching Vizing's conjecture
For any graph $G=(V,E)$, a subset $S\subseteq V$ \emph{dominates} $G$ if all vertices are contained in the closed neighborhood of $S$, that is $N[S]=V$. The minimum cardinality over all such $S$ is called the domination number, written $\gamma(G)$. In 1963, V.G. Vizing conjectured that $\gamma(G \square H) \geq \gamma(G)\gamma(H)$ where $\square$ stands for the Cartesian product of graphs. In this note, we define classes of graphs $\mathcal{A}_n$, for $n\geq 0$, so that every graph belongs to some such class, and $\mathcal{A}_0$ corresponds to class $A$ of Bartsalkin and German. We prove that for any graph $G$ in class $\mathcal{A}_1$, $\gamma(G\square H)\geq \left(\gamma(G)-\sqrt{\gamma(G)}\right)\gamma(H)$.
Introduction
For basic graph theoretic notation and definitions see Diestel [3]. All graphs G(V, E) are finite, simple, connected, undirected graphs with vertex set V and edge set E. We may refer to the vertex set and edge set of G as V (G) and E(G), respectively.
For any graph G = (V, E), a subset S ⊆ V dominates G if N [S] = V (G). The minimum cardinality of S ⊆ V , so that S dominates G is called the domination number of G and is denoted γ(G). We call a dominating set that realizes the domination number a γ-set.
For a vertex h ∈ V (H), the G-fiber, G^h, is the subgraph of G □ H induced by {(g, h) : g ∈ V (G)}. Similarly, for a vertex g ∈ V (G), the H-fiber, H^g, is the subgraph of G □ H induced by {(g, h) : h ∈ V (H)}.
Perhaps the most popular and elusive conjecture about the domination of graphs is due to Vadim G. Vizing (1963) [5]; it states that γ(G □ H) ≥ γ(G)γ(H). To read more about past attacks on the conjecture, and which graphs are known to satisfy its statement, see the survey [2].
One of the earliest significant results is that of Bartsalkin and German [1], who showed that the conjecture holds for decomposable graphs, that is, graphs G whose vertex sets can be disjointly covered by γ(G) cliques, as well as for all spanning subgraphs of decomposable graphs with the same domination number. Bartsalkin and German called the family of such graphs class A. There are known examples of graphs not in class A (see for example [2], page 5); however, the examples in the literature satisfy the property that if we add the maximum number of edges to such a graph without changing the domination number, the clique cover number is one more than the domination number of the resulting graph. This gives motivation to consider such graphs for Vizing's conjecture.
Furthermore, it is interesting to generalize the class of decomposable graphs to those with clique cover number exceeding the domination number by some fixed amount, since every graph falls into some such class. By producing bounds on the domination numbers of Cartesian products of graphs where one factor is in such a class, we could hope to produce a better bound for all graphs.
The best current bound for the conjectured inequality was shown in 2010 by Suen and Tarr [4]. In this note, we extend the technique of Bartsalkin and German, defining classes of graphs A_n for n ≥ 0, and show that for any G in class A_1, γ(G □ H) ≥ (γ(G) − √γ(G))γ(H). Although graphs in classes A_n for n > 1 are not well understood, Douglas Rall has produced examples in A_{2n−4} for any n ≥ 2 (personal communication).
We adhere closely to the notation of [2].
2. Extending the Argument of Bartsalkin-German
2.1. Concepts and Consequences.
Given a graph G, we say that G satisfies Vizing's conjecture if for any graph H, (1.1) holds.
The clique covering number θ(G) is the minimum number k of sets in a partition of V(G) into cliques. The following is a recursive definition of the class of graphs D_n. Let D_0 be the set of decomposable graphs of Bartsalkin and German [1], that is, those graphs G so that θ(G) = γ(G).
Definition 2.1. For any positive integer n let D n be the class of graphs G such that θ(G) = γ(G) + n and G is not the spanning subgraph of any graph H ∈ D m for 0 ≤ m < n such that γ(G) = γ(H).
Definition 2.2. For any non-negative integer n, let A n be the class of graphs G, such that G is a spanning subgraph of some graph H ∈ D n so that γ(G) = γ(H).
Thus, Bartsalkin and German showed that graphs in class A 0 satisfy Vizing's conjecture.
A known example [2] of a graph not in A_0 is K_{6,6} with the edges of 3 vertex-disjoint 4-cycles removed. It is not difficult to check that it is in A_1.
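Reusing the brute-force domination_number helper from the sketch above, this example can be built concretely; the particular choice of 4-cycles below is ours, though any three vertex-disjoint 4-cycles should give an isomorphic graph.

```python
import networkx as nx  # domination_number as in the previous sketch

K = nx.complete_bipartite_graph(6, 6)        # parts {0..5} and {6..11}
for a, b in [(0, 6), (2, 8), (4, 10)]:       # our choice of three disjoint 4-cycles
    K.remove_edges_from([(a, b), (b, a + 1), (a + 1, b + 1), (b + 1, a)])
print(domination_number(K))                  # domination number of the modified graph
```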
The following is another example [2] of a graph not in A_0 but in A_1. The next two observations are generalized from [1].
Lemma 2.3. For a chosen non-negative integer n, let G be a graph in class D_n with γ(G) = k and C = {C_1, . . . , C_{k+n}} the clique partition of V (G). For any non-negative integer l < k + n and C_{i_1}, . . . ,
Proof. The proof is a simple application of the pigeonhole principle.
We now introduce the main concept which allows us to work within the classes.
Definition 2.5. If a vertex v dominates j cliques of the partition, for some j, 1 ≤ j ≤ m, then we say that v is j-restraining.
Definition 2.6. For any integers l, m, clique partition C, and vertex set D dominating l cliques of C disjoint from it, if C_{j_1}, . . . , C_{j_t} are the cliques from C that have a non-empty intersection with D, then we say D is (|D|, l + t)-restraining. We say that l + t is the restraint of D and l + t − |D| the excess of D.
Notice that a vertex which is j-restraining is also (1, j)-restraining.
The next lemma describes how the sum of restraint of a graph in class D n is limited by n.
Lemma 2.7. For any non-negative integer n, let G be a graph in class D_n with γ(G) = k and C = {C_1, . . . , C_{k+n}} the clique partition of V (G). Suppose for some non-negative integers l and t, that D is a (|D|, l + t)-restraining set, dominating C_D = {C_{i_1}, . . . , C_{i_l}, C_{j_1}, . . . , C_{j_t}} as in Definition 2.6. Then G − C_D cannot contain a set E of vertices which is (|E|, |D| + |E| + n − (l + t) + 1)-restraining.
Proof. If we suppose to the contrary, then D ∪ E is a set of vertices dominating l + t + |D| + |E| + n − (l + t) + 1 = |D| + |E| + n + 1 cliques of the partition. We count |D ∪ E| and one vertex from each undominated clique and find a dominating set of G with size at most k − 1, which is a contradiction.
Notice that every missing G-cell for h is dominated "horizontally", in G^h. We often write C^h_{i_1}, . . . , C^h_{i_l} for the missing G-cells for h, with vertices dominated from C^h_{j_1}, . . . , C^h_{j_t}.
For a clique partition of G and a minimum dominating set D of G □ H, we define a labeling of vertices in D, similar to that of [1], which we call the simple labeling: for G ∈ D_n, γ(G) = k, 1 ≤ i ≤ k + n, and h ∈ V (H), if D ∩ C^h_i is non-empty, we label all of those vertices by i. Choose any vertex h ∈ V (H). If there exist vertices in D ∩ (C_i × N [h]), then one of them received the label i. Notice that projecting all vertices labeled i onto H produces a vertex-labeling where h is adjacent to a vertex labeled i.
2.2. The Argument.
For our main result, our reasoning can be divided into two counting arguments which we call the Undercount Argument and the Overcount Argument. As in the method of Bartsalkin and German, we label vertices of the minimum dominating set D of G □ H by the label of the clique containing their projection onto G. In the undercount argument, we remove some of these labels from all vertices of D, which allows us to relabel them. In the overcount argument, for certain fibers G^h, we assign multiple labels to one vertex of D in each fiber, and later remove the resulting overcount.
Theorem 2.11. For any graph G ∈ A_1 and any graph H, γ(G □ H) ≥ (γ(G) − √γ(G))γ(H).
Proof. Suppose G ∈ D_1 with γ(G) = k. For any graph H, and a minimum dominating set D of G □ H, let C_1, . . . , C_{k+1} be a clique partition of V (G).
We consider only missing cells in ∪_{i=3}^{k+1} C^h_i. If C^h_{j_1}, . . . , C^h_{j_t} ⊆ ∪_{i=3}^{k+1} C_i, then applying Lemma 2.3 with n = 0 we see that there are at most l vertices in D ∩ V (G^h) with duplicated labels held by vertices of D in the same cells. We relabel these vertices by assigning a label of a distinct corresponding missing cell. That is, at most l vertices with duplicated labels receive labels i_1, . . . , i_l.
Thus, for every vertex h ∈ V (H) and missing G-cells for h, C^h_{i_1}, . . . , C^h_{i_l}, there are l − 1 vertices in D ∩ V (G^h) which have duplicated labels held by other vertices of D in the same cell. By assumption, some of the dominating vertices of C^h_{i_1}, . . . , C^h_{i_l} are from C^h_1 or C^h_2, and we remove the labels on those vertices; that is, we remove the labels 1 or 2 from vertices of D in C^h_1 and C^h_2. This produces at least l vertices in D ∩ V (G^h) with duplicated labels held by other vertices of D in the same cells. Now, for every missing cell in G^h, there are enough such duplicated labeled vertices so that every such vertex can be relabeled and receive a label of a distinct missing cell. That is, all missing cells are covered. Projecting all vertices with a given label greater than 2 onto H produces a dominating set of H, of size at least γ(H). Summing over all labels, we count (γ(G) − 1)γ(H) vertices of D.
For 1 ≤ i ≤ k + 1 and h ∈ V (H), if D ∩ C^h_i is non-empty, we label one of those vertices by i.
Let C^h_{i_1}, . . . , C^h_{i_l} be the missing G-cells for h with vertices dominated from C^h_{j_1}, . . . , C^h_{j_t}. We consider only missing cells in ∪_{i=r+2}^{k+1} C^h_i. If C^h_{j_1}, . . . , C^h_{j_t} ⊆ ∪_{i=r+2}^{k+1} C_i, then applying Lemma 2.3 with n = 0 we see that there are at most l vertices in D ∩ V (G^h) with duplicated labels held by vertices of D in the same cell. We relabel these vertices by assigning a label of a distinct corresponding missing cell. That is, at most l unlabeled vertices receive labels i_1, . . . , i_l and all missing cells are covered.
If any of C^h_1, C^h_2, . . . , C^h_{r+1} are members of {C^h_{j_1}, . . . , C^h_{j_t}}, then we apply Lemma 2.3. By assumption, some of the dominating vertices of C^h_{i_1}, . . . , C^h_{i_l} are from C^h_1 ∪ C^h_2 ∪ . . . ∪ C^h_{r+1}, and we remove the labels on those vertices; that is, we remove the labels 1, 2, . . . , r + 1 from vertices of D in C^h_1, . . . , C^h_{r+1}. This produces at least l vertices in D ∩ V (G^h) with duplicated labels held by vertices of D in the same cells. Now, for every missing cell in G^h, there are enough such vertices with duplicated labels so that every such vertex can receive a label of a distinct missing cell. Projecting all vertices with a given label greater than r + 1 onto H produces a dominating set of H, of size at least γ(H). Summing over all labels, we count (γ(G) − r)γ(H) (2.2) vertices of D.
Overcount Argument:
Next we condition on the minimum restraint of a vertex set in G with excess 1. Suppose G has minimum restraint r + 1 for some 1 ≤ r ≤ γ(G), and let E be an (r, r + 1)-restraining set of vertices with minimum restraint r + 1. For any h ∈ H, if G^h contains a missing cell, dominated as in Definition 2.6, and D ∩ V (G^h) has non-zero excess, then we can label one vertex of D ∩ V (G^h) by two labels, say i_1 and i_2, and the rest of the vertices by one distinct label from {i_3, . . . , i_l, j_1, . . . , j_t}. Thus, in every G-fiber with a missing cell, there are at least r vertices of D and at most one such vertex receives two labels.
For any fixed label i, 1 ≤ i ≤ k + 1, projecting the vertices of D labeled i onto H produces a dominating set of H which has size at least γ(H). Summing over all the labels, we count (γ(G) + 1)γ(H) vertices of D. However, those vertices that received two labels are counted twice, and in every G-fiber, if a vertex of D was counted twice, there were at least r − 1 vertices of D that were counted once. We remove the overcount to conclude the stated bound. The above argument does not immediately generalize to other classes, since graphs in D_1 have the property that for any h ∈ H, every G-fiber G^h has either one or no missing cells. This is not true in other classes. For example, if G ∈ D_2 we could have G-fibers with one missing cell, and the overcount argument would not apply.
However, by repeating the undercount argument when G ∈ D_n for any non-negative integer n, we obtain the same undercount result.
We say an (r, r + n)-restraining set S of G ∈ A_n is a minimum restraining set if S has the minimum restraint over all restraining sets with excess n.
Corollary 2.12. For any non-negative integer n, any graph G ∈ A_n, and any graph H, if G contains a minimum restraining set of size r, then γ(G □ H) ≥ (γ(G) − r)γ(H).
Note that this bound is an improvement on the best current bound [4] for any graph G with minimum restraint at most (1/2)γ(G) − 1/2.
Acknowledgements
We would like to thank Bostjan Brešar and Douglas Rall for their patient and insightful comments. | 3,768.8 | 2015-12-03T00:00:00.000 | [
"Mathematics"
] |
$f(R,T)$ models applied to Baryogenesis
This paper is devoted to the reproduction of the gravitational baryogenesis epoch in the context of $f(R, T)$ theory of gravity, where $R$ and $T$ are the curvature scalar and the trace of the energy-momentum tensor, respectively. A minimal coupling between matter and gravity is assumed. In particular, we consider the following two models, $f(R,T) = R +\alpha T + \beta T^2$ and $f(R,T) = R+ \mu R^2 + \lambda T$, with the assumption that the universe is filled by dark energy and a perfect fluid, for which the baryon-to-entropy ratio during the radiation-dominated era is non-zero. We constrain the models with the cosmological gravitational baryogenesis scenario, highlighting the values of the models' parameters compatible with the observational data on the baryon-to-entropy ratio.
I. INTRODUCTION
Since antiparticles were first predicted and observed ([1]), it has been clear that the laws of physics exhibit a high degree of matter-antimatter symmetry. This stands in stark contradiction to everyday phenomena and to cosmological evidence, particularly the fact that our universe consists almost entirely of matter with little primordial antimatter. This asymmetry is verified by the predictions of Big-Bang Nucleosynthesis (BBN) ([2]), the highly precise measurements of the cosmic microwave background ([3]), and the absence of intense radiation from matter-antimatter annihilation ([4]). The origin of the baryon number asymmetry is an open issue of modern cosmology and particle physics. Various baryogenesis scenarios explain why there is more matter than antimatter in this universe ([5])-([12]); baryogenesis might occur during the matter or the radiation eras. The existence of processes which violate C and CP tells us that there is a fundamental asymmetry between matter and antimatter 1 . Thus the possibility arises of processes which preferentially produce matter rather than antimatter (although our present theoretical understanding does not allow us to deduce this directly from the observed CP violation). However, even if this is the case, the ratio of particles to antiparticles will be very close to unity as long as they are in equilibrium, as is the case when the universe is very hot. Only as the universe cools and equilibrium is lost will the tiny asymmetry in the particle interactions be amplified into an actual asymmetry in number densities. These requirements to produce matter-antimatter asymmetry, namely (a) non-conservation of baryon number, (b) CP violation, and (c) departure from equilibrium, are known as the Sakharov conditions [13]. In order to connect to dark energy, the authors of ([14])-([15]) have studied a class of models of spontaneous baryo(lepto)genesis by introducing an interaction between dynamical dark energy scalars and ordinary matter. Recently, Davoudiasl et al. [16] proposed a mechanism for generating the baryon number asymmetry in thermal equilibrium during the expansion of the Universe by means of a dynamical breaking of CP. The interaction responsible for CP violation is given by a coupling between the derivative of the Ricci scalar R and the baryon current J^µ of the form (1/M_*^2) ∫ d^4x √(−g) (∂_µ R) J^µ, where M_* is the cutoff scale characterizing the effective theory, g and R being, respectively, the metric determinant and the curvature scalar. A scenario extending this well-known theory by using a similar coupling between the Ricci scalar and the baryonic current has been discussed by many authors. In ([17]), f(R) theories of gravity are reviewed in the context of the so-called gravitational baryogenesis. 1 One is the charge conjugation symmetry (C-symmetry) and the other is the parity symmetry (P-symmetry). The combined symmetry of the two is called CP-symmetry. Some variant forms of gravitational
baryogenesis using higher-order terms containing the partial derivative of the Gauss-Bonnet scalar coupled to the baryonic current are discussed in ([18]), whereas in ([19]) a gravitational baryogenesis scenario generated by an f(T) theory of gravity, where T is the torsion scalar, is proposed. The purpose of this paper is to investigate the gravitational baryogenesis mechanism in f(R, T) modified gravity, a theory in which matter and geometry are minimally coupled, well known as a generalization of the General Theory of Relativity. This theory was first introduced by the authors of ([20]), and several works with interesting results can be found in [21]-[28].
The paper is organized as follows: a brief review of f(R, T) gravity is given in Section 2. We investigate the essential features of baryogenesis in f(R, T) gravity by calculating the corresponding baryon-to-entropy ratio in a universe containing dark energy and a perfect fluid with constant equation-of-state parameter in Section 3. Some conclusions are presented in the last section.
II. BRIEF REVIEW OF f(R, T) GRAVITY
Let us consider the total action in modified f(R, T) gravity, given by S = (1/16πG) ∫ f(R, T) √(−g) d^4x + ∫ L_m √(−g) d^4x, where R and T are the curvature scalar and the trace of the energy-momentum tensor, respectively, G being the gravitational constant.
From the matter Lagrangian density L_m, we define the energy-momentum tensor of matter as T_{µν} = −(2/√(−g)) δ(√(−g) L_m)/δg^{µν}. Varying the action (2) with respect to the metric, the field equations (4) are obtained. We note that f_R and f_T denote the partial derivatives of f(R, T) with respect to R and T, respectively. The field equations (4) reduce to the Einstein field equations when f(R, T) ≡ R. Contracting Eq. (4) with the metric components g^{µν}, one gets the relation between the Ricci scalar R and the trace T of the energy-momentum tensor.
In f (R, T ) gravity where we take account a minimal coupling between matter and geometry, we consider a CPviolating interaction term generating by the baryon asymmetry of the universe of the form, We define the baryon to entropy ratio as where T D is the decoupling temperature and n B , the baryon number. The " dot " denote the derivative with respect the cosmic time. We assume in this paper that a thermal equilibrium exists. For this reason we consider that the universe evolves slowly from an equilibrium state to an equilibrium state with the energy being linked to the temperature T as In (13), g * represents the number of the degrees of freedom of the effectively massless particles.
In the context of GR, if we assume that the matter content of the universe is a perfect fluid with constant equation-of-state parameter w = p/ρ, the Ricci scalar R and the trace T = ρ(1 − 3w) of the energy-momentum tensor of matter are proportional to one another. If the universe is filled by radiation, the baryon-to-entropy ratio (12) is equal to zero in GR; the result is non-zero for other matter contents. However, a net baryon asymmetry may be generated during the radiation-dominated era in f(R, T) theories of gravity. To show this, we focus on two particular f(R, T) models, namely f(R, T) = R + αT + βT² and f(R, T) = R + µR² + λT, and describe how the baryogenesis epoch can be recovered with these models. We calculate the baryon-to-entropy ratio for each model by considering a universe filled by dark energy and a perfect fluid with constant equation-of-state parameter w = p/ρ, and assuming that the scale factor evolves as the power law a(t) = Bt^γ, where B is a constant parameter.
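As a minimal check of why the GR radiation-era ratio vanishes, the following sympy sketch computes the Ricci scalar of the flat FLRW metric (signature (−,+,+,+), for which R = 6(Ḣ + 2H²)) under the assumed power-law scale factor; R, and hence Ṙ, vanish identically at γ = 1/2.

```python
import sympy as sp

t, g = sp.symbols('t gamma', positive=True)
B = sp.Symbol('B', positive=True)
a = B * t**g                            # assumed power-law scale factor
H = sp.diff(a, t) / a                   # Hubble parameter H = adot/a
R = 6 * (sp.diff(H, t) + 2 * H**2)      # Ricci scalar of flat FLRW, signature (-,+,+,+)
print(sp.simplify(R))                   # 6*gamma*(2*gamma - 1)/t**2, zero at gamma = 1/2
```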
A. The f(R, T) = R + αT + βT² case
For the first model, using the FRW equation (9) together with the assumed forms of the scale factor and of the equation-of-state parameter, we find the analytical expression for the energy density, with δ = 8πG + (α/2)(3 − w) and ζ = β(3/2 − 7w − 3w²/2). Equating this expression with (13), we obtain the decoupling cosmic time t_D, Eq. (16), expressed as a function of the decoupling temperature T_D. Using Eq. (16), we arrive at a final expression (17) for the baryon-to-entropy ratio for the present f(R, T) model. In the radiation-dominated phase, δ = 8πG + (4/3)α and ζ = −β; hence, Eq. (17) reduces to Eq. (18). We can show from Eq. (18) that the resulting baryon-to-entropy ratio is non-zero, in contrast with the GR result, provided γ ≠ 1/2. Within the choice of the free parameters, and depending on the matter content, we can adjust the baryon-to-entropy ratio of Eq. (18) to satisfy the observational constraints. For illustration, we assume that the cutoff scale takes the value M_* = 10¹² GeV, that the critical temperature is T_D = M_I = 2×10¹⁶ GeV, with M_I being the upper bound set by the tensor-mode-fluctuation constraints on the inflationary scale, g_b ≃ O(1), and g_* ≃ 106, which is the total number of effectively massless particles in the Universe [29]. In Table I, we present some values of the baryon-to-entropy ratio. According to the results of this table, we observe that for β = −4×10⁻⁹, n_B/s = 9.01×10⁻¹¹, which is in very good agreement with observations and practically equal to the observed value (n_B/s ≃ 9.42×10⁻¹¹), whereas for β > −4×10⁻⁹ we obtain significantly smaller values. In addition, we plot in Fig. 1 the γ-dependence of the baryon-to-entropy ratio for M_* = 10¹² GeV and T_D = 2×10¹⁶ GeV. From the curves of Fig. 1, we note that the intersections of each curve with the dashed curve representing the observational value correspond to values of γ between 0 and 0.5 for which the baryon-to-entropy ratio agrees with observation. Also, each curve goes towards 0 when the parameter γ tends to 0.5, which is compatible with the theoretical results.
B. The f(R, T) = R + µR² + λT case
For the second model, the first FRW equation (9) becomes Eq. (19). Making use of the assumed scale factor, we can solve (19) explicitly, where Γ = 4µγ²(γ² − 4γ + 1) and ∆ = 8πG + (1/2)λ(3 − w). For this special model, one can express the decoupling cosmic time t_D as Eq. (21), where we assumed that 2025γ⁴ − 30 g_* Γ∆π² T_D⁴ > 0. We then reformulate the baryon-to-entropy ratio (12) as Eq. (22). In the radiation-dominated epoch, ∆ = 8πG + 4λ/3 and the baryon-to-entropy ratio (22) becomes Eq. (23). We see from this result that the baryon-to-entropy ratio is non-zero for γ ≠ 1/2 for the special f(R, T) model considered. Notice that for γ = 0.3 and µ = λ = 10⁻⁵, the baryon-to-entropy ratio is n_B/s = 8.28×10⁻¹¹, which is compatible with the observational value. In the same way, we plot in Figure 2 the γ-dependence of the baryon-to-entropy ratio for the model f(R, T) = R + µR² + λT for M_* = 10¹² GeV and T_D = 2×10¹⁶ GeV; the dashed curve represents the observational value of the baryon-to-entropy ratio, whereas the blue curve represents its evolution for µ = λ = 10⁻⁵.
IV. CONCLUSION
The paper is devoted to the study of the gravitational baryogenesis mechanism in the context of f(R, T) theories, according to the CP-violating interaction that will generate the baryon asymmetry of the Universe. | 2,912 | 2018-08-03T00:00:00.000 | [
"Physics"
] |
DeepFold: enhancing protein structure prediction through optimized loss functions, improved template features, and re-optimized energy function
Abstract Motivation Predicting protein structures with high accuracy is a critical challenge for the broad community of life sciences and industry. Despite progress made by deep neural networks like AlphaFold2, there is a need for further improvements in the quality of detailed structures, such as side-chains, along with protein backbone structures. Results Building upon the successes of AlphaFold2, the modifications we made include changing the losses of side-chain torsion angles and frame aligned point error, adding loss functions for side chain confidence and secondary structure prediction, and replacing template feature generation with a new alignment method based on conditional random fields. We also performed re-optimization by conformational space annealing using a molecular mechanics energy function which integrates the potential energies obtained from distogram and side-chain prediction. In the CASP15 blind test for single protein and domain modeling (109 domains), DeepFold ranked fourth among 132 groups with improvements in the details of the structure in terms of backbone, side-chain, and Molprobity. In terms of protein backbone accuracy, DeepFold achieved a median GDT-TS score of 88.64 compared with 85.88 of AlphaFold2. For TBM-easy/hard targets, DeepFold ranked at the top based on Z-scores for GDT-TS. This shows its practical value to the structural biology community, which demands highly accurate structures. In addition, a thorough analysis of 55 domains from 39 targets with publicly available structures indicates that DeepFold shows superior side-chain accuracy and Molprobity scores among the top-performing groups. Availability and implementation DeepFold tools are open-source software available at https://github.com/newtonjoo/deepfold.
Dataset, training, and validation
We used the latest PDB database (Feb. 2022) for training. We clustered the sequences of the PDB using CD-HIT with 40% sequence identity, which resulted in 31,911 protein chains. We further filtered 23,366 high-resolution chains (resolution < 2.5 Å) for a fine-tuning dataset. The obtained sequences were cropped to 256 and 384 residue sizes, as in AF2, for training. Five DeepFold models were selected from training with various training schedules. For all the trained models, we employed the Uni-Fold (a trainable version of AF2) training system, where the trainings were started from the AF2 parameters and then further optimized in the style of transfer learning. Table S1 shows the training details of a representative model, for which a validation result is shown below. Details of the five DeepFold models are described in Supplementary Section 6. For validation, we tested the trained model on 102 targets of CASP13/14. In order to measure the performance on the side-chain torsion angles precisely, we defined new accuracy measures for χ1 and χ2. The χ1 accuracy is defined as the proportion of predicted χ1 angles that have a difference of 10 degrees or less when compared to the ground-truth χ1 angles. The χ2 accuracy is defined as the proportion of residues whose angles for both χ1 and χ2 have a difference of 10 degrees or less compared to the ground-truth χ1 and χ2, respectively. Figure S1 shows a comparison of DeepFold predictions with AF2 in backbone, side-chain, and secondary-structure accuracies. Blue color indicates that DeepFold outperformed AF2, whereas yellow denotes the opposite cases. From Figure S1 (a), we can see that DeepFold predicts quite different protein structures from those of AF2 for about 13 targets. On the other hand, the average TM-score of DeepFold predictions is 0.8648 while that of AF2 is 0.8592, indicating a modest improvement of 0.56% in favor of DeepFold. Figure S1 (b) and (c) show the χ1 and χ2 accuracies of DeepFold and AF2, respectively. As can be seen from the plots, the average side-chain accuracies for both χ1 and χ2 are higher in DeepFold, with mean scores improved by about 3 percentage points over AF2 in both accuracy measures. More importantly, the majority of targets are improved consistently in side-chain angles. This implies that the modified loss functions were effective in improving the side-chain angle predictions. Figure S1 (d) compares the accuracies of secondary-structure prediction on CASP13/14 targets by DeepFold vs. AF2, which shows that the average secondary-structure accuracy of DeepFold (mean accuracy of 0.8606) was slightly higher than that of AF2 (mean accuracy of 0.8567). Considering that the typical classification accuracy of eight-state secondary structure in the literature ranges from 0.70 to 0.80 (Spencer, et al., 2015; Wang, et al., 2016; Zhang, et al., 2018), the accuracy of 0.8606 is significantly higher than the typical average accuracy. In addition, we found (data not shown here) that both DeepFold and AF2 showed better or comparable accuracies in all eight states, with both models performing well in predicting 'H' (alpha helix) and 'E' (strand), while they demonstrated low accuracies in predicting 'S' (Bend). Figure S2 (a) compares the predicted average side-chain confidence score ŝ for each target with its ground-truth value s. As can be seen from the linear fit, a reasonable correlation between the prediction ŝ and the true confidence score s was achieved, with a correlation coefficient of r = 0.7. Considering that the side-chain confidence s reflects the side-chain angle differences by definition, as shown in Eq.
(9), the high correlation implies that the predicted ŝ can be used as a good measure of the confidence level for side-chain accuracy.
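A sketch of the χ1 accuracy measure defined earlier in this section is below; the paper does not say how angle periodicity is handled, so the wrap-around treatment is our assumption.

```python
import numpy as np

def chi1_accuracy(pred_deg, true_deg, tol=10.0):
    """Fraction of residues whose predicted chi1 lies within `tol` degrees
    of the ground truth; the 360-degree wrap-around is our assumption."""
    d = np.abs(np.asarray(pred_deg) - np.asarray(true_deg)) % 360.0
    d = np.minimum(d, 360.0 - d)
    return float(np.mean(d <= tol))
```

The χ2 measure would apply the same 10° test jointly to the χ1 and χ2 angles of each residue.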
Re-optimization by conformational space annealing
Once 3D structures are inferred from the DeepFold networks, we perform global optimization using conformational space annealing (CSA) with the full-atom force field, distance restraints, and side-chain torsion restraints generated by the networks. The energy function used for CSA is implemented in the PyCSA (Joung, et al., 2018) global optimization library and OpenMM (Eastman, et al., 2017), a molecular dynamics simulation toolkit. The total energy is the sum of a molecular mechanics term E_MM, composed of the AMBER14SB (Maier, et al., 2015) energy and the Generalized Born implicit solvation energy (Onufriev, et al., 2004), together with the restraint terms below. The distogram potential energy E_disto is defined by E_disto = −Σ_{i<j} log(p_ij / p_0), where p_ij indicates the distogram probability between residues i and j for distances < 17.94 Å with constant extrapolation for farther distances (Senior, et al., 2020), and p_0 is the distogram probability at the cut-off distance. Each term of E_disto is an interpolated fit to a cubic spline. Since distogram predictions for larger distances are not expected to be accurate enough, we limited the summation over i < j to all pairs for which the distance of maximum probability is less than 16.06 Å. The restraint energy E_sc for the set of all side-chain torsion angles {χ_k} is the flat-bottomed Lorentzian-type potential energy (Joo, et al., 2018), where χ̂_k is the k-th predicted torsion angle obtained from DeepFold and δ is a tolerance angle (5° in this work), which results in a flat bottom of width 10° with a Lorentzian width of σ = 15°. For the weights w_k, we choose {3.0, 2.5, 2.0, 1.5} for the four types (χ1, ..., χ4) of side-chain angles, respectively. For multiple predicted models, we can generalize the above formula by clustering the predicted angles with a threshold of 30° and taking the new χ̂_k as the average of all the predicted angles in each cluster. Also, a new σ is taken as σ = max(15°, Δ_max + 10°), where Δ_max is the maximum angle difference within the cluster. E_stap is a statistical potential for pairs of torsion angles including backbone and side-chains (Yang, et al., 2012). The CSA method aims to find the lowest-energy structure by exploring the conformational space. In its early stages, CSA searches the entire conformational space, gradually narrowing the search to smaller regions with lower energy. As a result, the CSA method provides optimized structures that satisfy the distogram and side-chain restraints obtained from DeepFold, while also balancing the molecular mechanics force field.
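The exact functional form of the flat-bottomed Lorentzian restraint is not reproduced in this excerpt; the following is one plausible reading consistent with the stated tolerance (5°), flat-bottom width (10°), Lorentzian width (15°), and per-χ weights, and should be treated as an assumption rather than the authors' definition.

```python
import numpy as np

def sc_restraint(chi, chi_hat, w, delta=5.0, sigma=15.0):
    """Hypothetical flat-bottomed Lorentzian restraint: zero within +/- delta
    of the prediction, saturating Lorentzian penalty outside."""
    d = np.abs(np.asarray(chi) - np.asarray(chi_hat)) % 360.0
    d = np.minimum(d, 360.0 - d)
    excess = np.maximum(d - delta, 0.0)        # flat bottom of total width 10 degrees
    return float(np.sum(w * excess**2 / (excess**2 + sigma**2)))

w = np.array([3.0, 2.5, 2.0, 1.5])             # stated weights for chi1..chi4
```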
Ablation study for DeepFold
We retrained the DeepFold model to investigate the effect of each modified loss component. The training database was built using the PDB structures deposited before 28 August 2019, the same as that of AlphaFold2 (AF2) (Jumper et al., 2021). It was also filtered with 40% sequence similarity in the same manner as AF2. We also applied the same sequence-similarity filter to remove PDB chains similar to CASP13/14 targets. The dataset for the ablation study contains 25,777 chains. In the case of the template database provided by AF2, we filtered template structures based on the start dates of CASP13 and CASP14, respectively.
Full model
Figure S3: Comparison of ablation models with the AlphaFold2 model. We validate the trained models with 102 CASP13/14 targets. The first row shows the results for the W-FAPE loss, while the second and third rows represent the results for the SC-torsion loss and the full model, respectively.
Figure S3 presents the outcomes of the ablation study for the three aforementioned models. Even with just the W-FAPE or SC-torsion loss, there are improvements in side-chain accuracy. When these two losses are combined in the full model, the side-chain accuracy is further improved. Detailed numerical results are provided in Table S2. In conclusion, the DeepFold architecture elevates the side-chain accuracy while the backbone accuracy remains stable. Note that the Molprobity score gets worse, from ~1.06 to ~2.3 (smaller is better). However, it was improved through the later CSA re-optimization (see the Molprobity comparison between DFolding-server and DFolding in Figure 3 of the main manuscript). Figure S4 (a) illustrates a TM-score comparison between the original AF2 templates and the new templates (and their alignments), which were obtained using the CRFalign method (Lee, et al., 2022). Structures were generated using the AF2 pipeline. For the 102 CASP13/14 targets, there is an average TM-score improvement of about 0.01. Meanwhile, certain targets display substantial improvement in head-to-head comparison; specifically, for T1064, the TM-score improvement is 0.39. Figure S4 (b, c) shows that the side-chain prediction remains largely unchanged despite the template change. As depicted in Figure S5, the backbone improvement of the structure predicted by DeepFold through CRFalign on target T1064 corresponds to a TM-score difference of about 0.39 over that of AF2 (from 0.4049 to 0.7937). For this target, the Neff score of the MSA was relatively low, around 1.85, and the average TM-score difference among the top 4 templates was 0.11 (0.31 vs. 0.42), which is a considerable difference but not a huge one. Still, there was a considerable TM-score difference of approximately 0.39 in the final result. When comparing the protein tertiary structures with the native structure, notable differences can be observed in the beta-sheet arrangement of the intermediate regions, colored from cyan to yellow. The impact of templates on protein-structure prediction accuracy was studied in a recent work (Wu, et al., 2023), which concluded that templates are especially beneficial for targets with similar templates. Using the domains for which PDB structures are publicly available, we performed comparisons between the results of CSA re-optimization and those of AF2 with several different numbers of iterations. The results, shown in the following table, indicate that the CSA method produces better average scores on side-chain torsion angles and Molprobity. This demonstrates that CSA re-optimization is an appropriate tool for advancing the details of protein structures, as we suggested in the main paper. Apparently, increasing the number of recycling iterations does not lead to statistically significant changes in AF2 results beyond four iterations, as the AF2 paper reports (Jumper et al., 2021). However, another research group, ColabFold (Mirdita, et al., 2022), suggests that increasing the number of recycles can be beneficial for larger proteins or complex structures.
Training details
We trained each DeepFold model with 8 A100 GPUs, where the batch size was 64. Because the batch size is larger than the number of GPUs, we used gradient accumulation for training. The training time for each model is about 3-4 days. We employed a fine-tuning strategy for models 0-3, freezing the Evoformer module so that the gradients affect only the structure module. The differences in hyperparameters between the DeepFold models are outlined in Table S3. We trained the initial model with our recent PDB dataset; this initial training did not include our proposed modifications to the loss functions. Then we further trained our model with bigger crop sizes together with our proposed loss functions. We used the Adam optimizer with exponential decay, for which decay_rate = 0.95 and decay_steps = 500. The following examples illustrate the effectiveness of our methodology. For example, domain T1123-D1, shown in red, showed significant improvement despite having a Neff value of 1.9, indicating relatively low MSA quality. In such cases, our data suggest that DeepFold's updated template information can help improve prediction results. However, Figure 7(a) in the main manuscript shows that there is no improvement for the other three domains marked in red. These three domains all had Neff values greater than 6.0, making them high-quality targets for MSA. It is worth mentioning that when the quality of MSA information is high, the impact of templates on the predicted structure may not be significant, as highlighted in the AF2 paper (Jumper et al., 2021).
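Taking the stated schedule at face value (the base learning rate is not given in this excerpt), the decayed learning rate would follow the usual exponential-decay rule:

```python
def learning_rate(step, base_lr, decay_rate=0.95, decay_steps=500):
    """Exponential decay with the stated hyperparameters; base_lr is unspecified."""
    return base_lr * decay_rate ** (step / decay_steps)
```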
Figure S1 :
Figure S1: Comparison of DeepFold with AF2 for 102 targets of CASP13/14 in terms of (a) TM-scores, (b) & (c) side-chain accuracies of χ1 and χ2, and (d) secondary-structure accuracies in 8 states (calculated by DSSP). All numbers within the figures are average values over the 102 targets.
Figure S1 (b) and (c) show the χ1 and χ2 accuracies of DeepFold and AF2, respectively. Figure S2 (b) & (c) show a comparison of the pLDDT and LDDT scores of all the targets with DeepFold and AF2, respectively. We can see that DeepFold shows a slightly better correlation coefficient of 0.80 over 0.78 for AF2.
Figure S2 :
Figure S2: Validation of the trained model on 102 targets of CASP13/14. (a) compares the true average side-chain confidence s and its prediction ŝ for each target, and (b) & (c) compare (normalized) pLDDT vs. LDDT for all the targets by DeepFold and AF2, respectively.
Figure S5 :
Figure S5: Structure change by new templates and alignment for T1064.
Table S1 :
Training settings and the weights of the loss terms for the training of a DeepFold model (Jumper, et al., 2021)
Table S2 :
Validation on CASP13/14 targets with various metrics. We observe that the modified side-chain loss is effective in increasing the chi1 and chi2 accuracies. Also, we find that combining the two losses gives better results in side-chain accuracies.
Table S3 :
Note that AF_# refers to AF2 run with # recycling iterations. | 3,253.6 | 2023-11-23T00:00:00.000 | [
"Computer Science",
"Biology"
] |
SOFB is a comprehensive ensemble deep learning approach for elucidating and characterizing protein-nucleic-acid-binding residues
Proteins and nucleic acids are essential components of living organisms that interact in critical cellular processes. Accurate prediction of nucleic-acid-binding residues in proteins can contribute to a better understanding of protein function. However, the discrepancy between protein sequence information and the available structural and functional data renders most current computational models ineffective. Therefore, it is vital to design computational models based on protein sequence information to identify nucleic acid binding sites in proteins. Here, we implement an ensemble deep-learning-based method for identifying nucleic-acid-binding residues on proteins, called SOFB, which characterizes protein sequences by learning the semantics of biological dynamic contexts and then develops an ensemble deep-learning-based sequence network to learn feature representation and classification by explicitly modeling dynamic semantic information. Among them, the language learning model, which is adapted from natural language to biological language, captures the underlying relationships of protein sequences, and the ensemble deep-learning-based sequence network, consisting of different convolutional layers together with Bi-LSTM, refines various features for optimal performance. Meanwhile, to address the imbalanced-data issue, we adopt ensemble learning to train multiple models and then incorporate them. Our experimental results on several DNA/RNA nucleic-acid-binding residue datasets demonstrate that our proposed model outperforms other state-of-the-art methods. In addition, we conduct an interpretability analysis of the identified nucleic-acid-binding residue sequences based on the attention weights of the language learning model, revealing novel insights into the dynamic semantic information that supports the identified nucleic-acid-binding residues. SOFB is available at https://github.com/Encryptional/SOFB and https://figshare.com/articles/online_resource/SOFB_figshare_rar/25499452.
1 Supplementary Note 1: The details of the datasets used.
Supplementary Table 1: The details of the datasets used. From top to bottom: the names of the datasets, the number of protein sequences, the number of nucleic-acid-binding residues, and the number of non-nucleic-acid-binding residues in the datasets. Source data are provided with this paper.
2 Supplementary Note 2: The details of the comparison with other language models
Supplementary Figure 1 illustrates the overall prediction performance of SOFB across different feature characterizations. The figure indicates that, despite fine-tuning, the performance of ESM (esm1_t34_670M_UR100) [1] shows negligible improvement. In terms of recognizing DNA- and RNA-binding residues, the AUCs improved marginally by 0.001 and 0.003, reaching 0.898 and 0.802, respectively. However, these enhancements remain insufficient compared to the bio-language learning model initially employed. Notably, the performance of the latest ESM2 (esm2_t12_35M_UR50D) [2] surpasses both ESM and fine-tuned ESM, exhibiting superior metrics across the board, particularly in F1 and MCC. Specifically, ESM2 achieved AUCs of 0.902 and 0.822 for the DNA and RNA tasks, respectively. However, after fine-tuning, ESM2's performance declined, with AUCs dropping to 0.869 and 0.780, representing decrements of 0.032 and 0.042, respectively. This decline might stem from the ESM2 model's reduced suitability for fine-tuning compared to ESM, possibly influenced by variations in certain layer parameters that led to a departure from its original performance level.
4 Supplementary Note 4: The experiment on chain interactions
We conducted another experiment to investigate the effect of chain interactions on the prediction performance of our SOFB. In particular, we adjusted the number of protein chains in the training sets, with five experimental groups of 100, 200, 300, 400, and all protein chains. The experimental results are tabulated in Supplementary Table 2. From the experimental results, it can be observed that in the DNA-binding residue recognition task, as the number of protein chains increases, the interactions between protein chains are enhanced, which improves the results.
Moreover, in the RNA-binding residue identification task, the prediction performance also improves with the increasing number of protein chains.In addition, the ROC curves of the experimental results are illustrated in Supplementary Figure 3, from which we can observe that although the number of protein chains in the training set had a slight impact on the performance of SOFB, an increase in the number of chains enhanced the interaction between protein chains, thereby further improving the predictive capability of SOFB.
Supplementary Table 2: From left to right, each column shows the number of protein chains, the number of amino acids, the number of binding residues, the number of non-binding residues, Rec, Pre, F1, MCC, and AUC. Source data are provided with this paper.
6 Supplementary Note 6: The results of SOFB on different protein families
We conducted experiments evaluating the performance of our SOFB in predicting nucleic-acid-binding residues of proteins across different protein families. We primarily utilized the protein binding residue dataset obtained from BioLip [4] in our study. Although we cannot directly obtain data classified by protein family from BioLip, we categorized the proteins in the BioLip test set based on the provided protein IDs by indexing them in InterPro [5], thereby classifying the proteins according to their protein families. We then evaluated the performance of SOFB across different protein families using MCC as the evaluation metric.
The experimental results are summarized in the top panel of Supplementary Figure 5, which illustrates the seven best-characterized protein families. From Supplementary Figure 5, we can observe that SOFB performed best on the bacterial regulatory proteins, TetR family (PF00440), which represents a DNA-binding domain with a helix-turn-helix (HTH) structure.
The MerR HTH family regulatory protein (PF13411) has a winged helix-turn-helix (wHTH) structural domain. The MarR family (PF01047) and the Myb-like DNA-binding domain (PF00249) also belong to the HTH clan. Therefore, we infer that SOFB concentrates on DNA-binding residues in protein clans with HTH structure.
Supplementary Figure 5: SOFB prediction results for different protein families within the DNA- and RNA-binding test datasets (using MCC as the metric), showing the protein families with the best results, respectively. Source data are provided with this paper.
For the RNA-binding residue prediction task, as illustrated in the bottom panel of Supplementary Figure 5, SOFB performs best on the KH domain (PF00013), which is present in a wide variety of nucleic-acid-binding proteins.
The binding residues of the Pumilio-family RNA-binding repeat (PF00806), whose Puf domains usually occur as a tandem repeat of 8 domains, were also accurately inferred by SOFB. Unfortunately, SOFB exhibited limited performance in the RNA-binding prediction task within other protein families. We speculate that this is due to the presence of multiple proteins or more repeats within the protein families PF00013 and PF00806, so that SOFB learns more informative features of such proteins. Overall, in both the DNA-binding and RNA-binding prediction tasks, SOFB has demonstrated excellent performance in certain protein families.
8 Supplementary Note 8: The results of SOFB on other datasets
We conducted additional experiments to explore and compare the predictive capabilities of SOFB. Firstly, we collected the YFK16, YK17, and MW15 test datasets from [6], each consisting of two subsets for protein-DNA and protein-RNA binding. Subsequent evaluations of nucleic-acid-binding residue prediction performance on these test datasets with the AUC metric involved SOFB along with other baseline models, including DRNApred, COACH-D, SVMnuc, NucBind, and iDRNA-ITF; the results are illustrated in Supplementary Table 4. Besides, due to the limited size of the test datasets, we ultimately selected the YK17 training dataset as our large-scale benchmark dataset. In particular, we employed the CD-HIT method to eliminate protein sequences with identity exceeding 30%, resulting in a final dataset of 464 DNA-binding proteins totaling 106,081 amino acids and 416 RNA-binding proteins totaling 95,020 amino acids. Subsequently, we compared the predictive performance of SOFB with iDRNA-ITF [3] on this dataset to validate the performance of SOFB on large-scale datasets.
Supplementary Table 5: The number of protein entries within the large dataset used, the number of binding versus non-binding residues, and the performance of SOFB and iDRNA-ITF on this dataset, showing that the results of SOFB are superior to those of iDRNA-ITF, currently the best performer. Source data are provided with this paper. The experimental results are summarized in Supplementary Table 5. In terms of DNA-binding residue prediction, we can observe from the table that SOFB outperformed iDRNA-ITF by a clear margin on the large-scale dataset.
For instance, SOFB achieved an AUC improvement of over 5% and a precision improvement of over 10%. In the RNA-binding prediction task, we observed a narrower gap between SOFB and iDRNA-ITF: both models exhibited similar AUC values, but SOFB continued to outperform iDRNA-ITF in precision, reaching a remarkable precision score of 0.86. Overall, our SOFB model maintains its strong effectiveness and remains highly competitive even on larger-scale datasets.
9 Supplementary Note 9: The details of the Case Study
We conducted additional analyses and employed a more consistent criterion for protein selection, where the three protein chains with the highest MCC scores obtained by the best two models (SOFB and iDRNA-ITF) were selected.
For the DNA task, the top three proteins are 5h3r_A, 6c31_A, and 6enb_A; for the RNA task, they are 6htu_A, 5www_A, and 5wzg_A. We compared the predictive results of SOFB and iDRNA-ITF and visualized the nucleic-acid-binding residues of these proteins for the DNA and RNA binding tasks, respectively.
The visualizations of the protein chains are illustrated in Supplementary Figure 7 and Supplementary Figure 8.
The DNA-binding protein 5h3r_A consists of 141 amino acids and 20 DNA-binding residues, as depicted in Supplementary Figure 7. Both SOFB and iDRNA-ITF accurately predicted all 20 binding residues of the protein.
However, iDRNA-ITF predicted more false positives than SOFB did, so our precision (Pre) was 0.188 higher than theirs. SOFB achieved an F1 of 0.909 and a Matthews correlation coefficient (MCC) of 0.898; in contrast, iDRNA-ITF obtained F1 and MCC values of 0.784 and 0.766.
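The reported F1 and MCC for 5h3r_A pin down the confusion counts: with all 20 binders recovered (FN = 0) over 141 residues, FP = 4 reproduces both numbers. The check below uses the standard metric definitions; the FP count is our inference, not stated in the text.

```python
import math

def metrics(tp, fp, fn, tn):
    pre = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * pre * rec / (pre + rec)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return pre, rec, f1, mcc

# 5h3r_A: 141 residues, 20 binders all found, inferred FP = 4.
print(metrics(tp=20, fp=4, fn=0, tn=117))  # F1 ~ 0.909, MCC ~ 0.898
```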
The three protein chains with the best results demonstrate SOFB's superior ability to detect true nucleic-acid-binding sites in sequences that prove challenging for alternative methods. Although SOFB predicts false positives for amino acids at various positions, they are predominantly spatially proximate to the nucleic acid. This finding suggests that SOFB can glean spatial structure information from one-dimensional sequence data, such as residue positions in three-dimensional space after protein folding, and utilize it for nucleic-acid-binding residue identification.
The RNA-binding protein 6htu_A comprises 76 amino acids and 16 RNA-binding residues, as shown in Supplementary Figure 8. SOFB missed three binding sites on this protein chain, while iDRNA-ITF missed only one. However, iDRNA-ITF's success in predicting more binding sites came at the cost of eighteen false-positive amino acids (the number of false positives for SOFB was one). This resulted in its F1 and MCC being 0.254 and 0.313 lower than those of SOFB.
The RNA-binding protein 5www_A comprises 94 amino acids and 24 RNA-binding residues. SOFB and iDRNA-ITF identified 19 and 17 binding residues with 3 and 9 false positives for this protein, respectively. From these results, it is evident that SOFB exhibits more pronounced advantages in the recognition of RNA-binding residues. Its superiority lies in its ability to mitigate false positives while identifying a comparable number of correct examples. This factor contributes to its enhanced performance in this particular task.
test sets) were selected, and we then performed random mutations on the amino acids at the positions of the binding residues. Subsequently, the mutant sequences were fed into the NABert model to calculate the attention scores. Specifically, the scores of an amino acid in the 16 heads of the last layer were averaged to obtain the attention score for that amino acid.
We subsequently conducted a statistical analysis to evaluate the differences in attention scores before and after these positional mutations.Specifically, we computed the attention scores for each mutation and performed a t-test to evaluate the significance of the differences in attention scores.
The experimental results are summarized in Supplementary Figure 10. For DNA-binding residue prediction, we can observe from Supplementary Figure 10 (a) that differences were observed in the attention scores before and after the mutations, with a p-value of 0.05 or less. For RNA-binding residue prediction, Supplementary Figure 10 (b) likewise demonstrates that changes can be observed in the attention scores after the mutations. Furthermore, we observed that in both the DNA-binding and RNA-binding prediction tasks, the attention scores of the mutated positions were lower than before the mutations. This finding suggests that SOFB concentrates more attention on biologically relevant positions, indicating its potential for discovering functional sites.
These statistical results and hypothesis tests provide evidence for the effectiveness and potential interpretability of SOFB, offering different insights into the identification of functional sites.
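The text does not specify which t-test was used; a paired test matches the before/after design, as in this sketch with hypothetical score arrays.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
before = rng.normal(0.6, 0.1, 50)              # hypothetical pre-mutation scores
after = before - rng.normal(0.05, 0.02, 50)    # hypothetical post-mutation scores
t_stat, p_value = stats.ttest_rel(before, after)
print(p_value <= 0.05)                         # significance at the 0.05 level
```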
12 Supplementary Note 12: The results of SOFB incorporating the RoseTTAFold method
RoseTTAFoldNA [7] extends RoseTTAFold's end-to-end deep learning approach to model nucleic acids and protein-nucleic-acid complexes, and can rapidly produce three-dimensional structure models with confidence estimates for protein-DNA and protein-RNA complexes, as well as for RNA tertiary structures. RoseTTAFoldNA is broadly useful for modeling the structure of naturally occurring protein-nucleic-acid complexes and for designing sequence-specific RNA- and DNA-binding proteins.
However, it is unfortunate that the RoseTTAFoldNA model requires both protein sequences and RNA or DNA sequences to predict protein structures or DNA-protein binding, while our study involves only the recognition of nucleic-acid-binding residues within protein sequences, not specific nucleic acid sequences. Therefore, it is difficult to apply the RoseTTAFoldNA model to our study. Nonetheless, we conducted additional experiments and employed RoseTTAFold [8], the prototype of RoseTTAFoldNA, to generate extensive protein structural information and integrate it as part of the bio-information in SOFB for predicting nucleic-acid-binding residues. Specifically, after obtaining the structural information for all training and test sets, we combined it with the 75-dimensional bio-features to obtain 89-dimensional bio-features for nucleic-acid-binding residue prediction. Subsequently, we trained the SOFB model with the newly integrated features and evaluated its performance on the DNA-binding and RNA-binding test sets.
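The 89-dimensional feature implies a 14-dimensional structural block appended to the original 75 dimensions (14 = 89 − 75 is our inference, not stated in the text). A minimal sketch of the concatenation:

```python
import numpy as np

seq_len = 128                                 # example sequence length
bio = np.zeros((seq_len, 75))                 # original per-residue bio-features
struct = np.zeros((seq_len, 14))              # structure features (14 = 89 - 75, inferred)
features = np.concatenate([bio, struct], axis=-1)
assert features.shape == (seq_len, 89)
```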
Supplementary Figure 2: (a) shows the average Recall (Rec), Precision (Pre), F1, and MCC over ten runs of six dynamic contextual embeddings (ProtVec, ESM, Finetune ESM, ESM2, Finetune ESM2, ProGen) on the DNA-binding and RNA-binding test sets; (b) provides ROC curves with AUC values and PR curves with AP values for DNA-binding and RNA-binding residue predictions, respectively, where SOFB performs best on both metrics. Source data are provided with this paper.
3 Supplementary Note 3: The details of the heat maps in the correlation analysis
Supplementary Figure 2 exhibits the feature correlation of SOFB using different feature characterizations. Upon analysis of the correlation heatmaps, we observe that SOFB consistently demonstrates superior performance compared to all other dynamic methods in classifying amino acids and segregating the amino acids within a sequence into two distinct groups, which is crucial for the subsequent identification of nucleic-acid-binding residues. It is noteworthy that, in the context of recognizing DNA-binding residues, the heatmap derived from fine-tuned ESM2 lacks discriminatory patterns entirely, accounting for the stark decline in performance observed. Overall, the feature construction strategy of SOFB surpasses the other methods, showcasing the effectiveness and robustness of SOFB. Heat maps of the correlation analysis of the six dynamic contextual embeddings (ProtVec, ESM, Finetune ESM, ESM2, Finetune ESM2, ProGen) and SOFB on the DNA-binding and RNA-binding test sets are given. Source data are provided with this paper.
Supplementary Figure 4:
provides a comparison of SOFB with other state-of-the-art algorithms on the DNA- and RNA-binding test sets, where the other algorithms' results are reported in [3]. It shows violin plots depicting multiple performance metrics of the different baseline methods along with SOFB, where SOFB outperforms all other methods. The triangle markers denote Recall (Rec), Precision (Pre), F1 score, Matthews correlation coefficient (MCC), and Area Under the Curve (AUC) (n=5). Source data are provided with this paper.
7 Supplementary Note 7: The results of the Ablation Study
Supplementary Figure 6: (a) provides ablation experiments of SOFB tested on the nucleic-acid-binding test sets under different settings: from top to bottom, SOFB, setting (a), setting (b), setting (c), and setting (d), followed by setting (e), setting (f), and setting (g), where settings (a, b, c, d) and (e, f, g) are ablations of the structure and feature-matching modules, respectively. (b) shows the ROC curves of the ablation experiments on the DNA- and RNA-binding residue prediction tasks: from top to bottom, a (no Bi-LSTM), b (no Diff-k sizes), c (no Stack module), and d (no State), then e (both ProtT5), f (both NABert), and g (Exchange module), and SOFB, where settings (a, b, c, d) and (e, f, g) are ablations of the structure and feature-matching modules, respectively. (n=5) Source data are provided with this paper.
Table 3 :
Showing the values of Pre, Rec, F1, MCC, AUROC for combining different baseline methods with SOFB, where the results of SOFB outperforms all other methods.Source data are provided with this paper.
To comprehensively demonstrate the effectiveness of our proposed SOFB, we combined multiple performance metrics and depicted them in a violin plot (Supplementary Figure 4), where each data point (represented by a triangle) signifies a predictive metric for the respective model. It is evident that our SOFB exhibits superior overall performance compared to the other methods. Supplementary Table 3 is given for a detailed comparison. From Supplementary Table 4, we can observe that SOFB performs best compared with the other methods on all datasets. For instance, in DNA-binding prediction, none of the other methods achieved a performance exceeding 0.9, and both DRNApred and COACH-D fell short of 0.8. In contrast, our SOFB exhibited the highest prediction AUC, reaching a remarkable 0.949. This underscores the effectiveness of SOFB in recognizing amino acid binding patterns. Furthermore, in RNA-binding prediction, a decrease in performance was observed for all methods except SOFB. For example, DRNApred yielded a prediction AUC of only 0.467 on the MW15 dataset, while COACH-D achieved merely 0.579 on the same dataset. | 4,081 | 2024-06-03T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Effect of stress pulse shape on the dynamic fracture of soda-lime glass
The goal of this paper is to evaluate the tensile strength and fracture behavior of soda-lime glass under static and dynamic loading using Brazilian tests. The static tensile strength was evaluated using a universal testing machine; the fracture behavior under dynamic loading was studied using the Hopkinson Split Pressure Bar (HSPB) technique. The dynamic loading is realized using stress pulses of different shapes. In order to obtain the different loading stress pulses, strikers made from four different materials (steel, Teflon, beech, and polymer) were used. A high-speed camera was used in both the static and dynamic tests to obtain a detailed view of the failure process of the specimens. Experimental results showed that the dynamic tensile strength was at least three times higher than the static one. The initiation of dynamic fracture occurs when certain parameters of the loading stress pulse reach critical values. These parameters are independent of the stress pulse shape.
Introduction
Knowledge of the fracture properties of glass plays a significant role in planning its use in many practical applications where, for example, glass windows are especially vulnerable to shock and impact loading. This kind of loading arises in many accidents, such as terrorist bombing attacks, gas explosions, debris impact during windstorms [1] and many other events. This motivates the study of the failure behavior of glass and many other brittle materials (concrete, rocks, ceramics) under the high loading rates corresponding to the problems mentioned above.
The experimental evaluation of the strength behavior of brittle materials is very complicated due to their brittleness, high hardness, and very small strain to failure. This is the main reason why alternative experimental methods are used. One of the most popular is the Brazilian test, which was originally developed for the testing of concrete and rocks [2,3]. This method consists in loading a thin circular disk (specimen) by a compressive line load. This loading generates a tensile stress state inside the specimen, and the specimen fails perpendicularly to the diametric loading direction. The evaluation of the stress state in the specimen is straightforward during static loading under the assumption of purely elastic behavior of the specimen up to fracture. The method was extended to dynamic loading using the HSPB technique [4]. The evaluation of the results of the HSPB method is more complicated than for static loading. The main problems consist in minimizing inertial effects, in achieving dynamic force balance, and in deforming the specimen at a constant loading rate before failure [5,6]. These problems are mostly solved using the pulse shaper technique. The dynamic fracture of glass was studied in papers [7][8][9][10], where the main characteristics of this process were obtained.
In the given paper, the main attention was focused on the study of the effect of the pulse shape on the dynamic failure of glass. The shape of the loading (incident) stress pulse ranged from "half-sine" up to nearly trapezoidal with superimposed oscillations. A detailed analysis of the parameters of the loading stress pulse and response functions was performed, and a relation between these parameters and the specimen failure behavior was found.
Experimental details
Soda-lime glass was chosen as the test material. The main mechanical properties of this glass are listed in Table 1.
The velocities of the longitudinal wave, cL, and shear wave, cT, were measured ultrasonically by the pulse-echo method using a Physical Acoustics Corporation µDiSP system. Specimens in the form of cylinders, 14 mm in diameter and 7 mm in thickness, were prepared for both the static and dynamic Brazilian tests.
The Brazilian experiments at static loading were performed using a universal testing machine (INSTRON 5985). The specimens were compressed at a loading rate of 1 mm/min (1.6667×10⁻⁵ m/s).
The tensile strength is given by the relation σt = 2P/(πDt) (1), where P is the maximum loading force at which the specimen fails, D is the specimen diameter and t is its thickness.
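A minimal sketch of this evaluation in Python follows; the function name is ours, and the example load is chosen to match the average maximum force reported in the results section.

import math

def brazilian_tensile_strength(P: float, D: float, t: float) -> float:
    """Indirect tensile strength from a Brazilian (diametral compression) test.

    P : maximum load at failure [N]
    D : specimen diameter [m]
    t : specimen thickness [m]
    Returns sigma_t in Pa; sigma_t = 2P / (pi * D * t), Eq. (1).
    """
    return 2.0 * P / (math.pi * D * t)

# Example with the specimen geometry used in the paper (14 mm x 7 mm)
sigma = brazilian_tensile_strength(P=7132.0, D=0.014, t=0.007)
print(f"{sigma / 1e6:.2f} MPa")  # ~46.33 MPa, consistent with the reported average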
The dynamic Brazilian test was performed using the Hopkinson Split Pressure Bar (HSPB) system shown schematically in Fig. 1. This system consists of three main parts: a gas gun that accelerates the projectile (striker) to a given velocity; a system of two elastic bars (incident and transmitted); and the data acquisition system. After the impact of the striker on the end of the incident bar, a compressive stress pulse (the incident stress pulse), σI(t), develops. When this wave reaches the interface between the incident bar and the specimen, part of it is reflected back as the reflected stress pulse, σR(t), and part is transmitted to the second bar as the stress pulse σT(t).
The maximum value of the incident stress pulse is determined by the striking velocity of the striker (projectile), and its time duration increases with the striker length. These dependences differ for different striker materials. In our research the following strikers were used:
- a striker made from tool steel, 14 mm in diameter and 32 mm in length;
- a striker made from beech wood, 14 mm in diameter and 63 mm in length, with the striker axis oriented in the L (1) direction;
- a striker made from Teflon, 14 mm in diameter and 40 mm in length;
- a striker made from high-density polyethylene (HDPE), 15 mm in diameter and 50 mm in length.
Steel, Teflon and HDPE were considered isotropic materials; beech was considered an orthotropic material. The bars are made from tool steel, with a diameter of 15 mm and a length of 1000 mm. Strain gauges were located in the middle of the bars. The main properties of the isotropic materials are given in Table 2. The material density and wave velocities were determined experimentally using the technique mentioned in the previous section. The specimen loading rate is given as the difference of the velocities of the specimen-bar interfaces, v1 and v2 [9]: v = v1 − v2. Knowledge of the loading rate enables the evaluation of the specimen shortening (displacement) according to u(t) = ∫₀ᵗ (v1 − v2) dt. In order to obtain more information on the specimen behavior, the impacts themselves were monitored by high-speed photography using a PHOTRON FASTCAM SA-Z type 2100K-M camera (frame rate 210,000 fps, shutter speed 1.00 µs, resolution 384×160 pixels). All experiments were performed at room temperature.
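For illustration, a minimal sketch of a one-dimensional SHPB reduction implied by the relations above; the exact signal processing of the paper is not given, so the interface-velocity relations used here (v1 = c·(εI − εR), v2 = c·εT) are a standard one-wave assumption, not a quotation from the text.

import numpy as np

def specimen_response(eps_i, eps_r, eps_t, c_bar, dt):
    """One-dimensional (one-wave) SHPB reduction, a textbook assumption:
      v1 = c_bar * (eps_i - eps_r)  -- velocity of the incident-bar face
      v2 = c_bar * eps_t            -- velocity of the transmitted-bar face
    The loading rate is v1 - v2 and the specimen shortening is its time
    integral, as stated in the text."""
    eps_i, eps_r, eps_t = map(np.asarray, (eps_i, eps_r, eps_t))
    v1 = c_bar * (eps_i - eps_r)
    v2 = c_bar * eps_t
    loading_rate = v1 - v2
    displacement = np.cumsum(loading_rate) * dt  # u(t) = int (v1 - v2) dt
    return v1, v2, displacement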
Experimental results
Quasi-static loading was performed using the testing machine mentioned in the previous section. The glass specimens exhibited brittle behavior: the force F increased linearly with displacement up to specimen fracture. The maximum force, Fmax, is 7132 ± 768 N, the tensile strength evaluated using Eq. (1) is 46.33 ± 5.39 MPa, and the corresponding displacement is 0.393 ± 0.013 mm. These values represent averages over 15 measurements. The average value of the tensile strength is close to the value reported for annealed glass in [9].
Fig. 2 displays examples of the loading (input) stress pulses for different strikers and different striking velocities. It is obvious that the impact of different strikers leads to the development of stress pulses, σI, of different shapes. The maximum value (amplitude) of the stress pulses, σImax, increases with the striking velocity, V0. The experimental data can be fitted by the linear function σImax = A + B·V0 (5).
Fig. 2. Examples of the input stress pulses, σI, produced by different strikers.
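A quick sketch of the linear fit of Eq. (5); the velocity/stress pairs below are made-up placeholders, the real measurements being in the paper's figures and tables.

import numpy as np

# Hypothetical (striking velocity [m/s], peak incident stress [MPa]) pairs
v0 = np.array([10.0, 20.0, 30.0, 40.0])
sigma_max = np.array([55.0, 110.0, 160.0, 215.0])

B, A = np.polyfit(v0, sigma_max, deg=1)  # sigma_Imax = A + B * V0, Eq. (5)
print(f"sigma_Imax ~= {A:.1f} + {B:.2f} * V0  [MPa]")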
The dependence of the impulse, II, and energy, wI, on the striking velocity, V0, can also be fitted by this equation; the fit parameters are given in Table 3. With the steel striker, specimen damage occurs at all striking velocities. The use of the Teflon striker did not lead to specimen damage; increasing the striking velocity led to permanent deformation of the striker, so the maximum of the incident stress was limited. The parameters of the stress pulses σI corresponding to undamaged specimens are reported in Table 4. First, a crack initiated at the contact interface (a). From this crack, two wing-like cracks propagated from the surface of the specimen to the centre, running parallel to the faces of the cylindrical sample (b). Once these cracks reached the midpoint of the specimen, the two wing-like cracks merged and continued to propagate as a single crack, reaching the far end of the specimen (d,e). The increase of the loading leads to the growth of the specimen damage up to its splitting. It is evident that there is no force equilibrium: the stress in the specimen exhibits a significant gradient. These results show that the two main assumptions used in the dynamic tensile strength evaluation were not satisfied. One reason for this phenomenon may be the short pulse duration. Brazilian tests are usually performed using stress pulses of more than 100 µs duration [18,19]. Time durations above 100 µs can be expected if we use the other strikers, see Fig. 2.
This time duration is achieved with the polymer striker. Fracture of the specimen was observed at a polymer striker impact velocity of 90 m/s. Even when the input stress pulse duration is about 120 µs, as in many works (see e.g. [9,10]), no equilibrium in the specimen was achieved. The obtained results show that the use of Eq. (2) is problematic at best. We have three values of the stress maximum: the transmitted stress, σT, the average stress (σI + σR + σT)/2, and the input stress σI + σR. If we use these three values of the stress for the specimens where fracture occurred, we obtain values of the tensile strength at least three times higher than the values achieved at quasi-static loading. This increase seems unrealistic. In [9], where the conditions for the use of Eq. (2) were fully satisfied, it was found that the dynamic tensile strength is higher than the static one by a factor of 1.18, with loading rates in the interval 1-4 m/s. Because the evaluation of the tensile strength using the classical approach, Eq. (4), is questionable, we tried to find another quantity describing the fracture behavior. Analysis of the experimental data led to the conclusion that there are some parameters of the stress pulses which are independent of the stress pulse shape: the main parameters of the pulses depend on the stress pulse maximum σImax independently of the striker material. The values in Table 4 represent critical values for the fracture of the specimens. For example, the critical value of the stress pulse energy is 0.011163 MJ m⁻²; damage starts for an incident stress pulse with this energy.
Conclusions
Experimental research on the dynamic fracture of glass using the indirect tensile (Brazilian) test was performed. The dynamic fracture was studied using a classical Hopkinson split pressure bar with four different strikers. The loading stress pulses exhibit different stress-time histories, ranging from "half-sine" up to trapezoidal with superimposed oscillations, with durations from about 30 µs up to about 120 µs.
During all tests, no force equilibrium in the specimen was achieved. This fact was also documented by observation of the fracture behavior using a high-speed camera. There were no differences in the qualitative features of the damage growth in specimens loaded by stress pulses of different shapes. The use of the classical equation, Eq. (2), led to an unrealistic increase in the tensile strength in comparison with the static one.
Analysis of the results showed that fracture starts when the parameters of the input stress pulse reach certain critical values. These values are independent of the striker material properties. | 2,697.8 | 2021-01-01T00:00:00.000 | [
"Materials Science"
] |
An Effective Adversarial Attack on Person Re-Identification in Video Surveillance via Dispersion Reduction
Person re-identification across a network of cameras, with disjoint views, has been studied extensively due to its importance in wide-area video surveillance. This is a challenging task due to several reasons, including changes in illumination and target appearance, and variations in camera viewpoint and camera intrinsic parameters. The approaches developed to re-identify a person across different camera views need to address these challenges. More recently, neural network-based methods have been proposed to solve the person re-identification problem across different camera views, achieving state-of-the-art performance. In this paper, we present an effective and generalizable attack model that generates adversarial images of people, and results in a very significant drop in the performance of the existing state-of-the-art person re-identification models. The results demonstrate the extreme vulnerability of the existing models to adversarial examples, and draw attention to the potential security risks that might arise due to this in video surveillance. Our proposed attack is developed by decreasing the dispersion of the internal feature map of a neural network to degrade the performance of several different state-of-the-art person re-identification models. We also compare our proposed attack with other state-of-the-art attack models on different person re-identification approaches, and by using four different commonly used benchmark datasets. The experimental results show that our proposed attack outperforms the state-of-the-art attack models on the best performing person re-identification approaches by a large margin, and results in the largest drop in the mean average precision values.
I. INTRODUCTION
In order to continuously track targets across multiple cameras with disjoint views, it is essential to re-identify the same target across different cameras. However, this is a very challenging task due to several reasons including changes in illumination and target appearance, and variations in camera intrinsic parameters and viewpoint.
There has been great interest and significant progress in person re-identification (ReID) [1]-[6], which is important for security and wide-area surveillance applications as well as human-computer interaction systems. Fueled by the new models, including the neural network-based approaches, proposed in recent years, the performance of
person ReID approaches has improved significantly. For instance, the rank-1 accuracy of the state-of-the-art method on the Market 1501 dataset [7] is 94.8% [1], which has increased from 44.4% when the dataset was initially released in 2015.
In this paper, we demonstrate the effectiveness of an attack model in generating adversarial examples (AEs) for the person ReID application, attack multiple state-of-the-art person ReID models, and also compare the performance of the presented attack approach with other state-of-the-art attack models via an extensive set of experiments on various person ReID benchmark datasets. One of our goals is to demonstrate the extreme vulnerability of multiple state-of-the-art person ReID approaches to this attack, and to draw the attention of the research community to the existing security risks. In person ReID, the paired probe and gallery images are expected to have high similarity. However, by adding human-imperceptible perturbations to the probe images, the models are easily fooled even when the probe images appear the same as the original images. Adversarial examples [8]-[10] have been extensively investigated recently in image classification [10], [11], object detection [12]-[14] and semantic segmentation [12], [15], etc. However, relatively less attention has been paid to the robustness of person ReID models. Bai et al. [16] proposed an adversarial metric attack, which targets fooling the distance metrics in person ReID systems. An early attempt at defense shows that a metric-preserving network can be applied to defend against such an attack. Zheng et al. [17] propose the Opposite-Direction Feature Attack (ODFA) to generate adversarial examples/queries for retrieval tasks such as person ReID. The idea is to push the feature of the adversarial query in the opposite direction of the original feature.
In this paper, we present and employ an effective approach to generate adversarial examples targeting person ReID methods. Our approach [18] is referred to as Dispersion Reduction (DR), and it is a black-box attack. The main idea behind our approach is reducing the "contrast" of an internal feature map of a neural network. The intuition is that, just as reducing the contrast of an image would make the objects less recognizable or distinguishable, reducing the contrast of an internal feature map would have a similar effect on the recognizability of objects by the neural network. In our previous work [18], we showed the transferability of the DR attack across different tasks, including object detection, classification and text recognition. The contributions of this work include the following: we adapt the DR attack for the person ReID problem, and perform an extensive comparison and evaluation on different state-of-the-art methods and multiple benchmarks. In addition, we compare the performances of multiple attack methods. We show that making a feature map "featureless", through dispersion reduction, is very well suited to fool any state-of-the-art ReID model. Moreover, we use different network models (different from the model used by the victim ReID networks) as the source model to generate the adversarial examples and show the effectiveness and generalizability of our attack approach. We also analyze the effect of the perturbation budget on the attack performance.
The rest of this paper is organized as follows: the related works on both person ReID and attack models are summarized in Section II. The proposed dispersion reduction-based attack approach and the methodology are described in Section III. The experimental results are presented in Section IV, and the paper is concluded in Section V.
A. PERSON ReID METHODS
Various person re-identification (ReID) approaches have been proposed in the past [19], which can be classified into different categories. There have been methods based on distance learning [20]- [26], on feature design and selection [27]- [33], and on mid-level feature learning [34]- [38].
Many works relied on color transformation and statistical models for person re-identification. Cheng and Piccardi [39] applied a cumulative color histogram transformation and employed an incremental major color spectrum histogram representation. Trajectory matching, height estimation and illumination-tolerant color representation were used by Madden and Piccardi [40]. Chae and Jo [41] employed a Gaussian Mixture Model (GMM) for the segmented regions of a person, and used a ratio of the GMMs to identify the same person. The Brightness Transfer Function (BTF) and its variants have been introduced to improve the matching performance. Porikli [42] proposed the BTF for inter-camera color calibration. Later, Javed et al. [43] and Prosser et al. [44] proposed the Mean Brightness Transfer Function (mBTF) and the Cumulative Brightness Transfer Function (cBTF), respectively. Datta et al. [45] presented the Weighted BTF (wBTF), and Bhuiyan et al. [46] presented the Minimum Multiple Brightness Transfer Function (Min-MCBTF) to model the appearance variation by using a learning approach. However, it was assumed that multiple consecutive images are available for training, which is not the case for the commonly used benchmark datasets.
Researchers then focused on combining the features and distance metrics at the same time. Liao et al. [47] proposed Local Maximal Occurrence (LOMO) and a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA) for person ReID. Chen et al. [48] formulated a new view-specific person ReID framework, referred to as camera correlation aware feature augmentation (CRAFT). In this framework, cross-view feature adaptation is performed by measuring cross-view correlation from visual data distribution and carrying out adaptive feature augmentation. Matsukawa et al. [49] proposed the hierarchical Gaussian Of Gaussian (GOG) descriptor, which generates discriminative and robust features that describe color and textural information simultaneously. An image is first divided into horizontal strips. Then, local patches in the strips are modeled using a Gaussian distribution. Köstinger et al. [50] proposed the KISSME, which is a statistical inference perspective to address the problem of metric learning.
More recent works employ neural networks and achieve state-of-the-art performance in person ReID. Zheng et al. [1] proposed DG-Net, which encompasses a generative module that separately encodes a specific person into appearance and structure codes. It also integrates a discriminative module that shares the appearance encoder with the generative module. As a result, the high-quality cross-id composed images are fed back to the appearance encoder online and used to improve the discriminative module. Zhang et al. [3] proposed AlignedReID, which performs automatic part alignment during learning, without requiring extra supervision or pose estimation. By learning jointly on global and local features, it aims to address existing drawbacks. Xie et al. [51] proposed PLR-OSNet, which introduces part-level feature resolution (PLR) into the Omni-Scale Network (OSNet) [52]. It has two branches carrying global and local feature representations: the global branch adopts a global-max-pooling layer, while the local branch employs a part-level feature resolution scheme producing only a single ID-prediction loss, in contrast to existing part-based methods.
B. ADVERSARIAL ATTACK METHODS
Szegedy et al. [9] introduced adversarial images, which can fool Convolutional Neural Network (CNN)-based models and cause misclassification by adding small perturbations to the original images. In one of the earlier works, Goodfellow et al. [53] proposed the fast gradient sign method (FGSM), which generates AEs in one step. Several works extended this by iteratively updating the AEs with multi-step attacks, including the basic iterative method (BIM) [10], DeepFool [54], the momentum iterative method [11], the Diverse Inputs Method (DIM) [55] and Translation-Invariant (TI) attacks [56]. Compared with FGSM, the iterative methods generate a smaller perturbation, which makes the adversarial examples even more imperceptible to the human eye.
The transferability property of adversarial examples motivated research on black-box adversarial attacks. To perform black-box attacks, methods have been introduced [57], [58], which employ a substitute model that is trained to mimic the target model. Gradient-free attacks use feedback on query data, i.e., soft predictions [59], [60] or hard labels [61]. However, these aforementioned approaches require feedback from the target model, which is not practical in some scenarios. More recently, several methods have been proposed, which study the attack generation process itself. In general, an iterative attack [8], [62], [63] achieves a higher attack success rate than a single-step attack [53] in a white-box setting, but performs worse when transferred to other models. Below, we will summarize some of these attack methods.
1) GRADIENT-BASED ADVERSARIAL ATTACK METHODS
The Fast Gradient Sign Method (FGSM) [53] generates the adversarial example x_adv by linearizing the loss function in the input space and performing the one-step update

x_adv = x_real + ε · sign(∇_x J(x_real, y)),

where ∇_x J(x_real, y) is the gradient of the loss function w.r.t. x, and sign(·) is the sign function that constrains the perturbation within the L∞ norm bound. FGSM can generate more transferable adversarial examples; however, it may not be as effective in white-box attacks [10]. The Basic Iterative Method (BIM) [10] extends FGSM by updating the gradient in a multi-step manner with a small step size α:

x_adv_{t+1} = x_adv_t + α · sign(∇_x J(x_adv_t, y)),

where x_adv_0 = x_real. BIM clips x_adv_t after each update, or sets α = ε/T, with T being the number of iterations, to ensure that the adversarial examples remain in an ε-neighbourhood of the real image.
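For concreteness, a minimal PyTorch sketch of the two updates above (an illustration, not the authors' code); model and loss_fn stand for any differentiable classifier and loss.

import torch

def fgsm(model, loss_fn, x, y, eps):
    """One-step FGSM: x_adv = x + eps * sign(grad_x J(x, y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def bim(model, loss_fn, x, y, eps, steps):
    """Iterative FGSM (BIM) with alpha = eps / T and clipping to the
    L-infinity ball of radius eps around the clean image."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = torch.clamp(x_adv + alpha * x_adv.grad.sign(), x - eps, x + eps)
        x_adv = x_adv.detach()
    return x_adv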
The Momentum Iterative Fast Gradient Sign Method (MI-FGSM) [11] integrates a momentum term into the iterative attack process. The update procedure is

g_{t+1} = µ · g_t + ∇_x J(x_adv_t, y) / ||∇_x J(x_adv_t, y)||_1,
x_adv_{t+1} = x_adv_t + α · sign(g_{t+1}),

where g_t collects the gradient information up to the t-th iteration, and µ is the decay factor. The Diverse Inputs Method (DIM) [55] applies random and differentiable transformations to the input images with probability p and maximizes the loss function with respect to these transformed inputs. The transformed images are fed into the classifier for gradient calculation. Such transformations include random resizing and padding applied with a given probability p. This method can be combined with the momentum-based method to further improve transferability.
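A corresponding sketch of the MI-FGSM momentum update, again illustrative rather than a reference implementation:

import torch

def mi_fgsm(model, loss_fn, x, y, eps, steps, mu=1.0):
    """MI-FGSM sketch: accumulate L1-normalized gradients in a momentum
    buffer g, then step along sign(g); alpha = eps / T as in the text."""
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            g = mu * g + grad / grad.abs().sum()   # gradient / ||gradient||_1
            x_adv = torch.clamp(x_adv + alpha * g.sign(), x - eps, x + eps)
        x_adv = x_adv.detach()
    return x_adv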
2) TRANSLATION-INVARIANT ATTACK METHODS
Translation-Invariant (TI) attack methods have been proposed by Dong et al. [56] to further improve transferability from white-box models. The authors notice the difference between the discriminative regions used by defended models and those used by normally trained models to identify object categories. Rather than optimizing the objective function at a single point, the TI attack method uses a set of translated images to optimize the adversarial example:

arg max_{x_adv} Σ_{i,j} w_ij · J(T_ij(x_adv), y), s.t. ||x_adv − x_real||_∞ ≤ ε,

where T_ij(x) is the translation operation that shifts image x by i and j pixels along the two dimensions, respectively, and w_ij is the weight for the loss J(T_ij(x_adv), y).
Note that TI can be integrated into any gradient-based attack such as FGSM or DIM. For example, the translation-invariant fast gradient sign method (TI-FGSM) updates as

x_adv = x_real + ε · sign(W ∗ ∇_x J(x_real, y)),

where W is a predefined kernel convolved with the gradient, approximating the weighted sum over translated images. The translation-invariant diverse inputs method (TI-DIM) can be obtained in a similar way.
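The translation-invariant trick is commonly implemented by convolving the gradient with a fixed kernel before taking the sign; a sketch follows, where the Gaussian kernel size and width are illustrative choices, not values from the paper.

import torch
import torch.nn.functional as F

def ti_smooth(grad, kernel_size=15, sigma=3.0):
    """Convolve the input gradient with a Gaussian kernel W (depthwise),
    approximating the weighted sum over translated inputs used by TI."""
    ax = torch.arange(kernel_size, dtype=torch.float32) - kernel_size // 2
    g1d = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k2d = torch.outer(g1d, g1d)
    k2d = k2d / k2d.sum()
    c = grad.shape[1]                                         # number of channels
    weight = k2d[None, None].expand(c, 1, -1, -1).contiguous()  # one kernel per channel
    return F.conv2d(grad, weight, padding=kernel_size // 2, groups=c)

# TI-FGSM then becomes: x_adv = x_real + eps * ti_smooth(grad).sign()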
III. PROPOSED APPROACH
In this section, we will describe the dispersion reduction-based attack on the person ReID application.
A. NOTATION
We use x_real to denote the original query image, and f(·) to denote a deep neural network classifier. The output feature map at layer k is denoted by F, where F = f(x_real)|_k at the first time step. For each subsequent step, we calculate the dispersion, denoted g(·), and the gradient of the dispersion, ∇_{x_real} g(F_k), to update the adversarial example x_adv. More details are provided in the following section.
B. DISPERSION REDUCTION
For person ReID, the existing models are trained on various benchmark datasets, which have different labeling schemes. Thus, compared to the image classification problem, person ReID is more complicated. More specifically, treating and attacking the person ReID models as black boxes requires an approach that is highly transferable and effective at attacking different training datasets and model architectures. The aforementioned existing black-box attacks, however, use a pre-trained model as a surrogate, which shares the same training dataset and the same labeling scheme with the targeted models. Moreover, most existing attack methods rely on task-specific loss functions, which greatly limits their transferability across tasks and different network models.
In our previous work [18], we showed that Dispersion Reduction (DR) has good transferability properties and is successful in cross-task attack scenarios. DR employs a publicly available classification network as the surrogate source model, and attacks models that are used in different computer vision tasks, such as object detection, semantic segmentation and cloud API applications. DR is a black-box attack. Conventional black-box attacks establish a source model as the surrogate, for which the inputs are paired with the labels generated from the target model instead of the ground-truth labels. In this way, the source model mimics the behavior of the target model. Our proposed DR attack, on the other hand, does not rely on the labeling system or a task-specific loss function, since DR only accesses the front part of the model, up to the attacked layer. Although a source model is still required, there is no need for training with new target models or querying the target model for labels. Instead, a pre-trained public model can simply serve as the source model, owing to the strong transferability of the proposed DR attack. As shown in Fig. 1, the DR attack reduces the contrast of an internal feature map, by reducing its dispersion, so that the information in the feature map becomes indistinguishable and the following layers are not able to extract any useful information, regardless of the computer vision task at hand. The adversarial example, shown in the second column of Fig. 1, was generated by attacking (reducing the dispersion of) the conv3-3 layer of the VGG16 surrogate model. This also results in the distortion of the feature maps of the subsequent layers (e.g. conv5-3). As can be seen, compared to the feature maps of the original image, the standard deviations of the feature maps for the adversarial image are lower after the attacked layer.
Moreover, we have analyzed the effect of attacking different convolutional layers of the VGG16 network with the proposed DR attack on the PASCAL VOC2012 validation set [18]. Fig. 2a shows the mAP values for Yolov3 and Faster RCNN, and the mIoU for Deeplabv3 and FCN. Fig. 2b plots the standard deviation values before and after the DR attack, together with the change. As can be seen, attacking the middle layers of VGG16 results in a higher drop in performance compared to attacking the top or bottom layers. At the same time, the change in the standard deviation for the middle layers is larger compared to the top and bottom layers. We can infer that for the initial layers, the budget constrains how much the loss function can reduce the standard deviation, while for the layers near the output, the standard deviation is already relatively small and cannot be reduced much further. Based on this observation, we choose one of the middle layers as the target of the DR attack. More specifically, in our experiments, we attack conv3-3 for VGG16, the last layer of group-A for Inception-v3, and the last layer of the 2nd group of bottlenecks (conv3-8-3) for ResNet-152.
The DR attack is defined as the following optimization problem:

min_{x_adv} g(f(x_adv; θ)|_k), s.t. ||x_adv − x_real||_∞ ≤ ε,

where f(·) is a deep neural network classifier, θ denotes the network parameters, and g(·) computes the dispersion. As shown in Alg. 1, we use the standard deviation as the dispersion metric g(·) due to its simplicity. Given any feature map, DR iteratively adds perturbation to x_real along the direction of decreasing standard deviation, and maps the result back to the vicinity of x_real by clipping at x ± ε. Denoting the feature map at layer k as F = f(x_adv_t)|_k, the DR attack solves

min_{x_adv} g(F), s.t. ||x_adv − x_real||_∞ ≤ ε.

The code is provided in [65].
Algorithm 1: Dispersion Reduction Attack
Input: classifier f, real image x_real, feature-map layer k, perturbation budget ε, iterations T and learning rate l
Output: adversarial example x_adv, s.t. ||x_adv − x_real||_∞ ≤ ε
1: procedure DispersionReduction
2:   x_adv_0 ← x_real
3:   for t = 0 to T−1 do
4:     F_k ← f(x_adv_t)|_k
5:     Compute the dispersion g(F_k) (standard deviation)
6:     Compute the gradient ∇_{x_real} g(F_k)
7:     Update x_adv by:
8:       x_adv_t ← x_adv_t − Adam(∇_{x_real} g(F_k), l)
9:     Project x_adv_t onto the ε-neighbourhood of x_real
10:  end for
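A compact PyTorch sketch of Algorithm 1 (our paraphrase, not the released code in [65]); feature_extractor is assumed to be the surrogate network truncated at the attacked layer, and expressing eps in normalized pixel units is an assumption (the paper quotes ε = 4 on the [0, 255] scale).

import torch

def dispersion_reduction(feature_extractor, x_real, eps=4.0 / 255, steps=100, lr=0.05):
    """Minimize the standard deviation of the feature map at the attacked
    layer; Adam is used as the optimizer, as in Algorithm 1."""
    delta = torch.zeros_like(x_real, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        fmap = feature_extractor(x_real + delta)
        loss = fmap.std()            # dispersion g(F_k); minimizing flattens the map
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # project back into the eps-neighbourhood
    return (x_real + delta).detach()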
C. VICTIM ReID MODELS AND IMPLEMENTATION DETAILS OF ATTACKS
In order to evaluate the effectiveness of our proposed adversarial DR attack, we adapt it for the person ReID problem, and attack three different state-of-the-art person ReID approaches, namely DG-Net [1], AlignedReID [3] and PLR-OSNet [51]. For person re-identification, both DG-Net and AlignedReID use ResNet-50 [66] as the backbone model, while PLR-OSNet employs the Omni-Scale Network as the backbone. DG-Net reaches 94.8% and 86.0% on the rank-1 accuracy and mean average precision (mAP), respectively, on the Market-1501 dataset [7]. AlignedReID achieves 92.6% and 82.3% [67] on the rank-1 accuracy and mAP, respectively, and PLR-OSNet achieves 95.6% and 88.9% on the rank-1 accuracy and mAP, respectively, on the Market-1501 dataset.
We used the pre-trained models for these ReID approaches, provided by the authors on their GitHub pages [68]-[70]. During training, the images are resized to 256×128, which is a strong baseline that can achieve higher accuracy. We reduce the mini-batch size from 16 to 4 to save GPU memory on all models and all datasets. The learning rate for DG-Net, AlignedReID and PLR-OSNet is 0.0001, 0.0002 and 0.0003, respectively. All models use a decay rate of γ = 0.1, which reduces the learning rate by a factor of 1/10 after T steps during training. For DG-Net, T is set to 60000. For AlignedReID and PLR-OSNet, T is set to and 20, respectively. More implementation details can be found in the source code provided by the authors [68]-[70].
For each dataset, the images are separated into training and testing folders. We follow the data preparation process described in [68], [69]. After pre-processing, we apply the TI-FGSM and TI-DIM attacks as described in [56], and detailed in the source code on Github page [71].
For our dispersion reduction (DR) attack, we first used the pre-trained ResNet-152 as the source model. The values of the parameters listed in Algorithm 1 are as follows: ε = 4, l (learning rate) = 0.05, T = 100. The adversarial examples are generated on the test images, and used for testing on the victim ReID models. As mentioned above, both DG-Net and AlignedReID use ResNet-50 [66] as the backbone model. Thus, in order to generate the adversarial examples with different surrogate models, we have also used VGG-16 and Inception-v3 as our source models. As discussed above, we used conv3-3 for VGG16, the last layer of group-A for Inception-v3, and the last layer of the 2nd group of bottlenecks (conv3-8-3) for ResNet-152 as the attacked layers. We also analyzed the effects of using different ε values, and a detailed discussion is provided in the following section.
IV. EXPERIMENTS, RESULTS AND DISCUSSION
As mentioned above, we have used three state-of-the-art ReID methods as victim models, attacked them with the proposed DR attack, and evaluated the performance drop on four different datasets. Moreover, we attacked the same victim models with two other state-of-the-art attack approaches, namely TI-FGSM and TI-DIM [56], [71]. We compared the effectiveness of our DR attack with these other attack methods as well. Moreover, we have used three different network models as the surrogate source model to evaluate and compare the performance drop and the attack effectiveness.
A. DATASETS
We have employed four challenging and commonly used benchmark datasets to demonstrate the effectiveness of the proposed attack. These datasets are Market-1501 [7], CUHK03 [37], DukeMTMC-ReID [72] and MSMT17 [73], which are briefly described below.
1) MARKET-1501
The Market-1501 [7] dataset contains 32,217 images of 1501 labeled persons from six camera views. There are 751 identities in the training set and 750 identities in the testing set. In the original study proposing this dataset, mAP is the evaluation criterion used to compare algorithm performances.
2) CUHK03
The CUHK03 [37] dataset contains 8765 images of 1467 labeled persons. In this paper, we use the new protocol, in which the training set and test set have 767 and 700 identities, respectively. We select the detected bounding boxes instead of the labeled bounding-box results, which is a more difficult evaluation protocol for CUHK03.
3) DukeMTMC-ReID
The DukeMTMC-ReID [72] dataset is composed of 36,411 images of 1812 persons captured by eight cameras. There are 702 identities in the training set and 1110 identities in the testing set. The evaluation criterion is mAP, the same as for the Market-1501 dataset.
4) MSMT17
MSMT17 [73] is the largest image-based person ReID dataset, introduced in 2018. It contains 124,069 labeled images of 4101 person IDs captured from 12 different outdoor or indoor cameras. The evaluation protocol is also the same as for the Market-1501 dataset and uses mAP.
B. EVALUATION METRIC
With the same image perturbation (ε = 4), we compare the performances of all the attack methods while attacking the victim ReID approaches. A lower number indicates a larger drop in ReID accuracy, and thus better attack performance. Mean average precision (mAP) is used as the evaluation metric. The effects of using different ε values are discussed in Section IV-D.
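For reference, a minimal sketch of how mAP is computed for a retrieval task such as ReID; real benchmark toolkits add dataset-specific conventions (e.g. junk-image filtering) that are omitted here.

import numpy as np

def average_precision(ranked_relevance):
    """AP for one query: ranked_relevance is a 0/1 array over the gallery,
    ordered by descending similarity to the query."""
    rel = np.asarray(ranked_relevance)
    if rel.sum() == 0:
        return 0.0
    hits = np.cumsum(rel)
    precision_at_k = hits / (np.arange(len(rel)) + 1)
    return float((precision_at_k * rel).sum() / rel.sum())

# mAP is the mean of AP over all probe images
queries = [[1, 0, 1, 0, 0], [0, 1, 0, 0, 1]]
print(np.mean([average_precision(q) for q in queries]))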
C. RESULTS AND DISCUSSION
In the first set of experiments, we used ResNet-152 as the source model for the attacks. The results are summarized in Table 1, wherein the first three rows show the mAP values for the baseline victim models, namely DG-Net, AlignedReID and PLR-OSNet, on the different benchmark datasets. The mAP values for DG-Net are 86.0%, 61.1%, 74.8% and 52.3%, and the mAP values for AlignedReID are 82.3%, 70.7%, 82.8% and 43.7% for the Market-1501, CUHK03, DukeMTMC-ReID and MSMT17 datasets, respectively. For PLR-OSNet, the mAP values are 88.9%, 77.2% and 81.2% for the Market-1501, CUHK03 and DukeMTMC-ReID datasets, respectively. These three models are regarded as state-of-the-art ReID approaches based on their performance. The fourth to sixth rows in Table 1 show the mAP values after the victim models are attacked with TI-FGSM, which is a state-of-the-art attack method. The next three rows of Table 1 show the mAP values after the victim models are attacked with TI-DIM, another state-of-the-art attack method. Compared to TI-FGSM, this attack is more effective, since it causes larger drops in the mAP values for all four datasets. For instance, for the CUHK03 dataset, the mAP value of DG-Net drops by 46.9 from 61.1 to 14.2, the mAP value of AlignedReID drops by 54.2 from 70.7 to 16.5, and the mAP value of PLR-OSNet drops by 58.1 from 77.2 to 19.1. The last three rows of Table 1 show the mAP values after the victim models are attacked with the proposed DR approach. As can be seen, our proposed approach is the most effective attack compared to TI-FGSM and TI-DIM, and causes the largest drop in the mAP values for all victim models and all four datasets. For instance, for the CUHK03 dataset, the mAP value of DG-Net drops by 53.3 from 61.1 to 7.8, the mAP value of AlignedReID drops by 62.4 from 70.7 to only 8.3, and the mAP value of PLR-OSNet drops by 67.7 from 77.2 to only 9.5. Fig. 3 shows some example images and query results for the Market-1501 dataset. The first column shows the query images, and columns 2 through 11 show the Rank 1 to Rank 10 returned images for that query, respectively. The first and third rows are for the original query images, while the second and fourth rows are for the adversarial query images. The perturbations between the query images of the first versus second row and third versus fourth row are imperceptible to the human eye, but the person ReID performance has been significantly impacted by the proposed attack. Similar results for the CUHK03 and DukeMTMC-ReID datasets are shown in Fig. 4 and Fig. 5, respectively. We report the overall results for the MSMT17 dataset in Table 1, and are not able to provide example images due to the release agreement.
The examples in Figures 3, 4 and 5 show the effectiveness of the proposed DR attack. In these figures, the adversarial examples, although imperceptible to the human eye, result in no matches even among the Rank 10 returns. As a quantitative measure, we computed the peak signal-to-noise ratio (PSNR) as well as the structural similarity index measure (SSIM) between the adversarial images (generated by TI-FGSM, TI-DIM and the proposed DR attack) and the original images, and calculated the average over the Market-1501 dataset. The average SSIM value is 0.70, 0.72 and 0.72 for the TI-FGSM, TI-DIM and DR attacks, respectively. The average PSNR is 26, 28 and 27 for the TI-FGSM, TI-DIM and DR attacks, respectively. Since the perturbation budget is kept the same (ε = 4) for all the attack methods, their average SSIM and PSNR values are similar. Some example adversarial images generated by these attacks are shown in Fig. 6 for qualitative comparison.
In the second set of experiments, we used two other network models, namely VGG-16 and Inception-v3, as our surrogate source models. The goal here was to use different network models, other than ResNet, to generate adversarial examples and show the generalizability of the proposed DR approach. We generated AEs with these networks as the source models for both the proposed DR approach and TI-DIM, and then used the AEs to attack DG-Net and AlignedReID. In this experiment, we chose TI-DIM, since it has better attack performance than TI-FGSM based on Table 1. The results obtained with our proposed DR attack are summarized in Tables 2 and 3 for the cases where the victim ReID method is AlignedReID and DG-Net, respectively. As can be seen, using ResNet-152 as the surrogate model results in the highest drop in the mAP values. This is mostly because most of the ReID approaches use ResNet as their backbone network. However, even when we use VGG-16 or Inception-v3 as the surrogate source model, the proposed DR attack still causes a significantly larger drop in the mAP values compared to the state-of-the-art attack (see Tables 1, 2 and 3).
The results obtained with TI-DIM are summarized in Tables 4 and 5, which show the results of the TI-DIM attack with different surrogate models, when the victim ReID method is AlignedReID and DG-Net, respectively. When we compare Table 2 with Table 4, and Table 3 with Table 5, it can be seen that the proposed DR attack still outperforms TI-DIM as a black-box attack, even when the surrogate model is different from the target model.
TABLE 3. mAP values on different datasets when DG-Net is attacked with the proposed DR approach. The first row is the performance before the attack. The last three rows show the results when AEs are generated by using different network models as the surrogate models.
D. EFFECT OF ε ON THE PERFORMANCE
In the literature, it is common practice to fix the value of ε and then compare the performance degradation of different attack methods. In the experiments above, we set ε = 4, since it results in less change to the original image and better demonstrates the differences between the attack methods. When ε is increased, more budget is given to each attack method to make changes to the original images, and they start to provide similar performance. A better attack should be able to cause more performance degradation with a smaller budget. As shown in Table 6 and Fig. 7, our proposed DR attack can reach a given attack effectiveness by using the least budget. For instance, the proposed DR attack drops the mAP value of DG-Net to 20.3 with an ε budget of 8, whereas TI-DIM needs a budget of 12 to drop the mAP to 21.8.
TABLE 4. mAP values on different datasets when AlignedReID is attacked with TI-DIM. The first row is the performance before the attack. The last three rows show the results when AEs are generated by using different network models as the surrogate models.
V. CONCLUSION
Neural network-based methods have achieved state-of-the-art performance on the person re-identification problem across different camera views. In this paper, we have presented an effective black-box attack model, which is based on dispersion reduction and does not rely on task-specific loss functions or label queries. We have used the adversarial examples generated by this approach to attack three different state-of-the-art person ReID models. We have also compared the performance of our attack approach with two other state-of-the-art attack models. The results demonstrate the effectiveness and generalizability of the proposed dispersion reduction attack on three state-of-the-art person ReID models. It also outperforms the other state-of-the-art attack models by a large margin, and results in the largest drop in the mean average precision values.
ACKNOWLEDGMENT
The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. | 7,221.2 | 2020-09-14T00:00:00.000 | [
"Computer Science"
] |
Search for massive long-lived particles decaying semileptonically at √s = 13 TeV
A search is performed for massive long-lived particles (LLPs) decaying semileptonically into a muon and two quarks. Two kinds of LLP production processes were considered. In the first, a Higgs-like boson with mass from 30 to 200 GeV/c² is produced by gluon fusion and decays into two LLPs. The analysis covers LLP mass values from 10 GeV/c² up to about one half the Higgs-like boson mass. The second LLP production mode is directly from quark interactions, with LLP masses from 10 to 90 GeV/c². The LLP lifetimes considered range from 5 to 200 ps. This study uses LHCb data collected from proton-proton collisions at √s = 13 TeV, corresponding to an integrated luminosity of 5.4 fb⁻¹. No evidence of these long-lived states has been observed, and upper limits on the production cross-section times branching ratio have been set for each model considered.
Introduction
Supersymmetry (SUSY) is one of the most popular extensions of the Standard Model (SM): it can solve the hierarchy problem, can unify the gauge couplings at the Planck scale and proposes dark matter candidates. The minimal supersymmetric extension of the Standard Model (MSSM) is the simplest phenomenologically viable realisation of SUSY [1,2]. The present study addresses a subset of models featuring massive long-lived particles (LLPs) with a measurable flight distance [3,4], decaying semileptonically. Long-lived particles decaying semileptonically with displaced jets composed of SM particles have been studied by the experiments at the LHC [5][6][7][8][9]. Additional information on searches for LLPs at collider experiments can be found in Refs. [10][11][12].
This analysis uses proton-proton (pp) collision data at a centre-of-mass energy √s = 13 TeV collected by the LHCb experiment at the LHC, corresponding to a total integrated luminosity of 5.4 fb⁻¹. It extends the analysis of Ref. [9] on data collected at √s = 7 and 8 TeV. The adopted theoretical framework is inspired by minimal SUper GRAvity (mSUGRA) with R-parity violation (RPV) [13], in which the neutralino can decay into a muon and two quarks: χ̃⁰₁ → μ⁺q̄ᵢqⱼ (μ⁻qᵢq̄ⱼ). Neutralinos can be produced by a variety of processes. In this paper the analysis has been performed assuming the two mechanisms depicted in Fig. 1. In the first process, a Higgs-like particle, h⁰, is produced by gluon fusion and decays into two LLPs. The analysis covers h⁰ masses from 30 to 200 GeV/c², LLP lifetimes from 5 to 200 ps, and LLP mass values from 10 GeV/c² up to about one half the h⁰ mass. The second mode is direct LLP production from quark interactions. The LLP lifetime range considered is from 5 to 200 ps and the mass range from 10 to 90 GeV/c². The LLP lifetime range begins at 5 ps, well above the typical b-hadron lifetime, and extends up to 200 ps, where most of the vertices are still within the LHCb vertex locator (VELO). The mass range avoids the region of the SM b-quark states, but also takes into account the forward acceptance of the LHCb detector, within which the decay products of relatively light LLPs can be efficiently detected.
The LLP signature is a displaced vertex made of charged-particle tracks, accompanied by an isolated muon with high transverse momentum with respect to the proton beam direction, p_T. This study benefits from the excellent vertex reconstruction provided by the VELO and from the low muon-trigger p_T threshold, compared to the other LHC experiments. In addition, the LHCb experiment probes a rapidity region only partially accessible to the other LHC experiments. These properties allow the LHCb experiment to be complementary to similar analyses performed by the two central detectors at the LHC, and even to explore regions of the theoretical parameter space where those experiments are limited by their low efficiency to reconstruct highly boosted LLPs.
Detector description and simulation
Fig. 1: LLP production processes considered in this paper, where the χ̃⁰₁ represents the LLP: (a) di-LLP production via a scalar particle h⁰; (b) non-resonant, direct LLP production from quark interactions, where X is a stable particle with mass identical to the LLP. The LLP decays into a muon and two quarks.
The LHCb detector [14,15] is a single-arm forward spectrometer covering the pseudorapidity range 2 < η < 5, designed for the study of particles containing b or c quarks. The detector includes a high-precision tracking system consisting of the VELO, a silicon-strip detector surrounding the pp interaction region [16], a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about 4 Tm, and three stations of silicon-strip detectors and straw drift tubes [17,18] placed downstream of the magnet. The tracking system provides a measurement of the momentum, p, of charged particles with a relative uncertainty that varies from 0.5% at low momentum to 1.0% at 200 GeV/c. The minimum distance of a track to a primary pp collision vertex (PV), the impact parameter, is measured with a resolution of (15 + 29/p_T) µm, where p_T is in GeV/c. Different types of charged hadrons are distinguished using information from two ring-imaging Cherenkov detectors [19]. Photons, electrons and hadrons are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic (ECAL) and a hadronic (HCAL) calorimeter [20]. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers [21]. The online event selection is performed by a trigger [22], which consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction. During data taking, an alignment and calibration of the detector is performed in near real-time and used in the software trigger [23]. The same alignment and calibration information is propagated to the offline reconstruction.
Simulation is used to model the effects of the detector acceptance and the imposed selection requirements. In the simulation, pp collisions are generated using Pythia 8 [24,25] with a specific LHCb configuration [26] and with parton density functions taken from CTEQ6L [27]. The interaction of the generated particles with the detector, and its response, are implemented using the Geant4 toolkit [28,29] as described in Ref. [30]. The simulation includes pileup events with an average of 1.1 pp visible interactions per bunch crossing.
Several sets of signal events have been produced assuming the processes illustrated in Fig. 1, where the χ̃⁰₁ plays the role of the long-lived particle. For the first process, two χ̃⁰₁ particles are obtained from the decay of the Higgs-like boson produced by gluon fusion, gg → h⁰ → χ̃⁰₁χ̃⁰₁. For the second process, the LLP is produced in a non-resonant mode, qq → χ̃⁰₁X. Here X is a stable neutral particle with the same mass as the χ̃⁰₁ state. This production of an LLP in association with a stable particle X enables probing the sensitivity to this topology, with the signal LLP recoiling against such a particle.
The LLP decays into a muon and two quarks; the branching ratio of χ̃⁰₁ → μ⁺q̄ᵢqⱼ (μ⁻qᵢq̄ⱼ) is set to be equal for each quark combination (qᵢ = u, c and qⱼ = d, s, b), with an equal proportion of μ⁺ and μ⁻.
In the following, the model name is indicated by the values of m_h⁰, m_χ̃⁰₁ and τ_χ̃⁰₁; h125-chi40-10ps, for example, corresponds to m_h⁰ = 125 GeV/c², m_χ̃⁰₁ = 40 GeV/c², τ_χ̃⁰₁ = 10 ps. For direct production, the Higgs-like boson mass is omitted from this notation, as for example in chi30-10ps.
The most relevant background in this analysis is from events containing heavy quarks. The background from heavy quarks directly produced in pp collisions, as well as from W, Z, Higgs boson and top quark decays, is studied using the simulation. The simulation of inclusive bb̄ and cc̄ events is not efficient enough to produce a sufficiently large sample covering the relevant high-p_T muon kinematic region. Hence, dedicated samples of 20 × 10⁶ (1 × 10⁶) simulated bb̄ (cc̄) events have been produced with a minimum parton p_T of 20 GeV/c and requiring a muon with p_T > 12 GeV/c and 1.5 < η < 5.0. All the simulated background species are suppressed by the multivariate analysis presented in the next section. Therefore, a data-driven approach is employed for the final background estimation.
Signal selection
Signal events are selected by requiring a vertex displaced from any PV in the event and containing one isolated, high-p_T muon. Due to the relatively high LLP mass, the muons from the LLP decay are expected to be more isolated than muons from hadron decays. The events from pp collisions are selected online by a trigger requiring muons with p_T > 10 GeV/c. The offline analysis requires that the triggering muon has an impact parameter with respect to any PV, IP_μ, larger than 0.25 mm and a transverse momentum, p_T^μ, larger than 12 GeV/c. Primary and displaced vertices are reconstructed offline from charged-particle tracks [31]. Genuine PVs are identified by a small radial distance from the beam axis, R_xy < 0.3 mm. Once the set of PVs is identified, all the other vertices are candidates for the decay position of LLPs. An LLP candidate is formed by requiring three or more tracks, including the muon, and an invariant mass above 4.5 GeV/c². There is no requirement for the reconstructed momentum to point to a specific PV. Particles interacting with the detector material are an important source of background; therefore, a geometric veto is used to reject candidates with vertices in regions occupied by detector material [32]. The event preselection requires at least one PV in the event and at least one LLP candidate. Figure 2 compares the distributions from data and from the simulated bb̄ events for the relevant observables after preselection. For illustration, the shapes of simulated h125-chi40-10ps events are also superimposed. The effect of the geometric veto is visible in the R_xy distribution, for candidates with R_xy above 5 mm. From simulation, the veto introduces an efficiency loss of 3% (27%) for the detection of LLPs with a 50 GeV/c² mass and a 10 ps (200 ps) lifetime, for m_h⁰ = 125 GeV/c². The muon-isolation variable is defined as the sum of the energies of the tracks surrounding the muon direction, including the muon itself, in a cone of radius R_ηφ = 0.3 in the pseudorapidity-azimuthal (η, φ) space, divided by the energy of the muon track. The radius is reduced to R_ηφ = 0.2 when the theoretical hypothesis assumes an LLP mass of 10 GeV/c², to account for the reduced aperture of the jet of particles produced by the LLP decay. A muon-isolation value of unity denotes a fully isolated muon. In simulation, the muon from the signal is found to be more isolated than in the hadronic background. The variables σ_R and σ_Z are the vertex uncertainties in the radial and z directions, respectively.
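As an illustration of the isolation definition above, a small sketch follows; the array-based interface is an assumption, and it presumes the track list excludes the muon itself, whose energy is added back explicitly.

import numpy as np

def muon_isolation(mu_eta, mu_phi, mu_E, trk_eta, trk_phi, trk_E, r_cone=0.3):
    """Cone isolation as defined in the text: total energy of tracks within
    Delta R_(eta,phi) < r_cone of the muon, divided by the muon energy.
    Assumes trk_* arrays exclude the muon track, whose energy is added back;
    a value of 1 therefore means a fully isolated muon."""
    dphi = np.mod(np.asarray(trk_phi) - mu_phi + np.pi, 2 * np.pi) - np.pi  # wrap to (-pi, pi]
    dr = np.hypot(np.asarray(trk_eta) - mu_eta, dphi)
    in_cone = dr < r_cone
    return (mu_E + np.asarray(trk_E)[in_cone].sum()) / mu_E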
The reconstructed vertex mass is very broad and does not peak at the neutralino mass values, because it misses some charged-particle tracks, as well as any neutral particles produced in the LLP decay.
The shapes of the distributions in Fig. 2 are all consistent with a dominant bb composition of the background. This is confirmed by comparing the yields in data and simulation: after preselection and requiring the isolation parameter below 1.2, the total number of LLP candidates in data is 148 × 10^3. The predicted background yields from bb and cc events are (120 ± 20) × 10^3 and (14 ± 4) × 10^3, respectively. Small contributions are expected from processes with W, Z bosons plus jets, top and Standard Model Higgs events: 260, 20, 2, and 1 candidates, respectively. The bb and cc prediction uses the cross-sections measured by the LHCb experiment at 13 TeV [33,34]. The acceptance of this analysis is computed with MadGraph5-aMC@NLO [35] and the detection efficiency is obtained from simulated events. As already stated, these background estimations are only used for cross-checks.
A multivariate analysis (MVA) based on a boosted decision tree [36,37] is used to further purify the data sample. Ten MVA input variables are selected to optimise the signal-background separation. They are: p_T^μ and IP_μ; the ratios of the energies associated with the muon measured in the ECAL and HCAL, normalised to the muon energy; the LLP candidate p_T; its pseudorapidity; the number of tracks forming the LLP; the vertex uncertainties σ_R and σ_Z; and the vertex R_xy distance.
Larger vertex uncertainties are expected for candidates from bb events than for signal LLPs: the former are more boosted and produce more collimated tracks, while the relatively heavier signal LLPs decay into more divergent tracks. This effect decreases when the mass of the LLP approaches the mass of b-quark hadrons. The selection based on the energy deposits in the calorimeters efficiently suppresses the background due to kaons or pions punching through the calorimeters and being misidentified as muons. The muon-isolation variable and the reconstructed mass of the long-lived particles are not included in the classifier; the discrimination power of these two variables is subsequently exploited for the signal determination.
The signal MVA training samples are provided by simulation. The background training sample is obtained from data, based on the hypothesis that the fraction of signal in the data after preselection is small. This automatically includes all possible background sources, with the correct relative abundance.
The training is performed independently for each simulated model. The MVA classifier is subsequently applied to the data and to the simulated signal. For each model, the optimal MVA cut value is chosen by an iterative minimization procedure to give the best expected cross-section upper limit, but keeping at least ten candidates to allow the invariant-mass fit to work properly.
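A hedged sketch of this MVA step is given below, with a generic gradient-boosted classifier from scikit-learn standing in for the BDT implementation actually used; the feature names, column order, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# The ten input variables listed above, in an assumed column order.
FEATURES = ["mu_pt", "mu_ip", "ecal_over_e", "hcal_over_e", "llp_pt",
            "llp_eta", "n_tracks", "sigma_R", "sigma_Z", "R_xy"]

def train_mva(sig, bkg):
    """sig: simulated signal events; bkg: preselected data (assumed to be
    signal-poor). Both are (n_events, len(FEATURES)) arrays."""
    X = np.vstack([sig, bkg])
    y = np.concatenate([np.ones(len(sig)), np.zeros(len(bkg))])
    clf = GradientBoostingClassifier(n_estimators=300, max_depth=3)
    return clf.fit(X, y)
```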
The classifier can be biased by the presence of signal in data used as background training set. To quantify the potential bias, the MVA training is performed adding a fraction of simulated signal events (up to 5%) to the background set. This test demonstrates a negligible effect on the MVA performance for all the signal models.
Determination of the signal yield
The signal yield is determined with an unbinned extended maximum-likelihood fit to the distribution of the reconstructed LLP mass. The shape of the signal component is taken from the simulated models, and a background component is added. After the MVA selection no simulated background survives, therefore the background shape is determined by a data-driven method, which also avoids potential simulation mismodelling of the reconstructed mass. The data candidates are separated into a signal region, with muon isolation below 1.2, and a background region, with isolation values from 1.4 to 2.0. The signal-region selection accepts more than 80% of the signal for all the models considered (see e.g. Fig. 2). Any potential signal yield in the background region is considered negligible. The reconstructed mass distribution obtained from the background candidates is used to constrain an empirical probability density function (PDF) consisting of the sum of two negative-slope exponential functions, one of them convolved with a Gaussian function. Shape parameters and amplitudes are left to vary in the fit. It is possible that the mass distribution obtained after selection of the background region does not represent exactly the background component in the signal region. Hence, a correction is applied before performing the fit: the mass distribution selected in the background region is weighted with weights deduced from the comparison of the candidate mass distributions of the signal and background regions obtained from data with a relaxed MVA selection. This relaxed selection is required to have sufficiently populated samples and to minimise the correlation with the final distributions from which signal yields are obtained. The consistency of this procedure is tested on simulated bb events.

[Fig. 2 Distributions from data compared to simulated bb events (blue) and the simulated signal h125-chi40-10ps (red), after preselection. From (a) to (j): muon transverse momentum; muon impact parameter; muon isolation; the calorimetric energy, E_calorimeters, associated with the muon, normalised by the muon energy, E_muon; the number of tracks used to reconstruct the LLP vertex, including the muon; the radial distance of the reconstructed vertex to the beam line; the longitudinal and radial vertex fit errors, σ_Z and σ_R; the reconstructed transverse momentum and mass of the LLP candidate. The distributions from simulated events are normalised to the data.]
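The empirical background shape can be sketched as follows; this is an illustrative stand-in (plain SciPy) for the PDF used in the fit, with the parameter names and the threshold value chosen for illustration.

```python
import numpy as np
from scipy.stats import expon, exponnorm

def background_pdf(m, frac, tau1, tau2, sigma, m_min=4.5):
    """Sum of two negative-slope exponentials above the mass threshold,
    one of them convolved with a Gaussian; scipy's exponnorm is exactly
    that convolution. In the fit, all shape parameters are free."""
    x = np.asarray(m) - m_min
    comp1 = expon.pdf(x, scale=tau1)
    comp2 = exponnorm.pdf(x, K=tau2 / sigma, loc=0.0, scale=sigma)
    return frac * comp1 + (1.0 - frac) * comp2
```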
Examples of the invariant mass of the selected LLP candidates are shown in Fig. 3 for the signal and background regions. The invariant-mass fit is performed simultaneously on LLP candidates from the signal and background regions. In the former, the numbers of signal and background events are free parameters of the fit. The results of the fit are shown in the figure. The sensitivity of the fit procedure is studied by adding a small number of simulated signal events to the data according to a given signal model; the fitted yields are on average consistent with the numbers of added events. The fitted signal yields, given in Tables 1 and 2, are compatible with the background-only hypothesis for all the theoretical models.
Detection efficiency and systematic uncertainties
The detection efficiency required in the calculation of the signal yield is estimated from the simulated signal events. The efficiencies after preselection and after MVA selection are shown in Tables 1 and 2, for the considered models of resonant and non-resonant LLP productions, respectively.
The values include the geometrical acceptance. Several phenomena compete to determine the detection efficiency. In general, the efficiency after preselection increases with the LLP mass because more particles are produced in the decay of heavier LLPs. There is a loss of particles outside the spectrometer acceptance, especially when the LLPs are produced from the decay of heavier states, such as the Higgs-like particle. In addition, the lower boost of heavier LLPs results in a shorter average flight length, which is disfavoured by the requirement of a minimum R_xy value. With increasing LLP lifetime, a larger fraction of the decays falls into the material region and is vetoed. Finally, a drop in sensitivity is expected for LLPs with a lifetime close to the b-hadron lifetimes, where the contamination from bb events becomes even more important, especially for low-mass LLPs. The detection efficiency is reduced by up to one order of magnitude after the optimised MVA selection, while the background is reduced by 3-4 orders of magnitude. A breakdown of the relative systematic uncertainties is shown in Table 3. The uncertainties on the partonic luminosity depend on the process considered; they are estimated following the procedure explained in Refs. [38,39] and vary from 3% up to 6%, the latter found for the gluon fusion process. The integrated luminosity [40] contributes an uncertainty of 2%. The statistical precision of the efficiencies determined from simulation is in the range 2-4% for the different models. Several sources of systematic uncertainty arising from discrepancies between data and simulation have been considered. The size of these discrepancies for the relevant observables is inferred from a comparison of the distributions obtained from data and from simulated bb events, which describe the data well, or from other calibration processes.
The muon detection efficiency, including trigger, tracking, and muon identification efficiencies, is studied with a tag-and-probe technique applied to muons from J/ψ → μ+μ−, Υ(1S) → μ+μ− and Z → μ+μ− decays. The corresponding systematic effects due to differences between data and simulation are estimated to be between 2 and 3.7%, depending on the theoretical model considered. A comparison of the simulated and observed p_T distributions of muons from Z → μ+μ− decays shows a maximum difference of 0.2 GeV/c in the selected region; this difference is propagated to the LLP analysis by shifting the muon p_T threshold by the same amount. The corresponding systematic uncertainty is below 1% for all models under consideration.
The muon impact-parameter distribution is also studied from Z decays and shows a discrepancy between data and simulation of about 10 µm close to the p_T^μ threshold. By changing the minimum IP_μ requirement by this amount, the change in the detection efficiency is below 1% for all the models.
The vertex reconstruction efficiency has a complicated spatial structure due to the geometry of the VELO and the material veto. Uncertainties in the estimated vertex-finding efficiency are due to the per-track efficiency, the track resolution, and differences in the contribution from background tracks due to the underlying interaction and pile-up. In the material-free region, R_xy < 4.5 mm, the efficiency as a function of the flight distance has been studied in the context of lifetime measurements [41], showing that the simulation reproduces the data within 1%. In the region R_xy > 4.5 mm, a deviation of less than 6% is inferred from the study of inclusive bb events in data and simulation. By altering the efficiency in the simulation program as a function of the true vertex position, the effect on the LLP detection efficiency is estimated to be 1-2%. A second method to determine this contribution uses vertices from B0 → J/ψ K*0 decays with J/ψ → μ+μ− and K*0 → K+π−. For this process the vertex detection efficiencies in data and simulation agree within 10%. This result, obtained from a process with four final-state particles, is propagated to the LLP decay into a larger number of charged-particle tracks with a detection threshold of three tracks. A discrepancy of at most 2% between the LLP efficiency in data and simulation is found, which is adopted as a contribution to the systematic detection uncertainty. The uncertainty on the position of the beam line in the transverse plane is less than 20 µm [16]. It can affect the secondary-vertex selection, mainly via the requirement on R_xy. By altering the PV position in simulated signal events, the effect is estimated to be below 1%.

[Table 5 Upper limits at 95% CL on the production cross-section times branching ratio for signal models with non-resonant production. Masses are given in GeV/c², lifetimes in ps, cross-sections in pb.]
The effect of the imperfect modelling of the observables used in the MVA training is estimated with pseudoexperiments. As previously stated, the bias on each input variable is determined by comparing simulated and experimental distributions of muons and LLP candidates from Z and W events, as well as from bb events. At the MVA test stage, each input variable is modified by a scale factor randomly selected from a Gaussian distribution of width equal to the corresponding bias. The standard deviation of the resulting signal-efficiency distribution is taken as a systematic uncertainty.
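This smearing procedure can be sketched as follows; a minimal illustration that assumes the classifier from the earlier MVA sketch, with the per-variable bias widths passed in as an array.

```python
import numpy as np

rng = np.random.default_rng(1)

def mva_efficiency_spread(clf, X_sig, cut, biases, n_toys=500):
    """Spread of the signal efficiency when every input variable is
    scaled by a factor drawn from a Gaussian of width equal to its
    estimated bias; the standard deviation over toys is taken as the
    systematic uncertainty."""
    effs = np.empty(n_toys)
    for i in range(n_toys):
        scales = rng.normal(1.0, biases, size=X_sig.shape[1])
        effs[i] = (clf.predict_proba(X_sig * scales)[:, 1] > cut).mean()
    return effs.std()
```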
The signal and background samples are obtained through a selection on the muon-isolation parameter. Comparing the mass distributions of bb and Z → bb events, a maximum mass-scale discrepancy between data and simulated events of 10% is estimated in the proximity of the threshold, which translates into a 1.4% contribution to the detection-efficiency uncertainty.
Finally, the total systematic uncertainty is obtained as the sum in quadrature of all contributions, where the different components of the detection efficiency are assumed to be fully correlated.
The choice of the signal and background invariant-mass templates can affect the results of the LLP mass fits. The uncertainty due to the signal model accounts for the mass scale and the mass resolution. The mass scale and resolution discrepancies between data and simulation are below 1% and 1.5% respectively, as obtained from bb and Z → bb events. Pseudoexperiments are used to estimate the effect on the cross-section calculation. For each theoretical model, ten simulated signal events are added to the selected data after a Gaussian smearing or after changing the mass scale. The average deviation of the observed upper limits with respect to the one obtained from the default signal and background distributions is below 2%. The background shape is deduced from data selected in the poorly isolated region after reweighting, with weights inferred from the data distributions obtained with relaxed selection criteria. The overall uncertainty is estimated by reducing by half the weights and running pseudoexperiments as before. The average deviation of the observed upper limits is below 14%.
Results
The 95% confidence level (CL) upper limits, expected and observed, on the production cross-sections times branching fraction are computed for each model using the CLs approach [42]. Statistical and systematic uncertainties on the signal efficiencies are included as nuisance parameters of the likelihood function, assuming Gaussian distributions. Finally, the upper limit values are corrected by the factors which account for the imperfect modelling of signal and background templates.
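For orientation, the CLs machinery can be sketched with the public pyhf package on a single-bin counting model; the yields, uncertainty, and scan range below are invented for illustration and do not represent the analysis model.

```python
import numpy as np
import pyhf

# Single-bin counting model: signal expectation for a unit cross-section,
# a data-driven background estimate, and its uncertainty (all invented).
model = pyhf.simplemodels.uncorrelated_background(
    signal=[5.0], bkg=[50.0], bkg_uncertainty=[7.0])
data = [52.0] + model.config.auxdata

scan = np.linspace(0.1, 5.0, 50)
cls_obs = np.array([
    float(pyhf.infer.hypotest(mu, data, model, test_stat="qtilde"))
    for mu in scan])
mu_up = scan[np.argmax(cls_obs < 0.05)]  # first scan point with CLs < 0.05
print(f"observed 95% CL upper limit on the signal strength: {mu_up:.2f}")
```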
The numerical results for all the models are given in Tables 4 and 5; the reduced sensitivity at the lowest LLP mass value is explained by the above-mentioned effects on the detection efficiency. The upper limits for the processes with m_h0 = 125 GeV/c² can be compared to the predicted Standard Model Higgs production cross-section from gluon fusion of about 46 pb at √s = 13 TeV [43].
Conclusion
Long-lived massive particles decaying into a muon and two quarks have been searched for using proton-proton collision data collected by the LHCb experiment at √ s = 13 TeV, corresponding to an integrated luminosity of 5.4 fb −1 . The LLP lifetime range considered is from 5 to 200 ps. The background is dominated by bb events and is reduced by tight selection requirements, including a dedicated multivariate classifier. The signal yield is determined by a fit to the LLP reconstructed mass with a signal shape inferred from the theoretical models.
The forward acceptance of the LHCb experiment makes it complementary to other LHC experiments, while its low trigger p_T threshold allows relatively small LLP masses to be explored. Two types of LLP production have been assumed. In the first, a Higgs-like particle is produced by gluon fusion and decays into two LLPs. The analysis covers Higgs-like boson masses from 30 to 200 GeV/c², and an LLP mass range from 10 GeV/c² up to about one half of the mass of the parent boson. The second mode is direct LLP production from quark interactions, covering the LLP mass range from 10 up to 90 GeV/c².
The results for all theoretical models considered are compatible with the background-only hypothesis. The upper limits at 95% CL set on the cross-section times branching fractions are mostly of O(0.1 pb), but the sensitivity is limited to O(10 pb) for the lowest LLP mass value considered of 10 GeV/c 2 .
Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors' comment: All LHCb scientific output is published in journals, with preliminary results made available in Conference Reports. All are Open Access, without restriction on use beyond the standard conditions agreed by CERN. Data associated to the plots in this publication are made available on the CERN document server at http://cdsweb.cern.ch/record/2706539.] Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funded by SCOAP3.
The largest metallicity difference in twin systems: High-precision abundance analysis of the benchmark pair Krios & Kronos
Aims. We conducted a high-precision differential abundance analysis of the remarkable binary system HD 240429/30 (Krios and Kronos, respectively), whose difference in metallicity is one of the highest detected to date in systems with similar components (∼0.20 dex). A condensation temperature (T_C) trend study was performed to search for possible chemical signatures of planet formation, and other potential scenarios are proposed to explain this disparity. Methods. Fundamental atmospheric parameters (T_eff, log g, [Fe/H], v_turb) were calculated using the latest version of the FUNDPAR code in conjunction with ATLAS12 model atmospheres and the MOOG code, considering first the Sun and then Kronos as reference, employing high-resolution MAROON-X spectra. We applied a full line-by-line differential technique to measure the abundances of 26 elements in both stars with equivalent widths and spectral synthesis, taking advantage of non-solar-scaled opacities to achieve the highest precision. Results. We find a difference in metallicity of ∼0.230 dex: Kronos is more metal-rich than Krios. This result poses a challenge for the chemical tagging method. The analysis encompassed the examination of the diffusion effect and of primordial chemical differences, concluding that the observed chemical discrepancies in the binary system cannot be solely attributed to either of these processes. The results also show a noticeable Li excess of approximately 0.56 dex in Kronos, and an enhancement of refractories with respect to Krios. A photometric study with TESS data was carried out, without finding any signal of possible transiting planets around the stars. Several potential planet formation scenarios were also explored to account for the observed excess in both metallicity and lithium in Kronos. While planetary engulfment is a plausible explanation, requiring the ingestion of an exceptionally high mass of approximately ∼27.8 M⊕, no scenario is definitively ruled out. We emphasize the need for further investigations and refinements in modelling, indispensable for a comprehensive understanding of the intricate dynamics within the Krios & Kronos binary system.
Introduction
The chemical tagging technique consists of identifying co-natal stars that have dispersed into the Galactic disc based on chemistry alone (e.g. Freeman & Bland-Hawthorn 2002; Casamiquela et al. 2021). This idea has been one of the motivations of important surveys such as APOGEE, GALAH, and the Gaia-ESO survey (Gilmore et al. 2012; Randich et al. 2013; De Silva et al. 2015; Majewski et al. 2017). A fundamental assumption guiding these surveys is that the members of a birth cluster should exhibit a chemically homogeneous composition. This hypothesis was tested using main-sequence and red giant stars in open clusters, reaching an internal coherence in metallicity in the range 0.02-0.03 dex (e.g. De Silva et al. 2006; Bovy 2016; Liu et al. 2016; Casamiquela et al. 2020, 2021). As originally proposed by Andrews et al. (2018), wide binaries (100 au < a < 1 pc) are an ideal sample for studying chemical tagging (e.g. Andrews et al. 2019; Kamdar et al. 2019; Hawkins et al. 2020). In particular, for binaries with physically similar components, it is possible to reach the highest possible precision through a line-by-line differential analysis (e.g. Schuler et al. 2011; Saffe et al. 2015, 2017; Teske et al. 2016; Liu et al. 2018; Tucci Maia et al. 2019; Jofré et al. 2021; Flores et al. 2024), which helps to minimize a number of model-induced and other systematic errors (see Nissen & Gustafsson 2018).
Recently, the internal coherence of chemical tagging was strongly challenged by the discovery of the exceptional comoving pair HD 240429/30 (hereafter Krios & Kronos; Oh et al. 2018), composed of two G-type stars sharing nearly identical Gaia TGAS proper motions and parallaxes. Oh et al. (2018) suggest that the two stars are co-natal, based on their proximity in phase space, with very similar radial velocities and isochrone ages, and with very low probabilities of stellar capture and exchange scattering. The authors used the stellar parameters and abundances from the survey of Brewer et al. (2016), who studied 1617 FGK stars belonging to the California Planet Survey (CPS) using an automated spectral synthesis procedure. In this way, Oh et al. (2018) estimated for the pair a mutual difference in iron content of ∼0.20 dex, and a similar value for other metals such as Ca and Ni. To our knowledge, this is the largest difference found to date between stars with twin components and a supposed common origin, highlighting the pair Kronos & Krios as a benchmark multiple system. Hawkins et al. (2020) studied 25 binary systems and found that 80% are homogeneous at the 0.02 dex level, while six pairs show differences greater than 0.05 dex. If confirmed, the metallicity difference between Kronos & Krios would therefore be ten times higher, on a logarithmic scale, than the typical internal coherence of stars born in the same cluster. Such a large difference would imply that their co-natal nature could not be recovered by any previous chemical tagging work (e.g. De Silva et al. 2006; Bovy 2016; Liu et al. 2016; Casamiquela et al. 2020, 2021). The difference between Kronos & Krios (∼0.2 dex) is similar to those found between random pairs (scatter of 0.23 dex, Nelson et al. 2021), defying the main assumption of chemical tagging, namely that stars formed together display the same abundances throughout their main-sequence lifetimes. Recently, Saffe et al. (2024) analysed for the first time a giant-giant binary system, bringing new insights, with significant differences in metallicity potentially attributed to primordial inhomogeneities. The significance of these findings underscores the importance of our binary system and deserves particular attention.
In addition, it is equally important to explain the origin of the significant metallicity difference between Kronos & Krios. This requires studying the relative volatile-to-refractory content of the stars and the condensation temperature (T_C) trends. For instance, Meléndez et al. (2009) found that the Sun is deficient in refractory elements (T_C > 900 K) relative to volatile elements (T_C < 900 K) when compared to 11 solar twins, and that the abundance differences correlate with T_C. They suggested that this trend is a signature of planet formation, assuming that refractory elements were locked up in rocky planets during the formation of the Solar System. However, different explanations for the T_C trends are also possible. Booth & Owen (2020) suggest that if a giant planet forms early enough (≲1 Myr) at large separations, it could trap ≳100 M⊕ of dust exterior to its orbit. The star would then accrete more gas than dust from the protoplanetary disc, which could result in a lack of refractories in the stellar atmosphere. A larger amount of refractories in a stellar atmosphere could also be the result of accretion of rocky material (e.g. Gonzalez 1997; Meléndez et al. 2017; Saffe et al. 2017; Oh et al. 2018). Oh et al. (2018) suggested that Kronos accreted ∼15 M⊕ of rocky material in order to explain the mutual T_C trend. To date, this is the highest amount of material estimated to have been accreted in binary systems with twin components: it is equivalent to approximately seven times the mass of the four inner planets of the Solar System together, which is also remarkable. Other authors consider alternative scenarios to explain the T_C trends, such as Galactic chemical evolution (GCE) or dust-cleansing effects (e.g. Önehag et al. 2011; Adibekyan et al. 2014; Nissen 2015).
Recently, Spina et al. (2021) studied a sample of 107 binary systems and showed that accretion events occur in ∼20-35% of solar-type stars. In contrast, Behmard et al. (2023) found a much lower engulfment rate of ∼2.9%, claiming that accretion events are rarely detected. The latter authors propose that primordial inhomogeneities, rather than engulfment events, could explain the differences observed in binary systems. According to their criteria (see Sect. 6), Kronos & Krios would be the only pair showing a true engulfment detection, ruling out most previous claims of engulfment events. This again highlights the relevance of the notable pair Kronos & Krios among binary systems. Interestingly, Kunimoto et al. (2018) consider an engulfment event unlikely in this binary system, owing to the rapid mixing expected from fingering convection (10-100 Myr, Théado & Vauclair 2012). Thus, the origin of the extreme metallicity difference in this benchmark pair remains unknown.
A number of recent works studied atomic diffusion effects in main-sequence stars, using stellar evolution models (e.g. Dotter et al. 2017), observing the stars of the M67 open cluster (e.g. Souto et al. 2018, 2019), and also using binary stars (e.g. Ramírez et al. 2019; Liu et al. 2021). Diffusion models show a strong dependence on log g, with the largest effects occurring near log g ∼ 4.2 dex (see e.g. Fig. 5 in Souto et al. 2019). The same plot predicts that a difference of ∼0.20 in log g could translate into a difference of ∼0.075 in [Fe/H]; other offsets are predicted for different chemical elements. Liu et al. (2021) found that the overall abundance offsets in four of seven binary systems could be due to atomic diffusion effects, complicating chemical tagging. The difference in log g estimated for the pair Kronos-Krios is 0.10 dex (Brewer et al. 2016), the largest difference found in the sample of twin-star binary systems of Ramírez et al. (2019). We therefore wondered whether atomic diffusion effects, not previously studied in this benchmark pair, could explain, at least in part, the extreme difference in metallicity.
The detection of a possible T_C trend in a binary or multiple system is a challenge, requiring the highest possible precision in the derivation of stellar parameters and abundances. This demands high-quality spectra with very high S/N, typically reaching ∼400 or more (e.g. Teske et al. 2016; Liu et al. 2018; Schuler et al. 2011; Tucci Maia et al. 2019), compared to S/N ∼ 200 for the case of Kronos & Krios (Oh et al. 2018). For stars with low rotational velocities, it is usual to use equivalent widths rather than spectral synthesis in the derivation of stellar parameters, given that spectral synthesis depends on additional factors (such as v sin i, the resolving power R of the instrument, and the correct fitting of line profiles). Kronos & Krios present projected rotational velocities of 1.1 km/s and 2.5 km/s (Brewer et al. 2016), allowing a clean measurement of equivalent widths. Moreover, for multiple systems with physically similar components, the use of a line-by-line differential technique allows the minimization of systematic errors (e.g. Schuler et al. 2011; Bedell et al. 2014; Saffe et al. 2015; Teske et al. 2016; Liu et al. 2018; Tucci Maia et al. 2019). In this way, the physical similarity between Kronos & Krios (G0V+G2V) is an advantage to be exploited with a differential analysis, a technique not applied to this pair by previous works.
We therefore studied the benchmark pair Kronos & Krios using high-quality MAROON-X spectra with higher S/N (∼400), higher resolving power (R ∼ 85000), broader spectral coverage (from ∼4900 to 9200 Å), and a more refined analysis technique than previous works (fully differential, together with equivalent widths). In addition, we took advantage of non-solar-scaled opacities in the derivation of the model atmospheres, which can result in small abundance differences when compared to classical solar-scaled methods (Saffe et al. 2018, 2019; Flores et al. 2024). This allowed us to determine the metallicity difference between the two stars with the highest possible precision, and to perform a T_C trend analysis to study the possible origin of the differences in this benchmark pair, which could be attributed to a planet engulfment event (Oh et al. 2018; Behmard et al. 2023). Moreover, we explored alternative scenarios that could lead to this result, such as atomic diffusion (Liu et al. 2021) and a potential primordial origin of the chemical difference (Ramírez et al. 2019; Nelson et al. 2021; Saffe et al. 2024).
This work is organized as follows. In Sect. 2 we describe the observations and data reduction. In Sect. 3 we present the stellar parameters and chemical abundance analysis. In Sect. 4 we show the results and discussion. Finally, in Sect. 5 we highlight our main conclusions.
Observations and data reduction
The spectra of Kronos & Krios were acquired with the M-dwarf Advanced Radial velocity Observer Of Neighboring eXoplanets (MAROON-X) spectrograph. This high-precision, bench-mounted echelle spectrograph provides high-resolution (R ∼ 85000) spectra when illuminated via two 100 µm (0.77" on sky) octagonal fibres. MAROON-X is connected to the 8.1 m Gemini North telescope at Maunakea, Hawaii. Currently, the spectrograph has no movable parts and is operated in one readout mode (100 kHz, 1×1 binning). MAROON-X is equipped with two STA4850 (4080 × 4080) CCD detectors with a pixel size of 15 µm, each with a coating optimized for its respective wavelength coverage. The instrument includes its own tungsten-halogen lamp for flat-fielding and a ThAr arc lamp for wavelength calibration.
The observations were taken on August 15, 2022 (Programme ID: GN-2022B-Q-203, PI: Paula Miquelarena); the star Kronos was observed immediately after Krios, using the same spectrograph configuration. The exposure times for Krios and Kronos were 3 × 20 min and 3 × 16.67 min, respectively. This resulted in a final signal-to-noise ratio (S/N) per pixel of ∼420 for both stars, measured near ∼6000 Å in the combined spectra. The final spectral coverage was ∼4900-9200 Å. The solar spectrum was obtained by observing the asteroid Vesta (Programme ID: GN-2022A-Q-22, PI: Yuri Netto), yielding a S/N similar to that achieved in the combined spectra of Kronos and Krios. However, it is worth mentioning that the most accurate differential study, in terms of abundance precision, is the one conducted between the components of the binary system, owing to their similarity.
MAROON-X spectra were reduced using MAROONXDR, a publicly available implementation of the Data Reduction for Astronomy from Gemini Observatory North and South (DRAGONS; Labrie et al. 2019) pipeline, following the standard recipe for echelle spectra (e.g. bias and flat corrections, scattered-light correction). The continuum normalization and other operations (such as Doppler correction and spectra combination) were carried out using the Image Reduction and Analysis Facility (IRAF).
Stellar parameters and abundance analysis
We determined fundamental stellar parameters, namely effective temperature (T_eff), surface gravity (log g), metallicity ([Fe/H]), and microturbulence velocity (v_turb), as well as chemical abundances for Kronos and Krios, by first measuring the equivalent widths (EWs) of lines of 26 elements, including Fe i and Fe ii, using the splot task in IRAF. The list of spectral lines, along with the relevant laboratory data such as excitation potentials and oscillator strengths (log gf values), was sourced from Liu et al. (2014a) and Meléndez et al. (2014), and supplemented with data from Bedell et al. (2014), who carefully selected lines for precise abundance determinations.
Stellar atmospheric parameters were obtained by imposing ionization and excitation balance on the Fe i and Fe ii lines. In this method we search for a zero slope when comparing Fe i and Fe ii abundances with the reduced equivalent width (EW_r = EW/λ) and the excitation potential, respectively. For this purpose, we employed the FUNdamental PARameters programme (FUNDPAR; Saffe et al. 2015, 2018) in its latest version. It uses the MOOG code (Sneden 1973) together with ATLAS12 model atmospheres (Kurucz 1993) to search for the best solution (for more details, see Saffe et al. 2018). In Figure 1 we present the differential abundances of Fe i (black) and Fe ii (red) versus excitation potential (upper panel) and reduced EWs (lower panel) for Krios relative to Kronos.
We employed a full line-by-line differential technique using the Sun as reference in the first step. In this context, the adopted solar parameters were T_eff = 5777 K, log g = 4.44 dex, [Fe/H] = 0.00 dex and v_turb = 1.00 km s−1. Subsequently, we recalculated v_turb by ensuring a zero slope between the absolute abundances of Fe i and EW_r, obtaining a value of 1.13 km s−1. The final parameters for Kronos and Krios relative to the Sun are presented in Table 1. The corresponding uncertainties were estimated using the method described in Saffe et al. (2015), which accounts for the individual and mutual covariances in the error propagation. We applied the same methodology to determine the differential stellar parameters and abundances of Krios, using Kronos as the reference star. The resulting parameters for Krios relative to Kronos are also provided in Table 1.
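A minimal sketch of the balance conditions driving this parameter search is shown below (plain NumPy; the line-data arrays are placeholders, and the real optimisation iterates the model-atmosphere parameters until all three indicators vanish).

```python
import numpy as np

def balance_indicators(fe1_abund, fe1_chi, fe1_ew_red, fe2_abund):
    """Excitation/ionization-balance indicators for a trial atmosphere:
    slope of Fe I abundance vs excitation potential (sensitive to Teff),
    slope vs reduced EW (sensitive to v_turb), and the Fe I - Fe II
    abundance offset (sensitive to log g). All three should be ~zero
    at the adopted parameters."""
    slope_teff = np.polyfit(fe1_chi, fe1_abund, 1)[0]
    slope_vturb = np.polyfit(fe1_ew_red, fe1_abund, 1)[0]
    ion_offset = np.mean(fe1_abund) - np.mean(fe2_abund)
    return slope_teff, slope_vturb, ion_offset
```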
We also derived chemical abundances for 26 elements other than Fe, including Li i, Ce ii, Pr ii, Nd ii, and Eu ii. For this purpose we implemented a curve-of-growth analysis using the latest version of MOOG (Sneden 1973). In order to account for hyperfine structure (HFS) effects, we employed spectral synthesis for V i, Mn i, Co i, Cu i, Li i, Y ii, Sc ii, and Eu ii, incorporating HFS constants from Kurucz & Bell (1995). We also applied abundance corrections for Galactic chemical evolution (GCE) based on the [X/Fe]-age correlation from Bedell et al. (2018) for (Krios-Sun) and (Kronos-Sun), following the methodology detailed by Spina, Meléndez & Ramirez (2016) and Yana Galarza et al. (2016). No GCE correction was made for Krios-Kronos, as it is assumed that they were born from the same molecular cloud. Specifically, we considered non-local thermodynamic equilibrium (NLTE) corrections for Ba ii (Korotin et al. 2011), Na i (Shi et al. 2004), and O i (Ramírez et al. 2007). The NLTE correction for Ba ii is +0.015 dex for Kronos and 0.00 dex for Krios. For Na i we adopted −0.08 dex for both stars, and for O i we adopted +0.11 dex for Kronos and +0.18 dex for Krios. The differential abundances of all elements, along with their corresponding errors, are detailed in Table 2. It is worth mentioning that extensive NLTE corrections are available through an interpolation tool at the MPIA website. This service includes several elements (Mg, Si, and Ca, among others) and also Fe i and Fe ii corrections. For example, the O i triplet includes hydrogen collisions with cross-sections based on quantum-mechanical calculations (Bergemann et al. 2021). The interpolation tool makes use of MAFAGS or MARCS model atmospheres. Considering that our calculation used ATLAS12 models, a future implementation of FUNDPAR using the MARCS models could take advantage of the mentioned NLTE corrections. The total abundance errors (σ_TOT) were obtained by quadratically adding the observational errors (derived as σ/√(n−1)) and the errors due to uncertainties in the fundamental parameters. For elements with only one measured line, we adopted for σ the average standard deviation of the other elements.
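The error combination just described amounts to a quadrature sum, sketched below (function and variable names are ours, not FUNDPAR's).

```python
import numpy as np

def total_abundance_error(line_abunds, sigma_params):
    """sigma_TOT: the observational term sigma/sqrt(n-1) added in
    quadrature with the error propagated from the fundamental-parameter
    uncertainties. For a single measured line, sigma is replaced by the
    average standard deviation of the other elements (not handled here)."""
    line_abunds = np.asarray(line_abunds, dtype=float)
    n = line_abunds.size
    sigma_obs = line_abunds.std(ddof=1) / np.sqrt(n - 1)
    return np.hypot(sigma_obs, sigma_params)
```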
Using the spectroscopic stellar parameters obtained for both components, we derived new values for the stellar masses M⋆, radii R⋆, and ages τ⋆. To accomplish this, we employed PARAM 1.5 from the PAdova and tRieste Stellar Evolution Code (PARSEC; De Silva et al. 2006; Rodrigues et al. 2014, 2017). We specifically utilized the evolutionary tracks from Modules for Experiments in Stellar Astrophysics (MESA; Paxton et al. 2011, 2013, 2015, 2018); the input data required for the analysis were T_eff, log g, and [Fe/H], along with the respective 1σ errors in all cases. We also included parallaxes from Gaia EDR3 (Gaia Collaboration et al. 2021) and photometry from the Tycho-2 catalogue in the V and B bands (Hog et al. 2000). The derived values are −1.34 Gyr for Kronos and −1.10 Gyr for Krios. In addition, we estimated the ages of the components using the trigonometric log g, obtaining τ⋆ = 2.18 ± 1.37 Gyr for Kronos and τ⋆ = 2.09 ± 1.50 Gyr for Krios, which are similar to the previous values within the errors, providing evidence of the true coevality of the system.
Results and discussion
The stellar parameters and chemical abundances derived in this work were obtained through the opacity sampling method, incorporating non-solar-scaled opacities (Saffe et al. 2018). When comparing the fundamental atmospheric parameters listed in Table 1 with those obtained by Brewer et al. (2016), we find good agreement within the errors. However, a notable discrepancy arises when comparing the T_eff differences between the two components. In our investigation these temperatures are remarkably similar, yielding identical temperatures when using Kronos as the reference star. In contrast, Brewer et al. (2016) report a significant temperature difference between the components. We attribute this discrepancy to our use of higher-S/N spectra, the use of different line lists and atmospheric models, and the full line-by-line differential technique employed in our study.
Moreover, it is noteworthy that the atmospheric model utilized in the prior chemical analysis of Brewer et al. (2016) employed a fixed microturbulence parameter of 0.85 km s−1. Nissen & Gustafsson (2018) have cautioned against the potential inaccuracies associated with using a constant value for v_turb. This caution gains particular significance considering an observed variation of approximately 1.2 km s−1 when analysing stars with effective temperatures between 5000 K and 6500 K (Edvardsson et al. 1993; Ramírez et al. 2013). In our study, we opted not to fix v_turb; instead, we estimated the value that best fits the model atmosphere of each component, achieving an optimal agreement between abundances and line intensities.
Additionally, we calculated the photometric temperatures of the two stars using the colte code (https://github.com/casaluca/colte), which derives colour-effective temperature relations employing Gaia DR3 and 2MASS photometry in the InfraRed Flux Method, estimating errors from Monte Carlo simulations of each index (Casagrande et al. 2021). The weighted average results can be seen in Table 1. For Kronos there is excellent concordance between the spectroscopic and photometric T_eff, and for Krios the photometric temperature appears marginally higher than the spectroscopic value, although still statistically indistinguishable within the errors. Nevertheless, the spectroscopic estimate exhibits a slightly closer agreement with the photometric value than those derived by Brewer et al. (2016).
The significant difference in metallicity found by Oh et al. (2018), of ∼0.20 dex, is also reflected in this study, with a difference of 0.230 dex indicating that Kronos is more metal-rich than Krios. Figure 2 shows the abundances of the chemical elements in Krios versus condensation temperature T_C, with Kronos as reference. The 50% T_C values were taken from Lodders (2003), for a solar-composition gas. We calculated the slope considering all elements and considering only the refractories. The weighted results were −17.43 ± 2.25 × 10−5 dex K−1 for all elements and −23.98 ± 5.16 × 10−5 dex K−1 for refractories. Based on these findings, a pronounced lack of refractories relative to volatiles in Krios compared to Kronos is evident, significant at the 9σ level. For the refractory elements alone, the slope is also significant, at the 6σ level.
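The weighted slopes quoted above correspond to a fit of the following form, sketched here with NumPy; the input arrays are placeholders for the per-element differential abundances, their errors, and the Lodders (2003) 50% T_C values.

```python
import numpy as np

def tc_slope(tc, d_abund, err):
    """Error-weighted linear fit of differential abundance versus 50%
    condensation temperature; returns the slope (dex/K) and its 1-sigma
    uncertainty from the unscaled covariance matrix."""
    coeffs, cov = np.polyfit(tc, d_abund, 1,
                             w=1.0 / np.asarray(err), cov="unscaled")
    return coeffs[0], np.sqrt(cov[0, 0])
```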
In view of their results, Oh et al. (2018) explored the possibility that this system formed through binary-single scattering events, in which initially unrelated stars undergo an exchange of binary members. Their study delves into the rate of exchange scattering, considering factors such as the cross-section and velocity parameters. However, the analysis revealed that this mechanism is unlikely to explain the distinctive abundance patterns observed in such stars. A statistical examination, employing randomly drawn star pairs with similar metallicity characteristics, reinforces this conclusion, highlighting the improbability of exchange scattering accounting for the observed chemical differences within the binary system.
Li content in Kronos & Krios
The Li abundance was initially calculated for Krios and Kronos using spectral synthesis of the 6707.8 Å line and corrected for NLTE effects using the INSPECT tool (Lind et al. 2012), obtaining A(Li) = 2.78 ± 0.07 dex for Kronos and A(Li) = 2.26 ± 0.07 dex for Krios. However, due to an artefact observed around the lithium line, particularly on its left wing, we opted to use spectra from the HIRES database (Programme ID: Y219, PI: Brewer) to redetermine its abundance. After correcting for NLTE effects with the INSPECT tool, we obtained A(Li) values of 2.84 ± 0.07 dex for Kronos and 2.28 ± 0.07 dex for Krios, in good agreement, within the errors, with the values obtained from the MAROON-X spectra. Consequently, the lithium difference between the components is ∆(Li) = 0.56 dex, slightly greater than the ∆(Li) = 0.50 dex reported by Oh et al. (2018).
Prior studies of FGK dwarf and subgiant stars revealed a subtle trend between lithium abundance and T_eff, with A(Li) being higher for hotter stars (Ramírez et al. 2012; Bensby & Lind 2018). Furthermore, Carlos et al. (2019) found a strong correlation between Li depletion and age for a sample of 77 solar-type stars, and a weaker correlation with metallicity and mass, with higher Li depletion for older, more metallic, and less massive stars, in line with previous studies (e.g. Castro et al. 2009; Carlos et al. 2016).
Recently, Martos et al. (2023) estimated a correlation between Li abundance and both age and [Fe/H] in a sample of 118 solar analogues, using a least-squares method, and found a robust anticorrelation with these parameters. In Figure 3 of their work, they showed the behaviour of A(Li) with respect to age and [Fe/H]. In our Figure 3, we replicated this distribution by plotting A(Li)_NLTE versus age, including objects with −0.15 < [Fe/H] < 0.15 (black points) and [Fe/H] > 0.15 (red squares). We included Kronos and Krios, shown in the figure with diamonds. Given the significant difference in metallicity between the two stars, we also contemplated the hypothesis that the bulk composition of Kronos closely resembled that of Krios, indicated with triangles in the figure. We considered ages computed using MESA isochrones, indicated in green in Figure 3. Additionally, we incorporated ages calculated with the Yonsei-Yale (Y²) set of isochrones (Yi et al. 2001; Demarque et al. 2004), taking into account the influence of alpha enhancement to maintain consistency with the sample analysed by Martos et al. (2023); this results in τ⋆ = 3.08 ± 1.54 Gyr and τ⋆ = 2.81 ± 1.60 Gyr for Kronos and Krios, and τ⋆ = 3.61 ± 1.69 Gyr for Kronos considering [Fe/H] = −0.01 dex, represented in the figure in orange. Focusing first on the MESA set of parameters, it is apparent that Krios has an A(Li) similar to the other stars in the same age group. However, the behaviour of Kronos is quite different from that of stars of the same age and metallicity in the sample. Regardless of the primordial metallicity that Kronos may have had, it has more lithium than the rest of the stars in the sample. This remains prominent in both cases, whether its primordial metallicity was initially [Fe/H] = −0.01 dex or a bulk metallicity of [Fe/H] = 0.22 dex is considered. Furthermore, these results are replicated with the Y² set of parameters. This suggests that the lithium difference between the two stars cannot be explained solely by differences in parameters. If that were the case, we would expect Kronos to be deficient in Li compared to Krios, with A(Li) < 1 dex, following the trend of the metal-rich stars in the sample; instead, it has A(Li) = 2.78 dex, far from this sequence.
Due to the considerable depletion of lithium in stars, which can exceed a factor of 100 at the solar age (e.g. Asplund et al. 2009; Monroe et al. 2013), planet engulfment provides a viable mechanism for significantly increasing the photospheric lithium content of solar-type stars (e.g. Ramírez et al. 2012; Meléndez et al. 2017). Sandquist et al. (2002) showed that planet accretion onto the host star could introduce planetary material into the stellar convection zone, thereby modifying surface abundances, especially that of lithium. Meléndez et al. (2017) found an increase in Li in HIP 68468 of approximately 0.6 dex, four times more than expected for a star of its age, attributing this phenomenon to a possible planet ingestion. In a similar work, Galarza et al. (2021) analysed the binary system HIP 71726-HIP 71737. Their analysis revealed a metallicity difference of ∆[Fe/H] ∼ 0.11 dex and a lithium disparity of ∼1.03 dex between the components. The authors concluded that an engulfment event involving ∼9.8 M⊕ of rocky material could account for these observed differences. Spina et al. (2021) analysed the chemical composition of 107 binary systems composed of solar-type stars, finding that those stars with higher [Fe/H] than their companions also exhibited an increase in Li abundance, linking both results to planetary ingestion by these enriched objects. They determined that engulfment events occur with a probability of 20-35%. Nonetheless, Behmard et al. (2023) claim that the use of an inhomogeneous sample, the omission of an analysis of abundances with T_C, and the fact that some binaries in the sample did not qualify as twins could significantly affect the high engulfment rates found by Spina et al. (2021). Instead, they conducted a more detailed analysis of 36 planet-hosting binaries, of which only 11 systems were considered twins, aiming to detect potential engulfment events. This exploration revealed that engulfment events are rare, with a rate of ∼2.9%. Notably, the study emphasizes that only the Krios-Kronos binary could have experienced a genuine engulfment event.
Searching for planets around Kronos & Krios
To date, there have been no planets detected in orbit around Kronos and Krios. Therefore, we conducted a detailed photometric analysis with the aim of revealing potential planetary bodies that could offer valuable insights into the planet formation scenarios expounded in the subsequent sections.
Both stars were observed by the Transiting Exoplanet Survey Satellite mission (TESS; Ricker et al. 2015) in sectors 17, 18, and 24 (from October 8 to November 27, 2019, and from April 16 to May 12, 2020) with a 30-minute cadence, and in sectors 57 and 58 (September 30 to November 26, 2022) with a cadence of 200 seconds. The analysis of these data products, available in target pixel file (TPF) format, was carried out with the tools provided by the Lightkurve Python package (Lightkurve Collaboration et al. 2018). Given that the two stars are sufficiently separated in the TESS field, we were able to analyse the TPF files of Kronos and Krios independently. We performed single-aperture photometry on the images, choosing as the optimal aperture the one centred on the target that collected as much of the stellar flux as possible while minimizing the sky contribution. The 30-minute and 200-second cadence light curves were treated separately. For both modes, a median filter was applied to remove the systematics in the resulting light curves. We were not able to eliminate the strong systematics introduced by the changes in the Earth-Moon orientation and distance in sector 24 and, hence, these data were not used in the further analysis.
To look for signs of additional stellar and/or planetary companions around Kronos and Krios, we ran the Transit Least Squares code (TLS; Hippke & Heller 2019) on the detrended light curves of each component separately (Figure 4). No transit or eclipse-like signal that could suggest the presence of a transiting planet or an eclipsing stellar companion was detected in the 30-minute or 200-second cadence data of either star. Additionally, a detailed by-eye inspection of the TESS photometry revealed that the stars show no signs of periodic modulation or sporadic events, such as flares, which indicates that they are not photometrically active objects. Here it is important to caution that the present conclusion about periodic photometric variability is based only on visual scrutiny of the data. A more reliable and confident result would require running a tool specifically designed to detect periodic modulations in time series, such as the Lomb-Scargle periodogram (Lomb 1976; Scargle 1982) or the auto-correlation function (McQuillan et al. 2013), on the TESS light curves of Kronos and Krios. However, conducting such an analysis is beyond the scope of this paper.
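A compressed sketch of this photometric search, using the public lightkurve and transitleastsquares packages, is shown below; the target identifier, sector, and detrending choices are illustrative and differ from the full treatment described above.

```python
import lightkurve as lk
from transitleastsquares import transitleastsquares

# Download one TESS sector, extract single-aperture photometry, detrend.
tpf = lk.search_targetpixelfile("HD 240430", sector=57).download()
lc = tpf.to_lightcurve(aperture_mask=tpf.create_threshold_mask())
flat = lc.flatten(window_length=901).remove_outliers(sigma=5)

# Period scan with Transit Least Squares; a high SDE would flag a
# candidate transit signal (none was found for either star).
model = transitleastsquares(flat.time.value, flat.flux.value)
results = model.power()
print(results.period, results.SDE)
```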
Atomic diffusion
The atomic diffusion process includes effects such as gravitational settling, thermal and chemical diffusion, and radiative acceleration (e.g. Dotter et al. 2017; Liu et al. 2021). It operates primarily in the radiative zones of stars, pushing certain elements and altering the surface abundances, depending on the particular species and the evolutionary state of the star. In the case of substantial differences in the spectroscopic parameters (T_eff or log g) of the stars forming a binary system, this process could potentially explain a disparity in metallicity between the components, since their abundances may have been affected differently as they evolve. Liu et al. (2021) found that, for four of the seven studied pairs with differences in log g > 0.05 dex, the discrepancies in [Fe/H] could be attributed to atomic diffusion rather than planetary formation. In the present work, Kronos and Krios exhibit a log g difference of approximately 0.05 dex. We consequently investigated whether the observed difference in metallicity between the components could be attributed to a diffusion process. To address this, we used the MESA Isochrones and Stellar Tracks (MIST; Choi et al. 2016), which facilitate the derivation of stellar evolutionary models that integrate the influences of atomic diffusion and overshoot mixing, and we employed solar abundances from Asplund et al. (2009). We generated a set of isochrones covering the age range of both stars; the results are depicted in Figure 5. From the figure, it is evident that Krios follows an evolutionary model consistent, within the errors, with the age calculated from PARAM (τ⋆ ≈ 1.57 Gyr). With Kronos exhibiting a lower log g than Krios, the metallicity of Kronos would be expected to be lower if diffusion were the dominant explanation for the anomalies found. However, as depicted in Figure 5, Kronos exhibits a significantly higher metallicity than Krios. Based on this result, while the diffusion effect cannot be completely ruled out, we consider that it is not the primary factor responsible for the pronounced metallicity difference in the binary system; an additional mechanism is required to account for these discrepancies.
Primordial chemical differences between components
Binary systems are ideal laboratories for testing the various scenarios that have been proposed to explain the origin of chemical signatures. This is because the two stars share an origin within the same molecular cloud, so their primordial chemical composition should be similar, minimizing the role of GCE.
Taking into account the substantial difference in metallicity between Kronos and Krios, which have a projected separation of approximately 11277 au, Oh et al. (2018) explored the probability of a coincidental pairing. Using the Gaia Universe Mock Simulation (Robin et al. 2012) and the Besançon Galaxy model (Robin et al. 2003), they looked for chance pairs within 200 pc of the Sun. From a sample of 119259 solar-mass primary stars, they found only one pair with ∆v_r < 2 km s−1, which naturally suggests a physical association of the Kronos & Krios system rather than a chance pairing. We further calculated the ∆v_3D of the binary system, utilizing the space velocities of each component from the Gaia DR3 dataset (Gaia Collaboration et al. 2021) with the Gala code (Price-Whelan 2017). The ∆v_3D is estimated at 0.55 km s−1, below the 2 km s−1 limit required to ensure the continuity of a binary system (Kamdar et al. 2019). Ramírez et al. (2019) examined a sample of 12 binary systems with twin stars and found a modest correlation between the absolute difference in metallicity of the components and their separation, with metallicity discrepancies increasing with separation. Additionally, Andrews et al. (2019) investigated chemical homogeneity in 24 binary systems with similar components, finding consistency in their abundances at a level of 0.1 dex; for a set of random pairs generated from these systems, consistency was observed only at a level of 0.3-0.4 dex. Following this line, Nelson et al. (2021) analysed 33 comoving pairs of F and G dwarfs and found that comoving systems spanning separations from ∼2 × 10^5 au to 2 × 10^7 au exhibit greater homogeneity (∆[Fe/H] = 0.09 dex) than randomly paired stars (∆[Fe/H] = 0.23 dex).
Assuming that the two stars indeed constitute a coeval and co-natal system, and considering the results presented by Nelson et al. (2021), the difference in metallicity found in this binary system cannot be explained by the separation distance alone; some other factor must account for this significant discrepancy.
Rocky planet formation
The average abundances calculated for the refractory and volatile elements in Krios relative to Kronos are −0.24 ± 0.01 dex and −0.03 ± 0.03 dex, respectively. These results, along with the trend observed in Figure 2, indicate an overabundance of refractories in Kronos compared to Krios. Moreover, as seen in Section 4.1, there is an excess of Li in Kronos of ∆(Li) = 0.56 dex, which cannot be explained solely by differences in the parameters of the components.
Among the possible scenarios that could explain this result, we first consider the hypothesis presented by Meléndez et al. (2009). They suggested that the lack of refractories in the Sun may be attributed to the formation of terrestrial planets and planetesimals around it, which primarily accreted refractory material (e.g. Saffe et al. 2016; Yana Galarza et al. 2016; Liu et al. 2020). The deficiency of refractories in Krios could thus result from a protoplanetary disc sequestering refractory material, possibly for the subsequent formation of rocky planets around the star. To date, as presented in Section 4.2, no planets have been detected transiting either component of the binary system. While the current evidence does not strongly support this model, it would be intriguing to conduct a radial velocity study to search for possible anomalies that could indicate the presence of a planet.
Another plausible factor that may account for the disparity in metallicity and the trend with T_C among the components is the presence of a debris disc encircling Krios, similar to what was observed in the ζ1-ζ2 Ret system (Saffe et al. 2016). Debris-disc detection relies primarily on identifying infrared (IR) excess emission from circumstellar dust. These dust particles have lifetimes shorter than those of their stellar systems, reinforcing the hypothesis that such discs are continuously replenished through ongoing collisions between larger bodies (e.g. Wyatt 2008).
To investigate the potential presence of an IR excess in this binary system, we employed the VOSA platform, obtaining the spectral energy distribution of both components from photometric observations in the SDSS, JPAS, TYCHO, JPLUS, Johnson, WISE, 2MASS, and GAIA3 filters. The analysis did not reveal any IR excess in either component, disfavouring the possibility that the observed differences in metallicity and the T_C trend in this system are linked to a debris disc around Krios.
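For intuition, a highly simplified version of such an IR-excess check is sketched below: a blackbody photosphere is fitted to the short-wavelength photometry and the mid-IR points are compared against it. This is not the VOSA workflow; the photometry is synthetic and the bands, fluxes, and thresholds are illustrative assumptions only.

import numpy as np
from scipy.optimize import curve_fit

def planck(lam_um, T, scale):
    """Blackbody flux density (arbitrary units) vs wavelength in microns."""
    lam = lam_um * 1e-6
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return scale * (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

# Hypothetical photometry for one component (V, J, H, Ks, WISE W1-W4).
lam_um = np.array([0.55, 1.25, 1.65, 2.16, 3.4, 4.6, 12.0, 22.0])
rng = np.random.default_rng(0)
obs = planck(lam_um, 5800.0, 1e-12) * (1 + 0.02 * rng.standard_normal(lam_um.size))
err = 0.03 * obs

# Fit the photosphere on the short-wavelength points only.
popt, _ = curve_fit(planck, lam_um[:6], obs[:6], p0=(5800.0, 1e-12))

# Flag an excess where the mid-IR points sit significantly above the fit.
sigma = (obs - planck(lam_um, *popt)) / err
for band, s in zip(["W3", "W4"], sigma[-2:]):
    print(f"{band}: {s:+.1f} sigma {'-> possible IR excess' if s > 3 else ''}")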
Dust trapping
The model proposed by Booth & Owen (2020) suggests that the lack of refractories in one of the stars could be due to the formation of a massive gas-giant planet that opened a gap in the gaseous protoplanetary disc. This gap creates a pressure trap beyond the planet's orbit that mainly sequesters dust from the disc. Such a mechanism could produce a disparity between refractory and volatile elements in the host star. The recent study by Hühn & Bitsch (2023) refines the understanding of this planet-formation scenario, exploring how the origin of a planet influences the material accreted onto the convective envelope.
If we consider this scenario plausible, Krios should host a Jupiter-sized planet which, according to this model, would create traps allowing the accretion of volatiles while inhibiting the accretion of refractories. If so, this could account for the abundance pattern observed in Figure 2. The absence of detected planets to date is not conclusive evidence against this hypothesis.
Planetary ingestion
Another important scenario to consider is the engulfment hypothesis (e.g. Saffe et al. 2017; Galarza et al. 2021; Jofré et al. 2021; Flores et al. 2024). Spina et al. (2021) suggested that two conditions must be met for the observed anomalies to be attributed to the engulfment of a planet. First, one of the stars in the pair should show an excess of refractory elements relative to volatiles, indicating the accretion of rocky material by that object. Second, that star should also exhibit an excess of Li compared to its companion. The latter signature becomes particularly significant when engulfment occurs at an advanced stellar age because, by that time, the star would already have burned most of the Li in its atmosphere, so the accretion of fresh refractory material would leave a substantial and detectable imprint when comparing the A(Li) of the two components.
When comparing our findings with the hypotheses presented in the work of Spina et al. (2021), the engulfment scenario becomes a plausible consideration. This implies that Kronos might have accreted one or more planets at an advanced age, leaving distinctive lithium marks and introducing refractory elements into the atmosphere of the star. To determine how much terrestrial mass Kronos would need to have accreted to achieve these values, we employed the terra code (Yana Galarza et al. 2016). Our estimates yield a convective envelope mass of M_cz = 0.017 M⊙ for Kronos, with approximately ∼27.8 M⊕ of rocky material required to reproduce the observed trend illustrated in Figure 2, composed of 19.9 M⊕ of terrestrial material and 7.9 M⊕ of meteoritic material. This remarkable amount of ingested material is among the largest estimated to date in twin components, underscoring the significance of the results. Furthermore, recent research by Armstrong et al. (2020) provides compelling evidence for TOI-849b, a planet with a core mass of 39.1 M⊕. This discovery not only reinforces the plausibility of our findings, but also highlights that planetary bodies with masses comparable to, or even greater than, the values found in our study do exist in exoplanetary systems.
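The core of such an estimate is a mass-balance dilution calculation. The sketch below is a simplified, single-element illustration of the idea, not the terra code itself; the convective-zone and planet composition numbers are round placeholder values labeled as such.

import numpy as np

M_SUN_IN_EARTH = 332946.0  # Earth masses per solar mass

def engulfment_shift(x_cz, x_acc, m_cz_msun, m_acc_mearth):
    """Photospheric abundance shift [dex] for one element after mixing
    m_acc of rocky material (element mass fraction x_acc) into a
    convective zone of mass m_cz with pre-existing mass fraction x_cz."""
    m_cz = m_cz_msun * M_SUN_IN_EARTH
    x_new = (x_cz * m_cz + x_acc * m_acc_mearth) / (m_cz + m_acc_mearth)
    return np.log10(x_new / x_cz)

# Placeholder mass fractions: iron in a solar-composition envelope vs an
# Earth-like rocky body (order-of-magnitude values, for illustration only).
d_fe = engulfment_shift(x_cz=1.3e-3, x_acc=0.32,
                        m_cz_msun=0.017, m_acc_mearth=27.8)
print(f"Predicted Delta[Fe/H] ~ {d_fe:+.2f} dex")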
Figure 6 depicts the observed abundance pattern and the model fitted by terra, assuming the ingestion of ∼27.8 M⊕. The agreement between the two is good, except for Li, which is overestimated by ∼0.36 dex. We caution that terra models engulfment as if it had occurred at the present time; the excess Li in the predicted model therefore suggests that the engulfment occurred in the past. The mass of the convective envelope plays a significant role, so knowing its value at the time of accretion would improve the model. Nonetheless, the simulation predicted by terra can be considered a solid first approximation. Behmard et al. (2023) used a sample of 36 stellar systems to quantify how long a chemical signature from the ingestion of a planet remains observable in the stellar photosphere, and its associated strength. Simulating pollution from the engulfment of a 10 M⊕ planet, they found that stars with masses of 1.1-1.2 M⊙ exhibit the strongest and longest-lived signature, maintaining values greater than 0.05 dex for approximately 2 Gyr. They also considered the ingestion of a 50 M⊕ planet, for which stars with masses of 0.7-1.2 M⊙ display signatures exceeding 0.05 dex for 3-8 Gyr. Given the age and mass of Kronos, these findings lead us to consider that the chemical differences observed in this star relative to Krios may have originated from the engulfment of rocky material.
Kozai migration (Kozai 1962) has been proposed to explain this phenomenon in other binary systems: a giant planet orbiting one of the stars may experience orbital decay through perturbations from the companion star, which increase the eccentricity of the planetary orbit; tidal friction then brings the planet closer to the host star, ultimately resulting in the ingestion of the surrounding rocky material and potentially of the planet itself (Wu et al. 2003; Takeda et al. 2008; Borkovits et al. 2011; Mustill et al. 2015; Petrovich 2015; Church et al. 2020). In this context, Kronos would initially have formed a giant gas planet, and possibly rocky material as well. The migration of this hypothetical planet would then have triggered the accretion of refractory material, either from the inner regions of the planetary system or from the giant planet's core itself, producing the observed refractory excess (as shown in Fig. 2). Similar migration scenarios have been invoked in the literature for other binary systems (e.g. Neveu-VanMalle et al. 2014; Teske et al. 2015; Saffe et al. 2017; Jofré et al. 2021; Flores et al. 2024).
Conclusions
We performed a high-precision differential abundance analysis of the binary system Krios & Kronos with the aim of exploring different scenarios that could explain the particularly large [Fe/H] disparity found by Oh et al. (2018). To this end, we took advantage of high-resolution spectra (S/N ∼ 420) obtained with MAROON-X. We calculated the fundamental atmospheric parameters (T_eff, log g, [Fe/H], v_turb) of the two stars, for the first time using the non-solar-scaled method with the Sun as reference, and recalculated the parameters of Krios using Kronos as reference. We also measured chemical abundances for 27 elements through equivalent widths and spectral synthesis, and subsequently analysed their relation with T_C. We found the fundamental parameters of the two components to be highly similar and confirmed the existing difference in metallicity between them, with Kronos being ∼0.230 dex more metal-rich than Krios. This substantial disparity suggests that previous chemical tagging studies may not have successfully recovered their shared origin (e.g. De Silva et al. 2006; Bovy 2016; Liu et al. 2016; Casamiquela et al. 2020, 2021).
In addition to these results, a significant difference in Li abundance between the components was found, with Kronos being 0.56 dex more abundant in Li than Krios. When comparing the (Krios-Kronos) abundances versus T_C, we observed a pronounced trend with T_C, a behaviour that persists when considering only the refractory elements. From these results, we primarily infer an excess of refractories in Kronos relative to Krios.
We conducted a comprehensive single-aperture photometry analysis using TESS data and the TLS code to search for potential planets orbiting either of the stars. No transits or eclipses were detected, and there were no indications of stellar activity. While no evidence of transiting planets around Kronos or Krios was found, it should be noted that non-transiting planetary-mass bodies may still exist in the system. Additionally, an analysis of the radial velocity variations of both components would help to shed light on this hypothesis.
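A minimal sketch of such a TLS transit search in Python is shown below, using the transitleastsquares package (Hippke & Heller 2019); the target identifier (Kronos is catalogued as HD 240430 in Oh et al. 2018) and the detrending choices are illustrative assumptions, not the exact pipeline used here.

import lightkurve as lk
from transitleastsquares import transitleastsquares

# Fetch and lightly detrend an available TESS light curve of Kronos.
lc = (lk.search_lightcurve("HD 240430", mission="TESS")[0]
        .download()
        .remove_nans()
        .flatten(window_length=901)   # simple detrending; choice is illustrative
        .remove_outliers(sigma=5))

model = transitleastsquares(lc.time.value, lc.flux.value)
results = model.power(period_min=0.5, period_max=20.0)

# A Signal Detection Efficiency (SDE) above ~8-10 is a common detection threshold.
print(f"Best period: {results.period:.4f} d, SDE = {results.SDE:.1f}")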
Different scenarios were considered to explain these results. We introduced, for the first time for this system, an atomic diffusion analysis, motivated by the 0.05 dex difference in log g found between the components, the limit beyond which Liu et al. (2021) consider that this phenomenon could affect the metallicity of the components. However, the characteristics of Kronos differ significantly from what its evolutionary model anticipates, suggesting that this scenario cannot fully account for the large difference in metallicity.
We also examined whether this difference had a primordial origin, considering the projected separation between the stars (∼11,277 au). Following the approach of Nelson et al. (2021), who found that comoving pairs exhibit metallicity differences of Δ[Fe/H] ∼ 0.09 dex, and given that both stars probably formed from the same cloud of gas and dust (Oh et al. 2018), we suggest that the metallicity difference between the components cannot be attributed solely to primordial inhomogeneities; an additional factor must contribute to this substantial disparity.
Planet-formation scenarios were also investigated. The T_C trend found in Figure 2, if interpreted as a deficiency of refractories in Krios, could originate in the formation of rocky planets (not yet detected), as proposed by Meléndez et al. (2009). We also searched for an IR excess that would indicate a dust disc around Krios capable of generating the observed effect, with no positive result. Additionally, the scenario proposed by Booth & Owen (2020) was analysed for this binary system for the first time, assuming a hypothetical Jupiter-sized planet orbiting Krios, in which case pressure traps that sequester refractory elements could generate the observed pattern. However, additional photometric and spectroscopic data are necessary to conduct a more detailed search for planets around Krios and to shed light on these hypotheses.
The last scenario analysed was planetary engulfment, a phenomenon whose characteristics closely match the results found for Kronos, both in the excess of [Fe/H] and in the excess of Li relative to Krios (Spina et al. 2021). For this hypothesis, we used the terra code to calculate the amount of rocky material Kronos would need to have ingested to produce the observed difference, obtaining ∼27.8 M⊕.
In conclusion, while the evidence appears to favour the engulfment hypothesis, it is crucial to acknowledge the complexities and uncertainties inherent to each scenario. Further investigation and exploration are therefore needed to achieve a more comprehensive understanding of the chemical anomalies and dynamics within this binary system.
Fig. 1: Differential abundance vs excitation potential (upper panel) and differential abundance vs reduced EW (lower panel) of Krios relative to Kronos. The black dots correspond to Fe i and the red triangles to Fe ii.
Fig. 2: Differential abundances (Krios-Kronos) vs T_C. The weighted linear fits to all elements and to refractories are represented as black and red lines, respectively.
Fig. 3: Lithium abundance vs age for a sample of solar analogues extracted from Martos et al. (2023). The orange diamonds represent Kronos and Krios with ages calculated using Y² isochrones, along with their respective metallicities. The orange triangle represents Kronos with a bulk metallicity of [Fe/H] = −0.01 dex. Similarly, the green diamonds and triangle represent Kronos and Krios with ages calculated using MESA isochrones.
Fig. 4: Portion of the detrended TESS light curves of Kronos (top) and Krios (bottom), using the 200-second cadence data of sector 58.
Fig. 5: Set of isochrones for an age range of 0.5-2.5 Gyr. Krios is plotted in black and Kronos in grey. The vertical and horizontal bars correspond to σ[Fe/H] and σ log g, respectively.
Table 1: Fundamental parameters obtained for Kronos and Krios.
Table 2: Differential abundances obtained for Kronos and Krios relative to the Sun, and for Krios relative to Kronos.
"Physics"
] |
Microscopic origin of molecule excitation via inelastic electron scattering in scanning tunneling microscope
Scanning-tunneling-microscope-induced luminescence has recently emerged as an incisive tool to measure molecular properties down to the single-molecule level. The rapid experimental progress is far ahead of the theoretical effort to understand the observed phenomena. This gap makes it difficult to quantitatively assign observed features of the fluorescence spectrum to the structure and dynamics of a single molecule. This letter reveals the microscopic origin of molecular excitation via inelastic scattering of tunneling electrons in the scanning tunneling microscope. The present theory explains the observed large asymmetry of the photon counting between the molecular luminescence intensities at positive and negative bias voltage.
Introduction -The physical limitations of conventional semiconductor devices have spurred the recent development of single-molecule photoelectronics [1-3], where incisive tools to probe single-molecule structure and dynamics are in great demand. Combining the high spatial resolution of the scanning tunneling microscope (STM) with the specificity of molecular fluorescence spectroscopy, STM-induced luminescence (STML) provides an ideal tool to study photon emission and dynamics at the single-molecule level [4,5]. Experimental breakthroughs have allowed direct observations of single-molecule properties, e.g., the dipole-dipole coupling between molecules [6-8], the energy transfer in molecular dimers [9], and Fano-like lineshapes [10-12]. Yet the lagging theoretical follow-up prevents us from conclusively understanding single-molecule properties through quantitative analysis of the experimental data.
This theoretical lag has led to inconsistencies between experimental interpretations. The origin of the asymmetric emission intensity at positive and negative bias between tip and substrate was assigned to a carrier-injection mechanism in [6], while it was attributed to inelastic electron tunneling (probably mediated by the localized surface plasmon) [13] for the same molecule, the single ZnPc molecule. The question persists even over whether the tunneling current is larger at positive bias or the reverse [6,13]. The inconsistency remains unresolved mainly due to the lack of a microscopic theory that conclusively determines the properties of the different tunneling mechanisms, which are mixed in ab initio calculations [14,15].
In this letter, we trace the microscopic origin of the inelastic electron scattering down to the basic Coulomb interaction between the tunneling electron and the single molecule. Our theory yields an asymmetry with larger tunneling current and photon counting rate at negative bias, which in turn excludes attributing the opposite asymmetry to inelastic electron scattering. This attempt should initiate the understanding of experimental features from their microscopic origin and stimulate further theoretical studies of STML.
Model -For clarity of notation, we sketch the design of the single-molecule STML in Fig. 1(a). A molecule, simplified for clarity as a dipole with positive (red) and negative (blue) charges, is deposited on a salt-covered metal substrate. A metal tip is positioned above the substrate plane. Both the tip and substrate are typically made of a noble metal, e.g., silver (Ag). With nonzero bias voltage, an electron (black) from one electrode excites the molecule via the Coulomb interaction during its tunneling through the vacuum and then enters the other electrode (see Fig. 1(b)). Subsequently, the excited molecule emits a photon by spontaneous emission, which is measured by photon counting to reveal molecular properties.

Figure 1. (Color online) (a) Schematic diagram of STML of a single molecule placed on a salt-covered metal plane. The STM tip apex is modeled as a sphere with radius R. Point A is the projection of the tip's center on the plane, and d is the distance between tip and plane. The position of the positive charge in the molecule (red) is set as the origin of the coordinate system. r and r_0 stand for the position vectors of the tunneling electron (black) and the negative charge in the molecule (blue), respectively. (b) The level diagram for the inelastic electron scattering mechanism at negative bias. The black lines denote the vacuum level at the two electrodes, and the red lines represent the initial and final electronic states. μ_t ≡ μ_0 + eV_b and μ_s ≡ μ_0 are the Fermi energies of tip and substrate at bias voltage V_b, where μ_0 is the Fermi energy of tip and substrate at zero bias.
The Hamiltonian for the setup is divided into three parts as H = H_el + H_m + H_el−m, where H_el is the Hamiltonian of the electron tunneling between tip and substrate, H_m is the Hamiltonian of the molecule, and H_el−m is the interaction between the tunneling electron and the single molecule. The Hamiltonian of the tunneling electron is H_el = −∇²/(2m_e) + V(r), where V(r) is the potential for the tunneling electron at position r = (x, y, z) and m_e is the electron mass. The wave functions are written for the different regions [16,17] as

H_el,t |φ_k⟩ = ξ_k |φ_k⟩,  H_el,s |ϕ_n⟩ = E_n |ϕ_n⟩,

where H_el,t (H_el,s) is the Hamiltonian of the free tip (substrate) obtained by neglecting the potential in the substrate (tip) region, and |φ_k⟩ (|ϕ_n⟩) is the eigenstate of the free tip (substrate) with eigenenergy ξ_k (E_n) at zero bias voltage. The detailed form of the wave functions is discussed in the supplementary material. The Hamiltonian of the molecule is simplified as a two-level system [18,19],

H_m = E_e |χ_e⟩⟨χ_e| + E_g |χ_g⟩⟨χ_g|,

where |χ_e⟩ (|χ_g⟩) is its excited (ground) state with energy E_e (E_g).
The key element for understanding the mechanism is the interaction between the molecule and the tunneling electron. For clarity, we consider the simple case of a single tunneling electron. The interaction, simplified from the Coulomb interaction, takes the form of a charge-dipole coupling,

H_el−m = −e μ·r / (4πε_0 |r|³),   (2)

where μ = −Ze r_0 denotes the effective electric dipole moment of the molecule, Z is the effective charge number, r_0 stands for the position vector of the center of the electrons in the molecule, and r represents the position vector of the tunneling electron. Here we have chosen the central position of the positive charge of the molecule as the origin of the coordinate system. The detailed derivation for a molecule with multiple chemical bonds [20] can be found in the supplementary material. Written explicitly in the basis of the wave functions of the single molecule and the tunneling electron, the interaction becomes

H_el−m = Σ_{n,k} N_{s,t}(n, k) |φ_k⟩⟨ϕ_n| ⊗ |χ_e⟩⟨χ_g| + h.c.,   (3)

where the transition matrix element N_{s,t}(n, k) ≡ ⟨φ_k|⟨χ_e| H_el−m |χ_g⟩|ϕ_n⟩ couples the electron transfer between the electrodes to the transition between the molecular ground and excited states. The electron-dipole interaction in Eq. (3) induces energy transfer between the tunneling electron and the molecule (the state of the two-level molecule is flipped).
In the vacuum region, the tip's wave function has the asymptotic spherical form

φ_k(r) = A_k e^{−κ_k|r−a|} / (κ_k|r−a|),

where a is the position of the tip's center of curvature and κ_k = √(−2m_e ξ_k) is its decay factor. The normalization coefficient A_k can be determined by first-principles calculations. This wave function is known as the s-wave, the simplest case for the tip [16,21]; contributions from other wave functions can be considered in the same way as in studies of STM [21]. The substrate's wave function ϕ_n(r) = B_n e^{−κ_n|z|} decays along the +z direction with decay factor κ_n = √(−2m_e E_n) [22,23] and normalization constant B_n. With these wave functions for tip and substrate, the transition matrix element can be written explicitly in terms of μ_x(y,z), the x (y, z) component of the molecular dipole moment. Without loss of generality, we choose the position of the tip's center of curvature along the x axis, i.e., a = (a_x, 0, d + R). Taking the decaying wave functions of tip and substrate into account, we integrate over the region between the planes z = 0 and z = d as an approximation. In the later discussion, we ignore the dependence of N_{s,t}|_{V_b, E_n→ξ_k} on the normalization constants A_k and B_n by taking them to be independent of the indices k and n.
Asymmetry of photon counting -To understand the asymmetry of the photon counting, we calculate the tunneling rate at negative bias (V_b < 0), illustrated in Fig. 1(b), where the Fermi level of the tip is lower than that of the substrate. The molecule is initially in its ground state and the tunneling electron in one of the substrate's eigenstates, i.e., |Ψ(t = 0)⟩ = |χ_g⟩|ϕ_n⟩. To first order in H_el − H_el,s and H_el−m, the time evolution of the system contains, besides the initial state, two terms that stand for elastic and inelastic tunneling, respectively. In obtaining this result, we applied the rotating-wave approximation to the Hamiltonian in Eq. (3). The corresponding tunneling amplitudes involve M_{n,k} ≡ ⟨φ_k|(H_el − H_el,s)|ϕ_n⟩, the transition matrix element of the elastic tunneling, and E_eg ≡ E_e − E_g, the optical gap of the single molecule. We focus on the inelastic tunneling process rather than on the elastic tunneling, which has been well explored in the earlier development of STM [16, 21-23]. The inelastic tunneling rate from |ϕ_n⟩ to |φ_k⟩ is J_{n→k} = d|c_{e,k}(t)|²/dt. The overall inelastic electron current at negative voltage follows by summing J_{n→k} over the electrode states, weighted by ρ_t(E) and ρ_s(E), the densities of states of tip and substrate at energy E, and by F_{μ_0,T}(E), the Fermi-Dirac distribution of the electrons in the tip or substrate at energy E, chemical potential μ_0, and temperature T. The energy-conserving delta function in this rate
rules out all tunneling processes that do not conserve energy. Without loss of generality, we consider tip and substrate made of the same metal (Ag). In STML experiments, the temperature of the ultrahigh-vacuum chamber is low enough, typically below 10 K [6-13, 24, 25], that the Fermi-Dirac distribution is approximately a Heaviside function, i.e., F_{μ_0,T}(E) = 1 for E < μ_0 and F_{μ_0,T}(E) = 0 for E > μ_0. Eq. (9) then shows that the inelastic tunneling current is nonzero only under the condition eV_b < −E_eg for the negative-bias case. For positive bias V_b > 0, the inelastic tunneling current is obtained by the same method, and the condition for a nonzero inelastic current is eV_b > E_eg. The equal threshold bias for a nonzero inelastic current at negative and positive bias is an important feature that distinguishes this mechanism from the carrier-injection mechanism, where the electron injection requires different voltages at negative and positive bias [6,26]. Combining Eqs. (9) and (10) yields the total inelastic tunneling current, Eq. (11). Photon counting of the molecular fluorescence is the quantity relevant for probing the properties of the single molecule. Once excited, the molecule decays to its lower state spontaneously with rate γ, and the photon counting rate is proportional to the inelastic current; the detailed derivation can be found in the supplementary material. In Fig. 2 we plot the photon counting rate as a function of the bias voltage between tip and substrate. The blue solid and black dashed lines show the relative emission intensity for a tip radius of curvature R = 0.5 nm and 1 nm, respectively. The Fermi energy of silver is μ_0 = −4.64 eV, and the density of states of silver can be found in [27]. Without loss of generality, we place the tip right above the molecule (a_x = 0) and take the molecular dipole along the z direction (μ_z ≠ 0 while μ_x = μ_y = 0). The distance between tip and molecule is d = 0.4 nm. As predicted by Eq. (11), the threshold bias voltages for a nonzero inelastic current at negative and positive bias are the same, i.e., |eV_b| > E_eg = 2 eV. The insets in Fig. 2 illustrate the mechanism of the inelastic electron scattering. Another important feature, illustrated in Fig. 2, is the asymmetry: the photon counting is larger at negative bias than at positive bias. This intensity asymmetry stems from the asymmetry of the eigenfunctions of tip and substrate. The tip's wave function φ_k(r) decays spherically with factor κ_k, while the substrate's wave function ϕ_n(r) decays along the +z direction with factor κ_n. As a result, the ratio between the transition matrix element at positive bias V_b and that at negative bias −V_b is e^{−(κ_k−κ_n)R} (Eq. (13)). Inserting Eq. (13) into Eq. (9), we obtain the ratio of the emission intensities, Eq. (14) (see Supplementary Material for details), which shows the characteristic asymmetry of inelastic electron tunneling, with a larger current at negative bias. This asymmetry is caused by the geometry of the tip and substrate and persists for different materials. In Fig. 3 we show the dependence of the asymmetry ratio R on the bias voltage, with both the analytical formula of Eq. (14) (red dashed line) and the numerical result calculated from the exact tunneling rates of Eqs. (9) and (10) (blue solid line).
The analytical formula agrees with the trend that the asymmetry of the photon counting increases with increasing bias voltage. The exponential decay of the ratio R as a function of the bias voltage, predicted by Eq. (14), should be tested against experimental data.
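To make the scaling concrete, the following minimal numerical sketch evaluates the quoted matrix-element ratio e^{−(κ_k−κ_n)R}. It assumes the relevant tip state lies eV_b below the silver vacuum-referenced Fermi energy while the substrate state sits at the Fermi level; these are simplifying assumptions, so the numbers are illustrative only and do not reproduce Fig. 3.

import numpy as np

HBAR = 1.054571e-34   # J s
M_E  = 9.109384e-31   # kg
EV   = 1.602177e-19   # J

def kappa(binding_energy_ev):
    """Vacuum decay factor (1/m) for a state bound by |E| below vacuum."""
    return np.sqrt(2 * M_E * binding_energy_ev * EV) / HBAR

mu0 = 4.64            # |Fermi energy| of Ag below vacuum (eV), from the text
R_tip = 0.5e-9        # tip radius of curvature (m)

for vb in [2.0, 2.5, 3.0]:  # |bias| in volts, above the 2 eV optical gap
    # Assumption: tip state shifted down by the bias, substrate state at the Fermi level.
    k_tip = kappa(mu0 + vb)
    k_sub = kappa(mu0)
    ratio = np.exp(-(k_tip - k_sub) * R_tip)
    print(f"|V_b| = {vb:.1f} V: matrix-element ratio (+/-) ~ {ratio:.2f}")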
With the theoretical predictions above, we revisit important features observed in recent experiments [6,12,13]. In the single-hydrocarbon fluorescence induced by STM [12], the observation that the emission intensity at positive bias is lower than that at negative bias is in line with our prediction. Although the asymmetric intensity of a single ZnPc molecule (much lower at positive bias than at negative bias) was attributed to the carrier-injection mechanism [6], we emphasize that the inelastic electron scattering mechanism may also play an important role in this feature. Changing the tip and substrate material from Ag to Au, Doppagne et al. [13] observed behaviour opposite to that in [6]: the emission of a single neutral ZnPc molecule at positive bias was 30 times more intense than at negative bias. Our theory definitively excludes the inelastic electron scattering mechanism as the origin of the asymmetric luminescence in [13].
In conclusion, we have derived the microscopic origin of molecular excitation via the inelastic electron scattering mechanism in single-molecule STML and obtained the corresponding emission intensity. We find that the inelastic electron scattering mechanism requires a symmetric threshold bias voltage for a nonzero inelastic current, equal to the optical gap of the two-level molecule; the energy window between the Fermi levels of the two electrodes must at least equal the optical gap of the molecule [26]. Importantly, we reveal an asymmetric emission intensity at negative and positive bias, due to the asymmetric forms of the wave functions at the two electrodes, and show that this asymmetry ratio decays with the tip's radius of curvature and with the bias voltage. Our model offers theoretical insight into molecular excitation in the inelastic electron scattering process, which had not been explored before.
Before closing, it is worth mentioning that the inelastic scattering mechanism is one of the three mechanisms proposed so far, and the photon counting obtained here is one part of the total emission intensity.

Supplementary Material -This document provides the detailed derivations and supporting discussions for the main content.
I. ELECTRONIC WAVE FUNCTIONS ON THE TIP AND SUBSTRATE
In this section, we give the details of the wave functions of the tunneling-electron Hamiltonian. The total potential V(r), illustrated in Fig. 1(a), is divided into two parts: the tip part V_t(r) (subfigure (b)) and the substrate part V_s(r) (subfigure (c)). We use the approximate method proposed by Bardeen in 1961 [1,2]. The Hamiltonians of the free tip and substrate are H_el,t = −∇²/(2m_e) + V_t(r) and H_el,s = −∇²/(2m_e) + V_s(r). For zero bias, V_b = 0, the eigenstates of the free tip and substrate are

H_el,t|_{V_b=0} |φ_k⟩ = ξ_k |φ_k⟩,  H_el,s|_{V_b=0} |ϕ_n⟩ = E_n |ϕ_n⟩,

where H_el,t(s)|_{V_b=0} represents the free tip (substrate) Hamiltonian at zero bias and |φ_k⟩ (|ϕ_n⟩) is the eigenstate of the free tip (substrate) with energy ξ_k (E_n). As the tip apex is modeled as a metal sphere, its wave function in the vacuum region has the asymptotic spherical form

φ_k(r) = A_k e^{−κ_k|r−a|} / (κ_k|r−a|),

where a is the tip's center of curvature and κ_k = √(−2m_e ξ_k) is its decay factor. A_k can be determined by first-principles calculations. In the vacuum region, we take the substrate's wave function as ϕ_n(r ≡ (x, y, z)) = B_n e^{−κ_n|z|}, where κ_n = √(−2m_e E_n) is the decay factor. For nonzero bias V_b ≠ 0, we treat the potential change induced by the bias voltage as a perturbation and obtain the solution up to first-order correction,

H_el,t |φ_k⟩ = ξ̃_k |φ_k⟩,  H_el,s |ϕ_n⟩ = Ẽ_n |ϕ_n⟩,

where H_el,t(s) represents the free tip (substrate) Hamiltonian at bias V_b and ξ̃_k ≡ ξ_k + eV_b (Ẽ_n ≡ E_n) is the corrected energy of the state |φ_k⟩ (|ϕ_n⟩). Here we neglect the change of the tip wave function induced by the applied voltage [3].
II. THE ELECTRON-MOLECULE INTERACTION
In this section, we give the detailed derivation of the effective electron-dipole interaction between a tunneling electron and a single molecule. The Coulomb interaction between the tunneling electron at position r and the molecule is a sum over the N bonds of the molecule, where, for the n-th bond, R_n (r_n) is the position of the positive (negative) charge with effective charge number Z_n, and R_0 ≡ Σ_{n=1}^{N} R_n Z_n / Σ_{n=1}^{N} Z_n denotes the center of the positive charge. In the case where the distance between the tunneling electron and the molecule is much larger than the size of the molecule, i.e., |r − R_0| ≫ |R_n − R_0|, |r_n − R_0| for all n, the coupling in Eq. (6) reduces to the dipole form, with μ = Σ_{n=1}^{N} Z_n e (R_n − r_n) = Ze (R_0 − r_0), Z ≡ Σ_{n=1}^{N} Z_n, and r_0 ≡ Σ_{n=1}^{N} Z_n r_n / Z, where μ denotes the total electric dipole moment of the molecule [4]. We set the position of the positive charge as the origin of the coordinate axes, i.e., R_0 = 0, which yields μ = −Ze r_0 and the electron-molecule interaction used in the main text.

Figure 2. Photon intensity ratio versus the radius of the tip. The blue solid line represents the result obtained by numerical calculation of the inelastic current and the red dashed line shows the result given in Eq. (13). Here, the positive bias is fixed to +2.5 eV.
"Physics"
] |
Insight into plant cell wall degradation and pathogenesis of Ganoderma boninense via comparative genome analysis
Background. G. boninense is a hemibiotrophic fungus that infects oil palms (Elaeis guineensis Jacq.), causing basal stem rot (BSR) disease and consequent massive economic losses to the oil palm industry. The pathogenicity of this white-rot fungus has been associated with cell wall degrading enzymes (CWDEs) released during the saprophytic and necrotrophic stages of infection of the oil palm host. However, there is a lack of information on the essentiality of CWDEs in the wood-decaying process and pathogenesis of this oil palm pathogen, especially at the molecular and genome levels.

Methods. In this study, comparative genome analysis was carried out using the G. boninense NJ3 genome to identify and characterize carbohydrate-active enzymes (CAZymes), including CWDEs, in the fungal genome. The Augustus pipeline was employed for gene identification in G. boninense NJ3, and the produced protein sequences were analyzed via the dbCAN pipeline and PhiBase 4.5 database annotation for CAZyme and pathogen-host interaction (PHI) gene analysis, respectively. Comparison of CAZymes from G. boninense NJ3 was made against G. lucidum, a well-studied model Ganoderma sp., and five selected pathogenic fungi for CAZyme characterization. Functional annotation of PHI genes was carried out using the Web Gene Ontology Annotation Plot (WEGO) and was used for selecting candidate PHI genes related to cell wall degradation in G. boninense NJ3.

Results. G. boninense was enriched with CAZymes and CWDEs in a similar fashion to G. lucidum, corroborating the lignocellulolytic abilities of both closely related fungal strains. The role of polysaccharide- and cell wall degrading enzymes in the hemibiotrophic mode of infection of G. boninense was investigated by comparing the fungal CAZymes with those of the necrotrophic Armillaria solidipes and A. mellea, the biotrophic Ustilago maydis and Melampsora larici-populina, and the hemibiotrophic Moniliophthora perniciosa. The profiles of the selected pathogenic fungi demonstrated that necrotizing pathogens, including G. boninense NJ3, exhibit an extensive set of CAZymes compared to the more CAZyme-limited biotrophic pathogens. Following PHI analysis, several candidate genes, including polygalacturonase, endo-β-1,3-xylanase, β-glucanase and laccase, were identified as potential CWDEs that contribute to plant-host interaction and pathogenesis.

Discussion. This study employed bioinformatics tools to provide a greater understanding of the biological mechanisms underlying the production of CAZymes in G. boninense NJ3. Identification and profiling of the fungal polysaccharide- and lignocellulose-degrading enzymes will further facilitate elucidation of the infection mechanisms mediated by the CWDEs of G. boninense. Identification of CAZymes and CWDE-related PHI genes in G. boninense will serve as the basis for functional studies of genes associated with fungal virulence and pathogenicity using systems biology and genetic engineering approaches.
INTRODUCTION
Cell wall degrading enzymes (CWDEs) are part of the carbohydrate-active enzymes (CAZymes) produced by plant pathogens for penetrating and degrading plant cell walls, and these CAZymes have been directly linked to devastating crop diseases (Zhang, Bruton & Biles, 2014; Somai-Jemmali et al., 2017; Gawade et al., 2017). Plant pathogenic fungi, especially among the phyla Ascomycota, Basidiomycota, Chytridiomycota and Zygomycota, have been reported to contain the highest numbers of CAZymes (Zhao et al., 2013; Kubicek, Starr & Glass, 2014). Differences in the composition and structure of the woody components are commonly mirrored by the types of lignocellulolytic enzymes produced by invading pathogenic fungi (King et al., 2011). In fact, many plant pathogens, particularly white-rot fungi, are well endowed with high copy numbers of CWDEs as compared to decay-feeding saprotrophs, and have been demonstrated to be highly competent producers of lignocellulolytic enzymes for host-specific attack and subsequent biomass degradation (King et al., 2011; O'Connell et al., 2012).
Ganoderma boninense is a causative agent of basal stem rot (BSR) disease, which besets the oil palm industry with devastating economic losses owing to the reduced lifespan and eventual death of infected oil palms (Chen et al., 2017). Owing to the toxicity and environmental issues of chemical pesticides, Ganoderma disease is currently managed mainly through cultural practices such as the removal of dead trees and infected stumps prior to or during replanting, but these strategies remain ineffective in preventing the spread of G. boninense in affected plantations (Hushiarian, Yusof & Dutse, 2013; Sahebi et al., 2015). Recent research on overcoming the Ganoderma disease has mainly aimed at understanding the oil palm molecular defense response via transcriptional analysis and the profiling of proteins and metabolites of infected oil palms (Nusaibah et al., 2016; Sahebi et al., 2017; Ho et al., 2018). The spread of G. boninense in oil palm plantations has been attributed to two main routes, namely spore dispersal and root contact with G. boninense-infected palm tissues (trunk, bole and roots) (Paterson, 2007; Chen et al., 2017). Importantly, root infection via cell wall degradation has been suggested as the main mode of Ganoderma infection, based on the spread of infection from the roots to the base of mature palm trees (Rees et al., 2009).
Lignocellulolytic enzymes of G. boninense have been shown to be predominant in instigating oil palm infection and cell wall-degrading processes (Goh, Ganeson & Supramaniam, 2014; Jumali & Ismail, 2017; Surendran et al., 2018). Direct roles of CWDEs in the hemibiotrophic infection of oil palm roots were first demonstrated via macroscopic examination of enzymatically degraded root outer cell layers and of invaded root and stem tissues in G. boninense-infected palms (Rees, 2006; Rees et al., 2009). In the initial stage of infection, the fungal mycelia behave as biotrophs that absorb plant nutrients by penetrating the oil palm root surface, culminating in rapid spread of growth in the lower stem of the oil palm. During the necrotrophic stage, the fungus attacks the host cell walls by excreting a host of enzymes, including CWDEs, leading to subsequent cell death and the multiplication of basidiocarps on decayed palm wood (Rees et al., 2009; Chong, Dayou & Alexander, 2017).
Despite the important roles of CWDEs in oil palm pathogenesis, information about the genomic features and mechanisms underlying the pathogenicity of G. boninense in oil palm is severely lacking. The necessity of establishing a reliable genetic model for understanding Ganoderma-oil palm interactions is highlighted by recent reports of draft genome sequences of different strains of G. boninense, which could facilitate the identification of CWDEs as pathogenicity factors essential for the successful invasion of oil palm cells (Sulaiman et al., 2018; Utomo et al., 2018). A deeper understanding of the genetic composition of CWDEs, as part of the carbohydrate-active enzymes of G. boninense, and of the biological mechanisms conferring the fungal ability to produce CWDEs is important for elucidating plant-pathogen interactions at the genome and molecular levels. Therefore, this work was devised to obtain genomic insight into the CWDEs of G. boninense through computational and comparative genome analysis. The genome sequence of G. boninense NJ3, a strain isolated from an Indonesian oil palm field, was used for the comparative genome analysis (Mercière et al., 2015). In this study, CAZymes, specifically auxiliary activity proteins (AA), glycosyltransferases (GT), carbohydrate binding modules (CBMs), carbohydrate esterases (CE), glycoside hydrolases (GH) and polysaccharide lyases (PL), were annotated in the G. boninense NJ3 genome using the CAZy annotation pipeline. Direct comparison of CAZymes was made with the close relative and model strain G. lucidum as the reference Ganoderma strain. Further comparison was carried out with five selected pathogenic Basidiomycetes in the search for genetic patterns underlying the hemibiotrophic infection strategy of G. boninense. Identification of the genes responsible for G. boninense CAZymes, including CWDEs, will broaden genomic understanding of the molecular mechanisms of the fungal wood-decaying abilities and oil palm pathogenesis.
Prediction of genes
The contigs from the G. boninense NJ3 assembly were processed using the Augustus gene prediction tool for the identification of genes (http://augustus.gobics.de/). Gene prediction was carried out using the command ''augustus --species=phanerochaete_chrysosporium gboninense_NJ3.fna > gboninense_NJ3_augustus.gff'', where gboninense_NJ3.fna is the assembled contigs file. Phanerochaete chrysosporium was chosen as the gene prediction model as it is the species closest to G. boninense available in Augustus, based on the NCBI taxonomy browser (https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi). A GFF file containing the predicted genes and their annotations was produced by Augustus, and the protein sequences were then extracted via the command ''getAnnoFasta.pl gboninense_NJ3_augustus.gff''. The predicted gene dataset of the G. boninense NJ3 genome has been deposited at the European Nucleotide Archive (ENA) under accession number PRJEB34805.
dbCAN pipeline analysis
The protein sequences produced by Augustus were searched against dbCAN, an HMM (Hidden Markov Model)-based database for carbohydrate-active enzyme annotation. dbCAN release 6.0 was downloaded in May 2018 and converted into an HMM-formatted database using hmmpress (part of the HMMER3 software package). hmmscan was run with the parameter --domtblout results.out.dm, and hmmscan-parser.sh (from dbCAN) was used to process the results table with an e-value cutoff of 1E-3.
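For readers reproducing this step, a minimal Python sketch for tallying CAZyme classes from the parsed hmmscan table is shown below. The file name is hypothetical, and the assumption that the HMM family name (e.g. ''GH5.hmm'') sits in the first tab-separated column follows common dbCAN parser output and should be checked against the actual files.

from collections import Counter
import csv, re

counts = Counter()
with open("results.out.dm.ps") as fh:           # hypothetical parsed-output name
    for row in csv.reader(fh, delimiter="\t"):
        if not row:
            continue
        family = row[0].replace(".hmm", "")      # e.g. "GH5", "CBM1", "AA1_1"
        m = re.match(r"(GH|GT|PL|CE|AA|CBM)", family)
        if m:
            counts[m.group(1)] += 1

for klass in ("GH", "CE", "PL", "GT", "CBM", "AA"):
    print(f"{klass}: {counts[klass]}")
print(f"total CAZymes: {sum(counts.values())}")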
Annotation by PhiBase 4.5 database
PhiBase 4.5 was downloaded for local analysis from http://www.phi-base.org. The raw sequence file was converted into a BLAST database using makeblastdb (part of the NCBI BLAST+ software package). A local blastp search was run to identify homologs of PhiBase 4.5 entries among the G. boninense NJ3 predicted protein sequences, with the parameters -outfmt 6 -max_target_seqs 1 -max_hsps 1 -evalue 0.1.
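A small sketch for parsing the resulting tabular report is given below; the column order is the standard -outfmt 6 default, and the input file name is hypothetical.

import csv

FIELDS = ["qseqid", "sseqid", "pident", "length", "mismatch", "gapopen",
          "qstart", "qend", "sstart", "send", "evalue", "bitscore"]

phi_hits = {}
with open("gboninense_vs_phibase.tsv") as fh:    # hypothetical blastp output
    for row in csv.DictReader(fh, fieldnames=FIELDS, delimiter="\t"):
        if float(row["evalue"]) <= 0.1:           # cutoff used in this study
            # -max_target_seqs 1 already limits hits; keep the first per query.
            phi_hits.setdefault(row["qseqid"], row["sseqid"])

print(f"{len(phi_hits)} predicted proteins with PhiBase 4.5 homologs")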
Gene ontology and WEGO chart
Gene ontology annotations of the PhiBase 4.5 homologs in G. boninense NJ3 were obtained using the Blast2GO 5.2.5 pipeline. Using the local blastp function in Blast2GO, the homolog sequences were searched against the NCBI NR (non-redundant) protein database, downloaded in April 2019, with an e-value of 0.1 and the number of BLAST hits set to 10; other parameters remained at default values. The gene ontologies were then mapped onto the sequences by matching the latest Blast2GO database with the BLAST results. Next, InterProScan 5.33-72 was run locally to obtain protein domain annotations, and the XML file produced was loaded into Blast2GO. Lastly, the annotation tool in Blast2GO merged and verified the gene ontologies obtained from the two annotation methods, with Blast2GO annotation parameters left at default values. Functional classification of PHI genes was carried out using the Web Gene Ontology Annotation Plot (WEGO) software (Ye et al., 2006). To generate the WEGO chart, the results from the Blast2GO annotation were exported in WEGO native format and uploaded to http://wego.genomics.org.cn/. Datasets of the G. boninense NJ3 Augustus gene annotation, GO annotation and protein IDs from the PhiBase 4.5 analysis are provided in the supplementary files.
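As an illustration of the export step, the sketch below collapses per-GO annotation rows into one-gene-per-line input. Both the two-column (gene, GO ID) export layout and the whitespace-separated WEGO native layout are assumptions here and should be checked against the Blast2GO export and the WEGO documentation.

from collections import defaultdict
import csv

gene2gos = defaultdict(set)
with open("blast2go_annotations.tsv") as fh:     # hypothetical export: gene<TAB>GO ID
    for row in csv.reader(fh, delimiter="\t"):
        gene, go = row[0], row[1]
        gene2gos[gene].add(go)

with open("gboninense_wego.txt", "w") as out:
    for gene, gos in sorted(gene2gos.items()):
        out.write(gene + "\t" + "\t".join(sorted(gos)) + "\n")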
Characterization of carbohydrate-active enzymes (CAZymes) in G. boninense
In this study, the draft genome of G. boninense NJ3 was used for identifying and characterizing the carbohydrate-active enzymes in the fungal genome. To identify CAZymes in G. boninense NJ3, the assembled dataset was processed using the Augustus pipeline to produce predicted peptide sequences for CAZyme analysis via the dbCAN pipeline. Comparative analysis for CAZyme characterization in G. boninense was carried out using the G. lucidum genome sequence as the reference Ganoderma strain, owing to the high quality of its genome sequence and the well-established genomics studies of this closely related strain. We hypothesized that G. boninense is enriched with a high number of CAZymes, similar to G. lucidum, that provide wood-degrading capabilities and reflect the disparate nutritional strategies of the hemibiotrophic G. boninense and the saprophytic G. lucidum, respectively.
Following analysis, a total of 755 CAZymes was identified in G. boninense NJ3, as compared to 489 CAZymes found in G. lucidum (Fig. 1). Overall, about 465 copies of cell wall degrading enzymes (CWDEs), comprising glycoside hydrolases (GH), carbohydrate esterases (CE) and polysaccharide lyases (PL), were found in the G. boninense NJ3 genome. Of these CWDEs, 348, 102 and 15 genes were found for GH, CE and PL, respectively (Table S1). The number of CWDEs found in G. boninense is comparatively higher than in G. lucidum (273 GHs, 30 CEs, 10 PLs). A richer but highly similar set of CWDEs was observed in G. boninense NJ3, enabling the degradation of woody structures such as hemicellulose and pectin for nutrient uptake and growth in a similar manner to G. lucidum, which harbors one of the richest sets of polysaccharide-degrading enzymes among the sequenced genomes of Basidiomycota fungi (Chen et al., 2012b). Interestingly, G. boninense NJ3 possesses a higher number of CEs, about 102 copies, as compared to G. lucidum with 30 copies; these enzymes are important for plant cell wall modification. In addition to polysaccharide-deacetylating CE enzymes, G. boninense NJ3 also harbors a diverse array of GHs that are crucial for the hydrolysis of the cellulose and hemicellulose components of plant biomass. GHs common in white-rot fungi, such as GH6 and GH7, and the universal cellulolytic CWDEs GH1, GH3 and GH5, for the degradation of cellulose, hemicellulose and pectin, were observed in both Ganoderma spp. Importantly, G. boninense possesses several GHs that were not found in G. lucidum, specifically the polysaccharide-acting GH109 (α-N-acetylgalactosaminidase), GH145 (L-Rh α-α-1,4-GlcA α-L-rhamnohydrolase), GH135 (α-1,3-galactosaminogalactan hydrolase) and GH131 (broad-specificity β-glucanase), the last of which degrades both cellulose and hemicellulose. In terms of polysaccharide-active enzymes, both Ganoderma spp. contained multiple copies of the pectic-acting PL8 and PL14, while PL4 (rhamnogalacturonan endolyase), PL12 (heparin-sulfate lyase) and PL15 (alginate lyase) were only found in G. boninense. Apart from CWDEs, both fungal genomes harbored other CAZymes, including carbohydrate binding modules (CBM), auxiliary activity enzymes (AA) and glycosyltransferases (GT), which are essential for lignin depolymerization and carbohydrate utilization from the host plant. During the wood-decaying process, access to the structural woody components is aided by CBMs, which form a two-domain structure together with the catalytic domains (CDs) of cellulases and increase the enzyme concentration on the substrate surfaces. Overall, a total of 290 copies of these other CAZymes was identified in G. boninense NJ3, as compared to 176 copies in G. lucidum; of these, 67 CBMs, 145 AAs and 78 GTs were identified in G. boninense. Both Ganoderma strains have a similar set of CBMs, except that CBM19 (chitin-binding) and CBM32 (pectin-binding) were unique to G. boninense while CBM12 (chitin-binding) was only found in G. lucidum. Although white-rot fungi have been associated with a lack of CBMs, G. lucidum and G. boninense contained 10 and 12 out of 16 total CBM families, respectively. It is notable that G. boninense possessed high copy numbers of CBM1, an important fungal CBM that uses cellulose and chitin as substrates for polysaccharide-degrading activities (Mello & Polikarpov, 2014; Várnai et al., 2014). Both fungi shared similar GTs, except that GT65 and GT41 were found only in G. boninense while GT31 was present only in G. lucidum.
The ability of Ganoderma spp. to utilize nutrients from plant tissues relies heavily on the synergistic actions of cellulolytic and ligninolytic enzymes, including the redox AA enzymes (Zhou et al., 2018). Laccase (AA1_A1), ferroxidase (AA1_A2), class II peroxidase (AA2), GMC oxidoreductase (AA3), radical-copper oxidase (AA5), 1,4-benzoquinone reductase (AA6) and iron reductase (AA8) were among the AA enzymes identified in both fungal genomes. From this comparative analysis, G. boninense was found to be endowed with significantly higher copy numbers of the redox enzymes, especially the lignin-acting laccase (AA1) and peroxidase (AA2) and the oxidoreductase (AA3) enzymes, as compared to G. lucidum.
Lignocellulolytic enzyme production has been well documented in white-rot Ganoderma spp. for wood decomposition and subsequent feeding and propagation on woody substrates (Silva, Melo & Oliveira, 2005; Paterson, 2007; Zhou et al., 2013). Apart from their wood-degrading enzyme capabilities and plant pathogenicity, Ganoderma species are generating much research interest for therapeutic applications through the production of bioactive polysaccharides and terpenoids such as ganoderic acid (Boh, 2013; Wu et al., 2013). Owing to its therapeutic and biotechnological potential, G. lucidum has been developed as a model medicinal mushroom through extensive biochemistry, genomics and genetic engineering research, and this saprophytic mushroom is indeed endowed with an extensive set of CAZymes encoded in its genome (Xu, Xu & Zhong, 2012; Chen et al., 2012b; Liu et al., 2012; Yu et al., 2012). In this study, the genome sequence of G. boninense NJ3, a pathogenic isolate from an oil palm plantation in Indonesia (Mercière et al., 2015), was employed to identify genes involved in the production of cell wall degrading and carbohydrate-active enzymes. By comparing the CAZymes of G. boninense NJ3 with those of the model G. lucidum, profiles of these closely related Ganoderma spp. can be acquired, especially regarding the cell wall degrading abilities of the lesser-studied G. boninense. Based on the results obtained, G. boninense NJ3 was found to be enriched with an extensive repertoire of CAZymes in a similar fashion to, but with significantly higher numbers of lignocellulose-degrading enzymes than, the non-pathogenic G. lucidum, underlining the essentiality of CAZymes in cell wall degradation for fungal growth and nutrient uptake. Differences in CAZyme characteristics, especially in CWDEs and polysaccharide-active AAs, can be linked to the nutritional strategy of either Ganoderma sp., providing genomic insight into, and characterization of, the plant cell wall degradation mechanisms of these industrially important fungi.
Profiling of CAZymes in selected phytopathogenic fungi
Following the characterization of CAZymes in G. boninense, the innate ability of this white-rot fungus to cause oil palm BSR disease was further investigated through comparative analysis with selected disease-causing Basidiomycetes. For this purpose, five phytopathogenic basidiomycetous fungi exhibiting biotrophic, hemibiotrophic and necrotrophic modes of plant infection were employed for comparison with G. boninense. The fungi of interest were Ustilago maydis (model biotrophic pathogen), Melampsora larici-populina (biotrophic poplar pathogen) and Moniliophthora perniciosa (hemibiotrophic cacao pathogen). The remaining two fungi were Armillaria solidipes and A. mellea, facultative necrotrophic fungi that cause root rot in many conifers and ornamentals, respectively (Kämper et al., 2006; Duplessis et al., 2009; Meinhardt et al., 2014; Koch et al., 2017). Biotrophs primarily derive nutrients from their hosts without killing them, while necrotrophs kill the plant and feed on the dead cells (Mendgen & Hahn, 2002). Hemibiotrophs, on the other hand, adopt an early asymptomatic biotrophic phase and then switch to a host-killing necrotrophic stage with distinct disease symptoms and decayed tissues (Horbach et al., 2011). Although each fungus may differ in host targeting and infection mechanisms, these plant pathogens all rely on an array of hydrolytic enzymes for the degradation of plant biomass, colonization and nutrient uptake, with or without killing the host. In this study, we hypothesized that pathogenic fungi with necrotizing abilities (necrotrophs and hemibiotrophs) would harbor distinct CAZyme profiles compared to non-necrotizing (biotrophic) fungi, which may be attributed to specific host preferences and interactions.
The CWDE profiles of all six pathogenic fungi are illustrated in Fig. 2. G. boninense NJ3 contained the glycoside hydrolases (GH) GH2 and GH10 for specialized hemicellulose degradation, in addition to the dual cellulose- and hemicellulose-degrading activities of GH1, GH3, GH5, GH12, GH51 and GH131. The ability of this oil palm pathogen to hydrolyse the pectin component is further provided by GH28, GH105 and the necrotroph-specific GH53 and GH78. The biotrophic U. maydis and M. larici-populina lack GH1, GH6, GH78 and GH95, which were prevalent in the necrotizing fungi (G. boninense NJ3, M. perniciosa, A. solidipes and A. mellea) (Fig. 2A). Additionally, U. maydis lacked GH7, which is common among pathogenic white-rot fungi. On the other hand, the biotrophic U. maydis and M. larici-populina possess GH26, which was not observed in the other four necrotizing pathogens investigated in this study. The lack of GH1, GH6 and GH78 is well documented in biotrophs, which generally harbor fewer plant cell wall degrading enzymes than necrotrophs and hemibiotrophs (Zhao et al., 2013; Li et al., 2017). The obligate necrotrophs (A. solidipes and A. mellea) and hemibiotrophs (G. boninense NJ3 and M. perniciosa) were evidently supplemented with GH3 and GH28 for cellulose, hemicellulose and pectin degradation. From the analysis, G. boninense NJ3 exhibited the highest copy numbers of GH18 (chitinase/endo-β-N-acetylglucosaminidase), GH43 (hemicellulase), GH79 (glucuronidase) and GH10 (xylanase), and harbored unique GH4 (glycosidase), GH89 (α-N-acetylglucosaminidase) and GH109 (α-N-acetylgalactosaminidase) for polysaccharide depolymerization in comparison to the other plant pathogens examined in this study. Taken together, these findings highlight the essentiality of several GHs, specifically GH3 and GH5, for cell wall degradation by phytopathogens, corroborating previous reports on the plant host infection interplays of phytopathogenic fungi (Zhao et al., 2013; Blackman, Cullerne & Hardham, 2014; Chang et al., 2016). In addition to cellulose and hemicellulose, some plants are enriched with pectins comprising homogalacturonan, xylogalacturonan or rhamnogalacturonan as external barriers against pathogen infection. In G. boninense NJ3, the cell wall-degrading GHs could work in tandem with the pectic-acting polysaccharide lyases PL8, PL12, PL14 and PL15. The PLs commonly found in pathogens, PL1 and PL3, were not observed in G. boninense NJ3, which is interesting considering the abundance of these PLs in the necrosis-causing M. perniciosa, A. solidipes and A. mellea (Fig. 2B). Pectin degradation by pectinolytic enzymes, particularly PL4, is common among the necrotizing fungi examined in this study except G. boninense NJ3, and this enzyme has been shown to be highly expressed during crop infection by the necrotrophic Rhizoctonia solani (Zheng et al., 2013; Chang et al., 2016). Although pectinases are important for cell wall degradation by fungi, the smaller complement of them in G. boninense NJ3 may indicate a substrate or host preference, specifically for monocotyledons, in contrast to dicotyledon-preferring pathogens, which have been associated with increased secretion of pectinases (Zhao et al., 2013; Loyd et al., 2018). Hemibiotrophic and necrotrophic fungi are thus well equipped with an extended set of CWDEs, enabling tailored and extensive production of cell wall degrading enzymes during infection.
In this study, all six pathogenic fungi were found to possess at least one copy of carbohydrate esterase 4 (CE4) among the polysaccharide-modifying enzymes in their genomes (Fig. 2C). The genome of G. boninense NJ3 was well represented with CE16, in addition to CE1 and CE12, which were also found in the necrosis-causing M. perniciosa, A. solidipes and A. mellea, while CE2, CE14 and high copy numbers of CE10 were found only in the pathogenic Ganoderma sp. CEs have been implicated in the first line of attack during fungal invasion via the removal of acetylated moieties of saccharides that form part of the plant protection system against hydrolytic enzymes (Ospina-Giraldo, McWalters & Seyer, 2010; Sista Kameshwar & Qin, 2018). The CE10 enzymes are involved in the degradation of the lignin and cellulosic components of the plant cell wall and have been found to be abundant in several pathogenic fungi, including Macrophomina phaseolina, Bipolaris cookei and Corynespora cassiicola (Islam et al., 2012; Zaccaron & Bluhm, 2017; Looi et al., 2017). In sum, the notable differences between the CWDE profiles of hemibiotrophic/necrotrophic and biotrophic fungi can be associated with the less aggressive nature of the biotrophic U. maydis and M. larici-populina, which adapt their hydrolytic enzyme production specifically to limit host cell wall damage, thereby supporting their host-nutrient-dependent growth (Kämper et al., 2006; Duplessis et al., 2009; Olson et al., 2012).
Profiling of the remaining CAZymes in the six pathogenic fungi was carried out to compare and establish the association between the mode of infection and the types of genes present. Generally, glycosyltransferase (GT) enzymes are mainly responsible for cell wall formation, in contrast to the more abundant carbohydrate-hydrolysing GHs in the fungal genomes. As shown in Fig. 3A, the six pathogenic fungi harbored highly similar sets of GTs, while GT71 was unique to the biotrophs and GT65 was found only in G. boninense NJ3. Metabolism of the starch components of plant biomass is linked to the presence of starch-active carbohydrate binding modules (CBMs), including CBM1, CBM20, CBM48 and CBM50. Only CBM48 and CBM50 were found in all of the studied pathogens, while CBM1 was missing in biotrophic U. maydis and M. larici-populina and, in turn, was most highly represented in the G. boninense NJ3 genome (Fig. 3B). The occurrence of CBMs is often associated with facilitating the hydrolytic activities of amylolytic GHs such as GH13 and GH15 by increasing cell-substrate attachment and degradation (Chen et al., 2012a). In particular, CBM1-containing proteins have been found mainly in necrotrophs and hemibiotrophs, where they promote cellulose hydrolysis, and have been shown to elicit plant defense responses, which is detrimental for fungi with a biotrophic lifestyle (Jones & Ospina-Giraldo, 2011; Klosterman et al., 2011; Larroque et al., 2012).
For lignin decomposition, all six pathogenic fungi possessed auxiliary activity (AA) enzymes of families AA1, AA3, AA5 and AA6, which encode ligninolytic and redox enzymatic activities, while the phenolic-active AA4 (vanillyl-alcohol oxidase) was found only in G. boninense NJ3 (Fig. 3C). Another important family, AA9, classified as lytic polysaccharide monooxygenases (LPMOs), is involved in lignocellulosic degradation by oxidizing cellulose in synergistic reactions with laccase and lignin-modifying peroxidase enzymes. The biotrophic fungi M. larici-populina and U. maydis harbored smaller sets of ligninolytic AAs, with about 36 (M. larici-populina) and 23 (U. maydis), as compared to the other studied pathogens. These biotrophic strains contained fewer copies of AA1, which encodes the laccase and multicopper oxidase enzymes involved in degradation of the lignin barrier. All pathogens possessed AA9 except U. maydis, while AA2 and AA8 were absent in both biotrophic fungi. The high copy numbers of AA9 observed in the necrotrophs and hemibiotrophs studied may indicate the importance of these enzymes during host attack and cell wall deformation. The identification of the cellulose-cleaving oxidoreductase LPMOs of the AA9 family was previously associated with improved fungal cellulase and wood-decaying activities in the presence of reducing agents (Dimarogona, Topakas & Christakopoulos, 2013; Karnaouri et al., 2014). These auxiliary redox enzymes play an important role in completing the hydrolysis of lignin by wood-decomposing saprotrophic fungi and have been associated with increased virulence of parasitic fungi (Hatakka, 1994; Levasseur et al., 2013; Janusz et al., 2017). The abundance of AAs may contribute to the enhanced ability of G. boninense NJ3 to invade and penetrate lignin and acetylated saccharides as it switches from biotrophic to necrotrophic parasitism, which involves overlapping biological processes, as found in the forest pathogen and wood decayer Heterobasidion annosum sensu lato (Olson et al., 2012). The production of diverse ligninolytic enzymes by Ganoderma is therefore important for fungal proliferation on plant tissues, especially in depolymerizing the recalcitrant lignin barrier (Hu et al., 2017; Sarah Jumali & Ismail, 2017; Zhou et al., 2018). Expression patterns and production of carbohydrate-acting enzymes have been demonstrated to correlate with the fungal mode of interaction with host plants. Transcriptome analysis of G. boninense-treated oil palm showed very high expression of a host of distinct up-regulated genes encoding CAZymes, from lignin-degrading AAs (laccase and AA2 manganese peroxidase) and carbohydrate-active CBMs and CEs (CBM13, CE10, CE9) to cell wall-hydrolyzing exo-β-1,3-glucanase, chitinase and polygalacturonase, when compared to untreated and Trichoderma harzianum-treated control samples (Ho et al., 2016). Similar patterns of highly expressed CAZyme transcripts were observed in necrotrophic A. solidipes, which exhibited high numbers of homologs of GH18, GH47, CE10, CE4 and polygalacturonase following plant-fungus inoculation (Ross-Davis et al., 2013). Higher expression of cell wall degrading enzymes (GH, PL, GT) was observed in necrotrophic Leptosphaeria biglobosa as compared to its hemibiotrophic counterpart, L. maculans, which accumulated more CBM transcripts during the early stage of plant infection (Lowe et al., 2014). Similar CAZyme interplays were suggested in the early infection stages of G.
boninense, aimed at overcoming oil palm host defense response mechanisms, including the hypersensitive response (HR), leading to the switch from the biotrophic stage to more aggressive necrotrophic attacks culminating in host cell death and successful invasion (Bahari et al., 2018). A closer look at the CAZymes in the selected pathogenic fungi therefore enables genome-wide profiling of carbohydrate-active enzymes that are distinct and correlated with the fungal mode of infection. Importantly, G. boninense NJ3 harbored a distinct set of cell wall degrading and polysaccharide depolymerization enzymes suited to infecting the monocot oil palm host through a hemibiotrophic lifestyle.
Potential pathogenicity genes among CAZymes of G. boninense NJ3
Comparative CAZyme analysis of the selected phytopathogens indicated a correlation between the fungal nutritional strategy and the profiles of carbohydrate-active enzymes essential for plant host cell wall degradation and nutrient consumption. Considering the lack of information on genes related to the pathogenicity of G. boninense, further genome-wide analysis of the fungal genome was carried out using the protein sequences in the Pathogen-Host Interaction database (PHI database), functionally classified according to molecular function, biological process and cellular component via WEGO analysis. A total of 5,099 annotated PHI genes were obtained from the WEGO analysis, of which membrane (1,682; 24.8%) and metabolic process (2,903; 42.8%) were the most highly represented in the cellular component and biological process categories, respectively (Fig. 4). In the molecular function category, the PHI genes were predominantly annotated with catalytic activity (3,337; 49.2%), including the CAZyme-related polygalacturonase (GO:0004650), cellulase (GO:0008810) and endo-1,4-beta-xylanase (GO:0031176).
Considering the prevalence of carbohydrate-active enzymes and the high percentage of PHI genes with hydrolase activity, we hypothesized that some of the CAZymes may be directly involved in plant pathogenesis via cell wall degradation by the secreted enzymes. As shown in Table 1, several genes of G. boninense NJ3 share PHI homologs with lignin depolymerization and cell wall degrading enzymes, specifically a pectic-acting polygalacturonase (PG)-coding homolog (PHI id: 4879), endo-1,4-beta-xylanase GH10 (PHI id: 2209), β-glucanase Eng1 (PHI id: 6265) and laccase LCC2 (PHI id: 552). CWDEs, including pectinases, glycosyl hydrolases and laccases, serve as the primary weaponry of fungal attack, rendering the plant cell wall less compact and more permeable for subsequent digestion by cellulase and hemicellulase enzymes (Chu et al., 2015). Importantly, PG is one of the first enzymes secreted by pathogenic fungi upon contact with the plant cell wall, and these pectinases have been widely studied for their role in plant pathogenesis, especially necrosis and rotting in infected plants (De Lorenzo & Ferrari, 2002; Kubicek, Starr & Glass, 2014). This finding corroborates the polygalacturonase activities reported in the transcriptome profile of G. boninense-infected oil palm (Elaeis guineensis), in which the PG transcript was elevated as compared to none observed in the unaffected control oil palm (Ho et al., 2016). It can be postulated that pectin-acting PG works synergistically with hemicellulases during Ganoderma infection, in a similar fashion to the necrotrophic infection and virulence of many phytopathogens, including Fusarium spp., the main causative agents of vascular wilt and head blight diseases in important crops (Gómez-Gómez et al., 2002; Chen et al., 2012b; Paccanaro et al., 2017).
Hemicellulosic digestion activities of G. boninense were previously demonstrated to assist fungal growth on oil palm, supporting the association of these PHI genes as potential pathogenicity factors in oil palm infection (Surendran et al., 2017; Surendran et al., 2018).
On the other hand, no cutinase (CE5) homolog was found in the CAZymes and PHI-base analysis of G. boninense NJ3, suggesting a lack of cutinase-mediated cell wall modification during the wood-decaying process, which may be compensated for by the high numbers of oxidative AAs and hydrolytic GHs found in the Ganoderma spp. examined in this study (Table S2). CE5 was found not to be prevalent in wood-decaying basidiomycetes, including the pathogenic H. irregulare and Fomitiporia mediterranea, which harbored multiple copies of ligninolytic peroxidase enzymes (Floudas et al., 2012; Zhao et al., 2013). Expression of cutinase was also reported to be non-essential during the pathogenesis of other necrotizing pathogens such as F. solani f. sp. pisi and Botrytis cinerea (Stahl & Schafer, 1992; Van Kan et al., 1997; Schouten et al., 2002; Zhao et al., 2013). Combined actions of ligninolytic and cellulolytic enzymes, including laccase and endoglucanase, were previously shown to be directly involved in the wood decaying and infection processes of wheat and cacao by necrotrophic F. graminearum and Moniliophthora roreri, respectively (Meinhardt et al., 2014). Transcriptome analysis of Ganoderma-infected oil palm seedlings demonstrated the presence of multiple copies of laccase transcripts, as compared to none observed in samples treated with the beneficial fungus T. harzianum, indicating the important role of cell wall degradation in oil palm infection (Ho et al., 2016; Ho et al., 2018). The identification of these cell wall degrading PHI genes further supports the hemibiotrophic mode of infection of G. boninense, conferred by the fungal genotypic capability to produce a plethora of carbohydrate-acting enzymes. Overall, the comparative genome analysis employed in this study succeeded in characterizing carbohydrate-active enzymes and identifying CWDE genes involved in plant cell wall degradation and pathogenesis of G. boninense. Further genome analysis of G. boninense strains can be carried out with the recently reported draft genome of the G. boninense G3 strain isolated from Indonesia (Utomo et al., 2018). The correlation between fungal pathogenicity and CWDE production, among other factors, can be further validated via targeted transcriptome analysis and gene expression profiling of targeted genes (Isaac et al., 2018). Functional studies of the cell wall degrading enzymes in G. boninense should be pursued for a greater understanding of the essentiality of this enzymatic capacity in fungal pathogenesis.
CONCLUSIONS
In this study, comparative genome analysis succeeded in identifying carbohydrate-acting and cell wall degrading enzymes in hemibiotrophic G. boninense NJ3. The pathogenic G. boninense NJ3 genome contained an abundance of CAZymes and shared many similar sets of CAZymes with the closely related G. lucidum; the differences between the gene sets can be attributed to the different nutritional strategies of the two Ganoderma spp. Necrotizing fungal pathogens, including G. boninense NJ3, exhibited distinct CAZyme profiles as compared to their non-necrotizing counterparts, which can be correlated with host preference and parasitic lifestyle. Several CWDE-related genes, including polygalacturonase and laccase, were identified from the PHI analysis and could directly contribute to fungal pathogenesis, especially through degradation of the plant cell wall. These findings provide fundamental knowledge on the fungal genetic ability and capacity to secrete polysaccharide and cell wall degrading enzymes. Greater insight into the fungal phenotype can be obtained through future studies involving functional and gene expression analysis of specific genes in fungal carbohydrate metabolism.
• Nor Azlan Nor Muhammad performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft.
Data Availability
The following information was supplied regarding data availability: The raw data for all CAZyme sequences are available in Tables S1 and S2. The Augustus gene annotation is available at Figshare: Ramzi, Ahmad Bazli (2019)
Supplemental Information
Supplemental information for this article can be found online at http://dx.doi.org/10.7717/peerj.8065#supplemental-information. | 7,788.2 | 2019-12-18T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Increasing the Density of Laboratory Measures for Machine Learning Applications
Background. The imputation of missingness is a key step in Electronic Health Records (EHR) mining, as it can significantly affect the conclusions derived from the downstream analysis in translational medicine. The missingness of laboratory values in the EHR is not at random, yet imputation techniques tend to disregard this key distinction. Consequently, the development of an adaptive imputation strategy designed specifically for the EHR is an important step in addressing data imbalance and enhancing the predictive power of modeling tools for healthcare applications. Method. We analyzed the laboratory measures derived from Geisinger’s EHR on patients in three distinct cohorts—patients tested for Clostridioides difficile (Cdiff) infection, patients with a diagnosis of inflammatory bowel disease (IBD), and patients with a diagnosis of hip or knee osteoarthritis (OA). We extracted Logical Observation Identifiers Names and Codes (LOINC), from which we excluded those with 75% or more missingness. Comorbidities, primary and secondary diagnoses, and active problem lists were also extracted. The adaptive imputation strategy was designed based on a hybrid approach: the comorbidity patterns of patients were transformed into latent patterns and then clustered, and imputation was performed on clusters of patients for each cohort independently to show the generalizability of the method. The results were compared with imputation applied to the complete dataset without incorporating the information from comorbidity patterns. Results. We analyzed a total of 67,445 patients (11,230 IBD patients, 10,000 OA patients, and 46,215 patients tested for C. difficile infection). We extracted 495 LOINC and 11,230 diagnosis codes for the IBD cohort, 8,160 diagnosis codes for the Cdiff cohort, and 2,042 diagnosis codes for the OA cohort based on the primary/secondary diagnoses and active problem lists in the EHR. Overall, the greatest improvement from this strategy was observed when the laboratory measures had a higher level of missingness. The best root mean square error (RMSE) difference for each dataset was recorded as −35.5 for the Cdiff, −8.3 for the IBD, and −11.3 for the OA dataset. Conclusions. An adaptive imputation strategy designed specifically for the EHR that uses complementary information from the clinical profile of the patient can be used to improve the imputation of missing laboratory values, especially when laboratory codes with high levels of missingness are included in the analysis.
Introduction
Given the complexity and high dimensionality of Electronic Health Records (EHR), the need for imputation is an inevitable aspect of any study that attempts to use such data for downstream analysis or for building advanced machine learning models for clinical decision support systems. The EHR, like any other administrative dataset, is not designed for research purposes, even though the breadth and depth of the information can be used to improve care at many levels [1]. Furthermore, the level and extent of missing values in healthcare systems are typically not at random. Three main categories explain the missingness in clinical settings [2,3]: incompleteness, inconsistency, and inaccuracy. These can capture a variety of situations, including the following: the patient could have been cared for outside of the healthcare system where the data are collected, the patient did not seek treatment, the health care provider did not enter the information, the patient expired, or the value was not needed.
Given the complexity of clinical data and the advanced analytics that can be applied to such data, it is important to account for any sources of bias in the data that will be used to drive predictive models. Imputation is an example of data preprocessing that could lead to biased results. Furthermore, excluding variables or patients with a high level of missingness can also introduce bias and reduce the scope of the study. In a recent review article, 85 out of 316 studies reported some form of missing data, and only 12 studies actively handled the missingness; as the authors showed, the majority of researchers exclude incomplete cases, causing biased outcomes [4]. Furthermore, imputation could boost the statistical power for data-poor patients, who tend to be minorities and low-income patients with more restricted access to primary and specialty care and rehabilitation programs.
Imputation has been an ongoing solution in many fields, but only recently has research focused on medical applications. Twelve different imputation techniques applied to laboratory measures from EHRs were compared in [5]. In general, the authors found that Multivariate Imputation by Chained Equations (MICE) and softImpute consistently imputed missing values with low error [5]; however, in that study, the analysis was restricted to the 28 most commonly available variables. In another study, the authors assessed the different causes of missing data in EHR data and identified these causes as a source of unintentional bias [6]. A comparative analysis of three methods of imputation (a Singular Value Decomposition (SVD)-based method (SVDimpute), weighted K-nearest neighbors (KNNimpute), and row average) for DNA microarrays showed that, in general, the KNN and SVD methods surpass the commonly accepted solutions of filling missing values with zeros or row averages [7]. However, comparing imputation for clinical data with that for DNA microarrays can be misleading: the missingness in a DNA microarray is likely at random, arising from technical challenges, unlike missingness in the EHR. In another study, fuzzy clustering was integrated with a neural network to enhance the imputation process [8].
Research has also been done to evaluate imputation methods for non-normal data [9]. Using simulated data from a range of non-normal distributions and a missingness level of 50% (missing completely at random or missing at random), it was found that the linearity between variables could be used to determine the need for transformation of non-normal variables. In the case of a linear relationship, transformation can introduce bias, while a nonlinear relationship between variables may require an adequate transformation to accurately capture the nonlinearity. Furthermore, many of the techniques are optimized for smaller levels of missingness (the most commonly available measurements), yet most clinical datasets (including EHRs) have a significant level of missingness for many important variables that are routinely used for diagnosis. To address this problem, machine learning methods have also been proposed [10]. There are more examples of imputation applied to simulated than to real-life EHR data, and few studies have focused on imputing laboratory values. For instance, Ford E. and colleagues [11] proposed using logistic regression models with and without Bayesian priors representing the rates of misclassification in the data; however, in that study, the authors focused on misclassified diagnoses rather than laboratory values. The challenges of imputation for EHRs are unique, and if left unaddressed, the utility of the data becomes limited [12]. Consequently, even though, for smaller targeted studies, it could be possible to integrate additional modalities or perform an analytical evaluation through chart review to determine a likely cause of missingness, for larger studies this becomes infeasible. For example, the missingness level for very important variables, such as hemoglobin A1C or HbA1c (LOINC ID: 17856-6), a common biomarker for diabetes, can easily reach 50% or more in many realistic large datasets. Lastly, in a more recent study, the integration of genetic and clinical information was shown to improve the imputation of data missing from Electronic Health Records [13]; however, genetic data integrated with the EHR are still scarce.
Finally, given the complexity and the scale of the problem, in many studies MICE [14] remains the method of choice. The MICE fully conditional specification (FCS) algorithm imputes multivariate missing data on a variable-by-variable basis [15]. An imputation model is specified for each incomplete variable, and the imputation of missingness in one variable is conducted iteratively based on the other variables. Variations of MICE have also been proposed [16]; however, imputation of EHR data poses its own challenges, especially when targeting less commonly measured variables. Nonetheless, given the high level of redundancy and the presence of highly correlated entities in the EHR, imputation by MICE still performs relatively well for large clinical datasets. A comprehensive overview of handling missing data in the EHR is presented in [12].
In this study, we created three unique cohorts from EHR data, with varying sizes and heterogeneity, and developed a hybrid imputation strategy that we applied to these cohorts. We selected the inflammatory bowel disease cohort because of its heterogeneity and the fact that a clear understanding of IBD's risk factors is still lacking. We selected the Clostridioides difficile cohort because an understanding of recurrent infection is important, and the existing data from the EHR can help us identify clinical biomarkers. Finally, we created the osteoarthritis (OA) cohort to test the limits of this model, as an OA diagnosis is not based on any laboratory measurement known today. Our imputation model was based on using comorbidity information to cluster patients prior to the imputation of their laboratory values.
Methods
In the following section, we will (1) describe our cohort definition and data extraction for the laboratory values and comorbidities from our EHR data warehouse and (2) outline our imputation design.
Study Cohort
The cohort in this study consisted of 67,445 patients from the Geisinger Health System with three different phenotypes. This study was exempted by the Geisinger Institutional Review Board for using deidentified information.
Clostridioides difficile (Cdiff) infection case and control cohort: Clostridioides difficile (C. difficile) is an anaerobic, Gram-positive, spore-forming bacterium and a major cause of intestinal infection and antibiotic-associated diarrhea. Toxins are the major virulence factors of C. difficile [17]. Toxins A (TcdA) and B (TcdB) are large, secreted glucosyltransferase proteins that target intestinal epithelial cells and disrupt the epithelial barrier, leading to secretory diarrhea. The diagnosis of C. difficile at Geisinger is captured and documented by Polymerase Chain Reaction (PCR) confirmation, which is highly sensitive; the latter is also considered the gold standard by the eMERGE algorithm for EHR mining [18]. We identified the C. difficile cohort, which includes patients tested for C. difficile, from the EHR of the Geisinger Health System. The cohort includes both cases and controls. Cases are defined as having laboratory-positive PCR test results; controls are patients tested for C. difficile with negative PCR test results. The case/control ratio is 1:8. We are interested in the combined case and control cohort, since patients tested for C. difficile, irrespective of their test results, share some of the signs and symptoms (such as diarrhea); furthermore, using a combined case and control cohort increases our sample size, an important factor for imputation, while providing a framework for building predictive models that can benefit from the integration of a large number of laboratory-based features.
Inflammatory Bowel Disease (IBD) cohort: We identified the IBD cohort from the EHR of the Geisinger Health System. Inclusion criteria for this cohort were based on the diagnoses recorded for patients during their visits and admissions, and on currently active problem list entries, using the ICD9 and ICD10 codes for Crohn's disease (CD) and ulcerative colitis (UC) (see Table A1 in Appendix A).
To ensure higher fidelity of the diagnosis in the EHR, qualifying criteria required two or more outpatient encounters, one or more inpatient admissions, or an entry in the problem list with an active flag.
Osteoarthritis (OA) cohort: We identified an osteoarthritis (OA) cohort from the EHR of the Geisinger Health System; the cohort includes patients with a knee or hip OA diagnosis, either primary or secondary (see Table A1 in Appendix A for the OA diagnosis ICD codes).
Data Extraction
We extracted clinical laboratory measurements for these cohorts using the Logical Observation Identifiers Names and Codes (LOINC) system. For comorbidities, we extracted all the diagnosis codes for all patients based on the ICD9 and ICD10 codes. Comorbidity data included details from outpatient visits, inpatient admissions, and problem lists; the latter were used to capture conditions identified outside of the Geisinger Health System but discussed and assessed during the patient's care management. We excluded laboratory codes with more than 75% missingness. To further clarify, in this study, missingness is defined as the laboratory measure "not resulted". Therefore, if an order was placed but the results were not available (or not valid), we considered that a missing value. We analyzed the data in three batches, including only laboratory measures that have, at most, (a) 25% missingness, (b) 50% missingness, and (c) 75% missingness.
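As a minimal sketch of this screening step, assuming the laboratory results have been pivoted into a hypothetical wide patient-by-LOINC table (the file name, column names, and pandas workflow below are illustrative assumptions, not the authors' pipeline), the exclusion threshold and the three batches might be computed as follows:

```python
import pandas as pd

# Hypothetical wide table: one row per patient, one column per LOINC code,
# with NaN where a laboratory measure was never resulted for that patient.
labs = pd.read_csv("labs_wide.csv", index_col="patient_id")

# Fraction of patients with no resulted value, per laboratory code.
missingness = labs.isna().mean()

# Exclude codes with more than 75% missingness, then form the three batches.
labs = labs.loc[:, missingness <= 0.75]
batches = {
    "25pct": labs.loc[:, missingness[labs.columns] <= 0.25],
    "50pct": labs.loc[:, missingness[labs.columns] <= 0.50],
    "75pct": labs,  # every code kept after the 75% exclusion
}
```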
Data Processing
Quality Control (QC) and outlier detection strategy: Geisinger has implemented a rigorous process to continuously extract, transform, organize, and store EHR data and remove erroneous entries for research purposes. For example, we currently have access to quality-controlled laboratory values with reconciled units. The median laboratory value for each patient was calculated and used in this study. It is important to mention that, especially for less common laboratory values, measurements are infrequent and the window between the first and last measurement per patient is relatively narrow. We analyzed the frequency patterns and report the results in the descriptive section.
As part of the added data processing and outlier detection and removal, the distribution of each laboratory value was analyzed and fit to a tri-modal Gaussian distribution model (see Equation (1)). The rationale for using this strategy, as opposed to assuming normality, is driven by the nature of the laboratory measures. Laboratory orders, especially those with a higher level of missingness, are typically missing not at random (MNAR), and there are mainly three groups of patients for whom a measurement is recorded: those with higher than average measures, those with lower than average measures, and those with average measurements. However, the average measurement is not necessarily associated with the largest group in all cases, especially for laboratory measures that are specific to a phenotype, such as iron-binding capacity, which is ordered only if the physician needs that information to make a diagnosis or management decision. Two cut-off values are created to filter outliers based on the three-distribution model. This automated process for generating data-driven cut-off values is proposed for large-scale data mining, where limited manual curation is applied during data preparation and preprocessing.
f(x) = Σ_{i=1}^{3} a_i exp(−(x − µ_i)² / (2σ_i²))    (1)

where µ_i is the mean, σ_i is the standard deviation, and a_i is the amplitude of the i-th Gaussian component. The lowest boundary to filter out outliers is set to c_low = max(min(µ_1 − 3σ_1, µ_2 − 3σ_2, µ_3 − 3σ_3), 0), and the highest boundary is set to c_high = max(µ_1 + 3σ_1, µ_2 + 3σ_2, µ_3 + 3σ_3). Data processing of the comorbidity dataset was performed to remove noise by excluding ICD9/10 codes that were recorded only once in a patient's chart (rule of 2). The resulting matrix was then converted to binary form to represent the presence or absence of each ICD9/10 code for each patient. This is important, since the count does not necessarily correlate with the severity or duration of the condition. A binary comorbidity matrix for each cohort was therefore created for imputation modeling.
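As an illustration of the outlier step above, one might fit the tri-modal model with scikit-learn's GaussianMixture and derive the cut-offs of Equation (1). This is a minimal sketch; the use of GaussianMixture, rather than the authors' actual fitting procedure, is an assumption made here for clarity.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def outlier_bounds(values):
    """Fit a three-component Gaussian mixture to one laboratory measure
    and return the (c_low, c_high) cut-offs described in the text."""
    x = np.asarray(values, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=3, random_state=0).fit(x)
    mu = gmm.means_.ravel()
    sigma = np.sqrt(gmm.covariances_.ravel())  # 1-D data: one variance per component
    c_low = max(float(np.min(mu - 3 * sigma)), 0.0)
    c_high = float(np.max(mu + 3 * sigma))
    return c_low, c_high

# Example: keep only values inside the accepted range.
# lo, hi = outlier_bounds(lab_values)
# filtered = lab_values[(lab_values >= lo) & (lab_values <= hi)]
```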
Data Abstraction and Imputation Strategy
The comorbidity dataset was used to compute an encoding matrix for each dataset (Cdiff, OA, and IBD) using singular value decomposition (Equation (2)).
A_PT_ICD_cohort = U S V^T    (2)

where A_PT_ICD_cohort is the matrix encoding the presence or absence of all ICD9/10 codes for all patients in each dataset, U is an m × m square matrix, S is an m × n diagonal matrix with m rows and n columns, and V is an n × n square matrix. The columns of V are eigenvectors of A^T A, and the columns of U are eigenvectors of A A^T. The diagonal elements of S are the square roots of the eigenvalues of A^T A (equivalently, of A A^T). The encoding matrix was then used to create different levels of data abstraction by retaining only 100 or 1000 of the encodings via dimensionality reduction (Equation (3)) for each dataset. We used these predefined cut-off values based on our preliminary assessment [19], as well as on empirical studies [20,21]. For comparison, the full rank was also used in the modeling. Note that the approximation matrix is referred to as the data abstraction, and the finalized output is referred to as the latent comorbidities.
A_PT_ICD_cohort_g = U_g S_g V_g^T    (3)

where g is the level of abstraction (100 or 1000) corresponding to the rank of the reduced matrices U_g (m × g), S_g (g × g), and V_g (n × g), and A_PT_ICD_cohort_g is an approximation of the initial matrix (A_PT_ICD_cohort).
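As a minimal numerical sketch of Equations (2) and (3), assuming the binary patient-by-ICD matrix is held as a NumPy array, the rank-g approximation can be computed with a reduced SVD. For very large sparse matrices, a truncated solver such as scipy.sparse.linalg.svds would likely be the more practical choice, though that is a design suggestion rather than the authors' stated implementation.

```python
import numpy as np

def latent_comorbidities(A, g):
    """Rank-g approximation of the binary patient-by-ICD matrix A
    (Equations (2) and (3)); g is the abstraction level (100 or 1000)."""
    # Reduced SVD: U is m x r, s holds the r singular values, Vt is r x n.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep only the g leading components to form the data abstraction.
    return U[:, :g] @ np.diag(s[:g]) @ Vt[:g, :]
```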
As a final step in the data abstraction process, baseline noise reduction is performed by removing an ICD code if the sum of all the values for that code in the latent comorbidity matrix is less than 1. This strategy reduces noise due to irrelevant (very rare) comorbidities in the model. The imputation method presented in this work is a hybrid method, based on concurrently applying dimensionality reduction and a clustering strategy, to efficiently capture relationships among the features (or variables) and reduce noise (through dimensionality reduction) while providing an adaptive mechanism to perform imputation for any complex phenotype or trait. Using the latent comorbidity data, patients are clustered using the k-means clustering technique, with K set to 2, 4, 8, or 16 clusters, depending on the heterogeneity of the cohort.
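A sketch of these final abstraction and clustering steps, assuming `A_g` is the latent comorbidity matrix produced above (e.g., `A_g = latent_comorbidities(A, g=100)`) and using scikit-learn's KMeans; the choice of k = 4 here is only illustrative.

```python
from sklearn.cluster import KMeans

# Baseline noise reduction: drop latent ICD columns whose total weight < 1.
keep = A_g.sum(axis=0) >= 1.0
A_g = A_g[:, keep]

# Cluster patients on their latent comorbidity profiles (K = 2, 4, 8 or 16).
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(A_g)
```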
Imputation was applied using the MICE fully conditional specification (FCS) algorithm [5], which imputes multivariate missing data on a variable-by-variable basis. An imputation model is specified for each incomplete variable, and the imputation of missingness in one variable is conducted iteratively using the Markov chain Monte Carlo (MCMC) method. More specifically, we selected the predictive mean matching (pmm) algorithm, the default method of mice() for imputing incomplete continuous variables. For each missing value, pmm finds a set of observed values (five by default) whose predicted means are closest to that of the missing one and imputes the missing value by a random draw from that set. In other words, pmm is restricted to the observed values. We also used random forest (rf) imputation, which recursively subdivides the data based on the values of the predictor variables and uses bootstrap aggregation of multiple regression trees to reduce the risk of overfitting and improve predictions by combining the predictions from many trees [22]. The latter does not rely on distributional assumptions and can better accommodate nonlinear relations and interactions.
Imputation using MICE-pmm and MICE-rf was applied to each subgroup independently to predict the missing values. The results were compared with those obtained when MICE-pmm and MICE-rf were applied to estimate the missing laboratory values in the three cohorts without any consideration of the comorbidity information. The reader is referred to the work of S. van Buuren and K. Groothuis-Oudshoorn [15] for more details about imputation by MICE.
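The study itself used the pmm and rf methods of the R mice package. As a rough Python analogue, the per-cluster application can be sketched with scikit-learn's IterativeImputer (a MICE-style chained-equations imputer), here paired with a random forest estimator to approximate MICE-rf; the function below is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

def impute_by_cluster(lab_matrix, labels, use_rf=True):
    """Impute the laboratory matrix one patient cluster at a time.

    Assumes each cluster retains at least one observed value per column;
    columns that are entirely missing within a cluster would otherwise be
    dropped by IterativeImputer and break the row-wise assignment below.
    """
    estimator = (RandomForestRegressor(n_estimators=50, random_state=0)
                 if use_rf else None)  # None -> the default BayesianRidge
    imputed = np.array(lab_matrix, dtype=float, copy=True)
    for k in np.unique(labels):
        rows = labels == k
        imputer = IterativeImputer(estimator=estimator, max_iter=10,
                                   random_state=0)
        imputed[rows] = imputer.fit_transform(imputed[rows])
    return imputed
```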
Evaluation Strategy
Model evaluation was performed by randomly withholding values and predicting them using the hybrid strategy. A total of 100 values from each laboratory measure were randomly withheld for testing. For example, for the Cdiff cohort, where we identified 48 laboratory codes with less than 75% missingness, we held out 100 values for each of the 48 laboratory codes and estimated them 10 times. The root mean square error (RMSE) was calculated and averaged over the 10 runs. The comparison was based on calculating the difference between imputation using the hybrid model and the standard MICE algorithm, without any consideration of the comorbidity information, using both the pmm and rf models implemented in the MICE package. The presented results are, therefore, RMSE differences, where negative values represent a reduction in the root mean square error.
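A hedged sketch of this evaluation loop, assuming the laboratory data are in a NumPy array with NaN for missing entries and `impute_fn` is any imputation routine (for example, the per-cluster function sketched above); the fallback to fewer than 100 hold-outs for very sparse columns is an assumption added for robustness.

```python
import numpy as np

def mask_and_score(lab_matrix, impute_fn, n_holdout=100, n_runs=10, seed=0):
    """Withhold up to n_holdout observed values per laboratory column,
    impute the masked matrix, and return the RMSE over all withheld
    values, averaged over n_runs."""
    rng = np.random.default_rng(seed)
    rmses = []
    for _ in range(n_runs):
        masked = lab_matrix.copy()
        held = []  # (row indices, column index, true values) per lab code
        for j in range(lab_matrix.shape[1]):
            observed = np.flatnonzero(~np.isnan(lab_matrix[:, j]))
            picks = rng.choice(observed, size=min(n_holdout, observed.size),
                               replace=False)
            held.append((picks, j, lab_matrix[picks, j]))
            masked[picks, j] = np.nan
        imputed = impute_fn(masked)
        errors = np.concatenate(
            [imputed[picks, j] - truth for picks, j, truth in held])
        rmses.append(np.sqrt(np.mean(errors ** 2)))
    return float(np.mean(rmses))
```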
Results
In the following section, we will (1) describe our cohorts, pattern of missingness, and frequency of available data for different levels of missingness and (2) present imputation results for the three datasets.
Description of Laboratory Values for the Three Cohorts
We identified a total of 67,445 patients in three different cohorts (Cdiff, OA, and IBD) from Geisinger's electronic data warehouse. Further, we identified 495 LOINC codes across these cohorts. We selected the LOINC codes with, at most, 75% missingness (i.e., the number of patients without any measurement divided by the total number of patients is less than or equal to 75%) in each of the three cohorts.
We identified a total of 46,215 patients tested for C. difficile. We extracted comorbidity and laboratory data from the EHR for this cohort. A total of 48 laboratory codes and 8,160 ICD codes for comorbidities were used. Specifically, we identified 48 of the 495 laboratory codes for which at least 25% of the 46,215 patients had at least one measurement in their records. It is important to highlight that many of the LOINC codes can be very specific (<1% of patients have such measurements) or were used only for a narrow period and may no longer be actively in use. The dimensionality reduction was set to 100 and 1000. The Cdiff cohort had high heterogeneity, since the dataset contained both cases (tested positive for C. difficile) and controls (tested negative for C. difficile). The cluster numbers tested were 4, 8, and 16.
Similarly, we identified 11,230 IBD patients with both comorbidity and laboratory data in the EHR. A total of 48 laboratory codes and 7,916 ICD codes for comorbidities were identified. The dimensionality reduction was set to 100 and 1000. The cluster numbers tested were 2, 4, and 8, given the smaller sample size of this cohort.
Finally, we identified 187,040 patients with a primary or secondary diagnosis of knee or hip OA, from which we randomly selected 10,000 patients for imputation modeling. A total of 44 laboratory codes and 2,042 ICD codes for comorbidities were used. The OA cohort had high heterogeneity, since the initial pool was large (almost 200,000 cases) and contained both hip and knee OA. We selected a random set of 10,000 patients, as it is impractical to use an extremely large cohort for optimizing an imputation; the optimization alone is a computationally intensive process. The cluster numbers tested were 4, 8, and 16.
The distribution of missingness in the laboratory values differed across the cohorts. Table A2 summarizes the percentage of missingness for the laboratory measures. Our results showed that the pattern and frequency of the laboratory measurements depended on the missingness level. Briefly, for laboratory values with high missingness, a larger percentage of patients (30-60%) had only one resulted value; therefore, the median that we calculated in our experiment was effectively the only reported value for the patient (see Figure 1A). We further observed that laboratory values with a high level of missingness (when a patient had more than one value) tended to have an observation window of approximately two to six years (see Figure 1B) and a frequency below five measurements (see Figure 1C). However, for more common laboratory values, we observed a window of approximately 5 to 12 years and a frequency above 10 (see Figure 1C).
The outlier detection using a multimodal Gaussian distribution function was applied to each laboratory measure for each cohort separately. Figure 2 highlights that, for laboratory measures with higher missingness levels, the distribution differs across cohorts, and therefore, the accepted range is adjusted accordingly. For more common laboratory measures (such as the example presented in Figure 3), the distributions are similar, and the accepted range for these laboratory measures is within the calculated range. To further help the reader understand the pattern of laboratory data, we created distribution plots for all the laboratory values used in this study for the three cohorts (see Figure A1 and Table A2).
Imputation Applied to Laboratory Values
C. difficile (Cdiff) infection case and control cohort: Using adaptive imputation for the Cdiff cohort showed improved performance, especially for the high missingness group (laboratory measures with, at most, 75% missingness). The average RMSE difference (comparing the proposed imputation with the standard imputation model using MICE, without any consideration of comorbidity information) was −31.47 for a level of abstraction g = 1000 and a cluster number k = 4. The average RMSE difference was −8.75 for g = 100 and k = 4, demonstrating that, at a high missingness level, additional information from the patient's comorbidities can play an important role in improving the accuracy of the imputation prediction. A total of 27 combinations (nine combinations for each missingness threshold) were tested, and for each missingness level (Table 1), the tradeoff between the sample size and the clustering approach resulted in one or two instances where clustering was associated with improved performance. Since the dataset is of fixed size, a higher number of clusters reduces the power of the imputation method, especially when the number of clusters is increased to eight or beyond. However, as each dataset has its unique characteristics, the best set of parameters must be empirically determined prior to performing imputation with the adaptive strategy. Using MICE and the random forest model (rf), the RMSE differences were negative for the majority of the combinations. The <75% missingness group had seven out of nine parameter combinations in favor of the novel method (see Table 1 and Figure 4).
Figure 2. Distribution of normalized laboratory values for Logical Observation Identifiers Names and Codes (LOINC) 2501-5 (iron-binding capacity) for the three datasets (Cdiff in red, IBD in green, and OA in blue). Iron-binding capacity is missing at 52% in the Cdiff dataset, 65% in the IBD dataset, and 64% in the OA dataset. The subpanels represent the three modeled distributions used to calculate the upper and lower boundaries. The dashed lines represent the upper and lower outlier boundaries (based on Equation (1)).

Figure 3. Distribution of normalized laboratory values for LOINC 787-2 (mean corpuscular volume or MCV) for the three datasets (Cdiff in red, IBD in green, and OA in blue). MCV is missing at 2% in the Cdiff dataset, 5% in the IBD dataset, and 4% in the OA dataset. The subpanels represent the three modeled distributions used to calculate the upper and lower boundaries. The dashed lines represent the upper and lower outlier boundaries (based on Equation (1)).

Inflammatory Bowel Disease (IBD) cohort: Using adaptive imputation for the IBD cohort showed improved performance, especially for the high missingness group (laboratory measures with, at most, 75% missingness). The average RMSE difference compared to the standard model using MICE alone was −8.35 with no abstraction and cluster number k = 2. Similarly, the average RMSE difference compared to the standard model using MICE alone was −8.24 for k = 8. The results highlight that, at a high missingness level, additional information from the patient comorbidity data can play an important role in improving the accuracy of the imputation prediction, even when the sample size is significantly smaller (in this case, 11K versus 46K for the Cdiff cohort). A total of 27 combinations (nine combinations for each missingness threshold) were tested. The tradeoff between the sample size and the clustering approach resulted in parameter combinations that were associated with improved performance. Additional analyses were performed with the random forest model in MICE, and an RMSE difference of −2.70 was recorded for a missingness level of 25% (see Table 1 and Figure 4). Our results corroborate the value of parameter optimization on the dataset using various modeling frameworks; thus, the best set of parameters should be empirically determined for each dataset.
Osteoarthritis (OA) cohort: Using adaptive imputation for the OA cohort showed that the best performance improvement was for missingness at 50% (Table 1 and Figure 4). The tradeoff between the sample size reduction, when clustering is utilized, and the use of additional information from comorbidities showed benefits even for this smaller and more heterogeneous dataset. The rf model in MICE was the best fit for this dataset.
Discussion
This study is a first step towards improving the many layers of data analytics and quality control pipelines that help enhance the quality of data extracted from the EHR and ingested into machine learning applications for precision medicine. The use of heterogeneous and large-scale clinical datasets, such as EHRs, provides an avenue for exploring strategies to improve care at the individual level, including developing personalized models of response to therapy and predicting disease onset, among others [1]. However, the data extracted from EHRs are noisy and have many missing values. In the majority of studies, variables suffering from missingness are excluded from models and analyses [4], even some variables with high discriminative ability according to clinical knowledge. As we showed in this work, it is not advisable to rely solely on the redundancy of EHR laboratory data to conduct imputation for realistic applications, because the majority of that redundancy is associated with variables that are missing at high levels. For instance, beyond the commonly ordered laboratory tests (20-30 laboratory measures), the remaining values are missing at very high rates, even in a healthcare system with a stable population (Geisinger is an integrated healthcare system with a drop-out rate <5%). However, laboratory measures are highly correlated with comorbidities and diagnoses, as the latter are, in realistic settings, often based on laboratory values. Therefore, our intuitive modeling strategy focuses on using this redundancy to improve the imputation of laboratory values.
Furthermore, many diagnoses are based on laboratory values; however, due to the challenges associated with mining laboratory measures, many models ignore this important parameter or only include the ones that are not missing at high levels to reduce the noise and bias due to poor imputation predictions. We created three diverse datasets to test this intuitive strategy of imputation designed specifically for EHR laboratory data by including information from the comorbidities.
The IBD dataset was used, because IBD is a heterogeneous disease and a clear understanding of its risk factors is still lacking. Recent advances in the knowledge of IBD's pathogenesis have led to the implication of a complex interplay between metabolic reprogramming and immunity [23]. Furthermore, the response to treatment in IBD varies significantly among individuals and disease subtypes based on demographic characteristics, diet, comorbidities, underlying immunological factors, and genetic polymorphisms. Thus, there is an urgent unmet need to replace the current imputation approaches with personalized strategies that consider individual variability, diversity, and more balanced patient representation. Therefore, building predictive models for treatment outcomes for IBD is an important step in utilizing the available data on drug responses to provide better care for this patient population. Thus, the integration of laboratory measures in a predictive model for IBD has clinical value.
We created the Cdiff dataset because understanding recurrent C. difficile infection is important, and the existing data from the EHR can help us identify clinical biomarkers and build a decision support system for physicians to target patients with a higher chance of recurrence for more targeted preventive care.
Finally, the OA dataset was added to test the limits of this model. An OA diagnosis is not based on any laboratory measure known today. An OA diagnosis is based on imaging alone. Therefore, we did not expect the OA cohort to have any special patterns in their laboratory profile, yet we observed that, even in this situation, the use of a comorbidity pattern can help in improving the imputation of laboratory values. The OA dataset was also the smallest dataset tested in this study.
Overall, our results showed that each dataset is unique, and a one-size-fits-all approach does not apply when selecting the imputation model. On simulated datasets with interactions between variables, the imputation of missing data using MICE with regression trees resulted in less biased parameter estimates than MICE with linear regression [24]. In the CALIBER study, MICE with random forest showed greater imputation efficiency, with narrower confidence intervals for the error metric [25]. In a simulation of a dataset in which the partially observed variable depended on the fully observed variables in a nonlinear way, MICE-rf showed less bias in parameter estimates and better confidence interval coverage. In our study, rf also performed well; however, the best performance was observed when pmm was used in the Cdiff cohort. Nonetheless, because the RMSEs were calculated across all laboratory variables, the improvement may be driven by a few variables that were imputed better in some, but not all, cases. Further analysis will be needed to address this assumption.
The method presented here is an intuitive approach for any given complex disease where biosignatures or risk factors are only partially known and the relationship among the variables can be convoluted given the large dimensionality of the dataset. Even though the level of missingness can vary, the best results are typically obtained when the level of missingness is low or moderate. The improvement over conventional methods without the consideration of comorbidity information can be achieved when the missingness level is high. Our strategy was to ensure that (1) our experiment aligned with the current methodologies in practice and (2) others can easily adapt this modification to their work. In future directions, we will explore if advanced modeling frameworks such as the generative adversarial network [26] (GAN) or the newly proposed generative adversarial imputation nets (GAIN) framework [27] can be optimized for imputing laboratory values from EHRs.
Finally, our study provides a step in what we believe is a pipeline of data quality improvements for empowering machine learning models using EHRs. The main limitation of this approach is the need for large datasets: by its nature, the clustering step reduces the sample size available for imputation, thus reducing its power. Therefore, this approach is ideal for machine learning applications where the sample size tends to be large and comprehensive. Our smallest cohort consisted of 10,000 OA patients, and our best prediction improvement was observed for the largest dataset of 46,215 patients. Another limitation of this study that we could not address stems from our masking strategy for the evaluation, which was done at random, even though we knew that the missingness in the EHR was not at random. However, given that we did not know a priori the reason for missingness for each patient, and given the complex nature of the data, masking at random was the most sensible strategy in this case. As of now, we do not have a better strategy for simulating MNAR when withholding values; the factors contributing to MNAR are multifactorial and largely unknown.
This study had several other limitations. First, by converting the comorbidity information into binary form, we may have lost important information. This study design can be enhanced to answer a specific research question by optimizing the pattern of ICD codes recorded (both the frequency and the time intervals) to capture the duration and severity of the conditions. Second, we withheld a relatively small number of values to evaluate our model, because we included laboratory codes with as much as 75% missingness and applied clustering prior to imputing; withholding more laboratory values would further increase the sparsity of the dataset and introduce further bias. As a future direction, we plan on applying the algorithm several times to random subsamples of the data of size n/2 (n = number of samples). This repeated double randomization, similar to the concept of the bagging and sub-bagging algorithms [28,29], could further help optimize our strategy. Third, we did not limit the observation window with respect to the diagnosis index event, as would be required for a carefully designed study [30,31]. The identification of pre- and post-index windows should be thoroughly planned based on the research question, the sparsity of the data, the healthcare system, and the variables under consideration [30]. Nonetheless, as this is a proof-of-concept study, we did not limit our observation window, in order to improve data availability so that we could experiment with different levels of missingness. Even with this limitation, we showed that, in many instances, there were only a few laboratory values per patient for the less commonly used laboratory codes. Fourth, as this was a pilot study intended to corroborate the generalizability and scalability of the proposed strategy, we did not exhaustively vary the abstraction level or the number of clusters; however, we applied the model to three different cohorts that were created specifically for this study. Finally, by combining the laboratory codes into three groups (<25% missing, <50% missing, and <75% missing), we were unable to determine whether the improvement was due to one or a few laboratory variables. Further assessments will be needed to study the improvement of imputation for each laboratory measure on a case-by-case basis for more targeted evaluations and improvements.
To conclude, the advantages of imputing missingness are manifold; imputation can be used to increase data density and improve the representation of data-poor patients, thus reducing implicit algorithmic bias. Patients with limited access to healthcare and specialty care may be less well-represented in models because their data footprint is smaller. The inclusion of more laboratory values is important, as a prediction of a diagnosis that is not at least partially based on laboratory information could be weak. Predicting a future disease by focusing only on past diagnoses (i.e., using only information based on ICD codes) does not take full advantage of the information in electronic health records. Laboratory measurements, like imaging and imaging reports, are at the core of diagnosis and care management. The novelty of this study lies in its intuitive design and relatively simple implementation, incorporating information from a patient's comorbidities to improve the imputation of laboratory values.
As a future direction, we will investigate how best to impute longitudinal laboratory measures to better inform clinical studies. In addition, we will explore integrating additional features, such as demographic information (age and gender) and medication usage, as well as genetic information when available, to further enhance the imputation outcome. Finally, we will evaluate various preprocessing and normalization strategies to determine whether these manipulations can improve the outcome of our predictions, especially for variables with skewed distributions, and we will explore the impact of imputation on each laboratory value to identify patterns or trends that can help improve the prediction of missing values. To conclude, we optimized the level of abstraction needed to improve imputation for three cohorts of varying sizes and complexities. This study demonstrates that the use of shared latent comorbidities can facilitate improvements in imputing laboratory measures from EHRs for downstream analysis and predictive modeling.
Institutional Review Board Statement:
The study was reviewed and approved by the Geisinger Institutional Review Board to meet "Non-human subject research", for using de-identified information.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data analyzed in this study are not publicly available due to privacy and security concerns. The data may be shared with a third party upon execution of a data sharing agreement for reasonable requests; such requests should be addressed to V.A. (Vida Abedi) or R.Z.
Acknowledgments: The authors would like to thank the Phenomic Analytics and Clinical Data
Core at Geisinger, specifically Joseph B. Leader, Monika Ahuja, and Amy Kolinovsky, for helping with data extraction and deidentification from the Electronic Health Records. Special thanks to Alvaro E. Ulloa Cerna for the insightful discussion.
Conflicts of Interest: Authors J.B.-R. and R.H. were employed by BioTherapeutics, Inc. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The funders had no role in the study design, data collection and interpretation, or the decision to submit the work for publication. Table A4. The RMSE differences from imputation applied with and without the integration of comorbidity information for the IBD dataset. Negative RMSE differences correspond to improvement by the hybrid approach. The pmm and rf models in MICE were used in this study. The p-value is reported based on 10 runs. | 9,347.8 | 2020-12-30T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Nerve Growth Factor Compromise in Down Syndrome
The basal forebrain cholinergic system relies on trophic support by nerve growth factor (NGF) to maintain its phenotype and function. In Alzheimer’s disease (AD), basal forebrain cholinergic neurons (BFCNs) undergo progressive atrophy, suggesting a deficit in NGF trophic support. Within the central nervous system, NGF maturation and degradation are tightly regulated by an activity-dependent metabolic cascade. Here, we present a brief overview of the characteristics of Alzheimer’s pathology in Down syndrome (DS) with an emphasis on this NGF metabolic pathway’s disruption during the evolving Alzheimer’s pathology. Such NGF dysmetabolism is well-established in Alzheimer’s brains with advanced pathology and has been observed in mild cognitive impairment (MCI) and non-demented individuals with elevated brain amyloid levels. As individuals with DS inexorably develop AD, we then review findings that support the existence of a similar NGF dysmetabolism in DS coinciding with atrophy of the basal forebrain cholinergic system. Lastly, we discuss the potential of NGF-related biomarkers as indicators of an evolving Alzheimer’s pathology in DS.
INTRODUCTION
Down syndrome (DS), also known as trisomy 21, is a genetic disorder caused primarily by the triplication of chromosome 21, which leads to several abnormalities and lifelong intellectual disability. As individuals with DS age, they are at very high risk of developing Alzheimer's disease (AD). Indeed, DS is now recognized as the most common form of genetic AD, and AD presentation in DS (DSAD) is similar to that of autosomal-dominant AD (ADAD) (Lott and Lai, 1982; Zigman and Lott, 2007; Davidson et al., 2018; Strydom et al., 2018). Therefore, individuals with DS will inevitably develop full-blown AD pathology with extracellular amyloid plaques, intracellular neurofibrillary tangles, neuroinflammation, cholinergic depletion, and cognitive and learning deficits, leading to clinical dementia in 70% of people with DS over 60 years of age (McCarron et al., 2014).
ALZHEIMER PATHOLOGY IN DS

Amyloid and Tau Pathologies
Due in part to the triplication of genes encoding amyloid precursor protein (APP) and β-amyloid cleavage enzyme 2 (BACE2) (St George-Hyslop et al., 1987; Acquati et al., 2000), located on chromosome 21, individuals with DS display a progressive accumulation of amyloid-beta (Aβ) peptides starting before birth (Lemere et al., 1996; Teller et al., 1996; Mori et al., 2002). As in ADAD, the AD pathology in DS (DSAD) follows a predictable disease trajectory (Wiseman et al., 2015; Carmona-Iragui et al., 2017). As early as childhood, a fraction of people with DS present diffuse Aβ plaques within their brain (Lemere et al., 1996; Leverenz and Raskind, 1998). Early Tau pathology (detected as AT8 immunoreactivity) in DS appears by middle age (30-40 years) (Head et al., 2003; Davidson et al., 2018), after Aβ pathology is established, and follows a distribution pattern resembling that of AD, starting in the entorhinal cortex and spreading to the hippocampus and the neocortex (Davidson et al., 2018). By 40 years of age, nearly all DS brains show advanced AD pathology with extensive amyloid plaques and neurofibrillary tangles (NFTs) (Mann, 1988; Lemere et al., 1996; Leverenz and Raskind, 1998; Lott and Head, 2001; Mori et al., 2002; Head et al., 2003). However, DS brains with AD pathology present a higher density of NFTs than that seen in sporadic AD (Hof et al., 1995). A contributing factor may be the triplication of the dual-specificity tyrosine phosphorylation-regulated kinase 1A gene (DYRK1A), also located on chromosome 21, which is known to phosphorylate Tau at several sites relevant to AD (Woods et al., 2001; Liu et al., 2008). The triplication of APP, PS1 and several immune response mediators associated with AD may also play a role (Arron et al., 2006; Ryoo et al., 2008; Ryu et al., 2010; Kurabayashi et al., 2015; García-Cerro et al., 2017). As in sporadic AD, Aβ seems to be the main driver of dementia in DS, as indicated by case studies reporting on individuals with DS who had partial trisomy 21 but were disomic for APP and who did not develop plaques, NFTs or dementia (Prasher et al., 1998; Doran et al., 2017). However, as in AD, cognitive decline in DS shows a stronger association with NFTs than with Aβ plaques (Margallo-Lana et al., 2007). Recently, a comprehensive revision of the order and changes in AD biomarkers in adults with DS has been communicated by Fortea and collaborators (Fortea et al., 2020).
It is noteworthy that the presence of the apolipoprotein E ε4 allele (APOEε4), the strongest genetic risk factor associated with AD in the general population, is also a major determinant of AD pathogenesis and progression in people with DS. It has been shown that APOEε4 raises the risk for both early-onset and sporadic AD (Corder et al., 1993; Strittmatter et al., 1993; Qian et al., 2017) and accelerates both symptom onset and pathology severity in a gene-dose-dependent manner (Blacker et al., 1997; Farrer et al., 1997; Fleisher et al., 2013; Liu et al., 2013; Gonneaud et al., 2016; Lautner et al., 2017; Cacciaglia et al., 2018; Mishra et al., 2018). Accordingly, 65-80% of all AD sufferers harbor at least one APOEε4 allele (Farrer et al., 1997). The elevated risk of developing dementia conferred by APOEε4 involves mechanisms associated with both Aβ and tau aggregation (Therriault et al., 2020). APOEε4 carriers also have increased blood-brain barrier breakdown that has been shown to predict cognitive decline (Bell et al., 2012; Zhao et al., 2015; Montagne et al., 2020). Similarly, in people with DS the presence of the APOEε4 allele increases the risk of dementia, although to a lesser extent than in the general population (Prasher et al., 2008; Rohn et al., 2014). It also lowers the age of disease onset (Schupf et al., 1996; Deb et al., 2000; Coppus et al., 2008; Bejanin et al., 2021), aggravates Aβ deposition (Hyman et al., 1995; Bejanin et al., 2021), and accelerates neurodegeneration (Bejanin et al., 2021). Additionally, DS individuals harboring the APOEε4 allele are at further increased risk of early mortality (Prasher et al., 2008; Hithersay et al., 2019).
Neuroinflammation
Neuroinflammation is another paramount feature of AD pathology that contributes to the progression and severity of the disease (Akiyama et al., 2000). The interest in the role of immune processes in AD pathogenesis began with the discovery of major histocompatibility molecules and complement system proteins in amyloid plaques (Jonker et al., 1982), and the description of HLA-DR- and IL-1β-positive reactive microglia surrounding amyloid plaques and neurofibrillary tangles (McGeer et al., 1987, 1988). This concept was reinforced by genome-wide association studies indicating that immune-related genes, such as TREM2, HLA-DRB5-HLA-DRB1, CR1 and CLU, are risk factors for AD (Harold et al., 2009; Lambert et al., 2009, 2013; Brouwers et al., 2012; Jonsson et al., 2013). DS brains display lifelong neuroinflammatory changes starting at the fetal stage, prior to plaque deposition. Still, the precise cause of neuroinflammation initiation, triggered either by the accumulating AD pathology or by the triplication of immune-related genes [reviewed in Wilcock (2012)], remains unclear. Early reports on neuroinflammation in DS described a pronounced proliferation of activated glia overexpressing S100B, another chromosome 21 gene product, and interleukin-1 (IL-1) α and β (Griffin et al., 1989; Royston et al., 1999). Since then, the evolving neuroinflammatory phenotype of DS, which presents both similarities and differences compared to that in sporadic AD, has been increasingly described (Stoltzner et al., 2000; Head et al., 2003; Xue and Streit, 2011; Wilcock et al., 2015; Flores-Aguilar et al., 2020). In fetuses and neonates with DS, neuroinflammation is characterized by an increase in the number of IL-1β-expressing microglia (Griffin et al., 1989). This neuroinflammation escalates as children and young adults with DS show an exacerbated neuroinflammatory profile with activation of the complement pathway, elevated levels of key inflammatory cytokines and altered microglia morphology indicative of activation, including the presence of rod-like microglia (Stoltzner et al., 2000; Wilcock et al., 2015; Flores-Aguilar et al., 2020). Older DS individuals (over 40 years of age) also display increased levels of potent inflammatory cytokines compared to karyotypical controls, although to a lesser extent than their younger DS counterparts. However, an increase of dystrophic microglia with age has been reliably demonstrated (Stoltzner et al., 2000; Wilcock et al., 2015; Flores-Aguilar et al., 2020). Accordingly, elevated cytokine expression and immune dysregulation have been reported in the blood of children and adults with DS (Licastro et al., 2005; Sullivan et al., 2017; Waugh et al., 2019; Weber et al., 2020). It has been proposed that such changes promote AD pathology in DS (Wilcock and Griffin, 2013). Such changes may also be used to predict and monitor pathological progression. For example, longitudinal changes in TNFα, IL-8, and AD biomarkers in plasma, along with a nerve growth factor (NGF) metabolism dysregulation, could predict prospective cognitive decline in a population of DS individuals asymptomatic for AD (Iulita and Cuello, 2016).
Cholinergic Dysfunction
The cholinergic neurotransmitter system is crucial for cortical and hippocampal activity, learning and memory. Its atrophy and degeneration are central to AD symptomatogenesis (Bowen et al., 1976; Davies and Maloney, 1976; Whitehouse et al., 1981, 1982; Mufson et al., 1989; Grothe et al., 2010; Kerbler et al., 2015). Its role in the AD pathology is highlighted by the fact that four of the five drugs currently approved for AD treatment are acetylcholinesterase (AChE) inhibitors, which, by preventing the breakdown of acetylcholine, increase the cholinergic tone resulting in improved cognitive outcomes, as long as sufficient cholinergic terminals persist in the telencephalon (Hampel et al., 2018; Kabir et al., 2019; Marucci et al., 2020). Degeneration of basal forebrain cholinergic neurons (BFCNs) parallels the development of AD pathology, progressing silently for several years prior to the onset of cognitive symptoms (Grothe et al., 2014), as reviewed by Hampel et al. (2018). Further, the degeneration of BFCNs predicts atrophy of the brain regions innervated by their projections, such as the entorhinal cortex and cerebral cortex (Schmitz and Spreng, 2016; Schmitz et al., 2018). Loss of cholinergic innervation has also been linked to vascular dysfunction, another early predictor of the progression to AD (Iturria-Medina et al., 2016), and increased blood-brain barrier permeability (Domer et al., 1983; Radu et al., 2017; Nizari et al., 2019, 2021). Cholinergic dysfunction in DS was first evidenced by a significant reduction in choline acetyltransferase (ChAT) and AChE activity in the temporal cortex of older individuals with DS, which was not present in a younger DS subject (Yates et al., 1980, 1983). Soon after, a significant and seemingly age-related reduction in volume of the nucleus basalis was also observed (Casanova et al., 1985). Further studies demonstrated that abnormalities in the cholinergic system develop as individuals age and accumulate AD pathology, since fetuses display a neuronal density and vesicular acetylcholine transporter (VAChT) immunoreactivity comparable to controls and newborns with DS have ChAT activity levels similar to age-matched controls (Kish et al., 1989; Lubec et al., 2001). Age-related atrophy and neurodegeneration of BFCNs is recapitulated in mouse models of DS (Holtzman et al., 1992, 1996; Fiedler et al., 1994; Cooper et al., 2001; Granholm et al., 2002) and was attributed to APP gene triplication through disruption of endosomal phenotype and function (Cataldo et al., 2003). Such cholinergic dysfunction is sex-dependent and can be restored by estrogen treatment (Granholm et al., 2002; Kelley et al., 2014b).
Interestingly, in the Ts65Dn mouse model of DS, maternal supplementation with choline, a critical substrate for the synthesis of acetylcholine, during pregnancy and lactation reduced cognitive dysfunction and degeneration of BFCNs in the adult offspring (Moon et al., 2010; Ash et al., 2014; Kelley et al., 2014a, 2016; Strupp et al., 2016; Powers et al., 2017). Although the exact mechanisms underlying the effects of choline therapy remain obscure, it has been shown that choline treatment rescued the expression of genes related to the cytoskeleton and cholinergic neurotransmission, amongst others (Kelley et al., 2019).
NERVE GROWTH FACTOR METABOLIC DYSREGULATION IN DS
Basal forebrain cholinergic neurons depend on the continuous supply of NGF for the maintenance of their functional phenotype, their synaptic integrity and ultimately their survival (Hefti and Will, 1987; Cuello, 1996; Levi-Montalcini et al., 1996). In the adult CNS it has been demonstrated experimentally that the levels of endogenous NGF regulate the day-to-day number of cortical cholinergic synapses (Debeir et al., 1999). These findings led to Appel's hypothesis that the trophic support to BFCNs is compromised in AD (Appel, 1981). However, the levels of NGF transcripts are unaffected (Goedert et al., 1986; Jette et al., 1994; Fahnestock et al., 1996) and the protein levels of the NGF precursor, proNGF, are greatly elevated in AD post-mortem brain samples (Fahnestock et al., 1996; Peng et al., 2004; Pedraza et al., 2005; Al-Shawi et al., 2008; Bruno et al., 2009a). A resolution of such an apparent paradox, and insight into the cause of the cholinergic deficits characteristic of AD, was brought about by the discovery of an NGF metabolic pathway controlling the availability of mature NGF (mNGF) as well as its extracellular degradation (Bruno and Cuello, 2006). The pharmacological manipulation of this NGF metabolic pathway has shown it to regulate the cholinergic phenotype of both the cortical synapses and the BFCN cell bodies (Allard et al., 2012, 2018). In brief, proNGF is released into the extracellular space in response to neuronal or neurotransmitter stimulation. In ex vivo studies it has been shown that proNGF (and not mature NGF, mNGF) is released along with a set of zymogens and convertases responsible for its maturation and degradation (Bruno and Cuello, 2006). Maturation of proNGF into mNGF is accomplished by the enzyme plasmin, which is generated by the cleavage of its inactive zymogen, plasminogen, by tissue plasminogen activator (tPA), a process regulated by the tPA inhibitor, neuroserpin (Bruno and Cuello, 2006). Degradation of receptor-unbound mNGF is performed by the matrix metalloproteinases 9 and 3 (MMP-9 and MMP-3), derived from cleavage of their protein precursors, a process regulated by tissue inhibitor of metalloproteinases-1 (TIMP-1) (Figure 1; Bruno and Cuello, 2006; Pentz et al., 2021b).
Investigations in post-mortem brain tissue, plasma and cerebrospinal fluid (CSF) revealed that NGF metabolic dysfunction is present in the preclinical and clinical continuum of sporadic AD (Peng et al., 2004; Bruno et al., 2009a,b; Mufson et al., 2012; Hanzel et al., 2014; Pentz et al., 2020). Specifically, both NGF maturation and degradation are disrupted at preclinical AD stages, as revealed in individuals with no cognitive impairment (NCI) but with high brain β-amyloid (Aβ) levels (HA-NCI). This NGF dysmetabolism correlated with cerebral Aβ and Tau deposition, cognitive performance, and loss of cholinergic synapses (Pentz et al., 2020). NGF dysmetabolism is also found in the brain of people with prodromal AD, also referred to as mild cognitive impairment (MCI), and with clinical AD, as represented by increased levels of proNGF, neuroserpin, as well as MMP-3 and MMP-9 activity (Peng et al., 2004; Bruno et al., 2009a,b; Mufson et al., 2012; Pentz et al., 2020). These findings are also in accordance with other accounts of increased proNGF in CSF from people with AD (Counts et al., 2016), and with the altered expression of MMP-3, neuroserpin, and plasminogen reported in CSF from AD and MCI participants (Hanzel et al., 2014). These findings have also been replicated in transgenic animal models of the AD-like amyloid pathology (Bruno et al., 2009a; Iulita et al., 2017). Further, it was suggested that there is a link between such NGF dysmetabolism and CNS inflammation in the amyloid pathology, since injection of Aβ oligomers in the hippocampus of naïve rats provoked both brain inflammation and NGF dysregulation (Bruno et al., 2009a).

FIGURE 1 | Schematic representation of the NGF metabolic pathway and its altered state in Alzheimer's and Down syndrome pathology. (A) In the healthy brain, proNGF is co-released in the extracellular space with zymogens and convertases involved in its maturation and degradation. proNGF is cleaved to yield mNGF by plasmin, itself derived from the cleavage of plasminogen by tPA, a process regulated by neuroserpin. mNGF then dimerizes and binds to p75/TrkA receptor complexes on presynaptic terminals of BFCNs, followed by the retrograde transport of mNGF to their cell bodies in the basal forebrain. Receptor-unbound mNGF is rapidly degraded by MMP-9 and MMP-3, which are produced from their pro-proteins under the control of TIMP-1. (B) In Alzheimer's disease and in Down syndrome brains, increased neuroserpin and decreased tPA lead to reductions in the maturation of proNGF to mNGF by limiting plasmin concentrations. Further, decreased TIMP-1 and increased MMP-3/MMP-9 result in the excessive degradation of unbound mNGF. These changes result in impaired trophic support to BFCNs, leading to their atrophy.
Interestingly, a similar NGF dysmetabolism with increased cortical proNGF levels has been reported in DS (Iulita and Cuello, 2016; Caraci et al., 2017), therefore providing an explanation for the cholinergic atrophy in DS (Yates et al., 1983; Kish et al., 1989; Lubec et al., 2001). In DS as in AD, reduced levels of tPA and plasminogen, which are involved in proNGF maturation, together with heightened neuroserpin expression, lead to a build-up of proNGF. In parallel, over-activation of MMP-9, the main NGF-degrading protease, leads to increased degradation of the biologically active mNGF protein (Iulita and Cuello, 2014). This double hit on the NGF pathway results in decreased availability of mature NGF to sustain trophic support of BFCNs in DS as in AD. Such impairment in NGF metabolism is an early event in DS and is detectable before the clinical presentation of AD. Indeed, increased levels of proNGF, decreased tPA activity and increased MMP-9 activity were detected in conditioned media from primary cultures of fetal DS cortex. In addition, levels of proNGF, as well as MMP-1, MMP-3, and MMP-9 activity, were found elevated at AD-asymptomatic stages in the plasma from a cohort of clinically characterized DS individuals. In this cohort, an elevation of proNGF levels at the 1-year follow-up predicted the extent of cognitive deterioration (Iulita and Cuello, 2016). The association between Aβ and NGF pathway dysfunction was further strengthened by the fact that Aβ load highly correlated with the elevation of proNGF in older DS individuals (Iulita and Cuello, 2016). The presence of an APOEε4 allele in DS individuals, as in other people at risk of AD, may further aggravate the brain's NGF dysmetabolism. Indeed, APOEε4 mice show upregulated levels of both proMMP9 and MMP9 (Bell et al., 2012).
NGF METABOLIC PATHWAY RELATED BIOMARKERS AS INDICATORS OF AD PATHOLOGY IN DS
The diagnosis of AD in DS is challenging given the underlying DS intellectual disability and the lack of diagnostic criteria and cognitive screening tools adapted to people with DS (Lee et al., 2017). Therefore, validated biomarkers that signal the progression of Alzheimer pathology in DS are presently of great medical importance. Correlations between classical AD biomarkers and cognition are increasingly being established to define the status of this pathology in DS (Fortea et al., 2020). We propose that NGF metabolism-related biomarkers in body fluids should assist in that task.
Analysis of cortical thickness, intracranial volume, fractional anisotropy, and cerebral blood flow employing magnetic resonance imaging (MRI) could identify AD pathology in both DS and sporadic populations (Handen et al., 2020). Alternatively, positron emission tomography (PET) imaging to trace amyloid deposition with compounds such as Pittsburgh Compound B (PiB) and [18F]-florbetaben, commonly used to detect sporadic AD, has shown mixed results in identifying AD within the DS population. It was suggested that since those with DS display a lifelong amyloidosis that is already very prominent at a young age, amyloid PET may not be of use in tracking the progress of AD (Abrahamson et al., 2019). More recently, a cross-sectional and longitudinal study in individuals with DS showed that it was possible to differentiate MCI-DS from the cognitively stable group using [18F]-AV-45 (florbetapir) PET. Additionally, although PET tracers for Tau have proved a challenge for the field (Robertson et al., 2017), a recent study using the Tau PET tracer [18F]-AV1451 in a small cohort of DS individuals showed that Tau deposition was correlated with age, amyloid deposition, decreased brain volume and reduced glucose metabolism (Rafii et al., 2017). Evaluation of Tau PET tracers using autopsy brain tissue also suggested that the regional distribution of Tau pathology in DS differs from ADAD and sporadic AD (Lemoine et al., 2020). An issue with current neuroimaging studies in DS populations is that the normative atlases being used were developed for the non-DS population, although this is currently being addressed by the creation of atlases for the DS brain (McGlinchey et al., 2020).
The pattern of biofluid biomarker changes in AD in DS has been considered to be largely similar to that in sporadic AD (Rafii et al., 2015). While those with DS have a higher baseline of Aβ peptides due to the triplication of the APP and BACE2 genes located on chromosome 21, an increase in CSF levels of Aβ42 or the Aβ42/Aβ40 ratio relative to this baseline is associated with the onset of AD in DS (Lee et al., 2017). Several studies have demonstrated that changes in plasma Aβ40 and Aβ42 in DS correlate with AD onset (Schupf et al., 2007, 2010; Jones et al., 2009; Matsuoka et al., 2009; Coppus et al., 2012). As for Tau, increases in CSF total Tau (tTau) and phosphorylated Tau (pTau) have been correlated with AD onset in DS (McGlinchey et al., 2020; Pentz et al., 2021a). Likewise, plasma neurofilament light (NfL) and IL-1β have been shown in multiple studies to reliably distinguish DSAD individuals from those with DS asymptomatic for AD (aDS) (Petersen and O'Bryant, 2019; Startin et al., 2019; McGlinchey et al., 2020). Of the biomarkers discussed, NfL has emerged as the leading plasma biomarker, with 90% sensitivity and 92% specificity in its ability to distinguish between aDS and prodromal DSAD groups (Petersen and O'Bryant, 2019; McGlinchey et al., 2020). Additional, more recently posited biofluid biomarkers include levels of TNF-α, IL-6, IL-10, and S-adenosylhomocysteine (SAH), a change in the SAM/SAH ratio, and CpG methylation percentage (Lee et al., 2017).
Given that degeneration of the cortical forebrain cholinergic system is a critical factor associated with cognitive decline in AD, both in the general population and in DS, as discussed above, current AD biomarker panels should be enriched by the addition of biomarkers able to monitor cholinergic dysfunction in both research and clinical contexts (Hampel et al., 2018; Cuello et al., 2019; Pentz et al., 2021a). The presence of NGF dysmetabolism within DS and AD brains, and its relationship to cholinergic dysfunction, present the opportunity for the identification of novel biomarkers signifying AD pathology and subtyping for cholinergic dysfunction within DS populations. Analysis of NGF pathway proteins in matched CSF/plasma samples from DSAD and aDS individuals, as well as controls, revealed that the levels of the 50 kDa isoform of proNGF and of MMP9 in CSF could identify symptomatic AD within the wider DS population. Both members of the NGF metabolic pathway identified symptomatic AD with a sensitivity and specificity matching or outperforming those of the classical AD CSF biomarkers pTau, tTau, and the Aβ42/40 ratio (Pentz et al., 2021a). Importantly, longitudinal increases in 50 kDa proNGF levels in plasma over 1 year correlated with prospective cognitive decline over the subsequent 2 years (Iulita and Cuello, 2016), demonstrating a potential value of NGF-related biomarkers in identifying incipient cognitive decline in this population.
CONCLUSION
The nearly inexorable development of AD pathology and the ensuing dementia in DS individuals is nowadays well-established and has been eloquently summarized by Lott and Head (2019). The growing awareness of this situation has triggered increased interest and research in unraveling aspects of the AD pathology in DS, as this is the largest population with genetic AD and therefore offers clues regarding the early, preclinical stages of this pathology, a pathology which continues to defy therapeutic intervention.
As discussed in this brief review, the occurrence of NGF dysmetabolism leading to BFCN dysfunction is now well-established. NGF metabolism-related biomarkers have proven significant in identifying AD pathology at preclinical stages and in monitoring its progression along the AD clinical continuum. This might offer distinctive possibilities for defining differential conditions of cholinergic compromise.
Alzheimer's disease is presently recognized as the leading cause of death in DS. Therefore, novel biomarkers signaling the initial, preclinical stages of AD in DS should offer valuable tools for future early therapeutic interventions, a scenario which would spare DS individuals the onset of clinical AD and would also provide new therapeutic opportunities for individuals with sporadic AD.
The further investigation of the NGF metabolic compromise in AD should provide clues as to how best to re-establish adequate trophic support for the phenotypic maintenance of BFCNs, whose atrophy importantly contributes to cognitive decline in the AD pathology. If such pharmacological intervention becomes feasible, it would halt the progressive atrophy of the BF cholinergic system. An effective pharmacological intervention on a deregulated NGF metabolic pathway would signify restoring mNGF homeostasis at physiological levels and at physiological sites.
AUTHOR CONTRIBUTIONS
ACC and SDC designed and outlined the structure and contents of the review. All authors contributed to the writing and revision of the manuscript and approved the submitted version. | 5,717.6 | 2021-08-09T00:00:00.000 | [
"Biology",
"Psychology"
] |
Viscous Dissipation Effects on the Motion of Casson Fluid over an Upper Horizontal Thermally Stratified Melting Surface of a Paraboloid of Revolution: Boundary Layer Analysis
The problem of a non-Newtonian fluid flow past the upper surface of an object that is neither a perfect horizontal/vertical nor an inclined surface/cone, in which dissipation of energy is associated with temperature-dependent plastic dynamic viscosity, is considered. An attempt has been made to focus on the case of two-dimensional Casson fluid flow over a horizontal melting surface embedded in a thermally stratified medium. Since the viscosity of the non-Newtonian fluid tends to take energy from the motion (kinetic energy) and transform it into internal energy, the viscous dissipation term is accommodated in the energy equation. Due to the existence of an internal space-dependent heat source, the plastic dynamic viscosity and thermal conductivity of the non-Newtonian fluid are assumed to vary linearly with temperature. Based on the boundary layer assumptions, suitable similarity variables are applied to nondimensionalize, parameterize and reduce the governing partial differential equations to a coupled system of ordinary differential equations. These equations, along with the boundary conditions, are solved numerically using the shooting method together with the Runge-Kutta technique. The effects of pertinent parameters are established. A significant increase in Re_x^{1/2} Cf_x is guaranteed with St when the magnitude of β is large. Re_x^{1/2} Cf_x decreases with Ec and m.
Introduction
Within the last thirty years, the study of non-Newtonian fluid flow over a stretching surface has received significant attention due to its industrial applications. Such interest is fueled by pertinent engineering applications in a number of fields, as in spinning of filaments, continuous casting of metal, extrusion of polymers, crystal growing, glass fiber production, extrusion of plastic sheets, paper production, and condensation processes of metallic plates. Boundary layer analysis of fluid flow passing through a thick needle with variable diameter was investigated by Lee [1]. Historically, this can be referred to as the first report of flow adjacent to a surface with variable thickness where the effect of viscosity is highly significant. Thereafter, extensive studies were conducted on boundary layer flows over a thin needle. Cebeci and Na [2] investigated the laminar free convection heat transfer from a needle. Ahmad et al. [3] examined the boundary layer flow over a moving thin needle with variable heat flux. The boundary layer flow over a stretching surface with variable thickness was analyzed by Fang et al. [4]. Recently, the flow of different fluids over an upper horizontal surface with variable thickness has been investigated extensively in [5-7]. This motivated Makinde and Animasaun [8, 9] to focus on the case of a quartic autocatalysis kind of chemical reaction in the flow of an electrically conducting nanofluid containing gyrotactic microorganisms over an upper horizontal surface of a paraboloid of revolution in the presence and absence of thermophoresis and Brownian motion. In most cases, the plastic dynamic viscosity of a non-Newtonian Casson fluid tends to take energy away from the motion and transform it into internal energy. However, this area has been neglected.
In fluid mechanics, the destruction of fluctuating velocity gradients due to viscous stresses is known as viscous dissipation. This partially irreversible process is often referred to as the transformation of kinetic energy into internal energy of the fluid (heating up the fluid due to viscosity, since dissipation is high in regions with large gradients). Pop [10] remarked that understanding the concept of energy dissipation and transport in nanoscale structures is of great importance for the design of energy-efficient circuits and energy-conversion systems. However, energy dissipation and transport of non-Newtonian fluids are also of importance to engineers and scientists. Motsumi and Makinde [11] examined the effects of the viscous dissipation parameter (i.e., the Eckert number), thermal diffusion, and thermal radiation on boundary layer flow of Cu-water and Al2O3-water nanofluids over a moving flat plate. In another theoretical study on the combined effects of Newtonian heating and the viscous dissipation parameter on boundary layer flow of copper and titania in water over a stretchable wall, Makinde [12] reported an increase in the moving plate surface temperature and thermal boundary layer thickness. Depending on the admissible grouping of variables (parameterization), the Eckert number and the Brinkman number may be used to quantify viscous dissipation; their standard definitions are recalled below. In addition, unsteady mixed convection in the flow of air over a semi-infinite stretching sheet taking into account the effect of viscous dissipation was carried out by Abd El-Aziz [13]. Both at the steady stage ( = 0) and the unsteady stage ( = 1.5), the velocity and temperature of the flow increase with an increase in the magnitude of the Eckert number. Recently, the effects of viscous dissipation, Joule heating, and partial velocity slip on two-dimensional stagnation point flow were reported by Yasin et al. [14]. In another study conducted by Animasaun and Aluko [15], it is reported that when the dynamic viscosity of air is assumed to vary linearly with temperature, a normally negligible effect of the Eckert number on velocity profiles will be noticed. Raju and Sandeep [16] focused on the motion of Casson fluid over a moving wedge with slip and observed a decrement in the temperature field with rising values of the Eckert number. In the flow of a non-Newtonian Casson fluid over an upper horizontal thermally stratified melting surface of a paraboloid of revolution, dissipation of smaller eddies due to molecular viscosity near the wall is significant. In the presence of a constant magnetic field, electrically conducting Casson fluid flow over an object that is neither a perfect horizontal/vertical nor an inclined surface/cone is also an important issue.
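For reference, the standard textbook definitions of these two dissipation groups are given below, written in terms of a wall velocity u_w, wall and free-stream temperatures T_w and T_∞, specific heat c_p, dynamic viscosity μ, and thermal conductivity k. These are the usual forms; the paper's own parameterization is not fully legible in this copy.

```latex
\mathrm{Ec} = \frac{u_w^{2}}{c_p\,(T_w - T_\infty)}, \qquad
\mathrm{Br} = \frac{\mu\, u_w^{2}}{k\,(T_w - T_\infty)} = \mathrm{Pr}\cdot\mathrm{Ec}.
```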
The study of electrically conducting fluid flow is of considerable interest in modern metallurgical processes. This can be traced to the fact that most fluids in this sector are electrically conducting. Historically, the first report on the motion of an electrically conducting fluid in the presence of a magnetic field was presented by Rossow [17]. Thereafter, Alfvén [18] reported that if a conducting liquid is placed in a constant magnetic field, every motion of the liquid generates a force called the electromotive force (e.m.f.), which produces electric currents. One of the most significant aspects of these contributions is their application to engineering problems such as MHD generators, plasma studies, nuclear reactors, and geothermal energy extraction. Soundalgekar and Murty [19] investigated heat transfer in MHD flow with pressure gradient, suction, and injection. It was observed that an increase in the magnetic field parameter leads to an increase in fluid velocity, skin friction, and rate of heat transfer, and a fall in temperature. Rajeswari et al. [20] observed that, due to the uniform magnetic field and suction at the wall of the surface, the concentration of the fluid decreases with an increase in the chemical reaction parameter. Ghosh et al. [21] found that an increase in inclination of the applied magnetic field opposes the primary flow and also reduces the Grashof numbers. Das [22] concluded that increasing magnetic field and thermal radiation lead to a deceleration of velocity, but the reverse is the effect for the melting parameter when the solid surface and the free stream move in the same direction. Motsa and Animasaun [23] presented the behavior of unsteady non-Darcian magnetohydrodynamic fluid flow past an impulsively started surface using bivariate spectral local linearization analysis. Koriko et al. [24] illustrated the dynamics of two-dimensional magnetohydrodynamic (MHD) free convective flow of a micropolar fluid along a vertical porous surface embedded in a thermally stratified medium. It was concluded that the velocity and microrotation profiles are strongly influenced by the magnetic field in the boundary layer, decreasing with an increase in the magnitude of the magnetic parameter. Recently, a theoretical investigation of MHD natural convection flow in a vertical microchannel formed by two electrically nonconducting infinite vertical parallel plates, and the effects of MHD mixed convection on the flow through a vertical pipe with time-periodic boundary conditions, were presented explicitly by Jha and Aina [25, 26].
Several processes involving melting heat transfer in non-Newtonian fluids have promising applications in thermal engineering, such as melting of permafrost, oil extraction, magma solidification, and thermal insulation. As such, a lot of experimental and theoretical work has been conducted on the kinetics of heat transfer accompanied by melting or solidification effects. The process of melting of ice placed in a hot stream of air at steady state was first reported by Roberts [27]. Historically, this report can be referred to as the pioneering analysis of the melting phenomenon. Another novel report on the melting phenomenon during forced convection heat transfer, when an iceberg drifts in warm sea water, was presented by Tien and Yen [28]. From their investigation, they observed that melting at the interface results in a decrease in the Nusselt number. Epstein and Cho [29] discussed laminar film condensation on a vertical surface. Much later, melting heat transfer in a nanofluid boundary layer on a stretching circular cylinder was examined by Gorla et al. [30]. In the study of the effect of radiation on MHD mixed convection flow from a vertical plate embedded in a saturated porous medium with melting, Adegbie et al. [31] reported that the Nusselt number decreases with an increase in the melting parameter. Adegbie et al. [31] also stated that the temperature of UCM fluid flow over a melting surface is an increasing function of the variable thermal conductivity parameter. Omowaye and Animasaun [32] investigated the boundary layer analysis of upper convected Maxwell fluid flow. Due to the fact that the temperature at the wall is zero, the classical temperature-dependent viscosity and thermal conductivity linear models were modified to suit the case of both melting heat transfer and thermal stratification. In another study of micropolar fluid flow in the presence of a temperature-dependent and space-dependent heat source, the analysis of the case where the vortex viscosity is a constant function of temperature was reported in [33].
Within the past two decades, the effects of temperature-dependent viscosity on fluid flow have become more important to engineers dealing with geothermal systems, crude oil extraction, and machinery lubrication. Due to friction and internal heat generated between two layers of fluid, the viscosity and thermal conductivity of a fluid substance may be affected by temperature; for more details see Batchelor [34], Lai and Kulacki [35], and Abd El-Aziz [36]. Proper consideration of this fact in the study of inherent irreversibility in a variable viscosity Couette flow by Makinde and Maserumule [37], the numerical investigation of micropolar fluid flow over a nonlinear stretching sheet taking into account the effects of a temperature-dependent viscosity by Rahman et al. [38], the effects of MHD on Casson fluid flow in the presence of Cattaneo-Christov heat flux by Malik et al. [39], fluid flow through a pipe with variation in viscosity by Makinde [40], Casson fluid flow within the boundary layer over an exponentially stretching surface embedded in a thermally stratified medium by Animasaun [41], and steady fully developed natural convection flow in a vertical annular microchannel having temperature-dependent viscosity in the presence of velocity slip and temperature jump at the annular microchannel surfaces by Jha et al. [42] have enhanced the body of knowledge on fluid flow, boundary layer analysis, and heat/mass transfer. In a recent experiment, Alam et al. [43] concluded that the thermal boundary layer decreases with an increasing temperature-dependent viscosity. Hayat et al. [44] discussed the effect of variable thermal conductivity on mixed convective flow over a porous-medium stretching surface. In that article, the kinematic viscosity of the Casson fluid was considered as a function depending on the plastic dynamic viscosity, density, and Casson parameter, and it was reported that an increase in the magnitude of the temperature-dependent viscosity parameter leads to an increase in the fluid's velocity. The above literature review shows that there exists no published article on the effects of viscous dissipation in the flow of a non-Newtonian Casson fluid over an upper horizontal thermally stratified melting surface of a paraboloid of revolution.
Formulation of the Problem
Consider a steady, incompressible, laminar flow of an electrically conducting non-Newtonian (Casson) fluid over a melting surface on the upper horizontal paraboloid of revolution in the presence of viscous dissipation and thermal stratification. The x-axis is taken in the direction of motion and the y-axis is normal to the flow, as shown in Figures 1(a) and 1(b). A uniform magnetic field of strength B is applied normal to the flow. The induced magnetic field due to the motion of the electrically conducting Casson fluid is assumed to be small; hence it is neglected. The stretching velocity is taken proportional to (x + b)^m, and the wall is assumed impermeable. It is further assumed that the immediate fluid layer adjacent to the surface is located at y = A(x + b)^((1−m)/2), where m < 1. Using the Boussinesq approximation, the difference in inertia is negligible but gravity is sufficiently strong to make the specific weight appreciably different between any two layers of Casson fluid on the surface; hence the flow on this kind of surface is referred to as free convection. The body force term suitable to induce the flow over a surface which is neither a perfect horizontal/vertical nor an inclined surface/cone is incorporated in the momentum equation. Following the theory stated in Casson [45] and the boundary layer assumptions, the rheological equations for an isotropic Casson fluid flow together with heat transfer take the form of the governing equations (2)-(4). Suitable boundary conditions governing the flow along the upper horizontal surface of a paraboloid of revolution are given in (5) and (6). The formulation of the second term in boundary condition (5) states that the heat conducted to the melting surface on the paraboloid of revolution is equal to the combination of the heat of melting and the sensible heat required to raise the solid temperature to its stratified melting temperature (for details, see [29]). For lubricating fluids, heat generated by internal friction and the corresponding increase in temperature affect the viscosity of the fluid; hence, it may not be realistic to treat the plastic dynamic viscosity as a constant function, knowing full well that the space-dependent internal heat source is significant. In order to account for this variation, it is valid to consider the modified mathematical models of both the
temperature-dependent viscosity and thermal conductivity proposed in [46] and adopted in [31, 32]. Meanwhile, the models are still in good agreement with the experimental data of Batchelor [34]. It is worth mentioning that the first and second terms in (7) are valid and reliable since the free-stream temperature exceeds the wall temperature in this study, whereas the thermal stratification at the melting wall and at the free stream is defined in (8). Using the similarity variable for temperature in (7) to simplify the temperature difference, we obtain a suitable temperature difference for flow past a thermally stratified horizontal melting surface of a paraboloid of revolution, as given in (9). From the thermal stratification models in (9), the relations in (10) can be easily obtained, in which the reference temperature appears. A significant difference between the wall and free-stream temperature differences can be easily obtained from (10). This can be traced to the fact that the linear stratification occurs at all points on the wall and at all points at the free stream. In view of this, it is valid to define the temperature-dependent viscous parameter by considering the second term in (10). Mathematically, the ratio of the first two terms in (10) can thus produce the dimensionless thermal stratification parameter St, as in (11). The stream function and similarity variable are of the form given in (12). It is important to note that the stream function automatically satisfies the continuity equation (2). The nonlinear partial differential equations (3) and (4) are reduced to the nonlinear coupled ordinary differential equations (13) and (14). In (13) and (14), the melting parameter, Prandtl number, magnetic parameter, Eckert number, temperature-dependent thermal conductivity parameter, space-dependent internal heat source parameter Γ, skin friction coefficient, Nusselt number, and the buoyancy parameter depending on the volumetric expansion coefficient due to temperature are defined in (15), where the shear stress (skin friction) between the Casson fluid and the upper surface of the horizontal paraboloid of revolution and the heat flux at all points on the surface appear. In order to nondimensionalize the boundary conditions (5) and (6), it is pertinent to note that the minimum value of y is not the starting point of the slot. This implies that all the conditions in (5) are not imposed at y = 0. As shown in Figures 1(a) and 1(b), it is obvious that it may not be realistic to say that y = 0 at all points on the upper horizontal melting surface of a paraboloid of revolution. Hence, it is not valid to set y = 0 in the similarity variable. Upon using y = A(x + b)^((1−m)/2), the minimum value of y accurately corresponds to the minimum value of the similarity variable. This implies that, at the surface, the boundary condition suitable to scale the boundary layer flow is imposed at this minimum value, and the boundary conditions become (18) and (19). Moreover, the dimensionless governing equations (13) and (14) depend on the similarity variable, while the boundary conditions (18) and (19) are functions and/or derivatives evaluated at its minimum value. In order to transform the domain from [ , ∞) to [0, ∞) it is valid to adopt shifted forms of the velocity and temperature functions; for more details, see Figure 1(b).
Considering the fact that
the Prandtl number is strongly dependent on the plastic dynamic viscosity and thermal conductivity, and it is assumed that both properties vary linearly with temperature; hence, for a more accurate analysis of the boundary layer, as suggested in [47, 48], the Prandtl number in (14) is treated as a variable quantity. Equation (18) reveals that the Prandtl number at the free stream is denoted Pr_∞. The final dimensionless governing equations (a coupled system of nonlinear ordinary differential equations) follow. Due to the fact that Θ = 0 at the wall, the influence of temperature-dependent thermal conductivity on heat conduction during the melting process diminishes, and the dimensionless boundary conditions reduce accordingly. Upon substituting the similarity variables (12) and the models of the physical quantities (i.e., Cf and Nu) at the wall into (16), we obtain Re_x^{1/2} Cf_x = (1 + 1/β) f''(0).
Numerical Solution
Numerical solutions of the boundary value problem (21)-(24) are obtained using the classical Runge-Kutta method with shooting techniques and the MATLAB package bvp5c. The boundary value problem cannot be solved on an infinite interval, and it would be impractical to solve it for even a very large finite interval; hence, the far-field boundary is placed at 10. Using the method of superposition by Na [49], the boundary value problem for the ODEs has been reduced to an initial value problem consisting of five simultaneous first-order equations for five unknowns. In order to integrate the corresponding IVP, the values of the velocity and temperature gradients at the wall are required. However, such values do not exist after the nondimensionalization of the boundary conditions (5) and (6). Suitable guess values are chosen and the integration is carried out. The calculated values of the velocity and temperature functions at the far field are compared with the given boundary conditions in (24), and the estimated wall gradients are adjusted to give a better approximation for the solution. A series of guess values is considered and applied with the fourth-order classical Runge-Kutta method using step size Δ = 0.01. The above procedure is repeated until asymptotically converged results are obtained within a tolerance level of 10^−6. It is very important to remark that, with the far-field boundary at 10, all profiles are compatible with the boundary layer theory and asymptotically satisfy the conditions at the free stream, as suggested by Pantokratoras [50]. It is worth mentioning that there exist no related published articles that can be used to validate the accuracy of the numerical results. In view of this, (21)-(24) can easily be solved using ODE solvers such as MATLAB's bvp5c, as explained in Kierzenka and Shampine [51] and Gökhan [52].
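The guess-integrate-correct loop described above can be sketched as follows. Since the paper's coupled equations (21)-(24) involve parameters that are not fully legible in this copy, the classical Blasius-like system f''' + (1/2) f f'' = 0 with f(0) = f'(0) = 0 and f' → 1 at the far field is used as a stand-in; SciPy's solve_ivp replaces the hand-coded RK4.

```python
# Minimal sketch of the shooting technique on a Blasius-like boundary value
# problem, with the far-field boundary placed at 10 as in the paper.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

ETA_INF = 10.0  # finite proxy for the free stream

def rhs(eta, y):
    # State vector y = (f, f', f''); Blasius: f''' = -(1/2) f f''.
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def shoot(fpp0):
    # Integrate the IVP with a guessed f''(0) and return the mismatch
    # between f'(ETA_INF) and the free-stream condition f' -> 1.
    sol = solve_ivp(rhs, [0.0, ETA_INF], [0.0, 0.0, fpp0], rtol=1e-8)
    return sol.y[1, -1] - 1.0

# Adjust the wall gradient until the far-field condition is satisfied.
fpp0 = brentq(shoot, 0.1, 1.0)
print(f"f''(0) = {fpp0:.5f}")  # classical Blasius value ~ 0.33206
```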
Verification of the Results.
In order to verify the accuracy of the present analysis, the results of the classical Runge-Kutta method together with shooting have been compared with those of the bvp5c solution for the limiting case = 0.07, = 0.1, = 0.1, = 0.3, = 0.25, Γ = 1, = 0.09, ∞ = 1, = 0.17, = 0.2, = 0.3, and = 1 at various values of the controlling parameter within the range 0 ≤ ≤ 1. As shown in Table 1, the comparison in the above case is found to be in good agreement. This good agreement is an encouragement for further study of the effects of the other parameters.
Results and Discussion
The numerical computations have been carried out for various values of the major parameters using the numerical scheme discussed in the previous section. This section presents the effect of the different embedded physical parameters on the flow. Following Mustafa et al. [53], the ratio of momentum diffusivity to thermal diffusivity is considered to be unity (i.e., Pr = 1), due to the fact that the Casson fluid flow under consideration possesses substantial yield stress.
Influence of Velocity Index Parameter m and Eckert Number
At the metalimnion level of thermal stratification ( = 0.5), when = 0.07, = 0.1, = 0.1, = 0.3, = 0.25, Γ = 1, = 0.09, ∞ = 1, = 0.2, = 0.3, and = 1, it is worth noting that both the vertical and horizontal velocities decrease with m; see Figures 2 and 3. The combination of the practical meanings of the velocity index parameter justifies the decrease we obtained in both the vertical and horizontal velocity profiles. Meanwhile, the temperature distribution within the flow increases with m from a short distance after the wall up to the free stream; see Figure 4. With an increase in the magnitude of m, Figure 5 reveals that the temperature gradient profile Θ'() increases near the wall (0 ≤ ≤ 3.1) and decreases thereafter toward the far field. It is very important to notice that the velocity profile when m = 0.15 perfectly satisfies the free stream condition asymptotically. Figure 6 illustrates the variation with the Eckert number at various values of the velocity index parameter when = 0.07, = 0.1, = 0.1, = 0.3, = 0.25, Γ = 1, = 0.09, ∞ = 1, = 0.5, = 0.2, = 0.3, and = 1. It is noticed that as m increases within 0.05 ≤ m ≤ 0.2, the thickness of the paraboloid of revolution decreases, but the corresponding influence of stretching on the flow is an increasing function. The variation in the local skin friction coefficient and the local Nusselt number, which is proportional to the local heat transfer rate as stated in (25), as functions of the viscous dissipation term and the velocity parameter, is shown in Figure 7 and Table 2. Figure 7 shows the behavior of Re_x^{1/2} Cf_x at a fixed value of m. Figure 10 reveals that a distinct, significant increase in Re_x^{1/2} Cf_x is guaranteed with an initial increase in β from 0.2 to 0.25. Physically, an increase in the magnitude of the non-Newtonian Casson parameter (β → ∞) implies a sharp transition in the flow behavior from non-Newtonian to Newtonian fluid flow. In view of this, resistance in the fluid flow is produced. It is worth mentioning that an increase in β implies a decrease in the yield stress of the Casson fluid and an increase in the magnitude of the plastic dynamic viscosity.
It is observed that the present study complements related studies on Casson fluid flow with temperature-dependent plastic dynamic viscosity over a non-melting surface; see Figures 8 and 9 in [41], Figures 3 and 4 in [54], and Figure 2 reported by Jasmine Benazir et al. [55]. The relationship between the non-Newtonian Casson parameter β and the Eckert number Ec is sought and illustrated graphically in Figures 13 and 14. Within 0 ≤ Ec ≤ 1.5, there exists no significant difference in Re_x^{1/2} Cf_x with Ec. As shown in Figure 13, when the magnitude of β = 0.25, a distinct significant increase in Re_x^{1/2} Cf_x is observed due to an increase in the magnitude of the viscous dissipation parameter. At small magnitudes of β, the local Nusselt number (Nu_x Re_x^{-1/2}), which is proportional to the local heat transfer rate, is found to decrease with Ec. At large values of β, Nu_x Re_x^{-1/2} increases with Ec; see Figure 14. The simulation was further extended to unravel the relationship between the non-Newtonian Casson parameter β, the thermal stratification parameter St, and the local skin friction coefficient when = 0.07, = 0.1, = 2, = 0.25, Γ = 1, = 0.09, ∞ = 1, = 0.17, = 0.2, = 0.3, and = 1. It is revealed in Figure 15 that Re_x^{1/2} Cf_x increases with β at the epilimnion stage, which is known as the highest and warmest layer (St = 0). In addition, a significant decrease in Re_x^{1/2} Cf_x is observed with an increase in β at the hypolimnion stage, which can be referred to as the coolest layer. Mathematically, when St = 0, this implies that Θ = 1 at the wall, and the maximum wall temperature explains the increase in Re_x^{1/2} Cf_x, since an increase in β corresponds to a decrease in the yield stress.
Conclusion
The boundary layer analysis of non-Newtonian Casson fluid flow over a horizontal melting surface embedded in a thermally stratified medium, in the presence of viscous dissipation and an internal space-dependent heat source, has been investigated numerically.
The effects of the velocity power index, melting parameter, temperature-dependent viscous parameter, Eckert number, thermal conductivity, and magnetic interaction parameter were examined. The conclusions of the present analysis are as follows: (1) An increase in the magnitude of the velocity index parameter leads to a decrease in velocity and an increase in temperature, due to the combined practical influence of the parameter. (5) In the case of Casson fluid flow over an upper horizontal thermally stratified melting surface of a paraboloid of revolution, a decrease in horizontal velocity is guaranteed with an increase in Ec and m.
(6) With an increase in the magnitude of the parameter, the influence of the stretching velocity at the wall, proportional to (x + b)^m, on the horizontal and vertical velocities is stronger than that of the wall profile (x + b)^((1−m)/2), which describes the immediate fluid layer next to the upper horizontal surface of a paraboloid of revolution, due to melting heat transfer.
An extension of the present study to the case of Williamson and Prandtl fluid flow over an upper horizontal thermally stratified melting surface of a paraboloid of revolution is hereby recommended.For suitable parametrization to achieve a comparative study, see [7].
u: Velocity component in the x direction
V: Velocity component in the y direction
 : Coefficient related to the stretching sheet
B: Magnetic field strength
Cf: Skin friction coefficient
cp: Specific heat at constant pressure
x: Distance along the surface
y: Distance normal to the surface
m: Parameter related to the stretching sheet
Figure 1: (a) The coordinate system of Casson fluid flow over an upper horizontal thermally stratified melting surface of a paraboloid of revolution. (b) Graphical illustration of the fluid domain and the conversion of the domain from [ , ∞) to [0, ∞).
Table 1: Validation of the numerical technique: comparison between the solutions of the classical Runge-Kutta method together with shooting (RK4SM) and the MATLAB solver bvp5c for the limiting case.
Table 2: Variations in the Nusselt number Nu_x Re_x^{-1/2}.

Nu_x Re_x^{-1/2} decreases with the Eckert number Ec. At a constant value of m, an unequal decrease in Re_x^{1/2} Cf_x with Ec is also observed. In addition, Nu_x Re_x^{-1/2} decreases with Ec at various values of m. Table 2 shows that Nu_x Re_x^{-1/2} decreases with Ec at various values of m within the interval 0.05 ≤ m ≤ 0.35. Re_x^{1/2} Cf_x decreases with Ec and m. The local skin friction coefficient Re_x^{1/2} Cf_x increases negligibly with St when the magnitude of β is small. A significant increase in Re_x^{1/2} Cf_x is guaranteed with St when the magnitude of β is large.
Nu_x: Local Nusselt number
Pr: Prandtl number
σ: Electrical conductivity of the fluid
ψ(x, y): Stream function
"Physics",
"Engineering"
] |
The escape problem for active particles confined to a disc
We study the escape problem for interacting, self-propelled particles confined to a disc, where particles can exit through one open slot on the circumference. Within a minimal 2D Vicsek model, we numerically study the statistics of escape events when the self-propelled particles can be in a flocking state. We show that while an exponential survival probability is characteristic of non-interacting self-propelled particles at all times, the interacting particles have an initial exponential phase crossing over to a sub-exponential late-time behavior. We propose a new phenomenological model based on non-stationary Poisson processes, which includes the Allee effect, to explain this sub-exponential trend, and we perform numerical simulations for various noise intensities.
A common trait of many soft active matter systems, formed by self-propelled (active) individuals, is their ability to self-organize into complex flowing states that arise due to many-body interactions and an energy input at the particle level [1, 2]. A wide range of systems live under the umbrella of active matter, including biological microswimmers [3, 4], Janus particles [5-7] and vibrated granular rods [8, 9], and most of these systems are embedded in an environment or a spatial confinement which can alter the open-space particle dynamics [10]. Recently, experiments and simulations have shown that the interactions between the self-propelled particles, or interactions with obstacles and boundaries, give rise to interesting behaviours like particle migration towards walls [11, 12], separation in systems with more than one type of active particle [13], as well as trapping [14]. The role of confinement of active particles is undoubtedly fundamental for realistic systems, especially for biological matter and biotechnology [15, 16]. However, it is also one of the least understood and most open topics in current active matter research. The confinement introduces a length scale into the problem, which interacts with the many other length scales that are already present in active matter, changing the emergent pattern formation in the flocking states.
The narrow escape problem is a classical problem in statistical physics, where particles move inside a bounded 2D domain with a small part of the boundary being absorbing. The type of escape process is determined by the particle dynamics inside the domain, and various behaviors have been studied in the past. The classic example is that of Brownian motion, which results in an exponential decay of the number of particles, or equivalently, the survival probability [17]. In recent times, the narrow escape problem has gained renewed interest due to its relevance in biological processes, where the absorbing window may for example represent a small patch of a cellular membrane where receptors are located, and the diffusing particle represents an ion [17, 18]. An exponential decay is also found in chaotic billiard systems, while deterministic billiards give rise to a 1/t decay in particle number [19]. The survival probability in a 1D setting has also recently been studied in a run-and-tumble model of bacterial motion [20].

FIG. 1. A sketch of the system considered in this paper. The particles may represent self-propelled agents like Janus particles, vibrated granular rods or biological microswimmers, modeled for simplicity as spherically symmetric polar particles with a small volume exclusion radius rve = 1 in units of the particle step length. The angular opening of the escape window is fixed to 2π/18. The particles interact through a Vicsek-type alignment interaction with range rint = 5, enabling collective escape events.
In this Letter, we study the problem of interacting active particles confined to a disc with a small opening through which they may escape, as depicted in Fig. (1). In the high-noise, weak-interaction limit the problem is similar to the Brownian escape problem in the sense that interactions are negligible, while in the opposite regime we expect collective effects to alter the escape process, rendering the particle-number decay non-exponential. It is the low-noise regime that is of primary interest in this Letter. We perform numerical simulations for both interacting and non-interacting self-propelled particles, and study the survival probability and escape time distribution. The simulations reveal a sub-exponential decay at late times whose origin we are able to explain within a minimal Poisson process with a non-stationary rate λ. A model with a density-dependent rate inspired by models in population ecology is proposed, which reproduces the obtained event statistics. In general terms, we consider active particles moving in a 2D bounded domain Ω with boundary ∂Ω = ∂Ω_r ∪ ∂Ω_a, where the subscripts denote the reflective and absorbing parts of the boundary respectively. The particles have a density ρ(x, t), assumed to be normalized to unity at t = 0, which follows a continuity equation. From this probability density the survival probability is defined as

S(t) = ∫_Ω ρ(x, t) dx.    (1)

The first hitting time (FHT), in this case also the escape time, is the time T_1 at which a particle escapes the domain. The distribution of first hitting times H(t) is closely related to the survival probability, namely

S(t) = Prob(T_1 > t),    (2)

which simply states that the probability of survival up to time t is equivalent to the FHT being larger than t. This implies for the FHT distribution that H(t) = −dS(t)/dt. We see that the distribution of escape times can be interpreted as the probability flux out of the system.
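As a concrete illustration of these definitions, the estimators below compute the empirical survival probability and FHT density from a set of recorded escape times. This is a minimal sketch in our own notation (the function and variable names are not from the paper), assuming each particle either escapes exactly once or survives the whole run:

```python
import numpy as np

def survival_and_fht(escape_times, n_total, t_grid):
    """Empirical S(t) and H(t) = -dS/dt from recorded escape times.

    escape_times : 1D array of first-hitting (escape) times; particles
                   that never escaped are simply absent from this array.
    n_total      : initial number of particles N_0.
    t_grid       : times at which S(t) and H(t) are evaluated.
    """
    escape_times = np.sort(np.asarray(escape_times))
    # S(t) = fraction of particles still inside the domain at time t
    n_escaped = np.searchsorted(escape_times, t_grid, side="right")
    S = 1.0 - n_escaped / n_total
    # H(t) = -dS/dt, estimated by finite differences on the grid
    H = -np.gradient(S, t_grid)
    return S, H
```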
Vicsek models are undoubtedly the archetypal numerical models for collective swarming and flocking effects [21,22]. The particles are self-propelled with velocity

ẋ_i(t) = v_0 P̂_i(t),    (3)

P̂_i(t + Δt) = D_η [ v̄_i(t) / |v̄_i(t)| ],    (4)

where D_η is a rotation matrix rotating a vector by a random uniformly chosen angle in (−ηπ, ηπ). The parameter η ∈ (0, 1) determines the noise in the system. The velocity v̄_i in Eq. (4) is the average velocity of the neighboring particles of i, representing the velocity with which particle i tries to align. The alignment interaction has a range r_int. Note that the velocity of particle i itself is included in the sum leading to v̄_i, so that in the non-interacting limit r_int → 0 the particle moves according to a very simple stochastic model governed only by the parameter η and the self-propulsion speed v_0, which we here set to unity without loss of generality. Eqs. (3) and (4) must be supplemented with additional information when boundaries or obstacles are present. In the current case, the reflecting boundary of the disc can be dealt with simply by letting the director P̂_i be reflected about the tangent to the circle at the point of impact. A small volume exclusion interaction with range r_ve is included numerically by moving particles a step length apart in the direction separating them should they come too close to each other. This is necessary in confined spaces, since a flock colliding with a wall would otherwise tend to collapse the particle density. Fig. (2) shows an example of the dynamics produced by this model.
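A minimal numerical sketch of this confined Vicsek dynamics is given below. It is our own illustrative implementation rather than the authors' code: for brevity it omits the volume-exclusion step described above, and its output can be fed directly to the `survival_and_fht` estimator sketched earlier.

```python
import numpy as np

def vicsek_escape(N0=512, R=100.0, eta=0.2, r_int=5.0, v0=1.0,
                  half_window=np.pi / 18, t_max=20000, seed=None):
    """Confined Vicsek model (Eqs. 3-4) with one absorbing window.

    Returns the recorded escape times (in units of time steps). The
    absorbing window of opening 2*half_window is centered at (R, 0).
    """
    rng = np.random.default_rng(seed)
    # random initial positions inside the disc and random headings
    r = R * np.sqrt(rng.random(N0))
    phi = 2 * np.pi * rng.random(N0)
    pos = np.column_stack((r * np.cos(phi), r * np.sin(phi)))
    theta = 2 * np.pi * rng.random(N0)
    alive = np.ones(N0, dtype=bool)
    escape_times = []

    for t in range(t_max):
        idx = np.flatnonzero(alive)
        if idx.size == 0:
            break
        p, th = pos[idx], theta[idx]
        # average velocity of neighbours within r_int (self included)
        d2 = ((p[:, None, :] - p[None, :, :]) ** 2).sum(-1)
        nb = d2 < r_int ** 2
        vx = np.where(nb, np.cos(th)[None, :], 0.0).sum(1)
        vy = np.where(nb, np.sin(th)[None, :], 0.0).sum(1)
        new_th = np.arctan2(vy, vx)
        # angular noise uniform in (-eta*pi, eta*pi), as in Eq. (4)
        new_th += eta * np.pi * (2 * rng.random(idx.size) - 1)
        new_p = p + v0 * np.column_stack((np.cos(new_th), np.sin(new_th)))

        rad = np.linalg.norm(new_p, axis=1)
        hit = rad > R
        ang = np.arctan2(new_p[:, 1], new_p[:, 0])
        escaped = hit & (np.abs(ang) < half_window)
        # reflect the director about the tangent for wall collisions
        for k in np.flatnonzero(hit & ~escaped):
            n_hat = new_p[k] / rad[k]                 # outward normal
            v = np.array([np.cos(new_th[k]), np.sin(new_th[k])])
            v -= 2 * (v @ n_hat) * n_hat              # specular reflection
            new_th[k] = np.arctan2(v[1], v[0])
            new_p[k] = p[k]                           # stay inside this step
        theta[idx], pos[idx] = new_th, new_p
        esc_idx = idx[escaped]
        alive[esc_idx] = False
        escape_times.extend([t] * esc_idx.size)
    return np.array(escape_times)
```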
FIG. 2. Snapshots of a simulation based on Eqs. (3)-(4) with active particles confined to a disc with an absorbing window of opening angle 2π/18 centered at (R, 0). The background color map shows the relative particle density for all the particles (absorbed and non-absorbed), bright implying high density. We see that as time progresses the density at the absorbing window increases as particles accumulate. System parameters: η = 0.2, R = 100, r_int = 6, r_ve = 2, N_0 = 2^9.

On the hydrodynamic scale, let us assume that the phase space density of an active particle Ψ(x, θ, t) satisfies a Boltzmann-type mean field equation D_t Ψ = Q[Ψ], where the total time derivative includes the self-propulsion term and takes the form D_t = ∂_t + v_0 P̂(θ)·∇_x. This must of course be supplemented with appropriate boundary conditions for the reflective and absorbing parts of the boundary. The operator Q[Ψ] contains a part resulting from the noise in the direction of motion and a non-linear part that originates from alignment interactions [24]. The particle density and velocity fields are simply the zeroth and first velocity moments of the field Ψ:

ρ(x, t) = ∫ dθ Ψ(x, θ, t),    (5)

ρ(x, t) V(x, t) = v_0 ∫ dθ P̂(θ) Ψ(x, θ, t).    (6)

By integrating the Boltzmann equation over the angles one obtains the mass conservation equation

∂_t ρ + ∇·(ρV) = 0.    (7)

Since the collision operator is in general non-linear, we do not expect a full solution for Ψ to be available through the method of separation of variables. However, it is instructive to make the somewhat weaker assumption that the two main hydrodynamic fields, after integrating out the direction of motion, are separable, namely ρ = X(x)S(t) and V = u(x)f(t). This reduces Eq. (7) to

Ṡ(t) / [S(t) f(t)] = −∇·[X(x)u(x)] / X(x),    (8)
which is in a separated form, implying immediately that both sides are equal to some constant, which in principle is determined from the boundaries, allowing us to write

Ṡ(t) = −k f(t) S(t) ≡ −λ(t) S(t),    (9)

with k a separation constant. This shows that in general we expect the escaping Vicsek particles to behave like a non-stationary Poisson process, with a rate λ(t) = λ(D_θ, r_int, ...; t) that is some complex function of all the system parameters and time.
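For concreteness, Eq. (9) integrates to the standard non-stationary Poisson form. The short derivation below uses only the definitions above:

```latex
\dot{S}(t) = -\lambda(t)\,S(t)
\;\;\Longrightarrow\;\;
S(t) = \exp\!\Big(-\!\int_0^t \lambda(t')\,\mathrm{d}t'\Big),
```

so a constant rate λ(t) = λ recovers the exponential survival probability expected for the non-interacting case.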
In the absence of interactions, when the non-linear collision term is not present, one can attempt a solution in terms of a fully separated set of variables. Writing the phase-space density as Ψ = X(x)Θ(θ)S(t), we easily see that the velocity field in Eq. (6) reduces to

V = v_0 ∫ dθ P̂(θ) Θ(θ) / ∫ dθ Θ(θ),

which is time-independent. In this case the rate in Eq. (9) becomes a constant, and we expect a stationary Poisson process to be a valid description of the escape process. That the non-interacting system behaves like a stationary Poisson process can also be understood from the memorylessness property [25]. This property states that, if one has waited some time t_1 and no escape has taken place, the probability of having to wait a further time t_2 is simply the probability of having to wait a time t_2 in the first place. This type of lack of memory, regarding how much time has passed, can be written in terms of the survival probability simply as S(t_1 + t_2) = S(t_1)S(t_2), which is only satisfied by an exponential function. We expect the correlations between the particles in the interacting case to break this memoryless property, since the system now depends on its history: flocks of particles may form and escape the system collectively, and the potential size of the clusters is limited by how many particles have already left the system. We therefore expect the escape rate to be a function of the particle number, λ = λ[n(t)]. Such processes, where rates depend on the density or number of particles, are ubiquitous in Nature. In epidemiology, for example, both death and infection rates may have non-trivial density or population-size dependencies, which may be traced back to some sort of competition for resources or the simple fact that a higher-density population will have more contacts, which act as possible disease transmission routes [26,27]. These ideas are found in several mathematical models in population ecology and are typically associated with the Allee effect [28]. This is the effect where there is a correlation between the general well-being or chance of survival of an individual in a population and that population's size or density.
To extract the potential density dependence from the numerical data, we consider Eq. (9) in the form λ[S] = −Ṡ/S, where we used S = n(t)/n(0) to write the rate in terms of the survival probability. Fig. (3) shows these data, where a power-law behavior is observed. This motivates the power-law ansatz λ[n(t)] = λ_0 S^ζ(t), with S = n/N_0. The resulting equation Ṡ = −λ_0 S^(1+ζ) is easily shown to have the solution

S(t) = (1 + ζλ_0 t)^(−1/ζ).    (10)

Here the parameter λ_0 is an escape rate, while the shape parameter ζ deforms the decaying function S(t) away from the exponential behavior, which is regained in the limit ζ → 0. For short times we have an exponential-type behavior S(t) = 1 − λ_0 t + ..., which is independent of the shape parameter. For ζ > 0, the solution in Eq. (10) represents a sub-exponential decay at late times, while for negative shape parameters the decay reaches zero at some finite time. Non-interacting case: Simulation results for non-interacting self-propelled particles are shown in Fig. (4), together with best-fit exponential lines. We see that both the survival probability and the FHT distribution are exponential at late times, as expected, with a rate that decays rapidly as a function of the angular noise strength.
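The pair (λ_0, ζ) can be extracted from simulated survival curves by a standard least-squares fit to Eq. (10). The sketch below is one way to do this; the synthetic stand-in data and parameter values are illustrative only, and real data from the simulation would take their place:

```python
import numpy as np
from scipy.optimize import curve_fit

def survival_model(t, lam0, zeta):
    """S(t) = (1 + zeta*lam0*t)**(-1/zeta); zeta -> 0 recovers exp(-lam0*t)."""
    return (1.0 + zeta * lam0 * t) ** (-1.0 / zeta)

# t_data, S_data would come from the simulation (e.g. via survival_and_fht);
# here a synthetic curve stands in for real data.
t_data = np.linspace(0.0, 5000.0, 200)
S_data = survival_model(t_data, 2e-3, 0.8)
(lam0, zeta), _ = curve_fit(survival_model, t_data, S_data, p0=(1e-3, 0.5))
print(f"lambda_0 = {lam0:.2e}, zeta = {zeta:.2f}")
```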
Interacting case: As the alignment interactions in the model are turned on, we expect some deviation from the exponential predictions based on the free theory. Since the particles move collectively in clusters, it is likely that when a single particle finds the absorbing window, so will several other particles in the same cluster. The same may also be said for cases where a cluster misses the window. Results from simulations are shown in Fig. (5), together with best fits from the phenomenological model, with fit parameters consistent with Fig. (3); the FHT distribution is also consistent with the same parameters.
FIG. 6. Interacting and non-interacting survival probabilities as a function of time for the same system parameters. The interactions make the number of escapes higher at early times, while the behavior is clearly sub-exponential at late times.

Fig. (6) shows the interacting and non-interacting survival probabilities together for some chosen values of the noise. We clearly see that while the interacting particles leave the system more rapidly on short time scales, they are less efficient at emptying the system at late times. The curves typically cross each other for low noise values, while they coincide at high noise values. By increasing the initial number of particles, a very similar behavior is expected, except that the crossover time increases, since more particles are present for a longer time to join a flocking state.
Conclusions:
We have studied the escape problem for self-propelled particles with a tunable interaction strength. In the interacting case, the numerical results agree well with a model where the escape rate is a power law in the population fraction, leading to an early-time exponential behavior followed by a sub-exponential decay in time. Surprisingly, collective alignment effects seem to make the escape process slower in the long run, and faster on short time scales, as long as the noise is sufficiently low. In the limit of high noise, fluctuations dominate over the alignment mechanism, and the interacting and non-interacting cases are more or less identical, characterized by exponential decay.
"Physics"
] |
Asthma and Allergy: Unravelling a Tangled Relationship with a Focus on New Biomarkers and Treatment
Asthma is a major driver of health care costs across ages. Despite widely disseminated asthma-treatment guidelines and a growing variety of effective therapeutic options, most patients still experience symptoms and/or refractoriness to standard of care treatments. As a result, most patients undergo a further intensification of therapy to optimize symptom control, with a subsequent increased risk of side effects. Raising awareness about the relevance of evaluating aeroallergen sensitizations in asthmatic patients is a key step in better informing clinical practice, while new molecular tools, such as the component resolved diagnosis, may be of help in refining the relationship between sensitization and therapeutic recommendations. In addition, patient care should benefit from reliable, easy-to-measure and clinically accessible biomarkers that are able to predict outcome and support disease monitoring. To attain a personalized asthma management and to guide adequate treatment decisions, it is of paramount importance to expand clinicians' knowledge about the tangled relationship between asthma and allergy from a molecular perspective. Our review explores the relevance of allergen testing along the asthma patient's journey, with a special focus on recurrent wheezing children. Here, we also discuss the unresolved issues regarding currently available biomarkers and summarize the evidence supporting the eosinophil-derived neurotoxin as a promising biomarker.
Introduction
Asthma is a major driver of health care costs for all ages and the most common chronic disease for children and adolescents [1,2]. The prevalence of asthma has risen steadily over the past decades, currently reaching about 300 million people worldwide and potentially involving a further 100 million by 2025 [3,4].
Despite the availability of both widely disseminated asthma-treatment guidelines and an ever-evolving variety of effective therapeutic options, most patients with asthma continue to experience symptoms, with one in two adults reporting very poorly controlled asthma [5]. Similarly, the prevalence of inadequately controlled asthma in the pediatric population is quite high, with 30-40% of all severe exacerbations occurring in children and adolescents [6]. It has been well documented that a significant proportion of patients might be refractory to standard of care treatments, resulting in further intensification of therapy to optimize symptom control; these patients thus remain symptomatic despite maximal therapy, with a subsequent increased risk of side effects [7]. To date, implementation of existing guidelines has been inadequate, particularly in the primary care setting [8], and, due to several barriers, there is limited emphasis on identifying triggers or allergens, jeopardizing efforts to effectively improve asthma control and reduce patients' burden [9]. Such a management approach seems not to fully acknowledge that allergies are frequent triggers of asthma exacerbations [10] and that up to 80% of childhood asthma and more than 50% of adult asthma cases may have an allergic component [11]. To date, in the USA, allergy evaluation has been discussed in about 33% of primary care office visits for asthma, with allergy testing documented in only 2% of asthma cases over the course of a year [12].
Considering a precision medicine approach, to improve asthma patient care it is paramount to bridge the observable characteristics (phenotypes) with the mechanisms driving the disease (endotypes) using biomarkers. While an endotype-driven treatment still needs to face multiple challenges before its implementation in daily clinical practice [13], asthma endotype classifications combined with specific biomarkers may hold great potential for new therapeutic modalities and better treatment efficacy [14]. Although specific, sensitive, and reliable (point-of-care) biomarkers would be critical for selecting the proper treatment for a given patient, current biomarkers appear to be good indicators of T2 endotypes but not strong predictors of response to targeted treatments [15], and most of them require further validation [16].
In this narrative review, we explore the tangled relationship between asthma and allergy from a molecular perspective, and the relevance of allergen sensitization and testing in asthma diagnosis and therapy selection, with a special focus on recurrent wheezing children. Here, we also discuss the unresolved issues regarding current biomarkers in clinical practice and summarize the evidence supporting eosinophil-derived neurotoxin (EDN) as promising biomarker.
Pursuing an Optimized Asthma Care: The Relevance of the Molecular Approach
Asthma is recognized as a disease with significant heterogeneity in clinical features (phenotypes), disease severity, pattern of underlying disease mechanisms and responsiveness to specific treatments (e.g., responder/non-responder, corticosteroid sensitive or resistant). Thus, precision medicine strategies are needed to better tailor therapy to a patient's clinical and immunological profile [17]. Accordingly, classifying asthma into distinct endotypes in a laboratory- and clinical-evidence-based manner contributes to personalized precision medicine by unravelling disease mechanisms. To date, asthma has been classically associated with type 2 inflammation, characterized by high levels of immunoglobulin E (IgE), eosinophils, fractional exhaled nitric oxide (FeNO), and cytokines frequently found in allergic responses, including interleukins 4, 5, 13 and 9 (IL-4, IL-5, IL-13, IL-9). However, none of these biomarkers has proven effective in differentiating responses between specific drugs that target type 2 inflammation. Moreover, between 10 and 33% of subjects with asthma have disease not associated with allergy (non-allergic asthma) and exhibit non-type 2 inflammation (non-T2 or T2-low endotype), with a prevalence of neutrophils or a paucigranulocytic pattern. Therefore, there remains an unmet clinical need in the study of the mechanisms and biomarkers of both T2-high and T2-low endotypes regarding their ability to predict response to targeted therapy [18].
It has been recently proposed that the molecular allergology approach to allergic asthma may contribute to a better understanding of disease mechanisms, a precise diagnosis through the description of the molecular allergen sensitization profile, as well as to an optimal selection of responders to the targeted treatment, either with allergen immunotherapy (AIT), or with biologicals [19]. Importantly, it has been recently recognized that the heterogeneity of asthma can be mostly ascribed to the complex interactions between the host and the environment, including aeroallergens [20]. To this end, defining the allergen sensitization of a patient with asthma at the molecular level by measuring specific IgE to purified natural or recombinant allergens can improve diagnostic accuracy and improve asthma phenotyping [21]. Allergen components have been available for testing in the clinic for almost two decades. The different allergenic proteins in some pollens and perennial allergens (e.g., dust mite) were characterized early. Pet-derived allergens are the third leading cause of respiratory allergies, after mites and pollens, and a significant number of new findings are changing the understanding of this allergy. Of note, the prevalence of sensitization to dander from various animals appears to be increasing worldwide, with 1 in 5 adults being sensitized to cats. More recently, the allergenic proteins for cats and dogs have been characterized with three important lipocalins (Canis familiaris 4, 6 and Felis domesticus 7) being made available for testing as late as last year [22,23].
Furthermore, molecular-based diagnostics has a direct effect on the strategy of choosing which allergens should be used in AIT [24]. Of note, it is recognized that molecular diagnosis allows for a personalized AIT thus sparing patients the burden of multiple treatments or unnecessary lifestyle modifications involved in allergen avoidance [25]. In this scenario, the component resolved diagnosis (CRD) stands as a promising starting point as it allows to precisely identify the number and type of recognized molecules in the individual patient that are clinically relevant.
Among the innovative molecular approaches to allergic asthma, epigenetics is gaining significant interest by virtue of its role in immune cell differentiation and plasticity [26] and given the observation that airway epithelium pathobiology in asthma is regulated by epigenetic mechanisms [27]. Importantly, epigenetic modifications can mediate the effects of the environment on the development of or protection from allergic diseases [26]. Accordingly, DNA methylation changes might thus be used as molecular biomarkers to quantify the different allergy enhancing or protective exposures [28]. Epigenetic mechanisms, including methylation, can also contribute to childhood asthma; therefore, identifying DNA methylation profiles in asthmatic patients can inform disease pathogenesis. Of note, recent analyses from the MeDALL (European Mechanisms of the Development of Allergy) consortium reported a significant association between reduced whole blood DNA methylation at both 21 and 14 CpG sites and childhood allergy, thus providing novel insights into the shared molecular mechanisms underlying asthma, rhinitis, and eczema [29,30].
Diagnosis
It is increasingly recognized that evaluating aeroallergen sensitization in asthma patients is a key step to improve patients' care [31]. In line with this, the National Institute of Health Asthma Outcomes Task Force recommends the assessment of aeroallergen sensitization as a core biomarker for classification of asthma [32] while recommendations from the GA 2 LEN/EAACI state that the clinical relevance of each allergen is more important than the number of sensitizations itself [33]. Although the testing criteria and the timing of the testing (either alongside or after a diagnosis of asthma has been made) may vary among guidelines, integrating aeroallergen evaluation into asthma management is of paramount importance to optimize the asthma patient journey from diagnosis to treatment.
Different types of aeroallergens and specific sensitization profiles are associated with a different pattern of clinical symptoms and different levels of severity. It has been documented that there is a direct relationship between the degree of allergen sensitization, measured as serum specific IgE, and the likelihood of expression of asthma symptoms [34,35]. This association with allergen-specific IgE titers is especially marked in children, where elevated specific IgE supports a T2-high profile. Of note, when associated with childhood symptoms of atopy and asthma, positive IgE testing aids in diagnosing early-onset allergic asthma [36].
Testing for molecular aeroallergen sensitization can be of help to identify individuals who are sensitized to minor allergens or to cross-reactive allergens as well as to confirm two or more coexisting sensitizations (polysensitization). Most patients with allergic asthma seen by specialists are polysensitized, and many of them are also poly-allergic (sensitizations with clinical relevance) because polysensitization does not necessarily mean that all sensitivities are clinically significant. To date, while in Europe allergen immunotherapy practice recommends that 1 or 2 of the most important allergens for the patient are to be included in the same extract, in the United States, mixtures including many or most of the sensitizing allergens are commonly administered [25].
Aeroallergen sensitization testing enhances the ability to predict asthma development, drug response, and risk for future asthma exacerbations in children. In addition, among children and adolescents, the allergy testing may help identify common comorbid conditions such as allergic rhinitis, which when treated appropriately, can improve asthma control [37]. Within the pediatric population, preschool children with repeated wheeze have been recently included among the patients' profiles who may benefit the most from aeroallergen sensitization evaluation [31]. As recently proposed by Casale et al., several patients' profiles should undergo aeroallergen testing: persistent asthmatics, patients who need oral corticosteroids (OCS) or inhaled corticosteroids (ICS), patients who seek to get advice on the presence of pet at home or regular pet contact or to better understand their condition, and patients who may be eligible to receive AIT or biologicals [31].
Several allergen component tests are now available for clinicians to use in everyday practice [38]. Allergen-specific IgE tests utilizing individual allergenic molecules are regarded as a more precise and informative option, particularly in polysensitized patients compared to those tests based on whole allergenic extracts. A growing spectrum of molecules, representing single allergens of clinical relevance, have been identified, characterized, and produced for commercial in vitro assays. CRD can be of special help in unveiling co-sensitization and/or cross-sensitization of closely related or widely different allergen sources [39,40]. Furthermore, CRD can be of special interest when the prescription of AIT needs to be accomplished in areas with high frequency of sensitization to "minor allergens" [41]. In addition, this is relevant when one should identify whether the sensitization is primary (species-specific) or a result of cross-reactivity to proteins with similar protein structures. In this regard, CRD is gaining greater recognition for pet allergy diagnostics [42]. To date, the frequency of co-sensitization with cat and dog may be explained by shared proteins between the two species e.g., lipocalins, or serum albumins (Table 1). Four dog allergens (e.g., Can f 1, Can f 2, Can f 4, and Can f 6) and two cat allergens (Fel d 4 and Fel d 7) are in the lipocalins family of proteins [42]. A recent study carried out in 294 children and adults with suspected asthma and a positive skin prick test to cat and dog showed that allergen components can unveil the molecular basis of animal polysensitization and may be of help in both identifying primary sensitizers and explaining how individual IgE patterns of expression may correlate with previous pet ownership [46].
Therapy Selection
Asthma management entails allergen avoidance: therefore, identifying allergen sensitization is crucial to both foster education on allergen avoidance and guide appropriate exposure control [47]. Patients with asthma undergoing allergy testing are more likely to adopt preventive measures (including an asthma plan, trigger avoidance, and medication adherence), thus experiencing fewer days with allergy symptoms than their counterparts who had not been tested [48]. These outcomes were supported by a study of adults with moderately severe asthma, who had an individualized plan, including environmental control based on the results of allergy testing [49]. Although avoidance interventions at the population level for the main allergens are not supported by robust evidence of efficacy and are still somewhat controversial, knowing which specific allergen is triggering the allergic asthma can be of help in preventing the appearance or worsening of bronchial symptoms in sensitized selected individuals [50]. Importantly, it is key not only to employ appropriate avoidance measures, but also to effectively sustain these interventions over time to attain a long-lasting symptom improvement.
As suggested by Global Initiative for Asthma (GINA) guidelines, most asthma patients react to multiple triggers that are ubiquitous in the environment, thus making their avoidance very burdensome for the patient. Inhalant allergens, including indoor (molds, animal dander, and house dust mites) and outdoor ones such as pollens, appear to be the most important for children and adults with asthma. However, medications to maintain good asthma control have a key role because patients appear less affected by environmental factors when their asthma is well controlled [10].
Allergen testing can predict response to corticosteroids in both children and adults. Certain patterns of inhaled allergen sensitization in early life may help identify children at high risk for severe exacerbations and those who are likely to respond well to ICS [51]. Similarly, a recent study in the adult population with asthma and sensitization to airborne allergens found that among sensitized subjects, the decline in forced expiratory volume in 1 second (FEV 1 ) was lower for long-term ICS users (1.3-8 and >8 years), compared with nonusers and short-term users [52].
Identifying sensitizations in allergic rhinitis subjects may also help guide AIT for potential asthma prevention. Therefore, positive specific IgE is considered a useful biomarker for AIT candidate selection in the context of a clear history of symptoms on exposure to the relevant allergen [25]. To date, AIT stands as the only treatment approach able to alter the natural course of allergic respiratory diseases by decreasing frequency and severity of symptoms and progression of rhinitis to asthma [25,53]. As AIT works through induction of allergen-specific tolerance, its effectiveness relies on the inclusion of relevant allergens in adequate concentrations to achieve tolerance. As a result, omission of relevant allergens or the use of irrelevant allergens may reduce efficacy of immunotherapy.
Polysensitization develops over time and has clinical relevance for treatment decisions. In many patients originally classified as polysensitized based on SPTs to whole extracts, molecular diagnosis could identify clear sensitizations to major allergens with clinical relevance to be considered for AIT [54]. A detailed molecular diagnosis will add value when determining whether AIT is appropriate for a given patient and, if so, which allergen(s) should be administered. A study performed on 141 patients with pollen-induced allergic rhinitis (AR) [55] showed that molecular diagnosis would change the allergen composition in AIT. The patients were tested with SPTs and the ISAC® microarray-based panel of allergens, and prescriptions before and after the knowledge gained with the input of ISAC were compared. In one out of two cases, the allergen composition of AIT was changed due to the molecular diagnosis results.
Finally, one must bear in mind that even negative results for aeroallergen sensitization can be meaningful for personalizing asthma care, as patients will be spared from taking drugs that will not be effective and from the inconvenience of avoiding triggers to which they are not sensitized [31]. Last but not least, negative test results may prompt a search for other causes of the observed symptoms, thereby improving asthma phenotype definition and enabling a personalized therapeutic approach.
Biomarker Hunting: From Current Challenges to Promising Candidates
Asthma had been tackled for a long time with a "one size fits all" management, although it encompasses a wide range of phenotypes differing in severity and natural history. Therefore, elucidation of these phenotypes and identification of biomarkers with which to recognize them and guide appropriate treatment remain a priority. Biomarkers stand as measurable indicators, bridging an underlying pathway to a phenotype or endotype of a disease, and are of great value to both predict treatment response and monitor disease progression. Therefore, an ideal biomarker should be sensitive, specific, able to provide positive and negative predictive values, while being simple to measure and cost-effective.
Currently available biomarkers allow one to dichotomize individuals to either T2-high or non-T2-high groups, thus establishing criteria for therapy selection in routine practice. Of note, some biomarkers may offer highest yields when applied in the background of certain clinical characteristics (e.g., more reversible airway disease, age at disease onset). Therefore, a combination of different biomarkers, or a biomarker panel, may be more suitable than one to refine selection of biological therapies. Exhaled NO, blood eosinophils, serum IgE, and periostin are well-studied and established biomarkers for T2-high asthma endotype [36].
FeNO measurement can be a valuable tool as it predominantly signifies IL-4 and IL-13 activity, with FeNO concentrations greater than 50 ppb (>35 ppb in children) unveiling an eosinophilic airway inflammation and ICS responsiveness [56]. In conjunction with peripheral eosinophilia, an elevated FeNO is a risk factor for airway hyperreactivity and uncontrolled asthma [57]. However, despite being a reasonable indicator of T2-driven asthma, there are no specific guidelines on how FeNO can guide therapy with biologics, and FeNO may be affected by relevant confounders including smoking, dietary nitrate intake, and virus infections. Nevertheless, a recent post hoc analysis of the Liberty Asthma Quest study evaluated the additional value of baseline FeNO, adjusted for baseline eosinophil level and other clinical characteristics, as a predictor of response in dupilumab-treated patients with uncontrolled, moderate-to-severe asthma [58].
By virtue of IgE's role as a key mediator in the development and maintenance of allergic inflammation and of a documented association between allergic asthma and elevated total IgE [11], serum total IgE has been proposed as a predisposing factor for allergic asthma and is often employed in epidemiological analyses. However, elevated levels of serum total IgE are found not only in patients with allergic asthma but also in other conditions, including parasitic infestations, inflammatory diseases, and some primary immunodeficiency diseases [59]. Accordingly, total IgE levels should be interpreted carefully and not considered an indication of the presence of allergic disease. In terms of diagnostic and therapeutic relevance, a recent study proposed serum-free IgE rather than total IgE as a biomarker of type 2 asthma in adults [60] and suggested that it could be used to determine therapeutic response. Given the emerging availability of biologicals targeting type 2 cytokines, it is important to assess the ability of total IgE to assist clinicians in choosing a biological therapy among the approved options. As suggested by Sanchez et al., the available evidence indicates that total IgE, as well as the blood eosinophil count and FeNO, is not enough to select one therapy over another [61]. The presence of specific IgE antibodies to environmental allergens proves sensitization and is associated with allergic asthma. However, it has been suggested that the interpretation of skin prick tests and specific IgE to whole allergen extracts relies on arbitrary cutoffs, which do not distinguish between clinically relevant and non-relevant sensitizations [11]. In addition, while positive serum specific IgE levels and symptoms upon exposure to the sensitizing allergen are indicated as inclusion criteria for starting AIT, specific IgE levels alone were found to be a poor predictor of AIT indication. Overall, the clinical relevance of total serum IgE and specific serum IgE as biomarkers for therapy selection requires further studies.
Although its expression in bronchial tissue is not associated with asthma severity, periostin has been shown to be a biomarker of persistent eosinophilic airway inflammation despite corticosteroid use [62]. The potential use of serum periostin levels is the assessment of greater response to anti-T2-based therapies. Of note, periostin disadvantages encompass the presence of several periostin splice variants, that complicate its detection, and the uncertainty regarding its use as potential biomarker in children, since baseline periostin levels are higher in children given the ongoing bone growth during childhood.
The measurement of eosinophilia in peripheral blood cells (either by absolute count (BEC) or by percent differential of total leukocytes) has been investigated as a potential surrogate marker of bronchial and/or lung inflammation. Higher eosinophil counts identify patients with more severe disease and poorer outcomes, patients for whom biologic therapies targeting allergic and/or eosinophilic pathways are recommended. To date, the recent European Academy of Allergy and Clinical Immunology (EAACI) guideline on the use of biologicals in severe asthma suggested that the higher the blood eosinophils, the higher the expected impact of benralizumab, dupilumab, and mepolizumab in reducing severe asthma exacerbations as well as the higher improvement in asthma control in patients treated with benralizumab and reslizumab. In contrast, the effect of omalizumab on exacerbations did not depend on blood eosinophils [63][64][65], although a greater response in patients treated with omalizumab, when peripheral blood eosinophils were ≥300 cells·µL −1 , was also reported [66].
Although blood eosinophil numbers can be easily and quantitatively determined in any hospital laboratory, standardization of the appropriate cut-off for clinically relevant eosinophilia and clarification of whether single or multiple measurements are needed in different settings are required prior to application in clinical practice. In addition, some confounding factors should be acknowledged for blood eosinophils, including circadian variation, parasites (e.g., helminthiases, schistosomiasis, filariases), and treatment with systemic corticosteroids [15]. For example, a diurnal variability of BECs has been reported, with peak counts recorded soon after midnight and lowest counts at noon. Regarding blood eosinophil count variability, a recent clinical trial conducted in patients with severe uncontrolled asthma receiving standard maintenance treatment showed that most patients shifted to a different BEC group from their baseline group at some point during the study, and this behavior was most marked in patients receiving long-term OCS. Therefore, the need for several measurements of BEC may be particularly relevant for patients receiving OCSs [67][68][69].
Biomarkers' clinical value also relies on their ability to both predict treatment response and monitor disease progression. While circulating blood eosinophil count of ≥300/µL may be helpful to identify a T2 immune biology to initiate therapy with an anti-T2 mAb, it has very limited value to monitor response [70,71]. Overall, peripheral eosinophil count is commonly obtained due to ease of performance and clinical accessibility. However, one should keep in mind the caveats in its use and the potential limitations as a biomarker for phenotyping refractory asthma or selecting therapy in mild asthma.
Overall, more differentiating, noninvasive, simply measurable, reliable biomarkers with well-defined cutoff values and documentation on their stability/behavior over time are urgently needed. Research focusing on eosinophil-derived molecular products is now emerging, which highlights more potential biomarkers for allergic diseases [15].
Eosinophil granular proteins are useful eosinophilic activation markers in asthmatic patients and include eosinophil peroxidase (EPO), eosinophil cationic protein (ECP) and EDN [72]. Earlier studies reported that, in patients with asthma, serum levels of EPO correlated negatively with FEV 1 and positively with the number of eosinophils in peripheral blood [73]. Of note, EPO can be measured in saliva and has been documented to correlate with sputum eosinophils. In addition, in asthmatic children, EPO levels were lower compared to healthy controls and positively correlated with total IgE levels [74].
ECP has been proposed as a marker for eosinophilic disease and quantified in biological fluids including serum, bronchoalveolar lavage, and nasal secretions. Elevated ECP levels are found in allergic asthma [75] and, among asthmatics, were associated with high neutrophil count [76]. ECP levels are affected by age, smoking, circadian rhythm, and seasonal variation, although only smoking appears to be of clinical significance [77]. Serum ECP was significantly higher in children with symptomatic asthma than in asymptomatic patients [78]. In children, ECP may provide complementary value when used together with lung function test and FeNO. A recent study suggested that plasma ECP concentrations may be a useful marker of type 2 inflammation in children and may help identify those children at highest risk for recurrent exacerbations who could benefit from corticosteroid treatment [79].
EDN, also known as eosinophil protein X, is a granular protein released from activated eosinophils [72]. Patients with severe asthma and uncontrolled asthmatics exhibit higher serum EDN levels, which show a good positive correlation with total eosinophil count (TEC) [80], suggesting the use of EDN as a biomarker of asthma severity in adults. A recent study carried out in adults from the Epidemiological Study on the Genetics and Environment of Asthma revealed that EDN could be a potential biomarker to monitor asthma evolution in adults, as its levels are associated with different asthma expression patterns in adults [71]. It has also been reported that higher serum EDN levels could be found in children at the acute phase than at the stable phase of asthma, and that, in contrast to TEC, the serum EDN level can be predictive of the severity of asthma [81]. In the pediatric population, EDN may hold great promise in distinguishing persistent wheezing children from children presenting wheezing triggered by respiratory tract infections [82], in aiding the diagnosis of school-age childhood asthma [83], as well as in monitoring the response to montelukast or budesonide in preschool children with asthma [84]. Given the limited ability of BEC to monitor treatment response to biologicals, the observed significant correlation between reduced serum EDN level from baseline and lung function improvement after omalizumab, benralizumab and reslizumab [85][86][87] may help in monitoring the treatment response to IgE- and IL-5-targeted therapies.
Table. Usefulness, advantages, and disadvantages of current and candidate biomarkers.

Blood eosinophil count
• Usefulness: It can serve as a prognostic biomarker and predict responsiveness to corticosteroid therapy in asthmatic patients with type 2 inflammation. The baseline value can be used to predict the clinical efficacy of anti-IL5 antibodies (mepolizumab, reslizumab), the anti-IL5 receptor antibody (benralizumab) and the anti-IL4 receptor antibody (dupilumab).
• Advantages: Easy to obtain in the clinical setting, requires minimal patient effort, can be collected across the age spectrum, and is cost-effective. A circulating blood eosinophil count of ≥300/µL may be helpful to identify a T2 immune biology to initiate therapy with an anti-T2 mAb.
• Disadvantages: The optimal cut-off has yet to be established, and its levels may be elevated due to co-existing conditions such as parasitic infestations or decreased due to concomitant medications such as oral corticosteroids. An additional confounding factor is the diurnal variability.

Periostin
• Usefulness: The potential use of serum periostin levels is the assessment of greater response to anti-T2-based therapies.
• Advantages: The stability of serum periostin over disease progression in adults with asthma (without seasonal effect) and in children between 4 and 11 years of age supports its use as a biomarker for type 2-high asthma.
• Disadvantages: The presence of several periostin splice variants complicates its detection. There is uncertainty regarding its use as a potential biomarker in children, since baseline periostin levels are higher in children.

Serum-specific IgE
• Advantages: The main advantages of specific IgE measurements over the skin prick test are that virtually all available allergens can be tested, and the results are not influenced by antihistamines or eczema. Specific IgE tests are slightly more useful to confirm or reject the suspicion of specific sensitisation to a certain allergen. Assessment of sensitization at the molecular level can play a crucial role before prescribing AIT for the right selection of components.
• Disadvantages: The interpretation of skin prick tests and specific IgE to whole allergen extracts relies on arbitrary cutoffs, which do not distinguish between pathologic and non-clinically relevant sensitizations.

FeNO
• Usefulness: The role of FeNO may be additive as a biomarker in relation to asthma morbidity. In conjunction with peripheral eosinophilia, an elevated FeNO is a risk factor for airway hyperreactivity and uncontrolled asthma.

EDN
• Usefulness: It holds great promise in distinguishing wheezing children from children with wheezing triggered by respiratory tract infections, in aiding the diagnosis of school-age childhood asthma, and in monitoring the response to montelukast or budesonide in preschool children with asthma. It can be used as a biomarker to monitor asthma evolution in adults.
• Advantages: It is easy to obtain from multiple specimen types (e.g., serum, urine, sputum, bronchoalveolar lavage fluid, and nasal lavage fluid) and is not affected by circadian rhythm, smoking or gender differences. Compared to ECP, EDN is significantly less charged, making it easier to work with in routine settings. The quantification of serum EDN is not influenced by the type of storage tube used. It is stable at room temperature or for up to one year when frozen at −20 °C or −80 °C.
• Disadvantages: Additional studies are needed to validate the benefits of serum EDN for predicting long-term clinical outcomes and for selecting the right biologics for the right patients with severe asthma.
The Wheezing Child: A Paradigm to Optimize Asthma Control in Adult Life
The global prevalence of current wheeze in children and adolescents was estimated to be 14.1% and 11.7% and projected to increase by 0.06% and 0.13% annually, respectively [95,96]. Wheezing is common among preschool children and infants; of note, young children with recurrent wheezing encompass a heterogeneous group with different genotypes and phenotypes that lead to different outcomes [97]. Overall, recurrent wheezing is associated with a two-fold increase in outpatient physician consultations or even emergency department visits, an up to 5-fold greater risk of being admitted to hospital [98], missed days at school, activity limitation, and sleep disturbances [99]. Several phenotypes of preschool wheeze have been proposed to identify individuals at risk for persistent asthma at school age and display a temporal pattern of symptoms (Table 3).

Table 3. Wheezing child phenotypes. IgE, immunoglobulin E; NA, not available. Elaborated from data in [100][101][102][103][104][105].

Transient early wheeze
• Onset before the age of 3 years
• Resolves by the age of 6 years without persistent lung function impairment

Late-onset wheeze
• Onset after 3 years of age and persists in childhood
• Linked to atopy, reduced lung function and bronchial hyperresponsiveness
• Higher likelihood of asthma in adolescence

Using the latent class analysis (LCA) technique, Fitzpatrick et al. identified four phenotypes of recurrent wheezing in preschool children based on type-2 inflammatory features including BEC, atopic eczema, aeroallergen and food sensitization, and/or pet exposures [51]. These phenotypes are distinguishable with regard to exacerbation risk, with inhaled allergen sensitization patterns being important risk factors, and, importantly, they predict a favorable response to daily ICS treatment to prevent exacerbations. Thus, certain patterns of inhaled allergen sensitization in early life can help identify children at high risk for severe exacerbations and those who are likely to respond well to ICS [51]. Of note, ICS are recommended as the first choice of controller treatment in all preschool recurrent wheezing children irrespective of phenotype, but they are particularly beneficial in terms of fewer exacerbations in atopic children [106].
Genes and environmental factors such as respiratory viruses, tobacco smoke exposure, and inhaled allergens can modify the phenotypes of early childhood wheezing [107]. Furthermore, early exposures, including that to older siblings, pets, farm animals, and house-dust endotoxin, seem to influence the risk for persistent asthma [108,109]. To date, children suffering from allergic asthma, especially those with a persistent moderate and severe phenotype, were more often sensitized to all the three major dust mite allergens (Der p1, Der p2, Der p23) [110]. Of note, Der p 23 sensitization has been recently described as being associated with increased asthma risk [111]. Although many individuals later diagnosed with asthma exhibit their first symptoms during the preschool period, diagnosing asthma in preschool children is challenging, resulting in undertreatment of young asthmatic children and possibly overtreatment of transient wheezers [112]. Of note, transient early wheezing and nonallergic wheezing generally subside by 4-6 years of age as the airways enlarge in a growing child [31]. It is well documented that a lack of diagnostic criteria as well as the incapability of pre-school age children to perform conventional lung function tests, like exhaled nitric oxide or spirometry, hinder effective diagnosis and assessment in children [113,114]. As diagnosing asthma in the pediatric population is still challenging in clinical practice, defining the prognosis of preschool children requiring medical attention for recurrent wheezing appears of great value to optimize asthma care.
The progression from recurrent wheezing to persistent asthma is variably predicted by factors that emerged in cohort studies, including early sensitization to inhalant or food allergens, atopy, family history of asthma, male gender, peripheral blood eosinophilia, as well as a history of wheezing with lower respiratory tract infections. The Asthma Predictive Index (API), which includes recurrent wheeze (e.g., more than three wheezing episodes per year) together with risk factors such as parental asthma, atopic dermatitis, and allergen sensitization in the child, may more effectively identify young children at risk of persistent asthma [115]. Overall, the API has an appreciable likelihood ratio (~7.4), as it increases the probability of a prediction of asthma by 2-7 times [106,116]. Children with a positive API were found to be 7 times more likely to have active asthma at school age [117]. A modified version of the API (mAPI) allows clinicians to identify children from birth to age 4 at high risk of developing asthma as well as to predict children's response to ICS [118,119]. Importantly, the mAPI acknowledges the contribution of allergic sensitization to at least one aeroallergen as a major criterion.
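To illustrate how such an index reduces to a simple decision rule, the sketch below encodes only the API logic as summarized above (recurrent wheeze plus at least one risk factor). The function name is ours, the published API/mAPI criteria distinguish major and minor criteria in more detail, and this snippet is illustrative rather than a clinical tool:

```python
def positive_api(wheezing_episodes_per_year: int,
                 parental_asthma: bool,
                 atopic_dermatitis: bool,
                 allergen_sensitization: bool) -> bool:
    """Simplified Asthma Predictive Index check, per the summary above.

    The published API/mAPI separates major and minor criteria; here the
    risk factors are flattened into a single any-of condition for brevity.
    """
    recurrent_wheeze = wheezing_episodes_per_year > 3
    risk_factor = parental_asthma or atopic_dermatitis or allergen_sensitization
    return recurrent_wheeze and risk_factor

# Example: a child with 4 episodes/year and parental asthma screens positive
print(positive_api(4, parental_asthma=True,
                   atopic_dermatitis=False, allergen_sensitization=False))
```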
The identification of sensitization patterns based on individual allergens and linking these to disease morbidity for asthma in young children could have relevant clinical implications. Data from German MAS birth cohort study showed that schoolchildren sensitized to perennial allergens with high exposure early in life are more prone to develop an impaired lung function at school age than children without sensitization or sensitized to indoor allergens but with a low exposure [120]. Therefore, diagnosing specific inhalant allergen sensitizations in at-risk children can aid in designing allergen-avoidance strategies [121] as well as in guiding exposure mitigation and AIT choice in asthma prevention if allergic rhinitis is present. Allergen sensitization also negatively affected asthma outcomes in children with poorly controlled asthma and sensitization as they were more likely to use oral corticosteroids than their counterparts without allergen sensitization [122].
Furry animals, such as dogs and cats, represent important allergen sources [42]. It has been reported that sensitization to dog, cat, and horse throughout childhood was significantly associated with asthma at age 7 years [123]. In a population-based study of 259 animal-sensitized individuals, most were sensitized to two or more dog allergens, and co-sensitization to Can f 5 [prostate (i.e., male-dog) specific allergen] and Can f 1/2 conferred the greatest risk for asthma [124]. In line with this, a recent study in children with mild to severe asthma showed that sensitization to a greater number of dog components may identify children whose asthma severity is driven by dog exposure and who may benefit from targeted interventions such as exposure mitigation and immunotherapy [125]. To this end, molecular-based allergy diagnostics such as CRD provide clinicians with opportunities to better characterize children with polysensitization [126]. Nevertheless, one should keep in mind that a disagreement currently exists between asthma guidelines on the routine use of allergy testing in the diagnostic work-up of a child with persistent asthma [127]. Collectively, healthcare providers dealing with a preschool child with recurrent wheeze may rely on several tools, such as the mAPI, aeroallergen testing, and CRD, to effectively predict the risk of persistent asthma in later childhood and adulthood. Armed with this predictive information, personalized education and successful implementation of allergen avoidance and appropriate therapies can help to reduce the substantial burden associated with more severe, persistent, and exacerbation-prone disease. From a clinical standpoint, the recurrent wheezing child may be a useful paradigm for optimizing asthma care in children at high risk of developing persistent, life-long asthma.
Conclusions
Asthma affects both children and adults and displays high morbidity. As asthma should be regarded as a syndrome, because of its phenotypic and endotypic heterogeneity, therapeutic approaches can be most effective if tailored to selected molecular targets in the dysregulated pathways of that given patient [128]. Despite growing therapeutic armamentarium, asthma control remains largely inadequate, and rates of healthcare use are high in both pediatric and adult populations, thus posing a substantial burden on societies. It has been suggested that raising awareness about the relevance of evaluating aeroallergen sensitizations in asthmatic patients is a key step in improving asthma/allergy care and informing clinical practice [31]. To date, mounting evidence is supporting the relevance of aeroallergen testing across the asthma patient journey (Figure 1).
Nevertheless, allergic sensitization should be regarded as a quantifiable rather than dichotomous trait, e.g., we can use the titer of allergen-specific sIgE antibodies and/or the number of sensitizations to the major allergen components of an exposure (e.g., dog) [129]. Physicians are currently provided with tools to better describe sensitization, such as CRD, which measures the sIgE response to many major and minor allergens. This approach aids in refining the relationship between sensitization, clinical outcomes, and asthma progression [130]. CRD stands as a useful tool to better characterize children with polysensitization and to offer guidance in pet allergy treatment recommendations, including exposure remediation and AIT prescription [42,126], based on the major components present in allergenic extracts used in AIT and the patient's profile. The information retrieved through molecular allergy diagnostics opens novel avenues to advance asthma care while raising concerns about the correct timing for its application [131]. To implement the CRD approach in the diagnostic and management algorithms for asthma, appropriate interpretation tools should be developed to facilitate its use [129].
To unravel asthma molecular networks and link phenotypes with endotypes, great efforts have been devoted to the search for biomarkers endowed with several features, such as ease of performance, reliability, clinical accessibility, and the ability to predict outcome and support disease monitoring [15]. Despite the drawbacks of currently employed biomarkers, therapy selection still relies on them, as also documented in international guidelines and opinion papers [14,40,53,[63][64][65]. Total IgE, FeNO, and BEC are the main drivers of biologicals prescription in patients with moderate to severe T2-high asthma, with established cut-off values for likelihood of treatment response. Furthermore, we still lack more effective biomarkers for therapy selection in patients with overlapping phenotypes or in those in whom risk factors, such as smoking, or concomitant therapies (e.g., systemic corticosteroids) strongly hinder the reliability of some biomarker assessments. In this challenging scenario, EDN is an analytically attractive biomarker that can be reliably quantified in multiple specimens [93] and is stable during long-term storage. Of note, EDN may hold promise in distinguishing wheezing children from children with respiratory tract infections [82] and in aiding the diagnosis of school-age childhood asthma [83].
Future studies addressing the prognosis of preschool recurrent wheezing children appear of great value to optimize asthma care. It is well established that diagnosing asthma in preschool children remains an unsolved issue. To date, preschool children with recurrent wheezing display several phenotypes (reflecting different endotypes and differing in clinical manifestations, natural history, response to therapy and inflammatory mechanisms) and may also switch between phenotypes or progress from one into another. Furthermore, not all the observed phenotypes are precursors of or manifestations of childhood asthma and, when they are, they may be differentially responsive to asthma-targeted therapies. To this end, it has been advised to distinguish children based on allergen sensitization, evidence of eosinophilia and evidence of neutrophilia and/or infection [98]. In line with this, efforts should also be directed towards a better characterization of children with polysensitization and a routine implementation of allergen testing in at-risk children to design allergen-avoidance strategies as well as to guide AIT choice in asthma prevention. Another area deserving further investigation is the search for biomarkers for non-allergic type 2-low children, as the type 2-low inflammatory endotype is still poorly characterized in the pediatric population. Finally, it is important to recognize that asthma phenotypes are not static and can change with time in individual patients. Therefore, continued assessment of patients unresponsive to therapies requires reassessment of relevant biomarkers.
"Medicine",
"Biology"
] |
Measurement of Corner-Mode Coupling in Acoustic Higher-Order Topological Insulators
Recent developments in band topology have revealed a variety of higher-order topological insulators (HOTIs). These HOTIs are characterized by a variety of different topological invariants, making them distinct at a fundamental level. However, despite such differences, the fact that they all sustain higher-order topological boundary modes makes it challenging to tell them apart phenomenologically. This work presents experimental measurements of the coupling effects of topological corner modes (TCMs) existing in two different types of two-dimensional acoustic HOTIs. Although both HOTIs have a similar four-site square lattice, the difference in magnetic flux per unit cell dictates that they belong to different types of topologically nontrivial phases: one lattice possesses quantized dipole moments, while the other is characterized by a quantized quadrupole moment. A link between the topological invariants and the response line shape of the coupled TCMs is theoretically established and experimentally confirmed. Our results offer a pathway to distinguish HOTIs experimentally.
INTRODUCTION
The recent development of topological band theory has revealed the existence of higher-order topological insulators (HOTIs) [1][2][3][4][5]. An important hallmark of such HOTIs is the existence of (D − n)-dimensional topological boundary modes, where D is the dimensionality of the system and n ∈ (1, D] is an integer. Thanks to the development of classical wave crystals, HOTIs have been observed not only in solid-state electronics but also in photonic crystals [6][7][8][9][10][11][12], sonic crystals [13][14][15][16][17][18][19], and elastic-wave crystals [20,21]. For HOTIs with two real-space dimensions, the higher-order topological boundary modes are zero-dimensional modes localized at the corners of the lattice. These topological corner modes (TCMs) can be protected by a variety of topological invariants, such as quantized dipole moments [9,10,22], quantized quadrupole moments [1,23,24], combinations of first Chern numbers [17], etc. However, although their topological protection can be revealed by theoretical computation of the topological invariants, it is difficult to distinguish them from an observational point of view.
A previous theoretical study analyzed the finite-size effect on neighboring TCMs in a 2D HOTI [25]. By comparing two different types of topologically nontrivial square-lattice HOTIs, a topological dipole insulator (TDI), wherein the dipole moments are quantized, and a topological quadrupole insulator (TQI), with a quantized quadrupole moment, it was shown that the spectral responses of the TCMs split and that the line shapes are associated with the topological characteristics of the HOTI. As such, the spectral responses of the coupled TCMs are an observable effect by which the underlying topological nature can be phenomenologically revealed.
THEORETICAL CONSTRUCT
For the sake of completeness, we first briefly summarize the important theoretical background; a complete theoretical analysis can be found in Ref. [25]. Here, the TDIs and TQIs are both based on extensions of the 1D Su-Schrieffer-Heeger (SSH) model, and we show the unit-cell structures for the 2D TDI and TQI in Figures 1A,B, respectively. The strengths of the staggered nearest-neighbor couplings along the x and y directions are denoted as intracell λ (thin tubes) and intercell c (thick tubes) hopping. For the TDI, all hopping coefficients have the same sign, so that the net magnetic flux in a plaquette is zero. For the TQI, the hopping coefficients can take opposite signs, as indicated by the red tubes in Figure 1B; the resultant net magnetic flux is π. When |λ/c| < 1, both systems are in the topologically nontrivial phase and host topological edge modes and TCMs, as shown in Figures 1C,D. For an N × N TDI lattice, the corresponding Hamiltonian can be written as in Eq. 1, where $I_{2N\times 2N}$ is a 2N × 2N identity matrix, ⊗ denotes the Kronecker product, and $|m, A\rangle$ and $|m, B\rangle$ denote the states of the left and right atoms, respectively, in the mth unit cell of a 1D SSH chain. For an N × N TQI, the Hamiltonian is given by Eq. 3, where $\sigma_0$ is a 2 × 2 identity matrix and $\sigma_3$ is the z-component of the spin-1/2 Pauli matrices. We plot the eigenvalues of Eqs. 1 and 3 in Figures 1C,D, respectively, with the parameters set as λ = 0.1, c = 0.5, and N = 3. The bulk, edge, and corner modes are marked by black, blue, and red points, respectively. In both cases, four TCMs are found. As seen in the insets, the four TCMs are not degenerate. This is because, in a finite-sized lattice, the edges can mediate coupling between neighboring TCMs [25]. For the TDI, the TCMs split into three clusters, with the two modes in the middle being degenerate; these two degenerate TCMs remain pinned at zero energy because of chiral symmetry. For the TQI, it can be proved that all eigenstates, including bulk modes and TCMs, are at least doubly degenerate [25,26]. Therefore, the TCMs are divided into two doubly degenerate clusters, which are symmetric about zero energy. The finite-size coupling effect can be captured by a four-state effective Hamiltonian with the four TCMs as the basis, given by Eq. 4 for the TDI and Eq. 5 for the TQI. Here, $t_N = a_1 b_N \lambda(-\lambda/c)^{N-1}$, where $a_m$ ($b_m$) denotes the amplitude of the eigenstate $|m, A\rangle$ ($|m, B\rangle$) of $H_{\mathrm{SSH}}$. Since |λ/c| < 1, $t_N$ vanishes for large N. These models are schematically shown in Figures 1E,F. Note that, similar to their respective unit cells, there is a magnetic flux of 0 and π in the TDI and TQI corner models, respectively. From Eqs. 4, 5, we can use a Green's function to describe the spectral responses of the coupled TCMs, $\hat{G}(E) = \sum_j \frac{|\phi_j\rangle\langle\phi_j|}{E - E_j + i\eta}$, where $E_j$ is an eigenvalue, $|\phi_j\rangle$ is the corresponding eigenvector, and η accounts for any dissipative effects in the system. When excited at corner $|m\rangle$, the response at corner $|n\rangle$ is $G_{m,n}(E) = \langle m|\hat{G}(E)|n\rangle$. When excited at corner A, the responses at each corner are shown in Figure 1G for the TDI and Figure 1H for the TQI. It is seen that the spectral responses differ between the TDI and TQI. In particular, the TDI response splits into three peaks when measured at corners A and C, while the TQI response vanishes when measured at C. Such distinctions are an important manifestation of the quantized magnetic fluxes in the systems, which can be used as experimental evidence to distinguish the two classes of HOTIs.
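To make the predicted line shapes concrete, the following minimal Python sketch (not the authors' code) models the coupled corners as a four-site ring with an illustrative coupling strength t_N, threading flux 0 (TDI-like) or π (TQI-like, one bond sign flipped) as described above, and evaluates $G_{m,n}(E)$ from the spectral form of the Green's function; the numerical values of t_N and η are assumptions chosen for illustration.

```python
import numpy as np

def corner_model(t, pi_flux=False):
    """Four-state effective corner Hamiltonian: a ring A-B-C-D with coupling t.
    All-positive bonds give zero flux (TDI-like); flipping the sign of one
    bond threads a pi flux through the ring (TQI-like)."""
    H = np.zeros((4, 4))
    for k, (i, j) in enumerate([(0, 1), (1, 2), (2, 3), (3, 0)]):
        sign = -1.0 if (pi_flux and k == 3) else 1.0
        H[i, j] = H[j, i] = sign * t
    return H

def response(H, m, n, E, eta=2e-3):
    """G_mn(E) = sum_j <m|phi_j><phi_j|n> / (E - E_j + i*eta)."""
    E_j, phi = np.linalg.eigh(H)
    return sum(phi[m, j] * phi[n, j] / (E - E_j[j] + 1j * eta)
               for j in range(len(E_j)))

t_N = 0.02                       # illustrative corner-corner coupling
E = np.linspace(-0.06, 0.06, 1201)
A, C = 0, 2                      # corner C is diagonally opposite corner A
for label, flux in [("TDI (0 flux)", False), ("TQI (pi flux)", True)]:
    H = corner_model(t_N, pi_flux=flux)
    print(label, "| peak energies:", np.round(np.linalg.eigvalsh(H), 3),
          "| max |G_AC|:", round(float(np.abs(response(H, A, C, E)).max()), 2))
```

In this toy model the zero-flux ring has eigenvalues −2t_N, 0, 0, +2t_N, reproducing the three-peak TDI line shape, whereas the π-flux ring gives two doubly degenerate levels at ±√2 t_N and an identically vanishing response at the opposite corner C, since for that ring $(E - H)^{-1} = (E + H)/(E^2 - 2t_N^2)$ has no A-C matrix element.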
EXPERIMENTAL RESULTS
We next present the designs of the phononic crystals realizing the TDI and TQI; the unit cells are shown in Figures 2A,B, respectively. The gray blocks denote the acoustic cavities, whose first-order resonance fulfills the role of the on-site orbital. The cavities have a height of H = 80 mm and a width of L = 35 mm, and they are coupled by tubes that furnish the hopping terms. For the TDI, the widths of the intracell and intercell coupling tubes are w = 17 mm and W = 30 mm, respectively. They are connected at the same vertical position, at a height of h = 21 mm. The lattice constant is a = 150 mm. The design of the TQI is different because we need to realize hopping terms with a negative sign. To achieve this, we connect the top of one cavity to the bottom of the designated neighbor using a bent tube (red in Figure 2B). The blue tubes, which furnish positive hopping, are bent in the same manner so that all tubes have the same length. The positions of the cavities are staggered in elevation so that the lengths of the intracell and intercell coupling tubes are the same. We use COMSOL Multiphysics to compute the band structures of the two types of unit cells. The medium inside the cavities and coupling tubes is air with a mass density of 1.23 kg/m³ and a sound speed of 343 × (1 + 0.005i) m/s, where the imaginary part accounts for losses. The results are shown in Figures 2C,D, respectively, where four bands are seen in both cases. Based on these two designs, we fabricated the phononic crystals. The cavities are machined from aluminum alloy, and the coupling tubes are 3D printed from photosensitive resin. Photographs of the TDI and TQI configurations are shown in Figures 3A, 4A, respectively. Both lattices are 3 × 3 unit cells in size, each containing a total of 36 cavities. At the top of each cavity, we drilled a small hole (covered with a small white plug) through which a sound signal can be injected or a probe can detect the acoustic signal inside the cavity. We excite corner A with a loudspeaker, as shown in Figures 3A, 4A, and then use a microphone to record the spectral response in every cavity. For the TDI lattice, the responses measured at corners A, B, C, D are shown in Figure 3B. In the predicted frequency regime, i.e., 2,050-2,150 Hz, a three-peak response line shape is seen in the spectra measured at corners A and C, and a two-peak line shape is seen at corners B and D. These results agree well with the theoretical prediction of the tight-binding model (Figures 1C,G). To confirm that these responses are due to the coupled TCMs, we measured the pressure responses in all cavities at the frequencies of the response peaks. The results are shown in Figures 3D-F. Clearly, the spatial distributions are strongly localized at the corners, which is a signature characteristic of TCMs. We further verified the responses in numerical simulations; the results in Figures 3C,G-I also show excellent agreement with the experiment.
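As a rough consistency check on the quoted geometry, one can estimate the first cavity resonance by treating each cavity as a rigid-walled pipe resonating along its height; this is a simplifying assumption (it ignores the cavity cross-section and the loading of the coupling tubes), but it lands close to the corner-mode frequency window reported here.

```python
# Half-wavelength estimate of the first cavity resonance (assumption: the
# cavity behaves as a rigid-walled pipe of length H; tube loading neglected).
v = 343.0           # speed of sound in air, m/s
H = 0.080           # cavity height, m
f1 = v / (2 * H)    # first half-wavelength resonance
print(f"estimated first cavity resonance: {f1:.0f} Hz")  # ~2144 Hz
```

The estimate of roughly 2.1 kHz is consistent with the 2,050-2,150 Hz regime in which the corner-mode peaks are observed.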
Similar experiments were performed for the TQI lattice. In Figure 4B, two response peaks are identified in the bulk gap (2,000-2,150 Hz) for the spectra measured at corners A, B, D, while the response at corner C is significantly weaker. These observations again align with the predictions (Figures 1D,H) and simulations (Figure 4C). We further confirm in the measured (Figures 4D,E) and simulated (Figures 4F,G) spatial maps that the response peaks are indeed due to the TCMs. The approach may be extended to HOTIs in other lattices [4], such as the honeycomb lattice [20], etc. On the other hand, the coupled higher-order topological modes can be a useful starting point for higher-order non-Hermitian physics [27,28]. They may also find applications such as topological wave and light confinement [13,29] and topological lasing [30,31].
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.
AUTHOR CONTRIBUTIONS
XL and SW performed numerical simulations and designed the experiment. XL, SW, and GZ carried out the measurements. All authors
"Physics"
] |
The periodic rotary motions of a rigid body in a new domain of angular velocity
In previous works, the limiting case of the motion of a rigid body about a fixed point in a Newtonian force field, arising from a gravity center lying on the Z-axis, was solved. The authors applied the small parameter technique, which is achieved by giving the body a sufficiently large angular velocity component r_o about the fixed z-axis of the body, and the periodic solutions of motion were obtained in the neighborhood of r_o tending to ∞. In our work, we aim to find periodic solutions to the problem of motion in the neighborhood of r_o tending to 0. We therefore introduce a new assumption: r_o is sufficiently small. Under this assumption, we must construct a large parameter and search for another technique for solving the problem, named the large parameter technique, instead of the previously well-known small parameter one. The advantage of the new technique is that it saves the high energy otherwise used to start the motion and gives the solution of the problem in another domain. The solutions obtained by the new technique depend on r_o. We consider that the center of mass of the body does not necessarily coincide with the fixed point O. We reduce the six nonlinear differential equations of the body and their three first integrals to a quasilinear autonomous system of two degrees of freedom with one first integral. We solve the rational case when the frequencies of the generating system are rational, excluding ω = 1, 2, 1/2, 3, 1/3, …, under the condition $\gamma_0'' = \cos\theta_o \approx 0$. We use the fourth-order Runge–Kutta method to find the periodic solutions in a closed interval of the time t and to compare the analytical method with the numerical one.
sufficiently small instead of sufficiently large as in [4]. We define a large parameter µ proportional to 1/r_o instead of the small one in [4]. Let A, B, C represent the moments of inertia of the body, p, q, r the components of the angular velocity vector, and γ, γ′, γ″ the direction cosines of the unit vector in the direction of the Z-axis. The equations of motion and their three first integrals are then derived in the form: Such that:
The formal construction of the periodic solutions for a rational value of the natural frequency ω
We achieve the periodic solutions (14) when: The generating system of (14) is obtained when µ → ∞, in the form: The periodic solutions of system (17), with period $T_0 = 2\pi n$, then become: where $M_i$, i = 1, 2, 3, are constants to be determined. We assume solutions of system (14) of the following form [9]: with a period $T(\mu^{-1}) = T_0 + \alpha(\mu^{-1})$ that reduces to (18) as µ → ∞. Let us define the quantities $M_i$, i = 1, 2, 3, as follows: where the $\beta_i$ are functions of $\mu^{-1}$ representing the deviations of the initial values of $p_2, \dot{p}_2, \gamma_2$ for system (14) from their initial values in the generating system (17), such that $\beta_i(0) = 0$. Let us express the initial conditions (16) by the relations: We rewrite the periodic solutions (18) in the form: where $E = \sqrt{M_1^2 + M_2^2}$ and $\varepsilon = \tan^{-1}(M_2/M_1)$. Using (22) and (12), we get: Substituting (22) and (23) into (16), we get: where: Using (24) and (25), the following functions are obtained: Substituting the initial conditions (21) into the first integral (17) at τ = 0, we get: Letting $\gamma_0''$ depend on $\mu^{-1}$, we get (26). Taking Eqs. (27) and (28) into consideration, we obtain $M_3$ and $\beta_3$ as follows: The independent conditions for periodicity are: where $L_1(\omega)$, $N_1(\omega)$ are obtained from $L(\omega)$, $N(\omega)$ by replacing $M_i$ with $(M_i + \beta_i)$, i = 1, 2, 3, to get: where: At the zeroth approximation in the power series in $1/\mu$, Eq. (30) gives: Since the z-axis is directed along the major or the minor axis of the ellipsoid of inertia of the body, we get $W_0(\omega) > 0$ for all ω under consideration.
Assume that: Using (30), we obtain $\beta_1$, $\beta_2$ as power series expansions with powers less than $\mu^{-2}$. Then, for rational values of the natural frequency ω not equal to 1, 2, 1/2, 3, 1/3, …, we obtain the required periodic solutions and the correction of the period $\alpha(\mu^{-1})$ as: The solutions (35) and (36) can be considered a generalization of the corresponding problem in the gravity field studied in previous works [10] (when k = 0); the deviations between them are given by:
Geometric interpretation of the motion
The geometric interpretation of the motion of the body at any instant of time, in terms of the Euler angles θ, ψ, φ, is given by [11]: where:
Numerical solutions
In this section, we use a computer program to evaluate the obtained solutions (19) and their derivatives over the time interval t ∈ [0, 300]. In parallel, we use the fourth-order Runge-Kutta method [12], implemented in another program, to obtain numerical solutions of the autonomous system (14). Finally, we compare both solutions to check the accuracy of the method of solution. The results are collected in Tables 1 and 2, from which we deduce that the numerical solutions are in agreement with the analytical ones, which proves the accuracy of the considered methods.
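For reference, a compact sketch of the integration scheme follows. Since system (14) is not reproduced in this excerpt, the right-hand side below is a placeholder quasilinear system of two degrees of freedom (frequencies 1 and ω, weakly coupled through 1/µ); only the classical fourth-order Runge-Kutta stepping itself reflects the method cited in the text.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, y0, t_end=300.0, h=0.01):
    """Integrate over t in [0, t_end], the interval used in the text."""
    ts = np.arange(0.0, t_end + h, h)
    ys = np.empty((len(ts), len(y0)))
    ys[0] = y0
    for i in range(1, len(ts)):
        ys[i] = rk4_step(f, ts[i - 1], ys[i - 1], h)
    return ts, ys

# Placeholder system (NOT system (14), which is not reproduced here): two
# oscillators with frequencies 1 and omega, coupled through the large
# parameter mu, purely to illustrate the scheme.
mu, omega = 50.0, 2.5
def f(t, y):
    p, p_dot, g, g_dot = y
    return np.array([p_dot, -p + g / mu, g_dot, -omega**2 * g + p / mu])

ts, ys = integrate(f, np.array([0.5, 0.0, 0.1, 0.0]))
print("state at t = 300:", np.round(ys[-1], 4))
```

Halving h and checking that the final state is unchanged to the displayed precision is a quick way to validate the step size before comparing against the analytical solutions.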
Conclusions
The problem of the motion of a rigid body about a fixed point has been studied in many works [13][14][15][16][17][18], in both the uniform and gravity fields. We study the problem in the case of a right angle of nutation θ_0, when the center of mass does not necessarily coincide with the fixed point. The equations of motion of the problem are obtained and reduced to a quasilinear autonomous system, which is solved by assuming a large parameter achieved from an angular velocity component tending to zero. The obtained solutions are evaluated through computer programs over a bounded interval of time, and the autonomous system is also treated with the Runge-Kutta method over the same interval to obtain the numerical solutions of the motion. The two sets of solutions are in full agreement, which proves the accuracy of both the numerical and the analytical techniques used in solving the problem. For the geometric interpretation, we note the following: The precession angle ψ. From (38), when µ → ∞, we deduce that the precession angle ψ is sufficiently large because r_0 is sufficiently small; that is, we obtain a case of large precession.
The nutation angle θ
We obtain a case of steady regular precession: θ = θ_0.
The pure rotation angle φ
A case of large pure rotation is obtained, which depends on 1/r_0 in the form: The large parameter technique used here is considered the only one suitable for this problem in the domain where r_o tends to zero. The Poincaré-Lindstedt method and the Krylov-Bogoliubov-Mitropolsky method fail to solve this problem because they depend on achieving a small parameter in the domain where r_0 tends to infinity. We conducted a comparison of the results of this manuscript with those of previous work: the results obtained in [19] deal with the disk problem, which satisfies symmetry of the moments of inertia about two principal axes of the ellipsoid of inertia, whereas our results treat the general rigid body problem at a limiting value of the Euler angle θ_o ≈ π/2. The advantage of the technique used here [20] is that it relies on a large parameter µ → ∞. The obtained solutions are checked using two programs to assert their accuracy through Tables 1 and 2. The main results of our work are the analytical solutions obtained in Eq. (35), represented through computerized digital data in Table 1. The secondary results prove the validity of these solutions through the Runge-Kutta method in
"Physics"
] |
Digital Transformation: Inevitable Change or Sizable Opportunity? The Strategic Role of HR Management in Industry 4.0
Background: The impact of technologies on workers has been a recurring theme in occupational health psychology. In particular, the sudden digital transformation of the last two decades, accelerated by the COVID-19 pandemic, has stressed the urgency of investigating new ways of working that are characterized by flexibility and a constant increase in autonomy. In this perspective, this study aims to investigate the state of the art of the innovation process in Italian factories, explore whether and how digitalization can be seen as an opportunity, and imagine a new way of working characterized by adaptability, resilience, and openness to change. Methods: Thirty in-depth interviews with Italian experts in HR management were collected and analyzed using a mixed-methods approach. Results: The findings underline the Italian HR experts' perceptions of the risks associated with the rapid changes required by technological progress in terms of workers' wellbeing and satisfaction, and suggest how important it is that organizations rapidly set up learning and training programs to guide workers in the acquisition of the new skills required by Industry 4.0. Conclusions: Future workplaces will be characterized by extreme versatility, which requires workers to increasingly have both technical and soft skills as well as the ability to collaborate and build functional relationships.
Introduction
The impact of technologies on workers has been a recurring theme in occupational health psychology. However, the sudden digital transformation of the last two decades has stressed the urgency to investigate how these new ways of working, characterized by flexibility and a constant increase in autonomy, can accelerate processes and change the traditional way of work. This transformation has been influenced and strongly accelerated by one of the greatest black swans of this century: the COVID-19 pandemic. In this critical scenario, digitalization has represented one of the most important allies against COVID-19. In fact, to contain and proactively react to the spread of the pandemic, many countries not only implemented social distancing (Galanti et al. 2022; Scheid et al. 2020) but also encouraged organizations to adopt remote work practices (Donati et al. 2021). In Italy, where this study took place, before COVID-19 only 8% of the total workforce practiced remote working (Caronia 2021); during the first wave of the pandemic, this practice involved about one-third of Italian workers (INAP 2020). The pandemic, therefore, created the conditions to reinforce and accelerate digitalization, which offered new technologies to manage more flexible, automated, and interconnected work (Molino et al. 2020).
However, the literature is not unanimous about the implications of digitalization for employees' wellbeing and productivity and, consequently, the role of human resource (HR) management in promoting their adaptation to technologies at work. The urgency of using technology-based work arrangements because of the COVID-19 pandemic allowed researchers to gather data during this unprecedented time. Nevertheless, even though the health emergency is now over, remote working and the use of technology at work will likely remain a stable feature of the workplace (Molino et al. 2020). Thus, it is crucial to explore the role of digitalization at work after the pandemic by taking into account the representations of HR and innovation managers as well as academic experts in the field. This aim is particularly meaningful for organizational health psychologists, as technology shapes the perception of work experiences (e.g., Christensen et al. 2020; Ferrara et al. 2022) and, consequently, poses specific challenges to HR managers, for example in terms of employee engagement (e.g., Gigauri 2020; De-la-Calle-Durán and Rodríguez-Sánchez 2021; Galanti 2021). The following sections aim to set out, albeit not exhaustively, the different viewpoints emerging in the literature on the effects of digitalization on employers and employees.
Literature Review
In the last two decades, we have witnessed one of the most radical changes in how we live, work, communicate, transmit, and search for information: the advent of Industry 4.0. It is often described as the Fourth Industrial Revolution (FIR) (Schwab 2016), consisting of the implementation of cloud and mobile computing, big data and machine learning, sensors and intelligent manufacturing, and advanced robotics (Johansson et al. 2017).
After the first revolution of the 18th century and the discovery of electric motors in the 19th century, the introduction of electronic and information technologies in industrial systems set the basis for Industry 4.0. The reason underlying this revolution is found in two keywords: efficiency and resilience. The first allows organizations to better satisfy ever-changing demands; the second allows them to be increasingly adaptable and responsive in the face of sudden changes.
Conceptualized in Germany in 2011 (Kagermann et al. 2011), this vision rapidly spread to other industrialized countries and has become a non-negligible asset for industries that want to compete and try to improve productivity and reduce costs (Badri et al. 2018). Its paradigm consists of three dimensions: horizontal integration between value creation networks, end-to-end engineering in the product life cycle and connectivity, and vertical integration in manufacturing systems. This revolution has resulted in a new way of production, characterized by low-cost, higher-quality products and services, fewer errors, short production times, and flexible production systems able to respond quickly to customer requests.
The existing literature on the FIR has focused mainly on IT and innovations, application fields, and new opportunities and challenges (Lasi et al. 2014; Vogel-Heuser and Hess 2016; Liao et al. 2017). However, the psychological aspects associated with the FIR appear less explored. Thus, the present paper contributes to the literature by offering the point of view of work and organizational psychology (WOP) in order to expand the knowledge of this transformation and shed light on its implications in terms of human resource management.
Several researchers have compared Industry 4.0 to a flat organization with more organizational innovation, learning, knowledge, human-machine interaction, and, especially, a more human-centered view of new technologies. In this sense, digital innovation is seen as a core task for the success of industrial production in the future (Dombrowski and Wagner 2014; Lee et al. 2014). Some authors also argue that the transition to Industry 4.0 provides great opportunities for sustainable manufacturing (Stock and Seliger 2016), underlining how much this change can produce resilience in terms of the transformation process and advancements in knowledge. Many studies, in fact, have shown the positive impact of Industry 4.0 in terms of the improvement of production processes and the reduction of energy and natural resource consumption (Margherita and Braccini 2020; Shahbaz et al. 2012; Strange and Zucchella 2017).
However, while most industrial companies are aware of the potential benefits of achieving such a vision and are investing in Industry 4.0 capabilities and technologies, the majority are still in a transitioning phase, experimenting with and piloting standalone solutions and working on establishing a digital foundation (Kadir and Broberg 2020).
Several authors have questioned the reasons for such latency, underlining the presence of several psychological barriers to adopting Industry 4.0 technologies in the manufacturing sector (Stentoft et al. 2020; Kumar et al. 2021; Mahmood et al. 2021). One of the major challenges to implementing the FIR is the fear of job losses (Kamble et al. 2018; Muller 2019), which is negatively related to employee motivation and acceptance of the introduction of Industry 4.0. Another challenge is that a lack of skills threatens the adoption of these technologies (Muller 2019; Schneider 2018); e.g., the employees' perception of not having the necessary skills to perform their new role is an obstacle to this transition. At the same time, organizations can also foster or dissuade employees' acceptance: having a culture that fosters digital innovation is essential to convey a positive idea of the FIR (Raj et al. 2020).
Other studies have suggested that data security could be a significant obstacle to implementation (e.g., Kiel et al. 2017): first, because there is a psychological perception that highly interconnected systems are more exposed to hacker attacks; second, because it is hard to manage data consistency and integrity without specific software know-how.
The global COVID-19 pandemic can be considered a change driver in our productive realities, speeding up several innovation processes that would otherwise have remained virtuous isolated cases. This is, for example, the case for remote work implementation. According to Cotrino et al. (2020), the pandemic has shown the importance of companies embracing agile forms of work, and the introduction of Industry 4.0 made their implementation possible through its components of cloud computing, virtual reality solutions, and the internet of things. However, new ways of working also imply new ways for workers to experience work and new challenges for employers and employees. In this perspective, several recent studies have underlined both the positive and negative potentials of new technologies for working conditions and worker wellbeing (e.g., McFadden et al. 2021; Chang et al. 2021). From a positive perspective, ICT and digitalization seem to be beneficial in terms of satisfaction in teamwork-related contexts (Meske et al. 2020), flexibility, and employee control over the time and place of their work. Moreover, a study by Kraan et al. (2014) showed that working with technology increases the need for job autonomy and worker control. Similarly, recent studies conducted during the first wave of COVID-19 underlined the positive role played by individual and organizational resources, such as goal setting, self-monitoring, and autonomy, in predicting satisfaction and wellbeing while teleworking (Wang et al. 2020; Miron et al. 2021).
Nevertheless, working away from traditional workspaces means being able to work anytime and anywhere and always being connected. Therefore, if this flexibility represents a benefit in terms of autonomy for some workers, for others it could be seen as pressure (Barber and Santuzzi 2015) and an invasion of one's personal life. As a result, employees may seem to have no private sphere left that allows them to unwind or recover from the workday (Chen and Karahanna 2018). In recent years, the term "technostress" (Brod 1982, 1984) has spread rapidly to denote one of the darker sides of new technologies. It has also been defined as "an inability to cope with the demands of organizational computer usage" (Tarafdar et al. 2010, p. 304) and consists of five dimensions: techno-overload, which forces the employee to work faster; techno-invasion, which invades personal life; techno-complexity, related to feelings of incompetence; techno-insecurity, due to the rapid changes of ICTs; and techno-uncertainty, due to unpredictable changes. The effects of technostress are anxiety, fatigue, skepticism, and inefficacy in using ICT (Cazan and Maican 2016; Schaufeli and Salanova 2007; Karsten et al. 2012). For these reasons, some countries have begun to safeguard the right to disconnect (Schlachter et al. 2018; Hesselberth 2018) by pushing organizations to clarify working times and ways of working. Another dark side of new technologies is techno-addiction, which implies excessive and compulsive work with ICT (Salanova et al. 2013) and is associated with lower levels of wellbeing (Huang 2010). Moreover, as mentioned above, new technologies also require new types of knowledge and skills for which workers may find themselves unprepared and, sometimes, unable to learn quickly. A recent study underlined how new ways of working, such as remote work, can overturn established beliefs by suggesting that traditionally positive elements, such as a good relationship with one's superior, risk becoming detrimental in remote work (Toscano et al. 2022).
However, many studies have investigated moderating factors able to mitigate the impact of new technologies. According to Chen et al. (2009), for example, receiving specific training on a new IT system results in greater satisfaction after implementation. Additionally, including employees in the planning and implementation of new systems seems to play a crucial role in satisfaction and wellbeing (Elfering et al. 2010), showing the importance of promoting participation and ownership among workers.
Therefore, it is clear how complex a phenomenon Industry 4.0 actually is. Its implications can, in fact, be traced to three levels: the macro-level, the meso-level, and the micro-level.
From a macro-level perspective, we can investigate the effects of digitalization on future employment. At this level, the literature is again split into two points of view. The positive viewpoint states that digital innovation may be the driving force behind employment growth in the future, leading to the emergence of new job roles thanks to the cooperation between humans and machines (Evangelista et al. 2014). The negative viewpoint, instead, emphasizes the risks of unemployment due to automation (Dachs 2018; Osborne and Hammoud 2017), with several implications for workers' wellbeing (Herbig et al. 2013).
At the meso-level, we can examine how much the organizations are investing in new technologies to ensure more efficacy and productivity and what organizational actions are needed to guarantee a positive transition toward digital innovation (i.e., the adaptation of a method of risk assessment or communication) (Nielsen et al. 2010).
Finally, at the micro-level, the focus shifts to individuals, underlining the implications of human-robot interaction, on-screen control activities, and the monitoring of work performance. Regarding the last aspect, several studies have underlined the dichotomous consequences of monitoring systems. If, on the one hand, they simplify employees' activities (Cascio and Montealegre 2016), on the other hand, monitoring can lead to high levels of stress and, in extreme cases, burnout. According to the job demands-resources model (Schaufeli and Bakker 2004; Bakker and Demerouti 2017), high demands with low control and little autonomy lay the foundation for negative working conditions, especially if the monitoring is perceived as unclear or non-transparent by the employees (Cascio and Montealegre 2016). On the contrary, a supportive culture would seem to represent an element that could foster the acceptance of digital surveillance (Spitzmüller and Stanton 2006).
Nevertheless, there are several countries where a real drive for digitalization has come as a result of the health emergency caused by COVID-19. This is also the case in Italy, where the present study was conducted. Before the pandemic, very few Italian companies had begun to experience the great possibilities offered by digitalization, and, for many of them, it was not a choice but the only chance to survive and guarantee products and services (Galanti et al. 2022). Unfortunately, however, very few studies in the literature have focused on this sensitive theme, namely, the consequences of forced digitalization processes at the organizational and individual levels.
Aim of the Study
Based on these premises, this exploratory study aims to investigate how different experts in the field have experienced and interpreted their role during the digital transformation process, as well as the implications for employees and organizations in general. More specifically, the study aims to explore whether and how digitalization can be seen as an opportunity to look beyond the socio-economic crisis provoked by COVID-19 and to imagine a new way of working characterized by adaptability, resilience, and openness to change. Furthermore, it aims to underline to what extent human resource management is implicated in this transformation process and what role organizational psychology could play in fostering the transition to Industry 4.0. For these reasons, this study adopts a bottom-up approach, intercepting HR managers, innovation managers, and academic experts to explore the multidimensional phenomenon of digitalization in the era of Industry 4.0.
Design
This exploratory study adopted a triangulation of methods to guarantee scientific and methodological rigor. Data were collected through the qualitative methodology of in-depth interviews and subsequently analyzed through both qualitative and quantitative methods to preserve the heuristic power of a qualitative level and the rigor of quantitative methods. This choice can also be explained by the researchers' intention to adhere to the paradigm of methodological appropriateness (Patton 1990), according to which a researcher should choose a method of collecting and analyzing data consistent with the research object rather than with personal competencies.
In this perspective, the in-depth interview is a technique designed to elicit a vivid picture of the participant's perspective on the research topic. During in-depth interviews, the respondent is considered the expert and the interviewer the student. The researchers' interviewing techniques were motivated by the desire to learn everything the participant could share about the research topic, even without any hypotheses to be verified. The conversation was structured around three major themes: (1) the strategic role of HR management in the promotion of transformative resilience and innovation; (2) the risks associated with the rapid changes required by technological progress; and (3) the transformation of work between old and new job skills.
Participants and Procedure
The sample consists of 30 Italian experts in human resource management (primarily HR managers and academic researchers), of whom 21 are men and 9 are women, with an average of 17.5 years of experience. Table 1 shows the socio-demographic characteristics of the sample. To maximize the heterogeneity of professional realities and to obtain a broader and more exhaustive understanding of the different points of view, the participants come from different Italian regions (center, south, and north) and from different backgrounds (small- and medium-sized enterprises).
Participant recruitment was conducted via email, with the following inclusion criteria as a reference: belonging to a medium- or small-sized organization; having at least five years of experience in human resource management, whether practical/field or academic/research experience; and playing a role in processes of innovation and organizational change.
Data were collected with the qualitative methodology of the in-depth interview in order to adopt a bottom-up procedure that was able to explore the management of human capital and the digital innovation strategies and procedures adopted in Italian organizations as well as the implications of the digitalization spearheaded by the COVID-19 pandemic.
The in-depth interviews took place virtually in Italy via the Google Meet platform between October 2021 and April 2022. Each interview lasted about an hour and was recorded and then transcribed verbatim. Using a semi-structured interview, various questions related to the research objectives were asked of the participants. For example, the following questions illustrate the theme covered by this study: "In your opinion, starting from the forced digitalization during the COVID-19 pandemic, what has changed in the way of working? And what will remain? What do you think about new challenges/risks to be faced? How could you cope with these challenges? Do you think there is a need for new professional figures, or how should existing figures change?". During the interview, the interviewer used probing questions to clarify more ambiguous answers, asking participants to give examples to support their affirmations.
Analysis
In order to provide the research with a thorough structure and quality, the Standard for Reporting Qualitative Research was followed (O'Brien et al. 2014).
Regarding the analysis of the interviews' content, a theoretical premise seems necessary. More specifically, the theoretical approach of narrative analysis applied to organizational contexts (Manuti and Mininni 2013) was chosen, according to which organizations live by discourses. According to this theory, researchers should be ready to disentangle the collective narratives and discourses shaped through and by the shared practices of accounting (Cortini 2014).
Interviews were audio-taped and transcribed. Then, three different researchers read the transcripts several times to obtain an overall impression of the data collected and to ensure that the transcripts accurately reflected the arguments held by the participants before starting the coding stage. Next, data were analyzed using different techniques, such as discourse analysis and content analysis, and were run through T-LAB software (analysis of word occurrence and co-word mapping, analysis of Markovian sequences). For the qualitative analysis, a thematic analysis and a classic analysis of discourse were conducted (Mininni and Anolli 2002), consisting mainly of an analysis of metaphors and linguistic agency.
Each sentence, paragraph, or passage representing an idea named by a participant was considered a unit of meaning; the smallest unit of meaning considered was a sentence containing at least one verb and one subject. The coding of the data was then subjected to a validation process of inter-rater agreement to ensure that the coded units of meaning represented the data. Inter-rater validation is defined by the consistency with which different analysts attribute, according to the same coding scheme, the same code to a randomly given segment (Mukamurera et al. 2022). For this reason, two researchers trained in qualitative analysis coded the interviews to identify and categorize the participants' metaphors, with agreement subsequently calculated using Cohen's kappa (0.85). Then, another researcher coded a sample of the full interviews to resolve some ambiguities in the definitions of the codes, which made it possible to specify certain elements related to the division of the coded segments, i.e., the defined units of meaning.
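For readers unfamiliar with the index, a minimal Python sketch of the Cohen's kappa computation follows; the toy labels are invented for demonstration and are not the study's actual codes. Kappa corrects the observed agreement between two coders for the agreement expected by chance.

```python
import numpy as np

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' category assignments over the same segments."""
    codes_a, codes_b = np.asarray(codes_a), np.asarray(codes_b)
    cats = np.union1d(codes_a, codes_b)
    p_o = np.mean(codes_a == codes_b)                         # observed agreement
    p_e = sum(np.mean(codes_a == c) * np.mean(codes_b == c)   # chance agreement
              for c in cats)
    return (p_o - p_e) / (1.0 - p_e)

# Toy example: two coders labeling ten interview segments with three codes
a = ["digital", "skills", "human", "digital", "skills",
     "human", "digital", "human", "skills", "digital"]
b = ["digital", "skills", "human", "digital", "human",
     "human", "digital", "human", "skills", "digital"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```

Values above roughly 0.80, such as the 0.85 reported here, are conventionally read as strong inter-rater agreement.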
We chose to use T-LAB software to better adhere to the paradigm that inspired us while guaranteeing both a qualitative analysis, relating to the analysis of the speech, and a quantitative analysis, relating to the analysis of the content (Cortini 2014; Cortini and Tria 2014). Despite the typically qualitative nature of our data, which consists of transcripts of semi-structured interviews and, therefore, textual material, the T-LAB software allowed us to carry out both types of analysis, qualitative and quantitative, by triangulating the analysis methods. According to the SRQR (O'Brien et al. 2014), triangulation can enhance the trustworthiness and credibility of data analysis. The quantitative part, specifically, allowed us to identify the repetitions of words and the most frequent associations within the text.
Qualitative Results
The interviews were analyzed by triangulating two qualitative techniques: content analysis and discourse analysis. Discourse analysis is a qualitative, interpretative, and constructionist methodology that allows researchers to explore how participants actively construct categories or clusters regarding the themes investigated; it considers metaphors and linguistic agency. Metaphors are considered a tool of thought conceptualization that can broaden the vision of the research object, creating connections with other themes. Linguistic agency, instead, refers to the use of the lexical and morphological aspects of the linguistic system to present, or not present, oneself as the agent responsible. In this perspective, the content of the interviews was faithfully transcribed, and all linguistic metaphors used to talk about the topic were identified, interpreted, and explained. The idea was to generalize from them the conceptual metaphors they exemplified and to use the results to suggest understandings or thought patterns that construct or constrain people's beliefs and actions.
From the analysis of the interviews, the three most interesting clusters appeared to be: (1) digitalization experience and consequences, (2) the need for competence and new professional figures, and (3) the human factor in digitalization.
Regarding the first theme, the results underlined the heterogeneity of the participants' digitalization experience and showed a misalignment with the traditional concept of digitization, which was mainly expressed in the adoption of agile forms of work, such as remote work, and in the support offered by information technology in the performance of work activities. In this limited perspective, digitalization is seen as a positive tool that can be used to guarantee efficiency and immediacy. It is also considered a "COVID heritage to be capitalized" (Respondent n. 18). "It is absolutely negative", said one respondent, "that many companies are returning to pre-COVID ways of working", referring to the fact that digitalization can be seen as an opportunity for organizations and workers to change and develop. It is interesting to note respondents' difficulty in separating the phenomenon of digitization from its pandemic consequences, ending up perceiving its merits and limitations only within this emergency framework. On the other hand, the limits of this digital innovation process emerge clearly, first of all in the realization that it was too sudden a change, in the face of which both companies and workers were unprepared. "Forced digitalization has almost been imposed even on people who were furthest from this concept" (Respondent n. 5) and "If you have people who until yesterday didn't use that kind of program, you can't expect them to learn right away overnight, because there is also a kind of defense of one's own little garden" (Respondent n. 9) are only two of several expressions that clearly show workers' disorientation in the face of such an imposed change. This resistance to change emerges at a twofold level: the individual level, understood as workers' difficulty in adapting to change and taking a role in innovative processes, and the organizational level, shown in managers' concern over career management and people development in a digital context.
The theme of disorientation correlates with the second key theme: the need for competence and new professional figures. Several respondents, in fact, underlined how workers found themselves unprepared to adapt to the new work environment and new ways of working. For example, referring to the experience of working remotely, one respondent said: "communicating through a system rather than live for some people is a change that requires different skills because you miss out on some of the communication". Another important element that emerged from the interviews is the inhomogeneity of digitalization in Italian factories. "There are some companies that have always worked on digitalization and technological transformation, so they have been more ready, while others have had to reinvent themselves", said one respondent, introducing a key element related to digitalization and the pandemic experience, namely, the ability of both organizations and workers to be flexible and open to change. This aspect is strictly related to resilience, as underlined by several respondents through some interesting metaphors. The first of them associates resilience with "a tree branch able to bend without breaking" (Respondent n. 20) and also with the "Japanese technique of Kintsugi, through which it is possible to repair with gold", symbolizing a new way of looking at resilience, not as the ability to return to a pre-existing situation but rather as recognizing that "you are now something different from yesterday, something new and with more value" (Respondent n. 20). In a similar way, Respondent n. 19 spoke about the recent concept of anti-fragility (Taleb 2012; Tseitlin 2013), that is, "the person who with respect to an external shock, to an event, a stimulus that forces you to change, you not only readjust but you improve, you change for something better than you were before, so you strive to improve yourself with respect to the external event".
Another theme extrapolated from the interviews concerns the implications of digitalization for workers. Consistent with the literature, a double scenario opens up. On the one hand, participants seem to agree that digitalization has had positive consequences in terms of more flexibility and job autonomy. Some of them emphasize the increase in efficiency, especially in communication processes. "The positive aspect is that the relationship has become more effective, more immediate", said Respondent n. 8, adding, "today, even though we have returned (in presence), we continue to have meetings remotely because they are more effective". On the other hand, several risks and downsides emerge, first of all the risk of alienation and loss of motivation at work. "We saw the alienation especially during the lockdown: people wanted to go back to the office afterwards, or at any rate they wanted to have a normalcy that was not just that of their home, which risked a bit of a cave effect, where a person is so well off that he or she never leaves the house again" (Respondent n. 5). Another negative aspect of digitalization is related to the progressive loss of relationships and of opportunities for constructive discussion, which are key elements for professional and personal growth. "One of the important elements of work is the relationship, because we all go to work for the salary, because there is a social value in what we do, for a social identity, etc., but we also go to work because there we meet the people with whom we talk, with whom we go for coffee, with whom we weave relationships, with whom we also have conflicts. Smart-working cuts you off from this piece or makes it virtual and therefore changes it dramatically" (Respondent n. 12) and "The best creative things I did were talking in the lunchroom with my colleagues" (Respondent n. 13) are two examples of the negative drift of the intensive use of IT systems to "remotize" work life.
Some participants then underlined the costs in terms of work-life balance and the techno-invasion of their private boundaries. Regarding this aspect, one participant referred to the neologism "onlife" (Floridi 2009, 2015), saying, "it is as if to say that we are always online and there is no longer a distinction between offline and online digital, but that we should get used to this kind of continuum whereby we manage our digital and non-digital lives simultaneously" (Respondent n. 19), and another remembered "meetings where we would see the baby climbing behind a colleague's back in a dangerous position" (Respondent n. 27). An encroachment of work into private life and of private life into work will have inevitable consequences in terms of job performance, but especially in terms of workers' satisfaction and wellbeing.
Quantitative Results
The content of the interviews was also analyzed with a quantitative methodology using the statistical software T-LAB, which is able to return a mapping of the contents characterizing the interviews. Before exploring the details of the analysis, it is important to remark that we prepared our text for analysis through lemmatization, which reorganizes the T-LAB database, creating different tables used to analyze the data; in particular, the idea is that words that have the same root meaning are clustered together, such as "work" and "working". Such an operation was performed only for the words (lemmas or categories) considered interesting for the subsequent analyses, such as "innovation", "digitalization", "industry", etc. The authors carried out an automatic analysis of the content, which starts from the idea that the more frequently specific language families are referred to (analysis of word occurrences), the more active these concepts are in the respondent's mind. In other words, when people often refer to the same concepts, it is because they are important to them.
Analysis of Occurrence and Co-Occurrence
The first thing T-LAB does with textual material is to analyze word occurrences and co-occurrences. The software output shows the most cited word in the middle and, all around it, the words that co-occur most with it, according to an association index: the cosine coefficient. In graphical terms, the more two words co-occur, the closer they are in the dimensional space (Cortini and Tria 2014). It is always possible to "dialog" with the software, asking it to put a specific word of interest in the middle so that the user has a graphical representation of its associations. In this sense, T-LAB can assist the user along both an automatic analysis path and a customized one. Concerning our study, it was remarkable that "person" and "work" were the most cited words. By clicking on the words associated with the central term, it is possible to obtain the phrase where the two words co-occur. This cue is particularly useful in mixed methods because, with just a "click", the original textual material is obtained, which can then be analyzed by discourse analysis. We checked occurrences and co-occurrences, setting a frequency threshold of four. As Figure 1 shows, the value association of the thematic elements is graphically represented in terms of distance from the keyword in the center.
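As an illustration of the association index, here is a minimal Python sketch (not T-LAB's implementation) of the cosine coefficient on a toy presence/absence matrix; the corpus, vocabulary, and resulting values are invented for demonstration. With binary data, the coefficient reduces to the number of segments containing both words, normalized by the geometric mean of their individual frequencies.

```python
import numpy as np

def cosine_cooccurrence(X, i, j):
    """Cosine association between words i and j from a binary presence/absence
    matrix X (rows = text segments, cols = words): n_ij / sqrt(n_i * n_j)."""
    n_ij = np.sum(X[:, i] & X[:, j])        # segments containing both words
    n_i, n_j = X[:, i].sum(), X[:, j].sum()
    return n_ij / np.sqrt(n_i * n_j) if n_i and n_j else 0.0

# Toy corpus: rows are interview segments, columns are lemmas
vocab = ["work", "company", "smart_working", "digitalization", "skills"]
X = np.array([
    [1, 1, 0, 1, 0],
    [1, 0, 1, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1],
], dtype=int)

w = vocab.index("work")
for k, lemma in enumerate(vocab):
    if k != w:
        print(f"work ~ {lemma}: {cosine_cooccurrence(X, w, k):.2f}")
```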
The most cited word is "work" (Figure 1). Firstly, it appeared strongly associated (see Table 2) with several lemmas related to the new typologies of work most widespread in Italy in response to the COVID-19 emergency. However, the presence of a strong association between the lemmas "work" and "company" (cosine coefficient 0.35) would seem to underline how, for participants, the idea of work performed in a typical working environment, rather than at home, for example, is still vivid. The presence of expressions such as work "from home" (cosine coefficient 0.26), "smart-working" (cosine coefficient 0.25), a typically Italian way to refer to agile forms of work, and also "remote" work (cosine coefficient 0.19) reflects the respondents' confusion and disorientation about this way of working as well as the absence of clear regulations to refer to. Another theme that emerged from the associations is related to the digitalization processes (cosine coefficient 0.18) and the related need to develop new skills (cosine coefficient 0.19) and professional figures (cosine coefficient 0.18) who are able to manage this unavoidable transition better. Finally, it seems interesting to dwell on the word "before" (cosine coefficient 0.19), an emblem of how, in order to innovate, a careful analysis of the organizational antecedents cannot be ignored.
The second most cited word was "people" (Figure 2), and it appeared strongly associated (see Table 3) with the words "work" (cosine coefficient 0.22) and "management" (cosine coefficient 0.13), clearly underlining the focus of this explorative research, namely, the investigation of the implications of industrial digital innovation processes in terms of human resource management.
The results seem to suggest three major critical points for HRM in a digitalized context: firstly, the ability to support the integration between digital and physical activities (in this sense, several respondents drew attention to the risk of losing specificity in the run-up to digitalization); secondly, the need to review performance measurement systems and incentive and reward procedures; and finally, the urgency of fostering the idea of a workplace understood not as a physical place but rather as a dimension in which organizational development and growth depend on the ability to create working relationships independent of physical presence or proximity.
In a similar way, the association with the verb "to change" (cosine coefficient 0.15) and with "challenge" (cosine coefficient 0.28) seems to validate the hypothesis that a change of thinking is necessary to see digital innovation not only as a challenge with an inevitable price to pay but also as an opportunity for organizational improvement and growth. In line with this, even the word "responsible" (cosine coefficient 0.12) underlines the opportunities offered by new technologies in the era of Industry 4.0, which can promote positive changes if properly managed. Finally, the theme of work-life balance clearly emerged from the interviewees' frequent use of the words "home" (cosine coefficient 0.11) and "life" (cosine coefficient 0.12), confirming that any transformational process cannot be separated from a careful analysis of costs, especially in terms of the human factor.
Next, a personalized analysis was conducted, asking the software to map the co-occurrences with the stimulus words "digitalization" (Figure 3) and "professional_figures" (Figure 4). This choice is explained by the research interest in investigating the relationship between digitalization processes and the human factor and in underlining if and which new skills and professional figures are needed to promote this change. Clear examples of this are statements such as the following: "I will instead have to develop new soft skills that are increasingly adaptive to what is our reality" (Respondent n. 30) and "this digitalization in small- and medium-sized enterprises has expressed the need to rebalance jobs and skills, with the need to revise production and work processes in many cases as well" (Respondent n. 24). It is interesting to note (Table 4) that "digitalization" is related to words such as "process" (cosine coefficient 0.19) and "work" (cosine coefficient 0.18), indicating the dynamic nature of the innovation. If, on one hand, digitalization requires a clear distinction between a "before" (cosine coefficient 0.17) and a "new" (cosine coefficient 0.13), understood as the ability to come up with new and divergent ideas, on the other, it imposes the possession of specific skills to "manage" it (cosine coefficient 0.10) effectively. Finally, in Figure 4 and Table 5, we can see the associations with the word "professions". In the current context, which requires organizations to continuously "evolve" (cosine coefficient 0.18), digitalization (cosine coefficient 0.16) could be a viable opportunity, provided there are professionals with context-specific "skills" (cosine coefficient 0.18) capable of dealing with this challenge (risks included).
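The association measure used throughout this section, the cosine coefficient of co-occurrence, can be computed from the presence of two lemmas across elementary contexts. T-LAB's exact preprocessing (lemmatisation and segmentation into elementary contexts) is not reproduced here; the following is a minimal sketch of the coefficient itself, using a hypothetical toy corpus.

```python
import numpy as np

def cosine_cooccurrence(contexts, lemma_a, lemma_b):
    """Cosine coefficient of co-occurrence between two lemmas.

    contexts: a list of sets, one set of lemmas per elementary context
    (e.g. a sentence or paragraph of an interview transcript).
    Returns n_ab / sqrt(n_a * n_b), where n_x counts the contexts
    containing lemma x and n_ab those containing both lemmas.
    """
    a = np.array([lemma_a in c for c in contexts], dtype=float)
    b = np.array([lemma_b in c for c in contexts], dtype=float)
    denom = np.sqrt(a.sum() * b.sum())
    return float(a @ b / denom) if denom else 0.0

# Hypothetical toy corpus: five elementary contexts.
contexts = [
    {"work", "company"}, {"work", "home"}, {"work", "smart-working"},
    {"people", "work", "management"}, {"people", "challenge"},
]
print(round(cosine_cooccurrence(contexts, "work", "company"), 2))  # 0.5
```

A coefficient close to 1 means the two lemmas almost always appear in the same contexts; values such as the 0.35 reported for "work" and "company" indicate a moderate but consistent association.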
Discussion
The present study underlines the complexity of digitalization and innovation processes in organizations. In line with other studies conducted on the implementation of Industry 4.0, we found a multidimensional phenomenon whose implications cannot be reduced to a single level. Therefore, adopting a bottom-up approach, we wanted to investigate both individual- and organizational-level perspectives, exploring the different viewpoints HR managers, innovation managers, and academic experts had on digitalization processes in Italian small- and medium-sized enterprises.
Overall, participants showed mixed opinions and viewpoints on only one of the three main themes emerging from the analyses. More specifically, homogeneous positions could be individuated to describe the digitalization experience (Theme 1) and the feeling of being unprepared (Theme 2) for the shift towards more digitalized work arrangements. The overlap between digitalization processes and remote working arrangements due to the pandemic was strictly connected to the feeling of low competencies and skills to manage the digitalization process. Consistently, participants reported the perception that organizational sectors and familiarity with digital products and techniques influenced the digitalization experiences of several organizations during the pandemic. These points are confirmed by the T-LAB results (see, for example, Table 2), where the terms "from home" and "smart-working" emerged among the most frequently linked to "work". While it is safe to affirm that COVID-19 acted as a digitalization accelerator within organizations, digitalization processes go beyond the shift to remote working conditions, including, for example, paperless processes and offices and automated digital systems to create and share documents, organize and pursue tasks, and meet colleagues (see Amankwah-Amoah et al. 2021). A valuable implication for the feeling of unpreparedness that arose from the interviews is the spread of training opportunities. According to the literature, new technologies (but, in general, every change) will require new types of knowledge and skills. At the present time, we know little about what competencies will be needed, but it is reasonable to believe that we will be faced with an "augmented employee" (Cantoni and Mangia 2018), namely, an employee who is an expert in data treatment and analysis, supervision, and advanced decision-making. However, when change happens so suddenly, it is difficult to plan specific and useful training, with the risk of leaving workers unprepared. Thus, this study suggests how important it is that organizations rapidly set up learning and training programs to help workers acquire the new skills required by this changing reality (D'Alterio et al. 2019; Sartori et al. 2018).
It is also essential to be conscious that the need for training, expressed by workers, is twofold in nature. On one hand, they stressed a lack of technical skills and knowledge. On the other hand, the need to work on spreading a culture of innovation emerges clearly, where openness to new experiences and flexibility become shared values.
In applicative terms, these results open several future perspectives. Firstly, recruitment and training and development initiatives will need to consider the new skills required by workers and a new cultural mindset to support a collaborative work environment. With respect to talent attraction and retention processes, there are some questions that future studies may try to answer, such as "What will talent consist of in a digitized working world?" or "How will it be possible to recognize it and help it flourish?".
A case study conducted in a French industry (N'Cho 2017) showed how digitalization was used to enhance the talent management process, by identifying the best talent based on the requirements of each project phase and defining the right time and way to develop talent appropriately. Similarly, McIver et al. (2018) showed how HR analytics can predict store performance improvement using online assessment data and in-store interview processes.
In contrast with the uniform viewpoints for the first two themes, opinions on the third theme emerging from the interviews, namely, the human factor in digitalization, suggest two main approaches to human-technology interaction at work. The first describes working out of the office as a way to reach higher autonomy and flexibility; the other underlines experiences of lower integration between personal and organizational life and fewer opportunities to engage with colleagues meaningfully. The polarization likely arises from the overlapping of digitalization at work and remote working experiences. It is quite common, indeed, to find similar representations in reviews and meta-analyses describing the effects of remote working arrangements on employees' health and wellbeing (e.g., Charalampous et al. 2019; Crawford et al. 2011; Juchnowicz and Kinowska 2021). This overlapping, which permeates all the results, calls for a wider approach to digitalization within organizations and a better understanding of digitalization processes during training and development activities.
At the same time, the three themes are deeply connected. According to the literature, our results suggest that it is impossible to think of a digitalization process that does not consider the human factor (Ghislieri et al. 2018; Fernandez and Gallardo-Gallardo 2020). This means questioning the benefits introduced by I4.0 (i.e., agility and work simplification) and also lucidly analyzing the costs and implications for the psychophysical wellbeing of workers, as well as the starting conditions necessary to understand how ready different organizational realities are for this innovation process. So, the first element to be considered is the distinction between a deliberate choice and an inevitable change. In Italy, before the COVID-19 pandemic, there were very few companies that could boast of their digitalization processes, especially if we consider the realities of small- and medium-sized businesses. The need to prevent the spread of COVID-19 and simultaneously guarantee products and services led to a drastic acceleration in the use of IT systems and digitalization. However, urgency hardly goes together with planning, and, in order to be rapidly ready to work, many companies have seen in digitalization the best (in some cases, the only) opportunity, without questioning whether their realities were adequately prepared to cope with such change. This study highlights the difficulties experienced by workers and HR managers in dealing with the rapid transition from traditional to new ways of working.
When accounting for training, the human factor influences how innovative features of work, such as the introduction of digitalized solutions, processes, and tools at work, can be appreciated and welcomed by different employees according to their individual differences (for example, regarding demographic features). In fact, while some papers in the field suggest that new technologies facilitate more flexible, automated, and interconnected work (Molino et al. 2020; Galanti et al. 2022), there is clear evidence that individual differences, for example, the age of workers, may moderate this relation negatively (Iancu and Iancu 2020; Arenas-Gaitán et al. 2019). Indeed, senior workers are more fatigued when learning and using new technologies than younger ones, probably due to their lower adaptability and flexibility. Future studies could investigate this aspect to highlight what actions HR managers can implement to reduce this resistance. Interestingly, a recent study by Fernandez and Gallardo-Gallardo (2020) proposes two competing views of how digitalization affects workers of different ages. The first is that younger persons, more familiar with IT technologies, should be better able to deal with new software than older workers (Fernandez and Gallardo-Gallardo 2020). The second view is that recent generations of software are so simplified that they reduce the specialized knowledge required to use them, leveling, in effect, the gap caused by age differences. Besides these aspects, another interesting issue is the challenge of work and organizational identity, which the literature has proven to be interconnected with age (for example, Avanzi et al. 2012). In other words, we see an urgent call for studies investigating the risks of digitalization and, especially, mass teleworking for organizational identity. Furthermore, it would be interesting to analyze in detail how teleworking, especially for specific age groups, may affect the process of both job socialization and organizational identification.
All these results further stress the need for a radical transformation of HR departments that need to consider how workers will interact with smarter machines.
Last but not least, all these changes call for detailed and specific company policies, as others have previously stressed (Cortini and Fantinelli 2018), to guarantee HR practices that can support both performance and wellbeing.
In conclusion, future workplaces will be characterized by extreme versatility, which requires workers to have increasing technical and soft skills and, first of all, the ability to collaborate and build functional relationships.
Limits and Future Perspective
The study's limitations are the research method and the participants. Because there has been little research on Industry 4.0 in the Italian context, this study is exploratory. The gender distribution is skewed because many of the participants were men. The qualitative approach used in this study is not generalizable and cannot be applied to a larger population. However, this is an explorative study whose preliminary results, even if not entirely representative, indicate the urgency of future research.
The future line of research will be to determine the effectiveness of the measures incentivizing smart and sustainable manufacturing, whether the Italian regions that are most advanced in the adoption of the I4.0 paradigm have shown greater resilience during the crisis after the pandemic, and whether the less prepared regions have started to catch up.Future studies could also explore the existence of differences in the consequences of digital transformations for blue-collar and white-collar workers.
Moreover, it would be interesting to map and compare the supporting measures introduced by different European regions and compare their level of readiness and responsiveness.
Finally, the findings of this study can be used by HR departments to develop new training and learning strategies that incorporate the specialized knowledge required to use IT technologies and interpersonal and communicative skills, which are increasingly necessary in new work scenarios.
1 HR M = human resource manager; HR D = human resource director; IC D = internal communication director; CT D = cultural transformation director.
Table 1 .
Socio-demographic characteristics of the sample.
Table 2 .
Coefficient of cosine and chi2 of co-occurrence with the lemma WORK.
Table 3 .
Coefficient of cosine and chi2 of co-occurrence with the lemma PEOPLE.
Table 4 .
Coefficient of cosine and chi2 of co-occurrence with the lemma "innovation".
Table 5 .
Coefficient of cosine and chi2 of co-occurrence with the lemma "professions".
Construction and tests of the research stand with built-in correctors for the detection of gas outflow from long pipelines
The localization and identification of small gas leaks from a damaged gas pipeline is a very important but at the same time problematic and difficult process to carry out. Quick identification, estimation and precise localization of the leak lead to minimizing the financial and equipment losses resulting from damage. The most commonly used continuous internal methods of leak identification are based on pressure and mass flow measurements at various places in the pipeline. The article presents a description of the construction of a test stand characterized by the correctors built into it and gives examples of the signals obtained from its tests for different values of simulated leaks. The results obtained will be used to develop a new method for identifying and locating leakages from long pipelines. This method is based on the cross power spectral density of measured pipeline standard signals (pressure and mass flow) and the membrane displacement signals delivered from the test equipment (correctors) connected to the system.
Introduction
Gas pipelines are used for decades, and during this period, due to progressive corrosion or mechanical damage, leaks may occur. Gas leaks from long pipelines can cause serious accidents and may damage the equipment; hence, it is important to detect, estimate and locate them as soon as possible.
There are a number of methods to detect the occurrence of a gas leak; generally, they can be divided into non-continuous (e.g. smart pigging, helicopter inspection) and continuous monitoring [1, 2]. Non-continuous methods do not allow for constant observation and assessment of the pipeline's technical condition. Individual inspections are carried out at certain periods of time, which often does not allow for instant leak detection [4, 7]. Continuous methods are used without interruption during the entire operation of the pipeline. They can be divided into external and internal based systems [2, 3, 6]. Externally based systems detect the leaking product outside the pipeline (acoustic systems, fiber optic, dielectric cables, laser sensors). Internally based systems (e.g. pressure analysis, mass balance method), known as computational pipeline monitoring (CPM), use different types of sensors to monitor internal pipeline parameters, which are a basis for different methods of gas leak detection; however, in order to use them, it is necessary to install leak detection systems before the pipeline is put into operation [9, 11, 12, 13, 14]. The article presents the construction and test results of a test stand for simulating gas leaks from long pipelines. The stand is characterized by two correctors attached to the pipeline, which provide an additional diagnostic signal in the form of weak interaction signals [5, 8, 10]. The connection of correctors to the pipeline does not affect the quality of the pipeline and is easy to implement.
Construction of a test stand with built-in correctors to detect gas leaks from the pipeline

Construction of a test stand

The test stand built on the basis of the scheme in Fig. 1 is shown in Fig. 2. The stand was built with about 27 meters of welded PPR pipes with an inner diameter of Ø45 mm, mounted on a frame of aluminium structural profiles. At the beginning of the pipeline there is an air tank (V) with a capacity of 0.025 m3, to which atmospheric air is pumped using a compressor. Between the compressor and the tank there is a compressed air preparation unit for adjusting the pressure to the required working level and for cleaning the air. Along with the tank, the entire system has a capacity of about 0.075 m3. Behind the tank and at the end of the system, the correctors (r1, r2) are mounted. Pressure sensors (p1, pc, p2) and mass flow sensors (m1, m2) have been installed in the pipeline in the order shown in Fig. 1. The taps at the beginning (t1), centre (tc) and end of the pipeline (t2) are used to simulate gas leaks. In order to enable precise control and measurement of the amount of leakage, a solenoid valve and a flow meter are installed behind the tap from which the leak is simulated.
Data acquisition instruments from National Instruments were used to collect data and to control the simulated leak volumes.
Construction of the correctors
At the beginning and the end of the test stand, two additional devices, the correctors (r1 and r2), were built in using tees. Each corrector was built in the form of two chambers separated from each other by a rubber membrane with a mass (a brass sheet) installed in the middle.

Fig. 3. Corrector built into the test stand.
The correctors built into the test stand are to provide an additional diagnostic signal: a corrector is a device that transforms the pressure change that appears in the pipeline as a result of a leakage into a membrane displacement signal.

The test results of the test stand
The test stand tests were carried out at a pressure of about 0.6 MPa. The leakage control solenoid valve was set to obtain different leakages. The research was carried out by simulating leakages from three places in the pipeline (t1, tc and t2). Each measurement lasted 6.5 seconds from the moment of the leak occurrence, and during this time the data were recorded with indices describing their position in the pipeline (1 for the beginning, c for the centre, 2 for the end of the pipeline). In Fig. 4 to Fig. 8, the results of measurements with simulated small leakages are presented. These results confirm that correctors connected to the pipeline can provide new, valuable diagnostic information that could increase the effectiveness of leak identification, estimation and localization.
Summary
The article presents a research stand for simulating gas leakages from long gas pipelines, characterized by two additional pieces of research equipment, the correctors, placed at the inlet and outlet of the pipeline, alongside standard mounted pressure sensors and mass flow sensors.
Attaching correctors to the gas pipeline has the purpose of obtaining additional valuable diagnostic information in the form of weak interaction signals. Unlike standard pressure signals, the signals of weak interactions are more sensitive and more resistant to noise. The use of weak interaction signals can complement the existing mass flow and pressure signals and can be used to identify the location and size of a gas outflow. Signals from the system and the research devices (correctors) will be the basis for the development of a new method for testing the outflow from the gas pipeline, based on the quotient of the spectral power densities of the signals generated by the corrector and the measured signals of pressure and mass flow.
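Since the planned identification method rests on cross power spectral density estimates between the corrector signals and the standard pipeline signals, a minimal sketch of how such an estimate can be obtained from sampled records is given below. The sampling rate, signal names and synthetic waveforms are illustrative assumptions, not values measured on the stand.

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(0)
fs = 1000.0                        # assumed sampling rate [Hz]
t = np.arange(0.0, 6.5, 1.0 / fs)  # 6.5 s record, as in the experiments

# Synthetic stand-ins for two measured signals: pressure p1 and the
# corrector membrane displacement r1, sharing a leak-induced component.
leak = np.sin(2 * np.pi * 12 * t) * np.exp(-t / 3.0)
p1 = leak + 0.3 * rng.standard_normal(t.size)
r1 = 0.5 * leak + 0.1 * rng.standard_normal(t.size)

# Cross power spectral density estimated with Welch's method; peaks in
# |Pxy| mark frequencies where the two signals are coupled.
f, Pxy = csd(p1, r1, fs=fs, nperseg=1024)
print(f[np.argmax(np.abs(Pxy))])   # strongest common component [Hz]
```

In the envisaged method, quotients of such spectral densities for the corrector and standard signals would then be compared across measurement points to estimate the leak location and size.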
AI Data-Driven Personalisation and Disability Inclusion
This study aims to help people working in the field of AI understand some of the unique issues regarding disabled people and examines the relationship between the terms "Personalisation" and "Classification" with regard to disability inclusion. Classification using big data struggles to cope with the individual uniqueness of disabled people, and whereas developers tend to design for the majority, ignoring outliers, designing for edge cases would be a more inclusive approach. Other issues that are discussed in the study include personalising mobile technology accessibility settings with interoperable profiles to allow ubiquitous accessibility; the ethics of using genetic data-driven personalisation to ensure babies are not born with disabilities; the importance of including disabled people in decisions to help understand AI implications; the relationship between localisation and personalisation, as assistive technologies need localising in terms of language as well as culture; the ways in which AI could be used to create personalised symbols for people who find it difficult to communicate in speech or writing; and whether a blind or visually impaired person will be permitted to "drive" an autonomous car. This study concludes by suggesting that the relationship between the terms "Personalisation" and "Classification" with regards to AI and disability inclusion is a unique one, because of the heterogeneity of disability in contrast to the other protected characteristics, and so needs unique solutions.
INTRODUCTION
This study aims to help people working in the field of AI understand some of the issues regarding disabled people who are greatly disadvantaged in society in many ways.
The United Kingdom government states 1 that there are over 11 million people with a limiting long-term illness, impairment, or disability, and the prevalence of disability rises with age (6% of children, 16% of working age adults, and 45% over state pension age). Compared to people who are not disabled, disabled people are substantially more likely to live in poverty, less likely to be employed, three times as likely not to have qualifications, and half as likely to hold a degree level qualification.
Artificial intelligence technologies, such as Seeing AI 2 , are improving in their abilities to identify objects and faces. This application was created by a blind developer, and although such useful technologies are being developed by talented people with a deep knowledge and understanding of the needs of people with visual impairment, most technology developers do not have such a deep knowledge or understanding and do not learn about disability and accessibility on their university courses.
Data-driven personalisation normally implies the use of some sort of AI classification algorithm, and this study examines the relationship between the terms "Personalisation" and "Classification" with regard to disability inclusion. Classification using big data struggles to cope with the individual uniqueness of disabled people 3 , and whereas developers tend to design for the majority so ignoring outliers, designing for edge cases would be a more inclusive approach as these solutions will also work for the majority.
Since AI machine learning classification categorises people into groups and needs big data to do this, it struggles to cope with the individual uniqueness of disabled people. Of all the protected characteristics groups covered by the United Kingdom Equality Act 4 (age, disability, gender reassignment, race, religion or belief, sex, sexual orientation, marriage and civil partnership, and pregnancy and maternity), disability is the most heterogeneous.
This study begins by examining definitions of personalisation and classification and discussing whether "group size" is the main factor.
It then presents two simple common examples (buying clothes and buying a pencil with a name on it) to clarify that "data driven personalisation" in the context of AI is normally taken to mean that the data have not been provided for that explicit purpose by the person. The examples also indicate how diversity (culture and disability) is often not adequately provided for in AI training datasets.
The next section examines some specific issues relating to the use of technologies by disabled people. The example of the difficulty of selecting the optimum accessibility setting on a mobile phone from the near infinite possibilities is described, and a possible solution is presented. The example of an autonomous vehicle is then provided to illustrate some of the ethical issues involved and also how not including disabled people in the training data could have disastrous consequences. Speech recognition is also provided as another example of how the unique requirements of disabled people may not be adequately catered for by standard AI solutions. The question whether localisation is "personalisation" for a cultural group is then discussed and illustrated through the example of the author's work on developing Arabic symbols for Arabic people unable to communicate in speech or writing. The section ends with a brief discussion of the potential of neurosymbolic AI that integrates probabilistic machine learning with structured symbolic AI to help overcome many issues such as small datasets and explainability.
The review and discussion of relevant literature covers a wide range of issues concerning AI and disabled people.
The study finishes with a conclusion section that summarises the study's arguments and identifies some of the remaining challenges.
RELATIONSHIP BETWEEN PERSONALISATION AND CLASSIFICATION
This study will first examine the relationship between the terms "Personalisation" and "Classification." The Cambridge dictionary definitions 5 are as follows:

Personalization 6 : "the process of making something suitable for the needs of a particular person"

Classification 7 : "the act or process of dividing things into groups according to their type"

This raises the issue of whether we can only think of classification as personalisation when there is just one member of a group, or whether classification can be thought of as personalisation for every member of a group, and whether the term personalisation should only be used up to a maximum group size. The range of personalisation could be from a unique group of one, through dividing everyone into many groups, to the extreme of no personalisation, where everyone gets the same and so is in just one group.
Data-driven personalisation also raises the issue of who originally created the data.
If the data used were originally created by the person the data refer to, can this be called "data driven personalization," or, for this to be the case, must the data be inferred from other data?
For example, considering classification and personalisation with regards to clothing, very large group classification could be into two groups based on gender, e.g., blue boy baby outfit/pink girl baby outfit; smaller group classification could be based on color or style or size (e.g., an "off the peg" suit); and personalised clothing could be a unique made to measure suit.
If somebody simply supplied the exact data of the details of color, style, or measurements for a made to measure suit then, although these data have driven the personalisation, I doubt this is what most people would refer to as "data driven personalisation." I would suggest most people would rather think of "Data driven personalisation," for example, suggesting suits based on those you have bought previously; suggesting suits based on purchases of those people who have also bought the suits you have bought previously; or estimating your preferences and measurements from photos of you.
However, for somebody with a physical disability, they may not be able to put on or take off standard clothing independently; may not fit any "off the peg" clothing; and may not fit any standard algorithms based on photos and so could be an "outlier" in any existing clothing related dataset and so not benefit from standard AI data-driven personalisation algorithms.
Let us also use as an example somebody buying a pencil with their name on it. There are various possibilities. They could select a pencil with their name already on it from a shop, where only a limited number of the most popular names can be available. They could have their name printed to order, with the name provided directly by themselves. They could have their name printed with the name provided indirectly (e.g., through data from Facebook if they signed up through Facebook). A company could send an unsolicited promotional free gift of a pencil with a printed name, with the name provided indirectly (e.g., through data obtained from their Facebook postings). Only the indirectly provided names would be considered "data driven personalization." People from a nonnative culture would have a much lower chance of finding their name among the limited number of names available in the shop. A person with a disability might also require a nonstandard shaped pencil to help them be able to write.
The next section examines some specific issues relating to the use of technologies by disabled people.
TECHNOLOGIES AND DISABILITIES
There are many aspects of personalising technologies for a disabled person. They can have different strengths (e.g., visual, auditory, kinesthetic, dexterity, mobility, confidence, processing speed and attention, health, memory, technology skills, motivation, knowledge, and experience). There can be different tasks (e.g., reading and understanding information, writing, organisation and planning, communication, memory and recall, time, money, numeracy, and daily living). They can have access to different resources (e.g., financial, training, peer support, professional support, and technical support). They can be in different environments (e.g., workplace, study, daily life, accessibility constraints, security, and IT policies) and using different tools (text to speech and e-reading, word processing and proofing, graphical mapping and planning, reminders, speech recognition, calculators and mathematics, study support, alarms and environmental controls, wearable technologies, and communication devices).
Technologies can have many personalisation settings to accommodate the individual needs of disabled people, and the example of a mobile phone will be used to illustrate the issue of how the optimum settings can be chosen.
Personalising a Mobile Phone
A disabled person can change the accessibility settings on their phone, but on the iPhone, for example, I have calculated that there are as many unique permutations of accessibility settings as there are atoms in the known Universe, and so, while it would be possible in theory for every person to create a unique personalised setting, it would be practically impossible for somebody to actually try out all the possible permutations of settings. Interoperable accessibility profiles would allow disabled people's preferred settings to work on any system anywhere in the world, but since settings are not interoperable between different manufacturers' devices, a person would have to set up every device they used. Some of these settings may be more important than others to a person (e.g., increasing the rate of speech (when using "text to speech" for speaking out text for people with reading difficulties) by 5% will not have as much effect as changing it by 20%), but having some automated system to make these selections could speed up this personalisation process. For example, where there is a large range of settings, such as speaking rate, the system could adaptively find the chosen setting using comparisons of pairs of settings and measuring just noticeable differences. For example, 5 possible settings of speaking rate from 1 to 5 could involve listening to 10 pairs of settings to compare them all, but an adaptive system could involve listening to and comparing only 3 pairs of rate settings, using the following algorithm (a code sketch follows the walk-through below).
Listen to and compare settings 1 and 5; if there is no preference, then 1 is the final selection, and only 1 pair has been listened to.

If 5 is preferred over 1, then listen to and compare 5 with 3. If there is no preference, then compare 3 with 2 (if there is no preference, we arbitrarily choose the lowest setting and assume there would also be no preference with 4). If 2 is preferred, or there is no preference, then 2 is the final selection; if 3 is preferred, then 3 is the final selection, and only 3 pairs have been listened to.

If 1 is preferred over 5, then listen to and compare 1 with 3. If there is no preference, then compare 1 and 4. If 1 is preferred over 4, or there is no preference, then 1 is the final selection. If 4 is preferred, then 4 is the final selection, and only 3 pairs have been listened to.

If 1 is preferred over 3, then listen to 2. If there is no preference, or 1 is preferred, then 1 is the final selection; if 2 is preferred, then 2 is the final selection, and only 3 pairs have been listened to. If 5 is preferred over 3, then listen to 4; if 4 is preferred, or there is no preference, then 4 is the final selection. If 5 is preferred, then 5 is the final selection, and only 3 pairs have been listened to.
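The branch sequence above can be generalised: assuming preferences are single-peaked over the ordered settings, bisecting the remaining interval needs roughly log2(n) listening trials rather than all n(n-1)/2 pairs. The following is a minimal sketch of that idea (the listener stub and its "true" optimum are hypothetical), not a transcription of the exact branches described above.

```python
def choose_setting(settings, prefer):
    """Pick a preferred setting from an ordered list with few trials.

    prefer(a, b) plays both settings and returns the preferred one, or
    None if the listener cannot tell them apart (in which case the
    lower setting is kept, as in the walk-through above).
    """
    lo, hi = 0, len(settings) - 1
    while lo < hi:
        winner = prefer(settings[lo], settings[hi])
        if winner is None:                # indistinguishable pair
            return settings[lo]
        if winner == settings[lo]:
            hi = (lo + hi) // 2           # narrow towards the low end
        else:
            lo = (lo + hi + 1) // 2       # narrow towards the high end
    return settings[lo]

def prefer(a, b):
    """Stub listener whose true optimum sits near rate 3.8."""
    da, db = abs(a - 3.8), abs(b - 3.8)
    if abs(da - db) < 0.05:               # too close to tell apart
        return None
    return a if da < db else b

print(choose_setting([1, 2, 3, 4, 5], prefer))  # -> 4, after 3 pairs
```

With five settings this never needs more than three comparisons, matching the count in the walk-through.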
There is a privacy issue as to whether somebody's disability can be determined from the settings shared with third parties. For example, if they have their screen reader turned on, then they are very probably visually impaired or blind. It would be possible to infer accessibility settings using a recommender-type system, taking the settings of people with similar disabilities as a starting point from which somebody could further personalise their own system settings.
The next subsection uses the example of an autonomous vehicle to illustrate some of the ethical issues involved and also how not including disabled people in the training data could have disastrous consequences.
Autonomous Vehicles
Autonomous vehicle issues include how they will make ethical decisions (e.g., avoid a child but kill an elderly person). Will there be one globally accepted ethical algorithm? Will each car manufacturer have their own ethical algorithm? Will the owner select from a choice of ethical algorithms? Will the car learn from how the owner drives and behaves and personalise an ethical algorithm from this? Will a blind or visually impaired person be permitted to "drive"? 8 How will autonomous vehicles respond to disabled "pedestrians"? An example of the issue is that if a disabled person in a wheelchair cannot use their arms to push themselves along, they can use their legs to push themselves backwards and even possibly use a mirror to see where they are going. When the scenario of a disabled person in a wheelchair crossing the road was put into a self-driving car simulation, the car ran the simulated wheelchair user over, as it misunderstood which way the person was crossing 9 . Developers tend to design for the majority, ignoring outliers, whereas designing for edge cases would be a more inclusive approach. It is, therefore, also important to include disabled people in decisions, to help understand AI implications. Also, AI could be used to help wheelchair users independently control manual or electric wheelchairs, or to help people with cognitive disabilities (e.g., dementia) travel or navigate independently.
The next subsection uses speech recognition as another example of how the unique requirements of disabled people may not be adequately catered for by standard AI solutions.
Speech Recognition
Speech recognition can help people who have difficulty writing to use their voice to write. It can also assist people who have difficulty hearing by providing captions and transcripts. Speech recognition was originally personalised for each individual through extensive training by that individual on systems installed locally, but now, cloud-based speaker-independent recognition is ubiquitous, and only one locally installed speaker-dependent recognition software package is commercially available 10 . There is little commercial benefit for companies to develop speech recognition, speech synthesis, or machine translation for minority languages. Standard speech recognition also does not work well for people with dysarthric speech and so needs a special system (Hawley et al., 2019). Using AI for lipreading has been shown to increase the accuracy of speech recognition, especially in noise 11 . The growing availability and falling cost of 3D cameras 12 should help continue to improve accuracy. Many people have expressed concerns about "Deepfakes" 13 , where AI has, for example, been used to control people's lip movements and speech to make them appear to say things they never said. Nobody, however, appears to have thought of using the same technology to make people more lipreadable. Automatic captions can indicate some nonspeech sounds (e.g., music, laughter, and applause 14 ), and emotion detection from speech 15 and faces 16 is improving.
For people who will lose their voice due to disease, a personalised voice can be created before this occurs 17 .
The question whether localisation is "personalisation" for a cultural group is discussed and illustrated in the next subsection through the example of the author's work on developing Arabic symbols for Arabic people unable to communicate in speech or writing.
Localisation
Localisation can be defined as "the process of making a product or service more suitable for a particular country, area, etc." 18 Is localisation "personalisation" for a cultural group? Assistive technologies can need localising in terms of language as well as culture. We developed Arabic symbols for people who found it difficult to communicate in speech or writing because many western symbols were not culturally appropriate and also some cultural symbols did not exist 19 . These symbols were created by a graphic designer working with symbol users and so were expensive and time consuming to produce. We are currently investigating ways in which AI could be used to create symbols automatically from photographs.
Selecting the required symbol from a hierarchically structured symbol board can take a long time (e.g., select foods at the top-level board, vegetables at the next-level board, and cauliflower from the vegetable board), so it would be more efficient to automatically suggest the required symbols based on the context (e.g., the system knows the user is in a supermarket and knows their shopping list).
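One simple way such context-based selection could work is to score every symbol in the vocabulary by the overlap between its tags and the current context, surfacing "cauliflower" directly when the device knows the user is in a supermarket. The vocabulary, tags and scoring below are purely hypothetical, a sketch rather than the system under investigation.

```python
# Hypothetical symbol vocabulary: each symbol carries a set of tags.
SYMBOLS = {
    "cauliflower": {"food", "vegetable", "supermarket"},
    "apple":       {"food", "fruit", "supermarket"},
    "bus":         {"transport", "street"},
}

def suggest(context_tags, k=2):
    """Rank symbols by overlap between their tags and the context."""
    return sorted(SYMBOLS, key=lambda s: -len(SYMBOLS[s] & context_tags))[:k]

# The device knows the user is in a supermarket looking at vegetables.
print(suggest({"supermarket", "vegetable"}))  # ['cauliflower', 'apple']
```

A deployed system would of course learn such associations and weight them by the user's history rather than relying on hand-written tags.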
The final subsection gives a brief discussion of the potential of neurosymbolic AI that integrates probabilistic machine learning with structured symbolic AI to help overcome many issues such as small datasets and explainability.
Neurosymbolic AI
Machine learning can use deep neural networks to develop probabilistic models from large training datasets without having prior knowledge of the knowledge structure of the data. This has, for example, allowed the development of speech recognition and machine translation systems that do not need to be provided with a model of language structure. Symbolic AI methods can use logic-based structured semantic conceptual knowledge representation and reasoning from ontologies or knowledge graphs to help create rules that do not require the large training datasets needed by many machine learning methods.
Neurosymbolic AI 20 is an approach that tries to integrate machine learning approaches with symbolic methods to gain the combined benefits of both approaches (e.g., where large datasets are not available and perhaps where less computing power is available and also to help provide explainable or verifiable AI).
While this can help in overcoming the limited information about disabled individuals available in machine learning training datasets, it can only "broadly" categorise disabled individuals in terms of their disabilities, rather than personalise for a disabled individual in terms of their unique abilities and disabilities.
This approach could, however, for example, help reduce the number of possible accessibility settings on their mobile phone that a disabled individual would need to select from to find their personalised optimum setting. Mao et al. (2019) presented a method that jointly learns visual concepts, words, and sentences from images, questions, and answers and suggested applying neurosymbolic learning frameworks as future work toward automatic learning in complex interactive environments. Although not discussed in the study, this would appear to have particular potential for assisting blind people in navigating and interpreting their environment. Kursuncu et al. (2020) proposed a learning framework that infuses domain knowledge within the neural networks, unlike previous approaches that utilized knowledge outside neural attention models, to provide "better generalizability, reduction in bias and false alarms, disambiguation, less reliance on large data, explainability, reliability, and robustness, to the real world applications." Besold et al. (2017) reviewed ideas on neurosymbolic learning and reasoning and outlined some of the technical challenges, while acknowledging that "knowledge about these issues is only limited and many questions still have to be asked and answered" with impact "in many areas including the web, intelligent applications and tools, and security." Arabshahi et al. (2020) inferred missing presumptions through reasoning to discover commonsense knowledge from if-then-because statements from a human-derived dataset.
Readers wishing to know more about the many current technical approaches to neurosymbolic AI may find the recent presentation by Alexander Gray (IBM Research), "A recent review of Neuro-Symbolic AI: Overview and Open Questions", of interest 21 .
REVIEW AND DISCUSSION OF RELEVANT LITERATURE
This section discusses some published studies regarding a range of issues concerning AI and disabled people. Draffan et al. (2019a) discussed how data collections are not often inclusive or algorithms transparent. They presented a roadmap for digital accessibility research and development using AI to support those with disabilities with examples where strategies can help prevent barriers to inclusion. Their extensive literature review showed how "disability" was wrongly considered as a homogeneous concept and inclusion did not consider accessibility or design for all or equity of access. They concluded that algorithms needed to be designed for inclusion by removing bias and ensuring fairness to achieve enhanced digital accessibility.
Datasets used to train machine learning algorithms can exclude or underrepresent disabled people and so discriminate against them (e.g., in education, employment, and credit) (Gilligan, 2019). A loan may be refused because the applicant is wrongly classified, whether due to ignorance, to good intentions with respect to privacy, safety, or ethical concerns, or because no better dataset exists. Preprocessing techniques such as oversampling and undersampling can help equalise the size of the classes, but it would be better to have inclusive datasets for underrepresented groups respecting ethics, privacy, and safety.
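As a minimal sketch of the preprocessing idea mentioned above, the following randomly oversamples an underrepresented group until its row count matches the rest of the data. The arrays and group labels are placeholders; duplicated rows add no new information about the group, which is why inclusive data collection remains preferable.

```python
import numpy as np

def oversample_group(X, y, group_mask, seed=0):
    """Duplicate random rows of an underrepresented group until its
    count matches the rest of the dataset (a crude mitigation)."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(group_mask)
    majority = np.flatnonzero(~group_mask)
    extra = rng.choice(minority, size=len(majority) - len(minority),
                       replace=True)
    idx = np.concatenate([majority, minority, extra])
    return X[idx], y[idx]

# Placeholder data: 95 rows from the majority group, 5 from the minority.
X = np.arange(100).reshape(100, 1)
y = np.zeros(100)
mask = np.zeros(100, dtype=bool)
mask[:5] = True
Xb, yb = oversample_group(X, y, mask)
print(len(Xb))  # 190: both groups are now represented by 95 rows
```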
"AI bias" can marginalize disabled people by classifying them as outliers affecting fair access to important services (e.g., health insurance and credit). The IBM Fairness 360 Open Source Toolkit's algorithms 22 claim to "examine, report and mitigate discrimination and bias in machine learning models." Zimmerman et al. (2019) studied the effect of AIF360 on the accuracy of gender recognition for face images of persons with and without Down syndrome (DS) in the proportion of persons with DS in the German population (0.1%). They found the AIF360 toolkit has the potential for mitigation of AI bias, but a larger sample is needed to confirm this.
Wolters (2019) examined the extent to which ergonomic and accessibility issues are acknowledged and discussed in the literature but found that research studies only consider eHealth solutions for chronic pain management and not ergonomic or accessibility aspects and concluded that this needed to be undertaken before leveraging AI meaningfully to address them.
Individuals with complex communication needs can use symbols with text translations, but data are scarce, and conversions are fraught with complications due to the different types of linguistic concepts, imagery, and language and limited harmonization or standardization, and so, users find it hard to access suitable personalised or localized symbols. Draffan et al. (2019b) examined how symbol sets can be linked with multilingual options using AI image recognition to improve outcomes by automatically creating a more diverse range of symbols based on transforming photos. Potter et al. (2019) identified four pitfalls in the use of deep learning for personalisation of assistive technology in order to help allocate scant resources to benefit end users: fallacies that there is "true" knowledge inherent in data; mistakes that derive from ignorance of the limitations of methods; constraints of human commerce; and failings from incorrect, ill-considered, or improper use of AI.
Another issue of data-driven personalisation is the ethics of AI for "eugenics" or "curing" neurodiversity (e.g., biomarkers for autism) or disability. It is offensive to people with autism to see this as something people should aim for, and so individuals with autism and their families need to be treated with respect and understanding (Walsh et al., 2011). Hens et al. (2019) discussed "whether autism is a disorder to be treated or an identity to be respected." The power of AI deep learning to search the human genome for mutations and predict autism or other conditions (Zhou et al., 2019) increases the possibility of data-driven "personalisation" for parents to ensure their babies are born without disabilities. Johnston (2005) argued that "the premise that deafness is not a disability of some sort is false and thus the claim that genetic selection against deafness is unethical is untenable." A deaf lesbian couple turned to a friend with five generations of deafness in his family after being turned away by a sperm bank which told them that donors with disabilities were screened out 23 .
Clause 14/4/9 of the Human Fertilisation and Embryology (HFE) bill 24 blocks any attempt by couples to use modern medical techniques to ensure their children are deaf as it states that "Persons or embryos that are known to have a gene, chromosome or mitochondrion abnormality involving a significant risk that a person with the abnormality will have or develop a serious physical or mental disability, a serious illness or any other serious medical condition must not be preferred to those that are not known to have such an abnormality." Fayemi (2014) discussed the need for "prenatal genetic testing, as well as abortion of foetuses with a high risk of the autism mutation." Johannessen et al. (2017) discussed how "Adults with ASD fear that people with ASD traits eventually will be eliminated through prenatal testing and selective abortion" and that "professionals believe that genetic testing could improve the possibility for early intervention" and reported the results of their study of parent members of the Norwegian Autism Society, 76% of whom would undergo clinical genetic testing if it would improve the possibilities for early interventions.
CONCLUSION
This study will hopefully have helped people working in the field of AI understand some of the issues regarding disabled people.
This study has suggested that the relationship between the terms "Personalisation" and "Classification" with regards to AI and disability inclusion is a very unique one because of the heterogeneity in contrast to the other protected characteristics and so needs unique solutions. This can, for example, result in assistive technologies developed for a broad category of disability (e.g., visually impaired people or hearing impaired people) not being appropriate or the optimum for a particular individual with a specific unique visual impairment or hearing impairment as well as perhaps other disabilities.
Issues that have been discussed in this study include personalising mobile technology accessibility settings with interoperable profiles to allow ubiquitous accessibility; the ethics of using genetic data-driven personalisation to ensure babies are not born with disabilities; the importance of including disabled people in decisions to help understand AI implications; the relationship between localisation and personalisation, as assistive technologies need localising in terms of language as well as culture; the ways in which AI could be used to create personalised symbols for people who find it difficult to communicate in speech or writing; whether a blind or visually impaired person will be permitted to "drive" an autonomous car; and how neurosymbolic AI can help reduce the number of possible accessibility settings a disabled individual would need to select from to find their personalised optimum setting.
Classification using big data struggles to cope with the individual uniqueness of disabled people 25 ; whereas developers tend to design for the majority so ignoring outliers, designing for edge cases would be a more inclusive approach as these solutions will also work for the majority. It is, therefore, important for AI developers to involve disabled people when developing AI systems.
Technology that accommodates the needs of disabled people can also often better meet the needs of nondisabled people (e.g., captions for deaf people can help everyone when the sound is not available such as in airport lounges).
There are still many challenges for AI to support disabled people. For example, automatic audio description of videos requires reasoning and understanding subtle meanings and context to identify what visual information is important (e.g., if a person leaves a room, is it important to know they did not hear what was said after they left?), and while AI can help provide automatic sign language translation of captions using human video clips or avatars, the quality of translation for a visual language is not currently as good as translations between written languages which have vast amounts of data available for training the AI systems.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication.
FUNDING
This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1.
Assessing Temporary Speed Restrictions and Associated Unavailability Costs in Railway Infrastructure
This paper analyses the occurrence of temporary speed restrictions in railway infrastructure associated with railway track geometry degradation. A negative binomial regression model is put forward to estimate the expected number of temporary speed restrictions, controlling for the main quality indicators of railway track geometry degradation and for the maintenance and renewal actions/decisions. The prediction of temporary speed restrictions provides a quantitative way to support the assessment of unavailability costs to railway users. A case study on the Lisbon–Oporto Portuguese line is explored, comparing three statistical models: the Poisson, the ‘over-dispersed’ Poisson and the proposed negative binomial regression. Main findings suggest that the main quality indicators for railway track geometry degradation are statistically significant variables, apart from the maintenance and renewal actions. Finally, a discussion on the impacts of the unavailability costs associated with temporary speed restrictions is also provided in a regulated railway context.
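The model comparison described in the abstract can be illustrated in outline with standard count-data tooling. The sketch below uses entirely synthetic placeholder data (the covariate names, coefficients and the fixed dispersion parameter are assumptions, not the paper's Lisbon-Oporto estimates); in practice, the dispersion parameter of the negative binomial model would itself be estimated.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Placeholder data: one row per track segment and period, with the
# standard deviation of longitudinal level (sd_ll) as a geometry
# quality indicator, a renewal dummy, and the TSR count as response.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "sd_ll": rng.gamma(2.0, 0.6, n),    # synthetic indicator [mm]
    "renewed": rng.integers(0, 2, n),   # 1 if the segment was renewed
})
mu = np.exp(-1.0 + 0.9 * df.sd_ll - 0.8 * df.renewed)
df["tsr"] = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))  # mean = mu

poisson = smf.glm("tsr ~ sd_ll + renewed", df,
                  family=sm.families.Poisson()).fit()
negbin = smf.glm("tsr ~ sd_ll + renewed", df,
                 family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(poisson.aic, negbin.aic)  # over-dispersion favours the NB fit
```

The expected TSR counts from such a model can then be multiplied by a delay cost per restriction to quantify unavailability costs to railway users.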
In transportation infrastructure systems, maintenance and renewal operations might cause some impacts on the availability of the railway system, besides the costs associated with these operations. One of these impacts is the occurrence of temporary speed restrictions, which affect the normal operation of trains on the railway infrastructure and cause unavailability costs. A main cause of the occurrence of temporary speed restrictions is the degradation of the railway track, namely railway track geometry degradation. In order to study these impacts, performance indicators of the railway infrastructure have to be measured and monitored [1], namely temporary speed restrictions as an availability indicator. Besides, the analysis of such indicators has to take into account the infrastructure influence on rail punctuality (delays), i.e. infrastructure fault datasets should be linked with operational delay datasets to improve railway infrastructure management. Temporary speed restrictions are also part of the performance indicators that regulators use to assess the infrastructure manager's performance [3].
Moreover, the imposition of temporary speed restrictions has been found to be an influencing factor on train punctuality [4]. Operating speed has also been identified as a key variable in infrastructure design consistency [5].
Many studies on delays in railway infrastructure have focused on quantifying the delays [6], given the occurrence of a delay (e.g. a temporary speed restriction), exploring the impact of a given train delay on the network. These studies do not discuss what caused the occurrence of a temporary speed restriction; they just assume that it happens and are then interested in computing the different train delays imposed on a given network. From our perspective, there is a missing link between railway track geometry, the maintenance and renewal actions, and the occurrence of temporary speed restrictions in the network. Moreover, in any decision support system for planning maintenance and renewal actions in transportation infrastructures, the assessment of unplanned impacts like temporary speed restrictions is crucial for the definition of a maintenance/renewal strategy that not only minimizes maintenance and renewal costs but also minimizes delays.
The past research on railway delay modelling has walked a long path, mainly focused on the quantification of train delays. Several studies aimed to model train delays, considering primary delays and knock-on effects or secondary delays, i.e. the propagation of delays to other trains in the network. A delay estimation methodology is put forward in [7], which defines an exponential relation between travel time delay and train mix for single and double track lines, validated with simulation results from a design of experiments and also with real-world delay values from a sub-network existing in the Los Angeles area. Moreover, they provided an excellent review of previous research, identifying two main approaches dominating the research on this topic: the analytical models and the simulation models. Another approach also relied on simulation software (Rail Traffic Controller) results to fit an exponential dependence on the number of trains per day to estimate average delay times for single and double track due to in-service failures of different lengths (e.g. 1 h, 3 h and 5 h) and associated costs [8]. They also conducted some analysis on the variability of train delays for different traffic volumes. Further micro-simulation/simulation models that support decisions regarding timetabling and railway operations were also explored in [9]. Another statistical estimation approach modelled railroad congestion delay from BNSF railway data for eight districts in the western US, using multiple linear regression with an exponential functional form to explain the total train running time (i.e. the free running time plus the congestion-related delay) using as independent/explanatory variables: train-related and track-related variables, primary and secondary-effect variables and capacity utilization effect variables [10]. Moreover, delays incurred by the passengers have been analysed and an overall generalized waiting cost was put forward, comprising: the cost of extra stopping in the stations, the cost of extended transfer times and the cost of deviating from the ideal running time supplements [11]. This approach detailed all passenger flows in train connections, namely the transfer passengers, the through passengers, the departing passengers and the arriving passengers, assigning distinct costs to each type of delay. In the same research direction, i.e. focusing on delays suffered by the passengers, passenger delay models were explored instead of the typical train delay models, in which passengers are adaptive agents that may choose a different route than their planned route (assuming, in the most optimistic scenario, that they have complete knowledge of present and future delays in the system) [12].
76
These two contributions [11,12] represent the most important steps towards the quantification of railway delays suffered by the passengers (or freight).
Regarding the delays caused by the infrastructure manager, i.e. the infrastructure delays, there is a lack of published references. To the best of our knowledge, the only reference discussing infrastructure delays is [13], in which delay risks associated with train schedules were modelled, detailing three types of delays: track related delays, train dependent delays and terminal/schedule stop delays.
Nevertheless, there has been little research on the impact of maintenance and renewal decisions on railway delays and unavailability costs. For simplicity, let us put forward a classification of delays according to the agent responsible for causing them, in order to frame this research work in a larger research framework. The term 'agent' is used considering the vertical separation between the Infrastructure Manager (IM) and the Train Operating Companies (TOC), which means that the agents may be: the IM, the different TOC and also the passengers or freight (i.e. the final users). Having said that, delays can be classified into three groups:
i) infrastructure delays, i.e. the delays whose responsibility is assigned to the infrastructure manager;
ii) train operating companies delays, i.e. the delays whose responsibility is assigned to the train operating companies;
iii) passenger or freight delays, i.e. the delays whose responsibility is assigned to the passengers or the final users.
This classification is particularly useful as it emphasizes the need for more research on the link between degradation processes, the maintenance and renewal actions under the IM's responsibility, and the above-mentioned infrastructure delays. Note that other railway agents that could also be integrated in this conceptual framework for a vertically separated sector would be the regulator (or regulatory entity) and the maintenance contractors.
To a certain extent, there is a parallel between this proposed delay classification and the one put forward in [13], especially in the first two groups, i.e. the infrastructure delays (or track-related delays) and the train operating companies' delays (train dependent delays), respectively. However, the terminal/schedule stop delays from [13] are not necessarily equal to the passenger delays, as a passenger can catch a delayed train without incurring any delay impact on his/her trip.
Let us now focus on the infrastructure delays, as they are the most relevant for the IM decision-making process, and in particular on the infrastructure delays due to temporary speed restrictions (i.e. the unplanned infrastructure delays).
Some of these delays are not even fully perceived by the passengers, the operators, or the regulator.
These delays were defined above as planned infrastructure delays because they are associated with medium-/long-term downgrades of speed performance due to reductions of the maximum permissible speed. As these changes are immediately incorporated into train schedule production, they are not perceived as delays by the other railway agents and, in fact, they may hide a poor performance of the IM in terms of asset management regarding maintenance and renewal actions. However, the aim of this paper is to discuss solely the unplanned infrastructure delays due to temporary speed restrictions; the planned infrastructure delays are left for further research, though some first steps have been taken in [14] within a bi-objective optimization model for maintenance and renewal decisions.
The outline of this paper is as follows: this first section introduces the need to assess the occurrence of temporary speed restrictions in railway infrastructure and reviews the past research on railway delays, focusing on the delays related to maintenance and renewal actions (the 'infrastructure delays'). Afterwards, a review is provided of the statistical methodology followed within the Generalized Linear Model framework (namely the negative binomial regression and the Poisson and 'over-dispersed' Poisson regressions), in which the different regression models are estimated for our case study and compared using the Akaike Information Criterion (AIC). A contextual discussion on the impact of assessing temporary speed restrictions and their associated costs within the railway regulatory framework is then put forward. Finally, the last section highlights the main conclusions and suggests further research on this topic.
This section explores and discusses the statistical methodology followed in this paper to predict the occurrence of temporary speed restrictions in railway infrastructure within the Generalized Linear Model framework, namely using the negative binomial regression model and the Poisson and 'over-dispersed' Poisson regression models.
To assess the temporary speed restrictions related to rail track geometry, a database from the Portuguese IM (REFER), called 'e-LVs', was analysed. This application/database compiles information regarding temporary speed restrictions, namely: the identification details, such as the line, the direction and the location; the delay details, such as the theoretical/computed delay, the restriction speed, the maximum permissible speed, the initial and final times, and the motive; and other information not relevant for the following discussion.
Of course, many temporary speed restrictions have motives other than those related to the rail track subsystem or to railway track geometry; take, for instance, temporary speed restrictions due to maintenance actions on the catenary subsystem. Such speed restrictions were excluded from the following assessment: only the speed restrictions related to rail track geometry condition, maintenance or renewal actions were included in this analysis. In fact, the IM is responsible for 20% up to 30% of the total delays in the railway system, and the track system and its faults are responsible for around 3% of the total delays in the railway system [1,2].
The Immediate Action Limits (IAL), the Intervention Limits (IL) and the Alert Limits (AL) are set by the European Standard EN 13848-5 [16] for all rail track geometry defects. For further information on these rail track geometry defects and their indicators, the reader is referred to [17][18][19][20], while for further details on the railway track system, irregularities and the variability of some physical parameters, the reader is referred to [21,22].
The Poisson distribution is usually parameterized through the parameter λ and has the following probability mass function: P(Y = y) = e^(−λ) λ^y / y!, for y = 0, 1, 2, …
The main difference between the over-dispersed Poisson regression model and the negative binomial regression model is that the variance of the former is a linear function of the mean, while the variance of the latter is a quadratic function of the mean [23]. The negative binomial regression model has been used in several studies related to infrastructure modelling [24], from estimating transition probabilities in highway infrastructure degradation [25] to hurricane-related outages in electric power systems [26], and even in railway safety [27] and road safety [28].
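As a sketch of how these count models can be estimated and compared with the AIC, the snippet below fits Poisson and negative binomial regressions to synthetic TSR counts using Python's statsmodels; the column names (tsr, Tp, Rw, AL) are hypothetical stand-ins for the variables built from the e-LVs database, and the simulated coefficients merely echo the orders of magnitude reported later.

```python
# Fit Poisson vs negative binomial regressions for TSR counts and compare
# via AIC; incidence rate ratios are exp(coefficients). Synthetic data,
# hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "Tp": rng.poisson(2, n),        # tamping / maintenance actions
    "Rw": rng.binomial(1, 0.1, n),  # renewal indicator
    "AL": rng.poisson(5, n),        # alert-limit exceedances
})
mu = np.exp(-1.0 + 0.14 * df["Tp"] + 1.79 * df["Rw"] + 0.05 * df["AL"])
df["tsr"] = rng.negative_binomial(2, 2 / (2 + mu))  # over-dispersed counts

pois = smf.poisson("tsr ~ Tp + Rw + AL", data=df).fit(disp=0)
nb = smf.negativebinomial("tsr ~ Tp + Rw + AL", data=df).fit(disp=0)

print(f"AIC  Poisson: {pois.aic:.1f}   NegBin: {nb.aic:.1f}")  # lower is better
print(np.exp(nb.params[["Tp", "Rw", "AL"]]))  # incidence rate ratios
```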
Moreover, the variables controlling all other rail track geometry defects, i.e. IAL, IL and AL, also proved to be statistically significant predictors. Finally, the maintenance (Tp) and renewal (Rw) decisions were also statistically significant predictors, exhibiting Incidence Rate Ratios equal to 1.150 and 6.009, respectively. Some discussion of the need to include unavailability costs associated with temporary speed restrictions was also provided.
Regarding further research, the final objective of the present model is the integration of the expected number of temporary speed restrictions and associated delays in an objective function, in order to optimize the Alert Limits that trigger preventive maintenance actions as part of a planned maintenance strategy.
| 3,091.2 | 2018-02-01T00:00:00.000 | ["Mathematics"] |
Robustness evaluation for rolling gaits of a six-strut tensegrity robot
Locomotive robots based on tensegrities have recently drawn much attention from various communities. A common strategy to realize long-distance locomotion is combining several basic gaits that are designed in advance. Considering the unavoidable uncertainties of the environment and the real locomotive system, selecting gaits with high robustness is essential to the implementation of long-distance locomotion of tensegrity robots. However, no quantitative approach for the robustness evaluation of rolling gaits has been reported in recent research work. In this study, a practical and quantitative method is proposed for the robustness evaluation of rolling gaits of tensegrity robots. A mathematical model is built to describe the evaluation process, and the success rate of rolling is adopted as an indicator of robustness. Sensitivity analysis and robustness evaluation are conducted on the rolling gaits of a typical six-strut tensegrity robot. Specifically, the sensitivities of the rolling gaits to five uncertain parameters (i.e. tendon stiffness, initial tendon prestress, the equivalent mass of nodes, actuation lengths of actuators, and slope of ground) are investigated and discussed in detail, and the robustness of the rolling gaits is evaluated by correlated random sampling. Experiments on a physical prototype of the six-strut tensegrity robot are carried out to verify the proposed concept and method.
Introduction
A tensegrity system is a special self-equilibrated pin-jointed structural system comprising a discontinuous set of compressed components inside a continuous set of tensioned components. 1 The shape of a tensegrity system can be actively controlled by adjusting the prestress in the components, making it a good candidate for structural systems that require controllable shapes, such as smart structures, 2,3 deployable structures, 4,5 and locomotive robots. [6][7][8] More attention has been paid to tensegrity-based robots due to their features of light weight, efficiency, and high deformability. Paul et al. 9 investigated the dynamic characteristics and control strategies of tensegrity robots and conducted experimental validation using physical prototypes. Shibata et al. 10 designed and experimentally validated the crawling behaviors of tensegrity robots based on body deformation. Boehm and Zimmermann 11 proposed vibration-driven mobile robots based on single actuated tensegrities. Among various types of tensegrity robots, spherical tensegrity robots with rolling gaits have attracted the most attention due to their excellent locomotion ability, which gives them potential applications in fields such as planetary exploration. Koizumi et al. 12 designed and tested a spherical tensegrity robot driven by a set of pneumatic soft actuators, which can perform rolling over flat ground. Caluwaerts et al. 13 developed a physical prototype of a six-strut spherical tensegrity robot. Kim et al. 14 presented a spherical tensegrity robot that can deliver payloads over a long distance by combining cable-driven rolling and thruster-based hopping. Chen et al. 15 demonstrated a teleoperated spherical tensegrity robot capable of performing locomotion on steeply inclined surfaces. Luo and Liu 16 set up a mathematical model of spherical tensegrity robots and analyzed the relationship between the deformation and the trajectory of the tensegrity centroid. Böhm et al. 17 proposed a locomotion system based on a spherical tensegrity consisting of two compressed curved members and a continuous net of tensioned members. Zhang et al. 18 achieved automatic learning of rolling gaits for a tensegrity robot based on mirror descent guided policy search. It is worth noting that automatic learning is a hot topic in the robot field. For example, Tutsoy et al. 19 modeled the legged NAO humanoid robot and developed a reduced-order reinforcement learning-based adaptive control algorithm for the balancing task.
Rolling is the main locomotion form of spherical tensegrity robots to achieve long-distance movements. The long-distance rolling of a spherical tensegrity robot is usually composed of a series of rolling gaits. Cai et al. 20 generated a series of repeatable rolling gaits with identical initial and final states for possible long-distance rolling. Lu et al. 7 proposed a Dijkstra algorithm-based path planning approach to combine rolling gaits for long-distance locomotion. Chang et al. 21 presented a path planning method based on basic rolling gaits using the A* algorithm. Littlefield et al. 22 proposed approaches to produce long-term locomotion using rolling gaits, in which a standard search method is used for simple environments and an informed sampling-based planner for complex environments.
The robustness of rolling gaits should be considered when applying tensegrity robots in practice. The rolling gaits obtained from numerical simulation might deviate from expectations when applied in practice due to the uncertainties of the environment and the real locomotive system. Therefore, robust rolling gaits capable of executing the expected motions under these uncertainties are needed for a real tensegrity robot. There has been some research on generating robust motion of tensegrity robots. For example, Iscen et al. 23 proposed a coevolution algorithm to generate robust goal-directed motion for a six-strut tensegrity robot. A learning algorithm together with form-finding-based simulation was used to generate robust movement for a six-strut tensegrity robot (Kim et al. 24 ). These kinds of research provide effective ways of generating robust motion of tensegrity robots. However, the robustness mentioned in both of the above studies is conceptual and qualitative: no quantitative uncertainties of parameters were considered and no quantifiable definition of gait or motion robustness was proposed. A quantitative approach is needed to better understand and evaluate the robustness of tensegrity robots.
In this study, the robustness of spherical tensegrity robots that achieve long-distance movement by combining several basic rolling gaits is quantitatively investigated. A definition for the robustness of the motion gaits is proposed, and a procedure evaluating the robustness involving the uncertainties of physical parameters and environment is developed. The proposed definition and procedure are numerically and experimentally employed on a six-strut spherical tensegrity robot.
The layout of the article is as follows. The definition and evaluation procedure of the robustness for rolling gaits of spherical tensegrity robots are presented in the second section. The third section presents the structural configuration of a six-strut tensegrity robot and a number of typical rolling gaits of it. The fourth section investigates the sensitivities of structural and environmental parameters and then selects the parameters involved in robustness evaluation. In the fifth section, the robustness of the rolling gaits of the six-strut tensegrity robot is evaluated by the proposed approach. Experimental validation based on a physical prototype of the six-strut tensegrity robot is carried out in the sixth section. Finally, the seventh section concludes the article. Figure 1 shows the main steps of the robustness evaluation of rolling gaits.
Rolling gait
Tension members of a tensegrity are assumed to bear only tensile forces, while compression members are assumed to be rigid and able to bear both compressive and tensile forces. Spherical tensegrity robots that achieve long-distance movement by combining a number of basic rolling gaits are considered. A rolling gait denoted as B_i is determined by the actuations of the actuators and the initial state of the tensegrity system, that is,

B_i = {e_i(t_i), s_i}  (1)

where e_i(t_i) is an n_a-length vector of actuations of gait i and n_a is the number of actuators; t_i is the actuation time; and s_i is the initial state of the tensegrity system. e_i(t_i) must satisfy

e_l ≤ e_i(t_i) ≤ e_u  (2)

where e_l and e_u represent n_a-length vectors of the lower and upper limits of the actuations, respectively. The initial state s_i is expressed as

s_i = {e_si, C_i}  (3)

where e_si is an n_a-length vector of the initial elongations of the actuators of rolling gait i and C_i is the initial contact condition of rolling gait i. The internal force vector T_i(t_i) of a spherical tensegrity robot can be expressed as

T_i(t_i) = [T_{i,1}(t_i), T_{i,2}(t_i), …, T_{i,q}(t_i)]^T  (4)

where T_{i,j}(t_i) is the internal force of the j-th member and q is the number of structural members. The internal forces of the members should not exceed the corresponding design strengths. During the deployment of a gait, the compression members can bear compressive or tensile forces, and the tensile members may slacken temporarily. As a result, the internal forces of the members must satisfy

−T_j^l ≤ T_{i,j}(t_i) ≤ T_j^u  (5)

where T_j^l and T_j^u are the compressive and tensile strengths of the j-th member, respectively. For the tensegrity robots considered in this article, it is assumed that at the initial state of a rolling gait they comply with the conventional definition of a tensegrity system widely used in structural engineering, that is, the compression members are in compression and the tension members are in tension. This assumption can be formulated as

T_{i,j}(0) ∈ [−T_j^l, 0) for compression members; T_{i,j}(0) ∈ (0, T_j^u] for tension members  (6)
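The constraints in equations (2), (5) and (6) amount to simple elementwise checks on the actuation and internal-force histories produced by a structural simulation. The following minimal Python sketch illustrates them; all names are illustrative, and the force arrays are assumed to come from whatever solver (e.g. dynamic relaxation) simulates the gait.

```python
# Sketch of the gait feasibility checks in equations (2), (5) and (6).
import numpy as np

def actuation_feasible(e, e_l, e_u):
    """Eq. (2): every actuation stays within its stroke limits."""
    return np.all((e >= e_l) & (e <= e_u))

def forces_feasible(T, T_l, T_u):
    """Eq. (5): member forces never exceed design strengths.
    T has shape (timesteps, members); compression is negative."""
    return np.all((T >= -T_l) & (T <= T_u))

def initial_state_valid(T0, is_strut):
    """Eq. (6): at t = 0 struts are in compression, tendons in tension."""
    strut_ok = np.all(T0[is_strut] < 0)
    tendon_ok = np.all(T0[~is_strut] > 0)
    return strut_ok and tendon_ok
```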
Gait robustness and evaluation procedure
The relationship between the robustness of rolling gaits and the uncertainties of environmental and structural parameters might be highly nonlinear, and finding an explicit solution can be quite difficult or even impossible. Monte Carlo sampling provides a simple and direct way to estimate the robustness by repeated tests; therefore, it is adopted here to obtain a preliminary and global insight into the robustness of rolling gaits of tensegrity robots. The parameters of a tensegrity robot are randomly sampled using given distributions. A rolling gait is tested using the various sampling results, and the success rate of the gait is calculated and used as an index of its robustness, that is,

ROBUST_i = NUM_{i,su} / NUM_{i,to}  (7)

where ROBUST_i is the robustness of gait i; NUM_{i,su} is the number of times that gait i successfully achieves the expected locomotion; and NUM_{i,to} is the total number of tests of gait i. Note that the robustness defined above is used to scale the effectiveness of the gait design under the uncertainties of the numerical model used in the design, which is not the same as the one considered in robust control. To evaluate the robustness with the above definition, a four-step procedure is developed, whose first step is to select the typical rolling gaits B_i^eva, i = 1, …, n, that are included in the evaluation, where B_i^eva represents the rolling gait i selected as a typical rolling gait used in the robustness evaluation and n is the number of gaits.
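A minimal sketch of this Monte Carlo estimate is given below, assuming the normal parameter distributions described later (mean = design value, standard deviation = 0.4 × design value) and using a placeholder for the gait simulation; the diagonal covariance can be given off-diagonal terms to realize the correlated sampling used in the paper, and the design values shown are hypothetical.

```python
# Monte Carlo robustness estimate of equation (7).
import numpy as np

rng = np.random.default_rng(42)

def simulate_gait(gait, params):
    # Stand-in: replace with the actual rolling simulation; returns True
    # when the expected touching-ground triangle is reached.
    return rng.random() < 0.9

def estimate_robustness(gait, design_values, n_tests=50):
    names = list(design_values)
    mean = np.array([design_values[k] for k in names])
    cov = np.diag((0.4 * mean) ** 2)  # add off-diagonal terms for correlation
    samples = rng.multivariate_normal(mean, cov, size=n_tests)
    num_su = sum(simulate_gait(gait, dict(zip(names, s))) for s in samples)
    return num_su / n_tests           # ROBUST_i = NUM_i,su / NUM_i,to

design = {"tendon_stiffness": 104.0, "prestress": 4.0, "node_mass": 0.0325}
print(estimate_robustness("gait_13", design))
```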
Structural configuration
The six-strut tensegrity robot is a typical kind of spherical tensegrity robot, with potential applications in the exploration of complex environments due to its excellent locomotion ability; it is adopted here because of its simplicity and representativeness. The robustness of the numerically generated rolling gaits of the six-strut tensegrity robot under the uncertainties of the environment and the real locomotive system is evaluated. This study should be helpful in the design or selection of more robust locomotive gaits based on numerical simulations. As shown in Figure 2, the tensegrity is composed of 6 struts, 12 nodes, and 24 tendons. Each node is connected to one strut and four tendons. The six struts are divided into three pairs, and the struts in each pair are parallel to each other at the reference state. The outside surface of the tensegrity is a pseudo-icosahedron that consists of eight closed triangles (TCs) and 12 open triangles (TOs). A TC has three tendon edges, and a TO has two tendon edges and a virtual edge without structural members. These triangles are numbered for the convenience of rolling gait descriptions, as listed in Table 1. According to the type of touching-ground triangle, there are two basic states for the tensegrity: the TC state and the TO state, as shown in Figure 2. In a rolling gait, the tensegrity moves from one state to the other. To ensure the repeatability of the rolling gaits, the initial state and final state of a rolling gait are required to be identical to one of the basic states; as a result, the rolling gaits can be classified by the type of touching-ground triangle. The struts labeled with the prefix "A" in Figure 2(a) are used as active members whose rest lengths can be actively changed by actuators. The rest length of the actuated struts is assumed to be 200 mm at the initial state and can change within 156-256 mm at a speed of 14 mm/s. Hence, in this typical case, n_a = 6, e_i(0) = 200 mm, and e_i(t_i) ∈ [156, 256] mm. The properties assumed for all the structural members are given in Table 2. To represent the control system and power supply, it is further assumed that a rectangular control box with a size of 50 × 50 × 60 mm³ and a mass of 127.6 g is suspended at the center of the system by 24 additional tendons connected to the nodes.
Pool of rolling gaits
Various rolling gaits for the six-strut tensegrity robot can be generated by an approach based on a genetic algorithm incorporated with the incremental dynamic relaxation method. 20 Typical rolling gaits with different initial and final touching-ground triangles and different control strategies are listed in Table 3, that is, gaits 1, 4, 7, 10, …, 28. Note that the gait primitives are categorized into TC→TO and TO→TC types according to their initial and final states. To increase the diversity of gaits and to investigate the effect of actuation lengths on robustness, new gaits are generated by multiplying the actuation lengths of the gaits by scale factors of 0.9 and 0.8; the generated gaits are numbered as gaits 2, 5, 8, 11, …, 29 and gaits 3, 6, 9, 12, …, 30, respectively. The resultant gaits, as presented in Table 3, compose the pool of gaits B_eva used in the robustness evaluation.
Sensitivity analysis of parameters
Parameters with significant influences on the rolling gaits should be the ones involved in the robustness evaluation. In this section, a sensitivity analysis is conducted to identify the main parameters that significantly influence the rolling gaits.
Analysis method
The tornado diagram is utilized for the sensitivity analysis of parameters. 25 In the tornado diagram, one parameter is modeled as an uncertain value while all the other parameters are held at their baseline values, so that the effect of that parameter on the target variable can be isolated. The relative importance of the parameters can then be evaluated by comparing the effects of each parameter. A target variable must be selected to evaluate the influence of the parameters; since the objective of a rolling gait is to achieve motion of the tensegrity system, the traveling distance of the tensegrity centroid can be used to indicate the effectiveness of a gait and is adopted as the target variable.
Parameters participated in sensitivity analysis
According to the authors' experience in the design, manufacture, and testing of a physical prototype of the six-strut tensegrity robot, 7,20,26 the tendon stiffness, the initial tendon prestress, the equivalent mass of nodes, the actuation lengths of the actuators (a vector of length n_a), and the slope of the ground are selected as the parameters of interest for the sensitivity analysis. Specifically, the equivalent mass of nodes is calculated as m = F/g, where F is the equivalent load applied on each node and g = 9.81 m/s² is the gravitational acceleration. Among these parameters, the ground slope is an environmental parameter, and the others are structural parameters. The initial orientations of the tensegrity system in the TC-state and TO-state on a slope are shown in Figure 3. The slope direction is perpendicular to the y-axis, and the changeable slope angle of the ground is defined as the angle from the plane z = 0 to the slope. The initial orientations of the tensegrity system with reference to the slope direction are fixed. For example, for the initial state with touching-ground triangle TC-6 (3, 7, 12), the edge 12-7 is parallel to the slope direction and points in the positive x-direction, as shown in Figure 3(a); and for the initial state with touching-ground triangle TO-5 (3, 11, 12), the edge 12-3 is parallel to the slope direction and points in the positive x-direction, as shown in Figure 3(b).
Since the range of uncertainty of each parameter is unknown in practice, the value of each parameter is assumed to follow a distribution with the design value as the mean and 0.4 times the design value as the standard deviation. The distributions and the 50th, 10th and 90th percentile values of the parameters of interest are listed in Table 4. Note that the means of the tendon stiffness and the initial tendon prestress are identical to the values in Table 2. The actuation-length vector a is composed of the actuation lengths of the six actuators, so its values are not given in Table 4. All five uncertainties are parametric; the slope of the ground is an external uncertainty and the others are internal uncertainties. An adaptive controller able to deal with parametric and nonparametric uncertainties was developed recently. 27 Since this study focuses on the robustness of the given gaits, the design of proper controllers is beyond its scope and is not discussed in detail here.
Tornado diagram
The values of the target variable obtained at the 10th and 90th percentile values of each parameter (Table 4) are recorded as Tar_Value_10 and Tar_Value_90, as listed in Table 5. The tornado diagram is shown in Figure 4, in which the yellow and blue bars represent Tar_Value_90 and Tar_Value_10, respectively, and the green bar denotes their overlapping portion. It is found that the variation ranges of the target variable due to the uncertainties of the four structural parameters are comparable to each other. The variation due to the uncertainty of the tendon stiffness in the TO→TC gait is larger than those due to the uncertainties of the other three structural parameters, while in the TC→TO gait the variation due to the tendon stiffness uncertainty is close to the others. Therefore, all four structural parameters (i.e. the tendon stiffness, the initial tendon prestress, the equivalent mass of nodes, and the actuation lengths) are taken into consideration in the correlated random sampling for robustness evaluation. It is also observed that the influence of the uncertainty of the environmental parameter (i.e. the ground slope) is more significant than the influences of the uncertainties of the structural parameters, especially in the TC→TO gait; therefore, the ground slope is also considered in the correlated random sampling. It is worth noting that when the uncertainty of the ground slope is considered, the mean value of the traveling distances of all TC→TO gaits dramatically increases from 0.0695 m to 0.13 m. A further check reveals that a gait that generates a single rolling on a plane may generate double or even multiple rollings on a slope. These double or multiple rollings lead to the significant increase in traveling distance, which indicates that the slope of the ground has a significant effect on the robustness of rolling gaits.
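The sketch below mirrors this procedure: each parameter is swung to its 10th and 90th percentiles (normal distribution, standard deviation 0.4 times the mean) while the others stay at their baselines, and the bars are sorted by width; the baseline values and the response function are hypothetical placeholders for the real gait simulation.

```python
# Sketch of the tornado-diagram sensitivity analysis.
from scipy import stats

baseline = {"tendon_stiffness": 104.0,  # N/m
            "prestress": 4.0,           # N
            "node_mass": 0.0325,        # kg
            "actuation_scale": 1.0}

def travel_distance(p):
    """Stand-in for the gait simulation returning centroid travel (m)."""
    return 0.07 * p["actuation_scale"] * (104.0 / p["tendon_stiffness"]) ** 0.1

bars = {}
for name, mu in baseline.items():
    p10, p90 = stats.norm.ppf([0.10, 0.90], loc=mu, scale=0.4 * mu)
    lo = travel_distance({**baseline, name: p10})
    hi = travel_distance({**baseline, name: p90})
    bars[name] = (lo, hi)

# Tornado ordering: widest bar (most influential parameter) first.
for name, (lo, hi) in sorted(bars.items(), key=lambda kv: -abs(kv[1][1] - kv[1][0])):
    print(f"{name:18s} {lo:.4f} .. {hi:.4f}")
```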
Robustness evaluation of rolling gaits
Fifty samples are generated by correlated random sampling of the five uncertain parameters to build the state library S_str. For each rolling gait listed in Table 3, each state of S_str is tested, and the robustness of the gait is evaluated using equation (7). The results of the robustness evaluation are listed in Table 6. Note that the superscript s represents simulation results, and NUM_{i,to}^s = 50 for all gaits. It is found that the robustness of a rolling gait increases as the number of used actuators increases. For a TC→TO gait, rolling cannot be achieved if only one or two actuators are used, as given in Table 3, indicating that the corresponding ROBUST_i^s is zero. For TC→TO gaits using three to five actuators, ROBUST_i^s is zero if the scale factor of the actuation lengths equals 0.9 or 0.8 (i.e. gaits 2, 3, 5, 6, 8, and 9), while for TC→TO gaits using six actuators, ROBUST_i^s is 0.98 even if the scale factor is smaller than 1.0 (i.e. gaits 11 and 12). For TO→TC gaits, ROBUST_i^s increases from 0.94 to 0.98 when the number of used actuators increases from 1 to 5. The average ROBUST_i^s of TO→TC gaits using one to three actuators is 0.96, smaller than the average value of 0.98 for the gaits using four to six actuators.
It is also found that the robustness of a rolling gait decreases or remains unchanged as the scale factor of the actuation lengths decreases when the same number of actuators is used. As given in Table 6, for gaits 1-3, 4-6, and 7-9, which use three, four, and five actuators, respectively, ROBUST_i^s becomes zero if the scale factor decreases from 1.0 to 0.9 or 0.8; for gaits 16-18, which use two actuators, ROBUST_i^s decreases from 0.96 to 0.94 if the scale factor decreases from 0.9 to 0.8; and for gaits 28-30, which use six actuators, ROBUST_i^s becomes zero if the scale factor decreases from 0.9 to 0.8. The robustness of TC→TO gaits is lower than the robustness of TO→TC gaits. Many of the TC→TO gaits cannot achieve rolling if a scale factor of the actuation lengths smaller than 1.0 is applied, while most of the TO→TC gaits with the same scale factor can achieve rolling successfully. This indicates that the actuation margins of TC→TO gaits are generally smaller than those of TO→TC gaits. The number of actuators, the scale factor, and the gait type have combined effects on the robustness of the gaits. For example, gait 30 has six actuators and a scale factor of 0.8, but its robustness is smaller than that of gait 27, which has five actuators and a scale factor of 0.8. This might be due to the relatively small actuation margins of gait 30.
Physical prototype
Experiments on a physical prototype of the six-strut spherical tensegrity robot are carried out to verify the proposed concept and method. The physical prototype, based on the configuration detailed in the third section, was manufactured as shown in Figure 5. The properties of the members used in the prototype are identical to those given in Table 2. Six servo linear actuators are used as active struts, 24 rubber ropes are used as tendons, and the actuators and rubber ropes are connected by 3D printed nodes. The servo linear actuators each have an initial length of 15.6 cm and are able to actively extend to 25.6 cm at a rate of 14 mm/s. At the initial state, they extend to 20 cm to prestress the system. The weight of each servo linear actuator is 56.0 g and the weight of each 3D printed node is 4.0 g; hence, a servo linear actuator plus the two nodes at its ends has a total weight of 65.0 g. The rubber ropes each have an initial length of 6.0 cm, a stiffness of 104 N/m, and a weight of 1.0 g. In the center of the prototype system, a control box consisting of a Bluetooth communication module, a servo control module, and a lithium battery is attached to the nodes with 24 rubber ropes identical to those used as tendons. It has a size of 50 × 50 × 60 mm³ and a weight of 127.6 g. Note that the nominal properties of the structural members and the control components given above are identical to the corresponding properties assumed in the numerical simulations.
The prototype is wirelessly controlled through Bluetooth communication by a control program installed on a personal computer. The control program, with a graphic user interface (GUI), is developed based on the stm32 platform. Each actuator can be controlled by executing an input command, and multiple actuators can be actuated simultaneously by this program to achieve a rolling gait. By inputting a series of commands, multiple-step control of the actuators can be conducted to achieve a series of rolling gaits.
Experiment scheme
The rolling gaits listed in Table 3 are tested using the physical prototype. Though the rolling gaits listed in Table 3 are all represented as using TC-6 or TO-5 as the initial touching-ground triangle, they are also applicable to other initial states due to the pyritohedral symmetry of the structure with order S = 24, which means that there are 24 unique combinations of rotations and reflections that result in an equivalent configuration. 22 As a result, the repeated tests for each gait are conducted by switching the starting touching-ground triangle according to the pyritohedral symmetry of the structure. Specifically, six tests are conducted for each of gaits 1-12, with the starting touching-ground triangles TC-1, TC-2, TC-3, TC-4, TC-6, and TC-7, respectively, in which TC-2, TC-3, and TC-7 can be transformed into each other by a rotation of 120°, and TC-1 and TC-6 can be transformed into each other by a combined rotation (60°)-reflection operation. Likewise, six tests are conducted for each of gaits 13-30, with the starting touching-ground triangles TO-1, TO-2, TO-3, TO-4, TO-5, and TO-6, respectively, in which TO-1 and TO-2 (as well as TO-4 and TO-5) can be transformed into each other by a reflection operation, and TO-3 and TO-5 (as well as TO-6 and TO-2) can be transformed into each other by a rotation of 180°. The robustness of the gaits in the experiment is calculated using equation (7).
Experimental results
A typical successful rolling is shown in Figure 6. The tensegrity robot starts with a touching-ground triangle of TO-2 and achieves the rolling by using the control strategy corresponding to gait 13. The experimental results of the robustness evaluation are listed in Table 7. Note that the superscript p denotes experimental results, and NUM_{i,to}^p = 6 for all gaits. It is shown that the robustness of the gaits increases as the number of used actuators increases. For gaits 1-12, ROBUST_i^p of gaits 1, 4, 7, and 10 (i.e. the gaits with the same scale factor of 1.0) are 0.33, 0.33, 0.00, and 0.50, respectively, indicating that the robustness of the gaits with six actuators is greater than the robustness of the gaits with fewer actuators. For gaits 13-30, ROBUST_i^p increases from 0.33 to 0.67 as the number of actuators increases. The robustness of most of the gaits follows this trend, although the robustness of some gaits (e.g. gaits 7-9) deviates from it. Decreasing the scale factor of the actuation lengths may lead to a decrease in robustness; for example, in gaits 10-12, ROBUST_i^p decreases from 0.5 to 0.17 as the scale factor decreases from 1.0 to 0.8. The robustness of TO→TC gaits is higher than the robustness of TC→TO gaits: the average value of ROBUST_i^p for gaits 1-12 is 0.19, much smaller than the average value of 0.44 for gaits 13-30. The above findings from the experimental results are qualitatively consistent with the findings from the numerical simulations in general. However, there are non-negligible quantitative differences between the experimental and numerical results. The possible reasons for the differences are as follows. The physical prototype differs from the numerical model: in the numerical model, the size of the nodes and the sectional sizes of the struts and tendons are ignored, resulting in a geometrical deviation from the physical prototype; moreover, the control box is simulated by adding equivalent mass to the nodes, so the eccentric effect of the control box during motion cannot be considered. In addition, the distributions of the uncertain parameters used in the numerical simulations are ideally assumed and differ from the real distributions in the experiments.
Conclusions
A practical method is proposed for the robustness evaluation of rolling gaits of tensegrity robots. A mathematical model is built to describe the evaluation process, and the success rate of rolling is adopted as an indicator of robustness. Sensitivity analysis and robustness evaluation are conducted on the rolling gaits of a six-strut tensegrity robot. Specifically, the sensitivities of the rolling gaits to five uncertain parameters, that is, tendon stiffness, initial tendon prestress, the equivalent mass of nodes, actuation lengths of actuators, and slope of the ground, are analyzed, and then the robustness of the rolling gaits is evaluated by correlated random sampling of the five uncertain parameters. Experiments are carried out using a physical prototype of the six-strut tensegrity robot. Based on the numerical and experimental results, it is found that the robustness of rolling gaits increases as the number of used actuators increases, and that reducing the actuation lengths proportionally leads to a decrease in gait robustness. Moreover, the robustness of TC→TO gaits is usually lower than the robustness of TO→TC gaits, and the robustness of the gaits may be jointly affected by the number of actuators, the actuation lengths, and the gait type. The qualitative agreement between the numerical results and the experimental results indicates that the proposed robustness evaluation method is effective in both simulated and physical cases. There are still quantitative differences between the experimental and numerical results. These are caused by the unavoidable differences between the numerical model and the physical prototype: (1) the numerical model simplifies the real three-dimensional components and joints of the tensegrity robot into lower-dimensional idealized virtual components and volume-less nodes, which makes the numerical touching-ground triangle deviate from the experimental one; (2) the self-weight of the system is applied as equivalent nodal masses in the numerical model, ignoring the change of mass distribution during rolling; and (3) the magnitudes and distributions of the uncertainties are ideally assumed. Using a more elaborate physical prototype and an improved numerical model will reduce the differences between the numerical and experimental results. For example, the stability of the contact between the joints and the ground can be improved using ball-like plastic joints as used by the Reservoir Compliant Tensegrity Robot, 13 and the magnitudes and distributions of some uncertainties, such as the tendon stiffness, the initial tendon prestress, the equivalent mass of nodes, and the actuation lengths of the actuators, can be determined by repeated tests in advance. These improvements will be adopted in the authors' future work. It is also worth noting that since the magnitudes of the uncertainties are assumed in advance, the effect of the magnitude of the uncertainties on the robustness of a given rolling gait is not shown in this study and needs further investigation.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
| 7,187.6 | 2021-01-01T00:00:00.000 | ["Engineering"] |
Experimental investigation of turbulent transport of material particles
We report measurements of Lagrangian velocity and acceleration statistics of particles transported in a turbulent flow obtained with an acoustic Doppler velocimetry technique. We consider homogeneous isotropic grid turbulence generated in a wind tunnel. As a first step we study isolated particle dynamics with a particular focus on the influence of the particles' finite size on their response to the turbulence forcing. As particles we use neutrally buoyant soap bubbles inflated with helium. The size of the particles can be adjusted from 1.5 mm to 6 mm, corresponding to inertial range scales. We show that the response time of the particles to the turbulence forcing increases with their size. We analyze our data in the frame of two-time stochastic models, and show that the cut-off time scale in the Lagrangian energy spectrum of the particles' dynamics has a dependence on their diameter consistent with a low-pass filtering of the turbulent cascade by the particles' finite size.
Introduction
Particle laden turbulent flows play an important role in various situations, such as industrial processes or the atmospheric dispersion of pollutants. When the particles are neutrally buoyant and small (typically comparable in size with the dissipation scale of the surrounding turbulence) they behave as tracers for fluid particles. However, in many practical situations the particles are heavier and/or larger; their dynamics is then affected by inertial effects and deviates from the dynamics of fluid particles (Maxey et al. (1983), Aliseda et al. (2002), Ayyalasomayajula et al. (2006)). The precise role of the size and density of the particles in the modification of their dynamics with respect to fluid tracers remains largely an open question.
Here, we report measurements of Lagrangian velocity and acceleration statistics of material particles transported in a grid-generated wind-tunnel turbulent flow, with a Reynolds number (based on the Taylor microscale) of R_λ ≈ 200. The dissipation scale η is 200 μm and the energy injection scale L is 2.5 cm. As a first step, we only explore particle finite-size effects. To decouple the roles of size and density of the particles, we consider neutrally buoyant particles, which are soap bubbles inflated with helium whose diameter can be adjusted from 1.5 mm to 6 mm, corresponding to inertial range scales.
Experimental Approach
The Lagrangian measurements are obtained with an acoustic Doppler velocimetry technique (figure 1a): from the instantaneous Doppler frequency shift of acoustic waves scattered by a particle in a turbulent flow, we measure the velocity of the particle (Poulain et al. 2004). The instantaneous frequency is determined with a parametric maximum of likelihood algorithm derived by Mordant et al. (2004). The particles can be tracked over a period covering several dissipation time scales, corresponding to a significant fraction of the integral time scale of the flow.
Experimental Results
In order to investigate the influence of particle size on its Lagrangian dynamics, we first consider how the Lagrangian velocity autocorrelation function is affected when we change the bubble diameter. Note that we only show a relatively short range of time lags, for which we have enough Lagrangian trajectories to ensure good statistical convergence.
From the curvature at r=O we can estimate an equivalent Lagrangian Taylor time scale r;_(D) associated to the Lagrangian dynamics of a particle of diameter D. Figure 2a shows a clear dependence of r;. on particle size. We note that as the particle size decreases, r; appears to approach an asymptotic value (which we can estimate here around 25 ms) which corresponds to the intrinsic Lagrangian microscale of the turbulent flow, as smaller particles approach fluid tracers. The increase of the microscale of the Lagrangian dynamics of the particles as their size increases suggests a longer response time of larger particles to the turbulence forcing. This is consistent with the intuitive phenomenology, that large parti cles do not feel velocity gradients at sea les smaller than their size , and therefore, they must filter in some way the turbulent energy cascade at some small scale related to their size. To test fu rther this scenario, we analyse the Lagrangian velocity correlation function in the frame of two times stochastic model given by
Saw ford et a!. (I 991).
In this description, the autocorrelation function is given by a double exponential law:

ρ(τ) = (T_L e^(−τ/T_L) − τ_0 e^(−τ/τ_0)) / (T_L − τ_0)   (1)

where T_L is the Lagrangian integral time scale and τ_0 is a small time scale characterizing the cut-off of the particles' Lagrangian energy spectrum. For fluid particle tracers, for instance, τ_0 is directly related to the viscous dissipation time τ_η. For particles with finite diameter D in the inertial range, in the scenario described above where the fluid turbulent energy is low-pass filtered by the particle at a scale corresponding to its diameter D, the corresponding cut-off time scale can be estimated in the framework of K41 phenomenology as τ_D ∼ ε^(−1/3) D^(2/3), where ε is the energy dissipation rate.
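The sketch below illustrates how equation (1) can be fitted to a measured autocorrelation to extract T_L and τ_0, and how the K41 scaling of the cut-off time can then be checked; the lag times, autocorrelation values and diameter used here are synthetic stand-ins for the experimental data.

```python
# Fit the two-time model of Eq. (1) to a velocity autocorrelation and
# check the K41 scaling tau_0 ~ D^(2/3).
import numpy as np
from scipy.optimize import curve_fit

def rho_model(tau, T_L, tau_0):
    """Double-exponential autocorrelation of the two-time model, Eq. (1)."""
    return (T_L * np.exp(-tau / T_L) - tau_0 * np.exp(-tau / tau_0)) / (T_L - tau_0)

tau = np.linspace(0, 0.3, 200)                      # lag times (s)
rho_measured = rho_model(tau, 0.08, 0.01) + 0.005 * np.random.randn(tau.size)

(T_L_fit, tau0_fit), _ = curve_fit(rho_model, tau, rho_measured, p0=(0.1, 0.005))
D = 3e-3                                            # bubble diameter (m)
print(f"T_L = {T_L_fit:.3f} s, tau_0 = {tau0_fit:.4f} s, "
      f"tau_0 / D^(2/3) = {tau0_fit / D**(2/3):.3f}")
```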
Conclusion
In the work presented above, the finite size effects of particles in a homogeneous isotropic turbulent flow have been studied. We have found that the Lagrangian response time of the material particles increases as their diameter increases, which is due to the fact that the particles are insensitive to velocity gradients at scales smaller than their diameter. As a result of this phenomenon we have observed filtering in the Lagrangian energy spectrum. The Lagrangian microscale time τ_L has been determined by fitting parabolas to the Lagrangian velocity autocorrelation; it decreases as the diameter of the particle decreases and tends toward an asymptotic value. A two-time stochastic model is then used to determine the small time scale τ_0. The relation derived from K41 phenomenology, τ_0 ∼ D^(2/3), is in good agreement with the data when τ_0/D^(2/3) is plotted as a function of the particle diameter. Other diagnostics not discussed here, based for instance on measurements of the acceleration variance of the particles as a function of their diameter, also confirm this idea. Further investigation will explore the role of particle density.
| 1,794.8 | 2007-06-25T00:00:00.000 | ["Physics", "Environmental Science"] |
Investors’ reaction under uncertainty
ABSTRACT This study investigates investors’ reaction to good/bad earnings news when faced with market- and industry-wide uncertainties. Our results provide little support for the discount rate explanation that investors’ reaction to good news is dampened during high market volatility. However, the results strongly support the learning hypothesis that earnings news provides value-relevant information for investors during periods of high-market volatility, but that investors cannot learn as much from earnings news under industry-wide uncertainty. These findings also support the conservation hypothesis that investors react more strongly to bad earnings news when faced with market-wide uncertainty.
I. Introduction
How do investors react to news when confronted with unfamiliar environments? Earnings news, the principal source of pivotal information for investors, contains market- and industry-wide information relevant to future cash flow estimations. Because uncertainty makes forecasting difficult, it affects how investors react to earnings news. This study tests three of the most commonly cited hypotheses in the economic literature on investors' reaction to earnings news under uncertainty: the learning, discount rate, and conservation hypotheses. Specifically, we investigate investors' reaction to earnings news under two types of uncertainty: market- and industry-wide uncertainty.
Market-wide uncertainty is systematic and arises from volatile market conditions and the state of the economy. Volatile markets are associated with volatility in firms' future cash flows (Bloom 2009). Industry-wide uncertainty arises from a firm's business engagements. For instance, tech firms have extensive investments in intangible assets that are characteristically ambiguous and subjective, making their financial information less precise (Kwon and Yin 2015) and forecasting future cash flows difficult. Moreover, uncertainty in a firm's operational environment exacerbates investors' perceptions of its future value (Cui and Zhang 2020). Choi's (2018) learning hypothesis states that investors assign more weight to precise signals than noisy signals. When market conditions are volatile, investors face difficulty in predicting future cash flows. During these periods, investors can obtain valuable information about firms' prospects from earnings announcements (Loh and Stulz 2018). Consequently, investors react more to earnings news during high market-wide uncertainty. Conversely, when investors receive imprecise information signals, as in the case of earnings announced under high industry-wide uncertainty, investors put less weight on the signal, and their reaction to earnings news is weak. The implicit prediction of this model is that investors react symmetrically to both good and bad news.
According to the discount rate hypothesis, investors update their beliefs about the future state of the outcome based on the information signals they receive. Information signals that contradict their prior beliefs about the future state of the outcome increase uncertainty and decrease future cash flows used in valuing firms (Gupta, Marfatia, and Olson 2020). Therefore, investors' reaction is stronger during high-uncertainty periods (Huang, Lu, and Chen 2021). In addition, because investors are risk-averse, they require a higher risk premium for higher uncertainty since uncertainty engenders a return premium (Ang and Boyer 2010). Thus, information signals that increase uncertainty invoke a higher discount rate, whereas signals that are consistent with investors' prior beliefs about the future outcome do not. Accordingly, the discount rate effect offsets the positive effect of good news in periods of high uncertainty. Under this hypothesis, if industry-wide uncertainty is idiosyncratic, it should not affect investors' reaction to earnings news received under high industry-wide uncertainty.
The conservation hypothesis also views investors as uncertainty-averse, and uncertainty-averse investors take a conservative approach when making decisions by adopting a worst-case scenario (Ellsberg 1961). Neuroeconomics explains this behaviour as the activation of survival instincts when faced with decision making under uncertainty (Smith et al. 2002). The hypothesis predicts that when investors face uncertainty, they act cautiously, because high uncertainty amplifies the effects of bad news. Furthermore, investors react similarly to earnings news released under both high market- and high industry-wide uncertainty.
Utilizing the VIX index to capture market-wide uncertainty (Choi 2019) and industry classification to capture industry-wide uncertainty arising from the information imprecision associated with the nature of business, we find that investors' reaction is consistent with the learning hypothesis: investors' reaction to earnings news is stronger with market-wide uncertainty and weaker with industry-wide uncertainty. Supporting the conservation theory, investors react more to bad news when the market is volatile. However, there is little evidence to support the discount rate hypothesis. Further analyses show that the effects of industry-wide uncertainty on investors' reaction predominate over those of market-wide uncertainty. Thus, industry-wide uncertainty from imprecise information exacerbates the difficulty investors face in learning from earnings news during periods of high market-wide uncertainty. These results support the learning hypothesis.
This study addresses how different types of uncertainty affect investors' decisions. Prior studies have focused on firm-level uncertainty arising from firm-specific practices, such as earnings management or governance (Kyaw, Olugbode, and Petracci 2020). By estimating fixed-effects (FE) panel models across different industry groups, we capture the effects of the uncertainty associated with the nature of the business. Finally, this study investigates the interactive effects of these two types of uncertainty on investors' reaction.
Section 2 discusses the study's data and variables. Section 3 explains the results, and Section 4 concludes the study.
II. Data and variables
We collect annual earnings announcement dates, reported and forecasted earnings per share (eps), accounting, and financial data from 2002 to 2016 from Thomson Reuters Eikon. We exclude cases where a firm has two annual earnings announcements in the same calendar year or a negative market-to-book ratio. Further, we include only those earnings announcements with a minimum of 60 available returns before the announcement dates to estimate the market model. Our final sample consists of 12,466 observations from 1,620 firms.
The three-day cumulative abnormal return (car) centred on the annual earnings announcement day t is estimated from the market model using the returns from 253 days to 2 days before the announcement date.
We measure good/bad earnings news by using earnings surprises. We calculate earnings surprise (ue) as the difference between the reported eps and analysts' mean eps forecast on the day before the announcement day scaled by total assets at the beginning of the year. The surprise can be either good or bad news: goodnews (badnews) takes the value of ue if ue is positive (negative), and zero otherwise.
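To illustrate the construction of these variables, the sketch below computes a market-model CAR over a three-day window and the scaled earnings surprise split into goodnews/badnews; the returns, window indices and eps figures are synthetic stand-ins for the Eikon data used in the paper.

```python
# Sketch: market-model CAR over [-1, +1] and the good/bad earnings surprise.
import numpy as np

rng = np.random.default_rng(1)
mkt = rng.normal(0, 0.01, 260)                  # market returns, day -253..+6
ret = 0.9 * mkt + rng.normal(0, 0.015, 260)     # one firm's returns

est, event = slice(0, 251), slice(252, 255)     # estimation vs. event window
beta, alpha = np.polyfit(mkt[est], ret[est], 1)  # market-model parameters
car = np.sum(ret[event] - (alpha + beta * mkt[event]))

eps_actual, eps_forecast, total_assets = 1.12, 1.05, 50.0
ue = (eps_actual - eps_forecast) / total_assets
goodnews, badnews = max(ue, 0.0), min(ue, 0.0)
print(f"CAR = {car:.4f}, ue = {ue:.4f}, good = {goodnews:.4f}, bad = {badnews:.4f}")
```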
The control variables are: earnings prospect, defined as the difference between analysts' mean eps forecast on the day following the earnings announcement and the reported eps on the announcement scaled by the reported eps; the natural logarithm of market capitalization; the market-to-book ratio; the return on assets; the ratio of total debt to total assets; the market beta; the firm's return over the 250 trading days leading up to the two days before the earnings announcement; the natural logarithm of the number of shareholders whose shareholding is greater than 5%; and the natural logarithm of the number of analysts following the stock. Figure 1 shows the evolution of the VIX during the sample period. We capture the difficulty investors experience in assessing firms' prospects due to market-wide uncertainty through vixh, which takes the value of 1 when the standard deviation of daily VIX for the year is higher than the median standard deviation of daily VIX in the most recent five years. In Figure 1, the grey periods show the high market-uncertainty periods, as indicated by vixh.
We capture the uncertainty arising from the nature of business through tech industries (tech) classified based on SIC codes, as in Chan, Lakonishok, and Sougiannis (2001). 1 Firstly, tech firms operate in a business environment that changes rapidly, which makes estimating their real value more complicated (Kohers and Kohers 2004) and consequently more ambiguous (Gomes, Gorton, and Madureira 2007). Secondly, tech firms have significant investments in research and development (R&D) and intangible assets, which are subjective and engender information asymmetry between firms and investors (Chan et al. 2006; Kwon and Yin 2015). Table 1 shows that tech firms are predominantly from the electrical equipment, measuring instruments, computer programming, and software industries. The number of observations more than doubled over the 15 years. Table 2 shows that car averages 0.3-0.4%. Good (bad) earnings surprises from tech firms are generally larger than those from non-tech firms. Tech firms exhibit higher mtbv and have a larger number of analysts following them. The FE panel regression model estimated is

y_it = x_it' β + v_i + u_it   (1)

where y_it represents car for firm i at time t, x_it is a vector of covariates, v_i denotes an unobservable time-constant firm-level fixed effect, u_it is an idiosyncratic error term, and β is a vector of coefficients to be estimated. Table 3 reports the estimation results. The interaction term vixhXgoodnews from Model (1) indicates that investors' reaction to good news under high market-wide uncertainty is not different from the reaction at any other time. However, vixhXbadnews indicates that investors react strongly to bad news when the market is experiencing high uncertainty. These results somewhat support the discount rate and conservation theories that investors' reaction under market-wide uncertainty is dampened for good news and amplified for bad news. Re-estimations of Equation (1) across high versus low market-wide uncertainty periods in Models (2) and (3), respectively, yield similar results. The coefficients for goodnews and badnews in Model (2) are higher during high market-wide uncertainty periods (1.8678 and 1.4083, respectively) than during low market-wide uncertainty periods (0.7813 and 0.5084, respectively). The results partially support the conservation theory but are in line with the learning hypothesis that investors learn from earnings news during high market-wide uncertainty periods. The negative and statistically significant coefficients of techXbadnews in the models show that investors react relatively less to bad news from tech firms. These findings support the learning hypothesis that investors' reaction to earnings news weakens with industry-wide uncertainty. The presence of different reactions to earnings news under industry-wide uncertainty contradicts the discount rate hypothesis, which postulates industry uncertainty to be idiosyncratic. The asymmetric reactions to good and bad news under industry-wide uncertainty partially support the conservation theory.
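One way to estimate the fixed-effects model in equation (1) is with the linearmodels package, as sketched below on synthetic data; the variable names mirror the paper's, but the data-generating coefficients are arbitrary and the package choice is just one option.

```python
# Sketch: fixed-effects panel estimation of Eq. (1) with linearmodels.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(7)
firms, years = range(200), range(2002, 2017)
idx = pd.MultiIndex.from_product([firms, years], names=["firm", "year"])
df = pd.DataFrame(index=idx)
df["goodnews"] = np.maximum(rng.normal(0, 0.01, len(df)), 0)
df["badnews"] = np.minimum(rng.normal(0, 0.01, len(df)), 0)
df["vixh"] = rng.binomial(1, 0.5, len(df))
df["car"] = (1.5 * df.goodnews + 1.0 * df.badnews
             + 0.8 * df.vixh * df.badnews + rng.normal(0, 0.05, len(df)))

exog = df[["goodnews", "badnews", "vixh"]].assign(
    vixhXgoodnews=df.vixh * df.goodnews, vixhXbadnews=df.vixh * df.badnews)
res = PanelOLS(df["car"], exog, entity_effects=True).fit(
    cov_type="clustered", cluster_entity=True)
print(res.params)
```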
III. Empirical results
Model (4) reports the results from the re-estimation of Equation (1) with market-wide and industry-wide uncertainty indicator variables, which are then interacted with each other to investigate the effect of both types of uncertainty on the market reaction. The interaction terms show similar results to those observed before: investors do not react differently to good news during high market-wide uncertainty (vixhXgoodnews), but react significantly more strongly to bad news during high market-wide uncertainty (vixhXbadnews). The negative coefficient of vixhXbadnewsXtech implies that industry-wide uncertainty weakens the market-wide uncertainty effect. The weaker reactions to earnings news from firms with high industry-wide uncertainty during periods of high market-wide uncertainty (−0.7188 and −1.1007) suggest the importance of industry-wide uncertainty over market-wide uncertainty for investors facing both types of uncertainty. These results suggest that investors do not learn much from earnings news released in times of market-wide uncertainty if industry-wide uncertainty is high. Therefore, these findings support the learning hypothesis over the conservation hypothesis.
IV. Conclusion
Investors react more under high market-wide uncertainty and less under high industry-wide uncertainty, and their reactions are asymmetric. The results support the learning explanation best, with some support for the conservation explanation but not the discount rate explanation. | 2,573.2 | 2022-07-05T00:00:00.000 | [
"Business",
"Economics"
] |
Upper-Lower Bounds Candidate Sets Searching Algorithm for Bayesian Network Structure Learning
Bayesian network is an important theoretical model in the artificial intelligence field and a powerful tool for processing uncertainty issues. Considering the slow convergence speed of current Bayesian network structure learning algorithms, a fast hybrid learning method is proposed in this paper. We start with a further analysis of the information provided by low-order conditional independence testing; two methods are then given for constructing graph models of the network, which are theoretically proved to be upper and lower bounds of the structure space of the target network, so that candidate sets are obtained as a result. After that, a search and scoring algorithm is operated on the candidate sets to find the final structure of the network. Simulation results show that the algorithm proposed in this paper is more efficient than similar algorithms with the same learning precision.
Introduction
Bayesian network (BN), as a graphical model for handling uncertainty issues, has been discussed by many researchers over the years. It has been applied successfully in many areas such as fault detection, medical diagnosis, and traffic management [1][2][3]. For years, researchers focused on finding a data structure to compress the storage of the joint probability density and on developing inference algorithms based on that data structure, and BN emerged from this effort. After BN had become a successful tool in this area, researchers began to follow with interest structure learning algorithms for BN based on sample data. Essentially, the problem of BN structure learning is a combinatorial optimization problem, and it has been proved theoretically that learning structure from data is NP-hard [4]. Nonetheless, some heuristic methods have been proposed and have performed well in several areas [5,6].
Currently, there are two approaches for BN structure learning. One is the CI-test method [7,8] and the other is the scored-searching method [9,10]. The first uses conditional independence tests (CI tests) to determine the conditional independence relationships among all the variables and builds networks based on these relationships. The scored-searching methods attempt to find the network by maximizing a scoring function which indicates how well the network fits the data.
Both methods above have their own advantages and disadvantages. CI-test algorithms are simple and easy to operate. Because low-order CI tests are computationally effective and have high precision, they are very helpful for building a hyper-graph of the target (this will be discussed in the following sections). The main drawback of these methods concerns high-order CI tests, which need large sample sizes and lose accuracy as the order gets higher [11,12]. The scored-searching methods may have higher precision in structure learning than the CI-test methods, but they are relatively slow, especially when the scale of the network becomes large, as the structure space increases super-exponentially with the number of nodes.
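To make the CI-test idea concrete, the following is a minimal sketch of a low-order CI test based on empirical conditional mutual information; the G² statistic, the chi-square threshold, and the data layout (integer-coded columns) are illustrative assumptions rather than the exact test used in the works cited above.

```python
import numpy as np
from scipy.stats import chi2

def ci_test(data, x, y, z=(), alpha=0.05):
    """Return True if columns x and y are judged independent given z."""
    cols = [x, y, *z]
    levels = [np.unique(data[:, c]) for c in cols]
    counts = np.zeros([len(l) for l in levels])
    for row in data:
        idx = tuple(np.searchsorted(l, row[c]) for l, c in zip(levels, cols))
        counts[idx] += 1
    n = counts.sum()
    pz = counts.sum(axis=(0, 1), keepdims=True)    # counts of z
    pxz = counts.sum(axis=1, keepdims=True)        # counts of (x, z)
    pyz = counts.sum(axis=0, keepdims=True)        # counts of (y, z)
    with np.errstate(divide="ignore", invalid="ignore"):
        # Empirical conditional mutual information MI(X, Y | Z).
        mi = np.nansum(counts / n * np.log(counts * pz / (pxz * pyz)))
    g2 = 2.0 * n * mi                              # G^2 = 2 N * MI
    dof = (counts.shape[0] - 1) * (counts.shape[1] - 1) \
          * max(1, int(np.prod(counts.shape[2:])))
    return g2 <= chi2.ppf(1 - alpha, dof)
```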
It is obvious that if it is possible to combine the learning efficiency of CI test and prediction accuracy of scoredsearching algorithm, we will get a better algorithm to deal with BN learning issues.In view of the above reasons, some hybrid methods have been proposed [13][14][15][16][17].These methods may use CI-test algorithms to learn a network structure pattern at first and then use some scored-searching algorithms to find the final BN structures based on the previous pattern.These hybrid methods may perform better in some applications, but there are still some problems unsolved, as fusion in algorithm level does not always mean promotion in performance.Take MMHC (max-min hill climbing) as an example.It includes two steps: the first one called MMPC (max-min parents and children), which constructs parents and children sets of each node via CI-test method, is to provide a partial skeleton frame.While in the second step a hill climbing algorithm is operated to refine every edge in the network.To ensure the precision of partial skeleton frame given by MMPC, high-order CI test must be involved, which unfortunately is unstable [11,12].So in the searching phase, it is not based on the prior structure given by MMPC strictly but operates in a relatively open space.This manner seems somewhat wasteful for computational resources.
The upper-lower bounds candidate sets searching algorithm (UBCS) proposed in this paper provides a more instructive set of candidate networks by constructing the upper and lower bounds of the structure space through low-order CI tests. In this framework, we obtain the final network structure by using a greedy search algorithm. Simulation shows that it can guarantee precision and reduce the time complexity at the same time.
Because nodes in a BN are identical to random variables, they will not be distinguished in this paper; both will be called nodes. In addition, let V_i ⇀ V_j denote the directed edge V_i → V_j, and let V_i — V_j denote an undirected edge.

Definition 2 (V-structure). Let BN = (G, Θ), where G = (V, E). An ordered triple of nodes (V_i, V_k, V_j) forms a V-structure if G contains the directed edges V_i → V_k and V_j → V_k while V_i and V_j are not adjacent.

Definition 3 (CI test). For variable sets X, Y, Z, the conditional mutual information is

MI(X, Y | Z) = Σ_{x,y,z} p(x, y, z) log [ p(x, y | z) / ( p(x | z) p(y | z) ) ],

where MI(X, Y | Z) = 0 means the random variable sets X, Y are conditionally independent given Z, which can also be expressed as Ind(X, Y | Z). Therefore, MI(X, Y | Z) is usually used as the CI test among random variables, and the cardinal number of Z is called the order of the CI test. In particular, it is a zero-order CI test if Z = Φ.

Definition 4 (Markov equivalence). Two DAGs are graph equivalent if and only if (1) both of them have the same skeleton frame and (2) they have the same V-structures.
The characteristics of Markov equivalence were given by Frydenberg [19], and Verma and Pearl expanded these to DAGs [20]. Based on Markov equivalence, all the DAGs composed of the same node set can be divided into different equivalence classes, called Markov equivalence classes. Each equivalence class indicates a unique statistical model, and it can be represented by a PDAG (partially directed acyclic graph), which is called the complete PDAG.
Method
Given a data set D, BN structure learning methods are devoted to finding the best network structure of BN = (G, Θ). Reference [21] proved that the number of possible structures for a BN containing n nodes is given by the recurrence

f(n) = Σ_{i=1}^{n} (−1)^{i+1} (n choose i) 2^{i(n−i)} f(n−i),  with f(0) = 1.

From the formula above it can be seen that the potential network structure space grows super-exponentially as the number of nodes increases, so searching within candidate sets of network structures is an effective way to reduce the dimension of the problem. Based on this, we provide an algorithm named the upper-lower bounds candidate sets searching algorithm (UBCS for short), which obtains the final network model by constructing upper and lower bounds of the target network pattern to find candidate sets and then applying a search and scoring method. In the following section we give the first part of UBCS, called the upper-bound graph learning algorithm (UGLA), prove that its output G+ is the upper bound of the moral graph of the target network, and then introduce the principle of nonincreasing 0-order mutual information to reach the second part of UBCS, called the lower-bound graph learning algorithm (LGLA). After that, the searching algorithm is discussed.
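As a quick check on this growth, the recurrence can be evaluated directly; the sketch below reproduces the well-known counts of DAG structures on a few nodes.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def num_dags(n: int) -> int:
    # Recurrence for the number of DAG structures on n labelled nodes.
    if n == 0:
        return 1
    return sum((-1) ** (i + 1) * comb(n, i) * 2 ** (i * (n - i)) * num_dags(n - i)
               for i in range(1, n + 1))

# num_dags(4) == 543 and num_dags(5) == 29281, illustrating the
# super-exponential growth of the structure space.
```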
UGLA and LGLA

We will first give the algorithm description in Algorithm 1.
The UGLA procedure ensures that G+ is a triangulated graph, and for triangulated graphs the following theorem holds. Theorem 5. Any undirected graph G is a complete PDAG if and only if G is a triangulated graph.
Algorithm 1 (UGLA)
(1) Input: data set D; variable set V = {V_1, V_2, . . ., V_n};
(2) Initialization: undirected graph G+ = (V, E), where E = Φ;
(3) Order-0 CI test: for each pair of variables V_i, V_j, add the undirected edge V_i — V_j to E if the order-0 CI test rejects Ind(V_i, V_j);
(4) For each pair of variables V_i, V_j left unconnected by step (3): if V_i and V_j share a common neighbour in G+,
(5) add the candidate moral edge V_i — V_j to E.

The theorem proved in [22] shows that G+ is a complete PDAG; that is, in the best situation, the G+ obtained by UGLA is the PDAG of the target BN. Certainly, this condition is too strong, and we give a theorem below which has more generality. Theorem 6. Given a sample dataset D, let the optimal structure of BN = (G, Θ) to be learned be G, and let the moral graph of this BN be G_m; then the undirected graph G+ obtained by UGLA is the upper bound of the partially ordered set (G_m, ⊆).
Proof. It only needs to be proved that G_m ⊆ G+ holds, where G_m = (V, E_m) and G+ = (V, E+). Theorem 5 tells us that if the complete PDAG of G is a triangulated graph, G_m = G+ is tenable, so the remaining task is to prove E_m ⊂ E+. As all the graphs have the same node set, it only needs to be shown that e ∈ E+ in any case where the undirected edge e ∈ E_m. The undirected edges in E_m can clearly be divided into two classes: one is composed of the undirected edges obtained from the directed edges in G, denoted E_1; the other is constructed from the moral edges added between nodes that have a common child, denoted Ẽ. It is obvious that, for ∀e ∈ E_1, the 0-order CI test ensures that e ∈ E+ must hold; for ∀e ∈ Ẽ, the fifth step of UGLA is the assurance of e ∈ E+. The proof is completed. Theorem 7. Any V-structure in G = (V, E) exists in a subgraph decomposed from G+ by the method MPD (maximal prime subgraph decomposition) [23].
Theorem 7 was proved in reference [17]. This theorem guarantees that the subgraphs G_sub^1, G_sub^2, . . ., G_sub^k obtained by UGLA cover all the V-structures in the target graph.
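A minimal sketch of the order-0 stage of UGLA, reusing the ci_test helper sketched earlier, might look as follows; the common-neighbour step for candidate moral edges follows the reconstruction of Algorithm 1 above, and the triangulation needed for the full G+ is omitted.

```python
import itertools
import networkx as nx

def ugla_order0(data, variables):
    # `variables` are column indices into `data`.
    g_plus = nx.Graph()
    g_plus.add_nodes_from(variables)
    for x, y in itertools.combinations(variables, 2):
        if not ci_test(data, x, y):            # order-0 CI test from above
            g_plus.add_edge(x, y)
    # Single pass over the order-0 skeleton: add candidate moral edges
    # between non-adjacent pairs that share a common neighbour.
    skeleton = {v: set(g_plus[v]) for v in variables}
    for x, y in itertools.combinations(variables, 2):
        if not g_plus.has_edge(x, y) and skeleton[x] & skeleton[y]:
            g_plus.add_edge(x, y)
    return g_plus
```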
The section above discussed the upper bound of (G_m, ⊆), from which we obtain the candidate sets for searching the structure. In the following part, the lower bound of the BN structure space is discussed for choosing a relatively precise initial value.
We will start with the following lemma.
Lemma 8. For any two random variables V_i, V_j ∈ V and any subset Z ⊆ V, the mutual information obeys an inequality under conditioning, and the equality holds if and only if the corresponding conditional independence condition is met.
The proof is omitted.
Proof. Since the directed path V_i ⇀ V_k ⇀ V_j lies in G, without loss of generality let MI(V_k, V_j) = min{MI(V_i, V_k), MI(V_k, V_j)}; it is then only necessary to prove that MI(V_k, V_j) > MI(V_i, V_j). Expanding the definition of mutual information and noting from the relationship among V_i, V_k, V_j that the path satisfies Ind(V_i, V_j | V_k), the expression can be rewritten so that Lemma 8 yields MI(V_i, V_j) ≤ MI(V_k, V_j), with equality holding only when the conditioning set reduces to Φ.
Proof is completed.
We name Theorem 9 the principle of nonincreasing 0-order mutual information (principle NZMI). The condition of the theorem indicates that it does not apply when a V-structure is present. For the BN structure shown in Figure 1, Theorem 9 alone cannot determine which of the two 0-order mutual information values is larger. But if we eliminate all the V-structures first and then consider the connectivity between the node and its candidate neighbours, the pair with the largest 0-order mutual information must be directly connected, and only a 1-order CI test is needed to rule out the remaining spurious connection. As a matter of fact, principle NZMI provides a new approach for ascertaining whether there is an undirected edge between two nodes without resorting to V-structure methods.
Algorithm 2 gives the G− learning algorithm (LGLA) based on the discussion above.
The VSTA (V-structure test algorithm) invoked by LGLA is listed as Algorithm 3.
For the V-structure test algorithm (VSTA), see Algorithm 3. VSTA is a testing method which only provides a "best effort" service. It involves only 0-order and 1-order CI tests, whose high accuracy guarantees that the V-structures detected do exist. For the situation where there is more than one edge between the two parent nodes of a V-structure, the detection is not performed. This approach avoids bringing in heavy computation and additional interference edges.
There is a theorem (Theorem 10) that holds for the output of LGLA: a V-structure must exist in G if it is contained in G−, though the converse does not necessarily hold. Moreover, an acyclic graph remains acyclic no matter which directed edges are deleted, so G− is a PDAG. The proof is completed.
Theorem 11. G− is the lower bound of G if the output from VSTA is entirely accurate.
The proof is omitted. The condition of Theorem 11 is relatively strong. As a matter of fact, G− can be considered the lower bound of G in many cases. Take the Asia network as an example: Figure 2 shows that all the edges in G− exist in the PDAG of the original network.
Searching Method.
The hill-climbing algorithm, based on the search and scoring method, is one of the greedy searching algorithms for BN structure learning. It contains three searching operators: edge addition, edge deletion, and edge reversal. The hill-climbing method is also involved in the UBCS algorithm, but the searching process is restricted by the upper and lower bounds given by UGLA and LGLA, which means abandoning any new structure obtained from the searching operators if it lies beyond the bounds given by UGLA or LGLA. To avoid being trapped in a local optimum too fast, a suboptimal competitive mechanism is brought in, retaining the top structures with the highest scores in each round for the next iteration. The number retained is decided by the scale of the network in principle: the greater the scale of the network, the larger it should be. It should be noticed, however, that oversized candidate sets will lead to an increase in the time complexity of the algorithm. For a BN whose scale is as big as Alarm, the recommended empirical value is 10. In order to make a comparison with the BNEA algorithm mentioned in [17], the BDeu score function is used as the objective function of the search.
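A hedged sketch of this bound-restricted hill climbing is given below; score(), neighbours(), and the graph representation are assumptions, and the retained-structures mechanism follows the description above.

```python
# Minimal sketch of the bound-restricted hill climbing used in UBCS.
# neighbours() applies the add/delete/reverse operators; within_bounds()
# keeps only structures whose edge sets contain those of G- and are
# contained in those of G+.
def within_bounds(dag, g_minus, g_plus):
    edges = {frozenset(e) for e in dag.edges}
    lower = {frozenset(e) for e in g_minus.edges}
    upper = {frozenset(e) for e in g_plus.edges}
    return lower <= edges <= upper

def bounded_hill_climb(start, score, neighbours, g_minus, g_plus, k=10, iters=100):
    front = [start]                      # retain top-k structures per round
    best = max(front, key=score)
    for _ in range(iters):
        cand = [d for cur in front for d in neighbours(cur)
                if within_bounds(d, g_minus, g_plus)]
        if not cand:
            break
        front = sorted(cand, key=score, reverse=True)[:k]
        if score(front[0]) <= score(best):
            break                        # no improvement: local optimum
        best = front[0]
    return best
```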
Experiment
We test the performance of UBCS together with BNEA and MMHC on the Alarm network. The comparison of scores is shown in Table 1. For ease of observation, we normalize the results. Table 1 shows the results averaged over 10 runs, where SS represents the sample size. As can be seen from Table 1, the performance of UBCS is the best among the three methods when SS is small, and the scores from all three methods become very close as SS increases. Although the VSTA involved in LGLA cannot fully guarantee the facticity of its detections, it has little impact on the learning performance according to the simulation results. This phenomenon is caused by two factors: on one side, the upper bound given by UGLA is very stable; on the other side, the effect is reduced by the search and scoring process. It should be noticed that BNEA shows "over-learning" when SS becomes larger (SS > 5000). Over-learning is typically considered a phenomenon that occurs only at small sample sizes; however, for combinatorial optimization problems of high dimension (such as BN), it is hard to obtain plenty of samples, and the time cost is also unacceptable when current algorithms operate on extremely large datasets. So it is reasonable to find algorithms that strike a balance between precision and generalization on datasets of appropriate size, which is the intention of UBCS with the restriction of the upper-lower bounds.
Figure 3 shows that UBCS has an obvious advantage over the other two algorithms in time complexity. The experiment was run on a typical desktop computer. Compared with MMHC, both BNEA and UBCS perform better in time complexity because they use MPD to reduce the dimension of the searching space. On the other hand, because MMPC, which is part of MMHC, is also used in BNEA, MMHC should have the same time complexity as BNEA in the worst case, while UBCS involves only 0-order and 1-order CI tests and therefore has the better time-complexity performance.
Conclusion
We propose a hybrid method for Bayesian network structure learning (UBCS). In this method, two construction algorithms are given to build the upper and lower bounds of the BN structure, and theoretical proofs are provided as well. UGLA, the first part of UBCS, outputs an upper bound of the moral graph of the target structure, while the following part, LGLA, offers a lower bound of the target structure's PDAG. Principle NZMI is also proved in this paper, which reveals the hidden information in 0-order CI tests that can be used to reduce the searching space. As it involves only low-order CI tests, UBCS has an advantage in time complexity compared with other hybrid learning methods, which is also supported by the simulation results.
Figure 2: Three subpictures, from left to right: the structure of the Asia network, the PDAG of the Asia network, and the G− of the Asia network.
Figure 3: Comparison of time complexity in different algorithms.
Table 1: Comparison of scores based on the data sets for UBCS, BNEA, and MMHC. | 3,879.4 | 2014-11-25T00:00:00.000 | [
"Computer Science"
] |
Multiple-binding-site mechanism explains concentration-dependent unbinding rates of DNA-binding proteins
Recent work has demonstrated concentration-dependent unbinding rates of proteins from DNA, using fluorescence visualization of the bacterial nucleoid protein Fis [Graham et al. (2011) (Concentration-dependent exchange accelerates turnover of proteins bound to double-stranded DNA. Nucleic Acids Res., 39:2249)]. The physical origin of this concentration-dependence is unexplained. We use a combination of coarse-grained simulation and theory to demonstrate that this behavior can be explained by taking into account the dimeric nature of the protein, which permits partial dissociation and exchange with other proteins in solution. Concentration-dependent unbinding is generated by this simple model, quantitatively explaining experimental data. This effect is likely to play a major role in determining binding lifetimes of proteins in vivo where there are very high concentrations of solvated molecules.
We use a Brownian Dynamics simulation that incorporates a Monte-Carlo type update step to simulate binding and unbinding events. The former iteratively solves the discretized Langevin equation (with finite time increments δt) to update a number of particles i with radius a and positions r_i:

r_i(t + δt) = r_i(t) − Σ_j µ_ij ∇_{r_j} U δt + ξ_i δt,

where µ_ij = δ_ij/(6πηa) is the Stokes mobility without the effect of hydrodynamic interactions (though in principle they could be included), ξ_i is a random velocity satisfying the fluctuation-dissipation theorem (⟨ξ_i⟩ = 0 and ⟨ξ_i ξ_j⟩ = 2 k_B T µ_ij δ_ij), δ_ij is the Kronecker delta, and U_ij is the pairwise interaction between particles i and j:

U_ij = ω^LJ_{αβij} U_LJ(r_ij; ε̃) + ω^C_{αβij} (κ̃/2)(r_ij − 2a)²,

where the first contribution is a Lennard-Jones potential of strength ε̃ k_B T between beads i and j a distance r_ij apart, and the second contribution is a spring potential with spring constant κ̃ = 200.0 that provides connectivity between beads i and j. This value of κ̃ assures connectivity between i and j such that deviations from a distance of r_ij = 2a are small. The matrices ω^LJ_{αβij} and ω^C_{αβij} provide the information regarding interaction and connectivity, respectively, between beads i and j of types α and β. In our system, we have two groups of beads α/β: DNA beads D and binder beads B. We focus largely on dimeric sets of binders due to the dimeric geometry of most non-specific DNA binding proteins (1,2,3). We use the constraint ω^C_{BBij} = ω^C_{BBji} = 1 (ω^C_{BBij} = 0 otherwise) when i is even and j = i + 1 for the dimeric binders. In monomeric situations, ω^C_{BBij} = 0 for all binder pairs. We also use the conditions that ω^LJ_{BDij} = 1 (binders interact with DNA through Lennard-Jones forces), ω^LJ_{BBij} = 0 (binders do not interact with each other through Lennard-Jones forces), and ω^C_{BDij} is a time-dependent manifestation of the connectivity that arises due to binding interactions. This behavior is based on the Bell model of biological interactions (4), and the following manifestation in Brownian Dynamics is adapted from previous work on biological systems (5,6).

The matrix ω^C_{BDij}, which we will denote as ω_{R,ij} for brevity, represents the accounting of all the binding interactions between proteins and DNA monomers. In a non-bonding scenario, ω_{R,ij} = 0 and the harmonic potential in Equation S2 is not applied between the two species. The possibility of binding, and consequently unbinding, occurs through a Monte-Carlo type update step that is applied every time interval τ_0. We note that this time interval is chosen such that computational expediency and accurate statistics are obtained; otherwise the selection is arbitrary. The jump frequency is therefore defined as ν_0 = 1/τ_0 in the simulation, though in principle the combination of this jump frequency with the energy barrier allows for a redefinition of the absolute energy scale used to describe results. Every time interval τ_0, the matrix is recalculated based on the update step: a pair with ω_{R,ij} = 0 and r_ij < r_RXN binds (ω_{R,ij} → 1) if Ξ < e^{−∆Ẽ_B}, and a bound pair unbinds (ω_{R,ij} → 0) if Ξ < e^{−∆Ẽ_UB}, where Ξ is a random number between 0 and 1, chosen randomly for each i and j in every update step τ_0.
Energies denoted with tildes are normalized by k_B T (i.e., Ẽ_UB = E_UB/(k_B T)); positions and times can also be normalized by the bead radius a and the single-bead diffusion time 6πηa³/(k_B T) = τ_D, and when referenced in this manner they will also be denoted with a tilde. r_RXN (= 2.1a) is a reaction radius which sets the spatial limit within which binding can occur. For this simulation, the time step of the Langevin simulation is δt = 0.002 τ_D and the Monte Carlo update step is τ_0 = 0.05 τ_D.
What the ω_{R,ij} matrix conceptually does is place a spring connection between spatially adjacent beads if they meet the Boltzmann-factor-based criterion for overcoming the energy barrier to transition into a bound state (and vice versa). This reproduces the appropriate thermodynamics for a binding pair (5), and provides a means to change the binding kinetics of a single interaction by changing the unbinding barrier ∆Ẽ_UB and the binding energy ∆Ẽ_0 independently. In our system, we fix the value of ∆Ẽ_B to reflect the idea that the relevant time scale for binding τ_B is on roughly the same order of magnitude as binder diffusion. With a value of ∆Ẽ_B = 3.0, we reproduce this behavior since τ_B = τ_0 e^{∆Ẽ_B} ≈ 1.0 τ_D. The binding barrier and binding energy are therefore changed simultaneously, and we report these values in terms of the binding energy ∆Ẽ_0.
The overall simulation takes place in a box with periodic boundary conditions 100a×100a×200a with the DNA molecule represented by N beads centered along the long axis of the simulation box (see simulation snapshot in Figure 1). To expedite the simulation and compare it to systems where the DNA is tethered and stretched significantly such that its thermal motions are highly constrained, we do not update the position of the DNA beads in the iteration through the Langevin equation and consider it immobile in our model.
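The following is a minimal, monomeric sketch of the simulation loop described above, in reduced units with a = 1 and k_B T = 1; the force routine and several parameter values (notably ∆Ẽ_UB) are illustrative assumptions, not the production code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_dna, n_bind = 50, 40
dt, tau0, r_rxn = 0.002, 0.05, 2.1
dE_B, dE_UB = 3.0, 8.0                     # barriers in kT; dE_UB illustrative
dna = np.column_stack([np.zeros(n_dna), np.zeros(n_dna),
                       2.0 * np.arange(n_dna)])          # immobile stretched chain
binders = rng.uniform(-20.0, 20.0, size=(n_bind, 3))
bonds = np.zeros((n_bind, n_dna), dtype=bool)            # the omega_R matrix

def force(binders, dna, bonds):
    # Placeholder for -grad U: only the bond springs are included here.
    f = np.zeros_like(binders)
    for i, j in zip(*np.nonzero(bonds)):
        f[i] += -200.0 * (binders[i] - dna[j])           # harmonic pull to site
    return f

steps_per_mc = int(tau0 / dt)                            # 25 Langevin steps per MC step
for step in range(100_000):
    # Discretized Langevin update (mobility = 1, <xi xi> = 2 dt).
    binders += force(binders, dna, bonds) * dt \
               + np.sqrt(2.0 * dt) * rng.standard_normal(binders.shape)
    if step % steps_per_mc == 0:                         # Monte-Carlo binding update
        d = np.linalg.norm(binders[:, None, :] - dna[None, :, :], axis=2)
        xi = rng.random(bonds.shape)
        unbind = bonds & (xi < np.exp(-dE_UB))
        bind = (~bonds) & (d < r_rxn) & (xi < np.exp(-dE_B))
        bonds = (bonds & ~unbind) | bind
```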
CHAIN-LENGTH DEPENDENCE
One common consideration in the study of DNA/protein interactions is the length and topology of the DNA chain itself (7,8), especially since there are hypothesized states where the binding protein is unbound but still affiliated with a nearby DNA chain (8). Such effects may be due to long-distance chain-chain correlations, and recent work has elucidated the geometric possibility of a random walk escaping from the vicinity of a chain of various dimensions (straight chain, random coil, and collapsed globule) without rebinding (7). These investigations suggest that rebinding is not strongly dependent on the length of a straight DNA segment like the ones in our simulation (7); we nevertheless test the importance of such effects by adjusting the length of the DNA chain.

Figure S1. Chain length-normalized exchange kinetics (ln⟨n_B⟩/N versus t) for ∆Ẽ_0 = −5.0, and a number of different lengths of DNA N (N = 10 dotted, N = 25 dashed, N = 50 solid) and concentrations c (c = 50 nM black, c = 500 nM red, c = 5 µM blue). The actual kinetics, upon normalization by N, do not change a great deal. Differences only manifest at low N = 10, which we attribute to edge effects. These effects lead to quantitative differences, but the same qualitative behavior is observed. The normalization of ⟨n_B⟩ by chain length N suggests a local process of concentration-enhanced unbinding.
In the analytical results, Equation 10 does not contain any reference to the length of the chain. The numerical results likewise do not include the chain length as a factor in the matrix k_ij, suggesting that the dynamics of these systems is independent of chain length. This is further backed up by the highly local g(j) functions in Figure 2c. Therefore, we expect that the decay in the fraction of binders that remain bound to the chain during an exchange experiment (⟨n_B⟩/N) should follow the same time evolution ⟨n_B⟩/N = f(t) = f(t, N) in a way that does not depend on N. At large N this holds true, though edge effects appear at smaller values of N. This is illustrated in Figure S1, which plots the fraction ⟨n_B⟩/N as a function of time t for N = 50, 20, 10. N = 50 and N = 20 are essentially identical (within simulation error). This significantly suggests that, at least in the large-N limit, this scaling holds true (⟨n_B⟩ ∼ N). The time evolution is qualitatively similar, but at N = 10 edge effects apparently start to influence results. We anticipate that these edge effects are due to the oscillatory correlation functions shown in Figure 2, which extend a distance of ca. 3 sites along the chain. This suggests that beads fewer than 3 indices away from a chain end have different equilibrium and therefore dynamic behavior than long chains. This would alter the quantitative (but not qualitative) picture in a complicated fashion.

Figure S2. Exchange kinetics (ln⟨n_B⟩ versus t) for ∆Ẽ_0 = −5.0 and N = 2, with a number of different concentrations c. c-dependent unbinding is observed, on the same order of magnitude as in Figure S1 and Figure 3; however, quantitative differences arise due to chain end effects, which dominate at N = 2. This limiting case is often used in experimental investigations (9), and we demonstrate here that c-dependence must be taken into account even in small DNA oligomer investigations.
The kinetics of DNA-protein binding interactions are often observed using DNA oligomers with sequences that specifically attach to the binding proteins of interest (9). This results in a situation that, in the context of our work, is N = 2. The derivation of our analytical and numerical theory had long DNA chains in mind, but in principle nothing about the shortness of the chain prevents the same qualitative mechanism from applying in this special limiting case. Indeed, the states considered in Figures 3 and 5 only involve 2 binding sites. To illustrate that this concentration-dependent unbinding effect is indeed still relevant, albeit quantitatively altered, we ran a few simulations at ∆Ẽ_0 = −5.0 for a number of concentrations c, which are plotted in Figure S2. Clearly, concentration plays a prominent role that must be accounted for in these experiments.
RANDOM BINDING EFFECTS
In experiment, it has been observed that large swaths of binders remain bound for a long time and do not leave even long after the apparent relaxation time scale. These are not homogeneously dispersed; it appears that large numbers of binders are clustered at positions along the chain, an effect which may be due to sequence heterogeneity. In a DNA chain, there are a limited number of permutations of sequence spanning the two dimers in a chain. Nevertheless, it is not clear what the connection between binding sequence and binding strength is in these systems. Therefore, we investigate the behaviors resulting from a non-constant binding strength and consider a system that represents the non-constant-binding extreme: a fully randomized binding strength. Such random binding energies are known to have significant effects on the one-dimensional diffusion of protein-binding DNA (10), and we can explore the possibility of strong effects in this dynamic process as well. We introduce this into the system by considering a binding energy

∆Ẽ_UB = ∆Ẽ_UB,0 + γ̃_S ξ_B,s,

which represents a Gaussian distribution of random corrections to the mean binding strength ∆Ẽ_UB,0. This is characterized by a magnitude of the random energetic contribution γ̃_S and a set of random numbers ξ_B,s that each correspond to a DNA position s and are Gaussian distributed with ⟨ξ_B,s⟩ = 0 and ⟨ξ_B,s ξ_B,s′⟩ = δ_{s,s′}. Figure S3 demonstrates the effect of including a random binding energy at ∆Ẽ_0 = −7.0 and two extreme values of c (50 nM and 5000 nM). Dynamics at both concentrations are essentially unchanged when small random deviations are introduced (γ̃_S < 1.0); however, the time scale and long-time limit of the decay both increase significantly when there is a large amount of variance in the binding energies (γ̃_S = 1.0, 2.0). This suggests that sequence heterogeneity may be a primary reason for the experimental observation that the decays are not to ⟨n_B⟩ = 0 in Figure 6, which is the main difference between our predictions and the experimental data in Graham et al. (11).
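A minimal sketch of this quenched disorder, under the assumption of the Gaussian site corrections defined above and illustrative parameter values:

```python
import numpy as np

# Each DNA site s gets a fixed Gaussian correction of magnitude gamma_S
# to the mean unbinding barrier, as described above.
rng = np.random.default_rng(2)
n_sites, dE_UB0, gamma_S = 50, 8.0, 1.0
xi = rng.standard_normal(n_sites)          # <xi_s> = 0, <xi_s xi_s'> = delta
dE_UB = dE_UB0 + gamma_S * xi              # site-dependent barrier (in kT)
p_unbind = np.exp(-dE_UB)                  # per-MC-step unbinding probability
```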
TRANSFER-MATRIX THEORY FOR DIMER BINDING EQUILIBRIUM
In order to understand the equilibrium properties of these DNA-protein binding simulations (and subsequently the behavior of the experimental systems), we use a transfer-matrix calculation of the partition function. This method is well known in statistical mechanics, and we demonstrate how an abundance of dimer binding statistics can be obtained with these tools. As a simplified representation of such a calculation, we can determine the partition function (and number of bound binders ⟨n_B⟩) for a system of monomeric binders. This will yield Equation 2, which can be determined by a number of alternative routes.
To calculate the grand partition function, we must calculate

Ξ = Σ_{n_B} exp[ −(∆Ẽ_0 − μ̃) · n_B ],

where we use a vector notation: n_B represents the occupation state of the system (each component i is 1 or 0 for bound or unbound at index i, respectively), and ∆Ẽ_0 and μ̃ are the vectors whose ith components are the binding energy upon binding at i and the chemical potential at i, respectively. Rather than by direct summation, this partition function can be calculated by a multiplication of matrices M_{i,j} that represent the conditional probabilities of the state at index i based on the possibilities of what the state was at index i − 1. For this system, such a matrix is

M = [ 1  1
      P  P ],

where P = e^{−∆Ẽ_0 + μ̃} is the contribution to the partition function of having a bound binder at an arbitrary index (we proceed with the homogeneous case ∆Ẽ_0 = 1∆E_0, where all indices have a binding energy ∆E_0) and 1 is the contribution to the partition function of having an unbound position. The matrix provides all the contributions to the partition function at position i given all the contributions at position j. The partition function for a chain of length N is then the iterated product of M sandwiched between the boundary vectors φ_N = φ_0 = (1, P), which account for the first and last positions. This iterative equation is tedious, but as long as N is large the result can be well approximated by the largest eigenvalue λ_0 of M_{ij} raised to the N-th power:

Ξ ≈ λ_0^N.

The largest eigenvalue of the monomeric matrix above is λ_0 = 1 + P, so for monomer binding we have

Ξ = (1 + P)^N.

The thermodynamic relationship −k_B T ln Ξ = G − µ n_P (the value of n_P counts the number of singly plus doubly bound dimers, as opposed to the occupied binding sites n_B; the two values are equivalent for monomers but not for dimers) allows the determination of ⟨n_B⟩ from Ξ:

⟨n_B⟩ = ∂ ln Ξ / ∂μ̃ = N P / (1 + P),

which is the result presented in the manuscript. The same method can be used to determine the much more complicated scenario of dimeric binding. In this situation, the matrix M_{i,j} is 3×3 and the states are unbound, bound by the first monomer in the dimer, and bound by the second monomer in the dimer. We write this matrix as

M = [ 1   1   1
      P   P   P
      0   P′  0 ],

where the first row corresponds to the possibility of moving from any state (unbound, first monomer, second monomer) to the unbound state, the second row corresponds to the possibility of moving from any state to the first monomer in the dimer being bound (with the contribution P the same as earlier), and the third row corresponds to the possibility of moving from only the first monomer in the dimer to the second monomer in the dimer (with the contribution P′ = e^{−∆Ẽ_0}). P′ does not include the chemical potential μ̃, since the binder has already bound once upon the chain and has already moved from outside the system. An additional contribution to account for the loss of rotational degrees of freedom upon binding is not included, since the simulation constraints still permit significant rotational freedom even in the bound state. We attribute the slightly non-constant nature of μ̃_0, which varies by about k_B T/2 in our fits (−μ̃_0 = 14.1, 13.4, 13.2, 13.1 for c = 0.05, 0.5, 2.5, and 5.0 µM, respectively), to such degree-of-freedom ambiguities. The largest eigenvalue of this 3×3 matrix, inserted into the same thermodynamic relationship, yields the dimeric occupancy quoted as Equation 2 in the main text. We note that any number of other similar calculations are possible using this method. We can introduce an artificial field µ_2, for example, that acts only on states of interest.
For example, we can calculate the prevalence of singly bound states using a modified matrix in which the factor P̃ = e^{μ̃_2} multiplies the components that are characteristic of singly bound states (i.e., those transfer movements that involve the first monomer of a dimer going to anything other than the second monomer of the same dimer). The largest eigenvalue of this modified matrix can be found in the same way, and the number of singly bound states ⟨n*_B⟩ follows by differentiating its logarithm with respect to μ̃_2.
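The dimeric transfer-matrix calculation is easy to verify numerically; the sketch below builds the 3×3 matrix described above and differentiates ln λ_0 with respect to μ̃, with illustrative parameter values.

```python
import numpy as np

def lam0(dE0, mu):
    # Largest eigenvalue of the 3x3 dimer transfer matrix described above.
    P = np.exp(-dE0 + mu)       # first monomer binds (includes chemical potential)
    Pp = np.exp(-dE0)           # second monomer binds (no chemical potential)
    M = np.array([[1.0, 1.0, 1.0],
                  [P,   P,   P  ],
                  [0.0, Pp,  0.0]])
    return max(np.linalg.eigvals(M).real)   # Perron root is real and dominant

def mean_dimers_per_site(dE0, mu, dmu=1e-6):
    # ln Xi ~ N ln(lambda_0), so <n_P>/N = d ln(lambda_0) / d mu.
    return (np.log(lam0(dE0, mu + dmu)) - np.log(lam0(dE0, mu - dmu))) / (2.0 * dmu)

# e.g. mean_dimers_per_site(-5.0, -14.1) evaluates the bound-dimer density
# at one of the fitted chemical potentials quoted above.
```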
MASTER EQUATION NUMERICAL THEORY
We presented an abbreviated account of the numerical approach to calculating exchange kinetics using the Master Equation representation of the simplified states of the system. To provide a more thorough picture of how we carried out this calculation, we discuss the development of this theory in more detail. The starting point for a Master Equation approach is the Master Equation itself:

dφ_i(t)/dt = Σ_j k_ij φ_j(t),

where the matrix of rate constants k_ij has units of 1/time. We defined the full matrix conceptually in Figure 3 and mathematically in Equation 5. This allowed the writing of an evolution equation upon expanding the time evolution using small time increments ∆t:

φ_i(t + ∆t) = Σ_j (δ_ij + ∆t k_ij) φ_j(t),

where the matrix δ_ij + ∆t k_ij can be thought of as an operator that evolves the state at time t, φ_j(t), to a later time φ_j(t + ∆t). We can therefore numerically solve for the exchange process at time t = ∆t × n:

φ_i(t = ∆t × n) = Σ_j [ (δ + ∆t k)^n ]_{ij} φ_j(0),

where the initial state is that all the binders are fully bound, φ_j(t = 0) = δ_j0. In principle, this works for any system so long as k_ij is appropriately defined and the states φ_i can be articulated in an unambiguous fashion. This is often practically difficult due to the abundance of independent and interrelated states that can be defined. In our system, we provided a simplified picture that focuses on a minimal representation of φ_i such that we can define a relatively straightforward representation of the rate constants k_ij, an approach that has found success in similar systems (6). To determine the dimers that have separated from the DNA chain and moved outside of the simulation box (and hence become untagged), we must convolute the unbound states with the Green's function G(0, E; τ) for diffusion to the edge E of the box. This convolution describes the incremental increase in the occupation of state i = 5 due to the binder processes indicated by Equation S18, after which the binder propagates diffusively (via the Green's function G) to the boundaries of the simulation box. It is this process which dictates the decay of the original bound population of binders n_B,0. In a perfectly cylindrical region around the DNA molecule we could, in principle, calculate the exact Green's function, since two-dimensional diffusion is a well-known process (12). We instead use an approximate form governed by the single diffusive time scale τ_DE = N²a²/D, the time for a dimer with diffusion constant D to reach a distance of 2Na (the distance from the center to one of the faces), since for this aspect of the calculation nothing too complicated is required. This method, upon quantitative comparison with the simulation results, permits a much more rapid calculation of the exchange kinetics; instead of iterating through the non-deterministic Langevin equation (i.e., multiple runs are needed to evaluate averages) for ca. 100−900 species, we are able to iterate through what is essentially 5×5 matrix multiplication. Since the time scales of the experimental results are on the order of 1000 s, this would require 1×10^10−10^11 Langevin iterations and is prohibitively long for a Langevin simulation. | 4,925.2 | 2014-01-06T00:00:00.000 | [
"Biology",
"Chemistry",
"Physics"
] |
Sparse Space Shift Keying Modulation with Enhanced Constellation Mapping
For reducing the switching frequency between the radio frequency (RF) chain and transmit antennas, a class of new sparse space shift keying modulation (SSSK) schemes are presented. This new class is proposed to simplify hardware implementation, through carefully designing the spatial constellation mapping pattern. Specifically, different from traditional space shift keying modulation (SSK), the proposed SSSK scheme utilizes more time slots to construct a joint design of time and spatial domain SSK modulation, while maintaining the special structure of single RF chain. Since part of the multi-dimension constellations of SSSK concentrate the energy in less time slots, the RF-switching frequency is effectively reduced due to the sparsity introduced in the time domain. Furthermore, through theoretical analysis, we obtain the closed-form expression of the bit error probability for the SSSK scheme, and demonstrate that slight performance gain can be achieved compared to traditional SSK with reduced implementation cost. Moreover, we integrate transmit antenna selection (TAS) to achieve considerable performance gain. Finally, simulation results confirm the effectiveness of the proposed SSSK scheme compared to its traditional counterpart.
Introduction
The concept of spatial modulation (SM) [1], characterized by the principle of single radio frequency (RF) multiple-input multiple-output (MIMO) design [2][3][4], has attracted considerable attention in research as summarized in [5,6] in order to simplify the implementation of MIMO systems. This is performed by utilizing the index of the activated transmit antenna for information modulation along with traditional digital modulation. Meanwhile, the transmission performance of SM-MIMO was demonstrated to be comparable to traditional MIMO techniques as a possible development direction for future wireless communications toward different applications [7,8], and the unique structure of SM was also suggested to adapt orthogonal frequency division multiplexing (OFDM) [9], massive MIMO [10], high-frequency transmission [11], intelligent surface [12] and wireless security [13]. Following the basic idea of SM, toward a low-cost hardware implementation, space shift keying modulation (SSK) [14] offers an extremely simplified MIMO structure, by inheriting the method of the antenna index modulation process in SM while abandoning the traditional digital modulation. In a nutshell, SSK benefits from a simple RF-switching process at the transmitter, which makes it feasible for scenarios with low-cost devices such as Internet of Things (IoT) [15][16][17].
Meanwhile, with the idea of the fifth generation (5G) becoming reality, the upcoming sixth generation (6G) [18] has focused on even enhanced transmission rate by exploring the use of high-frequency bands as terahertz [19]. Due to the expansive implementation cost on this band, more efficient transmission technologies such as space modulation have been suggested [20] to further reduce the implementation cost. Therefore, the above-mentioned SSK technique has the potential to offer a low-cost MIMO implementation toward 6G wireless communications.
Although the basic idea of SSK focuses on the most efficient implementation of modulation for MIMO, a new challenge lies in the fact that an unaffordable RF-switching frequency becomes a bottleneck for information transmission, which remains an unsolved problem and increases the implementation cost. To alleviate this issue, a class of offset SSK and SM schemes were developed in [21,22] for reducing the RF-switching frequency. However, these solutions assume perfect channel state information (CSI) available at the transmitter, which may not be practical in many transmission scenarios. Therefore, reducing the RF-switching frequency without the aid of CSI for traditional SSK modulation remains an attractive challenge. On the other hand, for SSK the constellation optimization becomes particularly challenging, due to its extremely simplified structure. For example, a Hamming-code-aided constellation design was proposed in [23], targeting better transmission performance at the cost of an increase in RF chains. Furthermore, extended SSK (ESSK) was proposed in [24], where the number of active antennas is variable, with high spectral efficiency, and the performance of the ESSK scheme using different detection strategies was also evaluated in [24]. To fully exploit the spatial domain for transmitting information, Fang, S. et al. [25] proposed a layered space shift keying (LSSK) modulation scheme employing a layered architecture of SSK systems to further improve the spectral efficiency. Therefore, given the limitation of not utilizing the magnitude of the transmit signal while maintaining the single-RF structure, constellation optimization for SSK has remained an open challenge. Of prime concern in this paper is therefore to offer a new constellation mapping method with reduced RF-switching frequency and enhanced system performance.
Against the above background, the major contribution of this paper lies in that, a class of sparse SSK schemes are proposed, characterized by carefully designing and optimizing the constellation in both the spatial and time domain. Meanwhile, with regard to effectively reducing the switching frequency between the RF chain and multiple antennas, part of the multi-dimension constellations of SSSK concentrate the energy of multiple time slots and hence the RF chain does not switch in this duration, while strictly maintaining the single-RF structure of original SSK. Furthermore, a closed-form expression of the union bound on bit error probability for the proposed scheme is also derived by theoretical analysis, in order to demonstrate its slightly improved bit-error rate (BER) performance over original SSK. Finally, in order to further enhance the performance by increasing the transmit diversity, the concept of transmit antenna selection (TAS) is integrated. Specifically, different TAS criteria are firstly compared in terms of transmission performance and computational complexity, then a class of low-complexity TAS algorithms are designed to strike a balanced tradeoff between performance and complexity.
The remainder of this paper is organized as follows. The conventional SSK is reviewed and then the sparse SSK is presented in Section 2, while the comparison of RF-switching frequency is also presented. Section 3 presents the closed-form expression for the union bound on bit error probability of SSSK. In Section 4, transmit antenna selection (TAS) is utilized in sparse SSK to improve the BER performance. Both theoretical and simulation results as well as discussion are presented in Section 5. In Section 6, the conclusion is given.
Notation 1.
We use (·)^T and ‖·‖_F to denote the transpose and the Frobenius norm of a vector/matrix, respectively. P(·) is taken to mean the probability of an event. Q(·) represents the Q-function, Q(z) = (1/π) ∫_0^{π/2} exp(−z²/(2 sin²θ)) dθ. We use log_a(·) to represent the logarithm with base a. Finally, CN(m, σ²) denotes the complex Gaussian distribution of a random variable having independent Gaussian-distributed real and imaginary parts, with mean m and variance σ²/2.
Conventional Space Shift Keying Modulation
Let us consider a generic MIMO system with N_t transmit and N_r receive antennas. Generally, a random sequence of independent bits b = [b_1 b_2 . . . b_K] is generated at the transmitter, where K represents the sequence length. For the conventional SSK scheme, groups of log_2 N_t bits are mapped into a constellation vector x = [x_1 x_2 . . . x_{N_t}]^T with a power constraint of unity. The vector x specifies the activated antenna, during which all the other antennas remain idle, and hence has the form x = [0 0 . . . 1 . . . 0 0]^T, in which the position of the element 1 is the index of the activated antenna. More explicitly, an example of SSK modulation with a spectral efficiency of 2 bits/s/Hz is given in Table 1.
Then the modulation vector is transmitted over an N r × N t wireless channel H, thus the received signal can be expressed as Y = Hx + W, where the entries of H are assumed to be i.i.d. complex Gaussian random variables with zero means and unit variances and W is the additive white Gaussian noise (AWGN) with mean zero and variance σ 2 .
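As an illustration of the SSK mapping and its maximum-likelihood detection (a sketch with illustrative parameters; the detector simply picks the column of H closest to the received vector):

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr, snr_db = 4, 2, 10
sigma2 = 10 ** (-snr_db / 10)

bits = rng.integers(0, 2, size=int(np.log2(Nt)))
antenna = int("".join(map(str, bits)), 2)        # bits -> antenna index
x = np.zeros(Nt); x[antenna] = 1.0               # SSK constellation vector

H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
w = np.sqrt(sigma2 / 2) * (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr))
y = H @ x + w

# ML detection: argmin over candidate antennas of ||y - H e_k||^2.
metrics = np.linalg.norm(y[:, None] - H, axis=0) ** 2
antenna_hat = int(np.argmin(metrics))
```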
In general, the structure of original SSK offers a special low-cost implementation at the cost of increasing the switching frequency between the RF chain and transmit antennas. Therefore, a new challenge occurs as the overhead of RF-switching frequency. On the one hand, the RF-switching frequency becomes a new bottleneck to enhance the transmission rate. On the other hand, the increase of the RF-switching frequency also introduces extra cost for hardware implementation.
Table 1. SSK mapping rule: input bits, antenna index, and transmission vector.
Proposed Sparse Space Shift Keying Modulation
Similarly, we consider an (N_t × N_r)-element MIMO system for the design of sparse SSK. In this contribution, SSSK combines multiple moments, i.e., the power of multiple time slots can be concentrated in the constellation mapping to lower the RF-switching frequency. Therefore, the modulated symbol is not a vector but a matrix. Specifically, the transmission data are mapped by a joint design of the spatial and time domains, so the index formed by the data to be transmitted selects the combination of the activated time slot and the activated transmit antenna. Furthermore, it can be seen from the analysis and simulation results in the following sections that this mapping method has BER performance comparable to conventional SSK. The difference from the original SSK lies in that the time slot is also treated as a special resource for index modulation, while the RF-switching frequency is considerably reduced because of the introduction of silent slots.
Specifically, to describe SSSK in detail, we first assume that the number of combined time slots is N. In N slots, an N_t × N matrix X is transmitted on N_t antennas, and the matrix X specifies the activated antenna and time slots, during which all other antennas and time slots remain idle. The number of bits carried by X and the spectral efficiency are denoted l and m, respectively. We now introduce the concept of SSSK further via the example in Table 2, which gives the mapping rule for a transmission efficiency of 2 bits/s/Hz. Specifically, in Table 2, the case of four transmit antennas and two time slots is considered; the four information bits determine the location of the active antenna and time slot, which gives the transmission mapping rule in the context of N = 2, N_t = 4. In Table 2, the matrix X has N_t² + N_t × N = 4² + 4 × 2 = 24 states, and we choose 16 states from this collection as the constellation of SSSK. To reduce the RF-switching frequency and enhance the system performance, the states with only one time slot activated have priority. In general, the four information bits determine the location of the active antenna and time slot. Due to the power constraint, when one time slot is activated the transmitted amplitude on the active antenna is √2, and when two time slots are activated the transmitted amplitude on the active antenna is 1.
To generalize the basic idea behind Table 2, an algorithm to design the SSSK scheme is summarized as follows.
Step 1: Calculate the number of states available for indexing and choose the constellation of SSSK. It is worth mentioning that when N ≥ 3, X is allowed to concentrate the energy on only part of the slots. For example, when N = 3 the collection contains considerably more states, 108 of which concentrate the energy; this changes the energy distribution and thus reduces the switching frequency.
Step 2: Divide the information bits into bit streams of length l, which are mapped to a constellation matrix X.
Step 3: Transmit the corresponding X in the MIMO system. In traditional SSK systems, the special structure requires frequent switching between the transmit antennas and the RF chain. For example, when N_t = 4, the expectation of the RF-switching frequency is 3/4, as the probability that the constellations in two consecutive time slots differ is 3/4. However, in practical implementation, the switching frequency between the RF chain and antennas is limited. As the matrices X in which the RF chain does not switch are considered first in SSSK, the RF-switching frequency of SSSK is considerably reduced.
Similarly to SSK, the SSSK modulation is transmitted over an N_r × N_t wireless channel H; the received signal can then be expressed as Y = HX + W, where the entries of H and W are assumed to be CN(0, 1) and CN(0, σ²), respectively.
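A sketch of how such a constellation can be enumerated for N_t = 4 and N = 2 is shown below; the ordering used to pick the 16 retained matrices is an assumption consistent with the low-switching priority described above, not necessarily the exact Table 2.

```python
import numpy as np
from itertools import product

# The full collection has Nt^2 + Nt*N = 24 matrices; 16 are kept, with
# priority given to states needing no RF switch within the block.
Nt, N = 4, 2
single_slot, two_slot = [], []
for a in range(Nt):
    for t in range(N):                        # one active slot, amplitude sqrt(2)
        X = np.zeros((Nt, N)); X[a, t] = np.sqrt(2.0)
        single_slot.append(X)
for a1, a2 in product(range(Nt), repeat=2):   # both slots active, amplitude 1
    X = np.zeros((Nt, N)); X[a1, 0] = 1.0; X[a2, 1] = 1.0
    two_slot.append(X)
# Prefer low-switching states: single-slot first, then same-antenna pairs.
two_slot.sort(key=lambda X: int(np.argmax(X[:, 0]) != np.argmax(X[:, 1])))
constellation = (single_slot + two_slot)[:16]  # 2^4 matrices -> 4 bits per block
```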
Performance Analysis
In this section, we analyze the error performance of the developed SSSK system. A tight upper bound on the average bit error probability (BEP) is given by the well-known union bound [11] as

P_b ≤ (1/(l 2^l)) Σ_i Σ_{j≠i} d(X_i → X_j) P(X_i → X_j),

where P(X_i → X_j) is the pairwise error probability (PEP) of deciding SSSK matrix X_j given that the SSSK matrix X_i is transmitted, and d(X_i → X_j) is the number of bits in error between the matrices X_i and X_j. The conditional PEP given the channel matrix H is

P(X_i → X_j | H) = Q( √( ‖H(X_i − X_j)‖_F² / (2σ²) ) ).

Averaging this conditional PEP over the channel matrix H, the unconditional PEP is obtained by using the moment generating function (MGF) approach as

P(X_i → X_j) = (1/π) ∫_0^{π/2} Π_{n=1}^{N} M_{γ_n}( −1/(2 sin²θ) ) dθ,

where n = 1, 2, . . ., N, and M_{γ_n} is the MGF of the random variable γ_n, defined as

M_{γ_n}(s) = E[ e^{s γ_n} ].

The MGF of most fading distributions can be obtained by standard Laplace transformation or numerical integration; for a common fading distribution such as Rayleigh, the MGF is

M_{γ_n}(s) = (1 − s γ̄_n)^{−1}.

Thus, the unconditional PEP can be calculated under Rayleigh fading as

P(X_i → X_j) = (1/π) ∫_0^{π/2} Π_{n=1}^{N} ( 1 + γ̄_n/(2 sin²θ) )^{−1} dθ,

where γ̄_n is the average signal-to-noise ratio, determined by the noise power N_0 and the eigenvalues λ_{i,j,n} of the distance matrix (X_i − X_j)(X_i − X_j)^H. Consequently, the union bound on the BER of the SSSK system is obtained by substituting this unconditional PEP into the union bound above.
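The Rayleigh-fading PEP integral above is straightforward to evaluate numerically; in the sketch below, the mapping from the distance-matrix eigenvalues to γ̄_n is an illustrative assumption.

```python
import numpy as np
from scipy.integrate import quad

def pep_rayleigh(Xi, Xj, N0):
    # Eigenvalues of the distance matrix (Xi - Xj)(Xi - Xj)^H.
    D = Xi - Xj
    lam = np.linalg.eigvalsh(D @ D.conj().T)
    lam = lam[lam > 1e-12]                 # nonzero eigenvalues only
    gbar = lam / (2 * N0)                  # assumed per-slot SNR mapping
    def integrand(theta):
        s2 = np.sin(theta) ** 2
        return np.prod(1.0 / (1.0 + gbar / (2 * s2)))
    val, _ = quad(integrand, 1e-9, np.pi / 2)
    return val / np.pi
```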
Transmit Antenna Selection
When considering the reduction of the number of RF chains, spatial resources can also be utilized to improve the BER performance rather than the spectral efficiency, and an abundance of methods have been proposed, among which transmit antenna selection achieves considerable performance gain in conventional MIMO systems. Specifically, due to the lack of transmit diversity, TAS is usually suggested as an adaptation for SM and SSK in order to enhance the error performance via extra spatial diversity. Focusing on different aspects of the channel matrix, multiple TAS algorithms have been suggested, such as the norm-based capacity optimized antenna selection (COAS) and the Euclidean distance (ED)-based Euclidean distance optimized antenna selection (EDAS) [26,27].
The aforementioned two algorithms evaluate the channel matrix with totally different criteria, resulting in different tradeoffs between complexity and performance. In this treatise, COAS, EDAS and a compromised TAS algorithm are conceived for SSSK systems, and their complexity is further quantified.
Capacity Optimized Antenna Selection
COAS uses the instantaneous SNR to evaluate the quality of the selected antenna set. Due to the additive white Gaussian noise, COAS focuses on the Frobenius norm of the selected q columns of the channel matrix H ∈ C^{N_r×N_t}, which can be expressed as

p̂ = arg max_{p∈S} ‖H_p‖_F²,

where p and p̂ denote the indices of a certain selection and the optimal selection, respectively, and S is the index set having (N_t choose q) elements. COAS does not utilize prior information about the code book at the transmitter, which reduces complexity compared to ED-based algorithms at the cost of limited performance gain.
EDAS
Compared to COAS, the EDAS algorithm adopts a different criterion, with much increased complexity, to select the antennas. In EDAS, the Euclidean distances between transmit vectors distorted by the partial channel matrix H_p ∈ C^{N_r×q}, selected from the whole matrix H, are calculated and evaluated, and the minimum distance under H_p is used to select the final antenna set. Therefore, at the cost of the high complexity of exhaustive search, the optimal set has the maximum minimum distance. The whole algorithm can thus be depicted as

p̂ = arg max_{p∈S} min_{x_i ≠ x_j} ‖H_p (x_i − x_j)‖_F²,

where p and p̂ are the indices of a certain selection and the optimal selection, respectively, X is the set of transmitted vectors and x_i, x_j are elements of X. As the Euclidean distance is calculated for each selection and the constellation is traversed, EDAS demonstrates the best performance, with tremendous computational complexity compared to norm-based algorithms.
Compromised TAS Algorithm
In order to balance performance and complexity for the SSSK scheme, a compromised method of antenna selection, taking both performance and computational complexity into account, is conceived by combining the above-mentioned two criteria. Specifically, a primary selection is performed with a low-complexity norm-based algorithm such as COAS to alleviate the computational burden, and a complicated, near-optimal antenna selection is then performed to acquire better BER performance. Algorithm 1 can be depicted as follows.
The parameters q_t and H_t denote the number of antennas and the temporary channel matrix selected by COAS, respectively. The combination of COAS and EDAS constitutes a balanced trade-off between complexity and performance. As q_t increases, better performance can be attained at the cost of complexity, and the algorithm reduces to EDAS when q_t = N_t.
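A minimal sketch of this two-stage procedure (COAS pre-selection followed by EDAS on the reduced set) is given below; codebook_fn, standing in for the SSSK constellation generator restricted to q antennas, is an assumption.

```python
import numpy as np
from itertools import combinations

def two_stage_tas(H, q, q_t, codebook_fn):
    # Stage 1 (COAS): keep the q_t columns of H with the largest norms.
    idx_t = np.argsort(-np.linalg.norm(H, axis=0))[:q_t]
    codebook = codebook_fn(q)              # SSSK matrices for q antennas
    # Stage 2 (EDAS): exhaustive max-min-distance search on the reduced set.
    best, best_dmin = None, -np.inf
    for sel in combinations(sorted(idx_t), q):
        Hp = H[:, list(sel)]
        dmin = min(np.linalg.norm(Hp @ (Xi - Xj)) ** 2
                   for i, Xi in enumerate(codebook)
                   for Xj in codebook[i + 1:])
        if dmin > best_dmin:
            best, best_dmin = sel, dmin
    return best
```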
Complexity Analysis
The computational complexity is measured by the real number float operations per second (flops), including complex addition and multiplication. For given complex matrices A ∈ C a×b , B ∈ C b×c , c ∈ C b×1 and d ∈ C b×1 , the complexity of c + d, c, d , AB and ||A|| 2 F is quantified by 2b flops, 4b − 1 flops, 8abc − 2ac flops, and 4ab − 1 flops, respectively, [28]. Assume an SSSK system with N t transmit antennas, N r receive antennas, M(M < q) time slots and a spectral efficiency L, while q columns of channel matrix are selected. For convenience, define N b = ( N t q ). In COAS, computing the norm of a selected antenna set costs 4N r q − 1 flops. Since there are N b antenna sets to be computed, the overall complexity of COAS is N b (4N r q − 1) flops.
With regard to EDAS, the problem differs from that in conventional MIMO systems because of the sparsity of SSSK transmit vectors. For any $i$ and $j$, $x_i - x_j$ has at most $2M$ non-zero rows, so that the computational complexity of $\|\mathbf{H}_p(x_i - x_j)\|_F^2$ is $2q + 16N_rM + 2N_r - 1$ flops for a certain $i$ and $j$, and the overall complexity of EDAS is $N_b \binom{2^{ML}}{2}(2q + 16N_rM + 2N_r - 1)$ flops.

For the compromised algorithm, the TAS process can be divided into two steps, the primary COAS followed by EDAS, and the computational complexity is strongly correlated with the temporarily selected antenna number $q_t$. From the calculation above, the complexities of the COAS and EDAS stages are $\binom{N_t}{q_t}(4N_rq_t - 1)$ and $\binom{q_t}{q}\binom{2^{ML}}{2}(2q + 16N_rM + 2N_r - 1)$, respectively. The overall computational complexity is then

$\binom{N_t}{q_t}(4N_rq_t - 1) + \binom{q_t}{q}\binom{2^{ML}}{2}(2q + 16N_rM + 2N_r - 1)$ flops.
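These expressions can be checked numerically. The sketch below tabulates the flop counts, treating the EDAS pair count $\binom{2^{ML}}{2}$ as in the expressions above; the function names and the example parameters are illustrative.

```python
from math import comb

def flops_coas(Nt, Nr, q):
    return comb(Nt, q) * (4 * Nr * q - 1)

def flops_edas(Nt, Nr, q, M, L):
    pairs = comb(2 ** (M * L), 2)            # transmit-vector pairs
    return comb(Nt, q) * pairs * (2 * q + 16 * Nr * M + 2 * Nr - 1)

def flops_compromised(Nt, Nr, q, qt, M, L):
    pairs = comb(2 ** (M * L), 2)
    return (comb(Nt, qt) * (4 * Nr * qt - 1)                         # COAS stage
            + comb(qt, q) * pairs * (2 * q + 16 * Nr * M + 2 * Nr - 1))  # EDAS stage

# Example close to the simulated setting: Nt=8, q=4, Nr=1, M=2, qt=6, L=1.
print(flops_coas(8, 1, 4))                   # 1050
print(flops_edas(8, 1, 4, 2, 1))             # 17220
print(flops_compromised(8, 1, 4, 6, 2, 1))   # 4334, between the two extremes
```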
Simulation and Discussion
In this section, comparisons of the RF-switching frequency and a range of numerical BER simulation results for the sparse SSK and conventional SSK schemes are presented for different numbers of transmit and receive antennas. In all simulation results, unless otherwise specified, a Rayleigh fading channel is considered and perfect channel state information is assumed. When channel estimation is not ideal [29,30], the variance of the Gaussian estimation error decreases as the SNR of the data symbols increases, i.e., $\sigma_e^2 = \mathrm{SNR}^{-1}$. We employ maximum likelihood (ML) detection for both SSK and SSSK schemes. Let $N_t$ and $N_r$ be the numbers of transmit and receive antennas, respectively, and $N$ the number of combined time slots for SSSK modulation. Moreover, the theoretical curve is given in each figure. As shown in the figures, the derived upper bound becomes very tight as the SNR increases for both SSSK and SSK, which helps verify the correctness of the simulation results. The details are given as follows.
Firstly, to demonstrate the advantages of SSSK in detail, a comparison of the RF-switching frequency between SSK and SSSK is shown in Table 3. We use $E_{SSK}$ and $E_{SSSK}$ to denote the expected RF-switching frequency of SSK and SSSK, respectively, at identical spectral efficiency. Subsequently, we present the BER performance curves of the SSSK and SSK modulation with $N_t = 2$, $N_r = 1$ in Figure 1. More specifically, the theoretical and simulated performances of SSSK are shown for $N = 2$ at a spectral efficiency of 1.5 bits/s/Hz and for $N = 3$ at 1.33 bits/s/Hz. As the SNR increases, the simulated performance gradually approaches the theoretical upper bound, which confirms that our theoretical analysis correctly bounds the performance of the proposed SSSK. In general, in the case of $N_t = 2$, the spectral efficiency of SSSK is higher than that of traditional SSK. Meanwhile, as seen in Table 3, the RF-switching frequency of SSSK is effectively reduced compared to SSK.

Figures 2 and 3 compare the BER performance of SSSK and conventional SSK with $N_t = 4$ and $N_r = 1$ at a spectral efficiency of 2 bits/s/Hz. The figures support the following observations. Firstly, the BER performance of SSSK improves as $N$ decreases under the same antenna configuration. Secondly, when $N = 2$, both theoretical and simulation results show that the BER performance of SSSK is better than that of conventional SSK, with a lower RF-switching frequency. However, as $N$ increases, the advantage of SSSK gradually diminishes. Moreover, the results for $N_t = 8$, $N_r = 1$ are shown in Figure 4; the analytical results match the simulation results, demonstrating the correctness of the derived performance.

The above-mentioned antenna selection schemes are validated in SSSK systems, and SSSK without TAS is also simulated as a reference. The total transmit power is set to unity per time slot. Perfect channel state information is assumed at the receiver, and the result of TAS is conveyed to the transmitter through a feedback channel. Figure 5 shows the BER performance of SSSK with the various TAS algorithms, for $N_t = 8$, $q = 4$ and $N_r = 1$. The SSSK system combines two time slots, and the first-stage TAS parameter $q_t$ in the compromised TAS algorithm is six. It can be observed that in the SSSK system COAS attains a performance gain of 2 dB at a BER of $10^{-2}$. The SSSK system obtains excellent performance with the ED-based algorithms (EDAS and the compromised TAS algorithm); moreover, the gap between the compromised algorithm and EDAS is quite small, implying that the high-complexity EDAS could be replaced by a simplified version at a moderate performance loss. The complexity of the aforementioned TAS algorithms is also illustrated in Figure 6 for varying $q_t$, indicating that the compromised TAS algorithm reduces the computational complexity of EDAS effectively. The SSSK system with EDAS has the highest complexity, owing to the exhaustive search for the maximum minimum Euclidean distance between the transmitted symbols to provide an optimal antenna selection, while the complexity of the system with COAS is much lower, with a limited performance gain. The compromised algorithm, however, reduces the complexity of EDAS efficiently by pre-selecting a $q_t$-sized antenna set using the simpler COAS algorithm, with a moderate performance degradation compared to EDAS.
The complexity of the compromised algorithm increases with $q_t$, and when $q_t = N_t$ the algorithm reduces to EDAS.
In general, the above simulations demonstrate the following points for the developed SSSK scheme. Firstly, SSSK has the unique advantage of a reduced RF-switching frequency without CSI at the transmitter, while offering slightly improved BER performance compared to its traditional counterpart, SSK. Secondly, TAS is shown to combine effectively with SSSK, reaping the spatial diversity advantage it offers. Lastly, the proposed low-complexity TAS-SSSK is capable of striking a balanced trade-off between performance and complexity.
Conclusions
In this paper, we introduce a class of sparse SSK modulation techniques toward low-cost implementation of MIMO in terms of reduced RF-switching frequency, characterized by the construction of a joint index modulation in the space and time domains. Furthermore, a theoretical upper bound is derived to quantify the performance advantage over traditional SSK. In addition, to further improve the BER performance through spatial diversity enhancement, transmit antenna selection is considered and several detailed algorithms are compared. We conclude that the developed low-complexity TAS algorithm strikes a balanced trade-off between BER performance and computational complexity.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,784.8 | 2022-08-01T00:00:00.000 | [
"Computer Science"
] |
Calibration techniques for fast-ion D α diagnostics a)
Fast-ion D α measurements are an application of visible charge-exchange recombination (CER) spectroscopy that provides information about the energetic ion population. Like other CER diagnostics, the standard intensity calibration is obtained with an integrating sphere during a vacuum vessel opening. An alternative approach is to create plasmas where the fast-ion population is known, then calculate the expected signals with a synthetic diagnostic code. The two methods sometimes agree well but are discrepant in other cases. Different background subtraction techniques and simultaneous measurements of visible bremsstrahlung and of beam emission provide useful checks on the calibrations and calculations. © 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4732060]
I. INTRODUCTION
The fast-ion D α (FIDA) diagnostic technique exploits the Doppler shift of the Balmer-alpha emission from neutralized deuterons to obtain velocity and profile information about the fast-ion distribution function. 1 This paper focuses on methods to assess the validity of the intensity calibration.
The primary intensity calibration during a vacuum opening follows a standard procedure. Each optical fiber is backlit, then the aperture of a calibrated integrating sphere is positioned to intersect the light cone of the illuminated fiber. After the fiber is reconnected to the spectrometer, the camera measures the number of counts produced by the source, providing a calibration factor that relates counts to the absolute spectral radiance. Usually this intensity calibration is performed both before and after experimental campaigns, since tokamak operation can degrade optical components.
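As a toy illustration of the counts-to-radiance step just described, the sketch below applies a per-pixel calibration factor derived from a known source; the source radiance, counts, and units are invented for illustration only.

```python
import numpy as np

# Known spectral radiance of the integrating sphere (illustrative units).
sphere_radiance = 1.0e16
# Camera counts recorded per pixel while viewing the sphere.
sphere_counts = np.array([4.0e3, 4.2e3, 3.9e3])
# Calibration factor: absolute spectral radiance per recorded count.
cal_factor = sphere_radiance / sphere_counts

# Later, a plasma spectrum in raw counts is converted to radiance.
plasma_counts = np.array([1.1e4, 9.0e3, 1.3e4])
plasma_radiance = plasma_counts * cal_factor
print(plasma_radiance)
```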
A calibration during physics operations requires a known source. A common approach is to attempt to produce a known fast-ion distribution function by injecting neutral beams into an MHD-quiescent plasma. For such conditions a code such as TRANSP NUBEAM (Ref. 2) can accurately model the distribution function. The NUBEAM distribution function is input to a synthetic diagnostic code such as FIDASIM (Ref. 3) that calculates the expected FIDA spectral radiance for use as a calibration reference. Agreement (to within ∼25%) between theory and experiment has been reported for spectrometers at DIII-D (Ref. 4) and ASDEX-Upgrade. 5 In other cases, the predicted signal disagrees with the measurement. There are many possible causes of a discrepancy, including measurement errors (particularly background subtraction), errors in beam parameters (power, species mix, spatial profile), errors in plasma parameters (which affect calculations of injected neutral beam, halo, and fast-ion densities), and modeling errors. The latter include programming "bugs" such as ones that were recently identified in the FIDASIM code (the comparisons in this paper are from IDL version 4.0) and deficiencies in the NUBEAM model such as the neglect of fast-ion transport by instabilities. Using recently analyzed NSTX and DIII-D cases as examples, this paper presents several additional comparisons that can confirm or eliminate some of these potential sources of error.
II. NSTX VERTICAL FIDA EXAMPLE
The NSTX vertically-viewing s-FIDA diagnostic 6 uses a transmission grating spectrometer in conjunction with a CCD camera to measure D α spectra between 645-667 nm. An OD2 neutral density filter in the spectrometer image plane partially blocks the bright, cold D α centerline. One set of active fibers views a heating beam, while a similar set of toroidally displaced fibers provides reference views.
In 2008 and 2009, a set of experiments was conducted to check the calibration and modeling of the FIDA emission. To minimize MHD activity, a single modulated (50 Hz at 50% duty cycle) 65 keV neutral beam was injected into plasmas with different values of plasma current I_p, density n_e, and toroidal field B_T. Of the beam-driven instabilities that are commonly observed in NSTX, low-frequency instabilities such as the toroidal Alfvén eigenmode (AE) were absent but, despite the low beam power, MHz global or compressional AEs were present in these discharges. The measured neutron rate agrees well with TRANSP predictions during the low-power phase, suggesting that any spatial transport caused by the MHz instabilities is modest. Figure 1 shows raw and calibrated spectra from a representative spatial channel. For the channel that views the beam, a FIDA feature is obviously present on the blueshifted wing of the D α line (652-655 nm) (Fig. 1(a)). As expected, the FIDA feature is absent in the spectrum from the toroidally displaced fiber (Fig. 1(b)). Other features in the spectra are associated with impurity lines and with the attenuation caused by the neutral-density filter. Figure 1(c) shows the active spectrum after application of the calibration factors.
The spectrum is reasonable. The central D α line is very bright. Impurity lines, such as the oxygen V line at 650.0 nm, rise above a fairly flat background. The spectral intensity of visible bremsstrahlung (VB) is essentially constant between 645 and 670 nm and provides a convenient check on the intensity calibration. It is straightforward to calculate the expected VB level from the plasma profiles and diagnostic sightlines used in the FIDASIM code. The code neglects any emission from outside the last-closed flux surface, so the measured background should be at least the calculated VB level. The data in Fig. 1(c) fail this test, strongly suggesting that the intensity calibration underestimates the true intensity by a factor of ∼2. The error source is currently unknown.
Another useful check is to compare methods of background subtraction. The data should satisfy the following criteria. (1) The net spectra should go to zero at large Doppler shifts. (2) The spectrum derived from beam modulation ("beam on - beam off") should equal the spectrum derived from the reference view ("active view - reference view"). (3) For a reference view, the beam-modulation spectrum ("beam on - beam off") should be flat and approximately zero. Figure 2 shows comparisons of this type for a representative spatial channel. For this channel, the blueshifted spectra meet all three criteria but the redshifted spectra do not. In particular, the redshifted beam-modulation spectrum is >0 at large Doppler shift (661-662 nm), the spectra derived from the two background-subtraction techniques differ, and the reference background is larger when the diagnostic beam is on than when it is off. Similar comparisons for the other spatial channels show that the validity of the blueshifted and redshifted spectra depends upon position. An investigation suggests that the errors in background subtraction are caused by scattered light. The spectra are measured in three bands: large blueshift I_B, cold D α line I_C, and large redshift I_R. The "large" blueshifts and redshifts are those exceeding the Doppler shift corresponding to the injection energy. A database shows that the baseline offsets I_B and I_R are both strongly correlated with the cold intensity I_C.
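A small sketch of these three consistency checks, for one spatial channel, is given below; the array arguments, wing index set, and tolerance are illustrative placeholders rather than the diagnostics' actual analysis code.

```python
import numpy as np

def check_background(active_on, active_off, ref_on, ref_off, wing, tol):
    """Each argument is a spectrum (counts vs. wavelength); `wing` indexes
    the large-Doppler-shift region where the net FIDA signal should vanish."""
    net_mod = active_on - active_off          # beam-modulation subtraction
    net_ref = active_on - ref_on              # reference-view subtraction
    ok1 = np.all(np.abs(net_mod[wing]) < tol)        # (1) zero at large shift
    ok2 = np.allclose(net_mod, net_ref, atol=tol)    # (2) two methods agree
    ok3 = np.allclose(ref_on, ref_off, atol=tol)     # (3) reference is flat
    return ok1, ok2, ok3
```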
Some aspects of the FIDASIM predictions disagree with the data while others agree. The overall intensity is discrepant for all channels. The observed spatial profile shape is ∼30% broader than theory. On the other hand, the spectral shape is in excellent agreement with theory for all channels. The parametric dependencies of the signals also agree with theory. To test this, the maximum measured and calculated radiances are compared for all 12 discharges in the dedicated experiment, for both the blueshifted and the redshifted side of the spectrum. The correlation coefficient between theory and experiment is r ≈ 0.9 for both sides of the spectrum.
To summarize, although the vertical NSTX FIDA diagnostic unquestionably is measuring FIDA light, the absolute intensity calibration is suspect. At this point, calibration errors, errors in beam parameters, and modeling errors all remain candidates to explain the discrepancy.
III. DIII-D EXAMPLES
DIII-D is currently equipped with three spectroscopic FIDA diagnostics with vertical, oblique, and tangential views of the plasma. The vertically-viewing profile diagnostic employs a Czerny-Turner spectrometer tuned to the blue side of the cold D α line. The obliquely-viewing diagnostic 7 employs a transmission grating spectrometer that only measures the blueshifted side of the spectrum. A bandpass filter transmits the blue wing but strongly attenuates the cold D α line. The main-ion CER diagnostic 8 measures the entire D α feature with a pair of tangential views. It employs a Czerny-Turner spectrometer and a 12-bit CCD camera. For FIDA measurements, the pixels at the cold D α line are allowed to saturate weakly. ("Weak" saturation occurs for signals that are less than about twice the full well depth.) When weak saturation happens, the spectra are merely clipped over a few (2-5) pixels and other pixels appear unaffected. (In contrast, for stronger saturation, the entire register (128 pixels) associated with the saturated pixels exhibits a baseline "sag"; these spectra are unusable.) The analysis procedure fits the entire spectrum. 8 All three diagnostics normally use beam modulation to remove the background. Figure 3 shows analyzed main-ion CER data following injection for 100 ms of a single 74 keV, 2.2 MW source. The diagnostic beam is pulsed on for 10 ms. Figure 3(a) compares the measured brightness of the full, half, and third-energy beam-emission components with the brightness predicted by FIDASIM. The agreement is excellent. The first FIDA diagnostics intentionally avoided the beam emission in their design 1 but the data from the main-ion CER diagnostic show that it is preferable to measure beam emission as well as the FIDA feature. The good agreement between theory and experiment shown in Fig. 3(a) confirms that the injected neutral density is accurately modeled, eliminating one potential source of error in the modeling. The calculated VB level is in excellent agreement with the observed background at large Doppler shifts for this instrument (Fig. 3(c)). Only the outermost channel shows a significant discrepancy, owing to reflections off a metallic surface in its sightline. The excellent agreement confirms the validity of the experimental calibration and of the modeling of the plasma profiles for this discharge.
The halo of thermal neutrals that surrounds the injected beam also contributes to the charge-exchange events that produce FIDA light. Figure 3(b) shows the profile of light produced by thermal deuterons. The discrepancy between the measured and calculated deuteron brightness suggests that the halo density is underestimated by FIDASIM. Alternatively, the geometry of the neutral beam may be specified incorrectly. Similarly, the shape of the spatial profile agrees well with FIDASIM predictions but the magnitude differs (Fig. 3(d)), probably because FIDASIM underestimates the halo neutral density.
To create a calibration discharge for the vertical and oblique diagnostics, a single steady 60 keV beam injected 1.3 MW into an L-mode discharge with negligible MHD activity. The measured neutron rate is in excellent agreement with the rate predicted by TRANSP, suggesting that the fast-ion distribution function is accurately modeled. For both of these diagnostics, the predicted VB signal is about a factor of two smaller than the apparent baseline. This probably indicates that scattered light is increasing the background level. A laboratory calibration experiment 9 indicates that scattered light is a problem for the transmission-grating spectrometer design.
The spectral shape predicted by FIDASIM is in excellent agreement with the measurements for both diagnostics. The magnitude of the FIDA signal also agrees reasonably well with the predictions for both systems, although the spatial profile shape is only in fair agreement with theory.
These examples illustrate the power of comparing the data with as many spectral features as possible. Each successful comparison eliminates potential sources of error, while unsuccessful comparisons highlight likely sources of error. | 2,643.2 | 2012-07-03T00:00:00.000 | [
"Physics"
] |
Online evaluation method of coal mine comprehensive level based on FCE
An online evaluation method of coal mine comprehensive level based on the Fuzzy Comprehensive Evaluation method (FCE) is proposed. Firstly, following the principles of fairness, systematicness and hierarchy, taking research and development, production, sales, finance, safety and management as the first-level indicators, a multi-level evaluation indicator system of coal mine comprehensive level combining objective and subjective evaluation indicators is established. Secondly, according to the characteristics of the indicator system, the specific process of FCE of coal mine comprehensive level is given. Then, taking SQL Server as the database management system and C#.NET as the development language, a B/S-structure online evaluation system of coal mine comprehensive level based on FCE is designed and developed. Finally, the proposed method is applied to Coal group PM for testing. The application shows that the proposed method can provide an efficient and convenient online platform for the Coal group to evaluate the comprehensive level of its coal mines, and the horizontal and longitudinal comparison of the evaluation results can urge the coal mines to maintain their advantages and remedy their disadvantages, which is of significance for improving the overall competitiveness of the Coal group.
Introduction
Coal mines are important economic cells that provide coal resources for a country. As a Coal group, it is of some significance to evaluate the comprehensive level of the coal mines under its jurisdiction and promote improvement through evaluation for improving the overall competitiveness of the coal mines and even the Coal group.
Current research about coal mine evaluation mainly includes safety evaluation [1][2][3], risk evaluation [4][5][6][7], ecological environment evaluation [8,9], system evaluation [10], science and technology evaluation [11], etc. These are professional evaluations from a particular point of view, which may be one-sided. Since a coal mine is a production-oriented enterprise, we believe that a comprehensive evaluation covering research and development, production, sales, finance, management, etc. has more guiding significance.
The comprehensive level evaluation of coal mines is a multi-criteria evaluation problem. Common methods of multi-criteria evaluation include the Delphi method [12,13], the analytic hierarchy process (AHP) [14][15][16], the weight summation method (WSM), the weight product method (WPM), the entropy method [17,18], the factor analysis method (FA) [19], the TOPSIS method [20][21][22], artificial neural networks (ANN) [23,24], multiple regression analysis (MRA) [25,26], the fuzzy comprehensive evaluation method (FCE) [27][28][29], etc. Among them, the Delphi method and AHP are suitable for the evaluation of subjective indicators. WSM, WPM, the entropy method, FA and TOPSIS are suitable for the evaluation of objective indicators. ANN, MRA and FCE are all suitable for the evaluation of both subjective and objective indicators. However, ANN and MRA are not suitable for the evaluation of multi-level indicators, whereas FCE is. With FCE, not only can the overall evaluation result be obtained, but also the evaluation result of each indicator, which makes it easy to find the disadvantages and propose corresponding improvement measures.
In terms of evaluation operation, the informatization of coal mine evaluation urgently needs improvement. With the advent of the information age, manual or stand-alone evaluation has increasingly exposed its shortcomings: the evaluation is constrained in space, the calculation efficiency is low, and it is difficult to share and compare the evaluation results.
Based on the above analysis, an online evaluation method of coal mine comprehensive level based on FCE is put forward. Following the principles of fairness, systematicness and hierarchy, a multi-level evaluation indicator system for coal mine comprehensive level is established. The specific process of FCE for coal mine comprehensive level is given. Taking SQL Server as the database management system and C#.NET as the development language, an online evaluation system for the comprehensive level of coal mines is designed and developed. The proposed method is applied to Coal group PM for testing.
Establishing of evaluation indicator system
Following the principles of fairness, systematicness and hierarchy, through literature search and investigation of Coal group PM, a set of multi-level evaluation indicator system for coal mine comprehensive level is established, which takes research and development, production, sales, finance, safety and management as the first-level indicators, as shown in Table 1. The specific process is as follows. Firstly, a draft indicator system is constructed through literature search. Secondly an expert group consisting of 18 experts coming from Coal group PM and the coal mines of Coal group PM is established. Thirdly, the rationality of the selected indicators is discussed through expert meetings. Finally, the weight of each indicator is determined one by one through expert meetings.
Process design of FCE
It can be seen from Table 1 that the evaluation indicator system is a multi-level system including both objective and subjective evaluation indicators. For this kind of indicator system, the FCE method is suitable. The quantitative indicators (hoping-large, hoping-target, hoping-small) and the have-or-no indicators (hoping-have, hoping-no) are objective evaluation indicators; the qualitative indicators are subjective indicators. For an objective evaluation indicator, no matter which expert evaluates it, the result is the same, so each such indicator only needs to be evaluated once. For a subjective evaluation indicator, the grade depends on the subjective judgment of the experts, so the same indicator usually needs to be evaluated more than once by different experts. Based on this, the specific process of the FCE designed in this paper is as follows.
(1) Evaluation grading. Select grade A, B, C, D, E to evaluate the coal mines.
(2) Evaluation of final-level indicators. For the final-level subjective evaluation indicators, invite several experts to give grade A, B, C, D or E for each indicator. For the final-level objective evaluation indicators (hoping-large, hoping-target, hoping-small, hoping-have, hoping-no), invite one or more experts to give a numeric value for each indicator according to their own expertise. Among them, for the hoping-have or hoping-no indicators, 1 or 0 should be given, where 1 represents "have" and 0 represents "no". Unlike the subjective evaluation indicators, the same objective evaluation indicator is evaluated only once.
(3) Membership vector determination of final-level indicators.
1) For a final-level subjective evaluation indicator x, apply the statistical method to determine its membership vector U(x). The specific method is as follows. Count the numbers of grades A, B, C, D, E respectively, and assign them to n_A, n_B, n_C, n_D, n_E. Let n = n_A + n_B + n_C + n_D + n_E. The membership vector U(x) is then given by Formula (1):

U(x) = (n_A/n, n_B/n, n_C/n, n_D/n, n_E/n).  (1)
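A minimal sketch of this statistical method follows; the grade list in the example is invented for illustration.

```python
from collections import Counter

def membership_from_grades(grades):
    """grades: list of expert grades, e.g. ['A', 'B', 'A', 'C', 'B'].
    Returns the frequency-based membership vector (Formula (1))."""
    counts = Counter(grades)
    n = len(grades)
    return [counts.get(g, 0) / n for g in 'ABCDE']

print(membership_from_grades(['A', 'B', 'A', 'C', 'B']))
# -> [0.4, 0.4, 0.2, 0.0, 0.0]
```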
2) For a final-level objective evaluation indicator x, determine its membership vector U(x) according to its characteristic (hoping-large, hoping-target, hoping-small, hoping-have, hoping-no) by an appropriate method. For the quantitative characteristics, a Cauchy-type membership function is applied piecewise: in one range of x, first determine U_0(x) by Formula (2), then normalize it to get U(x) according to Formula (3); when x > α_E, first determine U_0(x) by Formula (4), then normalize it to get U(x) according to Formula (3). Among them, α_1, α_2, α_3, α_4, α_5 are adjustment coefficients, whose values can be calculated by substituting the indicator values (Ah, Bh, Ch, Dh, Eh) that correspond to a membership degree of 0.5 into the membership formula; usually β = 2 (a sketch of such a Cauchy-type membership is given below). Analogously, in the remaining cases, first determine U_0(x) by Formula (5), then normalize it to get U(x) according to Formula (3); when x > α_E, first determine U_0(x) by Formula (6), then normalize it according to Formula (3). ⑤ Membership vector determination of final-level hoping-no indicators. For such an indicator x, apply the grade exchange method to determine its membership vector. Let 1 represent "have" and 0 represent "no"; when x = 0 the indicator takes grade A, and when x = 1 it takes grade E.
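The following is a hedged sketch of a Cauchy-type membership of the kind these parameters imply (membership 0.5 at the half-grade values Ah-Eh, with β = 2); it is an assumed illustration, and the exact Formulas (2)-(6) may differ.

```python
def cauchy_membership(x, center, alpha, beta=2):
    """u = 1 / (1 + alpha * (x - center)^beta); equals 0.5 when
    alpha * (x - center)^beta = 1."""
    return 1.0 / (1.0 + alpha * (x - center) ** beta)

def alpha_from_half_point(center, half_value, beta=2):
    """Solve 0.5 = 1 / (1 + a * (half_value - center)^beta) for a,
    i.e. calibrate the adjustment coefficient from a half-grade value."""
    return 1.0 / ((half_value - center) ** beta)

# Example: grade-A center 100 with membership 0.5 at Ah = 90.
a1 = alpha_from_half_point(100.0, 90.0)
print(cauchy_membership(95.0, 100.0, a1))   # 0.8, between 0.5 and 1
```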
(4) Membership vector determination of non-final-level indicators. For the non-final-level indicators, apply the weighted average fuzzy operator to determine the membership vector of each indicator from low level to high level. For a non-final-level indicator x, suppose it has k sub-indicators x_1, x_2, ..., x_{k-1}, x_k, with weights w_1, w_2, ..., w_{k-1}, w_k. Its membership vector is determined according to Formula (7):

U(x) = Σ_{i=1}^{k} w_i U(x_i).  (7)

Taking u_A(x) as an example, the component-wise calculation is given by Formula (8):

u_A(x) = Σ_{i=1}^{k} w_i u_A(x_i).  (8)
(5) Membership determination of evaluated coal mine. For the evaluated coal mine, apply the weighted average fuzzy operator to determine its membership vector U according to the first-level indicators. The determination method is the same as that of non-final-level indicators.
(6) Grade determination of each indicator and the evaluated coal mine. After the membership vectors of each indicator and the evaluated coal mine are determined, give their grades according to their maximum membership degrees. If more than one grade attains the maximum membership degree, take all such grades. For example, if the membership degrees of both grade A and grade B of an indicator equal the maximum membership value of 0.3, then the evaluation grade of the indicator is A or B, denoted as AB.
(7) Score determination of each indicator and the evaluated coal mine. In order to reflect the advantages and disadvantages of each indicator and the evaluated coal mine more intuitively, apply the weighted average method to calculate their scores on a 5-point system. Taking indicator x as an example, the score is calculated by Formula (9):

Score(x) = 5u_A(x) + 4u_B(x) + 3u_C(x) + 2u_D(x) + 1u_E(x).  (9)
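A compact sketch of steps (4)-(7), assuming the membership vectors are stored as length-5 arrays; the helper names rollup, grade_of and score_of are illustrative.

```python
import numpy as np

def rollup(memberships, weights):
    """Weighted-average fuzzy operator (Formulas (7)-(8)).
    memberships: (k, 5) array of sub-indicator vectors U(x1..xk)."""
    return np.asarray(weights) @ np.asarray(memberships)

def grade_of(U):
    """Maximum-membership grading; ties yield concatenated grades (e.g. AB)."""
    U = np.asarray(U)
    return ''.join(g for g, u in zip('ABCDE', U) if u == U.max())

def score_of(U):
    """5-point score (Formula (9))."""
    return float(np.dot(U, [5, 4, 3, 2, 1]))

U = rollup([[0.3, 0.3, 0.4, 0, 0], [0.5, 0.2, 0.3, 0, 0]], [0.6, 0.4])
print(U, grade_of(U), score_of(U))   # [0.38 0.26 0.36 0. 0.] A 4.02
```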
Design of online evaluation system
It can be seen that the calculation amount of the above evaluation method is large. A manual or stand-alone evaluation method can hardly meet the needs of the Coal group in evaluating and comparing the comprehensive level of the coal mines under its jurisdiction. In order to improve evaluation efficiency, ensure the accuracy of the calculation results, enable sharing of the evaluation results and realize comparison of the evaluation results, an online evaluation system for coal mine comprehensive level based on FCE is designed and developed, taking SQL Server as the database management system and C#.NET as the development language.
Function planning
There are three kinds of identities in the system: Group administrator, Mine administrator and Expert.
Online evaluation process design
1. Preparations: The Group administrator logs in to the system. Add evaluation experts through the "User manage" module. Establish one or more evaluation indicator systems through the "Indicator system" module, which includes indicator system name management, indicator management, objective indicator setting and indicator system inspection. Add evaluated coal mines through the "Coal mine manage" module. Add evaluation tasks through the "Task manage" module, which includes specifying the evaluated coal mine, specifying the evaluation indicator system, and setting the evaluation start time and end time. Assign evaluation tasks to experts through the "Assign tasks" module. Among them, the "indicator management" module provides a tree-shaped form to add or modify indicators and set their weights; the "objective indicator setting" module is used to set the specific parameters of the objective indicators, including characteristic, target, αA, αB, αC, αD, αE, Ah, Bh, Ch, Dh, Eh, etc.; the "indicator system inspection" module is used to check whether the sum of the weights of the sub-indicators of each non-final-level indicator is 1, and whether the parameters of each objective indicator meet the requirements. If the result of the inspection is "Y", the indicator system is effective; otherwise, it is invalid. Only "effective" indicator systems can be used in the "Task manage" module, so as to ensure the effectiveness of the evaluation.
2. Evaluation: After the preparations, the Expert logs in the system. View the list of evaluation tasks assigned by the Group administrator through the "My evaluation task" module. Enter the evaluation page to evaluate the finial-level indicators through the "objective indicator evaluation" and "subjective indicator evaluation" sub-module. For each final-level subjective evaluation indicator, check its explanation, choose grade A, B, C, D or E according to his own subjective judgment. For the final-level objective evaluation indicator, check its explanation, accurately input the numeric value of the indicator as required. In particular, for the hoping-have or hoping-no indicator, 1 or 0 should be input, where 1 represents "have" and 0 represents "no".
3. Evaluation data management: After collecting the evaluation data, the Group administrator logs in to the system again. Set the status of the evaluation task to "over" through the "Task manage" module to prohibit further evaluation. View the evaluation data through the "Evaluation data" module. If the evaluation data are insufficient, incomplete, or obviously unreasonable, he can reset the status of the evaluation task to "not over" through the "Task manage" module, urge the relevant experts to supplement or modify the evaluation data, or assign the evaluation tasks to other experts through the "Assign tasks" module. This process is repeated until the evaluation data collected are sufficient, complete and reasonable. Then, set the status of the evaluation task to "over".
Database design
Taking SQL Server as the database management system and following the standardized design principle, the database of the evaluation system is designed.
(1) Database structure design. The E-R diagram of the database is shown in Fig 6. The update rule between each primary table and its child table is set to "cascade" so as to guarantee data integrity; the delete rule is set to "do nothing" to guarantee data security.
The table "Coal mine and group" used to store the information of coal mines and Coal groups. The field type has two kinds of value which are "Coal mine" and "Coal group".
The table "User" is used to store the users of the system. The default value of the field "Permission" is "available". If a user leaves or cannot continue to use the system for some reasons, set "disable" to it. There are three identities for the users in the system, which are Group administrator, Mine administrator and Expert. Among them, the user of Group administrator and Expert comes from one of the Group companies, while the user of Mine administrator comes from one of the coal mines.
The table "Indicator system" is used to store information of the indicator systems. The default value of field "Status" is "unchecked". After system inspection, if an indicator system is valid, set "valid" to it, otherwise set "invalid" to it.
The table "Indicator" is used to store information of the indicators. It has a special structure. When adding a first-level indicator to an indicator system through the system, set 0 to the field "Parent indicator No.". When adding a second-level indicator to an indicator with the Indicator No. x, set x to the field "Parent indicator No.". When adding a third-level indicator to a second-level indicator with the Indicator No. y, set y to the field "Parent indicator No.". And so on. It can be seen that each indicator should be set a field "Indicator system name". In the system, a first-level indicator is led out by the system name, the second-level indicators of each first-level indicator is led out by the field "Parent indicator No.". And so on. For each firstlevel indicator, an inverted tree can be established through "recursive process". All inverted trees for all of the first-level indicators can form a tree-shaped indicator system. By the special structure, the table "Indicator" can store contents of all indicators of infinite levels. The default
PLOS ONE
value of the field "Final level" is "Y". When adding a sub-indicator to a parent indicator, set "N" to the field of the parent indicator. The value of the field "Class" is Subjective, Objective or "", which should be specified by the user through the system. If an indicator is a final-level indicator, set "subjective" or "objective" to it, otherwise set "subjective" or "objective" or "" to it according to the specific condition. The field "Feature" is only specific for the final-level objective indicators. For the final-level objective indicators, set one of "hoping-large", "hoping-target", "hoping-small", "hoping-have", "hoping-no" to it, otherwise set NULL to it. The field "Target" is only specific for the final-level objective hoping-target indicators, that means for other indicators, set NULL to it. The fields "αA", "αB", "αC", "αD", "αE", "Ah", "Bh", "Ch", "Dh", "Eh" are only specific for the final-level objective indicators of hoping-large, hoping-target or hoping-small type. These fields are used to calculation the adjustment coefficients α1, α2, α3, α4, α5. The fields "α1", "α2", "α3", "α4", "α5" are the parameters needed to determine the membership degree of an objective indicator of hoping-large, hoping-target or hopingsmall type by the Cauchy membership function. The table "Evaluation task" is used to store information of the evaluation tasks. Among them, the fields "Coal mine name", "Indicator system name" and "Start time" are defined as the unique index to prevent the same record from being input repeatedly. The default value of the field "status" is "not over". When the evaluation is over, set "over" to it. The default value of the field "Summary mark" is "N". When the evaluation data is summarized, set "Y" to it.
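The recursive tree construction described above can be illustrated as follows; the row tuples stand in for a query over the table "Indicator", and the indicator names are invented.

```python
def build_tree(rows, parent_no=0, depth=0):
    """rows: list of (indicator_no, parent_no, name) tuples. Prints the
    inverted trees rooted at the first-level indicators (parent_no == 0)."""
    for no, parent, name in rows:
        if parent == parent_no:
            print('  ' * depth + name)
            build_tree(rows, parent_no=no, depth=depth + 1)

rows = [(1, 0, 'Production'), (2, 1, 'Raw coal output'),
        (3, 1, 'Recovery ratio'), (4, 0, 'Safety'),
        (5, 4, 'Mortality rate')]
build_tree(rows)
```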
The table "Assigning of evaluation tasks" is used to assign evaluation tasks to evaluation experts. Where, the field "Class" takes the value of "Objective" or "Subjective".
The table "Evaluation data of final-level objective indicators" is used to store the evaluation data of final-level objective indicators. The fields "Indicator No." and "Task No." are defined as the compound primary key so as to ensure that the same final-level objective indicator of the same evaluation task is only evaluated once. The type of field "Evaluation value" is defined as "real" to ensure that evaluation value of the objective indicators of "hoping-large", "hoping-target", "hoping-small", "hoping-have" or "hoping-no" type can be stored by it.
The table "Evaluation data of final-level subjective indicators" is used to store the evaluation data of final-level subjective indicators. Different from table "Evaluation data of final-level objective indicators", here the fields "Indicator No.", "Task No." and "Expert" are defined as the compound primary key. By this means, the same subjective indicator of the same evaluation task can be evaluated more than once. The value of field "Evaluation grade " is one of "A", "B", "C", "D", and "E".
The table "Summary result of final-level indicators" is used to store the membership degree, grade and score of the indicators obtained by evaluation summary. Among them, fields "A", "B", "C", "D" and "E" are used to store the membership degree of the indicators; field "Grade" is used to store the grade of the indicators determined according to the membership degree of the indicators; field "Score" stores the score calculated by the 5-point system according to the membership degree of the indicators.
The table "Overall evaluation result" is used to store the general evaluation result of each coal mine. The role of each field is the same as that of the table "Summary result of final-level indicators". The relation between this table and table "Evaluation task" is one-to-one. In theory, they can be merged into one table. However, from the process of the system, it makes more sense to design them separately.
(3) Stored procedure design. It can be seen from the evaluation process described in Section 4.2 that the work with the largest amount of calculation in the system is the summary of the evaluation results. In order to simplify the front-end program, a stored procedure named "Summary by Task No." with the parameter "@taskno" is designed to realize the evaluation summary. It first uses a cursor and a while loop to determine the membership vector, grade and score of the final-level indicators from the tables "Evaluation data of final-level objective indicators" and "Evaluation data of final-level subjective indicators" for the evaluation task "@taskno", following step (3) described in Section 3; it then determines the membership, grade and score of the non-final-level indicators and of the evaluated coal mine from low level to high level; finally, it stores them in the data tables "Summary result of indicators" and "Overall evaluation result". The specific code is shown in S3 Appendix.
Program design
Taking C#.NET as the development language, an online evaluation system of coal mine comprehensive level based on FCE is designed and developed. The specific design is not described in this paper. Within it, the program calls the stored procedure "Summary by Task No." to obtain the summary result of the evaluation.
Case study
Taking Coal group PM as an example, five coal mines under its jurisdiction are evaluated for testing. Fig 7 is the main interface of the Group administrator (username: lq). The main interfaces of the Mine administrator (username: zw) and the Expert (username: lxs) are not shown in this paper.
In the interface of "Indicator system management" shown in Fig 8, the Group administrator can add or delete the indicator system, set contents of indicators for the indicator system, set parameters for the final-level objective indicators, and check the indicator system. In order to ensure the effectiveness of the evaluation, before modifying or deleting the indicator system, the trigger designed in the database will check whether the indicator system has been used to evaluate any coal mines. Once used, modification or deletion is not allowed.
In the interface of "Indicator management" shown in Fig 9, the Group administrator can click on the indicator system name to add first-level indicators for it, or click on any indicator to add sub-indicators for it, modify it or delete it.
In the interface of "Indicator system management" shown in Fig 8, click the button "Obset" to enter the interface as shown in Fig 10. This interface lists all of the final-level objective 1 Indicator_insert Calculate the fields "α1", "α2", "α3", "α4", α5" by the fields "αA", αB", "αC", "αD", "αE", "Ah", Bh", "Ch", "Dh", "Eh" when an indicator is inserted and the field "Feature" is hoping-large, hoping-target or hoping-small S1 Appendix indicators of the indicator system. Click the button "Set" in column 1 to enter the interface shown in Fig 11. In this interface, the parameters of the objective evaluation indicator can be set. Fig 11 shows the parameter setting interface with the Indicator No. 856.
In the interface of "Assigning tasks to experts" shown in Fig 12, the Group administrator can assign tasks to evaluation experts. It should be pointed out that because each evaluation expert has his own expertise, it is not necessary for each evaluation expert to evaluate all of the objective indicators and subjective indicators, but to make reasonable arrangements according to their expertise. For example, some evaluation experts are responsible for evaluation of the objective indicators, and some experts are responsible for evaluation of the subjective evaluation indicators, and some experts are responsible for evaluation of both objective and subjective indicators.
In the interface of "Evaluation of final-level objectvie indicators" shown in Fig 13, the Expert can select the objective indicators he is familiar with and input or modify the evaluation value of these indicators according to their explanation. It can be seen that current Expert cannot modify or delete evaluation data given by other Experts.
In the evaluation interface of "Evaluation of final-level subjective indicators" shown in Fig 14, the Expert can choose the familiar subjective evaluation indicators to give their grades.
In the interface of "Evaluation data of final-level objective indicators" shown in Fig 15, the Group administrator can view the evaluation data of final-level objective indicators given by all of the Experts. On the one hand, he can check whether the evaluation data are complete. On the other hand, he can check whether the evaluation data are reasonable, and give some human intervention when necessary. In the interface of "Evaluation data of final-level subjective indicators" shown in Fig 16, the Group administrator can view the evaluation data of final-level subjective indicators given by all of the Experts.
In the interface of "Evaluation summary" shown in Fig 17, the Group administrator can summarize each evaluation task in turn. For example, click the Summary button of Task No. 16, the system calls the stored procedure "Summary by Task No." with 16 as the parameter. Wait a moment, the Summary is completed, and the summary results are stored into the data tables "Summary result of indicators" and "Overall evaluation result".
In the interface of "Membership evaluation result" shown in Fig 18, the Group administrator can view the membership degree of each indicator or the evaluated coal mine. He can also click the first row to view the membership degrees of each first-level indicators, or click the non-final-level indicator to view the membership degrees of its sub-indicators.
In the interface of "Score evaluation result" shown in Fig 19, the Group administrator can view the score of each indicator or the evaluated coal mine. He can also click the first row to adminstrator can easily seen the advantages and disadvantages of each coal mine under its jurisdiction.
Conclusion and prospect
Aimed at coal industry groups, an online evaluation method of coal mine comprehensive level based on FCE is put forward. The research conclusions are as follows.
1. A multi-level evaluation indicator system for the comprehensive level of coal mines is established, covering research and development, production, sales, finance, safety and management. Only from such a systematic and comprehensive perspective can the comprehensive level of coal mines be evaluated, which ensures the comprehensiveness of the evaluation results.
2. For the final-level objective evaluation indicators (hoping-large, hoping-small, hoping-target, hoping-have, hoping-no) and subjective evaluation indicators, appropriate methods are adopted to determine their membership vectors, and then FCE is adopted to evaluate the coal mines, which expands the application scope of the evaluation method.
3. The online evaluation system of coal mine comprehensive level designed in this paper enables convenient and efficient evaluation, summary and comparison of coal mines.

4. By comparing the evaluation results of different coal mines and of each indicator in the same period (horizontal comparison), the advantages and disadvantages of each coal mine can be seen. For the disadvantages, the Coal group can urge the coal mine to improve in time, so as to improve the competitiveness of the coal mine and the Coal group.
5. The method proposed in this paper can not only be used to evaluate coal mines, but also to evaluate similar enterprises or organizations after a little modification.
It should be pointed out that the method proposed in this paper is suitable for the evaluation of subjective indicators or a combination of subjective and objective indicators. If all of the evaluation indicators are objective, other accurate quantitative methods may be more suitable, such as the Entropy method, the FA method and the TOPSIS method.
Although the method proposed in this paper has realized the online evaluation of the comprehensive level of coal mines based on FCE, there are two research directions in the next step. One is to give improvement measures on the basis of the evaluation to make the evaluation system more intelligent. The second is to develop mobile online evaluation system (phone APP) so as to make the evaluation more convenient. | 6,279 | 2021-08-16T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Characterizing Gene and Protein Crosstalks in Subjects at Risk of Developing Alzheimer's Disease: A New Computational Approach
Alzheimer's disease (AD) is a major public health threat; however, despite decades of research, the disease mechanisms are not completely understood, and there is a significant dearth of predictive biomarkers. The availability of systems biology approaches has opened new avenues for understanding disease mechanisms at a pathway level. However, to the best of our knowledge, no prior study has characterized the nature of pathway crosstalks in AD, or examined their utility as biomarkers for diagnosis or prognosis. In this paper, we build the first computational crosstalk model of AD incorporating genetics, antecedent knowledge, and biomarkers from a national study to create a generic pathway crosstalk reference map and to characterize the nature of genetic and protein pathway crosstalks in mild cognitive impairment (MCI) subjects. We perform initial studies of the utility of incorporating these crosstalks as biomarkers for assessing the risk of MCI progression to AD dementia. Our analysis identified Single Nucleotide Polymorphism-enriched pathways representing six of the seven Kyoto Encyclopedia of Genes and Genomes pathway categories. Integrating pathway crosstalks as a predictor improved the accuracy by 11.7% compared to standard clinical parameters and apolipoprotein E ε4 status alone. Our findings highlight the importance of moving beyond discrete biomarkers to studying interactions among complex biological pathways.
Introduction
The prognosis of diseases such as Alzheimer's disease (AD) is of national importance. AD alone affects about 10% of the population over 65 years old [1,2], and is among the leading causes of death in patients over 75 years of age in the U.S. [3]. There is evidence suggesting that the progression to AD dementia begins years before it is clinically determined and is preceded by a phase of mild cognitive impairment (MCI), during which AD-related treatments are likely to be more effective. Thus, it is important to discover the mechanisms underlying the risk of AD and to develop accurate biomarkers that reflect the complexity of the disease at an individual level. Although a number of biomarkers are currently being evaluated for use in predicting AD or studying disease progression (e.g., tau, p-tau181P, β-amyloid1-42, apolipoprotein E ε4 (APOE ε4), and microRNAs) [4][5][6][7], none of these markers is yet fully validated or approved for predicting the risk of AD. Indeed, AD is no longer seen as a disease of single discrete lesions, but as a perturbation of altered cortical networks by pathological processes in interlinked pathways. Hence, the application of systems biology methods to the discovery and characterization of novel biomarkers [8][9][10][11][12][13][14][15][16][17][18][19][20] has taken on greater promise and urgency.
The cellular mechanisms underlying many neurological disorders are complex, with crosstalks between multiple molecular pathways likely contributing to disease initiation and progression. In living organisms, pathways are said to crosstalk if they are linked together to perform biological functions as a system. Crosstalks can also be defined as interactions between signal transduction pathways, and usually take the form of protein or transmembrane interactions. A number of potential crosstalks have been noted in vitro in AD, such as those between amyloid and tau pathways, oxidative phosphorylation, the p53 signaling pathway, and apoptosis [21][22][23]. Another example is the reported crosstalk among MAPK, insulin, and calcium signaling pathways [24]. There is also evidence of crosstalk among pathways involved in the regulation of glycolysis metabolism, pathways involved in the regulation of the actin cytoskeleton, and apoptosis [24]. The latter crosstalk is also associated with other neurodegenerative disorders, such as Huntington disease and amyotrophic lateral sclerosis [24]. Furthermore, the cellular signaling pathways in AD have been reported, such as Wnt signaling, 5' adenosine monophosphate-activated protein kinase, mammalian target of rapamycin, Sirtuin 1, and peroxisome proliferator-activated receptor gamma co-activator 1-α, and possible crosstalk between these pathways has been discussed [25]. For a review of multiple interacting pathways in neurodegenerative disease, see [26]. In clinical AD research studies of diagnosis or prognosis, biomarkers are typically treated as discrete entities, in part because biological pathway crosstalks between genes or proteins have not yet been fully characterized at a systems biology level in AD.
From the computational methodology standpoint, the study of pathway crosstalks is still in its infancy. Existing methods predict crosstalks between known metabolic pathways using chemical protein interaction networks [24,[27][28][29]]. However, these computational methods do not take advantage of the different types of evidence available, such as chemical evidence (e.g., direct binding), biochemical evidence (e.g., phosphorylation), and functional evidence (e.g., transcriptional regulation). Moreover, the discovery, characterization, and utilization of pathway crosstalks as biomarkers for disease prognosis has not been investigated.
Here, we use clinical, cognitive, and genetic data from a national cohort study, the Alzheimer's Disease Neuroimaging Initiative (ADNI-1), along with a systematic computational methodology to discover and characterize biological pathway crosstalks in subjects with MCI. We further examine the utility of these novel biomarkers to discriminate stable MCI from those who progress to AD dementia. The first part of the methodology (Figure 1) focuses on utilizing several existing lines of evidence, such as chemical interaction, genetic interaction, domain interaction, and transcription factors, to identify potential pathway crosstalks. In the second part (Figure 2), Single Nucleotide Polymorphisms (SNPs) are used to find patient-specific pathway crosstalks as biomarkers. In the third part, we build and test initial prognostic models that use pathway crosstalks as biomarkers to predict patient progression from MCI to AD dementia (see Results). To the best of our knowledge, this is the first such systematic characterization of biological pathway crosstalk biomarkers associated with the risk of AD.
Figure 2. Identification of patient-specific pathway crosstalks. The methodology has three steps: (1) mapping the Single Nucleotide Polymorphisms (SNPs) to genes and in turn to pathways using the SNP and gene location information; (2) choosing a genetic model and calculating a patient-specific SNP enrichment score for each pathway using the patient's allele information; and (3) overlaying the pathway enrichment scores on the reference crosstalk map to build patient-specific pathway crosstalk maps.
Materials and Methods
Our methodology consists of the following steps: (A) identifying potential pathway crosstalks by using existing gene and protein data (Figure 1), (B) identifying patient-specific pathway crosstalks via SNP information (Figure 2), and (C) identifying significant pathway crosstalks as biomarkers for MCI progression to AD dementia progression prediction.
Identification of Potential Pathway Crosstalks
We quantify how likely it is that a pair of pathways will crosstalk based on biological datasets that provide evidence for possible crosstalks (including chemical interaction, genetic interaction, and transcription factors). To obtain a more robust pathway crosstalk map, we incorporate a wide array of evidence. The scores from each of these evidence sources are then combined to build one generic pathway crosstalk reference map, analogous to the "Kyoto Encyclopedia of Genes and Genomes" (KEGG) pathway reference map.
The likelihood of a pathway pair crosstalking can be scored using one of two methods. The first is based on the presence of common elements, such as kinases and enzymes. The second is based on the presence of interacting elements, such as chemically interacting proteins. In the following sections, we discuss the different types of evidence used and their corresponding scoring methods.
Scoring Pathway Crosstalks Based on Common Elements
The pathway pairs were scored for how likely they are to crosstalk based on common elements from each of the following types of evidence:

• Shared enzymes and metabolites: The number of enzymes and metabolites shared by a pair of pathways is utilized as one type of evidence to identify potential pathway crosstalks. This is reasonable because a variation in the concentration of common enzymes or metabolites will affect both pathways.

• Phosphorylation: Phosphorylation, performed by protein kinases, is the addition of a phosphate group to a protein, which results in a change of the protein's function. Co-phosphorylated proteins in different pathways suggest potential pathway crosstalks.

• Transcriptional regulation: Genes with common transcription factors are likely coexpressed. Coexpressed genes in different pathways provide an avenue for the pathways to crosstalk. For each pathway pair, we find the group of transcription factors that have coexpressed genes in both pathways.
For each pair of pathways, P_i and P_j, we define the scoring function as Equation (1):

Score_common(P_i, P_j) = |Y(P_i) ∩ Y(P_j)|    (1)

where Y(P_i) is the set of proteins (enzymes, metabolites, transcription factors, kinases) associated with pathway P_i.
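To make the common-element score concrete, a minimal Python sketch follows; the set-based pathway representation and the gene symbols are illustrative assumptions rather than code from the study.

```python
# Minimal sketch of the common-element crosstalk score (Equation (1)).
# A pathway is represented as a set of element identifiers (enzymes,
# metabolites, kinases, transcription factors); this representation
# is an assumption made for illustration.

def common_element_score(pathway_i: set, pathway_j: set) -> int:
    """Score a pathway pair by the number of elements they share."""
    return len(pathway_i & pathway_j)

# Illustrative (made-up) identifiers:
p_i = {"HK1", "PFKM", "TP53"}
p_j = {"PFKM", "TP53", "AKT1"}
print(common_element_score(p_i, p_j))  # prints 2
```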
Scoring Pathway Crosstalks Based on Interacting Elements
The pathway pairs were scored for how likely they are to crosstalk based on interacting elements from each of the following types of evidence:

• Chemical interactions: Protein interactions have previously been used to identify pathway crosstalks [24,30]. Chemical interaction between proteins belonging to different pathways provides a mechanism for pathways to crosstalk.

• Genetic interactions: The use of genetic interactions for identifying pathway crosstalks stems from the concept of "between-pathway" interactions, which essentially states that if there is a genetic interaction between pathways, one pathway covers for defects in the other.

• Protein domain: Protein function is closely related to fundamental units of protein structure called "domains". In the domain interaction network, a pair of proteins has an edge if they are associated with the same set of protein domains. Because of the common domains, these edges are taken into consideration when assessing potential pathway crosstalks.

• Synthetically lethal gene pairs: Gene pairs whose simultaneous low or non-expression can cause the organism to die are called synthetically lethal pairs [31,32]. The presence of synthetically lethal pairs of genes across two pathways is a possible sign of pathway crosstalks.
For each pair of pathways, P_i and P_j, we define the scoring function as Equation (2):

Score_inter(P_i, P_j) = N_inter(P_i, P_j)    (2)

where N_inter(P_i, P_j) is the number of interactions (genetic, chemical, domain, synthetically lethal) that exist among the proteins associated with pathway P_i and the proteins associated with pathway P_j.
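A corresponding sketch for the interaction-based score follows; the edge-list representation of an evidence network is an assumption made for illustration.

```python
# Minimal sketch of the interacting-element score (Equation (2)).
# `interactions` is an iterable of (protein_a, protein_b) edges from a
# single evidence network (chemical, genetic, domain, or synthetic
# lethality); this input format is assumed for illustration.

def interacting_element_score(pathway_i: set, pathway_j: set,
                              interactions) -> int:
    """Count interaction edges that bridge the two pathways."""
    count = 0
    for a, b in interactions:
        # An edge contributes if its endpoints lie in different pathways.
        if (a in pathway_i and b in pathway_j) or \
           (a in pathway_j and b in pathway_i):
            count += 1
    return count
```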
Significance Estimation of Pathway Crosstalk Scores
Estimating p-values using Monte Carlo methods [33] is a robust technique for statistical significance assessment. This technique was utilized to assess the significance of the scores obtained for the pathway crosstalks using the different types of evidence, as follows (a minimal code sketch follows the list):

1. For each pair of pathways, a score for how likely they are to crosstalk is calculated based on each type of evidence.
2. Each pathway is randomized by replacing all proteins in that pathway with randomly selected proteins from the set of all proteins in the organism. This pathway randomization step is repeated W = 1000 times, i.e., we obtain W sets of pathways with randomized proteins.
3. The evidence-specific scores for each pathway pair are recalculated W times using each set of pathways with randomized proteins.
4. An evidence-specific p-value is estimated for each pathway pair as R/W, where R is the number of randomized versions of that pathway pair that produce an evidence-specific score greater than or equal to the score obtained for the original pathway pair.
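The sketch below shows the randomization test for a score function with the common-element signature; the pathway representation and W = 1000 follow the text, everything else is illustrative.

```python
import random

def monte_carlo_p_value(pathway_i, pathway_j, all_proteins,
                        score_fn, W=1000):
    """Estimate an evidence-specific p-value by pathway randomization.

    Each pathway is replaced W times by an equally sized random draw
    from the organism's full protein set; the p-value is R/W, the
    fraction of randomized pairs scoring at least as high as the
    observed pair. (Interaction-based scores would additionally need
    the evidence network passed through to score_fn.)
    """
    observed = score_fn(pathway_i, pathway_j)
    proteins = sorted(all_proteins)
    r = 0
    for _ in range(W):
        rand_i = set(random.sample(proteins, len(pathway_i)))
        rand_j = set(random.sample(proteins, len(pathway_j)))
        if score_fn(rand_i, rand_j) >= observed:
            r += 1
    return r / W
```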
Combining the Scores for Each Pathway Crosstalk
For each pathway pair, we combine the evidence-specific p-values obtained using Monte Carlo methods. This gives a combined estimate of the crosstalk likelihood between the pathway pair. To combine the p-values, we use the QFAST information fusion methodology proposed by Bailey and Gribskov [34], which is based on a theorem by Feller [35]. The QFAST methodology uses the product of the individual p-values as a test statistic to calculate the combined p-value; using the product of p-values as a test statistic has been shown to be a desirable method for information fusion [34]. One issue to consider is that some pathway pairs may not be scored by some of the evidence due to missing data. For those cases, we assign a p-value of 1 to denote that the particular evidence offers no information about those pathways crosstalking. The QFAST formula to calculate the combined p-value is Equation (3):

P_combined = p Σ_{i=0}^{n−1} (−ln p)^i / i!,  with p = P_1 P_2 ... P_n    (3)

where P_i is the p-value obtained for evidence i, and n is the number of types of evidence.
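A direct transcription of the QFAST formula into Python might look as follows; the guard against numerical underflow is an added assumption, not part of the published method.

```python
import math

def qfast_combined_p(p_values):
    """Combine n independent p-values with the QFAST formula:
    for p = product of the p-values, the combined p-value is
    p * sum_{i=0}^{n-1} (-ln p)^i / i!. Missing evidence is encoded
    as a p-value of 1, which leaves the product unchanged, matching
    the convention described in the text."""
    n = len(p_values)
    p = math.prod(p_values)
    if p == 0.0:  # guard against log(0) after floating-point underflow
        return 0.0
    neg_log_p = -math.log(p)
    return p * sum(neg_log_p ** i / math.factorial(i) for i in range(n))

print(qfast_combined_p([0.01, 0.2, 1.0]))  # three evidence p-values
```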
A generic pathway crosstalk reference map is then built as a network, where the nodes represent pathways and the edges represent a statistically significant combined p-value for crosstalk likelihood between a pathway pair (at a significance level of α = 0.01).
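As a sketch of this construction step, the reference map can be assembled as a graph; the use of networkx and the dictionary input format are assumptions for illustration.

```python
import networkx as nx

def build_reference_map(combined_p, alpha=0.01):
    """Build the generic crosstalk reference map: nodes are pathways,
    and an edge is added for each pathway pair whose combined p-value
    is significant at level alpha.

    combined_p: {(pathway_i, pathway_j): combined_p_value}
    """
    g = nx.Graph()
    for (p_i, p_j), p in combined_p.items():
        g.add_node(p_i)
        g.add_node(p_j)
        if p < alpha:
            g.add_edge(p_i, p_j, p_value=p)
    return g
```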
Identification of Patient-Specific Pathway Crosstalks
To determine which of the pathway crosstalks in the generic reference map may be utilized as biomarkers for progression from MCI to AD dementia, we identify patient-specific pathway crosstalks. For this purpose, we make use of SNP data. SNPs are variations in the deoxyribonucleic acid (DNA) sequence at particular locations, which can influence phenotypes such as proneness to disease or reaction to drugs. Initiatives such as the ADNI collect patient-specific SNP information. We utilize this information to identify patient-specific pathway crosstalks via the following four steps (Figure 2):

1. Obtain a mapping of SNPs to pathways using genetic information.
2. Identify the list of SNPs that are present in a patient.
3. Use the mapping obtained in Step 1 and the patient-specific SNP list from Step 2 to obtain the pathways that are "SNP-enriched" in the patient.
4. Use the "SNP-enriched" pathways from Step 3 to obtain patient-specific pathway crosstalks.
Obtain a Mapping of SNPs to Pathways
Every SNP is assigned a chromosome number and a location on the genome, which can be used to map SNPs to genes and, in turn, SNPs to pathways. Starting with a list of all genes that map to at least one pathway, we assign an SNP to a gene if it is present within a distance of 10 kilo base pairs (kbp) upstream or downstream of that gene. This method has been previously used by Silver et al. [36,37]. Note that since SNPs are mapped to all genes within a range of 10 kbp, the same SNP may be mapped to more than one gene. The set of SNPs assigned to a pathway is the union of all SNPs assigned to the genes of that pathway.
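A naive sketch of this positional assignment is shown below; the coordinate dictionaries are assumed input formats, and a production implementation would use an interval index rather than a linear scan.

```python
WINDOW = 10_000  # 10 kbp up- and downstream, as described in the text

def map_snps_to_genes(snps, genes):
    """Map each SNP to every gene whose padded span contains it.

    snps:  {snp_id: (chromosome, position)}
    genes: {gene_id: (chromosome, start, end)}
    Returns {snp_id: set of gene_ids}; note that one SNP may map to
    more than one gene, as stated in the text.
    """
    mapping = {}
    for snp_id, (chrom, pos) in snps.items():
        hits = {g for g, (c, start, end) in genes.items()
                if c == chrom and start - WINDOW <= pos <= end + WINDOW}
        if hits:
            mapping[snp_id] = hits
    return mapping
```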
Identify Patient-Specific SNPs That Are Present
For each patient, we identify a list of SNPs that are present based on the homozygous minor (recessive) genetic model. This genetic model requires a minor allele count of 2 for an SNP to be considered present, i.e., the minor allele is inherited from both parents.
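In code, the recessive model reduces to a simple filter on minor allele counts; the genotype encoding is an assumed input format.

```python
def present_snps(genotypes):
    """Homozygous minor (recessive) model: `genotypes` maps
    snp_id -> minor allele count (0, 1, or 2); an SNP is considered
    present only when the count is 2 (both alleles minor)."""
    return {snp for snp, count in genotypes.items() if count == 2}
```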
Identify Patient-Specific SNP-Enriched Pathways
Given the set of SNPs assigned to a pathway, SNP_pathway, the set of SNPs that are present in a patient, SNP_patient, and the set of SNPs of interest, SNP_interest, we define an enrichment score for this pathway and patient as Equation (4), where SNP_interest is the set of all SNPs found on the human genome or a set of relevant SNPs from the scientific literature.
A p-value for the enrichment score is calculated using Monte Carlo methods, as discussed previously. The "SNP-enriched" pathways for each patient are then defined as the pathways with a statistically significant p-value for that patient (at a significance level of α = 0.05).
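Since the published form of Equation (4) did not survive extraction, the sketch below encodes one plausible reading, the fraction of a pathway's SNPs of interest that are present in the patient; treat the normalization as an assumption.

```python
def enrichment_score(snp_pathway, snp_patient, snp_interest):
    """Hypothetical reading of Equation (4): of the pathway's SNPs
    that are also SNPs of interest, the fraction present in the
    patient. The exact published normalization is not recoverable
    here, so this form is an assumption."""
    relevant = snp_pathway & snp_interest
    if not relevant:
        return 0.0
    return len(relevant & snp_patient) / len(relevant)
```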
Identify Patient-Specific Pathway Crosstalks
Given the SNP-enriched pathways for each patient, we build patient-specific pathway crosstalk maps from the generic pathway crosstalk reference map, analogous to building organism-specific pathway maps from the KEGG pathway reference map. A pathway crosstalk, i.e., an edge in the patient-specific map, is present if both pathways are SNP-enriched for that patient.
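This projection step is small enough to show in full; it assumes the networkx reference map from the earlier sketch.

```python
def patient_crosstalk_map(reference_map, enriched_pathways):
    """Keep an edge of the generic reference map only if both of its
    endpoint pathways are SNP-enriched for the patient, i.e., take
    the subgraph induced by the enriched pathways."""
    nodes = [n for n in reference_map.nodes if n in enriched_pathways]
    return reference_map.subgraph(nodes).copy()
```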
Identification of Biased Pathway Crosstalk
The pathways and patient-specific pathway crosstalks that are biased towards MCI progressive patients or MCI non-progressive patients (at a significance level of α = 0.01) are incorporated as features into the model for predicting progression from MCI to AD dementia. The bias of an active pathway crosstalk towards MCI progressive patients is quantified using the hypergeometric test (Equation (5)):

φ(n, x, v, w) = Σ_{i=w}^{min(x,v)} C(x, i) C(n−x, v−i) / C(n, v)    (5)

where:

• Population: n is the total number of patients.
• Success in population: x is the total number of MCI progressive patients and y is the number of MCI non-progressive patients.
• Sample: v is the total number of patients (both MCI progressive and MCI non-progressive) that a pathway crosstalk is enriched in.
• Success in sample: w is the number of MCI progressive patients and z is the number of MCI non-progressive patients the pathway crosstalk is enriched in.
Similarly, the bias of an active pathway crosstalk towards MCI non-progressive patients can be calculated via φ(n, y, v, z).
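The tail probability φ can be computed with scipy's hypergeometric distribution; the parameter mapping below follows the variable definitions in the list above and is a sketch, not the study's code.

```python
from scipy.stats import hypergeom

def crosstalk_bias_p(n, x, v, w):
    """Upper-tail hypergeometric probability for Equation (5): the
    probability of seeing w or more MCI progressive patients among
    the v patients a crosstalk is enriched in, given x progressive
    patients in a cohort of n. scipy parameterizes the distribution
    as (M=population size, n=successes in population, N=sample size),
    and sf(w - 1, ...) gives P(X >= w)."""
    return hypergeom.sf(w - 1, n, x, v)

# Bias towards non-progressive patients, as in the text:
# crosstalk_bias_p(n, y, v, z)
```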
Datasets
In this study, we utilize cellular subsystems that model biological pathways. Henceforth, we will refer to a cellular subsystem as a pathway. To create a potential pathway crosstalk reference map, we used cellular pathway data from the KEGG database [38][39][40]. We obtained evidence for human chemical interaction, genetic interaction, and synthetic lethal gene pairs from BioGRID [41], domain interaction from GeneMania [42], transcription factors from the FANTOM database [43,44], and protein phosphorylation data [45]. We obtained SNPs associated with genes that were manually curated to be associated with AD from the Comparative Toxicogenomics Database [46], and we obtained a compilation of genes from the literature that have been identified as likely risk factors of AD from SNPedia [47]. This information was utilized as our biologically meaningful knowledge priors. Some of the genes associated with Alzheimer's that were used in this study can be found in Table 1.
Table 1. Some of the genes associated with Alzheimer's disease (AD) that were used in this study.

APP (amyloid beta (A4) precursor protein): Mutations in this gene have been implicated in autosomal dominant AD and cerebroarterial amyloidosis (NCBI Entrez Gene).
IL-1β: Four new genetic studies underscore the relevance of IL-1 to Alzheimer's pathogenesis, showing that homozygosity of a specific polymorphism in the IL-1α gene at least triples Alzheimer's risk, especially for an earlier age of onset and in combination with homozygosity for another polymorphism in the IL-1β gene [48].
SOD2: A polymorphism in SOD2 is associated with development of AD [49].
NOS3: NOS3 may be a new genetic risk factor of late onset AD [50].

The data used in the preparation of this manuscript were obtained from the ADNI [51] database. The ADNI was launched in 2003 as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD. The primary goal of the ADNI has been to test whether serial MRI, PET, other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of MCI and early AD.
For our predictive study, we utilized the dataset from an earlier study by Shaffer et al. [52] based on ADNI-1. That study identified 97 MCI patients and predicted progression to AD dementia based on their clinical parameters, MRI results, PET scans, cerebrospinal fluid (CSF) markers (tau, p-tau181P, and β-amyloid1-42), the APOE ε4 genotype, and results from at least one follow-up clinical examination. Of the 97 patients from the earlier study, only 91 have corresponding SNP data in the ADNI database. Hence, for the current study, we only utilized these 91 patients. However, this reduction in the number of patients did not considerably affect the ratio of MCI progressive to MCI non-progressive patients. The original study had 43 MCI progressive patients and 54 MCI non-progressive patients, and the reduced dataset has 41 MCI progressive patients and 50 MCI non-progressive patients. Thus, there is still sufficient representation of the two classes of patients.
Sample Characteristics
The mean age for all 91 MCI patients was 74.96 ± 7.32 years (mean ± standard deviation). The male-to-female ratio was 2.37, and 96.7% of subjects were white. A total of 36.26% of subjects had a family history of AD, and 54.94% had a positive finding for the APOE ε4 genotype. The mean follow-up duration for all of the subjects was 31.6 ± 10.6 months. Of these, 41 progressed to AD during follow-up (MCI progressive patients) and 50 did not (MCI non-progressive patients), with MCI progressive patients tending to have longer follow-up times by about 4.5 months. Statistically, MCI progressive patients did not differ from MCI non-progressive patients in mean age, sex ratio, education, race, ethnicity, family history of AD, or APOE ε4 prevalence. See Table 2 for details.
SNP-Enriched Pathways and Associated Crosstalks
Our analysis identified SNP-enriched pathways that represent six of the seven KEGG pathway categories, including Cellular Processes, Metabolism, Environmental Information Processing, Genetic Information Processing, Human Diseases, and Organismal Systems. This broad array of pathway categories represents the complex nature of AD pathogenesis, which has been attributed to many different biological mechanisms, ranging from amyloid toxicity to metabolic dysfunction to immune dysregulation. Figure 3 depicts the distribution of SNP-enriched pathways amongst the six KEGG categories. The majority of enriched pathways are classified under Human Diseases (31%). This supports the well-established relationships between AD and multiple other cardiovascular, autoimmune, and neurodegenerative diseases. For instance, diabetes, obesity, and heart diseases are well-established risk factors of AD, so much so that AD has been referred to as type 3 diabetes. As such, finding SNP-enriched pathways for cardiovascular, endocrine, and metabolic diseases in individuals with MCI is anticipated [53].
Similarly, the enrichment of metabolic pathways, organismal systems including nervous and immune system pathways, and common signaling pathways of the environmental information processing category is also expected and well-supported in the literature [54][55][56][57][58][59][60][61][62][63][64][65][66][67][68]. Interestingly, several genetic information processing pathways, including cell cycle regulation and DNA replication and repair, were found to be enriched. Evidence for the roles of these pathways in AD has only recently begun to surface [69][70][71]. Our findings of the SNP-enrichment of these pathways among MCI individuals may provide support for further investigations into such pathways.
SNP-enriched pathway crosstalks were discovered between six KEGG categories, with the greatest number of crosstalks occurring between Human Diseases and Organismal Systems. It is difficult to ascertain the significance of these findings. However, given that the etiology of many diseases, including AD, is complex and likely involves the failure or dysregulation of many pathways that are involved in the normal functioning of multiple organ systems, such significant crosstalk between these two categories among MCI individuals is not unexpected. The ageing process itself may facilitate a greater number of crosstalks in many pathways, since aging is associated with degeneration in many tissues and raises the risk for other chronic diseases besides dementia.
To investigate the genetic load with regard to AD, we further examined enriched pathway crosstalks specifically relating to the KEGG AD pathway. We identified 97 AD-related crosstalks and grouped the participating pathways by KEGG category (Figure 4). In line with the overall findings of crosstalk enrichment, the AD-specific pathway crosstalks primarily fell between the categories Human Diseases and Organismal Systems, supporting the importance of the pathways within these categories in AD genetic load. In contrast, pathways of Metabolism and Genetic Information Processing had very few crosstalks, suggesting that genetic load in these processes is not as important to the disease process, at least in this particular cohort. Similar findings were seen in the analysis of all pathway crosstalks. Focusing on the AD pathway, we observe significant crosstalk between all pathway categories, supporting the complex etiology of this disease.
SNP-Enriched Features with Baseline Clinical Parameters
We predicted progression from MCI to AD dementia using a support vector machine (SVM) with a linear kernel function, with baseline clinical parameters (age, education, and Alzheimer's disease assessment scale-cognitive subscale (ADAS-Cog)), significant pathways, or significant pathway crosstalks as predictors. The results for 100 iterations of 10-fold cross-validation are shown in Table 3; a minimal sketch of this evaluation loop follows. The model built with the clinical parameters only produced an accuracy of 59.19 ± 2.46% with 83.64 ± 0.29% of training data points as support vectors. The model built with significant pathways alone produced an accuracy of 56.78 ± 3.5% with 68.36 ± 3.5% support vectors. Typically, we expect a random guessing model to yield an accuracy of 50%; thus, both models perform only moderately above a random model. A high percentage of support vectors indicates that an SVM model is overfitted and unlikely to generalize well. Thus, if two models produce the same accuracy, we pick the one with the lower percentage of support vectors. Sixty-eight percent (68%) or more of the training data points were used as support vectors in these models, indicating highly overfitted models, as reflected in the poor cross-validation accuracy.
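A minimal sketch of this evaluation protocol, assuming scikit-learn and pre-built feature matrices (feature construction is not shown):

```python
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

def evaluate(X, y, iterations=100):
    """100 iterations of 10-fold cross-validation with a linear-kernel
    SVM. X holds the chosen predictors (clinical parameters and/or
    pathway or crosstalk features); y marks MCI progressive (1) vs
    non-progressive (0) patients. Reshuffling the folds each iteration
    is an assumption about how the repetitions were generated."""
    scores = []
    for i in range(iterations):
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=i)
        scores.extend(cross_val_score(SVC(kernel="linear"), X, y, cv=cv))
    return scores
```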
Incorporating both the baseline clinical parameters and significant pathways as predictors produced a model with an accuracy of 64.57 ± 3.56% with 63.3 ± 1.15% support vectors. This combined model demonstrated a 5.38% increase in accuracy compared to the baseline clinical parameters model and a 7.79% increase in accuracy compared to the model using significant pathways alone. Additionally, the reduced support vector percentage of this combined model indicates a better generalizability than the baseline clinical parameters model (20.34% decrease in support vectors) and the significant pathways model (5.04% decrease in support vectors).
With our novel approach of using significant pathway crosstalks to predict AD progression, our model provides an accuracy of 60.97 ± 3.24%, which is higher than using baseline clinical parameters or significant pathways alone. Furthermore, this crosstalks model has the lowest support vector percentage, 50.83 ± 4.77%, and thus the greatest generalizability of all of the models.
Enhancing the significant pathway crosstalks model with the baseline clinical parameters produced the model with the greatest accuracy, 70.9 ± 3.3%, with a moderate support vector percentage of 54.29 ± 0.56%. These initial results support the utility of pathway crosstalks as significant predictors of progression from MCI to AD dementia and warrant replication in larger samples followed for longer periods. We compared models built using the clinical parameters and the SNP-enriched features (significant pathways or significant pathway crosstalks) to a logistic regression model with only clinical parameters by Shaffer et al. [52] (Table 4). We also noticed that the average accuracy of the logistic regression model slightly increased (from 58.7% to 59.10 ± 1.71%) when we repeatedly created random 10-folds instead of using the 10 original folds from Shaffer et al. [52]. It decreased (to 57.04 ± 2%) when we removed the six patients that did not have corresponding SNP data in the ADNI database. Our method, when incorporating either significant pathways or significant pathway crosstalks, had a higher average accuracy on 100 randomly generated 10-folds than the method by Shaffer et al. [52]. Impressively, the combination of the baseline clinical parameters, APOE ε4, and significant pathway crosstalks in our logistic regression model yielded an accuracy of 72.1 ± 2.66%. A similar accuracy was obtained using a linear kernel SVM built on the SNP-enriched features. This indicates that the pathways and pathway crosstalks indeed lead to a better prediction of progression from MCI to AD dementia.
Randomized SNP-Enriched Features
To demonstrate that the pathway crosstalks found in this study have true predictive power and that the results are not a random occurrence, we generated 25 random samples of pathway crosstalks with no prior association to Alzheimer's and performed 100 iterations of 10-fold cross-validation for each of these 25 samples. The results are shown in Table 5. The model with the baseline clinical parameters and randomized significant pathway crosstalks gave an accuracy of 59.27 ± 3.66% with 83.47 ± 1.84% support vectors. This model yields 12% less accuracy and a 29.1% increase in support vectors in comparison to the original model that uses baseline parameters and significant pathway crosstalks (instead of randomized ones). As expected, the randomly generated pathway crosstalks show worse performance than the significant pathway crosstalks. The model accuracy is still moderately above a random guessing model, likely due to the presence of the clinical parameters. A similar trend was seen when investigating models with baseline clinical parameters and all AD biomarkers to determine the effects of randomized pathways.
In this work, we focus on the development of a novel computational methodology for the discovery of pathway crosstalks to be used as biomarkers for the prognosis of AD. To demonstrate the efficacy of our methodology, we compared it with methods and results from prior studies in this area, which used ADNI-1 data. Although more recent data are available, ADNI-1 data were used so that we could benchmark our methodology against these prior studies. In future work, we will continue our characterization efforts by incorporating the newer ADNI datasets as well as increasing the sensitivity of the proposed methodology through the use of the additive genetic model for the identification of patient-specific SNPs. There are also some limitations to our study. The ADNI is not a population-based study; it is essentially a biomarker cohort at research sites, and our sample size was relatively small: we relied on a sample that was previously studied, since our initial goal was to examine the additive value of crosstalk biomarkers. We also did not incorporate other biomarkers such as tau, p-tau181P, β-amyloid1-42, APOE ε4, and microRNAs at this time, since our main focus was on methodological development for discovering and characterizing pathway crosstalks. However, the ADNI results have formed the basis for many current clinical prevention drug trials, and hence the ADNI is a highly relevant dataset. Moreover, its careful selection criteria and the way it makes available rich biomarker, genetic, and longitudinal cognitive data are enormous strengths. Indeed, the study of pathway crosstalks may yield novel insights into how AD pathological (e.g., beta-amyloid, tau) and neuronal loss (e.g., apoptosis, atrophy) mechanisms interact, and our methods lay the foundation for such future work.
The generic pathway crosstalk reference map was built using several different datasets, and hence the question arises as to whether all datasets should be treated equally. For simplicity, in this study, we treated all datasets equivalently. However, a modification to our information fusion method would allow us to introduce parameters to weigh evidence differently based on expert knowledge or trustworthiness. In the future, we would like to perform additional experiments to see the effects of these parameters on AD prognosis. This is non-trivial, as we would first need to define a weighting scheme and then develop additional methods to gauge the weights for different evidence.
Conclusions
AD is a major public health challenge, and there remain substantial gaps in our knowledge of its biology and treatment targets. Fully characterizing AD at a systems biology level is a priority for these reasons. In this work, we demonstrate a new methodology to build a pathway crosstalk reference map using the combined power of several gene and protein knowledge antecedents, and we use this map to discover AD-specific pathway crosstalks by enrichment with patient-specific SNP information. Our pilot data document the promise of utilizing those SNP-enriched pathway crosstalks to identify potential AD-linked mechanisms at a systems level. More specifically, we demonstrate a three-step methodology to build a generic pathway crosstalk reference map by combining several types of protein/gene evidence. We then used the identified pathway crosstalks from this map as potential AD biomarkers by enriching them with patient-specific SNP information. In an initial sample of at-risk subjects, we found that utilizing SNP-enriched pathway crosstalks as additional features significantly improved the accuracy of predicting progression from MCI to AD dementia.
In addition, we verified some previously identified pathways and identified some new pathway crosstalks that warrant further study. Furthermore, we built a prediction model including the identified pathways and crosstalks and compared our model's outputs with a previous study. These model comparison analyses show that the identified pathways and crosstalks, together with other clinical information, can be used as significant biomarkers for predicting progression from MCI to AD dementia. Additional analysis would be required to understand the biological mechanisms that explain the association of these pathways with AD.
In summary, this is, to our knowledge, the first report to characterize biological pathway crosstalks in subjects at risk of AD using gene and protein knowledge antecedents and to study their potential utility as prognostic biomarkers. Further application of this methodology to the full ADNI-1 and ADNI-2 cohorts as well as to other population studies is warranted, and may yield further insights into disease mechanisms as well as novel targets for biomarker development and drug discovery.
Figure 1. Identification of potential pathway crosstalks. The methodology has three steps: (1) quantifying crosstalk likelihood using multiple individual types of evidence to score each pathway pair, (2) obtaining a combined score using information fusion, and (3) building the crosstalk reference map.
Figure 2. Identification of patient-specific pathway crosstalks. The methodology has three steps: (1) mapping the Single Nucleotide Polymorphisms (SNPs) to genes and in turn to pathways using the SNP and gene location information, (2) choosing a genetic model and calculating a patient-specific SNP enrichment score for each pathway using the patient's allele information, and (3) overlaying the pathway enrichment scores on the reference crosstalk map to build patient-specific pathway crosstalk maps.
Figure 3. The distribution of the types of SNP-enriched pathways identified in this study and a comparison to the pathway distribution of the Kyoto Encyclopedia of Genes and Genomes (KEGG). NOTE: Although there are seven KEGG pathway categories, here we only show the six KEGG pathway categories that included identified SNP-enriched pathways in this study.
Figure 4. Pathways found to have significant crosstalk with the AD pathway and corresponding KEGG categories (shown in colored blocks). Specific KEGG pathway types are listed below each category with the number of occurrences in parentheses. NOTE: Although there are seven KEGG pathway categories, here we only show the six KEGG pathway categories that included identified SNP-enriched pathways in this study.
Comparison of Model Performances from Shaffer et al. (2013) with Our Model Performance Including SNP-Enriched Features
Table 2. Baseline characteristics of the mild cognitive impairment (MCI) study sample.
Table 3. Performance of support vector machine (SVM) models with baseline clinical parameters.
Table 4. Performance of the Shaffer et al. [52] model with clinical parameters with 97 patients, in comparison to our model with 97 and 91 patients.
Table 5. Performance of models with randomized pathway cross-talk features.
"Computer Science",
"Medicine"
] |
Vertical Signalling Involves Transmission of Hox Information from Gastrula Mesoderm to Neurectoderm
Development and patterning of neural tissue in the vertebrate embryo involves a set of molecules and processes whose relationships are not fully understood. Classical embryology revealed a remarkable phenomenon known as vertical signalling, a gastrulation stage mechanism that copies anterior-posterior positional information from mesoderm to prospective neural tissue. Vertical signalling mediates unambiguous copying of complex information from one tissue layer to another. In this study, we report an investigation of this process in recombinates of mesoderm and ectoderm from gastrulae of Xenopus laevis. Our results show that copying of positional information involves non cell autonomous autoregulation of particular Hox genes whose expression is copied from mesoderm to neurectoderm in the gastrula. Furthermore, this information sharing mechanism involves unconventional translocation of the homeoproteins themselves. This conserved primitive mechanism has been known for three decades but has only recently been put into any developmental context. It provides a simple, robust way to pattern the neurectoderm using the Hox pattern already present in the mesoderm during gastrulation. We suggest that this mechanism was selected during evolution to enable unambiguous copying of rather complex information from cell to cell and that it is a key part of the original ancestral mechanism mediating axial patterning by the highly conserved Hox genes.
Introduction
Determination of regional specificity along the anterior-posterior (A-P) axis of the vertebrate Xenopus laevis and of all other vertebrates begins during gastrulation. This patterning involves interactions between the Spemann organizer (SO) and surrounding tissues that are key events leading to genesis of the basic body plan [1]. Classical embryology also revealed a remarkable phenomenon known as vertical signalling, a mechanism that copies A-P positional information from nonorganiser mesoderm (NOM) to overlying neurectoderm during gastrulation [2]. Nieuwkoop showed that the A-P pattern of the amphibian embryo is generated in the developing nervous system (neurectoderm) during gastrulation by two types of signals: activation and transformation [3]. These signals are emitted by mesoderm and act on ectoderm. The SO secretes activation signals that induce neuralisation of the ectoderm but induce only an anterior neural identity (presumptive forebrain) [4]. Tissue recombination and grafting experiments indicated that this patterning occurs similarly in an amniote (the chick embryo) and in the anamniote Amphibia [5], and also that transformation signals originate from NOM (lateral and paraxial mesoderm) and that they induce a progressively more caudal identity of the neural tissue [5][6][7][8][9][10][11]. Known signalling pathways have been proposed to be involved in transformation: these include retinoids [11][12][13], FGFs [14] and Wnts [15]. Elevated concentrations of these signalling molecules cause posteriorisation by inducing relatively posterior positional values in the neurectoderm, and each of these factors has been proposed to act as a posterior to anterior gradient within the embryo [8][9][10][11][12][13]. These factors possibly mediate some of the known planar signals that act along the A-P axis of the germ layers during gastrulation. However, planar signals do not fully account for the properties of neural transformation. The use of exogastrulae and other approaches revealed a second type of signal. It appeared that posterior neural markers were expressed only at the border between ectoderm and mesoderm in exogastrulae, excluding the existence of very extensive planar signalling [16]. These and other observations rule out that axial patterning is fully accounted for by planar signals, and the evidence actually indicates that the second type of signalling, vertical signalling, is the more important in generating the A-P pattern of the neural plate [16][17][18][19]. The nature of the molecules involved in vertical signalling remains unclear.
We reported previously that expression of the A-P determining Hox genes begins during gastrulation in Xenopus laevis. This initial expression starts in NOM tissue in the mid-gastrula [20]. It presumably corresponds with the initial "Hox induction field" or "opening zone" that has been reported in other vertebrates [21,22]. This is the place where Hox codes are first available; later, when these mesodermal cells involute and come to lie underneath prospective neural tissue, the same A-P information spreads to that neural tissue by the end of gastrulation [20]. Interestingly, use of a tissue recombination assay, the wrap assay, showed a requirement for NOM and SO to induce Hox gene expression in the neurectoderm [23]. The SO induces the embryonic ectoderm to a neurectodermal identity and the NOM induces expression of various Hox genes in the neuralised ectoderm ([23], reviewed in [24]). These steps appear to correspond to Nieuwkoop's activation and transformation steps, respectively. In this study, we investigated the importance of the Hox genes for vertical signalling using the wrap assay to provide a controlled setting for investigating signalling from mesoderm to neurectoderm. We found that Hox expressing NOM mesoderm provides positional information for the adjacent neurectoderm. Moreover, in these wrap assays, mesodermal expression of each of the Hox genes that we investigated appears to be necessary as well as sufficient for inducing the expression of the same gene in the neurectoderm. The absolute requirement for mesodermal expression of the homologous Hox gene is striking. It indicates an extraordinarily high degree of specificity for vertical signalling: a feature that was already anticipated from the embryological data [2]. Using recombinant Hox proteins, we also detected transfer of the homologous Hox proteins between NOM mesoderm and neurectoderm during this non cell autonomous Hox autoregulation. We suggest that this is the basis of the extraordinary specificity of this mechanism. We also detected Xenopus Hox protein uptake from the medium by Xenopus embryonic cells as well as by Drosophila imaginal discs. Considering that Hox codes are thought to be synonymous with A-P positional information and that the NOM induces neurectodermal Hox gene expression, our data suggest that expression of individual Hox genes is copied specifically from NOM to neurectoderm during neural transformation. This information sharing involves a peculiar transfer of the homeoproteins themselves, as has previously been described by others [25]. This primitive mechanism has presumably been conserved in evolution to enable specific information sharing between tissue layers in a very simple and direct manner.
Hox expression in the mesoderm is necessary and sufficient to induce neurectodermal Hox expression
We have shown previously that zygotic Hox gene expression is first initiated in the non-organizer mesoderm (NOM) in the Xenopus mid gastrula (St. 10.5) [20]. This expression then spreads during gastrulation to the overlying prospective neurectoderm that was induced from embryonic ectoderm by signals from the Spemann organizer (SO). We investigated whether Hox expression in the gastrula's NOM causes Hox expression in the neurectoderm. This was done using a recombinant wrap assay, in which pieces of SO and NOM mesoderm were combined with two animal caps (Fig. 1A). The main advantage of this assay is that it mimics the embryonic situation with respect to proximity and physical connectivity of SO, NOM and neural tissue, but still allows the independent manipulation of different parts of the early embryonic tissues by loss and gain of function techniques. It also offers the advantage of clear, consistent tissue separation [20,23] (Fig. 1A). Interestingly, a combination of two SO explants within a single wrap is in itself not sufficient to induce Hox gene expression, despite causing neuralization of the animal cap (Fig. 1Ac, 1Bb, 1Cb). These results show that this assay mimics the embryonic situation (Fig. 1A, B, C) and they demonstrate a requirement for a NOM derived signal for induction of Hox expression in the clearly neuralised animal cap. We used loss of function via morpholino anti-sense oligonucleotides to investigate whether a single Hox gene knockdown in NOM affects neurectodermal expression of the same gene. MOs for each of Hoxd1, Hoxb4 and Hoxb9, respectively, were injected into zygotes, and the ability of NOM from the injected embryos to induce neurectodermal expression of these Hox genes in wraps was examined in comparison to NOM loaded with control MO (ctMO). Fig. 1 shows that NOM knockdown of each of these Hox genes prevented expression of the same gene in the neurectoderm (Fig. 1Ae, Bd, Cd), whereas ctMO loaded NOM did not impair neurectodermal Hox expression in wraps (Fig. 1Ad, Bc, Cc). These results indicate a specific requirement for expression of a particular Hox gene in NOM for its own expression in overlying neuralized ectoderm. Clearly, no other Hox gene or developmental regulator coexpressed in wraps can substitute for this requirement, although there are obviously other routes to inducing neurectodermal Hox genes in vivo. Conversely, we asked whether ectopic expression of a single Hox gene in mesoderm can induce its own expression in the neurectoderm. In this gain of function approach we took SO grafts from embryos zygotically injected with a single Hox mRNA. An explant of SO expressing this single Hox gene was then combined in a wrap with a wild type SO graft and two animal caps. The untreated SO was included to exclude the possibility that Hox ectopic expression blocks a necessary SO function. When Hoxd1, Hoxb4 or Hoxc6, respectively, were ectopically expressed in SO, each efficiently induced its own expression in neurectoderm in such a wrap recombinant (Fig. 2B', C and D). We detected induced expression of the endogenous Hox gene using a 3′ UTR probe that does not recognize the ectopically expressed messenger. These results show that ectopic expression of a single Hox gene in SO induces expression of the same gene in neurectoderm.
We also show in Fig. 2 that these wrap explants display the normal expression of tissue layer markers as seen in a normal gastrula (Fig. 2A, A', A'', A''', E and F). This combinatorial tissue assay shows no intermingling between the different mesoderm types or between mesoderm and ectoderm during wrap culture, and the markers and lineage labelling clearly show different localisations of mesoderm and neurectoderm within the wrap (see Fig. 2 legend for further description). There is clear correspondence of Hox expression with expression of both mesodermal and neurectodermal markers.
Altogether, our results show that in the wrap setting, a single Hox gene expressed in the mesoderm is necessary and sufficient to induce its own expression in overlying neurectoderm.
Vertical signalling and Hox protein transfer
Tissue recombination as in the wraps used in this study, and grafting experiments in various vertebrates, have shown consistently that mesodermal signals induce the neural A-P pattern (see introduction, [3,5,6]). Major signalling pathways, including Wnts, Retinoids and Fgfs, have been suspected to have a role [11][12][13][14][15]. Another type of molecule that may be involved in vertical signalling is the Hox proteins themselves. This idea was inspired by results from Prochiantz and colleagues who showed that homeoproteins move from cell to cell due to the existence of a special sequence, penetratin, within the homeodomain [25][26][27]. This led us to investigate whether Xenopus Hoxd1 protein possesses such a sequence. Fig. 3A shows that Xenopus laevis Hoxd1 protein has a penetratin sequence in its homeodomain. We constructed a labelled (myc tagged) Hoxd1 and analysed its ability to translocate from mesoderm to neurectoderm. Myc-Hoxd1 was ectopically expressed in SO as in S1 Figure. The labelled (myc tagged) Hoxd1 protein was seen to translocate from the labelled SO mesoderm to unlabelled mesoderm and neurectoderm tissues in these wrap recombinates. Hox proteins translocate due to their penetratin sequence, which is known to confer transfer capacity and cargo functions that depend on key amino acids in the penetratin sequence (see Fig. 3A). Mutation of these amino acids abolishes the protein's ability to translocate (Fig. 3A, WF to SR). Chimeric GFP constructs were made containing either a wild type translocatable Hoxd1 homeodomain (d1-HD-gfp) or a homeodomain containing a mutated penetratin sequence that blocks translocation (mut d1-HD-gfp), and translocation of these constructs in wraps was compared to that of wild type GFP as a control. Fig. 3B-D'' shows that whereas wild type GFP and mut d1-HD-gfp both stay localised within the mesoderm in the wrap assay, the d1-HD-gfp spreads to the edges of the explant (S1A and B).

Figure 2 legend (panels B'-F): B', a wrap containing normal SO and SO ectopically expressing Hoxd1 also shows the induction of endogenous Hoxd1 in the neurectoderm as well as in the mesoderm. Endogenous Hoxd1 expression was detected using a 3′ UTR probe that recognizes only the endogenous messenger. C, ectopic Hoxb4 in SO induces its own expression within the neurectoderm and in the mesoderm as in B'. D, wrap as in B' and C but with ectopic Hoxc6 expression, showing induction of Hoxc6 in neurectoderm and in the mesoderm. We used 3′ UTR probes to detect expression of the endogenous mRNAs in each of these experiments. E, F, sections showing expression of Nrp1 (neural) and Bra (mesodermal) in a control or standard [AC(SO/NOM)AC] recombinant. E, Nrp1 expression is internal in the recombinant but excluded from an internal cell mass that is clearly the mesoderm; it is particularly strong around one end of the cell mass, which is the neural inducing SO. Expression is also absent from the very outer layer of the recombinant, which represents the outer non-neural layer of the neurectoderm. F, Bra expression is in an internal cell mass (the mesoderm). Please note that the germ layer markers Bra, Ch, and the mesodermal lineage label GFP are confined to an internal cell mass, excluding tissue intermingling, and that Hox expression is detected in neurectoderm as well as the mesodermal cell mass. Each photo in this figure represents at least 20 recombinants and embryos, with consistently the same results.
These results clearly show that Xenopus Hoxd1 contains a functional penetratin sequence with a cargo function as has been reported for other homeoproteins.
Another striking property of the penetratin sequence is its ability to translocate from cell to cell in a non-species-specific manner. We tested whether the Xenopus Hoxd1 homeodomain shows this property. Drosophila imaginal discs were incubated with wild type GFP, d1-HD-gfp or mut d1-HD-gfp recombinant proteins for 15 min at room temperature. Fig. 4 shows clearly that d1-HD-gfp (Fig. 4B, B') is taken up by the discs while GFP alone (Fig. 4A) or mut d1-HD-gfp (Fig. 4C) do not cross the epithelial layer of the disc. This is in accordance with previous reports that showed that mutation of 2 key amino acids within the penetratin sequence abolishes the uptake of the latter [26]. This result shows a clear non-species-specific mechanism. This is a very robust mechanism that could allow massive information transfer between mesoderm and neurectoderm during neural transformation. This lack of species specificity is consistent with homeoprotein transfer (see discussion), but we cannot rule out from the data that a highly conserved ligand-receptor mechanism (like BMP-chordin) is involved.

Figure 3 legend: Hoxd1 homeoprotein containing a penetratin sequence is transferred from mesoderm to neurectoderm and its homeodomain plays a cargo function for GFP. A: the penetratin sequence is shown. Above, the Hoxd1 homeodomain (HD) contains a penetratin-like sequence (in red). This conserved sequence is a feature of all homeoproteins. Below, two amino acids, WF, within the penetratin were mutated into SR (highlighted by stars) to create a mutated Hoxd1 HD, mut HD. This mutation abrogates the transfer function of the HD. B-D'': localisation of different fluorescent chimeric GFP proteins in recombinants after 6-8 hrs of culture. These proteins were introduced into wrap recombinates in SO. B: wild type GFP. C: wild type GFP coupled to wild type Hoxd1 homeodomain (d1-HD-gfp). D: the mutated homeodomain version coupled to wild type GFP (mut d1-HD-gfp). The signal has spread in d1-HD-gfp but not in wild type GFP or mut d1-HD-gfp. The GFP fluorescence is combined with phalloidin staining to increase its visibility. B, B', B'': GFP expressed in the SO stays confined within the SO explant and fails to spread into surrounding neurectoderm. C, C', C'': d1-HD-gfp spreads outside the mesodermal explant to the neurectoderm (spreading indicated by arrowheads). D, D', D'': mut d1-HD-gfp protein shows a SO localisation pattern as shown by wild type GFP. Each photo in this figure is representative of 23 recombinates, each of which gave the same result. Homeoproteins contain a separate HD sequence regulating homeoprotein secretion as well as the penetratin sequence; the mutations we made were in the 'penetratin' uptake-regulating sequence. Please note that d1-HD-GFP, mut d1-HD-GFP and GFP in Fig. 3 evidently diffuse less than Myc-tagged Hoxd1 in S1 Figure; this is expected, due to the large size of GFP.
A phenotypic assay demonstrates functionality of the homeodomain transfer in the Xenopus embryo.
The ability of Xenopus Hoxd1 homeoprotein to exhibit properties predicted by its penetratin sequence is interesting. However, this unconventional transfer had not been shown to account for an embryonic function. We therefore investigated whether recombinant Hoxd1 protein exhibits biological activity when applied in the extracellular space of Xenopus gastrula embryos. When Hoxd1 is overexpressed according to standard methods by injecting messenger RNA, the embryo's craniofacial structures are strongly affected and show a severe reduction in the size of the branchial arches, as we have previously reported (Fig. 5d) [28]. This reflects posteriorisation of A-P positional information in the head region. A wild type recombinant Hoxd1 protein was produced and injected either into the blastomeres of an early embryo (Fig. 5e) or into the extracellular space of the blastocoel during gastrulation (Fig. 5f). Upon injection of Hoxd1 recombinant protein, craniofacial structures are reduced similarly to standard mRNA injections, without any difference between the intra- and extracellular injection conditions. Conversely, injection of GFP protein into the blastocoel during early gastrulation causes no phenotype (Fig. 5, compare b to c), excluding a possible toxic side effect due to blastocoel injections per se. These results show that the recombinant protein induces the same effect when applied intra- or extracellularly and demonstrate that the Hoxd1 protein is efficiently internalized by the embryo and that it seems to retain its function.
Altogether, these results indicate a role of the conserved Hox transcription factors in signalling during neural transformation, and in vertical signalling.
Discussion
Axial patterning of the central nervous system is a crucial event during development. It enables proper locomotion and behaviour of the animal. Many studies have investigated the nature of the molecules involved in neural patterning. The Spemann organizer is one major source of patterning and neuralizing molecules [1,29]. Previous work also brought evidence that signals acting within the plane of the neural tissue (called planar signals) are necessary but not sufficient to account for the complicated A-P pattern of the neural plate [16][17][18][19]. Emerging evidence has also indicated the involvement of signals from the non organizer mesoderm (NOM) in several vertebrates including the frog [4][5][6][7][8][9]. These clearly mediate what has been called vertical signalling [2]. We have investigated the involvement of Hox genes in vertical signalling. Hox transcription factors are the key players in determining positional information along the anteroposterior axis of the vertebrate embryo. In Xenopus laevis, it has been shown that Hox gene expression is initiated in the NOM of the gastrula and then subsequently spreads to the adjacent neurectodermal cells [20]. We investigated the possibility of a link between mesodermal Hox expression and the subsequent Hox expression in the neurectoderm using a tissue recombination assay (wrap assay), which was previously proven to be an ideal way to manipulate different tissues in a controlled manner [20,23]. We previously showed a clear requirement for a vertical, non organizer mesoderm (NOM) derived signal for the induction of neural Hox expression. Here, we show that the NOM Hox expression itself is part of the information necessary for this signal and that there is thus a clear connection between the early mesodermal and early neurectodermal Hox codes. Our investigations using gain and loss of function approaches revealed that this requirement for NOM Hox expression is specific. It is striking that the loss of function MO approach shows that each Hox gene we tested actually needs to be translated itself in NOM to enable its own induced expression in neurectoderm. This implies that mesodermal function of each of these Hox genes is uniquely required for its own induction in neurectoderm in the wrap recombinates used for this test. No other Hox gene or developmental regulator that is co-expressed in the recombinate can substitute this function. This is high specificity. We have so far not identified the actual signalling agents travelling between the mesoderm and neurectoderm, and the possibilities exist that these are Hox induced downstream signal molecules or the Hox proteins themselves. However, it would be very difficult to account for the very high degree of specificity we see by a conventional mechanism, such as a morphogen gradient. In previous studies, the Drosophila Hox protein Antennapedia and other homeoproteins have been shown to exhibit the capacity to travel across biological membranes by means of a 'penetratin' sequence within the homeodomain under conditions excluding any classical endocytosis mechanism [25][26][27]. This is a much more specific mechanism, and due to the extreme conservation of Hox proteins throughout evolution, it is likely that the presence and the conservation of sequences allowing homeoprotein internalization and secretion might be of physiological relevance during development.
We thus decided to investigate whether a Xenopus Hox protein (Hoxd1, the first Hox gene to be expressed) exhibits a capacity to translocate across biological membranes. We conclusively demonstrated that Hoxd1 exhibits transfer properties, that this transfer is active in the Xenopus gastrula, and that it does not negatively affect the functionality of the protein. This is a very specific mechanism that delivers a very specific signal, a fact that is entirely consistent with our finding that induction of the expression of a particular Hox gene in neurectoderm specifically requires expression of this same Hox gene itself (and not, for example, merely one of its paralogues) in NOM mesoderm. Other studies point to unexpectedly specific actions of secreted homeoproteins in brain function [43,44].
Gradients of FGFs, RA and Wnts have all been proposed to be involved in activation of Hox gene expression along the anterior-posterior axis [12][13][14][15]. Biologically active retinoids are good candidates for signalling from mesoderm to neurectoderm because they are synthesized in mesoderm by RALDH and destroyed there by Cyp26, and retinoid receptors seem to be present mainly in the neurectoderm [31][32][33][34]. The synthetic and degradative regions of the mesodermal axis frame the hindbrain: the axial segment where Hox paralogue groups 1-5 have their anterior neurectodermal boundaries and show collinearly graded RA sensitivities [31,33]. In Xenopus, disruption of RA production in the mesoderm has no effect on mesodermal Hox expression but downregulates Hox expression in the neurectoderm [34]. However, Hox paralogue groups 6-13 do not exhibit RA sensitivity [31]; FGFs and Cdx might play a role in regulating neurectodermal Hox expression here [35,36]. We think that these different pathways work in concert with specific Hox signals. In fact, it has been shown in the chordate Amphioxus that the labial Hox gene (Hox1) mediates (all of) the effects of RA on the hindbrain (including regulating expression of more posterior Hox genes) [37]. This fits the finding that knocking out the whole Hox1 paralogue group in Xenopus knocks out expression of more posterior Hox genes as well as Hox1 [28]. It has also been shown previously that homeoprotein transfer and conventional signalling pathways work in concert in other contexts [29,30].
The growing evidence that homeoproteins are signalling molecules could be the key to understanding the whole picture of neural patterning. Indeed, the simplicity of Hox protein transfer would elegantly solve the patterning problem because it accounts for the one-to-one connection between mesodermal and neurectodermal Hox expression. Hox homeoprotein transfer was discovered almost 30 years ago [25,26] but its biological relevance remained unclear until recently [38,39,43,44]. We suspect that homeoprotein transfer is an ancient conserved signalling mechanism that is central to the ancestral Hox patterning mechanism that evolved in the Bilateria. Findings by Prochiantz and colleagues indicate that homeoprotein transfer is an extremely ancient form of signalling that antedates the division between animals and plants and thus presumably evolved before specific ligand-receptor mediated signalling. We suspect it has been selected and retained for vertical signalling because it is so eminently suitable for mediating Hox information copying.
In summary, our data show that at least several Hox genes expressed in the Xenopus gastrula show a causal connection between initial mesodermal expression and later neural expression. This connection is highly specific. The Hox protein encoded by at least one of these genes also shows unconventional intercellular transfer behaviour during vertical signalling and appears able to exert its biological function after passing through cell membranes, as reported for members of other Hox paralogous groups in earlier studies.
1. We show, in Figs. 1 and 2, that Hox proteins expressed in Xenopus gastrula NOM mesoderm induce their own expression in gastrula neurectoderm. There is thus non cell autonomous autoregulation of individual Hox genes. We equate this autoregulation with the classical concept of 'vertical signalling': copying of positional information from gastrula mesoderm to gastrula neurectoderm. Loss-of-function experiments for three different Hox genes, using morpholinos (Fig. 1), show that this homologous induction is specific for each of these three genes. Loss of function for each particular gene in NOM mesoderm deletes its own expression in neurectoderm. Clearly, no other endogenously expressed Hox gene or developmental regulator can substitute for this unique function. Fig. 2 shows that Hox expression induced in Hox gain-of-function experiments is localised not only in the (SO) mesoderm in which the Hox gene was introduced into a wrap recombinate but also widely in mesoderm and in neurectoderm. See Figs. 1 and 2 for details.
2. The specificity of vertical signalling led us to consider an unorthodox mechanism. Prochiantz and colleagues have shown that Hox proteins and other homeoproteins can transfer from cell to cell. They transfer by virtue of a special 'penetratin' sequence within the homeodomain of the homeoprotein, which can be mutated to ablate transfer (internalisation or secretion). This mechanism is now well known and well accepted and has been shown to mediate several different developmental functions. It is exactly the type of mechanism needed to mediate 'vertical signalling', the copying of positional information from one germ layer to another, which we have shown (above) to be mediated by copying of Hox gene expression from one germ layer to another.
3. We examined whether Prochiantz transfer occurs during vertical signalling between mesoderm and neurectoderm from Xenopus gastrulae. Myc-tagged Hoxd1 was introduced into wrap recombinates in NOM mesoderm and cultured over 6-8 h. Over this time course, Myc-tagged Hoxd1 spreads to every part of the recombinate, from the SO mesoderm in which it was introduced to neurectoderm as well as mesoderm. We conclude that Hoxd1myc is transferred from cell to cell (see S1 Figure). We also tested the dependence of this transfer on 'penetratin'. We constructed a chimeric protein consisting of the Hoxd1 homeodomain (including the penetratin sequence) coupled to GFP, and compared its mobility with that of a mutant in which two essential amino acids in the penetratin sequence are mutated, ablating the capacity for transfer, and with GFP itself. The wild-type Hoxd1HD-GFP (d1-HD-gfp) was transferred from mesoderm to neurectoderm (transfer made more visible using red phalloidin background staining); the mutated homeodomain coupled to GFP (mut d1-HD-gfp) and GFP alone were not. We conclude that the homeodomain of the Hox gene Hoxd1 transfers from mesoderm to neurectoderm (Fig. 3), and that Hoxd1 itself is indeed transferred from gastrula mesoderm to gastrula neurectoderm by the Prochiantz mechanism (S1 Figure).
4. We tested the species specificity of d1-HD-gfp homeoprotein transfer. This protein was taken up by Drosophila imaginal discs whereas GFP and mut-d1-HD-GFP were not. This finding is consistent with the known extreme aspecificity of Prochiantz signalling (which occurs both in animals and plants and presumably evolved very early on, possibly before ligand-receptor signalling). It emphasizes that Hoxd1 passes cell membranes in a species-independent manner, confirming previous indications of species-independent Hox transfer and excluding a requirement for species-specific forms of ligand-receptor signalling (Fig. 4).
5. We tested the functionality of transferred Hoxd1 protein. When this protein was injected into the blastocoel of Xenopus gastrulae, it gave the same posteriorising phenotype as cytoplasmically injected Hoxd1 mRNA or protein. This again emphasizes that Hoxd1 protein passes cell membranes and that the transported protein is functional.
6. Finally, the findings above demonstrate unambiguously that non cell autonomous autoregulation of Hox proteins mediates vertical signalling/neural transformation in the Xenopus gastrula and that, at least for Hoxd1, a mechanism involved is Prochiantz intercellular homeoprotein transfer.
Ethics
The Leiden University Animal Experimentation Ethics (DEC) Committee has approved this work.
Handling and treating Xenopus embryos
Embryos were staged according to Nieuwkoop and Faber [40]. In vitro fertilization, embryo culture, and injections of mRNA, antisense morpholino oligonucleotides (MO; Gene-Tools Inc.) and protein were performed as previously described [41]. In situ hybridization analyses were carried out as previously reported [41]. The wrap assay was performed as described previously [41]. Microsurgery was carried out using hair knives. Mesodermal tissue (NOM or SO) was explanted and the epithelial layer removed. After keeping these explants for a few minutes in MBS, they were placed between two animal caps, which had been cut immediately beforehand to prevent curling. Wraps were cultivated in 1% MBS for about 30 min and then transferred to 0.1% MBS. Wraps were usually fixed 6 to 8 h after preparation, when control embryos had reached stage 14-15 (i.e. mid-neurula).
S1 Figure (legend, beginning truncated): "…outer layers of the wrap (neurectoderm) while no signal is detected in the control wrap (A). The signal spreads throughout the recombinate to originally unlabelled neurectoderm as well as originally unlabelled mesoderm. Each photo represents 10 wraps giving identical results." doi:10.1371/journal.pone.0115208.s001 (TIF) | 6,874 | 2014-12-16T00:00:00.000 | [
"Biology"
] |
The non-SUSY $AdS_6$ and $AdS_7$ fixed points are brane-jet unstable
In six- and seven-dimensional gauged supergravity, each scalar potential has one supersymmetric and one non-supersymmetric fixed point. The non-supersymmetric $AdS_7$ fixed point is perturbatively unstable, whereas the non-supersymmetric $AdS_6$ fixed point is known to be perturbatively stable. In this note we examine the newly proposed non-perturbative instability, the brane-jet instability, of the $AdS_6$ and $AdS_7$ vacua. We find that when they are uplifted to massive type IIA and eleven-dimensional supergravity, respectively, the non-supersymmetric $AdS_6$ and $AdS_7$ vacua are both brane-jet unstable, in favor of the weak gravity conjecture.
Introduction and conclusions
The AdS/CFT correspondence, [1], has provided a framework for studying quantum field theories in various dimensions with various amounts of supersymmetry through their gravitational duals. For non-supersymmetric quantum field theories, even though several perturbatively stable, [2,3,4], non-supersymmetric AdS vacua are known, little could be investigated due to the limited control over non-supersymmetric theories. 1 Furthermore, recently, as a stronger version of the weak gravity conjecture, [6], a conjecture on non-supersymmetric AdS vacua was proposed: there are no stable non-supersymmetric AdS vacua in string and M-theory, [7]. As a means of testing the conjecture, a new non-perturbative decay channel called the brane-jet instability was proposed by Bena, Pilch and Warner in [8]. This test examines the force acting on probe branes: when the force is repulsive, the vacuum is deemed unstable. In [8], the authors showed that the only known perturbatively stable non-supersymmetric AdS 4 vacuum, [9,10], among the AdS vacua of four-, [11], and five-, [12], dimensional maximal gauged supergravity is, in fact, brane-jet unstable.
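To make the criterion used throughout this note explicit, the following schematic summary may help. It is our paraphrase of the test in [8], combined with the no-force property verified in the appendix; the shorthand $V_{\rm DBI}$, $V_{\rm WZ}$ is ours.

```latex
% Schematic brane-jet test (paraphrase of [8]; shorthand ours).
% The probe-brane potential is the difference of the DBI and WZ contributions.
\[
V_{\rm probe} \;=\; V_{\rm DBI} \;-\; V_{\rm WZ},
\qquad
\begin{cases}
V_{\rm probe} \equiv 0 & \text{no force: supersymmetric vacuum},\\[2pt]
V_{\rm probe} < 0 \ \text{for some probe location} & \text{repulsive force: brane-jet instability}.
\end{cases}
\]
```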
The purpose of this note is to examine the brane-jet instability of the AdS vacua of six- and seven-dimensional gauged supergravity in [13] and in [14,15,16,17], respectively. In six- and seven-dimensional gauged supergravity, each scalar potential has one supersymmetric and one non-supersymmetric fixed point.
In seven dimensions, minimal gauged supergravity, [14,15], is a subsector of maximal gauged supergravity, [16,17]: when the scalar fields of the maximal theory are identified with a single scalar field, the maximal theory reduces to the minimal theory. The scalar potentials of these theories have a pair of supersymmetric and non-supersymmetric fixed points. The non-supersymmetric fixed point is known to be perturbatively stable in the minimal theory, [15], but not in the maximal theory, [17]. The maximal and minimal theories both uplift to eleven-dimensional supergravity, [18,19,20] and [21], but the minimal theory also uplifts to massive type IIA supergravity, [22]. We will examine the brane-jet stability of the AdS 7 fixed points when they are uplifted to eleven-dimensional supergravity.
In F(4) gauged supergravity in six dimensions, [13], there is also a pair of supersymmetric and non-supersymmetric fixed points. The non-supersymmetric AdS 6 fixed point is known to be perturbatively stable, [13]. F(4) gauged supergravity is a consistent truncation of massive type IIA supergravity, [23], and also of type IIB supergravity, [24,25,26]. We will examine the brane-jet stability of the AdS 6 fixed points when they are uplifted to massive type IIA supergravity.
Indeed, we show that when they are uplifted to massive type IIA and eleven-dimensional supergravity, respectively, the non-supersymmetric AdS 6 and AdS 7 fixed points are both brane-jet unstable, in favor of the conjecture on non-supersymmetric vacua in [7].
It would be interesting to consider the alternative uplifts of the AdS 6 and AdS 7 fixed points to type IIB, [24,25,26], and massive type IIA supergravity, [22], respectively. Indeed, the instabilities of AdS 7 solutions in massive type IIA supergravity with matter couplings have already been examined in [27,28,29]. It would also be interesting to see whether, when a fixed point is uplifted to different higher-dimensional theories, the stabilities of the uplifted solutions differ.
In sections 2 and 3, we test the brane-jet instabilities of AdS fixed points of six- and seven-dimensional gauged supergravity, respectively. In the appendix, we present the calculation of the potentials of the fluxes for supersymmetric flows and show that the probe brane potentials vanish identically over the whole flows.
2 The AdS 6 fixed points
Solutions in massive type IIA supergravity
We consider the scalar-gravity action of F(4) gauged supergravity, [13], in the conventions of [23]. The scalar potential has one supersymmetric and one non-supersymmetric fixed point. We employ the uplift formula to massive type IIA supergravity, [30], given in [23]. In Einstein frame, the metric, the dilaton, and the four-form flux are non-trivial and are given, respectively, in [31], (2.2), with the definitions given there. The metric and the volume form of the three-sphere are expressed in terms of the SU(2) left-invariant one-forms σ_I; introducing explicit SU(2) left-invariant one-forms, the metric and the volume form take the explicit form (2.9).
D4-brane probes
The uplift formula for the six-form flux is given in [23], (2.10). At the AdS 6 fixed points it yields the flux in terms of the warp factor, where l is the radius of AdS 6 . Thus we obtain the five-form potential, where we use ∂_r U = 0, ∂_ξ U = 0 at the fixed points; U is the so-called geometric scalar potential. (For the supersymmetric flows the five-form potential can be calculated over the whole flow; see appendix A.1.) We partition the spacetime coordinates and choose the static gauge, where η^a are the worldvolume coordinates, and compute the pull-back of the metric. We now study the worldvolume action of the D4-branes, which is given by a sum of DBI and WZ terms. If the probe branes move slowly, the worldvolume action in Einstein frame is

$$
S \,=\, -\int d^5\eta \left[\, e^{5A}\,\Delta^{15/16} X^{5/16} \sin^{5/24}\!\xi \,-\, \frac{1}{2}\, e^{3A}\,\Delta^{9/16} X^{3/16} \sin^{1/8}\!\xi \; G_{mn}\,\dot{y}^m \dot{y}^n \,+\, \cdots \,\right] \,-\, \frac{\sqrt{2}\,g}{3}\int \widetilde{C}^{(5)} , \qquad (2.17)
$$

where $\widetilde{C}^{(5)}$ is the pull-back of the five-form potential. The worldvolume action then reduces to a sum of kinetic and potential terms. The final probe brane potential is quite simple. From the probe brane potential, we test the brane-jet instabilities of the supersymmetric and non-supersymmetric AdS 6 fixed points. We set g = 3√2/2 for l = 1. The plots of the brane potential over the hemisphere, 0 ≤ ξ ≤ π, are given in Figure 1. We conclude that the non-supersymmetric AdS 6 fixed point is not stable.
3 The AdS 7 fixed points
M5-brane probes
At the AdS 7 fixed points, the seven-form flux is expressed in terms of the warp factor, where l is the radius of AdS 7 . Thus we obtain the six-form potential, where we use ∂_r U = 0, ∂_α U = 0 at the fixed points; U is the so-called geometric scalar potential. We partition the spacetime coordinates and choose the static gauge, where η^a are the worldvolume coordinates, and compute the pull-back of the metric. We now study the worldvolume action of the M5-branes, which is given by a sum of DBI and WZ terms. If the probe branes move slowly, the worldvolume action, where $\widetilde{C}^{(6)}$ is the pull-back of the six-form potential, reduces to a sum of kinetic and potential terms, (3.14). The final probe brane potential is quite simple. From the probe brane potential, we test the brane-jet instabilities of the supersymmetric and non-supersymmetric AdS 7 fixed points. We set g = 2 for l = 1. The plots are given in Figure 2. We conclude that the non-supersymmetric AdS 7 fixed point is not stable.
A Potentials of the fluxes for supersymmetric flows
In the appendix we derive the potentials of the fluxes for supersymmetric flows.
A.1 Flows from AdS 6
We consider the domain wall background, [34], with the supersymmetry equations given there. The uplift formula for the six-form flux is given in [23]. For the domain wall solutions we evaluate the six-form flux and then obtain the five-form potential. If we employ this five-form potential to compute the probe brane potential, it vanishes identically over the whole flow,

$$ V = e^{5A}\,(\Delta - \Delta) = 0 . \qquad (\text{A.8}) $$
A.2 Flows from AdS 7
We consider the domain wall background, [37]. The uplift formula for the seven-form flux is given in [32]. From a calculation analogous to that of the previous subsection, we obtain the six-form potential. If we employ this six-form potential to compute the probe brane potential, it vanishes identically over the whole flow,

$$ V = e^{6A}\,(\Delta - \Delta) = 0 . \qquad (\text{A.13}) $$
"Physics"
] |
The Effect of Multimedia in Increasing the Integers Operation Ability
This study aims to reveal the effectiveness of multimedia in increasing the ability to calculate with integers. This research was conducted in the third class of SDN Cibodas Majalengka. The research method is quantitative with a pretest-posttest control group design: one class used multimedia and one class used ordinary learning. After analysing the data, it can be concluded that the integer operation ability of the experimental class is better than that of the control class. Other results show that student responses were positive after the use of multimedia learning.
Introduction
Education is a conscious and planned effort to develop individuals in society so as to humanize human beings [1]. Its purpose is to form humans as individuals who cannot be separated from society. Mathematics, as a subject that must be given to students, has an important role in the development of technology. Through mathematics we can solve problems encountered in everyday life. Therefore, one of the goals of mathematics education is that students can think systematically.
Problems in mathematics are of concern to mathematics educators themselves. Among them are problems with material that is considered the foundation of mathematics learning. One frequently encountered problem concerns integer operations: difficulties occur in addition and subtraction when combining positive and negative integers. This problem occurs at SDN Cibodas Majalengka. Extra effort is needed for students to understand integer operations well, which matters because integer operations are the foundation for working on other mathematical problems. Innovation in mathematics learning is needed so that this foundational material can be mastered. One effort the teacher can make to overcome the problem is to use media, particularly media that can visualize the material. The use of multimedia is one way for the teacher to make it easier for students to understand integer operation material. Through multimedia, mathematics that seems abstract can be seen more concretely because the material is prepared in advance by the teacher. Multimedia is media that combines several elements such as graphics, text and others. This is consistent with the view that multimedia is the use of computers to present and combine text, sound, images, animation, and video with tools and links so that users can navigate, interact, work and communicate [2]. Multimedia technology empowers the educational process by means of increased interaction between teachers and students [3]. Based on this, multimedia learning uses computer devices as hardware and programs as software in the implementation of learning. Simply put, multimedia can also be interpreted as a combination of several media; examined further, it is a combination of text, graphics, sound and pictures in the form of animation designed for meaningful learning [4].
Multimedia, if done well, can considerably change the view of mathematics in the eyes of students as a difficult and frightening subject. Through interactive multimedia, learning becomes more dynamic because there is communication between students and the multimedia created. Multimedia can shift the teacher's role to that of a facilitator in learning and can help manage students in implementing learning well. Regarding the benefits of multimedia, Kristi & Belet state that technology support which aims to eliminate students' negative views, attitudes and relationships about learning can make learning more effective. It can thus be emphasized that the use of multimedia can reduce student negativity towards learning, so that student attitudes become better, which in turn can help teachers achieve learning goals [5].
Research Method
This research was conducted with third-grade students at SDN Cibodas Majalengka. The number of students was 32. The study was conducted in 2019. This study uses a quantitative approach with a "One Group Pretest-Posttest Design". The research design is as follows: O1 is a pretest administered before learning with multimedia; X is the treatment, namely learning using learning multimedia; and O2 is a posttest used to see the extent of students' understanding after the implementation of learning using multimedia. The statistical test used in this study is the paired-sample t-test if the prerequisites are met, and the Wilcoxon test otherwise. Both the pretest and the posttest were deliberately carried out by students within the learning multimedia itself, in the hope that students could see first-hand the score obtained after all the questions were done; a low pretest score would then be an encouragement for students to engage with the learning. The research framework is summarized in Figure 1: mathematical learning problems and low mathematical ability lead to low integer-operation skills; interesting learning using multimedia is expected to increase students' motivation and, in turn, their arithmetic ability. Figure 1. Framework
Result and Discussion
The results of the study provide an overview of the integer operation ability of third-grade students at SDN Cibodas Majalengka. The research started by creating the learning multimedia to be used in the learning process in class III of SDN Cibodas. The multimedia was first given to experts for validation: Dendi S.Kom as the multimedia expert and M. Gilar Jatisunda M.Pd. as the mathematics content expert. The following is an overview of the multimedia created by the teacher. The researchers present integer addition through a parable, modeling with apples, so that the mathematics becomes concrete. By giving examples like this, it is hoped that students can better understand the material being taught. At the beginning of the session students are given answer choices, while for subsequent questions students must answer the teacher's questions directly. Through schemes like this, students can already correct their answers directly from within the multimedia created. The following is the display when students answer.
Figure 3. Interactive Display of Wrong Answers
The picture shows the display when students answer questions. In the picture the student has answered incorrectly, so the student must answer the question again. During practice, students must answer each question correctly before they can proceed to the next one. If students experience difficulties, the teacher is ready to help solve the problem. Through this, the teacher's role changes to that of a facilitator in learning.
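The retry-until-correct behaviour described above can be sketched in a few lines. This is a hypothetical illustration, not the actual multimedia program, whose implementation is not described in the paper:

```python
import random

def integer_quiz(num_questions=5):
    """Hypothetical sketch of the feedback loop described in the text:
    a wrong answer triggers immediate feedback and the student retries
    before the next question appears."""
    for i in range(1, num_questions + 1):
        a = random.randint(-10, 10)
        b = random.randint(-10, 10)
        op = random.choice(["+", "-"])
        answer = a + b if op == "+" else a - b
        while True:
            try:
                reply = int(input(f"Question {i}: {a} {op} ({b}) = "))
            except ValueError:
                print("Please enter an integer.")
                continue
            if reply == answer:
                print("Correct!")
                break
            # Immediate feedback, as in the multimedia display above.
            print("Wrong answer, try again.")

if __name__ == "__main__":
    integer_quiz()
```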
In general, the multimedia display is made simple and contains contextual elements because it is intended for elementary school students. The figure shows an integer operation. Students can immediately see whether their answers are correct or wrong; if wrong, because this is a learning process, they can still correct them. The whole learning process is carried out in the multimedia program created. Once the multimedia was ready, the next step was to administer the pretest and posttest on integer operation material, the pretest before learning was carried out and the posttest after. The pretest and posttest scores of the class III students at SDN Cibodas Majalengka show that the average posttest score is greater than the average pretest score: the average posttest was 74.69 while the average pretest was 57.66, and the maximum posttest score reached 100. These values only describe the data; to test the hypothesis, prerequisite tests follow. The first prerequisite test is the normality test using the Shapiro-Wilk test. The significance value for the pretest is 0.89, so the pretest data can be taken as normally distributed, while for the posttest the significance value is 0.022, so the posttest data are not normally distributed (significance < 0.05). Because one of the prerequisites is not fulfilled, a non-parametric test is used to answer the hypothesis, namely the Wilcoxon test; the resulting significance value is 0.000. Compared with alpha, this is < 0.05, so Ho is rejected, meaning that there is a significant difference in the students' integer operation ability before and after multimedia learning was given. These results indicate that multimedia learning improves students' understanding of integer operations. Multimedia has a positive influence: students' view of learning mathematics becomes more dynamic because animations make the learning media more interactive. Another thing observed by the researchers during the study is that students' motivation appears to increase because of the use of this multimedia. Motivation is formed because students do not feel saturated and there are challenges when working on problems, so each student becomes more motivated to get the best results. This is consistent with the view that the advantages of interactive multimedia include increasing student motivation [6], unlike when the teacher applies the traditional model; traditional learning media used in teaching and learning activities also make students less motivated [7]. Student motivation can be seen from the interviews: students feel motivated by learning with multimedia. Students' views of learning mathematics change because it is packaged in an interesting way using learning multimedia. When the researcher asked which part of the multimedia interested them, students answered the response section shown when an answer is incorrect: when a student answers a question incorrectly, he can quickly fix it without waiting for the teacher. This makes it easy for students to return to answering the questions.
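The statistical pipeline reported above (Shapiro-Wilk normality check, then paired t-test or Wilcoxon) can be reproduced with standard tools. Since the raw scores are not published, the arrays below are hypothetical stand-ins matching only the reported sample size and means:

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in data: the paper reports n = 32, mean pretest
# 57.66 and mean posttest 74.69, but the raw scores are not published.
rng = np.random.default_rng(0)
pretest = np.clip(rng.normal(57.66, 15, 32), 0, 100)
posttest = np.clip(rng.normal(74.69, 15, 32), 0, 100)

# Step 1: Shapiro-Wilk normality test on each score distribution.
_, p_pre = stats.shapiro(pretest)
_, p_post = stats.shapiro(posttest)

# Step 2: paired t-test if both samples look normal (p >= 0.05),
# otherwise the non-parametric Wilcoxon signed-rank test.
if p_pre >= 0.05 and p_post >= 0.05:
    _, p_value = stats.ttest_rel(pretest, posttest)
    test_name = "paired t-test"
else:
    _, p_value = stats.wilcoxon(pretest, posttest)
    test_name = "Wilcoxon signed-rank test"

print(f"{test_name}: p = {p_value:.4f}")  # p < 0.05 => reject Ho
```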
Another thing that is obtained from the multimedia learning information is students can carry out learning anywhere and anytime. Through multimedia, students can practice anytime and anywhere.
Conclusion
The results of data processing and analysis illustrate that students' integer operation ability increased from pretest to posttest. Based on hypothesis testing using the Wilcoxon test, it was concluded that there were significant differences in students' integer operation ability before and after multimedia learning was given. A suggestion from this study is that teachers of elementary school students can use multimedia learning as an alternative when implementing mathematics learning, because multimedia learning can improve mathematical ability and make learning more dynamic. A further reason this approach can be applied by other teachers is that students' motivation in learning mathematics increased after learning was implemented using learning multimedia.
"Education",
"Computer Science",
"Physics"
] |
Excess-entropy scaling in supercooled binary mixtures
Transport coefficients, such as viscosity or diffusion coefficient, show significant dependence on density or temperature near the glass transition. Although several theories have been proposed for explaining this dynamical slowdown, the origin remains to date elusive. We apply here an excess-entropy scaling strategy using molecular dynamics computer simulations and find a quasiuniversal, almost composition-independent, relation for binary mixtures, extending eight orders of magnitude in viscosity or diffusion coefficient. Metallic alloys are also well captured by this relation. The excess-entropy scaling predicts a quasiuniversal breakdown of the Stokes-Einstein relation between viscosity and diffusion coefficient in the supercooled regime. Additionally, we find evidence that quasiuniversality extends beyond binary mixtures, and that the origin is difficult to explain using existing arguments for single-component quasiuniversality.
R-simple liquids are characterized by strong correlations between the constant-volume canonical-ensemble equilibrium fluctuations of the virial W and the potential energy U at a given state point, quantified by a correlation coefficient R. A pragmatic definition of this class of liquids is R ≥ 0.90, which depends on the state point 41 . R-simple liquids include most or all van der Waals and metallic liquids, but exclude network-forming, covalent-bonding, and strongly ionic or dipolar liquids. R-simple liquids have been shown to exist both in experiments and simulations, and the concept is also relevant for the crystal region, under nanoconfinement, in nonlinear shear flow, and more 26,[48][49][50][51][52][53][54] . A review of R-simple liquids and their isomorphs is given in ref. 55 .
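For reference, the correlation coefficient R entering this definition is, as is standard in the isomorph-theory literature (see ref. 55), computed from the NVT equilibrium fluctuations:

```latex
\[
R \;=\; \frac{\langle \Delta W \,\Delta U\rangle}
             {\sqrt{\langle (\Delta W)^2\rangle\,\langle (\Delta U)^2\rangle}},
\qquad
\Delta W \equiv W - \langle W\rangle,\quad
\Delta U \equiv U - \langle U\rangle,
\]
```

so that R = 1 corresponds to perfect WU correlation and R ≥ 0.90 delimits the R-simple class.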
Rosenfeld reported in his seminal paper a quasiuniversal relation 16,17 for single-component atomic liquids given by the expression

$$ \tilde{X} \;=\; A\, e^{B\, S_{\rm ex}/(N k_B)} , \qquad (1) $$

in which $\tilde{X}$ is a reduced transport coefficient, k_B is Boltzmann's constant, N is the number of particles, and A and B are system-independent constants, with, e.g., A ≈ 0.6 and B ≈ 0.8 for diffusion 16 and A ≈ 0.2 and B ≈ −0.8 for viscosity 17 . Equation (1) enables prediction of unknown transport coefficients for a given system if its excess entropy is known. Later studies revealed that the exponential dependence on the excess entropy does not apply for supercooled liquids, whereas excess-entropy scaling in the form $\tilde{X} = f(S_{\rm ex})$ may still apply 25,31,34,40 . Furthermore, the quasiuniversal relation of single-component atomic liquids was found to break down for, e.g., molecules, which in general do not show quasiuniversal behavior 25,56,57 . The quasiuniversal behavior of single-component atomic liquids may be explained by the exponential (EXP) pair potential [58][59][60] , which can be used as a basis for expanding other pair potentials under certain conditions.
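As a worked numerical example of Eq. (1), a sketch using the approximate constants quoted above; the state point s_ex = −2 is illustrative only:

```python
import numpy as np

def rosenfeld_transport(s_ex, A, B):
    """Reduced transport coefficient from Rosenfeld scaling, Eq. (1):
    X~ = A * exp(B * s_ex), with s_ex = S_ex/(N k_B) <= 0 for a liquid."""
    return A * np.exp(B * s_ex)

# Constants quoted in the text: A ~ 0.6, B ~ 0.8 for diffusion;
# A ~ 0.2, B ~ -0.8 for viscosity. At an illustrative s_ex = -2:
s_ex = -2.0
print("D~   ~=", rosenfeld_transport(s_ex, A=0.6, B=0.8))   # ~= 0.12
print("eta~ ~=", rosenfeld_transport(s_ex, A=0.2, B=-0.8))  # ~= 0.99
```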
Notwithstanding the importance of excess-entropy scaling for single-component atomic liquids, mixtures of atoms are more often used in simulations and experiments to avoid crystallization and to obtain desirable properties in, e.g., metallic alloys 61 . Alas, for atomic mixtures one does not expect quasiuniversality; mixtures may involve atoms of various sizes, different compositions, alongside different interactions amongst the constituent particles. Krekelberg et al. 22 found poor scaling with excess entropy for binary hard-sphere (HS) mixtures with respect to composition and size and formulated a generalized excess-entropy scaling to remedy this problem. Banerjee et al. 34 studied supercooled binary mixtures and found no universal collapse between a tetrahedral-forming ionic melt and other simple mixtures. This has also been found for other ionic melts in the supercooled region 62 . On the other hand, Lötgering-Lin et al. 36 found collapse with composition of binary Lennard-Jones (LJ) mixtures in the high-temperature regime over a limited range in viscosity (a factor of two). A related result has also been found for the computer-simulated metallic alloy AlNi in the high-temperature limit 32 . On account of the high temperatures simulated, these mixtures are expected to behave approximately as single-component atomic liquids, and the results are therefore consistent with the previously mentioned studies and results.
The Stokes-Einstein (SE) relation connects the diffusion coefficient D of a large particle immersed in a solvent with the viscosity η, predicting that D ∝ η^{-1} T. The SE relation breaks down in the supercooled regime, and explanations have been presented from various theoretical perspectives 33,38,[63][64][65][66][67][68][69][70] . Flenner et al. 68 obtained a good collapse of the diffusion coefficient plotted against the structural relaxation time (which may be used as a proxy for the viscosity) for supercooled binary mixtures by scaling the diffusion coefficient and the relaxation time. In other words, the authors showed that the breakdown of SE in the supercooled regime occurs at the same scaled relaxation time and in a quasiuniversal manner for these binary mixtures. Flenner et al. 68 also showed that dynamical heterogeneity exhibits universal features for supercooled liquids. However, it is not clear why a quasiuniversal curve should be observed in the supercooled region as the binary mixtures are very different, and this was also noted by the authors. The focus of the present study is not on the origin of the SE breakdown, but on the possible quasiuniversality observation of Flenner et al. 68 related to the SE breakdown.
The above observations motivate us to carry out an in-depth study of viscosity and diffusion coefficient going deep into the supercooled regime of a wide range of binary atomic mixtures to investigate whether quasiuniversality applies to mixtures, contrary to the expectation and findings of previous studies. We use molecular dynamics GPU-based computer simulations in the NVT ensemble (the RUMD package 71 ) to study six different binary mixtures: The Kob-Andersen binary Lennard-Jones (KA) mixture, the Wahnström (WS) mixture, the generalized LJ (GLJ) mixture, the KA exponential pair potential (KAEXP) mixture, alloys of copper and zirconium (CuZr), and a size asymmetric (AS) mixture. The systems under study include additive and nonadditive mixtures, different steepness of the pair interactions, effective medium interactions, various size asymmetries, and different compositions. Model and simulation details are found in the Methods section. The Supplementary Tables S1 and S2 include all simulation results in a tabular form. The virial potential-energy correlation coefficient R is >0.90 at all investigated state points, except for the CuZr 36:64 and AS mixtures where it is somewhat below 0.90 (see Supplementary Tables S1 and S2); some of these systems have previously been investigated in detail for isomorphs see, e.g., refs. 26,41,44,[52][53][54] .
The computer models have various degrees of glass-forming ability and thus different ranges of supercooling. Throughout the study, we use two different sets of dimensionless units: one based on the microscopic parameters of the potentials, i.e., the length and energy scales of the larger (A) particle, which is standard in computer simulations, and another set of dimensionless units based on macroscopic quantities, with length given in units of $\rho^{-1/3}$, energy in units of $k_B T$, and time in units of $\rho^{-1/3}\sqrt{m/k_B T}$ (m is the particle mass), as applied in excess-entropy scaling and the isomorph theory 16,44,59 . The macroscopic dimensionless units are termed reduced units and are indicated by a tilde above the variable name; microscopic dimensionless units are implicitly assumed when no tilde is given (an exception is the metallic alloys; see "Methods" for their units).
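A minimal sketch of this reduced-unit conversion, assuming simulation inputs in microscopic units (the function name and defaults are ours):

```python
import numpy as np

def to_reduced(D, eta, rho, T, m=1.0, kB=1.0):
    """Convert diffusion coefficient and viscosity to reduced (tilde) units.

    Macroscopic units as defined in the text:
      length unit  l0 = rho**(-1/3)
      energy unit  e0 = kB*T
      time unit    t0 = rho**(-1/3) * sqrt(m/(kB*T))
    Then D~ = D * t0 / l0**2 and eta~ = eta * l0**3 / (e0 * t0).
    """
    l0 = rho ** (-1.0 / 3.0)
    t0 = l0 * np.sqrt(m / (kB * T))
    e0 = kB * T
    D_red = D * t0 / l0**2
    eta_red = eta * l0**3 / (e0 * t0)
    return D_red, eta_red

# Illustrative LJ-like state point in microscopic units:
print(to_reduced(D=1e-4, eta=50.0, rho=1.2, T=0.5))
```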
The main findings of the current study are: (1) A nearly composition-independent excess-entropy scaling relation for all studied binary mixtures extending over eight orders of magnitude in viscosity or diffusion coefficient, going three to four orders of magnitude below the mode-coupling temperature T MCT (i.e., where the dynamics starts to become landscape dominated). (2) A quasiuniversal excess-entropy relation amongst binary atomic mixtures with different interactions (e.g., pair interactions and effective medium interactions), mixing rules, and size asymmetry. We find, additionally, that the departure from universality in the supercooled regime can be rationalized using the so-called density-scaling exponent. As a consequence of these findings, we show that the product of viscosity and diffusion coefficient has virtually the same excess-entropy dependence for all mixtures. Our results thus rationalize the observations of Flenner et al. 68 that SE breaks down at the same scaled relaxation time. The presented simulation results are corroborated by experimental data on metallic alloys from the literature which additionally support the validity of the scalings beyond binary mixtures.
Results
Excess-entropy scaling. The study commences by demonstrating deeply supercooled dynamics, exemplified by the self-part of the intermediate scattering function (ISF; see "Methods" for definition) for the KA mixture at 2:1 composition in Fig. 1a. The value of the wave vector q is that of the first peak of the static structure factor. The 2:1 composition is a much better glass former than the standard 4:1 composition 72-74 , thus giving access to a wider dynamical range. The supercooled dynamics goes three to four decades below T MCT where the standard 4:1 composition would crystallize. The mode-coupling temperature for the 2:1 KA mixture is T MCT = 0.55. We find a plateau in the ISF extending over almost five decades with a stretching exponent β = 0.55 at the lowest temperature, i.e., the ISF is well fitted by the stretched exponential function $\exp[-(t/\tau_\alpha)^\beta]$, where τ_α is the α-relaxation time. Figure 1b displays the viscosity η as a function of 1/T for all the studied binary mixtures and shows in all cases strong deviations from a straight line in the supercooled regime, i.e., significant non-Arrhenius behavior. Figure 1c, d demonstrates excess-entropy scaling for the diffusion coefficient in the 4:1 KA and 3:1 WS mixtures for three different densities and several temperatures; Supplementary Fig. S1 provides the corresponding figures for the viscosity. The 4:1 KA mixture is more commonly studied in the literature than the 2:1 KA mixture, and the former model is therefore used to illustrate excess-entropy scaling. We focus here on the large A-particle diffusion coefficient; results for the B-particle diffusion coefficient are given in Supplementary Fig. S2. Consistent with previous studies 21,34,69 , an excellent collapse with excess entropy is found, extending here to much lower diffusion coefficients than previously studied. Hereafter we focus on showing results for a single fixed density for each system.
Composition excess-entropy scaling. Excess-entropy scaling for a fixed composition was demonstrated in the previous section. However, composition is an extra variable besides density and temperature in the phase diagram of mixtures, and the question is therefore whether excess-entropy scaling can absorb this extra variable and still collapse data to a univariate function of the excess entropy S ex . As mentioned, in light of the results of Krekelberg et al. 22 and from the fact that mixtures have rich phase diagrams, one does not a priori expect any collapse for different compositions. Figure 2 shows the reduced viscosity e η as a function of the excess entropy for the KA mixture, the WS mixture, the GLJ mixture, and the CuZr mixture, each plotted for several compositions. An almost composition-independent curve is found for all mixtures for a dynamic range extending over eight orders of magnitude in viscosity. This result cannot in an obvious way be explained by the quasiuniversality of single-component atomic liquids or by appealing to high temperatures where binary mixtures are expected to behave approximately as singlecomponent liquids.
Quasiuniversal excess-entropy scaling. We proceed to investigate excess-entropy scaling relationships by comparing different systems. Figure 3a shows the reduced viscosity as a function of the excess entropy for all mixtures and compositions. Figure 3b shows the reduced A-particle diffusion coefficient. For reference we have also included data for the single-component LJ (SCLJ) liquid. Additional data for SCLJ are given in ref. 75 , demonstrating that the same trend continues into the gaseous region.
For all investigated mixtures and compositions, a quasiuniversal relationship is observed for both viscosity and diffusion coefficient using the excess entropy as the relevant variable. Some deviations are found for the most supercooled states, depending on the mixture, and thus the use of the term quasiuniversal is appropriate as opposed to the nearly universal relationship observed for different compositions in Fig. 2. We conclude that quasiuniversality applies also for binary mixtures, contrary to expectation and previous studies.
To put the magnitude of the observed deviations into perspective, Figure 3b provides as a reference excess-entropy scaling for an almost sphere-like dumbbell molecule (DB; see grey data points with data taken from ref. 26 ). This model also has R above 0.90 for all investigated state points. Significant deviations are observed at higher temperatures and no quasiuniversality can possibly be established in the deeply supercooled region, indicating that the deviations between the different binary mixtures are relatively small. The departure from universality in the supercooled region is studied more closely below where it is found to correlate with the value of the density-scaling exponent.
How do the above quasiuniversality observations relate to those of Flenner et al.? Flenner et al. 68 observed a quasiuniversal breakdown of SE for five different binary atomic mixtures by scaling the relaxation time (a proxy for the viscosity) and the diffusion coefficient and plotting the diffusion coefficient against the relaxation time 68 . The SE relation in its traditional form is

$$ D \;=\; \frac{k_B T}{c\, \eta\, \sigma_H} , \qquad (2) $$

in which σ_H is the hydrodynamic diameter and c is a constant. Assuming that the hydrodynamic diameter is not a constant but proportional to $\rho^{-1/3}$, the SE relation in reduced units becomes 76

$$ \tilde{D}\, \tilde{\eta} \;=\; c . \qquad (3) $$

A hydrodynamic diameter proportional to $\rho^{-1/3}$ was proposed by Zwanzig 77 and recently shown to be a consequence of the isomorph theory in the sense that Eq. (2) with a constant hydrodynamic diameter is inconsistent with isomorph theory while Eq. (3) is not 76 . We therefore focus on this expression for SE. For atomic mixtures, a possible generalization of the SE relation is to use the individual diffusion coefficients in Eq. (3), e.g., for the A-particle the diffusion coefficient D_A is used 33 , while the constant c is expected to depend on the particle type. The SE relation is now investigated for all the binary mixtures. Figure 3 documents a quasiuniversal relation for both $\tilde{D}_A$ and $\tilde{\eta}$ as a function of the excess entropy S_ex. This result implies that the product is also a quasiuniversal function of S_ex. Figure 4a shows $\tilde{D}_A \tilde{\eta}$ as a function of the excess entropy for all investigated systems. We find a quasiuniversal breakdown of SE (i.e., departure from a constant value) around S_ex/k_B N ≈ −5.0. A breakdown of SE has previously been correlated with the crossing of the so-called two-particle excess entropy and the excess entropy as functions of temperature 69 . It would be interesting to check whether this observation holds for the systems studied here. Figure 4b shows $\tilde{D}_A$ versus $\tilde{\eta}$ in a plot where the SE relation is a straight line with slope −1 (the full black line). Around $\tilde{D}_A \approx 2\times 10^{-2}$ the SE relation begins to break down for all systems and compositions. These data suggest that the relevant variable is the excess entropy, which in a quasiuniversal way correlates with both viscosity and diffusion coefficient and hence also their product, defining the SE relation. Although the departures from universality for viscosity and diffusion coefficient go in opposite directions in Fig. 3, a specific value of S_ex corresponds to a specific value of $\tilde{D}_A$ or $\tilde{\eta}$ due to the separate quasiuniversality of these two quantities. The breakdown is therefore bound to occur at more or less the same value of $\tilde{D}_A$ or $\tilde{\eta}$ for all the studied systems. Supplementary Fig. S2 provides the same figure for the B-particle diffusion coefficient, in which case the same conclusion is reached. We conclude that quasiuniversality for binary mixtures can rationalize the observations of Flenner et al. 68 that SE breaks down at the same scaled relaxation time.

Fig. 1 Supercooled dynamics and excess-entropy scaling. a Temperature dependence of the self-part of the ISF for the 2:1 KA mixture (ρ = 1.400); the mode-coupling temperature is T MCT = 0.55. b The viscosity η as a function of 1/T for all mixtures. Significant non-Arrhenius temperature dependence is observed for all systems. The KAEXP and CuZr mixtures are omitted due to their different temperature scales (they show a similar behavior). The studied densities are given in "Methods". c Excess-entropy scaling for the diffusion coefficient for the 4:1 KA mixture at the densities ρ = 1.204, 1.402, 2.000. d Excess-entropy scaling for the diffusion coefficient for the 3:1 WS mixture at the densities ρ = 1.100, 1.500, 2.000. An excellent collapse is found for both systems, consistent with the isomorph-theory predictions 44 .

Fig. 3 Reduced viscosity and A-particle diffusion coefficient as a function of S_ex for all mixtures and compositions. A quasiuniversal curve is observed in both cases with excess entropy. The black dashed lines give Rosenfeld single-component quasiuniversality, Eq. (1). a Reduced viscosity $\tilde{\eta}$ as a function of S_ex. b Reduced A-particle diffusion coefficient $\tilde{D}_A$ as a function of S_ex. The grey data points (DB, data taken from ref. 26 ) give reference data for an almost sphere-like dumbbell molecule that is easily supercooled; in this case no quasiuniversality is found. In comparison, the observed deviations between the different binary mixture data are relatively small.

The excess entropy approach detailed here does not clarify the origin of the SE breakdown, other than that it should occur in a quasiuniversal manner. Other theoretical approaches, such as dynamical facilitation or the random first-order transition theory, have the SE breakdown as a consequence of dynamical heterogeneity 15,64,65 . These theories provide predictions for the fractional SE exponent observed in Fig. 4b (see the black dashed line) which the excess entropy approach does not provide. We find that the fractional SE exponent for our most supercooled 2:1 KA data is ξ ≈ 0.73, which interestingly is also the number found in simulations of the one-dimensional East model in dynamical facilitation 64 . Similar fractional SE exponents have been noticed before, but in this study we go almost four decades below T MCT and find an excellent agreement with the quasiuniversal excess-entropy scaling.
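For concreteness, the fractional SE exponent referred to here is defined in the standard way through a power law in the breakdown regime:

```latex
\[
\tilde{D} \;\propto\; \tilde{\eta}^{-\xi},
\qquad \xi = 1 \ \text{(SE obeyed)},
\qquad \xi \approx 0.73 \ \text{(most supercooled 2:1 KA data, Fig.\ 4b)}.
\]
```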
Excess-entropy scaling in experiments. A recent experimental study by Blodgett et al. 78 proposed an interesting universality for metallic liquids by scaling viscosity with the high-temperature limit η 0 and temperature with the onset of cooperative motion T A . A good collapse of many different alloys was obtained in the high-temperature limit and close to the glass transition, motivated by the avoided-critical-point theory (KKZNT) 5 . The authors therefore found, to a good approximation, η/η 0 = F(T/T A ). For the alloys studied the authors noted on average that η 0 ∝ ρ and T A /T l ≈ 1.075, where T l is the liquidus (freezing) temperature. Recall that for binary mixtures the liquidus temperature specifies the temperature at constant pressure above which the system is completely liquid (the opposite being the so-called solidus temperature). For R-simple liquids, the temperature is given by T = h(ρ)f(S ex ), in which h(ρ) is a function of density 79 . The freezing line is an approximate isomorph 44,80 , and since an isomorph is characterized by h(ρ)/T = const., one has h(ρ) ∝ T f (ρ), the freezing temperature, with the reference isomorph being the freezing line 81,82 . The quasiuniversality found here explains the quasiuniversality found for metallic alloys since T/T A ≈ T/T l ≈ T/T f (ρ) = f(S ex ).
In Fig. 4b we plot quasielastic neutron scattering measurements of the Ni diffusion coefficient against the reduced viscosity for the binary metallic alloy Zr 64 Ni 36 using data of Brillo et al. 83 (see also, e.g., ref. 84 ) and similar data for Zr 36 Ni 64 from ref. 85 . The reduced diffusion coefficients and viscosities for both Zr 64 Ni 36 and Zr 36 Ni 64 collapse nicely onto the quasiuniversal curve, reflecting the underlying quasiuniversal excess-entropy scaling relationship. The same figure also shows data for the Vit4 (Zr 46.8 Ti 8.2 Cu 7.5 Ni 10 Be 27.5 ) five-component metallic glass former from Yang et al. 86 . The Vit4 glass former also collapses nicely onto the quasiuniversal curve. This shows that quasiuniversality extends beyond the binary mixtures of main focus here. We return to this observation in the "Discussion".
For testing quasiuniversal excess-entropy scaling in experiments as in Fig. 3, the two-body entropy 32,87 could be used as a proxy, but a high-temperature study indicates that it is not always a good approximation 33 . For our data the two-body entropy is a somewhat worse correlator than the excess entropy and also weakens the correlation to the density-scaling exponent (see later section). We therefore emphasize that the scaling is correlated to the full excess entropy which is more difficult to calculate, unfortunately. Figure 4b provides an alternative procedure for testing quasiuniversality in experiments which avoids having to evaluate S ex explicitly.
Additional tests for quasiuniversal behavior. Rosenfeld quasiuniversality for single-component atomic liquids can be explained by appealing to the EXP pair potential, in terms of which other pair potentials under certain conditions may be expanded 59 . For single-component systems quasiuniversality therefore implies not only quasiuniversal Rosenfeld scaling, but also Young and Andersen's structure-dynamics scaling principle 88,89 , quasiuniversal freezing rules 90 , invariance of the reduced viscosity along the melting line 91 , and more. The singlecomponent arguments do not, however, readily generalize to mixtures. In view of this, we proceed to test to which extent quasiuniversality holds for binary mixtures by checking whether the structure is similar amongst state points with similar dynamics, i.e., whether Young and Andersen's scaling principle applies.
Figure 5a, b compares two different compositions (4:1 and 2:1) of the KA mixture at state points for which the excess entropy and reduced diffusion coefficients are almost identical. For these state points there is less than 9% difference in reduced diffusion coefficient and <0.5% difference in excess entropy. Nevertheless, we find that the AA-particle radial distribution functions (RDFs) show rather large deviations between these two systems, certainly much larger than what is normally found for single-component atomic systems 88,89 . Even larger deviations are found for the AB and BB-particle RDFs in Supplementary Fig. S3.
This observation implies that two-body correlations do not uniquely determine the supercooled dynamics and thus that many-body correlations are important for the dynamics of the system 10,92,93 . The rather large difference in RDFs between the two compositions might also be anticipated from the relevance of the locally favored structures (bicapped square antiprisms) for the dynamics in these mixtures 94 . Furthermore, this anticipation is supported by a connection between decoupling of component dynamics, dynamical heterogeneity, and development of different local medium-range-like ordering in the supercooled regime for certain alloys where the local ordering is directly detectable in the RDFs 95 . Figure 5c, d compares AA-particle RDFs and MSDs amongst the KA and WS mixtures at the same 3:1 composition. The state points have <1% difference in reduced diffusion coefficient and <0.3% difference in excess entropy. We find also here rather large variations of the AA-particle RDFs and even larger ones for the AB and BB-particle RDFs (Fig. 5e, f). Supplementary Fig. S4 compares the distribution of Voronoi volumes in the liquid for the same systems and state points as above, showing also here clear differences. The quasiuniversality found in supercooled binary mixtures thus appears to be more subtle than the quasiuniversality observed in single-component atomic liquids at high temperatures. Future work should focus on clarifying the nature behind this observation in the supercooled regime which could be related to local orderings in the supercooled liquid 95 .
Departure from universality. Figure 3 displayed some deviations from universality in the scaling in the supercooled regime. This section considers these deviations in more detail. Figure 6 shows the reduced viscosity and diffusion coefficient as a function of the excess entropy, where each data point is colored after its value for the density-scaling exponent 44 .
The departure from universality in the supercooled regime correlates with the value of the density-scaling exponent, with a smaller value of γ moving the curve up for viscosity and down for diffusion, the opposite being the case for larger γ-values. More similar γ-values, irrespective of mixing rules, interaction types, etc., therefore conform to a more universal scaling in the supercooled regime. In order to investigate the departure from universality more closely, we use an empirical reference-curve fit to the viscosity data for the SCLJ and 4:1 GLJ systems, representing the leftmost part of the data set. The functional form of Eq. (5) is used with the best-fit coefficients c_0 = −0.601 ± 0.0541, c_1 = 0.267 ± 0.0657, c_2 = 0.256 ± 0.028, c_3 = 0.0568 ± 0.005, and c_4 = 0.00448 ± 0.000319, where the number after the ± indicates the estimated standard deviation. A plot of the viscosities for the SCLJ and 4:1 GLJ systems along with the reference curve is shown in Fig. 7a. Figure 7b displays for all binary mixtures the ratio of the viscosity to that obtained from the reference-curve fit $\tilde{\eta}_{\rm ref}$ of Eq. (5). There is clearly a systematic trend in γ, though a few exceptions can also be found. The excess-entropy dependence of the viscosity is super-Arrhenius. A pragmatic approach for linearizing the data is therefore to consider $\log_{10}(\log_{10}(\tilde{\eta}/\tilde{\eta}_{\rm ref}))$; these values are shown in Fig. 8a. The slope in these coordinates, for a given value of γ, is approximately constant with a value of −0.8. The intercept value b(γ) was found to be acceptably modeled by linear interpolation between the intercept values for γ_min = 1.9 and γ_max = 6.1,

$$ b(\gamma) \;=\; -0.369\,\gamma \,-\, 3.649 , \qquad (6) $$

yielding the overall correction

$$ \log_{10}\!\left(\log_{10}\!\left(\frac{\tilde{\eta}}{\tilde{\eta}_{\rm ref}}\right)\right) \;=\; -0.8\,(S_{\rm ex}/k_B N) \,+\, b(\gamma) . \qquad (7) $$

Figure 8b shows the corrected data for viscosity using this three-parameter expression; to apply the correction, only knowledge of γ is needed. A better collapse is obtained compared to Fig. 6a.
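As a small usage sketch of this correction, Eqs. (6) and (7) can be used to remove the γ-dependent departure from a measured viscosity. Since the polynomial form of Eq. (5) is not reproduced above, η̃_ref is not evaluated here; the function names and the example state point are ours:

```python
import numpy as np

def remove_gamma_departure(eta_red, s_ex, gamma):
    """Remove the gamma-dependent departure from a measured reduced
    viscosity using Eqs. (6)-(7), returning the implied eta~_ref.

    eta_red : measured reduced viscosity, eta~
    s_ex    : S_ex/(k_B N), negative in the liquid
    gamma   : density-scaling exponent
    """
    b = -0.369 * gamma - 3.649                 # Eq. (6)
    loglog_ratio = -0.8 * s_ex + b             # Eq. (7): log10(log10(eta~/eta~_ref))
    ratio = 10.0 ** (10.0 ** loglog_ratio)     # eta~ / eta~_ref
    return eta_red / ratio

# Illustrative supercooled state point with s_ex = -5.5 and gamma = 5.0:
print(remove_gamma_departure(eta_red=1e5, s_ex=-5.5, gamma=5.0))
```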
Discussion
For both the diffusion coefficient and the viscosity, the current study has detailed an almost composition-independent relation to the excess entropy for a given system, as well as a quasiuniversal relation amongst different systems. As the viscosity and diffusion coefficient both show quasiuniversality, their product is also quasiuniversal. The SE relation for viscosity and diffusion coefficient must then break down at the same reduced relaxation time or, equivalently, the same value of the excess entropy. Our observations therefore rationalize the universal SE breakdown results of Flenner et al. 68 . The departure from universality correlates with the density-scaling exponent γ with more similar γ-values exhibiting a more similar scaling. This may provide a hint towards explaining the observed deviations in the future.
The isomorph theory states that certain quantities in reduced units are invariant along constant excess-entropy curves in the thermodynamic phase diagram. This fact leads immediately to excess-entropy scaling as described in the Introduction. Does this necessarily imply a causal link between the excess entropy and transport coefficients? The answer is no, because one can in principle take the opposite view and posit that transport coefficients control the excess entropy. Quasiuniversality is often explained by referring to the HS model 96 . The HS model was recently questioned as a good reference system, however, because it cannot account for all quasiuniversality observations [58][59][60]96 . Likewise, we do not believe that the HS model can explain our observations, even by introducing two different spheres, as we considered both very soft and very harsh repulsive pair potentials, highly nonadditive and exothermic mixtures, and mixtures with effective medium interactions. It is also not obvious how the EXP pair-potential arguments for the quasiuniversality of single-component atomic liquids can be extended to explain our observations. The fact that the RDFs and Voronoi volumes were observed not to be the same for state points with very similar dynamics and excess entropy points to a possibly more complex kind of quasiuniversality than that of single-component systems.
An open question is why quasiuniversality is observed in atomic mixtures but not in, e.g., single-component molecular systems, even for small molecules 25 . A conjecture is that by removing certain degrees of freedom (e.g., vibrational degrees of freedom) one might be able to unravel quasiuniversality in molecular systems 52,97 . Another relevant question is how a large mass or size ratio would influence the scalings. We studied up to a factor of two in mass ratio and up to a factor of three in size ratio between the constituent particles. A recent study 98 has shown that both cases can have a nontrivial effect on the dynamics of supercooled liquids. Binary mixtures with very large size ratios are not expected to be R-simple 53 . A possible explanation for the lack of scaling in some of the results of Krekelberg et al. 22 is then that these systems are not R-simple.
A limitation of the current study is the focus on binary atomic mixtures. Figure 4b included data for the Vit4 (Zr 46.8 Ti 8.2 Cu 7.5 Ni 10 Be 27.5 ) five-component glass former 86 and showed a good collapse for the diffusion coefficient of Ni/Ti/Cu. We therefore anticipate that mixtures with several components are also covered by the quasiuniversality relation discovered here for binary mixtures. However, it has been observed in some metallic glass formers that SE can apply for one specific component but not for others (see, e.g., ref. 99 ). This behavior could be related to the development of different local orderings in the liquid, as seen for a Cr-based alloy 95 . Fundamental questions are therefore: For which component does quasiuniversality hold, and why?
Related to this topic, a recent study found similar structure and dynamics for weakly polydisperse systems sharing the same repulsion when compared at the same T/T g value 100 , where T g is the glass transition temperature. These results are consistent with our conjecture that quasiuniversality extends beyond binary mixtures. Due to the extremely time-consuming simulations of this paper, this intriguing topic is left for future research.
A long-standing issue in the study of supercooled liquids is what controls the dynamics. We find in this study that the excess entropy correlates well with the viscosity and diffusion coefficient for a wide range of binary mixtures, including metallic alloys. Furthermore, evidence has been presented that these results may extend beyond binary mixtures. The novel multicomponent metallic alloys being designed today cannot be comprehensively studied in experiments because of the immense number of possible mixture compositions 101,102 . The approach proposed in this paper offers a means of providing predictive guidance for the transport properties of novel alloys since the quasiuniversal excess-entropy scaling is expected to hold for these liquids. As a result, it is a realistic hope that excess-entropy scaling may facilitate the design of future metallic glasses.
Methods
Simulation details. Molecular dynamics computer simulations were carried out using Nvidia Geforce GTX 1080 graphics cards and the Roskilde University Molecular Dynamics (RUMD) package, version 3.4, in single precision 71 . Very long equilibration runs (the longest ones lasting more than 12 months) were used to ensure equilibrium before initiating production runs. The equilibrium and production runs were in the NVT ensemble with Nosé-Hoover thermostatting 103 . Possible crystallization was checked using various order parameters, potential energy, etc. It was confirmed after equilibration that the results are reproducible by running the production-run simulations at least twice.
Binary mixtures. We studied six different binary mixtures: the Kob-Andersen binary Lennard-Jones (KA) mixture, the Wahnström (WS) mixture, the GLJ mixture, the KA exponential pair potential (KAEXP) mixture, alloys of copper and zirconium (CuZr), and a size asymmetric (AS) mixture. One or several compositions were studied for each mixture. We focused mainly on one density and varied the temperature, but for the 4:1 KA and 3:1 WS mixtures density was additionally varied. For reference the SCLJ liquid was also simulated (ρ = 0.850 and N = 1024). All pair potentials used a shifted-potential cutoff, except for KAEXP which used a shifted-force cutoff 47,60,104,105 .
The KAEXP mixture uses the same parameters as the KA mixture but replaces the LJ pair potential with repulsive exponential pair potentials given by $v_{\alpha\beta}(r) = \epsilon_{\alpha\beta}\exp[-r/\sigma_{\alpha\beta}]$. The cutoff is r cut = 4.50ρ −1/3 , i.e., the cutoff depends on density. The density was ρ = 0.001 for the 4:1 composition. The particle number was N = 1024 and the time step was Δt = 0.0025 in (macroscopically) reduced units. The longest production runs were 4.3 billion time steps. The single-component EXP pair-potential liquid was studied in refs. 60,105 .
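For concreteness, here is a minimal Python sketch of such a repulsive EXP pair potential with a shifted-force cutoff (both the potential and its derivative vanish at the cutoff); the parameter values below are placeholders rather than the production settings.

```python
import numpy as np

def exp_pair_potential_sf(r, eps=1.0, sigma=1.0, r_cut=4.5):
    """Repulsive EXP pair potential v(r) = eps * exp(-r/sigma) with a
    shifted-force cutoff: v and v' are both zero at r = r_cut."""
    v = eps * np.exp(-r / sigma)
    v_cut = eps * np.exp(-r_cut / sigma)
    dv_cut = -v_cut / sigma                   # v'(r_cut)
    v_sf = v - v_cut - (r - r_cut) * dv_cut   # shifted-force construction
    return np.where(r < r_cut, v_sf, 0.0)

r = np.linspace(0.5, 5.0, 10)
print(exp_pair_potential_sf(r))
```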
The CuZr mixture was simulated using the Effective Medium Theory (EMT) for metallic alloys 109,110 . EMT is a semiempirical many-body potential derived from DFT that offers a significant advantage over, e.g., most standard Embedded Atom Method (EAM) potentials since EMT does not require a tabulated format for the potential. The unit system used for CuZr has length in ångström (Å), mass in atomic mass units (u), and energy in electron volts (eV). We studied the compositions CuZr 64:36 and CuZr 36:64 at the density ρ = 0.08 Å −3 , and the particle number was N = 1000 for both compositions. The time step was Δt ≈ 7.13 fs. The longest production runs were 400 million time steps.
The AS mixture is governed by the LJ pair potential with σ AA = 1.00, σ AB = 0.65, σ BB = 0.30; ϵ AA = 1.00, ϵ AB = 1.40, ϵ BB = 0.80; m A = 2.0 and m B = 1.0; r cut = 2.50σ αβ and ρ = 1.100 at 3:1 composition. For this kind of size disparity (more than a factor of three) it is difficult to avoid crystallization by phase separation, even with a negative heat of mixing. The particle number was N = 1000 and the time step was Δt = 0.0025. The longest production runs were 67 million time steps.
Analysis. The diffusion coefficients of each particle type were obtained by fitting their respective long-time mean-square displacements to the Einstein relation. The shear viscosities were obtained by integrating the shear-stress time autocorrelation function via the Green-Kubo relation

$$\eta = \frac{V}{k_{B}T}\int_{0}^{\infty}\langle S_{\alpha\beta}(0)\,S_{\alpha\beta}(t)\rangle\,dt, \qquad (8)$$

where S αβ is the αβ-component of the stress tensor (α ≠ β = x, y, z), V is the volume, and 〈. . . 〉 denotes an ensemble average. All three off-diagonal stress tensor components were averaged for better statistics. The value of the viscosity was extracted from the first maximum of the integral, which corresponds to the plateau value obtained in the running integral of Eq. (8). The self-part of the ISF was evaluated from $F_{s}(q,t) = \langle \exp[i\,\mathbf{q}\cdot\Delta\mathbf{r}_{i}(t)]\rangle$, where r i is the position of particle i and q is the wave vector. The length of the wave vector is given by the position of the first peak of the static structure factor. The excess entropy S ex was calculated from the thermodynamic relation $S_{\mathrm{ex}} = (U_{\mathrm{ex}} - F_{\mathrm{ex}})/T$ using thermodynamic integration, where F ex is the excess Helmholtz free energy and U ex ≡ U is the potential energy. Application of thermodynamic integration to supercooled liquids is standard 6,34,38 . First, a path at a high temperature T ref above the critical point was chosen, integrating

$$F_{\mathrm{ex}}(\rho, T_{\mathrm{ref}}) = \int_{0}^{\rho}\frac{W(\rho')}{\rho'}\,d\rho'$$

from low density (the ideal gas) to the density of interest ρ in order to obtain F ex (ρ, T ref ), in which W is the virial defined by W = PV − Nk B T. Both U and W were obtained from the actual simulations. Afterwards, a path at the constant density ρ was simulated, integrating from T ref to T to obtain F ex (ρ, T) using the identity

$$\frac{F_{\mathrm{ex}}(\rho, T)}{T} = \frac{F_{\mathrm{ex}}(\rho, T_{\mathrm{ref}})}{T_{\mathrm{ref}}} + \int_{1/T_{\mathrm{ref}}}^{1/T} U\, d(1/T').$$

We confirmed that the results for S ex , within half a percent, are independent of the thermodynamic path as well as of the applied discretization of density and temperature. Larger error bars on S ex were found for the CuZr mixtures than for the other systems due to a nonmonotonic behavior at very low densities.
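The following Python sketch illustrates both extraction procedures on placeholder time series: the diffusion coefficient from the long-time MSD slope via the Einstein relation, and the viscosity from the first maximum of the running Green-Kubo integral of Eq. (8). The function names and example data are hypothetical, not RUMD output.

```python
import numpy as np

def diffusion_from_msd(t, msd, dim=3, t_min=10.0):
    """Einstein relation: MSD(t) ~ 2*dim*D*t in the long-time limit,
    so D is the long-time slope divided by 2*dim."""
    mask = t > t_min                               # keep only the diffusive regime
    slope = np.polyfit(t[mask], msd[mask], 1)[0]
    return slope / (2 * dim)

def viscosity_green_kubo(stress_acf, dt, volume, temperature, kB=1.0):
    """Running Green-Kubo integral of Eq. (8); the viscosity is read
    off at the first maximum of the running integral (its plateau)."""
    running = (volume / (kB * temperature)) * np.cumsum(stress_acf) * dt
    turning = np.where(np.diff(running) < 0)[0]    # first local maximum, if any
    idx = turning[0] if turning.size else len(running) - 1
    return running[idx]

# Placeholder time series (an exponentially decaying ACF and a toy MSD):
t = np.arange(0.0, 100.0, 0.01)
acf = 0.05 * np.exp(-t / 2.0)
msd = 6 * 0.01 * t + 0.1 * (1.0 - np.exp(-t))
print(viscosity_green_kubo(acf, dt=0.01, volume=1000.0, temperature=0.5))
print(diffusion_from_msd(t, msd))
```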
Data availability
The data in csv file format that support the findings of Figs
Comparing the latent space of generative models
Different encodings of datapoints in the latent space of latent-vector generative models may result in more or less effective and disentangled characterizations of the different explanatory factors of variation behind the data. Many works have recently been devoted to the exploration of the latent space of specific models, mostly focused on the study of how features are disentangled and of how trajectories producing desired alterations of data in the visible space can be found. In this work we address the more general problem of comparing the latent spaces of different models, looking for transformations between them. We confined the investigation to the familiar and largely investigated case of generative models for the data manifold of human faces. The surprising, preliminary result reported in this article is that (provided models have not been taught or explicitly conceived to act differently) a simple linear mapping is enough to pass from one latent space to another while preserving most of the information. This has far-reaching consequences for representation learning, potentially paving the way to the transformation of editing trajectories from one space to another, or the adaptation of disentanglement techniques between different generative domains.
Introduction
The task of generating new data from samples has always exerted a particular fascination in machine learning, both because of the potential for almost endless streams of new and original data, as well as for the implications on the knowledge extracted by a model about the data manifold. It is clear that the effectiveness of generative techniques crucially depends on data representation, and different encodings may result in more or less entangled combinations of the different explanatory factors of variation behind the data [1,2]. The key idea behind unsupervised learning of disentangled representations is that real-world data depends on a relatively small number of explanatory factors of variation which can be compressed and recovered by unsupervised learning techniques [3][4][5]. Strictly related to representation learning, the task of exploration of the latent space of generative models aims to understand the "arithmetic" of the variational factors [6,7], and the effect that particular trajectories inside the latent space could produce in the visible domain [8][9][10].
In spite of the huge amount of work devoted to the exploration of latent spaces, relatively little attention has so far been paid to the problem of comparing the latent spaces of different generative techniques, i.e. to the problem of locating the internal representation z X of X in a given space starting from its representation in the latent space of a different model (see Figure 1).
The key questions we are interested in are the following:
1. Do different trainings of the same generative model induce the extraction of similar features from data, and hence substantially isomorphic spaces up to, say, permutations or linear transformations? We refer to this type of transformations as being of Type 1.
2. Do different architectural models driven by common learning objectives (e.g. maximizing log-likelihood) learn similar features? How much do the extracted features depend on the neural network structure? We refer to this type of transformations, between spaces of variants of models in the same class, as being of Type 2.
3. Finally, what is the influence of the learning objective on the internal representation? Is, e.g., a Generative Adversarial Network learning the same features as a Variational AutoEncoder? We refer to these transformations as being of Type 3.
Any answer, whether positive or negative, could substantially improve our knowledge of generative techniques.

Fig. 1: Given a generative model, it is usually possible to have an encoder-decoder pair mapping the visible space to the latent one (even GANs can be inverted, see Section 2.2.1). From this assumption, it is always possible to map an internal representation in a space Z 1 to the corresponding internal representation in a different space Z 2 by passing through the visible domain. This provides a supervised set of input/output pairs: we can try to learn a direct map, as simple as possible. The astonishing fact is that a simple linear map gives excellent results, in many situations. This is quite surprising, given that both encoder and decoder functions are modeled by deep, non-linear transformations.
Our surprising preliminary results, reported in this article, suggest that (provided models have not been taught or explicitly conceived to act differently) it is possible to pass from one latent space to another by means of a simple linear mapping that preserves most of the information.
This linear transformation may be computed directly through linear regression, but we advocate a learning-based technique based on a suitable small "support set" of data samples enucleating, in the visible space, the key variational factors of the data manifold. When we say "small", we mean that the set has a cardinality comparable with the number of variables in the latent space (so, really small): for instance, in the case of CelebA, we experimented with a support set of 150 images. Locating these 150 samples in the two spaces is enough to allow the definition of a relocation map for all data.
The main results of our investigation are summarized in Figure 2. Figure 2a describes an example of relocation between different trainings of the same network (relocation of Type 1); Figure 2b is relative to the relocation between different models of the same class, in this case two different VAEs (relocation of Type 2); Figure 2c is an example of relocation from a VAE to a GAN, that is, between different models with different learning objectives (relocation of Type 3). While details may slightly differ, especially for transformations between different generative models, the overall appearance (pose, colors and background) is substantially preserved. Considering the non-linearity of these generative processes, the result is, at first glance, quite surprising: pairs of points related by a simple linear mapping in the latent spaces of two different generative models are decoded by the respective decoders into closely related, in some cases almost identical, images!
Structure of the article
The structure of the article is the following. We start by providing, in Section 2, a quick introduction to generative modeling, and in particular to latent-variable models, comprising the popular Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs); in this section, we also discuss the problem of inverting GANs. Section 3 covers the domain of semantic exploration of latent spaces, representation learning and disentanglement. In Section 4 we introduce the datasets, the models and the methodology that we used for our experiments. Since we focus on linear transformations, they can be defined by a small set of points, which we call the Support Set: locating the points of the Support Set in the two latent spaces is enough to define the transformation. Our approach to obtaining a good Support Set is discussed in Section 5. In Section 6 we give numerical results about the mappings (visual examples, more readily interpretable, are spread over the article). Section 7 is devoted to the discussion of the latent space of StyleGAN, which seems to present some pathological issues: many faces in the CelebA dataset lie outside of its generative range. Even in this case, however, provided we confine the transformation to the StyleGAN subspace, we discover interesting linear mappings to other spaces. Conclusions and future work are discussed in Section 8. Additional material is given in the appendix: a detailed description of the models used in this work (Section A) and a full list of all images in the CelebA Support Set (Section B).
Generative Modeling
Generative modeling is the task of learning the high-dimensional probability distribution of a data manifold starting from a representative set of samples. When successfully trained, generative models can be used to create new samples from the underlying distribution, possibly providing estimations of their likelihood. The learning process provides an essential and valuable insight into the kind of features used to encode the distribution, and the way the model "interpreted" and "understood" the data.
In this article, we shall mostly focus on the popular and effective latent-variable models, that is, models where the actual distribution p(x) of a data point x is expressed through marginalization over a vector z of latent variables:

$$p(x) = \int p(z)\,p(x|z)\,dz,$$

where z is the latent encoding of x, distributed with a known distribution p(z) named the prior distribution. The distribution p(x|z) is usually learned by a deep neural network; after training, it can be used to generate new samples via ancestral sampling: 1. sample z ∼ p(z); 2. generate x ∼ p(x|z).
Variational Autoencoders
A Variational AutoEncoder (VAE) [28] has a structure similar to a classical auto-encoder [29,30], being composed of an encoder producing a latent vector z from an input x and of a decoder which reconstructs the input x from a latent code; the two components are simultaneously trained using, e.g., a mean squared error loss ||x − x̂||². However, in order to regularize the latent space, which is a precondition to support semantically meaningful generation [12], latent variables are interpreted as parameters of a local distribution q(z|x), and a Kullback-Leibler component KL(q(z|x) || N(0, 1)) is added to the reconstruction loss, with the purpose of pushing the marginal distribution q(z) towards a standard Gaussian N(0, 1). Balancing these two loss components, usually via a γ or β parameter, is crucial for better generation and learning of disentangled features [31][32][33].
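A minimal sketch of this loss, assuming a diagonal-Gaussian posterior and a γ-weighted reconstruction term (the encoder and decoder modules themselves are placeholders):

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_hat, mu, log_var, gamma=1.0):
    """Reconstruction (MSE) plus KL(q(z|x) || N(0, I)), balanced by gamma.
    mu and log_var are the encoder outputs parameterizing q(z|x); the KL
    term uses the usual closed form for diagonal Gaussians."""
    rec = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return gamma * rec + kl

def reparameterize(mu, log_var):
    """Sample z ~ q(z|x) in a differentiable way (reparameterization trick)."""
    std = torch.exp(0.5 * log_var)
    return mu + std * torch.randn_like(std)
```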
Several issues affect the performance of VAEs, most importantly the blurriness of generated images [34]. As such, many variants have been proposed over the years to improve results by addressing the mismatch between the aggregate inference distribution q(z) and the prior p(z). These comprise: quantization of the latent code (VQ-VAE [35]), use of normalizing flows (Hybrid VAE [36]), two-stage architectures [37], and hierarchical models [16,38].
Generative Adversarial Networks
In a Generative Adversarial Network (GAN) [6,39,40] a generator, acting as a sampler for the desired distribution, is jointly trained with a discriminator, which evaluates the output of the generator by attempting to distinguish real from generated ("fake") data. This can be formalized in the form of a zero-sum game, where one agent's gain is another agent's loss; the generator and the discriminator must be trained alternately, freezing the respective adversarial component; at the end of the process the generator is supposed to win, producing samples that the discriminator is unable to distinguish from real ones.
GANs are known to have unstable training and several issues, among which is the well-known mode collapse phenomenon [40]. Indeed, multiple variations of the loss function have been studied over time [41], including the Wasserstein loss [42], the least squares loss [43], and the introduction of a penalty term for the discriminator [44]. Furthermore, a myriad of variations on the structure itself have been proposed, among which: maximizing the mutual information between specific latent variables [45]; exploiting pairs of GANs to perform style transfer between images in distinct datasets [46]; GANs with attention layers [47].
A particularly interesting series of works comes from the application of style-transfer concepts to GANs (StyleGAN and its successors [48][49][50]). StyleGAN builds on Progressive GANs [51], whose structure is unchanged from that of a baseline GAN but is trained progressively: the architecture is trained starting from down-sampled images at very low resolution, and at each progression step the input size is increased while additional layers are introduced into both the generator and the discriminator.
StyleGAN further builds on this structure by adding to the generator (the Synthesis network) a fully connected Mapping network which takes the usual seed z ∈ Z and produces a "style" vector w ∈ W. This vector is then specialized per layer through Adaptive Instance Normalization (AdaIN), which according to the authors produces a behavior similar to style transfer. Furthermore, a small amount of noise is added to all blocks of the Synthesis network to better fill in the output details. The full structure of StyleGAN can be seen in Figure 3.
GAN Inversion
The generator of a GAN usually takes as input a seed z ∼ N(0, 1), and has a role directly comparable to that of a VAE decoder. However, GANs lack a direct encoding process for the original input sample, unlike a VAE encoder. If, as is the case for our study, both generative and encoding processes are needed, a third neural network has to be added to a pre-trained GAN as a sort of plug-in encoder. This re-coder component is known as an inverse GAN, and building an accurate re-coder is a known problem in the literature [52].
Several approaches to inversion have been explored [53][54][55][56], mostly for editing applications. The simplest are SGD optimization [57] or a learning-based approach such as using a neural network trained on generated images to reconstruct the original latent vector using a mean squared error loss ||z − ẑ||², with the advantage that over-fitting is never an issue since training is not constrained to samples of the original data. Hybrid methods combining both efforts have also been explored [58,59]. Recent works have focused mostly on the inversion of the popular StyleGAN, building on previous work with a variety of inversion structures and minimization objectives [60][61][62][63][64] with the aim of generalization to any dataset. However, we used a simpler and narrower approach, developing our own StyleGAN inverter for the W space using a naive recoding network. It works surprisingly well for commonly generated samples, with a final mean squared error close to 0.0040. We show some examples of recoding in Figure 4.
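A minimal sketch of this learning-based inversion: a recoder network is trained on freshly generated pairs (z, G(z)) with an MSE loss on the latent code. The generator, the recoder architecture, and all hyperparameters below are placeholders.

```python
import torch

def train_recoder(generator, recoder, z_dim, steps=10000, batch=64, lr=1e-4):
    """Train the recoder E so that E(G(z)) ~ z; since z is sampled anew
    at every step, over-fitting is never an issue."""
    opt = torch.optim.Adam(recoder.parameters(), lr=lr)
    generator.eval()
    for _ in range(steps):
        z = torch.randn(batch, z_dim)        # fresh latent samples
        with torch.no_grad():
            x = generator(z)                 # generated images
        z_hat = recoder(x)
        loss = torch.mean((z_hat - z) ** 2)  # MSE on the latent code
        opt.zero_grad()
        loss.backward()
        opt.step()
    return recoder
```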
Semantic Interpretation of Latent Spaces
The latent space of a generative model efficiently synthesizes information from data; however, the resulting compressed vectors cannot be easily mapped onto understandable features such as labels or attributes. Therefore, it is also unknown how exactly a model learns from data, in terms of how well it encodes its features, biases, and human-meaningful characteristics. At the same time, this knowledge could fundamentally influence the quality of models and provide a foundation on which to improve their performance without relying solely on empirical and qualitative analyses.
Conditional architectures [45,65] can indeed mitigate this issue by explicitly feeding features alongside samples during training, but in doing so they remodel the task as a supervised problem with respect to the classes on which conditioning is done, with all other data features remaining non-explainable. These approaches do not provide interesting information about the way the neural network understands data and, for this reason, they will not be discussed in this work.
Exploration and Disentanglement
Many works attempt to understand the latent space of GANs by performing exploration of the latent space; that is, they introduce small nudges in a direction, based on the empirical principle that these will correspond to a small change in the corresponding generated data. The approach can be particularly useful for image editing: once a semantically meaningful direction is found (e.g. color, pose, shape), it can be traveled to tweak an image, introducing a desired feature without the need for a conditional generation model. InterFaceGAN [8] supposes that for a given feature taking values in (−∞; ∞) there exists a hyperplane in the latent space whose normal vector allows for a gradual modification of the feature, and which can be found, e.g., via an SVM [66]. Further work based on this idea searches for these directions as an iterative or an optimization problem [67] and also extends it to controllable walks in the latent space [10].
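For illustration, an InterFaceGAN-style direction could be estimated with a linear SVM on latent codes labeled by the presence of an attribute, taking the unit normal of the separating hyperplane as the editing direction; all data below are placeholders.

```python
import numpy as np
from sklearn.svm import LinearSVC

def attribute_direction(latents, labels):
    """Fit a hyperplane separating latents with/without the attribute;
    return its unit normal vector as the editing direction."""
    svm = LinearSVC(max_iter=10000).fit(latents, labels)
    n = svm.coef_[0]
    return n / np.linalg.norm(n)

def edit(z, direction, alpha):
    """Move a latent code along the direction to tweak the attribute."""
    return z + alpha * direction

# Placeholder data: 200 codes of dimension 64 with random binary labels.
z_codes = np.random.randn(200, 64)
y = np.random.randint(0, 2, size=200)
d = attribute_direction(z_codes, y)
z_edited = edit(z_codes[0], d, alpha=1.5)
```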
A different, more systemic approach to the problem is taken by [7], which uses a closed-form equation to find the editing direction n i applied per layer i of a generator, which is then composed to find the overall direction n. Another approach of the same "arithmetic" flavor comes from [68], where a generative application of PCA with a non-linear kernel is used to determine the hidden features of a small-scale dataset, without any reliance on a particular generative model.
Much less work on exploration has been devoted to VAEs. An example is given by [9], which however works on a conditional architecture, in order to produce lower-dimensionality subspaces that are easier to analyze.
Datasets
As stated in the abstract, we confined our analysis to the familiar and largely investigated data manifold of human faces. Our dataset of reference is CelebA [69], including its higher-quality version CelebA-HQ [24]. Images taken from CelebA have been aligned as per the original paper [69] and then cropped to size 128 × 128 with a y offset of 45 and an x offset of 25 in order to remove as much background information as possible. The crop is then downsampled to size 64 × 64 with bilinear interpolation.
CelebA-HQ is a dataset of 30K images at resolution 1024 × 1024, obtained from a subset of CelebA with a complex methodology explained in appendix C of [51], comprising a sophisticated preprocessing phase, super-resolution techniques, and selection of the best quality samples.
Generative models
For our experiments we took into consideration 4 different models, two GANs and two VAEs; in each class, we investigated a basic, average-quality "vanilla" version and a more sophisticated, state-of-the-art model. A summarizing Table 1 for these models is provided. In more detail, we investigated the following architectures: 1. a vanilla VAE [28] using γ balancing [31] with a latent dimension Z = 64, trained on the cropped CelebA; 2. a vanilla GAN [39] with a latent dimension Z = 64, trained on the cropped CelebA; 3. the SVAE [11] with a latent dimension Z = 150, trained on the cropped CelebA; 4. StyleGAN [48], pre-trained on CelebA-HQ, which has a latent dimension Z of size 512 and a style-vector latent dimension W of the same size. The structure of StyleGAN has already been briefly discussed in Section 2.2; the in-depth architecture of the other models, not central to the topic of this article, is given in Appendix A. The dimension of the latent space and the resolution of the different models are summarized in Table 1.
Methodology
For each of the previous models, apart from StyleGAN, for which we only had at our disposal a single set of pre-trained parameters, we trained and tested five different instances. When reporting values in the results, unless stated otherwise, they are to be understood as an average over the different trainings.
Mapping between different models (transformations of Type 2 and 3) can involve a number of additional issues. Firstly, the two latent spaces may have considerably different dimensions, for instance 512 for StyleGAN versus 150 for the SVAE, and may work at different resolutions, for instance 1024 × 1024 for StyleGAN versus 64 × 64 for the other models. Furthermore, the two generative models may have been trained on two different datasets which, albeit similar, have different data and different crops. To this aim, when passing from CelebA-HQ to CelebA we take a simplified crop of dimension 880 × 880 with a height offset of 20 and a width offset of 60, which is then downsampled to size 64 × 64 with bilinear interpolation.
Since we are interested in linear mappings, the transformations may be defined by a small set of "corresponding" points common to both spaces: this is what we call a Support Set. Our methodology to build it is defined in Section 5. The Support Set is defined in the visible domain; we trace the respective encodings of its samples in the different spaces, and define the map by linear regression with mean squared error as a loss. When we cannot use a Support Set, we may directly work with the whole visible domain (or the subset of the visible domain common to the two spaces), sampling minibatches in it.
Support Set
In this Section we explain the technique used to build a small support set of examples driving the linear transformation. This is based on the following steps, each one detailed in a respective subsection:
- features ordering: we order latent variables according to their relevance for reconstruction, using a suitable metric discussed below;
- features selection: we select a small number n of particularly significant latent variables; 2^n must be lower than the cardinality of the support set;
- sample selection: we select points in the space belonging to extremal regions with respect to the selected features.
Features ordering
Feature importance, the task of associating a score to input features based on how useful they are for solving a specific problem, is a major subfield of Machine Learning. In the case of generative modeling, the goal is to maximize the (log)likelihood of data, and it is natural to associate a score to features according to their contribution to this objective. It is worth observing that different techniques, like e.g. PCA, would not be beneficial to this aim, due to the shape of the prior latent distribution, which is typically a spherical Gaussian distribution (1). Our feature importance technique requires an encoder in addition to a decoder: it fits particularly well with VAEs, but it can be generalized to GANs by exploiting a re-coder network (see Section 2.2.1). Specifically, in order to evaluate the contribution of a variable to the loss function, we compute, over a large number of data, the average difference between the reconstruction error when the latent variable is zeroed out and the error when it is normally taken into account. We call this information the reconstruction gain associated with the latent variable. It was introduced in [70], where it was used to compare the reconstruction error and the Kullback-Leibler divergence on a per-variable basis, in order to clarify the variable collapse phenomenon [71][72][73].

(1) Even the potential mismatch between the prior and the aggregate inference distribution in the case of VAEs cannot be exploited by PCA, since this technique only takes into consideration the first two moments of the distribution.
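A minimal sketch of this reconstruction-gain computation (the encoder and decoder are placeholder callables; for a VAE the encoder is assumed to return the posterior mean):

```python
import torch

@torch.no_grad()
def reconstruction_gain(encoder, decoder, data_loader, z_dim):
    """Average, over the data, of the increase in reconstruction error
    when a single latent variable is zeroed out (its 'gain')."""
    gains = torch.zeros(z_dim)
    n_batches = 0
    for x in data_loader:
        z = encoder(x)
        base = torch.mean((decoder(z) - x) ** 2)   # normal reconstruction error
        for i in range(z_dim):
            z_masked = z.clone()
            z_masked[:, i] = 0.0                   # zero out variable i
            err = torch.mean((decoder(z_masked) - x) ** 2)
            gains[i] += (err - base).item()
        n_batches += 1
    return gains / n_batches                       # average gain per variable
```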
We ran the experiment on the SVAE, which in our experiments has a latent space of 150 variables. In Figure 5 we show the information gain relative to all its latent variables, ordered by relevance. Eleven variables have a score higher than 10, although the distribution has a relatively long tail: the first 20 variables are responsible for about 75% of the information.
Feature Selection
We keep a small number of the most informative variables. For the way we shall use it, this number must be smaller than the base-2 logarithm of the cardinality of the support set. In our case, we aim at a support set of dimension 150, so we focus on the 7 most relevant variables.
In Figure 6 we show examples of the effect of some of these variables on generated images: we take a random point and progressively modify the given variable in the range between -2.25 and 2.25 (remember that the latent space standard deviation is 1).
Sample selection
Finally, we divide the latent space into sectors corresponding to extreme values of the previously selected variables, and pick samples in these sectors. More precisely, having defined a threshold th and a "direction" dir given by a +/− sign for each selected variable, a sector defined by the pair (th, dir) is the set of points with direction compatible with dir and at a distance from the origin larger than th. Since we consider all possible directions, this gives a total of 2^n sectors, where n is the number of selected variables (for a fixed th). In each sector, we pick a sample at random (sectors with larger th become progressively less inhabited).
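A minimal sketch of the sector assignment: each selected variable contributes one sign bit, giving 2^n sectors. The variable indices below are hypothetical, and the distance is taken here over the selected coordinates.

```python
import numpy as np

def sector_index(z, selected, th):
    """Return the sector (0 .. 2^n - 1) of latent code z, or None if z
    is closer to the origin than the threshold th."""
    coords = z[selected]
    if np.linalg.norm(coords) <= th:
        return None
    bits = (coords > 0).astype(int)               # one sign bit per variable
    return int(bits.dot(2 ** np.arange(len(selected))))

selected_vars = np.array([3, 7, 9, 21, 42, 57, 114])  # hypothetical top-7 indices
z = np.random.randn(150)
print(sector_index(z, selected_vars, th=2.0))
```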
It is interesting to observe that the number of latent points in the dataset within different sectors at a given threshold is far from uniform. This seems to be a confirmation that the actual image distribution is far from the desired Gaussian prior and, in a VAE, a symptom of the potential mismatch between the generative prior and the aggregate inference distribution computed by the encoder, which is a well-known and problematic aspect of VAEs [74][75][76]. Attempts to solve this issue have been made both by acting on the loss function [77] and by exploiting more complex priors [36,78,79]; the actual effects of these techniques on the latent space are an interesting research direction for future investigations.
In Figure 8 we show typical inhabitants of a few given sectors. As expected, they share macroscopic features like background color, pose, hair, and illumination.
Part of the 128 images resulting from our selection process are depicted in Figure 9. The complete list of labels for the support set is reported in the appendix. The samples in the support set occupy "extreme" positions in the latent space with respect to the most informative directions: for this reason, they are supposed to be representative of the principal factors of variation in the dataset.

Fig. 7: Example of sectors in 3 dimensions (cropped to distance 2 from the origin). The distance between sectors is equal to twice a configurable threshold. We work with the 7 most informative latent variables, obtaining a total of 2^7 = 128 sectors.
As a partial confirmation of the previous hypothesis, we expect the distance between elements in the support set to be considerably higher than the average distance between points in the full dataset. This is actually the case: the mean
Results
This Section contains numerical results relative to the transformation between latent spaces. The discussion of StyleGAN, for its relevance and some interesting pathological issues, is postponed to the next Section.
Here, we shall use the names VAE, GAN and SVAE to refer to our specific implementations of these models, discussed in Section 4.2 and detailed in appendix A.
We build a set of corresponding input-output pairs by encoding the Support Set (or the full set of visible data) into the two latent spaces. Then, we directly build a linear map by linear regression, minimizing the mean squared error between target and computed latent vectors.
For each transformation, we provide three values:
- L-MSE (Latent Mean Squared Error): the loss of the model, namely the mean squared error between the target vectors and those computed by the model;
- R-MSE (Reconstruction Error): the mean squared error between the original image in the visible domain and its reconstruction via the source generative model;
- M-MSE (Mapped Error): the mean squared error, in the visible domain, between original images and images reconstructed by the target generative model after linear mapping.
The three errors are graphically described in Figure 10. The latent error L-MSE is not easily deciphered; the comparison between R-MSE and M-MSE provides more intelligible information about the quality of the translation.
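A minimal sketch of the procedure under these definitions: the map is fitted by least squares over paired encodings (with no bias, matching the single dense layer with no bias described in Section 7), and the three errors are computed as in Figure 10. All encoder/decoder callables are placeholders.

```python
import numpy as np

def fit_linear_map(Z1, Z2):
    """Least-squares linear map M (no bias, i.e. a single dense layer)
    such that Z1 @ M is as close as possible to Z2."""
    M, *_ = np.linalg.lstsq(Z1, Z2, rcond=None)
    return M

def relocation_errors(X, encode1, decode1, encode2, decode2, M):
    """The three errors of Figure 10 for a batch X of visible samples."""
    Z1, Z2 = encode1(X), encode2(X)
    Z2_hat = Z1 @ M
    l_mse = np.mean((Z2_hat - Z2) ** 2)          # latent error
    r_mse = np.mean((decode1(Z1) - X) ** 2)      # source reconstruction error
    m_mse = np.mean((decode2(Z2_hat) - X) ** 2)  # mapped reconstruction error
    return l_mse, r_mse, m_mse
```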
Results are given in Table 2.
For the sake of comparison, it is worth recalling that the mean squared error between CelebA images is 0.116; in all models the M-MSE is always below 0.039.
The StyleGAN space
The "extreme" nature of the images in the Support Set makes them a very natural benchmark of the expressiveness of generative models: is it possible to reconstruct these images by passing them through an encoding-decoding process?
For StyleGAN trained on CelebA-HQ, results are disappointing (see Figure 11, and compare them with the inversion of generated images in Figure 4). Although the macrostructure is preserved (background, pose, illumination), details are markedly different. Numerically, while the average mean squared error on generated images is 0.026, the corresponding value for the Support Set is 0.251, almost ten times higher. Our conjecture is that StyleGAN is simply unable to generate the data in the support set: they do not belong to its latent space, specifically due to its training dataset. To check this claim we implemented a gradient ascent technique to generate latent representations corresponding to a desired output. Once again, the gradient ascent technique provides almost perfect results on generated images but substantially fails on images in the CelebA support set, as shown in Figure 12.

Table 2: When source and target coincide, we mean different trainings of the same model (Type 1 transformations).
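A minimal sketch of this optimization-based check (the generator is a placeholder; the "ascent" is implemented, equivalently, as descent on the pixel-space reconstruction error):

```python
import torch

def invert_by_optimization(generator, x_target, z_dim, steps=2000, lr=0.01):
    """Optimize a latent code so that generator(z) matches x_target;
    failure to reach a low error suggests x_target lies outside the
    model's generative range."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = torch.mean((generator(z) - x_target) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```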
We believe that the latent space of StyleGAN, trained on CelebA-HQ, only faithfully reflects a subspace of the latent space of our other models, trained on the full CelebA dataset. In particular, points in our extreme sectors seem to lie outside of the generative range of StyleGAN, or to be severely underrepresented (Figure 13). The problem is possibly also related to the well-known fact that faces generated by StyleGAN (and other generative networks) can be easily distinguished from real ones [80][81][82].
Comparison with different spaces
Since exploiting the Support Set is not a viable solution, we need to define a direct mapping by regression on all data. Furthermore, we choose to work with the W StyleGAN space, since the Z space is passed through a series of fully connected layers (the Mapping network) which, we suppose, cannot by construction be inverted linearly. Here we try to map the W space of StyleGAN, trained over CelebA-HQ, to the latent space of the SVAE trained over CelebA. The input to the transformation map is the vector w, obtained by ancestral sampling. As usual, input vectors w may be generated ad libitum, with no risk of overfitting.
After training, the mean squared error between z and ẑ is around 0.45 with a standard deviation of 0.05. The mean squared error between SVAE(z) and SVAE(ẑ) is 0.014 with a standard deviation of 0.002. All results have been repeated over 5 different parameter configurations of the SVAE, relative to 5 different trainings (obviously, each experiment results in a different linear transformation).
Results are shown in Figure 14. They are not perfect, but definitely interesting.
We also tested a few variants weighting the distance between latent variables according to their "information relevance", but we did not observe significant improvements.
Let us now come to the mapping from the latent space of the VAE to that of StyleGAN. To train the transformation model (as usual, a single dense layer with no bias), we simply invert input and output of the previous network. After training, the mean squared error between w and ŵ is around 0.029 with a standard deviation of 0.004. The mean squared error between StyleGAN(w) and StyleGAN(ŵ) is 0.076 with a standard deviation of 0.014. Results are really good, as can be visually checked in Figure 15.
Conclusions
In this article we addressed the problem of comparing the latent spaces of different generative models, defining transformations between them. Specifically, we showed that we can pass from one latent space to another by means of a simple linear map preserving most of the information. Hence, the organization of the latent space seems to be largely independent of:
• the training process;
• the network architecture;
• the learning objective: GANs and VAEs share the same space.
The result is original, surprising and largely unexpected; apparently the latent space, if not artificially constrained with different objectives, seems to naturally organize itself in a way that is merely dependent on the data manifold. Of course, we expect that this "natural" structure can be altered in many different ways, e.g. through conditioning, which strongly impacts the latent structure, or via transformations like normalizing flows, explicitly aiming towards a strong regularization of the space. We also do not expect the two spaces Z and W of StyleGAN to be linearly related, since otherwise the long chain of 8 dense layers between them would have no purpose.
Our result is full of implications from the point of view of representation learning and disentanglement. The fact that the latent space has a sort of implicit and native structure raises promising expectations about the possibility of learning features in a completely unsupervised way. Moreover, the recent observation [8,67] that the variation of a single semantic feature traces a quasi-linear manifold in the latent space of generative models fits well with our empirical observations, opening interesting perspectives on the possibility of "porting" disentanglement between different spaces and, more generally, of better understanding the issue in a more general framework.
The fact that the transformation between spaces is linear obviously permits its definition in terms of a small set of independent points with cardinality equal to the dimension of the latent space; this is what we call a Support Set. Locating these points in the two latent spaces is enough to define the map. In principle, any set of independent points could serve as a Support Set, but for robustness reasons it seems preferable to choose points as far apart from each other as possible. We described a possible approach for defining such a set, based on "sectors" in the space. This set is of interest in its own right, as it is representative of the principal factors of variation in the dataset. Due to this fact, it also provides a natural benchmark to test the expressiveness of generative models.
This leads to an additional side contribution of our work: in contrast with the usual belief, StyleGAN trained on CelebA-HQ seems to have serious generative deficiencies: many images, in particular most of the images in our Support Set from CelebA, seem to lie outside the generative range of StyleGAN. In particular, as is also evident in the inversion results, the StyleGAN generative process privileges standardization, strongly penalizing defects, oddities and eccentricities: the StyleGAN space is not a space for minorities.
This could be a cause for concern about CelebA-HQ. Not only is it computationally demanding, but one could also wonder whether it has statistical relevance: an assortment of 30K images in a space of dimension 3 × 2^20 looks more like a collection of scattered points than a data manifold.
Our results also raise serious worries about the increasing use of generative techniques for data augmentation purposes. All generative techniques seem to have serious biases, privileging likelihood over diversity: using them for data augmentation may have no statistical significance. It is a bad practice that should be discouraged and deprecated.
As for future developments, most of the work just lies ahead. Here is a short, non-exhaustive list of possible topics:
• test and hopefully confirm our mapping results on different datasets;
• deepen the relationship with the field of disentanglement through suitable linear manipulations of the latent space;
• define and test a Support Set for StyleGAN and CelebA-HQ;
• investigate the possibility of improving the transformation with residual non-linearities, and in that case study them;
• better investigate and possibly find a remedy for the generative deficiencies of StyleGAN.
Data Availability
The training datasets can be found at CelebA-dataset and CelebAHQ-dataset.
(a) Relocation of Type 1, between latent spaces relative to different training instances of the same generative model, in this case a particular Variational Autoencoder [11]. The two reconstructions are almost identical. (b) Relocation of Type 2, between a vanilla VAE and a state-of-the-art Split-VAE [11]. The SVAE produces better quality images, even if not necessarily in the direction of the original: the information lost by the VAE during encoding cannot be recovered by the SVAE, which instead makes a reasonable guess. (c) Relocation of Type 3, between a vanilla GAN and a SVAE. Additional examples involving StyleGAN are given in Section 7. To map the original image (first row) into the latent space of the GAN we use an inversion network. Details of the reconstructions may slightly differ, but colors, pose, and the overall appearance are surprisingly similar. In some cases (e.g. the first picture) the reconstruction re-generated by the VAE (from the GAN encoding!) is closer to the original than that of the GAN itself.
Fig. 2 :
Fig. 2: Examples of relocations of different Types. In the first row we have the original, in the second row the image reconstructed by the first generative model, and in the third row the image obtained by the second model after linear relocation in its space.
Fig. 3 :
Fig. 3: Structure of the StyleGAN generative network (picture from [48]). Observe: (1) the two distinct latent spaces Z and W; (2) the Mapping network taking a randomly sampled point z ∈ Z as input and generating a style vector w; (3) the use of Adaptive Instance Normalization, or AdaIN (Blocks A), to apply style vectors after each convolution layer of the Synthesis network; (4) the exploitation of noise as an additional source of randomness, passed through learned scaling layers (Blocks B).
Fig. 4 :
Fig. 4: Results of our own network for StyleGAN inversion. Images in the first row have been generated by StyleGAN; they are re-coded into the W space and regenerated (second row). The two images are hardly distinguishable. However, as we shall see in Section 7, inversion can be more problematic for images outside the generative range of the model; in principle, a good generative model should be able to produce any sample, provided it is not too atypical.
Fig. 5 :
Fig. 5: Information gain for all variables, in decreasing order. Only a handful of variables are in charge of the macroscopic factors of variation.
Fig. 6 :
Fig. 6: Effect of the seven most informative latent variables in the visible domain. Each image is obtained by varying a specific variable in the range [−2.25; +2.25]. Considering these are the variables with the largest information gain, it may be argued that their impact is less pronounced than expected. Most of the variables are associated with a change in luminosity of all or part of the image, possibly associated with modifications in hair color, source of illumination, and tiny variations in the pose. In the case of variable 21, there seems to be a progressive female-male transition (and vice versa for variable 114).
Fig. 8 :
Fig. 8: Examples of data in different sectors. For each sector, images are different, but share macroscopic features: background color, pose, hair, illumination, etc.
Fig. 9 :
Fig. 9: Part of the images in the support set resulting from our selection process. The samples are supposedly representative of the principal factors of variation in the dataset. Additional examples are given in the appendix.
Fig. 10 :
Fig. 10: Relocation Errors. An original point o in the visible domain is mapped into internal representations z 1 and z 2 in the latent spaces Z 1 and Z 2 . The map M is trained to reconstruct z 2 from z 1 : L-MSE is the mean squared error between z 2 and M(z 1 ). R-MSE is the mean squared error, in the visible domain, between o and its reconstruction according to the first generative model. M-MSE is the mean squared error, in the visible domain, between o and D 2 (M(z 1 )).
Fig. 11 :
Fig. 11: StyleGAN inversion on images in the Support Set. The macro structure (background, pose, illumination, etc.) is preserved, but all other features are lost: images in the Support Set seem to lie outside of the generative range of StyleGAN. Note also the more "conventional" nature of the images obtained by the inversion.
Fig. 12 :
Fig. 12: Gradient ascent technique for StyleGAN on data in the Support Set. The original is in the first row, and the image generated through gradient ascent in the second. The technique confirms that these images cannot be generated by StyleGAN.
Fig. 13 :
Fig. 13: CelebA Sectors seem to be external to the latent space of StyleGAN
Fig. 14 :
Fig. 14: Mapping from the W space of StyleGAN to the latent space of the SVAE. In the first row we have sources, sampled by StyleGAN from w ∈ W. In the second row we have the SVAE reconstruction, starting from suitably cropped and rescaled images (the SVAE works at resolution 64): these images are the best possible approximation of the source images obtainable by the SVAE. In the third row we show the output produced by the SVAE decoder after mapping each w into its latent space: the results are very similar to those of the second row.
Fig. 15 :
Fig. 15: Mapping from the latent space of the SVAE to the W space of StyleGAN. In the first row we have images generated by StyleGAN: StyleGAN(w), for w ∈ W. In the second row we have their SVAE reconstructions, starting from suitably cropped and rescaled versions. Images in the third row are obtained by first encoding StyleGAN(w) in the latent space of the SVAE, obtaining a latent representation z. This z is then linearly transformed to a vector ŵ ∈ W; the final image is StyleGAN(ŵ).
: Encoder: the input is progressively downsampled via convolutions, preceded by Scale Blocks. At the final scale, a global average pooling layer extracts features that are further processed via dense layers to compute the mean and variance of the latent variables. Decoder: the decoder is essentially symmetric. A SVAE only differs in the final layer (circled in the picture): instead of directly producing x, it produces two images x1 and x2 and a compositional map σ, defining x = σ x1 + (1 − σ) x2 .
Fig. 18 :
Fig. 18: Examples of images in the support set, in addition to those in Figure 9.
Table 1 :
Dimension of the Latent Space and Resolution for the different models.
Transesterification of Glycerol to Glycerol Carbonate over Mg-Zr Composite Oxide Prepared by Hydrothermal Process
A series of Mg-Zr composite oxide catalysts prepared by the hydrothermal process were used for the transesterification of glycerol (GL) with dimethyl carbonate (DMC) to produce glycerol carbonate (GC). The effects of the preparation method (co-precipitation, hydrothermal process) and Mg/Zr ratio on the catalytic performance were systematically investigated, and the deactivation of the catalyst was also explored. The Mg-Zr composite oxide catalysts were characterized by XRD, TEM, TPD, N2 adsorption-desorption, and XPS. The characterization results showed that compared with the co-precipitation process, the catalyst prepared by the hydrothermal process has a larger specific surface area, smaller grain size, and higher dispersion. Mg1Zr2-HT catalyst calcined at 600 °C in a nitrogen atmosphere exhibited the best catalytic performance. Under the conditions of reaction time of 90 min, reaction temperature of 90 °C, catalyst dosage of 3 wt% of GL, and GL/DMC molar ratio of 1/5, the GL conversion was 99% with 96.1% GC selectivity, and the yield of GC was 74.5% when it was reused for the fourth time.
Introduction
With the increasing consumption of fossil fuels and the consequent environmental problems, especially the threat of global warming, China has put forward the strategic goal of "carbon peaking and carbon neutrality". The realization of this goal requires accelerating clean energy substitution and energy transformation. Biodiesel is a promising clean and renewable energy source, and has become a hot spot for the sustainable development of global energy and the environment [1]. Biodiesel is obtained by transesterification of vegetable oil and waste oil with methanol or ethanol, producing glycerol (GL) as a by-product. By 2024, the global biofuel market is expected to reach US $153.8 billion, but for every 1000 kg of biodiesel produced, there will be 100 kg of GL [2].
In order to solve the problem of crude glycerol utilization, scientists have explored and developed different synthetic procedures to convert GL into high value-added derivatives, such as steam reforming of glycerol [3][4][5][6][7], catalytic esterification of glycerol, and catalytic hydrogenolysis of glycerol, among others. Among the various GL derivatives, glycerol carbonate (4-hydroxymethyl-1,3-dioxolane-2-one, GC) has the advantages of low flammability, low toxicity, high boiling point, and biodegradability [8]. It is widely used as a solvent in the cosmetics industry, and can also be used in the manufacture of paint, fiber, plastic, coating, cement curing agent, biological lubricant, and so on [9].
At present, the routes for the synthesis of GC using GL as raw material mainly include phosgenation [10], oxidative carbonylation [11], urea alcoholysis [12], and transesterification [13]. Among them, transesterification (Scheme 1) has the advantages of mild reaction conditions and simple operation, and is considered to be one of the most direct and feasible routes for industry [14].
In recent years, alkali catalysts such as MgO [15,16] and CaO [17] have been widely used in GC synthesis by transesterification. However, CaO can dissolve into the reactant GL and form a calcium-glycerin bond [18]. Moreover, CaO may form CaCO 3 with DMC in the presence of water [19], which reduces the catalytic activity and limits the reuse of the catalyst. In addition, single metal oxides such as MgO and CaO react with water and CO 2 in the air during preparation and storage, and then deactivate [20]. In general, composite oxides have stronger acidity and basicity and larger specific surface areas than single metal oxides; the lattice structure can also be changed by doping metal cations with different electronegativity, thereby changing the acidity and basicity of the catalyst surface [21]. They therefore show good application prospects in heterogeneous alkali-catalyzed reactions [22]. Zhang [23] prepared a large-specific-surface-area CaO-ZrO 2 catalyst with a mesoporous structure for the continuous transesterification synthesis of GC in a fixed bed reactor. Under the optimized conditions, the yield of GC reached 90%. However, catalysts prepared by the co-precipitation process have some disadvantages, such as easy loss of active components and deactivation due to carbon species deposited on the surface [24]. The hydrothermal process has been widely used in the synthesis of oxide nanoparticles in recent years. Compared with other preparation processes, hydrothermally synthesized nanoparticles have high purity, good dispersibility, and controllable grain size [25]. Cui prepared MgO nanosheets with a two-dimensional flaky porous structure by a simple hydrothermal process, which have a larger specific surface area than commercial MgO nanoparticles [26]. The ZrO 2 nanocrystals prepared by Akune [27] via hydrothermal synthesis show high catalytic activity due to their high specific surface area and high crystallinity. Wang compared Mg/Sn/W composite oxide catalysts prepared by the co-precipitation and hydrothermal processes, and pointed out that the catalysts prepared by the hydrothermal process had smaller particles, higher thermal stability, and higher catalytic activity [28].
In this article, Mg-Zr composite oxide catalysts with different Mg/Zr molar ratios were prepared by a hydrothermal process for the transesterification of GL with DMC to synthesize GC. The effects of the preparation method and the Mg/Zr molar ratio were systematically investigated. The catalysts were characterized by XRD, N2 adsorption-desorption, TEM, TPD, and XPS, and the structure-activity relationship of the Mg-Zr oxide catalysts was discussed. In addition, the transesterification reaction conditions were optimized, the reusability of the catalysts was investigated, and the causes of catalyst deactivation were explored.
Catalyst Preparation
A series of Mg-Zr composite oxide catalysts with different Mg/Zr ratios were prepared by a hydrothermal process. A typical preparation route is as follows: Mg(NO3)2·6H2O (0.64 g, 2.5 mmol) and ZrOCl2·8H2O (1.61 g, 5 mmol) were dissolved in deionized water at room temperature, and the solution was added dropwise, together with a suitable amount of 2 mol/L NaOH solution, into a 500 mL flask by co-current precipitation under vigorous stirring. NaOH addition continued until the pH of the solution reached 11, after which stirring was continued for 30 min. Subsequently, the suspension was hydrothermally treated in a Teflon-lined stainless-steel autoclave at 150 °C for 6 h, and the product was then calcined at 600 °C in a flowing nitrogen atmosphere for 3 h. Depending on the Mg/Zr molar ratio used in the preparation step, the catalysts were labeled Mg1Zr3, Mg1Zr2, Mg1Zr1, Mg2Zr1 and Mg3Zr1. The single metal oxide catalysts ZrO2 and MgO were synthesized in the same way as the composite oxide catalysts. The co-precipitation process differs from the hydrothermal process in that the stirred mixture is allowed to stand at room temperature for 6 h without high-temperature treatment; all other steps are unchanged. The catalyst samples prepared by the hydrothermal and co-precipitation methods were named MgxZry-HT and MgxZry-CP respectively, where x/y is the n(Mg)/n(Zr) molar ratio.
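As a quick sanity check on the recipe above, the quoted reagent masses follow directly from the target molar quantities. The sketch below (Python; the helper function and its names are illustrative, not part of the published procedure) reproduces the 0.64 g / 1.61 g figures from standard molar masses.

```python
# Cross-check of the Mg1Zr2 recipe: converts target mmol of each precursor
# into grams using standard molar masses (g/mol).
M_MG_NITRATE_HEXAHYDRATE = 256.41         # Mg(NO3)2·6H2O
M_ZIRCONYL_CHLORIDE_OCTAHYDRATE = 322.25  # ZrOCl2·8H2O

def precursor_masses(mmol_mg: float, mmol_zr: float) -> tuple[float, float]:
    """Return (g of Mg(NO3)2·6H2O, g of ZrOCl2·8H2O) for the given mmol."""
    return (mmol_mg / 1000 * M_MG_NITRATE_HEXAHYDRATE,
            mmol_zr / 1000 * M_ZIRCONYL_CHLORIDE_OCTAHYDRATE)

# Mg1Zr2 uses 2.5 mmol Mg and 5 mmol Zr -> ~0.64 g and ~1.61 g, as quoted.
print(precursor_masses(2.5, 5.0))  # (0.6410..., 1.6112...)
```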
Characterization
The crystal phases of all samples were identified with a Bruker D8 FOCUS powder X-ray diffractometer (Bruker AXS, Karlsruhe, Germany) using Cu Kα radiation (40 kV) and a secondary-beam graphite monochromator (SS/DS = 1°, RS 0.15 mm, SC counter). A Talos F200s field-emission transmission electron microscope (FEI, Hillsboro, OR, USA) was used to observe the morphology and grain size of the catalysts. The strength and distribution of the basic/acidic sites of the catalysts were determined by temperature-programmed desorption of pre-adsorbed CO2 or NH3 on an AutoChem 2920 instrument (Micromeritics, Norcross, GA, USA). The textural properties, including specific surface area, pore volume, and pore size, were derived from N2 adsorption-desorption measurements on a 3H-2000PS2 instrument (Beishide, Beijing, China) at −196 °C; the catalysts were pretreated by outgassing in vacuum at 200 °C for 3 h before measurement. X-ray photoelectron spectroscopy (XPS) data were collected on a Thermo Scientific K-Alpha electron spectrometer (Thermo Fisher, Waltham, MA, USA) equipped with Al Kα radiation (hν = 1486.6 eV).
Catalytic Activity Test
Transesterification of GL to GC was carried out in a round-bottom flask fitted with a reflux condenser at atmospheric pressure. GL (3.3 g) and DMC (16.3 g) were added to a 100 mL round-bottom flask and heated to 90 °C with stirring in an oil bath, after which catalyst amounting to 3 wt% of the GL was added to the reaction mixture. After the desired time, the products were separated by centrifugation and analyzed by gas chromatography on an Agilent 7890B gas chromatograph equipped with a DB-wax capillary column (30 m × 0.32 mm × 0.25 µm) and a hydrogen flame ionization detector. The injector and detector temperatures were 250 °C and 300 °C, respectively. The GC yield was calculated by the internal standard method with n-butanol as the internal standard. The GL conversion, GC selectivity and GC yield were calculated by the following equations:

GL conversion (%) = (moles of GL fed − moles of GL remaining) / (moles of GL fed) × 100 (1)

GC selectivity (%) = (moles of GC produced) / (moles of GL fed − moles of GL remaining) × 100 (2)

GC yield (%) = GL conversion × GC selectivity / 100 (3)
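Equations (1)-(3) amount to simple mole bookkeeping. A minimal sketch, assuming the mole amounts have already been obtained from the internal-standard GC analysis (the function name and the example numbers are hypothetical):

```python
def transesterification_metrics(n_gl_feed: float, n_gl_final: float,
                                n_gc_produced: float) -> dict:
    """GL conversion, GC selectivity and GC yield (all %) from moles of
    glycerol fed/remaining and moles of glycerol carbonate produced."""
    n_gl_reacted = n_gl_feed - n_gl_final
    conversion = n_gl_reacted / n_gl_feed * 100        # Eq. (1)
    selectivity = n_gc_produced / n_gl_reacted * 100   # Eq. (2)
    yield_gc = conversion * selectivity / 100          # Eq. (3)
    return {"GL conversion %": conversion,
            "GC selectivity %": selectivity,
            "GC yield %": yield_gc}

# 3.3 g GL is ~35.8 mmol (M = 92.09 g/mol); the remaining-GL and GC mole
# amounts below are hypothetical, for illustration only.
print(transesterification_metrics(35.8, 1.4, 33.0))
```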
Effect of Preparation Method
The XRD patterns of the Mg1Zr2-HT catalyst prepared by the hydrothermal process and the Mg1Zr2-CP catalyst prepared by co-precipitation are shown in Figure 1. The diffraction peaks at 30.2°, 34.9°, 50.7° and 60.2° belong to tetragonal ZrO2 (t-ZrO2, JCPDS No. 50-1089), and no monoclinic ZrO2 is present. t-ZrO2 has a unique bridging hydroxyl group and strong surface basicity, which is conducive to the transesterification reaction [29]. Compared with ZrO2, the diffraction peak intensity of MgO is relatively weak, which is due not to a low Mg content but to the low atomic scattering factor (atomic number) of Mg [30]. In addition, the grain sizes of Mg1Zr2-CP and Mg1Zr2-HT calculated by the Scherrer formula are 13.4 nm and 13.1 nm respectively, with little difference between them.
The textural properties and surface basicity of Mg1Zr2-HT and Mg1Zr2-CP are summarized in Table 1. Mg1Zr2-HT has a larger specific surface area and pore volume than Mg1Zr2-CP. This is because intense collisions between colloidal particles promote secondary pore formation in the composite oxide under hydrothermal conditions, whereas condensation between colloidal particles is a very slow process at room temperature. The hydrothermal process is therefore conducive to forming a more developed pore network, which improves the specific surface area and pore volume of Mg1Zr2-HT [31]. In addition, a dissolution-deposition/crystallization process also occurs under hydrothermal conditions [32]. Because some precursors dissolve under hydrothermal conditions, the local solubility at the junction (neck) of two colloidal particles is lower than that at the nearby surface, so deposition occurs preferentially at the neck, reinforcing the colloidal network structure. During the subsequent calcination, the specific surface area and pore volume of the xerogel prepared by co-precipitation decrease rapidly owing to the collapse of the gel skeleton and the sintering and growth of the catalyst particles [33]. The catalytic performance of the two catalysts for GL transesterification was investigated, and the results are shown in Table 2.
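The grain sizes quoted above come from the Scherrer formula, D = Kλ/(β cos θ), with β the peak FWHM in radians. A small sketch, assuming Cu Kα radiation and a shape factor K = 0.9 (the FWHM value in the example is hypothetical, chosen to land near the reported ~13 nm):

```python
import math

def scherrer_size_nm(two_theta_deg: float, fwhm_deg: float,
                     wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Crystallite size D = K*lambda / (beta * cos(theta)); beta is the
    peak FWHM in radians, theta half the diffraction angle."""
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# t-ZrO2 (011) reflection at 2θ ≈ 30.2°; the 0.62° FWHM is hypothetical.
print(f"{scherrer_size_nm(30.2, 0.62):.1f} nm")  # ≈ 13 nm
```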
As can be seen from Table 2, both Mg1Zr2-HT and Mg1Zr2-CP show good catalytic performance for GL transesterification, with GL conversion greater than 90% and GC selectivity of about 95%. Because the Mg1Zr2-HT catalyst has a larger specific surface area, reactant molecules contact the active sites more easily, giving it the higher catalytic activity.
Effect of Mg-Zr Molar Ratio
The XRD patterns of catalysts with different Mg/Zr molar ratios are shown in Figure 2. With increasing Mg/Zr ratio, the t-ZrO2 diffraction peak at 2θ of 30° gradually shifts to higher angle, which may be due to doping of Mg2+ into the ZrO2 lattice: some Zr4+ ions are replaced by Mg2+, distorting the crystal structure. Because the ionic radius of Mg2+ is smaller than that of Zr4+ (0.780 Å versus 0.840 Å), the lattice shrinks and the cell parameters decrease, so the corresponding 2θ shifts to higher angle [34]. At low Mg/Zr molar ratios, no MgO diffraction peak is observed, indicating the formation of a solid solution. With increasing Mg content, the characteristic diffraction peaks of periclase MgO (JCPDS No. 45-0946) were detected at 2θ of 43.2° (200) and 62.5° (220), and their intensity and sharpness gradually increased, indicating that the MgO particle size increased significantly. The lattice parameters and crystal plane spacings of the Mg-Zr catalysts were analyzed with Jade, and the results are listed in Table 3. The lattice constant "a" and the spacing of the (011) crystal plane decreased with increasing Mg content, indicating that a stable and uniform Mg-Zr composite oxide structure formed after the introduction of Mg2+ into t-ZrO2.
Figure S1a displays the Mg 1s spectra of the Mg-Zr composite oxide catalysts, with the XPS spectrum of single MgO presented for comparison. All the catalysts exhibited a broad, intense band centered at 1360 eV arising from the Mg 1s emission of Mg2+ in the oxide state. More importantly, the Mg 1s binding energies of all the mixed oxides were lower than that of pure MgO, because the Mg-Zr oxides possessed a solid solution structure. The typical Zr 3d spectra are presented in Figure S1b. For pristine ZrO2, two intense peaks appeared at 184.8 and 182.4 eV, associated with the Zr 3d3/2 and Zr 3d5/2 states of Zr(IV) oxide species, respectively. The intensity of these two peaks gradually decreased with increasing Mg content. Meanwhile, it is worth noting that adding Mg to the ZrO2 support gave rise to a continuous increase in the Zr 3d binding energy. These observations also support that Mg2+ had entered the t-ZrO2 lattice, creating a solid solution. The peak-fitted O 1s spectra are shown in Figure S1c. The peaks at 531 eV, 533 eV, and 534 eV can be attributed to lattice oxygen species (OL), oxygen vacancies (OV), and chemisorbed oxygen species (OC), respectively. In general, the Zr ion is tetravalent whereas the Mg ion is only divalent, so some oxygen vacancies are generated upon substitution to maintain charge neutrality in the ionic crystal, and these vacancies are favorable for heterogeneous catalysis [35]. It is worth noting that the Mg1Zr2 catalyst has the highest concentration of oxygen vacancies.
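The peak-shift argument can be made quantitative with Bragg's law: a shift of the (011) reflection to higher 2θ directly implies a smaller d-spacing. A brief sketch (the +0.2° shift used in the example is illustrative, not a measured value):

```python
import math

CU_KALPHA_NM = 0.15406  # Cu Kα wavelength

def d_spacing_nm(two_theta_deg: float,
                 wavelength_nm: float = CU_KALPHA_NM) -> float:
    """Bragg's law (n = 1): d = lambda / (2 sin(theta))."""
    return wavelength_nm / (2 * math.sin(math.radians(two_theta_deg / 2)))

# The (011) peak of t-ZrO2 sits near 2θ = 30.2° and shifts to higher angle
# as Mg2+ is doped in; even a +0.2° shift visibly contracts d:
for two_theta in (30.2, 30.4):
    print(f"2θ = {two_theta:.1f}°  ->  d(011) = {d_spacing_nm(two_theta)*10:.4f} Å")
```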
In order to observe the microstructure and morphology of the catalysts, the Mg-Zr composite oxides with different Mg/Zr ratios were characterized by TEM, and the results are shown in Figure 3. The TEM image of ZrO2 shows a relatively uniform particle size, with an average of 23 nm (based on statistics over 91 particles in the TEM image), but poor dispersion, with large numbers of particles agglomerated together. After adding a small amount of Mg, the particle-size uniformity of Mg1Zr3 worsens, indicating that the addition of Mg affects the crystallization and growth of ZrO2. In addition, compared with ZrO2, some material appears between the Mg1Zr3 particles, which, given the preparation process and the XRD results, may be extremely small MgO particles. With increasing Mg content, the uniformity of the ZrO2 particle size worsens further, and particles of about 50 nm appear in Mg1Zr2. When the Mg content exceeds the Zr content, the ZrO2 particles gradually become smaller; in Mg3Zr1 in particular, flake-like particle aggregates appear and the large ZrO2 particles disappear completely. Sádaba et al. [30] prepared a Mg-Zr catalyst by the co-precipitation method and pointed out that, during preparation, Zr4+ precipitates preferentially to form Zr(OH)4 or ZrO2(H2O)x; once most of the Zr4+ has precipitated, Mg2+ forms a Mg(OH)2 precipitate at pH 8~10. The Mg-Zr catalyst therefore has an embedded structure with a ZrO2 core and a MgO shell. From the conclusion of Sádaba et al. [30] and the TEM results, it can be inferred that MgO formed in the outer layer of ZrO2 in the Mg-Zr catalysts prepared here, which can be regarded as MgO wrapping ZrO2. Guan et al. [36] likewise held that Mg2+ can enter the ZrO2 lattice to form a Mg-Zr solid solution; when the Mg content is large, MgO that cannot enter the ZrO2 lattice appears as an independent crystal phase attached to the surface of the magnesium-zirconium solid solution.
The EDX spectrum and elemental composition (Figure S2) show the presence of Mg and Zr. Even though several random areas were selected for the EDX test, the detected Mg/Zr molar ratio was almost the same as the theoretical value. The presence of a large amount of oxygen confirms that all the elements exist in oxide form, and the presence of Mg in the Mg-Zr composite oxides enhances the basicity and stability of the catalyst. The N2 adsorption-desorption isotherms of the Mg-Zr composite oxide catalysts are shown in Figure 4. Obvious type IV adsorption equilibrium isotherms appear in the range P/P0 = 0.5~1.0, indicating that the catalysts have mesoporous structures. The ZrO2, Mg1Zr3, Mg1Zr2, and Mg1Zr1 catalysts all show H2-type hysteresis loops, indicating ink-bottle internal pores. The N2 adsorption-desorption isotherms of the Mg2Zr1, Mg3Zr1, and MgO catalysts have no obvious saturated adsorption plateau and are accompanied by H3 hysteresis loops, indicating very irregular pore structures; combined with the TEM results, these can be attributed to slit pores formed by the stacking of flake-like particles. The specific surface area, pore diameter, and pore volume of the catalysts are listed in Table 3.
The specific surface area of Mg1Zr2 is 68.8 m²/g, and the specific surface area increases further with the addition of Mg, consistent with the experimental results of Guan [36]. As the Mg content increased, the specific surface area of the Mg-Zr oxide catalysts showed an upward trend, possibly because of the multi-layer dispersion of MgO attached to the surface of the magnesium-zirconium solid solution. MgO has the largest specific surface area but not the best catalytic performance, indicating that although the structure of the catalyst has a certain influence on the catalytic performance, it is not the completely decisive factor.
To better understand the intrinsic acid-base functionality and correlate it with catalytic behavior, CO2-TPD and NH3-TPD measurements were performed to quantify the distribution of surface acidity and basicity and the numbers of acidic and basic sites of the MgO-ZrO2 catalysts. CO2-TPD characterization of the Mg-Zr oxide catalysts was carried out to probe the influence of the Mg/Zr molar ratio on the basicity; the results are shown in Figure 5 and Table 3. As Figure 5 shows, the Mg/Zr molar ratio has a significant effect on the basicity of the Mg-Zr oxide catalysts. When the Mg content is zero (ZrO2), the catalyst surface carries mainly weak basic sites, with CO2 desorption temperatures below 200 °C. With the addition of Mg, the number of medium-strong basic sites (CO2 desorption temperature in the range 200-600 °C) on the catalyst surface gradually increases, while the number of weak basic sites decreases. Mg1Zr2 has the largest total number of basic sites, because it simultaneously has many weak sites and many medium-strong sites. With a further increase in Mg content, the number of weak basic sites decreases rapidly; the surface of the Mg3Zr1 catalyst consists mainly of medium-strong basic sites, the weak sites almost disappear, and its CO2-TPD curve resembles that of MgO. These results show that the weak basic sites on the surface of the Mg-Zr oxide catalysts are provided mainly by ZrO2, while the medium-strong sites are related mainly to MgO. Zhang et al. [34] attributed the weak basic sites of the MgO-ZrO2 catalyst to its surface hydroxyl groups, and the medium-strong basic sites to metal-oxygen pairs (Mg-O and Zr-O) and low-coordination oxygen atoms (O2−). In addition, according to the data in Table 3, the total numbers of surface basic sites of ZrO2 and MgO are similar, but the numbers for the Mg-Zr oxide catalysts increase significantly. In particular, the total number of basic sites of the Mg1Zr2 catalyst reaches 145.3 µmol/g, 55.7% and 36.3% higher than those of the two single metal oxides, respectively. This is attributed to the thorough mixing of Mg2+ and Zr4+ during the hydrothermal preparation, with part of the Zr4+ in the ZrO2 lattice replaced by Mg2+ after calcination. Because Zr4+ carries a higher positive charge than Mg2+, the electron density on O2− in the Mg-Zr oxide catalysts increases, thereby increasing the number of medium-strong basic sites [28]. As shown in Figure S3, the NH3-TPD curve of bare ZrO2 exhibits two NH3 desorption peaks, at 130 °C and 530 °C, corresponding to weak and strong acidic sites respectively. With increasing MgO content, the medium-strength acid sites of the catalyst increased, while the weak and strong acid sites decreased. The results show that there is no strong correlation between catalyst acidity and glycerol conversion.
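Quantifying basic sites from a CO2-TPD trace reduces to integrating the calibrated desorption signal over the temperature windows used above (< 200 °C for weak sites, 200-600 °C for medium-strong sites). A sketch under those assumptions; the trace, calibration factor and sample mass are all hypothetical:

```python
import numpy as np

def basic_sites_umol_per_g(temp_c: np.ndarray, signal: np.ndarray,
                           cal_umol_per_unit: float, mass_g: float,
                           t_lo: float, t_hi: float) -> float:
    """Integrate a calibrated CO2-TPD signal between t_lo and t_hi (°C)
    and normalize by catalyst mass, giving basic sites in µmol/g."""
    mask = (temp_c >= t_lo) & (temp_c <= t_hi)
    area = np.trapz(signal[mask], temp_c[mask])
    return area * cal_umol_per_unit / mass_g

# Hypothetical trace with a weak-site peak (~150 °C) and a medium/strong
# peak (~420 °C), mirroring the classification used in the text.
temp = np.linspace(50, 700, 500)
sig = np.exp(-((temp - 150) / 40) ** 2) + 0.8 * np.exp(-((temp - 420) / 90) ** 2)
weak = basic_sites_umol_per_g(temp, sig, 1.0, 0.1, 50, 200)
medium_strong = basic_sites_umol_per_g(temp, sig, 1.0, 0.1, 200, 600)
print(f"weak: {weak:.1f} µmol/g, medium-strong: {medium_strong:.1f} µmol/g")
```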
Although a role for the acid sites in activating DMC cannot be completely ruled out, their effect is less clear and less predictable than the evident effect of the basic sites [37].
The effect of the Mg/Zr ratio on the transesterification of GL with DMC to GC over the Mg-Zr composite oxides was studied, and the results are shown in Figure 6. ZrO2 and MgO alone are both active for the transesterification of GL, with GL conversions of 67.2% and 73.8%, respectively. The activity of the Mg-Zr oxide catalysts was higher than that of the two single metal oxides, indicating that the interaction between ZrO2 and MgO improves the performance of the catalyst. With Mg1Zr3, the GL conversion was 84.0%. With increasing Mg content, the catalyst activity first increased and then decreased. Mg1Zr2 showed the highest activity, with a GL conversion of 96.0% and a GC selectivity of 95.3%. The Mg/Zr ratio had little effect on the GC selectivity; the by-product was glycidol, and no other products were detected. According to the characterization results, Mg1Zr2 has the highest total number of basic sites, and the order of GL conversion essentially follows the order of the number of basic sites on the catalyst surface. This indicates that the Mg/Zr ratio influences catalyst performance through the change in the number of basic sites. In this transesterification reaction, the main function of the solid catalyst is to support the abstraction of H+ from glycerol by the basic sites so as to form the glyceroxide anion. The higher the basicity of the catalyst, the more negative the charge on the glyceroxide anion (C3H7O3−) and, consequently, the lower the free energy of the reaction [38]. In other words, the deprotonation of glycerol (on basic sites) is likely more important than the activation of dimethyl carbonate (on acidic sites) for the transesterification of glycerol with dimethyl carbonate [39].
Effect of Reaction Conditions on Transesterification of GL over Mg1Zr2-HT
Using Mg1Zr2-HT as a catalyst, the effects of reaction time, reaction temperature, catalyst amount, and GL/DMC molar ratio on the transesterification of GL with DMC to GC were investigated.
Effect of Reaction Time
As shown in Figure 7a, the effect of reaction time on the transesterification of GL with DMC was investigated. GL conversion increased gradually with reaction time. At a reaction time of 90 min, the GL conversion was 99.0% and the GC selectivity was 96.1%; with further extension of the reaction time, the GL conversion remained unchanged while the GC selectivity decreased, owing to the decomposition of GC into glycidol.
Effect of Reaction Temperature
As Figure 7b shows, raising the temperature up to 90 °C promotes the reaction, because the equilibrium constant of this reaction increases with temperature. At 90 °C, the GL conversion was 99.0% with a GC selectivity of 96.1%. As the temperature rises further, the decomposition of GC into glycidol occurs more readily [40], so the GC selectivity decreases.
Effect of Catalyst Amount
The transesterification of glycerol was strongly influenced by the catalyst amount (wt% based on GL), as presented in Figure 7c. When the catalyst amount was increased from 1 wt% to 3 wt%, the GL conversion and GC yield gradually increased, which was attributed to the larger number of basic sites available for the transesterification. However, when the catalyst amount was increased further, from 3 wt% to 7 wt%, the GC yield decreased slowly, possibly because the catalyst agglomerates at higher loadings, preventing the reactants from reaching the active centers. The higher the catalyst amount, the greater the mass transfer resistance, which may hinder the transesterification of GL with DMC [41].
Effect of the Molar Ratio of GL/DMC
The GL/DMC molar ratio has a great influence on the GL conversion and GC yield during the transesterification. Since the transesterification reaction is reversible, excess DMC is needed to shift the equilibrium toward GC. From Figure 7d, it is clear that as the DMC/GL molar ratio increased, the GL conversion showed an upward trend; at a GL/DMC molar ratio of 1/5, the conversion reached a maximum of 99.0% with a GC selectivity of 96.1%. As the DMC/GL molar ratio increased further, the GL conversion and GC yield decreased. This may be because the excess DMC dilutes the catalyst and limits the contact between GL and the catalyst, thus reducing the reaction rate [40].
Catalyst Stability
The reusability of a catalyst is an important index of its performance. In this study, the reusability of the Mg1Zr2-HT and Mg1Zr2-CP catalysts for the transesterification of GL with DMC was compared, as shown in Figure 8. After each reaction, the catalyst was centrifuged, washed three times with methanol, dried at 100 °C, and then calcined at 600 °C in air for 3 h. As can be seen from the figure, the GC selectivity was little affected by repeated use and remained almost constant. However, the GL conversion gradually decreased, with significant differences between the Mg1Zr2-HT and Mg1Zr2-CP catalysts. With fresh catalyst, the GL conversions over Mg1Zr2-HT and Mg1Zr2-CP were 99.0% and 95.2%, respectively; by the fourth reuse, they were 80.1% and 58.2%, respectively. The stability of Mg1Zr2-HT is thus much better than that of Mg1Zr2-CP.
In order to explore the reasons for the difference between the two catalysts, the Mg1Zr2-HT and Mg1Zr2-CP catalysts after four cycles of reuse were characterized by XRD, N2 adsorption-desorption, CO2-TPD, TEM, and XPS.
The XRD patterns of the Mg1Zr2-HT-used and Mg1Zr2-CP-used catalysts after the fourth cycle are presented in Figure 9. Obvious characteristic diffraction peaks appear at 2θ of 30.2°, 34.8°, 50.7°, 60.2°, and 62.9°, corresponding to the (011), (110), (020), (121) and (202) crystal planes of tetragonal ZrO2, respectively, and characteristic peaks at 2θ of 43.2° and 62.5° correspond to the (200) and (220) crystal planes of MgO. Compared with the fresh catalysts, the particle sizes of both used catalysts increased, but the MgO grain size in Mg1Zr2-CP-used roughly doubled, whereas that in Mg1Zr2-HT-used increased by only 14%. Under hydrothermal conditions, the ions in solution spontaneously aggregate into the most stable chemical structure, which does not decompose as the temperature changes, so the grains have good stability.
As Figure 10 shows, after the transesterification the particle sizes of both catalysts increased significantly and became uneven, indicating that the catalyst particles had aggregated and sintered, gradually increasing the grain size and decreasing the dispersion. The Mg1Zr2-CP-used catalyst showed serious agglomeration, while the Mg1Zr2-HT-used catalyst showed slight sintering but no obvious agglomeration, indicating that the catalyst prepared by the hydrothermal process has strong sintering resistance. This is because, under hydrothermal conditions, the compounds in solution may renucleate and restructure, so the hydrothermally treated particles have better dispersion and grain stability than particles obtained by neutralization precipitation alone [42]. Figure S4 compares the deconvoluted Mg 1s, Zr 3d, and O 1s XPS spectra of fresh and used Mg1Zr2-HT. The relative abundances derived from Figure S4 show that the oxygen-vacancy content decreased from 26.5% to 23.1% while the chemisorbed-oxygen content increased from 1.1% to 3.9%, indicating irreversible deactivation.
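The relative abundances quoted for the O 1s components are simply fitted peak areas normalized to the total O 1s area. A minimal sketch (the fitted areas in the example are hypothetical, chosen to reproduce the 26.5% vacancy fraction of the fresh catalyst):

```python
def o1s_fractions(area_lattice: float, area_vacancy: float,
                  area_chemisorbed: float) -> dict:
    """Relative abundance of the O 1s components (O_L, O_V, O_C) from
    fitted peak areas, as percentages of the total O 1s area."""
    total = area_lattice + area_vacancy + area_chemisorbed
    return {"O_L %": 100 * area_lattice / total,
            "O_V %": 100 * area_vacancy / total,
            "O_C %": 100 * area_chemisorbed / total}

# Hypothetical fitted areas for fresh Mg1Zr2-HT -> 72.4 / 26.5 / 1.1 %.
print(o1s_fractions(72.4, 26.5, 1.1))
```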
As can be seen from Table 4, the Mg1Zr2-HT-used catalyst has a larger specific surface area than the Mg1Zr2-CP-used catalyst, so more active sites are retained. This may be because the colloidal network formed during the hydrothermal process is stabilized by dissolution-deposition, which alleviates structural collapse and particle sintering, so a large specific surface area is maintained during the reaction [43]. Since the catalytic reaction occurs on the surface of the active components, the agglomeration and growth of grains reduce the active surface area and the number of active sites, lowering the catalytic activity [44]. Compared with the fresh catalysts, the numbers of weak basic sites of Mg1Zr2-HT and Mg1Zr2-CP decreased by 12% and 13%, respectively, whereas the numbers of medium-strong basic sites decreased by 25% and 50%, respectively. Mg1Zr2-CP-used thus suffered a greater loss of basic sites than Mg1Zr2-HT-used. This is probably why Mg1Zr2-HT showed better catalytic performance than Mg1Zr2-CP after four cycles: it possesses a more stable crystal structure that avoids the irreversible loss of basic sites.
Conclusions
In this work, Mg-Zr composite oxide catalysts with different Mg/Zr molar ratios were prepared by a hydrothermal process, and their activity and stability for GC synthesis were studied. The catalysts prepared by the hydrothermal process had a larger specific surface area, smaller grain size, and higher dispersion than those prepared by co-precipitation. The Mg1Zr2-HT catalyst calcined at 600 °C in a nitrogen atmosphere showed the best catalytic performance, with a GL conversion of 99% and a GC selectivity of 96.1% under mild reaction conditions, attributed to its balanced strong and weak basic sites and highly dispersed MgO. Moreover, the GL conversion was shown to increase in parallel with the total number of basic sites. Compared with the Mg1Zr2-CP catalyst, the Mg1Zr2-HT catalyst has good thermal stability and reusability: in the fourth reuse, its GL conversion was still 80.1% with a GC selectivity of 93.0%, whereas the corresponding values for the regenerated Mg1Zr2-CP catalyst were 58.2% and 94.8%. The difference probably arises because, during the reaction cycles, the grains of Mg1Zr2-HT are stable and grow only slightly, whereas the grains of the active species in Mg1Zr2-CP grow substantially, greatly reducing the effective active surface area of the catalyst and causing a significant decrease in catalytic performance.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
"Chemistry"
] |
Single-cell transcriptome analysis reveals distinct cell populations in dorsal root ganglia and their potential roles in diabetic peripheral neuropathy
Diabetic peripheral neuropathy (DPN) is a common complication of diabetes and can considerably affect quality of life. The dorsal root ganglion (DRG) plays an important role in the development of DPN. However, the relationship between the DRG and the pathogenesis of DPN still lacks thorough exploration, and a more in-depth understanding of the cell-type composition of the DRG and of the roles of different cell types in mediating DPN is needed. Here we conducted single-cell RNA-seq (scRNA-seq) on DRG tissues isolated from healthy control and DPN rats. Our results demonstrated that the DRG comprises eight cell-type populations (e.g., neurons, satellite glial cells (SGCs), Schwann cells (SCs), endothelial cells, fibroblasts). In the cell heterogeneity analyses, six neuron sub-types, three SGC sub-types and three SC sub-types were identified, and biological functions related to the cell sub-types were further revealed. Cell communication analysis showed dynamic interactions between neurons, SGCs and SCs. We also found that transcripts aberrantly expressed in sub-types of neurons, SGCs and SCs in DPN were associated with diabetic neuropathic pain, cell apoptosis, oxidative stress, etc. In conclusion, this study provides a systematic perspective on the cellular composition and interactions of DRG tissues, and suggests that neurons, SGCs and SCs play vital roles in the progression of DPN. Our data may provide a valuable resource for future studies of the pathophysiological effects of particular cell types in DPN.
Introduction
Diabetic peripheral neuropathy (DPN) is the most common form of neuropathy and carries considerable morbidity, occurring in approximately 50% of diabetic patients [1]. As the disease progresses, individuals may suffer from foot ulceration, neuropathic pain or even lower-limb amputation, and their quality of life is impaired significantly [2]. The pathogenesis of DPN is complex. Hyperglycemia, dyslipidemia and altered insulin signaling can cause various pathological alterations in the peripheral nervous system, such as endoplasmic reticulum stress, DNA damage and mitochondrial dysfunction, and eventually result in DPN [3]. However, the precise mechanisms of DPN development remain unclear.
The dorsal root ganglion (DRG) is situated on the posterior root between the spinal cord and the spinal nerve, and is a critical structure for processing and transmitting sensory neural signals from the peripheral nerves to the central nervous system [4]. The DRG and peripheral nerve can undergo functional and structural damage caused by a persistent diabetic state. In particular, lacking the protection of the blood-nerve barrier, the DRG is a more vulnerable site than the peripheral nerve [5]. Animal models of DPN have revealed the pathophysiologic changes in the DRG and their contributions to neuropathy [6]. Thus, DRG tissue may be the primary choice for research into DPN pathogenesis.
The pseudo-unipolar cells within the DRG are somatosensory neurons, which enable the body to detect and respond to various noxious and innocuous stimuli. In addition to sensory neurons, the DRG also contains a variety of other cell types, such as satellite glial cells (SGCs), Schwann cells (SCs), fibroblasts, and immune cells [7]. A previous study reported that hypoxia-inducible factor-1 alpha (HIF-1α) signaling, which has a protective effect by suppressing nerve damage and promoting peripheral nerve survival, is impaired in diabetic peripheral sensory neurons [8]. The expression of lipocalin-2 (LCN2) is upregulated in SGCs from diabetic DRGs, which subsequently promotes inflammatory responses in the peripheral nervous system and ultimately leads to DPN [5]. Besides, SC apoptosis occurs under high glucose; this pathological process is related to oxidative stress, autophagy, inflammatory reactions and so on, and indicates neuropathy [9]. Therefore, different cell types in DRG tissue may play distinct, critical roles in the pathogenesis of DPN.
Single-cell RNA-seq (scRNA-seq) technology has been adopted extensively and rapidly in the biological sciences in recent years. scRNA-seq can identify cell types based on their global transcriptome patterns, identify disease-associated gene expression changes in each cell type, and dissect the mechanisms underlying a given disease [10]. With these advantages, scRNA-seq has the potential to uncover the transcriptome pattern of each cell in DRGs and their roles in DPN development. A previous study investigated DRG neuron changes and the development of mechanical allodynia using scRNA-seq [11]. However, the expression patterns of other cell types, the interactions between different cell types, and their contributions to DPN still lack systematic evaluation in the literature.
Here, the major goals of this work were to characterize the cellular composition and interactions of DRG tissues and to understand the cellular changes in DPN. We therefore performed scRNA-seq on DRG tissues isolated from healthy control and DPN rats to profile expression in each cell type. As a result, we classified the cell populations in DRG tissue into eight cell types, including neurons, SGCs, SCs, endothelial cells, mural cells, fibroblasts, macrophages, and neutrophils. Moreover, neurons, SGCs and SCs were the three major cell types in the DRG and were each classified into several sub-types. In particular, the expression patterns, cellular interactions, and pathophysiological contributions of these important cell types to DPN were further analyzed and compared.
Animals
Healthy male SD rats (8 weeks old, 180-220 g) were used in this study; they were purchased from the Experimental Animal Center, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China. Rats were housed in separate cages with free access to water and food. The environment was kept at 22-24 °C and 40% humidity, with a 12-hour light/dark cycle. Diabetes was induced after one week of acclimatization feeding. This study was approved by the Animal Ethics Committee of Huazhong University of Science and Technology and was performed in accordance with the National Institutes of Health guidelines and regulations.
Induction of diabetes
All rats were split randomly into diabetic and control groups. Diabetes was induced as described previously [12]. Briefly, the diabetic group received streptozotocin (STZ) via a single intraperitoneal (IP) injection at a dose of 65 mg/kg body weight after a 12-hour fast. STZ was freshly dissolved in citrate buffer (0.1 M, pH 4.4, at 4 °C) before injection. The control group was injected with an equal volume of citrate buffer alone. Rats with blood glucose levels greater than 300 mg/dL (16.7 mM) were considered diabetic and were selected for our study. Rats were left for 8 weeks after STZ injection to allow DPN to develop in the diabetic rats.
Measurements and tissues
Blood glucose, body weight, water intake, food intake, urine volume and withdrawal threshold were measured as previously described [12]. After the 8-week diabetic period, rats were euthanized by cervical dislocation and decapitation under sodium pentobarbital anesthesia, and the bilateral L3-L6 DRGs were dissected and collected, then used immediately for the preparation of single-cell suspensions.
Cell suspension preparation
The collected DRGs were minced into small pieces using micro-scissors. Subsequently, the DRGs were digested at 37 °C with the following two enzymes: 0.1% type I collagenase (Sigma, St Louis, MO) for 40 min, and 0.25% Trypsin-EDTA (Gibco, Grand Island, NY) for a further 20 min. At the end of the digestion, the DRGs were transferred to complete medium (RPMI 1640 + 0.04% BSA), triturated with a fire-polished Pasteur pipette, and filtered through a 70-µm and then a 40-µm cell strainer (Falcon). The dissociated cells were collected by centrifugation at 1000 r/min for 7 min. Dead cells were removed with the MACS Dead Cell Removal Kit (130-090-101), yielding high-quality cell suspensions that were used for single-cell sequencing immediately.
Single-cell sequencing
Cell suspensions were loaded onto a Chromium Controller (10× Genomics, GCG-SR-1) to form gel beads-in-emulsion (GEMs). The GEMs containing barcoded gel beads and labeled single cells were transferred into a tube strip for reverse transcription. Single-cell RNA-seq libraries were constructed using the Chromium Single Cell 3′ Library & Single Cell 3′ v3 Gel Beads (Chromium, PN-1000075) according to the manufacturer's protocols. Briefly, the cell suspensions were mixed with RT-PCR reagents and barcoded gel beads and added to a Chromium chip, which was then placed in the Chromium Controller. The GEM RT-PCR was conducted on a PCR instrument (Bio-Rad, MyCycler) using the following program: 45 min at 53 °C; 5 min at 85 °C; hold at 4 °C. Barcoded cDNA was extracted from the partitioning oil and amplified using cDNA Amplification Reaction Mix. The 10× Chromium kit, which includes reagents for fragmentation, ligation and sample-index PCR, was used to generate the sequencing libraries. The final libraries were sequenced on an Illumina NovaSeq platform. The raw sequencing data were deposited in the NCBI GEO database under accession number GSE248328.
Sequencing data processing
Cell Ranger software (10× Genomics, version 3.1.0) was applied to demultiplex cellular barcodes, map reads to the transcriptome and genome with the STAR aligner, and down-sample reads to generate normalized aggregate data across samples as required, producing a matrix of gene counts versus cells. To remove possible multiplet captures, dead cells and low-quality cells, the following criteria were applied: the number of expressed genes per cell (median ± 4×MAD), the unique molecular identifier (UMI) counts per cell (median ± 4×MAD), and the proportion of mitochondrial gene counts (< 20%). Principal component analysis (PCA) was carried out to reduce dimensionality, and the data were visualized in two dimensions by t-distributed stochastic neighbor embedding (t-SNE). Batch effects were corrected by mutual nearest neighbor detection [13]. The R package SingleR was used to infer the origin of each single cell and identify cell types independently [14].
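The median ± 4×MAD filters described above can be written down directly. A minimal sketch of the per-cell QC thresholding (variable names are illustrative; this is not the original pipeline code):

```python
import numpy as np

def mad_bounds(values: np.ndarray, n_mads: float = 4.0) -> tuple[float, float]:
    """Bounds at median ± n_mads * MAD, as used for the per-cell
    gene-count and UMI-count filters."""
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    return med - n_mads * mad, med + n_mads * mad

def qc_mask(n_genes: np.ndarray, n_umis: np.ndarray,
            pct_mito: np.ndarray) -> np.ndarray:
    """Boolean mask of cells passing all three QC criteria:
    gene counts and UMI counts within median ± 4*MAD, and < 20% mito."""
    g_lo, g_hi = mad_bounds(n_genes)
    u_lo, u_hi = mad_bounds(n_umis)
    return ((n_genes >= g_lo) & (n_genes <= g_hi) &
            (n_umis >= u_lo) & (n_umis <= u_hi) &
            (pct_mito < 20.0))
```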
Identification of marker genes and differentially expressed genes
The marker genes of each cluster were identified using the Seurat FindAllMarkers function [15]. The expression levels of these marker genes in a given cluster were significantly higher than in the other clusters, so they could be used to verify and define the cell type of each cluster. The marker genes were visualized using the VlnPlot and FeaturePlot functions. Differentially expressed genes between the DPN group and the control group were identified with the MAST test via the Seurat package [15]. Only genes expressed in at least 10% of the cells in either of the two groups were used for differential expression analysis. Genes were considered significantly aberrantly expressed only if the P-value was < 0.05 and the fold change was > 1.5.
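The selection criteria translate into a one-line filter over a differential expression results table. A sketch assuming a pandas DataFrame with hypothetical column names (p_val, fold_change, pct_dpn, pct_ctrl), not the original pipeline's output:

```python
import pandas as pd

def significant_degs(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the selection used in the text: gene expressed in >= 10% of
    cells in either group, P-value < 0.05, and fold change > 1.5.
    Column names (p_val, fold_change, pct_dpn, pct_ctrl) are illustrative."""
    expressed = (df["pct_dpn"] >= 0.10) | (df["pct_ctrl"] >= 0.10)
    return df[expressed & (df["p_val"] < 0.05) & (df["fold_change"] > 1.5)]
```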
GO enrichment and KEGG pathway enrichment analysis
Both the differentially expressed genes and the marker genes of the different cell types or sub-types were subjected to Gene Ontology (GO) analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis. GO and KEGG pathway enrichment analyses of DEGs or marker genes were performed in R (version 4.0.3) based on the hypergeometric distribution. Gene set expression was quantified with the R package Quantitative Set Analysis of Gene Expression (QuSAGE) [16].
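Hypergeometric enrichment, as used here for GO/KEGG terms, asks how surprising it is that k of the n query genes fall inside an m-gene term drawn from the annotated universe. A compact sketch with scipy (the counts in the example are hypothetical):

```python
from scipy.stats import hypergeom

def enrichment_pvalue(k_hits: int, n_query: int,
                      m_term: int, n_universe: int) -> float:
    """Hypergeometric P-value for observing >= k_hits genes of a GO/KEGG
    term (m_term genes) inside a query list of n_query genes drawn from
    a universe of n_universe annotated genes."""
    # P(X >= k) with X ~ Hypergeom(M=n_universe, K=m_term, n=n_query)
    return float(hypergeom.sf(k_hits - 1, n_universe, m_term, n_query))

# Example: 12 of 200 DEGs fall in a 150-gene pathway out of 20,000
# annotated genes (expected count is only 1.5, so P is very small).
print(enrichment_pvalue(12, 200, 150, 20000))
```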
Pseudotime trajectory analysis and cell communication analysis
Single-cell pseudotime trajectory analysis was conducted to reveal the evolution of the different cell types using the Monocle2 algorithm (version 2.4.0) [17]. Cell communication was analyzed on the basis of ligand-receptor interactions using CellPhoneDB [18], to reveal the diversity, complexity and dynamics of intercellular communication across a wide range of biological processes.
Identification of multiple cell types in DRGs
Based on blood glucose, body weight, water intake, food intake, urine volume and withdrawal threshold, the STZ-injected rats developed DPN successfully (S1 Fig). The DRGs (bilateral L3-L6) from five control and five DPN rats were dissociated into single-cell suspensions, which were used for high-throughput scRNA-seq on the 10× Genomics platform.
Identification of sub-types of neurons in DRGs
Cells in cluster 11 were interpreted as neurons (Fig 1C and 1D) and further sub-clustered into six groups by dimensionality reduction (Fig 3A). To identify group-specific marker genes, the relative expression levels of genes in each group were calculated. The distributions and expression levels of the top marker genes are displayed in violin plots (Fig 3C). Marker genes in neuron sub-types 1-6 (N1-N6) are illustrated in a heatmap (Fig 3B), and the gene list is given in S2 Table. According to previous studies, the N1 group can be classified as peptidergic (PEP) neurons, which expressed classical markers such as neurotrophic receptor tyrosine kinase 1 (Ntrk1 or TrkA), calcitonin-related polypeptide (Calca or CGRP), and substance P (Tac1) [32]. The N2 group likely corresponds to non-peptidergic (NP) neurons based on their highly expressed marker genes, including P2rx3, Ret, and Gfra2 [32][33][34]. In addition, we concluded that the N4, N5 and N6 groups might correspond to neurofilament-containing (NF) neurons, which expressed the reported markers neurotrophic receptor tyrosine kinase 2 (Ntrk2 or TrkB), secreted phosphoprotein 1 (Spp1), lactate dehydrogenase B (Ldhb), and calcium voltage-gated channel subunit alpha1 H (Cacna1h) [33] (Fig 3E). However, the N3 group did not match any reported sensory neuron cell type according to its expressed marker genes.
Differentially expressed genes in neurons with DPN
Using the filtering criteria of P-value < 0.05 and fold change > 1.5, differentially expressed transcripts were identified in the different neuron sub-types in DPN (S3 Table). All neurons from both groups are shown by t-SNE dimension reduction (Fig 4A), and the proportion of each sub-type is shown in Fig 4B and 4C. Pathway and GO analyses of the differentially expressed transcripts in each neuron sub-type are shown in Fig 4D-4G and S2 and S3 Figs. Notably, we found that numerous voltage-gated, ligand-gated and transient receptor potential (TRP) channels were significantly differentially expressed across the neuronal sub-types, such as voltage-gated sodium channels (including Scn3a and Scn7a in the N6 group), voltage-gated potassium channels (including Kcnk12 and Kcnj11 in the N1 group, Kcnd1, Kcnk2 and Kcng2 in the N2 group, and Kcnq5 in the N6 group), ligand-gated ion channels (including Htr3a in the N2 group, and Grik1 and P2rx3 in the N6 group), and TRP channels (including Trpv1 in the N2 group and Trpm7 in the N3 group) [35]. Furthermore, many other operational components of sensory neurons were also aberrantly expressed, relating to neurotransmission (including Scg3 in the N1 group, Gal and Vgf in the N2 group, Calca in the N4 group, and Tac1 in the N4 and N6 groups), presynaptic regulation (Gabarapl2 in the N1 group), chronic pain (Ptgir in the N2 group), and conductive channels (Ina in the N2 group) (Fig 4H) [33].
Identification of sub-types of SGCs in DRGs
Based on the expression of known gene signatures (Fabp7, Tyrp1, Hmgcs2 and Slc1a3), clusters 1, 3 and 6 (also named SGC1, SGC2 and SGC3, respectively) were classified as SGCs. Particularly, the SGC1 and SGC2 groups shared plenty of common marker genes, and GO analysis indicated that these enriched genes were related to fatty acid metabolism, such as fatty acid binding proteins (Fabp5 and Fabp7), fatty acid elongases (Elovl2 and Elovl6), and desaturases (Scd, Scd2, Fads1 and Sc5d). Besides, the SGC1 and SGC2 groups also expressed marker genes associated with cholesterol metabolism, including Cyp51, Fdps, Npc2, Insig1, and Hmgcr [36]. In addition, we found that the SGC3 group expressed a cohort of immune-related genes, such as vimentin (Vim) and interferon regulatory factor 1 (Irf1), indicating that these cells might participate in defending DRGs against viral or bacterial infections [36,37] (Fig 5B-5E).
Identification of sub-types of SCs in DRGs
By screening the top marker genes, we identified the cluster 2, 4 and 12 cells as SCs; each cluster represented one sub-type of SCs, named SC1, SC2 and SC3, respectively. Marker genes in each SC sub-type are shown in S5 Table. Among them, the four top marker genes in SC3 were Mag, Cldn19, Pou3f1 and Prx, which encode myelin-related proteins responsible for the process of myelination [20,38]. Meanwhile, we found that the top transcript in SC1, Scn7a, belongs to the family of voltage-gated sodium channels that sustain electrical activity in excitable tissues [20]. Besides, one top transcript in SC2, epithelial membrane protein (Emp1), belongs to the peripheral myelin protein 22 (PMP22) family and participates in neuronal differentiation and axon growth [39] (Fig 6B). Considering the heterogeneity of the SC sub-types, we performed pseudotime trajectory analysis of our single-cell SC data using Monocle [17]. As illustrated in the trajectory, SC1 and SC3 showed similar distributions along the pseudotime trajectory, located primarily on its left side, whereas SC2 was distributed toward the upper right. Given the relatively higher expression of myelin-associated genes in SC3, this suggests a higher degree of differentiation on the left side of the trajectory than on the right (Fig 6F and 6G).
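Monocle2 is an R package; as an illustrative Python analogue, scanpy's diffusion pseudotime can order cells along a trajectory. The sketch below assumes an AnnData object `adata` restricted to the SC clusters, with a hypothetical 'subtype' label; the choice of root cell is arbitrary here and would need biological justification in practice.

```python
import numpy as np
import scanpy as sc

# Assumes `adata` holds the SC cells with a 'subtype' column
# (SC1/SC2/SC3); diffusion pseudotime stands in for Monocle2.
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.diffmap(adata)

# Root the trajectory in one group; picking SC2 is purely illustrative.
adata.uns["iroot"] = int(np.flatnonzero(adata.obs["subtype"] == "SC2")[0])
sc.tl.dpt(adata)

# Per-cell pseudotime now sits in adata.obs['dpt_pseudotime'] and
# could be compared across sub-types, in the spirit of Fig 6F and 6G.
```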
The communication relationships between neurons, SGCs and SCs in DRGs
The cellular communications between neurons, SGCs and SCs were evaluated through ligand-receptor pairs using CellPhoneDB [18] (Fig 7A-7C). We found that the strongest interactions between neurons and SGCs occurred between N6 and SGC1 or SGC2 (Fig 7D), interactions between neurons and SCs were highest between N6 and SC2 (Fig 7E), and interactions between SGCs and SCs were highest between SGC2 and SC2 (Fig 7F). Next, we further investigated the specific receptor-ligand pairs in the different cell groups in detail. It was demonstrated that Alk on N6 closely bound Ptn on SGC2 (Fig 7G) and SC2 (Fig 7H), and that Ptn on SGC2 closely bound Ptprz1 on SC2 (Fig 7I).
Differentially expressed genes in SGCs with DPN
To isolate transcripts dysregulated in SGCs with DPN, differential expression analysis was conducted. All three SGC groups showed significantly differentially expressed genes with DPN (Fig 8 and S6 Table) and shared the most significantly downregulated genes, including Hspa1b and Angptl4. Interestingly, both of these genes are involved in the apoptotic process, mediating anti-apoptotic effects that protect cells from multiple proapoptotic stimuli [40,41]. Besides, the growth factor-related genes Igf1 and Igfbp2, which were significantly downregulated in SGC1 and SGC2, are also apoptosis-associated genes reported to have protective effects on cell growth and against apoptosis [42,43]. All SGC clusters also shared plenty of upregulated genes, such as the heat shock protein genes Hspa5 and Hsp90b1, which are known unfolded protein response (UPR) genes that can be activated by endoplasmic reticulum (ER) stress [44]. Other shared upregulated genes included Adamts1 and Pdia3, a hypoxia-inducible gene [45] and a proapoptotic response gene [46], respectively.
Differentially expressed genes in SCs with DPN
Through differential expression analysis, many upregulated and downregulated transcripts were also identified in SCs with DPN (Fig 9 and S7 Table). We found that the three SC clusters shared numerous dysregulated transcripts, many of which were also aberrantly expressed in SGCs; for example, the upregulated transcripts included Scn7a, Txnip, Sptbn1, Hspa5 and Hsp90b1, and the downregulated transcripts included Hspa1b. Scn7a plays regulatory roles in diabetic neuropathic pain [47]; Txnip is a key factor causing SC dysfunction in DPN [48]; and Sptbn1 is involved in neurodegenerative diseases and regulates axonal transport and neurite growth [49]. Hspa1b, Hspa5 and Hsp90b1 belong to the heat shock protein genes: Hspa1b has an anti-apoptotic role protecting cells from multiple proapoptotic stimuli [40], while the latter two transcripts can be activated by ER stress [44].
Additionally, growth-associated marker genes, such as stathmin 2 (Stmn2) and peripherin (Prph), were upregulated in the SC groups with DPN. Stmn2 and Prph are classical markers of axonal growth and neurite extension [50], indicating that the SCs might exhibit some degree of regeneration during the development of DPN. Besides, known DPN-associated genes were also identified in SCs, such as the downregulated genes metallothionein 3 (Mt3) and Pmp2 in the mySC cluster [51,52].
Discussion
In the current study, by comprehensively analyzing single-cell sequencing data of DRG tissues from SD rats, we characterized the complexity of the cellular composition and classified DRG cells into 8 main cell types: neurons, SGCs, SCs, endothelial cells, mural cells, fibroblasts, macrophages, and neutrophils. Furthermore, our analysis revealed 6 sub-types of neurons, 3 sub-types of SGCs and 3 sub-types of SCs, and provided the typical gene expression profile of each cell type. CellPhoneDB-predicted cell communications revealed close cell-cell interactions between neurons, SGCs and SCs. We also analyzed the possible biological roles of neurons, SGCs and SCs in DPN. Our data showed dynamic gene expression alterations in these three cell types, which may play crucial roles in DPN.
The DRG neurons are somatosensory neurons, which detect physical and noxious stimulation and transmit these signals from the peripheral nervous system to the central nervous system. Notably, a previous study dissected DRG neurons into four main types: PEP, NP, NF and tyrosine hydroxylase-containing (TH) neurons [33]. Consistent with former studies, we annotated the N1 cluster as PEP neurons, the N2 cluster as NP neurons, and the N4, N5 and N6 clusters as NF neurons based on the expression of known neuronal markers. Different neuronal types might have different functional assignments, such as mechanosensitive, thermosensitive or nociceptive neurons [33]. We did not classify the N3 cluster as any known neuronal type; however, Notch1, a top marker gene in the N3 group, is a neural progenitor proliferation marker [53], indicating that this cluster might be in an early stage of neuron development.
Interestingly, Zhou et al. also examined the sub-types of DRG neurons and reclassified them according to classical DRG neuron markers [11]. In accordance with our study, PEP neurons and NP neurons were identified, and based on the t-SNE plot these two sub-types account for most of the neurons. However, we could not identify the remaining neuron sub-types reported there, such as Trpm8-positive neurons (TRPM8), C-fiber low-threshold mechanoreceptor (C-LTMR) neurons or somatostatin-positive neurons (SOM). Notably, the SOM neurons belong to the NP3 neurons in the classification of Usoskin et al. [33]. Besides, the MAAC neurons newly identified by Zhou et al. could originate from PEP neurons on the basis of their gene expression profiles [11]. Future studies may combine these results to obtain a more complete classification of neurons.
We successfully induced DPN in rats using STZ injection [12] and identified the differentially expressed transcripts in neurons with DPN. Remarkably, many transcripts related to voltage-gated, ligand-gated and TRP channels were significantly aberrantly expressed. An increasing number of studies have reported various channels expressed in DRG neurons, which play vital roles in modulating the excitability of sensory neurons and contribute directly to the development of painful symptoms [47,54,55]. Taking the voltage-gated potassium channels as an example, our study identified multiple potassium channels that were downregulated in DPN, and reduced expression of voltage-gated potassium channels in DRG neurons can increase neuronal excitability and contribute to diabetic neuropathic pain [54]. Together with the other aberrantly expressed operational components known to participate in sensitization during neuropathic pain, associated with neurotransmission, presynaptic regulation, chronic pain and conductive channels [33], these findings suggest that DRG neurons play pivotal roles in the pathogenesis of diabetic neuropathic pain.
SGCs are flattened, sheet-like cells surrounding the neuronal somata. In line with previous research, the SGCs we identified were enriched for genes involved in cholesterol biosynthesis and fatty acid metabolism (such as chaperone proteins, elongases and desaturases) [36], suggesting that lipid and cholesterol synthesis in SGCs is important to the associated neuronal compartments. A wide variety of neuronal stress situations, such as diabetes and traumatic nerve injury, can trigger SGC activation, and activated SGCs are characterized by profound changes [56]. Our differential expression analysis of SGCs revealed multiple dysregulated genes, including apoptosis-associated genes, heat shock protein genes and hypoxia-inducible genes. Notably, hypoxia is a significant etiologic factor in DPN, and numerous metabolic abnormalities, such as oxidative stress, can impair neural function and eventually cause cell apoptosis [57]. Besides, the aberrantly expressed heat shock protein genes are involved in ER stress, and ER dysfunction affects many aspects of cell physiology and secretion, ultimately leading to apoptosis [44]. To sum up, these dysregulated genes reveal possible ways in which SGCs participate in DPN.
The other glial cell type in DRGs is the SC, which wraps around axons in nerve trunks. SCs can be classified into myelinating and nonmyelinating cells on the basis of the way they interact with axons. In diabetic neuropathy, myelinated and nonmyelinated axons are decreased, and morphological changes and metabolic disorders are induced in SCs, such as aggregates of glycogen particles, edematous cytoplasm, activation of protein kinase C and polyol pathway hyperactivity [58]. Our single-cell sequencing data also identified numerous differentially expressed genes in SCs with DPN, related to cell apoptosis, oxidative stress, diabetic neuropathic pain, SC dysfunction or demyelination. For instance, Mt3 and Pmp2 were downregulated in the mySC cluster. As previously reported, metallothionein is a potent antioxidant that scavenges free radicals, and oxidative stress plays vital roles in the pathogenesis of DPN [51]; Pmp2 is expressed in peripheral nervous system myelin, where it has essential roles in myelin sheath structure and nerve function, and mutant Pmp2 causes severe demyelination and decreased nerve conduction velocities [52]. Thus, SCs may also play a crucial role in the pathogenic mechanisms of DPN and need to be further studied.
Apart from the DPN model, DRG tissues also exhibit cellular alterations in other injury models. Following peripheral nerve injury, scRNA-seq of DRG tissues showed that multiple sub-types of DRG neurons were in a regenerative state [59], with upregulated expression of classic regeneration-associated genes such as Atf3 and c-Jun [60]. Besides, repair SCs, specifically labeled by Shh, were identified after spinal nerve transection, but an increase in repair SCs following sciatic nerve crush or transection was not detected [61]. Furthermore, following nerve injury, SGCs can promote axon regeneration of DRG neurons by upregulating genes related to the immune system and lipid metabolism [62]. However, the pathological characteristics of DPN include nerve demyelination, axonal atrophy, cell apoptosis and delayed regeneration. Thus, the cellular alterations in the DPN model are quite different from those in nerve injury models, and these differences may be instructive for the treatment of DPN.
As is known, neuronal cell bodies in DRGs are covered by SGCs, whereas axons in nerve trunks are ensheathed by nmSCs or mySCs. Cells in DRGs normally interact with other cells for cellular communication and signal transduction. For instance, neurons control SC functions by providing essential signals, whereas SCs promote neuronal survival and ensure efficient action potential transduction; abnormal neuron-SC interactions can result in diseases, including peripheral neuropathy [63]. Besides, SGCs in DRGs participate in cellular communication through gap junctions, and abnormal interactions between neurons and SGCs can contribute to various pain conditions, such as post-herpetic pain, post-surgical pain, and diabetic neuropathic pain [64]. Also, SGC activation and TNF-α release can establish neuron-glia communication in DRGs and play important roles in inflammatory visceral hyperalgesia [65]. Our bioinformatic results showed close cell-cell interactions among neurons, SGCs and SCs, with numerous specific ligand-receptor pairs on these cells. Notably, Alk is expressed in nociceptive DRG neurons and is involved in the neuron-SC interaction [66], and Ptn is a secreted binding protein with vital roles in neural-glial interactions during the development of the nervous system [67]. Thus, those ligand-receptor pairs might reveal specific modes of cell communication, which require further validation.
Although we identified multiple cell types in DRGs and investigated the roles of three major cell types in the pathophysiology of DPN, this study has some limitations. For example, the diameters of some cells, such as the large-diameter neurons, were too large to be captured by the 10× Genomics platform. In addition, some vulnerable cells may have been lost during the digestion process. Thus, our data may not represent the actual cell proportions in DRGs. Furthermore, it is worth noting that signals from highly expressed genes were analyzed, while genes with low expression levels might have been neglected. Lastly, we did not discuss cell types other than neurons, SGCs and SCs, which may also play roles in DPN.
Despite these inherent technical limitations, we report the cellular and molecular landscape of DRG tissues at the single-cell level. Our data reveal the complexity of the cellular composition and the dynamic gene expression alterations in DPN. These findings expand our understanding of the pathophysiological processes of DPN, and may serve as a resource for studying the functions of different cell types and for treating DPN.
Fig 1. scRNA-seq identifies multiple cell types in DRGs. (A) Flow chart of DPN rat induction, sample collection and database construction. (B) Violin plots of the percentage of mitochondrial genes (percent.mito), the number of genes (nGene) and the number of UMIs (nUMI) detected in each cell after quality control. (C) A t-distributed stochastic neighbor embedding (t-SNE) plot of the 14,652 cells showing 16 cell clusters. (D) A t-SNE plot showing cell types for the 14,652 cells. (E) A t-SNE plot showing cell populations colored as originating either from control tissues or from DPN tissues. (F, G) The proportion of each cell type in control and DPN tissues. https://doi.org/10.1371/journal.pone.0306424.g001
Fig 2. Marker genes identify distinct cell types in DRGs. (A-H) The cell types identified by representative marker genes, using t-SNE plots. (I) Heatmap of expression signals of top marker genes in each cell type. https://doi.org/10.1371/journal.pone.0306424.g002
Fig 3. Top marker genes and biological processes in sub-types of neurons. (A) A t-SNE plot showing six sub-types of neurons. (B) Heatmap of expression signals of top marker genes in each sub-type of neuron. (C) Violin plots showing the expression distribution for top marker genes in each sub-type of neuron. (D) GO analysis indicates enriched biological processes of each sub-type of neuron. (E) Bubble map showing the relative expression levels of reported marker genes in each neuron sub-type. https://doi.org/10.1371/journal.pone.0306424.g003
Fig 4. Aberrantly expressed genes and their biological functions in neurons with DPN. (A) A t-SNE plot showing neurons colored as originating either from control tissues or from DPN tissues. (B, C) The proportion of each sub-type of neuron in control and DPN tissues. (D, E) KEGG pathways of aberrantly expressed genes in N1 and N2 neurons with DPN. (F, G) GO biological processes and molecular functions of aberrantly expressed genes in N1 and N2 neurons with DPN. (H) Bubble map showing aberrantly expressed genes in each sub-type of neuron with DPN. https://doi.org/10.1371/journal.pone.0306424.g004
Fig 5. Identification of sub-types of SGCs in DRGs. (A) A t-SNE plot showing 3 sub-types of SGCs. (B-D) Bioinformatics analyses indicate enriched biological processes (B), molecular functions (C) and pathways (D) of each sub-type of SGCs. (E) Bubble map showing the relative expression levels of reported marker genes in each SGC sub-type. https://doi.org/10.1371/journal.pone.0306424.g005
Fig 6. Identification of sub-types of SCs in DRGs and pseudotime trajectory analysis of peripheral glial cell sub-types. (A) A t-SNE plot showing 3 sub-types of SCs. (B) Bubble map showing the relative expression levels of reported marker genes in each SC sub-type. (C-E) Bioinformatics analyses indicate enriched molecular functions (C), biological processes (D) and pathways (E) of each sub-type of SCs. (F, G) Inference of the SGC and SC developmental connection by pseudotime trajectory analysis. SGCs and SCs exhibit distinct cell fates. Color key from bright to dark indicates cell progression from the early to the late stage. https://doi.org/10.1371/journal.pone.0306424.g006
Fig 7. The cell-cell communications between neurons, SGCs and SCs in DRGs. (A-C) Chord diagrams showing the cellular communications between neurons, SGCs and SCs in DRGs. (D-F) Stacked bar graphs showing the number of interacting ligand-receptor pairs between neurons, SGCs and SCs in DRGs. (G-I) Ligand-receptor interactions between neurons, SGCs and SCs in DRGs. https://doi.org/10.1371/journal.pone.0306424.g007
Fig 8. Aberrantly expressed genes and their biological functions in SGCs with DPN. (A) A t-SNE plot showing SGCs colored as originating either from control tissues or from DPN tissues. (B, C) The proportion of each sub-type of SGCs in control and DPN tissues. (D-F) GO biological processes and molecular functions of aberrantly expressed genes in each sub-type of SGCs with DPN. (G) Bubble map showing aberrantly expressed genes in each sub-type of SGCs with DPN. https://doi.org/10.1371/journal.pone.0306424.g008
Fig 9. Aberrantly expressed genes and their biological functions in SCs with DPN. (A) A t-SNE plot showing SCs colored as originating either from control tissues or from DPN tissues. (B, C) The proportion of each sub-type of SCs in control and DPN tissues. (D-F) GO biological processes and molecular functions of aberrantly expressed genes in each sub-type of SCs with DPN. (G) Bubble map showing aberrantly expressed genes in each sub-type of SCs with DPN. https://doi.org/10.1371/journal.pone.0306424.g009
"Medicine",
"Biology"
] |
Deuteron Helicity Flip Generalized Parton Distributions in a Convolution Model
We discuss the general properties of generalized parton distributions with helicity flip (transversity) for spin-1 hadrons in the leading twist case. Using a basic light-cone convolution model, we show the deuteron helicity amplitudes that contain the quark helicity flip GPDs and comment on the role deuteron angular momentum plays in them.
Helicity flip GPDs for spin 1 hadrons
Generalized parton distributions (GPDs) appear as scalar functions in the decomposition of off-forward quark and gluon correlators in hadrons. Through QCD factorization theorems they parametrize the non-perturbative part of the amplitude in processes such as deeply virtual Compton scattering (DVCS) and deep exclusive meson production (DEMP) [1]. While a rich phenomenology exists for the nucleon, there is comparatively less material available concerning nuclear GPDs, which enter in coherent exclusive processes on nuclei. In these proceedings, we report on recent work on quark helicity flip GPDs for the deuteron. More details can be found in Refs. [2,3].
As the deuteron is a spin-1 object, it admits more GPDs than the spin-1/2 case. At leading twist, for both quarks and gluons there are 9 helicity conserving GPDs and 9 helicity flip or transversity ones. The helicity conserving ones were introduced in Ref. [4], while the helicity flip ones were recently introduced in Ref. [2]. Both sectors evolve separately under QCD evolution, and in the helicity flip sector quark and gluon operators do not mix.
For the helicity flip sector, the decomposition of the quark correlator T^{q i} for a spin-1 hadron is given in Ref. [2], where it is parametrized by nine real GPDs. Here, n is a light-like four-vector and i a transverse index. The initial spin-1 hadron (mass M) has four-momentum p, polarization vector ϵ and light-front helicity λ, with the equivalent primed variables for the final hadron. Kinematic variables such as the skewness ξ and the momentum transfer t are defined as in Ref. [2]. The 9 real GPDs have the following properties: 6 GPDs are even functions of the skewness ξ, and the first Mellin moments yield ξ-independent generalized form factors in the quark helicity flip sector [3]. Four transversity GPDs (i ∈ {2, 3, 5, 8}) have zero sum rules for the first Mellin moment.
Deuteron convolution model
To compute the quark transversity GPDs for the deuteron, we consider a basic convolution model. We retain only the dominant np component of the deuteron wave function and work in the leading order impulse approximation, where one of the nucleons acts as a so-called "spectator". Using methods of light-front perturbation theory, nuclear and nucleon structure can be separated, and we can write the correlator of Eq. (1) as a convolution of two light-front deuteron wave functions and a quark helicity flip correlator for the nucleon [2,5]. The nucleon correlators are then decomposed through their corresponding spin-1/2 GPDs.
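Schematically, such an impulse-approximation convolution takes the form below; this is only an illustrative sketch of the structure, with the precise kinematic arguments, helicity sums and wave-function overlaps given in Refs. [2,5]:

```latex
H^{T,d}_{i}(x,\xi,t) \;\sim\; \sum_{N=p,n} \int_{|x|}^{1} \frac{\mathrm{d}\beta}{\beta}\,
\rho_{N}(\beta,\xi,t)\; H^{T,N}\!\left(\frac{x}{\beta},\frac{\xi}{\beta},t\right),
```

where ρ_N denotes the light-front momentum distribution of the active nucleon, built from the two deuteron light-front wave functions.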
One disadvantage of considering only the lowest Fock state of the deuteron (the np component) is that this truncation breaks Lorentz covariance and consequently also the polynomiality property of the deuteron GPDs [2]. Extensions including additional contributions to restore polynomiality will be the topic of a future study.
In Figs. 1 and 2 we show results in this convolution model for the helicity amplitudes A_{λ'+;λ−}, where the plus and minus refer to the helicities of the outgoing and incoming quark in the correlator. These helicity amplitudes are linear combinations of the transversity GPDs (see Ref. [2] for the explicit relation) and exhibit the role of deuteron angular momentum more transparently than the corresponding GPDs. For plots of the GPDs, we refer to Ref. [2]. For the calculations in the convolution model, we use the nucleon transversity GPDs of Ref. [6] and the AV18 deuteron wave function parametrization [7]. Figure 1 shows the separate contributions from the deuteron S- and D-wave components to the total result; the difference between the first two and the latter originates from S-D interference contributions. Considering the top row of Fig. 1, which corresponds to the deuteron helicity conserving amplitudes, it is clear that these are dominated by the pure S-wave contribution, whereas the amplitudes that involve a change in deuteron helicity receive major contributions from the S-D interference terms. The two amplitudes with two units of deuteron helicity flip (bottom row, right two panels) are identically zero when only the deuteron S-wave is included, as in that case there is no orbital angular momentum available in the deuteron to compensate the change in helicities (two units for the deuteron, one for the quark). Figure 2 shows the helicity amplitudes at two values of the momentum transfer t. Helicity amplitudes with different units of deuteron helicity change show different behavior with increasing momentum transfer: with no helicity flip the amplitude shrinks at larger t, the amplitudes with a single unit of helicity change increase slightly in size at the larger t value, and the amplitudes with a complete deuteron helicity flip grow significantly larger. This again reflects the role deuteron angular momentum plays, supplied through the momentum transfer.
To conclude, the role of these GPDs for the deuteron could be explored in the phenomenology of coherent DVCS on the deuteron (where gluon transversity enters at NLO), double vector meson production (see Refs. [8,9] for the nucleon case) and DEMP (in combination with a higher-twist distribution amplitude) [6,10] on deuteron targets.
"Physics"
] |
A Review of Recent Solar Type III Imaging Spectroscopy
Solar type III radio bursts are the most common impulsive radio signatures from the Sun, stimulated by electron beams traveling through the solar corona and solar wind. Type III burst analysis provides us with a powerful remote sensing diagnostic for both the electron beams and the plasma they travel through. Advanced radio telescopes like the LOw Frequency ARray (LOFAR), the Murchison Widefield Array (MWA) and the Karl G. Jansky Very Large Array (VLA) are now giving us type III imaging spectroscopy with orders of magnitude better resolution than before. In this review, the recent observational progress enabled by these new observations is discussed for type III bursts at GHz and MHz frequencies, including how the enhanced resolution has facilitated the study of type III burst fine structure. The new results require a more detailed theoretical understanding of how type III bursts are produced. Consequently, recent numerical work is discussed which improves our understanding of how electron beams, Langmuir waves and radio waves evolve through the turbulent solar system plasma. Looking toward the future, some theoretical challenges are discussed that we need to overcome on our quest to understand type III bursts and the electron beams that drive them.
INTRODUCTION
Type III radio bursts are the most common coherent radio emission produced by the Sun. Type III bursts are an indirect signature of energetic electrons propagating through the plasma of the solar corona and the solar wind. A gift of non-linear physics, the more we understand type III bursts, the more we can use them as remote sensing tools for astrophysical plasma. As high energy electron beams propagate through plasma with decreasing background electron density, and hence decreasing plasma frequency, they emit type III radio emission at correspondingly decreasing radio frequencies. The spatial and spectral evolution of type IIIs thus contains a wealth of plasma dynamics information that has been studied for many decades since their first observational report by Payne-Scott et al. (1947). Analysis of type IIIs can provide insight into astrophysical processes including particle acceleration, charged particle transport through plasma, and the structure of solar system plasma. Space-based observations can detect in situ the electron beams, their associated plasma waves and radio spectra. However, we are dependent upon Earth-based telescopes to provide type III imaging, which we obtain above the 10 MHz ionospheric cut-off. These frequencies correspond to electron beams propagating through the solar corona before they reach interplanetary space.
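The quantitative link between ambient density and emission frequency is the local electron plasma frequency,

```latex
f_p = \frac{1}{2\pi}\sqrt{\frac{n_e e^2}{\varepsilon_0 m_e}}
    \approx 8.98\,\mathrm{kHz}\,\sqrt{n_e\,[\mathrm{cm}^{-3}]},
```

so, for example, a coronal density of n_e ≈ 10^8 cm^-3 corresponds to fundamental plasma emission near 90 MHz.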
The focus of this review is to cover the advances in type III theory that have arisen due to the new high resolution imaging spectroscopy which became available in the last decade. Type III observations in the past were analyzed either spectroscopically or through imaging at only a few discrete frequencies. Now, orders of magnitude better spatial, spectral and temporal resolution is allowing the physics of the radio Sun to be examined like never before. The main telescopes that have been facilitating new type III observations of the Sun are (in descending frequency) the upgraded Karl G. Jansky Very Large Array (VLA, Perley et al., 2011), the Mingantu Ultrawide Spectral Radioheliograph (MUSER, Yan et al., 2009), the Murchison Widefield Array (MWA, Lonsdale et al., 2009), the Low Frequency Array (LOFAR, van Haarlem et al., 2013) and the Long Wavelength Array (Ellingson et al., 2009). Additionally, imaging at discrete frequencies has been provided by the Nançay Radioheliograph (NRH, Kerdraon and Delouis, 1997) and the Giant Metrewave Radio Telescope (GMRT, Swarup et al., 1991).
This review is not intended to be a historical overview of type III bursts, nor a review of all type III properties. In science we are all "perched on the shoulders of giants" and so readers are encouraged to get a more complete understanding of the field by reading the introductions contained within the cited works. There are also many other reviews specifically on type III bursts (Suzuki and Dulk, 1985; Reid and Ratcliffe, 2014) and more generally on solar radio emission (e.g., Dulk, 1985; McLean and Labrum, 1985; Bastian, 1990; Pick and Vilmer, 2008; Gary et al., 2018).
Over the last decade, snapshot synthesis imaging techniques for generating solar radio images have substantially improved. A significant upgrade was made to the VLA, described in Perley et al. (2011), where state-of-the-art receivers and electronics were added, greatly increasing its capabilities. There are now a larger number of spectral channels, a larger instantaneous bandwidth for imaging and faster sampling times, enabling new solar radio observations first documented by Chen et al. (2013). Additionally, new radio telescopes like LOFAR, MUSER, and the MWA have been built with large numbers of antennas distributed across large spatial scales. These new telescopes have drastically improved the UV coverage available for solar imaging spectroscopy, leading to temporal resolutions of 100 ms or better for imaging, and down to microseconds for spectroscopy. The increased number of long baselines provides orders of magnitude better spatial resolution, although radio transport effects limit the use of such high resolution for solar science at MHz frequencies. New and improved telescopes also have enhanced spectral resolutions, at low frequencies going down to 100s of kHz. The latter is particularly significant, as low frequency imaging spectroscopy had been carried out with a spectral resolution of 40 MHz until the 1980s, preventing past imaging spectroscopic analysis of type III fine structure. An example of new imaging techniques using the MWA is given by Mohan and Oberoi (2017).
As well as traditional interferometric techniques, new radio interferometers are able to operate in a coherent tied-array mode that involves combining the collecting area into array beams, or a coherent sum of multiple station beams (see e.g., Stappers et al., 2011, for a description using LOFAR). Hundreds of tied-array beams are pointed at the Sun in a honeycomb pattern that mosaics the solar radio intensity. The advantage of this method of imaging is an enhanced spectral resolution of 10s of kHz and temporal resolutions of ms, which are particularly important for imaging type III bursts that are short-lived and change significantly with frequency. The disadvantage is a reduced spatial resolution. An early example of tied-array imaging performed by LOFAR is given in Figure 1. This example highlights the power of imaging spectroscopy, as each pixel has an associated dynamic spectrum. One is able to disentangle each burst from the others via their spatial information, which would not have been possible from full-disc integrated dynamic spectra.
With the successful launches of Parker Solar Probe (PSP, Fox et al., 2016) and Solar Orbiter (SolO, Müller et al., 2013) traveling close to the Sun, analysis of coronal magnetic connectivity is hugely important. In particular with PSP, analysis of in situ data close to the Sun is dramatically improved once we know where on the solar disc the plasma originated. Type III imaging spectroscopy from Earth plays a crucial role here, as radio bursts can isolate where high energy particles were accelerated and what trajectory they took when escaping the Sun. Despite energetic protons not producing radio emission, they are likely to follow the same magnetic connectivity as the electrons. Similarly, type III bursts can show the trajectory of heated plasma jets, typically observed in UV or X-rays, which can subsequently be observed in situ. Type III bursts are also able to ascertain coronal plasma parameters in high regions of the solar corona (around 1 solar radius and above), where UV and X-ray diagnostics are not effective because the tenuous plasma does not emit enough photons at these wavelengths. Coronal parameters deduced from type III imaging spectroscopy can then be compared with solar wind parameters detected in situ to help understand how the solar corona transitions into the solar wind.
This review begins by discussing high frequency radio bursts and the constraints they place on particle acceleration. Recently observed properties of low-frequency bursts are then discussed in the frame of particle propagation through the corona, along with new type III fine structure observations. The type III contribution toward coronal density models is then featured, along with the difficulties that result from radio wave propagation effects. New insights about electron beams, Langmuir waves and radio waves from recent theoretical models are then presented. The review concludes with a summary of some future observatories and scientific questions that type III imaging spectroscopy can help answer.
HIGH FREQUENCY BURSTS
Type III bursts observed at high frequencies are signatures of electron transport in the low corona. The term "high frequency" is subjective, and in this context we consider the frequency range of 2-0.2 GHz, with imaging spectroscopy available from the VLA, the NRH and the GMRT. This relates to altitudes lower than roughly 0.5 solar radii above the solar surface (e.g., Newkirk, 1961; Saito et al., 1977), although care must be taken when assuming heights from density models. For example, type III bursts can be observed at GHz frequencies, which requires either active-Sun coronal density models or some multiple of quiet-Sun models.

FIGURE 1 | LOFAR tied-array beam observations of type III radio bursts and solar S bursts. Left: the 170 tied-array beams covering a field-of-view of about 1.3 degrees around the Sun. Right: two dynamic spectra highlighting different solar activity coming from different regions of the solar disc. From Morosan et al. (2015).
The importance of analyzing type III bursts at high frequencies is that the electron beams which produce the emission have not traveled far from their acceleration region. The electron beams have not undergone significant transport effects, and so their kinetic profile as deduced from the type III emission is closer to the characteristics of the acceleration region that generated them. Solar particle acceleration region characteristics are ill-quantified, and a subject of intense study, because the energized particles propagate away before generating significant electromagnetic emission (see e.g., Zharkova et al., 2011, as a review). This makes high frequency type III burst imaging spectroscopy attractive for diagnosing the spatial, energetic and temporal profile of electron acceleration in the corona.
High resolution type III imaging spectroscopy at GHz frequencies has been carried out using observations from the VLA. Type III bursts were imaged in the low corona in association with coronal jets (Chen et al., 2013). The evolution of the type III source location with frequency was used to estimate the background density profile, assuming second harmonic emission due to the low polarization degree. Best-fit density scale heights were derived to be 40 Mm (Chen et al., 2013) and 3-17 Mm, or 5-29 Mm taking into account a 60 degree inclination angle. These are very steep density profiles when one considers that the scale height for a 2 MK plasma is 94 Mm, and they likely highlight that the flux tubes are far from their hydrostatic state or highly dynamic in nature. Assuming a density model, the electron beam acceleration site was estimated to be 15 Mm below the 2 GHz type III emission detected by Chen et al. (2013). The acceleration site was estimated to be even closer (1 Mm at closest approach) in a subsequent VLA study, using the conjunction of varying straight-line trajectory fits through the type III centroids at different frequencies (see Figure 2). The different trajectories varied systematically over each 50 ms timestep of the VLA observations and diverged from a compact (<600 km²) region. The authors suggest that the very short acceleration timescales strongly favor a reconnection-driven particle acceleration mechanism (e.g., Drake et al., 2006) and estimate a lower limit of E > 0.1 V m⁻¹ if a macroscopic DC electric field is responsible. Moreover, by extrapolating their density models back into the acceleration region, a high level (Δn/n > 100%) of density inhomogeneity is inferred.
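For reference, the 94 Mm figure follows from the hydrostatic density scale height; a quick check, assuming a mean molecular weight μ ≈ 0.64 and solar surface gravity g⊙ = 274 m s⁻² (values chosen here to reproduce the quoted number):

```latex
H = \frac{k_B T}{\mu m_H g_\odot}
  \approx 47\,\mathrm{Mm}\left(\frac{T}{1\,\mathrm{MK}}\right)
  \;\;\Rightarrow\;\; H(2\,\mathrm{MK}) \approx 94\,\mathrm{Mm}.
```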
Other high frequency type III imaging observations have recently been analyzed using the GMRT and the NRH. An example using the GMRT found type III emission observed at 610 MHz during a GOES C-class flare (Bisoi et al., 2018). Whilst radio emission was imaged close to the flaring site, a remote source 500 arcsec away also glowed brightly. The authors confirmed the source was generated by plasma emission and explained the remote source through wave ducting. They also highlighted that a clearer picture could have been obtained if high spectral resolution imaging spectroscopy had been available. An example using the NRH analyzed type III emission before a large coronal mass ejection (Carley et al., 2016). By combining the NRH imaging spectroscopy with radio spectroscopy at higher frequencies, Carley et al. (2016) identified where and when electron acceleration to >75 keV took place, deducing either tether-cutting or flux-cancellation type reconnection at the flux rope center. As the flux rope erupted, it caused reconnection to take place in a fan-spine null point above the rope, producing many electron beams around 5 keV for a period of 5 min, which caused lower-frequency type III bursts.

FIGURE 2 | Left: VLA dynamic spectrum of a series of type III radio bursts. Right: the electron beam trajectories fit over the type III centroids, showing a systematic change in the spatial motion. All trajectories lead to a common acceleration region, denoted by the red star. Both panels adapted from the VLA study discussed in the text.
All the type III studies above have simultaneous X-ray sources, indicating bi-directional electron beam acceleration. When co-temporal images were available, they provided an impression of the locality of the flare acceleration region and a sense of scale, particularly in the VLA study discussed above. Simultaneous study of type III radio and X-rays (see e.g., Pick and Vilmer, 2008, as a review) is attractive because electron beam characteristics can be obtained from the X-ray emission and applied to the type III producing electron beams by assuming a common acceleration region. As an example, the distance an electron beam must travel before a "bump-on-tail" distribution forms and it becomes unstable to the production of Langmuir waves has been postulated to be r ≈ dα, where d is the longitudinal extent of the acceleration region and α is the electron velocity spectral index (Reid et al., 2011). This was shown through a correlation between the X-ray spectral index and the type III starting frequency, such that the "soft-hard-soft" pattern of the X-ray spectral index was mirrored by a "low-high-low" pattern in the type III starting frequency. The compact acceleration region of <600 km² estimated from the VLA observations fits this picture if the instability distance is on the scale of megameters. Additionally, the same physical arguments imply that the electron beam instability distance is also connected to the temporal evolution of the flare acceleration, so as to also depend upon r ≈ vτα for a characteristic beam velocity v, where τ is the characteristic temporal injection time (Reid and Kontar, 2013). The fast type III time profiles on the order of 50 ms in the same observations imply similar acceleration timescales, consistent with small instability distances of megameters assuming beam velocities around 0.3c. A combined type III and X-ray flare study was also able to estimate acceleration region spatial characteristics, with altitudes ranging from 25 to 200 Mm and longitudinal extents ranging from 2 to 13 Mm.
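As a quick order-of-magnitude illustration of the two instability-distance expressions quoted above, the sketch below evaluates r ≈ dα and r ≈ vτα; all the numbers are illustrative choices, not values taken from the cited studies.

```python
# Order-of-magnitude evaluation of the beam instability distance:
# r ~ d * alpha (Reid et al., 2011) and r ~ v * tau * alpha
# (Reid and Kontar, 2013). All values below are illustrative.
c = 3.0e8            # speed of light [m/s]

d = 1.0e6            # acceleration region longitudinal extent [m]
alpha = 4.0          # electron velocity spectral index (illustrative)
v = 0.3 * c          # characteristic beam velocity
tau = 0.05           # characteristic injection time [s] (~50 ms profiles)

r_spatial = d * alpha
r_temporal = v * tau * alpha

print(f"r ~ d*alpha     = {r_spatial / 1e6:.1f} Mm")
print(f"r ~ v*tau*alpha = {r_temporal / 1e6:.1f} Mm")
```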
The temporal correlation between hard X-ray (HXR) bursts and type III radio emission is well-established, having been shown in many single-event studies and backed up by statistical studies (Kane, 1972; Kane et al., 1982; Hamilton et al., 1990; Aschwanden et al., 1995; Arzner and Benz, 2005; Reid and Vilmer, 2017). Figure 3 shows two examples of flares with temporally correlated radio and X-ray emission. Nevertheless, such a correlation is not present in all flares, presumably relating to different magnetic connectivity preventing electron beams from simultaneously streaming down into the chromosphere and up into the higher corona. A statistical correlation over 10 years of type III events with co-temporal X-ray flares has been found between the peak type III flux and the peak X-ray count rate, using imaging spectroscopy from the NRH to obtain the type III flux profile (Reid and Vilmer, 2017). Whilst a large number of non-thermal X-ray counts was accompanied by high flux type IIIs, a low number of non-thermal X-ray counts was accompanied by both high and low flux type IIIs. This result is explained by low density electron beams being able to produce detectable type III bursts via the amplification of coherent waves. Conversely, a high number of hard X-ray counts is dependent upon high beam densities due to the incoherent nature of Bremsstrahlung (Kontar et al., 2011). The dependency of hard X-rays on the number of high energy electrons naturally explains the notable absence of events with high X-ray intensity and low type III radio flux. This is another reason that co-temporal type III bursts and HXRs are not observed during all flares.

FIGURE 3 | An example of two type III bursts with associated X-ray flares, highlighting the temporal correlation between hard X-rays and type III radio emission. The flare and radio burst durations are indicated, with the longer flare duration defined both by accelerated electrons and cooling bulk plasma. From Reid and Vilmer (2017).
Electron beams that propagate down through the dense corona can also produce reverse type III bursts, which have corresponding positive frequency drift rates and typically start at frequencies >500 MHz (Isliker and Benz, 1994; Aschwanden and Benz, 1997). Simultaneous type III and reverse type III events are of particular interest for new-age imaging spectroscopy studies at higher frequencies because the radio positions localize the acceleration region, which must be situated between the standard and reverse type IIIs. These bi-directional type III events are a key motivation for simultaneous imaging spectroscopy between 1 GHz and 100 MHz with high frequency resolution. Currently, it is typical that only one side (normal or reverse) of a bi-directional burst is imaged (e.g., Feng et al., 2016). A wide range of bi-directional type III properties was reported by Tan et al. (2016) using radio spectroscopic observations. Using the full MHD equations and an assumption of a barometric atmosphere, Tan et al. (2016b) devised a model where the electron beam velocity can be estimated using the plasma beta and the type III drift rates. By estimating the plasma temperature (e.g., using soft X-ray line ratios), the upper and lower estimates of electron beam velocity can be used to obtain estimates of the magnetic field at the densities corresponding to the start frequencies of the bi-directional type III bursts. The magnetic field of the acceleration region is then simply assumed to be the average of these two values, with estimates found between 50-90 G and 4-18 G for two events (Tan et al., 2016a).
LOW FREQUENCY BURSTS
Type III bursts at low frequencies are signatures of electron beams traveling through the high corona before they reach the solar wind. In this section we define "low frequency" as from around 200 MHz down to 10 MHz, the frequency at which Earth's ionosphere becomes opaque to solar radio emission, with imaging spectroscopy results mainly from LOFAR and the MWA.
Low frequency type III radio emission provides diagnostics of electron beams that have propagated into the upper solar corona. These electron beams have undergone more propagation effects than when they produce the high frequency type III components, and so low frequency type IIIs are a key source of electron transport diagnostics. Electron beams that produce low frequency type IIIs are likely to propagate out of the solar atmosphere, and therefore their presence indicates coronal magnetic field lines open to the solar wind. As such, low frequency type IIIs provide an important diagnostic of magnetic field connectivity for solar wind and space weather studies, provided they are corrected for radio propagation effects. The higher the flux of low-frequency type III emission, as found from NRH imaging spectroscopy at 150 MHz, the more likely it is that an interplanetary type III burst is observed (Reid and Vilmer, 2017), with almost all sampled type III events with flux greater than 1000 SFU generating interplanetary bursts.

FIGURE 4 | MWA contours at 20, 50, 80% peak flux of a type III burst following a splitting magnetic connectivity above an EUV jet. The background jet structure is highlighted in black and white, over the 304 Angstrom Sun. From McCauley et al. (2017).
Type IIIs are commonly associated with jets in extreme ultraviolet (EUV) and X-rays (e.g., Bain and Fletcher, 2009; Klassen et al., 2011; Krucker et al., 2011), with the electron beams typically following the same path as the jet. A number of studies have analyzed type IIIs using high resolution MWA imaging spectroscopy that occurred co-temporally and co-spatially with jets observed in UV (McCauley et al., 2017; Cairns et al., 2018; Mulay et al., 2019). In all three events the type III emission showed resolved fine temporal structure, consistent with several distinct EUV jet episodes and caused by multiple electron beam injections, explained via energization within magnetic reconnection regions by Cairns et al. (2018). A splitting of the magnetic connectivity was highlighted by McCauley et al. (2017) using MWA imaging spectroscopy of a succession of type III radio bursts. The radio bursts started from a common source around 200 MHz and split into two separate sources, following different magnetic flux tubes. The UV jet traces out a region where the magnetic field connectivity diverges, which appears to facilitate the splitting of the type III into two separate sources, indicated in Figure 4. For this event, type III imaging spectroscopy was used to ascertain typical electron beam speeds around 0.2c. Comparing with magnetic field extrapolations, Mulay et al. (2019) found type III radio sources at the flaring site which did not appear at the expected points along magnetic field lines. There was a distinct absence of type III frequency evolution along the field that would be consistent with electron beam propagation. Mulay et al. (2019) concluded this was possibly due to radio wave scattering or the magnetic field extrapolation not including local small-scale variations.
Not all electron beams that generate type III bursts are able to escape into the solar wind. Some electron beams travel along loops that are confined to the corona, producing radio emission that forms a J- or U-shape in the dynamic spectrum, known as J/U-bursts (Maxwell and Swarup, 1958). J-bursts can also occur at the same time as coronal jets, with one imaged using LOFAR in a study where the accelerated electron beam traveled along a large magnetic coronal loop. The electron beam can also mirror at the footpoint of magnetic loops, forming what is known as an N-burst, with Kong et al. (2016) reporting a well-observed N-burst using the NRH. The bulk of magnetic flux is closed in the corona, and so we might expect U-bursts and J-bursts to be observed more often than type III bursts, when in fact the converse is true. Using the derived magnetic loop and electron beam parameters from LOFAR imaging spectroscopy, Reid and Kontar (2017a) analyzed the electron beam instability criteria, shown in Figure 5. For radio emission to be generated on closed magnetic fields, the loop needs to be long enough for a power-law accelerated electron beam to become Langmuir-wave unstable through time-of-flight. Additionally, the beam needs to be dense enough for the timescale of Langmuir wave growth and their successive conversion to radio waves to be shorter than the electron propagation timescale. These conditions result in a stricter set of requirements on electron beam and background loop plasma parameters to produce U/J-burst radio emission over type III radio emission.
One quintessential property of type III radio bursts is the frequency drift rate, typically attributed to the bulk speed of the electron beam traveling through the solar plasma. The enhanced spectral resolution from LOFAR and the MWA allows drift rates to be measured more accurately, and they have recently been statistically sampled by a number of radio spectroscopic studies: two using LOFAR (including Reid and Kontar, 2018a), one by Zhang et al. (2018) using the Nançay Decametre Array (NDA, Lecacheux, 2000), and one by Stanislavsky et al. (see Konovalenko et al., 2016). Similar to Alvarez and Haddock (1973), who compared type III drift rates from 550 MHz to 50 kHz across many studies, the drift rate has been approximated by a power law of the form ∂f/∂t = −A f^α. Findings for α were −1.82 ± 0.11, −1.63 ± 0.11, −1.23 and −2.11 ± 0.66, respectively, for the four studies, compared to the value of α = −1.84 found by Alvarez and Haddock (1973). It is unsurprising that the power law spectral indices vary, because the type III drift rate depends primarily upon the speed of the electron beam exciter (e.g., demonstrated numerically by Reid and Kontar, 2018b), the density gradient of the background plasma, and whether the radio burst was generated via fundamental or second harmonic emission. For fundamental emission, the electron beam travels smaller distances over the same frequency range. Moreover, fundamental emission is more susceptible to radio wave propagation effects (see section 3.3).
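To make the connection between drift rate, beam speed and density gradient concrete, here is a small illustrative calculation: a beam moving radially at a constant 0.2c through a Newkirk (1961) density model, with the drift-rate power-law index recovered from a log-log fit. The beam speed, starting height and the assumption of fundamental emission are choices made for illustration only.

```python
import numpy as np

R_SUN = 6.96e8                           # solar radius [m]

def n_e(r):
    """Newkirk (1961) model: n_e [cm^-3] at heliocentric distance r [R_sun]."""
    return 4.2e4 * 10.0 ** (4.32 / r)

def f_plasma(r):
    """Plasma frequency [MHz] at r [R_sun], i.e. fundamental emission."""
    return 8.98e-3 * np.sqrt(n_e(r))

# Radial beam trajectory at a constant v = 0.2c.
v = 0.2 * 3.0e8                          # [m/s]
t = np.linspace(0.0, 10.0, 1000)         # [s]
r = 1.05 + v * t / R_SUN                 # start just above the surface

f = f_plasma(r)                          # ~210 MHz down to ~25 MHz here
drift = np.gradient(f, t)                # df/dt [MHz/s], negative

# Power-law index alpha in df/dt = -A f^alpha from a log-log fit.
alpha, logA = np.polyfit(np.log(f), np.log(-drift), 1)
print(f"fitted alpha ~ {alpha:.2f}")
```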
It is noteworthy that Zhang et al. (2018) found such a low spectral index, as they sampled nearly 1400 type III bursts over half a solar cycle via an automatic analysis system using a Hough transform. Figure 6 shows a scatter plot of the frequency drift rates, highlighting the huge spread in values between different radio bursts. If we assume that the background electron density was similar for all studies, as was the proportion of fundamental to harmonic emission, then Zhang et al. (2018) observed more electron beams with lower bulk velocities than the other studies. Lower velocity beams take longer to travel from one frequency to another, and hence the frequency drift rate is smaller in magnitude. The large number of type IIIs detected by the automated method might have included a greater number of type IIIs with low signal-to-noise ratio, produced by slow, weak beams, which may account for the lower magnitude drift rates observed. A similar study would be beneficial using new type III imaging spectroscopy from telescopes with larger collecting areas, to detect very faint type III bursts and make a drift rate comparison between them and the type III bursts with higher fluxes.
The time profile of type III bursts is another property that has undergone recent analysis. At a single frequency, the time profile is influenced by a convolution of processes based upon the plasma emission mechanism, including beam acceleration characteristics, beam velocity dispersion, the radio emission process, radio propagation, and density variation in the solar corona. Characterizing when certain processes are dominant is essential for extracting diagnostic information about the electron beam. The high flux sensitivity and time resolution of new type III imaging spectroscopy make it ideally suited for analyzing the rise and decay of type III emission at a single frequency. At frequencies between 30 and 70 MHz, Reid and Kontar (2018a) used LOFAR to analyze 31 radio bursts, coming to the conclusion that their half-width half-maximum (HWHM) rise and decay were best fit by a Gaussian rise plus a Gaussian decay. This was the first study to analyze the type III HWHM rise time, finding t_rise ∝ f^(−0.77±0.14). The Gaussian decay was in contrast to the exponential decay used in previous works (e.g., Aubier and Boischot, 1972; Barrow and Achong, 1975; Mel'Nik et al., 2011), although similar HWHM decay times were found. The decay time t_decay ∝ f^(−0.89±0.15) compares very well with comparisons of decay times all the way down to kHz frequencies, shown by Kontar et al. (2019), which could be fit with a power law with a spectral index close to 1. The rise and decay times also showed a very strong correlation, indicating that one process dominates both timescales at LOFAR frequencies.
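As an illustration of this kind of fitting procedure (not the authors' own code), the sketch below fits an asymmetric Gaussian-rise/Gaussian-decay profile to a synthetic single-frequency light curve and converts the fitted widths to HWHM values; all numbers are made up for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_rise_decay(t, t_peak, amp, sig_r, sig_d):
    """Asymmetric profile: Gaussian rise before the peak and an
    independent Gaussian decay after it, as in Reid and Kontar (2018a)."""
    sig = np.where(t < t_peak, sig_r, sig_d)
    return amp * np.exp(-((t - t_peak) ** 2) / (2.0 * sig ** 2))

# Synthetic single-frequency light curve with noise (illustrative).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
flux = gauss_rise_decay(t, 4.0, 100.0, 0.8, 1.5) + rng.normal(0.0, 2.0, t.size)

popt, _ = curve_fit(gauss_rise_decay, t, flux, p0=[5.0, 80.0, 1.0, 1.0])
hwhm = np.sqrt(2.0 * np.log(2.0)) * popt[2:]   # HWHM of rise and decay
print(f"HWHM rise = {hwhm[0]:.2f} s, HWHM decay = {hwhm[1]:.2f} s")
```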
The explanation put forward by Reid and Kontar (2018a) is that the rise and decay times are primarily caused by the front and back of the electron beam, respectively, separated through velocity dispersion. The front of the electron beam consists of faster particles, which consequently arrive at a given background plasma frequency first. Similarly, the back of the electron beam is made up of slower particles and so arrives last. This theory was consistent with the larger and smaller magnitudes of drift rates found using the rise and decay times, respectively (Reid and Kontar, 2018a), and with quasilinear simulations (see section 4). Assuming a coronal density model gave average front and back velocities of 0.2 ± 0.06c and 0.15 ± 0.04c. The same conclusion was obtained by Zhang et al. (2019), who used LOFAR imaging spectroscopy to analyze one radio burst. They found that the source locations from the rise, peak and decay times were displaced with respect to one another, and followed different paths in the solar atmosphere. Derived velocities from the centroid locations differed, with the velocities relating to the front edge, peak and back edge of the type III burst being 0.42c, 0.25c, and 0.16c, respectively. The centroids of the front edge were farther away from the solar disc, meaning that the front of the electron beam was propagating along a magnetic flux tube with higher coronal density whilst the back of the beam was propagating along a flux tube with lower coronal density. It seems that whilst the rate of velocity dispersion is governed by electron beam acceleration characteristics like the energy spectral index, the magnetic flux tubes that guide electron beams also influence the type III durations.
Velocity dispersion is not the only effect that contributed to type III durations, and Zhang et al. (2019) estimated the contribution of density turbulence and wave propagation effects.
Density variation causes different regions in the solar atmosphere to have the same plasma frequency, and hence to contribute to the time profile at any given frequency (Roelof and Pick, 1989). By analyzing the wave frequency distribution of sources observed at the same distance from the solar disc, Zhang et al. (2019) estimated the effect of density variation, finding that it lengthened the observed duration by a factor of about 2.2-5.7 (a smaller effect than that of Roelof and Pick, 1989). Wave propagation effects were also analyzed using shorter duration type III bursts occurring before the main type III burst. These provided an upper limit on wave propagation effects of less than half of the observed duration. Whilst the growth rates of radio waves from Langmuir waves are fast, more theoretical study should be carried out to fully explore the effect of the radio emission mechanism on type III drift rates.
Type III polarization measurements have strong diagnostic potential for discerning between fundamental and harmonic emission. There have not been many recent works using polarization information in imaging spectroscopy, as calibration issues complicate matters. A recent work (Rahman et al., 2020) looked at the polarization of type IIIs using the MWA. They found the degree of circular polarization increased as a function of frequency and was higher at the start of the radio bursts, consistent with previous models in which fundamental emission is generated first. The polarization fraction decreased with time, consistent with scattering effects depolarizing the radio emission.
Fine Structure
One of the most powerful applications of high resolution imaging spectroscopy lies in the analysis of fine structure. For type III bursts this typically takes the form of striae bursts, or type IIIb bursts (de La Noe and Boischot, 1972), fine structure exhibited along a backbone of fundamental emission. The general consensus about the driver is density turbulence in the background plasma (e.g., Takakura and Yousef, 1975), although a formal theoretical treatment of the entire process is yet to be formulated. With previous imaging, the spectral coverage was too sparse to obtain multiple images of one stria. A number of studies using LOFAR imaging spectroscopy (Kontar et al., 2017; Chen X. et al., 2018; Kolotkov et al., 2018; Sharykin et al., 2018) have analyzed striae bursts, concentrating on one specific event on 16 April 2015.
The LOFAR dynamic spectrum and corresponding image is shown in Figure 7. Kontar et al. (2017) concentrated on analyzing the spatial information of individual striae. They found that individual striae increased in size at a single frequency, and this rate was different between the fundamental and harmonic emission. They reasoned that the intrinsic source size (the actual size of the source in the corona) is much smaller than the apparent source size (the source size derived from radio waves at Earth) and hence the brightness temperature of the sources is orders of magnitude larger than what is estimated using the observed source size. The power spectral density of the same type IIIb burst was analyzed by Chen X. et al. (2018). They found that the fluctuations had an almost 5/3 spectral index in wavenumber space, similar to what is normally observed for solar wind density turbulence at 1 AU (Chen, 2016). Interestingly, the same spectral index was found for the harmonic emission, possible to obtain due to the sensitivity and spectral resolution of LOFAR. A characteristic spatial scale of striae was estimated around 700 km, using the Newkirk density model (Newkirk, 1961) rather than spatial positions because the source was located over the solar disc. Sharykin et al. (2018) extended this study to analyse the spatial motion of individual striae. These structures have an instantaneous bandwidth around 20-100 kHz that increases with increasing central frequency. By fitting each stria with an elliptical Gaussian, they found striae drift rates around 0-0.3 MHz s^−1, increasing with increasing central frequency. The mean striae speed from the drift rate was around 600 km s^−1, larger than the typical sound speed of 200 km s^−1 and smaller than the type III burst speed of 0.2c. Kolotkov et al. (2018) analyzed the same type III event, finding quasi-periodicity in the signal. The authors explained the periodicity by a propagating fast wave train that modulated the radio emission produced by the electron beam.
Spectroscopic observations of type IIIb bursts have been carried out by Tun Beltran et al. (2015) using the LWA. Analyzing a type III storm that displayed both type IIIs and type IIIbs, they concluded that electron beams must travel along magnetic structures with density inhomogeneities present. Moreover, the sudden onset of type IIIb storms from a normal type III storm must be explained; it could be caused by different electrons propagating along different magnetic field lines with an increased level of density turbulence. Mugundhan et al. (2017) also spectroscopically analyzed a number of type IIIb bursts using the Gauribidanur Low Frequency Solar Spectrograph (GLOSS, Kishore et al., 2014). By analyzing numerous striae, they approximated Δn/n using the observed value of Δf/f, finding values of 0.006 ± 0.002. Sharykin et al. (2018) made the same assumption and found similar amplitudes of 10^−3.
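Since the plasma frequency scales as the square root of density, the fractional density fluctuation follows directly from the fractional frequency fluctuation, Δn/n = 2 Δf/f. A minimal sketch, using the striae bandwidths quoted above:

```python
def density_fluctuation(delta_f, f):
    """Fractional density fluctuation from fractional frequency
    fluctuation, using f_pe ~ n_e**0.5 so dn/n = 2 df/f."""
    return 2.0 * delta_f / f

# Striae bandwidths of 20-100 kHz on a 30 MHz backbone (section values)
for df_khz in (20.0, 100.0):
    print(f"df = {df_khz:5.1f} kHz -> dn/n = "
          f"{density_fluctuation(df_khz * 1e-3, 30.0):.1e}")
```

The resulting values of roughly 10^−3 to 7 × 10^−3 bracket the amplitudes quoted from Mugundhan et al. (2017) and Sharykin et al. (2018).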
Coronal Density Models
The advent of high resolution imaging spectroscopy has brought increased interest in using type III bursts to diagnose the density structure of the solar corona. Type III frequencies are associated with a background electron density assuming either fundamental or second harmonic emission. The change in centroid position of the type III sources as a function of frequency thus provides information about how the background coronal density changes with altitude, n_e(r). Altitudes inferred from type IIIs are typically larger than altitudes predicted by standard coronal density models. Previous investigations using spectra and images at a few frequencies explained the enhanced altitudes through type IIIs being generated in over-dense structures (e.g., Wild et al., 1959; Trottet et al., 1982; Kundu et al., 1983) as there were spatially correlated streamers imaged in white light. An alternative explanation is that the enhanced altitudes are not real. Radio source centroids are shifted by scattering of radio waves off density inhomogeneities, which causes their apparent position to be farther away from the Sun (e.g., Riddle, 1974). This theory is currently preferred as many type III bursts are not necessarily observed over dense streamers (e.g., Leblanc and de La Noe, 1977). The reality is likely that both scenarios are possible, with some proportion of type III events occurring on over-dense flux tubes. A nice historical overview on some issues arising from coronal density models derived from type IIIs is given in McCauley et al. (2018).
The initial results from LOFAR imaging spectroscopy were consistent with previous findings that type III sources corresponded to altitudes much higher than standard coronal density models would infer (Morosan et al., 2014), with altitudes extending out to 3 solar radii around 30 MHz. This was significantly farther out than predicted by a density model using white-light data from the same day. Further observational studies using the MWA and LOFAR found altitudes deduced from type III observations to be much higher than standard coronal density models predicted (Reid, 2016; Mann et al., 2018; McCauley et al., 2018; Gordovskyy et al., 2019). Figure 8 shows an example of such a density model being found from type III centroid locations. Whilst it might be possible that type IIIs preferentially travel along over-dense flux tubes, the electron beam velocities that can be deduced from type III bursts must also be consistent with theory. Some deduced velocities from type III bursts observed by LOFAR were found to be superluminal (Reid, 2016; Mann et al., 2018). Such velocity estimates are likely influenced by the spatial and temporal modifications due to radio wave propagation effects (see section 3.3) but other effects may also play a role, such as different regions of the electron beam emitting radio waves at different times, creating an apparent speed faster than the beam speed (Reid and Kontar, 2018b). Other deduced velocities by LOFAR and the MWA using imaging spectroscopy have been higher than the standard 0.1-0.3c velocities typically observed (e.g., McCauley et al., 2018).
Type U and J bursts allow diagnosis of coronal densities within magnetic flux tubes confined to the corona. Using LOFAR, Reid and Kontar (2017a) found the coronal density profile from two J-bursts and one U-burst that occurred in quick succession. Assuming second harmonic emission, the density profile roughly fit enhanced standard density profiles: the 3.0× Baumbach-Allen model (Allen, 1947), 3.5× Sittler model (Sittler and Guhathakurta, 1999) and 4.5× Saito model (Saito et al., 1977). However, the lowest frequencies around 40 MHz at the loop apex did not fit any density models as the magnitude of the density gradient became much less at these frequencies. Only by taking into account this change in density gradient, obtained through the imaging spectroscopy, could realistic burst exciter speeds around 0.2c be estimated.
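As an illustration of how such enhanced models map frequency to altitude, the sketch below inverts a Baumbach-Allen profile for assumed second harmonic emission; the 3× enhancement factor follows the study above, while the frequencies are arbitrary examples.

```python
import numpy as np
from scipy.optimize import brentq

def baumbach_allen(r, enhancement=1.0):
    """Baumbach-Allen coronal density [cm^-3], r in solar radii."""
    return enhancement * 1e8 * (1.55 * r**-6 + 2.99 * r**-16)

def source_radius(f_obs_mhz, harmonic=2, enhancement=3.0):
    """Radius [R_sun] where `harmonic` x f_pe equals the observed
    frequency, for an enhanced Baumbach-Allen model."""
    f_pe = f_obs_mhz / harmonic
    n_target = (f_pe / 8.98e-3) ** 2     # invert f_pe = 8.98 kHz sqrt(n)
    return brentq(lambda r: baumbach_allen(r, enhancement) - n_target,
                  1.01, 10.0)

for f in (80.0, 60.0, 40.0):
    print(f"{f:5.1f} MHz (harmonic) -> r = {source_radius(f):.2f} R_sun")
```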
Radio observations taken only from the Earth suffer from projection effects; we are limited to imaging on a 2-dimensional plane without any spatial information along the line-of-sight dimension. Projection effects can only amplify the larger derived electron density altitudes from type III bursts, getting larger the more the electron beam is propagating toward or away from the Earth. The uncertainty is amplified for electron beam source regions that are close to the center of the Sun. An example was shown by Gordovskyy et al. (2019) of how projection effects modify derived density models using LOFAR imaging spectroscopy for four different events, at electron beam propagation angles of 90, 60, and 30 degrees from the Sun-Earth line. Most of the sources were off-disc and so 30 degrees gave wildly inaccurate frequency vs. distance estimates. Such small projection angles are more likely to occur for on-disc sources close to Sun center. Whilst radio projection effects amplify the larger derived electron density altitudes, the larger the radio projection effect, the smaller the correction for radio wave propagation from source to observer, as more of the scattering will occur along the line of sight and so not affect the 2D radio imaging spectroscopy.
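A toy de-projection illustrates the size of the effect, assuming purely radial beam propagation at a given angle to the Sun-Earth line; the 1.5 R_sun apparent distance is an arbitrary example.

```python
import numpy as np

def true_radial_distance(r_projected, angle_deg):
    """De-project a plane-of-sky heliocentric distance, assuming the
    beam travels radially at `angle_deg` from the Sun-Earth line."""
    return r_projected / np.sin(np.radians(angle_deg))

for angle in (90, 60, 30):
    r = true_radial_distance(1.5, angle)   # 1.5 R_sun apparent distance
    print(f"theta = {angle:2d} deg -> true r = {r:.2f} R_sun")
```

At 30 degrees the plane-of-sky distance underestimates the true radial distance by a factor of two, consistent with the wildly inaccurate frequency vs. distance estimates noted above.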
Radio Wave Propagation
Understanding the propagation of radio waves through the solar system is paramount if we are to make best use of radio imaging spectroscopy at low frequencies. The displacement of sources by scattering was convincingly shown using LOFAR's high resolution imaging spectroscopy of a type III burst displaying both fundamental and harmonic emission (Kontar et al., 2017). A systematic radial displacement of 1.8 arcmin s^−1 was observed for the fundamental emission. We would expect the fundamental type III emission to be more displaced through scattering off density inhomogeneities because the radio frequency is closer to the local plasma frequency. The radial displacement was found for numerous fundamental emission fine structures (striae) between 32 and 38 MHz. Moreover, the areal extent of the fundamental sources increased faster than that of the harmonic emission at a single frequency, consistent with radio waves scattering off density inhomogeneities. An increase in the area of the burst source was also observed by Mohan et al. (2019) using type III imaging spectroscopy from the MWA. They found a radial expansion around 43 Mm s^−1, two orders of magnitude larger than the local Alfvén speed, and so rejected an increase in physical area due to magnetic waves. With LOFAR and the MWA showing how important radio wave propagation is to the spatial characteristics of fundamental emission, this opens up the ability to test radio wave propagation models (Robinson, 1983; Arzner and Magun, 1999; Thejappa and MacDowall, 2008) and to investigate the turbulent structure of the solar corona. Analyzing the same type III event as Kontar et al. (2017), Sharykin et al. (2018) found that the source size across the line-of-sight exceeds the size along the line-of-sight, implying that radio wave scattering must be anisotropic. Mohan et al. (2019) used the model by Arzner and Magun (1999) on the analyzed type III bursts to estimate a value of Δn/n = 4 × 10^−3.
It is attractive to correct for propagation effects so that more realistic coronal density models can be derived from type III burst source positions. A method proposed by Chrysaphi et al. (2018) treats radio wave propagation like scattering of a charged particle in plasma. They applied this technique to LOFAR imaging spectroscopy of a type II, showing that split-band emission can arise from the same spatial location. The mean scattering rate depends upon the local intensity of density turbulence and the frequency of radio emission. By integrating over regions where the optical depth is greater than one, a radial correction can be approximated. Chrysaphi et al. (2018) found corrections around 0.3 R⊙ for 40 MHz and 0.6 R⊙ for 32 MHz. The technique was applied to type III emission by Gordovskyy et al. (2019), showing that it can indeed explain larger than expected heliocentric distances of radio sources. A different correction method was proposed by McCauley et al. (2018), who calculated synthetic radio images using the FORWARD software (Gibson et al., 2016) to find the expected bremsstrahlung and gyroresonance emission from a model atmosphere. The model atmosphere was found using the MAS software (Lionello et al., 2009) to extrapolate coronal magnetic fields and then applying a heating model (Schrijver et al., 2004) to compute density and temperature. The difference between observed type III burst source locations, found using MWA imaging spectroscopy between 80 and 240 MHz, and the synthetic radio images was used to estimate the effect of radio propagation. Using three type III radio bursts they found corrections around 0.3 R⊙ for 80 MHz and 0.1 R⊙ for 240 MHz, slightly higher than those predicted by Chrysaphi et al. (2018). Applying the corrections to the estimated coronal density models from type III bursts, McCauley et al. (2018) found a better agreement with typical density models, although two type III bursts had unusually steep density profiles. The third type III burst agreed well with a type III density model predicted by Cairns et al. (2009) from type III burst spectra.
The above methods attempt to approximate the effect of radio wave scattering off density fluctuations, but to properly understand this effect, ray-tracing simulations are required. There have been a number of ray-tracing studies in the past that have tracked type III burst propagation (Steinberg et al., 1971; Thejappa and MacDowall, 2008; Krupar et al., 2018), which assumed isotropic scattering by small-scale density fluctuations. Krupar et al. (2018), using the STEREO spacecraft, and Krupar et al. (2020), using Parker Solar Probe, found that the exponential decay times observed at low frequencies by spacecraft can be explained through the scattering of radio waves by density inhomogeneities. Bian et al. (2019) modeled the scattering process using a Fokker-Planck equation and were able to reproduce the time profile but not the inverse frequency dependence of the decay time, which they attributed to the exclusion of a large-scale refractive term. Kontar et al. (2019) recently extended the work of Bian et al. (2019) but treated the scattering in the anisotropic domain, with the dominant effect being perpendicular to the heliospheric radial direction (Kontar et al., 2017). As well as explaining temporal profiles, Kontar et al. (2019) used ray-tracing simulations to explain the increase in source sizes, finding a scattering increase in the FWHM around 1.1 R⊙ at 35 MHz, although this value will depend upon the size of the density fluctuations from event to event. Changing the anisotropy parameter strongly influences source sizes off the solar limb and less so at disc center.
ELECTRON BEAM PROPAGATION
Electron propagation through plasma is the cause of type III radio bursts and there has been an extensive amount of theoretical work on the subject. Electrons have been simulated propagating through the solar coronal plasma and out into the interplanetary medium. Their propagation is not simply ballistic but is modified by the energy exchange with Langmuir waves as the electron beam becomes unstable during transport. The radio emission is then believed to be mainly produced through wave-wave processes: with ion-sound waves to produce fundamental emission, and with almost oppositely directed Langmuir waves for second harmonic emission.
In this section, the recent theoretical progress is discussed that has been undertaken to explain how type III bursts are generated by propagating electron beams. This theoretical understanding is critical for using type III bursts as remote sensors of electron acceleration and propagation through the solar corona and to maximize the research output that we can obtain from Earth-based imaging spectroscopy. It is beyond the scope of this review to cover all simulation works and so, as indicated in section 1, readers are encouraged to look through the introductions contained within cited works to obtain a more historical overview of the subject. In almost all of the work the electron beam acceleration is taken as an initial condition. This is largely due to the complexity and unanswered questions about which mechanism is responsible for electron acceleration in the corona (e.g., Zharkova et al., 2011).
Assuming electron beam acceleration with a power-law energy spectrum, the beam undergoes an initial period of propagation without becoming unstable to Langmuir waves. This "instability distance" is related to the distance required for velocity dispersion to create the bump-in-tail velocity distribution required for Langmuir wave generation. As discussed in section 2, quasilinear simulations (Reid et al., 2011; Reid and Kontar, 2013) showed that the distance is dependent upon the electron beam velocity spectral index, the size of the acceleration site, and the temporal injection profile. The instability distance is the reason that the starting frequency of type III bursts is not observed at the particle acceleration region. Perhaps the most striking observations that show this are the bi-directional type III bursts, where there is a frequency gap, and hence a spatial gap, between the forward and reverse type III bursts. The simulations done by Li et al. (2011) show this spatial gap well and highlight the reduced intensity of radio emission generated by electron beams propagating through plasma with a positive density gradient (Figure 9). Whilst Li et al. (2011) assumed a 20 MK Gaussian distribution of accelerated electrons, their distribution had a spatial width of 1 Mm which likely influenced the frequency gap between downward and upward propagating electron beams.
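A back-of-the-envelope time-of-flight estimate illustrates the instability distance: the distance at which the fastest electrons, injected at the end of an injection of finite duration, overtake the slowest electrons injected at the start. The velocities and injection time below are illustrative assumptions.

```python
def dispersion_distance(v_fast, v_slow, injection_time_s):
    """Toy time-of-flight estimate: distance [m] at which electrons of
    speed v_fast, injected at the end of an injection lasting
    injection_time_s, overtake v_slow electrons injected at the start."""
    return v_slow * v_fast * injection_time_s / (v_fast - v_slow)

C = 2.998e8
d = dispersion_distance(0.3 * C, 0.1 * C, injection_time_s=1.0)
print(f"instability distance ~ {d / 1e6:.0f} Mm")
```

Longer injection times and narrower velocity ranges push the onset of the bump-in-tail, and hence the type III starting frequency, farther from the acceleration region.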
Despite electron beams being made up of electrons with a distribution of velocities, type III bursts are typically tracked using one velocity derived by the frequency drift rate. The main theoretical reason behind this pseudo-constant velocity is the beam-plasma structure that is formed by the electron beam wave-particle interactions with Langmuir waves, proposed theoretically (Zheleznyakov and Zaitsev, 1970;Zaitsev et al., 1972;Mel'Nik, 1995) and successfully simulated (Takakura and Shibahashi, 1976;Magelssen and Smith, 1977;Kontar' et al., 1998;Mel'Nik et al., 1999). Electrons travel as an ensemble with roughly the mean velocity of all electrons taking part in the energy exchange between waves and particles. Langmuir waves are generated at the front of the beam and re-absorbed at the back of the beam, allowing propagation over long distances of 1 AU, and avoiding a catastrophic beam energy loss postulated by Sturrock (1964).
The initial properties of the accelerated electron beam play a significant role in how the resultant type III burst evolves through the solar corona. Understanding how these properties modify the radio dynamic spectra is key to using type III imaging spectroscopy as a probe of electron beam transport in the solar corona. Assuming that the injected energy spectrum is a power-law, the spectral index of this distribution influences which electrons contribute to the beam-plasma structure and therefore how fast the resultant electron beam propagates through space (e.g., Li and Cairns, 2013; Reid and Kontar, 2018b). When simulated, Li and Cairns (2013) found that smaller spectral indices give rise to faster electron beams, cause type III bursts to have higher magnitude drift rates, and result in higher peak values of type III burst fundamental emission. However, the electron beam speed is not just governed by the spectral index but by the initial beam density too, as both properties govern the energy density contained within the electron beam. It is this energy density that more completely governs which electrons contribute to the beam-plasma structure (Reid and Kontar, 2018b). If the energy density is too small at certain electron energies, the Langmuir wave growth rate will not be high enough and these energies will not contribute to the beam-plasma structure that dictates beam speed. As electron beams expand in the solar wind, their energy density decreases and they stop producing radio emission (Reid and Kontar, 2015). Additionally, Reid and Kontar (2018b) showed that the peak brightness temperature of type III fundamental emission is proportional to the energy density contained within the electron beam. This result is significant as, if proven to be true via in situ measurements from PSP or SolO, type III bursts can be used to estimate the energy density of beams traveling through the solar corona. Moreover, with electron beam size estimates using type III imaging spectroscopy that take into account wave propagation effects, the total energy contained within escaping electron beams during solar eruptive events can be estimated.
As discussed from LOFAR observations in section 3, the drift rates at the front of the beam are faster than the drift rates at the back of the beam, relating to faster and slower velocities, respectively (Reid and Kontar, 2018a; Zhang et al., 2019). This dependence was found in numerical simulations by Reid and Kontar (2018b) using the drift rates from synthetic fundamental emission dynamic spectra. The front of the beam was always faster than the back and could travel over twice as fast. The maximum and minimum electron energies in the beam-plasma structure were significantly higher at the front than at the back of the beam, and so average velocities greater than 0.5c were possible. This is in stark contrast to the back of the beam, where the minimum energy was dictated by the temperature of the background plasma and so velocities could not go higher than 0.5c. Simulations from Li and Cairns (2014) showed that higher beam velocities occur when the background plasma is simulated by a kappa distribution, as the minimum energy that contributes toward the beam-plasma structure is higher. It remains to be proven whether the solar corona can be described by a kappa distribution like the solar wind. The difference between the electron energies at the front and back of the beam dictates how fast the electron beam elongates in space (expansion velocity). This is related to the type III duration at one frequency. However, it should be emphasized again that whilst velocity dispersion likely makes the most significant contribution toward the type III decay time, radio wave propagation effects, density turbulence and the radio emission process will also influence type III durations and derived speeds. As an example, Ratcliffe et al. (2014b) found that when using the peak flux to estimate electron beam speeds from a dynamic spectrum of second harmonic emission, the derived exciter speed was more closely related to the region of the beam that produced the peak in back-scattered Langmuir waves, which was slightly farther back in space from where the peak Langmuir waves were generated.

FIGURE 9 | Predicted type III second harmonic dynamic spectrum of a bi-directional electron beam injection using simulations with different initial conditions. From Li et al. (2011).
How density inhomogeneities in the background plasma influence subsequent Langmuir wave growth, electromagnetic emission and the development of the electron distribution function once electron beams become unstable has been the focus of many recent theoretical works. Studies are typically carried out either at one spatial location (e.g., Ratcliffe and Kontar, 2014; Krafft et al., 2015), in a small spatial box (e.g., Thurgood and Tsiklauri, 2015; Volokitin and Krafft, 2018; Henri et al., 2019; Krafft and Volokitin, 2020) or over distances comparable to the solar corona or longer (e.g., Li et al., 2012; Reid and Kontar, 2013, 2017b; Loi et al., 2014; Ratcliffe et al., 2014b). Each of these approaches has its own advantages and disadvantages and is used based upon the focus of the relevant study. Studies at one spatial location are computationally less expensive and are typically used to investigate how wave k-vectors develop over time, taking a static spatial gradient for the background density. Studies in a small spatial box focus both on the wave-particle and wave-wave interactions required to generate radio emission.
These studies aim for a more complete treatment of the problem, with the small spatial box and hence restrictive length scales necessary due to the computational overhead. Studies over large distances typically use the quasilinear approximation to reduce the computational overhead and try to capture the large-scale evolution of the beam-plasma system and the fine structure that occurs within the resulting Langmuir waves and radio waves that we detect as type III bursts.
It has been known for decades that density inhomogeneities in the background plasma suppress the generation of beam-induced Langmuir waves by refracting them in phase space, out of resonance with the electron beam. Langmuir waves refracted to low phase velocities (high k-vectors) are eventually re-absorbed by the background plasma. Langmuir waves refracted to high phase velocities can be re-absorbed by the electron beam, accelerating a tail of energetic electrons. The level of Langmuir wave suppression is dependent upon the characteristic length scale of density inhomogeneities, L ∝ (1/n_e ∂n_e/∂x)^−1 (e.g., Kontar, 2001; Reid and Kontar, 2010, 2017b; Krafft and Volokitin, 2020), such that if the magnitude of L reaches a certain value, Langmuir waves are suppressed. The level of density inhomogeneities also influences the conversion of Langmuir wave energy into electromagnetic energy (e.g., Li et al., 2012; Ratcliffe and Kontar, 2014; Krasnoselskikh et al., 2019).
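The refraction can be sketched with the WKB result that a Langmuir wave's wavenumber drifts at dk/dt ≈ −ω_pe/(2L) in a density gradient of scale L, moving the phase velocity out of resonance with the beam. The plasma frequency, beam speed, and gradient scale below are illustrative assumptions.

```python
import numpy as np

OMEGA_PE = 2 * np.pi * 30e6     # local plasma frequency ~30 MHz [rad/s]
C = 2.998e8

def refracted_phase_velocity(v_beam_c, L_m, t_s):
    """Phase velocity [c] of a beam-resonant Langmuir wave after time t
    in a density gradient of scale L (WKB: dk/dt ~ -omega_pe / 2L)."""
    k0 = OMEGA_PE / (v_beam_c * C)          # initial resonance k = w/v
    k = k0 - OMEGA_PE * t_s / (2.0 * L_m)
    return OMEGA_PE / k / C

for L in (1e8, -1e8):                        # +/- 100 Mm gradient scale
    print(f"L = {L:+.0e} m -> v_ph after 1 s: "
          f"{refracted_phase_velocity(0.15, L, 1.0):.2f} c")
```

A positive gradient pushes the waves to higher phase velocities, where they can re-accelerate beam electrons; a negative gradient pushes them toward lower phase velocities and eventual re-absorption by the background plasma.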
When electric fields associated with Langmuir waves are measured in the solar wind at the same time as electron beams and type III bursts, they are distributed in spatial clumps (e.g., Vidojevic et al., 2012). This is attributed to the aforementioned Langmuir wave suppression from density inhomogeneities. How the distribution of the beam-driven electric field is modified by density inhomogeneities has been simulated both locally (Voshchepynets et al., 2017; Krafft and Volokitin, 2020) and with an electron beam propagating through the solar corona and the solar wind (Reid and Kontar, 2017b). Without any density fluctuations, the beam-driven electric field distribution is peaked at the highest electric fields. As the intensity of the density fluctuations increases, the logarithm of the electric field becomes more uniformly distributed and the mean field is decreased. When the intensity of the density fluctuations is high, the largest electric field amplitude part of the distribution is better approximated by a power-law or exponential decay. The effect has been described probabilistically and through resonance broadening (Bian et al., 2014). In the resonance broadening description, for homogeneous plasma, wave-particle interactions have a sharp resonance function δ(ω − kv). For inhomogeneous plasma, wave-particle interactions occur over a range of velocities Δv due to wave refraction, and so the growth rate of the beam-plasma instability changes to become a function of the electron beam velocity gradient averaged over Δv. If this width is small then wave growth can still occur, but as the width increases the average slope reduces and can even become negative (see Figure 10).
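The sign change of the averaged slope can be illustrated with a toy bump-on-tail distribution; the functional form and resonance widths below are invented for illustration, in the spirit of Figure 10 rather than reproducing it.

```python
import numpy as np

def f_beam(v):
    """Toy 1D electron distribution: decaying background tail plus a
    beam bump (arbitrary units, v in units of the thermal speed)."""
    return np.exp(-v) + 0.05 * np.exp(-((v - 8.0) ** 2) / 1.0)

def broadened_slope(v1, v2):
    """Average slope of f over the resonance width [v1, v2], standing in
    for the velocity-averaged gradient in resonance broadening."""
    return (f_beam(v2) - f_beam(v1)) / (v2 - v1)

print("narrow width (6.5-7.5):", f"{broadened_slope(6.5, 7.5):+.2e}")
print("wide   width (3.0-7.5):", f"{broadened_slope(3.0, 7.5):+.2e}")
```

The narrow width samples only the rising edge of the bump (positive slope, growth possible), while the wide width averages in the falling background (negative slope, instability suppressed).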
The most visible consequence of density inhomogeneities is the type IIIb radio burst fine structure, which was discussed in section 3.1. Quasilinear simulations are able to capture the fluctuating Langmuir waves (e.g., Reid and Kontar, 2015) and produce dynamic spectra that are similar to type IIIb bursts (Li et al., 2012; Loi et al., 2014; Ratcliffe et al., 2014b). Indeed, without simulating density inhomogeneities the electron flux at 1 AU does not compare well with in situ observations (Reid and Kontar, 2013). However, there are still notable discrepancies when comparing synthetic dynamic spectra to observations, particularly using recent high resolution imaging spectroscopy. Whilst the electric fields in the solar corona cannot currently be measured, for events that are also seen at lower frequencies the in situ measurements of the beam-induced electric field from PSP and SolO can be analyzed to see how they change as a function of distance from the Sun. Combined with numerical simulations, these measurements could be used to back-project the beam-induced electric field and infer what was happening in the solar corona, which could then be compared with imaging spectroscopy of type IIIb striae bursts.
Many of the studies above use a 1D approximation for the propagation of electrons along magnetic field lines. Whilst this simplifies the models and is grounded by observations of electrons with small pitch angles at 1 AU, it is still a major simplification. Recent efforts have been undertaken (Ziebell et al., 2014, 2015; Tigik et al., 2016) to model the plasma emission process in two velocity dimensions for a single point in space, taking into account all the steps involved in the plasma emission process. In the simulations by Ziebell et al. (2015), fundamental emission was generated by Langmuir waves both through interaction with ion-sound waves and by scattering. These processes dominated initially, whilst over time the harmonic emission overtook the fundamental. Taking into account collisions, Tigik et al. (2016) found that a wider plateau was formed in the distribution function, with an increased tendency toward isotropization.
The bulk of the results documented above use the weak turbulence approximation to simulate electron beam dynamics. However, there has been significant effort to simulate the beam-plasma interaction and the subsequent generation of radio waves using the Zakharov equations (e.g., Zaslavsky et al., 2010; Krafft et al., 2015; Volokitin and Krafft, 2018; Krafft and Volokitin, 2020). This approach is not self-consistent with the electron beam exciter but has produced comparable type III fluxes using solar wind parameters. The plasma emission process has also been reproduced using particle-in-cell (PIC) codes (e.g., Thurgood and Tsiklauri, 2015; Henri et al., 2019). Both studies were able to produce electrostatic and electromagnetic waves through plasma instabilities. Henri et al. (2019) found that weaker electron beams produced radio waves that were more forward directed at the source, whilst larger beam densities widened the Langmuir wave spectrum, leading to a larger available angular spread of radio emission. The validity of the weak turbulence approximation has been analyzed using PIC codes for both 1D (Ratcliffe et al., 2014a) and 2D (Lee et al., 2019) weak turbulence codes. Both studies found a plateau forming in the beam region within comparable timescales, although the weak turbulence code developed an extended tail along the forward direction not seen in the PIC code (Lee et al., 2019). The Langmuir wave spectrum was similar unless the ion temperature was increased to the electron temperature or hotter (Ratcliffe et al., 2014a). In terms of radio emission, Lee et al. (2019) found good agreement, especially for larger beam velocities, but it required a high number of particles per cell in the PIC codes.
Future Observing
The future is bright for type III imaging spectroscopy. Not only are we now taking advantage of the capability of instruments like the VLA, MWA, and LOFAR but there are numerous new observational platforms that have either recently come online or will be operational very soon.
Starting at the ground, the first notable platform is the Mingantu Ultrawide Spectral Radioheliograph (MUSER; Yan et al., 2009, 2016), based in Inner Mongolia, China. Most relevant for type III bursts is MUSER I, which will operate between 0.4 and 2 GHz. MUSER is a solar dedicated radio telescope, unlike astrophysical telescopes such as LOFAR, which means that it has a much higher chance of catching transient type III bursts when they occur, and MUSER I has already observed a radio burst (Yan et al., 2016) around 1 GHz. MUSER I is poised to provide the community with a plethora of type III imaging spectroscopy data that will significantly help to understand the physics behind these radio bursts.
Another platform that will come online soon is the Square Kilometer Array (SKA; see Nindos et al., 2019, for a solar physics overview). Both SKA1-LOW, observing between 50 and 350 MHz, and SKA1-MID, observing from 0.35 to 15.3 GHz, will be relevant for observing type III bursts from different regions within the solar corona. Commissioning of SKA1 is expected to start in 2024. With SKA1-LOW being based in Western Australia, it should hopefully be available to pair with MUSER I for complementary observations of type III imaging spectroscopy from the low to high corona. Similarly, SKA1-MID is based in the Karoo desert of South Africa and should be able to take complementary type III observations with LOFAR. Both the sensitivity and angular resolution of SKA will be much better than what has come before and promise to provide major advances on key type III science questions.

FIGURE 10 | An example of weakening and possible suppression of the beam instability by resonance broadening. The shaded gray corresponds to a positive slope from resonance width Δv = v2 − v1. If the resonance width is increased to Δv = v4 − v3, the slope becomes negative and the Langmuir wave instability is suppressed. From Bian et al. (2014).
Going into space, the launch of Parker Solar Probe and Solar Orbiter has far-reaching implications for type III theory. Both spacecraft are spectroscopically observing type III bursts from 20 MHz and below from changing vantage points around the solar system. They are also taking in situ particle measurements at different distances from the Sun that will allow analysis of high energy electrons, solar wind particles and plasma waves. ESA's BepiColombo should provide a third point in the inner solar wind for radio wave and in situ plasma measurements when the Mio spacecraft starts science operations.
On the horizon are two NASA space missions that are attempting to break the 10 MHz frequency barrier for type III imaging spectroscopy. CURIE (Sundkvist et al., 2016) is a two-cubesat mission that will formation fly in low-Earth orbit with a few km separation. It will take imaging spectroscopy observations of the Sun from 40 to 0.1 MHz. SunRISE (Alibay et al., 2017) is a six-cubesat mission that will fly slightly above the Geostationary Equatorial Orbit in a passive configuration that will allow the formation of an interferometer whilst minimizing operational complexity. SunRISE will take imaging spectroscopy observations of the Sun from 25 to 0.1 MHz. Both missions intend to observe type III bursts. A further mission concept study, NOIRE (Cecconi et al., 2018), is being developed in Europe to launch a swarm of nanosatellites for imaging low frequency radio emission, targeted toward the astronomical dark ages and planetary radio emissions. Such a venture would certainly be of use for observing type III bursts.
Outstanding Science Questions
Our new age of type III imaging spectroscopy has already brought us many new observational discoveries. The high frequency VLA observations are showing us the signatures of energetic particles very close to their acceleration site. Low frequency MWA and LOFAR observations are showing us how the particles are escaping the Sun and what the structure of the upper corona is like. However, there are still many challenging science questions that require detailed answers, to which type III imaging spectroscopy can contribute.
Where are the locations of electron acceleration sites and what physical processes accelerate electrons? Electron acceleration properties are generally assumed in type III studies and not self-consistently generated by an acceleration mechanism. There are a few works done in the context of 3D magnetic reconnection, based on particle-in-cell codes, that focus on the physical mechanisms behind electron acceleration at reconnection sites (e.g., Markidis et al., 2013). Type IIIs regularly appear in groups, which is not traditionally simulated, nor is the duration of these groups fully understood. As indicated in section 2, we have started to address these issues with the help of high frequency type IIIs and combined analysis with other wavelengths, but there still remains significant uncertainty about the spatial characteristics of acceleration sites. For example, are all electron beams that make type III groups accelerated in a compact volume, or spread throughout a larger volume around 1,000 Mm³? Do they change location in time? Is their size connected with type III burst properties? Type III analysis has provided observational constraints on accelerated electron beam parameters such as characteristic times and electron energies. Electron acceleration can certainly occur at a range of heights in the solar corona, with high frequency type III bursts starting low in the corona and type III noise storm sources probably being accelerated much higher in the corona. Does the same acceleration mechanism produce electron beams that form type III bursts at 10 and 200 Mm? Type III imaging spectroscopy will help by catching the location of the type III starting frequency and the subsequent evolution of position in time. It will also allow the localization of acceleration through the imaging of bi-directional type III bursts.

What physical processes are responsible for the transport of accelerated electrons? Whilst we have a general understanding of how wave-particle interactions affect the propagation of electrons through the solar atmosphere (e.g., Reid and Kontar, 2018b), there are still unknowns about how the 3D phase space properties of these particle beams evolve with time as they propagate away from the acceleration region. Recent numerical studies do not model large spatial scales in three dimensions and we risk missing many details (Harding et al., 2020), in a similar way that 3D magnetic reconnection is different from 2D. Type III imaging spectroscopy has been helping to answer this question about electron transport by analyzing the spatial evolution of different frequencies with time (e.g., Zhang et al., 2019). Such studies can confirm and constrain numerical simulations, with the imaging spectroscopy providing more detailed diagnostics about the spatial location of electron beams with time as they travel out through the solar corona. These studies are only just beginning and there is still much to be analyzed, both statistically and using single event studies. The combination of electron beam diagnostics from type III imaging spectroscopy studies with in situ measurements from PSP and SolO should provide significant clarity on how electron beams evolve through the solar system and help disentangle transport and acceleration effects.
How does the type III emission mechanism influence observed properties? There are still many open questions about how Langmuir waves undergo wave-wave coupling to produce radio waves, and when fundamental emission dominates over harmonic emission. Analysis of type III fine structures should help answer this question and is an area of type III study that is significantly enhanced by new imaging spectroscopy.
There have already been notable advances in understanding the evolution of type III striae (e.g., Sharykin et al., 2018) that are providing us with new insight on small-scale dynamics. Imaging spectroscopy analyses of the location and spatial extent of fundamental and harmonic emission (e.g., Kontar et al., 2017) have been providing observational constraints that can help develop the theoretical models (e.g., Li and Cairns, 2013; Ratcliffe et al., 2014b; Krasnoselskikh et al., 2019) describing these non-linear processes. Future imaging spectroscopy should be used to further analyse fundamental and harmonic image properties as a function of frequency, as their differences diagnose how the distinct wave-wave processes modify the radio burst properties, provided light transport effects are accounted for.
What properties are intrinsic to the type III source and what are caused by light transport effects? There has been a reinvigorated effort recently to understand and model how radio waves travel from the solar corona to Earth. It is apparent that the scattering of waves off density fluctuations significantly affects what we observe at Earth, in particular for low frequency fundamental emission. If we want to fully unlock the benefits of type III imaging spectroscopy we must be able to untangle these effects, and significant efforts are already under way (e.g., Kontar et al., 2019). The variation in source parameters as a function of frequency can be used to constrain and improve the ray-tracing models that are being used to describe light transport. However, as with many complex processes, knowledge of light scattering can, and already does, provide new diagnostics of the turbulent nature of the solar corona.
What is the structure of the flaring solar corona? Type III studies have been approximating the density structure of the solar corona for some time, directly producing a number of density models (e.g., Cairns et al., 2009; Saint-Hilaire et al., 2013). As discussed in section 3, the validity of these and other density models is being tested by current observations, using type IIIs for magnetic loops that extend into the solar wind (e.g., McCauley et al., 2018) and using U-burst observations for magnetic loops confined to the corona (e.g., Reid and Kontar, 2017a). Imaging radio sources at coronal heights around 1 solar radius and above will help us to understand the structure of the magnetic field as it evolves from the corona to the solar wind. Despite this, our estimates of source heights are still uncertain and we are yet to have a good handle on source projection effects, something that is likely to elude us without some future mission that can perform radio interferometric imaging from a spacecraft not near the Earth.
To help answer the above science questions, we must overcome a number of logistical challenges in the coming years. The advent of high volume data sets will bring with it significant challenges in storing and analyzing such large amounts of data. The astrophysical telescopes that are providing some of the new high resolution type III imaging spectroscopy are only observing the Sun sporadically. Whilst there have been numerous successful observing campaigns already on all these telescopes, the limited observing time will miss most type III radio bursts, which highlights the benefits of solar monitoring for capturing type III burst activity from the Sun. Additionally, the solar coverage in radio frequencies is not uniform around the globe and we risk missing key information when the Sun provides us with interesting type III events.
What is certain is that our new radio interferometer tools are allowing type III imaging spectroscopy with much higher spatial, spectral and temporal resolution than ever before. Not only are we going to further our understanding of the science questions described above, this new leap in solar radio observing is likely to bring about new discoveries that we have not even thought of yet, furthering our quest to enable type III bursts as remote sensors of astrophysical plasma.
AUTHOR CONTRIBUTIONS
HR contributed all of the text to the document.
ACKNOWLEDGMENTS
I want to acknowledge all the authors that produced the high-quality research which was reviewed in this article. I also want to acknowledge the support that goes into running the many ground-based radio observatories that provided the data for these studies. Finally, I want to acknowledge the helpful and interesting discussions about type III bursts with my colleagues over the years.
"Physics"
] |
Deformable Polymer Dielectric Films in Phase Light Modulators
An experimental study of the deformation amplitude of dielectric polymer gel films was conducted directly in an optical information display device. The deformation amplitude of the gel film was measured while it was varied by a controlled constant voltage on the electrodes. On the basis of a polymer gel deformational layer, display devices for optical information and for registration of the electrostatic potential of charged thin dielectric films in the presence of defects are considered.
Introduction
Devices based on polymer gel atmospheres (GA) are being developed to display information in optical form [1,2]. Such devices are developed along two directions of information recording: by electron beam and by electrode control. With electrode control, the service life of the polymeric GA is increased, and the size and power consumption of the device are reduced. It is also important to know the GA deformation values required to obtain the maximum light output. Figure 1a shows the scheme of a display device of optical information based on a polymer deformational gel layer.
Display device for optical information recording
The device works in the following way. Condenser 1 images the filament of the light source 2 onto the opening of input diaphragm 3; after that, a parallel light stream is formed through the lens 4 and directed onto the GA 5 through the transparent conductive layer 6 and the total internal reflection prism 7. Next, the light reflected from the "GA - air gap" interface passes a second time through the GA, the electrically conductive layer 6 and the total internal reflection prism 7, and is then focused on the opaque output diaphragm 12 by the lens 8. Control voltages from the source of control signals 10 are fed to the strip electrodes 9 of the raster on the second glass plate 11. The GA surface is deformed in accordance with the voltage on the electrodes. The GA relief changes the direction of light propagation; deflected light beams pass to the screen, bypassing the opaque output diaphragm 12. Lens 8 focuses this light on the screen 13 in the form of light strips. The brightness of the light strips is proportional to the amplitude of the control voltages. The light output of the device was measured as the dependence of the brightness of a light line on the screen on the voltage applied to the control electrodes.

Two semitransparent mirrors 14, 15 and a reflecting mirror 16 are included in the scheme for measuring the amplitude of the deformations of the GA surface. These mirrors create the two arms of a Mach-Zehnder interferometer and form an interference picture on the screen 13 during setup. Since the deformable gel layer is in one arm of the interferometer under the action of the control voltage, the interference picture shows the increasing phase incursion of light due to the change of the deformation amplitude. A helium-neon laser was used as the source of coherent radiation.

Figure 1b shows the dependence of the light output (ρ) on the constant electrical voltage U on the electrodes. It follows from the graph that the sensitivity of the defocusing method is considerably higher than that of the dark field method, by about 100 V. The control voltage amplitude U required for maximum light output increases sharply with increasing air gap between the GA and strip electrodes 9 (Figure 1a). The light output is equal to 0 at a gap of ~100 µm for any voltage between the control and raster electrodes. The light output can also be reduced to 0 by simultaneously feeding a control voltage to the electrodes of raster 9 when the voltage has a value close to the breakdown voltage of the working air gap between the electrically conductive substrate 6 of GA film 5 and the electrodes 9 (with a gap of 40 µm). This is because the density of the ponderomotive forces of the harmonic components decreases exponentially with increasing air gap, and the light output depends on the density of the ponderomotive forces of the harmonic components. At the same time, the density of the ponderomotive forces of the constant component varies only slightly with increasing gap. The light output decreases sharply over a time of 10-20 seconds upon feeding the control voltage to all the control electrodes and the electrodes of raster 9. This change of light output occurs because of the leakage of charge in the electrode gaps due to the finite electrical conductivity of the substrate and the corresponding potential leveling of the control plane of the film.
If the raster electrodes are connected to zero potential, the light output decreases slowly and exponentially after the source of control voltage is disconnected from the control electrodes. This phenomenon can be used as an electrostatic memory.
The light output and GA strain amplitude can be measured under the same conditions. The light output curve can then be represented as the dependence of the strain value on the control voltage; it shows that only a small amplitude increment, on the order of 0.18 µm, is required to obtain maximum light output. This is consistent with theoretical estimates of the phase increment taking into account the double passage of light through the surface relief of the GA [2]. Relative elongations of the polymeric GA composition with a thickness of 20 µm do not exceed 1%. Therefore, the equations of continuum mechanics in the linear approximation can be used in calculating the deformation amplitude. The performed studies show a good optical quality of spatio-temporal light modulators based on polymer GA.
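A rough check of the quoted 0.18 µm figure: for a double passage at near-normal incidence, the phase increment is approximately 4π n Δd / λ. The gel refractive index below is a hypothetical value, since it is not given in the text.

```python
import numpy as np

WAVELENGTH_UM = 0.6328      # He-Ne laser line [um]
N_GEL = 1.4                 # assumed gel refractive index (hypothetical)

def phase_increment(delta_d_um):
    """Phase increment [rad] for a surface relief of depth delta_d_um,
    counted twice for the double passage of light through the gel,
    assuming near-normal incidence."""
    return 4.0 * np.pi * N_GEL * delta_d_um / WAVELENGTH_UM

dphi = phase_increment(0.18)
print(f"delta_phi = {dphi:.2f} rad = {dphi / np.pi:.2f} pi")
```

Under these assumptions the 0.18 µm relief gives a phase increment of order π, the scale needed for strong modulation of the output light.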
Device for defect control in thin films
The need to control the heterogeneity of surface properties and defects of technological thin dielectric layers arises during the development and study of films in microelectronics [3,4], electrets [5][6][7] and active thin film materials [8][9][10][11][12]. One effective testing method is based on the principle of spreading an excess charge over the controlled surface; in this case, charge drains arise at the locations of defects (or inhomogeneities). This leads to considerable heterogeneity of the potential relief of the charged dielectric film surface. The heterogeneity of the surface potential relief indicates the location and size of a defect.
The vibrating electrode method with compensation of the external electric field is used for registering the potential relief of the surface of charged dielectric materials, for example electrets [13]. The method has high measurement accuracy, with an error not exceeding ±1%. However, the vibrating electrode (acting as a probe) must be implemented with a small area, which produces a significant edge effect of the electrode in the measuring cell; this leads to a significantly increased measurement error at low resolution.
The above-described electro-optical principle, with the gel dielectric layer of GA acting as a probe in the measuring cell, can be used to solve the problem of registering the potential relief with high resolution. The deformation of the GA layer is produced by the external electric field E0 of the charged dielectric in a flat capacitor system and by supplying an external voltage to the electrodes of the capacitor. This allows the picture of the charge distribution on the surface to be studied in the form of light inhomogeneities with subsequent signal processing. Figure 2 shows the schematic diagram of the measurement system based on an electro-optical sensor (EOS) with GA, where 1 is the charged dielectric layer of the sample coated on the hypotenuse side of the prism, 2 is the sensitive GA, 3 is the transparent electrically conductive electrode, 4 is the photo-receiving optical device recording by the dark field method, 5 is the prism, 6 is the source of the plane-parallel light beam, 7 is the supply source, 8 is the electrode, 9 is the regulated constant-voltage source, 10 is a defect in the film, and 11 is the display device of information. The plane-parallel light beam from source 6 passes through prism 5, the optically transparent electrode 3 and GA 2; it is reflected from the GA boundaries, returns to the prism 5 and then falls onto photodetector 4 (PD), for example a CCD camera.
In the EOS with the gel layer of GA, the picture of the electric potential distribution of the charged surface of the studied film 1 is transformed (the electric signal σ → U0 → E0 is converted into an output optical signal in the form of a light image). PD 4 registers and visualizes the picture of the light field on the screen of the display device of optical information 11.
The surface charge flows to the substrate 8 significantly faster in the defect region 10 of film 1, forming a region of reduced surface charge density σ. The gel layer 2 of the EOS has the maximum deformation in the region of non-uniform charge on the dielectric surface 1, i.e., in the film defect region. Therefore, PD 4 registers a brighter luminescence in this region of the light display picture.
Thus, charge regions on the surface with greater or lesser charge density appear in the light display picture as regions of different emission intensity. The brightness in a predetermined region of the picture can be brought to a certain light intensity by varying the constant electric voltage from source 9 on the electrode 8. The voltage value U_K of the compensation field E0 is recorded at full compensation of the electric field in the gap (E0 = 0); the charge density σ in the controlled region is then calculated from the corresponding compensation equation. The light picture of the field shows the complete potential relief of the controlled dielectric surface in the form of a two-dimensional light field, with light spots corresponding to defects 10 of the investigated dielectric film layer 1. The use of standard optics in the control system (with a gap between the sample and GA) makes it possible to detect defects in the form of dots and spots smaller than 50 microns, with a resolution of the dielectric surface potential of no worse than ±5 V.
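Since the paper's compensation equation is not reproduced in this text, the sketch below stands in with the familiar parallel-plate relation σ ≈ ε₀ ε_r U_K / d; both this relation and the numerical values are illustrative assumptions, not the paper's equation.

```python
EPS0 = 8.854e-12            # vacuum permittivity [F/m]

def surface_charge_density(u_k, gap_m, eps_r=3.0):
    """Illustrative parallel-plate estimate of surface charge density
    [C/m^2] from the compensation voltage u_k [V]; eps_r and the use of
    this simple relation are assumptions, not the paper's equation."""
    return EPS0 * eps_r * u_k / gap_m

sigma = surface_charge_density(u_k=100.0, gap_m=40e-6)
print(f"sigma ~ {sigma:.2e} C/m^2")
```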
Conclusion
Experimental studies of the deformation amplitudes of gel films directly in an optical pickup device for the potential relief of charged dielectrics show that the sensitivity and registration accuracy of optical information can be controlled by feeding an external constant electric voltage.
A device for the registration and control of defects in thin and nano-sized dielectric films in microelectronics is proposed on the basis of the studied phenomenon of gel layer deformation under the action of the ponderomotive forces of the electric field.
The discussed method also makes it possible to study the potential relief picture of charged dielectrics.
"Physics"
] |
Planning of Electric Public Transport System under Battery Swap Mode
Applying battery electric buses (BEBs) in cities is a good means to reduce increasing greenhouse gas emissions and crude oil dependence. Limited by driving range and charging time, the battery swap station currently seems to be the best option for battery electric buses to replenish energy. This paper presents a novel method to plan and design an electric public transport system under battery swap mode, composed of battery electric buses, routes, scheduling, a battery swap station, etc. New routing and scheduling strategies are proposed for the battery electric bus fleets. Based on swapping and charging demand analysis, this paper establishes an algorithm to calculate the optimal scales of the battery swap station, including the scales of the battery swapping system, battery charging system and battery packs, and the output power capacity. For the case of the Xuejiadao battery swap station serving 6 BEB routes in Qingdao, China, a numerical simulation program is established to evaluate the validity of our methods. The results show that our methods can optimize the system scales while meeting an equivalent level of operational demand. In addition, sensitivity analyses are made on the scales under different values of battery capacity and charging current. They suggest that the scales and cost of the battery swap station can be effectively reduced with the future development of power battery manufacturing and charging technology.
Background
The continued growth in motor vehicle use worldwide will inevitably influence global crude oil demand and CO2 emissions [1]. The transportation sector produced 23% of global CO2 emissions in 2012, second only to the generation of electricity and heat [2]. To limit the emissions and fossil fuel demand caused by the increasing number of vehicles, developing battery electric vehicles (BEVs) seems to be a good choice [3]. Nevertheless, there are some problems with using BEVs, mainly their high costs, limited driving ranges and long charging times [4]. The fixed routes, schedules and stops of battery electric buses (BEBs) can make these problems simpler. Besides, transit priority is considered an effective way to relieve traffic congestion and decrease greenhouse gas emissions [5]. Therefore, the electric public transport system is the breakthrough point for the popularization and application of BEVs.
The BEB requires more frequent energy replenishment because of the heavy daily tasks and limited driving range. So, the most significant obstacle for the electric public transport system to overcome is the energy replenishment. At present, the energy replenishment technologies are mainly
Objectives and Organization of the Study
To fill this research gap, the objective of this paper is to design an electric public transport system under battery swap mode, which comprises BEBs, routes, scheduling, a battery swap station, etc. Specifically, this paper aims at setting up new routing and scheduling strategies for BEB fleets and at seeking the optimal scales of swapping/charging facilities and battery packs that the BSS should deploy to satisfy the swapping and charging demand. By doing so, we can accelerate the wide adoption of BEBs in the marketplace.
The rest of this paper is organized as follows. Section 2 details the design method of the electric public transport system which includes BEB route planning, scheduling strategies, swapping and charging demand analysis, and design of the BSS. In Section 3, a case study of Xuejiadao Station in Qingdao, Shandong Province, China is conducted by numerical simulation. The results of the numerical simulation are discussed in Section 4. Finally, the conclusions are presented in Section 5.
Materials and Methods
Several rules and assumptions need to be made before the planning and design.
Rule 1: A BEB departs only from the bus terminal station; running back and forth along the route counts as one round. All BEBs of the same route go to the same BSS for battery swapping, then return to the bus terminal station to wait for the next round.
Rule 2: A BEB will not go to the BSS during a round until it has finished that round. Namely, if the remaining energy is not enough to support the next round, it is time for the BEB to swap the battery.
Rule 3: When a BEB finishes its daily running hours, it heads for the BSS to replace the battery pack with a full one, no matter how much energy is left, and then returns to the bus terminal station for parking, ending the day's operations.
The running track of BEB is shown in Figure 1.
Assumption 1: Each battery pack in a BEB is initially full and will be replaced with another full one when the BEB returns to the BSS.
Assumption 2: The energy consumption of a battery pack is linearly associated with mileage, ignoring the impact of weather and passenger flow.
Assumption 3: The charging time of a battery pack is directly proportional to the charging depth and the battery energy, and inversely proportional to the charging power.
Assumption 4: The swapping and charging facilities in the BSS are sufficient, so that each battery pack arriving at the BSS can be swapped and charged without waiting.
Given the rules and assumptions above, we can plan an electric public transport system under battery swap mode following Figure 2. Details about the method are described hereinafter.
BEB Route Planning
Considering the special running performance of BEBs, new routes should be planned to match with the operation and battery swapping. There are several key parameters in the process of planning.
Terminal Station
The terminal station is the departure or arrival station of one or more routes, where a bus starts or ends its scheduled route. It provides the services of dispatching, parking, maintenance, and logistics. According to the "Design Specifications of Urban Public Transport Station, Depot and Factory (CJJ15-87)" of China, the planned land area of the terminal station is calculated as the product N_bus(T) × a_0 (1), where N_bus(T) is the number of buses parking in the terminal station and a_0 is the design area for each bus, usually between 90 m² and 100 m². The terminal station should be set up in an open area where the passenger flow distribution is concentrated, such as the intersection of several routes. Since all the BEBs run to the BSS from the terminal station, it is important to minimize the distance between the terminal station and the BSS when selecting the site: l < l_max (2), where l is the distance between the terminal station and the BSS, and l_max usually does not exceed the route length.
Route Length
Plenty of studies show that a depth of discharge between 70% and 80% is beneficial to the life of the battery and provides a favorable working environment for the BEB [24]. Therefore, keeping the state of charge (SOC) at a reasonable level can not only reduce operating costs but also avoid running out of energy under uncertainty and stochasticity. The actual maximum driving range of a BEB can then be calculated as Range_actual = (1 − SOC_min) × Range, with Range = E/e and E = C × U / 1000, where Range is the theoretical maximum driving range of the BEB (km), SOC_min is the minimum value of the SOC, usually between 20% and 30%, E is the energy of the battery pack (kWh), e is the energy consumption of the BEB per kilometer (kWh/km), C is the electric capacity of the battery pack (Ah), and U is the voltage of the battery pack (V). The route length in this paper, denoted L (km), is the distance between the departure station and the arrival station; in general, the mileage of one round is 2L. The route length should not be too long, so that the BEB can finish at least one round without battery swapping, nor too short, as this would increase the frequency of transfers. Thus, the route length should be designed within a reasonable range.
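To make the range calculation concrete, the following sketch re-expresses the usable-range formula and the one-round feasibility check in Python (the paper's own simulation is in MATLAB; this is an illustrative re-expression, and all function names and numeric values are hypothetical, not taken from the paper).

```python
# Illustrative sketch of the driving-range and route-length check.
# All numeric values below are hypothetical, not the paper's data.

def actual_max_range(C_ah: float, U_v: float, e_kwh_per_km: float,
                     soc_min: float = 0.25) -> float:
    """Actual maximum driving range (km): E = C*U/1000 (kWh),
    discounted by the minimum allowed state of charge."""
    E = C_ah * U_v / 1000.0              # battery pack energy in kWh
    theoretical_range = E / e_kwh_per_km
    return (1.0 - soc_min) * theoretical_range

def route_length_ok(L_km: float, C_ah: float, U_v: float,
                    e_kwh_per_km: float) -> bool:
    """Rule 2: the BEB must finish at least one round (2L) per swap."""
    return 2.0 * L_km <= actual_max_range(C_ah, U_v, e_kwh_per_km)

if __name__ == "__main__":
    # Hypothetical pack: 300 Ah at 600 V, consuming 1.2 kWh/km.
    print(actual_max_range(300, 600, 1.2))     # ~112.5 km usable
    print(route_length_ok(20, 300, 600, 1.2))  # True: one round is 40 km
```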
Given Rule 2, the route length influences the utilization rate of the battery capacity of a BEB. For example, if the route length is too long, then when the BEB has finished a round, the remaining driving range may not be enough for the next round while still being able to sustain a long run. In this case, the battery capacity cannot be used sufficiently. In order to maximize the utilization rate of the BEBs' driving ranges, the route length should also satisfy the following relationship.
Bus Configuration
The bus configuration of a route should meet the maximum passenger traffic demand during the peak hour. Therefore, the number of buses configured for route R (N bus (R) ) depends on the turnover time and departure interval in peak hours of the route.
The required fleet size follows from N_bus(R) = ⌈T_t / I_P⌉ with T_t = (2L/V) × 60 + n_stop × t_stop, where T_t is the turnover time of the route (min), V is the average speed of the buses (km/h), n_stop is the number of stops in the route, t_stop is the average dwell time at each stop (min), and I_P is the maximum departure interval in peak hours (min).
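The following Python sketch illustrates this fleet-sizing relation under the variable definitions above (a minimal sketch; the names and example values are hypothetical):

```python
import math

def turnover_time_min(L_km: float, V_kmh: float,
                      n_stop: int, t_stop_min: float) -> float:
    """Round-trip travel time plus dwell time at stops (minutes):
    T_t = (2L/V)*60 + n_stop * t_stop."""
    return (2.0 * L_km / V_kmh) * 60.0 + n_stop * t_stop_min

def buses_for_route(T_t_min: float, I_p_min: float) -> int:
    """Buses needed so one departs every I_P minutes in peak hours."""
    return math.ceil(T_t_min / I_p_min)

if __name__ == "__main__":
    T_t = turnover_time_min(L_km=15, V_kmh=25, n_stop=20, t_stop_min=0.5)
    print(T_t)                       # 82.0 min turnover time
    print(buses_for_route(T_t, 8))   # 11 buses at an 8-min peak interval
```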
Scheduling Strategies
Scheduling strategies of the public transit system may fall into two categories: (1) static scheduling which arranges the departure plans of the BEBs; and (2) dynamic scheduling which adjusts or modifies the original departure plans for emergency or special situation. This paper only discusses the static scheduling for the fcommon cases.
Departure Period Division
The daily operation time of a route is usually divided into peak hours and off-peak hours based on passenger flow variation with time. The same period of time has the same departure interval. We use the non-uniformity coefficient of each hour to help divide the departure period [25].
The non-uniformity coefficient of the i-th hour is K_ti = Q_i / Q_h, where Q_i is the passenger flow of the route during the i-th hour and Q_h is the average hourly passenger flow of the route. We can then divide the departure periods based on the value of K_ti: when K_ti ≥ K_tp (K_tp = 1.8~2.2), the i-th hour is a peak hour; otherwise it is an off-peak hour. Consequently, continuous hours with the same range of K_ti are regarded as the same departure period j.
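A short Python sketch of this period-division step follows (illustrative only; the passenger-flow numbers are hypothetical):

```python
def divide_departure_periods(hourly_flow, K_tp: float = 2.0):
    """Label each operating hour 'peak' or 'off-peak' from the
    non-uniformity coefficient K_ti = Q_i / Q_h, then merge
    consecutive hours with the same label into departure periods."""
    Q_h = sum(hourly_flow) / len(hourly_flow)   # average hourly flow
    labels = ["peak" if q / Q_h >= K_tp else "off-peak"
              for q in hourly_flow]
    periods, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            periods.append((start, i - 1, labels[start]))
            start = i
    return periods

if __name__ == "__main__":
    flow = [100, 120, 800, 850, 200, 180, 160, 780, 760, 150]
    print(divide_departure_periods(flow, K_tp=1.8))
    # [(0, 1, 'off-peak'), (2, 3, 'peak'), (4, 6, 'off-peak'),
    #  (7, 8, 'peak'), (9, 9, 'off-peak')]
```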
Departure Interval
The departure interval is decided by the passenger volume and the passenger capacity of the route. Given the number of passengers arriving at each stop in a departure period j, the departure frequency and interval in this period can be calculated as F_j = P_j,max / (PC × φ) and I_j = T_j / F_j, where P_jk is the number of passengers arriving at the k-th bus stop during the j-th departure period, P_j,max is the maximum number of passengers arriving at a bus stop during the j-th departure period, PC is the passenger capacity of the route, φ is the average load factor of the route, F_j is the frequency of departures during the j-th departure period, T_j is the duration of the j-th departure period, and I_j is the departure interval during the j-th departure period.
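In Python, the interval computation reduces to a few lines (an illustrative sketch consistent with the definitions above; the example numbers are hypothetical):

```python
import math

def departure_interval(P_j_max: float, PC: int, phi: float,
                       T_j_min: float):
    """F_j = P_j_max / (PC * phi) departures are needed in period j;
    the interval is I_j = T_j / F_j minutes."""
    F_j = math.ceil(P_j_max / (PC * phi))  # departures in period j
    I_j = T_j_min / F_j                    # minutes between departures
    return F_j, I_j

if __name__ == "__main__":
    # Hypothetical: 600 passengers at the busiest stop in a 60-min peak,
    # buses holding 80 passengers at an average load factor of 0.85.
    print(departure_interval(600, 80, 0.85, 60))  # (9, ~6.7 min)
```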
Scheduling Timetable
Given the departure rules, departure time and departure intervals of the route, the scheduling timetable can be easily obtained (Table 1).
where Departure(m,n) represents the m-th departure time of bus n and Arrival(m,n) represents the m-th arrival time of bus n; both of them are m × n matrices.
Time Distribution of Battery Swapping
According to Rule 2, the maximum number of rounds a BEB can finish without swapping is calculated from Equation (18): N_max = fix(Range_actual / (2L)) (18), where fix(x) rounds x to the nearest integer towards zero. When N ≥ N_max, the BEB needs to return to the BSS for battery swapping. The swapping start and end times of each BEB can then be calculated from the arrival time of the N_max-th round: SwapS(h,n) is the corresponding arrival time plus T_l, and SwapE(h,n) = SwapS(h,n) + T_s, where SwapS(h,n) represents the start time of the h-th swapping of bus n, SwapE(h,n) represents the end time of the h-th swapping of bus n (both of them are h × n matrices), T_l is the travel time of a BEB running from the terminal station to the BSS, and T_s is the time each battery swapping takes. As a result, the number of battery swappings in every minute can be counted from SwapS(h,n) and SwapE(h,n) and sequentially recorded into the matrix SwapDis(t), which reflects the time distribution of battery swapping.
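The minute-by-minute counting into SwapDis(t) can be sketched as follows in Python (illustrative; the interval values are hypothetical):

```python
def swap_time_distribution(swap_windows, horizon_min: int = 1800):
    """Count concurrent battery swaps minute by minute.
    swap_windows is a list of (SwapS, SwapE) pairs in minutes from
    the start of the operating day; returns the SwapDis(t) vector."""
    swap_dis = [0] * horizon_min
    for start, end in swap_windows:
        for t in range(start, min(end, horizon_min)):
            swap_dis[t] += 1
    return swap_dis

if __name__ == "__main__":
    # Hypothetical 8-min swaps for three buses, two overlapping.
    dis = swap_time_distribution([(660, 668), (662, 670), (900, 908)])
    print(max(dis))  # 2 concurrent swaps at the busiest minute
```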
Charging Time of Battery
The charging time of a battery pack is determined by the depth of charge (DOC), which is equal to the depth of discharge (DOD). According to Rule 2 and Assumption 2, the DOC can be calculated as DOC = DOD = 2L × N × e / E, where N is the number of rounds completed since the last swap. According to Assumption 3, the charging time of the battery pack is T_charge = α × DOC × E / P_CU, where P_CU is the charging power of the charging unit (kW) and α is the battery's charging efficiency coefficient, which converts the energy/power ratio to a charging time; its default value is 1.3 [12].
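A compact Python sketch of the DOC and charging-time computation follows (illustrative; the parameter values are hypothetical):

```python
def charging_time_h(L_km: float, n_rounds: int, e_kwh_per_km: float,
                    E_kwh: float, P_cu_kw: float,
                    alpha: float = 1.3) -> float:
    """DOC = 2*L*N*e / E; charging time T = alpha * DOC * E / P_CU."""
    doc = 2.0 * L_km * n_rounds * e_kwh_per_km / E_kwh  # depth of charge
    return alpha * doc * E_kwh / P_cu_kw                # hours

if __name__ == "__main__":
    # Hypothetical: 15-km route, 3 rounds per swap, 1.2 kWh/km,
    # 180-kWh pack, 60-kW charging unit.
    print(charging_time_h(15, 3, 1.2, 180, 60))  # ~2.34 h
```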
Time Distribution of Battery Charging
Since a replaced battery pack is transported to the charging rack immediately, the start time of battery charging equals the end time of battery swapping, ignoring the transport time. Given the charging time of the battery pack, the end time of charging is obtained as ChargeE(b) = ChargeS(b) + T_charge, where ChargeS(b) represents the start time of the b-th battery charging and ChargeE(b) represents the end time of the b-th battery charging; both of them are b × 1 matrices. As a result, the number of battery chargings in every minute can be counted from ChargeS(b) and ChargeE(b) and sequentially recorded into the matrix ChargeDis(t), which reflects the time distribution of battery charging.
Design of the Battery Swap Station
The battery swap station mainly consists of the battery charging system, the battery swapping system and the monitoring system (as shown in Figure 3). The design of the BSS aims to determine the optimal scales of the BSS that meet the swapping and charging demand, including the number of charging devices and swapping robots, the number of battery packs, the power capacity of the BSS, etc.
Scale of Battery Packs
Under battery swap mode, BEBs should be equipped with a certain number of backup battery packs to relieve the disadvantages of short driving range and long charging time. Thus, the number of battery packs the BSS should hold (N_Battery) is calculated as N_Battery = N_Bus(S) + N_Bb (25), where N_b is the needed number of backup battery packs, N_Bb is the actual number of backup battery packs, N_Rack is the number of charging racks where the battery packs are stored, N_bus(S) is the number of buses the BSS serves, and γ% is the design margin (10%~20%). Owing to the high cost of batteries, the number of backup battery packs should be minimized while still ensuring normal operation. Therefore, we propose an algorithm (Figure 4) to calculate the optimal number of backup battery packs. The details of the algorithm are provided as follows: (1) Initialize: set the initial number of backup battery packs N_b = 0, the serial number of the battery pack i = 1, and the serial number of the battery swapping k = 1; the subsequent steps are given in Figure 4.
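The paper shows only the initialization step of the algorithm in the text; the rest is in Figure 4. The following Python sketch is one plausible greedy realization consistent with the stated goal (serve swap events in time order, reusing a backup pack once it has finished recharging), not the paper's exact procedure; all values are hypothetical:

```python
import heapq

def min_backup_packs(swap_times_min, charge_time_min: float) -> int:
    """Greedy sizing sketch: walk through swap events chronologically;
    a swapped-out pack becomes available again after charging. The
    peak number of packs simultaneously in use is the number of
    backup packs needed."""
    ready = []       # min-heap of times when a backup pack is ready
    n_backup = 0
    for t in sorted(swap_times_min):
        if ready and ready[0] <= t:
            heapq.heappop(ready)   # reuse a recharged pack
        else:
            n_backup += 1          # no pack ready: add a backup pack
        heapq.heappush(ready, t + charge_time_min)
    return n_backup

if __name__ == "__main__":
    # Hypothetical swap requests (minutes) with a 140-min recharge time.
    print(min_backup_packs([660, 662, 700, 820, 830, 980], 140))  # 3
```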
Scale of Battery Swapping System
The battery swapping system is made up of several swapping units, each of which consists of a swapping lane and two swapping robots. When the BEB drives into the swapping lane and stops at the specific location, the two swapping robots will work simultaneously on both sides. The robots can automatically complete a battery swapping within 8 min.
To meet the swapping demand, the number of battery swapping units (N_SU) is equal to the maximum number of BEBs whose battery packs are being swapped at the same time in the BSS, which can be obtained from the time distribution of battery swapping: N_SU = max{SwapDis(t), t_S ≤ t ≤ t_E}, where t_S is the start of the service time of the BSS and t_E is the end of the service time.
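In code, this reduces to taking the maximum of the SwapDis(t) vector built earlier (a minimal illustrative sketch):

```python
def required_swapping_units(swap_dis) -> int:
    """N_SU = max over the service time of SwapDis(t): the busiest
    minute determines how many swapping units must work in parallel."""
    return max(swap_dis, default=0)

# Usage with a hypothetical minute-by-minute distribution:
print(required_swapping_units([0, 1, 3, 2, 4, 4, 1]))  # 4 units
```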
Scale of Battery Charging System
The battery charging system consists of a series of charging units corresponding to the battery packs. Therefore, the output power of a charging unit should be matched with the voltage of the battery pack: P_CU = U × I / 1000, where P_CU is the output power of the charging unit (kW), I is the intensity of the charging current (A), η_max is the maximum efficiency of the charger, and δ is the reactive loss of the lines (η_max and δ enter the power-capacity calculation below).
To meet the charging demand, the number of charging units (N_CU) should not be less than the maximum number of batteries that are charged at the same time, which can be obtained from the time distribution of battery charging: N_CU = max{ChargeDis(t), t_S ≤ t ≤ t_E}.
Because each battery pack is composed of several batteries, each charging unit should deploy the same number of single chargers with matched output powers.
The output power of a single charger is P_charger = P_CU / n_b, where P_charger is the output power of the single charger and n_b is the number of batteries composing the battery pack, which equals the number of single chargers composing a charging unit. The total number of single chargers in the BSS is then N_charger = N_CU × n_b. As a consequence, the power capacity of the battery charging system depends on the output power and the number of charging units in the BSS, which can be expressed as P_S = N_CU × P_CU / (η_max × (1 − δ)).
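The charging-system sizing can be collected into one Python sketch (illustrative; the placement of η_max and δ in the power-capacity formula follows the reconstruction above, and all numeric values are hypothetical):

```python
def charging_system_scales(U_v: float, I_a: float, n_b: int,
                           charge_dis, eta_max: float = 0.95,
                           delta: float = 0.05):
    """Sketch of the charging-system sizing: P_CU = U*I/1000 (kW),
    one single charger per battery in a pack, N_CU from the peak of
    ChargeDis(t), and P_S = N_CU * P_CU / (eta_max * (1 - delta)).
    The eta_max/delta placement is a reconstruction, not the paper's
    verbatim formula."""
    p_cu = U_v * I_a / 1000.0                  # charging-unit output (kW)
    n_cu = max(charge_dis, default=0)          # peak concurrent charging
    p_charger = p_cu / n_b                     # per single charger (kW)
    n_charger = n_cu * n_b                     # total single chargers
    p_s = n_cu * p_cu / (eta_max * (1.0 - delta))
    return {"P_CU_kW": p_cu, "N_CU": n_cu,
            "P_charger_kW": p_charger, "N_charger": n_charger,
            "P_S_kW": p_s}

if __name__ == "__main__":
    # Hypothetical: 600-V packs charged at 120 A, 10 batteries per pack,
    # and a charging distribution peaking at 14 concurrent packs.
    print(charging_system_scales(600, 120, 10, [5, 9, 14, 12, 7]))
```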
Case Descriptions
Xuejiadao Station is the world's largest and most advanced intelligent BSS for BEBs, located in Qingdao, Shandong Province, China. The station is located at the Huangdao Exit of the Jiaozhou Bay Undersea Tunnel, which connects Qingdao and Huangdao. It is designed to serve 6 BEB routes near the Undersea Tunnel (Figure 5). Moreover, it takes only 8 min to swap the battery of a bus, so the station can complete 540 battery swaps during one day's service time [26]. The working process of Xuejiadao BSS is displayed in Figure 6, which is taken from an advertising video made by Phoenix Contact. The operation and scheduling of the 6 BEB routes are shown in Table 2, and the other parameters for the simulation are presented in Table 3.
Simulation of Operation
Based on the method proposed above, we designed a MATLAB program to simulate the operation of the Xuejiadao electric public transport system. After inputting the prepared data and running the program, the daily scheduling of the routes and the operation of the BSS can be simulated minute by minute. The minutely states of the BEBs, the swapping system and the charging system are recorded into the matrices Departure(m,n) & Arrival(m,n), SwapS(h,n) & SwapE(h,n), and ChargeS(b) & ChargeE(b), respectively, from which the swapping and charging demand of the BSS can be analyzed (Figure 7).
As shown in Figure 7a, the BEBs arrive at the BSS from 11:00 of the current day to 2:00 of the next day. There are two peak hours during the service time, 14:00-15:00 and 20:00-21:00, and different routes have different time distributions of BEBs arriving at the BSS. According to Rule 3, every BEB needs to go to swap its battery pack after finishing its last round of the day. Since there are no more tasks for the BEB after that, it is not necessary to complete the swapping and charging immediately. Consequently, we exclude the last battery swapping and charging of all BEBs when plotting Figure 7b.
Evaluation of Operation
There are several key parameters for evaluating the operation of the Xuejiadao electric public transport system, as shown in Table 4. The average daily mileage measures the travel demand of the BEBs. The average daily swapping times measure the swapping demand of the BEBs. The average charging time evaluates the charging efficiency of the BSS. The maximum driving range per charge evaluates the capacity of the battery packs of the BEBs. The energy consumption evaluates the use efficiency of the battery packs of the BEBs. Thus, these parameters reflect the operational effect of the electric public transport system. By comparing the simulated values with the actual values, it can be concluded that the simulation provides a state of operation equivalent to the actual situation, which verifies the validity of the simulation and the method proposed above.
Results and Discussion
Through the MATLAB simulation of the operation, we can design the scales of the Xuejiadao electric public transport system. Comparing the simulated values with the actual values in Table 5, we find that the scales calculated by our method are mostly smaller than the actual scales. This indicates that there is redundancy in the actual configuration of Xuejiadao station. Our method can optimize the system scales while satisfying an equivalent operational demand and improve the efficiency of the system. With the development of power battery manufacturing and charging technology, the battery capacity (C) and the intensity of the charging current (I) will increase, which can influence the swapping and charging demand. As a result, the required scales of the BSS change. Thus, we perform sensitivity analyses on the numbers of battery packs, swapping units and charging units, and on the power capacity of the BSS, by changing the simulation parameters C and I (as shown in Figure 8).
We can conclude that the design scales of the electric public transport system decrease significantly when C varies from 300 Ah to 500 Ah. Less swapping and charging are needed because the driving range of a BEB increases as the battery capacity increases. However, if the battery capacity reached above 500 Ah, the battery energy of a BEB could be enough to support a whole day of operation. In this case, it would not be necessary to build a battery swap station at all, since the BEBs could be charged at a slow charging station after finishing all the tasks of the day.
On the other hand, the numbers of battery packs and charging units decrease gradually, and the power capacity of the BSS increases slightly, as I varies from 80 A to 200 A. Changes of I have no influence on the number of swapping units. When the charging voltage is fixed, a larger I means a larger charging power and a faster charging speed. As a consequence, the charging time of a battery pack can be reduced, so there is no need for more charging facilities and backup battery packs.
To sum up, with the increase of battery capacity (C) and charging current (I), fewer battery packs and swapping/charging devices are needed for a BSS, so the high cost of a BSS can be reduced effectively. Consequently, it is important to advance power battery manufacturing and charging technologies in the future, and advancing battery technology promises a larger impact than charging technology.
Conclusions
This paper presents a novel method to design an electric public transport system under battery swap mode. The terminal stations, route lengths, and bus configurations are replanned for BEB routes: the terminal stations should be near the BSS; the route length should be matched with the driving range of the BEB; and the bus configuration should satisfy the departures in peak hours. Based on the running rules of BEBs, new scheduling strategies are made for the electric public transport system. The swapping and charging demand of the BEB fleets can be analyzed through the scheduling timetable, from which the time distributions of battery swapping and charging are obtained. As a result, the design scales of the battery swap station can be calculated, including the scales of the battery packs, the battery swapping system and the battery charging system. The method has been verified via a case study of Xuejiadao Station serving 6 BEB routes. With a numerical simulation, we simulate the operation of the 6 BEB routes and evaluate the simulation results. The operation parameters and scales of the electric public transport system in the simulation are close to the actual values, which indicates that the proposed design method is effective. Besides, we perform sensitivity analyses on the scales of the BSS under different values of the battery capacity (C) and the charging current (I). They suggest that advances in battery manufacturing and charging technology can significantly reduce the cost of the BSS, which can promote the adoption of BEBs.
This paper contributes to the planning of alternative-fuel vehicle transportation systems. It provides a design guide for governments, bus companies, infrastructure operators and other decision-makers, which can help avoid blind construction and investment. However, there are also several limitations in this study, which motivate a few future research directions. First, the rules and assumptions proposed in this paper may not always hold in the real world. For example, the charging time of a battery pack is not linearly correlated with the charging depth, since the last 10% of capacity is charged in constant-voltage mode, which is significantly slower. In this case, the battery may not be fully charged when it is swapped into a BEB; as a result, the driving range of the BEB may decrease and the charging time becomes variable. To make this method more practical and reliable, more investigations of battery characteristics and charging efficiency should be conducted in the future. Besides, we will further consider the stochasticity of demand in the model in a future study.
Nomenclature (fragment): n_b, the number of batteries composing a battery pack; n_stop, the number of stops in the route; P_CU, the charging power of a charging unit (kW); P_charger, the output power of a single charger; P_S, the power capacity of the battery charging system. | 8,002.4 | 2018-07-19T00:00:00.000 | [
"Engineering"
] |